He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by an initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, the number of humidity and temperature levels, the humidity and temperature ranges, and the average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates became more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature ranges were changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
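The kind of kinetic parameter such an experiment targets can be illustrated with a minimal Arrhenius calculation. The rate values and temperatures below are invented, and the single-factor treatment (ignoring the humidity dependence the study also models) is an assumption for illustration only:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def activation_energy(k1, T1, k2, T2):
    """Arrhenius estimate of the activation energy Ea from degradation
    rates k1, k2 (any consistent unit) at temperatures T1, T2 (kelvin)."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical initial average degradation rates at 40 C and 60 C
Ea = activation_energy(0.002, 313.15, 0.008, 333.15)  # J/mol, ~60 kJ/mol
```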
El-Amrawy, Fatema
2015-01-01
Objectives The new wave of wireless technologies, fitness trackers, and body sensor devices can have a great impact on healthcare systems and quality of life. However, there have not been enough studies to establish the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen currently available wearable devices compared with direct observation of step counts and heart rate monitoring. Methods Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data were recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. The MisFit Shine showed the highest accuracy and precision (along with the Qualcomm Toq), the Samsung Gear 2 showed the lowest accuracy, and the Jawbone UP showed the lowest precision. However, the Xiaomi Mi Band offered the best value for its price. Conclusions The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039
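The study's two headline metrics can be reproduced from repeated trials in a few lines. The trial counts below are hypothetical, and the exact accuracy formula (percent deviation of the mean count from the true count) is an assumption consistent with, but not stated in, the abstract:

```python
from statistics import mean, stdev

def tracker_metrics(counts, true_steps):
    """Accuracy: 100% minus the percent deviation of the mean recorded
    count from the true step count. Precision: coefficient of variation."""
    m = mean(counts)
    accuracy = 100.0 * (1.0 - abs(m - true_steps) / true_steps)
    cv = 100.0 * stdev(counts) / m  # percent
    return accuracy, cv

# Hypothetical repeated trials of a 500-step walk on one tracker
trials = [492, 505, 488, 510, 497, 501]
acc, cv = tracker_metrics(trials, 500)
```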
NASA Astrophysics Data System (ADS)
Hou, Yanqing; Verhagen, Sandra; Wu, Jie
2016-12-01
Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In the case of weak models (i.e., low-precision data), however, the success rate of AR may be low, which may introduce large errors into the baseline solution when ambiguities are fixed incorrectly. Partial Ambiguity Resolution (PAR) has therefore been proposed, so that the baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy that selects the subset maximizing the expected precision gain among a set of pre-selected subsets, while at the same time controlling the failure rate. These pre-selected subsets are chosen to have the highest success rate among those of the same size. The strategy is called the Two-step Success Rate Criterion (TSRC): it first tries to fix a relatively large subset, using the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. It is shown how the method can be used in practice without a large additional computational effort and, more importantly, how it can improve (or at least not degrade) availability in terms of baseline precision compared with the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
NASA Astrophysics Data System (ADS)
Gu, Defeng; Ju, Bing; Liu, Junhong; Tu, Jia
2017-09-01
Precise relative position determination is a prerequisite for radar interferometry by formation flying satellites. It has been shown that this can be achieved by high-quality, dual-frequency GPS receivers that provide precise carrier-phase observations. The precise baseline determination between satellites flying in formation can significantly improve the accuracy of interferometric products, and has become a research interest. The key technologies of baseline determination using spaceborne dual-frequency GPS for gravity recovery and climate experiment (GRACE) formation are presented, including zero-difference (ZD) reduced dynamic orbit determination, double-difference (DD) reduced dynamic relative orbit determination, integer ambiguity resolution and relative receiver antenna phase center variation (PCV) estimation. We propose an independent baseline determination method based on a new strategy of integer ambiguity resolution and correction of relative receiver antenna PCVs, and implement the method in the NUDTTK software package. The algorithms have been tested using flight data over a period of 120 days from GRACE. With the original strategy of integer ambiguity resolution based on Melbourne-Wübbena (M-W) combinations, the average success rate is 85.6%, and the baseline precision is 1.13 mm. With the new strategy of integer ambiguity resolution based on a priori relative orbit, the average success rate and baseline precision are improved by 5.8% and 0.11 mm respectively. A relative ionosphere-free phase pattern estimation result is given in this study, and with correction of relative receiver antenna PCVs, the baseline precision is further significantly improved by 0.34 mm. For ZD reduced dynamic orbit determination, the orbit precision for each GRACE satellite A or B in three dimensions (3D) is about 2.5 cm compared to Jet Propulsion Laboratory (JPL) post science orbits. 
For DD reduced dynamic relative orbit determination, the final baseline precision for the two-satellite GRACE formation is 0.68 mm, validated by K-Band Ranging (KBR) observations, and an average ambiguity success rate of about 91.4% is achieved.
High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link
NASA Technical Reports Server (NTRS)
Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli
2016-01-01
We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration at a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 × 10^-15 at a 10-second averaging time. Ranging and range rate as a function of the bit error rate of the communication link are reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 × 10^-15 at a 10-second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new, improved system will be constructed to further improve performance in both operating modes.
Precise Point Positioning with Partial Ambiguity Fixing.
Li, Pan; Zhang, Xiaohong
2015-06-10
Reliable and rapid ambiguity resolution (AR) is the key to fast precise point positioning (PPP). We propose a modified partial ambiguity resolution (PAR) method, in which elevation and standard deviation criteria are first used to remove low-precision ambiguity estimates from AR. Subsequently, the success rate and ratio test are used together in an iterative process to increase the possibility of finding a subset of decorrelated ambiguities that can be fixed with high confidence. The proposed PAR method can be applied to attempt an ambiguity-fixed solution when full ambiguity resolution (FAR) fails. We validate this method using data from 450 stations during DOY 021 to 027, 2012. Results demonstrate that the proposed PAR method can significantly shorten the time to first fix (TTFF) and increase the fixing rate. Compared with FAR, the average TTFF for PAR is reduced by 14.9% for static PPP and 15.1% for kinematic PPP. Moreover, with the PAR method, the average fixing rate increases from 83.5% to 98.2% for static PPP and from 80.1% to 95.2% for kinematic PPP. Kinematic PPP accuracy with PAR is also significantly improved compared with FAR, owing to the higher fixing rate.
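The subset-shrinking idea behind PAR can be sketched very roughly. This is not the authors' algorithm: real PPP-AR decorrelates ambiguities (e.g., with LAMBDA) and searches the integer space rigorously, whereas this illustrative stand-in rounds each float ambiguity independently and approximates the second-best integer candidate by the cheapest single-component flip:

```python
def ratio_test(floats, stds, subset):
    """Approximate ratio test: squared distance of the second-best integer
    candidate (cheapest single-component flip) over the best (rounded) one."""
    d1 = sum((floats[i] - round(floats[i])) ** 2 / stds[i] ** 2 for i in subset)
    flip = min(((1.0 - abs(floats[i] - round(floats[i]))) ** 2
                - (floats[i] - round(floats[i])) ** 2) / stds[i] ** 2
               for i in subset)
    return (d1 + flip) / max(d1, 1e-12)

def partial_fix(floats, stds, threshold=3.0):
    """Shrink the candidate set, dropping the noisiest ambiguities first,
    until the ratio test clears the threshold; return the fixed integers."""
    order = sorted(range(len(floats)), key=lambda i: stds[i])
    for k in range(len(order), 0, -1):
        subset = order[:k]
        if ratio_test(floats, stds, subset) >= threshold:
            return {i: round(floats[i]) for i in subset}
    return {}  # no subset could be fixed with confidence

# Three float ambiguities (cycles) with formal standard deviations
fixed = partial_fix([1.02, -3.98, 7.45], [0.05, 0.04, 0.30])
```

Dropping the noisiest ambiguities first loosely mirrors the elevation/standard-deviation screening step the abstract describes.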
Optimal firing rate estimation
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
2001-01-01
We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise-timing and average-rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit on the order of 1 bit of stimulus-related information per spike.
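Firing-rate estimation by Gaussian smoothing amounts to kernel density estimation over spike times. The spike train and the bandwidth choice below (roughly one mean inter-spike interval) are invented for illustration, not taken from the study:

```python
import math

def firing_rate(spike_times, t, bandwidth):
    """Gaussian-kernel estimate of the instantaneous firing rate (Hz)
    at time t (s); bandwidth is the kernel standard deviation (s)."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
               for s in spike_times)

# Hypothetical ~10 Hz spike train, smoothed with a 100 ms kernel
spikes = [0.11, 0.19, 0.33, 0.41, 0.52, 0.58, 0.72, 0.81, 0.90, 1.02]
rate = firing_rate(spikes, 0.5, bandwidth=0.1)
```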
A method for estimating radioactive cesium concentrations in cattle blood using urine samples.
Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji
2017-12-01
In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urinary 137Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured, and various estimation methods for blood 137Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood 137Cs] = [urinary 137Cs]/([specific gravity] - 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher 137Cs concentration than blood. These advantages of urine, together with the estimation precision demonstrated in our study, indicate that estimation of blood 137Cs from urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
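The reported specific-gravity formula is straightforward to apply; the urine values in the example below are hypothetical:

```python
def blood_cs137(urinary_cs137, specific_gravity):
    """Estimate blood 137Cs from urinary 137Cs (same activity units)
    using the specific-gravity correction reported in the abstract:
    [blood 137Cs] = [urinary 137Cs] / (SG - 1) / 329."""
    return urinary_cs137 / (specific_gravity - 1.0) / 329.0

def error_rate(estimated, measured):
    """Percent error of an estimate against the measured blood value."""
    return 100.0 * abs(estimated - measured) / measured

# Hypothetical urine sample: 250 Bq/L at specific gravity 1.030
est = blood_cs137(250.0, 1.030)  # ~25.3 Bq/L
```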
Numerical simulation and experiment on effect of ultrasonic in polymer extrusion processing
NASA Astrophysics Data System (ADS)
Wan, Yue; Fu, ZhiHong; Wei, LingJiao; Zang, Gongzheng; Zhang, Lei
2018-01-01
The influence of ultrasonic waves on the flow field parameters and the precision of extruded products is studied. Firstly, the effects of vibration power on the average outlet velocity, the average viscosity of the die section, the average shear rate, and the inlet pressure of the die section were studied using the Polyflow software. Secondly, the effects of ultrasonic strength on die temperature and die pressure drop were studied experimentally at different head temperatures and screw speeds. Finally, the relationship between die pressure and extrusion flow rate under different ultrasonic powers was studied through experiments.
Walker, Thad Gilbert; Lancor, Brian Robert; Wyllie, Robert
2014-04-15
Precise measurements of the precessional rate of a noble gas in a magnetic field are obtained by constraining the time-averaged direction of the spins of a stimulating alkali gas to lie in a plane transverse to the magnetic field. In this way, the magnetic field of the alkali gas does not provide a net contribution to the precessional rate of the noble gas.
NASA Astrophysics Data System (ADS)
Heimburger, A. M. F.; Shepson, P. B.; Stirm, B. H.; Susdorf, C.; Cambaliza, M. O. L.
2015-12-01
Since the Copenhagen accord in 2009, several countries have affirmed their commitment to reduce their greenhouse gas emissions. The United States and Canada committed to reduce their emissions by 17% below 2005 levels by 2020, Europe by 14%, and China by ~40%. To achieve such targets, coherent and effective strategies for mitigating atmospheric carbon emissions must be implemented in the coming decades. For such goals to be verifiably met, reductions must be "measurable", "reportable", and "verifiable". Management of greenhouse gas emissions must focus on urban environments, since ~74% of worldwide CO2 emissions will come from cities, while current measurement approaches remain highly uncertain (~50% to >100%). The Indianapolis Flux Experiment (INFLUX) was established to develop, assess, and improve top-down and bottom-up quantification of urban greenhouse gas emissions. Using an aircraft mass balance approach, we performed a series of experiments focused on improving the quantification of CO2, CH4, and CO emission rates from Indianapolis, our final objective being to reduce the method's overall uncertainty well below the previous estimate of 50%. In November-December 2014, we conducted nine methodologically identical mass balance experiments within a short period (24 days, one downwind distance) under assumed constant total emission rate conditions, as a means to obtain an improved standard deviation of the mean. By averaging the individual emission rate determinations, we obtained a method precision of 17% and 16% for CO2 and CO, respectively, at the 95% C.L. CH4 emission rates are highly variable day to day, leading to a precision of 60%. Our results show that repetitive sampling enables improvement in the precision of aircraft-based top-down methods through averaging.
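The precision gain from averaging repeated flights follows directly from the standard error of the mean. The daily emission-rate values below are invented, and a normal 95% critical value (z = 1.96) is assumed rather than taken from the study:

```python
import math
from statistics import mean, stdev

def precision_of_mean(rates, z=1.96):
    """Precision (percent of the mean, ~95% C.L.) of an averaged
    emission-rate estimate, via the standard error of the mean."""
    sem = stdev(rates) / math.sqrt(len(rates))
    return 100.0 * z * sem / mean(rates)

# Nine hypothetical daily CO2 emission-rate estimates (arbitrary units)
daily = [14.2, 16.8, 15.1, 13.5, 17.3, 15.9, 14.8, 16.1, 15.4]
p = precision_of_mean(daily)
```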
Studies into the averaging problem: Macroscopic gravity and precision cosmology
NASA Astrophysics Data System (ADS)
Wijenayake, Tharake S.
2016-08-01
With the tremendous improvement in the precision of available astrophysical data in the recent past, it becomes increasingly important to examine some of the underlying assumptions behind the standard model of cosmology and take into consideration nonlinear and relativistic corrections which may affect it at percent precision level. Due to its mathematical rigor and fully covariant and exact nature, Zalaletdinov's macroscopic gravity (MG) is arguably one of the most promising frameworks to explore nonlinearities due to inhomogeneities in the real Universe. We study the application of MG to precision cosmology, focusing on developing a self-consistent cosmology model built on the averaging framework that adequately describes the large-scale Universe and can be used to study real data sets. We first implement an algorithmic procedure using computer algebra systems to explore new exact solutions to the MG field equations. After validating the process with an existing isotropic solution, we derive a new homogeneous, anisotropic and exact solution. Next, we use the simplest (and currently only) solvable homogeneous and isotropic model of MG and obtain an observable function for cosmological expansion using some reasonable assumptions on light propagation. We find that the principal modification to the angular diameter distance is through the change in the expansion history. We then linearize the MG field equations and derive a framework that contains large-scale structure, but the small scale inhomogeneities have been smoothed out and encapsulated into an additional cosmological parameter representing the averaging effect. We derive an expression for the evolution of the density contrast and peculiar velocities and integrate them to study the growth rate of large-scale structure. We find that increasing the magnitude of the averaging term leads to enhanced growth at late times. 
Thus, for the same matter content, the growth rate of large-scale structure in the MG model is stronger than that of the standard model. Finally, we constrain the MG model using Cosmic Microwave Background temperature anisotropy data, supernova distance data, the galaxy power spectrum, weak lensing tomography shear-shear cross-correlations, and baryonic acoustic oscillations. We find that for this model the averaging density parameter is very small and does not cause any significant shift in the other cosmological parameters. However, it can lead to increased errors on some cosmological parameters, such as the Hubble constant and the amplitude of the linear matter power spectrum at the scale of 8 h^-1 Mpc. Further studies are needed to explore other solutions and models of MG, as well as their effects on precision cosmology.
Mahoney, P P; Ray, S J; Li, G; Hieftje, G M
1999-04-01
The coupling of an electrothermal vaporization (ETV) apparatus to an inductively coupled plasma time-of-flight mass spectrometer (ICP-TOFMS) is described. The ability of the ICP-TOFMS to produce complete elemental mass spectra at high repetition rates is experimentally demonstrated. A signal-averaging data acquisition board is employed to rapidly record complete elemental spectra throughout the vaporization stage of the ETV temperature cycle; a solution containing 34 elements is analyzed. The reduction of both molecular and atomic isobaric interferences through the temperature program of the furnace is demonstrated. Isobaric overlaps among the isotopes of cadmium, tin, and indium are resolved by exploiting differences in the vaporization characteristics of the elements. Figures of merit for the system are defined with several different data acquisition schemes capable of operating at the high repetition rate of the TOF instrument. With the use of both ion counting and a boxcar averager, the dynamic range is shown to be linear over a range of at least 6 orders of magnitude. A pair of boxcar averagers are used to measure the isotope ratio for silver with a precision of 1.9% RSD, despite a cycle-to-cycle precision of 19% RSD. Detection limits of 10-80 fg are calculated for seven elements, based upon a 10-microL injection.
Predicting Atomic Decay Rates Using an Informational-Entropic Approach
NASA Astrophysics Data System (ADS)
Gleiser, Marcelo; Jiang, Nan
2018-06-01
We show that a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially-localized or periodic mathematical functions known as configurational entropy (CE) can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number n, we obtain a scaling law relating the n-averaged decay rates to the respective CE. The scaling law allows us to predict the n-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to n = 20, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.
Assessing Quality of Care and Elder Abuse in Nursing Homes via Google Reviews.
Mowery, Jared; Andrei, Amanda; Le, Elizabeth; Jian, Jing; Ward, Megan
2016-01-01
It is challenging to assess the quality of care and detect elder abuse in nursing homes, since patients may be incapable of reporting quality issues or abuse themselves, and resources for sending inspectors are limited. This study correlates Google reviews of nursing homes with Centers for Medicare and Medicaid Services (CMS) inspection results in the Nursing Home Compare (NHC) data set, to quantify the extent to which the reviews reflect the quality of care and the presence of elder abuse. A total of 16,160 reviews were collected, spanning 7,170 nursing homes. Two approaches were tested: using the average rating as an overall estimate of the quality of care at a nursing home, and using the average scores from a maximum entropy classifier trained to recognize indications of elder abuse. The classifier achieved an F-measure of 0.81, with precision 0.74 and recall 0.89. The correlation for the classifier is weak but statistically significant (0.13; P < .001; 95% confidence interval, 0.10-0.16). The ratings exhibit a slightly higher correlation (0.15, P < .001). Both the classifier and rating correlations approach approximately 0.65 when the effective average number of reviews per provider is increased by aggregating similar providers. These results indicate that an analysis of Google reviews of nursing homes can be used to detect indications of elder abuse with high precision and to assess the quality of care, but only when a sufficient number of reviews are available.
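The reported classifier scores are consistent with simple confusion-matrix arithmetic. The counts below are hypothetical, but chosen so the ratios reproduce the abstract's precision (0.74), recall (0.89), and F-measure (0.81):

```python
def scores(tp, fp, fn):
    """Precision, recall, and F-measure from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f

# 74 true positives, 26 false positives, 9 false negatives (hypothetical)
p, r, f = scores(tp=74, fp=26, fn=9)
```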
Artacho, Paulina; Jouanneau, Isabelle; Le Galliard, Jean-François
2013-01-01
Studies of the relationship of performance and behavioral traits with environmental factors have tended to neglect interindividual variation even though quantification of this variation is fundamental to understanding how phenotypic traits can evolve. In ectotherms, functional integration of locomotor performance, thermal behavior, and energy metabolism is of special interest because of the potential for coadaptation among these traits. For this reason, we analyzed interindividual variation, covariation, and repeatability of the thermal sensitivity of maximal sprint speed, preferred body temperature, thermal precision, and resting metabolic rate measured in ca. 200 common lizards (Zootoca vivipara) that varied by sex, age, and body size. We found significant interindividual variation in selected body temperatures and in the thermal performance curve of maximal sprint speed for both the intercept (expected trait value at the average temperature) and the slope (measure of thermal sensitivity). Interindividual differences in maximal sprint speed across temperatures, preferred body temperature, and thermal precision were significantly repeatable. A positive relationship existed between preferred body temperature and thermal precision, implying that individuals selecting higher temperatures were more precise. The resting metabolic rate was highly variable but was not related to thermal sensitivity of maximal sprint speed or thermal behavior. Thus, locomotor performance, thermal behavior, and energy metabolism were not directly functionally linked in the common lizard.
Fast EEG spike detection via eigenvalue analysis and clustering of spatial amplitude distribution
NASA Astrophysics Data System (ADS)
Fukami, Tadanori; Shimada, Takamasa; Ishikawa, Bunnoshin
2018-06-01
Objective. In the current study, we tested a proposed method for fast spike detection in electroencephalography (EEG). Approach. We performed eigenvalue analysis in two-dimensional space spanned by gradients calculated from two neighboring samples to detect high-amplitude negative peaks. We extracted the spike candidates by imposing restrictions on parameters regarding spike shape and eigenvalues reflecting detection characteristics of individual medical doctors. We subsequently performed clustering, classifying detected peaks by considering the amplitude distribution at 19 scalp electrodes. Clusters with a small number of candidates were excluded. We then defined a score for eliminating spike candidates for which the pattern of detected electrodes differed from the overall pattern in a cluster. Spikes were detected by setting the score threshold. Main results. Based on visual inspection by a psychiatrist experienced in EEG, we evaluated the proposed method using two statistical measures of precision and recall with respect to detection performance. We found that precision and recall exhibited a trade-off relationship. The average recall value was 0.708 in eight subjects with the score threshold that maximized the F-measure, with 58.6 ± 36.2 spikes per subject. Under this condition, the average precision was 0.390, corresponding to a false positive rate 2.09 times higher than the true positive rate. Analysis of the required processing time revealed that, using a general-purpose computer, our method could be used to perform spike detection in 12.1% of the recording time. The process of narrowing down spike candidates based on shape occupied most of the processing time. Significance. Although the average recall value was comparable with that of other studies, the proposed method significantly shortened the processing time.
Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J
2017-07-01
Numerous approaches are used to estimate indirect productivity losses, applying various wage estimates to poor health in working-aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female wages were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6% were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where the average wage overestimated wages by 15%, and for 40-year-olds, where it underestimated wages by 14%. Large differences in projected productivity losses exist when the average wage is applied over a lifetime. Specifically, use of average wages overestimates productivity losses by between 8% and 15% for childhood illnesses, while during prime working years it underestimates productivity losses by 14%. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
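The contrast between average-wage and age-specific-wage projections can be sketched as follows. The wage figures are invented, and linear interpolation between band midpoints stands in for the paper's polynomial fit:

```python
def interp_wage(age, bands):
    """Annual wage at a given age, linearly interpolated between band
    midpoints (the paper fits a polynomial; this is a simpler stand-in)."""
    ages = sorted(bands)
    if age <= ages[0]:
        return float(bands[ages[0]])
    for a0, a1 in zip(ages, ages[1:]):
        if age <= a1:
            w0, w1 = bands[a0], bands[a1]
            return w0 + (w1 - w0) * (age - a0) / (a1 - a0)
    return float(bands[ages[-1]])

def lifetime_loss(start_age, bands, end_age=67, rate=0.03):
    """Human-capital productivity loss: wages discounted to start_age,
    summed over each working year up to retirement."""
    return sum(interp_wage(a, bands) / (1.0 + rate) ** (a - start_age)
               for a in range(start_age, end_age))

# Invented band-midpoint wages (GBP/year) vs. a flat "average wage"
bands = {20: 18000, 30: 27000, 40: 31000, 50: 30000, 60: 24000}
flat = {20: 26000, 60: 26000}
```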
Vowles, Kevin E; McEntee, Mindy L; Julnes, Peter Siyahhan; Frohe, Tessa; Ney, John P; van der Goes, David N
2015-04-01
Opioid use in chronic pain treatment is complex, as patients may derive both benefit and harm. Identification of individuals currently using opioids in a problematic way is important given the substantial recent increases in prescription rates and consequent increases in morbidity and mortality. The present review provides updated and expanded information regarding rates of problematic opioid use in chronic pain. Because previous reviews have indicated substantial variability in this literature, several steps were taken to enhance precision and utility. First, problematic use was coded using explicitly defined terms, referring to different patterns of use (ie, misuse, abuse, and addiction). Second, average prevalence rates were calculated and weighted by sample size and study quality. Third, the influence of differences in study methodology was examined. In total, data from 38 studies were included. Rates of problematic use were quite broad, ranging from <1% to 81% across studies. Across most calculations, rates of misuse averaged between 21% and 29% (range, 95% confidence interval [CI]: 13%-38%). Rates of addiction averaged between 8% and 12% (range, 95% CI: 3%-17%). Abuse was reported in only a single study. Only 1 difference emerged when study methods were examined, where rates of addiction were lower in studies that identified prevalence assessment as a primary, rather than secondary, objective. Although significant variability remains in this literature, this review provides guidance regarding possible average rates of opioid misuse and addiction and also highlights areas in need of further clarification.
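The sample-size weighting used for the pooled rates can be sketched in a few lines. The study counts and rates below are hypothetical, and the review's additional quality weighting is omitted:

```python
def pooled_rate(studies):
    """Sample-size-weighted average prevalence across studies
    (the review also weights by study quality, omitted here)."""
    total_n = sum(n for n, _ in studies)
    return sum(n * rate for n, rate in studies) / total_n

# Hypothetical (sample size, misuse rate) pairs from four studies
studies = [(120, 0.18), (300, 0.25), (80, 0.31), (500, 0.22)]
avg = pooled_rate(studies)
```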
Soliton microcomb range measurement
NASA Astrophysics Data System (ADS)
Suh, Myoung-Gyun; Vahala, Kerry J.
2018-02-01
Laser-based range measurement systems are important in many application areas, including autonomous vehicles, robotics, manufacturing, formation flying of satellites, and basic science. Coherent laser ranging systems using dual-frequency combs provide an unprecedented combination of long range, high precision, and fast update rate. We report dual-comb distance measurement using chip-based soliton microcombs. A single pump laser was used to generate dual-frequency combs within a single microresonator as counterpropagating solitons. We demonstrated time-of-flight measurement with 200-nanometer precision at an averaging time of 500 milliseconds within a range ambiguity of 16 millimeters. Measurements at distances up to 25 meters with much lower precision were also performed. Our chip-based source is an important step toward miniature dual-comb laser ranging systems that are suitable for photonic integration.
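The 16-millimeter range ambiguity quoted above is tied to the soliton repetition rate through R_a = c / (2 f_rep); a minimal check, assuming a ~9.36 GHz repetition rate (a value typical of such microresonators, not stated in the abstract):

```python
C = 299_792_458.0  # speed of light, m/s

def range_ambiguity(rep_rate_hz):
    """Non-ambiguity range of time-of-flight dual-comb ranging:
    R_a = c / (2 * f_rep)."""
    return C / (2.0 * rep_rate_hz)

# Assumed ~9.36 GHz soliton repetition rate (illustrative)
ra = range_ambiguity(9.36e9)  # ≈ 0.016 m, i.e. the ~16 mm quoted above
```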
Automatic detection and quantitative analysis of cells in the mouse primary motor cortex
NASA Astrophysics Data System (ADS)
Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui
2014-09-01
Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired at 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant variations in cellular density across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
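The recall and precision figures reported above follow the standard detection definitions; a minimal sketch with hypothetical true-positive, false-positive, and false-negative counts chosen to land near the reported rates:

```python
def recall_precision(tp, fp, fn):
    """Recall = TP/(TP+FN); precision = TP/(TP+FP), as used to score
    automatic centroid detection against a manual reference count."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical counts (illustrative, not the study's raw data)
r, p = recall_precision(tp=921, fp=147, fn=79)
# r ≈ 0.921 (recall), p ≈ 0.862 (precision)
```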
Quantitative NO₂ molecular tagging velocimetry at 500 kHz frame rate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Naibo; Nishihara, Munetake; Lempert, Walter R.
2010-11-29
NO₂ molecular tagging velocimetry (MTV) is demonstrated at repetition rates as high as 500 kHz in a laboratory-scale Mach 5 wind tunnel. A pulse-burst laser and a home-built optical parametric oscillator system were used to simultaneously generate the required 355 and 226 nm wavelengths for NO₂ photodissociation (tagging) and NO planar laser-induced fluorescence imaging (interrogation), respectively. NO₂ MTV images were obtained both in front of and behind the characteristic bow shock from a 5 mm diameter cylinder. From Gaussian curve fitting, an average free-stream flow velocity of 719 m/s was obtained. An absolute statistical precision in velocity of ≈11.5 m/s was determined, corresponding to a relative precision of 1.6%-5%, depending upon the region of the flow probed.
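At a 500 kHz frame rate, velocity follows directly from the fitted displacement of the tagged line between consecutive frames; a minimal sketch, where the 1.438 mm shift is illustrative (chosen to reproduce the 719 m/s free-stream value):

```python
def velocity_from_displacement(dx_m, frame_rate_hz):
    """MTV velocity from the Gaussian-fitted line-center displacement
    between consecutive frames: v = dx * f (dt = 1/f between frames)."""
    return dx_m * frame_rate_hz

# Assumed 1.438 mm center shift between 500 kHz frames (illustrative)
v = velocity_from_displacement(1.438e-3, 500e3)  # ≈ 719 m/s
```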
Acquisition of peak responding: what is learned?
Balci, Fuat; Gallistel, Charles R; Allen, Brian D; Frank, Krystal M; Gibson, Jacqueline M; Brunner, Daniela
2009-01-01
We investigated how the common measures of timing performance behaved in the course of training on the peak procedure in C3H mice. Following fixed interval (FI) pre-training, mice received 16 days of training in the peak procedure. The peak time and spread were derived from the average response rates while the start and stop times and their relative variability were derived from a single-trial analysis. Temporal precision (response spread) appeared to improve in the course of training. This apparent improvement in precision was, however, an averaging artifact; it was mediated by the staggered appearance of timed stops, rather than by the delayed occurrence of start times. Trial-by-trial analysis of the stop times for individual subjects revealed that stops appeared abruptly after three to five sessions and their timing did not change as training was prolonged. Start times and the precision of start and stop times were generally stable throughout training. Our results show that subjects do not gradually learn to time their start or stop of responding. Instead, they learn the duration of the FI, with robust temporal control over the start of the response; the control over the stop of response appears abruptly later.
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. 
Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate readings, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
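The quadrature combination implied by these figures can be checked directly; the ±10.0% random and ±11.6% systematic components are taken from the abstract above, and averaging n readings reduces only the random part by √n:

```python
import math

def total_precision_error(random_err, systematic_err, n_readings=1):
    """Combine independent error components in quadrature:
    total = sqrt((random/sqrt(n))**2 + systematic**2).
    Averaging repeated readings shrinks only the random component."""
    return math.hypot(random_err / math.sqrt(n_readings), systematic_err)

single = total_precision_error(10.0, 11.6)         # ≈ ±15.3%
triplicate = total_precision_error(10.0, 11.6, 3)  # ≈ ±13.0%
```

This reproduces the reported single-reading (±15.3%) and triplicate (±13.0%) precision errors, and shows why repeated injections cannot remove the systematic floor set by the catheter/monitor combination.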
Fusion of Deep Learning and Compressed Domain features for Content Based Image Retrieval.
Liu, Peizhong; Guo, Jing-Ming; Wu, Chi-Yi; Cai, Danlin
2017-08-29
This paper presents an effective image retrieval method by combining high-level features from a Convolutional Neural Network (CNN) model and low-level features from Dot-Diffused Block Truncation Coding (DDBTC). The low-level features, e.g., texture and color, are constructed by VQ-indexed histograms from the DDBTC bitmap, maximum, and minimum quantizers. In contrast, high-level features from the CNN can effectively capture human perception. With the fusion of the DDBTC and CNN features, the extended deep learning two-layer codebook features (DL-TLCF) are generated using the proposed two-layer codebook, dimension reduction, and similarity reweighting to improve the overall retrieval rate. Two metrics, average precision rate (APR) and average recall rate (ARR), are employed to examine various datasets. As documented in the experimental results, the proposed schemes can achieve superior performance compared to the state-of-the-art methods with either low- or high-level features in terms of the retrieval rate. Thus, it can be a strong candidate for various image retrieval related applications.
Neural coding of repetitive clicks in the medial geniculate body of cat.
Rouiller, E; de Ribaupierre, Y; Toros-Morel, A; de Ribaupierre, F
1981-09-01
The activity of 418 medial geniculate body (MGB) units was studied in response to repetitive acoustic pulses in 35 nitrous oxide-anaesthetized cats. The proportion of MGB neurons insensitive to repetitive clicks was close to 30%. On the basis of their pattern of discharge, the responsive units were divided into three categories. The majority of them (71%), classified as "lockers", showed discharges precisely time-locked to the individual clicks of the train. A few units (8%), called "groupers", had discharges loosely synchronized to low-rate repetitive clicks. When the spikes were not synchronized, the cell had transient or sustained responses for a limited frequency range and was classified as a "special responder" (21%). Responses of "lockers" were time-locked up to a limiting rate, which varied between 10 and 800 Hz; half of the "lockers" had a limiting rate of locking equal to or higher than 100 Hz. The degree of entrainment, defined as the probability that each click evokes at least one spike, regularly decreases with increasing rate; on the other hand, the precision of locking increases with frequency. The time jitter observed at 100 Hz could be as small as 0.2 ms and was 1.2 ms on average. The population of "lockers" can mark with precision the transients of complex sounds and has response properties still compatible with a temporal coding of the fundamental frequency of most animal vocalizations.
Visual Inspection Reliability for Precision Manufactured Parts.
See, Judi E
2015-12-01
Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied. Eighty-two inspectors in the U.S. Nuclear Security Enterprise inspected 140 parts for eight different defects. Inspectors correctly rejected 85% of defective items and incorrectly rejected 35% of acceptable parts. Use of a phased inspection approach based on inspector confidence ratings was not an effective or efficient technique to improve the overall accuracy of the process. Results did verify that inspection is a workload-intensive task, dominated by mental demand and effort. Hits for Nuclear Security Enterprise inspection were not vastly superior to the industry average of 80%, and they were achieved at the expense of a high scrap rate not typically observed during visual inspection tasks. This study provides the first empirical data to address the reliability of visual inspection for precision manufactured parts used in nuclear weapons. Results enhance current understanding of the process of visual inspection and can be applied to improve reliability for precision manufactured parts. © 2015, Human Factors and Ergonomics Society.
Tashman, Scott; Anderst, William
2003-04-01
Dynamic assessment of three-dimensional (3D) skeletal kinematics is essential for understanding normal joint function as well as the effects of injury or disease. This paper presents a novel technique for measuring in-vivo skeletal kinematics that combines data collected from high-speed biplane radiography and static computed tomography (CT). The goals of the present study were to demonstrate that highly precise measurements can be obtained during dynamic movement studies employing high frame-rate biplane video-radiography, to develop a method for expressing joint kinematics in an anatomically relevant coordinate system and to demonstrate the application of this technique by calculating canine tibio-femoral kinematics during dynamic motion. The method consists of four components: the generation and acquisition of high frame rate biplane radiographs, identification and 3D tracking of implanted bone markers, CT-based coordinate system determination, and kinematic analysis routines for determining joint motion in anatomically based coordinates. Results from dynamic tracking of markers inserted in a phantom object showed the system bias was insignificant (-0.02 mm). The average precision in tracking implanted markers in-vivo was 0.064 mm for the distance between markers and 0.31 degree for the angles between markers. Across-trial standard deviations for tibio-femoral translations were similar for all three motion directions, averaging 0.14 mm (range 0.08 to 0.20 mm). Variability in tibio-femoral rotations was more dependent on rotation axis, with across-trial standard deviations averaging 1.71 degrees for flexion/extension, 0.90 degree for internal/external rotation, and 0.40 degree for varus/valgus rotation. Advantages of this technique over traditional motion analysis methods include the elimination of skin motion artifacts, improved tracking precision and the ability to present results in a consistent anatomical reference frame.
Soares, André E R; Schrago, Carlos G
2015-01-07
Although taxon sampling is commonly considered an important issue in phylogenetic inference, it is rarely considered in the Bayesian estimation of divergence times. In fact, the studies conducted to date have presented ambiguous results, and the relevance of taxon sampling for molecular dating remains unclear. In this study, we developed a series of simulations that, after six hundred Bayesian molecular dating analyses, allowed us to evaluate the impact of taxon sampling on chronological estimates under three scenarios of among-lineage rate heterogeneity. The first scenario allowed us to examine the influence of the number of terminals on the age estimates based on a strict molecular clock. The second scenario imposed an extreme example of lineage-specific rate variation, and the third scenario permitted extensive rate variation distributed along the branches. We also analyzed empirical data on selected mitochondrial genomes of mammals. Our results showed that in the strict molecular-clock scenario (Case I), taxon sampling had a minor impact on the accuracy of the time estimates, although the precision of the estimates was greater with an increased number of terminals. The effect was similar in the scenario (Case III) based on rate variation distributed among the branches. Only under intensive rate variation among lineages (Case II) did taxon sampling result in biased estimates. The results of an empirical analysis corroborated the simulation findings. We demonstrate that taxonomic sampling affected divergence time inference but that its impact was significant only if the rates deviated from the strict molecular clock. Increased taxon sampling improved the precision and accuracy of the divergence time estimates, but the impact on precision was more relevant. On average, biased estimates were obtained only if lineage rate variation was pronounced. Copyright © 2014 Elsevier Ltd. All rights reserved.
The influence of economic business cycles on United States suicide rates.
Wasserman, I M
1984-01-01
A number of social science investigators have shown that a downturn in the economy leads to an increase in the suicide rate. However, the previous works on the subject are flawed by the fact that they employ years as their temporal unit of analysis. This time period is so large that it makes it difficult for investigators to precisely determine the length of the lag effect, while at the same time removing the autocorrelation effects. Also, although most works on suicide and the business cycle employ unemployment as a measure of a downturn in the business cycle, the average duration of unemployment represents a better measure for determining the social impact of an economic downturn. From 1947 to 1977 the average monthly duration of unemployment is statistically related to the suicide rate using multivariate time-series analysis. From 1910 to 1939 the Ayres business index, a surrogate measure for movement in the business cycle, is statistically related to the monthly suicide rate. An examination of the findings confirms that in most cases a downturn in the economy causes an increase in the suicide rate.
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2012-01-01
Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
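The three catch-rate estimators compared above can be sketched as follows; the interview data are hypothetical, chosen only to show how short trips inflate the mean-of-ratios estimate:

```python
def rom(catches, hours):
    """Ratio-of-means estimator: total catch / total effort."""
    return sum(catches) / sum(hours)

def mor(catches, hours, min_hours=0.0):
    """Mean-of-ratios estimator: average of per-trip catch rates,
    optionally excluding short-duration trips (e.g. min_hours=0.5)."""
    rates = [c / h for c, h in zip(catches, hours) if h > min_hours]
    return sum(rates) / len(rates)

# Hypothetical angler interviews (catch, hours fished)
catches = [1, 0, 2, 1]
hours = [0.5, 4.0, 2.0, 1.0]
r_rom = rom(catches, hours)        # 4 / 7.5 ≈ 0.533 fish/h
r_mor = mor(catches, hours)        # 1.0 fish/h; short trip inflates it
r_mor_x = mor(catches, hours, 0.5) # excludes the 0.5 h trip
```

Total catch is then each rate estimate multiplied by an independent estimate of total angler effort, which is why the estimator's bias propagates directly into the harvest estimate.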
Limits to the Stability of Pulsar Time
NASA Technical Reports Server (NTRS)
Petit, Gerard
1996-01-01
The regularity of the rotation rate of millisecond pulsars is the underlying hypothesis for using these neutron stars as 'celestial clocks'. Given their remote location in our galaxy and our lack of precise knowledge of the galactic environment, a number of phenomena affect the apparent rotation rate observed on Earth. This paper reviews these phenomena and estimates the order of magnitude of their effects. It concludes that an ensemble pulsar time based on a number of selected millisecond pulsars should have a fractional frequency stability close to 2 × 10⁻¹⁵ for an averaging time of a few years.
Role of spatial averaging in multicellular gradient sensing.
Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
2016-05-20
Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
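The subtraction argument above can be made concrete with the variance identity for a difference of two correlated measurements; the numbers below are illustrative, not from the paper's model:

```python
def difference_variance(var1, var2, cov):
    """Variance of the subtraction underlying gradient sensing:
    Var(c1 - c2) = Var(c1) + Var(c2) - 2*Cov(c1, c2).
    Transverse averaging lowers the variances but also the covariance,
    so the net effect on precision can go either way."""
    return var1 + var2 - 2.0 * cov

# Illustrative: averaging halves each variance but removes most of the
# covariance, leaving a LARGER difference variance (worse precision)
before = difference_variance(1.0, 1.0, 0.8)   # 0.4
after = difference_variance(0.5, 0.5, 0.05)   # 0.9
```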
[Research on Identifying Spatial Objects Using Spectrum Analysis Techniques].
Song, Wei; Feng, Shi-qi; Shi, Jing; Xu, Rong; Wang, Gong-chang; Li, Bin-yu; Liu, Yu; Li, Shuang; Cao, Rui; Cai, Hong-xing; Zhang, Xi-he; Tan, Yong
2015-06-01
The high-precision scattering spectrum of spatial fragments with a minimum brightness of 4.2 and a resolution of 0.5 nm has been observed using ground-based spectrum detection technology. Obvious differences between types of objects are obtained by normalization and discrete-rate analysis of the spectral data. The normalized multi-frame scattering spectral line shapes for rocket debris are identical to one another, whereas those for lapsed satellites differ. The discrete rate of the normalized single-frame spectrum for rocket debris ranges from 0.978% to 3.067%, and the difference between its oscillation and average value is small. The discrete rate for lapsed satellites ranges from 3.1184% to 19.4727%, and the difference between its oscillation and average value is relatively large. The reason is that the composition of rocket debris is simple, while that of lapsed satellites is complex. Therefore, ground-based spectrum detection technology can be used for the classification of spatial fragments.
Tian, Chao; Wang, Lixin; Novick, Kimberly A
2016-10-15
High-precision analysis of atmospheric water vapor isotope compositions, especially δ¹⁷O values, can be used to improve our understanding of multiple hydrological and meteorological processes (e.g., to differentiate equilibrium from kinetic fractionation). This study focused on assessing, for the first time, how the accuracy and precision of vapor δ¹⁷O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging time. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ²H, δ¹⁸O and δ¹⁷O measurements. The sensitivity of accuracy and precision to water vapor concentration was evaluated using two international standards (GISP and SLAP2). The sensitivity of precision to delta value was evaluated using four working standards spanning a large delta range. The sensitivity of precision to averaging time was assessed by measuring one standard continuously for 24 hours. Overall, the accuracy and precision of the δ²H, δ¹⁸O and δ¹⁷O measurements were high. Across all vapor concentrations, the accuracy of δ²H, δ¹⁸O and δ¹⁷O observations ranged from 0.10‰ to 1.84‰, 0.08‰ to 0.86‰ and 0.06‰ to 0.62‰, respectively, and the precision ranged from 0.099‰ to 0.430‰, 0.009‰ to 0.080‰ and 0.022‰ to 0.054‰, respectively. The accuracy and precision of all isotope measurements were sensitive to concentration, with higher accuracy and precision generally observed under moderate vapor concentrations (i.e., 10000-15000 ppm) for all isotopes. The precision was also sensitive to the range of delta values, although the effect was not as large as the sensitivity to concentration. The precision was much less sensitive to averaging time than to the concentration and delta range effects. The accuracy and precision performance of the T-WVIA depend on concentration but depend less on the delta value and averaging time. The instrument can simultaneously and continuously measure δ²H, δ¹⁸O and δ¹⁷O values in water vapor, opening a new window to better understand ecological, hydrological and meteorological processes. Copyright © 2016 John Wiley & Sons, Ltd.
Racial and ethnic differences among amyotrophic lateral sclerosis cases in the United States.
Rechtman, Lindsay; Jordan, Heather; Wagner, Laurie; Horton, D Kevin; Kaye, Wendy
2015-03-01
Our objective was to describe racial and ethnic differences of amyotrophic lateral sclerosis (ALS) in distinct geographic locations around the United States (U.S.). ALS cases for the period 2009-2011 were identified using active case surveillance in three states and eight metropolitan areas. Of the 5883 unique ALS cases identified, 74.8% were white, 9.3% were African-American/black, 3.6% were Asian, 12.0% were an unknown race, and 0.3% were marked as some other race. For ethnicity, 77.5% were defined as non-Hispanic, 10.8% Hispanic, and 11.7% were of unknown ethnicity. The overall crude average annual incidence rate was 1.52 per 100,000 person-years and the rate differed by race and ethnicity. The overall age-adjusted average annual incidence rate was 1.44 per 100,000 person-years and the age-adjusted average incidence rates also differed by race and ethnicity. Racial differences were also found in payer type, time from symptom onset to diagnosis, reported El Escorial criteria, and age at diagnosis. In conclusion, calculated incidence rates demonstrate that ALS occurs less frequently in African-American/blacks and Asians compared to whites, and less frequently in Hispanics compared to non-Hispanics in the U.S. A more precise understanding of racial and ethnic variations in ALS may help to reveal candidates for further studies of disease etiology and disease progression.
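Age adjustment of the kind reported above is usually done by direct standardization (the abstract does not state the exact method used); a minimal sketch with hypothetical age strata and a hypothetical standard population:

```python
def age_adjusted_rate(stratum_rates, standard_pop):
    """Direct standardization: weight each age-stratum incidence rate
    by the standard population's share of that stratum."""
    total = sum(standard_pop)
    return sum(r * n for r, n in zip(stratum_rates, standard_pop)) / total

# Hypothetical incidence per 100,000 person-years by age band
rates = [0.2, 1.0, 4.0]            # young, middle, older (illustrative)
std_pop = [40_000, 40_000, 20_000] # hypothetical standard population
adj = age_adjusted_rate(rates, std_pop)
```

Because ALS incidence rises with age, crude and age-adjusted rates differ whenever the racial or ethnic groups being compared have different age structures, which is why the study reports both.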
High-power picosecond laser with 400W average power for large scale applications
NASA Astrophysics Data System (ADS)
Du, Keming; Brüning, Stephan; Gillner, Arnold
2012-03-01
Laser processing is generally known for low thermal influence, precise energy deposition, and the ability to ablate any type of material independent of hardness and vaporization temperature. The use of ultra-short pulsed lasers offers new possibilities in the manufacturing of high-end products with extremely high processing quality. Achieving sufficient and economical processing speed requires high average power. To scale the power for industrial use, a picosecond laser system has been developed consisting of a seeder, a preamplifier, and an end amplifier. With the oscillator/amplifier system, more than 400 W average power and a maximum pulse energy of 1 mJ were obtained. To study high-speed processing of large embossing metal rollers, two different ps laser systems were integrated into a cylinder engraving machine. One of the ps lasers has an average power of 80 W while the other has 300 W. With this high-power ps laser, fluences of up to 30 J/cm² at pulse repetition rates in the multi-MHz range have been achieved. Different materials (Cu, Ni, Al, steel) were explored for parameters such as ablation rate per pulse, ablation geometry, surface roughness, influence of pulse overlap, and number of loops. Enhanced ablation quality and an effective ablation rate of 4 mm³/min were achieved by using different scanning systems and an optimized processing strategy. The maximum achieved volume rate is 20 mm³/min.
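Fluence is pulse energy divided by spot area; assuming a ~65 µm spot diameter (not stated in the abstract), a 1 mJ pulse yields roughly the 30 J/cm² quoted:

```python
import math

def fluence_j_per_cm2(pulse_energy_j, spot_diameter_m):
    """Average fluence over a circular spot: F = E / (pi * r^2),
    with the radius converted to centimeters."""
    radius_cm = spot_diameter_m * 100.0 / 2.0
    return pulse_energy_j / (math.pi * radius_cm ** 2)

# Assumed ~65 µm spot (illustrative); 1 mJ pulse energy from the text
f = fluence_j_per_cm2(1e-3, 65e-6)  # ≈ 30 J/cm²
```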
Design of mechanical arm for an automatic sorting system of recyclable cans
NASA Astrophysics Data System (ADS)
Resti, Y.; Mohruni, A. S.; Burlian, F.; Yani, I.; Amran, A.
2018-04-01
The mechanical arm for an automatic sorting system of used cans must be designed carefully: the right design yields a high-precision sorting rate and a short sorting time. The design comprises, first, the manipulator design; second, the link and joint specifications; and third, the mechanical and control systems. This study aims to design the mechanical arm as the hardware of an automatic can-sorting system. The manipulator is made of aluminum plate and is designed with six links and six joints, where the sixth link is the end effector and the sixth joint is the gripper. Servo motors are used as the driving motors, and an Arduino Uno microcontroller connected to the Matlab programming language serves as the controller. In testing, the mechanical arm designed for this recyclable-can sorting system achieved a sorting precision of 93%, with an average total sorting time of 10.82 seconds.
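The link-and-joint layout described above determines the end-effector position through forward kinematics. A minimal planar sketch with hypothetical link lengths (the paper's arm is a six-link, six-joint servo design, but its dimensions are not given here):

```python
import math

# Planar forward kinematics for a serial arm: each joint adds its angle,
# each link extends the chain. The link lengths (cm) are hypothetical;
# the paper's arm has six links/joints with the last pair acting as
# end effector and gripper.

def forward_kinematics(lengths, angles):
    """Return (x, y) of the end effector for a planar serial arm."""
    x = y = 0.0
    theta = 0.0
    for L, a in zip(lengths, angles):
        theta += a
        x += L * math.cos(theta)
        y += L * math.sin(theta)
    return x, y

lengths = [10.0, 8.0, 6.0, 4.0, 3.0, 2.0]
# Fully extended arm: all joint angles zero -> end effector on the x-axis.
x, y = forward_kinematics(lengths, [0.0] * 6)
print(x, y)  # 33.0 0.0
```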
McCall, Brian P; Horwitz, Irwin B; Carr, Bethanie S
2007-09-01
Injuries to adolescents from occupational activities have been recognized as a significant public health concern. The objective of this study was to quantify adolescent injury rates, analyze risk factors, and measure the severity of injuries sustained using Oregon workers' compensation data. From 1990-1997, a total of 8060 workers' compensation claims, submitted by claimants 16-19 years old, were accepted by Oregon and used in these analyses. Data from the Bureau of Labor Statistics were used to derive injury rates. An overall estimated claim rate of 134.2 (95% confidence interval [CI] 124.9-143.6) per 10,000 adolescent workers was found, with males having over twice the rate of females. The total average annual claim cost was $3,168,457, representing $3145 per claim. The average total temporary disability period per claim was 22.3 days. Precision production workers had the highest claim rate, 296.2 (95% CI 178.9-413.4), and the highest associated costs ($8266) of all occupations, whereas those in farming/fishing/forestry occupations had the longest average periods of indemnification, at 31.6 days. Day shift workers had the highest claim rates and most severe injuries relative to other shifts. The injury rates found among adolescent workers demonstrate that continued safety interventions and increased training are needed. Because of their high claim rates and injury severity, particular attention should be focused on adolescents in food service, manufacturing, and agricultural occupations. Understanding the differences of adolescent circadian rhythm patterns when establishing work schedules and supervisory practices could also prove valuable for decreasing injury risk.
Astronomical and physical data for meteoroids recorded by the Altair radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, P. G.; ReVelle, D. O.
We present preliminary results of orbital and physical measurements for a small selection of meteoroids observed at UHF frequencies by the ALTAIR radar on Kwajalein Island on November 17, 1998. The head echoes observed by ALTAIR allowed precise determination of velocities and decelerations, from which the orbits of individual meteoroids were computed and their masses derived through numerical modelling. During these observations, the ALTAIR radar detected average head echo rates of 1665 per hour.
Direct Optical Measurement of Vorticity in Fluid Flow
2015-12-11
was later employed to measure the angular velocity of a microparticle trapped and spinning in an optical trap [7]. II. Objectives We believe it...known theoretically. Two sets of experiments are presented. In the first, the signal from a group of 6 μm microparticles is integrated to obtain the...vorticity is known precisely. In one experiment measurements with a group of 6 μm microparticles is used to obtain the average fluid rotation rate about the
Heat input and accumulation for ultrashort pulse processing with high average power
NASA Astrophysics Data System (ADS)
Finger, Johannes; Bornschlegel, Benedikt; Reininghaus, Martin; Dohrn, Andreas; Nießen, Markus; Gillner, Arnold; Poprawe, Reinhart
2018-05-01
Materials processing using ultrashort pulsed laser radiation with pulse durations <10 ps is known to enable very precise processing with negligible thermal load. However, even with picosecond and femtosecond laser radiation, not all of the absorbed energy is converted into ablation products; a distinct fraction remains as residual heat in the processed workpiece. For low average powers and power densities, this heat is usually not relevant for the processing results and dissipates into the workpiece. In contrast, when higher average powers and repetition rates are applied to increase throughput and upscale ultrashort pulse processing, this heat input becomes relevant and significantly affects the achieved processing results. In this paper, we outline the relevance of heat input for ultrashort pulse processing, starting with the heat input of a single ultrashort laser pulse. Heat accumulation during ultrashort pulse processing with high repetition rates is discussed, as well as heat accumulation during materials processing using pulse bursts. In addition, the relevance of heat accumulation with multiple scanning passes and processing with multiple laser spots is shown.
Derivation and precision of mean field electrodynamics with mesoscale fluctuations
NASA Astrophysics Data System (ADS)
Zhou, Hongzhe; Blackman, Eric G.
2018-06-01
Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.
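The kernel-averaging and mesoscale-correction idea described above can be written schematically (the notation is ours, not the authors'):

```latex
% Schematic notation (ours, not the paper's): a kernel average of the
% magnetic field over scale \ell, and the leading mesoscale correction
% when the mean varies on a scale L not much larger than \ell, so that
% the Reynolds rule \overline{\overline{B}} = \overline{B} holds only
% up to terms of order (\ell/L)^2.
\begin{align}
  \overline{B}_\ell(\mathbf{x},t) &=
    \int K_\ell(\mathbf{x}-\mathbf{x}')\, B(\mathbf{x}',t)\, d^3x' ,\\
  \overline{\overline{B}_\ell} &= \overline{B}_\ell
    + \mathcal{O}\!\left[(\ell/L)^2\right] ,
\end{align}
```

so the mesoscale correction terms vanish only in the strict scale-separation limit $\ell/L \to 0$.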
NASA Astrophysics Data System (ADS)
Gao, Peng
2018-06-01
This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions we prove that there is a limit process in which the fast varying process is averaged out, and that this limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
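The slow-fast structure and the averaged limit described above can be written schematically (the notation is ours, not the paper's):

```latex
% Schematic slow-fast form (notation ours): a slow component u^\epsilon
% coupled to a fast process v^\epsilon, and the averaged equation
% obtained in the limit \epsilon \to 0.
\begin{align}
  du^\epsilon &= \big[\mathcal{A}u^\epsilon
      + f(u^\epsilon, v^\epsilon)\big]\,dt, \\
  dv^\epsilon &= \tfrac{1}{\epsilon}\big[\mathcal{B}v^\epsilon
      + g(u^\epsilon, v^\epsilon)\big]\,dt
      + \tfrac{1}{\sqrt{\epsilon}}\,dW_t, \\
  d\bar{u} &= \big[\mathcal{A}\bar{u} + \bar{f}(\bar{u})\big]\,dt,
  \qquad \bar{f}(u) = \int f(u,v)\,\mu^{u}(dv),
\end{align}
```

where $\mu^{u}$ is the stationary measure of the fast process with the slow variable frozen, and the Khasminskii technique yields the rate at which $u^\epsilon \to \bar{u}$.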
Methods for semi-automated indexing for high precision information retrieval.
Berrios, Daniel C; Cucina, Russell J; Fagan, Lawrence M
2002-01-01
To evaluate a new system, ISAID (Internet-based Semi-automated Indexing of Documents), and to generate textbook indexes that are more detailed and more useful to readers. Pilot evaluation: simple, nonrandomized trial comparing ISAID with manual indexing methods. Methods evaluation: randomized, cross-over trial comparing three versions of ISAID and usability survey. Pilot evaluation: two physicians. Methods evaluation: twelve physicians, each of whom used three different versions of the system for a total of 36 indexing sessions. Total index term tuples generated per document per minute (TPM), with and without adjustment for concordance with other subjects; inter-indexer consistency; ratings of the usability of the ISAID indexing system. Compared with manual methods, ISAID decreased indexing times greatly. Using three versions of ISAID, inter-indexer consistency ranged from 15% to 65% with a mean of 41%, 31%, and 40% for each of three documents. Subjects using the full version of ISAID were faster (average TPM: 5.6) and had higher rates of concordant index generation. There were substantial learning effects, despite our use of a training/run-in phase. Subjects using the full version of ISAID were much faster by the third indexing session (average TPM: 9.1). There was a statistically significant increase in three-subject concordant indexing rate using the full version of ISAID during the second indexing session (p < 0.05). Users of the ISAID indexing system create complex, precise, and accurate indexing for full-text documents much faster than users of manual methods. Furthermore, the natural language processing methods that ISAID uses to suggest indexes contribute substantially to increased indexing speed and accuracy.
[Influence of trabecular microstructure modeling on finite element analysis of dental implant].
Shen, M J; Wang, G G; Zhu, X H; Ding, X
2016-09-01
To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface using a three-dimensional finite element mandible model with trabecular structure. Dental implants were embedded in the mandible of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models were built: one with trabecular microstructure (precise model) and one with macrostructure (simplified model). The stress and strain values at the implant-bone interface were calculated using Ansys 14.0 software. Compared with the simplified model, the precise models' average interface stress values increased markedly, while their maximum values changed little. The maximum equivalent stress values of the precise models were 80% and 110% of those of the simplified model, and the average values were 170% and 290%. The maximum and average equivalent strain values of the precise models were markedly lower: the maximum values were 17% and 26% of the simplified model, and the average values 21% and 16%, respectively. Stress and strain concentrations at the implant-bone interface were obvious in the simplified model, whereas the distributions of stress and strain were uniform in the precise model. The choice of a precise model has a significant effect on the computed distribution of stress and strain at the implant-bone interface.
NASA Technical Reports Server (NTRS)
Irion, F. W.; Moyer, E. J.; Gunson, M. R.; Rinsland, C. P.; Yung, Y. L.; Michelsen, H. A.; Salawitch, R. J.; Chang, A. Y.; Newchurch, M. J.; Abbas, M. M.;
1996-01-01
Stratospheric mixing ratios of CH3D from 100 mb to 17 mb (approximately 15 to 28 km) and HDO from 100 mb to 10 mb (approximately 15 to 32 km) have been inferred from high resolution solar occultation infrared spectra from the Atmospheric Trace MOlecule Spectroscopy (ATMOS) Fourier-transform interferometer. The spectra, taken on board the Space Shuttle during the Spacelab 3 and ATLAS-1, -2, and -3 missions, extend in latitude from 70 deg S to 65 deg N. We find CH3D entering the stratosphere at an average mixing ratio of (9.9 +/- 0.8) x 10^-10, with a D/H ratio in methane (7.1 +/- 7.4)% less than that in Standard Mean Ocean Water (SMOW) (1 sigma combined precision and systematic error). In the mid to lower stratosphere, the average lifetime of CH3D is found to be (1.19 +/- 0.02) times that of CH4, resulting in an increasing D/H ratio in methane as air 'ages' and the methane mixing ratio decreases. We find that an average of (1.0 +/- 0.1) molecules of stratospheric HDO are produced for each CH3D destroyed (1 sigma combined precision and systematic error), indicating that the rate of HDO production is approximately equal to the rate of CH3D destruction. Assuming negligible amounts of deuterium in species other than HDO, CH3D and HD, this limits the possible change in the stratospheric HD mixing ratio below about 10 mb to +/- 0.1 molecules HD created per molecule CH3D destroyed.
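The quoted D/H depletion relative to SMOW can be checked from the mixing ratios. A sketch in which the CH4 mixing ratio (~1.7 ppmv) and the SMOW D/H ratio are assumed standard values not taken from the abstract:

```python
# D/H in stratospheric methane relative to SMOW. The CH3D mixing ratio
# (9.9e-10) is from the abstract; the tropospheric CH4 mixing ratio
# (~1.7e-6) and the SMOW D/H ratio (1.5576e-4) are assumed standard
# values, not values from the paper.

CH3D = 9.9e-10          # CH3D mixing ratio entering the stratosphere
CH4 = 1.7e-6            # assumed CH4 mixing ratio
R_SMOW = 1.5576e-4      # assumed VSMOW D/H ratio

# Each CH4 molecule has 4 hydrogen sites, so D/H = [CH3D] / (4 [CH4]).
R_methane = CH3D / (4.0 * CH4)
depletion = (R_methane / R_SMOW - 1.0) * 100.0  # percent vs SMOW
print(round(depletion, 1))  # → -6.5
```

With these assumed inputs the depletion comes out near -6.5%, consistent within the quoted uncertainty with the (7.1 +/- 7.4)% depletion reported above.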
The contribution of Multi-GNSS Experiment (MGEX) to precise point positioning
NASA Astrophysics Data System (ADS)
Guo, Fei; Li, Xingxing; Zhang, Xiaohong; Wang, Jinling
2017-06-01
In response to the changing world of GNSS, the International GNSS Service (IGS) has initiated the Multi-GNSS Experiment (MGEX). As part of the MGEX project, initial precise orbit and clock products have been released for public use, which are the key prerequisites for multi-GNSS precise point positioning (PPP). In particular, precise orbits and clocks at intervals of 5 min and 30 s are presently available for the new emerging systems. This paper investigates the benefits of multi-GNSS for PPP. Firstly, orbit and clock consistency tests (between different providers) were performed for GPS, GLONASS, Galileo and BeiDou. In general, the differences of GPS are, respectively, 1.0-1.5 cm for orbit and 0.1 ns for clock. The consistency of GLONASS is worse than GPS by a factor of 2-3, i.e. 2-4 cm for orbit and 0.2 ns for clock. However, the corresponding differences of Galileo and BeiDou are significantly larger than those of GPS and GLONASS, particularly for the BeiDou GEO satellites. Galileo as well as BeiDou IGSO/MEO products have a consistency of 0.1-0.2 m for orbit, and 0.2-0.3 ns for clock. As to BeiDou GEO satellites, the difference of their orbits reaches 3-4 m in along-track, 0.5-0.6 m in cross-track, and 0.2-0.3 m in the radial directions, together with an average RMS of 0.6 ns for clock. Furthermore, the short-term stability of multi-GNSS clocks was analyzed by Allan deviation. Results show that clock stability of the onboard GNSS is highly dependent on the satellites generations, operational lifetime, orbit types, and frequency standards. Finally, kinematic PPP tests were conducted to investigate the contribution of multi-GNSS and higher rate clock corrections. As expected, the positioning accuracy as well as convergence speed benefit from the fusion of multi-GNSS and higher rate of precise clock corrections. The multi-GNSS PPP improves the positioning accuracy by 10-20%, 40-60%, and 60-80% relative to the GPS-, GLONASS-, and BeiDou-only PPP. 
The usage of 30 s interval clock products decreases interpolation errors, and the positioning accuracy is improved by an average of 30-50% for all cases except the BeiDou-only PPP.
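The benefit of 30 s over 5 min clock products comes from interpolation error growing with the sampling interval. A toy sketch with an invented smooth clock model (the drift terms are illustrative, not real satellite clock behaviour):

```python
import numpy as np

# Toy illustration of why 30 s clock products beat 5 min products for
# kinematic PPP: the error of linearly interpolating a smoothly drifting
# clock grows roughly with the square of the sampling interval. The
# clock model (quadratic drift plus a slow sinusoid, in nanoseconds)
# is invented for illustration only.

def clock_ns(t):
    return 0.5e-4 * t**2 + 2.0 * np.sin(2 * np.pi * t / 3600.0)

def max_interp_error(interval, t_end=3600.0):
    knots = np.arange(0.0, t_end + interval, interval)
    dense = np.arange(0.0, t_end, 1.0)
    interp = np.interp(dense, knots, clock_ns(knots))
    return np.max(np.abs(interp - clock_ns(dense)))

err_300 = max_interp_error(300.0)  # 5 min products
err_30 = max_interp_error(30.0)    # 30 s products
print(err_300, err_30)             # error shrinks ~100x at 30 s sampling
```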
Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An
2010-01-01
To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The model parameters were estimated through the unconditional least squares method, the structure was determined according to the criterion of residual uncorrelatedness, and goodness-of-fit was assessed through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008, and the validity of the model was evaluated by comparing the predicted incidence rate with the actual one. The incidence rate in 2010 was then predicted by the ARIMA model based on the incidence rate from January 1990 to June 2009. The model ARIMA (1, 1, 1)(0, 1, 2)_12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), moving average coefficient (MA1 = 0.806), and seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131 respectively, and the prediction error was white noise. The model equation was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)mu_t. The predicted incidence rate for 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rate from January 1990 to June 2009, was 9.390 per 100,000. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast future incidence rates in Shanghai; it is a high-precision model for short-term forecasting.
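The fitted model's backshift polynomials can be expanded mechanically. A sketch using only the coefficients quoted above; treating past shocks as zero in the forecast step is a simplification, not the paper's full procedure:

```python
import numpy as np

# Expand the backshift polynomials of the reported model
#   (1 - 0.443B)(1 - B)(1 - B^12) Z_t
#     = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24) mu_t
# and use the AR side for a naive one-step forecast. The coefficients
# come from the abstract; setting past shocks to zero is a simplification.

def poly_B(*factors):
    """Multiply polynomials in the backshift operator B."""
    out = np.array([1.0])
    for f in factors:
        out = np.convolve(out, f)
    return out

ar = poly_B([1, -0.443], [1, -1], [1] + [0] * 11 + [-1])
ma = poly_B([1, -0.806], [1] + [0] * 11 + [-0.543], [1] + [0] * 23 + [-0.321])

def one_step_forecast(history):
    """Z_hat_t = -sum_{i>=1} ar[i] * Z_{t-i}, with all shocks set to 0."""
    lags = np.asarray(history)[::-1]           # most recent value first
    return -np.dot(ar[1:], lags[: len(ar) - 1])

print(len(ar), len(ma))  # 15 38
# The unit roots (1-B)(1-B^12) make a constant series forecast itself:
print(round(one_step_forecast([5.0] * 14), 6))  # 5.0
```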
Estimation of particulate nutrient load using turbidity meter.
Yamamoto, K; Suetsugi, T
2006-01-01
The "Nutrient Load Hysteresis Coefficient" was proposed to quantitatively evaluate the hysteresis of nutrient loads with respect to flow rate. It classifies the runoff patterns of nutrient load into 15 patterns. Linear relationships between turbidity and the concentrations of particulate nutrients were observed. It was clarified that this linearity results from the influence of particle size on the turbidity output and the accumulation of nutrients on smaller particles (diameter < 23 microm). The L-Q-Turb method, a new method for estimating runoff loads of nutrients using a regression curve between turbidity and the concentrations of particulate nutrients, was developed. This method improves the precision of nutrient load estimates even when the loads show strong hysteresis with respect to flow rate. For example, for the total phosphorus load over eight flood events, the average estimation error of the L-Q-Turb method was 11%, whereas the average error using a regression curve between flow rate and nutrient load was 28%.
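The L-Q-Turb idea (regress particulate nutrient concentration on turbidity, then integrate concentration times flow over the event) can be sketched as follows; the series and the regression coefficients are invented, since the abstract gives no numbers for them:

```python
import numpy as np

# Sketch of the L-Q-Turb idea: fit a linear turbidity-concentration
# relation, then integrate concentration x flow over a flood event.
# The synthetic turbidity/flow series and the regression slope are
# invented; the paper reports only that the relation is linear.

turb = np.linspace(10, 200, 20)      # NTU, calibration samples
conc = 0.012 * turb + 0.05           # mg/L, assumed true relation
slope, intercept = np.polyfit(turb, conc, 1)

# Flood event: turbidity and flow sampled every 600 s.
turb_ev = np.array([50, 120, 180, 150, 90, 40.0])
flow_ev = np.array([2.0, 6.0, 9.0, 7.0, 4.0, 2.5])   # m^3/s
conc_ev = slope * turb_ev + intercept                # mg/L = g/m^3
load_g = np.sum(conc_ev * flow_ev * 600.0)           # grams over the event
print(round(slope, 3), round(load_g, 1))             # → 0.012 29355.0
```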
Improving precision of glomerular filtration rate estimating model by ensemble learning.
Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi
2017-11-09
Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m2 in the developmental and 53.4 ml/min/1.73 m2 in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m2) than in the new regression model (IQR = 14.0 ml/min/1.73 m2, P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m2, 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.
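Ensemble averaging of the three estimators is straightforward. A sketch in which three toy creatinine-based formulas stand in for the trained ANN, SVM and regression equation (model_c resembles an MDRD-style equation; the other two are invented placeholders, not the paper's models):

```python
import numpy as np

# Ensemble-averaging sketch: the paper averages an ANN, an SVM and a
# regression equation; here three toy estimators stand in for them.

def model_a(age, sex, scr):   # stand-in for the ANN (invented)
    return 160.0 * scr ** -1.1 * 0.995 ** age * (0.95 if sex == "F" else 1.0)

def model_b(age, sex, scr):   # stand-in for the SVM (invented)
    return 150.0 * scr ** -1.0 * 0.99 ** age * (0.93 if sex == "F" else 1.0)

def model_c(age, sex, scr):   # MDRD-style regression equation
    return 175.0 * scr ** -1.154 * age ** -0.203 * (0.742 if sex == "F" else 1.0)

def ensemble_gfr(age, sex, scr):
    """Ensemble estimate: plain average of the three model outputs."""
    preds = [m(age, sex, scr) for m in (model_a, model_b, model_c)]
    return float(np.mean(preds))

est = ensemble_gfr(age=50, sex="M", scr=1.0)   # serum creatinine in mg/dL
print(round(est, 1))
```

Averaging reduces the variance of uncorrelated model errors, which is the mechanism behind the tighter IQR reported for the ensemble.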
Shao, Jiaxin; Rapacchi, Stanislas; Nguyen, Kim-Lien; Hu, Peng
2016-02-01
To develop an accurate and precise myocardial T1 mapping technique using an inversion recovery spoiled gradient echo readout at 3.0 Tesla (T). The modified Look-Locker inversion-recovery (MOLLI) sequence was modified to use fast low angle shot (FLASH) readout, incorporating a BLESSPC (Bloch Equation Simulation with Slice Profile Correction) T1 estimation algorithm, for accurate myocardial T1 mapping. The FLASH-MOLLI with BLESSPC fitting was compared with different approaches and sequences with regards to T1 estimation accuracy, precision and image artifact based on simulation, phantom studies, and in vivo studies of 10 healthy volunteers and three patients at 3.0 Tesla. The FLASH-MOLLI with BLESSPC fitting yields accurate T1 estimation (average error = -5.4 ± 15.1 ms, percentage error = -0.5% ± 1.2%) for T1 from 236-1852 ms and heart rate from 40-100 bpm in phantom studies. The FLASH-MOLLI sequence prevented off-resonance artifacts in all 10 healthy volunteers at 3.0T. In vivo, there was no significant difference between FLASH-MOLLI-derived myocardial T1 values and "ShMOLLI+IE" derived values (1458.9 ± 20.9 ms versus 1464.1 ± 6.8 ms, P = 0.50); However, the average precision by FLASH-MOLLI was significantly better than that generated by "ShMOLLI+IE" (1.84 ± 0.36% variance versus 3.57 ± 0.94%, P < 0.001). The FLASH-MOLLI with BLESSPC fitting yields accurate and precise T1 estimation, and eliminates banding artifacts associated with bSSFP at 3.0T. © 2015 Wiley Periodicals, Inc.
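The Look-Locker correction underlying MOLLI-type fitting can be sketched on synthetic data. This is not the paper's BLESSPC algorithm (which simulates the Bloch equations with slice-profile correction); it is the textbook three-parameter model with the signal amplitude A assumed known for simplicity:

```python
import numpy as np

# Simplified inversion-recovery T1 fit with Look-Locker correction:
#   S(t) = A - B * exp(-t / T1star),   T1 = T1star * (B/A - 1).
# Synthetic noiseless data; A is assumed known so the fit reduces to
# a line in ln(A - S) vs. inversion time.

A, B, T1 = 1.0, 1.9, 1200.0                  # arbitrary units, ms
T1star = T1 / (B / A - 1.0)                  # apparent T1 under readout
TI = np.array([100, 200, 400, 800, 1600, 3200.0])   # inversion times, ms
S = A - B * np.exp(-TI / T1star)             # synthetic samples

# ln(A - S) is linear in TI with slope -1/T1star and intercept ln(B).
slope, intercept = np.polyfit(TI, np.log(A - S), 1)
T1star_fit = -1.0 / slope
B_fit = np.exp(intercept)
T1_fit = T1star_fit * (B_fit / A - 1.0)      # Look-Locker correction
print(round(T1_fit, 1))  # recovers 1200.0
```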
Studies on fast triggering and high precision tracking with Resistive Plate Chambers
NASA Astrophysics Data System (ADS)
Aielli, G.; Ball, R.; Bilki, B.; Chapman, J. W.; Cardarelli, R.; Dai, T.; Diehl, E.; Dubbert, J.; Ferretti, C.; Feng, H.; Francis, K.; Guan, L.; Han, L.; Hou, S.; Levin, D.; Li, B.; Liu, L.; Paolozzi, L.; Repond, J.; Roloff, J.; Santonico, R.; Song, H. Y.; Wang, X. L.; Wu, Y.; Xia, L.; Xu, L.; Zhao, T.; Zhao, Z.; Zhou, B.; Zhu, J.
2013-06-01
We report on studies of fast triggering and high precision tracking using Resistive Plate Chambers (RPCs). Two beam tests were carried out with the 180 GeV/c muon beam at CERN using glass RPCs with gas gaps of 1.15 mm and equipped with readout strips with 1.27 mm pitch. This is the first beam test of RPCs with fine-pitch readout strips that explores precision tracking and triggering capabilities. RPC signals were acquired with precision timing and charge-integrating readout electronics at both ends of the strips. The time resolution was measured to be better than 600 ps, and the average spatial resolution was found to be 220 μm using charge information and 287 μm using signal arrival time information only. The dual-ended readout allows determination of the average and the difference of the signal arrival times. The average time was found to be independent of the incident particle position along the strip and is useful for triggering purposes. The time difference yielded a determination of the hit position along the strip with a precision of 7.5 mm. These results demonstrate the feasibility of using RPCs for fast, high-resolution triggering and tracking.
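The dual-ended readout arithmetic is simple enough to sketch. The propagation speed and strip length below are assumed values, not the measured ones from the paper:

```python
# Dual-ended strip readout: from the arrival times t1, t2 at the two
# strip ends, the average is position-independent (good for triggering)
# and the difference gives the hit position along the strip. The
# propagation speed V and strip length L are assumed values.

V = 0.18        # m/ns, assumed signal propagation speed along the strip
L = 1.0         # m, assumed strip length

def arrival_times(x, t0=0.0):
    """Signal arrival times for a hit at position x from strip centre."""
    t1 = t0 + (L / 2 + x) / V
    t2 = t0 + (L / 2 - x) / V
    return t1, t2

def average_time(t1, t2):
    return 0.5 * (t1 + t2)          # = t0 + L/(2V), independent of x

def hit_position(t1, t2):
    return 0.5 * V * (t1 - t2)      # recovers x

t1, t2 = arrival_times(x=0.3)
print(round(average_time(t1, t2), 3), round(hit_position(t1, t2), 3))
```

Because the average time cancels the position term exactly, it is a clean trigger-time estimate, while the difference carries all the position information.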
Precision Learning Assessment: An Alternative to Traditional Assessment Techniques.
ERIC Educational Resources Information Center
Caltagirone, Paul J.; Glover, Christopher E.
1985-01-01
A continuous and curriculum-based assessment method, Precision Learning Assessment (PLA), which integrates precision teaching and norm-referenced techniques, was applied to a math computation curriculum for 214 third graders. The resulting districtwide learning curves defining average annual progress through the computation curriculum provided…
Optical coherence tomography image-guided smart laser knife for surgery.
Katta, Nitesh; McElroy, Austin B; Estrada, Arnold D; Milner, Thomas E
2018-03-01
Surgical oncology can benefit from specialized tools that enhance imaging and enable precise cutting and removal of tissue without damage to adjacent structures. The combination of high-resolution, fast optical coherence tomography (OCT) co-aligned with a nanosecond pulsed thulium (Tm) laser offers advantages over conventional surgical laser systems. Tm lasers provide superior beam quality and high volumetric tissue removal rates with a minimal residual thermal footprint in tissue, enabling a reduction in unwanted damage to delicate adjacent sub-surface structures such as nerves or micro-vessels. We investigated such a combined Tm/OCT system with co-aligned imaging and cutting beams, a configuration we call a "smart laser knife." A blow-off model that considers absorption coefficients and beam delivery systems was utilized to predict Tm cut depth, tissue removal rate and the spatial distribution of residual thermal injury. Experiments were performed to verify the volumetric removal rate predicted by the model as a function of average power. A bench-top, combined Tm/OCT system was constructed using a 15 W 1940 nm nanosecond pulsed Tm fiber laser (500 μJ pulse energy, 100 ns pulse duration, 30 kHz repetition rate) for removing tissue and a swept source laser (1310 ± 70 nm, 100 kHz sweep rate) for OCT imaging. Tissue phantoms were used to demonstrate precise surgery with blood vessel avoidance. Depth-imaging-informed cutting/removal of targeted tissue structures by the Tm laser was performed. Laser cutting was accomplished around and above phantom blood vessels while avoiding damage to vessel walls. A tissue removal rate of 5.5 mm3/sec was achieved experimentally, in comparison to the model prediction of approximately 6 mm3/sec. We describe a system that combines OCT and laser tissue modification with a Tm laser.
Simulation results for the tissue removal rate using a simple model, as a function of average power, are in good agreement with experimental results using tissue phantoms. Lasers Surg. Med. 50:202-212, 2018. © 2017 Wiley Periodicals, Inc.
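The blow-off model mentioned above relates ablation depth per pulse to fluence via d = ln(F/F_th)/mu_a. A sketch with assumed tissue parameters; only the 30 kHz repetition rate and 500 μJ pulse energy come from the abstract, while the absorption coefficient, threshold fluence and spot size are invented round numbers:

```python
import math

# Blow-off model sketch: depth per pulse d = ln(F / F_th) / mu_a,
# volumetric rate = d * spot_area * repetition_rate. The effective
# absorption coefficient, threshold fluence and spot size are assumed
# values, not the paper's tissue parameters.

mu_a = 450.0          # 1/cm, assumed effective absorption coefficient
F_th = 2.0            # J/cm^2, assumed ablation threshold fluence
f_rep = 30_000        # Hz, repetition rate (from the abstract)
spot_d = 100e-4       # cm, assumed 100 um spot diameter
area = math.pi * (spot_d / 2) ** 2     # cm^2

def removal_rate_mm3_per_s(P_avg):
    """Volumetric removal rate (mm^3/s) for average power P_avg (W)."""
    F = (P_avg / f_rep) / area         # fluence per pulse, J/cm^2
    if F <= F_th:
        return 0.0                     # below threshold: no ablation
    depth_cm = math.log(F / F_th) / mu_a
    return depth_cm * area * f_rep * 1e3   # cm^3/s -> mm^3/s

for P in (5.0, 10.0, 15.0):
    print(P, round(removal_rate_mm3_per_s(P), 2))
```

With these assumed parameters the predicted rate at full power is a few mm3/sec, the same order as the ~6 mm3/sec model prediction reported above.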
A Predictive Algorithm to Detect Opioid Use Disorder: What Is the Utility in a Primary Care Setting?
Lee, Chee; Sharma, Maneesh; Kantorovich, Svetlana; Brenton, Ashley
2018-01-01
Purpose: The purpose of this study was to determine the clinical utility of an algorithm-based decision tool designed to assess risk associated with opioid use in the primary care setting. Methods: A prospective, longitudinal study was conducted to assess the utility of precision medicine testing in 1822 patients across 18 family medicine/primary care clinics in the United States. Using the profile, patients were categorized into low, moderate, and high risk for opioid use. Physicians who ordered testing were asked to complete patient evaluations and document their actions, decisions, and perceptions regarding the utility of the precision medicine tests. Results: Approximately 47% of primary care physicians surveyed used the profile to guide clinical decision-making. These physicians rated the benefit of the profile on patient care an average of 3.6 on a 5-point scale (1 indicating no benefit and 5 indicating significant benefit). Eighty-eight percent of all clinicians surveyed felt the test exhibited some benefit to their patient care. The most frequent utilization for the profile was to guide a change in opioid prescribed. Physicians reported greater benefit of profile utilization for minority patients. Patients whose treatment was guided by the profile had pain levels that were reduced, on average, 2.7 levels on the numeric rating scale. Conclusions: The profile provided primary care physicians with a useful tool to stratify the risk of opioid use disorder and was rated as beneficial for decision-making and patient improvement by the majority of physicians surveyed. Physicians reported the profile resulted in greater clinical improvement for minorities, highlighting the objective use of this profile to guide judicial use of opioids in high-risk patients. Significantly, when physicians used the profile to guide treatment decisions, patient-reported pain was greatly reduced. PMID:29383324
Multi-sector thermo-physiological head simulator for headgear research
NASA Astrophysics Data System (ADS)
Martinez, Natividad; Psikuta, Agnes; Corberán, José Miguel; Rossi, René M.; Annaheim, Simon
2017-02-01
A novel thermo-physiological human head simulator for headgear testing was developed by coupling a thermal head manikin with a thermo-physiological model. Because the heat flux at the head is measured directly by the head manikin, this method provides a realistic quantification of the heat transfer phenomena occurring in the headgear, such as moisture absorption-desorption cycles, condensation, or moisture migration across clothing layers. Before coupling, the capability of the head manikin to represent human physiology was evaluated separately. The evaluation revealed reduced precision in forehead and face temperature predictions under extremely heterogeneous temperature distributions, but no inherent limitation in simulating the temperature changes observed in human physiology. The thermo-physiological model predicted higher sweat rates in coupled simulations than in purely virtual ones. After coupling, the thermo-physiological human head simulator was validated against eight human experiments. It precisely predicted core, mean skin, and forehead temperatures, with average rmsd values within the average experimental standard deviation (rmsd of 0.20 ± 0.15, 0.83 ± 0.34, and 1.04 ± 0.54 °C, respectively). For the forehead, however, precision was lower for exposures that included activity than for sedentary exposures. The representation of human sweat evaporation could be affected by reduced evaporation efficiency and by the manikin's sweat dynamics. This thermo-physiological human head simulator will benefit industry by supporting helmet designs with enhanced thermal comfort and, therefore, higher acceptance by users.
Technique for Evaluating the Erosive Properties of Ablative Internal Insulation Materials
NASA Technical Reports Server (NTRS)
McComb, J. C.; Hitner, J. M.
1989-01-01
A technique for determining the average erosion rate versus Mach number of candidate internal insulation materials was developed for flight motor applications using 12 inch I.D. test firing hardware. The method involved the precision mounting of a mechanical measuring tool within a conical test cartridge fabricated either from a single insulation material or from two different materials, each constituting one half of the test cartridge cone. Comparison of the internal radii measured before and after test firing at nine longitudinal locations and at eight to thirty-two azimuths, depending on the regularity of the erosion pattern, permitted calculation of the average erosion rate and Mach number. Systematic criteria were established for identifying erosion anomalies, such as the formation of localized ridges, and for excluding such anomalies from the calculations. The method is discussed and results are presented for several asbestos-free materials developed in-house for internal motor case insulation in solid propellant rocket motors.
High-Precision Half-Life and Branching Ratio Measurements for the Superallowed β+ Emitter 26Alm
NASA Astrophysics Data System (ADS)
Finlay, P.; Svensson, C. E.; Demand, G. A.; Garrett, P. E.; Green, K. L.; Leach, K. G.; Phillips, A. A.; Rand, E. T.; Ball, G.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Sumithrarachchi, C. S.; Williams, S. J.; Triambak, S.
2013-03-01
High-precision half-life and branching-ratio measurements for the superallowed β+ emitter 26Alm were performed at the TRIUMF-ISAC radioactive ion beam facility. An upper limit of ≤15 ppm at 90% C.L. was determined for the sum of all possible non-analogue β+/EC decay branches of 26Alm, yielding a superallowed branching ratio of 100.0000 (+0/−0.0015)%. A value of T1/2 = 6.34654(76) s was determined for the 26Alm half-life, which is consistent with, but 2.5 times more precise than, the previous world average. Combining these results with world-average measurements yields an ft value of 3037.58(60) s, the most precisely determined for any superallowed emitter to date. This high-precision ft value for 26Alm provides a new benchmark to refine theoretical models of isospin-symmetry-breaking effects in superallowed β decays.
Fixed precision sampling plans for white apple leafhopper (Homoptera: Cicadellidae) on apple.
Beers, Elizabeth H; Jones, Vincent P
2004-10-01
Constant-precision sampling plans for the white apple leafhopper, Typhlocyba pomaria McAtee, were developed so that this species could be used as an indicator of system stability as new integrated pest management programs without broad-spectrum pesticides are developed. Taylor's power law was used to model the relationship between the mean and the variance, and Green's constant-precision sequential sampling equation was used to develop the sampling plans. Bootstrap simulations of the sampling plans showed greater precision (D = 0.25) than desired (D0 = 0.3), particularly at low mean population densities. We found that by adjusting the D0 value in Green's equation to 0.4, we reduced the average sample number by 25% while providing an average D = 0.31. The sampling plan described allows T. pomaria to be used as a reasonable indicator species of agroecosystem stability in Washington apple orchards.
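Green's stop line follows directly from Taylor's power law. A minimal sketch in Python, using illustrative Taylor coefficients a and b (the abstract does not report the fitted values):

```python
def green_stop_count(n, a, b, D):
    """Cumulative count at which sampling may stop after n samples
    under Green's fixed-precision sequential plan, where Taylor's
    power law gives variance = a * mean**b and D is the desired
    precision (standard error / mean). Coefficients are illustrative."""
    return (D ** 2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))

# With illustrative a = 2.0, b = 1.5, the stop line falls as more
# samples are taken; relaxing D from 0.3 to 0.4 lowers it at every n.
line_d03 = [green_stop_count(n, 2.0, 1.5, 0.3) for n in (10, 20, 40)]
line_d04 = [green_stop_count(n, 2.0, 1.5, 0.4) for n in (10, 20, 40)]
```

Lowering the stop line everywhere is why the adjusted plan (D0 = 0.4) could cut the average sample number by about a quarter while still delivering an achieved precision near 0.3.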
Optimization of Trade-offs in Error-free Image Transmission
NASA Astrophysics Data System (ADS)
Cox, Jerome R.; Moore, Stephen M.; Blaine, G. James; Zimmerman, John B.; Wallace, Gregory K.
1989-05-01
The availability of ubiquitous wide-area channels of both modest cost and higher transmission rate than voice-grade lines promises to allow the expansion of electronic radiology services to a larger community. The bandwidths of the new services becoming available from the Integrated Services Digital Network (ISDN) are typically limited to 128 Kb/s, almost two orders of magnitude lower than popular LANs can support. Using Discrete Cosine Transform (DCT) techniques, a compressed approximation to an image may be rapidly transmitted. However, intensity or resampling transformations of the reconstructed image may reveal otherwise invisible artifacts of the approximate encoding. A progressive transmission scheme reported in ISO Working Paper N800 offers an attractive solution to this problem by rapidly reconstructing an apparently undistorted image from the DCT coefficients and then subsequently transmitting the error image corresponding to the difference between the original and the reconstructed images. This approach achieves error-free transmission without sacrificing the perception of rapid image delivery. Furthermore, subsequent intensity and resampling manipulations can be carried out with confidence. DCT coefficient precision affects the amount of error information that must be transmitted and, hence, the delivery speed of error-free images. This study calculates the overall information coding rate for six radiographic images as a function of DCT coefficient precision. The results demonstrate that a minimum occurs for each of the six images at an average coefficient precision of between 0.5 and 1.0 bits per pixel (b/p). Apparently undistorted versions of these six images can be transmitted at a coding rate of between 0.25 and 0.75 b/p, while error-free versions can be transmitted at an overall coding rate between 4.5 and 6.5 b/p.
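The practical effect of the coding rate on delivery time over the 128 Kb/s ISDN channel cited above can be checked with simple arithmetic; the 2048 × 2048 image size below is a hypothetical example, not a figure from the study:

```python
def transmit_seconds(width, height, bits_per_pixel, channel_bps=128_000):
    # Seconds to send an image coded at bits_per_pixel over a channel.
    # 128 kb/s matches the ISDN rate cited above; the image dimensions
    # passed in are assumptions for illustration.
    return width * height * bits_per_pixel / channel_bps

approx = transmit_seconds(2048, 2048, 0.5)    # apparently undistorted pass
lossless = transmit_seconds(2048, 2048, 5.5)  # full error-free delivery
```

At these rates the approximate image arrives in well under a minute while the error-free version takes minutes, which is the trade-off the progressive scheme exploits: perceived delivery is fast, and the error image follows in the background.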
Abou-Senna, Hatem; Radwan, Essam; Westerlund, Kurt; Cooper, C David
2013-07-01
The Intergovernmental Panel on Climate Change (IPCC) estimates that baseline global GHG emissions may increase 25-90% from 2000 to 2030, with carbon dioxide (CO2) emissions growing 40-110% over the same period. On-road vehicles are a major source of CO2 emissions in all developed countries and in many developing countries. Similarly, several criteria air pollutants are associated with transportation, for example, carbon monoxide (CO), nitrogen oxides (NO(x)), and particulate matter (PM). Therefore, the ability to accurately quantify transportation-related emissions from vehicles is essential. The U.S. Environmental Protection Agency (EPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to combine a microscopic traffic simulation model (such as VISSIM) with MOVES to obtain accurate results. This paper presents an examination of four different approaches to capturing the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, FL. First (at the most basic level), emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then three more detailed approaches were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NO(x), PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling.
Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach. Transportation agencies and researchers in the past have estimated emissions using one average speed and volume on a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy. Integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The proposed emission rate estimation process also can be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
Precision measurement of electric organ discharge timing from freely moving weakly electric fish.
Jun, James J; Longtin, André; Maler, Leonard
2012-04-01
Physiological measurements from an unrestrained, untethered, and freely moving animal permit analyses of neural states correlated to naturalistic behaviors of interest. Precise and reliable remote measurements remain technically challenging due to animal movement, which perturbs the relative geometries between the animal and sensors. Pulse-type electric fish generate a train of discrete and stereotyped electric organ discharges (EOD) to sense their surroundings actively, and rapid modulation of the discharge rate occurs while free swimming in Gymnotus sp. The modulation of EOD rates is a useful indicator of the fish's central state such as resting, alertness, and learning associated with exploration. However, the EOD pulse waveforms remotely observed at a pair of dipole electrodes continuously vary as the fish swims relative to the electrodes, which biases the judgment of the actual pulse timing. To measure the EOD pulse timing more accurately, reliably, and noninvasively from a free-swimming fish, we propose a novel method based on the principles of waveform reshaping and spatial averaging. Our method is implemented using envelope extraction and multichannel summation, which is more precise and reliable compared with other widely used threshold- or peak-based methods according to the tests performed under various source-detector geometries. Using the same method, we constructed a real-time electronic pulse detector performing an additional online pulse discrimination routine to enhance further the detection reliability. Our stand-alone pulse detector performed with high temporal precision (<10 μs) and reliability (error <1 per 10⁶ pulses) and permits longer recording duration by storing only event time stamps (4 bytes/pulse).
Epidemiology of drowning deaths in the Philippines, 1980 to 2011.
Martinez, Rammell Eric; Go, John Juliard; Guevarra, Jonathan
2016-01-01
Drowning kills 372 000 people yearly worldwide and is a serious public health issue in the Philippines. This study aims to determine whether the drowning death rates in the Philippine Health Statistics (PHS) reports from 1980 to 2011 were underestimated. A retrospective descriptive study was conducted to describe the trend of deaths caused by drowning in the Philippines from official and unofficial sources for the period 1980 to 2011. Information about deaths related to cataclysmic causes, particularly victims of storms and floods, and maritime accidents in the Philippines during the study period was reviewed and compared with the PHS drowning death data. An average of 2496 deaths per year caused by drowning was recorded in the PHS reports from 1980 to 2011 (range 671-3656). The average death rate was 3.5/100 000 population (range 1.3-4.7). When cataclysmic events and maritime accidents were combined with PHS data, an average of 4196 drowning deaths per year was recorded from 1980 to 2011 (range 1220 to 8788), for an average death rate of 6/100 000 population (range 2.5-14.2). Our results showed that, on average, there were 1700 more drowning deaths per year when deaths caused by cataclysms and maritime accidents were added to the PHS data, illustrating that drowning deaths were underestimated in the official surveillance data. Passive surveillance and irregular data management contribute to the underestimation of drowning in the Philippines. Additionally, deaths due to flooding, storms, and maritime accidents are not counted as drowning deaths, which further contributes to the underestimation. Surveillance of drowning data can be improved using more precise case definitions and a multisectoral approach.
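The figures above are crude rates per 100 000 population. A quick sketch of the arithmetic (the roughly 71 million population figure is inferred from the reported numbers, not stated in the abstract):

```python
def deaths_per_100k(deaths, population):
    # crude death rate per 100 000 population
    return deaths / population * 100_000

# PHS-only figure: 2496 deaths/year over an assumed ~71.3 million people
phs_rate = deaths_per_100k(2496, 71_300_000)
```

Evaluating `phs_rate` reproduces the reported 3.5 per 100 000 to one decimal place, confirming the internal consistency of the PHS figures.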
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
NASA Astrophysics Data System (ADS)
Metzger, T. L.; Pizzuto, J. E.; Schook, D. M.; Hasse, T. R.; Affinito, R. A.
2017-12-01
Dendrochronological dating of buried trees precisely determines the germination year and identifies the stratigraphic context of germination for the trees. This recently developed application of dendrochronology provides accurate time-averaged sedimentation rates of overbank deposition along floodplains and can be used to identify burial events. Previous studies have demonstrated that tamarisk (Tamarix ramosissima) and sandbar willow (Salix exigua) develop anatomical changes within the tree rings (increased vessel size and decreased ring widths) on burial, but observations of plains cottonwood (Populus deltoides ssp. monilifera) are lacking. In September 2016 and June 2017, five buried plains cottonwoods were excavated along a single transect of the interior of a meander bend of the Powder River, Montana. Sediment samples were obtained near each tree for 210Pb and 137Cs dating, which will allow for comparison between dendrochronological and isotopic dating methods. The plains cottonwood samples collected exhibit anatomical changes associated with burial events that are observed in other species. All trees germinated at the boundary between thinly bedded fine sand and mud and coarse sand underlain by sand and gravel, indicating plains cottonwoods germinate on top of point bars prior to overbank deposition. The precise germination age and depth provide elevations and minimum age constraints for the point bar deposits and maximum ages for the overlying sediment, helping constrain past channel positions and overbank deposition rates. Germination years of the excavated trees, estimated from cores taken 1.5 m above ground level, range from 2014 to 1862. Accurate establishment years determined by cross-dating the buried section of the tree can add an additional 10 years to the cored age. The sedimentation rate and accumulation thickness varied with tree age. 
The germination year, total sediment accumulation, and average sedimentation rate at the five sampled trees are: 2011, 35 cm, 7.0 cm/year; 1973, 77 cm, 1.8 cm/year; 1962, 140 cm, 2.6 cm/year; 1960, 123 cm, 2.2 cm/year; and 1862, 112 cm, 0.7 cm/year. These values indicate that the average sedimentation rate decreases as a power law with increasing tree age.
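The reported rates are consistent with dividing each tree's sediment accumulation by its age at excavation. A minimal check, assuming the 2016 excavation as the reference year:

```python
def avg_sed_rate(accum_cm, germ_year, ref_year=2016):
    # Average overbank sedimentation rate (cm/year) since germination.
    # ref_year=2016 reflects the September 2016 excavation (assumption).
    return accum_cm / (ref_year - germ_year)

# (germination year, total accumulation in cm) for the five trees
trees = [(2011, 35), (1973, 77), (1962, 140), (1960, 123), (1862, 112)]
rates = [round(avg_sed_rate(cm, yr), 1) for yr, cm in trees]
```

The computed `rates` reproduce the reported 7.0, 1.8, 2.6, 2.2, and 0.7 cm/year, and their steady decline with tree age is the power-law trend the abstract describes.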
Heart Rate During Sleep: Implications for Monitoring Training Status
Waldeck, Miriam R.; Lambert, Michael I.
2003-01-01
Resting heart rate has sometimes been used as a marker of training status. It is reasonable to assume that the relationship between heart rate and training status should be more evident during sleep, when extraneous factors that may influence heart rate are reduced. Therefore, the aim of the study was to assess the repeatability of monitoring heart rate during sleep when training status remained unchanged, to determine whether this measurement has sufficient precision to be used as a marker of training status. The heart rate of ten female subjects was monitored for 24 hours on three occasions over three weeks whilst training status remained unchanged. Average, minimum, and maximum heart rate during sleep were calculated. The average heart rate of the group during sleep was similar on each of the three tests (65 ± 9, 63 ± 6 and 67 ± 7 beats·min⁻¹, respectively). The variation in minimum heart rate during sleep for all subjects over the three testing sessions ranged from 0 to 10 beats·min⁻¹ (mean = 5 ± 3 beats·min⁻¹), and the variation in maximum heart rate ranged from 2 to 31 beats·min⁻¹ (mean = 13 ± 9 beats·min⁻¹). In summary, it was found that on an individual basis the minimum heart rate during sleep varied by about 8 beats·min⁻¹. This amount of intrinsic day-to-day variation needs to be considered when interpreting changes in heart rate that may occur with changes in training status. PMID:24688273
The rate of separation of magnetic lines of force in a random magnetic field.
NASA Technical Reports Server (NTRS)
Jokipii, J. R.
1973-01-01
The mixing of magnetic lines of force, as represented by their rate of separation as a function of distance along the magnetic field, is considered, with emphasis on neighboring lines of force. This effect is particularly important for understanding the transport of charged particles perpendicular to the average magnetic field. The calculation is carried out in the approximation that the separation changes by an amount small compared with the correlation scale normal to the field over a distance along the field of a few correlation scales. It is found that the rate of separation is very sensitive to the precise form of the power spectrum. Application to the interplanetary and interstellar magnetic fields is discussed, and it is shown that in some cases field lines much closer together than the correlation scale separate at a rate effectively as rapid as if they were many correlation lengths apart.
Srinivasan, Divya; Mathiassen, Svend Erik; Hallman, David M; Samani, Afshin; Madeleine, Pascal; Lyskov, Eugene
2016-01-01
Most previous studies of concurrent physical and cognitive demands have addressed tasks of limited relevance to occupational work, and with dissociated physical and cognitive task components. This study investigated effects on muscle activity and heart rate variability of executing a repetitive occupational task with an added cognitive demand integral to correct task performance. Thirty-five healthy females performed 7.5 min of standardized repetitive pipetting work in a baseline condition and a concurrent cognitive condition involving a complex instruction for correct performance. Average levels and variabilities of electromyographic activities in the upper trapezius and extensor carpi radialis (ECR) muscles were compared between these two conditions. Heart rate and heart rate variability were also assessed to measure autonomic nervous system activation. Subjects also rated perceived fatigue in the neck-shoulder region, as well as exertion. Concurrent cognitive demands increased trapezius muscle activity from 8.2% of maximum voluntary exertion (MVE) in baseline to 9.0% MVE (p = 0.0005), but did not significantly affect ECR muscle activity, heart rate, heart rate variability, perceived fatigue or exertion. Trapezius muscle activity increased by about 10%, without any accompanying cardiovascular response to indicate increased sympathetic activation. We suggest this slight increase in trapezius muscle activity to be due to changed muscle activation patterns within or among shoulder muscles. The results suggest that it may be possible to introduce modest cognitive demands necessary for correct performance in repetitive precision work without any major physiological effects, at least in the short term.
Methodological evaluation and comparison of five urinary albumin measurements.
Liu, Rui; Li, Gang; Cui, Xiao-Fan; Zhang, Dong-Ling; Yang, Qing-Hong; Mu, Xiao-Yan; Pan, Wen-Jie
2011-01-01
Microalbuminuria is an indicator of kidney damage and a risk factor for the progression of kidney disease, cardiovascular disease, and other conditions. Therefore, accurate and precise measurement of urinary albumin is critical. However, there are no reference measurement procedures or reference materials for urinary albumin. Nephelometry, turbidimetry, the colloidal gold method, radioimmunoassay, and chemiluminescence immunoassay were evaluated methodologically on the basis of imprecision, recovery rate, linearity, haemoglobin interference rate, and verified reference interval. We then tested 40 urine samples from diabetic patients with each method and compared the results between assays. The results indicate that nephelometry has the best analytical performance of the five methods, with an average intra-assay coefficient of variation (CV) of 2.6%, an average inter-assay CV of 1.7%, a mean recovery of 99.6%, linearity of R=1.00 from 2 to 250 mg/l, and an interference rate of <10% at haemoglobin concentrations of <1.82 g/l. The correlation (r) between assays ranged from 0.701 to 0.982, and Bland-Altman plots indicated that each assay provided significantly different results from the others. Nephelometry is the clinical urinary albumin method with the best analytical performance in our study.
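The intra- and inter-assay figures quoted above are coefficients of variation (sample standard deviation as a percentage of the mean). For reference, a minimal implementation:

```python
def cv_percent(values):
    # Coefficient of variation: sample SD expressed as a % of the mean.
    mean = sum(values) / len(values)
    var = sum((x - mean) ** 2 for x in values) / (len(values) - 1)
    return var ** 0.5 / mean * 100
```

Lower CV means tighter replicate agreement, which is why nephelometry's 2.6% intra-assay and 1.7% inter-assay values mark it as the most precise of the five methods.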
Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.
Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim
2017-09-01
Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicas for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). 
Conventional impressions had higher accuracy, with a mean trueness of 17.0 ± 6.6 μm and mean precision of 16.9 ± 5.8 μm, than optical impressions, with a mean trueness of 46.2 ± 11.4 μm and mean precision of 61.1 ± 4.9 μm. Complete-arch (first molar to first molar) optical impressions were less accurate than conventional impressions but may be adequate for quadrant impressions.
Stability Analysis of Receiver ISB for BDS/GPS
NASA Astrophysics Data System (ADS)
Zhang, H.; Hao, J. M.; Tian, Y. G.; Yu, H. L.; Zhou, Y. L.
2017-07-01
Stability analysis of receiver ISB (Inter-System Bias) is essential for understanding the feature of ISB as well as the ISB modeling and prediction. In order to analyze the long-term stability of ISB, the data from MGEX (Multi-GNSS Experiment) covering 3 weeks, which are from 2014, 2015 and 2016 respectively, are processed with the precise satellite clock and orbit products provided by Wuhan University and GeoForschungsZentrum (GFZ). Using the ISB calculated by BDS (BeiDou Navigation Satellite System)/GPS (Global Positioning System) combined PPP (Precise Point Positioning), the daily stability and weekly stability of ISB are investigated. The experimental results show that the diurnal variation of ISB is stable, and the average of daily standard deviation is about 0.5 ns. The weekly averages and standard deviations of ISB vary greatly in different years. The weekly averages of ISB are relevant to receiver types. There is a system bias between ISB calculated from the precise products provided by Wuhan University and GFZ. In addition, the system bias of the weekly average ISB of different stations is consistent with each other.
Neural control of finger movement via intracortical brain-machine interface
NASA Astrophysics Data System (ADS)
Irwin, Z. T.; Schroeder, K. E.; Vu, P. P.; Bullard, A. J.; Tat, D. M.; Nu, C. S.; Vaskov, A.; Nason, S. R.; Thompson, D. E.; Bentley, J. N.; Patil, P. G.; Chestek, C. A.
2017-12-01
Objective. Intracortical brain-machine interfaces (BMIs) are a promising source of prosthesis control signals for individuals with severe motor disabilities. Previous BMI studies have primarily focused on predicting and controlling whole-arm movements; precise control of hand kinematics, however, has not been fully demonstrated. Here, we investigate the continuous decoding of precise finger movements in rhesus macaques. Approach. In order to elicit precise and repeatable finger movements, we have developed a novel behavioral task paradigm which requires the subject to acquire virtual fingertip position targets. In the physical control condition, four rhesus macaques performed this task by moving all four fingers together in order to acquire a single target. This movement was equivalent to controlling the aperture of a power grasp. During this task performance, we recorded neural spikes from intracortical electrode arrays in primary motor cortex. Main results. Using a standard Kalman filter, we could reconstruct continuous finger movement offline with an average correlation of ρ = 0.78 between actual and predicted position across four rhesus macaques. For two of the monkeys, this movement prediction was performed in real-time to enable direct brain control of the virtual hand. Compared to physical control, neural control performance was slightly degraded; however, the monkeys were still able to successfully perform the task with an average target acquisition rate of 83.1%. The monkeys’ ability to arbitrarily specify fingertip position was also quantified using an information throughput metric. During brain control task performance, the monkeys achieved an average 1.01 bits/s throughput, similar to that achieved in previous studies which decoded upper-arm movements to control computer cursors using a standard Kalman filter. Significance. This is, to our knowledge, the first demonstration of brain control of finger-level fine motor skills.
We believe that these results represent an important step towards full and dexterous control of neural prosthetic devices.
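The standard Kalman filter used for such decoding follows the usual predict/update recursion on a linear state-space model. A minimal sketch, with hypothetical matrices standing in for the trained decoder (the real decoder's state and observation dimensions are not specified in the abstract):

```python
import numpy as np

def kalman_decode(Y, A, W, C, Q, x0, P0):
    """Standard Kalman filter: state x_t = A x_{t-1} + w, w ~ N(0, W);
    observed firing rates y_t = C x_t + q, q ~ N(0, Q).
    Returns the filtered state estimates (e.g. fingertip position/velocity)."""
    x, P = x0.copy(), P0.copy()
    out = []
    for y in Y:
        # predict
        x = A @ x
        P = A @ P @ A.T + W
        # update with the neural observation
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)

# Hypothetical demo: a constant fingertip state observed through 3 "neurons"
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = [C @ np.array([1.0, 0.0])] * 200
est = kalman_decode(Y, A, 0.01 * np.eye(2), C, 0.01 * np.eye(3),
                    np.zeros(2), np.eye(2))
```

With noise-free observations of a fixed state, the filtered estimate settles onto that state, which is a quick sanity check of the recursion.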
Multi-sector thermo-physiological head simulator for headgear research.
Martinez, Natividad; Psikuta, Agnes; Corberán, José Miguel; Rossi, René M; Annaheim, Simon
2017-02-01
A novel thermo-physiological human head simulator for headgear testing was developed by coupling a thermal head manikin with a thermo-physiological model. As the heat flux at the head site is directly measured by the head manikin, this method provides a realistic quantification of the heat transfer phenomena occurring in the headgear, such as moisture absorption-desorption cycles, condensation, or moisture migration across clothing layers. Before coupling, the head manikin's capability to represent human physiology was evaluated separately. The evaluation revealed reduced precision in forehead and face temperature predictions under extremely heterogeneous temperature distributions, and no fundamental limitation for simulating the temperature changes observed in human physiology. The thermo-physiological model predicted higher sweat rates when applied in coupled simulations than in purely virtual ones. After coupling, the thermo-physiological human head simulator was validated against eight human experiments. It precisely predicted core, mean skin, and forehead temperatures, with average rmsd values within the average experimental standard deviation (rmsd of 0.20 ± 0.15, 0.83 ± 0.34, and 1.04 ± 0.54 °C, respectively). For the forehead, however, precision was lower for exposures that included activity than for sedentary exposures. The representation of human sweat evaporation could be affected by a reduced evaporation efficiency and by the manikin's sweat dynamics. Industry will benefit from this thermo-physiological human head simulator through helmet designs with enhanced thermal comfort and, therefore, higher acceptance by users.
Analysis of One-Way Laser Ranging Data to LRO, Time Transfer and Clock Characterization
NASA Technical Reports Server (NTRS)
Bauer, S.; Hussmann, H.; Oberst, J.; Dirkx, D.; Mao, D.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.; McGarry, J. F.; Smith, D. E.;
2016-01-01
We processed and analyzed one-way laser ranging data from International Laser Ranging Service ground stations to NASA's Lunar Reconnaissance Orbiter (LRO), obtained from June 13, 2009 until September 30, 2014. We pair and analyze the one-way range observables from station laser fire and spacecraft laser arrival times by using nominal LRO orbit models based on the GRAIL gravity field. We apply corrections for instrument range walk, as well as for atmospheric and relativistic effects. In total we derived a tracking data volume of approximately 3000 hours featuring 64 million Full Rate and 1.5 million Normal Point observations. From a statistical analysis of the dataset we evaluate the experiment and the ground station performance. We observe a laser ranging measurement precision of 12.3 centimeters for the Full Rate data, which surpasses the LOLA (Lunar Orbiter Laser Altimeter) timestamp precision of 15 centimeters. Averaging to Normal Point data further improves the measurement precision to 5.6 centimeters. We characterized the LRO clock with fits throughout the mission time and estimated the rate at 6.9 × 10^-8, the aging at 1.6 × 10^-12 per day, and the change of aging at 2.3 × 10^-14 per day squared over all mission phases. The fits also provide referencing of onboard time to the TDB (Barycentric Dynamical Time) time scale at a precision of 166 nanoseconds over two and 256 nanoseconds over all mission phases, representing ground-to-space time transfer. Furthermore we measure ground station clock differences from the fits as well as from simultaneous passes, which we use for ground-to-ground time transfer from common-view observations. We observed relative offsets ranging from 33 to 560 nanoseconds and relative rates ranging from 2 × 10^-13 to 6 × 10^-12 between the ground station clocks during selected mission phases.
We study the results from the different methods and discuss their applicability for time transfer.
Precision targeting in guided munition using IR sensor and MmW radar
NASA Astrophysics Data System (ADS)
Sreeja, S.; Hablani, H. B.; Arya, H.
2015-10-01
Conventional munitions are not guided by sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a Precision Guided Munition (PGM) equipped with an infrared sensor and a millimeter wave radar (IR and MmW, for short). Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight rate to intercept it, an Extended Kalman Filter is composed whose state vector consists of the cascaded state vectors of missile dynamics and target dynamics. The line-of-sight angle measurement from the infrared seeker is obtained by centroiding the target image at 40 Hz. The centroid estimation of the images in the focal plane is at a frequency of 10 Hz: at each 10-Hz update, centroids of four consecutive images are averaged, yielding a time-averaged centroid and implying some measurement delay. The miss distance achieved when image processing delays are included is 1.45 m.
Precision targeting in guided munition using infrared sensor and millimeter wave radar
NASA Astrophysics Data System (ADS)
Sulochana, Sreeja; Hablani, Hari B.; Arya, Hemendra
2016-07-01
Conventional munitions are not guided by sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a precision guided munition equipped with an infrared (IR) sensor and a millimeter wave radar (MmW). Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight (LOS) rate to intercept it, an extended Kalman filter is composed whose state vector consists of the cascaded state vectors of missile dynamics and target dynamics. The LOS angle measurement from the IR seeker is obtained by centroiding the target image at 40 Hz. The centroid estimation of the images in the focal plane is at a frequency of 10 Hz: at each 10-Hz update, centroids of four consecutive images are averaged, yielding a time-averaged centroid and implying some measurement delay. The miss distance achieved when image processing delays are included is 1.45 m.
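A second-order Gauss-Markov process of the kind used here for target motion can be sketched as a damped second-order system driven by white noise; all parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np

def second_order_gm(n, dt, omega, zeta, sigma, rng):
    """One axis of target motion modeled as a second-order Gauss-Markov
    process: x'' + 2*zeta*omega*x' + omega**2 * x = w(t), w white noise.
    Integrated with an Euler-Maruyama step; returns position samples."""
    x = np.zeros(2)  # [position, velocity]
    A = np.array([[0.0, 1.0],
                  [-omega**2, -2.0 * zeta * omega]])
    out = np.empty(n)
    for k in range(n):
        x = x + dt * (A @ x)                                  # deterministic step
        x[1] += sigma * np.sqrt(dt) * rng.standard_normal()   # white-noise accel.
        out[k] = x[0]
    return out

# Hypothetical parameters; two independent copies of this process would
# model the forward and lateral tank motion described above.
track = second_order_gm(1000, 0.01, 2.0, 0.7, 1.0, np.random.default_rng(0))
```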
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Zhu, Lianqing; Luo, Fei; Dong, Mingli; Ding, Xiangdong; He, Wei
2016-06-01
A metallic packaging technique for fiber Bragg grating (FBG) sensors is developed for the measurement of strain and temperature; it can be achieved simply via one-step ultrasonic welding. The average strain transfer rate of the metal-packaged sensor is theoretically evaluated with a model proposed for surface-bonded, metal-packaged FBGs. According to the analytical results, the metallic packaging shows a higher average strain transfer rate than traditional adhesive packaging under the same packaging conditions. Strain tests are performed on a uniform strength beam for both tensile and compressive strains; strain sensitivities of approximately 1.16 and 1.30 pm/μɛ are obtained for the tensile and compressive cases, respectively. Heating and cooling tests are also carried out from 50°C to 200°C, yielding a temperature sensitivity of 36.59 pm/°C. All the strain and temperature measurements exhibit good linearity and stability. These results demonstrate that metal-packaged sensors can be successfully fabricated by the one-step welding technique and hold great promise for long-term, high-precision structural health monitoring.
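With the reported sensitivities, converting a measured Bragg-wavelength shift into strain or temperature is a simple division. This sketch hardcodes the abstract's tensile strain and temperature sensitivities (the compressive case would use 1.30 pm/μɛ); the example shift values are hypothetical:

```python
def strain_from_shift(delta_lambda_pm, sensitivity_pm_per_ue=1.16):
    """Convert a Bragg-wavelength shift (pm) to strain (microstrain),
    using the reported tensile sensitivity of ~1.16 pm/microstrain."""
    return delta_lambda_pm / sensitivity_pm_per_ue

def temperature_from_shift(delta_lambda_pm, sensitivity_pm_per_C=36.59):
    """Convert a Bragg-wavelength shift (pm) to a temperature change (degC),
    using the reported sensitivity of ~36.59 pm/degC."""
    return delta_lambda_pm / sensitivity_pm_per_C

print(round(strain_from_shift(580.0), 3))        # 500.0 microstrain
print(round(temperature_from_shift(365.9), 3))   # 10.0 degC
```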
Precision half-life measurement of 11C: The most precise mirror transition Ft value
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.
2018-03-01
Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of V_ud in nuclear β decays. 11C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t_1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t_1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new 11C world average half-life allows the calculation of an Ft(mirror) value that is now the most precise for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow the determination of V_ud from this decay.
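A world average of this kind is conventionally an inverse-variance weighted mean of the individual measurements. A sketch of the combination formula (the second measurement below is hypothetical, not the actual prior data set):

```python
def weighted_average(values, errors):
    """Inverse-variance weighted mean and its 1-sigma uncertainty,
    the standard way a 'world average' half-life is combined."""
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# The new measurement 1220.27(26) s combined with one hypothetical
# earlier, less precise value:
m, e = weighted_average([1220.27, 1220.8], [0.26, 0.8])
print(f"{m:.2f} +/- {e:.2f} s")
```

The more precise measurement dominates the combination, which is why a single high-precision half-life can shift the world average noticeably.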
Pressure Monitoring Using Hybrid fs/ps Rotational CARS
NASA Technical Reports Server (NTRS)
Kearney, Sean P.; Danehy, Paul M.
2015-01-01
We investigate the feasibility of gas-phase pressure measurements at kHz-rates using fs/ps rotational CARS. Femtosecond pump and Stokes pulses impulsively prepare a rotational Raman coherence, which is then probed by a high-energy 6-ps pulse introduced at a time delay from the Raman preparation. Rotational CARS spectra were recorded in N2 contained in a room-temperature gas cell for pressures from 0.1 to 3 atm and probe delays ranging from 10-330 ps. Using published self-broadened collisional linewidth data for N2, both the spectrally integrated coherence decay rate and the spectrally resolved decay were investigated as means for detecting pressure. Shot-averaged and single-laser-shot spectra were interrogated for pressure, and the accuracy and precision as a function of probe delay and cell pressure are discussed. Single-shot measurement accuracies were within 0.1 to 6.5% of transducer values, while the precision was generally between 1% and 6% of measured pressure for probe delays of 200 ps or more, and better than 2% as the delay approached 300 ps. A byproduct of the pressure measurement is an independent but simultaneous measurement of the gas temperature.
Limits to Forecasting Precision for Outbreaks of Directly Transmitted Diseases
Drake, John M
2006-01-01
Background Early warning systems for outbreaks of infectious diseases are an important application of the ecological theory of epidemics. A key variable predicted by early warning systems is the final outbreak size. However, for directly transmitted diseases, the stochastic contact process by which outbreaks develop entails fundamental limits to the precision with which the final size can be predicted. Methods and Findings I studied how the expected final outbreak size and the coefficient of variation in the final size of outbreaks scale with control effectiveness and the rate of infectious contacts in the simple stochastic epidemic. As examples, I parameterized this model with data on observed ranges for the basic reproductive ratio (R0) of nine directly transmitted diseases. I also present results from a new model, the simple stochastic epidemic with delayed-onset intervention, in which an initially supercritical outbreak (R0 > 1) is brought under control after a delay. Conclusion The coefficient of variation of final outbreak size in the subcritical case (R0 < 1) will be greater than one for any outbreak in which the removal rate is less than approximately 2.41 times the rate of infectious contacts, implying that for many transmissible diseases precise forecasts of the final outbreak size will be unattainable. In the delayed-onset model, the coefficient of variation (CV) was generally large (CV > 1) and increased with the delay between the start of the epidemic and intervention, and with the average outbreak size. These results suggest that early warning systems for infectious diseases should not focus exclusively on predicting outbreak size but should consider other characteristics of outbreaks such as the timing of disease emergence. PMID:16435887
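The simple stochastic epidemic can be simulated as a linear birth-death process: with I infectives, the next event is a new infection with probability R0/(R0 + 1), or a removal otherwise. A Monte-Carlo sketch of the coefficient of variation of the final size (the parameter values are illustrative assumptions):

```python
import random

def final_size(r0, rng):
    """One realization of the simple stochastic (linear birth-death)
    epidemic starting from a single infective; returns the final size."""
    i, total = 1, 1
    while i > 0 and total < 10_000:   # cap guards against supercritical runs
        if rng.random() < r0 / (r0 + 1.0):
            i += 1
            total += 1
        else:
            i -= 1
    return total

def final_size_cv(r0, n, seed=1):
    """Monte-Carlo coefficient of variation of the final outbreak size."""
    rng = random.Random(seed)
    sizes = [final_size(r0, rng) for _ in range(n)]
    m = sum(sizes) / n
    var = sum((s - m) ** 2 for s in sizes) / (n - 1)
    return var ** 0.5 / m

# Subcritical outbreak, R0 = 0.8: per the threshold above, CV should exceed 1
cv = final_size_cv(0.8, 2000)
print(round(cv, 2))
```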
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing a more and more important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has been reported recently with experiments to reach a few millimeters in the horizontal components and sub-centimeters in the vertical component to measure seismic motion, which is several times better than the conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly surprising short-term performance of high-rate PPP, we carried out a theoretical error analysis of PPP and conducted the corresponding short-term simulations. The theoretical analysis clearly indicates that the high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and simulated results are fully consistent with and thus have unambiguously confirmed the reported high precision of high-rate PPP, which has been further affirmed here by the real data experiments, indicating that high-rate PPP can indeed achieve the millimeter level of precision in the horizontal components and the sub-centimeter level of precision in the vertical component to measure motion within a short period of time. The simulation results have clearly shown that the random noise of carrier phases and higher order ionospheric errors are two major factors affecting the precision of high-rate PPP within a short period of time. 
The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components, if the geometry of satellites is rather poor with a large DOP value.
NASA Astrophysics Data System (ADS)
Choi, Mi-Ran; Hundertmark, Dirk; Lee, Young-Ran
2017-10-01
We prove a threshold phenomenon for the existence/non-existence of energy-minimizing solitary solutions of the diffraction management equation for strictly positive and zero average diffraction. Our methods allow for a large class of nonlinearities; they are, for example, allowed to change sign. We also impose the weakest possible condition on the local diffraction profile: it only has to be locally integrable. The solutions are found as minimizers of a nonlinear and nonlocal variational problem which is translation invariant. There exists a critical threshold λcr such that minimizers for this variational problem exist if their power is bigger than λcr, and no minimizers exist with power less than the critical threshold. We also give simple criteria for the finiteness and strict positivity of the critical threshold. Our proof of existence of minimizers is rather direct and avoids the use of Lions' concentration compactness argument. Furthermore, we give precise quantitative lower bounds on the exponential decay rate of the diffraction management solitons, which confirm the physical heuristic prediction for the asymptotic decay rate. Moreover, for ground state solutions, these bounds give a quantitative lower bound for the divergence of the exponential decay rate in the limit of vanishing average diffraction. For zero average diffraction, we prove quantitative bounds which show that the solitons decay much faster than exponentially. Our results considerably extend and strengthen the results of Hundertmark and Lee [J. Nonlinear Sci. 22, 1-38 (2012) and Commun. Math. Phys. 309(1), 1-21 (2012)].
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144
The Qatar genome: a population-specific tool for precision medicine in the Middle East
Fakhro, Khalid A; Staudt, Michelle R; Ramstetter, Monica Denise; Robay, Amal; Malek, Joel A; Badii, Ramin; Al-Marri, Ajayeb Al-Nabet; Khalil, Charbel Abi; Al-Shakaki, Alya; Chidiac, Omar; Stadler, Dora; Zirie, Mahmoud; Jayyousi, Amin; Salit, Jacqueline; Mezey, Jason G; Crystal, Ronald G; Rodriguez-Flores, Juan L
2016-01-01
Reaching the full potential of precision medicine depends on the quality of personalized genome interpretation. In order to facilitate precision medicine in regions of the Middle East and North Africa (MENA), a population-specific genome for the indigenous Arab population of Qatar (QTRG) was constructed by incorporating allele frequency data from sequencing of 1,161 Qataris, representing 0.4% of the population. A total of 20.9 million single nucleotide polymorphisms (SNPs) and 3.1 million indels were observed in Qatar, including an average of 1.79% novel variants per individual genome. Replacement of the GRCh37 standard reference with QTRG in a best practices genome analysis workflow resulted in an average of 7× deeper coverage depth (an improvement of 23%) and 756,671 fewer variants on average, a reduction of 16% that is attributed to common Qatari alleles being present in QTRG. The benefit for using QTRG varies across ancestries, a factor that should be taken into consideration when selecting an appropriate reference for analysis. PMID:27408750
Do stochastic inhomogeneities affect dark-energy precision measurements?
Ben-Dayan, I; Gasperini, M; Marozzi, G; Nugier, F; Veneziano, G
2013-01-11
The effect of a stochastic background of cosmological perturbations on the luminosity-redshift relation is computed to second order through a recently proposed covariant and gauge-invariant light-cone averaging procedure. The resulting expressions are free from both ultraviolet and infrared divergences, implying that such perturbations cannot mimic a sizable fraction of dark energy. Different averages are estimated and depend on the particular function of the luminosity distance being averaged. The energy flux, being minimally affected by perturbations at large z, is proposed as the best choice for precision estimates of dark-energy parameters. Nonetheless, its irreducible (stochastic) variance induces statistical errors on Ω_Λ(z) typically lying in the few-percent range.
NASA Astrophysics Data System (ADS)
Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo
2016-09-01
High-throughput ultrashort pulse laser machining is investigated on various industrial-grade metals (aluminum, copper, and stainless steel) and on Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high-average-power picosecond laser in conjunction with a unique, in-house developed polygon-mirror-based biaxial scanning system. Different concepts of polygon scanners were engineered and tested to find the best architecture for high-speed, high-precision laser beam scanning. To identify the optimum conditions for efficient processing at high average laser powers, the depths of cavities machined into the samples under varying processing parameter settings are analyzed, and the characteristic removal values are derived from the results. For overlapping pulses of optimum fluence, the removal rate is as high as 27.8 mm3/min for aluminum, 21.4 mm3/min for copper, 15.3 mm3/min for stainless steel, and 129.1 mm3/min for Al2O3 at 187 W average laser power. On stainless steel, it is demonstrated that the removal rate increases to 23.3 mm3/min when the laser beam is moved very fast across the surface, thanks to the low pulse overlap achieved at a beam deflection speed of 800 m/s; laser beam shielding can thus be avoided even when irradiating high-repetition-rate 20-MHz pulses.
Shepard, D S
1983-01-01
A preliminary model is developed for estimating the extent of savings, if any, likely to result from discontinuing a specific inpatient service. By examining the sources of referral to the discontinued service, the model estimates potential demand and how cases will be redistributed among remaining hospitals. This redistribution determines average cost per day in hospitals that receive these cases, relative to average cost per day of the discontinued service. The outflow rate, which measures the proportion of cases not absorbed in other acute care hospitals, is estimated as 30 percent for the average discontinuation. The marginal cost ratio, which relates marginal costs of cases absorbed in surrounding hospitals to the average costs in those hospitals, is estimated as 87 percent in the base case. The model was applied to the discontinuation of all inpatient services in the 75-bed Chelsea Memorial Hospital, near Boston, Massachusetts, using 1976 data. As the precise value of key parameters is uncertain, sensitivity analysis was used to explore a range of values. The most likely result is a small increase ($120,000) in the area's annual inpatient hospital costs, because many patients are referred to more costly teaching hospitals. A similar situation may arise with other urban closures. For service discontinuations to generate savings, recipient hospitals must be low in costs, the outflow rate must be large, and the marginal cost ratio must be low. PMID:6668181
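The redistribution model above can be sketched as simple arithmetic on the outflow rate and marginal cost ratio. Only the 30% outflow and 87% marginal cost ratio come from the base case; the patient-day volume and daily costs below are hypothetical illustrative values:

```python
def net_cost_change(patient_days, outflow_rate, marginal_cost_ratio,
                    closed_cost_per_day, receiving_cost_per_day):
    """Sketch of the redistribution model: cost removed by closing a service
    versus the marginal cost of the days absorbed by surrounding hospitals.
    The outflow fraction leaves the acute-care system entirely."""
    saved = patient_days * closed_cost_per_day
    absorbed_days = patient_days * (1.0 - outflow_rate)
    added = absorbed_days * marginal_cost_ratio * receiving_cost_per_day
    return added - saved      # positive => area inpatient costs increase

# Base-case parameters (30% outflow, 87% marginal cost ratio) with
# hypothetical daily costs; costlier receiving hospitals yield an increase.
delta = net_cost_change(10_000, 0.30, 0.87, 300.0, 550.0)
print(delta)
```

This illustrates the abstract's conclusion: when patients are redistributed to hospitals with sufficiently higher daily costs, a closure can raise area costs rather than lower them.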
NASA Astrophysics Data System (ADS)
Xie, Yanan; Zhou, Mingliang; Pan, Dengke
2017-10-01
The forward-scattering model is introduced to describe the response of the normalized radar cross section (NRCS) of precipitation observed with synthetic aperture radar (SAR). Since the distribution of near-surface rainfall is related to the near-surface rainfall rate and a horizontal distribution factor, a retrieval algorithm called modified regression empirical and model-oriented statistical (M-M), based on Volterra integration theory, is proposed. Compared with the model-oriented statistical and Volterra integration (MOSVI) algorithm, the key difference is that the M-M algorithm retrieves the near-surface rainfall rate with a modified regression empirical algorithm rather than a linear regression formula. Half of the empirical parameters are eliminated from the weighted integral work, and a smaller average relative error is obtained for rainfall rates below 100 mm/h. The proposed algorithm can therefore provide high-precision rainfall information.
Krajewski, C; Fain, M G; Buckley, L; King, D G
1999-11-01
Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogeneous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that this heterogeneity includes variation in base composition and transition bias as well as in substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogeneous data sets. Copyright 1999 Academic Press.
Direct MSTID mitigation in precise GPS processing
NASA Astrophysics Data System (ADS)
Hernández-Pajares, Manuel; Wielgosz, Pawel; Paziewski, Jacek; Krypiak-Gregorczyk, Anna; Krukowska, Marta; Stepniak, Katarzyna; Kaplon, Jan; Hadas, Tomasz; Sosnica, Krzysztof; Bosy, Jaroslaw; Orus-Perez, Raul; Monte-Moreno, Enric; Yang, Heng; Garcia-Rigo, Alberto; Olivares-Pulido, Germán.
2017-03-01
In this paper, the authors summarize a simple and efficient approach developed to mitigate the problem in precise Global Navigation Satellite Systems (GNSS) positioning caused by the most frequent ionospheric wave signatures: medium-scale traveling ionospheric disturbances (MSTIDs). The direct GNSS Ionospheric Interferometry technique (hereinafter dGII), presented in this paper, is applied to correct MSTID effects on precise Real Time Kinematic (RTK) positioning and tropospheric determination. It is an evolution of the former climatic Differential Delay Mitigation Model for MSTIDs (DMTID) for real-time conditions, using ionospheric data from a single permanent receiver only. The performance is demonstrated with networks of GNSS receivers in Poland, treated as users under real-time conditions, during two representative days in the winter and summer seasons (days 353 and 168 of year 2013). In the range domain, dGII typically reduces the ionospheric delay error to between 10% and 90% of the value obtained when the MSTID mitigation model is not applied. The main impact of dGII on precise positioning is that a reliable RTK position can be obtained faster: the ambiguity success rate increases from 74% to 83% with respect to the original uncorrected observations, and the average time to first fix is shortened from 30 s to 13 s. The improvement in troposphere estimation due to the MSTID mitigation model was more difficult to demonstrate.
Single-ping ADCP measurements in the Strait of Gibraltar
NASA Astrophysics Data System (ADS)
Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo
2016-04-01
Most Acoustic Doppler Current Profiler (ADCP) user manuals recommend ensemble averaging of single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for direct use, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, part of the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. Ensemble averaging was disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The large volume of data was handled smoothly by the instrument, and no abnormal battery consumption was recorded. In return, a long and unique series of very-high-frequency current measurements was collected. Results of this novel approach have been exploited in a dual way: from a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ˜2 cm s-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ˜15 cm s-1, with an asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by ensemble averaging. 
On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained by the empirical computation of zonal Reynolds stress (along the predominant direction of the current) and rate of production and dissipation of turbulent kinetic energy. All the parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the outflow Mediterranean current.
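The √N argument and its breakdown under correlated noise can be illustrated with a short simulation. This is a sketch with invented noise amplitudes, not the mooring's actual data: white instrument noise averages down as σ/√N, while a noise source shared by all pings in an ensemble (such as turbulence) does not.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PINGS = 50          # pings per ensemble, as in the 50-ping example above
SIGMA_SINGLE = 15.0   # assumed single-ping std of horizontal velocity, cm/s

# White instrument noise: ensemble averaging shrinks the error by ~sqrt(N).
pings = rng.normal(0.0, SIGMA_SINGLE, size=(10_000, N_PINGS))
sigma_ensemble = pings.mean(axis=1).std()
print(f"predicted: {SIGMA_SINGLE / np.sqrt(N_PINGS):.2f} cm/s")
print(f"observed : {sigma_ensemble:.2f} cm/s")

# Correlated environmental variability is shared by every ping in an
# ensemble, so averaging cannot remove it: the a posteriori ensemble error
# stays near the turbulent amplitude instead of falling to sigma/sqrt(N).
turbulence = rng.normal(0.0, 10.0, size=(10_000, 1))  # one draw per ensemble
sigma_with_turb = (pings + turbulence).mean(axis=1).std()
print(f"with correlated noise: {sigma_with_turb:.2f} cm/s")
```

This is the qualitative mechanism the abstract invokes to explain why the a posteriori error (~15 cm/s) greatly exceeds the a priori instrumental estimate (~2 cm/s).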
Photometric Type Ia supernova surveys in narrow-band filters
NASA Astrophysics Data System (ADS)
Xavier, Henrique S.; Abramo, L. Raul; Sako, Masao; Benítez, Narciso; Calvão, Maurício O.; Ederoclite, Alessandro; Marín-Franch, Antonio; Molino, Alberto; Reis, Ribamar R. R.; Siffert, Beatriz B.; Sodré, Laerte.
2014-11-01
We study the characteristics of a narrow-band Type Ia supernova (SN) survey through simulations based on the upcoming Javalambre Physics of the accelerating Universe Astrophysical Survey. This unique survey has the capabilities of obtaining distances, redshifts and the SN type from a single experiment, thereby circumventing the challenges faced by resource-intensive spectroscopic follow-up observations. We analyse the signal-to-noise ratio and bias of the flux measurements, the SN typing performance, the ability to recover light-curve parameters given by the SALT2 model, the photometric redshift precision from Type Ia SN light curves and the effects of systematic errors on the data. We show that such a survey is not only feasible but may yield large Type Ia SN samples (up to 250 SNe at z < 0.5 per month of search) with low core-collapse contamination (˜1.5 per cent), good precision on the SALT2 parameters (average σ _{m_B}=0.063, σ _{x_1}=0.47 and σc = 0.040) and on the distance modulus (average σμ = 0.16, assuming an intrinsic scatter σint = 0.14), with identified systematic uncertainties σsys ≲ 0.10σstat. Moreover, the filters are narrow enough to detect most spectral features and obtain excellent photometric redshift precision of σz = 0.005, apart from ˜2 per cent of outliers. We also present a few strategies for optimizing the survey's outcome. Together with the detailed host galaxy information, narrow-band surveys can be very valuable for the study of SN rates, spectral feature relations, intrinsic colour variations and correlations between SN and host galaxy properties, all of which are important information for SN cosmological applications.
Processes of arroyo filling in northern New Mexico, USA
Friedman, Jonathan M.; Vincent, Kirk R.; Griffin, Eleanor R.; Scott, Michael L.; Shafroth, Patrick B.; Auble, Gregor T.
2015-01-01
We documented arroyo evolution at the tree, trench, and arroyo scales along the lower Rio Puerco and Chaco Wash in northern New Mexico, USA. We excavated 29 buried living woody plants and used burial signatures in their annual rings to date stratigraphy in four trenches across the arroyos. Then, we reconstructed the history of arroyo evolution by combining trench data with arroyo-scale information from aerial imagery, light detection and ranging (LiDAR), longitudinal profiles, and repeat surveys of cross sections. Burial signatures in annual rings of salt cedar and willow dated sedimentary beds greater than 30 cm thick with annual precision. Along both arroyos, incision occurred until the 1930s in association with extreme high flows, and subsequent filling involved vegetation development, channel narrowing, increased sinuosity, and finally vertical aggradation. A strongly depositional sediment transport regime interacted with floodplain shrubs to produce a characteristic narrow, trapezoidal channel. The 55 km study reach along the Rio Puerco demonstrated upstream progression of arroyo widening and filling, but not of arroyo incision, channel narrowing, or floodplain vegetation development. We conclude that the occurrence of upstream progression within large basins like the Rio Puerco makes precise synchrony across basins impossible. Arroyo wall retreat is now mostly limited to locations where meanders impinge on the arroyo wall, forming hairpin bends, for which entry to and exit from the wall are stationary. Average annual sediment storage within the Rio Puerco study reach between 1955 and 2005 was 4.8 × 105 t/yr, 16% of the average annual suspended sediment yield, and 24% of the long-term bedrock denudation rate. At this rate, the arroyo would fill in 310 yr.
A comparison of computer-assisted and manual wound size measurement.
Thawer, Habiba A; Houghton, Pamela E; Woodbury, M Gail; Keast, David; Campbell, Karen
2002-10-01
Accurate and precise wound measurements are a critical component of every wound assessment. To examine the reliability and validity of a new computerized technique for measuring human and animal wounds, chronic human wounds (N = 45) and surgical animal wounds (N = 38) were assessed using manual and computerized techniques. Using intraclass correlation coefficients, intrarater and interrater reliability of surface area measurements obtained using the computerized technique were compared to those obtained using acetate tracings and planimetry. A single measurement of surface area using either technique produced excellent intrarater and interrater reliability for both human and animal wounds, but the computerized technique was more precise than the manual technique for measuring the surface area of animal wounds. For both types of wounds and both measurement techniques, intrarater and interrater reliability improved when the average of three repeated measurements was used. The precision of each technique with human wounds, and the precision of the manual technique with animal wounds, also improved when three repeated measurement results were averaged. Concurrent validity between the two techniques was excellent for human wounds but poor for the smaller animal wounds, regardless of whether a single measurement or the average of three repeated surface area measurements was used. The computerized technique permits reliable and valid assessment of the surface area of both human and animal wounds.
Guidelines on Thermal Comfort of Air Conditioned Indoor Environment
NASA Astrophysics Data System (ADS)
Miura, Toyohiko
The thermal comfort of an air-conditioned indoor environment for workers depends, of course, on the metabolic rate of the work, race, sex, age, clothing, the climate of the district, and the state of acclimatization. The author's attention was directed to the seasonal variation and sexual difference in comfortable temperature, and a year-long survey of thermal comfort and health conditions was conducted among workers engaged in light work in a precision machine factory and among some office workers. In addition, a series of experiments was conducted for the purpose of determining the optimum cooling temperature in summer in relation to the outdoor temperature. It seemed that many workers at present would prefer a somewhat higher temperature than before World War II. Forty years ago, average homes and offices were not as well heated as today, and the clothing worn on average was considerably heavier.
Optical Vector Receiver Operating Near the Quantum Limit
NASA Astrophysics Data System (ADS)
Vilnrotter, V. A.; Lau, C.-W.
2005-05-01
An optical receiver concept for binary signals with performance approaching the quantum limit at low average-signal energies is developed and analyzed. A conditionally nulling receiver that reaches the quantum limit in the absence of background photons has been devised by Dolinar. However, this receiver requires ideal optical combining and complicated real-time shaping of the local field; hence, it tends to be difficult to implement at high data rates. A simpler nulling receiver that approaches the quantum limit without complex optical processing, suitable for high-rate operation, had been suggested earlier by Kennedy. Here we formulate a vector receiver concept that incorporates the Kennedy receiver with a physical beamsplitter, but it also utilizes the reflected signal component to improve signal detection. It is found that augmenting the Kennedy receiver with classical coherent detection at the auxiliary beamsplitter output, and optimally processing the vector observations, always improves on the performance of the Kennedy receiver alone, significantly so at low average-photon rates. This is precisely the region of operation where modern codes approach channel capacity. It is also shown that the addition of background radiation has little effect on the performance of the coherent receiver component, suggesting a viable approach for near-quantum-limited performance in high background environments.
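For context, the trade-off described above can be made quantitative with the textbook error probabilities for binary phase-shift-keyed coherent states. These expressions are not given in the abstract; they are the standard formulas, with n̄ the average photon number per bit, and are stated here as background, not as the paper's derivation:

```python
import math

def helstrom(nbar):
    """Quantum (Helstrom) limit for discriminating coherent states +/-alpha, nbar = |alpha|^2."""
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * nbar)))

def kennedy(nbar):
    """Kennedy nulling receiver: displace one hypothesis to vacuum, then photon-count."""
    return 0.5 * math.exp(-4.0 * nbar)

def homodyne(nbar):
    """Classical coherent (homodyne) detection limit."""
    return 0.5 * math.erfc(math.sqrt(2.0 * nbar))

# At low average photon number the Kennedy receiver is beaten even by
# homodyne detection, which is why augmenting it with a classical coherent
# branch helps most in exactly that regime.
for nbar in (0.05, 0.2, 1.0):
    print(f"nbar={nbar}: Helstrom={helstrom(nbar):.4f} "
          f"Kennedy={kennedy(nbar):.4f} homodyne={homodyne(nbar):.4f}")
```

Running the comparison shows the ordering flip: below roughly half a photon per bit the homodyne limit lies under the Kennedy error, while at higher energies the Kennedy receiver approaches the Helstrom bound.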
An alternative to Rasch analysis using triadic comparisons and multi-dimensional scaling
NASA Astrophysics Data System (ADS)
Bradley, C.; Massof, R. W.
2016-11-01
Rasch analysis is a principled approach for estimating the magnitude of some shared property of a set of items when a group of people assign ordinal ratings to them. In the general case, Rasch analysis not only estimates person and item measures on the same invariant scale, but also estimates the average thresholds used by the population to define rating categories. However, Rasch analysis fails when there is insufficient variance in the observed responses because it assumes a probabilistic relationship between person measures, item measures and the rating assigned by a person to an item. When only a single person is rating all items, there may be cases where the person assigns the same rating to many items no matter how many times he rates them. We introduce an alternative to Rasch analysis for precisely these situations. Our approach leverages multi-dimensional scaling (MDS) and requires only rank orderings of items and rank orderings of pairs of distances between items to work. Simulations show one variant of this approach - triadic comparisons with non-metric MDS - provides highly accurate estimates of item measures in realistic situations.
Minute ventilation of cyclists, car and bus passengers: an experimental study.
Zuurbier, Moniek; Hoek, Gerard; van den Hazel, Peter; Brunekreef, Bert
2009-10-27
Differences in minute ventilation between cyclists, pedestrians and other commuters influence inhaled doses of air pollution. This study estimates the minute ventilation of cyclists, car passengers and bus passengers, as part of a study on the health effects of commuters' exposure to air pollutants. Thirty-four participants performed a submaximal test on a bicycle ergometer, during which heart rate and minute ventilation were measured simultaneously at increasing cycling intensity. Individual regression equations were calculated between heart rate and the natural log of minute ventilation. Heart rates were recorded during 280 two-hour trips by bicycle, bus and car and were converted into minute ventilation levels using the individual regression coefficients. Minute ventilation during bicycle rides was on average 2.1 times higher than in the car (individual range from 1.3 to 5.3) and 2.0 times higher than in the bus (individual range from 1.3 to 5.1). The ratio of minute ventilation while cycling compared to travelling by bus or car was higher in women than in men. Substantial differences in regression equations were found between individuals. The use of individual regression equations instead of average regression equations resulted in substantially better predictions of individual minute ventilation. The comparability of the gender-specific overall regression equations linking heart rate and minute ventilation with those of one previous American study supports the use of overall equations for studies at the group level. For estimating individual doses, individual regression coefficients provide more precise data. Minute ventilation levels of cyclists are on average two times higher than those of bus and car passengers, consistent with the ratio found in one small previous study of young adults. The study illustrates the importance of including minute ventilation data when comparing air pollution doses between different modes of transport.
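The per-participant calibration described above, a linear fit of the natural log of minute ventilation on heart rate, can be sketched as follows. The calibration points and trip heart rates here are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical calibration data from one participant's submaximal ergometer
# test: heart rate (beats/min) and measured minute ventilation (L/min).
hr = np.array([70.0, 90.0, 110.0, 130.0, 150.0])
ve = np.array([10.0, 15.0, 24.0, 38.0, 60.0])

# Per-individual linear regression of ln(VE) on heart rate.
slope, intercept = np.polyfit(hr, np.log(ve), 1)

def predict_ve(heart_rate):
    """Convert a heart rate recorded during a trip into minute ventilation (L/min)."""
    return np.exp(intercept + slope * heart_rate)

# Ratio of ventilation while cycling (say HR 120) vs riding in a car (HR 80):
ratio = predict_ve(120.0) / predict_ve(80.0)
print(round(float(ratio), 2))
```

Because the model is log-linear, the cycling-to-car ratio depends only on the heart rate difference and the individual slope, which is why individual coefficients predict individual doses better than a pooled equation.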
Du, Yang; Tan, Jian-guo; Chen, Li; Wang, Fang-ping; Tan, Yao; Zhou, Jian-feng
2012-08-18
To explore a gingival shade matching method and to evaluate the precision and accuracy of a dental spectrophotometer modified for use in gingival color measurement. Crystaleye, a dental spectrophotometer (Olympus, Tokyo, Japan) with a custom shading cover, was tested. For precision assessment, two experienced experimenters measured anterior maxillary incisors five times for each tooth. A total of 20 healthy gingival sites (attached gingiva, free gingiva and medial gingival papilla in the anterior maxillary region) were measured, and the Commission Internationale de l'Eclairage (CIE) color parameters (CIE L*a*b*) were analyzed using the supporting software. For accuracy assessment, a rectangular area of approximately 3 mm×3 mm was chosen in the attached gingival portion for spectral analysis. The PR715 (SpectraScan; Photo Research Inc., California, USA), a spectroradiometer, was used as the standard control. Average color differences (ΔE) between the values from the PR715 and the Crystaleye were calculated. In the precision assessment, ΔL* between the values at all test sites and the average values ranged from (0.28±0.16) to (0.78±0.57), with Δa* and Δb* from (0.28±0.15) to (0.87±0.65) and from (0.19±0.09) to (0.58±0.78), respectively. Average ΔE between the values at all test sites and the average values ranged from (0.62±0.17) to (1.25±0.98) CIELAB units, with an overall average ΔE of (0.90±0.18). In the accuracy assessment, ΔL* relative to the control device ranged from (0.58±0.50) to (2.22±1.89), with Δa* and Δb* from (1.03±0.67) to (2.99±1.32) and from (0.68±0.78) to (1.26±0.83), respectively. Average ΔE relative to the control device ranged from (2.44±0.82) to (3.51±1.03) CIELAB units, with an overall average ΔE of (2.96±1.08). With appropriate modification, the Crystaleye spectrophotometer demonstrated relatively minor color variations and can be useful in gingival color measurement.
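The ΔE values above are CIELAB color differences; in the classic CIE76 form (not restated in the abstract), ΔE*ab is simply the Euclidean distance in (L*, a*, b*) space. A minimal sketch with made-up gingival readings, not values from the study:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triplets (L*, a*, b*)."""
    return math.dist(lab1, lab2)  # Euclidean distance in CIELAB space

# Hypothetical readings of the same gingival site from the spectrophotometer
# and the reference spectroradiometer (illustrative numbers only).
crystaleye = (52.4, 28.1, 12.6)
pr715 = (53.0, 26.3, 13.1)

print(round(delta_e_ab(crystaleye, pr715), 2))
```

A ΔE around 1 CIELAB unit is the order of the precision figures reported above, while the accuracy figures of ~3 units reflect a systematic offset between the two instruments.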
Cramer, Emily
2016-01-01
Abstract Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
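The signal-noise reliability measure described above has a simple form: between-unit variance ("signal") divided by total variance. A sketch with illustrative variance components, not the study's beta-binomial estimates:

```python
def signal_noise_reliability(var_between, var_within):
    """Ratio of between-unit variability to total (between + within) variability."""
    return var_between / (var_between + var_within)

# When within-unit sampling noise dominates the between-unit signal,
# reliability is low and rankings of unit pressure ulcer rates are unstable.
print(round(signal_noise_reliability(0.4, 1.6), 2))  # prints 0.2
```

The study's point is that even when this ratio looks acceptable, it can be a poor guide to how precisely units are actually differentiated by rank, which is why simulation-based checks were used.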
Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J
2015-04-01
To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.
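Precision and sensitivity as used above can be computed from a retrieved set and a gold-standard set of studies. The record IDs below are hypothetical, chosen so the numbers echo the keyword-search results reported in the abstract:

```python
def precision_sensitivity(retrieved, relevant):
    """Precision and sensitivity (recall) of a search against a gold-standard set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved)
    sensitivity = len(hits) / len(relevant)
    return precision, sensitivity

# Hypothetical example: a keyword search returns 10 records, 9 of which
# truly used the CPS, out of 56 CPS studies overall.
retrieved = range(10)            # records 0..9 returned by the search
relevant = range(1, 57)          # records 1..56 are the true CPS studies
p, s = precision_sensitivity(retrieved, relevant)
print(round(p, 2), round(s, 2))  # prints 0.9 0.16
```

This makes the trade-off concrete: a precise keyword search wastes little screening effort but misses most relevant studies, whereas cited-reference searching trades precision for coverage.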
Linder, Suzanne K.; Kamath, Geetanjali R.; Pratt, Gregory F.; Saraykar, Smita S.; Volk, Robert J.
2015-01-01
Objective To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a healthcare decision-making instrument commonly used in clinical settings. Study Design & Setting We searched the literature using two methods: 1) keyword searching using variations of “control preferences scale” and 2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Results Keyword searches in bibliographic databases yielded high average precision (90%), but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45–54%), but precision ranged from 35–75% with Scopus being the most precise. Conclusion Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time and resources should dictate the combination of which methods and databases are used. PMID:25554521
Oude Voshaar, Martijn A H; Ten Klooster, Peter M; Bode, Christina; Vonkeman, Harald E; Glas, Cees A W; Jansen, Tim; van Albada-Kuipers, Iet; van Riel, Piet L C M; van de Laar, Mart A F J
2015-03-01
To compare the psychometric functioning of multidimensional disease-specific, multi-item generic, and single-item measures of fatigue in patients with rheumatoid arthritis (RA). Confirmatory factor analysis (CFA) and longitudinal item response theory (IRT) modeling were used to evaluate the measurement structure and local reliability of the Bristol RA Fatigue Multi-Dimensional Questionnaire (BRAF-MDQ), the Medical Outcomes Study Short Form-36 (SF-36) vitality scale, and the BRAF Numerical Rating Scales (BRAF-NRS) in a sample of 588 patients with RA. A 1-factor CFA model yielded a similar fit to a 5-factor model with subscale-specific dimensions, and the items from the different instruments adequately fit the IRT model, suggesting essential unidimensionality in measurement. The SF-36 vitality scale outperformed the BRAF-MDQ at lower levels of fatigue, but was less precise at moderate to higher levels of fatigue. At these levels of fatigue, the living, cognition, and emotion subscales of the BRAF-MDQ provide additional precision. The BRAF-NRS showed a limited measurement range, with its highest precision centered on average levels of fatigue. The different instruments appear to access a common underlying domain of fatigue severity, but differ considerably in their measurement precision along the continuum. The SF-36 vitality scale can be used to measure fatigue severity in samples with relatively mild fatigue. For samples expected to have higher levels of fatigue, the multidimensional BRAF-MDQ appears to be a better choice. The BRAF-NRS are not recommended if precise assessment is required, for instance in longitudinal settings.
Model-free aftershock forecasts constructed from similar sequences in the past
NASA Astrophysics Data System (ADS)
van der Elst, N.; Page, M. T.
2017-12-01
The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequence outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. 
The similarity forecast may be useful to emergency managers and non-specialists when confidence or expertise in parametric forecasting may be lacking. The method makes over-tuning impossible, and minimizes the rate of surprises. At the least, this forecast constitutes a useful benchmark for more precisely tuned parametric forecasts.
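A toy version of the similarity weighting can be sketched as follows. This is a simplified stand-in for the paper's measure: each past sequence is scored by the Poisson probability of its early aftershock count under an intensity set equal to the target's observed count, and the forecast is the similarity-weighted distribution of past outcomes. All counts and outcomes below are invented:

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of observing k events given intensity lam (lam > 0)."""
    return lam**k * math.exp(-lam) / math.factorial(k)

def similarity_weights(target_count, past_counts):
    """Normalized similarity of each past sequence to the target sequence."""
    w = [poisson_pmf(n, float(target_count)) for n in past_counts]
    total = sum(w)
    return [x / total for x in w]

# Hypothetical early aftershock counts for the target and a small library
# of past sequences, plus each past sequence's later outcome (event count).
past_counts = [2, 5, 6, 14]
outcomes = [3, 8, 9, 30]

weights = similarity_weights(target_count=5, past_counts=past_counts)
forecast = sum(w * y for w, y in zip(weights, outcomes))
print([round(w, 3) for w in weights], round(forecast, 1))
```

Past sequences whose early counts resemble the target dominate the forecast, while dissimilar sequences contribute almost nothing, so no parametric aftershock-decay model is ever fit.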
Warning: This keyboard will deconstruct--the role of the keyboard in skilled typewriting.
Crump, Matthew J C; Logan, Gordon D
2010-06-01
Skilled actions are commonly assumed to be controlled by precise internal schemas or cognitive maps. We challenge these ideas in the context of skilled typing, where prominent theories assume that typing is controlled by a well-learned cognitive map that plans finger movements without feedback. In two experiments, we demonstrate that online physical interaction with the keyboard critically mediates typing skill. Typists performed single-word and paragraph typing tasks on a regular keyboard, a laser-projection keyboard, and two deconstructed keyboards, made by removing successive layers of a regular keyboard. Averaged over the laser and deconstructed keyboards, response times for the first keystroke increased by 37%, the interval between keystrokes increased by 120%, and error rate increased by 177%, relative to those of the regular keyboard. A schema view predicts no influence of external motor feedback, because actions could be planned internally with high precision. We argue that the expert knowledge mediating action control emerges during online interaction with the physical environment.
Item-Level Psychometrics of the Glasgow Outcome Scale: Extended Structured Interviews.
Hong, Ickpyo; Li, Chih-Ying; Velozo, Craig A
2016-04-01
The Glasgow Outcome Scale-Extended (GOSE) structured interview captures critical components of activities and participation, including home, shopping, work, leisure, and family/friend relationships. Eighty-nine community-dwelling adults with mild-moderate traumatic brain injury (TBI) were recruited (average = 2.7 years post injury). Nine of the 19 items were used for the psychometric analysis. Factor analysis and item-level psychometrics were investigated using the Rasch partial-credit model. Although the principal components analysis of residuals suggests that a single measurement factor dominates the measure, the instrument did not meet the factor analysis criteria. Five items met the rating scale criteria. Eight items fit the Rasch model. The instrument demonstrated low person reliability (0.63), low person strata (2.07), and a slight ceiling effect. The GOSE demonstrated limitations in precisely measuring activities/participation for individuals after TBI. Future studies should examine the impact of the low precision of the GOSE on effect size. © The Author(s) 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alhroob, M.; Boyd, G.; Hasib, A.
Precision ultrasonic measurements in binary gas systems provide continuous real-time monitoring of mixture composition and flow. Using custom micro-controller-based electronics, we have developed an ultrasonic instrument, with numerous potential applications, capable of making continuous high-precision sound velocity measurements. The instrument measures sound transit times along two opposite directions aligned parallel to - or obliquely crossing - the gas flow. The difference between the two measured times yields the gas flow rate, while their average gives the sound velocity, which can be compared with a sound velocity vs. molar composition look-up table for the binary mixture at a given temperature and pressure. The look-up table may be generated from prior measurements in known mixtures of the two components, from theoretical calculations, or from a combination of the two. We describe the instrument and its performance within numerous applications in the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instrument can be of interest in other areas where continuous in-situ binary gas analysis and flowmetry are required.
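The transit-time arithmetic described above is straightforward: for a measurement path of length L aligned with the flow, the sum of the reciprocal transit times gives the sound speed and their difference gives the axial flow speed. A sketch with synthetic numbers (the path length and speeds are illustrative, not the ATLAS instrument's values):

```python
L = 0.30  # measurement path length, m (illustrative)

def sound_and_flow_speed(t_down, t_up):
    """Recover sound speed c and axial flow speed v from the two transit times."""
    c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)  # average of reciprocal times
    v = 0.5 * L * (1.0 / t_down - 1.0 / t_up)  # difference of reciprocal times
    return c, v

# Synthetic transit times for c = 300 m/s and v = 3 m/s:
t_down = L / (300.0 + 3.0)  # propagation with the flow
t_up = L / (300.0 - 3.0)    # propagation against the flow

c, v = sound_and_flow_speed(t_down, t_up)
print(round(c, 3), round(v, 3))
```

The recovered sound speed c would then be looked up against a sound-velocity vs. molar-composition table for the binary mixture at the measured temperature and pressure.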
Ship navigation using Navstar GPS - An application study
NASA Technical Reports Server (NTRS)
Mohan, S. N.
1982-01-01
Ocean current measurement applications in physical oceanography require knowledge of inertial ship velocity to a precision of 1-2 cm/sec over a typical five-minute averaging interval. The navigation accuracy must be commensurate with the data precision obtainable from shipborne acoustic profilers used in sensing ocean currents. The Navstar Global Positioning System is viewed as a step toward user technological simplification, extended coverage availability, and enhanced performance accuracy and reliability over the existing systems, namely, Loran-C, Transit, and Omega. Error analyses have shown the possibility of attaining the 1-2 cm/sec accuracy during active GPS coverage at a data rate of four position fixes per minute under varying sea states. This paper presents results of data validation exercises leading to the design of an experiment at sea for deployment of both a GPS y-set and a direct Doppler measurement system as the autonomous navigation system, used in conjunction with an acoustic Doppler as the sensor for ocean current measurement.
Probabilistic metrology or how some measurement outcomes render ultra-precise estimates
NASA Astrophysics Data System (ADS)
Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.
2016-10-01
We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequence of outcomes). For probe systems subject to local dephasing, we quantify such precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can set back the detrimental effects of noise.
Flowmeter for determining average rate of flow of liquid in a conduit
Kennerly, J.M.; Lindner, G.M.; Rowe, J.C.
1981-04-30
This invention is a compact, precise, and relatively simple device for use in determining the average rate of flow of a liquid through a conduit. The liquid may be turbulent and contain bubbles of gas. In a preferred embodiment, the flowmeter includes an electrical circuit and a flow vessel which is connected as a segment of the conduit conveying the liquid. The vessel is provided with a valved outlet and is partitioned by a vertical baffle into coaxial chambers whose upper regions are vented to permit the escape of gas. The inner chamber receives turbulent downflowing liquid from the conduit and is sized to operate at a lower pressure than the conduit, thus promoting evolution of gas from the liquid. Lower zones of the two chambers are interconnected so that the downflowing liquid establishes liquid levels in both chambers. The liquid level in the outer chamber is comparatively calm, being to a large extent isolated from the turbulence in the inner chamber once the liquid in the outer chamber has risen above the liquid-introduction zone for that chamber. Lower and upper probes are provided in the outer chamber for sensing the liquid level therein at points above its liquid-introduction zone. An electrical circuit is connected to the probes to display the time required for the liquid level in the outer chamber to successively contact the lower and upper probes. The average rate of flow through the conduit can be determined from the above-mentioned time and the vessel volume filled by the liquid during that time.
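The readout described above reduces to volume over time: the average volumetric flow rate is the vessel volume filled between the lower and upper probes divided by the measured fill time. A trivial sketch with illustrative values, not figures from the patent:

```python
def average_flow_rate(fill_volume_l, fill_time_s):
    """Average flow rate (L/s) from the probe-to-probe fill volume and fill time."""
    return fill_volume_l / fill_time_s

# Illustrative: 2.5 L of vessel volume between the probes, filled in 50 s.
print(average_flow_rate(2.5, 50.0))  # prints 0.05
```

The baffled outer chamber exists precisely so this simple ratio is valid: it presents a calm, gas-free liquid level to the probes even when the conduit flow is turbulent and bubbly.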
Flowmeter for determining average rate of flow of liquid in a conduit
Kennerly, John M.; Lindner, Gordon M.; Rowe, John C.
1982-01-01
This invention is a compact, precise, and relatively simple device for use in determining the average rate of flow of a liquid through a conduit. The liquid may be turbulent and contain bubbles of gas. In a preferred embodiment, the flowmeter includes an electrical circuit and a flow vessel which is connected as a segment of the conduit conveying the liquid. The vessel is provided with a valved outlet and is partitioned by a vertical baffle into coaxial chambers whose upper regions are vented to permit the escape of gas. The inner chamber receives turbulent downflowing liquid from the conduit and is sized to operate at a lower pressure than the conduit, thus promoting evolution of gas from the liquid. Lower zones of the two chambers are interconnected so that the downflowing liquid establishes liquid levels in both chambers. The liquid level in the outer chamber is comparatively calm, being to a large extent isolated from the turbulence in the inner chamber once the liquid in the outer chamber has risen above the liquid-introduction zone for that chamber. Lower and upper probes are provided in the outer chamber for sensing the liquid level therein at points above its liquid-introduction zone. An electrical circuit is connected to the probes to display the time required for the liquid level in the outer chamber to successively contact the lower and upper probes. The average rate of flow through the conduit can be determined from the above-mentioned time and the vessel volume filled by the liquid during that time.
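The final step of the measurement described above reduces to a simple ratio of fill volume to transit time. A minimal sketch; the function and variable names are illustrative, not taken from the patent:

```python
def average_flow_rate(fill_volume_m3, transit_time_s):
    """Average volumetric flow rate through the conduit, given the vessel
    volume filled between the lower- and upper-probe contacts and the
    elapsed time displayed by the electrical circuit."""
    if transit_time_s <= 0:
        raise ValueError("transit time must be positive")
    return fill_volume_m3 / transit_time_s

# e.g. 0.05 m^3 filled in 25 s gives roughly 0.002 m^3/s
```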
Evaluating GPS biologging technology for studying spatial ecology of large constricting snakes
Smith, Brian; Hart, Kristen M.; Mazzotti, Frank J.; Basille, Mathieu; Romagosa, Christina M.
2018-01-01
Background: GPS telemetry has revolutionized the study of animal spatial ecology in the last two decades. Until recently, it has mainly been deployed on large mammals and birds, but the technology is rapidly becoming miniaturized, and applications in diverse taxa are becoming possible. Large constricting snakes are top predators in their ecosystems, and accordingly they are often a management priority, whether their populations are threatened or invasive. Fine-scale GPS tracking datasets could greatly improve our ability to understand and manage these snakes, but the ability of this new technology to deliver high-quality data in this system is unproven. In order to evaluate GPS technology in large constrictors, we GPS-tagged 13 Burmese pythons (Python bivittatus) in Everglades National Park and deployed an additional 7 GPS tags on stationary platforms to evaluate habitat-driven biases in GPS locations. Both python and test platform GPS tags were programmed to attempt a GPS fix every 90 min. Results: While overall fix rates for the tagged pythons were low (18.1%), we were still able to obtain an average of 14.5 locations/animal/week, a large improvement over once-weekly VHF tracking. We found overall accuracy and precision to be very good (mean accuracy = 7.3 m, mean precision = 12.9 m), but a few highly imprecise locations were still recorded (0.2% of locations with precision > 1.0 km). We found that dense vegetation did decrease fix rate, but we concluded that the low observed fix rate was also due to python microhabitat selection underground or underwater. Half of our recovered pythons were either missing their tag or the tag had malfunctioned, resulting in no data being recovered. Conclusions: GPS biologging technology is a promising tool for obtaining frequent, accurate, and precise locations of large constricting snakes.
We recommend future studies couple GPS telemetry with frequent VHF locations in order to reduce bias and limit the impact of catastrophic failures on data collection, and we recommend improvements to GPS tag design to lessen the frequency of these failures.
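The stationary-platform tests above separate accuracy (closeness to the true surveyed location) from precision (repeatability of fixes). A minimal sketch of one common way to compute both from repeated fixes; the definitions and names are illustrative and not necessarily the authors' exact metrics:

```python
import math

def horizontal_accuracy_precision(fixes, true_point):
    """fixes: list of (x, y) positions in metres for a stationary platform.
    Accuracy: mean distance of fixes from the surveyed true point.
    Precision: mean distance of fixes from their own centroid."""
    cx = sum(x for x, _ in fixes) / len(fixes)
    cy = sum(y for _, y in fixes) / len(fixes)
    accuracy = sum(math.dist(p, true_point) for p in fixes) / len(fixes)
    precision = sum(math.dist(p, (cx, cy)) for p in fixes) / len(fixes)
    return accuracy, precision
```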
Do Physicians' Financial Incentives Affect Medical Treatment and Patient Health?†
Clemens, Jeffrey; Gottlieb, Joshua D.
2014-01-01
We investigate whether physicians' financial incentives influence health care supply, technology diffusion, and resulting patient outcomes. In 1997, Medicare consolidated the geographic regions across which it adjusts physician payments, generating area-specific price shocks. Areas with higher payment shocks experience significant increases in health care supply. On average, a 2 percent increase in payment rates leads to a 3 percent increase in care provision. Elective procedures such as cataract surgery respond much more strongly than less discretionary services. Non-radiologists expand their provision of MRIs, suggesting effects on technology adoption. We estimate economically small health impacts, albeit with limited precision. PMID:25170174
Time-optimized laser micro machining by using a new high dynamic and high precision galvo scanner
NASA Astrophysics Data System (ADS)
Jaeggi, Beat; Neuenschwander, Beat; Zimmermann, Markus; Zecherle, Markus; Boeckler, Ernst W.
2016-03-01
High accuracy, quality and throughput are key factors in laser micro machining. To achieve these goals, the ablation process, the machining strategy and the scanning device have to be optimized. The precision is influenced by the accuracy of the galvo scanner and can be further enhanced by synchronizing the movement of the mirrors with the laser pulse train. To maintain a high machining quality, i.e. minimum surface roughness, the pulse-to-pulse distance also has to be optimized. The highest ablation efficiency is obtained by choosing the laser peak fluence that yields the highest specific removal rate. The throughput can then be increased by simultaneously raising the average power, the repetition rate and the scanning speed so as to preserve the fluence and the pulse-to-pulse distance. A high scanning speed is therefore essential. To guarantee the required accuracy even at high scanning speeds, a new interferometry-based encoder technology was used that provides a high-quality signal for closed-loop control of the galvo scanner position. The low-inertia encoder design enables a very dynamic scanner system, which can be driven to very high line speeds by a specially adapted control solution. We present results with marking speeds up to 25 m/s using an f = 100 mm objective, obtained with a new scanning system and scanner tuning, while maintaining a precision of about 5 μm. We further show that, especially for short line lengths, the machining time can be minimized by choosing the proper speed, which need not be the maximum one.
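The throughput-scaling argument (raise average power, repetition rate and scan speed together so that pulse energy, fluence and pulse-to-pulse distance all stay fixed) can be sketched as follows; the function names and units are illustrative:

```python
def scale_process(avg_power_w, rep_rate_hz, scan_speed_m_s, k):
    """Scale throughput by a factor k while preserving pulse energy
    (avg_power / rep_rate), and hence peak fluence, as well as the
    pulse-to-pulse distance (scan_speed / rep_rate)."""
    return avg_power_w * k, rep_rate_hz * k, scan_speed_m_s * k

def pulse_pitch_m(scan_speed_m_s, rep_rate_hz):
    """Pulse-to-pulse distance on the workpiece."""
    return scan_speed_m_s / rep_rate_hz
```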
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg
NASA Astrophysics Data System (ADS)
Dunlop, Michelle
2017-09-01
High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
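The inflation of a weighted-average uncertainty for mutually discrepant measurements, as mentioned above, follows the standard scale-factor procedure. A hedged sketch of that procedure, not the evaluators' actual code:

```python
import math

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean. If the inputs are mutually
    inconsistent (chi2/dof > 1), the uncertainty is inflated by the
    scale factor S = sqrt(chi2/dof)."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    sigma = math.sqrt(1.0 / sum(w))
    dof = len(values) - 1
    chi2 = sum(wi * (v - mean) ** 2 for wi, v in zip(w, values))
    scale = math.sqrt(chi2 / dof) if dof > 0 and chi2 > dof else 1.0
    return mean, sigma * scale
```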
Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.
2006-01-01
We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media, Inc.
Developing an item bank to measure the coping strategies of people with hereditary retinal diseases.
Prem Senthil, Mallika; Khadka, Jyoti; De Roach, John; Lamey, Tina; McLaren, Terri; Campbell, Isabella; Fenwick, Eva K; Lamoureux, Ecosse L; Pesudovs, Konrad
2018-05-05
Our understanding of the coping strategies used by people with visual impairment to manage stress related to visual loss is limited. This study aims to develop a sophisticated coping instrument in the form of an item bank implemented via computerised adaptive testing (CAT) for hereditary retinal diseases. Items on coping were extracted from qualitative interviews with patients, supplemented by items from a literature review. A systematic multi-stage process of item refinement was carried out, followed by expert panel discussion and cognitive interviews. The final coping item bank had 30 items. Rasch analysis was used to assess the psychometric properties. A CAT simulation was carried out to estimate the average number of items required to gain precise measurement of hereditary retinal disease-related coping. One hundred eighty-nine participants answered the coping item bank (median age = 58 years). The coping scale demonstrated good precision and targeting. The standardised residual loadings for items revealed six items grouped together. Removal of the six items reduced the precision of the main coping scale and worsened the variance explained by the measure. Therefore, the six items were retained within the main scale. Our CAT simulation indicated that, on average, fewer than 10 items are required to gain a precise measurement of coping. This is the first study to develop a psychometrically robust coping instrument for hereditary retinal diseases. The CAT simulation indicated that, on average, only four and nine items were required to gain measurement at moderate and high precision, respectively.
Uncertainty in LiDAR derived Canopy Height Models in three unique forest ecosystems
NASA Astrophysics Data System (ADS)
Goulden, T.; Leisso, N.; Scholl, V.; Hass, B.
2016-12-01
The National Ecological Observatory Network (NEON) is a continental-scale ecological observation platform designed to collect and disseminate data that contribute to understanding and forecasting the impacts of climate change, land use change, and invasive species on ecology. NEON will collect in-situ and airborne data over 81 sites across the US, including Alaska, Hawaii, and Puerto Rico. The Airborne Observation Platform (AOP) group within the NEON project operates a payload suite that includes a waveform/discrete LiDAR, an imaging spectrometer (NIS) and a high-resolution RGB camera. One of the products derived from the discrete LiDAR is a canopy height model (CHM) raster developed at 1 m spatial resolution. Currently, it is hypothesized that differencing annually acquired CHM products allows identification of tree growth at in-situ distributed plots throughout the NEON sites. To test this hypothesis, the precision of the CHM product was determined through a specialized flight plan that independently repeated up to 20 observations of the same area with varying view geometries. The flight plan was acquired at three NEON sites, each with a unique forest type: 1) San Joaquin Experimental Range (SJER; open woodland dominated by oaks), 2) Soaproot Saddle (SOAP; mixed conifer-deciduous forest), and 3) Oak Ridge National Laboratory (ORNL; oak-hickory and pine forest). A CHM was developed for each flight line at each site and the overlap area was used to empirically estimate a site-specific precision of the CHM. The average cell-by-cell CHM precision at SJER, SOAP and ORNL was 1.34 m, 4.24 m and 0.72 m respectively. Given the average growth rate of the dominant species at each site and the average CHM uncertainty, the minimum time interval required between LiDAR acquisitions to confidently conclude growth had occurred at the plot scale was estimated to be between one and four years.
The minimum interval time was shown to be primarily dependent on the CHM uncertainty and number of cells within a plot which contained vegetation. This indicates that users of NEON data should not expect that changes in canopy height can be confidently identified between annual AOP acquisitions for all areas of NEON sites.
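The dependence of the minimum acquisition interval on CHM uncertainty and the number of vegetated cells per plot can be illustrated with a simple detectability estimate. This is a plausible reconstruction under the stated assumptions (independent cell errors, plot-mean differencing), not NEON's published method:

```python
import math

def min_detection_interval(chm_sigma_m, growth_rate_m_yr, n_cells, z=1.96):
    """Years of growth needed before a plot-mean height change exceeds
    the noise in a difference of two CHMs. The standard error of a
    plot mean is sigma/sqrt(n); differencing two independent CHMs
    multiplies it by sqrt(2)."""
    se_diff = chm_sigma_m * math.sqrt(2) / math.sqrt(n_cells)
    return z * se_diff / growth_rate_m_yr
```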
beta-Blockade used in precision sports: effect on pistol shooting performance.
Kruse, P; Ladefoged, J; Nielsen, U; Paulev, P E; Sørensen, J P
1986-08-01
In a double-blind cross-over study of 33 marksmen (standard pistol, 25 m), the adrenergic beta 1-receptor blocker metoprolol was compared to placebo. Metoprolol clearly improved pistol shooting performance compared with placebo: shooting improved on average by 13.4% of the possible improvement (i.e., 600 points minus the actual points obtained; SE = 4%, 2P less than 0.002). The most skilled athletes demonstrated the clearest improvement with metoprolol. We found no correlation between the shooting improvement and changes in the cardiovascular variables (i.e., changes in heart rate and systolic blood pressure), and no correlation with the estimated maximum O2 uptake. The shooting improvement is an effect of metoprolol on hand tremor. The emotional increase in heart rate and systolic blood pressure seems to be a beta 1-receptor phenomenon.
Low cost monocrystalline silicon sheet fabrication for solar cells by advanced ingot technology
NASA Technical Reports Server (NTRS)
Fiegl, G. F.; Bonora, A. C.
1980-01-01
The continuous liquid feed (CLF) Czochralski furnace and the enhanced I.D. slicing technology for the low-cost production of monocrystalline silicon sheets for solar cells are discussed. The incorporation of the CLF system is shown to improve ingot production rate significantly. As demonstrated in actual runs, higher than average solidification rates (75 to 100 mm/hr for 150 mm 1-0-0 crystals) can be achieved, when the system approaches steady-state conditions. The design characteristics of the CLF furnace are detailed, noting that it is capable of precise control of dopant impurity incorporation in the axial direction of the crystal. The crystal add-on cost is computed to be $11.88/sq m, considering a projected 1986 25-slice per cm conversion factor with an 86% crystal growth yield.
Sub-ppb, Autonomous, Real-time Detection of VOCs with iCRDS
NASA Astrophysics Data System (ADS)
Leen, J.; Gupta, M.; Baer, D. S.
2013-12-01
The continuous, real-time detection of sub-parts-per-billion (ppb) concentrations of volatile organic compounds (VOCs) such as trichloroethylene (TCE) and tetrachloroethylene (PCE) remains difficult, time consuming and expensive. In particular, short term exposure spikes and diurnal variations are difficult or impossible to detect with traditional TO-15 measurements. We present laboratory and field performance data from an instrument based on incoherent cavity ringdown spectroscopy (iCRDS) that operates in the mid-infrared (bands from 860-1060 cm-1 or 970-1280 cm-1) and is capable of detecting a broad range of VOCs, in situ, continuously and autonomously. We have demonstrated the measurement of TCE in zero air with a precision of 0.17 ppb (1σ in 4 minutes). PCE was measured with a precision of 0.15 ppb (1σ in 4 minutes). Both of these measured precisions exceed the EPA's commercial building action limit, which for TCE is 0.92 ppb (5 μg/m3) and for PCE is 0.29 ppb (2 μg/m3). Additionally, the instrument is capable of precisely measuring and quantifying BTEX compounds (benzene, toluene, ethylbenzene, xylene), including differentiation of xylene isomers. We have demonstrated the accurate, interference free measurement of Mountain View, California air doped with TCE concentrations ranging from 4.22 ppb (22.8 μg/m3) to 17.74 ppb (96 μg/m3) with a precision of 1.42 ppb (1σ in 4 minutes). Mountain View, California air doped with 10.83 ppb of PCE (74.0 μg/m3) was measured with a precision of 0.54 ppb (1σ in 4 minutes). Finally, the instrument was deployed to the Superfund site at Moffett Naval Air Station in Mountain View, California where contaminated ground water results in vapor intrusion of TCE and PCE. For two weeks, the instrument operated continuously and autonomously, successfully measuring TCE and PCE concentrations in both the breathing zone and steam tunnel air. 
TCE concentrations in the breathing zone averaged 0.186 ± 0.669 ppb while tunnel air averaged 17.38 ± 4.96 ppb, in excellent agreement with previous TO-15 8 hr averages. PCE concentrations in the breathing zone averaged 0.063 ± 0.270 ppb while tunnel air averaged 0.755 ± 0.359 ppb, again in excellent agreement with previous TO-15 8 hr averages. The iCRDS instrument has shown the ability to continuously and autonomously measure sub-ppb levels of toxic VOCs in the field, offering an unprecedented picture of the short-term dynamics associated with vapor intrusion and ground water pollution.
Spectral distortion of dual-comb spectrometry due to repetition rate fluctuation
NASA Astrophysics Data System (ADS)
Hong-Lei, Yang; Hao-Yun, Wei; Yan, Li
2016-04-01
Dual-comb spectrometry suffers from fluctuations of the comb parameters. We demonstrate that the repetition rate is more important than any other parameter, since a fluctuation of the repetition rate changes the difference in repetition rate between the two combs, in turn causing conversion-factor variation and spectral frequency misalignment. The measured frequency noise power spectral density of the repetition rate exhibits an integrated residual frequency modulation of 1.4 Hz from 1 Hz to 100 kHz in our system. This value corresponds to an absorption-peak fluctuation with a root mean square value of 0.19 cm-1, which is verified by both simulation and experiment. Further, we can also simulate spectrum degradation as the fluctuation varies. After correcting the misaligned spectra and averaging, the measured result agrees well with the simulated spectrum based on the GEISA database. Project supported by the State Key Laboratory of Precision Measurement Technology & Instruments of Tsinghua University and the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205147).
NASA Astrophysics Data System (ADS)
Xu, Rui; Zhou, Miaolei
2018-04-01
Piezo-actuated stages are widely applied in the high-precision positioning field nowadays. However, the inherent hysteresis nonlinearity in piezo-actuated stages greatly deteriorates the positioning accuracy of piezo-actuated stages. This paper first utilizes a nonlinear autoregressive moving average with exogenous inputs (NARMAX) model based on the Pi-sigma fuzzy neural network (PSFNN) to construct an online rate-dependent hysteresis model for describing the hysteresis nonlinearity in piezo-actuated stages. In order to improve the convergence rate of PSFNN and modeling precision, we adopt the gradient descent algorithm featuring three different learning factors to update the model parameters. The convergence of the NARMAX model based on the PSFNN is analyzed effectively. To ensure that the parameters can converge to the true values, the persistent excitation condition is considered. Then, a self-adaption compensation controller is designed for eliminating the hysteresis nonlinearity in piezo-actuated stages. A merit of the proposed controller is that it can directly eliminate the complex hysteresis nonlinearity in piezo-actuated stages without any inverse dynamic models. To demonstrate the effectiveness of the proposed model and control methods, a set of comparative experiments are performed on piezo-actuated stages. Experimental results show that the proposed modeling and control methods have excellent performance.
NASA Astrophysics Data System (ADS)
Tang, Xiaoli; Lin, Tong; Jiang, Steve
2009-09-01
We propose a novel approach for potential online treatment verification using cine EPID (electronic portal imaging device) images for hypofractionated lung radiotherapy based on a machine learning algorithm. Hypofractionated radiotherapy requires high precision. It is essential to effectively monitor the target to ensure that the tumor is within the beam aperture. We modeled the treatment verification problem as a two-class classification problem and applied an artificial neural network (ANN) to classify the cine EPID images acquired during the treatment into corresponding classes—with the tumor inside or outside of the beam aperture. Training samples were generated for the ANN using digitally reconstructed radiographs (DRRs) with artificially added shifts in the tumor location—to simulate cine EPID images with different tumor locations. Principal component analysis (PCA) was used to reduce the dimensionality of the training samples and cine EPID images acquired during the treatment. The proposed treatment verification algorithm was tested on five hypofractionated lung patients in a retrospective fashion. On average, our proposed algorithm achieved a 98.0% classification accuracy, a 97.6% recall rate and a 99.7% precision rate. This work was first presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.
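The figures reported above are the standard two-class classification metrics. A minimal sketch of how accuracy, recall and precision are computed; this is the textbook definition, not the authors' implementation:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, recall, and precision for a two-class problem
    (e.g., tumor inside vs. outside the beam aperture)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return accuracy, recall, precision
```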
Estimating the Critical Point of Crowding in the Emergency Department for the Warning System
NASA Astrophysics Data System (ADS)
Chang, Y.; Pan, C.; Tseng, C.; Wen, J.
2011-12-01
The purpose of this study is to deduce a function from the admission/discharge rates of patient flow to estimate a "Critical Point" that provides a reference for warning systems regarding crowding in the emergency department (ED) of a hospital or medical clinic. In this study, a model of "Input-Throughput-Output" was used in our established mathematical function to evaluate the critical point. The function is defined as dPin/dt = dPwait/dt + Cp×B + dPout/dt, where Pin = number of registered patients, Pwait = number of waiting patients, Cp = retention rate per bed (calculated for the critical point), B = number of licensed beds in the treatment area, and Pout = number of patients discharged from the treatment area. Using the average Cp of ED crowding, we could start the warning system at an appropriate time and then plan the necessary emergency response to facilitate patient flow more smoothly. It was concluded that ED crowding could be quantified using the average value of Cp and that this value could serve as a reference for medical staff to give optimal emergency medical treatment to patients. Therefore, additional practical work should be launched to collect more precise quantitative data.
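Rearranging the balance equation dPin/dt = dPwait/dt + Cp×B + dPout/dt for Cp gives a direct estimate from observed flow rates. A minimal sketch with illustrative numbers:

```python
def retention_rate_per_bed(d_pin_dt, d_pwait_dt, d_pout_dt, beds):
    """Solve dPin/dt = dPwait/dt + Cp*B + dPout/dt for the retention
    rate per bed Cp, given rates in patients per unit time."""
    return (d_pin_dt - d_pwait_dt - d_pout_dt) / beds

# e.g. 12 arrivals/h, waiting queue growing by 2/h, 8 discharges/h,
# 10 licensed beds: Cp = (12 - 2 - 8) / 10 = 0.2 patients/bed/h
```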
Importance sampling studies of helium using the Feynman-Kac path integral method
NASA Astrophysics Data System (ADS)
Datta, S.; Rejcek, J. M.
2018-05-01
In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using the Wiener measure, which is based on Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories will escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies of He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging used in the past. The best previous variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.
Investigation of ultrashort-pulsed laser on dental hard tissue
NASA Astrophysics Data System (ADS)
Uchizono, Takeyuki; Awazu, Kunio; Igarashi, Akihiro; Kato, Junji; Hirai, Yoshito
2007-02-01
Ultrashort-pulsed lasers (USPL) can ablate various materials precisely with little thermal effect. In laser dentistry, USPL has been studied by several researchers for ablating dental hard tissues, to avoid the cracking and carbonized layers produced by conventional lasers such as Er:YAG and CO2 lasers. We investigated the effectiveness of USPL ablation of dental hard tissues. In this study, a Ti:sapphire laser was used as the USPL. The laser parameters were a pulse duration of 130 fs, a wavelength of 800 nm, a repetition rate of 1 kHz, and average power densities of 90-360 W/cm2. Bovine root dentin plates and crown enamel plates were irradiated with the USPL at 1 mm/s using a moving stage. The irradiated samples were analyzed by SEM, EDX, FTIR and a roughness meter. In all irradiated samples, the cavity margins and walls were extremely sharp and steep. The irradiated dentin surfaces showed opened dentin tubules and no smear layer. The Ca/P ratio measured by EDX and the optical spectrum measured by FTIR showed no change between irradiated and non-irradiated samples. These results confirmed that USPL can ablate dental hard tissue precisely and non-thermally. In addition, the ablation depths were approximately 10 μm, 20 μm, and 60 μm at 90 W/cm2, 180 W/cm2, and 360 W/cm2, respectively; the ablation depth therefore depends on the average power density. USPL thus offers the possibility of controlling precise, non-thermal ablation in the depth direction by adjusting the irradiated average power density.
Video-rate or high-precision: a flexible range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.
2008-02-01
A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, and hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating at a relatively high resolution (512-by-512 pixels) and high precision (0.4 mm best case), but with a slow measurement rate (one frame every 10 s). Although this high precision ranging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
Precision Medicine-Nobody Is Average.
Vinks, A A
2017-03-01
Medicine gets personal and tailor-made treatments are underway. Hospitals have started to advertise their advanced genomic testing capabilities and even their disruptive technologies to help foster a culture of innovation. The prediction in the lay press is that in decades from now we may look back and see 2017 as the year precision medicine blossomed. It is all part of the Precision Medicine Initiative that takes into account individual differences in people's genes, environments, and lifestyles. © 2017 ASCPT.
Staggs, Vincent S; Cramer, Emily
2016-08-01
Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital-acquired pressure ulcer rates and evaluate a standard signal-noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step-down, medical, surgical, and medical-surgical nursing units from 1,299 US hospitals were analyzed. Using beta-binomial models, we estimated between-unit variability (signal) and within-unit variability (noise) in annual unit pressure ulcer rates. Signal-noise reliability was computed as the ratio of between-unit variability to the total of between- and within-unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal-noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal-noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc.
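The reliability ratio defined above is straightforward to compute once the two variance components have been estimated. A minimal sketch:

```python
def signal_noise_reliability(between_var, within_var):
    """Signal-noise reliability: the ratio of between-unit variance
    (signal) to the total of between- and within-unit variance."""
    return between_var / (between_var + within_var)

# Reliability approaches 1 as between-unit differences dominate noise.
```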
Next generation phenotyping using narrative reports in a rare disease clinical data warehouse.
Garcelon, Nicolas; Neuraz, Antoine; Salomon, Rémi; Bahi-Buisson, Nadia; Amiel, Jeanne; Picard, Capucine; Mahlaoui, Nizar; Benoit, Vincent; Burgun, Anita; Rance, Bastien
2018-05-31
Secondary use of data collected in Electronic Health Records opens perspectives for increasing our knowledge of rare diseases. The clinical data warehouse (named Dr. Warehouse) at the Necker-Enfants Malades Children's Hospital contains data collected during normal care for thousands of patients. Dr. Warehouse is oriented toward the exploration of clinical narratives. In this study, we present our method to find phenotypes associated with diseases of interest. We leveraged the frequency and TF-IDF to explore the association between clinical phenotypes and rare diseases. We applied our method in six use cases: phenotypes associated with the Rett, Lowe, Silver Russell, Bardet-Biedl syndromes, DOCK8 deficiency and Activated PI3-kinase Delta Syndrome (APDS). We asked domain experts to evaluate the relevance of the top-50 (for frequency and TF-IDF) phenotypes identified by Dr. Warehouse and computed the average precision and mean average precision. Experts concluded that between 16 and 39 phenotypes could be considered as relevant in the top-50 phenotypes ranked by descending frequency discovered by Dr. Warehouse (resp. between 11 and 41 for TF-IDF). Average precision ranged from 0.55 to 0.91 for frequency and 0.52 to 0.95 for TF-IDF. Mean average precision was 0.79. Our study suggests that phenotypes identified in clinical narratives stored in Electronic Health Records can provide rare disease specialists with candidate phenotypes that can be used in addition to the literature. Clinical Data Warehouses can be used to perform Next Generation Phenotyping, especially in the context of rare diseases. We have developed a method to detect phenotypes associated with a group of patients using medical concepts extracted from free-text clinical narratives.
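Average precision over an expert-judged ranked list, the metric used in this evaluation, can be sketched as follows; mean average precision is then the mean of these AP values across use cases. The relevance flags below are illustrative, not the study's data:

```python
def average_precision(relevant_flags):
    """AP of a ranked list: mean of precision@k taken at each relevant rank."""
    hits, precisions = 0, []
    for k, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Hypothetical expert judgments (1 = relevant) for a top-5 phenotype ranking
ap = average_precision([1, 1, 0, 1, 0])
print(round(ap, 2))  # 0.92
```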
Barnard, D R; Knue, G J; Dickerson, C Z; Bernier, U R; Kline, D L
2011-06-01
Capture rates of insectary-reared female Aedes albopictus (Skuse), Anopheles quadrimaculatus Say, Culex nigripalpus Theobald, Culex quinquefasciatus Say and Aedes triseriatus (Say) in CDC-type light traps (LT) supplemented with CO2 and using the human landing (HL) collection method were observed in matched-pair experiments in outdoor screened enclosures. Mosquito responses were compared on a catch-per-unit-effort basis using regression analysis with LT and HL as the dependent and independent variables, respectively. The average number of mosquitoes captured in 1 min by LT over a 24-h period was significantly related to the average number captured in 1 min by HL only for Cx. nigripalpus and Cx. quinquefasciatus. Patterns of diel activity indicated by a comparison of the mean response to LT and HL at eight different times in a 24-h period were not superposable for any species. The capture rate efficiency of LT when compared with HL was ≤15% for all mosquitoes except Cx. quinquefasciatus (43%). Statistical models of the relationship between mosquito responses to each collection method indicate that, except for Ae. albopictus, LT and HL capture rates are significantly related only during certain times of the diel period. Estimates of mosquito activity based on observations made between sunset and sunrise were most precise in this regard for An. quadrimaculatus and Cx. nigripalpus, as were those between sunrise and sunset for Cx. quinquefasciatus and Ae. triseriatus.
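The catch-per-unit-effort comparison above rests on regressing LT on HL capture rates. A minimal sketch of a through-the-origin least-squares slope, interpretable as capture-rate efficiency; the catch values are invented for illustration:

```python
def efficiency_slope(lt, hl):
    """Least-squares slope through the origin: LT catch per unit HL catch."""
    return sum(x * y for x, y in zip(hl, lt)) / sum(x * x for x in hl)

# Hypothetical per-minute catches across matched-pair sessions
hl = [12, 20, 8, 15, 25]   # human landing counts (independent variable)
lt = [5, 9, 3, 6, 11]      # light trap counts (dependent variable)
print(round(efficiency_slope(lt, hl), 2))  # 0.43, i.e. ~43% efficiency
```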
Grall; Leonard; Sacks
2000-02-01
Recent advances in column heating technology have made possible very fast linear temperature programming for high-speed gas chromatography. A fused-silica capillary column is contained in a tubular metal jacket, which is resistively heated by a precision power supply. With very rapid column heating, the rate of peak-capacity production is significantly enhanced, but the total peak capacity and the boiling-point resolution (minimum boiling-point difference required for the separation of two nonpolar compounds on a nonpolar column) are reduced relative to more conventional heating rates used with convection-oven instruments. As temperature-programming rates increase, elution temperatures also increase with the result that retention may become insignificant prior to elution. This results in inefficient utilization of the down-stream end of the column and causes a loss in the rate of peak-capacity production. The rate of peak-capacity production is increased by the use of shorter columns and higher carrier gas velocities. With high programming rates (100-600 degrees C/min), column lengths of 6-12 m and average linear carrier gas velocities in the 100-150 cm/s range are satisfactory. In this study, the rate of peak-capacity production, the total peak capacity, and the boiling point resolution are determined for C10-C28 n-alkanes using 6-18 m long columns, 50-200 cm/s average carrier gas velocities, and 60-600 degrees C/min programming rates. It was found that with a 6-meter-long, 0.25-mm i.d. column programmed at a rate of 600 degrees C/min, a maximum peak-capacity production rate of 6.1 peaks/s was obtained. A total peak capacity of about 75 peaks was produced in a 37-s long separation spanning a boiling-point range from n-C10 (174 degrees C) to n-C28 (432 degrees C).
Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael
2013-02-01
The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with (18)F-FLT PET/CT and MRI, were included. sCT images were calculated and co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT-images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviations of the relative differences within the head were relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here has a high rate of accuracy, but high-precision quantitative imaging of the nasal septa region is not possible at the moment.
A rapid radiative transfer model for reflection of solar radiation
NASA Technical Reports Server (NTRS)
Xiang, X.; Smith, E. A.; Justus, C. G.
1994-01-01
A rapid analytical radiative transfer model for reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests with the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst average differences of approximately 10%-15%. By the same token, the computational speed of the new model is some tens to thousands of times faster than that of the more precise model when the latter's stream resolution is set to generate precise calculations.
NASA Astrophysics Data System (ADS)
Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi
2015-02-01
Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate angular rate from blurred star images by employing a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints possessed by small satellites. The research studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision better than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors that use optical systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.
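A simplified version of the underlying geometry: for rotation about an axis perpendicular to the boresight, the streak a star leaves during one exposure has a length of roughly f·ω·t_exp on the focal plane, so the rate can be read back from the blur. This is an illustrative small-angle model, not the paper's estimator, and the telescope parameters are invented:

```python
def angular_rate_from_blur(blur_len_px, pixel_size_m, focal_len_m, exposure_s):
    """Estimate the angular rate (rad/s) from the length of a star streak.

    Assumes rotation about an axis perpendicular to the boresight, so the
    streak length on the focal plane is approximately f * omega * t_exp.
    """
    streak_m = blur_len_px * pixel_size_m
    return streak_m / (focal_len_m * exposure_s)

# Hypothetical telescope: 1.6 m focal length, 7.5 um pixels, 1 s exposure
rate = angular_rate_from_blur(blur_len_px=2.0, pixel_size_m=7.5e-6,
                              focal_len_m=1.6, exposure_s=1.0)
print(f"{rate:.2e} rad/s")  # on the order of 1e-5 rad/s for this toy case
```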
Kobayashi, Shingo; Shinomiya, Takayuki; Kitamura, Hisashi; Ishikawa, Takahiro; Imaseki, Hitoshi; Oikawa, Masakazu; Kodaira, Satoshi; Miyaushiro, Norihiro; Takashima, Yoshio; Uchihori, Yukio
2015-01-01
We constructed a new car-borne survey system called Radi-Probe with a portable germanium gamma-ray spectrometer onboard a cargo truck, to identify radionuclides and quantify surface contamination from the accident at Fukushima Dai-ichi Nuclear Power Station. The system can quickly survey a large area and obtain ambient dose equivalent rates and gamma-ray energy spectra with good energy resolution. We also developed a new calibration method for the system to deal with an actual nuclear disaster, so that quantitative surface deposition densities of radionuclides, such as (134)Cs and (137)Cs, and kerma rates of each radionuclide can be calculated. We carried out a car-borne survey over northeastern and eastern Japan (Tohoku and Kanto regions of Honshu) from 25 September through 7 October 2012. We discuss results on the distribution of the ambient dose equivalent rate H*(10), (134)Cs and (137)Cs surface deposition densities, spatial variation of the (134)Cs/(137)Cs ratio, and the relationship between the surface deposition densities of (134)Cs/(137)Cs and H*(10). The ratio of (134)Cs/(137)Cs was nearly constant within our measurement precision, with an average of 1.06 ± 0.04 in northeastern and eastern Japan (decay-corrected to 11 March 2011), although small variations from the average were observed. Copyright © 2014 Elsevier Ltd. All rights reserved.
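Decay correction of the (134)Cs/(137)Cs ratio to a reference date is straightforward exponential arithmetic. A sketch using nominal literature half-lives; the measured ratio and elapsed time below are illustrative, not the paper's data:

```python
import math

HALF_LIFE_Y = {"Cs134": 2.065, "Cs137": 30.08}  # nominal half-lives, years

def decay_correct_ratio(measured_ratio, elapsed_years):
    """Correct a measured 134Cs/137Cs activity ratio back to a reference date.

    Each activity is divided by its decay factor exp(-lambda * t), so the
    ratio correction is the ratio of the two factors.
    """
    lam134 = math.log(2) / HALF_LIFE_Y["Cs134"]
    lam137 = math.log(2) / HALF_LIFE_Y["Cs137"]
    return measured_ratio * math.exp((lam134 - lam137) * elapsed_years)

# A ratio of ~0.65 measured ~1.56 y after 11 March 2011 corrects to ~1.06
print(round(decay_correct_ratio(0.65, 1.56), 2))  # 1.06
```

The shorter-lived 134Cs decays much faster, which is why the measured ratio falls well below the decay-corrected value.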
Thomas, Freddy; Jamin, Eric
2009-09-01
An international collaborative study of isotopic methods applied to control the authenticity of vinegar was organized in order to support the recognition of these procedures as official methods. The determination of the 2H/1H ratio of the methyl site of acetic acid by SNIF-NMR (site-specific natural isotopic fractionation-nuclear magnetic resonance) and the determination of the 13C/12C ratio by IRMS (isotope ratio mass spectrometry) provide complementary information to characterize the botanical origin of acetic acid and to detect adulterations of vinegar using synthetic acetic acid. Both methods use the same initial steps to recover pure acetic acid from vinegar. In the case of wine vinegar, the determination of the 18O/16O ratio of water by IRMS makes it possible to differentiate wine vinegar from vinegars made from dried grapes. The same set of vinegar samples was used to validate these three determinations. The precision parameters of the method for measuring delta13C (carbon isotopic deviation) were found to be similar to the values previously obtained for similar methods applied to wine ethanol or sugars extracted from fruit juices: the average repeatability (r) was 0.45 per thousand, and the average reproducibility (R) was 0.91 per thousand. As expected from a previous in-house study of the uncertainties, the precision parameters of the method for measuring the 2H/1H ratio of the methyl site were found to be slightly higher than the values previously obtained for similar methods applied to wine ethanol or fermentation ethanol in fruit juices: the average repeatability was 1.34 ppm, and the average reproducibility was 1.62 ppm. This precision is still significantly smaller than the differences between various acetic acid sources (delta13C and delta18O) and allows a satisfactory discrimination of vinegar types.
The precision parameters of the method for measuring delta18O were found to be similar to the values previously obtained for other methods applied to wine and fruit juices: the average repeatability was 0.15 per thousand, and the average reproducibility was 0.59 per thousand. The above values are proposed as repeatability and reproducibility limits in the current state of the art. On the basis of this satisfactory inter-laboratory precision and on the accuracy demonstrated by a spiking experiment, the authors recommend the adoption of the three isotopic determinations included in this study as official methods for controlling the authenticity of vinegar.
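Repeatability and reproducibility limits of the kind quoted above are conventionally derived (per ISO 5725) as r = 2.8·s_r and R = 2.8·s_R, where s_r and s_R are the repeatability and reproducibility standard deviations. A minimal sketch with invented duplicate delta13C measurements (per mil) from one laboratory:

```python
import statistics

def repeatability_limit(replicates):
    """ISO 5725-style repeatability limit r = 2.8 * s_r, where s_r is the
    within-laboratory standard deviation of replicate measurements."""
    s_r = statistics.stdev(replicates)
    return 2.8 * s_r

# Hypothetical replicate delta13C measurements (per mil)
print(round(repeatability_limit([-27.10, -27.25, -27.02, -27.18]), 2))  # 0.28
```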
Progress on a Multichannel, Dual-Mixer Stability Analyzer
NASA Technical Reports Server (NTRS)
Kirk, Albert; Cole, Steven; Stevens, Gary; Tucker, Blake; Greenhall, Charles
2005-01-01
Several documents describe aspects of the continuing development of a multichannel, dual-mixer system for simultaneous characterization of the instabilities of multiple precise, low-noise oscillators. One of the oscillators would be deemed to be a reference oscillator, its frequency would be offset by an amount (100 Hz) much greater than the desired data rate, and each of the other oscillators would be compared with the frequency-offset signal by operation of a combination of hardware and software. A high-rate time-tag counter would collect zero-crossing times of the approximately equal 100-Hz beat notes. The system would effect a combination of interpolation and averaging to process the time tags into low-rate phase residuals at the desired grid times. Circuitry that has been developed since the cited prior article includes an eight-channel timer board to replace an obsolete commercial time-tag counter, plus a custom offset generator, cleanup loop, distribution amplifier, zero-crossing detector, and frequency divider.
Diagnostic for a high-repetition rate electron photo-gun and first measurements
NASA Astrophysics Data System (ADS)
Filippetto, D.; Doolittle, L.; Huang, G.; Norum, E.; Portmann, G.; Qian, H.; Sannibale, F.
2015-05-01
The APEX electron source at LBNL combines high repetition rate with the high beam brightness typical of photoguns, delivering low-emittance electron pulses at MHz frequency. Proving the high quality of the beam is an essential step for the success of the experiment, opening the door of high average power to brightness-hungry applications such as X-ray FELs and MHz ultrafast electron diffraction. As a first step, a complete characterization of the beam parameters is foreseen at the gun beam energy of 750 keV. Diagnostics for low and high current measurements have been installed and tested, and measurements of cathode lifetime and thermal emittance in an RF environment with mA current have been performed. The recent installation of a double-slit system, a deflecting cavity, and a high-precision spectrometer allows the exploration of the full 6D phase space. Here we discuss the present layout of the machine and future upgrades, showing the latest results at low and high repetition rate, together with the tools and techniques used.
High-precision branching ratio measurement for the superallowed β+ emitter Ga62
NASA Astrophysics Data System (ADS)
Finlay, P.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Towner, I. S.; Austin, R. A. E.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Kanungo, R.; Leach, K. G.; Mattoon, C. M.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Ressler, J. J.; Sarazin, F.; Savajols, H.; Schumaker, M. A.; Wong, J.
2008-08-01
A high-precision branching ratio measurement for the superallowed β+ decay of Ga62 was performed at the Isotope Separator and Accelerator (ISAC) radioactive ion beam facility. The 8π spectrometer, an array of 20 high-purity germanium detectors, was employed to detect the γ rays emitted following Gamow-Teller and nonanalog Fermi β+ decays of Ga62, and the SCEPTAR plastic scintillator array was used to detect the emitted β particles. Thirty γ rays were identified following Ga62 decay, establishing the superallowed branching ratio to be 99.858(8)%. Combined with the world-average half-life and a recent high-precision Q-value measurement for Ga62, this branching ratio yields an ft value of 3074.3±1.1 s, making that of Ga62 among the most precisely determined superallowed ft values. Comparison between the superallowed ft value determined in this work and the world-average corrected 𝓕t value allows the large nuclear-structure-dependent correction for Ga62 decay to be experimentally determined from the CVC hypothesis to better than 7% of its own value, the most precise experimental determination for any superallowed emitter. These results provide a benchmark for the refinement of the theoretical description of isospin-symmetry breaking in A ≥ 62 superallowed decays.
Application of Template Matching for Improving Classification of Urban Railroad Point Clouds
Arastounia, Mostafa; Oude Elberink, Sander
2016-01-01
This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
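Point-level precision and accuracy of the kind reported above follow directly from confusion-matrix counts. A minimal sketch, with counts invented for illustration rather than taken from the study:

```python
def point_metrics(tp, fp, fn, tn):
    """Point-level precision and accuracy from a confusion matrix."""
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, accuracy

# Hypothetical counts for one class (e.g. contact-cable points)
p, a = point_metrics(tp=9680, fp=320, fn=210, tn=89790)
print(f"precision={p:.1%} accuracy={a:.1%}")  # precision=96.8% accuracy=99.5%
```

Averaging these per-class values over the three object classes yields the average precision and accuracy figures quoted in the abstract.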
High-precision half-life measurements of the T = 1/2 mirror β decays 17F and 33Cl
NASA Astrophysics Data System (ADS)
Grinyer, J.; Grinyer, G. F.; Babo, M.; Bouzomita, H.; Chauveau, P.; Delahaye, P.; Dubois, M.; Frigot, R.; Jardin, P.; Leboucher, C.; Maunoury, L.; Seiffert, C.; Thomas, J. C.; Traykov, E.
2015-10-01
Background: Measurements of the ft values for T = 1/2 mirror β+ decays offer a method to test the conserved vector current hypothesis and to determine Vud, the up-down matrix element of the Cabibbo-Kobayashi-Maskawa matrix. In most mirror decays used for these tests, uncertainties in the ft values are dominated by the uncertainties in the half-lives. Purpose: Two precision half-life measurements were performed for the T = 1/2 β+ emitters 17F and 33Cl, in order to eliminate the half-life as the leading source of uncertainty in their ft values. Method: Half-lives of 17F and 33Cl were determined using β counting of implanted radioactive ion beam samples on a moving tape transport system at the Système de Production d'Ions Radioactifs Accélérés en Ligne low-energy identification station at the Grand Accélérateur National d'Ions Lourds. Results: The 17F half-life result, 64.347(35) s, precise to ±0.05%, is a factor of 5 more precise than the previous world average. The half-life of 33Cl was determined to be 2.5038(22) s; the current uncertainty of ±0.09% is nearly a factor of 2 smaller than that of the previous world average. Conclusions: The precision achieved during the present measurements implies that the half-life no longer dominates the uncertainty of the ft values for both T = 1/2 mirror decays 17F and 33Cl.
Shock, Jennifer L; Fischer, Kael F; DeRisi, Joseph L
2007-01-01
The rate of mRNA decay is an essential element of post-transcriptional regulation in all organisms. Previously, studies in several organisms found that the specific half-life of each mRNA is precisely related to its physiologic role, and plays an important role in determining levels of gene expression. We used a genome-wide approach to characterize mRNA decay in Plasmodium falciparum. We found that, globally, rates of mRNA decay decrease dramatically during the asexual intra-erythrocytic developmental cycle. During the ring stage of the cycle, the average mRNA half-life was 9.5 min, but this was extended to an average of 65 min during the late schizont stage of development. Thus, a major determinant of mRNA decay rate appears to be linked to the stage of intra-erythrocytic development. Furthermore, we found specific variations in decay patterns superimposed upon the dominant trend of progressive half-life lengthening. These variations in decay pattern were frequently enriched for genes with specific cellular functions or processes. Elucidation of Plasmodium mRNA decay rates provides a key element for deciphering mechanisms of genetic control in this parasite, by complementing and extending previous mRNA abundance studies. Our results indicate that progressive stage-dependent decreases in mRNA decay rate are a major determinant of mRNA accumulation during the schizont stage of intra-erythrocytic development. This type of genome-wide change in mRNA decay rate has not been observed in any other organism to date, and indicates that post-transcriptional regulation may be the dominant mechanism of gene regulation in P. falciparum.
Metaphor Identification in Large Texts Corpora
Neuman, Yair; Assaf, Dan; Cohen, Yohai; Last, Mark; Argamon, Shlomo; Howard, Newton; Frieder, Ophir
2013-01-01
Identifying metaphorical language-use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and the New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of scope of metaphorical phrases and annotated corpora size. Algorithms’ performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and 27% averaged improvement in prediction over the base-rate of metaphors in the corpus. PMID:23658625
Yan, Ming; Li, Wenxue; Yang, Kangwen; Zhou, Hui; Shen, Xuling; Zhou, Qian; Ru, Qitian; Bai, Dongbi; Zeng, Heping
2012-05-01
We report on a simple scheme to precisely control carrier-envelope phase of a nonlinear-polarization-rotation mode-locked self-started Yb-fiber laser system with an average output power of ∼7 W and a pulse width of 130 fs. The offset frequency was locked to the repetition rate of ∼64.5 MHz with a relative linewidth of ∼1.4 MHz by using a self-referenced feed-forward scheme based on an acousto-optic frequency shifter. The phase noise and timing jitter were calculated to be 370 mrad and 120 as, respectively.
High-Precision Mass Measurement of
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Bollen, G.; Eibach, M.; Gulyuz, K.; Hamaker, A.; Izzo, C.; Ong, W.-J.; Puentes, D.; Redshaw, M.; Ringle, R.; Sandler, R.; Schwarz, S.; Sumithrarachchi, C. S.; Surbrook, J.; Villari, A. C. C.; Yandow, I. T.
2018-01-01
We report the mass measurement of
Enhancing a Web Crawler with Arabic Search Capability
2010-09-01
Figure 2: Monolingual 11-point precision results (from [14]). Figure 3: Lucene […] libraries (prefixes dictionary, stems dictionary and suffixes dictionary). If all the word elements (prefix, stem, suffix) are found in their […] stemmer improved over 90% in average precision from raw retrieval. The authors concluded that stemming is very effective on Arabic IR. For monolingual […]
Evaluation of Small Mass Spectrometer Systems
NASA Technical Reports Server (NTRS)
Arkin, C. Richard; Griffin, Timothy P.; Ottens, Andrew K.; Diaz, Jorge A.; Follistein, Duke W.; Adams, Fredrick W.; Helms, William R.; Voska, N. (Technical Monitor)
2002-01-01
This work is aimed at understanding the aspects of designing a miniature mass spectrometer (MS) system. A multitude of commercial and government sectors, such as the military, environmental agencies and industrial manufacturers of semiconductors, refrigerants, and petroleum products, would find a small, portable, rugged and reliable MS system beneficial. Several types of small MS systems are evaluated and discussed, including linear quadrupole, quadrupole ion trap, time of flight and sector. The performance of each system in terms of accuracy, precision, limits of detection, response time, recovery time, scan rate, volume and weight is assessed. A performance scale is set up to rank each system and an overall performance score is given to each system. All experiments involved the analysis of hydrogen, helium, oxygen and argon in a nitrogen background with the concentrations of the components of interest ranging from 0-5000 parts-per-million (ppm). The relative accuracies of the systems vary from < 1% to approx. 40% with an average below 10%. Relative precisions varied from 1% to 20%, with an average below 5%. The detection limits had a large distribution, ranging from 0.2 to 170 ppm. The systems had diverse response times ranging from 4 s to 210 s, as did the recovery times, with a 6 s to 210 s distribution. Most instruments had scan times near 1 s; however, one instrument exceeded 13 s. System weights varied from 9 to 52 kg and sizes from 15 × 10³ cm³ to 110 × 10³ cm³.
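Relative accuracy and precision of the sort tabulated in this evaluation can be computed from replicate readings against a known standard; here precision is the coefficient of variation. A sketch with invented readings:

```python
import statistics

def accuracy_precision(readings, true_value):
    """Relative accuracy (% error of the mean) and precision (% RSD)."""
    mean = statistics.mean(readings)
    rel_accuracy = abs(mean - true_value) / true_value * 100
    rel_precision = statistics.stdev(readings) / mean * 100  # coefficient of variation
    return rel_accuracy, rel_precision

# Hypothetical repeated readings of a 1000 ppm helium standard
acc, prec = accuracy_precision([985, 1010, 995, 990, 1020], true_value=1000)
print(f"accuracy error {acc:.1f}%, precision {prec:.1f}%")
```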
All-fiber high-power monolithic femtosecond laser at 1.59 µm with 63-fs pulse width
NASA Astrophysics Data System (ADS)
Hekmat, M. J.; Omoomi, M.; Gholami, A.; Yazdabadi, A. Bagheri; Abdollahi, M.; Hamidnejad, E.; Ebrahimi, A.; Normohamadi, H.
2018-01-01
In this research, by adopting an alternative novel approach to ultra-short giant-pulse generation, motivated by difficulties with traditional methods, an optimized Er/Yb co-doped double-clad fiber amplifier is applied to boost the average power of single-mode output pulses to a high level of 2 W at a 1.59-µm central wavelength. Output pulses of approximately 63-fs width at a 52-MHz repetition rate are obtained in an all-fiber monolithic laser configuration. The idea of employing parabolic pulse amplification for stretching output pulses, together with high-power pulse amplification using Er/Yb co-doped active fibers for compressing and boosting output average power, plays a crucial role in obtaining the desired results. The proposed configuration offers substantial advantages over previously reported designs, which make it well suited for high-power precision applications such as medical surgery. Detailed dynamics of pulse stretching and compression in active fibers with different GVD parameters are numerically and experimentally investigated.
Yang, Lei; Hao, Dongmei; Wu, Shuicai; Zhong, Rugang; Zeng, Yanjun
2013-06-01
Rats are often used in electromagnetic field (EMF) exposure experiments. In this study of the effect of 900 MHz EMF exposure on learning and memory in SD rats, the specific absorption rate (SAR) and the temperature rise in the rat head are numerically evaluated. The digital anatomical model of an SD rat is reconstructed from MRI images. A numerical method, finite-difference time-domain, is applied to assess the SAR and the temperature rise during exposure. Measurements and simulations are conducted to characterize the net radiated power of the dipole to provide a precise dosimetric result. The whole-body average SAR and the localized SAR averaged over 1, 0.5 and 0.05 g of mass for different organs/tissues are given. The results reveal that, under the given exposure setup, no significant temperature rise occurs. The reconstructed anatomical rat model can be used in EMF simulation, and the dosimetric result provides useful information for biological effect studies.
Influence of tides in viscoelastic bodies of planet and satellite on the satellite's orbital motion
NASA Astrophysics Data System (ADS)
Emelyanov, N. V.
2018-06-01
The problem of the influence of tidal friction in both planetary and satellite bodies upon the satellite's orbital motion is considered. Using the differential equations in the satellite's rectangular planetocentric coordinates, the differential equations describing the changes in semimajor axis and eccentricity are derived. The equations in rectangular coordinates were taken from earlier works on the problem. The calculations carried out for a number of test examples prove that the averaged solutions of the equations in coordinates and the precise solutions of the averaged equations in the Keplerian elements are identical. For the problem of tides raised on the planet's body, it was found that, if the satellite's mean motion n is equal to 11/18 Ω, where Ω is the planet's angular rotation rate, the orbital eccentricity does not change. This conclusion is in agreement with the results of other authors. It was also found that there is an essential discrepancy between the equations in the elements obtained in this paper and analogous equations published by earlier researchers.
The content of Ca, Cu, Fe, Mg and Mn and antioxidant activity of green coffee brews.
Stelmach, Ewelina; Pohl, Pawel; Szymczycha-Madeja, Anna
2015-09-01
A simple and fast method for the analysis of green coffee infusions was developed to measure total concentrations of Ca, Cu, Fe, Mg and Mn by high resolution-continuum source flame atomic absorption spectrometry. The precision of the method was within 1-8%, while the accuracy was within -1% to 2%. The method was used for the analysis of infusions of twelve green coffees of different geographical origin. It was found that Ca and Mg were leached most easily, i.e., on average 75% and 70%, respectively. Compared to the mug coffee preparation, the rate of extraction of the elements increased when infusions were prepared using the dripper or Turkish coffee preparation methods. Additionally, it was established that the antioxidant activity of green coffee infusions prepared using the mug method was high, 75% on average, and positively correlated with the total content of phenolic compounds and the concentration of Ca in the brew. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P
2017-08-01
Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1mm was observed. Almost no deformations (<0.2mm) were observed post-autoclave sterilization of the 3D-printed surgical guides. In the three surgical cases, the average precision of fibula free flap modelling was between 0.1mm and 0.4mm, and the average precision of the complete reconstructed mandible was less than 1mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Study on chemical mechanical polishing of silicon wafer with megasonic vibration assisted.
Zhai, Ke; He, Qing; Li, Liang; Ren, Yi
2017-09-01
Chemical mechanical polishing (CMP) is the primary method of realizing the global planarization of silicon wafers. To improve this process, a novel method combining megasonic vibration with chemical mechanical polishing (MA-CMP) is developed in this paper. A matching layer structure for the polishing head was calculated and designed. Silicon wafers were polished by megasonic-assisted and traditional chemical mechanical polishing respectively; both coarse-polishing and precision-polishing experiments were carried out. With megasonic vibration, the surface roughness Ra was reduced from 22.260 nm to 17.835 nm in coarse polishing, and the material removal rate increased by approximately 15-25% for megasonic-assisted chemical mechanical polishing relative to traditional chemical mechanical polishing. Average surface roughness Ra was reduced from 0.509 nm to 0.387 nm in precision polishing. The results show that megasonic-assisted chemical mechanical polishing is a feasible method of improving polishing efficiency and surface quality. The material removal and finishing mechanisms of megasonic-vibration-assisted polishing are also investigated. Copyright © 2017 Elsevier B.V. All rights reserved.
High-precision two-way optic-fiber time transfer using an improved time code.
Wu, Guiling; Hu, Liang; Zhang, Hao; Chen, Jianping
2014-11-01
We present a novel high-precision two-way optic-fiber time transfer scheme. The Inter-Range Instrumentation Group (IRIG-B) time code is modified by increasing the bit rate and defining new fields. The modified time code can be transmitted directly using commercial optical transceivers and efficiently suppresses the effect of Rayleigh backscattering in the optical fiber. A dedicated codec (encoder and decoder) with low delay fluctuation is developed. The synchronization issue is addressed by adopting a mask technique and a combinational logic circuit. Its delay fluctuation is less than 27 ps in terms of standard deviation. Two-way optic-fiber time transfer using the improved codec scheme is verified experimentally over fiber links of 2 m to 100 km. The results show that the stability over the 100 km fiber link is always better than 35 ps, with a minimum value of about 2 ps at an averaging time of around 1000 s. The uncertainty of the time difference induced by chromatic dispersion over 100 km is less than 22 ps.
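At the heart of any two-way scheme is the fact that a symmetric propagation delay cancels in the half-difference of the two one-way measurements. A minimal sketch of that principle, with illustrative timestamps only (this is the generic two-way principle, not the IRIG-B codec described in the abstract):

```python
def two_way_offset(t_tx_a, t_rx_b, t_tx_b, t_rx_a):
    """Clock offset of station B relative to A, assuming equal A->B and B->A delays.

    t_tx_a : transmit time at A (A's clock)
    t_rx_b : reception of A's signal at B (B's clock)
    t_tx_b : transmit time at B (B's clock)
    t_rx_a : reception of B's signal at A (A's clock)
    """
    meas_ab = t_rx_b - t_tx_a   # one-way delay + clock offset
    meas_ba = t_rx_a - t_tx_b   # one-way delay - clock offset
    return (meas_ab - meas_ba) / 2.0

# Example: ~0.5 ms one-way fiber delay (roughly 100 km), B running 3 ns ahead of A.
delay, offset = 500e-6, 3e-9
t_tx_a = 0.0
t_rx_b = t_tx_a + delay + offset        # B's clock reads ahead by `offset`
t_tx_b = 1.0
t_rx_a = t_tx_b + delay - offset        # A's clock reads behind B
print(two_way_offset(t_tx_a, t_rx_b, t_tx_b, t_rx_a))   # ≈ 3e-9 s
```

Any asymmetry between the two directions (e.g. chromatic dispersion, which the abstract bounds at 22 ps over 100 km) enters directly as a bias of half the delay asymmetry.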
Gaffield, Michael A; Christie, Jason M
2017-05-03
Inhibition from molecular layer interneurons (MLIs) is thought to play an important role in cerebellar function by sharpening the precision of Purkinje cell spike output. Yet the coding features of MLIs during behavior are poorly understood. To study MLI activity, we used in vivo Ca2+ imaging in head-fixed mice during the performance of a rhythmic motor behavior, licking during water consumption. MLIs were robustly active during lick-related movement across a lobule-specific region of the cerebellum, showing high temporal correspondence within their population. Average MLI Ca2+ activity strongly correlated with movement rate but not with the intentional, or unexpected, adjustment of lick position or with sensory feedback that varied with task condition. Chemogenetic suppression of MLI output reduced lick rate and altered tongue movements, indicating that the activity of these interneurons not only encodes temporal aspects of movement kinematics but also influences motor outcome, pointing to an integral role in online control of rhythmic behavior. SIGNIFICANCE STATEMENT The cerebellum helps fine-tune coordinated motor actions via signaling from projection neurons called Purkinje cells. Molecular layer interneurons (MLIs) provide powerful inhibition onto Purkinje cells, but little is understood about how this inhibitory circuit is engaged during behavior or what type of information is transmitted through these neurons. Our work establishes that MLIs in the lateral cerebellum are broadly activated during movement, with calcium activity corresponding to movement rate. We also show that suppression of MLI output slows and disorganizes the precise movement pattern. Therefore, MLIs are an important circuit element in the cerebellum allowing for accurate motor control. Copyright © 2017 the authors.
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier supported analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
NASA Astrophysics Data System (ADS)
Furukawa, Ryoto; Uemura, Ryu; Fujita, Koji; Sjolte, Jesper; Yoshimura, Kei; Matoba, Sumito; Iizuka, Yoshinori
2017-10-01
A precise age scale based on annual layer counting is essential for investigating past environmental changes from ice core records. However, subannual scale dating is hampered by the irregular intraannual variabilities of oxygen isotope (δ18O) records. Here we propose a dating method based on matching the δ18O variations between ice core records and records simulated by isotope-enabled climate models. We applied this method to a new δ18O record from an ice core obtained from a dome site in southeast Greenland. The close similarity between the δ18O records from the ice core and models enables correlation and the production of a precise age scale, with an accuracy of a few months. A missing δ18O minimum in the 1995/1996 winter is an example of an indistinct δ18O seasonal cycle. Our analysis suggests that the missing δ18O minimum is likely caused by a combination of warm air temperature, weak moisture transport, and cool ocean temperature. Based on the age scale, the average accumulation rate from 1960 to 2014 is reconstructed as 1.02 m yr-1 in water equivalent. The annual accumulation rate shows an increasing trend with a slope of 3.6 mm yr-1, which is mainly caused by the increase in the autumn accumulation rate of 2.6 mm yr-1. This increase is likely linked to the enhanced hydrological cycle caused by the decrease in Arctic sea ice area. Unlike the strong seasonality of precipitation amount in the ERA reanalysis data in the southeast dome region, our reconstructed accumulation rate suggests a weak seasonality.
Fidelity of the ensemble code for visual motion in primate retina.
Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J
2005-07-01
Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner take all) readout provided more precise speed estimates than centroid (vector average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
NASA Astrophysics Data System (ADS)
Chi, Chongwei; Kou, Deqiang; Ye, Jinzuo; Mao, Yamin; Qiu, Jingdan; Wang, Jiandong; Yang, Xin; Tian, Jie
2015-03-01
Introduction: Precision and personalized treatments are expected to be effective methods for early-stage cancer studies. Breast cancer is a major threat to women's health, and sentinel lymph node biopsy (SLNB) is an effective method to realize precision and personalized treatment for axillary lymph node (ALN) negative patients. In this study, we developed a surgical navigation system (SNS) based on optical molecular imaging technology for the precise detection of the sentinel lymph node (SLN) in breast cancer patients. This approach helps surgeons achieve precise positioning during surgery. Methods: The SNS was based mainly on optical molecular imaging technology. A novel optical path was designed in our hardware system, and a feature-matching algorithm was devised to achieve rapid registration and fusion of fluorescence and color images. Ten in vivo studies of SLN detection in rabbits using indocyanine green (ICG) and blue dye were executed for system evaluation, and 8 breast cancer patients underwent the combined method. Results: The detection rate of the combined method was 100% and an average of 2.6 SLNs was found in all patients. Our results showed that SNS-based SLN detection is promising for broader application. Conclusion: The advantage of this system is the real-time tracing of lymph flow in a one-step procedure. The results demonstrated the feasibility of the system for providing accurate location and reliable treatment for surgeons. Our approach delivers valuable information and facilitates more detailed exploration for image-guided surgery research.
A precision medicine approach for psychiatric disease based on repeated symptom scores.
Fojo, Anthony T; Musliner, Katherine L; Zandi, Peter P; Zeger, Scott L
2017-12-01
For psychiatric diseases, rich information exists in the serial measurement of mental health symptom scores. We present a precision medicine framework for using the trajectories of multiple symptoms to make personalized predictions about future symptoms and related psychiatric events. Our approach fits a Bayesian hierarchical model that estimates a population-average trajectory for all symptoms and individual deviations from the average trajectory, then fits a second model that uses individual symptom trajectories to estimate the risk of experiencing an event. The fitted models are used to make clinically relevant predictions for new individuals. We demonstrate this approach on data from a study of antipsychotic therapy for schizophrenia, predicting future scores for positive, negative, and general symptoms, and the risk of treatment failure in 522 schizophrenic patients with observations over 8 weeks. While precision medicine has focused largely on genetic and molecular data, the complementary approach we present illustrates that innovative analytic methods for existing data can extend its reach more broadly. The systematic use of repeated measurements of psychiatric symptoms offers the promise of precision medicine in the field of mental health. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Predicting online ratings based on the opinion spreading process
NASA Astrophysics Data System (ADS)
He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo
2015-10-01
Predicting users' online ratings is a long-standing challenge that has drawn much attention. In this paper, we present a rating prediction method combining the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, the algorithmic accuracy improves over the item-average method by 11.26% (MAE) and 8.84% (RMSE) on Movielens, and by 13.49% (MAE) and 10.52% (RMSE) on Netflix, respectively.
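For context, here is a minimal sketch of the user-based collaborative-filtering baseline the paper compares against: cosine similarity between users' rating vectors, then a similarity-weighted average over users who rated the item. The toy matrix and held-out triple are illustrative only, and the paper's opinion-spreading similarity and λ-tuned diffusion are not reproduced here.

```python
import numpy as np

# Toy user-item rating matrix; 0 marks an unrated pair.
R = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [1., 0., 5., 4.]])

def cosine_sim(R):
    # Pairwise cosine similarity between user rating vectors.
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    return (R @ R.T) / (norms * norms.T)

def predict(R, u, i):
    """Predict user u's rating of item i as a similarity-weighted
    average over the other users who rated item i."""
    sim = cosine_sim(R)[u]
    rated = R[:, i] > 0
    rated[u] = False
    w = sim[rated]
    return float(w @ R[rated, i] / w.sum())

pred = predict(R, 1, 1)   # blend of the users who rated item 1
print(round(pred, 2))

# MAE over a (toy) held-out set of (user, item, true rating) triples:
held_out = [(1, 1, 3.0)]
mae = np.mean([abs(predict(R, u, i) - r) for u, i, r in held_out])
```

The same skeleton accommodates the paper's method by swapping `cosine_sim` for the opinion-transfer similarity.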
Precise charm to strange mass ratio and light quark masses from full lattice QCD.
Davies, C T H; McNeile, C; Wong, K Y; Follana, E; Horgan, R; Hornbostel, K; Lepage, G P; Shigemitsu, J; Trottier, H
2010-04-02
By using a single formalism to handle charm, strange, and light valence quarks in full lattice QCD for the first time, we are able to determine ratios of quark masses to 1%. For m(c)/m(s) we obtain 11.85(16), an order of magnitude more precise than the current PDG average. Combined with the 1% determinations of the charm quark mass now possible, this gives m(s)(2 GeV)=92.4(1.5) MeV. The MILC result for m(s)/m(l)=27.2(3) yields m(l)(2 GeV)=3.40(7) MeV for the average of the u and d quark masses.
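The quoted light-quark mass follows from the strange mass and the MILC ratio by simple quadrature error propagation; the few lines below are a consistency check of the quoted numbers, not a reproduction of the lattice analysis.

```python
from math import sqrt

m_s, dm_s = 92.4, 1.5        # m_s(2 GeV) in MeV, from the abstract
ratio, dratio = 27.2, 0.3    # MILC m_s/m_l

# m_l = m_s / (m_s/m_l); relative errors of a ratio add in quadrature,
# treating the two inputs as independent.
m_l = m_s / ratio
dm_l = m_l * sqrt((dm_s / m_s)**2 + (dratio / ratio)**2)
print(f"m_l(2 GeV) = {m_l:.2f}({dm_l:.2f}) MeV")   # consistent with the quoted 3.40(7)
```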
How precise are reported protein coordinate data?
Konagurthu, Arun S; Allison, Lloyd; Abramson, David; Stuckey, Peter J; Lesk, Arthur M
2014-03-01
Atomic coordinates in the Worldwide Protein Data Bank (wwPDB) are generally reported to greater precision than the experimental structure determinations have actually achieved. By using information theory and data compression to study the compressibility of protein atomic coordinates, it is possible to quantify the amount of randomness in the coordinate data and thereby to determine the realistic precision of the reported coordinates. On average, the value of each C(α) coordinate in a set of selected protein structures solved at a variety of resolutions is good to about 0.1 Å.
High-precision multi-node clock network distribution.
Chen, Xing; Cui, Yifan; Lu, Xing; Ci, Cheng; Zhang, Xuesong; Liu, Bo; Wu, Hong; Tang, Tingsong; Shi, Kebin; Zhang, Zhigang
2017-10-01
A high-precision multi-node clock network for multiple users was built on precise frequency transfer and time synchronization over 120 km of fiber. The network topology adopts a simple star-shaped structure. The clock signal of a hydrogen maser (synchronized with UTC) was recovered from a 120 km telecommunication fiber link and then distributed to 4 sub-stations. The fractional frequency instability of all substations is at the level of 10^-15 at 1 s, and the clock offset instability is at the sub-ps level in root-mean-square average.
Production of monodisperse, polymeric microspheres
NASA Technical Reports Server (NTRS)
Rembaum, Alan (Inventor); Rhim, Won-Kyu (Inventor); Hyson, Michael T. (Inventor); Chang, Manchium (Inventor)
1990-01-01
Very small, individual polymeric microspheres with very precise size and a wide variation in monomer type and properties are produced by deploying a precisely formed liquid monomer droplet, suitably an acrylic compound such as hydroxyethyl methacrylate into a containerless environment. The droplet which assumes a spheroid shape is subjected to polymerizing radiation such as ultraviolet or gamma radiation as it travels through the environment. Polymeric microspheres having precise diameters varying no more than plus or minus 5 percent from an average size are recovered. Many types of fillers including magnetic fillers may be dispersed in the liquid droplet.
Improved quantitation and reproducibility in multi-PET/CT lung studies by combining CT information.
Holman, Beverley F; Cuplov, Vesna; Millner, Lynn; Endozo, Raymond; Maher, Toby M; Groves, Ashley M; Hutton, Brian F; Thielemans, Kris
2018-06-05
Matched attenuation maps are vital for obtaining accurate and reproducible kinetic and static parameter estimates from PET data. With increased interest in PET/CT imaging of diffuse lung diseases for assessing disease progression and treatment effectiveness, understanding the extent of the effect of respiratory motion and establishing methods for correction are becoming more important. In a previous study, we have shown that using the wrong attenuation map leads to large errors due to density mismatches in the lung, especially in dynamic PET scans. Here, we extend this work to the case where the study is sub-divided into several scans, e.g. for patient comfort, each with its own CT (cine-CT and 'snap shot' CT). A method to combine multi-CT information into a combined-CT has then been developed, which averages the CT information from each study section to produce composite CT images with the lung density more representative of that in the PET data. This combined-CT was applied to nine patients with idiopathic pulmonary fibrosis, imaged with dynamic 18F-FDG PET/CT to determine the improvement in the precision of the parameter estimates. Using XCAT simulations, errors in the influx rate constant were found to be as high as 60% in multi-PET/CT studies. Analysis of patient data identified displacements between study sections in the time activity curves, which led to an average standard error in the estimates of the influx rate constant of 53% with conventional methods. This reduced to within 5% after use of combined-CTs for attenuation correction of the study sections. Use of combined-CTs to reconstruct the sections of a multi-PET/CT study, as opposed to using the individually acquired CTs at each study stage, produces more precise parameter estimates and may improve discrimination between diseased and normal lung.
Suárez Rodríguez, David; del Valle Soto, Miguel
2017-01-01
Background The aim of this study is to find the differences between two specific interval exercises. We begin with the hypothesis that the use of microintervals of work and rest allow for greater intensity of play and a reduction in fatigue. Methods Thirteen competition-level male tennis players took part in two interval training exercises comprising nine 2 min series, which consisted of hitting the ball with cross-court forehand and backhand shots, behind the service box. One was a high-intensity interval training (HIIT), made up of periods of continuous work lasting 2 min, and the other was intermittent interval training (IIT), this time with intermittent 2 min intervals, alternating periods of work with rest periods. Average heart rate (HR) and lactate levels were registered in order to observe the physiological intensity of the two exercises, along with the Borg Scale results for perceived exertion and the number of shots and errors in order to determine the intensity achieved and the degree of fatigue throughout the exercise. Results There were no significant differences in the average heart rate, lactate or the Borg Scale. Significant differences were registered, on the other hand, with a greater number of shots in the first two HIIT series (series 1 p>0.009; series 2 p>0.056), but not in the third. The number of errors was significantly lower in all the IIT series (series 1 p<0.035; series 2 p<0.010; series 3 p<0.001). Conclusion Our study suggests that high-intensity intermittent training allows for greater intensity of play in relation to the real time spent on the exercise, reduced fatigue levels and the maintaining of greater precision in specific tennis-related exercises. PMID:29021912
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi
2015-01-01
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate of 3.27% for the previous state-of-the-art methods, our method achieves an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in terms of accuracy. PMID:26308003
Evaluation of a role functioning computer adaptive test (RF-CAT).
Anatchkova, M; Rose, M; Ware, J; Bjorner, J B
2013-06-01
To evaluate the validity and participants' acceptance of an online assessment of role function using computer adaptive test (RF-CAT). The RF-CAT and a set of established quality of life instruments were administered in a cross-sectional study in a panel sample (n = 444) recruited from the general population with over-selection of participants with selected self-report chronic conditions (n = 225). The efficiency, score accuracy, validity, and acceptability of the RF-CAT were evaluated and compared to existing measures. The RF-CAT with a stopping rule of six items with content balancing used 25 of the available bank items and was completed on average in 66 s. RF-CAT and the legacy tools scores were highly correlated (.64-.84) and successfully discriminated across known groups. The RF-CAT produced a more precise assessment over a wider range than the SF-36 Role Physical scale. Patients' evaluations of the RF-CAT system were positive overall, with no differences in ratings observed between the CAT and static assessments. The RF-CAT was feasible, more precise than the static SF-36 RP and equally acceptable to participants as legacy measures. In empirical tests of validity, the better performance of the CAT was not uniformly statistically significant. Further research exploring the relationship between gained precision and discriminant power of the CAT assessment is needed.
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 26mAl
NASA Astrophysics Data System (ADS)
Finlay, P.; Ettenauer, S.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Andreoiu, C.; Austin, R. A. E.; Bandyopadhyay, D.; Cross, D. S.; Demand, G.; Djongolov, M.; Garrett, P. E.; Green, K. L.; Grinyer, G. F.; Hackman, G.; Leach, K. G.; Pearson, C. J.; Phillips, A. A.; Sumithrarachchi, C. S.; Triambak, S.; Williams, S. J.
2011-01-01
A high-precision half-life measurement for the superallowed β+ emitter 26mAl was performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T1/2 = 6346.54 ± 0.46(stat) ± 0.60(syst) ms, consistent with, but 2.5 times more precise than, the previous world average. The 26mAl half-life and ft value, 3037.53(61) s, are now the most precisely determined for any superallowed β decay. Combined with recent theoretical corrections for isospin-symmetry-breaking and radiative effects, the corrected Ft value for 26mAl, 3073.0(12) s, sets a new benchmark for the high-precision superallowed Fermi β-decay studies used to test the conserved vector current hypothesis and determine the Vud element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix.
Differential absorption radar techniques: water vapor retrievals
NASA Astrophysics Data System (ADS)
Millán, Luis; Lebsock, Matthew; Livesey, Nathaniel; Tanelli, Simone
2016-06-01
Two radar pulses sent at different frequencies near the 183 GHz water vapor line can be used to determine total column water vapor and water vapor profiles (within clouds or precipitation) exploiting the differential absorption on and off the line. We assess these water vapor measurements by applying a radar instrument simulator to CloudSat pixels and then running end-to-end retrieval simulations. These end-to-end retrievals enable us to fully characterize not only the expected precision but also their potential biases, allowing us to select radar tones that maximize the water vapor signal while minimizing potential errors due to spectral variations in the target extinction properties. A hypothetical CloudSat-like instrument with 500 m by ˜ 1 km vertical and horizontal resolution and a minimum detectable signal and radar precision of -30 and 0.16 dBZ, respectively, can estimate total column water vapor with an expected precision of around 0.03 cm, with potential biases smaller than 0.26 cm most of the time, even under rainy conditions. The expected precision for water vapor profiles was found to be around 89 % on average, with potential biases smaller than 77 % most of the time when the profile is being retrieved close to the surface, but smaller than 38 % above 3 km. By using either horizontal or vertical averaging, the precision will improve vastly, with the measurements still retaining a considerably high vertical and/or horizontal resolution.
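The retrieval principle can be illustrated in a few lines: for two tones with different absorption coefficients, the hardware and scattering terms cancel in the ratio of on/off power ratios between two range gates, leaving only the differential water-vapor absorption. The absorption coefficients, gate spacing, and humidity below are hypothetical round numbers, not the instrument's values.

```python
from math import exp, log

kappa_on, kappa_off = 0.06, 0.01   # hypothetical effective mass absorption (m^2/kg)
rho_true = 0.008                   # water vapor density between the gates (kg/m^3)
dr = 500.0                         # range-gate spacing (m)

def two_way_attenuation(kappa):
    # Two-way attenuation of one tone across the layer between the gates.
    return exp(-2 * kappa * rho_true * dr)

# Ratio of the on/off power ratios at the far vs. near gate: every term
# except the differential water-vapor absorption cancels.
ratio = two_way_attenuation(kappa_on) / two_way_attenuation(kappa_off)

# Invert the differential absorption for the layer-mean vapor density.
rho_est = log(1 / ratio) / (2 * (kappa_on - kappa_off) * dr)
print(rho_est)   # recovers rho_true
```

In practice the "spectral variations in the target extinction properties" the abstract mentions break the perfect cancellation, which is why tone selection matters.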
Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma
2013-01-01
The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever larval densities exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plans with precision levels of 0.10 and 0.25 (SE/mean). For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, the average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
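Green's fixed-precision plan rests on Taylor's power law (variance = a * mean^b): setting SE/mean = D and solving for n gives the required sample size. A sketch with hypothetical Taylor coefficients (a and b below are not the values fitted for C. lesbia):

```python
from math import ceil

# Fixed-precision sample size implied by Taylor's power law,
# s^2 = a * mean^b, the relationship underlying Green's plan.
def sample_size(mean, a, b, D):
    """Samples needed so that SE/mean <= D when s^2 = a * mean^b,
    from D = sqrt(s^2 / n) / mean  =>  n = a * mean^(b-2) / D^2."""
    return ceil(a * mean**(b - 2) / D**2)

a, b = 2.0, 1.4   # hypothetical Taylor coefficients
for D in (0.25, 0.10):
    print(D, sample_size(mean=1.0, a=a, b=b, D=D))
```

The D^-2 scaling explains the jump reported above: tightening D from 0.25 to 0.10 multiplies the required sample size by (0.25/0.10)^2 = 6.25, close to the observed increase from 27 to 161 units.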
NASA Astrophysics Data System (ADS)
Bai, Mingkun; Chevalier, Marie-Luce; Pan, Jiawei; Replumaz, Anne; Leloup, Philippe Hervé; Métois, Marianne; Li, Haibing
2018-03-01
The left-lateral strike-slip Xianshuihe fault system located in the eastern Tibetan Plateau is considered one of the most tectonically active intra-continental fault systems in China, along which more than 20 M > 6.5 and more than 10 M > 7 earthquakes have occurred since 1700. Therefore, studying its activity, especially its slip rate at different time scales, is essential to evaluate the regional earthquake hazard. Here, we focus on the central segment of the Xianshuihe fault system, where the Xianshuihe fault near Kangding city splays into three branches: the Selaha, Yalahe and Zheduotang faults. In this paper we use precise dating together with precise field measurements of offsets to re-estimate slip rates that had previously been suggested without precise age constraints. We studied three sites where the active Selaha fault cuts and left-laterally offsets moraine crests and levees. We measured horizontal offsets of 96 ± 20 m at the Tagong levees (TG), 240 ± 15 m at the Selaha moraine (SLH) and 80 ± 5 m at the Yangjiagou moraine (YJG). Using 10Be cosmogenic dating, we determined abandonment ages at Tagong, Selaha and Yangjiagou of 12.5 (+2.5/-2.2) ka, 22 ± 2 ka, and 18 ± 2 ka, respectively. By matching the emplacement age of the moraines or levees with their offsets, we obtain late Quaternary horizontal average slip rates of 7.6 (+2.3/-1.9) mm/yr at TG and 10.7 (+1.3/-1.1) mm/yr at SLH, i.e., 5.7-12 mm/yr, or between 9.6 and 9.9 mm/yr assuming that the slip rate is constant between the nearby TG and SLH sites. At YJG, we obtain a lower slip rate of 4.4 ± 0.5 mm/yr, most likely because the parallel Zheduotang fault shares the slip rate at this longitude, suggesting a ∼5 mm/yr slip rate along the Zheduotang fault. The ∼10 mm/yr late Quaternary rate along the Xianshuihe fault is higher than that along the Ganzi fault to the NW (6-8 mm/yr).
This appears to be linked to the existence of the Longriba fault system that separates the Longmenshan and Bayan Har blocks north of the Xianshuihe fault system. A higher slip rate along the short (∼60 km) and discontinuous Selaha fault compared to that along the long (∼300 km) and linear Ganzi fault suggests a high hazard for a M > 6 earthquake in the Kangding area in the near future, which could devastate that densely populated city.
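The slip rates above are essentially offset divided by age. A sketch with symmetric quadrature error propagation (the paper quotes asymmetric age bounds, so its uncertainties differ slightly from this simplification):

```python
from math import sqrt

def slip_rate(offset_m, d_offset_m, age_ka, d_age_ka):
    """Slip rate and uncertainty from an offset and an abandonment age,
    with relative errors combined in quadrature (symmetric approximation)."""
    rate = offset_m / age_ka   # m/ka is numerically equal to mm/yr
    d_rate = rate * sqrt((d_offset_m / offset_m)**2 + (d_age_ka / age_ka)**2)
    return rate, d_rate

# Tagong levees: 96 ± 20 m offset, 12.5 ka age (upper age bound used here).
rate, d_rate = slip_rate(96, 20, 12.5, 2.5)
print(f"{rate:.1f} ± {d_rate:.1f} mm/yr")   # close to the quoted 7.6 (+2.3/-1.9) mm/yr
```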
Sohn, J H; Smith, R; Yoong, E; Hudson, N; Kim, T I
2004-01-01
A novel laboratory wind tunnel, with the capability to control factors such as air flow-rate, was developed to measure the kinetics of odour emissions from liquid effluent. The tunnel allows the emission of odours and other volatiles under an atmospheric transport system similar to ambient conditions. Sensors for wind speed, temperature and humidity were installed and calibrated. To calibrate the wind tunnel, trials were performed to determine the gas recovery efficiency under different air flow-rates (ranging from 0.001 to 0.028 m3/s) and gas supply rates (ranging from 2.5 to 10.0 L/min) using a standard CO gas mixture. The results showed gas recovery efficiencies ranging from 61.7 to 106.8%, with an average across trials of 81.14%. From statistical analysis, it was observed that the highest reliable gas recovery efficiency of the tunnel was 88.9%. The air flow-rate and gas supply rate corresponding to the highest gas recovery efficiency were 0.028 m3/s and 10.0 L/min, respectively. This study suggested that the wind tunnel can provide precise estimates of odour emission rate. However, the wind tunnel needs to be calibrated to compensate for errors caused by different air flow-rates.
Activity recognition of assembly tasks using body-worn microphones and accelerometers.
Ward, Jamie A; Lukowicz, Paul; Tröster, Gerhard; Starner, Thad E
2006-10-01
In order to provide relevant information to mobile users, such as workers engaging in the manual tasks of maintenance and assembly, a wearable computer requires information about the user's specific activities. This work focuses on the recognition of activities that are characterized by a hand motion and an accompanying sound. Suitable activities can be found in assembly and maintenance work. Here, we provide an initial exploration into the problem domain of continuous activity recognition using on-body sensing. We use a mock "wood workshop" assembly task to ground our investigation. We describe a method for the continuous recognition of activities (sawing, hammering, filing, drilling, grinding, sanding, opening a drawer, tightening a vise, and turning a screwdriver) using microphones and three-axis accelerometers mounted at two positions on the user's arms. Potentially "interesting" activities are segmented from continuous streams of data using an analysis of the sound intensity detected at the two different locations. Activity classification is then performed on these detected segments using linear discriminant analysis (LDA) on the sound channel and hidden Markov models (HMMs) on the acceleration data. Four different methods of classifier fusion are compared for improving these classifications. Using user-dependent training, we obtain continuous average recall and precision rates (for positive activities) of 78 percent and 74 percent, respectively. Using user-independent training (leave-one-out across five users), we obtain recall rates of 66 percent and precision rates of 63 percent. In isolation, these activities were recognized with accuracies of 98 percent, 87 percent, and 95 percent for the user-dependent, user-independent, and user-adapted cases, respectively.
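The precision and recall figures quoted above follow the standard definitions over detected activity segments. A minimal sketch, with hypothetical segment counts:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN), computed over the
    positive (non-null) activity segments."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts: 74 correct detections, 26 false alarms, 22 misses
p, r = precision_recall(tp=74, fp=26, fn=22)
```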
USDA-ARS?s Scientific Manuscript database
An inter-laboratory trial was conducted to validate the operation of the CottonscanTM technology as a useful technique for determining the average fiber linear density of cotton. A substantial inter-laboratory trial was completed and confirmed that the technology is acceptable. For fibers fin...
Mitcham, Trevor; Taghavi, Houra; Long, James; Wood, Cayla; Fuentes, David; Stefan, Wolfgang; Ward, John; Bouchard, Richard
2017-09-01
Photoacoustic (PA) imaging is capable of probing blood oxygen saturation (sO2), which has been shown to correlate with tissue hypoxia, a promising cancer biomarker. However, wavelength-dependent local fluence changes can compromise sO2 estimation accuracy in tissue. This work investigates using PA imaging with interstitial irradiation and local fluence correction to assess precision and accuracy of sO2 estimation of blood samples through ex vivo bovine prostate tissue ranging from 14% to 100% sO2. Study results for bovine blood samples at distances up to 20 mm from the irradiation source show that local fluence correction improved average sO2 estimation error from 16.8% to 3.2% and maintained an average precision of 2.3% when compared to matched CO-oximeter sO2 measurements. This work demonstrates the potential for future clinical translation of using fluence-corrected and interstitially driven PA imaging to accurately and precisely assess sO2 at depth in tissue with high resolution.
Test system stability and natural variability of a Lemna gibba L. bioassay.
Scherr, Claudia; Simon, Meinhard; Spranger, Jörg; Baumgartner, Stephan
2008-09-04
In ecotoxicological and environmental studies Lemna spp. are used as test organisms due to their small size, rapid predominantly vegetative reproduction, easy handling and high sensitivity to various chemicals. However, there is not much information available concerning spatial and temporal stability of experimental set-ups used for Lemna bioassays, though this is essential for interpretation and reliability of results. We therefore investigated stability and natural variability of a Lemna gibba bioassay assessing area-related and frond number-related growth rates under controlled laboratory conditions over about one year. Lemna gibba L. was grown in beakers with Steinberg medium for one week. Area-related and frond number-related growth rates (r(area) and r(num)) were determined with a non-destructive image processing system. To assess inter-experimental stability, 35 independent experiments were performed with 10 beakers each in the course of one year. We observed changes in growth rates by a factor of two over time. These did not correlate well with temperature or relative humidity in the growth chamber. In order to assess intra-experimental stability, we analysed six systematic negative control experiments (nontoxicant tests) with 96 replicate beakers each. Evaluation showed that the chosen experimental set-up was stable and did not produce false positive results. The coefficient of variation was lower for r(area) (2.99%) than for r(num) (4.27%). It is hypothesised that the variations in growth rates over time under controlled conditions are partly due to endogenic periodicities in Lemna gibba. The relevance of these variations for toxicity investigations should be investigated more closely. Area-related growth rate appears to be a more precise non-destructive calculation parameter than number-related growth rate.
Furthermore, we propose two new validity criteria for Lemna gibba bioassays: variability of average specific and section-by-section segmented growth rate, complementary to average specific growth rate as the only validity criterion existing in guidelines for duckweed bioassays.
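The two quantities compared above are easy to make concrete. A minimal sketch of a specific growth rate and its coefficient of variation, using hypothetical frond counts and replicate rates (the paper's exact calculation protocol is not given in the abstract):

```python
import math
import statistics

def specific_growth_rate(x0, x1, days):
    """Relative growth rate per day from two measurements of frond
    area or frond number (r(area) / r(num) in the paper)."""
    return math.log(x1 / x0) / days

def cv_percent(rates):
    """Coefficient of variation: sample standard deviation over the mean, in %."""
    return 100.0 * statistics.stdev(rates) / statistics.mean(rates)

r = specific_growth_rate(100, 200, 7)    # frond number doubles in a week
cv = cv_percent([0.30, 0.31, 0.29])      # hypothetical replicate rates
```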
Test System Stability and Natural Variability of a Lemna Gibba L. Bioassay
Scherr, Claudia; Simon, Meinhard; Spranger, Jörg; Baumgartner, Stephan
2008-01-01
Background In ecotoxicological and environmental studies Lemna spp. are used as test organisms due to their small size, rapid predominantly vegetative reproduction, easy handling and high sensitivity to various chemicals. However, there is not much information available concerning spatial and temporal stability of experimental set-ups used for Lemna bioassays, though this is essential for interpretation and reliability of results. We therefore investigated stability and natural variability of a Lemna gibba bioassay assessing area-related and frond number-related growth rates under controlled laboratory conditions over about one year. Methodology/Principal Findings Lemna gibba L. was grown in beakers with Steinberg medium for one week. Area-related and frond number-related growth rates (r(area) and r(num)) were determined with a non-destructive image processing system. To assess inter-experimental stability, 35 independent experiments were performed with 10 beakers each in the course of one year. We observed changes in growth rates by a factor of two over time. These did not correlate well with temperature or relative humidity in the growth chamber. In order to assess intra-experimental stability, we analysed six systematic negative control experiments (nontoxicant tests) with 96 replicate beakers each. Evaluation showed that the chosen experimental set-up was stable and did not produce false positive results. The coefficient of variation was lower for r(area) (2.99%) than for r(num) (4.27%). Conclusions/Significance It is hypothesised that the variations in growth rates over time under controlled conditions are partly due to endogenic periodicities in Lemna gibba. The relevance of these variations for toxicity investigations should be investigated more closely. Area-related growth rate appears to be a more precise non-destructive calculation parameter than number-related growth rate.
Furthermore, we propose two new validity criteria for Lemna gibba bioassays: variability of average specific and section-by-section segmented growth rate, complementary to average specific growth rate as the only validity criterion existing in guidelines for duckweed bioassays. PMID:18769541
Analysis of the characteristics of competitive badminton
Cabello, M; Gonzalez-Badillo, J
2003-01-01
Objective: To describe the characteristics of badminton in order to determine the energy requirements, temporal structure, and movements in the game that indicate performance level. To use the findings to plan training with greater precision. Methods: Eleven badminton players (mean (SD) age 21.8 (3.26) years) with international experience from four different countries (France, Italy, Spain, and Portugal) were studied. Two of the Spanish players were monitored in several matches, giving a total of 14 samples, all during the 1999 Spanish International Tournament. Blood lactate concentration was measured with a reflective photometer. Maximum and average heart rates were recorded with a heart rate monitor. Temporal structure and actions during the matches were determined from video recordings. All variables were measured during and after the game and later analysed using a descriptive study. Results: The results confirmed the high demands of the sport, with a maximum heart rate of 190.5 beats/min and an average of 173.5 beats/min during matches lasting over 28 minutes, with performance intervals of 6.4 seconds and rest times of 12.9 seconds between exchanges. Conclusions: The results suggest that badminton is characterised by repeated efforts of an alactic nature and great intensity performed continuously throughout the match. An awareness of these characteristics, together with data on the correlations between certain actions such as unforced errors and winning shots and the final result of the match, will aid in more appropriate planning and monitoring of specific training. PMID:12547746
NASA Astrophysics Data System (ADS)
Karczewicz, Marta; Chen, Peisong; Joshi, Rajan; Wang, Xianglin; Chien, Wei-Jung; Panchal, Rahul; Coban, Muhammed; Chong, In Suk; Reznik, Yuriy A.
2011-01-01
This paper describes the video coding technology proposal submitted by Qualcomm Inc. in response to a joint call for proposals (CfP) issued by ITU-T SG16 Q.6 (VCEG) and ISO/IEC JTC1/SC29/WG11 (MPEG) in January 2010. The proposed video codec follows a hybrid coding approach based on temporal prediction, followed by transform, quantization, and entropy coding of the residual. Some of its key features are extended block sizes (up to 64x64), recursive integer transforms, single-pass switched interpolation filters with offsets (single-pass SIFO), mode-dependent directional transform (MDDT) for intra-coding, luma and chroma high-precision filtering, geometry motion partitioning, and adaptive motion vector resolution. It also incorporates internal bit-depth increase (IBDI) and modified quadtree-based adaptive loop filtering (QALF). Simulation results are presented for a variety of bit rates, resolutions and coding configurations to demonstrate the high compression efficiency achieved by the proposed video codec at a moderate level of encoding and decoding complexity. For the random access hierarchical B configuration (HierB), the proposed video codec achieves an average BD-rate reduction of 30.88% compared to the H.264/AVC alpha anchor. For the low delay hierarchical P (HierP) configuration, the proposed video codec achieves average BD-rate reductions of 32.96% and 48.57%, compared to the H.264/AVC beta and gamma anchors, respectively.
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
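The core idea, integrating a nonlinear flux function over a binned wetness pdf rather than evaluating it once at the grid-mean wetness, can be sketched directly. The saturating stress function below is purely illustrative, not the paper's:

```python
def area_average_flux(bin_wetness, bin_prob, flux_fn):
    """Numerically integrate a nonlinear flux function over a binned
    soil-wetness pdf instead of evaluating it at the grid-mean wetness."""
    return sum(p * flux_fn(w) for w, p in zip(bin_wetness, bin_prob))

def flux(w):
    """Illustrative stress function: rises linearly, saturates at w = 0.5."""
    return min(1.0, 2.0 * w)

bins = [0.05 + 0.1 * i for i in range(10)]   # 10 bin midpoints
probs = [0.1] * 10                           # uniform wetness pdf
integrated = area_average_flux(bins, probs, flux)       # 0.75
naive = flux(sum(b * p for b, p in zip(bins, probs)))   # 1.0 at mean w = 0.5
```

The gap between 0.75 and 1.0 is exactly the bias the paper attributes to inserting a single grid-averaged wetness into a nonlinear function.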
Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.
1988-01-01
External quality assurance monitoring of the National Atmospheric Deposition Program (NADP) and National Trends Network (NTN) was performed by the U.S. Geological Survey during 1985. The monitoring consisted of three primary programs: (1) an intersite comparison program designed to assess the precision and accuracy of onsite pH and specific conductance measurements made by NADP and NTN site operators; (2) a blind audit sample program designed to assess the effect of routine field handling on the precision and bias of NADP and NTN wet deposition data; and (3) an interlaboratory comparison program designed to compare analytical data from the laboratory processing NADP and NTN samples with data produced by other laboratories routinely analyzing wet deposition samples and to provide estimates of individual laboratory precision. An average of 94% of the site operators participated in the four voluntary intersite comparisons during 1985. A larger percentage of participating site operators met the accuracy goal for specific conductance measurements (average, 87%) than for pH measurements (average, 67%). Overall precision was dependent on the actual specific conductance of the test solution and independent of the pH of the test solution. Data for the blind audit sample program indicated slight positive biases resulting from routine field handling for all analytes except specific conductance. These biases were not large enough to be significant for most data users. Data for the blind audit sample program also indicated that decreases in hydrogen ion concentration were accompanied by decreases in specific conductance. Precision estimates derived from the blind audit sample program indicate that the major source of uncertainty in wet deposition data is the routine field handling that each wet deposition sample receives. 
Results of the interlaboratory comparison program were similar to results of previous years' evaluations, indicating that the participating laboratories produced comparable data when they analyzed identical wet deposition samples, and that the laboratory processing NADP and NTN samples achieved the best analyte precision of the participating laboratories. (Author's abstract)
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
Current Status of the Beam Position Monitoring System at TLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, C. H.; Hu, K. H.; Chen, Jenny
2006-11-20
The beam position monitoring system is an important part of a synchrotron light source that supports its routine operation and studies of beam physics. The Taiwan light source is equipped with 59 BPMs. Highly precise closed orbits are measured by multiplexing BPMs. Data are acquired using multi-channel 16-bit ADC modules. Orbit data are sampled every millisecond. Fast orbit data are shared in a reflective memory network to support fast orbit feedback. Averaged data were updated to the control database at a rate of 10 Hz. A few new generation digital BPMs were tested to evaluate their performance and functionality. This report summarizes the system structure, the software environment and the preliminary beam test of the BPM system.
A method of calibrating wind velocity sensors with a modified gas flow calibrator
NASA Technical Reports Server (NTRS)
Stump, H. P.
1978-01-01
A procedure was described for calibrating air velocity sensors in the exhaust flow of a gas flow calibrator. The average velocity in the test section located at the calibrator exhaust was verified from the mass flow rate accurately measured by the calibrator's precision sonic nozzles. Air at elevated pressures flowed through a series of screens, diameter changes, and flow straighteners, resulting in a smooth flow through the open test section. The modified system generated air velocities of 2 to 90 meters per second with an uncertainty of about two percent for speeds below 15 meters per second and four percent for the higher speeds. Wind tunnel data correlated well with that taken in the flow calibrator.
Current Status of the Beam Position Monitoring System at TLS
NASA Astrophysics Data System (ADS)
Kuo, C. H.; Hu, K. H.; Chen, Jenny; Lee, Demi; Wang, C. J.; Hsu, S. Y.; Hsu, K. T.
2006-11-01
The beam position monitoring system is an important part of a synchrotron light source that supports its routine operation and studies of beam physics. The Taiwan light source is equipped with 59 BPMs. Highly precise closed orbits are measured by multiplexing BPMs. Data are acquired using multi-channel 16-bit ADC modules. Orbit data are sampled every millisecond. Fast orbit data are shared in a reflective memory network to support fast orbit feedback. Averaged data were updated to control database at a rate of 10 Hz. A few new generation digital BPMs were tested to evaluate their performance and functionality. This report summarizes the system structure, the software environment and the preliminary beam test of the BPM system.
Climate-change-driven accelerated sea-level rise detected in the altimeter era.
Nerem, R S; Beckley, B D; Fasullo, J T; Hamlington, B D; Masters, D; Mitchum, G T
2018-02-27
Using a 25-y time series of precision satellite altimeter data from TOPEX/Poseidon, Jason-1, Jason-2, and Jason-3, we estimate the climate-change-driven acceleration of global mean sea level over the last 25 y to be 0.084 ± 0.025 mm/y². Coupled with the average climate-change-driven rate of sea level rise over these same 25 y of 2.9 mm/y, simple extrapolation of the quadratic implies global mean sea level could rise 65 ± 12 cm by 2100 compared with 2005, roughly in agreement with the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5) model projections. Copyright © 2018 the Author(s). Published by PNAS.
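The quadratic extrapolation the abstract mentions is a one-liner: h(t) = r*t + a*t^2/2 with the quoted central estimates recovers the ~65 cm figure for 2005 to 2100.

```python
def projected_rise_cm(rate_mm_yr, accel_mm_yr2, years):
    """Quadratic extrapolation of global mean sea level,
    h(t) = r*t + a*t^2/2, converted from mm to cm."""
    return (rate_mm_yr * years + 0.5 * accel_mm_yr2 * years ** 2) / 10.0

# Central estimates from the abstract: 2.9 mm/y rate, 0.084 mm/y^2 acceleration
rise = projected_rise_cm(2.9, 0.084, 2100 - 2005)  # ~65 cm
```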
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
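The abstract does not state which pressure-rate law was used to normalize burn rates to a common pressure; a common choice for solid propellants is Vieille's (Saint-Robert's) law, r = a*P^n, sketched here with an illustrative exponent:

```python
def normalize_burn_rate(rate, pressure, ref_pressure, n=0.35):
    """Scale a burn rate measured at `pressure` to a common reference
    pressure via Vieille's law, r = a * P**n. The exponent n is
    propellant-specific; 0.35 here is only an illustrative value."""
    return rate * (ref_pressure / pressure) ** n
```

With a fitted n, rates from motors that reached slightly different chamber pressures become directly comparable, which is what allows the within-mix dispersion analysis.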
NASA Astrophysics Data System (ADS)
Gu, Defeng; Liu, Ye; Yi, Bin; Cao, Jianfeng; Li, Xie
2017-12-01
An experimental satellite mission termed atmospheric density detection and precise orbit determination (APOD) was developed by China and launched on 20 September 2015. The micro-electro-mechanical system (MEMS) GPS receiver provides the basis for precise orbit determination (POD) within the range of a few decimetres. The in-flight performance of the MEMS GPS receiver was assessed. The average number of tracked GPS satellites is 10.7. However, only 5.1 GPS satellites are available for dual-frequency navigation because of the loss of many L2 observations at low elevations. The variations in the multipath error for C1 and P2 were estimated, and the maximum multipath error can reach up to 0.8 m. The average code noises are 0.28 m (C1) and 0.69 m (P2). Using the MEMS GPS receiver, the orbit of the APOD nanosatellite (APOD-A) was precisely determined. Two types of orbit solutions are proposed: a dual-frequency solution and a single-frequency solution. The antenna phase center variations (PCVs) and code residual variations (CRVs) were estimated, and the maximum value of the PCVs is 4.0 cm. After correcting the antenna PCVs and CRVs, the final orbit precisions for the dual-frequency and single-frequency solutions were 7.71 cm and 12.91 cm, respectively, as validated against satellite laser ranging (SLR) data, significant improvements of 3.35 cm and 25.25 cm. The average RMS of the 6-h overlap differences in the dual-frequency solution between two consecutive days in three dimensions (3D) is 4.59 cm. The MEMS GPS receiver is the Chinese indigenous onboard receiver, which was successfully used in the POD of a nanosatellite. This study has important reference value for improving the MEMS GPS receiver and its application in other low Earth orbit (LEO) nanosatellites.
Homogeneous SPC/E water nucleation in large molecular dynamics simulations.
Angélil, Raymond; Diemand, Jürg; Tanaka, Kyoko K; Tanaka, Hidekazu
2015-08-14
We perform direct large molecular dynamics simulations of homogeneous SPC/E water nucleation, using up to ∼4 × 10^6 molecules. Our large system sizes allow us to measure extremely low and accurate nucleation rates, down to ∼10^19 cm^-3 s^-1, helping close the gap with experimentally measured rates of ∼10^17 cm^-3 s^-1. We are also able to precisely measure size distributions, sticking efficiencies, cluster temperatures, and cluster internal densities. We introduce a new functional form to implement the Yasuoka-Matsumoto nucleation rate measurement technique (threshold method). Comparison to nucleation models shows that classical nucleation theory over-estimates nucleation rates by a few orders of magnitude. The semi-phenomenological nucleation model does better, under-predicting rates by at worst a factor of 24. Unlike what has been observed in Lennard-Jones simulations, post-critical clusters have temperatures consistent with the run average temperature. Also, we observe that post-critical clusters have densities very slightly higher, ∼5%, than bulk liquid. We re-calibrate a Hale-type J vs. S scaling relation using both experimental and simulation data, finding remarkable consistency over 30 orders of magnitude in nucleation rate and 180 K in temperature.
NASA Astrophysics Data System (ADS)
Li, Qing; Lin, Haibo; Xiu, Yu-Feng; Wang, Ruixue; Yi, Chuijie
The test platform for wheat precision seeding based on image processing techniques is designed to support development of a wheat precision seed metering device with high efficiency and precision. Using image processing techniques, the platform gathers images of seeds (wheat) falling from the seed metering device onto the conveyor belt. These data are then processed and analyzed to calculate the qualified rate, reseeding rate, and missed-seeding rate, etc. This paper introduces the overall structure and design parameters of the platform and the hardware and software of the image acquisition system, as well as the method of seed identification and seed-spacing measurement based on image thresholding and location of each seed's center. Analysis of the experimental results shows that the measurement error is less than ±1 mm.
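The seed-identification step, binarize the frame at a threshold and take each bright blob's centroid as a seed center, can be sketched without any imaging library. This is a generic connected-components sketch, not the paper's implementation:

```python
def seed_centers(image, threshold):
    """Binarize a grayscale image and locate each bright blob's centroid
    (4-connected flood fill). Comparing blob count and spacing against the
    expected seeding pattern gives qualified / reseeding / missed rates."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for y in range(h):
        for x in range(w):
            if image[y][x] <= threshold or seen[y][x]:
                continue
            stack, pixels = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                               (cy, cx + 1), (cy, cx - 1)):
                    if 0 <= ny < h and 0 <= nx < w and \
                            image[ny][nx] > threshold and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            centers.append((sum(p[0] for p in pixels) / len(pixels),
                            sum(p[1] for p in pixels) / len(pixels)))
    return centers

# Tiny synthetic frame with two bright "seeds"
frame = [[0, 200, 0, 0],
         [0, 200, 0, 0],
         [0, 0, 0, 180]]
centers = seed_centers(frame, 100)
```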
Bias in IPCC Methodologies for Assessment of N2O emissions from Crop Residue
USDA-ARS?s Scientific Manuscript database
Nitrogen use efficiencies are difficult to measure and reported recoveries from fertilizer N by crops average =50%. Worldwide studies (20+) conducted with precise 15N techniques that trace the fate of N found that an average of 66% of the fertilizer was recovered in crops and soils. In other words,...
Cassman, K G
1999-05-25
Wheat (Triticum aestivum L.), rice (Oryza sativa L.), and maize (Zea mays L.) provide about two-thirds of all energy in human diets, and four major cropping systems in which these cereals are grown represent the foundation of human food supply. Yield per unit time and land has increased markedly during the past 30 years in these systems, a result of intensified crop management involving improved germplasm, greater inputs of fertilizer, production of two or more crops per year on the same piece of land, and irrigation. Meeting future food demand while minimizing expansion of cultivated area primarily will depend on continued intensification of these same four systems. The manner in which further intensification is achieved, however, will differ markedly from the past because the exploitable gap between average farm yields and genetic yield potential is closing. At present, the rate of increase in yield potential is much less than the expected increase in demand. Hence, average farm yields must reach 70-80% of the yield potential ceiling within 30 years in each of these major cereal systems. Achieving consistent production at these high levels without causing environmental damage requires improvements in soil quality and precise management of all production factors in time and space. The scope of the scientific challenge related to these objectives is discussed. It is concluded that major scientific breakthroughs must occur in basic plant physiology, ecophysiology, agroecology, and soil science to achieve the ecological intensification that is needed to meet the expected increase in food demand.
Present-day crustal deformation and strain transfer in northeastern Tibetan Plateau
NASA Astrophysics Data System (ADS)
Li, Yuhang; Liu, Mian; Wang, Qingliang; Cui, Duxin
2018-04-01
The three-dimensional present-day crustal deformation and strain partitioning in northeastern Tibetan Plateau are analyzed using available GPS and precise leveling data. We used the multi-scale wavelet method to analyze strain rates, and the elastic block model to estimate slip rates on the major faults and internal strain within each block. Our results show that shear strain is strongly localized along major strike-slip faults, as expected in the tectonic extrusion model. However, extrusion ends and transfers to crustal contraction near the eastern margin of the Tibetan Plateau. The strain transfer is abrupt along the Haiyuan Fault and diffusive along the East Kunlun Fault. Crustal contraction is spatially correlated with active uplifting. The present-day strain is concentrated along major fault zones; however, within many terranes bounded by these faults, intra-block strain is detectable. Terranes having high intra-block strain rates also show strong seismicity. On average the Ordos and Sichuan blocks show no intra-block strain, but localized strain on the southwestern corner of the Ordos block indicates tectonic encroachment.
UWB multi-burst transmit driver for averaging receivers
Dallum, Gregory E
2012-11-20
A multi-burst transmitter for ultra-wideband (UWB) communication systems generates a sequence of precisely spaced RF bursts from a single trigger event. There are two oscillators in the transmitter circuit, a gated burst rate oscillator and a gated RF burst or RF power output oscillator. The burst rate oscillator produces a relatively low frequency, i.e., MHz, square wave output for a selected transmit cycle, and drives the RF burst oscillator, which produces RF bursts of much higher frequency, i.e., GHz, during the transmit cycle. The frequency of the burst rate oscillator sets the spacing of the RF burst packets. The first oscillator output passes through a bias driver to the second oscillator. The bias driver conditions, e.g., level shifts, the signal from the first oscillator for input into the second oscillator, and also controls the length of each RF burst. A trigger pulse actuates a timing circuit, formed of a flip-flop and associated reset time delay circuit, that controls the operation of the first oscillator, i.e., how long it oscillates (which defines the transmit cycle).
An environmental chamber for investigating the evaporation of volatile chemicals.
Dillon, H K; Rumph, P F
1998-03-01
An inexpensive test chamber has been constructed that provides an environment appropriate for testing the effects of temperature and chemical interactions on gaseous emissions from test solutions. Temperature, relative humidity, and ventilation rate can be controlled and a well-mixed atmosphere can be maintained. The system is relatively simple and relies on heated tap water or ice to adjust the temperature. Temperatures ranging from 9 to 21 degrees C have been maintained. At an average temperature of 15.1 degrees C, temperatures at any location within the chamber vary by no more than 0.5 degree C, and the temperature of the test solution within the chamber varies by no more than 0.1 degree C. The temperatures within the chamber are stable enough to generate precise steady-state concentrations. The wind velocities within the chamber are reproducible from run to run. Consequently, the effect of velocity on the rate of evaporation of a test chemical is expected to be uniform from run to run. Steady-state concentrations can be attained in less than 1 hour at an air exchange rate of about 5 per hour.
Estimation of open water evaporation using land-based meteorological data
NASA Astrophysics Data System (ADS)
Li, Fawen; Zhao, Yong
2017-10-01
Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.
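The Dalton model the paper starts from has the generic form E = f(u) * (e_s - e_a): a wind function times the vapour-pressure deficit. The coefficients below and the exact form of the paper's relative-humidity correction are not given in the abstract, so this sketches only the classic model plus the relative-error metric used for the qualified/good/excellent ratings:

```python
def dalton_evaporation(a, b, wind_m_s, e_sat_hpa, e_air_hpa):
    """Classic Dalton-type model: E = (a + b*u) * (e_s - e_a).
    Coefficients a and b are site-calibrated (values here are hypothetical)."""
    return (a + b * wind_m_s) * (e_sat_hpa - e_air_hpa)

def relative_error_pct(simulated, observed):
    """Average relative error used to rate the simulation quality."""
    return 100.0 * abs(simulated - observed) / observed

e = dalton_evaporation(0.1, 0.05, 2.0, 20.0, 10.0)  # hypothetical inputs
```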
Precision orbit raising trajectories. [Solar electric propulsion orbital transfer program]
NASA Technical Reports Server (NTRS)
Flanagan, P. F.; Horsewood, J. L.; Pines, S.
1975-01-01
A precision trajectory program has been developed to serve as a test bed for geocentric orbit raising steering laws. The steering laws to be evaluated have been developed using optimization methods employing averaging techniques. This program provides the capability of testing the steering laws in a precision simulation. The principal system models incorporated in the program are described, including the radiation environment, the solar array model, the thrusters and power processors, the geopotential, and the solar system. Steering and array orientation constraints are discussed, and the impact of these constraints on program design is considered.
NASA Astrophysics Data System (ADS)
Charolais, A.; Rignot, E. J.; Milillo, P.; Scheuchl, B.; Mouginot, J.
2017-12-01
The floating extensions of glaciers, or ice shelves, melt vigorously in contact with ocean waters. Melt is non-uniform, with the highest melt taking place in the deepest part of the cavity, where thermal forcing is greatest because 1) the freezing point of the seawater/ice mixture decreases with pressure and 2) subglacial water injects fresh, buoyant, cold melt water that fuels stronger ice-ocean interactions. Melt also forms along preferential channels, which are not stationary and create lines of weakness in the shelf. Ice shelf melt rates have been successfully measured from space over the entire Antarctic continent and on the ice shelves in Greenland using an Eulerian approach that combines ice thickness, ice velocity vectors, surface mass balance data, and measurements of ice thinning rates. The Eulerian approach is limited by the precision of the thickness gradients, typically of a few km, and requires significant spatial averaging to remove advection effects. A Lagrangian approach has been shown to be robust to advection effects and provides higher resolution details. We implemented a Lagrangian methodology for time-tagged WorldView DEMs produced by the Polar Geospatial Center (PGC) at the University of Minnesota and time-tagged TanDEM-X DEMs separated by one year. We derive melt rates on a 300-m grid with a precision of a few m/yr. Melt is strongest along grounding lines and along preferred channels. Channels are non-stationary because melt is not the same on opposite sides of the channels. Examining time series of data and comparing with the time-dependent grounding line positions inferred from satellite radar interferometry, we evaluate the magnitude of melt near the grounding line and even within the grounding zone. A non-zero melt rate in the grounding zone has vast implications for ice sheet modeling. This work is funded by a grant from the NASA Cryosphere Program.
Format effects in two teacher rating scales of hyperactivity.
Sandoval, J
1981-06-01
The object of this study was to investigate the effect of differences in format on the precision of teacher ratings and thus on the reliability and validity of two teacher rating scales of children's hyperactive behavior. Teachers (N = 242) rated a sample of children in their classrooms using rating scales assessing similar attributes with different formats. For a sub-sample the rating scales were readministered after 2 weeks. The results indicated that improvement can be made in the precision of teacher ratings that may be reflected in improved reliability and validity.
High-precision half-life determination for 21Na using a 4π gas-proportional counter
NASA Astrophysics Data System (ADS)
Finlay, P.; Laffoley, A. T.; Ball, G. C.; Bender, P. C.; Dunlop, M. R.; Dunlop, R.; Hackman, G.; Leslie, J. R.; MacLean, A. D.; Miller, D.; Moukaddam, M.; Olaizola, B.; Severijns, N.; Smith, J. K.; Southall, D.; Svensson, C. E.
2017-08-01
A high-precision half-life measurement for the superallowed β+ transition between the isospin T = 1/2 mirror nuclei 21Na and 21Ne has been performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T1/2 = 22.4506(33) s, a result that is a factor of 4 more precise than the previous world-average half-life for 21Na and represents the single most precisely determined half-life for a transition between mirror nuclei to date. The contribution to the uncertainty in the 21Na Ft(mirror) value due to the half-life is now reduced to the level of the nuclear-structure-dependent theoretical corrections, leaving the branching ratio as the dominant experimental uncertainty.
Field comparison of several commercially available radon detectors.
Field, R W; Kross, B C
1990-01-01
To determine the accuracy and precision of commercially available radon detectors in a field setting, 15 detectors from six companies were exposed to radon and compared to a reference radon level. The detectors from companies that had already passed National Radon Measurement Proficiency Program testing had better precision and accuracy than those detectors awaiting proficiency testing. Charcoal adsorption detectors and diffusion barrier charcoal adsorption detectors performed very well, and the latter detectors displayed excellent time-averaging ability. In contrast, charcoal liquid scintillation detectors exhibited acceptable accuracy but poor precision, and bare alpha registration detectors showed both poor accuracy and precision. The mean radon level reported by the bare alpha registration detectors was 68 percent lower than the radon reference level. PMID:2368851
Modeling gamma radiation dose in dwellings due to building materials.
de Jong, Peter; van Dijk, Willem
2008-01-01
A model is presented that calculates the absorbed dose rate in air of gamma radiation emitted by building materials in a rectangular body construction. The basis for these calculations is formed by a fixed set of specific absorbed dose rates (the dose rate per Bq kg(-1) 238U, 232Th, and 40K), as determined for a standard geometry with the dimensions 4 x 5 x 2.8 m3. Using the computer codes Marmer and MicroShield, correction factors are assessed that quantify the influence of several room- and material-related parameters on the specific absorbed dose rates. The investigated parameters are the position in the construction; the thickness, density, and dimensions of the construction parts; the contribution from the outer leaf; the presence of doors and windows; the attenuation by internal partition walls; the contribution from building materials present in adjacent rooms; and the effect of non-equilibrium due to 222Rn exhalation. To verify the precision, the proposed method is applied to three Dutch reference dwellings, i.e., a row house, a coupled house, and a gallery apartment. The average difference with MCNP calculations is found to be 4%.
Liu, Zhijian; Li, Hao; Tang, Xindong; Zhang, Xinyu; Lin, Fan; Cheng, Kewei
2016-01-01
Heat collection rate and heat loss coefficient are crucial indicators for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, their direct determination requires complex detection devices and a series of standard experiments, consuming considerable time and manpower. To address this problem, we previously used artificial neural networks and support vector machines to develop precise knowledge-based models for predicting the heat collection rates and heat loss coefficients of water-in-glass evacuated tube solar water heaters, setting the properties measured by "portable test instruments" as the independent variables. Robust software for this determination was also developed. However, in previous results, the prediction accuracy for heat loss coefficients could still be improved compared with that for heat collection rates. Also, in practical applications, even a small reduction in root mean square error (RMSE) can sometimes significantly improve the evaluation and business processes. As a further study, in this short report, we show that a novel and fast machine learning algorithm, the extreme learning machine, can generate better predictions for the heat loss coefficient, reducing the average RMSE to 0.67 in testing.
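As a rough illustration of the extreme learning machine idea (a random hidden layer whose output weights are solved in one least-squares step, with no iterative training), the following numpy sketch fits a synthetic target; it is not the solar-water-heater model or its data:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=100):
    """Minimal extreme learning machine: random input weights and biases,
    output weights obtained by least squares on the hidden activations."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative regression on a smooth synthetic target
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
W, b, beta = elm_train(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

The speed advantage cited in the report comes from that single linear solve: only the output layer is fitted, so "training" is one `lstsq` call.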
Doppler Global Velocimeter Development for the Large Wind Tunnels at Ames Research Center
NASA Technical Reports Server (NTRS)
Reinath, Michael S.
1997-01-01
Development of an optical, laser-based flow-field measurement technique for large wind tunnels is described. The technique uses laser sheet illumination and charge-coupled device detectors to rapidly measure flow-field velocity distributions over large planar regions of the flow. Sample measurements are presented that illustrate the capability of the technique. An analysis of measurement uncertainty, which focuses on the random component of uncertainty, shows that precision uncertainty is not dependent on the measured velocity magnitude. For a single-image measurement, the analysis predicts a precision uncertainty of +/-5 m/s. When multiple images are averaged, this uncertainty is shown to decrease. For an average of 100 images, for example, the analysis shows that a precision uncertainty of +/-0.5 m/s can be expected. Sample applications show that vectors aligned with an orthogonal coordinate system are difficult to measure directly. An algebraic transformation is presented which converts measured vectors to the desired orthogonal components. Uncertainty propagation is then used to show how the uncertainty propagates from the direct measurements to the orthogonal components. For a typical forward-scatter viewing geometry, the propagation analysis predicts precision uncertainties of +/-4, +/-7, and +/-6 m/s, respectively, for the U, V, and W components at 68% confidence.
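The quoted drop from +/-5 m/s for a single image to +/-0.5 m/s for 100 averaged images is the standard 1/sqrt(N) behavior of averaging independent measurements; a one-line sketch:

```python
import math

def averaged_uncertainty(sigma_single, n_images):
    """Precision uncertainty of the mean of n independent images,
    assuming uncorrelated random errors of equal size."""
    return sigma_single / math.sqrt(n_images)

print(averaged_uncertainty(5.0, 100))  # 0.5, matching the figures above
```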
Chen, I-Wen; Papagiakoumou, Eirini; Emiliani, Valentina
2018-06-01
Optogenetic neuronal targeting combined with single-photon wide-field illumination has already proved its enormous potential in neuroscience, enabling the optical control of entire neuronal networks and disentangling their role in the control of specific behaviors. However, establishing how a single neuron or a sub-set of neurons controls a specific behavior, how functionally identical neurons are connected in a particular task, or how behaviors can be modified in real time by the complex wiring diagram of neuronal connections requires more sophisticated approaches that can drive neuronal circuit activity with single-cell precision and millisecond temporal resolution. This has motivated, on one side, the development of flexible optical methods for two-photon (2P) optogenetic activation using either of two approaches, scanning and parallel illumination, or a hybrid of the two. On the other side, it has stimulated the engineering of new opsins with modified spectral characteristics, channel kinetics, and spatial distributions of expression, offering the flexibility of choosing the appropriate opsin for each application. The need for optical manipulation of multiple targets with millisecond temporal resolution has established three-dimensional (3D) parallel holographic illumination as the technique of choice for optical control of neuronal circuits organized in 3D. Today, 3D parallel illumination exists in several complementary variants, each with a different degree of simplicity, light uniformity, temporal precision, and axial resolution. In parallel, the possibility of reaching hundreds of targets in 3D volumes has prompted the development of low-repetition-rate amplified laser sources enabling high peak power while keeping the average power delivered to each cell low. Together, these advances open the way for optical manipulation of neuronal circuits with unprecedented precision and flexibility. Copyright © 2018 Elsevier Ltd. All rights reserved.
van der Laak, Jeroen A W M; Dijkman, Henry B P M; Pahlplatz, Martin M M
2006-03-01
The magnification factor in transmission electron microscopy is not very precise, hampering, for instance, quantitative analysis of specimens. Calibration of the magnification is usually performed interactively using replica specimens containing line or grating patterns with known spacing. In the present study, a procedure is described for automated magnification calibration using digital images of a line replica. This procedure is based on analysis of the power spectrum of Fourier-transformed replica images, and is compared to interactive measurement in the same images. Images were used with magnifications ranging from 1,000x to 200,000x. The automated procedure deviated on average by 0.10% from interactive measurements. Especially for catalase replicas, the coefficient of variation of the automated measurement was considerably smaller (average 0.28%) than that of interactive measurement (average 3.5%). In conclusion, calibration of the magnification in digital images from transmission electron microscopy may be performed automatically, using the procedure presented here, with high precision and accuracy.
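The power-spectrum idea can be sketched as follows: the dominant frequency of a periodic line profile gives the line spacing in pixels, from which magnification follows once the replica's true spacing is known. The synthetic 16-pixel profile below is illustrative, not a real replica image:

```python
import numpy as np

def dominant_period(signal_1d):
    """Period (in pixels) of the strongest nonzero frequency in the
    power spectrum of a 1D intensity profile."""
    spectrum = np.abs(np.fft.rfft(signal_1d - signal_1d.mean())) ** 2
    freq = np.fft.rfftfreq(signal_1d.size)
    k = np.argmax(spectrum[1:]) + 1   # skip the DC component
    return 1.0 / freq[k]

# Synthetic "line replica" profile with a known 16-pixel line spacing
x = np.arange(1024)
profile = np.sin(2 * np.pi * x / 16.0)
print(dominant_period(profile))  # ~16.0
```

Given a replica with a known physical spacing d, the calibrated magnification would be the measured pixel period times the detector pixel size divided by d.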
Code of Federal Regulations, 2012 CFR
2012-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2013 CFR
2013-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2014 CFR
2014-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2011 CFR
2011-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
Code of Federal Regulations, 2010 CFR
2010-10-01
... certifications regarding geographic rate averaging and rate integration requirements. 64.1900 Section 64.1900... Rate Averaging and Rate Integration Requirements § 64.1900 Nondominant interexchange carrier certifications regarding geographic rate averaging and rate integration requirements. (a) A nondominant provider...
NASA Astrophysics Data System (ADS)
Schille, Joerg; Schneider, Lutz; Streek, André; Kloetzer, Sascha; Loeschner, Udo
2016-03-01
In this paper, high-throughput ultrashort pulse laser machining is investigated on various industrial-grade metals (Aluminium, Copper, Stainless steel) and Al2O3 ceramic at unprecedented processing speeds. This is achieved by using a high pulse repetition frequency picosecond laser with a maximum average output power of 270 W in conjunction with a unique, in-house developed two-axis polygon scanner. Initially, different concepts of polygon scanners are engineered and tested to find the optimal architecture for ultrafast and precise laser beam scanning. A remarkable scan speed of 1,000 m/s is achieved on the substrate, and thanks to the resulting low pulse overlap, thermal accumulation and plasma absorption effects are avoided at pulse repetition frequencies of up to 20 MHz. In order to identify optimum processing conditions for efficient high-average-power laser machining, the depths of cavities produced under varied parameter settings are analyzed and, from the results obtained, the characteristic removal values are specified. The maximum removal rate is as high as 27.8 mm3/min for Aluminium, 21.4 mm3/min for Copper, 15.3 mm3/min for Stainless steel and 129.1 mm3/min for Al2O3 when the full available laser power is irradiated at the optimum pulse repetition frequency.
Dalenberg, Jelle R; Nanetti, Luca; Renken, Remco J; de Wijk, René A; Ter Horst, Gert J
2014-01-01
Consumers show high interindividual variability in food liking during repeated exposure. To investigate consumer liking during repeated exposure, data is often interpreted on a product level by averaging results over all consumers. However, a single product may elicit inconsistent behaviors in consumers; averaging will mix and hide possible subgroups of consumer behaviors, leading to a misinterpretation of the results. To deal with the variability in consumer liking, we propose to use clustering on data from consumer-product combinations to investigate the nature of the behavioral differences within the complete dataset. The resulting behavioral clusters can then be used to describe product acceptance. To test this approach we used two independent data sets in which young adults were repeatedly exposed to drinks and snacks, respectively. We found that five typical consumer behaviors existed in both datasets. These behaviors differed both in the average level of liking as well as its temporal dynamics. By investigating the distribution of a single product across typical consumer behaviors, we provide more precise insight in how consumers divide in subgroups based on their product liking (i.e. product modality). This work shows that taking into account and using interindividual differences can unveil information about product acceptance that would otherwise be ignored.
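A minimal sketch of clustering consumer-product liking trajectories, using a plain numpy k-means on synthetic "stable liker" and "decliner" series (the behaviors, data, and two-cluster choice are illustrative, not the study's five-behavior result):

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, n_iter=50):
    """Plain k-means (numpy only). Each row of X is one consumer-product
    series of liking ratings across repeated exposures."""
    # Deterministic, spread-out initialization for this sketch
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two synthetic behaviors over 8 exposures: stable likers vs. decliners
t = np.arange(8)
likers = 7.0 + 0.1 * rng.normal(size=(20, 8))
decliners = 7.0 - 0.5 * t + 0.1 * rng.normal(size=(20, 8))
X = np.vstack([likers, decliners])
labels, centers = kmeans(X, 2)
```

Clustering rows (consumer-product combinations) rather than averaging over consumers is exactly what keeps the "decliner" subgroup from being hidden inside a flat product-level mean.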
Stochastic simulation and analysis of biomolecular reaction networks
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-01-01
Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time-averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
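For readers unfamiliar with the Gillespie exact stochastic algorithm used above, here is a minimal sketch for a simple birth-death network (the rates and species are illustrative, not the paper's two-gene network); it also shows why the number of simulations matters for the precision of predicted averages:

```python
import random

random.seed(42)

def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0):
    """Gillespie exact stochastic simulation of a birth-death process:
    0 -> X at rate k_prod, and X -> 0 at rate k_deg * X.
    Returns the trajectory as (time, copy-number) pairs."""
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        a1, a2 = k_prod, k_deg * x        # reaction propensities
        a0 = a1 + a2
        t += random.expovariate(a0)       # exponential waiting time
        if random.random() * a0 < a1:     # pick which reaction fired
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

# The analytic steady-state mean is k_prod / k_deg = 100; the precision of
# this Monte Carlo estimate improves with the number of simulations run.
final_states = [gillespie_birth_death()[-1][1] for _ in range(200)]
mean_final = sum(final_states) / len(final_states)
```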
Carey, Robert I; Kyle, Christopher C; Carey, Donna L; Leveillee, Raymond J
2008-01-01
To prepare artificial kidney stones of defined shape, size, mass, and material composition via precision injection molding of Ultracal 30 cement slurries into an inexpensive biodegradable mold. A calcium alginate and silica-based mold was used to prepare casts of varying shapes in a reproducible manner. Ultracal 30 cement slurries mixed 1:1 with water were injected into these casts and allowed to harden. The artificial stones were recovered and their physical properties determined. Ex-vivo and in-vivo responses to holmium laser lithotripsy were examined. Spheres, half spheres, cylinders, cubes, tapered conical structures, and flat angulated structures were prepared with high precision without post-molding manipulations. Large spheres of average mass 0.661 g (+/- 0.037), small spheres of average mass 0.046 g (+/- 0.0026), and hexagons of average mass 0.752 g (+/- 0.0180) were found to have densities (1610-1687 kg/m(3)) within the expected range for Ultracal 30 cement stones. Ex-vivo holmium laser lithotripsy of small spheres in saline showed uniformly reproducible efficiencies of comminution. Implantation of a tapered conical stone into the ureter of a porcine model demonstrated stone comminution in vivo consistent with that seen in the ex-vivo models. We present an environmentally safe, technically simple procedure for the formation of artificial kidney stones of predetermined size and shape. The technique does not require the use of hazardous solvents or postprocedural processing of the stones. These stones are intended for use in standardized experiments of lithotripsy efficiency in which the shape of the stone as well as the mass can be predetermined and precisely controlled.
Automatically finding relevant citations for clinical guideline development.
Bui, Duy Duc An; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2015-10-01
Literature database search is a crucial step in the development of clinical practice guidelines and systematic reviews. In the age of information technology, the process of literature search is still conducted manually; it is therefore costly, slow, and subject to human error. In this research, we sought to improve the traditional search approach using innovative query expansion and citation ranking approaches. We developed a citation retrieval system composed of query expansion and citation ranking methods. The methods are unsupervised and easily integrated over the PubMed search engine. To validate the system, we developed a gold standard consisting of citations that were systematically searched and screened to support the development of cardiovascular clinical practice guidelines. The expansion and ranking methods were evaluated separately and compared with baseline approaches. Compared with the baseline PubMed expansion, the query expansion algorithm improved recall (80.2% vs. 51.5%) with a small loss in precision (0.4% vs. 0.6%). The algorithm could find all citations used to support a larger number of guideline recommendations than the baseline approach (64.5% vs. 37.2%, p<0.001). In addition, the citation ranking approach performed better than PubMed's "most recent" ranking (average precision +6.5%, recall@k +21.1%, p<0.001), PubMed's rank by "relevance" (average precision +6.1%, recall@k +14.8%, p<0.001), and the machine learning classifier that identifies scientifically sound studies from MEDLINE citations (average precision +4.9%, recall@k +4.2%, p<0.001). Our unsupervised query expansion and ranking techniques are more flexible and effective than PubMed's default search engine behavior and the machine learning classifier. Automated citation finding is promising to augment the traditional literature search. Copyright © 2015 Elsevier Inc. All rights reserved.
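The ranking metrics cited above (average precision and recall@k) can be computed as follows; the document IDs and relevance sets are illustrative:

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision of a ranked citation list: the mean of the
    precision values at each rank where a relevant citation appears,
    divided over all relevant citations."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_ids, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of relevant citations retrieved in the top k results."""
    relevant = set(relevant_ids)
    return len(relevant & set(ranked_ids[:k])) / len(relevant)

ranked = ["a", "x", "b", "y", "c"]
print(average_precision(ranked, {"a", "b", "c"}))  # (1/1 + 2/3 + 3/5) / 3
print(recall_at_k(ranked, {"a", "b", "c"}, 3))     # 2/3
```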
A real-time surface inspection system for precision steel balls based on machine vision
NASA Astrophysics Data System (ADS)
Chen, Yi-Ji; Tsai, Jhy-Cherng; Hsu, Ya-Chen
2016-07-01
Precision steel balls are among the most fundamental components for motion and power transmission, and they are widely used in industrial machinery and the automotive industry. As precision balls are crucial to the quality of these products, there is an urgent need to develop a fast and robust system for inspecting defects of precision steel balls. In this paper, a real-time system for inspecting surface defects of precision steel balls is developed based on machine vision. The developed system integrates a dual-lighting system, an unfolding mechanism, and inspection algorithms for real-time signal processing and defect detection. The developed system is tested at a feeding speed of 4 pcs/s with a detection rate of 99.94% and an error rate of 0.10%. The minimum detectable surface flaw area is 0.01 mm2, which meets the requirement for inspecting ISO grade 100 precision steel balls.
A historical perspective of VR water management for improved crop production
USDA-ARS?s Scientific Manuscript database
Variable-rate water management, or the combination of precision agriculture technology and irrigation, has been enabled by many of the same technologies as other precision agriculture tools. However, adding variable-rate capability to existing irrigation equipment design, or designing new equipment ...
Bonnard, M; Galléa, C; De Graaf, J B; Pailhous, J
2007-02-01
The corticospinal system (CS) is well known to be of major importance for controlling the thumb-index grip, in particular for force grading. However, for a given force level, the way in which the involvement of this system varies with increasing demands on precise force control is not well known. Using transcranial magnetic stimulation and functional magnetic resonance imaging, the present experiments investigated whether increasing the precision demands, while keeping the average force level similar during an isometric dynamic low-force control task involving the thumb-index grip, affects the corticospinal excitability of the thumb-index muscles and the activation of the primary and non-primary motor cortices (supplementary motor area, dorsal and ventral premotor areas) in the contralateral hemisphere, at the origin of the CS. With transcranial magnetic stimulation, we showed that, when precision demands increased, the CS excitability increased to either the first dorsal interosseus or the opponens pollicis, and never to both, for similar ongoing electromyographic activation patterns of these muscles. With functional magnetic resonance imaging, we demonstrated that, for the same average force level, the amplitude of the blood oxygen level-dependent signal increased with the precision demands in the hand area of the contralateral primary motor cortex and in the contralateral supplementary motor area and ventral and dorsal premotor areas. Together these results show that, during the course of force generation, the CS integrates online top-down information to precisely fit the motor output to the task's constraints and that its multiple cortical origins are involved in this process, with the ventral premotor area appearing to have a special role.
Precision Seismic Monitoring of Volcanic Eruptions at Axial Seamount
NASA Astrophysics Data System (ADS)
Waldhauser, F.; Wilcock, W. S. D.; Tolstoy, M.; Baillard, C.; Tan, Y. J.; Schaff, D. P.
2017-12-01
Seven permanent ocean bottom seismometers of the Ocean Observatories Initiative's real-time cabled observatory at Axial Seamount, off the coast of the western United States, have recorded seismic activity since 2014. The array captured the April 2015 eruption, shedding light on the detailed structure and dynamics of the volcano and the Juan de Fuca mid-ocean ridge system (Wilcock et al., 2016). After a period of continuously increasing seismic activity, primarily associated with the reactivation of caldera ring faults, and the subsequent seismic crisis on April 24, 2015, with 7,000 recorded events that day, seismicity rates steadily declined, and the array currently records an average of 5 events per day. Here we present results from ongoing efforts to automatically detect and precisely locate seismic events at Axial in real time, providing the computational framework and fundamental data that will allow rapid characterization and analysis of spatio-temporal changes in seismogenic properties. We combine a kurtosis-based P- and S-phase onset picker and time-domain cross-correlation detection and phase-delay timing algorithms together with single-event and double-difference location methods to rapidly and precisely (to within tens of meters) compute the locations and magnitudes of new events with respect to a 2-year-long, high-resolution background catalog that includes nearly 100,000 events within a 5×5 km region. We extend the real-time double-difference location software DD-RT to efficiently handle the anticipated high-rate and high-density earthquake activity during future eruptions. The modular monitoring framework will allow real-time tracking of other seismic events, such as tremors and sea-floor lava explosions, enabling the timing and location of lava flows and thus guiding response research cruises to the most interesting sites.
Finally, rapid detection of eruption precursors and initiation will allow for adaptive sampling by the OOI instruments for optimal recording of future eruptions. With a higher eruption recurrence rate than land-based volcanoes the Axial OOI observatory offers the opportunity to monitor and study volcanic eruptions throughout multiple cycles.
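The time-domain cross-correlation delay timing mentioned above can be sketched at sample-level precision (real implementations typically interpolate the correlation peak for subsample delays; the synthetic wavelet below is illustrative):

```python
import numpy as np

def cc_delay(trace_a, trace_b):
    """Relative arrival-time delay (in samples) of trace_a with respect
    to trace_b, estimated from the peak of their full cross-correlation."""
    cc = np.correlate(trace_a, trace_b, mode="full")
    return np.argmax(cc) - (len(trace_b) - 1)

# Synthetic wavelet arriving 7 samples later on the second trace
t = np.arange(256)
wavelet = np.exp(-((t - 100) ** 2) / 50.0)
shifted = np.exp(-((t - 107) ** 2) / 50.0)
print(cc_delay(shifted, wavelet))  # 7
```

Differential times of this kind, measured between event pairs at common stations, are the inputs that double-difference relocation inverts for the tens-of-meters relative locations quoted above.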
Averaging Bias Correction for Future IPDA Lidar Mission MERLIN
NASA Astrophysics Data System (ADS)
Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien
2018-04-01
The CNES/DLR
Predicting cancer rates in astronauts from animal carcinogenesis studies and cellular markers
NASA Technical Reports Server (NTRS)
Williams, J. R.; Zhang, Y.; Zhou, H.; Osman, M.; Cha, D.; Kavet, R.; Cuccinotta, F.; Dicello, J. F.; Dillehay, L. E.
1999-01-01
The space radiation environment includes particles such as protons and multiple species of heavy ions, with much of the exposure to these radiations occurring at extremely low average dose-rates. Limitations in databases needed to predict cancer hazards in human beings from such radiations are significant and currently do not provide confidence that such predictions are acceptably precise or accurate. In this article, we outline the need for animal carcinogenesis data based on a more sophisticated understanding of the dose-response relationship for induction of cancer and correlative cellular endpoints by representative space radiations. We stress the need for a model that can interrelate human and animal carcinogenesis data with cellular mechanisms. Using a broad model for dose-response patterns which we term the "subalpha-alpha-omega (SAO) model", we explore examples in the literature for radiation-induced cancer and for radiation-induced cellular events to illustrate the need for data that define the dose-response patterns more precisely over specific dose ranges, with special attention to low dose, low dose-rate exposure. We present data for multiple endpoints in cells, which vary in their radiosensitivity, that also support the proposed model. We have measured induction of complex chromosome aberrations in multiple cell types by two space radiations, Fe-ions and protons, and compared these to photons delivered at high dose-rate or low dose-rate. Our data demonstrate that at least three factors modulate the relative efficacy of Fe-ions compared to photons: (i) intrinsic radiosensitivity of irradiated cells; (ii) dose-rate; and (iii) another unspecified effect perhaps related to reparability of DNA lesions. These factors can produce respectively up to at least 7-, 6- and 3-fold variability. These data demonstrate the need to understand better the role of intrinsic radiosensitivity and dose-rate effects in mammalian cell response to ionizing radiation.
Such understanding is critical in extrapolating databases between cellular response, animal carcinogenesis and human carcinogenesis, and we suggest that the SAO model is a useful tool for such extrapolation.
A Mathematical Model to Quantify the Degree of Emergency Department Crowding
NASA Astrophysics Data System (ADS)
Chang, Y.; Pan, C.; Wen, J.
2012-12-01
The purpose of this study is to derive a function from the admission/discharge rates of patient flow to estimate a "Critical Point" that provides a reference for warning systems regarding crowding in the emergency department (ED) of a hospital or medical clinic. In this study, an "Input-Throughput-Output" model was used in our mathematical function to evaluate the critical point. The function was defined as ∂ρ/∂t = -K × ∂ρ/∂x, where ρ = number of patients per unit distance (density), t = time, x = distance, and K = distance moved by patients per unit time. Using the average K of ED crowding, we could trigger the warning system at the appropriate time and plan the necessary emergency response so that patients move through the department more smoothly. It was concluded that ED crowding can be quantified using the average value of K, and that this value can serve as a reference for medical staff in giving optimal emergency medical treatment to patients. Additional practical work should therefore be undertaken to collect more precise quantitative data.
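The advection equation quoted above can be sketched numerically. The following is a minimal illustration using an upwind finite-difference scheme; the movement speed K, grid, and initial patient pulse are invented for demonstration and are not values from the study.

```python
import numpy as np

# Upwind finite-difference sketch of the model d(rho)/dt = -K * d(rho)/dx.
# K (patient movement per unit time), the grid, and the initial pulse of
# arriving patients are illustrative assumptions, not values from the study.
def step(rho, K, dx, dt):
    """Advance patient density one time step (K > 0: flow toward discharge)."""
    out = rho.copy()
    out[1:] = rho[1:] - K * dt / dx * (rho[1:] - rho[:-1])  # upwind difference
    return out

dx, dt, K = 1.0, 0.1, 2.0       # grid spacing, time step, movement speed
rho = np.zeros(50)
rho[10:20] = 5.0                # a pulse of arriving patients (density units)
for _ in range(100):            # simulate 10 time units
    rho = step(rho, K, dx, dt)
# The pulse advects toward the exit at speed K; a warning could trigger when
# density anywhere exceeds a chosen critical value.
```

Stability of this explicit scheme requires the CFL condition K·dt/dx ≤ 1 (here 0.2).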
Robot-assisted laparoscopic pyeloplasty: minimum 1-year follow-up
NASA Astrophysics Data System (ADS)
Patel, Vipul; Thaly, Rahul; Shah, Ketul
2007-02-01
Objectives: To evaluate the feasibility and efficacy of robot-assisted laparoscopic pyeloplasty. Laparoscopic pyeloplasty has been shown to have a success rate comparable to that of the open surgical approach; however, its steep learning curve has hindered acceptance into mainstream urologic practice. The introduction of robotic assistance provides advantages with the potential to facilitate precise dissection and intracorporeal suturing. Methods: A total of 50 patients underwent robot-assisted laparoscopic dismembered pyeloplasty. A four-trocar technique was used. Most patients were discharged home on day 1, with stent removal at 3 weeks. Patency of the ureteropelvic junction was assessed in all patients with mercaptoacetyltriglycine (MAG3) Lasix renograms at 1, 3, 6, 9, and 12 months, then every 6 months for 1 year, and then yearly. Results: Each patient underwent a successful procedure without open conversion or transfusion. The average estimated blood loss was 40 ml. The operative time averaged 122 minutes (range 60 to 330) overall. Crossing vessels were present in 30% of the patients and were preserved in all cases. The time for the anastomosis averaged 20 minutes (range 10 to 100). No intraoperative complications occurred. Postoperatively, the average hospital stay was 1.1 days. The stents were removed at an average of 20 days (range 14 to 28) postoperatively. The average follow-up was 11.7 months; at the last follow-up visit, each patient was doing well. Of the 50 patients, 48 underwent one or more renograms, demonstrating stable renal function, improved drainage, and no evidence of recurrent obstruction. Conclusions: Robot-assisted laparoscopic pyeloplasty is a feasible technique for ureteropelvic junction reconstruction. The procedure provides a minimally invasive alternative with good short-term results.
Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system
NASA Astrophysics Data System (ADS)
Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.
2016-12-01
For optimum operation and precise control of particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility, and it has been experimentally tested at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To further improve the precision of this beam position monitoring system at the sub-μm level, we studied different de-noising methods, such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. The noise reduction achieved by each method was evaluated. The results show that noise reduction based on the Daubechies wavelet transform outperforms the other algorithms, and the method is suitable for signal noise reduction in beam position monitoring systems.
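As a hedged illustration of the simplest method on that list, direct averaging amounts to a boxcar FIR filter. The signal shape and noise level below are invented stand-ins for real BPM data.

```python
import numpy as np

# "Direct averaging" baseline: a moving-average (boxcar) FIR filter applied to
# a noisy beam-position trace. Amplitudes and noise level are made-up values.
rng = np.random.default_rng(0)
n = 4096
true_pos = 0.5 * np.sin(2 * np.pi * np.arange(n) / 512)   # slow beam motion (a.u.)
noisy = true_pos + rng.normal(0.0, 0.05, n)               # white readout noise

taps = 16
kernel = np.ones(taps) / taps                              # boxcar FIR kernel
denoised = np.convolve(noisy, kernel, mode="same")

rms_before = np.sqrt(np.mean((noisy - true_pos) ** 2))
rms_after = np.sqrt(np.mean((denoised - true_pos) ** 2))
# Averaging 16 samples cuts white-noise RMS by roughly sqrt(16) = 4, at the
# cost of smoothing genuine fast position changes.
```

Wavelet-based methods such as the Daubechies transform favored in the study can preserve fast transients better than this uniform smoothing.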
High-Precision Half-Life Measurement for the Superallowed β⁺ Emitter ²⁶Alᵐ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finlay, P.; Svensson, C. E.; Green, K. L.
2011-01-21
A high-precision half-life measurement for the superallowed β⁺ emitter ²⁶Alᵐ was performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T₁/₂ = 6346.54 ± 0.46(stat) ± 0.60(syst) ms, consistent with, but 2.5 times more precise than, the previous world average. The ²⁶Alᵐ half-life and ft value, 3037.53(61) s, are now the most precisely determined for any superallowed β decay. Combined with recent theoretical corrections for isospin-symmetry-breaking and radiative effects, the corrected Ft value for ²⁶Alᵐ, 3073.0(12) s, sets a new benchmark for the high-precision superallowed Fermi β-decay studies used to test the conserved vector current hypothesis and determine the V_ud element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, W. A.; Smith, M. S.; Pittman, S.
2016-05-01
Alpha particles emitted from the decay of uranium in a UF₆ matrix can interact with fluorine and generate neutrons via the ¹⁹F(α,n)²²Na reaction. These neutrons can be used to determine the uranium content in a UF₆ storage cylinder. The accuracy of this self-interrogating, non-destructive assay (NDA) technique is, however, limited by the uncertainty of the ¹⁹F(α,n)²²Na cross section. We have performed complementary measurements of the ¹⁹F(α,n)²²Na reaction with both ⁴He and ¹⁹F beams to improve the precision of the cross section over the alpha energy range that encompasses the common actinide alpha decays needed for NDA studies. We have determined an absolute cross section for the ¹⁹F(α,n)²²Na reaction to an average precision of 7.6% over the alpha energy range of 3.9-6.7 MeV. We utilized this cross section in a simulation of a 100 g spherical UF₆ assembly and obtained a change in neutron emission rate values of approximately 10-12%, and a significant (factor of 3.6) decrease in the neutron emission rate uncertainty (from 50-51% to 13-14%), compared to simulations using the old cross section. Our new absolute cross section enables improved interpretations of NDAs of containers of arbitrary size and configuration.
Using National Data to Estimate Average Cost Effectiveness of EFNEP Outcomes by State/Territory
ERIC Educational Resources Information Center
Baral, Ranju; Davis, George C.; Blake, Stephanie; You, Wen; Serrano, Elena
2013-01-01
This report demonstrates how existing national data can be used to first calculate upper limits on the average cost per participant and per outcome per state/territory for the Expanded Food and Nutrition Education Program (EFNEP). These upper limits can then be used by state EFNEP administrators to obtain more precise estimates for their states,…
Liu, Xuenan; Yang, Xuezhi; Jin, Jing; Li, Jiangshan
2018-06-05
Recent research indicates that facial epidermis color varies with the rhythm of heartbeats. This variation can be captured by consumer-level cameras and used to estimate heart rate (HR). Although numerous methods have been proposed in the last few years, HR estimates are still not as precise as required in practical environments where illumination interference, facial expressions, or motion artifacts are involved. A novel algorithm is proposed to make non-contact HR estimation more robust. First, the subject's face is detected and tracked to follow head movement. The facial region is then divided into several blocks, and the chrominance feature of each block is extracted to establish a raw HR sub-signal. Self-adaptive signal separation (SASS) is performed to separate noiseless HR sub-signals from the raw sub-signals. The noiseless sub-signals rich in HR information are then selected using a weight-based scheme to establish the holistic HR signal, from which the average HR is computed using a wavelet transform and data filtering. Forty subjects took part in our experiments; their facial videos were recorded by a normal webcam at a frame rate of 30 fps under ambient lighting conditions. The average HR estimated by our method correlates strongly with ground-truth measurements, as indicated by the experimental results: a Pearson's correlation of r = 0.980 in the static scenario and r = 0.897 in the dynamic scenario. Compared with the newest method, our method decreases the error rate by 38.63% and increases the Pearson's correlation by 15.59%, indicating that it clearly outperforms state-of-the-art non-contact HR estimation methods in realistic environments. © 2018 Institute of Physics and Engineering in Medicine.
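The final step of such pipelines, reading the average HR off the cleaned signal, is commonly a spectral peak search in the physiological band. A minimal sketch with a synthetic 72 bpm trace standing in for the extracted chrominance signal (the paper's actual wavelet-based estimator is not reproduced here):

```python
import numpy as np

# Spectral-peak HR readout: given a cleaned rPPG trace sampled at the 30 fps
# frame rate, take the dominant frequency in the physiological band as the
# average HR. The 1.2 Hz (72 bpm) sine plus noise is a synthetic stand-in.
fps, seconds = 30.0, 20
rng = np.random.default_rng(1)
t = np.arange(int(fps * seconds)) / fps
trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
band = (freqs >= 0.7) & (freqs <= 4.0)           # 42-240 bpm search range
hr_bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
```

A 20 s window gives 0.05 Hz (3 bpm) frequency resolution, which is why longer windows or interpolation are used when finer HR estimates are needed.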
A content-based retrieval of mammographic masses using the curvelet descriptor
NASA Astrophysics Data System (ADS)
Narváez, Fabian; Díaz, Gloria; Gómez, Francisco; Romero, Eduardo
2012-03-01
Computer-aided diagnosis (CAD) using content-based image retrieval (CBIR) strategies has become an important research area. This paper presents a retrieval strategy that automatically recovers mammographic masses from a virtual repository of mammograms. Unlike other approaches, we do not attempt to segment masses; instead we characterize regions previously selected by an expert. These regions are first curvelet transformed and then characterized by approximating the marginal curvelet subband distribution with a generalized Gaussian density (GGD). The content-based retrieval strategy searches for similar regions in a database using the Kullback-Leibler divergence as the similarity measure between distributions. The effectiveness of the proposed descriptor was assessed by comparing the automatically assigned label with the ground truth available in the DDSM database. A total of 380 masses with different shapes, sizes, and margins were used for evaluation, resulting in a mean average precision of 89.3% and a recall rate of 75.2% for the retrieval task.
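For the similarity measure, the KL divergence between two GGDs has a well-known closed form (Do and Vetterli's subband retrieval work). The sketch below implements that formula alone, under the convention p(x) ∝ exp(-(|x|/α)^β); the parameter values in the comments are illustrative.

```python
from math import exp, lgamma, log

# Closed-form KL divergence between generalized Gaussian densities
# p(x; alpha, beta) with scale alpha and shape beta (Do & Vetterli form).
# Retrieval would then rank database regions by summed subband divergences.
def kl_ggd(a1, b1, a2, b2):
    term1 = log((b1 * a2) / (b2 * a1)) + lgamma(1.0 / b2) - lgamma(1.0 / b1)
    term2 = (a1 / a2) ** b2 * exp(lgamma((b2 + 1.0) / b1) - lgamma(1.0 / b1))
    return term1 + term2 - 1.0 / b1

# Identical densities give zero divergence; beta = 2 reduces to the usual
# zero-mean Gaussian KL divergence.
```

Using log-gamma (`lgamma`) rather than `gamma` keeps the ratio of Gamma functions numerically stable for small shape parameters.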
NASA Astrophysics Data System (ADS)
Li, Ming-Lung; Wang, Yi-Chou; Liou, Tong-Miin; Lin, Chao-An
2014-10-01
Precise locations of the rupture region, identified from contrast agent leakage during computed tomography angiography, were successfully determined for five ruptured cerebral artery aneurysms among 101 patients, to our knowledge for the first time. These locations, together with numerical simulations based on the reconstructed aneurysmal models, were used to analyze hemodynamic parameters of aneurysms under different cardiac cyclic flow rates. For side-wall type aneurysms, different inlet flow rates have a mild influence on the shear stress distributions. On the other hand, for branch type aneurysms, the predicted wall shear stress (WSS) correlates strongly with the increase of inlet vessel velocity. The mean and time-averaged WSS values at rupture regions are found to be lower than those over the surface of the aneurysms. Also, the levels of the oscillatory shear index (OSI) are higher than the reported threshold value, supporting the assertion that high OSI correlates with aneurysm rupture. However, the present results also indicate that the OSI level at the rupture region is comparatively lower.
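The two wall-shear quantities discussed can be computed directly from a sampled WSS vector over one cardiac cycle using their standard definitions; the traction traces below are synthetic examples, not simulation output from the study.

```python
import numpy as np

# Time-averaged WSS (TAWSS) and oscillatory shear index,
# OSI = 0.5 * (1 - |mean(tau)| / mean(|tau|)), from uniformly sampled WSS
# vectors over one cardiac cycle. Traction traces are synthetic examples.
def tawss_and_osi(tau):
    """tau: (n_steps, 3) array of WSS vectors sampled over one cycle."""
    mag_of_mean = np.linalg.norm(tau.mean(axis=0))     # |time-averaged vector|
    mean_of_mag = np.linalg.norm(tau, axis=1).mean()   # TAWSS
    return mean_of_mag, 0.5 * (1.0 - mag_of_mean / mean_of_mag)

t = np.linspace(0.0, 1.0, 200, endpoint=False)
zeros = np.zeros_like(t)
steady = np.stack([np.full_like(t, 2.0), zeros, zeros], axis=1)            # OSI -> 0
reversing = np.stack([2.0 * np.sin(2 * np.pi * t), zeros, zeros], axis=1)  # OSI -> 0.5
```

A unidirectional traction gives OSI = 0, while a fully reversing one gives the maximum value 0.5, matching the index's role as a flow-reversal measure.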
Chocolate Classification by an Electronic Nose with Pressure Controlled Generated Stimulation
Valdez, Luis F.; Gutiérrez, Juan Manuel
2016-01-01
In this work, we analyze the response of a Metal Oxide Gas Sensor (MOGS) array to a flow-controlled stimulus, generated in a pressure-controlled canister by a homemade olfactometer, to build an E-nose. The E-nose is capable of identifying each of the 26 analyzed chocolate bar samples and of recognizing four features (chocolate type, extra ingredient, sweetener, and expiration date status). The data analysis tools used were Principal Component Analysis (PCA) and Artificial Neural Networks (ANNs). For chocolate identification, the E-nose achieved an average classification rate of 81.3%, with 0.99 accuracy (Acc), 0.86 precision (Prc), 0.84 sensitivity (Sen), and 0.99 specificity (Spe) on the test set. For chocolate feature recognition, it achieved a classification rate of 85.36%, with 0.96 Acc, 0.86 Prc, 0.85 Sen, and 0.96 Spe. In addition, a preliminary sample-aging analysis was performed. The results show that the pressure-controlled generated stimulus is reliable for this type of study. PMID:27775628
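The four figures reported per classifier (Acc, Prc, Sen, Spe) follow from one-vs-rest confusion counts; a minimal sketch with invented counts for a single chocolate class, not data from the paper:

```python
# Acc, Prc, Sen, and Spe from one-vs-rest confusion counts; the counts below
# are invented for one class out of 26, purely for illustration.
def binary_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    acc = (tp + tn) / total
    prc = tp / (tp + fp)      # precision
    sen = tp / (tp + fn)      # sensitivity (recall)
    spe = tn / (tn + fp)      # specificity
    return acc, prc, sen, spe

acc, prc, sen, spe = binary_metrics(tp=17, fp=3, fn=3, tn=77)
# With many true negatives (the other classes), Acc and Spe sit near 1 even
# when Prc and Sen are lower - the pattern visible in the reported figures.
```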
Postpyloric regulation of gastric emptying in rhesus monkeys.
McHugh, P R; Moran, T H; Wirth, J B
1982-09-01
Saline (0.9% NaCl) empties rapidly and exponentially from the stomach of the rhesus monkey, but glucose solutions empty at a calorie-constant rate of 0.4 kcal/min. By means of indwelling intragastric and intraduodenal cannulae, we can demonstrate an inhibition of the delivery of saline from the stomach provoked by glucose placed beyond the pylorus. The inhibition varies directly with the glucose calories in the intestine and averages 2.5 min/kcal. That these two results (0.4 kcal/min and 2.5 min/kcal) are reciprocals suggests a feedback inhibition of the gastric emptying of nutrients, arising from beyond the pylorus, that is adequate to explain the rate of glucose delivery to the intestine. A control-theory description of gastric emptying that includes such feedback regulation can be derived from these data to explain the different gastric emptying patterns of nutrient and nonnutrient solutions. These patterns give this visceral system a precision in its management of nutrients that can provide information crucial to preabsorptive satiety.
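The reciprocal relationship at the heart of the argument is simple arithmetic, sketched here with the two reported rates (the 90 kcal meal is a hypothetical example):

```python
# 0.4 kcal/min of glucose delivery and 2.5 min of inhibition per kcal are
# exact reciprocals, consistent with one postpyloric feedback signal setting
# the calorie-constant emptying rate.
emptying_rate = 0.4                  # kcal/min delivered to the intestine
inhibition_per_kcal = 2.5            # min of delayed emptying per kcal
assert abs(emptying_rate * inhibition_per_kcal - 1.0) < 1e-9

# Hypothetical load: a 90 kcal glucose meal would empty in 90 / 0.4 minutes.
minutes_to_empty = 90 / emptying_rate
```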
Neutron Lifetime and Axial Coupling Connection
NASA Astrophysics Data System (ADS)
Czarnecki, Andrzej; Marciano, William J.; Sirlin, Alberto
2018-05-01
Experimental studies of neutron decay, n → p e⁻ ν̄, exhibit two anomalies. The first is an 8.6(2.1) s, roughly 4σ, difference between the average beam-measured neutron lifetime, τn(beam) = 888.0(2.0) s, and the more precise average trapped ultracold neutron determination, τn(trap) = 879.4(6) s. The second is a 5σ difference between the pre-2002 average axial coupling, gA, as measured in neutron decay asymmetries, gA(pre-2002) = 1.2637(21), and the more recent post-2002 average, gA(post-2002) = 1.2755(11), where, following the UCNA Collaboration division, experiments are classified by the date of their most recent result. In this Letter, we correlate those τn and gA values using a (slightly) updated relation τn(1 + 3gA²) = 5172.0(1.1) s. Consistency with that relation and better precision suggest τn(favored) = 879.4(6) s and gA(favored) = 1.2755(11) as preferred values for those parameters. Comparisons of gA(favored) with recent lattice QCD and muonic hydrogen capture results are made. A general constraint on exotic neutron decay branching ratios, < 0.27%, is discussed and applied to a recently proposed solution to the neutron lifetime puzzle.
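The quoted relation can be checked directly: inserting the favored axial coupling into τn·(1 + 3gA²) = 5172.0 s reproduces the favored trap lifetime.

```python
# Consistency check of the relation tau_n * (1 + 3 * gA**2) = 5172.0 s
# using the favored coupling quoted in the abstract.
gA = 1.2755
tau_n = 5172.0 / (1.0 + 3.0 * gA ** 2)   # ~879.5 s, matching tau_n = 879.4(6) s
```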
NASA Astrophysics Data System (ADS)
Anderson, J. L.
2008-12-01
Precise level surveys of the Puna Geothermal Ventures power plant site have been conducted at 2- to 3-year intervals over the past 16 years, following an initial pre-production baseline survey in 1992. Pre-1992 USGS studies near the plant showed slow general subsidence, and this pattern has continued since then. The average rate of subsidence for the first 11 years of the present survey series (1992-2003) was 0.71 cm per year. It was against this background of subsidence that small but significant upward movements were detected in 2005 in an area approximately 500 m wide directly under the power plant. This positive anomaly had an amplitude of only 0.5 cm but was clearly discernible because of the part-per-million resolution possible with traditional precise leveling. The 13-year (at that time) data set made it possible to interpret this event with confidence. The cause of the deformation was reported in 2005 to be shallow and localized in comparison with the factors contributing to the subsidence of the surrounding area. Subsequent drilling activity penetrated magma beneath the anomaly, providing strong physical evidence that fluid pressure was the probable cause.
Using hyperspectral data in precision farming applications
USDA-ARS?s Scientific Manuscript database
Precision farming practices such as variable rate applications of fertilizer and agricultural chemicals require accurate field variability mapping. This chapter investigated the value of hyperspectral remote sensing in providing useful information for five applications of precision farming: (a) Soil...
Jung, Won-Mo; Park, In-Soo; Lee, Ye-Seul; Kim, Chang-Eop; Lee, Hyangsook; Hahm, Dae-Hyun; Park, Hi-Joon; Jang, Bo-Hyoung; Chae, Younbyoung
2018-04-12
Understanding how doctors diagnose and treat diseases is important for understanding the underlying principles of selecting appropriate acupoints. We explored the pattern recognition process that relates symptoms and diseases to acupuncture treatment in a clinical setting. A total of 232 clinical records were collected using a Charting Language program. The relationship between symptom information and selected acupoints was trained using an artificial neural network (ANN). The network width of 11 hidden nodes, which gave the highest average precision score, was selected through tenfold cross-validation. Our ANN model could predict the selected acupoints based on symptom and disease information with an average precision score of 0.865 (precision, 0.911; recall, 0.811). This model is a useful tool for diagnostic classification or pattern recognition and for the prediction and modeling of acupuncture treatment based on clinical data obtained in a real-world setting. The relationship between symptoms and selected acupoints could be systematically characterized through knowledge discovery processes such as pattern identification.
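The tenfold cross-validation used for model selection can be sketched in a few lines. The fold construction below is a generic illustration; the ANN training itself is not reproduced, and the scoring step is left as a stub.

```python
import random

# Generic tenfold cross-validation split for the 232 clinical records:
# train on 9 folds, validate on the held-out fold, and average the validation
# precision over folds for each candidate hidden-node count.
def kfold_indices(n, k, seed=0):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]   # k disjoint folds covering 0..n-1

folds = kfold_indices(232, 10)
for i, val_fold in enumerate(folds):
    train = [j for f in folds[:i] + folds[i + 1:] for j in f]
    # here: train the ANN on `train`, evaluate average precision on `val_fold`
```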
NASA Astrophysics Data System (ADS)
Tian, C.; Wang, L.; Novick, K. A.
2016-12-01
High-precision triple oxygen isotope analysis can be used to improve our understanding of multiple hydrological and meteorological processes. Recent studies have focused on understanding 17O-excess variation in tropical storms, high-latitude snow and ice cores, and the spatial distribution of meteoric (tap) water. Data on the temporal scale of 17O-excess variation in mid-latitude precipitation are needed to better understand which processes control 17O-excess variations. This study assessed how the accuracy and precision of vapor δ17O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging time, and presents 17O-excess data from two years of event-based precipitation sampling in the east-central United States. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ2H, δ18O, and δ17O measurements. GISP and SLAP2 standards from the IAEA and four working standards were used to evaluate sensitivity to the three factors. Overall, the accuracy and precision of all isotope measurements were sensitive to concentration, with higher accuracy and precision generally observed at moderate vapor concentrations (10000-15000 ppm). Precision was also sensitive to the range of delta values, though this effect was smaller than the sensitivity to concentration, and it was much less sensitive to averaging time. Preliminary results showed that 17O-excess variation was lower in summer (23±17 per meg) than in winter (34±16 per meg), whereas spring values (30±21 per meg) were similar to those in fall (29±13 per meg), suggesting that kinetic fractionation influences the isotopic composition and 17O-excess differently in different seasons.
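The "per meg" quantity reported here follows the standard logarithmic definition of 17O-excess with the reference slope λ = 0.528; a sketch with illustrative delta values, not measurements from this study:

```python
from math import log

# 17O-excess = ln(1 + delta17O) - 0.528 * ln(1 + delta18O), reported in
# per meg (1e6). Delta values enter as per-mil (permil) numbers here.
def o17_excess_per_meg(d17_permil, d18_permil):
    d17, d18 = d17_permil / 1000.0, d18_permil / 1000.0
    return 1e6 * (log(1.0 + d17) - 0.528 * log(1.0 + d18))

# A winter-like illustration: delta18O = -15 permil, with delta17O chosen so
# the excess lands near the winter average reported above (~33 per meg).
x = o17_excess_per_meg(-7.9155, -15.0)
```

The logarithmic (δ′) form is used because 17O-excess variations are of order 10⁻⁵, far below the per-mil scale of the deltas themselves.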
Advanced bioanalytics for precision medicine.
Roda, Aldo; Michelini, Elisa; Caliceti, Cristiana; Guardigli, Massimo; Mirasoli, Mara; Simoni, Patrizia
2018-01-01
Precision medicine is a new paradigm that combines diagnostic, imaging, and analytical tools to produce accurate diagnoses and therapeutic interventions tailored to the individual patient. This approach stands in contrast to the traditional "one size fits all" concept, according to which researchers develop disease treatments and preventions for an "average" patient without considering individual differences. The "one size fits all" concept has led to many ineffective or inappropriate treatments, especially for pathologies such as Alzheimer's disease and cancer. Now, precision medicine is receiving massive funding in many countries, thanks to its social and economic potential in terms of improved disease prevention, diagnosis, and therapy. Bioanalytical chemistry is critical to precision medicine. This is because identifying an appropriate tailored therapy requires researchers to collect and analyze information on each patient's specific molecular biomarkers (e.g., proteins, nucleic acids, and metabolites). In other words, precision diagnostics is not possible without precise bioanalytical chemistry. This Trend article highlights some of the most recent advances, including massive analysis of multilayer omics, and new imaging technique applications suitable for implementing precision medicine. Graphical abstract: Precision medicine combines bioanalytical chemistry, molecular diagnostics, and imaging tools for performing accurate diagnoses and selecting optimal therapies for each patient.
Current status and future directions of precision agriculture for aerial application in the USA
USDA-ARS?s Scientific Manuscript database
Precision aerial application in the USA is less than a decade old since the development of the first variable-rate aerial application system. Many areas of the United States rely on readily available agricultural airplanes or helicopters for pest management. Variable-rate aerial application provides...
A precise measurement of the B_s^0 meson oscillation frequency.
Aaij, R; Abellán Beteta, C; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; d'Argent, P; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Buchanan, E; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Campana, P; Campora Perez, D; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chefdeville, M; Chen, S; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, 
M; Del Buono, L; Déléage, N; Demmer, M; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Ruscio, F; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Färber, C; Farley, N; Farry, S; Fay, R; Ferguson, D; Fernandez Albor, V; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fohl, K; Fol, P; Fontana, M; Fontanelli, F; C Forshaw, D; Forty, R; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; García Pardiñas, J; Garra Tico, J; Garrido, L; Gascon, D; Gaspar, C; Gauld, R; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; He, J; Head, T; Heijne, V; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Humair, T; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, S; Kecke, M; 
Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; K Kuonen, A; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusardi, N; Lusiani, A; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Manca, G; Mancinelli, G; Manning, P; Mapelli, A; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Müller, D; Müller, J; Müller, K; Müller, V; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, C J G; Osorio Rodrigues, B; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Palano, A; Palombo, F; Palutan, 
M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Pappenheimer, C; Parkes, C; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redi, F; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; W Ronayne, J; Rotondo, M; Rouvinet, J; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefkova, S; Steinkamp, O; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; 
Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Todd, J; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Trabelsi, K; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yu, J; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zhukov, V; Zucchelli, S
2016-01-01
The oscillation frequency, [Formula: see text], of [Formula: see text] mesons is measured using semileptonic decays with a [Formula: see text] or [Formula: see text] meson in the final state. The data sample corresponds to 3.0[Formula: see text] of pp collisions, collected by the LHCb experiment at centre-of-mass energies [Formula: see text] = 7 and 8[Formula: see text]. A combination of the two decay modes gives [Formula: see text], where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
NASA Astrophysics Data System (ADS)
Ligero, Rufino; Casas-Ruiz, Melquiades; Barrera, Manuel; Barbero, Luis
2010-05-01
The techniques for the direct measurement of the sedimentation rate are reliable but slow and imprecise, given that the time intervals of measurement cannot be very long. Consequently, it is an extremely laborious task to obtain a representative map of sedimentation rates, and such maps are available for very few zones. However, for most environmental studies it is very important to know the sedimentation rates. The high accuracy of gamma spectrometric techniques, together with the application of the model described in this work, has allowed the sedimentation rates over a wide spatial area such as the Bay of Cadiz to be determined with precision, and in considerably less time than with the traditional techniques. Even so, the experimental conditions required for the sample cores are fairly restrictive, and although the radiological method provides a quantitative advance in measurement, the experimental difficulty in the execution of the study is not greatly diminished. For this reason, a second model has been derived based on the measurement of the inventory, which offers economies in time and financial cost, and which allows the sedimentation rate in a region to be determined with satisfactory accuracy. Furthermore, it has been shown that the application of this model requires a precise determination of 137Cs inventories. The sedimentation rates estimated by the 137Cs inventory method ranged from 0.26 cm/year to 1.72 cm/year. The average sedimentation rate obtained is 0.59 cm/year, and this rate has been compared with those resulting from the application of the 210Pb dating technique; good agreement between the two procedures has been found. From the study carried out, it has been possible for the first time to draw a map of sedimentation rates for this zone, where numerous physical-chemical, oceanographic and ecological studies converge, since it is situated in a region of great environmental interest.
This area, which is representative of common coastal environmental scenarios, is particularly sensitive to perturbations related to climate change, and the results of the study will allow short- and medium-term evaluations of this change to be carried out.
3He(α, γ)7Be cross section in a wide energy range
NASA Astrophysics Data System (ADS)
Szücs, Tamás; Gyürky, György; Halász, Zoltán; Kiss, Gábor Gy.; Fülöp, Zsolt
2018-01-01
The reaction rate of the 3He(α,γ)7Be reaction is important both in Big Bang Nucleosynthesis (BBN) and in solar hydrogen burning. There have been many experimental and theoretical efforts to determine this reaction rate with high precision. Some long-standing issues, such as the different S(0) values predicted by activation and in-beam measurements, have been resolved by more precise investigations. However, recent, more detailed astrophysical model predictions require the reaction rate with even higher precision to unravel new issues such as the solar composition. One way to increase the precision is to provide a comprehensive dataset over a wide energy range, extending the experimental cross section database of this reaction. This paper presents a new cross section measurement between E_cm = 2.5 and 4.4 MeV, an energy range extending above the 7Be proton separation threshold.
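For context (this is the textbook definition, not a result of this paper), the astrophysical S-factor referred to above factors the Coulomb-barrier penetration out of the measured cross section so that extrapolation to stellar energies is smoother:

```latex
S(E) = \sigma(E)\, E\, e^{2\pi\eta}, \qquad
\eta = \frac{Z_1 Z_2 e^2}{\hbar v}
```

Here η is the Sommerfeld parameter for the colliding nuclei of charges Z1 and Z2 at relative velocity v, and S(0) is the zero-energy intercept of S(E) obtained by extrapolation.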
Zhang, Shuangyou; Wu, Jiutao; Leng, Jianxiao; Lai, Shunnan; Zhao, Jianye
2014-11-15
In this Letter, we demonstrate a fully stabilized Er:fiber frequency comb using a fiber-based, high-precision optical-microwave phase detector. To achieve high-precision, long-term phase locking of the repetition rate to a microwave reference, frequency control techniques (tuning pump power and cavity length) are combined as feedback. Since the pump power is used for stabilization of the repetition rate, we introduce a pair of intracavity prisms as a regulator for the carrier-envelope offset frequency, thereby phase locking one mode of the comb to the rubidium saturated-absorption transition line. The stabilized comb exhibits the same high stability as the reference for the repetition rate and provides a residual frequency instability of 3.6×10^-13 for each comb mode. The demonstrated stabilization scheme could provide a high-precision comb for optical communication and direct frequency comb spectroscopy.
Guo, Gang; Cai, Wei; Zhang, Xu
2016-11-01
The aim of the present study was to investigate a method of laparoscopic nephron-sparing surgery (LNSS) for renal cell carcinoma (RCC) based on the precise anatomy of the nephron, and to decrease the incidence of hemorrhage and urinary leakage. Between January 2012 and December 2013, 31 patients who presented to the General Hospital of the People's Liberation Army (Beijing, China) were treated for RCC. The mean tumor size was 3.4±0.7 cm in diameter (range, 1.2-6.0 cm). During surgery, the renal artery was blocked, and subsequently, an incision in the renal capsule and renal cortex was performed at 3-5 mm from the tumor edge. Subsequent to the incision of the renal parenchyma, scissors with blunt and sharp edges were used to separate the base of the tumor from the normal renal medulla, in the direction of the medullary rays in the renal pyramids. The basal blood vessels were incised following hemostasis of the region using bipolar coagulation. The minor renal calyces were stripped carefully and the wound was closed with absorbable sutures. The arterial occlusion time, duration of surgery, intraoperative bleeding volume, post-operative drainage volume, pathological results and complications were recorded. The surgery was successful for all patients. The estimated average intraoperative bleeding volume was 55.7 ml, the average surgical duration was 95.5 min, the average arterial occlusion time was 21.2 min, the average post-operative drainage volume was 92.3 ml and the average post-operative length of hospital stay was 6.1 days. No hemorrhage or urinary leakage was observed in the patients following the surgery. LNSS for RCC based on the precise anatomy of the nephron was concluded to be effective and feasible. The surgery is useful for the complete removal of tumors and guarantees a negative margin, which may also decrease the incidence of hemorrhage and urinary leakage following surgery.
Testing and evaluation of the LES-6 pulsed plasma thruster by means of a torsion pendulum system
NASA Technical Reports Server (NTRS)
Hamidian, J. P.; Dahlgren, J. B.
1973-01-01
Performance characteristics of the LES-6 pulsed plasma thruster over a range of input conditions were investigated by means of a torsion pendulum system. Parameters of particular interest included the impulse bit and time-average thrust (and their repeatability), specific impulse, mass ablated per discharge, specific thrust, energy per unit area, efficiency, and variation of performance with ignition command rate. Intermittency of the thruster as affected by input energy and igniter resistance was also investigated. Comparative experimental data are correlated with the data presented. The results of these tests indicate that the LES-6 thruster, with some identifiable design improvements, represents an attractive reaction control thruster for attitude control applications on long-life spacecraft requiring small metered impulse bits for precise pointing control of science instruments.
NASA Astrophysics Data System (ADS)
Durgadas, C. V.; Sharma, C. P.; Paul, W.; Rekha, M. R.; Sreenivasan, K.
2012-09-01
This study reports an aqueous synthesis of methotrexate (MTX)-conjugated gold nanoparticles (GNPs), their interaction with HepG2 cells, and the use of Raman imaging to observe cellular internalization and drug delivery. GNPs of average size 3.5-5 nm were stabilized using an amine-terminated bifunctional biocompatible copolymer and modified by conjugating MTX, an anticancer drug. The nanoparticles released MTX at a faster rate at acidic pH and were subsequently found to form aggregates. The Raman signals of cellular components were enhanced by the aggregated particles, enabling mapping to visualize site-specific drug delivery. The methodology shows potential for optimizing the characteristics of nanodrug carriers for emptying the cargo precisely at specified sites.
Analysis of possibilities of waste heat recovery in off-road vehicles
NASA Astrophysics Data System (ADS)
Wojciechowski, K. T.; Zybala, R.; Leszczynski, J.; Nieroda, P.; Schmidt, M.; Merkisz, J.; Lijewski, P.; Fuc, P.
2012-06-01
The paper presents the preliminary results of waste heat recovery investigations for an agricultural tractor engine (7.4 dm3) and an excavator engine (7.2 dm3) in real operating conditions. The temperature of the exhaust gases and the exhaust mass flow rate have been measured by a precise portable exhaust emissions analyzer, SEMTECH DS (SENSORS Inc.). The analysis shows that the engines of the tested vehicles operate at approximately constant speed and load. The average temperature of the exhaust gases is in the range from 300 to 400 °C, with maximum gas mass flows of 1100 kg/h and 1400 kg/h for the tractor and excavator engines, respectively. Preliminary tests show that the application of TEGs in the tested off-road vehicles offers much more beneficial conditions for waste heat recovery than in the case of automotive engines.
The use of piezosurgery in cranial surgery in children.
Ramieri, Valerio; Saponaro, Gianmarco; Lenzi, Jacopo; Caporlingua, Federico; Polimeni, Antonella; Silvestri, Alessandro; Pizzuti, Antonio; Roggini, Mario; Tarani, Luigi; Papoff, Paola; Giancotti, Antonella; Castori, Marco; Manganaro, Lucia; Cascone, Piero
2015-05-01
Piezosurgery is an alternative, now widely tested, surgical technique that uses ultrasound for bone cutting. The device sections hard tissues without harming surrounding soft tissues. The authors analyzed their experience with piezosurgery in craniomaxillofacial procedures. A comparison of operation times and complication rates between piezosurgery and traditional cutting instruments was performed. A total of 27 patients affected by craniosynostosis were examined (15 females and 12 males; average age, 5.5 months). The aim of this study was to analyze the advantages and disadvantages of piezosurgery in pediatric craniofacial procedures. The piezoelectric device in this study was shown to be a valid instrument for accurate bone cutting, because it allows a more precise and safer cut, without the risk of harming surrounding tissues.
NASA Astrophysics Data System (ADS)
Matthews, J. B. R.
2012-09-01
Sea Surface Temperature (SST) measurements have been obtained from a variety of different platforms, instruments and depths over the post-industrial period. Today most measurements come from ships, moored and drifting buoys and satellites. Shipboard methods include temperature measurement of seawater sampled by bucket and in engine cooling water intakes. Engine intake temperatures are generally thought to average a few tenths of a °C warmer than simultaneous bucket temperatures. Here I review SST measurement methods, studies comparing shipboard methods by field experiment and adjustments applied to SST datasets to account for variable methods. In opposition to contemporary thinking, I find average bucket-intake temperature differences reported from field studies inconclusive. Non-zero average differences often have associated standard deviations that are several times larger than the averages themselves. Further, average differences have been found to vary widely between ships and between cruises on the same ship. The cause of non-zero average differences is typically unclear given the general absence of additional temperature observations to those from buckets and engine intakes. Shipboard measurements appear of variable quality, highly dependent upon the accuracy and precision of the thermometer used and the care of the observer where manually read. Methods are generally poorly documented, with written instructions not necessarily reflecting actual practices of merchant mariners. Measurements cannot be expected to be of high quality where obtained by untrained sailors using thermometers of low accuracy and precision.
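The review's central statistical point, that a non-zero average bucket-intake difference is inconclusive when its standard deviation is several times larger, can be sketched as follows. The numbers here are hypothetical, chosen only to mimic the situation described; they are not from any of the field studies reviewed.

```python
import math

def mean_and_sem(diffs):
    """Mean and standard error of the mean for paired temperature differences."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)

# Hypothetical bucket-minus-intake differences in °C: the average is -0.1 °C,
# but the sample standard deviation (~0.45 °C) is several times larger.
diffs = [-0.5, 0.4, -0.3, 0.6, -0.7, 0.2, -0.4, 0.1, -0.6, 0.2]
mean, sem = mean_and_sem(diffs)
# |mean| < 2 * sem: the non-zero average cannot be distinguished from zero
print(mean, sem, abs(mean) < 2 * sem)
```

With such spreads, a two-standard-error interval around the average difference comfortably contains zero, which is the sense in which the reported averages are inconclusive.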
Precision orbit determination performance for CryoSat-2
NASA Astrophysics Data System (ADS)
Schrama, Ernst
2018-01-01
In this paper we discuss our efforts to perform precision orbit determination (POD) of CryoSat-2, which depends on Doppler and satellite laser ranging tracking data. A dynamic orbit model is set up and the residuals between the model and the tracking data are evaluated. The average r.m.s. of the 10 s averaged Doppler tracking pass residuals is approximately 0.39 mm/s, and the average of the laser tracking pass residuals is 1.42 cm. Among a number of other tests to verify the quality of the orbit solution, we compare our computed orbits against three independent external trajectories provided by the CNES; the CNES products are part of the CryoSat-2 products distributed by ESA. The radial differences of our solution relative to the CNES precision orbits show an average r.m.s. of 1.25 cm between Jun-2010 and Apr-2017. The SIRAL altimeter crossover difference statistics demonstrate that the quality of our orbit solution is comparable to that of the POE solution computed by the CNES. In this paper we discuss three important changes in our POD activities that have brought the orbit performance to this level: the way we implement temporal gravity accelerations observed by GRACE, the implementation of ITRF2014 coordinates and velocities for the DORIS beacons and the SLR tracking sites, and an adjustment of the SLR retroreflector position within the satellite reference frame. An unexpected result is a statistically significant pattern in the median of the 10 s Doppler tracking residuals in the South Atlantic Anomaly (SAA) area, where the median of the velocity residuals varies in the range of -0.15 to +0.15 mm/s.
NASA Astrophysics Data System (ADS)
Salenbien, W.; Baker, P. A.; Fritz, S. C.; Guedron, S.
2014-12-01
Lake Titicaca is one of the most important archives of paleoclimate in tropical South America, and prior studies have elucidated patterns of climate variation at varied temporal scales over the past 0.5 Ma. Yet, slow sediment accumulation rates in the main deeper basin of the lake have precluded analysis of the lake's most recent history at high resolution. To obtain a paleoclimate record of the last few millennia at multi-decadal resolution, we obtained five short cores, ranging from 139 to 181 cm in length, from the shallower Wiñaymarka sub-basin of Lake Titicaca, where sedimentation rates are higher than in the lake's main basin. Selected cores have been analyzed for their geochemical signature by scanning XRF, diatom stratigraphy, sedimentology, and for 14C age dating. A total of 72 samples were 14C-dated using a Gas Ion Source automated high-throughput method for carbonate samples (mainly Littoridina sp. and Taphius montanus gastropod shells) at NOSAMS (Woods Hole Oceanographic Institution) with an analytical precision of about 2%. The method has lower analytical precision compared with traditional AMS radiocarbon dating, but the lower cost enables analysis of a larger number of samples, and the error associated with the lower precision is relatively small for younger samples (< ~8,000 years). A 172-cm-long core was divided into centimeter-long sections, and 47 14C dates were obtained from 1-cm intervals, averaging one date every 3-4 cm. The other cores were radiocarbon dated with a sparser sampling density that focused on visual unconformities and shell beds. The high-resolution radiocarbon analysis reveals complex sedimentation patterns in visually continuous sections, with abundant indicators of bioturbated or reworked sediments and periods of very rapid sediment accumulation. These features are not evident in the sparser sampling strategy but have significant implications for reconstructing past lake level and paleoclimatic history.
Development and evaluation of a hybrid averaged orbit generator
NASA Technical Reports Server (NTRS)
Mcclain, W. D.; Long, A. C.; Early, L. W.
1978-01-01
A rapid orbit generator based on a first-order application of the Generalized Method of Averaging has been developed for the Research and Development (R&D) version of the Goddard Trajectory Determination System (GTDS). The evaluation of the averaged equations of motion can use both numerically averaged and recursively evaluated, analytically averaged perturbation models. These equations are numerically integrated to obtain the secular and long-period motion. Factors affecting efficient orbit prediction are discussed and guidelines are presented for treatment of each major perturbation. Guidelines for obtaining initial mean elements compatible with the theory are presented. An overview of the orbit generator is presented and comparisons with high precision methods are given.
On the precision of experimentally determined protein folding rates and φ-values
De Los Rios, Miguel A.; Muralidhara, B.K.; Wildes, David; Sosnick, Tobin R.; Marqusee, Susan; Wittung-Stafshede, Pernilla; Plaxco, Kevin W.; Ruczinski, Ingo
2006-01-01
φ-Values, a relatively direct probe of transition-state structure, are an important benchmark in both experimental and theoretical studies of protein folding. Recently, however, significant controversy has emerged regarding the reliability with which φ-values can be determined experimentally: Because φ is a ratio of differences between experimental observables it is extremely sensitive to errors in those observations when the differences are small. Here we address this issue directly by performing blind, replicate measurements in three laboratories. By monitoring within- and between-laboratory variability, we have determined the precision with which folding rates and φ-values are measured using generally accepted laboratory practices and under conditions typical of our laboratories. We find that, unless the change in free energy associated with the probing mutation is quite large, the precision of φ-values is relatively poor when determined using rates extrapolated to the absence of denaturant. In contrast, when we employ rates estimated at nonzero denaturant concentrations or assume that the slopes of the chevron arms (mf and mu) are invariant upon mutation, the precision of our estimates of φ is significantly improved. Nevertheless, the reproducibility we thus obtain still compares poorly with the confidence intervals typically reported in the literature. This discrepancy appears to arise due to differences in how precision is calculated, the dependence of precision on the number of data points employed in defining a chevron, and interlaboratory sources of variability that may have been largely ignored in the prior literature. PMID:16501226
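For reference, the standard definition underlying the error analysis above (sign conventions vary between laboratories, and this is general background rather than a formula taken from this study) expresses φ as the ratio of the mutational change in the folding activation free energy to the change in equilibrium stability:

```latex
\phi \;=\; \frac{\Delta\Delta G^{\ddagger}}{\Delta\Delta G_{\mathrm{eq}}}
\;=\; \frac{RT\,\ln\!\bigl(k_f^{\mathrm{wt}}/k_f^{\mathrm{mut}}\bigr)}
           {\Delta G_{\mathrm{mut}} - \Delta G_{\mathrm{wt}}}
```

Both numerator and denominator are small differences between measured quantities, so their relative errors inflate φ's uncertainty sharply whenever the equilibrium perturbation ΔΔG_eq is small, which is exactly the precision problem the replicate measurements quantify.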
Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics
Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris
2016-01-01
As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314
Haslem, Derrick S.; Chakravarty, Ingo; Fulde, Gail; Gilbert, Heather; Tudor, Brian P.; Lin, Karen; Ford, James M.; Nadauld, Lincoln D.
2018-01-01
The impact of precision oncology on guiding treatment decisions of late-stage cancer patients was previously studied in a retrospective analysis. However, the overall survival and costs were not previously evaluated. We report the overall survival and healthcare costs associated with precision oncology in these patients with advanced cancer. Building on a matched cohort study of 44 patients with metastatic cancer who received all of their care within a single institution, we evaluated the overall survival and healthcare costs for each patient. We analyzed the outcomes of 22 patients who received genomic testing and targeted therapy (precision oncology) between July 1, 2013 and January 31, 2015, and compared them to 22 historical control patients (control) who received standard chemotherapy (N = 17) or best supportive care (N = 5). The median overall survival was 51.7 weeks for the targeted treatment group and 25.8 weeks for the control group (P = 0.008) when matching on age, gender, histological diagnosis and previous treatment lines. Average costs over the entire period were $2,720 per week for the targeted treatment group and $3,453 per week for the control group, (P = 0.036). A separate analysis of 1,814 patients with late-stage cancer diagnoses found that those who received a targeted cancer treatment (N = 93) had 6.9% lower costs in the last 3 months of life compared with those who did not. These findings suggest that precision oncology may improve overall survival for refractory cancer patients while lowering average per-week healthcare costs, resource utilization and end-of-life costs. PMID:29552312
7075-T6 and 2024-T351 Aluminum Alloy Fatigue Crack Growth Rate Data
NASA Technical Reports Server (NTRS)
Forth, Scott C.; Wright, Christopher W.; Johnston, William M., Jr.
2005-01-01
Experimental test procedures for the development of fatigue crack growth rate data have been standardized by the American Society for Testing and Materials. Over the past 30 years several gradual changes have been made to the standard without rigorous assessment of the effect these changes have on the precision or variability of the data generated. Therefore, the ASTM committee on fatigue crack growth has initiated an international round robin test program to assess the precision and variability of test results generated using standard E647-00. Crack growth rate data presented in this report, in support of the ASTM round robin, show excellent precision and repeatability.
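The rate data discussed above are derived from crack-length-versus-cycles records; the simplest E647-style data reduction is the secant (point-to-point) method, sketched below. The crack length and cycle numbers here are illustrative values, not measurements from the round robin.

```python
def secant_growth_rates(a, N):
    """Secant (point-to-point) crack growth rates: da/dN over each interval,
    reported at the interval's average crack length, as in the ASTM E647
    secant data-reduction scheme."""
    rates = []
    for i in range(len(a) - 1):
        dadn = (a[i + 1] - a[i]) / (N[i + 1] - N[i])  # mm/cycle
        a_avg = 0.5 * (a[i] + a[i + 1])               # mm
        rates.append((a_avg, dadn))
    return rates

# Illustrative crack length [mm] vs. accumulated cycle count
a = [10.0, 10.5, 11.2, 12.1]
N = [0, 50_000, 90_000, 120_000]
print(secant_growth_rates(a, N))
```

Scatter in such point-to-point rates is precisely what the round robin's precision assessment quantifies; the standard also offers an incremental polynomial method that smooths this scatter.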
Long range personalized cancer treatment strategies incorporating evolutionary dynamics.
Yeang, Chen-Hsiang; Beckman, Robert A
2016-10-22
Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated sub-clonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained utilizing dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug resistance states, and reevaluate optimal therapy every 45 days. However, the optimization is performed in single 45 day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes at 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45 day block, in simulations involving both 2 and 3 non-cross resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). Simulations utilize populations of 764,000 and 1,700,000 virtual patients for 2 and 3 drug cases, respectively. Each virtual patient represents a unique clinical presentation including sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO. 
Furthermore, in the subset of individual virtual patients demonstrating clinically significant difference in outcome between approaches, by far the majority show an advantage of multi-step or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization, in 2 and 3 drug cases respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.
NASA Astrophysics Data System (ADS)
Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong
2016-09-01
With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery have increased greatly, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing imagery offers high resolution and wide coverage, which is of great value for urban planning, transportation management, travel route choice and so on. First, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, histogram equalization and linear enhancement were applied to the preprocessing results in order to obtain the optimal threshold for image segmentation. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. The two processing results were then combined. Finally, geometric characteristics were used to complete road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright vehicle extraction and dark vehicle extraction, and the extraction results of the two kinds of vehicles were combined to obtain the final result. The experimental results demonstrated that the proposed algorithm achieves high precision in vehicle information extraction from different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average missed detection rate was about 13.60% and the average accuracy was approximately 91.26%.
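The NDVI/NDWI masking step can be sketched per pixel as below. The index formulas are the standard band-ratio definitions (the NDWI here is the green/NIR McFeeters form); the thresholds and reflectance values are hypothetical, since the paper does not state its thresholds in the abstract.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index; high values indicate vegetation."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized difference water index (green/NIR form); high values indicate water."""
    return (green - nir) / (green + nir)

def is_road_candidate(green, red, nir, veg_thresh=0.3, water_thresh=0.3):
    """Keep a pixel as a road candidate only if both vegetation and water
    are suppressed (thresholds are illustrative, not from the paper)."""
    return ndvi(nir, red) < veg_thresh and ndwi(green, nir) < water_thresh

# Hypothetical surface reflectances: an asphalt-like pixel vs. a vegetated pixel
print(is_road_candidate(green=0.12, red=0.15, nir=0.18))
print(is_road_candidate(green=0.08, red=0.06, nir=0.45))
```

In the paper's pipeline this masking is combined with threshold segmentation of the enhanced image before geometric filtering extracts the final road network.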
On measurement noise in the European TWSTFT network.
Piester, Dirk; Bauch, Andreas; Becker, Jürgen; Staliuniene, Egle; Schlunegger, Christian
2008-09-01
Two-way satellite time and frequency transfer (TWSTFT) using geostationary telecommunication satellites is widely used in the timing community today and has also been chosen as the primary means to effect synchronization of elements of the ground segment of the European satellite navigation system Galileo. We investigated the link performance in a multistation network based on operational parameters such as the number of simultaneously transmitting stations, transmit and receive power, and chip rates of the pseudorandom noise modulation of the transmitted signals. Our work revealed that TWSTFT through a "quiet" transponder channel (2 stations transmitting only) leads to a measurement noise, expressed by the 1 pps jitter, reduced by a factor of 1.4 compared with a busy transponder carrying signals of 12 stations. The frequency transfer capability expressed by the Allan deviation is reduced at short averaging times by the same amount. At averaging times of >1 d, no such reduction could be observed, which points to the fact that other noise sources dominate at such averaging times. We also found that higher transmit power increases the carrier-to-noise density ratio at the receive station and thus entails lower jitter, but causes interference with other stations' signals. In addition, the use of lower chip rates, which could be accommodated by a reduced assigned bandwidth on the satellite transponder, is not recommended. The 1 pps jitter would go up by a factor of 2.5 when going from 2.5 MCh/s to 1 MCh/s. The 2 Galileo precise timing facilities (PTFs) can be included in the currently operated network of 12 stations in Europe and all requirements on the TWSTFT performance can be met, provided that suitable ground equipment will be installed in the Galileo ground segment.
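The Allan deviation used above to express frequency transfer capability has a simple standard estimator; a minimal sketch follows. The input sequence is a toy deterministic example (not TWSTFT data), chosen so the expected values can be stated exactly.

```python
import math

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging factor m, from
    fractional-frequency samples y taken at a fixed sampling interval:
    sigma_y^2 = <(ybar_{i+1} - ybar_i)^2> / 2 over consecutive m-sample means."""
    n_blocks = len(y) // m
    yb = [sum(y[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    avar = sum((yb[i + 1] - yb[i]) ** 2
               for i in range(n_blocks - 1)) / (2 * (n_blocks - 1))
    return math.sqrt(avar)

# Toy alternating sequence: at m = 1 the deviation is sqrt(2); averaging
# adjacent pairs (m = 2) cancels the fluctuation entirely.
y = [1.0 if i % 2 == 0 else -1.0 for i in range(16)]
print(allan_deviation(y, 1), allan_deviation(y, 2))
```

On real transfer data the interesting behavior is how this quantity scales with averaging time, which is how the paper distinguishes link measurement noise (short averaging times) from other noise sources (averaging times beyond one day).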
Automatically identifying health outcome information in MEDLINE records.
Demner-Fushman, Dina; Few, Barbara; Hauser, Susan E; Thoma, George
2006-01-01
Understanding the effect of a given intervention on the patient's health outcome is one of the key elements in providing optimal patient care. This study presents a methodology for automatic identification of outcomes-related information in medical text and evaluates its potential in satisfying clinical information needs related to health care outcomes. An annotation scheme based on an evidence-based medicine model for critical appraisal of evidence was developed and used to annotate 633 MEDLINE citations. Textual, structural, and meta-information features essential to outcome identification were learned from the created collection and used to develop an automatic system. Accuracy of automatic outcome identification was assessed in an intrinsic evaluation and in an extrinsic evaluation, in which ranking of MEDLINE search results obtained using PubMed Clinical Queries relied on identified outcome statements. The accuracy and positive predictive value of outcome identification were calculated. Effectiveness of the outcome-based ranking was measured using mean average precision and precision at rank 10. Automatic outcome identification achieved 88% to 93% accuracy. The positive predictive value of individual sentences identified as outcomes ranged from 30% to 37%. Outcome-based ranking improved retrieval accuracy, tripling mean average precision and achieving 389% improvement in precision at rank 10. Preliminary results in outcome-based document ranking show potential validity of the evidence-based medicine-model approach in timely delivery of information critical to clinical decision support at the point of service.
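The two ranking metrics reported above, mean average precision and precision at rank k, can be computed as follows. The relevance judgments and ranked lists here are a hypothetical toy query, constructed only to show how outcome-based re-ranking raises both metrics; they are not data from the study.

```python
def average_precision(relevant, ranked):
    """Average precision of one ranked list given a set of relevant document ids:
    mean of precision values at each rank where a relevant document appears."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def precision_at_k(relevant, ranked, k=10):
    """Fraction of the top-k results that are relevant."""
    return sum(1 for doc in ranked[:k] if doc in relevant) / k

# Hypothetical query with relevant citations {1, 2, 3}: the re-ranked list
# moves them toward the top, improving both metrics.
relevant = {1, 2, 3}
baseline = [9, 1, 8, 7, 2, 6, 5, 4, 3, 0]
reranked = [1, 2, 9, 3, 8, 7, 6, 5, 4, 0]
print(average_precision(relevant, baseline), average_precision(relevant, reranked))
print(precision_at_k(relevant, baseline, k=5), precision_at_k(relevant, reranked, k=5))
```

Mean average precision, as reported in the abstract, is simply this average precision averaged over all evaluated queries.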
Ding, Michael Q; Chen, Lujia; Cooper, Gregory F; Young, Jonathan D; Lu, Xinghua
2018-02-01
Precision oncology involves identifying drugs that will effectively treat a tumor and then prescribing an optimal clinical treatment regimen. However, most first-line chemotherapy drugs do not have biomarkers to guide their application. For molecularly targeted drugs, using the genomic status of a drug target as a therapeutic indicator has limitations. In this study, machine learning methods (e.g., deep learning) were used to identify informative features from genome-scale omics data and to train classifiers for predicting the effectiveness of drugs in cancer cell lines. The methodology introduced here can accurately predict the efficacy of drugs, regardless of whether they are molecularly targeted or nonspecific chemotherapy drugs. This approach, on a per-drug basis, can identify sensitive cancer cells with an average sensitivity of 0.82 and specificity of 0.82; on a per-cell line basis, it can identify effective drugs with an average sensitivity of 0.80 and specificity of 0.82. This report describes a data-driven precision medicine approach that is not only generalizable but also optimizes therapeutic efficacy. The framework detailed herein, when successfully translated to clinical environments, could significantly broaden the scope of precision oncology beyond targeted therapies, benefiting an expanded proportion of cancer patients. Mol Cancer Res; 16(2); 269-78. ©2017 AACR.
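The per-drug sensitivity and specificity quoted above follow from a standard confusion matrix. A minimal sketch; the counts are illustrative numbers chosen to reproduce the reported 0.82/0.82 averages, not the study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard definitions: sensitivity = TP/(TP+FN) on truly sensitive
    cell lines, specificity = TN/(TN+FP) on truly resistant ones."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts for one drug evaluated across 100 cell lines
sens, spec = sensitivity_specificity(tp=41, fn=9, tn=41, fp=9)
```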
Increasing Endurance by Building Fluency: Precision Teaching Attention Span.
ERIC Educational Resources Information Center
Binder, Carl; And Others
1990-01-01
Precision teaching techniques can be used to chart students' attention span or endurance. Individual differences in attention span can then be better understood and dealt with effectively. The effects of performance duration on performance level, on error rates, and on learning rates are discussed. Implications for classroom practice are noted.…
Determining Energy Expenditure during Some Household and Garden Tasks.
ERIC Educational Resources Information Center
Gunn, Simon M.; Brooks, Anthony G.; Withers, Robert T.; Gore, Christopher J.; Owen, Neville; Booth, Michael L.; Bauman, Adrian E.
2002-01-01
Calculated the reproducibility and precision for VO2 during moderate-paced walking and four housework and gardening activities, examining which rated at least 3.0 when calculating exercise intensity in METs and multiples of measured resting metabolic rate (MRM). VO2 was measured with reproducibility and precision. Expressing energy expenditure in…
Sensor-based precision fertilization for field crops
USDA-ARS?s Scientific Manuscript database
Since the development of the first viable variable-rate fertilizer systems in the upper Midwest USA, precision agriculture has been evolving for nearly three decades. Early precision fertilization practice relied on laboratory analysis of soil samples collected on a spatial pattern to define the nutrient-s...
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
Intercomparison of SO2 camera systems for imaging volcanic gas plumes
Kern, Christoph; Lübcke, Peter; Bobrowski, Nicole; Campion, Robin; Mori, Toshiya; Smekens, Jean-Francois; Stebel, Kerstin; Tamburello, Giancarlo; Burton, Mike; Platt, Ulrich; Prata, Fred
2015-01-01
SO2 camera systems are increasingly being used to image volcanic gas plumes. The ability to derive SO2 emission rates directly from the acquired imagery at high time resolution allows volcanic process studies that incorporate other high time-resolution datasets. Though the general principles behind the SO2 camera have remained the same for a number of years, recent advances in CCD technology and an improved understanding of the physics behind the measurements have driven a continuous evolution of the camera systems. Here we present an intercomparison of seven different SO2 cameras. In the first part of the experiment, the various technical designs are compared and the advantages and drawbacks of individual design options are considered. Though the ideal design was found to be dependent on the specific application, a number of general recommendations are made. Next, a time series of images recorded by all instruments at Stromboli Volcano (Italy) is compared. All instruments were easily able to capture SO2 clouds emitted from the summit vents. Quantitative comparison of the SO2 load in an individual cloud yielded an intra-instrument precision of about 12%. From the imagery, emission rates were then derived according to each group's standard retrieval process. A daily average SO2 emission rate of 61 ± 10 t/d was calculated. Due to differences in spatial integration methods and plume velocity determination, the time-dependent progression of SO2 emissions varied significantly among the individual systems. However, integration over distinct degassing events yielded comparable SO2 masses. Based on the intercomparison data, we find an approximate 1-sigma precision of 20% for the emission rates derived from the various SO2 cameras. Though it may still be improved in the future, this is currently within the typical accuracy of the measurement and is considered sufficient for most applications.
Earth Rotation Parameter Solutions using BDS and GPS Data from the MGEX Network
NASA Astrophysics Data System (ADS)
Xu, Tianhe; Yu, Sumei; Li, Jiajing; He, Kaifei
2014-05-01
Earth rotation parameters (ERPs) are necessary for the mutual transformation between the celestial reference frame and the Earth-fixed reference frame. They are very important for satellite precise orbit determination (POD) and for high-precision space navigation and positioning. In this paper, the determination of ERPs, including polar motion (PM), polar motion rate (PMR), and length of day (LOD), is presented using BDS and GPS data of June 2013 from the MGEX network, based on least-squares (LS) estimation with constraint conditions. BDS and GPS data from 16 co-located stations of the MGEX network are used here for the first time to estimate the ERPs. The results show that the RMSs of the x and y component errors of PM and PMR are about 0.9 mas, 1.0 mas, 0.2 mas/d, and 0.3 mas/d, respectively, using BDS data. The RMS of LOD is about 0.03 ms/d using BDS data. The RMSs of the x and y component errors of PM and PMR are about 0.2 mas and 0.2 mas/d, respectively, using GPS data. The RMS of LOD is about 0.02 ms/d using GPS data. The optimal relative weight is determined by variance component estimation when combining BDS and GPS data. The accuracy improvement from adding BDS data is between 8% and 20% for PM and PMR. There is no obvious improvement in LOD when BDS data are involved. System biases between BDS and GPS are also resolved per station; they are very stable from day to day, with an average accuracy of about 20 cm. Keywords: Earth rotation parameter; International GNSS Service; polar motion; length of day; least squares with constraint condition. Acknowledgments: This work was supported by the Natural Science Foundation of China (41174008) and the Foundation for the Author of National Excellent Doctoral Dissertation of China (2007B51).
Geiler-Samerotte, Kerry A; Hashimoto, Tatsunori; Dion, Michael F; Budnik, Bogdan A; Airoldi, Edoardo M; Drummond, D Allan
2013-01-01
Countless studies monitor the growth rate of microbial populations as a measure of fitness. However, an enormous gap separates growth-rate differences measurable in the laboratory from those that natural selection can distinguish efficiently. Taking advantage of the recent discovery that transcript and protein levels in budding yeast closely track growth rate, we explore the possibility that growth rate can be more sensitively inferred by monitoring the proteomic response to growth, rather than growth itself. We find a set of proteins whose levels, in aggregate, enable prediction of growth rate to a higher precision than direct measurements. However, we find little overlap between these proteins and those that closely track growth rate in other studies. These results suggest that, in yeast, the pathways that set the pace of cell division can differ depending on the growth-altering stimulus. Still, with proper validation, protein measurements can provide high-precision growth estimates that allow extension of phenotypic growth-based assays closer to the limits of evolutionary selection.
Are false-positive rates leading to an overestimation of noise-induced hearing loss?
Schlauch, Robert S; Carney, Edward
2011-04-01
To estimate false-positive rates for rules proposed to identify early noise-induced hearing loss (NIHL) using the presence of notches in audiograms. Audiograms collected from school-age children in a national survey of health and nutrition (the Third National Health and Nutrition Examination Survey [NHANES III]; National Center for Health Statistics, 1994) were examined using published rules for identifying noise notches at various pass-fail criteria. These results were compared with computer-simulated "flat" audiograms. The proportion of these identified as having a noise notch is an estimate of the false-positive rate for a particular rule. Audiograms from the NHANES III for children 6-11 years of age yielded notched audiograms at rates consistent with simulations, suggesting that this group does not have significant NIHL. Further, pass-fail criteria for rules suggested by expert clinicians, applied to NHANES III audiometric data, yielded unacceptably high false-positive rates. Computer simulations provide an effective method for estimating false-positive rates for protocols used to identify notched audiograms. Audiometric precision could possibly be improved by (a) eliminating systematic calibration errors, including a possible problem with reference levels for TDH-style earphones; (b) repeating and averaging threshold measurements; and (c) using earphones that yield lower variability at 6.0 and 8.0 kHz, two frequencies critical for identifying noise notches.
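The simulation idea above — generating "flat" audiograms whose apparent notches come only from measurement noise, then counting how often a notch rule fires — can be sketched as follows. The notch rule, noise level, and frequency set below are illustrative stand-ins, not the published rules evaluated in the study:

```python
import random

FREQS = [0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0]  # kHz

def simulated_flat_audiogram(sd=5.0):
    """Thresholds for a normal-hearing ear: a true value of 0 dB HL at every
    frequency, plus Gaussian measurement noise (kept continuous here,
    although clinical audiometry uses 5-dB steps)."""
    return {f: random.gauss(0.0, sd) for f in FREQS}

def has_notch(audiogram, depth=10.0):
    """Illustrative notch rule (not one of the published rules): the worst
    threshold at 3, 4, or 6 kHz exceeds both the 2-kHz and the 8-kHz
    thresholds by at least `depth` dB."""
    notch = max(audiogram[f] for f in (3.0, 4.0, 6.0))
    return (notch - audiogram[2.0] >= depth) and (notch - audiogram[8.0] >= depth)

random.seed(1)
n = 10_000
false_positives = sum(has_notch(simulated_flat_audiogram()) for _ in range(n))
rate = false_positives / n  # noise alone produces some "notched" audiograms
```

Even though every simulated ear is flat by construction, the rule flags a nonzero fraction of them, which is exactly the false-positive rate the study estimates for each published rule.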
Berman, Elena S.F.; Levin, Naomi E.; Landais, Amaelle; Li, Shuning; Owano, Thomas
2013-01-01
Stable isotopes of water have long been used to improve understanding of the hydrological cycle, catchment hydrology, and polar climate. Recently, there has been increasing interest in measurement and use of the less-abundant 17O isotope in addition to 2H and 18O. Off-axis integrated cavity output spectroscopy (OA-ICOS) is demonstrated for accurate and precise measurements of δ18O, δ17O, and 17O-excess in liquid water. OA-ICOS involves no sample conversion and has a small footprint, allowing measurements to be made by the researchers collecting the samples. Repeated (514) high-throughput measurements of the international isotopic reference water standard GISP demonstrate the precision and accuracy of OA-ICOS: δ18OVSMOW-SLAP = −24.74 ± 0.07‰ (1σ) and δ17OVSMOW-SLAP = −13.12 ± 0.05‰ (1σ). For comparison, the IAEA value for δ18OVSMOW-SLAP is −24.76 ± 0.09‰ (1σ), and an average of previously reported values for δ17OVSMOW-SLAP is −13.12 ± 0.06‰ (1σ). Multiple (26) high-precision measurements of GISP provide a 17O-excessVSMOW-SLAP of 23 ± 10 per meg (1σ); an average of previously reported values for 17O-excessVSMOW-SLAP is 22 ± 11 per meg (1σ). For all these OA-ICOS measurements, precision can be further enhanced by additional averaging. OA-ICOS measurements were compared with those of two independent isotope ratio mass spectrometry (IRMS) laboratories and shown to have accuracy and precision comparable to current fluorination-IRMS techniques in δ18O, δ17O, and 17O-excess. The ability to measure δ18O, δ17O, and 17O-excess in liquid water accurately, inexpensively, and without sample conversion is expected to vastly increase the application of δ17O and 17O-excess measurements to scientific understanding of the water cycle, atmospheric convection, and climate modeling, among others. PMID:24032448
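The 17O-excess values quoted here are conventionally defined from the logarithmic delta values, 17O-excess = ln(δ17O + 1) − 0.528 · ln(δ18O + 1), expressed in per meg (10⁻⁶); the λ = 0.528 reference slope is the community convention, assumed here to be the one this study uses. A quick arithmetic check with the GISP means reported above lands within the quoted 1σ window of 23 ± 10 per meg:

```python
import math

def o17_excess_per_meg(d18o_permil, d17o_permil, slope=0.528):
    """17O-excess = ln(delta17O + 1) - slope * ln(delta18O + 1), in per meg.
    Inputs are delta values in permil (parts per thousand)."""
    d18 = d18o_permil / 1000.0
    d17 = d17o_permil / 1000.0
    return (math.log(1.0 + d17) - slope * math.log(1.0 + d18)) * 1e6

# GISP means from this study: delta18O = -24.74 permil, delta17O = -13.12 permil
excess = o17_excess_per_meg(-24.74, -13.12)
```

The rounding of the two delta means to two decimals is enough to move the result by several per meg, so agreement at this level is all that should be expected.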
Real-time Nyquist signaling with dynamic precision and flexible non-integer oversampling.
Schmogrow, R; Meyer, M; Schindler, P C; Nebendahl, B; Dreschmann, M; Meyer, J; Josten, A; Hillerkuss, D; Ben-Ezra, S; Becker, J; Koos, C; Freude, W; Leuthold, J
2014-01-13
We demonstrate two efficient processing techniques for Nyquist signals, namely computation of signals using dynamic precision as well as arbitrary rational oversampling factors. With these techniques along with massively parallel processing it becomes possible to generate and receive high data rate Nyquist signals with flexible symbol rates and bandwidths, a feature which is highly desirable for novel flexgrid networks. We achieved maximum bit rates of 252 Gbit/s in real-time.
Mendez, Bernardino M; Chiodo, Michael V; Patel, Parit A
2015-07-01
Virtual surgical planning using three-dimensional (3D) printing technology has improved surgical efficiency and precision. A limitation to this technology is that production of 3D surgical models requires a third-party source, leading to increased costs (up to $4000) and prolonged assembly times (averaging 2-3 weeks). The purpose of this study is to evaluate the feasibility, cost, and production time of customized skull models created by an "in-office" 3D printer for craniofacial reconstruction. Two patients underwent craniofacial reconstruction with the assistance of "in-office" 3D printing technology. Three-dimensional skull models were created from a bioplastic filament with a 3D printer using computed tomography (CT) image data. The cost and production time for each model were measured. For both patients, a customized 3D surgical model was used preoperatively to plan split calvarial bone grafting and intraoperatively to more efficiently and precisely perform the craniofacial reconstruction. The average cost for surgical model production with the "in-office" 3D printer was $25 (the cost of the bioplastic materials used to create the model), and the average production time was 14 hours. Virtual surgical planning using "in-office" 3D printing is feasible and allows for a more cost-effective and less time-consuming method for creating surgical models and guides. By bringing 3D printing to the office setting, we hope to improve intraoperative efficiency, surgical precision, and overall cost for various types of craniofacial and reconstructive surgery.
Visual coding with a population of direction-selective neurons.
Fiscella, Michele; Franke, Felix; Farrow, Karl; Müller, Jan; Roska, Botond; da Silveira, Rava Azeredo; Hierlemann, Andreas
2015-10-01
The brain decodes the visual scene from the action potentials of ∼20 retinal ganglion cell types. Among the retinal ganglion cells, direction-selective ganglion cells (DSGCs) encode motion direction. Several studies have focused on the encoding or decoding of motion direction by recording multiunit activity, mainly in the visual cortex. In this study, we simultaneously recorded from all four types of ON-OFF DSGCs of the rabbit retina using a microelectronics-based high-density microelectrode array (HDMEA) and decoded their concerted activity using probabilistic and linear decoders. Furthermore, we investigated how the modification of stimulus parameters (velocity, size, angle of moving object) and the use of different tuning curve fits influenced decoding precision. Finally, we simulated ON-OFF DSGC activity, based on real data, in order to understand how tuning curve widths and the angular distribution of the cells' preferred directions influence decoding performance. We found that probabilistic decoding strategies outperformed, on average, linear methods and that decoding precision was robust to changes in stimulus parameters such as velocity. The removal of noise correlations among cells, by random shuffling trials, caused a drop in decoding precision. Moreover, we found that tuning curves are broad in order to minimize large errors at the expense of a higher average error, and that the retinal direction-selective system would not substantially benefit, on average, from having more than four types of ON-OFF DSGCs or from a perfect alignment of the cells' preferred directions. Copyright © 2015 the American Physiological Society.
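A linear decoder of the kind compared in this study can be sketched as a population vector: each cell votes for its preferred direction, weighted by its firing rate, and the decoded direction is the angle of the summed vector. The four preferred directions and the rates below are illustrative values, not recorded data:

```python
import math

def population_vector(rates, preferred_deg):
    """Linear (population-vector) decode: sum unit vectors along each cell's
    preferred direction, weighted by its firing rate, and return the angle
    of the resultant in degrees [0, 360)."""
    x = sum(r * math.cos(math.radians(p)) for r, p in zip(rates, preferred_deg))
    y = sum(r * math.sin(math.radians(p)) for r, p in zip(rates, preferred_deg))
    return math.degrees(math.atan2(y, x)) % 360

# Four DSGC types with roughly orthogonal preferred directions, as in rabbit retina
preferred = [0.0, 90.0, 180.0, 270.0]
rates = [10.0, 30.0, 2.0, 1.0]  # strongest response from the 90-degree cell
angle = population_vector(rates, preferred)
```

The probabilistic decoders that outperformed this linear scheme would instead evaluate the likelihood of the observed rates under each candidate direction's tuning curve and pick the maximum.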
NASA Technical Reports Server (NTRS)
Heard, Walter L., Jr.; Lake, Mark S.
1993-01-01
A procedure that enables astronauts in extravehicular activity (EVA) to perform efficient on-orbit assembly of large paraboloidal precision reflectors is presented. The procedure and associated hardware are verified in simulated 0g (neutral buoyancy) assembly tests of a 14 m diameter precision reflector mockup. The test article represents a precision reflector having a reflective surface segmented into 37 individual panels. The panels are supported on a doubly curved tetrahedral truss consisting of 315 struts. The entire truss and seven reflector panels were assembled in three hours and seven minutes by two pressure-suited test subjects. The average time to attach a panel was two minutes and three seconds. These efficient assembly times were achieved because all hardware and assembly procedures were designed to be compatible with EVA assembly capabilities.
Ultra high pressure liquid chromatography. Column permeability and changes of the eluent properties.
Gritti, Fabrice; Guiochon, Georges
2008-04-11
The behavior of four similar liquid chromatography columns (2.1 mm i.d. × 30, 50, 100, and 150 mm, all packed with fine particles, average dp ≈ 1.7 μm, of bridged ethylsiloxane/silica hybrid-C18, named BEH-C18) was studied over wide ranges of temperature and pressure. The pressure and temperature dependencies of the viscosity and the density of the eluent (pure acetonitrile) along the columns were also derived, using the column permeabilities and applying the Kozeny-Carman and heat-balance equations. The heat lost through the external surface area of the chromatographic column was derived directly from the wall temperature of the stainless-steel tube, measured with a precision of ±0.2 °C in still air and ±0.1 °C in the oven compartment. The variations of the density and viscosity of pure acetonitrile as functions of temperature and pressure were derived from empirical correlations based on precise experimental data acquired between 298 and 373 K and at pressures up to 1.5 kbar. The measurements were made with the Acquity UPLC chromatograph, which can deliver a maximum flow rate of 2 mL/min and apply a maximum column inlet pressure of 1038 bar. The average Kozeny-Carman permeability constant of the columns was 144 ± 3.5%. The temperature, and hence the viscosity and density profiles of the eluent along the column, deviate significantly from linear behavior under high-pressure gradients. For a 1000 bar pressure drop, we measured ΔT = 25-30 K, Δη/η ≈ 100%, and Δρ/ρ ≈ 10%. These results show that the radial temperature profiles are never fully developed within 1% for any of the columns, even under still-air conditions. This represents a practical advantage regarding the apparent column efficiency at high flow rates, since the impact of the differential analyte velocity between the column center and the column wall is not at its maximum. The interpretation of the peak profiles recorded in UPLC is discussed.
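The Kozeny-Carman relation used here links the pressure drop to the superficial velocity, particle diameter, and interstitial porosity. A sketch of Darcy's law in Kozeny-Carman form, solved for the permeability constant; all numerical inputs are illustrative round numbers of the right magnitude for this experiment, not the paper's measured values:

```python
def kozeny_carman_constant(dp, eps, delta_p, eta, u_sf, length):
    """Kozeny-Carman constant K from Darcy's law written as
        delta_p = K * eta * u_sf * L * (1 - eps)**2 / (dp**2 * eps**3),
    where u_sf is the superficial velocity and eps the interstitial porosity.
    All quantities in SI units."""
    return delta_p * dp**2 * eps**3 / (eta * u_sf * length * (1 - eps)**2)

# Illustrative inputs: 1.7 um particles, eps = 0.40, 150 mm column,
# acetonitrile viscosity ~0.37 mPa.s, ~10 mm/s superficial velocity
# (2 mL/min through a 2.1 mm bore), 1000 bar pressure drop.
K = kozeny_carman_constant(dp=1.7e-6, eps=0.40, delta_p=1.0e8,
                           eta=0.37e-3, u_sf=9.6e-3, length=0.15)
```

With round inputs the constant comes out on the order of 10², the same order as the 144 reported above; the exact value is sensitive to the porosity assumed.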
Zhou, L.; Chao, T.T.; Meier, A.L.
1984-01-01
An electrothermal atomic-absorption spectrophotometric method is described for the determination of total tin in geological materials, with use of a tungsten-impregnated graphite furnace. The sample is decomposed by fusion with lithium metaborate and the melt is dissolved in 10% hydrochloric acid. Tin is then extracted into trioctylphosphine oxide-methyl isobutyl ketone prior to atomization. Impregnation of the furnace with a sodium tungstate solution increases the sensitivity of the determination and improves the precision of the results. The limits of determination are 0.5-20 ppm of tin in the sample. Higher tin values can be determined by dilution of the extract. Replicate analyses of eighteen geological reference samples with diverse matrices gave relative standard deviations ranging from 2.0 to 10.8% with an average of 4.6%. Average tin values for reference samples were in general agreement with, but more precise than, those reported by others. Apparent recoveries of tin added to various samples ranged from 95 to 111% with an average of 102%. © 1984.
High Precision Prediction of Functional Sites in Protein Structures
Buturovic, Ljubomir; Wong, Mike; Tang, Grace W.; Altman, Russ B.; Petkovic, Dragutin
2014-01-01
We address the problem of assigning biological function to solved protein structures. Computational tools play a critical role in identifying potential active sites and informing screening decisions for further lab analysis. A critical parameter in the practical application of computational methods is the precision, or positive predictive value. Precision measures the level of confidence the user should have in a particular computed functional assignment. Low precision annotations lead to futile laboratory investigations and waste scarce research resources. In this paper we describe an advanced version of the protein function annotation system FEATURE, which achieved 99% precision and average recall of 95% across 20 representative functional sites. The system uses a Support Vector Machine classifier operating on the microenvironment of physicochemical features around an amino acid. We also compared performance of our method with state-of-the-art sequence-level annotator Pfam in terms of precision, recall and localization. To our knowledge, no other functional site annotator has been rigorously evaluated against these key criteria. The software and predictive models are incorporated into the WebFEATURE service at http://feature.stanford.edu/wf4.0-beta. PMID:24632601
Design and algorithm research of high precision airborne infrared touch screen
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan
2016-10-01
Infrared touch screens suffer from low precision, touch shaking, and a sharp drop in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, and the impeded state as 1. Then an oblique-scan method is used, in which the light of one emitting tube is received by five receiving tubes, and the impeded-state information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated as an arithmetic average. The extended-axis positioning algorithm retains high precision when an individual infrared tube fails, with only a slight effect on accuracy. Experimental results show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. The algorithm based on the extended axis thus offers high precision, little impact from the failure of an individual infrared tube, and ease of use.
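The arithmetic-average step of the extended-axis algorithm can be illustrated with a toy sketch: each beam whose state was recorded as 1 (impeded) contributes a coordinate, and the touch position is the mean of those contributions. The beam geometry below is hypothetical, not the paper's actual tube layout:

```python
def touch_position(blocked_beams):
    """Estimate the touch position as the arithmetic average of the midpoints
    of all impeded beams. Each beam is ((x_emit, y_emit), (x_recv, y_recv));
    a beam appears in the list only if its state was recorded as 1."""
    if not blocked_beams:
        return None
    xs = [(e[0] + r[0]) / 2 for e, r in blocked_beams]
    ys = [(e[1] + r[1]) / 2 for e, r in blocked_beams]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Toy oblique-scan data: three impeded beams crossing near a touch at (5, 3)
blocked = [((4.0, 0.0), (6.0, 6.0)),
           ((5.0, 0.0), (5.0, 6.0)),
           ((6.0, 0.0), (4.0, 6.0))]
x, y = touch_position(blocked)
```

Because several oblique beams vote for each touch, losing one tube removes only one vote, which is why the averaging step degrades gracefully when an individual tube fails.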
47 CFR 64.1801 - Geographic rate averaging and rate integration.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Geographic rate averaging and rate integration. 64.1801 Section 64.1801 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON... Rate Integration § 64.1801 Geographic rate averaging and rate integration. (a) The rates charged by...
A precise measurement of the $B^0$ meson oscillation frequency
Aaij, R.; Abellán Beteta, C.; Adeva, B.; ...
2016-07-21
The oscillation frequency, Δm_d, of B⁰ mesons is measured using semileptonic decays with a D⁻ or D*⁻ meson in the final state. The data sample corresponds to 3.0 fb⁻¹ of pp collisions, collected by the LHCb experiment at centre-of-mass energies √s = 7 and 8 TeV. A combination of the two decay modes gives Δm_d = (505.0 ± 2.1 ± 1.0) ns⁻¹, where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
Double resonance calibration of g factor standards: Carbon fibers as a high precision standard
NASA Astrophysics Data System (ADS)
Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar
2018-04-01
The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow, and perfectly Lorentzian-shaped ESR line and a g factor slightly higher than g_free, with g = 2.002644 = g_free · (1 + 162 ppm) and a relative uncertainty of 15 ppm. This precisely known g factor and their inertness qualify them as a high-precision g factor standard for general purposes. The double resonance experiment for calibration is applicable to other potential standards with a hyperfine interaction averaged by a process with very short correlation time.
Neutron Lifetime and Axial Coupling Connection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, Andrzej; Marciano, William J.; Sirlin, Alberto
2018-05-16
Here, experimental studies of neutron decay, n → p e ν̄, exhibit two anomalies. The first is an 8.6(2.1) s, roughly 4σ difference between the average beam-measured neutron lifetime, τ_n^beam = 888.0(2.0) s, and the more precise average trapped ultracold neutron determination, τ_n^trap = 879.4(6) s. The second is a 5σ difference between the pre-2002 average axial coupling, g_A, as measured in neutron decay asymmetries, g_A^pre2002 = 1.2637(21), and the more recent, post-2002, average g_A^post2002 = 1.2755(11), where, following the UCNA Collaboration division, experiments are classified by the date of their most recent result. In this Letter, we correlate those τ_n and g_A values using a (slightly) updated relation, τ_n(1 + 3g_A²) = 5172.0(1.1) s. Consistency with that relation and better precision suggest τ_n^favored = 879.4(6) s and g_A^favored = 1.2755(11) as preferred values for those parameters. Comparisons of g_A^favored with recent lattice QCD and muonic hydrogen capture results are made. A general constraint on exotic neutron decay branching ratios, <0.27%, is discussed and applied to a recently proposed solution to the neutron lifetime puzzle.
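The correlation relation quoted in the abstract can be checked by direct arithmetic: inserting the favored axial coupling into τ_n(1 + 3g_A²) = 5172.0(1.1) s reproduces the favored ultracold-neutron lifetime:

```python
def neutron_lifetime(g_a, c=5172.0):
    """Neutron lifetime from the correlation tau_n * (1 + 3*g_A**2) = 5172.0 s."""
    return c / (1.0 + 3.0 * g_a**2)

tau = neutron_lifetime(1.2755)  # consistent with the trapped average 879.4(6) s
```

The same relation applied to the beam average of 888.0 s would require a noticeably smaller g_A, which is the tension the Letter exploits to select the favored values.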
NASA Astrophysics Data System (ADS)
Song, YoungJae; Sepulveda, Francisco
2017-02-01
Objective. Self-paced EEG-based BCIs (SP-BCIs) have traditionally been avoided due to two sources of uncertainty: (1) precisely when an intentional command is sent by the brain, i.e., the command onset detection problem, and (2) how different the intentional command is when compared to non-specific (or idle) states. Performance evaluation is also a problem and there are no suitable standard metrics available. In this paper we attempted to tackle these issues. Approach. Self-paced covert sound-production cognitive tasks (i.e., high-pitch and siren-like sounds) were used to distinguish between intentional commands (IC) and idle states. The IC states were chosen for their ease of execution and negligible overlap with common cognitive states. Band power and a digital wavelet transform were used for feature extraction, and the Davies-Bouldin index was used for feature selection. Classification was performed using linear discriminant analysis. Main results. Performance was evaluated under offline and simulated-online conditions. For the latter, a performance score called the true-false-positive (TFP) rate, ranging from 0 (poor) to 100 (perfect), was created to take into account both classification performance and onset timing errors. Averaging the results from the best-performing IC task for all seven participants, a 77.7% true-positive (TP) rate was achieved in offline testing. For simulated-online analysis the best IC average TFP score was 76.67% (87.61% TP rate, 4.05% false-positive rate). Significance. Results were promising when compared to previous IC onset detection studies using motor imagery, in which the best TP rates were reported as 72.0% and 79.7%, and which, crucially, did not take timing errors into account. Moreover, based on our literature review, there is no previous covert sound-production onset detection system for SP-BCIs. Results showed that the proposed onset detection technique and TFP performance metric have good potential for use in SP-BCIs.
Decadal-scale rates of reef erosion following El Niño-related mass coral mortality.
Roff, George; Zhao, Jian-Xin; Mumby, Peter J
2015-12-01
As the frequency and intensity of coral mortality events increase under climate change, understanding how declines in coral cover may affect the bioerosion of reef frameworks is of increasing importance. Here, we explore decadal-scale rates of bioerosion of the framework building coral Orbicella annularis by grazing parrotfish following the 1997/1998 El Niño-related mass mortality event at Long Cay, Belize. Using high-precision U-Th dating and CT scan analysis, we quantified in situ rates of external bioerosion over a 13-year period (1998-2011). Based upon the error-weighted average U-Th age of dead O. annularis skeletons, we estimate the average external bioerosion between 1998 and 2011 as 0.92 ± 0.55 cm depth. Empirical observations of herbivore foraging, and a nonlinear numerical response of parrotfish to an increase in food availability, were used to create a model of external bioerosion at Long Cay. Model estimates of external bioerosion were in close agreement with U-Th estimates (0.85 ± 0.09 cm). The model was then used to quantify how rates of external bioerosion changed across a gradient of coral mortality (i.e., from few corals experiencing mortality following coral bleaching to complete mortality). Our results indicate that external bioerosion is remarkably robust to declines in coral cover, with no significant relationship predicted between the rate of external bioerosion and the proportion of O. annularis that died in the 1998 bleaching event. The outcome was robust because the reduction in grazing intensity that follows coral mortality was compensated for by a positive numerical response of parrotfish to an increase in food availability. Our model estimates further indicate that for an O. annularis-dominated reef to maintain a positive state of reef accretion, a necessity for sustained ecosystem function, live cover of O. annularis must not drop below a ~5-10% threshold of cover. © 2015 John Wiley & Sons Ltd.
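The "error-weighted average" age is a standard inverse-variance weighted mean; a sketch with invented U-Th dates and errors (not the study's data):

```python
# Inverse-variance weighted mean, as used for the error-weighted average
# U-Th age of dead O. annularis skeletons. Values below are illustrative.
ages = [1998.2, 1998.9, 1997.6]     # hypothetical U-Th dates (years AD)
errs = [0.8, 0.5, 1.1]              # hypothetical 1-sigma errors (years)

weights = [1.0 / e**2 for e in errs]
wmean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
werr = (1.0 / sum(weights)) ** 0.5  # uncertainty of the weighted mean
print(f"{wmean:.2f} +/- {werr:.2f}")
```

The weighted mean is pulled toward the most precise measurement, and its uncertainty is smaller than any individual error.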
Plusquellec, P; Muckle, G; Dewailly, E; Ayotte, P; Jacobson, S W; Jacobson, J L
2007-01-01
The aim of this study was to investigate the association between prenatal exposure to lead (Pb) and several aspects of behavioral function during infancy through examiner ratings and behavioral coding of video recordings. The sample consisted of 169 11-month-old Inuit infants from Arctic Quebec. Umbilical cord and maternal blood samples were used to document prenatal exposure to Pb. Average blood Pb levels were 4.6 μg/dL and 5.9 μg/dL in cord and maternal samples, respectively. The Behavior Rating Scales (BRS) from the Bayley Scales of Infant Development (BSID-II) were used to assess behavior. Attention was assessed through the BRS and behavioral coding of video recordings taken during the administration of the BSID-II. Whereas the examiner ratings of behaviors detected very few associations with prenatal Pb exposure, cord blood Pb concentrations were significantly related to the direct observational measures of infant attention, after adjustment for confounding variables. These data provide evidence that increasing the specificity and the precision of behavioral assessment has considerable potential for improving our ability to detect low-to-moderate associations between neurotoxicants, such as Pb, and infant behavior.
Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J
2015-01-01
Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
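The precision effect of informative priors can be illustrated with a minimal conjugate normal-normal sketch; the numbers are invented and the study's actual mortality models were more complex:

```python
# Normal prior x normal likelihood -> normal posterior. An informative
# prior tightens the posterior (better precision) without necessarily
# shifting its centre far. Numbers are illustrative only.
def posterior(prior_mean, prior_var, data_mean, data_var):
    """Conjugate update: precision-weighted combination of prior and data."""
    w_prior, w_data = 1.0 / prior_var, 1.0 / data_var
    var = 1.0 / (w_prior + w_data)
    mean = var * (w_prior * prior_mean + w_data * data_mean)
    return mean, var

data_mean, data_var = 0.05, 0.01**2                    # observed mortality rate
m_vague, v_vague = posterior(0.0, 100.0, data_mean, data_var)
m_inform, v_inform = posterior(0.06, 0.02**2, data_mean, data_var)
print(v_inform < v_vague)   # informative prior -> smaller posterior variance
```

The informative prior always reduces posterior variance; whether accuracy improves depends on how close the prior mean is to the truth, which is exactly the trade-off the abstract describes.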
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve errors (parametric and structural) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast markedly depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic from non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
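The propagation idea can be sketched with a simple Monte Carlo over a power-law rating curve Q = a(h − b)^c; parameters and error magnitudes are invented, and the paper's Bayesian treatment of rating-curve parametric and structural errors is omitted:

```python
import random

# Monte Carlo sketch of propagating stage errors through a power-law
# rating curve Q = a * (h - b)**c. All numbers are hypothetical.
random.seed(1)
a, b, c = 20.0, 0.10, 1.6       # hypothetical rating-curve parameters
h_obs = 1.50                    # observed stage (m)
sys_sd = 0.01                   # systematic gauge-calibration error sd (m)
nonsys_sd = 0.005               # non-systematic error sd (m)

flows = []
for _ in range(10000):
    # both error components drawn per realization here, for simplicity
    h = h_obs + random.gauss(0.0, sys_sd) + random.gauss(0.0, nonsys_sd)
    flows.append(a * (h - b) ** c)

q_mean = sum(flows) / len(flows)
q_sd = (sum((q - q_mean) ** 2 for q in flows) / len(flows)) ** 0.5
print(f"Q = {q_mean:.1f} +/- {q_sd:.1f} m3/s")
```

In a real record the systematic component would be drawn once per gauging series rather than per sample, which is why the paper stresses discriminating the two error types for long-term averages.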
Core-shell-structured nanothermites synthesized by atomic layer deposition
NASA Astrophysics Data System (ADS)
Qin, Lijun; Gong, Ting; Hao, Haixia; Wang, Keyong; Feng, Hao
2013-12-01
Thermite materials feature highly exothermic solid-state redox reactions. However, the energy release rates of traditional thermite mixtures are limited by the reactant diffusion velocities. In this work, atomic layer deposition (ALD) is utilized to synthesize thermite materials with greatly enhanced reaction rates. By depositing certain types of metal oxides (oxidizers) onto a commercial Al nanopowder, core-shell-structured nanothermites can be produced. The average film deposition rate on the Al nanopowder is 0.17 nm/cycle for ZnO and 0.031 nm/cycle for SnO2. The thickness of the oxidizer layer can be precisely controlled by adjusting the ALD cycle number. The compositions, morphologies, and structures of the ALD nanothermites are characterized by X-ray photoelectron spectroscopy, scanning electron microscopy, and high-resolution transmission electron microscopy. The characterization results reveal nearly perfect coverage of the Al nanoparticles by uniform ALD oxidizer layers and confirm the formation of core-shell nanoparticles. Combustion properties of the nanothermites are probed by a laser ignition technique. Reactions of the core-shell-structured nanothermites are several times faster than those of the simple mixture of nanopowders. The promoted reaction rate is mostly attributed to the uniform distribution of reactants on the nanometer scale. These core-shell-structured nanothermites provide a potential pathway to control and enhance thermite reactions.
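Because growth per ALD cycle is fixed, the cycle count needed for a target shell thickness follows directly from the quoted rates; a back-of-envelope sketch:

```python
import math

# Cycles needed for a target oxidizer shell thickness, using the
# per-cycle growth rates quoted in the abstract. Target is illustrative.
rates = {"ZnO": 0.17, "SnO2": 0.031}  # nm per ALD cycle

def cycles_for(target_nm, oxide):
    """Smallest whole number of ALD cycles reaching target_nm of oxide."""
    return math.ceil(target_nm / rates[oxide])

print(cycles_for(10.0, "ZnO"))   # 59 cycles for a 10 nm ZnO shell
print(cycles_for(10.0, "SnO2"))  # 323 cycles for a 10 nm SnO2 shell
```

The slower SnO2 chemistry needs roughly five times more cycles for the same thickness, which is the practical cost of its finer thickness control.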
Precision gravity studies at Cerro Prieto: a progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grannell, R.B.; Kroll, R.C.; Wyman, R.M.
A third and fourth year of precision gravity data collection and reduction have now been completed at the Cerro Prieto geothermal field. In summary, 66 permanently monumented stations were occupied between December and April of 1979 to 1980 and of 1980 to 1981 by a LaCoste and Romberg gravity meter (G300) at least twice, with a minimum of four replicate values obtained each time. Station 20 alternate, a stable base located on Cerro Prieto volcano, was used as the reference base for the third year, and all the stations were tied to this base using four- to five-hour loops. The field data were reduced to observed gravity values by (1) multiplication with the appropriate calibration factor; (2) removal of calculated tidal effects; (3) calculation of average values at each station; and (4) linear removal of accumulated instrumental drift remaining after the first three reductions. Following the reduction of values and calculation of gravity differences between individual stations and the base stations, standard deviations were calculated for the averaged occupation values (two to three per station). In addition, pooled variance calculations were carried out to estimate precision for the surveys as a whole.
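The four reduction steps can be sketched as follows, with invented readings, calibration factor, tide corrections, and drift rate (the real reduction estimated drift from base-station loop closures):

```python
# Sketch of the reduction chain: (1) scale readings by the meter
# calibration factor, (2) subtract the computed Earth-tide effect,
# (3) average replicate values per station, (4) remove linear drift.
calibration = 1.00052                       # hypothetical meter factor
readings = [(0.0, 3101.20), (2.5, 3101.26), (5.0, 3101.31)]  # (hours, dial units)
tide = {0.0: 0.04, 2.5: 0.06, 5.0: 0.03}    # computed tidal effect (mGal)
drift_rate = 0.004                          # hypothetical residual drift, mGal/hour

corrected = [(t, r * calibration - tide[t]) for t, r in readings]  # steps 1-2
drifted = [g - drift_rate * t for t, g in corrected]               # step 4
g_station = sum(drifted) / len(drifted)                            # step 3
print(f"{g_station:.3f} mGal")
```

Drift is removed before averaging here so that replicates taken hours apart are comparable; the spread of the drift-corrected replicates is what feeds the survey's standard deviation and pooled variance estimates.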
An ontology-driven, case-based clinical decision support model for removable partial denture design
NASA Astrophysics Data System (ADS)
Chen, Qingxiao; Wu, Ji; Li, Shusen; Lyu, Peijun; Wang, Yong; Li, Miao
2016-06-01
We present initial work toward developing a clinical decision support model for the specific design of removable partial dentures (RPDs) in dentistry. We developed an ontological paradigm to represent knowledge of a patient's oral conditions and denture component parts. During the case-based reasoning process, a cosine similarity algorithm was applied to calculate similarity values between input patients and standard ontology cases. A group of designs from the most similar cases were output as the final results. To evaluate this model, the output designs of RPDs for 104 randomly selected patients were compared with those selected by professionals. An area under the curve of the receiver operating characteristic (AUC-ROC) was created by plotting true-positive rates against false-positive rates at various threshold settings. The precision at position 5 of the retrieved cases was 0.67, and at the top of the curve it was 0.96, both of which are very high. The mean average precision (MAP) was 0.61 and the normalized discounted cumulative gain (NDCG) was 0.74, both of which confirmed the efficient performance of our model. All the metrics demonstrated the efficiency of our model. This methodology merits further research and development to match clinical applications for designing RPDs. This paper is organized as follows. After the introduction and description of the basis for the paper, the evaluation and results are presented in Section 2. Section 3 provides a discussion of the methodology and results. Section 4 describes the details of the ontology, similarity algorithm, and application.
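The retrieval step rests on cosine similarity between feature vectors; a minimal sketch with an invented binary encoding of oral findings:

```python
import math

# Cosine-similarity sketch of the case-retrieval step: a patient's oral
# conditions encoded as a feature vector, compared against stored cases.
# The feature encoding and case vectors are invented for illustration.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

patient = [1, 0, 1, 1, 0]    # hypothetical binary oral findings
cases = {"case_A": [1, 0, 1, 0, 0], "case_B": [0, 1, 0, 1, 1]}
ranked = sorted(cases, key=lambda k: cosine(patient, cases[k]), reverse=True)
print(ranked[0])             # most similar stored case
```

The designs attached to the top-ranked cases are then returned, which is what metrics like precision-at-5 and MAP evaluate.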
NASA Astrophysics Data System (ADS)
Banse, Karl; Yong, Marina
1990-05-01
As a proxy for satellite (coastal zone color scanner) observations and concurrent measurements of primary production rates, data from 138 stations occupied seasonally during 1967-1968 in the offshore, eastern tropical Pacific were analyzed in terms of six temporal groups and four current regimes. In multiple linear regressions on column production P_t, we found that simulated satellite pigment is generally weakly correlated, and sometimes uncorrelated, with P_t, and that incident irradiance, sea surface temperature, nitrate, transparency, and depths of mixed layer or nitracline assume little or no importance. After a proxy for the light-saturated chlorophyll-specific photosynthetic rate p_max is added, the coefficient of determination (r²) ranges from 0.55 to 0.91 (median of 0.85) for the 10 cases. In stepwise multiple linear regressions the p_max proxy is the best predictor for P_t. P_t can be calculated fairly accurately (on average, within 10-20%) from satellite pigment, the 10% light depth, and station values (but not from regional or seasonal means) of the p_max proxy; for individual stations the precision is 35-84% (median of 57% for the 10 groupings; p = 0.05) of the means of observed values. At present, p_max cannot be estimated from space; in the data set it is not even highly correlated with irradiance, temperature, and nitrate at depth of occurrence. Therefore extant models for calculating P_t in this tropical ocean have inherent limits of accuracy as well as of precision owing to ignorance about a physiological parameter.
Su, Kang-Yi; Kao, Jau-Tsuen; Ho, Bing-Ching; Chen, Hsuan-Yu; Chang, Gee-Cheng; Ho, Chao-Chi; Yu, Sung-Liang
2016-01-01
Molecular diagnostics in cancer pharmacogenomics is indispensable for making targeted therapy decisions, especially in lung cancer. For routine clinical practice, a flexible testing platform and an implemented quality system are important for reducing failure rate and turnaround time (TAT). We established and validated multiplex EGFR testing by MALDI-TOF MS according to the ISO 15189 regulation and CLIA recommendations in Taiwan. In total, 8,147 cases from Aug 2011 to Jul 2015 were assayed and statistical characteristics are reported. The intra-run precision of EGFR mutation frequency was CV 2.15% (L858R) and 2.77% (T790M); the inter-run precision was CV 3.50% (L858R) and 2.84% (T790M). Accuracy tests with consensus reference biomaterials showed 100% concordance with the datasheet (public database). Both analytical sensitivity and specificity were 100%, taking Sanger sequencing as the gold-standard comparison method. The EGFR mutation frequency of peripheral blood mononuclear cells used for reference range determination was 0.002 ± 0.016% (95% CI: 0.000–0.036) (L858R) and 0.292 ± 0.289% (95% CI: 0.000–0.871) (T790M). The average TAT was 4.5 working days and the failure rate was less than 0.1%. In conclusion, this study provides a comprehensive report of lung cancer EGFR mutation detection from platform establishment and method validation to routine clinical practice. It may serve as a reference model for molecular diagnostics in cancer pharmacogenomics. PMID:27480787
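The quoted precision figures are coefficients of variation (CV = sd/mean); a sketch with invented replicate mutation frequencies:

```python
# Coefficient of variation, the precision metric quoted above.
# Replicate frequencies are invented for illustration.
freqs = [24.8, 25.3, 25.9, 24.6, 25.4]   # hypothetical L858R mutation %
mean = sum(freqs) / len(freqs)
sd = (sum((x - mean) ** 2 for x in freqs) / (len(freqs) - 1)) ** 0.5
cv_percent = 100.0 * sd / mean
print(f"CV = {cv_percent:.2f}%")
```

A CV of a few percent, as computed here, is the scale of the intra- and inter-run precision reported in the abstract.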
On the precision of automated activation time estimation
NASA Technical Reports Server (NTRS)
Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.
1988-01-01
We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt_max and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for use of digitization rates below the Nyquist rate without significant loss of precision.
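The two techniques the study combines, template matching by cross-correlation and sin(x)/x (sinc) interpolation for sub-sample refinement, can be sketched on a synthetic signal:

```python
import math

# Sketch of template matching plus sin(x)/x interpolation for sub-sample
# fiducial-point estimation. Signals are synthetic, not real electrograms.
def xcorr(signal, template):
    n = len(signal) - len(template) + 1
    return [sum(s * t for s, t in zip(signal[i:], template)) for i in range(n)]

def sinc_interp(samples, t):
    """Band-limited reconstruction of `samples` at fractional index t."""
    return sum(x * (1.0 if t == n else math.sin(math.pi * (t - n)) / (math.pi * (t - n)))
               for n, x in enumerate(samples))

template = [0.0, 1.0, 0.0]
signal = [0.0] * 5 + [0.2, 1.0, 0.3] + [0.0] * 5
c = xcorr(signal, template)
i0 = max(range(len(c)), key=c.__getitem__)        # coarse (integer) fiducial
# refine on a fine grid around the peak using sinc interpolation:
best = max((i0 - 1 + k / 100.0 for k in range(201)),
           key=lambda t: sinc_interp(c, t))
print(i0, round(best, 2))
```

The interpolated peak lands slightly right of the integer maximum because the synthetic deflection is asymmetric, illustrating how sub-sample timing is recovered from coarsely sampled data.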
[High Precision Identification of Igneous Rock Lithology by Laser Induced Breakdown Spectroscopy].
Wang, Chao; Zhang, Wei-gang; Yan, Zhi-quan
2015-09-01
In the field of petroleum exploration, lithology identification of fine cuttings samples, especially high-precision identification of igneous rocks with similar properties, has become a significant geological problem. To solve this problem, a new method is proposed based on element analysis by Laser-Induced Breakdown Spectroscopy (LIBS) and the Total Alkali versus Silica (TAS) diagram. Using an independent LIBS system, factors influencing the spectral signal, such as pulse energy, acquisition time delay, spectrum acquisition method, and pre-ablation, were investigated systematically through controlled experiments. The best analysis conditions for igneous rock were determined: pulse energy of 50 mJ, acquisition time delay of 2 μs, and the analysis result taken as the average of 20 different points on the sample's surface; pre-ablation was shown experimentally to be unsuitable for igneous rock samples. The repeatability of the spectral data was improved effectively. Characteristic lines of 7 elements (Na, Mg, Al, Si, K, Ca, Fe) commonly used for lithology identification of igneous rocks were determined, and igneous rock samples of different lithologies were analyzed and compared. Calibration curves of Na, K, and Si were generated using national standard series of rock samples, and all linear correlation coefficients were greater than 0.9. The accuracy of the quantitative analysis was verified with national standard samples. The element content of igneous rock was analyzed quantitatively via the calibration curves, and its lithology was identified accurately by the TAS diagram method, with an accuracy of 90.7%. The study indicates that LIBS can effectively achieve high-precision identification of igneous rock lithology.
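Quantification via a calibration curve reduces to fitting and inverting a least-squares line; a sketch with invented standard concentrations and line intensities:

```python
# Linear calibration-curve sketch (line intensity vs concentration), as
# built from standard rock samples. All numbers are invented.
standards = [(0.5, 1200.0), (1.5, 3550.0), (3.0, 7100.0), (5.0, 11800.0)]  # (wt%, intensity)

n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(i for _, i in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * i for c, i in standards)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)     # least-squares fit
intercept = (sy - slope * sx) / n

def concentration(intensity):
    """Invert the calibration line to quantify an unknown sample."""
    return (intensity - intercept) / slope

print(round(concentration(5000.0), 2))
```

With concentrations for Na, K, and Si in hand, the sample can then be placed on the TAS diagram for classification.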
The MIGHTI Wind Retrieval Algorithm: Description and Verification
NASA Astrophysics Data System (ADS)
Harding, Brian J.; Makela, Jonathan J.; Englert, Christoph R.; Marr, Kenneth D.; Harlander, John M.; England, Scott L.; Immel, Thomas J.
2017-10-01
We present an algorithm to retrieve thermospheric wind profiles from measurements by the Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) instrument on NASA's Ionospheric Connection Explorer (ICON) mission. MIGHTI measures interferometric limb images of the green and red atomic oxygen emissions at 557.7 nm and 630.0 nm, spanning 90-300 km. The Doppler shift of these emissions represents a remote measurement of the wind at the tangent point of the line of sight. Here we describe the algorithm which uses these images to retrieve altitude profiles of the line-of-sight wind. By combining the measurements from two MIGHTI sensors with perpendicular lines of sight, both components of the vector horizontal wind are retrieved. A comprehensive truth model simulation that is based on TIME-GCM winds and various airglow models is used to determine the accuracy and precision of the MIGHTI data product. Accuracy is limited primarily by spherical asymmetry of the atmosphere over the spatial scale of the limb observation, a fundamental limitation of space-based wind measurements. For 80% of the retrieved wind samples, the accuracy is found to be better than 5.8 m/s (green) and 3.5 m/s (red). As expected, significant errors are found near the day/night boundary and occasionally near the equatorial ionization anomaly, due to significant variations of wind and emission rate along the line of sight. The precision calculation includes pointing uncertainty and shot, read, and dark noise. For average solar minimum conditions, the expected precision meets requirements, ranging from 1.2 to 4.7 m/s.
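The underlying measurement principle is the Doppler relation v = c·Δλ/λ₀; a sketch showing how small the wavelength shift is for a typical thermospheric wind (the 50 m/s wind value is assumed for illustration):

```python
# Doppler-shift sketch: wavelength shift of the 630.0 nm red oxygen line
# implied by an assumed line-of-sight wind. v = c * d_lambda / lambda_0.
c = 299_792_458.0         # speed of light, m/s
lam0 = 630.0e-9           # rest wavelength of the red line, m
v_los = 50.0              # assumed line-of-sight wind, m/s
d_lam = lam0 * v_los / c  # implied Doppler shift, m
print(f"shift = {d_lam * 1e15:.2f} fm")  # about 1e-4 nm for a 50 m/s wind
```

A shift of order 10⁻⁴ nm is far below the resolution of a conventional spectrograph, which is why an interferometric technique like MIGHTI's is needed to reach m/s-level wind precision.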
Lividini, Keith; Fiedler, John L; Bermudez, Odilia I
2013-12-01
Observed-Weighed Food Record Surveys (OWFR) are regarded as the most precise dietary assessment methodology, despite their recognized shortcomings, which include limited availability, high cost, small samples with uncertain external validity that rarely include all household members, Hawthorne effects, and using only 1 or 2 days to identify "usual intake." Although Household Consumption and Expenditures Surveys (HCES) also have significant limitations, they are increasingly being used to inform nutrition policy. The objective was to investigate differences in fortification simulations based on OWFR and HCES from Bangladesh. The pre- and postfortification nutrient intake levels from the two surveys were compared. The total population-based rank orderings of oil, wheat flour, and sugar coverage were identical for the two surveys. OWFR found differences in women's and children's coverage rates and average quantities consumed for all three foods that were not detected by HCES. Guided by the Food Fortification Formulator, we found that these differences did not result in differences in recommended fortification levels. Differences were found, however, in estimated impacts: although both surveys found that oil would be effective in reducing the prevalence of inadequate vitamin A intake among both subpopulations, only OWFR also found that sugar and wheat flour fortification would significantly reduce inadequate vitamin A intake among children. Despite the less precise measure of food consumption from HCES, the two surveys provide similar guidance for designing a fortification program. The external validity of these findings is limited. With relatively minor modifications, the precision of HCES in dietary assessment and the use of HCES in fortification programming could be strengthened.
Using a normalization 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used to replace the manual ROIs in objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the slice-by-slice manual drawing of ROIs from structural medical images required in the traditional procedure. That is, the method not only provides precise analysis results, but also improves the processing rate for large volumes of medical images in clinical use.
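The mutual-information similarity that drives the registration can be sketched from a joint histogram; the tiny integer "images" below stand in for real SPECT/MR data:

```python
import math

# Sketch of the mutual-information similarity used for rigid registration
# of functional (SPECT/PET) to structural (MR) images. The intensity
# lists are toy stand-ins for flattened image data.
def mutual_information(img_a, img_b):
    """MI from the joint histogram of two equal-length intensity lists."""
    n = len(img_a)
    joint, pa, pb = {}, {}, {}
    for a, b in zip(img_a, img_b):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        pa[a] = pa.get(a, 0) + 1
        pb[b] = pb.get(b, 0) + 1
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * math.log(p_ab / ((pa[a] / n) * (pb[b] / n)))
    return mi

aligned = mutual_information([0, 0, 1, 1, 2, 2], [5, 5, 7, 7, 9, 9])
shuffled = mutual_information([0, 0, 1, 1, 2, 2], [9, 5, 7, 9, 5, 7])
print(aligned > shuffled)   # alignment maximizes mutual information
```

Registration searches over rigid transforms for the pose that maximizes this score, which needs no assumption that the two modalities share intensity values, only that their intensities co-occur predictably when aligned.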
Precision half-life measurement of 17F
NASA Astrophysics Data System (ADS)
Brodeur, M.; Nicoloff, C.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Gupta, Y. K.; Hall, M. R.; Hall, O.; Hu, J.; Kelly, J. M.; Kolata, J. J.; Long, J.; O'Malley, P.; Schultz, B. E.
2016-02-01
Background: The precise determination of ft values for superallowed mixed transitions between mirror nuclides is gaining attention, as such transitions could provide an avenue to test the theoretical corrections used to extract the V_ud matrix element from superallowed pure Fermi transitions. The 17F decay is particularly interesting as it proceeds entirely to the ground state of 17O, removing the need for branching ratio measurements. The dominant uncertainty on the ft value of the 17F mirror transition stems from a number of conflicting half-life measurements. Purpose: A precision half-life measurement of 17F was performed and compared to previous results. Methods: The lifetime was determined by β counting of 17F implanted in a Ta foil that was removed from the beam for counting. The 17F beam was produced by a transfer reaction and separated by the TwinSol facility of the Nuclear Science Laboratory of the University of Notre Dame. Results: The measured value of t_1/2^new = 64.402(42) s is in agreement with several past measurements and represents one of the most precise measurements to date. In anticipation of future measurements of the correlation parameters for the decay, and using the new world average t_1/2^world = 64.398(61) s, we present a new estimate of the mixing ratio ρ for the mixed transition as well as of the correlation parameters, assuming Standard Model validity. Conclusions: The relative uncertainty on the new world average for the half-life is dominated by the large χ² = 31 of the existing measurements. More precise measurements with different systematics are needed to remedy the situation.
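Combining conflicting measurements into a world average is commonly done with an inverse-variance weighted mean whose uncertainty is inflated by a PDG-style scale factor when χ² is large; a sketch with an invented measurement list (only the 64.402(42) s value comes from the abstract):

```python
import math

# World-average sketch: inverse-variance weighted mean with a PDG-style
# scale factor sqrt(chi2 / (N - 1)) inflating the uncertainty when the
# measurements disagree. Only the first entry is from the abstract;
# the other values are hypothetical.
meas = [(64.402, 0.042), (64.31, 0.09), (64.80, 0.30), (64.13, 0.16)]  # (t1/2 s, err)

w = [1.0 / e**2 for _, e in meas]
mean = sum(wi * t for wi, (t, _) in zip(w, meas)) / sum(w)
err = (1.0 / sum(w)) ** 0.5
chi2 = sum(((t - mean) / e) ** 2 for t, e in meas)
scale = max(1.0, math.sqrt(chi2 / (len(meas) - 1)))
print(f"t1/2 = {mean:.3f} +/- {err * scale:.3f} s")
```

This is how a large χ², like the χ² = 31 quoted above, directly degrades the world-average precision even when individual measurements are precise.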
Young, Chao-Wang; Hsieh, Jia-Ling; Ay, Chyung
2012-01-01
This study adopted a microelectromechanical fabrication process to design a chip integrating electroosmotic flow and dielectrophoresis force for single-cell lysis. Human histiocytic lymphoma U937 cells were driven rapidly by electroosmotic flow and precisely moved to a specific area for cell lysis. By varying the frequency of the AC power, a 15 V AC, 1 MHz configuration achieved 100% cell lysis at the specific area. The integrated chip could successfully manipulate single cells to a specific position and lyse them. The overall success rate of cell tracking, positioning, and cell lysis was 80%. The average cell driving speed was 17.74 μm/s. This technique will be developed for DNA extraction in biomolecular detection. It can simplify pre-treatment procedures for biotechnological analysis of samples. PMID:22736957
The Physical Conditions of a Lensed Star-Forming Galaxy at Z=1.7
NASA Technical Reports Server (NTRS)
Rigby, Jane; Wuyts, E.; Gladders, M.; Sharon, K.; Becker, G.
2011-01-01
We report rest-frame optical Keck/NIRSPEC spectroscopy of the brightest lensed galaxy yet discovered, RCSGA 032727-132609 at z = 1.7037. From precise measurements of the nebular lines, we infer a number of physical properties: redshift, extinction, star formation rate, ionization parameter, electron density, electron temperature, oxygen abundance, and N/O, Ne/O, and Ar/O abundance ratios. The limit on [O III] 4363 Å tightly constrains the oxygen abundance via the "direct" or Te method, for the first time in an average-metallicity galaxy at z ≈ 2. We compare this result to several standard "bright-line" O abundance diagnostics, thereby testing these empirically calibrated diagnostics in situ. Finally, we explore the positions of lensed and unlensed galaxies in standard diagnostic diagrams, to probe the diversity of ionization conditions and mass-metallicity ratios at z = 2.
Are patient specific meshes required for EIT head imaging?
Jehl, Markus; Aristovich, Kirill; Faulkner, Mayo; Holder, David
2016-06-01
Head imaging with electrical impedance tomography (EIT) is usually done with time-differential measurements, to reduce time-invariant modelling errors. Previous research suggested that more accurate head models improved image quality, but no thorough analysis has been done on the required accuracy. We propose a novel pipeline for creation of precise head meshes from magnetic resonance imaging and computed tomography scans, which was applied to four different heads. Voltages were simulated on all four heads for perturbations of different magnitude, haemorrhage and ischaemia, in five different positions and for three levels of instrumentation noise. Statistical analysis showed that reconstructions on the correct mesh were on average 25% better than on the other meshes. However, the stroke detection rates were not improved. We conclude that a generic head mesh is sufficient for monitoring patients for secondary strokes following head trauma.
Optimal design and experimental analyses of a new micro-vibration control payload-platform
NASA Astrophysics Data System (ADS)
Sun, Xiaoqing; Yang, Bintang; Zhao, Long; Sun, Xiaofen
2016-07-01
This paper presents a new payload platform for precision devices that is capable of isolating complex space micro-vibration in the low-frequency range below 5 Hz. The novel payload platform, equipped with smart-material actuators, is investigated and designed through an optimization strategy based on the minimum energy loss rate, with the aim of achieving high drive efficiency and reducing the effect of magnetic-circuit nonlinearity. The dynamic model of the driving element is then established using the Lagrange method, and the performance of the designed payload platform is further discussed through the combination of a controlled auto-regressive moving average (CARMA) model with a modified generalized predictive control (MGPC) algorithm. Finally, an experimental prototype is developed and tested. The experimental results demonstrate that the payload platform has impressive potential for micro-vibration isolation.
Precisely and Accurately Inferring Single-Molecule Rate Constants
Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.
2017-01-01
The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state-versus-time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal-versus-time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restricts the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories. PMID:27793280
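As a minimal illustration of how trajectory length limits precision, the maximum-likelihood rate constant for exponentially distributed dwell times can be sketched as follows. This is a deliberate simplification under the assumption of uncensored, independent dwells, not the Bayesian machinery the authors describe:

```python
import math

def mle_rate_constant(dwell_times):
    """Maximum-likelihood rate constant for exponentially distributed
    dwell times: k = 1 / mean(dwell). With N observed dwells the
    relative uncertainty is roughly 1/sqrt(N), showing directly how a
    finite-length trajectory (few dwells) limits precision."""
    n = len(dwell_times)
    k = n / sum(dwell_times)
    rel_err = 1.0 / math.sqrt(n)
    return k, rel_err

# Illustrative dwell times (seconds); mean dwell = 1.0 s -> k = 1.0 /s
k, rel = mle_rate_constant([0.5, 1.0, 1.5, 1.0])
```

In a real signal trajectory, dwells truncated at the start and end of the recording bias this estimator, which is one of the finite-length effects the reviewed methods correct for.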
Design of a novel instrument for active neutron interrogation of artillery shells.
Bélanger-Champagne, Camille; Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from (53 +7/-7)% to (74 +8/-10)% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 10^9 n/s.
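The 10% target on the H/N ratio can be related to the precision of the individual elemental signals by standard error propagation for a ratio of independent measurements. A sketch with purely illustrative numbers (not the instrument's actual count rates):

```python
import math

def ratio_precision(h, sigma_h, n, sigma_n):
    """Relative uncertainty of an elemental ratio R = H/N, assuming
    independent uncertainties on the H and N signals:
    sigma_R/R = sqrt((sigma_H/H)^2 + (sigma_N/N)^2)."""
    r = h / n
    rel = math.sqrt((sigma_h / h)**2 + (sigma_n / n)**2)
    return r, rel

# Hypothetical peak areas: 1% precision on H and 3% on N
r, rel = ratio_precision(1000.0, 10.0, 500.0, 15.0)
# rel ~ 0.032, comfortably inside the 10% identification requirement
```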
Methods for the Precise Locating and Forming of Arrays of Curved Features into a Workpiece
Gill, David Dennis; Keeler, Gordon A.; Serkland, Darwin K.; Mukherjee, Sayan D.
2008-10-14
Methods for manufacturing high-precision arrays of curved features (e.g., lenses) in the surface of a workpiece are described, utilizing orthogonal sets of inter-fitting locating grooves to mate a workpiece to a workpiece holder mounted to the spindle face of a rotating machine tool. The matching inter-fitting groove sets in the workpiece and the chuck allow the workpiece to be precisely and non-kinematically indexed to locations defined in two orthogonal directions perpendicular to the turning axis of the machine tool. At each location on the workpiece a curved feature can then be machined on-center to create arrays of curved features on the workpiece. The averaging effect of the corresponding sets of inter-fitting grooves provides precise repeatability in determining the relative locations of the centers of each of the curved features in an array of curved features.
Stability, precision, and near-24-hour period of the human circadian pacemaker
NASA Technical Reports Server (NTRS)
Czeisler, C. A.; Duffy, J. F.; Shanahan, T. L.; Brown, E. N.; Mitchell, J. F.; Rimmer, D. W.; Ronda, J. M.; Silva, E. J.; Allan, J. S.; Emens, J. S.;
1999-01-01
Regulation of circadian period in humans was thought to differ from that of other species, with the period of the activity rhythm reported to range from 13 to 65 hours (median 25.2 hours) and the period of the body temperature rhythm reported to average 25 hours in adulthood, and to shorten with age. However, those observations were based on studies of humans exposed to light levels sufficient to confound circadian period estimation. Precise estimation of the periods of the endogenous circadian rhythms of melatonin, core body temperature, and cortisol in healthy young and older individuals living in carefully controlled lighting conditions has now revealed that the intrinsic period of the human circadian pacemaker averages 24.18 hours in both age groups, with a tight distribution consistent with other species. These findings have important implications for understanding the pathophysiology of disrupted sleep in older people.
NASA Astrophysics Data System (ADS)
Chen, Ting-Yu
2012-06-01
This article presents a useful method for relating anchor dependency and accuracy functions to multiple attribute decision-making (MADM) problems in the context of Atanassov intuitionistic fuzzy sets (A-IFSs). Considering anchored judgement with displaced ideals and solution precision with minimal hesitation, several auxiliary optimisation models are proposed to obtain the optimal weights of the attributes and to acquire the corresponding TOPSIS (technique for order preference by similarity to the ideal solution) index for alternative rankings. Aside from the TOPSIS index, as a decision-maker's personal characteristics and own perception of self may also influence the direction of choice, the evaluation of alternatives is also conducted based on the distances of each alternative from the positive and negative ideal alternatives, respectively. This article builds on Li's [Li, D.-F. (2005), 'Multiattribute Decision Making Models and Methods Using Intuitionistic Fuzzy Sets', Journal of Computer and System Sciences, 70, 73-85] work, a seminal study of intuitionistic fuzzy decision analysis using deduced auxiliary programming models, and treats it as a benchmark method for comparative studies on anchor dependency and accuracy functions. The feasibility and effectiveness of the proposed methods are illustrated by a numerical example. Finally, a comparative analysis is conducted through computational experiments on averaging accuracy functions, TOPSIS indices, separation measures from positive and negative ideal alternatives, consistency rates of ranking orders, contradiction rates of the top alternative, and average Spearman correlation coefficients.
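For readers unfamiliar with the TOPSIS index referred to above, the classical (crisp, non-intuitionistic-fuzzy) TOPSIS ranking can be sketched as follows. The data are illustrative, and this is not Li's auxiliary-programming formulation:

```python
import math

def topsis(matrix, weights, benefit):
    """Minimal classical TOPSIS sketch: rank alternatives by relative
    closeness to the ideal solution. matrix is a list of alternative
    rows; weights sum to 1; benefit[j] is True when a larger value of
    attribute j is better."""
    n = len(matrix[0])
    # Vector-normalise each attribute column, then apply weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    ideal = [max(r[j] for r in v) if benefit[j] else min(r[j] for r in v)
             for j in range(n)]
    anti = [min(r[j] for r in v) if benefit[j] else max(r[j] for r in v)
            for j in range(n)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)  # separation from positive ideal
        d_neg = math.dist(row, anti)   # separation from negative ideal
        scores.append(d_neg / (d_pos + d_neg))  # closeness index in [0, 1]
    return scores

# Three hypothetical alternatives scored on two benefit attributes
scores = topsis([[7, 9], [8, 7], [9, 6]], [0.5, 0.5], [True, True])
```

The article's contribution lies in deriving the attribute weights and separation measures within the A-IFS framework rather than taking them as given, as this sketch does.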
Pressure and Chemical Potential: Effects Hydrophilic Soils Have on Adsorption and Transport
NASA Astrophysics Data System (ADS)
Bennethum, L. S.; Weinstein, T.
2003-12-01
Using the assumption that the thermodynamic properties of a fluid are affected by its proximity to the solid phase, a theoretical model has been developed based on upscaling and fundamental thermodynamic principles (termed Hybrid Mixture Theory). The theory indicates that Darcy's law and the Darcy-scale chemical potential (which determines the rate of adsorption and diffusion) need to be modified in order to apply to hydrophilic soils. In this talk we examine the Darcy-scale definition of pressure and chemical potential, especially as it applies to hydrophilic soils. To arrive at our model, we used hybrid mixture theory, first pioneered by Hassanizadeh and Gray in 1979. The technique involves averaging the field equations (i.e., conservation of mass, momentum balance, energy balance, etc.) to obtain macroscopic field equations, where each field variable is defined precisely in terms of its microscale counterpart. To close the system consistently with classical thermodynamics, the entropy inequality is exploited in the sense of Coleman and Noll. With the exceptions that the macroscale field variables are defined precisely in terms of their microscale counterparts and that microscopic interfacial equations can also be treated in a similar manner, the resulting system of equations is consistent with those derived using classical mixture theory. Hence the terminology, Hybrid Mixture Theory.
NASA Astrophysics Data System (ADS)
Pfister, T.; Günther, P.; Nöthen, M.; Czarske, J.
2010-02-01
Both in production engineering and process control, multidirectional displacements, deformations and vibrations of moving or rotating components have to be measured dynamically, contactlessly and with high precision. Optical sensors would be predestined for this task, but their measurement rate is often fundamentally limited. Furthermore, almost all conventional sensors measure only one measurand, i.e. either out-of-plane or in-plane distance or velocity. To solve this problem, we present a novel phase coded heterodyne laser Doppler distance sensor (PH-LDDS), which is able to determine out-of-plane (axial) position and in-plane (lateral) velocity of rough solid-state objects simultaneously and independently with a single sensor. Due to the applied heterodyne technique, stationary or purely axially moving objects can also be measured. In addition, it is shown theoretically as well as experimentally that this sensor offers concurrently high temporal resolution and high position resolution since its position uncertainty is in principle independent of the lateral object velocity in contrast to conventional distance sensors. This is a unique feature of the PH-LDDS enabling precise and dynamic position and shape measurements also of fast moving objects. With an optimized sensor setup, an average position resolution of 240 nm was obtained.
NASA Astrophysics Data System (ADS)
Ona, Toshihiro; Nishijima, Hiroshi; Kosaihira, Atsushi; Shibata, Junko
2008-04-01
An in vitro rapid and quantitative cell-based assay is in demand to verify the efficacy prediction of cancer drugs, since a cancer patient may have unconventional aspects of tumor development. Here, we show a rapid, label-free quantitative method and instrumentation for verifying apoptosis induced by cell cycle-arrest type cancer drugs (Roscovitine and D-allose), based on reaction analysis of living liver cancer cells cultured on a sensor chip with a newly developed high-precision (50 ndeg s-1 average fluctuation) surface plasmon resonance (SPR) sensor. The time-course cell reaction, measured as the SPR angle change rate over 10 min starting from 30 min of cell culture with a drug, was significantly related to cell viability. By simultaneous detection of the differential SPR angle change and of fluorescence from specific probes using the new instrument, the SPR angle was related to the nano-order decrease in inner mitochondrial membrane potential. The results obtained are universally valid for cell cycle-arrest type cancer drugs, which mediate apoptosis through different cell-signaling pathways, as demonstrated with the liver cancer cell line Hep G2 (P < 0.001). This system is aimed at clinical application for evaluating the personal therapeutic potential of drugs using cancer cells from patients.
NASA Astrophysics Data System (ADS)
Aaij, R.; Abellán Beteta, C.; Adeva, B.; Adinolfi, M.; Affolder, A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreassi, G.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Aquines Gutierrez, O.; Archilli, F.; d'Argent, P.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Bel, L. J.; Bellee, V.; Belloli, N.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Benson, S.; Benton, J.; Berezhnoy, A.; Bernet, R.; Bertolin, A.; Betti, F.; Bettler, M.-O.; van Beuzekom, M.; Bifani, S.; Billoir, P.; Bird, T.; Birnkraut, A.; Bizzeti, A.; Blake, T.; Blanc, F.; Blouw, J.; Blusk, S.; Bocci, V.; Bondar, A.; Bondar, N.; Bonivento, W.; Borgheresi, A.; Borghi, S.; Borisyak, M.; Borsato, M.; Bowcock, T. J. V.; Bowen, E.; Bozzi, C.; Braun, S.; Britsch, M.; Britton, T.; Brodzicka, J.; Brook, N. H.; Buchanan, E.; Burr, C.; Bursche, A.; Buytaert, J.; Cadeddu, S.; Calabrese, R.; Calvi, M.; Calvo Gomez, M.; Campana, P.; Campora Perez, D.; Capriotti, L.; Carbone, A.; Carboni, G.; Cardinale, R.; Cardini, A.; Carniti, P.; Carson, L.; Carvalho Akiba, K.; Casse, G.; Cassina, L.; Castillo Garcia, L.; Cattaneo, M.; Cauet, Ch.; Cavallero, G.; Cenci, R.; Charles, M.; Charpentier, Ph.; Chefdeville, M.; Chen, S.; Cheung, S.-F.; Chiapolini, N.; Chrzaszcz, M.; Cid Vidal, X.; Ciezarek, G.; Clarke, P. E. L.; Clemencic, M.; Cliff, H. V.; Closier, J.; Coco, V.; Cogan, J.; Cogneras, E.; Cogoni, V.; Cojocariu, L.; Collazuol, G.; Collins, P.; Comerma-Montells, A.; Contu, A.; Cook, A.; Coombes, M.; Coquereau, S.; Corti, G.; Corvo, M.; Couturier, B.; Cowan, G. A.; Craik, D. 
C.; Crocombe, A.; Cruz Torres, M.; Cunliffe, S.; Currie, R.; D'Ambrosio, C.; Dall'Occo, E.; Dalseno, J.; David, P. N. Y.; Davis, A.; De Aguiar Francisco, O.; De Bruyn, K.; De Capua, S.; De Cian, M.; De Miranda, J. M.; De Paula, L.; De Simone, P.; Dean, C.-T.; Decamp, D.; Deckenhoff, M.; Del Buono, L.; Déléage, N.; Demmer, M.; Derkach, D.; Deschamps, O.; Dettori, F.; Dey, B.; Di Canto, A.; Di Ruscio, F.; Dijkstra, H.; Donleavy, S.; Dordei, F.; Dorigo, M.; Dosil Suárez, A.; Dovbnya, A.; Dreimanis, K.; Dufour, L.; Dujany, G.; Dungs, K.; Durante, P.; Dzhelyadin, R.; Dziurda, A.; Dzyuba, A.; Easo, S.; Egede, U.; Egorychev, V.; Eidelman, S.; Eisenhardt, S.; Eitschberger, U.; Ekelhof, R.; Eklund, L.; El Rifai, I.; Elsasser, Ch.; Ely, S.; Esen, S.; Evans, H. M.; Evans, T.; Falabella, A.; Färber, C.; Farley, N.; Farry, S.; Fay, R.; Fazzini, D.; Ferguson, D.; Fernandez Albor, V.; Ferrari, F.; Ferreira Rodrigues, F.; Ferro-Luzzi, M.; Filippov, S.; Fiore, M.; Fiorini, M.; Firlej, M.; Fitzpatrick, C.; Fiutowski, T.; Fleuret, F.; Fohl, K.; Fol, P.; Fontana, M.; Fontanelli, F.; Forshaw, D. C.; Forty, R.; Frank, M.; Frei, C.; Frosini, M.; Fu, J.; Furfaro, E.; Gallas Torreira, A.; Galli, D.; Gallorini, S.; Gambetta, S.; Gandelman, M.; Gandini, P.; Gao, Y.; García Pardiñas, J.; Garra Tico, J.; Garrido, L.; Gascon, D.; Gaspar, C.; Gavardi, L.; Gazzoni, G.; Gerick, D.; Gersabeck, E.; Gersabeck, M.; Gershon, T.; Ghez, Ph.; Gianı, S.; Gibson, V.; Girard, O. G.; Giubega, L.; Gligorov, V. V.; Göbel, C.; Golubkov, D.; Golutvin, A.; Gomes, A.; Gotti, C.; Grabalosa Gándara, M.; Graciani Diaz, R.; Granado Cardoso, L. A.; Graugés, E.; Graverini, E.; Graziani, G.; Grecu, A.; Griffith, P.; Grillo, L.; Grünberg, O.; Gui, B.; Gushchin, E.; Guz, Yu.; Gys, T.; Hadavizadeh, T.; Hadjivasiliou, C.; Haefeli, G.; Haen, C.; Haines, S. C.; Hall, S.; Hamilton, B.; Han, X.; Hansmann-Menzemer, S.; Harnew, N.; Harnew, S. 
T.; Harrison, J.; He, J.; Head, T.; Heijne, V.; Heister, A.; Hennessy, K.; Henrard, P.; Henry, L.; Hernando Morata, J. A.; van Herwijnen, E.; Heß, M.; Hicheur, A.; Hill, D.; Hoballah, M.; Hombach, C.; Hongming, L.; Hulsbergen, W.; Humair, T.; Hushchyn, M.; Hussain, N.; Hutchcroft, D.; Hynds, D.; Idzik, M.; Ilten, P.; Jacobsson, R.; Jaeger, A.; Jalocha, J.; Jans, E.; Jawahery, A.; John, M.; Johnson, D.; Jones, C. R.; Joram, C.; Jost, B.; Jurik, N.; Kandybei, S.; Kanso, W.; Karacson, M.; Karbach, T. M.; Karodia, S.; Kecke, M.; Kelsey, M.; Kenyon, I. R.; Kenzie, M.; Ketel, T.; Khairullin, E.; Khanji, B.; Khurewathanakul, C.; Kirn, T.; Klaver, S.; Klimaszewski, K.; Kochebina, O.; Kolpin, M.; Komarov, I.; Koopman, R. F.; Koppenburg, P.; Kozeiha, M.; Kravchuk, L.; Kreplin, K.; Kreps, M.; Krokovny, P.; Kruse, F.; Krzemien, W.; Kucewicz, W.; Kucharczyk, M.; Kudryavtsev, V.; Kuonen, A. K.; Kurek, K.; Kvaratskheliya, T.; Lacarrere, D.; Lafferty, G.; Lai, A.; Lambert, D.; Lanfranchi, G.; Langenbruch, C.; Langhans, B.; Latham, T.; Lazzeroni, C.; Le Gac, R.; van Leerdam, J.; Lees, J.-P.; Lefèvre, R.; Leflat, A.; Lefrançois, J.; Lemos Cid, E.; Leroy, O.; Lesiak, T.; Leverington, B.; Li, Y.; Likhomanenko, T.; Liles, M.; Lindner, R.; Linn, C.; Lionetto, F.; Liu, B.; Liu, X.; Loh, D.; Longstaff, I.; Lopes, J. H.; Lucchesi, D.; Lucio Martinez, M.; Luo, H.; Lupato, A.; Luppi, E.; Lupton, O.; Lusardi, N.; Lusiani, A.; Machefert, F.; Maciuc, F.; Maev, O.; Maguire, K.; Malde, S.; Malinin, A.; Manca, G.; Mancinelli, G.; Manning, P.; Mapelli, A.; Maratas, J.; Marchand, J. F.; Marconi, U.; Marin Benito, C.; Marino, P.; Marks, J.; Martellotti, G.; Martin, M.; Martinelli, M.; Martinez Santos, D.; Martinez Vidal, F.; Martins Tostes, D.; Massacrier, L. 
M.; Massafferri, A.; Matev, R.; Mathad, A.; Mathe, Z.; Matteuzzi, C.; Mauri, A.; Maurin, B.; Mazurov, A.; McCann, M.; McCarthy, J.; McNab, A.; McNulty, R.; Meadows, B.; Meier, F.; Meissner, M.; Melnychuk, D.; Merk, M.; Merli, A.; Michielin, E.; Milanes, D. A.; Minard, M.-N.; Mitzel, D. S.; Molina Rodriguez, J.; Monroy, I. A.; Monteil, S.; Morandin, M.; Morawski, P.; Mordà, A.; Morello, M. J.; Moron, J.; Morris, A. B.; Mountain, R.; Muheim, F.; Müller, D.; Müller, J.; Müller, K.; Müller, V.; Mussini, M.; Muster, B.; Naik, P.; Nakada, T.; Nandakumar, R.; Nandi, A.; Nasteva, I.; Needham, M.; Neri, N.; Neubert, S.; Neufeld, N.; Neuner, M.; Nguyen, A. D.; Nguyen-Mau, C.; Niess, V.; Nieswand, S.; Niet, R.; Nikitin, N.; Nikodem, T.; Novoselov, A.; O'Hanlon, D. P.; Oblakowska-Mucha, A.; Obraztsov, V.; Ogilvy, S.; Okhrimenko, O.; Oldeman, R.; Onderwater, C. J. G.; Osorio Rodrigues, B.; Otalora Goicochea, J. M.; Otto, A.; Owen, P.; Oyanguren, A.; Palano, A.; Palombo, F.; Palutan, M.; Panman, J.; Papanestis, A.; Pappagallo, M.; Pappalardo, L. L.; Pappenheimer, C.; Parker, W.; Parkes, C.; Passaleva, G.; Patel, G. D.; Patel, M.; Patrignani, C.; Pearce, A.; Pellegrino, A.; Penso, G.; Pepe Altarelli, M.; Perazzini, S.; Perret, P.; Pescatore, L.; Petridis, K.; Petrolini, A.; Petruzzo, M.; Picatoste Olloqui, E.; Pietrzyk, B.; Pikies, M.; Pinci, D.; Pistone, A.; Piucci, A.; Playfer, S.; Plo Casasus, M.; Poikela, T.; Polci, F.; Poluektov, A.; Polyakov, I.; Polycarpo, E.; Popov, A.; Popov, D.; Popovici, B.; Potterat, C.; Price, E.; Price, J. D.; Prisciandaro, J.; Pritchard, A.; Prouve, C.; Pugatch, V.; Puig Navarro, A.; Punzi, G.; Qian, W.; Quagliani, R.; Rachwal, B.; Rademacker, J. H.; Rama, M.; Ramos Pernas, M.; Rangel, M. S.; Raniuk, I.; Raven, G.; Redi, F.; Reichert, S.; dos Reis, A. C.; Renaudin, V.; Ricciardi, S.; Richards, S.; Rihl, M.; Rinnert, K.; Rives Molina, V.; Robbe, P.; Rodrigues, A. B.; Rodrigues, E.; Rodriguez Lopez, J. 
A.; Rodriguez Perez, P.; Rogozhnikov, A.; Roiser, S.; Romanovsky, V.; Romero Vidal, A.; Ronayne, J. W.; Rotondo, M.; Ruf, T.; Ruiz Valls, P.; Saborido Silva, J. J.; Sagidova, N.; Saitta, B.; Salustino Guimaraes, V.; Sanchez Mayordomo, C.; Sanmartin Sedes, B.; Santacesaria, R.; Santamarina Rios, C.; Santimaria, M.; Santovetti, E.; Sarti, A.; Satriano, C.; Satta, A.; Saunders, D. M.; Savrina, D.; Schael, S.; Schiller, M.; Schindler, H.; Schlupp, M.; Schmelling, M.; Schmelzer, T.; Schmidt, B.; Schneider, O.; Schopper, A.; Schubiger, M.; Schune, M.-H.; Schwemmer, R.; Sciascia, B.; Sciubba, A.; Semennikov, A.; Sergi, A.; Serra, N.; Serrano, J.; Sestini, L.; Seyfert, P.; Shapkin, M.; Shapoval, I.; Shcheglov, Y.; Shears, T.; Shekhtman, L.; Shevchenko, V.; Shires, A.; Siddi, B. G.; Silva Coutinho, R.; Silva de Oliveira, L.; Simi, G.; Sirendi, M.; Skidmore, N.; Skwarnicki, T.; Smith, E.; Smith, I. T.; Smith, J.; Smith, M.; Snoek, H.; Sokoloff, M. D.; Soler, F. J. P.; Soomro, F.; Souza, D.; Souza De Paula, B.; Spaan, B.; Spradlin, P.; Sridharan, S.; Stagni, F.; Stahl, M.; Stahl, S.; Stefkova, S.; Steinkamp, O.; Stenyakin, O.; Stevenson, S.; Stoica, S.; Stone, S.; Storaci, B.; Stracka, S.; Straticiuc, M.; Straumann, U.; Sun, L.; Sutcliffe, W.; Swientek, K.; Swientek, S.; Syropoulos, V.; Szczekowski, M.; Szumlak, T.; T'Jampens, S.; Tayduganov, A.; Tekampe, T.; Tellarini, G.; Teubert, F.; Thomas, C.; Thomas, E.; van Tilburg, J.; Tisserand, V.; Tobin, M.; Todd, J.; Tolk, S.; Tomassetti, L.; Tonelli, D.; Topp-Joergensen, S.; Tournefier, E.; Tourneur, S.; Trabelsi, K.; Traill, M.; Tran, M. T.; Tresch, M.; Trisovic, A.; Tsaregorodtsev, A.; Tsopelas, P.; Tuning, N.; Ukleja, A.; Ustyuzhanin, A.; Uwer, U.; Vacca, C.; Vagnoni, V.; Valenti, G.; Vallier, A.; Vazquez Gomez, R.; Vazquez Regueiro, P.; Vázquez Sierra, C.; Vecchi, S.; van Veghel, M.; Velthuis, J. 
J.; Veltri, M.; Veneziano, G.; Vesterinen, M.; Viaud, B.; Vieira, D.; Vieites Diaz, M.; Vilasis-Cardona, X.; Volkov, V.; Vollhardt, A.; Voong, D.; Vorobyev, A.; Vorobyev, V.; Voß, C.; de Vries, J. A.; Waldi, R.; Wallace, C.; Wallace, R.; Walsh, J.; Wang, J.; Ward, D. R.; Watson, N. K.; Websdale, D.; Weiden, A.; Whitehead, M.; Wicht, J.; Wilkinson, G.; Wilkinson, M.; Williams, M.; Williams, M. P.; Williams, M.; Williams, T.; Wilson, F. F.; Wimberley, J.; Wishahi, J.; Wislicki, W.; Witek, M.; Wormser, G.; Wotton, S. A.; Wraight, K.; Wright, S.; Wyllie, K.; Xie, Y.; Xu, Z.; Yang, Z.; Yin, H.; Yu, J.; Yuan, X.; Yushchenko, O.; Zangoli, M.; Zavertyaev, M.; Zhang, L.; Zhang, Y.; Zhelezov, A.; Zhokhov, A.; Zhong, L.; Zhukov, V.; Zucchelli, S.; LHCb Collaboration
2016-06-01
Charm meson oscillations are observed in a time-dependent analysis of the ratio of D0→K+π-π+π- to D0→K-π+π-π+ decay rates, using data corresponding to an integrated luminosity of 3.0 fb-1 recorded by the LHCb experiment. The measurements presented are sensitive to the phase-space averaged ratio of doubly Cabibbo-suppressed to Cabibbo-favored amplitudes rD(K3π) and to the product of the coherence factor RD(K3π) and the charm mixing parameter y'(K3π). The constraints measured are rD(K3π) = (5.67 ± 0.12) × 10^-2, which is the most precise determination to date, and RD(K3π) y'(K3π) = (0.3 ± 1.8) × 10^-3, which provides useful input for determinations of the CP-violating phase γ in B±→DK±, D→K∓π±π∓π± decays. The analysis also gives the most precise measurement of the D0→K+π-π+π- branching fraction, and the first observation of D0-D̄0 oscillations in this decay mode, with a significance of 8.2 standard deviations.
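The sensitivity to rD(K3π) and RD(K3π)·y'(K3π) comes from the approximate time dependence of the wrong-sign to right-sign rate ratio. A sketch using the central values quoted above, with a hypothetical magnitude assumed for the quadratic mixing term:

```python
def ws_rs_ratio(t_over_tau, r_d, R_y, mix_sq):
    """Approximate wrong-sign/right-sign D0 -> K3pi rate ratio,
    valid in the small-mixing expansion:

        R(t) ~ r_d^2 + r_d * (R*y') * (t/tau) + (x^2 + y^2)/4 * (t/tau)^2

    r_d:    DCS/CF amplitude ratio rD(K3pi)
    R_y:    product RD(K3pi) * y'(K3pi)
    mix_sq: x^2 + y^2 of the charm mixing parameters."""
    return r_d**2 + r_d * R_y * t_over_tau + 0.25 * mix_sq * t_over_tau**2

# Central values from the abstract; 5e-5 for x^2 + y^2 is illustrative only
r0 = ws_rs_ratio(0.0, 5.67e-2, 0.3e-3, 5e-5)  # at t = 0 the ratio is r_d^2
r2 = ws_rs_ratio(2.0, 5.67e-2, 0.3e-3, 5e-5)  # mixing makes R grow with t
```

The time-dependent growth of this ratio relative to the constant r_d^2 term is precisely the signature reported at 8.2 standard deviations.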
Estimate of within population incremental selection through branch imbalance in lineage trees
Liberman, Gilad; Benichou, Jennifer I.C.; Maman, Yaakov; Glanville, Jacob; Alter, Idan; Louzoun, Yoram
2016-01-01
Incremental selection within a population, defined as limited fitness changes following mutation, is an important aspect of many evolutionary processes. Strongly advantageous or deleterious mutations are detected using the synonymous to non-synonymous mutation ratio. However, there are currently no precise methods to estimate incremental selection. Here we provide, for the first time, such a detailed method and show its precision in multiple cases of micro-evolution. The proposed method is a novel mixed lineage-tree/sequence-based method to detect within-population selection, as defined by the effect of mutations on the average number of offspring. Specifically, we propose to measure the log of the ratio between the number of leaves in lineage tree branches following synonymous and non-synonymous mutations. The method requires a sufficiently large number of sequences and a large enough number of independent mutations. It assumes that all mutations are independent events. It does not require a baseline model and is practically unaffected by sampling biases. We show the method's wide applicability by testing it on multiple cases of micro-evolution. We show that it can detect genes and inter-genic regions using the selection rate, and can detect selection pressures in viral proteins and in the immune response to pathogens. PMID:26586802
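The leaf-count statistic described above can be sketched directly. The counts below are illustrative, and this toy version ignores the tree-traversal and independence bookkeeping the full method requires:

```python
import math

def incremental_selection_score(leaves_after_nonsyn, leaves_after_syn):
    """Sketch of the lineage-tree selection measure: log of the ratio
    between the mean number of leaves below branches that follow
    non-synonymous vs. synonymous mutations. Positive values suggest
    advantageous mutations (more offspring), negative values suggest
    purifying selection."""
    mean_ns = sum(leaves_after_nonsyn) / len(leaves_after_nonsyn)
    mean_s = sum(leaves_after_syn) / len(leaves_after_syn)
    return math.log(mean_ns / mean_s)

# Illustrative leaf counts: non-synonymous branches average twice as
# many leaves as synonymous ones -> score = log(2) > 0
score = incremental_selection_score([8, 6, 10], [4, 4, 4])
```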
Felisberto, Filipe; Fdez-Riverola, Florentino; Pereira, António
2014-05-21
The low average birth rate in developed countries and the increase in life expectancy have led society to face an ageing situation for the first time. This situation, combined with the world economic crisis that started in 2008, forces the need to find better and more efficient ways of providing a higher quality of life for the elderly. In this context, the solution presented in this work proposes to tackle the problem of monitoring the elderly in a way that is not restrictive to the life of the monitored person, avoiding the need for premature nursing home admissions. To this end, the system fuses sensory data provided by a network of wireless sensors placed on the periphery of the user. Our approach was also designed with low-cost deployment in mind, so that the target group may be as wide as possible. Regarding the detection of long-term problems, the tests conducted showed that the precision of the system in identifying and discerning body postures and body movements allows for valid monitoring and rehabilitation of the user. Moreover, concerning the detection of accidents, while the proposed solution achieved near-100% precision in detecting normal falls, the detection of more complex falls (i.e., hampered falls) will require further study.
Ion current as a precise measure of the loading rate of a magneto-optical trap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, W.; Bailey, K.; Lu, Z. -T.
2014-01-01
We have demonstrated that the ion current resulting from collisions between metastable krypton atoms in a magneto-optical trap can be used to precisely measure the trap loading rate. We measured both the ion current of the abundant isotope Kr-83 (isotopic abundance = 11%) and the single-atom counting rate of the rare isotope Kr-85 (isotopic abundance ≈ 1 × 10^-11), and found the two quantities to be proportional at a precision level of 0.9%. This work results in a significant improvement in using the magneto-optical trap as an analytical tool for noble-gas isotope ratio measurements, and will benefit both atomic physics studies and applications in the earth sciences. (C) 2014 Optical Society of America
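Testing proportionality between the ion current and the single-atom counting rate amounts to a through-origin fit plus a scatter estimate. A sketch with illustrative numbers (the study's 0.9% figure comes from the actual measurement data, not from this example):

```python
def proportionality_check(ion_current, count_rate):
    """Least-squares slope through the origin and RMS relative scatter
    of the residuals, as a simple test of whether count_rate is
    proportional to ion_current."""
    k = sum(i * c for i, c in zip(ion_current, count_rate)) / \
        sum(i * i for i in ion_current)
    resid = [(c - k * i) / (k * i) for i, c in zip(ion_current, count_rate)]
    rms = (sum(r * r for r in resid) / len(resid)) ** 0.5
    return k, rms

# Hypothetical paired readings (arbitrary units)
k, rms = proportionality_check([1.0, 2.0, 3.0], [10.1, 19.9, 30.0])
```

A small RMS relative scatter (here about 0.7%) is what licenses using the easily measured ion current as a proxy for the loading rate of the rare isotope.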
Monitoring Chewing and Eating in Free-Living Using Smart Eyeglasses.
Zhang, Rui; Amft, Oliver
2018-01-01
We propose to 3-D-print personally fitted, regular-look smart eyeglasses frames equipped with bilateral electromyography recording to monitor the temporalis muscles' activity for automatic dietary monitoring. The personal fitting supports electrode-skin contact at the temple ear-bend and temple-end positions. We evaluated the smart monitoring eyeglasses in in-lab and free-living studies of food chewing and eating event detection with ten participants. The in-lab study was designed to explore three natural food hardness levels and to determine the parameters of an energy-based chewing cycle detection. Our free-living study investigated whether chewing monitoring and eating event detection using smart eyeglasses are feasible in free-living conditions. An eating event detection algorithm was developed to determine intake activities based on the estimated chewing rate. Results showed an average food hardness classification accuracy of 94%, and chewing cycle detection precision and recall above 90% for the in-lab study and above 77% for the free-living study, covering 122 hours of recordings. Eating detection revealed 44 eating events with an average accuracy above 95%. We conclude that smart eyeglasses are suitable for monitoring chewing and eating events in free-living conditions and could even provide further insights into the wearer's natural chewing patterns.
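An energy-based chewing cycle detector of the kind the in-lab study parameterizes can be sketched as below. The window length and threshold here are hypothetical placeholders, not the values determined in the study:

```python
def detect_chewing_cycles(emg, fs, win=0.25, threshold=1.0):
    """Sketch of an energy-based chewing-cycle detector: compute the
    short-time energy of the EMG signal in non-overlapping windows and
    count each run of above-threshold windows as one chewing cycle."""
    n = max(1, int(win * fs))
    energies = [sum(x * x for x in emg[i:i + n]) / n
                for i in range(0, len(emg) - n + 1, n)]
    cycles, active = 0, False
    for e in energies:
        if e > threshold and not active:
            cycles += 1  # rising edge: a new chewing cycle begins
        active = e > threshold
    return cycles

# Synthetic EMG: two high-energy bursts separated by rest periods
signal = [0.1] * 8 + [2.0] * 8 + [0.1] * 8 + [2.0] * 8 + [0.1] * 8
cycles = detect_chewing_cycles(signal, fs=16, win=0.25)
```

The chewing rate estimated from such cycle counts is then the input to the eating event detection stage described above.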
Anomalous amplification of a homodyne signal via almost-balanced weak values.
Liu, Wei-Tao; Martínez-Rincón, Julián; Viza, Gerardo I; Howell, John C
2017-03-01
We propose precision measurements of ultra-small angular velocities of a mirror within a modified Sagnac interferometer, where the counter-propagating beams are spatially separated, using the recently proposed technique of almost-balanced weak-values amplification (ABWV) [Phys. Rev. Lett. 116, 100803 (2016)]. The separation between the two beams provides additional amplification with respect to using collinear beams in a Sagnac interferometer. Within the same setup, the weak-value amplification technique is also performed for comparison. Much higher amplification factors can be obtained using the almost-balanced weak-values technique, the best achieved in our experiments being as high as 1.2 × 10⁷. In addition, the amplification factor monotonically increases with decreasing post-selection phase in the ABWV case in our experiments, which is not the case for weak-value amplification (WVA) at small post-selection phases. Both techniques were used to measure the angular velocity. The sensitivity of the ABWV technique is ∼38 nrad/s per averaged pulse for a repetition rate of 1 Hz, and ∼33 nrad/s per averaged pulse for the WVA technique.
High-resolution gravity field modeling using GRAIL mission data
NASA Astrophysics Data System (ADS)
Lemoine, F. G.; Goossens, S. J.; Sabaka, T. J.; Nicholas, J. B.; Mazarico, E.; Rowlands, D. D.; Neumann, G. A.; Loomis, B.; Chinn, D. S.; Smith, D. E.; Zuber, M. T.
2015-12-01
The Gravity Recovery and Interior Laboratory (GRAIL) spacecraft were designed to map the structure of the Moon through high-precision global gravity mapping. The mission consisted of two spacecraft with Ka-band inter-satellite tracking complemented by tracking from Earth. The mission had two phases: a primary mapping mission from March 1 until May 29, 2012 at an average altitude of 50 km, and an extended mission from August 30 until December 14, 2012, with an average altitude of 23 km before November 18, and average altitudes of 20 and 11 km thereafter. High-resolution gravity field models using both these data sets have been estimated, with the current resolution being degree and order 1080 in spherical harmonics. Here, we focus on aspects of the analysis of the GRAIL data: we investigate eclipse modeling, the influence of empirical accelerations on the results, and we discuss the inversion of large-scale systems. In addition to global models we also estimated local gravity adjustments in areas of particular interest such as Mare Orientale, the south pole area, and the farside. We investigate the use of Ka-band Range Rate (KBRR) data versus numerical derivatives of KBRR data, and show that the latter have the capability to locally improve correlations with topography.
Interferometric Radar Observations of Glaciar San Rafael, Chile
NASA Technical Reports Server (NTRS)
Rignot, Eric; Forster, Richard; Isacks, Bryan
1996-01-01
Interferometric radar observations of Glaciar San Rafael, Chile, were collected in October 1994 by NASA's Spaceborne Imaging Radar C (SIR-C) at both L-band (24 cm) and C-band (5.6 cm) frequencies, with vertical transmit and receive polarization. The C-band data did not yield good geophysical products, because the temporal coherence of the signal was significantly reduced after 24 h. The L-band data were, however, successfully employed to map the surface topography of the icefield with a 10 m uncertainty in height, and to measure ice velocity with a precision of 4 mm/d or 1.4 m/a. The corresponding error in strain rates is 0.05/a at a 30 m horizontal spacing. The one-dimensional interferometric velocities were subsequently converted to horizontal displacements by assuming a flow direction and complemented by feature-tracking results near the calving front. The results provide a comprehensive view of the ice-flow dynamics of Glaciar San Rafael. The glacier has a core of rapid flow, 4.5 km in width and 3.5 degrees in average slope, surrounded by slower moving ice, not by rock. Ice velocity is 2.6 m/d or 0.95 km/a near the equilibrium line altitude (1200 m), increasing rapidly before the glacier enters the narrower terminal valley, to reach 17.5 m/d or 6.4 km/a at the calving front. Strain rates are dominated by lateral shearing at the glacier margins (0.4-0.7/a), except for the terminal-valley section, where longitudinal strain rates average close to 1/a. This spectacular longitudinal increase in ice velocity in the last few kilometers may be a fundamental feature of tidewater glaciers.
A 184-year record of river meander migration from tree rings, aerial imagery, and cross sections
NASA Astrophysics Data System (ADS)
Schook, Derek M.; Rathburn, Sara L.; Friedman, Jonathan M.; Wolf, J. Marshall
2017-09-01
Channel migration is the primary mechanism of floodplain turnover in meandering rivers and is essential to the persistence of riparian ecosystems. Channel migration is driven by river flows, but short-term records cannot disentangle the effects of land use, flow diversion, past floods, and climate change. We used three data sets to quantify nearly two centuries of channel migration on the Powder River in Montana. The most precise data set came from channel cross sections measured an average of 21 times from 1975 to 2014. We then extended spatial and temporal scales of analysis using aerial photographs (1939-2013) and by aging plains cottonwoods along transects (1830-2014). Migration rates calculated from overlapping periods across data sets mostly revealed cross-method consistency. Data set integration revealed that migration rates have declined since peaking at 5 m/year in the two decades after the extreme 1923 flood (3000 m³/s). Averaged over the duration of each data set, cross section channel migration occurred at 0.81 m/year, compared to 1.52 m/year for the medium-length air photo record and 1.62 m/year for the lengthy cottonwood record. Powder River peak annual flows decreased by 48% (201 vs. 104 m³/s) after the largest flood of the post-1930 gaged record (930 m³/s in 1978). Declining peak discharges led to a 53% reduction in channel width and a 29% increase in sinuosity over the 1939-2013 air photo record. Changes in planform geometry and reductions in channel migration make calculations of floodplain turnover rates dependent on the period of analysis. We found that the intensively studied last four decades do not represent the past two centuries.
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla
2017-02-01
The present study aimed to elucidate whether comparison of angular segments of Pigment epithelium central limit-Inner limit of the retina Minimal Distance, measured over 2π radians in the frontal plane (PIMD-2π), between visits of a patient renders sufficient precision for detection of loss of nerve fibers in the optic nerve head. An optic nerve head raster-scanned cube was captured with a TOPCON 3D OCT 2000 (Topcon, Japan) device in one early- to moderate-stage glaucoma eye of each of 13 patients. All eyes were recorded at two visits less than 1 month apart. At each visit, 3 volumes were captured. Each volume was extracted from the OCT device for analysis. Then, angular PIMD was segmented three times over 2π radians in the frontal plane, resolved with a semi-automatic algorithm in 500 equally separated steps (PIMD-2π). It was found that individual segmentations within volumes, within visits, within subjects can be phase-adjusted to each other in the frontal plane using cross-correlation. Cross-correlation was also used to phase-adjust volumes within visits within subjects, and visits to each other within subjects. Then, PIMD-2π for each subject was split into 250 bundles of 2 adjacent PIMDs. Finally, the sources of variation for estimates of segments of PIMD-2π were derived with analysis of variance assuming a mixed model. The variation among adjacent PIMDs was found to be very small in relation to the variation among segmentations. The variation among visits was found insignificant in relation to the variation among volumes, and the variance for segmentations was found to be on the order of 20% of that for volumes. The estimated variances imply that, if 3 segmentations are averaged within a volume and at least 10 volumes are averaged within a visit, it is possible to establish around a 10% reduction of a PIMD-2π segment from baseline to a subsequent visit as significant. Considering a loss rate for a PIMD-2π segment of 23 μm/yr, 4 visits per year, and averaging 3 segmentations per volume and 3 volumes per visit, a significant reduction from baseline can be detected with a power of 80% in about 18 months. At a higher loss rate for a PIMD-2π segment, a significant difference from baseline can be detected earlier. Averaging over more volumes per visit considerably decreases the time for detection of a significant reduction of a segment of PIMD-2π. Increasing the number of segmentations averaged per visit only slightly reduces the time for detection of a significant reduction. It is concluded that phase adjustment in the frontal plane with cross-correlation allows high-precision estimates of a segment of PIMD-2π that imply substantially shorter follow-up time for detection of a significant change than mean deviation (MD) in a visual field estimated with the Humphrey perimeter or neural rim area (NRA) estimated with the Heidelberg retinal tomograph.
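The averaging argument above can be sketched with a standard two-level variance-components formula. The variance values below are illustrative stand-ins; only their ratio (segmentation variance at roughly 20% of volume variance) follows the abstract.

```python
# Two-level averaging sketch: volumes nested in visits, segmentations
# nested in volumes. The variance of a per-visit mean is
#   var(mean) = s2_vol / n_vol + s2_seg / (n_vol * n_seg)
# so extra volumes shrink both terms, extra segmentations only the second.

def visit_mean_variance(s2_vol, s2_seg, n_vol, n_seg):
    """Variance of the visit mean for n_vol volumes x n_seg segmentations each."""
    return s2_vol / n_vol + s2_seg / (n_vol * n_seg)

# Illustrative values: segmentation variance ~20% of volume variance.
s2_vol, s2_seg = 1.0, 0.2

base = visit_mean_variance(s2_vol, s2_seg, n_vol=3, n_seg=3)
more_volumes = visit_mean_variance(s2_vol, s2_seg, n_vol=10, n_seg=3)
more_segmentations = visit_mean_variance(s2_vol, s2_seg, n_vol=3, n_seg=10)

# More volumes cut the variance far more than more segmentations do,
# matching the abstract's conclusion about where averaging pays off.
assert more_volumes < more_segmentations < base
```

This makes explicit why averaging more volumes per visit shortens detection time much more than averaging more segmentations per volume.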
Li, Tingting; Wang, Wei; Zhao, Haijian; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo
2017-09-07
This study aimed to investigate the status of internal quality control (IQC) for cardiac biomarkers from 2011 to 2016, to provide an overall picture of the precision level of measurements in China and to set appropriate precision specifications. Internal quality control data for cardiac biomarkers, including creatine kinase MB (CK-MB) (μg/L), CK-MB (U/L), myoglobin (Mb), cardiac troponin I (cTnI), cardiac troponin T (cTnT), and homocysteine (HCY), were collected by a web-based external quality assessment (EQA) system. Percentages of laboratories meeting five precision quality specifications for current coefficients of variation (CVs) were calculated. Then, appropriate precision specifications were chosen for these six analytes. Finally, the CVs and IQC practice were further analyzed with different grouping methods. The current CVs remained nearly constant over the 6 years. cTnT had the highest pass rates every year against all five specifications, whereas HCY had the lowest pass rates. Overall, most analytes had a satisfactory performance (pass rates >80%), except for HCY, if one-third TEa or the minimum specification was employed. When the optimal specification was applied, the performance of most analytes was frustrating (pass rates <60%), except for cTnT. The appropriate precision specifications of CK-MB (μg/L), CK-MB (U/L), Mb, cTnI, cTnT, and HCY were set as current CVs less than 9.20%, 9.90%, 7.50%, 10.54%, 7.63%, and 6.67%, respectively. The data on IQC practices indicated wide variation and substantial progress. The precision performance of cTnT was already satisfactory, while that of the other five analytes, especially HCY, was still frustrating; thus, ongoing investigation and continuous improvement of IQC are still needed. © 2017 Wiley Periodicals, Inc.
Prochazka, Ivan; Kodet, Jan; Panek, Petr
2012-11-01
We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances with sub-picosecond precision and a systematic error of a few picoseconds. The concept of the electronic circuit makes it possible to carry out time tagging of pulses of interest in parallel with the comparison of the time scales of these timing devices. The key timing parameters of the circuit are: temperature dependence of the delay below 100 fs/K, timing stability (time deviation) better than 8 fs for averaging times from minutes to hours, sub-picosecond time transfer precision, and a few picoseconds time transfer accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frey, R.E.
1994-12-01
A precise measurement of the left-right cross section asymmetry (A_LR) for Z boson production by e⁺e⁻ collisions has been attained at the SLAC Linear Collider with the SLD detector. The author describes this measurement for the 1993 data run, emphasizing the significant improvements in polarized beam operation which took place for this run, where the luminosity-weighted electron beam polarization averaged 62.6 ± 1.2%. Preliminary 1993 results for A_LR are presented. When combined with the (less precise) 1992 result, the preliminary result for the effective weak mixing angle is sin²θ_W^eff = 0.2290 ± 0.0010.
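The quoted asymmetry is conventionally extracted from the raw asymmetry of Z counts taken with left- versus right-polarized beams, corrected by the luminosity-weighted beam polarization. A minimal sketch with hypothetical counts (not the SLD data):

```python
# Standard left-right asymmetry extraction:
#   A_LR = [(N_L - N_R) / (N_L + N_R)] / <P_e>
# where N_L, N_R are Z counts for left-/right-polarized electron beams
# and <P_e> is the luminosity-weighted beam polarization.

def left_right_asymmetry(n_left, n_right, polarization):
    """Polarization-corrected left-right asymmetry from raw counts."""
    raw = (n_left - n_right) / (n_left + n_right)
    return raw / polarization

# Hypothetical counts, with the 62.6% polarization quoted in the abstract.
a_lr = left_right_asymmetry(n_left=26000, n_right=24000, polarization=0.626)
```

The raw counting asymmetry is diluted by incomplete polarization, which is why the polarization uncertainty feeds directly into the A_LR systematic error.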
Bruegger, Lukas; Studer, Peter; Schmid, Stefan W; Pestel, Gunther; Reichen, Juerg; Seiler, Christian; Candinas, Daniel; Inderbitzin, Daniel
2008-01-01
Non-invasive pulse spectrophotometry to measure indocyanine green (ICG) elimination correlates well with the conventional invasive ICG clearance test. Nevertheless, the precision of this method remains unclear for any application, including small-for-size liver remnants. We therefore measured ICG plasma disappearance rate (PDR) during the anhepatic phase of orthotopic liver transplantation using pulse spectrophotometry. Measurements were done in 24 patients. The median PDR after exclusion of two outliers and two patients with inconstant signal was 1.55%/min (95% confidence interval [CI]=0.8-2.2). No correlation with patient age, gender, body mass, blood loss, administration of fresh frozen plasma, norepinephrine dose, postoperative albumin (serum), or difference in pre and post transplant body weight was detected. In conclusion, we found an ICG-PDR different from zero in the anhepatic phase, an overestimation that may arise in particular from a redistribution into the interstitial space. If ICG pulse spectrophotometry is used to measure functional hepatic reserve, the verified average difference from zero (1.55%/min) determined in our study needs to be taken into account.
Puente Hills blind-thrust system, Los Angeles, California
Shaw, J.H.; Plesch, A.; Dolan, J.F.; Pratt, T.L.; Fiore, P.
2002-01-01
We describe the three-dimensional geometry and Quaternary slip history of the Puente Hills blind-thrust system (PHT) using seismic reflection profiles, petroleum well data, and precisely located seismicity. The PHT generated the 1987 Whittier Narrows (moment magnitude [Mw] 6.0) earthquake and extends for more than 40 km along strike beneath the northern Los Angeles basin. The PHT comprises three, north-dipping ramp segments that are overlain by contractional fault-related folds. Based on an analysis of these folds, we produce Quaternary slip profiles along each ramp segment. The fault geometry and slip patterns indicate that segments of the PHT are related by soft-linkage boundaries, where the fault ramps are en echelon and displacements are gradually transferred from one segment to the next. Average Quaternary slip rates on the ramp segments range from 0.44 to 1.7 mm/yr, with preferred rates between 0.62 and 1.28 mm/yr. Using empirical relations among rupture area, magnitude, and coseismic displacement, we estimate the magnitude and frequency of single (Mw 6.5-6.6) and multisegment (Mw 7.1) rupture scenarios for the PHT.
Michmizos, Kostis P; Nikita, Konstantina S
2011-01-01
The crucial engagement of the subthalamic nucleus (STN) with the neurosurgical procedure of deep brain stimulation (DBS) that alleviates medically intractable Parkinsonian tremor augments the need to refine our current understanding of STN. To enhance the efficacy of DBS as a result of precise targeting, STN boundaries are accurately mapped using extracellular microelectrode recordings (MERs). We utilized the intranuclear MER to acquire the local field potential (LFP) and drive an Izhikevich model of an STN neuron. Using the model as the test bed for clinically acquired data, we demonstrated that stimulation of the STN neuron produces excitatory responses that tonically increase its average firing rate and alter the pattern of its neuronal activity. We also found that the spiking rhythm increases linearly with the increase of amplitude, frequency, and duration of the DBS pulse, inside the clinical range. Our results are in agreement with the current hypothesis that DBS increases the firing rate of STN and masks its pathological bursting firing pattern.
Multiple Reaction Monitoring Enables Precise Quantification of 97 Proteins in Dried Blood Spots*
Chambers, Andrew G.; Percy, Andrew J.; Yang, Juncong; Borchers, Christoph H.
2015-01-01
The dried blood spot (DBS) methodology provides a minimally invasive approach to sample collection and enables room-temperature storage for most analytes. DBS samples have successfully been analyzed by liquid chromatography multiple reaction monitoring mass spectrometry (LC/MRM-MS) to quantify a large range of small molecule biomarkers and drugs; however, this strategy has only recently been explored for MS-based proteomics applications. Here we report the development of a highly multiplexed MRM assay to quantify endogenous proteins in human DBS samples. This assay uses matching stable isotope-labeled standard peptides for precise, relative quantification, and standard curves to characterize the analytical performance. A total of 169 peptides, corresponding to 97 proteins, were quantified in the final assay with an average linear dynamic range of 207-fold and an average R² value of 0.987. The total range of this assay spanned almost 5 orders of magnitude from serum albumin (P02768) at 18.0 mg/ml down to cholinesterase (P06276) at 190 ng/ml. The average intra-assay and inter-assay precision for 6 biological samples ranged from 6.1–7.5% CV and 9.5–11.0% CV, respectively. The majority of peptide targets were stable after 154 days at storage temperatures from −20 °C to 37 °C. Furthermore, protein concentration ratios between matching DBS and whole blood samples were largely constant (<20% CV) across six biological samples. This assay represents the highest multiplexing yet achieved for targeted protein quantification in DBS samples and is suitable for biomedical research applications. PMID:26342038
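The standard-curve quantification described above can be sketched as a simple linear calibration of the light/heavy peak-area ratio against known concentrations. All numbers below are synthetic, and the fit-and-invert helper is a generic least-squares routine, not the authors' pipeline:

```python
# Sketch of stable-isotope-standard (SIS) calibration: a fixed amount of
# heavy-labeled peptide is spiked into every sample, the light/heavy area
# ratio is fit against known standard concentrations, and the line is
# inverted to quantify unknowns.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic standard-curve points: concentration (ng/ml) -> light/heavy ratio.
conc = [10.0, 50.0, 100.0, 500.0, 1000.0]
ratio = [0.021, 0.098, 0.199, 1.010, 1.990]

slope, intercept = fit_line(conc, ratio)

def quantify(measured_ratio):
    """Invert the calibration line to estimate a sample's concentration."""
    return (measured_ratio - intercept) / slope

# On this synthetic curve a ratio of 0.5 falls near 250 ng/ml.
estimate = quantify(0.5)
```

Because the heavy standard co-elutes with the endogenous peptide, the ratio cancels much of the run-to-run instrument variability, which is what makes the relative quantification precise.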
Grima, R
2010-07-21
Chemical master equations provide a mathematical description of stochastic reaction kinetics in well-mixed conditions. They are a valid description over length scales that are larger than the reactive mean free path and thus describe kinetics in compartments of mesoscopic and macroscopic dimensions. The trajectories of the stochastic chemical processes described by the master equation can be ensemble-averaged to obtain the average number density of chemical species, i.e., the true concentration, at any spatial scale of interest. For macroscopic volumes, the true concentration is very well approximated by the solution of the corresponding deterministic and macroscopic rate equations, i.e., the macroscopic concentration. However, this equivalence breaks down for mesoscopic volumes. These deviations are particularly significant for open systems and cannot be calculated via the Fokker-Planck or linear-noise approximations of the master equation. We utilize the system-size expansion including terms of order Ω^(-1/2) to derive a set of differential equations whose solution approximates the true concentration as given by the master equation. These equations are valid in any open or closed chemical reaction network and at both the mesoscopic and macroscopic scales. In the limit of large volumes, the effective mesoscopic rate equations become precisely equal to the conventional macroscopic rate equations. We compare the three formalisms of effective mesoscopic rate equations, conventional rate equations, and chemical master equations by applying them to several biochemical reaction systems (homodimeric and heterodimeric protein-protein interactions, series of sequential enzyme reactions, and positive feedback loops) in nonequilibrium steady-state conditions. In all cases, we find that the effective mesoscopic rate equations can predict very well the true concentration of a chemical species. 
This provides a useful method by which one can quickly determine the regions of parameter space in which there are maximum differences between the solutions of the master equation and the corresponding rate equations. We show that these differences depend sensitively on the Fano factors and on the inherent structure and topology of the chemical network. The theory of effective mesoscopic rate equations generalizes the conventional rate equations of physical chemistry to describe kinetics in systems of mesoscopic size such as biological cells.
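The ensemble averaging of master-equation trajectories described above can be sketched with a minimal Gillespie simulation of an open birth-death process. The rates are illustrative; note that for this linear network the ensemble average coincides with the macroscopic rate-equation steady state, so it does not exhibit the mesoscopic deviations the paper analyzes for nonlinear systems:

```python
import random

# Gillespie (stochastic simulation algorithm) sketch of the open network
#   0 -> X  at rate k0   (production)
#   X -> 0  at rate k1*n (degradation)
# Ensemble-averaging many trajectories at t_end approximates the "true
# concentration" of the master equation; the macroscopic rate equation
# gives the steady state n* = k0 / k1.

def gillespie_steady_mean(k0, k1, t_end, n_runs, seed=0):
    """Ensemble average of the copy number at time t_end over n_runs runs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        t, n = 0.0, 0
        while True:
            a0 = k0 + k1 * n            # total propensity
            t += rng.expovariate(a0)    # exponential waiting time
            if t >= t_end:
                break                   # state at t_end is the pre-jump state
            if rng.random() < k0 / a0:  # birth chosen with probability k0/a0
                n += 1
            else:
                n -= 1
        total += n
    return total / n_runs

mean_n = gillespie_steady_mean(k0=10.0, k1=1.0, t_end=20.0, n_runs=500)
# Macroscopic steady state for comparison: k0 / k1 = 10.
```

For nonlinear reactions (e.g. dimerization), the same ensemble average would deviate from the rate-equation prediction in small volumes, which is precisely the gap the effective mesoscopic rate equations are built to close.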
Schrank, Elisa S; Hitch, Lester; Wallace, Kevin; Moore, Richard; Stanhope, Steven J
2013-10-01
Passive-dynamic ankle-foot orthosis (PD-AFO) bending stiffness is a key functional characteristic for achieving enhanced gait function. However, current orthosis customization methods inhibit objective premanufacture tuning of the PD-AFO bending stiffness, making optimization of orthosis function challenging. We have developed a novel virtual functional prototyping (VFP) process, which harnesses the strengths of computer aided design (CAD) model parameterization and finite element analysis, to quantitatively tune and predict the functional characteristics of a PD-AFO, which is rapidly manufactured via fused deposition modeling (FDM). The purpose of this study was to assess the VFP process for PD-AFO bending stiffness. A PD-AFO CAD model was customized for a healthy subject and tuned to four bending stiffness values via VFP. Two sets of each tuned model were fabricated via FDM using medical-grade polycarbonate (PC-ISO). Dimensional accuracy of the fabricated orthoses was excellent (average 0.51 ± 0.39 mm). Manufacturing precision ranged from 0.0 to 0.74 Nm/deg (average 0.30 ± 0.36 Nm/deg). Bending stiffness prediction accuracy was within 1 Nm/deg using the manufacturer provided PC-ISO elastic modulus (average 0.48 ± 0.35 Nm/deg). Using an experimentally derived PC-ISO elastic modulus improved the optimized bending stiffness prediction accuracy (average 0.29 ± 0.57 Nm/deg). Robustness of the derived modulus was tested by carrying out the VFP process for a disparate subject, tuning the PD-AFO model to five bending stiffness values. For this disparate subject, bending stiffness prediction accuracy was strong (average 0.20 ± 0.14 Nm/deg). Overall, the VFP process had excellent dimensional accuracy, good manufacturing precision, and strong prediction accuracy with the derived modulus. 
Implementing VFP as part of our PD-AFO customization and manufacturing framework, which also includes fit customization, provides a novel and powerful method to predictably tune and precisely manufacture orthoses with objectively customized fit and functional characteristics.
Expertise for upright faces improves the precision but not the capacity of visual working memory.
Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank
2014-10-01
Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.
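Precision in method-of-adjustment paradigms is commonly summarized by the dispersion of report errors. A simplified sketch follows; a full discrete-capacity fit would also estimate a uniform guessing component, which is omitted here, and the error values are synthetic:

```python
import math

# Circular standard deviation of angular report errors, a common precision
# summary when the stimulus dimension wraps around (errors in radians).
# Lower circular SD = higher memory precision.

def circular_sd(errors):
    """Circular standard deviation sqrt(-2 ln R) of angular errors."""
    c = sum(math.cos(e) for e in errors) / len(errors)
    s = sum(math.sin(e) for e in errors) / len(errors)
    r = math.hypot(c, s)  # mean resultant length R
    return math.sqrt(-2.0 * math.log(r))

# Synthetic error distributions: tight for upright faces, broad for inverted.
upright_errors = [0.05, -0.10, 0.08, -0.04, 0.12, -0.07]
inverted_errors = [0.40, -0.60, 0.50, -0.30, 0.70, -0.45]

# Inversion broadens the error distribution, i.e. lowers precision,
# mirroring the pattern reported in the abstract.
assert circular_sd(upright_errors) < circular_sd(inverted_errors)
```

Separating this dispersion term from the guess rate is what lets the discrete capacity model attribute inversion costs to precision rather than capacity.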
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
§ 1.989(b)-1 Definition of weighted average exchange rate (Income Taxes, Export Trade Corporations). For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate...
Development and simulation of microfluidic Wheatstone bridge for high-precision sensor
NASA Astrophysics Data System (ADS)
Shipulya, N. D.; Konakov, S. A.; Krzhizhanovskaya, V. V.
2016-08-01
In this work we present the results of analytical modeling and 3D computer simulation of microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of a bridge balancing process by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures a precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves and other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of microchannels was selected based on the analytical estimations. A detailed 3D numerical model was based on Navier-Stokes equations for a laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems.
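The balancing behavior described above follows from treating each microchannel as a resistor with Q = ΔP/R, the low-Reynolds analogue of Ohm's law. A minimal sketch with illustrative resistances (not the paper's geometry):

```python
# Hydraulic Wheatstone bridge: inlet--r1--B--r2--outlet and
# inlet--r3--C--r4--outlet, with the bridge channel r5 between midpoints
# B and C. Nodal mass conservation at B and C gives a 2x2 linear system,
# solved here by Cramer's rule.

def bridge_flow(r1, r2, r3, r4, r5, p_in, p_out=0.0):
    """Flow through the bridge channel r5 (conductances g = 1/R)."""
    g1, g2, g3, g4, g5 = (1.0 / r for r in (r1, r2, r3, r4, r5))
    # Mass conservation:
    #   (g1+g2+g5) pB - g5 pC = g1 p_in + g2 p_out
    #  -g5 pB + (g3+g4+g5) pC = g3 p_in + g4 p_out
    a11, a12, b1 = g1 + g2 + g5, -g5, g1 * p_in + g2 * p_out
    a21, a22, b2 = -g5, g3 + g4 + g5, g3 * p_in + g4 * p_out
    det = a11 * a22 - a12 * a21
    p_b = (b1 * a22 - a12 * b2) / det
    p_c = (a11 * b2 - b1 * a21) / det
    return g5 * (p_b - p_c)

# Balanced bridge (r1/r2 == r3/r4): no flow through the bridge channel.
balanced = bridge_flow(1.0, 2.0, 2.0, 4.0, 1.0, p_in=100.0)
# Perturbing one arm (as etching a channel would) unbalances the bridge.
unbalanced = bridge_flow(1.0, 2.0, 2.0, 3.0, 1.0, p_in=100.0)
```

This captures the sensing principle: a geometry change in any arm shifts the midpoint pressures and drives a measurable flow through the otherwise quiescent bridge channel, with no valves required.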
Island colonisation and the evolutionary rates of body size in insular neonate snakes
Aubret, F
2015-01-01
Island colonisation by animal populations is often associated with dramatic shifts in body size. However, little is known about the rates at which these evolutionary shifts occur, under what precise selective pressures and the putative role played by adaptive plasticity on driving such changes. Isolation time played a significant role in the evolution of body size in island Tiger snake populations, where adaptive phenotypic plasticity followed by genetic assimilation fine-tuned neonate body and head size (hence swallowing performance) to prey size. Here I show that in long isolated islands (>6000 years old) and mainland populations, neonate body mass and snout-vent length are tightly correlated with the average prey body mass available at each site. Regression line equations were used to calculate body size values to match prey size in four recently isolated populations of Tiger snakes. Rates of evolution in body mass and snout-vent length, calculated for seven island snake populations, were significantly correlated with isolation time. Finally, rates of evolution in body mass per generation were significantly correlated with levels of plasticity in head growth rates. This study shows that body size evolution occurs at a faster pace in recently isolated populations and suggests that the level of adaptive plasticity for swallowing abilities may correlate with rates of body mass evolution. I hypothesise that, in the early stages of colonisation, adaptive plasticity and directional selection may combine and generate accelerated evolution towards an 'optimal' phenotype. PMID:25074570
NASA Astrophysics Data System (ADS)
Sánchez, Daniel; Kraus, F. Bernhard; Hernández, Manuel De Jesús; Vandame, Rémy
2007-07-01
Recruitment precision, i.e. the proportion of recruits that reach an advertised food source, is a crucial adaptation of social bees to their environment. Studies with honeybees showed that recruitment precision is not a fixed feature, but may be enhanced by factors like experience and distance. However, little is known regarding the recruitment precision of stingless bees. Hence, in this study, we examined the effects of experience and spatial distance on the precision of the food communication system of the stingless bee Scaptotrigona mexicana. We conducted the experiments by training bees to a three-dimensional artificial patch at several distances from the colony. We recorded the choices of individual recruited foragers, either being newcomers (foragers without experience with the advertised food source) or experienced (foragers that had previously visited the feeder). We found that the average precision of newcomers (95.6 ± 2.61%) was significantly higher than that of experienced bees (80.2 ± 1.12%). While this might seem counter-intuitive at first sight, this “loss” of precision can be explained by the tendency of experienced recruits to explore nearby areas to find new rewarding food sources after they had initially learned the exact location of the food source. Increasing the distance from the colony had no significant effect on the precision of the foraging bees. Thus, our data show that experience, but not the distance of the food source, affected the patch precision of S. mexicana foragers.
Fine-grained information extraction from German transthoracic echocardiography reports.
Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank
2015-11-12
Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and on documents with uncommon layout.
The developed terminology and the proposed information extraction system allow extraction of fine-grained information from German semi-structured transthoracic echocardiography reports with very high precision and high recall on the majority of documents at the University Hospital of Würzburg. Extracted results populate a clinical data warehouse which supports clinical research.
Applications of inertial-sensor high-inheritance instruments to DSN precision antenna pointing
NASA Technical Reports Server (NTRS)
Goddard, R. E.
1992-01-01
Laboratory test results of the initialization and tracking performance of an existing inertial-sensor-based instrument are given. The instrument, although not primarily designed for precision antenna pointing applications, demonstrated an average tracking error of several millidegrees over 10 hours. The system-level instrument performance is shown by analysis to be sensor limited. Simulated instrument improvements show a tracking error of less than 1 mdeg, which would provide acceptable performance, i.e., low pointing loss, for the Deep Space Network (DSN) 70-m antenna subnetwork operating at Ka-band (1-cm wavelength).
Diameter control of single-walled carbon nanotube forests from 1.3–3.0 nm by arc plasma deposition
Chen, Guohai; Seki, Yasuaki; Kimura, Hiroe; Sakurai, Shunsuke; Yumura, Motoo; Hata, Kenji; Futaba, Don N.
2014-01-01
We present a method to both precisely and continuously control the average diameter of single-walled carbon nanotubes in a forest ranging from 1.3 to 3.0 nm with ~1 Å resolution. The diameter control of the forest was achieved through tuning of the catalyst state (size, density, and composition) using arc plasma deposition of nanoparticles. This 1.7 nm control range and 1 Å precision exceed the highest reports to date. PMID:24448201
Error in telemetry studies: Effects of animal movement on triangulation
Schmutz, Joel A.; White, Gary C.
1990-01-01
We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
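The movement effect described in this abstract can be reproduced with a minimal Monte Carlo sketch (not the authors' simulation code; the observer spacing, target position, and movement distances below are illustrative assumptions):

```python
import math
import random

def intersect(p1, b1, p2, b2):
    """Intersect two bearing rays from observers p1, p2 (bearings in radians,
    measured clockwise from north). Returns None for parallel bearings."""
    d1 = (math.sin(b1), math.cos(b1))
    d2 = (math.sin(b2), math.cos(b2))
    # Solve p1 + t1*d1 = p2 + t2*d2 as a 2x2 linear system for t1 (Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

random.seed(1)
obs1, obs2 = (0.0, 0.0), (800.0, 0.0)  # two observers ~800 m apart
true0 = (400.0, 600.0)                 # animal position at the first bearing
errors = []
for move in (0.0, 250.0, 500.0):       # metres moved between sequential bearings
    errs = []
    for _ in range(5000):
        ang = random.uniform(0, 2 * math.pi)
        true1 = (true0[0] + move * math.cos(ang), true0[1] + move * math.sin(ang))
        b1 = math.atan2(true0[0] - obs1[0], true0[1] - obs1[1])
        b2 = math.atan2(true1[0] - obs2[0], true1[1] - obs2[1])
        est = intersect(obs1, b1, obs2, b2)
        if est is not None:
            # error relative to the animal's average position during the two fixes
            mid = ((true0[0] + true1[0]) / 2, (true0[1] + true1[1]) / 2)
            errs.append(math.hypot(est[0] - mid[0], est[1] - mid[1]))
    errors.append(sum(errs) / len(errs))
print(errors)  # mean location error grows with movement between bearings
```

As in the abstract, movement between sequential bearings inflates location error even though each bearing is individually exact; only simultaneous bearings (move = 0) eliminate it.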
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Programmable noise bandwidth reduction by means of digital averaging
NASA Technical Reports Server (NTRS)
Poklemba, John J. (Inventor)
1993-01-01
Predetection noise bandwidth reduction is effected by a pre-averager capable of digitally averaging the samples of an input data signal over two or more symbols, the averaging interval being defined by the input sampling rate divided by the output sampling rate. As the averaged sample is clocked to a suitable detector at a much slower rate than the input signal sampling rate the noise bandwidth at the input to the detector is reduced, the input to the detector having an improved signal to noise ratio as a result of the averaging process, and the rate at which such subsequent processing must operate is correspondingly reduced. The pre-averager forms a data filter having an output sampling rate of one sample per symbol of received data. More specifically, selected ones of a plurality of samples accumulated over two or more symbol intervals are output in response to clock signals at a rate of one sample per symbol interval. The pre-averager includes circuitry for weighting digitized signal samples using stored finite impulse response (FIR) filter coefficients. A method according to the present invention is also disclosed.
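The core idea of the patent, averaging blocks of input samples so the detector runs at one sample per symbol with reduced noise bandwidth, can be sketched with uniform (boxcar) weights, a special case of the FIR weighting described; all parameter values here are illustrative:

```python
import random
import statistics

def pre_average(samples, n):
    """Boxcar pre-averager: average each block of n input samples, emitting one
    output sample per block (output rate = input rate / n)."""
    return [sum(samples[i:i + n]) / n
            for i in range(0, len(samples) - n + 1, n)]

random.seed(0)
n = 16                                    # input samples averaged per output symbol
signal = [1.0] * (n * 1000)               # constant symbol level
noisy = [s + random.gauss(0.0, 1.0) for s in signal]
out = pre_average(noisy, n)

# Averaging n independent noise samples cuts noise power by n, i.e. the noise
# standard deviation by sqrt(n) (~4x here), while the detector now runs at 1/16
# of the input sampling rate.
print(statistics.stdev(noisy), statistics.stdev(out))
```

The improved signal-to-noise ratio at the detector input and the reduced downstream processing rate both follow from the same decimating average.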
NASA Astrophysics Data System (ADS)
Reitz, M. A.; Seeber, L.; Schaefer, J. M.; Ferguson, E. K.
2012-12-01
Early studies pioneering the method of measuring catchment-wide erosion rates from 10Be in alluvial sediment took samples at river mouths and used the sand-size grain fraction from the riverbeds in order to average upstream erosion rates and measure erosion patterns. Finer particles (<0.0625 mm) were excluded to reduce the possibility of a wind-blown component of sediment and coarser particles (>2 mm) were excluded to better approximate erosion from the entire upstream catchment area (coarse grains are generally found near the source). Now that the sensitivity of 10Be measurements is rapidly increasing, we can precisely measure erosion rates from rivers eroding active tectonic regions. These active regions create higher-energy drainage systems that erode faster and carry coarser sediment. In these settings, does the sand-sized fraction fully capture the average erosion of the upstream drainage area? Or does a different grain size fraction provide a more accurate measure of upstream erosion? During a study of the Neto River in Calabria, southern Italy, we took 8 samples along the length of the river, focusing on collecting samples just below confluences with major tributaries, in order to use the high-resolution erosion rate data to constrain tectonic motion. The samples we measured were sieved to either a 0.125-0.710 mm fraction or a 0.125-4 mm fraction (depending on how much of the former was available). After measuring these 8 samples for 10Be and determining erosion rates, we used the approach of Granger et al. [1996] to calculate the subcatchment erosion rates between each sample point. In the subcatchments of the river where we used grain sizes up to 4 mm, we measured very low 10Be concentrations (corresponding to high erosion rates) and calculated nonsensical subcatchment erosion rates (i.e. negative rates).
We, therefore, hypothesize that the coarser grain sizes we included preferentially sample a smaller upstream area, not the entire upstream catchment, which is assumed when measurements are based solely on the sand-sized fraction. To test this hypothesis, we used samples with a variety of grain sizes from the Shillong Plateau. We sieved 5 samples into three grain size fractions: 0.125-0.710 mm, 0.710-4 mm, and >4 mm, and measured 10Be concentrations in each fraction. Although there is some variation in the grain size fraction that yields the highest erosion rate, generally the coarser grain size fractions have higher erosion rates. More significant are the results when calculating the subcatchment erosion rates, which suggest that even medium-sized grains (0.710-4 mm) are sampling an area smaller than the entire upstream area; this finding is consistent with the nonsensical results from the Neto River study. This result has numerous implications for the interpretation of 10Be erosion rates: most importantly, an alluvial sample may not be averaging the entire upstream area, even when using the sand-size fraction, with the resulting erosion rates being more pertinent to that sample point than to the entire catchment.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... exist: Exporter / Weighted-average dumping margin (percent): Golden Dragon Precise Copper Tube Group, Inc., Hong Kong, 3.55; GD Trading Co., Ltd., and Golden Dragon Holding (Hong Kong) International, Ltd, Hong Kong...
Stone, William J.
1986-01-01
A zero-home locator includes a fixed phototransistor switch and a moveable actuator including two symmetrical, opposed wedges, each wedge defining a point at which switching occurs. The zero-home location is the average of the positions of the points defined by the wedges.
Precision linear ramp function generator
Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.
1984-08-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
Mass transfer equation for proteins in very high-pressure liquid chromatography.
Gritti, Fabrice; Guiochon, Georges
2009-04-01
The mass transfer kinetics of human insulin was investigated on a 50 mm x 2.1 mm column packed with 1.7 microm BEH-C(18) particles, eluted with a water/acetonitrile/trifluoroacetic acid (TFA) (68/32/0.1, v/v/v) solution. The different contributions to the mass transfer kinetics, e.g., those of longitudinal diffusion, eddy dispersion, the film mass transfer resistance, cross-particle diffusivity, adsorption-desorption kinetics, and transcolumn differential sorption, were incorporated into a general mass transfer equation designed to account for the mass transfer kinetics of proteins under high pressure. More specifically, this equation includes the effects of pore size exclusion, pressure, and temperature on the band broadening of a protein. The flow rate was first increased from 0.001 to 0.250 mL/min, the pressure drop increasing from 2 to 298 bar, and the column being placed in stagnant air at 296.5 K, in order to determine the effective diffusivity of insulin through the porous particles, the mass transfer rate constants, and the adsorption equilibrium constant in the low-pressure range. Then, the column inlet pressure was increased by using capillary flow restrictors downstream of the column, at the constant flow rate of 0.03 mL/min. The column temperature was kept uniform by immersing the column in a circulating water bath thermostatted at 298.7 and 323.15 K, successively. The results showed that the surface diffusion coefficient of insulin decreases faster than its bulk diffusion coefficient with increasing average column pressure. This is consistent with the adsorption energy of insulin onto the BEH-C(18) surface increasing strongly with increasing pressure. In contrast, given the precision of the height equivalent to a theoretical plate (HETP) measurement (±12%), the adsorption kinetics of insulin appears to be rather independent of the pressure.
On average, the adsorption rate constant of insulin is doubled from about 40 to 80 s(-1) when the temperature increases from 298.7 to 323.15 K.
1987-02-06
precisely the less favorable variant preventing extrication from foreign indebtedness. In addition to the trend toward stagnation of growth rate...another alarming economic indicator is precisely the decline in export capabilities on the sector most important to us, namely, in the sales of Polish...market economy, become a huge exporter with a tremendous competitive strength precisely on the markets on which we still have something to say. And
Testing averaged cosmology with type Ia supernovae and BAO data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, B.; Alcaniz, J.S.; Coley, A.A.
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Vongsak, Boonyadist; Sithisarn, Pongtip; Gritsanapan, Wandee
2013-01-01
Moringa oleifera Lamarck (Moringaceae) is used as a multipurpose medicinal plant for the treatment of various diseases. Isoquercetin, astragalin, and crypto-chlorogenic acid have previously been found to be major active components in the leaves of this plant. In this study, a thin-layer-chromatography (TLC-)densitometric method was developed and validated for simultaneous quantification of these major components in the 70% ethanolic extracts of M. oleifera leaves collected from 12 locations. The average amounts of crypto-chlorogenic acid, isoquercetin, and astragalin were found to be 0.0473, 0.0427, and 0.0534% dry weight, respectively. The method was validated for linearity, precision, accuracy, limit of detection, limit of quantitation, and robustness. Linearity was obtained in the range of 100-500 ng/spot with a correlation coefficient (r) over 0.9961. Intraday and interday precisions demonstrated relative standard deviations of less than 5%. The accuracy of the method was confirmed by determining the recovery. The average recoveries of each component from the extracts were in the range of 98.28 to 99.65%. Additionally, the leaves from Chiang Mai province contained the highest amounts of all active components. The proposed TLC-densitometric method was simple, accurate, precise, and cost-effective for routine quality control of M. oleifera leaf extracts.
Frequency downconversion and phase noise in MIT.
Watson, S; Williams, R J; Griffiths, H; Gough, W; Morris, A
2002-02-01
High-frequency (3-30 MHz) operation of MIT systems offers advantages in terms of the larger induced signal amplitudes compared to systems operating in the low- or medium-frequency ranges. Signal distribution at HF, however, presents difficulties, in particular with isolation and phase stability. It is therefore valuable to translate received signals to a lower frequency range through heterodyne downconversion, a process in which relative signal amplitude and phase information is in theory retained. Measurement of signal amplitude and phase is also simplified at lower frequencies. The paper presents details of measurements on a direct phase measurement system utilizing heterodyne downconversion and compares the relative performance of three circuit configurations. The 100-sample average precision of a circuit suitable for use as a receiver within an MIT system was 0.008 degrees for input amplitude -21 dBV. As the input amplitude was reduced from -21 to -72 dBV variation in the measured phase offset was observed, with the offset varying by 1.8 degrees. The precision of the circuit deteriorated with decreasing input amplitude, but was found to provide a 100-sample average precision of <0.022 degrees down to an input amplitude of -60 dBV. The characteristics of phase noise within the system are discussed.
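The claim that heterodyne downconversion retains relative phase information can be illustrated with a small numerical sketch (the frequencies and the phase-estimation method below are illustrative choices, not the authors' measurement system):

```python
import cmath
import math

fs = 100e6                # sample rate (Hz)
f_rf, f_lo = 10e6, 9.9e6  # HF input and local oscillator; IF = 100 kHz
f_if = f_rf - f_lo
n = 20000                 # an integer number of IF cycles (20 cycles here)
phi = 0.7                 # phase offset carried by the HF signal (rad)

t = [k / fs for k in range(n)]
rf = [math.cos(2 * math.pi * f_rf * tk + phi) for tk in t]
lo = [math.cos(2 * math.pi * f_lo * tk) for tk in t]
# Mixing produces sum (f_rf + f_lo) and difference (IF) frequency terms.
mixed = [a * b for a, b in zip(rf, lo)]

# Measure the phase at the IF by correlating with a complex IF reference; the
# sum-frequency term averages out over an integer number of cycles.
z = sum(m * cmath.exp(-2j * math.pi * f_if * tk) for m, tk in zip(mixed, t))
print(cmath.phase(z))     # recovers phi: phase survives downconversion
```

The recovered phase equals the original HF phase offset, which is why amplitude and phase measurement can be performed at the lower IF without loss of information.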
Technical aspects of real time positron emission tracking for gated radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamberland, Marc; Xu, Tong, E-mail: txu@physics.carleton.ca; McEwen, Malcolm R.
2016-02-15
Purpose: Respiratory motion can lead to treatment errors in the delivery of radiotherapy treatments. Respiratory gating can assist in better conforming the beam delivery to the target volume. We present a study of the technical aspects of a real time positron emission tracking system for potential use in gated radiotherapy. Methods: The tracking system, called PeTrack, uses implanted positron emission markers and position sensitive gamma ray detectors to track breathing motion in real time. PeTrack uses an expectation–maximization algorithm to track the motion of fiducial markers. A normalized least mean squares adaptive filter predicts the location of the markers a short time ahead to account for system response latency. The precision and data collection efficiency of a prototype PeTrack system were measured under conditions simulating gated radiotherapy. The lung insert of a thorax phantom was translated in the inferior–superior direction with regular sinusoidal motion and simulated patient breathing motion (maximum amplitude of motion ±10 mm, period 4 s). The system tracked the motion of a ²²Na fiducial marker (0.34 MBq) embedded in the lung insert every 0.2 s. The position of the marker was predicted 0.2 s ahead. For sinusoidal motion, the equation used to model the motion was fitted to the data. The precision of the tracking was estimated as the standard deviation of the residuals. Software was also developed to communicate with a Linac and toggle beam delivery. In a separate experiment involving a Linac, 500 monitor units of radiation were delivered to the phantom with a 3 × 3 cm photon beam and with 6 and 10 MV accelerating potential. Radiochromic films were inserted in the phantom to measure spatial dose distribution. In this experiment, the period of motion was set to 60 s to account for beam turn-on latency. The beam was turned off when the marker moved outside of a 5-mm gating window.
Results: The precision of the tracking in the IS direction was 0.53 mm for a sinusoidally moving target, with an average count rate of ∼250 cps. The average prediction error was 1.1 ± 0.6 mm when the marker moved according to irregular patient breathing motion. Across all beam deliveries during the radiochromic film measurements, the average prediction error was 0.8 ± 0.5 mm. The maximum error was 2.5 mm and the 95th percentile error was 1.5 mm. Clear improvement of the dose distribution was observed between gated and nongated deliveries. The full-width at half-maximum of the dose profiles of gated deliveries differed by 3 mm or less from that of the static reference dose distribution. Monitoring of the beam on/off times showed synchronization with the location of the marker within the latency of the system. Conclusions: PeTrack can track the motion of internal fiducial positron emission markers with submillimeter precision. The system can be used to gate the delivery of a Linac beam based on the position of a moving fiducial marker. This highlights the potential of the system for use in respiratory-gated radiotherapy.
42 CFR 447.255 - Related information.
Code of Federal Regulations, 2011 CFR
2011-10-01
... assurances described in § 447.253(a), the following information: (a) The amount of the estimated average... which that estimated average rate increased or decreased relative to the average payment rate in effect... and, to the extent feasible, long-term effect the change in the estimated average rate will have on...
Jamil, Muhammad; Ahmad, Omar; Poh, Kian Keong; Yap, Choon Hwai
2017-07-01
Current Doppler echocardiography quantification of mitral regurgitation (MR) severity has shortcomings. Proximal isovelocity surface area (PISA)-based methods, for example, are unable to account for the fact that ultrasound Doppler can measure only one velocity component: toward or away from the transducer. In the present study, we used ultrasound-based computational fluid dynamics (Ub-CFD) to quantify mitral regurgitation and study its advantages and disadvantages compared with 2-D and 3-D PISA methods. For Ub-CFD, patient-specific mitral valve geometry and velocity data were obtained from clinical ultrasound followed by 3-D CFD simulations at an assumed flow rate. We then obtained the average ratio of the ultrasound Doppler velocities to CFD velocities in the flow convergence region, and scaled CFD flow rate with this ratio as the final measured flow rate. We evaluated Ub-CFD, 2-D PISA and 3-D PISA with an in vitro flow loop, which featured regurgitation flow through (i) a simplified flat plate with round orifice and (ii) a 3-D printed realistic mitral valve and regurgitation orifice. The Ub-CFD and 3-D PISA methods had higher precision than the 2-D PISA method. Ub-CFD had consistent accuracy under all conditions tested, whereas 2-D PISA had the lowest overall accuracy. In vitro investigations indicated that the accuracy of 2-D and 3-D PISA depended significantly on the choice of aliasing velocity. Evaluation of these techniques was also performed for two clinical cases, and the dependency of PISA on aliasing velocity was similarly observed. Ub-CFD was robustly accurate and precise and has promise for future translation to clinical practice. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Leonard, Graham S.; Calvert, Andrew T.; Hopkins, Jenni L.; Wilson, Colin J. N.; Smid, Elaine R.; Lindsay, Jan M.; Champion, Duane E.
2017-09-01
The Auckland Volcanic Field (AVF), which last erupted ca. 550 years ago, is a late Quaternary monogenetic basaltic volcanic field (ca. 500 km2) in the northern North Island of New Zealand. Prior to this study only 12 out of the 53 identified eruptive centres of the AVF had been reliably dated. Careful sample preparation and 40Ar/39Ar analysis has increased the number of well-dated centres in the AVF to 35. The high precision of the results is attributed to selection of fresh, non-vesicular, non-glassy samples from lava flow interiors. Sample selection was coupled with separation techniques that targeted only the groundmass of samples with < 5% glass and with groundmass feldspars > 10 μm wide, coupled with ten-increment furnace step-heating of large quantities (up to 200 mg) of material. The overall AVF age data indicate an onset at 193.2 ± 2.8 ka, an apparent six-eruption flare-up from 30 to 34 ka, and a ≤ 10 kyr hiatus between the latest and second-to-latest eruptions. Such non-uniformity shows that averaging the number of eruptions over the life-span of the AVF to yield a mean eruption rate is overly simplistic. Together with large variations in eruption volumes, and the large sizes and unusual chemistry within the latest eruptions (Rangitoto 1 and Rangitoto 2), our results illuminate a complex episodic eruption history. In particular, the rate of volcanism in AVF has increased since 60 ka, suggesting that the field is still in its infancy. Multiple centres with unusual paleomagnetic inclination and declination orientations are confirmed to fit into a number of geomagnetic excursions, with five identified in the Mono Lake, two within the Laschamp, one within the post-Blake or Blake, and two possibly within the Hilina Pali.
Sharp, W.D.; Turrin, B.D.; Renne, P.R.; Lanphere, M.A.
1996-01-01
Mauna Kea lava flows cored in the Hilo hole range in age from <200 ka to about 400 ka based on 40Ar/39Ar incremental heating and K-Ar analyses of 16 groundmass samples and one coexisting plagioclase. The lavas, all subaerially deposited, include a lower section consisting only of tholeiitic basalts and an upper section of interbedded alkalic, transitional tholeiitic, and tholeiitic basalts. The lower section has yielded predominantly complex, discordant 40Ar/39Ar age spectra that result from mobility of 40Ar and perhaps K, the presence of excess 40Ar, and redistribution of 39Ar by recoil. Comparison of K-Ar ages with 40Ar/39Ar integrated ages indicates that some of these samples have also lost 39Ar. Nevertheless, two plateau ages of 391 ± 40 and 400 ± 26 ka from deep in the hole, combined with data from the upper section, show that the tholeiitic section accumulated at an average rate of about 7 to 8 m/kyr and has a mean recurrence interval of 0.5 kyr/flow unit. Samples from the upper section yield relatively precise 40Ar/39Ar plateau and isotope correlation ages of 326 ± 23, 241 ± 5, 232 ± 4, and 199 ± 9 ka for depths of -415.7 m to -299.2 m. Within their uncertainty, these ages define a linear relationship with depth, with an average accumulation rate of 0.9 m/kyr and an average recurrence interval of 4.8 kyr/flow unit. The top of the Mauna Kea sequence at -280 m must be older than the plateau age of 132 ± 32 ka, obtained for the basal Mauna Loa flow in the corehole. The upward decrease in lava accumulation rate is a consequence of the decreasing magma supply available to Mauna Kea as it rode the Pacific plate away from its magma source, the Hawaiian mantle plume. The age-depth relation in the core hole may be used to test and refine models that relate the growth of Mauna Kea to the thermal and compositional structure of the mantle plume.
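The quoted ~0.9 m/kyr average accumulation rate for the upper section follows directly from the endpoint age-depth pairs reported in the abstract; a quick check:

```python
# Endpoint age-depth pairs for the upper Mauna Kea section (depth in m, age in ka)
shallow = (-299.2, 199.0)
deep = (-415.7, 326.0)

# Accumulation rate: metres of lava deposited per kyr of elapsed time.
thickness = shallow[0] - deep[0]   # 116.5 m of section
elapsed = deep[1] - shallow[1]     # 127 kyr between the endpoint ages
rate = thickness / elapsed
print(round(rate, 1))              # ~0.9 m/kyr, matching the quoted value
```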
Marateb, Hamid Reza; Farahi, Morteza; Rojas, Monica; Mañanas, Miguel Angel; Farina, Dario
2016-01-01
Knowledge of the location of muscle Innervation Zones (IZs) is important in many applications, e.g. for minimizing the quantity of injected botulinum toxin for the treatment of spasticity or for deciding on the type of episiotomy during child delivery. Surface EMG (sEMG) can be noninvasively recorded to assess physiological and morphological characteristics of contracting muscles. However, it is not often possible to record signals of high quality. Moreover, muscles could have multiple IZs, which should all be identified. We designed a fully-automatic algorithm based on the enhanced image Graph-Cut segmentation and morphological image processing methods to identify up to five IZs in 60-ms intervals of very-low to moderate quality sEMG signal detected with multi-channel electrodes (20 bipolar channels with Inter Electrode Distance (IED) of 5 mm). An anisotropic multilayered cylinder model was used to simulate 750 sEMG signals with signal-to-noise ratio ranging from -5 to 15 dB (using Gaussian noise), and in each 60-ms signal frame, 1 to 5 IZs were included. The micro- and macro-averaged performance indices were then reported for the proposed IZ detection algorithm. In the micro-averaging procedure, the numbers of True Positives, False Positives and False Negatives in each frame were summed up to generate cumulative measures. In the macro-averaging, on the other hand, precision and recall were calculated for each frame and their averages were used to determine the F1-score. Overall, the micro (macro)-averaged sensitivity, precision and F1-score of the algorithm for IZ channel identification were 82.7% (87.5%), 92.9% (94.0%) and 87.5% (90.6%), respectively. For the correctly identified IZ locations, the average bias error was 0.02±0.10 IED ratio. Also, the average absolute conduction velocity estimation error was 0.41±0.40 m/s for such frames. A sensitivity analysis, including increasing the IED and reducing the interpolation coefficient for time samples, was performed.
Meanwhile, the effect of adding power-line interference and using other image interpolation methods on the deterioration of the performance of the proposed algorithm was investigated. The average running time of the proposed algorithm on each 60-ms sEMG frame was 25.5±8.9 (s) on an Intel dual-core 1.83 GHz CPU with 2 GB of RAM. The proposed algorithm correctly and precisely identified multiple IZs in each signal epoch in a wide range of signal quality and is thus a promising new offline tool for electrophysiological studies.
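The micro- versus macro-averaging procedures described in the abstract above can be sketched as follows (the per-frame TP/FP/FN counts are made-up illustrative values, not the study's data):

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from raw counts (0.0 on empty denominators)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# (TP, FP, FN) counts per 60-ms analysis frame -- illustrative values only
frames = [(4, 1, 0), (2, 0, 2), (1, 1, 1)]

# Micro-averaging: pool the counts across frames, then compute the metrics.
TP = sum(f[0] for f in frames)
FP = sum(f[1] for f in frames)
FN = sum(f[2] for f in frames)
micro_p, micro_r, micro_f1 = prf(TP, FP, FN)

# Macro-averaging: compute precision and recall per frame, average them,
# and derive F1 from the averaged precision and recall.
per_frame = [prf(*f) for f in frames]
macro_p = sum(m[0] for m in per_frame) / len(frames)
macro_r = sum(m[1] for m in per_frame) / len(frames)
macro_f1 = 2 * macro_p * macro_r / (macro_p + macro_r)

print(micro_p, micro_r, micro_f1)
print(macro_p, macro_r, macro_f1)
```

Micro-averaging weights every detection equally across the dataset, while macro-averaging weights every frame equally, which is why the two sets of figures in the abstract differ.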
Farahi, Morteza; Rojas, Monica; Mañanas, Miguel Angel; Farina, Dario
2016-01-01
Knowledge of the location of muscle Innervation Zones (IZs) is important in many applications, e.g. for minimizing the quantity of injected botulinum toxin for the treatment of spasticity or for deciding on the type of episiotomy during child delivery. Surface EMG (sEMG) can be noninvasively recorded to assess physiological and morphological characteristics of contracting muscles. However, it is not often possible to record signals of high quality. Moreover, muscles could have multiple IZs, which should all be identified. We designed a fully-automatic algorithm based on the enhanced image Graph-Cut segmentation and morphological image processing methods to identify up to five IZs in 60-ms intervals of very-low to moderate quality sEMG signal detected with multi-channel electrodes (20 bipolar channels with Inter Electrode Distance (IED) of 5 mm). An anisotropic multilayered cylinder model was used to simulate 750 sEMG signals with signal-to-noise ratio ranging from -5 to 15 dB (using Gaussian noise) and in each 60-ms signal frame, 1 to 5 IZs were included. The micro- and macro- averaged performance indices were then reported for the proposed IZ detection algorithm. In the micro-averaging procedure, the number of True Positives, False Positives and False Negatives in each frame were summed up to generate cumulative measures. In the macro-averaging, on the other hand, precision and recall were calculated for each frame and their averages are used to determine F1-score. Overall, the micro (macro)-averaged sensitivity, precision and F1-score of the algorithm for IZ channel identification were 82.7% (87.5%), 92.9% (94.0%) and 87.5% (90.6%), respectively. For the correctly identified IZ locations, the average bias error was of 0.02±0.10 IED ratio. Also, the average absolute conduction velocity estimation error was 0.41±0.40 m/s for such frames. The sensitivity analysis including increasing IED and reducing interpolation coefficient for time samples was performed. 
The effect of adding power-line interference, and of using other image interpolation methods, on the performance of the proposed algorithm was also investigated. The average running time of the proposed algorithm on each 60-ms sEMG frame was 25.5±8.9 s on an Intel dual-core 1.83 GHz CPU with 2 GB of RAM. The proposed algorithm correctly and precisely identified multiple IZs in each signal epoch over a wide range of signal quality and is thus a promising new offline tool for electrophysiological studies. PMID:27978535
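The micro- vs macro-averaging distinction described above can be sketched as follows; the per-frame (TP, FP, FN) counts here are made-up toy values, not the study's results:

```python
# Micro- vs macro-averaged precision, recall, and F1-score over signal frames.
# The per-frame (TP, FP, FN) counts below are illustrative only.

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r) if (p + r) > 0 else 0.0

frames = [(4, 1, 0), (2, 0, 2), (3, 1, 1)]  # (TP, FP, FN) per 60-ms frame

# Micro-averaging: sum the counts over all frames, then compute metrics once.
tp = sum(f[0] for f in frames)
fp = sum(f[1] for f in frames)
fn = sum(f[2] for f in frames)
micro_p = tp / (tp + fp)
micro_r = tp / (tp + fn)
micro_f1 = f1(micro_p, micro_r)

# Macro-averaging: compute precision and recall per frame, then average them.
per_p = [t / (t + f) for t, f, _ in frames]
per_r = [t / (t + n) for t, _, n in frames]
macro_p = sum(per_p) / len(per_p)
macro_r = sum(per_r) / len(per_r)
macro_f1 = f1(macro_p, macro_r)
```

Micro-averaging weights every detection equally across frames, so frames with many IZs dominate; macro-averaging weights every frame equally, which is why the two scores in the abstract differ.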
Tang, Ze; Park, Ju H; Feng, Jianwen
2018-04-01
This paper is concerned with the exponential synchronization of nonidentically coupled neural networks with time-varying delay. Because of parameter mismatches in the neural networks, the problem of quasi-synchronization is discussed by applying impulsive control strategies. Based on the definition of the average impulsive interval and an extended comparison principle for impulsive systems, criteria for achieving quasi-synchronization of the neural networks are derived. Wider ranges of impulsive effects are discussed, so that impulses may play either a beneficial or an adverse role in the final network synchronization. In addition, according to the extended formula for the variation of parameters with time-varying delay, precise exponential convergence rates and quasi-synchronization errors are obtained for the different types of impulsive effects. Finally, numerical simulations with different types of impulsive effects are presented to illustrate the effectiveness of the theoretical analysis.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk
2016-07-18
Super dense wireless sensor networks (WSNs) have become popular with the development of the Internet of Things (IoT), Machine-to-Machine (M2M) communications, and Vehicle-to-Vehicle (V2V) networks. While highly dense wireless networks provide efficient and sustainable solutions to collect precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utilization is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2% in large-scale WSNs and that, in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.
NASA Astrophysics Data System (ADS)
Borisov, V. M.; Vinokhodov, A. Yu; Ivanov, A. S.; Kiryukhin, Yu B.; Mishchenko, V. A.; Prokof'ev, A. V.; Khristoforov, O. B.
2009-10-01
The development of high-power discharge sources emitting in the 13.5±0.135-nm spectral band is of current interest because they are promising for applications in industrial EUV (extreme ultraviolet) lithography for manufacturing integrated circuits according to technological precision standards of 22 nm and smaller. The parameters of EUV sources based on a laser-induced discharge in tin vapours between rotating disc electrodes are investigated. The properties of the discharge initiation by laser radiation at different wavelengths are established and the laser pulse parameters providing the maximum energy characteristics of the EUV source are determined. The EUV source developed in the study emits an average power of 276 W in the 13.5±0.135-nm spectral band on conversion to the solid angle 2π sr in the stationary regime at a pulse repetition rate of 3000 Hz.
Bloomgarden, Z T; Inzucchi, S E; Karnieli, E; Le Roith, D
2008-07-01
The proposed use of a more precise standard for glycated (HbA1c) and non-glycated haemoglobin would lead to an A1c value, when expressed as a percentage, that is lower than that currently in use. One approach advocated to address the potential confusion that would ensue is to replace 'HbA1c' with a new term, 'A1c-derived average glucose'. We review evidence from several sources suggesting that A1c is, in fact, inherently imprecise as a measure of average glucose, so that the proposed terminology should not be adopted.
Double resonance calibration of g factor standards: Carbon fibers as a high precision standard.
Herb, Konstantin; Tschaggelar, Rene; Denninger, Gert; Jeschke, Gunnar
2018-04-01
The g factor of paramagnetic defects in commercial high-performance carbon fibers was determined by a double resonance experiment based on the Overhauser shift due to hyperfine-coupled protons. Our carbon fibers exhibit a single, narrow, and perfectly Lorentzian-shaped ESR line and a g factor slightly higher than g_free: g = 2.002644 = g_free·(1 + 162 ppm), with a relative uncertainty of 15 ppm. This precisely known g factor, together with their chemical inertness, qualifies them as a high-precision g factor standard for general purposes. The double resonance calibration experiment is applicable to other potential standards with a hyperfine interaction averaged by a process with a very short correlation time.
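The relation g = g_free·(1 + 162 ppm) is a simple relative shift. A minimal sketch of the conversion, assuming the CODATA free-electron g factor (a constant not quoted in the abstract):

```python
# Relating a g factor to its relative (ppm) shift from the free-electron value,
# as in g = g_free * (1 + 162 ppm). G_FREE is the CODATA free-electron g factor
# (an assumption of this sketch, not a number taken from the abstract).
G_FREE = 2.00231930436

def g_from_ppm(shift_ppm):
    """g factor corresponding to a given ppm shift above g_free."""
    return G_FREE * (1.0 + shift_ppm * 1e-6)

def ppm_from_g(g):
    """ppm shift of a measured g factor relative to g_free."""
    return (g / G_FREE - 1.0) * 1e6

g_fiber = g_from_ppm(162)  # close to the 2.002644 quoted above
```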
Bone mineral density of the femoral neck in resurfacing hip arthroplasty
Ovesen, Ole; Brixen, Kim; Varmarken, Jens-Erik; Overgaard, Søren
2010-01-01
Background and purpose Resurfacing total hip arthroplasty (RTHA) may preserve the femoral neck bone stock postoperatively. Bone mineral density (BMD) may be affected by the hip position, which might bias longitudinal studies. We investigated the dependency of BMD precision on type of ROI and hip position. Method We DXA-scanned the femoral neck of 15 resurfacing patients twice with the hip in 3 different rotations: 15° internal, neutral, and 15° external. For each position, BMD was analyzed with 3 surface area models. One model measured BMD in the total femoral neck, the second model divided the neck in two, and the third model had 6 divisions. Results When all hip positions were pooled, average coefficients of variation (CVs) of 3.1%, 3.6%, and 4.6% were found in the 1-, 2-, and 6-region models, respectively. The externally rotated hip position was less reproducible. When rotating in increments of 15° or 30°, the average CVs rose to 7.2%, 7.3%, and 12% in the 3 models. Rotation affected the precision most in the model that divided the neck in 6 subregions, predominantly in the lateral and distal regions. For larger-region models, some rotation could be allowed without compromising the precision. Interpretation If hip rotation is strictly controlled, DXA can reliably provide detailed topographical information about the BMD changes around an RTHA. As rotation strongly affects the precision of the BMD measurements in small regions, we suggest that a less detailed model should be used for analysis in studies where the leg position has not been firmly controlled. PMID:20367420
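The coefficient of variation used above as the precision index can be sketched as follows; the BMD values are hypothetical repeat scans, not the study's measurements:

```python
# Coefficient of variation (CV, %) as the precision index for repeated scans.
# BMD values (g/cm^2) are hypothetical: two repeat scans per patient.
import statistics

def cv_percent(values):
    """CV = sample standard deviation / mean, expressed in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

patients = [[0.82, 0.84], [0.75, 0.78], [0.91, 0.90]]
cvs = [cv_percent(scans) for scans in patients]
avg_cv = sum(cvs) / len(cvs)  # the "average CV" reported per ROI model
```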
Adhikari, Badri; Hou, Jie; Cheng, Jianlin
2018-03-01
In this study, we report the evaluation of the residue-residue contacts predicted by our three different methods in the CASP12 experiment, focusing on the impact of multiple sequence alignment, residue coevolution, and machine learning on contact prediction. The first method (MULTICOM-NOVEL) uses only traditional features (sequence profile, secondary structure, and solvent accessibility) with deep learning to predict contacts and serves as a baseline. The second method (MULTICOM-CONSTRUCT) uses our new alignment algorithm to generate deep multiple sequence alignments to derive coevolution-based features, which are integrated by a neural network to predict contacts. The third method (MULTICOM-CLUSTER) is a consensus combination of the predictions of the first two methods. We evaluated our methods on 94 CASP12 domains. On a subset of 38 free-modeling domains, our methods achieved an average precision of up to 41.7% for top L/5 long-range contact predictions. The comparison of the three methods shows that the quality and effective depth of the multiple sequence alignments, the coevolution-based features, and the machine learning integration of coevolution-based and traditional features drive the quality of predicted protein contacts. On the full CASP12 dataset, the coevolution-based features alone improve the average precision from 28.4% to 41.6%, and the machine learning integration of all the features further raises the precision to 56.3%, when the top L/5 predicted long-range contacts are evaluated. The correlation between the precision of contact prediction and the logarithm of the number of effective sequences in the alignments is 0.66.
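Top-L/5 long-range precision, the metric quoted above, can be sketched as follows; the predictions, true contacts, and domain length are toy values, not CASP12 data:

```python
# Top-L/5 long-range contact precision: rank predicted residue pairs by score,
# keep long-range pairs (sequence separation >= 24 residues), take the top L/5,
# and score them against the true contact set. Toy data, not CASP12 output.

def top_l5_precision(preds, true_contacts, L, min_sep=24):
    """preds: iterable of (i, j, score); true_contacts: set of (i, j) pairs."""
    long_range = [p for p in preds if abs(p[0] - p[1]) >= min_sep]
    long_range.sort(key=lambda p: p[2], reverse=True)
    top = long_range[: max(L // 5, 1)]
    hits = sum(1 for i, j, _ in top if (i, j) in true_contacts)
    return hits / len(top)

L = 10  # toy domain length, so the top L/5 = 2 pairs are evaluated
preds = [(1, 30, 0.9), (2, 40, 0.8), (3, 50, 0.7), (4, 10, 0.99)]
true_contacts = {(1, 30), (3, 50)}
prec = top_l5_precision(preds, true_contacts, L)
```

Note that the highest-scoring pair (4, 10) is excluded before ranking because its sequence separation is below the long-range cutoff.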
Zapp, Jascha; Domsch, Sebastian; Weingärtner, Sebastian; Schad, Lothar R
2017-05-01
To characterize the reversible transverse relaxation in pulmonary tissue and to study the benefit of a quadratic-exponential (Gaussian) model over the commonly used linear-exponential model for increased quantification precision. A point-resolved spectroscopy sequence was used for comprehensive sampling of the relaxation around spin echoes. Measurements were performed in an ex vivo tissue sample and in healthy volunteers at 1.5 Tesla (T) and 3 T. The goodness of fit, using the reduced chi-square χ²_red, and the precision of the fitted relaxation time, by means of its confidence interval, were compared between the two relaxation models. The Gaussian model provides enhanced descriptions of pulmonary relaxation, with lower χ²_red by average factors of 4 ex vivo and 3 in volunteers. The Gaussian model indicates higher sensitivity to tissue structure alteration, with the precision of reversible transverse relaxation time measurements likewise increased by average factors of 4 ex vivo and 3 in volunteers. The mean relaxation times of the Gaussian model in volunteers are T2,G' = (1.97 ± 0.27) msec at 1.5 T and T2,G' = (0.83 ± 0.21) msec at 3 T. Pulmonary signal relaxation was found to be accurately modeled as Gaussian, providing a potential biomarker T2,G' with high sensitivity. Magn Reson Med 77:1938-1945, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
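A Gaussian (quadratic-exponential) decay of the form S(t) = S0·exp(-(t/T)²) can be fitted by linear regression of ln S against t². A minimal sketch on synthetic, noise-free data with an assumed relaxation time; this illustrates the model form only, not the paper's fitting procedure:

```python
# Fitting a Gaussian (quadratic-exponential) decay S(t) = S0 * exp(-(t/T)^2)
# via linear regression of ln(S) on t^2. Synthetic noise-free samples with an
# assumed T = 2.0 ms; an illustration of the model form, not the paper's fit.
import math

T_TRUE = 2.0  # ms (assumed)
ts = [0.2 * k for k in range(1, 11)]            # sample times, ms
S = [math.exp(-(t / T_TRUE) ** 2) for t in ts]  # unit-amplitude signal

# ln(S) = -(1/T^2) * t^2, so the slope of ln(S) vs t^2 recovers T.
xs = [t * t for t in ts]
ys = [math.log(s) for s in S]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
T_fit = math.sqrt(-1.0 / slope)  # equals T_TRUE on clean data
```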
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. 
These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation.
The Daya Bay antineutrino detector filling system and liquid mass measurement
NASA Astrophysics Data System (ADS)
Band, H. R.; Cherwinka, J. J.; Draeger, E.; Heeger, K. M.; Hinrichs, P.; Lewis, C. A.; Mattison, H.; McFarlane, M. C.; Webber, D. M.; Wenman, D.; Wang, W.; Wise, T.; Xiao, Q.
2013-09-01
The Daya Bay Reactor Neutrino Experiment has measured the neutrino mixing angle θ13 to world-leading precision. The experiment uses eight antineutrino detectors filled with 20 tons of gadolinium-doped liquid scintillator to detect antineutrinos emitted from the Daya Bay nuclear power plant through the inverse beta decay reaction. The precision measurement of sin²2θ13 relies on the relative antineutrino interaction rates between detectors at near (400 m) and far (roughly 1.8 km) distances from the nuclear reactors. The measured interaction rate in each detector is directly proportional to the number of protons in the liquid scintillator target. A precision detector filling system was developed to simultaneously fill the three liquid zones of the antineutrino detectors and measure the relative target mass between detectors to < 0.02%. This paper describes the design, operation, and performance of the system and the resulting precision measurement of the detectors' target liquid masses.
Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam
2015-07-01
Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems, including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates, while acoustic telemetry data for the same period gave at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets, but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset that was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased the precision of the estimates. We conclude that acoustic telemetry is a useful tool to incorporate in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.
NASA Astrophysics Data System (ADS)
Zhao, Jun; Quan, Guo-Zheng; Pan, Jia; Wang, Xuan; Wu, Dong-Sen; Xia, Yu-Feng
2018-01-01
A constitutive model is one of the essential mathematical models in finite element analysis; it describes the relationship of flow behavior to strain, strain rate, and temperature. In order to construct such constitutive relationships for ultra-high-strength BR1500HS steel in the medium and low temperature regions, true stress-strain data over a wide temperature range of 293-873 K and strain rate range of 0.01-10 s-1 were collected from a series of isothermal uniaxial tensile tests. The experimental results show that the stress-strain relationships are highly non-linear and sensitive to temperature, strain, and strain rate. By considering the effects of strain rate and temperature on strain hardening, a modified constitutive model based on the Johnson-Cook model was proposed to characterize flow behavior in the medium and low temperature ranges. The predictive ability of the improved model was evaluated by the relative error (W(%)), the correlation coefficient (R), and the average absolute relative error (AARE). The R-value and AARE-value for the modified constitutive model in the medium and low temperature regions are 0.9915 and 1.56%, and 0.9570 and 5.39%, respectively, which indicates that the modified constitutive model can precisely estimate the flow behavior of BR1500HS steel in the medium and low temperature regions.
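The AARE and correlation coefficient R used above as predictability measures can be computed as follows; the measured and predicted stresses are toy numbers, not BR1500HS data:

```python
# Average absolute relative error (AARE, %) and Pearson correlation (R) between
# measured and predicted flow stresses. Toy numbers, not BR1500HS data.
import math

measured = [100.0, 150.0, 200.0, 250.0]
predicted = [102.0, 147.0, 204.0, 248.0]

# AARE = (100/N) * sum(|(measured - predicted) / measured|)
aare = 100.0 / len(measured) * sum(
    abs((m - p) / m) for m, p in zip(measured, predicted))

# Pearson correlation coefficient R
mean_m = sum(measured) / len(measured)
mean_p = sum(predicted) / len(predicted)
num = sum((m - mean_m) * (p - mean_p) for m, p in zip(measured, predicted))
den = math.sqrt(sum((m - mean_m) ** 2 for m in measured)
                * sum((p - mean_p) ** 2 for p in predicted))
R = num / den
```

A high R alone can hide systematic bias, which is why AARE is reported alongside it.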
Underwater Wireless Sensor Communications in the 2.4 GHz ISM Frequency Band
Lloret, Jaime; Sendra, Sandra; Ardid, Miguel; Rodrigues, Joel J. P. C.
2012-01-01
One of the main problems in underwater communications is the low data rate available due to the use of low frequencies. Moreover, there are many problems inherent to the medium, such as reflections, refraction, and energy dispersion, that greatly degrade communication between devices. In some cases, wireless sensors must be placed quite close to each other in order to take more accurate measurements from the water while keeping high communication bandwidth. In these cases, while most researchers focus their efforts on increasing the data rate at low frequencies, we propose the use of the 2.4 GHz ISM frequency band. In this paper, we show our wireless sensor node deployment and its performance obtained from a real scenario, with measurements taken for different frequencies, modulations, and data transfer rates. The performed tests show the maximum distance between sensors, the number of lost packets, and the average round trip time. Based on our measurements, we provide experimental models of underwater communication in fresh water using EM waves in the 2.4 GHz ISM frequency band. Finally, we compare our proposed communication system with existing systems. Although our proposal provides short communication distances, it provides high data transfer rates. It can be used for precision monitoring in applications such as contaminated ecosystems or for communication between devices at depth. PMID:22666029
NASA Astrophysics Data System (ADS)
Cartwright, Ian; Cendón, Dioni; Currell, Matthew; Meredith, Karina
2017-12-01
Documenting the location and magnitude of groundwater recharge is critical for understanding groundwater flow systems. Radioactive tracers, notably 14C, 3H, 36Cl, and the noble gases, together with other tracers whose concentrations vary over time, such as the chlorofluorocarbons or sulfur hexafluoride, are commonly used to estimate recharge rates. This review discusses some of the advantages and problems of using these tracers to estimate recharge rates. The suite of tracers allows recharge to be estimated over timescales ranging from a few years to several hundred thousand years, which allows both the long-term and modern behaviour of groundwater systems to be documented. All tracers record mean residence times and mean recharge rates rather than a specific age and date of recharge. The timescale over which recharge rates are averaged increases with the mean residence time. This is an advantage in providing representative recharge rates but presents a problem in comparing recharge rates derived from these tracers with those from other techniques, such as water table fluctuations or lysimeters. In addition to issues relating to the sampling and interpretation of specific tracers, macroscopic dispersion and mixing in groundwater flow systems limit how precisely groundwater residence times and recharge rates may be estimated. Additionally, many recharge studies have utilised existing infrastructure that may not be ideal for this purpose (e.g., wells with long screens that sample groundwater several kilometres from the recharge area). Ideal recharge studies would collect sufficient information to optimise the use of specific tracers and minimise the problems of mixing and dispersion.
NASA Technical Reports Server (NTRS)
Mao, Dandan; McGarry, Jan F.; Mazarico, Erwan; Neumann, Gregory A.; Sun, Xiaoli; Torrence, Mark H.; Zagwodzki, Thomas W.; Rowlands, David D.; Hoffman, Evan D.; Horvath, Julie E.;
2016-01-01
We describe the results of the Laser Ranging (LR) experiment carried out from June 2009 to September 2014 in order to make one-way time-of-flight measurements of laser pulses between Earth-based laser ranging stations and the Lunar Reconnaissance Orbiter (LRO) orbiting the Moon. Over 4,000 hours of successful LR data are obtained from 10 international ground stations. The 20-30 centimeter precision of the full-rate LR data is further improved to 5-10 centimeter after conversion into normal points. The main purpose of LR is to utilize the high accuracy normal point data to improve the quality of the LRO orbits, which are nominally determined by the radiometric S-band tracking data. When independently used in the LRO precision orbit determination process with the high-resolution GRAIL (Gravity Recovery and Interior Laboratory) gravity model, LR data provide good orbit solutions, with an average difference of approximately 50 meters in total position, and approximately 20 centimeters in radial direction, compared to the definitive LRO trajectory. When used in combination with the S-band tracking data, LR data help to improve the orbit accuracy in the radial direction to approximately 15 centimeters. In order to obtain highly accurate LR range measurements for precise orbit determination results, it is critical to closely model the behavior of the clocks both at the ground stations and on the spacecraft. LR provides a unique data set to calibrate the spacecraft clock. The LRO spacecraft clock is characterized by the LR data to a timing knowledge of 0.015 milliseconds over the entire 5 years of LR operation. We here present both the engineering setup of the LR experiments and the detailed analysis results of the LR data.
Zheng, Y.
2013-01-01
Temporal sound cues are essential for sound recognition, pitch, rhythm, and timbre perception, yet how auditory neurons encode such cues is the subject of ongoing debate. Rate coding theories propose that temporal sound features are represented by rate-tuned modulation filters. However, overwhelming evidence also suggests that precise spike timing is an essential attribute of the neural code. Here we demonstrate that single neurons in the auditory midbrain employ a proportional code in which spike-timing precision and firing reliability covary with the sound envelope cues to provide an efficient representation of the stimulus. Spike-timing precision varied systematically with the timescale and shape of the sound envelope and yet was largely independent of the sound modulation frequency, a prominent cue for pitch. In contrast, spike-count reliability was strongly affected by the modulation frequency. Spike-timing precision extends from sub-millisecond for brief transient sounds up to tens of milliseconds for sounds with slowly varying envelopes. Information theoretic analysis further confirms that spike-timing precision depends strongly on the sound envelope shape, while firing reliability is strongly affected by the sound modulation frequency. Both the information efficiency and the total information were limited by the firing reliability and spike-timing precision in a manner that reflected the sound structure. This result supports a temporal coding strategy in the auditory midbrain where proportional changes in spike-timing precision and firing reliability can efficiently signal shape and periodicity temporal cues. PMID:23636724
Anterior capsulotomy with an ultrashort-pulse laser.
Tackman, Ramon Naranjo; Kuri, Jorge Villar; Nichamin, Louis D Skip; Edwards, Keith
2011-05-01
To assess the precision of laser anterior capsulotomy compared with that of manual continuous curvilinear capsulorhexis (CCC). Asociación Para Evitar La Ceguera en México IAP, Hospital Dr. Luis Sánchez Bulnes, Mexico City, Mexico. Nonrandomized single-center clinical trial. In patients presenting for cataract surgery, the LensAR Laser System was used to create a laser anterior capsulotomy of the surgeon's desired size. Capsule buttons were retrieved and measured and then compared with buttons retrieved from eyes having a manually torn CCC. Deviation from the intended diameter and the regularity of shape were assessed. When removing the capsule buttons at the start of surgery, the surgeon rated the ease of removal on a scale of 1 to 10 (1 = required manual capsulorhexis around the whole diameter; 10 = button free floating or required no manual detachment from remaining capsule during removal). The mean deviation from the intended diameter was 0.16 mm ± 0.17 (SD) for laser anterior capsulotomy and 0.42 ± 0.54 mm for CCC (P=.03). The mean absolute deviation from the intended diameter was 0.20 ± 0.12 mm and 0.49 ± 0.47 mm, respectively (P=.003). The mean of the average squared residuals was 0.01 ± 0.03 and 0.02 ± 0.04, respectively (P=.09). The median rating of the ease of removal was 9 (range 5 to 10). Laser anterior capsulotomy created a more precise capsule opening than CCC, and the buttons created by the laser procedure were easy to remove at the beginning of cataract surgery.
Precision measurement of the mass and lifetime of the Ξ(b)(0) baryon.
Aaij, R; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreassen, R; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Balagura, V; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Belogurov, S; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Bird, T; Bizzeti, A; Bjørnstad, P M; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borgia, A; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Brambach, T; van den Brand, J; Bressieux, J; Brett, D; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Brown, H; Bursche, A; Busetto, G; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carranza-Mejia, H; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cenci, R; Charles, M; Charpentier, Ph; Chen, S; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Ciba, K; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Counts, I; Couturier, B; Cowan, G A; Craik, D C; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dalseno, J; David, P; David, P N Y; Davis, A; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Silva, W; De Simone, P; Decamp, D; Deckenhoff, M; Del Buono, L; Déléage, N; Derkach, D; Deschamps, O; Dettori, F; Di Canto, 
A; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dreimanis, K; Dujany, G; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H-M; Evans, T; Falabella, A; Färber, C; Farinelli, C; Farley, N; Farry, S; Ferguson, D; Fernandez Albor, V; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fontana, M; Fontanelli, F; Forty, R; Francisco, O; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; Garofoli, J; Garra Tico, J; Garrido, L; Gaspar, C; Gauld, R; Gavardi, L; Gavrilov, G; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianelle, A; Giani', S; Gibson, V; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gordon, H; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Hampson, T; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hartmann, T; He, J; Head, T; Heijne, V; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Hunt, P; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jaton, P; Jawahery, A; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kaballo, M; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, S; Kelsey, M; Kenyon, I R; Ketel, T; Khanji, B; Khurewathanakul, C; Klaver, S; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Korolev, M; Kozlinskiy, A; 
Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kurek, K; Kvaratskheliya, T; La Thi, V N; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lambert, R W; Lanciotti, E; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Leo, S; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, G; Lohn, S; Longstaff, I; Lopes, J H; Lopez-March, N; Lowdon, P; Lu, H; Lucchesi, D; Luo, H; Lupato, A; Luppi, E; Lupton, O; Machefert, F; Machikhiliyan, I V; Maciuc, F; Maev, O; Malde, S; Manca, G; Mancinelli, G; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Märki, R; Marks, J; Martellotti, G; Martens, A; Martín Sánchez, A; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathe, Z; Matteuzzi, C; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; McSkelly, B; Meadows, B; Meier, F; Meissner, M; Merk, M; Milanes, D A; Minard, M-N; Moggi, N; Molina Rodriguez, J; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A-B; Mountain, R; Muheim, F; Müller, K; Muresan, R; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Nicol, M; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Oggero, S; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, G; Orlandea, M; Otalora Goicochea, J M; Owen, P; Oyanguren, A; Pal, B K; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Parkes, C; Parkinson, C J; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pazos Alvarez, A; Pearce, A; Pellegrino, A; Pepe Altarelli, M; Perazzini, S; Perez Trigo, E; Perret, P; Perrin-Terrin, M; Pescatore, L; Pesen, E; Petridis, K; 
Petrolini, A; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Playfer, S; Plo Casasus, M; Polci, F; Poluektov, A; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Rachwal, B; Rademacker, J H; Rakotomiaramanana, B; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Roa Romero, D A; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; Rotondo, M; Rouvinet, J; Ruf, T; Ruffini, F; Ruiz, H; Ruiz Valls, P; Sabatino, G; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santovetti, E; Sapunov, M; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrie, M; Savrina, D; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmidt, B; Schneider, O; Schopper, A; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Seco, M; Semennikov, A; Sepp, I; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Silva Coutinho, R; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, N A; Smith, E; Smith, E; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Sparkes, A; Spradlin, P; Stagni, F; Stahl, M; Stahl, S; Steinkamp, O; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Stroili, R; Subbiah, V K; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szilard, D; Szumlak, T; T'Jampens, S; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; 
Tourneur, S; Tran, M T; Tresch, M; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ubeda Garcia, M; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; Voss, H; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Websdale, D; Whitehead, M; Wicht, J; Wiedner, D; Wilkinson, G; Williams, M P; Williams, M; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wu, S; Wyllie, K; Xie, Y; Xing, Z; Xu, Z; Yang, Z; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, W C; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zvyagin, A
2014-07-18
Using a proton-proton collision data sample corresponding to an integrated luminosity of 3 fb^-1 collected by LHCb at center-of-mass energies of 7 and 8 TeV, about 3800 Ξ_b^0 → Ξ_c^+ π^-, Ξ_c^+ → pK^- π^+ signal decays are reconstructed. From this sample, the first measurement of the Ξ_b^0 baryon lifetime is made, relative to that of the Λ_b^0 baryon. The mass differences M(Ξ_b^0) - M(Λ_b^0) and M(Ξ_c^+) - M(Λ_c^+) are also measured with precision more than 4 times better than the current world averages. The resulting values are τ(Ξ_b^0)/τ(Λ_b^0) = 1.006 ± 0.018 ± 0.010, M(Ξ_b^0) - M(Λ_b^0) = 172.44 ± 0.39 ± 0.17 MeV/c^2, and M(Ξ_c^+) - M(Λ_c^+) = 181.51 ± 0.14 ± 0.10 MeV/c^2, where the first uncertainty is statistical and the second is systematic. The relative rate of Ξ_b^0 to Λ_b^0 baryon production is measured to be [f(Ξ_b^0)/f(Λ_b^0)] × [B(Ξ_b^0 → Ξ_c^+ π^-)/B(Λ_b^0 → Λ_c^+ π^-)] × [B(Ξ_c^+ → pK^- π^+)/B(Λ_c^+ → pK^- π^+)] = (1.88 ± 0.04 ± 0.03) × 10^-2, where the first factor is the ratio of fragmentation fractions, b → Ξ_b^0 relative to b → Λ_b^0. Relative production rates as functions of transverse momentum and pseudorapidity are also presented.
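The quoted results carry separate statistical and systematic uncertainties. When a single combined uncertainty is wanted and the two sources can be treated as independent (an assumption the abstract does not state), they are conventionally added in quadrature; a minimal sketch with the function name ours:

```python
import math

def total_uncertainty(stat, syst):
    """Combine independent statistical and systematic uncertainties in quadrature."""
    return math.sqrt(stat**2 + syst**2)

# Lifetime ratio tau(Xi_b0)/tau(Lambda_b0) = 1.006 +/- 0.018 (stat) +/- 0.010 (syst)
sigma = total_uncertainty(0.018, 0.010)
# Mass difference M(Xi_b0) - M(Lambda_b0) = 172.44 +/- 0.39 (stat) +/- 0.17 (syst) MeV/c^2
sigma_mass = total_uncertainty(0.39, 0.17)
```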
Code of Federal Regulations, 2012 CFR
2012-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Code of Federal Regulations, 2013 CFR
2013-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Code of Federal Regulations, 2011 CFR
2011-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Code of Federal Regulations, 2014 CFR
2014-07-01
... are defined as follows: (1) Eligible engines rated at or above 19 kW, other than marine diesel engines, constitute an averaging set. (2) Eligible engines rated under 19 kW, other than marine diesel engines, constitute an averaging set. (3) Marine diesel engines rated at or above 19 kW constitute an averaging set...
Hydrodynamic Trails Produced by Daphnia: Size and Energetics
Wickramarathna, Lalith N.; Noss, Christian; Lorke, Andreas
2014-01-01
This study focuses on quantifying hydrodynamic trails produced by freely swimming zooplankton. We combined volumetric tracking of swimming trajectories with planar observations of the flow field induced by Daphnia of different size and swimming in different patterns. Spatial extension of the planar flow field along the trajectories was used to interrogate the dimensions (length and volume) and energetics (dissipation rate of kinetic energy and total dissipated power) of the trails. Our findings demonstrate that neither swimming pattern nor size of the organisms affect the trail width or the dissipation rate. However, we found that the trail volume increases with increasing organism size and swimming velocity, more precisely the trail volume is proportional to the third power of Reynolds number. This increase furthermore results in significantly enhanced total dissipated power at higher Reynolds number. The biggest trail volume observed corresponds to about 500 times the body volume of the largest daphnids. Trail-averaged viscous dissipation rate of the swimming daphnids vary in the range of to and the observed magnitudes of total dissipated power between and , respectively. Among other zooplankton species, daphnids display the highest total dissipated power in their trails. These findings are discussed in the context of fluid mixing and transport by organisms swimming at intermediate Reynolds numbers. PMID:24671019
The Montgomery Åsberg and the Hamilton Ratings of Depression
Carmody, Thomas; Rush, A. John; Bernstein, Ira; Warden, Diane; Brannan, Stephen; Burnham, Daniel; Woo, Ada; Trivedi, Madhukar
2007-01-01
The 17-item Hamilton Rating Scale for Depression (HRSD17) and the Montgomery Åsberg Depression Rating Scale (MADRS) are two widely used clinician-rated symptom scales. A 6-item version of the HRSD (HRSD6) was created by Bech to address the psychometric limitations of the HRSD17. The psychometric properties of these measures were compared using classical test theory (CTT) and item response theory (IRT) methods. IRT methods were used to equate total scores on any two scales. Data from two distinctly different outpatient studies of nonpsychotic major depression, a 12-month study of highly treatment-resistant patients (n=233) and an 8-week acute-phase drug treatment trial (n=985), were used to test the robustness of results. MADRS and HRSD6 items generally contributed more to the measurement of depression than HRSD17 items, as shown by higher item-total correlations and higher IRT slope parameters. The MADRS and HRSD6 were unifactorial, while the HRSD17 contained 2 factors. The MADRS showed about twice the precision in estimating depression as either the HRSD17 or HRSD6 at average severity of depression. An HRSD17 of 7 corresponded to an 8 or 9 on the MADRS and 4 on the HRSD6. The MADRS would be superior to the HRSD17 in the conduct of clinical trials. PMID:16769204
NASA Astrophysics Data System (ADS)
Voeikov, Vladimir L.; Buravleva, Ekaterina; Bulargina, Yulia; Gurfinkel, Youri I.
2001-10-01
An automatic device was designed for high-temporal-resolution monitoring of erythrocyte sedimentation in blood. The position of the boundary between red blood cells and plasma is registered every 30 s in several pipettes simultaneously with ±10 μm precision. Data are processed by a PC and presented as velocity-time curves (ESR-grams) and as curves describing the time evolution of the boundary position. ESR-grams demonstrate the non-monotonic character of erythrocyte sedimentation in blood. Blood taken on different days from a particular donor in a stable physiological state is characterized by similar ESR-grams. Pathological deviations from a normal physiological state are reflected in a shortening of each process stage and an increase in the average sedimentation rate. Intravenous infusion of some medical preparations may lead either to improvement (prolonging of macrokinetic stages, decreasing sedimentation rate) or to worsening of the studied parameters, depending on the individual. Slight dilution of blood with saline in vitro led, as a rule, to a decrease in sedimentation rate and improvement of the microkinetic parameters of the process. Adding highly diluted hydrogen peroxide to patients' blood samples improved sedimentation kinetics. ESR-graphy may widen the opportunities of practical medicine in diagnostics, prognostics, and drug therapy.
Sun-Direction Estimation Using a Partially Underdetermined Set of Coarse Sun Sensors
NASA Astrophysics Data System (ADS)
O'Keefe, Stephen A.; Schaub, Hanspeter
2015-09-01
A comparison of different methods to estimate the sun-direction vector using a partially underdetermined set of cosine-type coarse sun sensors (CSS), while simultaneously controlling the attitude towards a power-positive orientation, is presented. CSS are commonly used in performing power-positive sun-pointing and are attractive due to their relative inexpensiveness, small size, and reduced power consumption. For this study only CSS and rate gyro measurements are available, and the sensor configuration does not provide global triple coverage required for a unique sun-direction calculation. The methods investigated include a vector average method, a combination of least squares and minimum norm criteria, and an extended Kalman filter approach. All cases are formulated such that precise ground calibration of the CSS is not required. Despite significant biases in the state dynamics and measurement models, Monte Carlo simulations show that an extended Kalman filter approach, despite the underdetermined sensor coverage, can provide degree-level accuracy of the sun-direction vector both with and without a control algorithm running simultaneously. If no rate gyro measurements are available, and rates are partially estimated from CSS, the EKF performance degrades as expected, but is still able to achieve better than 10° accuracy using only CSS measurements.
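The least-squares/minimum-norm combination described above can be sketched as follows, assuming the usual cosine-response model for a CSS, m_i = max(0, n_i · s). The sensor geometry, threshold, and helper name are illustrative, not taken from the paper; the Moore-Penrose pseudoinverse yields the least-squares solution when the lit sensor set is overdetermined and the minimum-norm solution when it is not:

```python
import numpy as np

def estimate_sun_direction(normals, measurements, threshold=0.0):
    """Least-squares / minimum-norm sun-direction estimate from cosine-type CSS.

    normals: (N, 3) unit normals of the sensors; measurements: (N,) outputs
    modeled as max(0, n_i . s).  Only lit sensors constrain the fit.
    """
    lit = measurements > threshold
    H = normals[lit]                    # rows: normals of lit sensors
    y = measurements[lit]
    s = np.linalg.pinv(H) @ y           # least-squares / minimum-norm solve
    norm = np.linalg.norm(s)
    return s / norm if norm > 0 else s  # return a unit direction

# Hypothetical 4-sensor head and a true sun direction, for illustration
normals = np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0], [0, 0, 1.0]])
s_true = np.array([0.6, 0.8, 0.0])
meas = np.clip(normals @ s_true, 0.0, None)   # noise-free cosine outputs
s_hat = estimate_sun_direction(normals, meas)
```

With only two sensors lit, the system is underdetermined and the pseudoinverse picks the minimum-norm direction in the plane the lit normals span.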
Meisner, Eric M; Hager, Gregory D; Ishman, Stacey L; Brown, David; Tunkel, David E; Ishii, Masaru
2013-11-01
To evaluate the accuracy of three-dimensional (3D) airway reconstructions obtained using quantitative endoscopy (QE). We developed this novel technique to reconstruct precise 3D representations of airway geometries from endoscopic video streams. This method, based on machine vision methodologies, uses a post-processing step of the standard videos obtained during routine laryngoscopy and bronchoscopy. We hypothesize that this method is precise and will generate assessments of airway size and shape similar to those obtained using computed tomography (CT). This study was approved by the institutional review board (IRB). We analyzed video sequences from pediatric patients receiving rigid bronchoscopy. We generated 3D scaled airway models of the subglottis, trachea, and carina using QE. These models were compared to 3D airway models generated from CT. We used the CT data as the gold standard measure of airway size, and used a mixed linear model to estimate the average error in cross-sectional area and effective diameter for QE. The average error in cross-sectional area (area sliced perpendicular to the long axis of the airway) was 7.7 mm² (variance 33.447 mm⁴). The average error in effective diameter was 0.38775 mm (variance 2.45 mm²), approximately 9% error. Our pilot study suggests that QE can be used to generate precise 3D reconstructions of airways. This technique is atraumatic, does not require ionizing radiation, and integrates easily into standard airway assessment protocols. We conjecture that this technology will be useful for staging airway disease and assessing surgical outcomes. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
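Assuming "effective diameter" follows the usual convention of the diameter of a circle with the same cross-sectional area (the abstract does not define it), the conversion is d = 2·sqrt(A/π); a small sketch with an illustrative area value:

```python
import math

def effective_diameter(area_mm2):
    """Diameter (mm) of the circle whose area equals the measured cross-section."""
    return 2.0 * math.sqrt(area_mm2 / math.pi)

# e.g. a 100 mm^2 airway cross-section (illustrative value, not from the study)
d = effective_diameter(100.0)
```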
High-precision half-life determination for the superallowed β+ emitter Ga62
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Finlay, P.; Svensson, C. E.; Ball, G. C.; Leslie, J. R.; Austin, R. A. E.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Hackman, G.; Hyland, B.; Kanungo, R.; Leach, K. G.; Mattoon, C. M.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Ressler, J. J.; Sarazin, F.; Savajols, H.; Schumaker, M. A.; Wong, J.
2008-01-01
The half-life of the superallowed β+ emitter Ga62 has been measured at TRIUMF's Isotope Separator and Accelerator facility using a fast-tape-transport system and 4π continuous-flow gas proportional counter to detect the positrons from the decay of Ga62 to the daughter Zn62. The result, T1/2=116.100±0.025 ms, represents the most precise measurement to date (0.022%) for any superallowed β-decay half-life. When combined with six previous measurements of the Ga62 half-life, a new world average of T1/2=116.121±0.021 ms is obtained. This new half-life measurement results in a 20% improvement in the precision of the Ga62 superallowed ft value while reducing its mean by 0.9σ to ft=3074.3(12) s. The impact of this half-life measurement on precision tests of the CVC hypothesis and isospin symmetry breaking corrections for A⩾62 superallowed decays is discussed.
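A world average like the one quoted is conventionally an inverse-variance weighted mean of the individual measurements. A sketch of that combination follows; the second input value is hypothetical, since the six earlier Ga62 results are not listed in this record:

```python
import math

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean and its combined uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# The TRIUMF result (116.100 +/- 0.025 ms) combined with one hypothetical
# earlier measurement, purely to illustrate the weighting
mean, sigma = weighted_average([116.100, 116.180], [0.025, 0.065])
```

The more precise measurement dominates: the combined value lands much closer to 116.100 than to 116.180, and the combined uncertainty is smaller than either input.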
High precision applications of the global positioning system
NASA Technical Reports Server (NTRS)
Lichten, Stephen M.
1991-01-01
The Global Positioning System (GPS) is a constellation of U.S. defense navigation satellites which can be used for military and civilian positioning applications. A wide variety of GPS scientific applications were identified and precise positioning capabilities with GPS were already demonstrated with data available from the present partial satellite constellation. Expected applications include: measurements of Earth crustal motion, particularly in seismically active regions; measurements of the Earth's rotation rate and pole orientation; high-precision Earth orbiter tracking; surveying; measurements of media propagation delays for calibration of deep space radiometric data in support of NASA planetary missions; determination of precise ground station coordinates; and precise time transfer worldwide.
Pico-CSIA: Picomolar Scale Compound-Specific Isotope Analyses
NASA Astrophysics Data System (ADS)
Baczynski, A. A.; Polissar, P. J.; Juchelka, D.; Schwieters, J. B.; Hilkert, A.; Freeman, K. H.
2016-12-01
The basic approach to analyzing molecular isotopes has remained largely unchanged since the late 1990s. Conventional compound-specific isotope analyses (CSIA) are conducted using capillary gas chromatography (GC), a combustion interface, and an isotope-ratio mass spectrometer (IRMS). Commercially available GC-IRMS systems consist of components with inner diameters ≥0.25 mm and employ helium flow rates of 1-4 mL/min. These flow rates are an order of magnitude larger than what the IRMS can accept. Consequently, ≥90% of the sample is lost through the open split, and 1-10s of nanomoles of carbon are required for analysis. These sample requirements are prohibitive for many biomarkers, which are often present in picomolar concentrations. We utilize the resolving power and low flows of narrow-bore capillary GC to improve the sensitivity of CSIA. Narrow-bore capillary columns (<0.25 mm ID) allow low helium flow rates of ≤0.5 mL/min for more efficient sample transfer to the ion source of the IRMS while maintaining the high linear flow rates necessary to preserve narrow peak widths (~250 ms). The IRMS has been fitted with collector amplifiers configured to 25 ms response times for rapid data acquisition across narrow peaks. Previous authors (e.g., Sacks et al., 2007) successfully demonstrated the improved sensitivity afforded by narrow-bore GC columns. They reported an accuracy and precision of 1.4‰ for peaks with an average width at half maximum of 720 ms for 100 picomoles of carbon on column. Our method builds on their advances and further reduces peak widths (~600 ms) and the amount of sample lost prior to isotopic analysis. Preliminary experiments with 100 picomoles of carbon on column show an accuracy and standard deviation <1‰. With further improvement, we hope to demonstrate robust isotopic analysis of 10s of picomoles of carbon, more than 2 orders of magnitude lower than commercial systems.

The pico-CSIA method affords high-precision isotopic analyses for picomoles of carbon in organic biomarkers, which significantly lowers sample size requirements and broadens analytical windows in paleoclimate, astrobiological, and biogeochemical research.
Varnes, D.J.; Bufe, C.G.
1996-01-01
Seismic activity in the 10 months preceding the 1980 February 14, mb 4.8 earthquake in the Virgin Islands, reported on by Frankel in 1982, consisted of four principal cycles. Each cycle began with a relatively large event or series of closely spaced events, and the duration of the cycles progressively shortened by a factor of about 3/4. Had this regular shortening of the cycles been recognized prior to the earthquake, the time of the next episode of seismicity (the main shock) might have been closely estimated 41 days in advance. That this event could be much larger than the previous events is indicated from time-to-failure analysis of the accelerating rise in released seismic energy, using a non-linear time- and slip-predictable foreshock model. Examination of the timing of all events in the sequence shows an even higher degree of order. Rates of seismicity, measured by consecutive interevent times, when plotted on an iteration diagram of a rate versus the succeeding rate, form a triangular circulating trajectory. The trajectory becomes an ascending helix if extended in a third dimension, time. This construction reveals additional and precise relations among the time intervals between times of relatively high or relatively low rates of seismic activity, including period halving and doubling. The set of 666 time intervals between all possible pairs of the 37 recorded events appears to be a fractal; the set of time points that define the intervals has a finite, non-integer correlation dimension of 0.70. In contrast, the average correlation dimension of 50 random sequences of 37 events is significantly higher, close to 1.0. In a similar analysis, the set of distances between pairs of epicentres has a fractal correlation dimension of 1.52. Well-defined cycles, numerous precise ratios among time intervals, and a non-random temporal fractal dimension suggest that the seismic series is not a random process, but rather the product of a deterministic dynamic system.
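A correlation dimension of the kind reported is typically estimated with the Grassberger-Procaccia correlation sum C(r), whose log-log slope against r gives the dimension. A minimal sketch (our own implementation, not the authors' code), sanity-checked on a uniformly spaced 1-D point set whose dimension should be close to 1:

```python
import math

def correlation_dimension(points, radii):
    """Estimate the correlation dimension of a 1-D point set.

    Computes the Grassberger-Procaccia correlation sum C(r) (fraction of
    point pairs closer than r) at each radius, then fits the slope of
    log C(r) against log r by ordinary least squares.
    """
    n = len(points)
    dists = [abs(points[i] - points[j]) for i in range(n) for j in range(i + 1, n)]
    logs = []
    for r in radii:
        c = sum(1 for d in dists if d < r) / len(dists)
        if c > 0:
            logs.append((math.log(r), math.log(c)))
    mx = sum(x for x, _ in logs) / len(logs)
    my = sum(y for _, y in logs) / len(logs)
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

# Uniformly spaced times: the correlation dimension should be close to 1,
# mirroring the ~1.0 found for the random sequences in the abstract
times = [i / 200.0 for i in range(200)]
dim = correlation_dimension(times, radii=[0.0225, 0.0475, 0.0925, 0.1775])
```

The radii are chosen between multiples of the grid spacing to avoid counting ties at bin boundaries; a clustered or fractal point set would yield a slope below 1, as with the 0.70 reported for the event times.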
Reconstructing Mid- to Late Holocene sea-level change from coral microatolls, French Polynesia
NASA Astrophysics Data System (ADS)
Hallmann, Nadine; Camoin, Gilbert; Eisenhauer, Anton; Botella, Alberic; Milne, Glenn; Vella, Claude; Samankassou, Elias; Pothin, Virginie; Dussouillez, Philippe; Fleury, Jules; Fietzke, Jan
2017-04-01
Coral microatolls are sensitive low-tide recorders, as their vertical accretion is limited by the mean low water springs level, and they can therefore be considered high-precision recorders of sea-level change. They are of pivotal importance to resolving the rates and amplitudes of millennial-to-century scale changes during periods of relative climate stability such as the Mid- to Late Holocene, which serves as an important baseline of natural variability prior to the industrial revolution. This interval therefore provides a unique opportunity to study coastal response to sea-level rise, even if the rates of sea-level rise during the Mid- to Late Holocene were lower than the current rates and those expected in the near future. Mid- to Late Holocene relative sea-level change in French Polynesia was reconstructed based on the coupling between absolute U/Th dating of in situ coral microatolls and their precise positioning via GPS RTK (Real Time Kinematic) measurements. The twelve studied islands represent ideal settings for accurate sea-level studies because: 1) they can be regarded as tectonically stable during the relevant period (slow subsidence), 2) they are located far from former ice sheets (far-field), 3) they are characterized by a low tidal amplitude, and 4) they cover a wide range of latitudes, which produces significantly improved constraints on GIA (Glacial Isostatic Adjustment) model parameters. A step-like sea-level rise is evidenced between 6 and 3.9 ka, leading to a short sea-level highstand of about a meter in amplitude between 3.9 and 3.6 ka. A sea-level fall, at an average rate of 0.3 mm/yr, is recorded between 3.6 and 1.2 ka, when sea level approached its present position. In addition, growth pattern analysis of coral microatolls allows the reconstruction of low-amplitude, high-frequency sea-level change on centennial to sub-decadal time scales.
The reconstructed sea-level curve extends the Tahiti last deglacial sea-level curve [Deschamps et al., 2012, Nature, 483, 559-564], and is in good agreement with a geophysical model tuned to fit far-field deglacial records [Bassett et al., 2005, Science, 309, 925-928].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graczyk, Dariusz; Gieren, Wolfgang; Konorski, Piotr
In this study we investigate the calibration of surface brightness–color (SBC) relations based solely on eclipsing binary stars. We selected a sample of 35 detached eclipsing binaries with trigonometric parallaxes from Gaia DR1 or Hipparcos whose absolute dimensions are known with an accuracy better than 3% and that lie within 0.3 kpc from the Sun. For the purpose of this study, we used mostly homogeneous optical and near-infrared photometry based on the Tycho-2 and 2MASS catalogs. We derived geometric angular diameters for all stars in our sample with a precision better than 10%, and for 11 of them with a precision better than 2%. The precision of individual angular diameters of the eclipsing binary components is currently limited by the precision of the geometric distances (∼5% on average). However, by using a subsample of systems with the best agreement between their geometric and photometric distances, we derived the precise SBC relations based only on eclipsing binary stars. These relations have precisions that are comparable to the best available SBC relations based on interferometric angular diameters, and they are fully consistent with them. With very precise Gaia parallaxes becoming available in the near future, angular diameters with a precision better than 1% will be abundant. At that point, the main uncertainty in the total error budget of the SBC relations will come from transformations between different photometric systems, disentangling of component magnitudes, and for hot OB stars, the main uncertainty will come from the interstellar extinction determination. We argue that all these issues can be overcome with modern high-quality data and conclude that a precision better than 1% is entirely feasible.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
Observations on the predictive value of short-term stake tests
Stan Lebow; Bessie Woodward; Patricia Lebow
2008-01-01
This paper compares average ratings of test stakes after 3, 4, 5, and 7 years exposure to their subsequent ratings after 11 years. Average ratings from over 200 treatment groups exposed in plots in southern Mississippi were compared to average ratings of a reference preservative. The analysis revealed that even perfect ratings after three years were not a reliable...
Image-guided smart laser system for precision implantation of cells in cartilage
NASA Astrophysics Data System (ADS)
Katta, Nitesh; Rector, John A.; Gardner, Michael R.; McElroy, Austin B.; Choy, Kevin C.; Crosby, Cody; Zoldan, Janet; Milner, Thomas E.
2017-03-01
State-of-the-art treatment for joint diseases like osteoarthritis focuses on articular cartilage repair/regeneration by stem cell implantation therapy. However, the technique is limited by a lack of precision in the physician's imaging and cell deposition toolkit. We describe a novel combination of high-resolution, rapid-scan-rate optical coherence tomography (OCT) alongside a short-pulsed nanosecond thulium (Tm) laser for precise cell seeding in cartilage. The superior beam quality of thulium lasers and their 1940 nm operating wavelength offer high volumetric tissue removal rates and minimize the residual thermal footprint. OCT imaging enables targeted micro-well placement, precise cell deposition, and feature contrast. A bench-top system is constructed using a 15 W, 1940 nm, nanosecond-pulsed Tm fiber laser (500 μJ pulse energy, 100 ns pulse duration, 30 kHz repetition rate) for removing tissue, and a swept-source laser (1310 ± 70 nm, 100 kHz sweep rate) for OCT imaging, forming a combined Tm/OCT system - a "smart laser knife". OCT assists the smart laser knife user in characterizing cartilage to inform micro-well placement. The Tm laser creates micro-wells (2.35 mm length, 1.5 mm width, 300 μm depth) and micro-incisions (1 mm wide, 200 μm deep) while OCT image guidance assists and demonstrates this precision cutting and cell deposition with real-time feedback. To test the micro-well creation and cell deposition protocol, gelatin phantoms are constructed mimicking cartilage optical properties and physiological structure. Cell viability is then assessed to illustrate the efficacy of the hydrogel deposition. Automated OCT feedback is demonstrated for cutting procedures to avoid important surface/subsurface structures. This bench-top smart laser knife system offers a new image-guided approach to precise stem cell seeding that can enhance the efficacy of articular cartilage repair.
Nitrogen emissions from broilers measured by mass balance over eighteen consecutive flocks.
Coufal, C D; Chavez, C; Niemeyer, P R; Carey, J B
2006-03-01
Emission of nitrogen in the form of ammonia from poultry rearing facilities has been an important topic for the poultry industry because of concerns regarding the effects of ammonia on the environment. Sound scientific data are needed to accurately estimate air emissions from poultry operations. Many factors, such as season of the year, ambient temperature and humidity, bird health, and management practices, can influence ammonia volatilization from broiler rearing facilities. Precise results are often difficult to attain from commercial facilities, particularly over long periods of time. Therefore, an experiment was conducted to determine nitrogen loss from broilers in a research facility under conditions simulating commercial production for 18 consecutive flocks. Broilers were reared to 40 to 42 d of age and fed diets obtained from a commercial broiler integrator. New rice hulls were used for litter for the first flock, and the same litter was recycled for all subsequent flocks with caked litter removed between flocks. All birds, feeds, and litter materials entering and leaving the facility were quantified, sampled, and analyzed for total nitrogen content. Nitrogen loss was calculated by the mass balance method, in which loss was equal to the difference between the nitrogen inputs and the nitrogen outputs. Nitrogen partitioning as a percentage of inputs averaged 15.29, 6.84, 55.52, 1.27, and 21.08% for litter, caked litter, broiler carcasses, mortalities, and nitrogen loss, respectively, over all 18 flocks. During the production of 18 flocks of broilers on the same recycled litter, the average nitrogen emission rate was calculated to range from 4.13 to 19.74 g of N/kg of marketed broiler and averaged 11.07 g of N/kg. Nitrogen loss was significantly (P < 0.05) greater for flocks reared in summer vs. winter.
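The mass-balance method reduces to subtracting all measured nitrogen outputs from the inputs. A small sketch with illustrative numbers chosen only to match the partition percentages reported in this abstract (the actual flock totals are not given):

```python
def nitrogen_loss(inputs_kg, outputs_kg):
    """Mass-balance nitrogen loss: inputs minus the sum of measured outputs."""
    return inputs_kg - sum(outputs_kg)

# Hypothetical totals scaled so the partitions match the reported averages
inputs = 1000.0                      # kg N entering (feed, chicks, bedding)
outputs = {"litter": 152.9, "caked litter": 68.4,
           "carcasses": 555.2, "mortalities": 12.7}
loss = nitrogen_loss(inputs, outputs.values())
loss_pct = 100.0 * loss / inputs     # unaccounted N, attributed to volatilization
```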
Results of this experiment have demonstrated that the rate of nitrogen volatilization from broiler grow-out facilities varies significantly on a flock-to-flock basis.
Flare angles measured with ball gage
NASA Technical Reports Server (NTRS)
Cleghorn, D.; Wall, W. A.
1968-01-01
Precision tungsten carbide balls measure the internal angle of flared joints. Measurements from small and large balls in the flare throat to an external reference point are made. The difference in distances and diameters determine the average slope of the flare between the points of ball contact.
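One common way such a two-ball measurement works (our reconstruction, since the note does not give the formula): a ball of radius ρ seated in a cone of half-angle θ has its center a distance ρ/sin θ from the apex along the axis, so the separation d of the two ball centers gives sin θ = (R − r)/d, with the center heights obtained from the measured ball-top heights minus the radii. A sketch under that assumption, with illustrative numbers:

```python
import math

def flare_half_angle(r_small, r_large, top_small, top_large):
    """Half-angle (degrees) of a conical flare from two precision balls seated in it.

    top_small/top_large are measured heights of the ball tops from a common
    external reference; subtracting each radius gives the center height, and
    the center separation d satisfies sin(theta) = (R - r) / d.
    """
    d = (top_large - r_large) - (top_small - r_small)
    return math.degrees(math.asin((r_large - r_small) / d))

# Illustrative check (mm): construct tops consistent with a 37-degree half-angle
r, R = 2.0, 4.0
d_true = (R - r) / math.sin(math.radians(37.0))
theta = flare_half_angle(r, R, top_small=5.0, top_large=5.0 + (R - r) + d_true)
```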
Historical Precision of an Ozone Correction Procedure for AM0 Solar Cell Calibration
NASA Technical Reports Server (NTRS)
Snyder, David B.; Jenkins, Phillip; Scheiman, David
2005-01-01
In an effort to improve the accuracy of the high-altitude aircraft method for calibration of high band-gap solar cells, the ozone correction procedure has been revisited. The new procedure adjusts the measured short-circuit current, Isc, according to satellite-based ozone measurements and a model of the atmospheric ozone profile, then extrapolates the measurements to air mass zero, AM0. The purpose of this paper is to assess the precision of the revised procedure by applying it to historical data sets. The average Isc of a silicon cell for a flying season increased 0.5% and the standard deviation improved from 0.5% to 0.3%. The 12-year average Isc of a GaAs cell increased 1% and the standard deviation improved from 0.8% to 0.5%. The slight increase in measured Isc and improvement in standard deviation suggest that the accuracy of the aircraft method may improve from 1% to nearly 0.5%.
Tag-Based Social Image Search: Toward Relevant and Diverse Results
NASA Astrophysics Data System (ADS)
Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang
Recent years have witnessed the great success of social media websites. Tag-based image search is an important approach to accessing image content of interest on these websites. However, existing ranking methods for tag-based image search frequently return results that are irrelevant or lacking in diversity. This chapter presents a diverse relevance ranking scheme that simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both the visual information of images and the semantic information of associated tags. Then the semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm which optimizes Average Diverse Precision (ADP), a novel measure extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
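The conventional Average Precision that ADP extends can be computed as below. The diversity weighting that distinguishes ADP is not specified in this abstract, so this sketch covers only the AP baseline:

```python
def average_precision(ranked_relevance):
    """Conventional Average Precision (AP) over a ranked list.

    ranked_relevance: list of 0/1 relevance flags, best-ranked first.
    AP averages precision@k over the positions k of relevant items.
    (ADP additionally accounts for the diversity of items ranked above
    each position; that weighting is not reproduced here.)
    """
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Relevant items at ranks 1, 3, 4 -> precisions 1/1, 2/3, 3/4.
ap = average_precision([1, 0, 1, 1, 0])
```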
Functional handwriting performance in school-age children with fetal alcohol spectrum disorders.
Duval-White, Cherie J; Jirikowic, Tracy; Rios, Dianne; Deitz, Jean; Olson, Heather Carmichael
2013-01-01
Handwriting is a critical skill for school success. Children with fetal alcohol spectrum disorders (FASD) often present with fine motor and visual-motor impairments that can affect handwriting performance, yet handwriting skills have not been systematically investigated in this clinical group. This study aimed to comprehensively describe handwriting skills in 20 school-age children with FASD. Children were tested with the Process Assessment of the Learner, 2nd Edition (PAL-II), and the Visuomotor Precision subtest of NEPSY, a developmental neuropsychological assessment. Participants performed below average on PAL-II measures of handwriting legibility and speed and on NEPSY visual-motor precision tasks. In contrast, PAL-II measures of sensorimotor skills were broadly within the average range. Results provide evidence of functional handwriting challenges for children with FASD and suggest diminished visual-motor skills and increased difficulty as task complexity increases. Future research is needed to further describe the prevalence and nature of handwriting challenges in this population. Copyright © 2013 by the American Occupational Therapy Association, Inc.
Petscher, Yaacov; Mitchell, Alison M; Foorman, Barbara R
2015-01-01
A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared to a conditional item response model which includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5% when using the conditional item response model, with greater improvements for those of average or high ability. Implications for measurement models of speeded assessments are discussed.
Patrzyk, M; Schreiber, A; Heidecke, C D; Glitsch, A
2009-12-01
Development of an innovative method of endoscopic laser-supported diaphanoscopy for precise demonstration of the location of gastrointestinal stromal tumors (GISTs) at laparoscopy is described. The equipment consists of a light transmission cable with an anchoring system for the gastric mucosa, a connecting system for the light source, and the laser light source itself. During surgery, transillumination by laser is used to show the shape of the tumor. The resection margins are then marked by electric coagulation. Ten patients have been successfully treated using this technique in laparoscopic-endoscopic rendezvous procedures. The average time of surgery was 123 minutes. The time for marking the shape of the tumor averaged 16 minutes. Depending on tumor location and size, 4-7 marks were used, and resection margins were 4-15 mm. This new and effective technique facilitates precise localization of gastric GISTs, leading to exact and tissue-sparing transmural laparoscopic resections. Georg Thieme Verlag KG Stuttgart New York.
Localization of lung fields in HRCT images using a deep convolution neural network
NASA Astrophysics Data System (ADS)
Kumar, Abhishek; Agarwala, Sunita; Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Nandi, Debashis; Garg, Mandeep; Khandelwal, Niranjan; Kalra, Naveen
2018-02-01
Lung field segmentation is a prerequisite step for the development of a computer-aided diagnosis system for interstitial lung diseases observed in chest HRCT images. Conventional methods of lung field segmentation rely on a large gray value contrast between lung fields and surrounding tissues. These methods fail on lung HRCT images with dense and diffuse pathology. An efficient preprocessing could improve the accuracy of segmentation of the pathological lung field in HRCT images. In this paper, a convolutional neural network is used for localization of lung fields in HRCT images. The proposed method provides an optimal bounding box enclosing the lung fields irrespective of the presence of diffuse pathology. The performance of the proposed algorithm is validated on 330 lung HRCT images obtained from the MedGift database using ZF and VGG networks. The model achieves a mean average precision of 0.94 with the ZF net and a slightly better 0.95 with the VGG net.
Expansion and growth of structure observables in a macroscopic gravity averaged universe
NASA Astrophysics Data System (ADS)
Wijenayake, Tharake; Ishak, Mustapha
2015-03-01
We investigate the effect of averaging inhomogeneities on expansion and large-scale structure growth observables using the exact and covariant framework of macroscopic gravity (MG). It is well known that applying Einstein's equations and spatial averaging do not commute, leading to the averaging problem and backreaction terms. For the MG formalism applied to the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, the extra term can be encapsulated as an averaging density parameter denoted ΩA. An exact isotropic cosmological solution of MG for the flat FLRW metric is already known in the literature; we derive here an anisotropic exact solution. Using the isotropic solution, we compare the expansion history to currently available data on distances to supernovae, baryon acoustic oscillations, cosmic microwave background last scattering surface measurements, and Hubble constant measurements, and find -0.05 ≤ ΩA ≤ 0.07 (at the 95% confidence level). For the flat metric case this reduces to -0.03 ≤ ΩA ≤ 0.05. The positive part of the intervals can be rejected if a mathematical (and physical) prior is taken into account. We also find that the inclusion of this term in the fits can shift the values of the usual cosmological parameters by a few to several percent. Next, we derive an equation for the growth rate of large-scale structure in MG that includes a term due to the averaging and assess its effect on the evolution of the growth compared to that of the Lambda cold dark matter (ΛCDM) concordance model. We find that an ΩA term with an amplitude in the range [-0.04, -0.02] leads to a relative deviation of the growth from that of ΛCDM of up to 2%-4% at late times. Thus, the shift in the growth could be of comparable amplitude to that caused by similar changes in cosmological parameters like the dark energy density parameter or its equation of state. The effect could also be comparable in amplitude to some systematic effects considered for future surveys. This indicates that the averaging term and its possible effect need to be tightly constrained in future precision cosmological studies.
1984-04-02
clock is an absolute technique with a precision of about 0.1 μs. The results of the portable clock experiment indicate that LF sync... also gains direct access to the U.S. primary frequency standard, NBS-6. Access to NBS-6 makes it possible to set an absolute limit of one part in 10... If the components in these equations are uncorrelated we may take variances of each of these equations and the cross terms will average to zero.
The precision of locomotor odometry in humans.
Durgin, Frank H; Akagi, Mikio; Gallistel, Charles R; Haiken, Woody
2009-03-01
Two experiments measured the human ability to reproduce locomotor distances of 4.6-100 m without visual feedback and compared distance production with time production. Subjects were not permitted to count steps. It was found that the precision of human odometry follows Weber's law: variability is proportional to distance. The coefficients of variation for distance production were much lower than those measured for time production of similar durations. Gait parameters recorded during the task (average step length and step frequency) were found to be even less variable, suggesting that step integration could be the basis for non-visual human odometry.
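Weber's-law precision is conventionally quantified by the coefficient of variation (standard deviation divided by the mean), which should be roughly constant across target distances. A minimal sketch with hypothetical distance productions, not the study's data:

```python
import statistics

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean.  Under Weber's law the
    CV of produced distances is roughly constant across distance."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical productions (m) for a short and a long target.
short_target = [4.2, 4.8, 4.5, 4.9, 4.1]     # nominal ~4.5 m
long_target = [92.0, 105.0, 98.0, 107.0, 90.0]  # nominal ~100 m
cv_short = coefficient_of_variation(short_target)
cv_long = coefficient_of_variation(long_target)
```

Even though the absolute spread is roughly twenty times larger for the long target, the two CVs come out nearly equal, which is the Weber's-law signature described above.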
Long Open Path Fourier Transform Spectroscopy Measurements of Greenhouse Gases in the Near Infrared
NASA Astrophysics Data System (ADS)
Griffith, D. W. T.
2015-12-01
Atmospheric composition measurements are an important tool to quantify local and regional emissions and sinks of greenhouse gases. Most in situ measurements are made at a point, but how representative are such measurements in an inhomogeneous environment? Open path Fourier Transform Spectroscopy (FTS) measurements potentially offer spatial averaging and continuous measurements of several trace gases (including CO2, CH4, CO and N2O) simultaneously in the same airmass. Spatial averaging over kilometre scales is a better fit to the finest-scale atmospheric models becoming available, and helps bridge the gap between models and in situ measurements. In this paper we assess the precision, accuracy and reliability of long open path measurements by Fourier Transform Spectroscopy in the near infrared from a 5-month continuous record of measurements over a 1.5 km pathlength. Direct open-atmosphere measurements of the trace gases CO2, CH4, CO and N2O as well as O2 were retrieved from several absorption bands between 4000 and 8000 cm-1 (2.5 - 1.25 micron). At one end of the path an in situ FTIR analyser simultaneously collected well calibrated measurements of the same species for comparison with the open path-integrated measurements. The measurements ran continuously from June to November 2014. We introduce the open path FTS measurement system and present an analysis of the results, including assessment of precision, accuracy relative to coincident in situ measurements, and reliability. Short-term precision of the open path measurement of CO2 was better than 1 ppm for 5-minute averages and thus sufficient for studies in urban and other non-background environments. Measurement bias relative to calibrated in situ measurements was stable across the measurement period. The system operated reliably, with data losses mainly due to weather events such as rain and fog preventing transmission of the IR beam.
In principle the system can be improved to provide longer pathlengths and higher precision, and we present recent progress in improving the original measurements.
Pribil, M.J.; Wanty, R.B.; Ridley, W.I.; Borrok, D.M.
2010-01-01
An increased interest in high-precision Cu isotope ratio measurements using multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) has developed recently for various natural geologic systems and environmental applications; these typically contain high concentrations of sulfur, particularly in the form of sulfate (SO₄²⁻) and sulfide (S). For example, Cu, Fe, and Zn concentrations in acid mine drainage (AMD) can range from 100 µg/L to greater than 50 mg/L, with sulfur species concentrations reaching greater than 1000 mg/L. Routine separation of Cu, Fe and Zn from AMD, Cu-sulfide minerals and other geological matrices usually incorporates single anion exchange resin column chromatography for metal separation. During chromatographic separation, variable breakthrough of SO₄²⁻ into the Cu fractions was observed as a function of the initial sulfur to Cu ratio, column properties, and the sample matrix. SO₄²⁻ present in the Cu fraction can form a polyatomic 32S-14N-16O-1H species causing a direct mass interference with 63Cu and producing artificially light δ65Cu values. Here we report the extent of the mass interference caused by SO₄²⁻ breakthrough when measuring δ65Cu on natural samples and NIST SRM 976 Cu isotope standard spiked with SO₄²⁻ after both single and double anion column chromatography. A set of five 100 µg/L Cu SRM 976 samples spiked with 500 mg/L SO₄²⁻ resulted in an average δ65Cu of -3.50 ± 5.42‰ following single anion column separation, with variable SO₄²⁻ breakthrough but an average SO₄²⁻ concentration of 770 µg/L. Following double anion column separation, the average SO₄²⁻ concentration of 13 µg/L resulted in better precision and accuracy, with a measured δ65Cu value of 0.01 ± 0.02‰ relative to the expected 0‰ for SRM 976. We conclude that attention to SO₄²⁻ breakthrough in sulfur-rich samples is necessary for accurate and precise measurements of δ65Cu and may require the use of a double ion exchange column procedure.
Contact lens overrefraction variability in corneal power estimation after refractive surgery.
Joslin, Charlotte E; Koster, James; Tu, Elmer Y
2005-12-01
To evaluate the accuracy and precision of the contact lens overrefraction (CLO) method in determining corneal refractive power in post-refractive-surgery eyes. Refractive Surgery Service and Contact Lens Service, University of Illinois, Chicago, Illinois, USA. Fourteen eyes of 7 subjects who had a single myopic laser in situ keratomileusis procedure within 12 months with refractive stability were included in this prospective case series. The CLO method was compared with the historical method of predicting the corneal power using 4 different lens fitting strategies and 3 refractive pupil scan sizes (3 mm, 5 mm, and total pupil). Rigid lenses included three 9.0 mm overall diameter lenses fit flat, steep, and an average of the two, and a 15.0 mm diameter lens fit steep. Cycloplegic CLO was performed using the autorefractor function of the Nidek OPD-Scan ARK-10000. Results with each strategy were compared with the corneal power estimated with the historical method. The bias (mean of the difference), 95% limits of agreement, and difference versus mean plots for each strategy are presented. In each subject, the CLO-estimated corneal power varied based on lens fit. On average, the bias between the CLO and historical methods ranged from -0.38 to +2.42 diopters (D) and was significantly different from 0 in all but 3 strategies. Substantial variability in precision existed between fitting strategies, with the range of the 95% limits of agreement approximating 0.50 D in 2 strategies and 2.59 D in the worst-case scenario. The least precise fitting strategy was the use of flat-fitting 9.0 mm diameter lenses. The accuracy and precision of the CLO method of estimating corneal power in post-refractive-surgery eyes was highly variable on the basis of how the rigid lenses were fit. One of the most commonly used fitting strategies in clinical practice--flat-fitting a 9.0 mm diameter lens--resulted in the poorest accuracy and precision. Results also suggest use of large-diameter lenses may improve outcomes.
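The bias and 95% limits of agreement reported above are the standard Bland-Altman quantities (mean difference, and mean ± 1.96 standard deviations of the differences). A sketch with hypothetical per-eye differences in diopters, not the study's data:

```python
import statistics

def bias_and_limits(differences):
    """Bland-Altman bias and 95% limits of agreement.

    differences: per-eye method differences (e.g. CLO minus historical
    corneal power, in diopters).  Returns (bias, (lower, upper)) where
    the limits are bias +/- 1.96 * SD of the differences.
    """
    bias = statistics.mean(differences)
    sd = statistics.stdev(differences)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical per-eye differences (D) for one fitting strategy.
diffs = [0.50, 0.25, 0.75, 0.00, 0.50, 0.25, 0.75]
bias, (lo, hi) = bias_and_limits(diffs)
```

The width of the interval (hi - lo) is the "range of the 95% limits of agreement" used above to compare the precision of fitting strategies.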
NASA Astrophysics Data System (ADS)
Trifonenkov, A. V.; Trifonenkov, V. P.
2017-01-01
This article deals with a feature of problems of calculating time-averaged characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during the threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning imposes limitations on the statements of the problem of calculating time-averaged characteristics of a set of optimal reactor power-off controls: the level of xenon poisoning is limited, so an appropriate segment of the time axis must be chosen to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered. The two estimates were plotted as functions of the xenon limitation, and the boundaries of the interval of averaging are defined more precisely.
Precision Agriculture. Reaping the Benefits of Technological Growth. Resources in Technology.
ERIC Educational Resources Information Center
Hadley, Joel F.
1998-01-01
Technological innovations have revolutionized farming. Using precision farming techniques, farmers get an accurate picture of a field's attributes, such as soil properties, yield rates, and crop characteristics through the use of Differential Global Positioning Satellite hardware. (JOW)
Precision Airdrop (Largage de precision)
2005-12-01
NAVIGATION TO A PRECISION AIRDROP -- OVERVIEW (RTO-AG-300-V24): ...the point from various compass headings. As the tests are conducted, the resultant... rate. This approach avoids including a magnetic compass for the heading reference, which has difficulties due to local changes in the magnetic field...
Toward the use of precision medicine for the treatment of head and neck squamous cell carcinoma.
Gong, Wang; Xiao, Yandi; Wei, Zihao; Yuan, Yao; Qiu, Min; Sun, Chongkui; Zeng, Xin; Liang, Xinhua; Feng, Mingye; Chen, Qianming
2017-01-10
Precision medicine is a new strategy that aims at preventing and treating human diseases by focusing on individual variations in people's genes, environment and lifestyle. Precision medicine has been used for cancer diagnosis and treatment and shows evident clinical efficacy. Rapid developments in molecular biology, genetics and sequencing technologies, as well as computational technology, have enabled the establishment of "big data" resources, such as the Human Genome Project, which provide a basis for precision medicine. Head and neck squamous cell carcinoma (HNSCC) is an aggressive cancer with a high incidence rate and low survival rate. Current therapies are often aggressive and carry considerable side effects. Much research now indicates that precision medicine can be used for HNSCC and may achieve improved results. From this perspective, we present an overview of the current status, potential strategies, and challenges of precision medicine in HNSCC. We focus on targeted therapy based on the cell surface signaling receptors epidermal growth factor receptor (EGFR), vascular endothelial growth factor (VEGF) and human epidermal growth factor receptor-2 (HER2), and on the PI3K/AKT/mTOR, JAK/STAT3 and RAS/RAF/MEK/ERK cellular signaling pathways. Gene therapy for the treatment of HNSCC is also discussed.
Single photon laser altimeter simulator and statistical signal processing
NASA Astrophysics Data System (ADS)
Vacek, Michael; Prochazka, Ivan
2013-05-01
Spaceborne altimeters are common instruments onboard deep space rendezvous spacecraft. They provide range and topographic measurements critical to spacecraft navigation. Simultaneously, the receiver part may be utilized for an Earth-to-satellite link, one-way time transfer, and precise optical radiometry. The main advantage of the single photon counting approach is the ability to process signals with very low signal-to-noise ratio, eliminating the need for large telescopes and a high power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission-specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The most promising single photon altimeter applications are low-orbit (˜10 km), low-radial-velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (˜10 km), where range evaluation repetition rates of ˜100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
NASA Astrophysics Data System (ADS)
Passas, Georgios; Freear, Steven; Fawcett, Darren
2010-01-01
Space-time coding (STC) is an important milestone in modern wireless communications. In this technique, multiple copies of the same signal are transmitted through different antennas (space) and different symbol periods (time) to improve the robustness of a wireless system by increasing its diversity gain. STCs are channel coding algorithms that can be readily implemented on a field programmable gate array (FPGA) device. This work provides figures for the amount of required FPGA hardware resources, the speed at which the algorithms can operate and the power consumption requirements of a space-time block code (STBC) encoder. Seven encoder very high-speed integrated circuit hardware description language (VHDL) designs have been coded, synthesised and tested. Each design realises a complex orthogonal space-time block code with a different transmission matrix. All VHDL designs are parameterisable in terms of sample precision. Precisions ranging from 4 bits to 32 bits have been synthesised. Alamouti's STBC encoder design [Alamouti, S.M. (1998), 'A Simple Transmit Diversity Technique for Wireless Communications', IEEE Journal on Selected Areas in Communications, 16:55-108.] proved to be the best trade-off, since it is on average 3.2 times smaller, 1.5 times faster and requires slightly less power than the next best trade-off in the comparison, which is a 3/4-rate full-diversity 3Tx-antenna STBC.
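Alamouti's transmission matrix, the best trade-off above, maps each symbol pair (s1, s2) to two antennas over two symbol periods. A floating-point Python sketch of the encoding rule (the fixed-point sample precision of the VHDL designs is not modeled here):

```python
def alamouti_encode(symbols):
    """Rate-1 Alamouti space-time block code for two transmit antennas.

    For each pair (s1, s2): antenna 1 sends s1 then -conj(s2), while
    antenna 2 sends s2 then conj(s1), giving the orthogonal 2x2
    transmission matrix [[s1, -s2*], [s2, s1*]].
    """
    if len(symbols) % 2:
        raise ValueError("Alamouti encodes symbols in pairs")
    ant1, ant2 = [], []
    for s1, s2 in zip(symbols[0::2], symbols[1::2]):
        ant1 += [s1, -s2.conjugate()]
        ant2 += [s2, s1.conjugate()]
    return ant1, ant2

# One QPSK-like pair; the two symbol-period columns are orthogonal,
# which is what yields full transmit diversity with a linear receiver.
a1, a2 = alamouti_encode([1 + 1j, 1 - 1j])
```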
Simultaneous HPLC quantitative analysis of active compounds in leaves of Moringa oleifera Lam.
Vongsak, Boonyadist; Sithisarn, Pongtip; Gritsanapan, Wandee
2014-08-01
Moringa oleifera Lam. has been used as a traditional medicine for the treatment of numerous diseases. A simultaneous high-performance liquid chromatography (HPLC) analysis was developed and validated for the determination of the contents of crypto-chlorogenic acid, isoquercetin and astragalin, the primary antioxidative compounds, in M. oleifera leaves. HPLC analysis was successfully conducted by using a Hypersil BDS C18 column, eluted with a gradient of methanol-1% acetic acid with a flow rate of 1 mL/min, and detected at 334 nm. Parameters for the validation included linearity, precision, accuracy and limits of detection and quantitation. The developed HPLC method was precise, with relative standard deviation < 2%. The recovery values of crypto-chlorogenic acid, isoquercetin and astragalin in M. oleifera leaf extracts were 98.50, 98.47 and 98.59%, respectively. The average contents of these compounds in the dried ethanolic extracts of the leaves of M. oleifera collected from different regions of Thailand were 0.081, 0.120 and 0.153% (w/w), respectively. The developed HPLC method was appropriate and practical for the simultaneous analysis of crypto-chlorogenic acid, isoquercetin and astragalin in the leaf extract of M. oleifera. This work is valuable as guidance for the standardization of the leaf extracts and pharmaceutical products of M. oleifera. © The Author [2013]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Aaij, R; Abellán Beteta, C; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; d'Argent, P; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Betti, F; Bettler, M-O; van Beuzekom, M; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borgheresi, A; Borghi, S; Borisyak, M; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Buchanan, E; Burr, C; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Campana, P; Campora Perez, D; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chefdeville, M; Chen, S; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Simone, P; Dean, 
C-T; Decamp, D; Deckenhoff, M; Del Buono, L; Déléage, N; Demmer, M; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Ruscio, F; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dungs, K; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Färber, C; Farley, N; Farry, S; Fay, R; Fazzini, D; Ferguson, D; Fernandez Albor, V; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fleuret, F; Fohl, K; Fol, P; Fontana, M; Fontanelli, F; Forshaw, D C; Forty, R; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; García Pardiñas, J; Garra Tico, J; Garrido, L; Gascon, D; Gaspar, C; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; He, J; Head, T; Heijne, V; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hongming, L; Hulsbergen, W; Humair, T; Hushchyn, M; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, 
S; Kecke, M; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khairullin, E; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kuonen, A K; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusardi, N; Lusiani, A; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Manca, G; Mancinelli, G; Manning, P; Mapelli, A; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massacrier, L M; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Merli, A; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Müller, D; Müller, J; Müller, K; Müller, V; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen-Mau, C; Niess, V; Nieswand, S; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, C J G; Osorio Rodrigues, B; Otalora Goicochea, J M; Otto, A; Owen, P; 
Oyanguren, A; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Pappenheimer, C; Parker, W; Parkes, C; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pikies, M; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Ramos Pernas, M; Rangel, M S; Raniuk, I; Raven, G; Redi, F; Reichert, S; Dos Reis, A C; Renaudin, V; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Rogozhnikov, A; Roiser, S; Romanovsky, V; Romero Vidal, A; Ronayne, J W; Rotondo, M; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sagidova, N; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefkova, S; Steinkamp, O; Stenyakin, O; Stevenson, S; 
Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Todd, J; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Tournefier, E; Tourneur, S; Trabelsi, K; Traill, M; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Volkov, V; Vollhardt, A; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wang, J; Ward, D R; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wicht, J; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wraight, K; Wright, S; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yin, H; Yu, J; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zhukov, V; Zucchelli, S
2016-06-17
Charm meson oscillations are observed in a time-dependent analysis of the ratio of D^{0}→K^{+}π^{-}π^{+}π^{-} to D^{0}→K^{-}π^{+}π^{-}π^{+} decay rates, using data corresponding to an integrated luminosity of 3.0 fb^{-1} recorded by the LHCb experiment. The measurements presented are sensitive to the phase-space averaged ratio of doubly Cabibbo-suppressed to Cabibbo-favored amplitudes r_{D}^{K3π} and the product of the coherence factor R_{D}^{K3π} and a charm mixing parameter y_{K3π}^{'}. The constraints measured are r_{D}^{K3π}=(5.67±0.12)×10^{-2}, which is the most precise determination to date, and R_{D}^{K3π}y_{K3π}^{'}=(0.3±1.8)×10^{-3}, which provides useful input for determinations of the CP-violating phase γ in B^{±}→DK^{±}, D→K^{∓}π^{±}π^{∓}π^{±} decays. The analysis also gives the most precise measurement of the D^{0}→K^{+}π^{-}π^{+}π^{-} branching fraction, and the first observation of D^{0}-D̄^{0} oscillations in this decay mode, with a significance of 8.2 standard deviations.
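The parameters quoted in this abstract enter the commonly used expansion of the ratio of "wrong-sign" to "right-sign" decay rates as a function of decay time. A minimal sketch of that expansion, using the central values from the abstract; the value of x²+y² and the decay-time grid are illustrative assumptions, not results from the paper:

```python
# Sketch of the standard time-dependent ratio of wrong-sign (WS) to
# right-sign (RS) decay rates used in charm-mixing analyses.
# r_D and R_D*y' are the central values quoted in the abstract;
# x^2 + y^2 is an assumed, illustrative mixing value.

r_D = 5.67e-2          # amplitude ratio r_D^{K3pi} (from the abstract)
Ry = 0.3e-3            # product R_D^{K3pi} * y'_{K3pi} (from the abstract)
x2_plus_y2 = 4.4e-5    # assumed value of x^2 + y^2 (illustrative)

def ws_rs_ratio(t_over_tau):
    """Approximate WS/RS ratio as a function of decay time (in lifetimes)."""
    return (r_D**2
            + r_D * Ry * t_over_tau
            + (x2_plus_y2 / 4.0) * t_over_tau**2)

for t in (0.0, 1.0, 2.0):
    print(f"t = {t:.0f} tau: R(t) = {ws_rs_ratio(t):.6f}")
```

At t = 0 the ratio reduces to r_D², and the slope in decay time is what gives sensitivity to the product R_D·y'.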
Non-Invasive Investigation of Bone Adaptation in Humans to Mechanical Loading
NASA Technical Reports Server (NTRS)
Whalen, R.
1999-01-01
Experimental studies have identified peak cyclic forces, number of loading cycles, and loading rate as contributors to the regulation of bone metabolism. We have proposed a theoretical model that relates bone density to a mechanical stimulus derived from average daily cumulative peak cyclic 'effective' tissue stresses. In order to develop a non-invasive experimental model to test the theoretical model we need to: (1) monitor daily cumulative loading on a bone, (2) compute the internal stress state(s) resulting from the imposed loading, and (3) image volumetric bone density accurately, precisely, and reproducibly within small contiguous volumes throughout the bone. We have chosen the calcaneus (heel) as an experimental model bone site because it is loaded by ligament, tendon and joint contact forces in equilibrium with daily ground reaction forces that we can measure; it is a peripheral bone site and therefore more easily and accurately imaged with computed tomography; it is composed primarily of cancellous bone; and it is a relevant site for monitoring bone loss and adaptation in astronauts and the general population. This paper presents an overview of our recent advances in the areas of monitoring daily ground reaction forces, biomechanical modeling of the forces on the calcaneus during gait, mathematical modeling of calcaneal bone adaptation in response to cumulative daily activity, accurate and precise imaging of the calcaneus with quantitative computed tomography (QCT), and application to long duration space flight.
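A stimulus of the kind described, combining peak cyclic stress magnitudes with daily cycle counts into one scalar, can be sketched as follows. The stress-weighting exponent m and the activity data are assumptions for illustration, not values from the paper:

```python
# Minimal sketch of a cumulative daily load stimulus of the kind the
# abstract describes: bone density is driven by a single scalar combining
# peak cyclic "effective" stresses with daily loading-cycle counts.
# The exponent m (here 4) and the activity numbers are assumptions.

def daily_stress_stimulus(activities, m=4.0):
    """activities: list of (cycles_per_day, effective_stress_MPa) pairs."""
    return sum(n * s**m for n, s in activities) ** (1.0 / m)

# Hypothetical day: 6000 walking cycles at 5 MPa, 500 stair cycles at 8 MPa.
xi = daily_stress_stimulus([(6000, 5.0), (500, 8.0)])
print(f"daily stimulus: {xi:.1f} MPa-equivalent")
```

A large exponent m weights a few high-stress cycles more heavily than many low-stress ones, which matches the experimental finding that peak force matters more than cycle count alone.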
Richardson, R. Mark; Kells, Adrian P.; Martin, Alastair J.; Larson, Paul S.; Starr, Philip A.; Piferi, Peter G.; Bates, Geoffrey; Tansey, Lisa; Rosenbluth, Kathryn H.; Bringas, John R.; Berger, Mitchel S.; Bankiewicz, Krystof S.
2011-01-01
Background/Aims A skull-mounted aiming device and integrated software platform has been developed for MRI-guided neurological interventions. In anticipation of upcoming gene therapy clinical trials, we adapted this device for real-time convection-enhanced delivery of therapeutics via a custom-designed infusion cannula. The targeting accuracy of this delivery system and the performance of the infusion cannula were validated in nonhuman primates. Methods Infusions of gadoteridol were delivered to multiple brain targets and the targeting error was determined for each cannula placement. Cannula performance was assessed by analyzing gadoteridol distributions and by histological analysis of tissue damage. Results The average targeting error for all targets (n = 11) was 0.8 mm (95% CI = 0.14). For clinically relevant volumes, the distribution volume of gadoteridol increased as a linear function (R2 = 0.97) of the infusion volume (average slope = 3.30, 95% CI = 0.2). No infusions in any target produced occlusion, cannula reflux or leakage from adjacent tracts, and no signs of unexpected tissue damage were observed. Conclusions This integrated delivery platform allows real-time convection-enhanced delivery to be performed with a high level of precision, predictability and safety. This approach may improve the success rate for clinical trials involving intracerebral drug delivery by direct infusion. PMID:21494065
Peng, Xingxing; Guo, Zheng; Zhang, Yujiao; Li, Jun
2017-07-14
The Loess Plateau, China, is the world's largest apple-producing region, and over 80% of its orchards are in rainfed (dryland) areas. Desiccation of the deep soil layer under dryland apple orchards is the main stressor of apple production in this region, and fertilization is one factor that causes it. Given its demonstrated applicability and precision, the Environmental Policy Integrated Climate (EPIC) model was used to simulate the dynamics of fruit yield and deep soil desiccation in apple orchards under six fertilization treatments. Over the 45 years of the study, the annual fruit yield under the fertilization treatments initially increased and then decreased in a fluctuating manner, and the average fruit yields were 24.42, 27.27, 28.69, 29.63, 30.49 and 29.43 t/ha in the respective treatments. As fertilization increased, the yield of the apple orchards first rose and then declined, desiccation of the soil layers occurred earlier and extended deeper, and the average annual water consumption, over-consumption and water use efficiency all increased. In terms of apple yields, sustainable soil water use, and economic benefits, the most appropriate fertilization rate for drylands in Luochuan is 360-480 kg/ha N and 180-240 kg/ha P.
NASA Astrophysics Data System (ADS)
Preston, Daniel; Hill, Larry; Johnson, Carl
2015-06-01
In this paper we describe a novel shock sensitivity test, the Gap Stick Test, which is a generalized variant of the ubiquitous Gap Test. Despite the popularity of the Gap Test, it has some disadvantages: multiple tests must be fired to obtain a single metric, and many tests must be fired to obtain its value to high precision and confidence. Our solution is a test wherein multiple gap tests are joined in series to form a rate stick. The complex re-initiation character of the traditional gap test is thereby retained, but the propagation speed is steady when measured at periodic intervals, and initiation delay in individual segments acts to decrement the average speed. We measure the shock arrival time before and after each inert gap, and compute the average detonation speed through the HE alone (discounting the gap thicknesses). We perform tests for a range of gap thicknesses. We then plot the aforementioned propagation speed as a function of gap thickness. The resulting curve has the same basic structure as a Diameter Effect (DE) curve, and (like the DE curve) terminates at a failure point. Comparison between experiment and hydrocode calculations using ALE3D and the Ignition and Growth reactive burn model calibrated for short duration shock inputs in PBX 9501 is discussed.
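The speed computation described above, an average detonation speed through the HE segments alone, discounting time spent crossing the inert gaps, can be sketched directly from interface arrival times. The segment lengths and times below are made-up numbers, not data from the test:

```python
# Sketch of the Gap Stick speed metric: shock arrival times are recorded
# at each HE/gap interface, and the average detonation speed is computed
# over the HE segments only, excluding transit through the inert gaps.
# All lengths and times here are illustrative assumptions.

def average_he_speed(segment_lengths_mm, entry_times_us, exit_times_us):
    """Average detonation speed (mm/us) through the HE segments alone."""
    total_length = sum(segment_lengths_mm)
    total_time = sum(t_out - t_in
                     for t_in, t_out in zip(entry_times_us, exit_times_us))
    return total_length / total_time

# Three 50 mm HE segments; entry/exit times (us) at each segment boundary.
lengths = [50.0, 50.0, 50.0]
t_in = [0.0, 7.5, 15.0]
t_out = [6.0, 13.5, 21.0]
print(f"{average_he_speed(lengths, t_in, t_out):.3f} mm/us")
```

Initiation delay in any one segment lengthens that segment's transit time and so decrements this average, which is what makes the speed-versus-gap-thickness curve analogous to a diameter-effect curve.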
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hao; Naselsky, Pavel; Mohayaee, Roya, E-mail: liuhao@nbi.dk, E-mail: roya@iap.fr, E-mail: naselsky@nbi.dk
2016-06-01
The existence of critical points for the peculiar velocity field is a natural feature of the correlated vector field. These points appear at the junctions of velocity domains with different orientations of their averaged velocity vectors. Since peculiar velocities are an important cause of the scatter in the Hubble expansion rate, we propose that a more precise determination of the Hubble constant can be made by restricting analysis to a subsample of observational data containing only the zones around the critical points of the peculiar velocity field, associated with voids and saddle points. On large scales the critical points, where the first derivative of the gravitational potential vanishes, can easily be identified using the density field and classified by the behavior of the Hessian of the gravitational potential. We use high-resolution N-body simulations to show that these regions are stable in time and hence are excellent tracers of the initial conditions. Furthermore, we show that the variance of the Hubble flow can be substantially minimized by restricting observations to the subsample of such regions of vanishing velocity, instead of aiming to increase the statistics by averaging indiscriminately over the full data sets, as is the common approach.
Variability in surface ECG morphology: signal or noise?
NASA Technical Reports Server (NTRS)
Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.
1988-01-01
Using data collected from canine models of acute myocardial ischemia, we investigated two issues of major relevance to electrocardiographic signal averaging: ECG epoch alignment, and the spectral characteristics of the beat-to-beat variability in ECG morphology. With initial digitization rates of 1 kHz, an iterative a posteriori matched-filtering alignment scheme, and linear interpolation, we demonstrated that there is sufficient information in the body surface ECG to merit alignment to a precision of 0.1 ms. Applying this technique to align QRS complexes and atrial pacing artifacts independently, we demonstrated that the conduction delay from atrial stimulus to ventricular activation may be so variable as to preclude using atrial pacing as an alignment mechanism, and that this variability in conduction time may be modulated at the frequency of respiration and at a much lower frequency (0.02-0.03 Hz). Using a multidimensional spectral technique, we investigated the beat-to-beat variability in ECG morphology, demonstrating that the frequency spectrum of ECG morphological variation reveals a readily discernible modulation at the frequency of respiration. In addition, this technique detects a subtle beat-to-beat alternation in surface ECG morphology which accompanies transient coronary artery occlusion. We conclude that physiologically important information may be stored in the variability of the surface electrocardiogram, and that this information is lost by conventional averaging techniques.
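Sub-sample alignment of the kind described, cross-correlating each epoch against a template and refining the integer-lag peak by interpolation, can be sketched as follows. The Gaussian "QRS" template, the synthetic beat, and the 1 kHz sampling rate are assumptions; the paper's actual matched-filter design is not reproduced here:

```python
import numpy as np

# Sketch of sub-sample ECG epoch alignment: cross-correlate each beat
# with a template (a matched filter), then refine the integer-lag
# correlation peak with parabolic interpolation to reach a fraction of
# the sampling interval. Template, beat, and 1 kHz rate are synthetic.

def fractional_lag(template, beat):
    """Lag (in samples) that best aligns `beat` to `template`."""
    c = np.correlate(beat, template, mode="full")
    k = int(np.argmax(c))
    # Parabolic refinement around the correlation peak.
    if 0 < k < len(c) - 1:
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        k = k + 0.5 * (c[k - 1] - c[k + 1]) / denom if denom != 0 else k
    return k - (len(template) - 1)

fs = 1000.0  # Hz, matching the digitization rate in the abstract
t = np.arange(256) / fs
template = np.exp(-((t - 0.128) ** 2) / (2 * 0.005 ** 2))  # Gaussian "QRS"
beat = np.roll(template, 3)                                # shifted copy
print(f"estimated shift: {fractional_lag(template, beat) / fs * 1000:.2f} ms")
```

With a well-matched template, the parabolic refinement step is what pushes alignment precision below one sample, i.e., below 1 ms at a 1 kHz digitization rate.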
Observations of high manganese layers by the Curiosity rover at the Kimberley, Gale crater, Mars
NASA Astrophysics Data System (ADS)
Lanza, N.; Wiens, R. C.; Fischer, W. W.; Grotzinger, J. P.; Cousin, A.; Rice, M. S.; Clark, B. C.; Arvidson, R. E.; Hurowitz, J.; Gellert, R.; McLennan, S. M.; Maurice, S.; Mangold, N.; Le Mouelic, S.; Anderson, R. B.; Nachon, M.; Ollila, A.; Schmidt, M. E.; Berger, J. A.; Blank, J. G.; Clegg, S. M.; Forni, O.; Hardgrove, C. J.; Hardy, K.; Johnson, J. R.; Melikechi, N.; Newsom, H. E.; Sautter, V.; Martín-Torres, J.; Zorzano, M. P.
2014-12-01
The Gravity Recovery and Interior Laboratory (GRAIL) spacecraft were designed to map the structure of the Moon through high-precision global gravity mapping. The mission consisted of two spacecraft with Ka-band inter-satellite tracking complemented by tracking from Earth. The mission had two phases: a primary mapping mission from March 1 until May 29, 2012 at an average altitude of 50 km, and an extended mission from August 30 until December 14, 2012, with an average altitude of 23 km before November 18, and 20 and 11 km after. High-resolution gravity field models using both these data sets have been estimated, with the current resolution being degree and order 1080 in spherical harmonics. Here, we focus on aspects of the analysis of the GRAIL data: we investigate eclipse modeling, the influence of empirical accelerations on the results, and we discuss the inversion of large-scale systems. In addition to global models we also estimated local gravity adjustments in areas of particular interest such as Mare Orientale, the south pole area, and the farside. We investigate the use of Ka-band Range Rate (KBRR) data versus numerical derivatives of KBRR data, and show that the latter have the capability to locally improve correlations with topography.
Independence of motor unit recruitment and rate modulation during precision force control.
Kamen, G; Du, D C
1999-01-01
The vertebrate motor system chiefly employs motor unit recruitment and rate coding to modulate muscle force output. In this paper, we studied how the recruitment of new motor units altered the firing rate of already-active motor units during precision force production in the first dorsal interosseous muscle. Six healthy adults performed linearly increasing isometric voluntary contractions while motor unit activity and force output were recorded. After motor unit discharges were identified, motor unit firing rates were calculated before and after the instances of new motor unit recruitment. Three procedures were applied to compute motor unit firing rate, including the mean of a fixed number of inter-spike intervals and the constant width weighted Hanning window filter method, as well as a modified boxcar technique. In contrast to previous reports, the analysis of the firing rates of over 200 motor units revealed that reduction of the active firing rates was not a common mechanism used to accommodate the twitch force produced by the recruitment of a new motor unit. Similarly, during de-recruitment there was no tendency for motor unit firing rates to increase immediately following the cessation of activity in other motor units. Considerable consistency in recruitment behavior was observed during repeated contractions. However, firing rates during repeated contractions demonstrated considerably more fluctuation. It is concluded that the neuromuscular system does not use short-term preferential motor unit disfacilitation to effect precise regulation of muscular force output.
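The first of the three firing-rate estimators mentioned, the mean of a fixed number of inter-spike intervals, can be sketched in a few lines. The spike train below is synthetic, and the window of five intervals is an assumed choice:

```python
# Sketch of one firing-rate estimator named in the abstract: the rate at
# a given discharge is the reciprocal of the mean of a fixed number of
# preceding inter-spike intervals (ISIs). The spike train is synthetic.

def firing_rate(spike_times_s, index, n_isi=5):
    """Rate (imp/s) at spike `index`, from the mean of the last n_isi ISIs."""
    if index < n_isi:
        raise ValueError("not enough preceding intervals")
    isis = [spike_times_s[i] - spike_times_s[i - 1]
            for i in range(index - n_isi + 1, index + 1)]
    return 1.0 / (sum(isis) / n_isi)

# A motor unit discharging at a steady 80 ms interval (12.5 imp/s).
spikes = [0.080 * i for i in range(20)]
print(f"{firing_rate(spikes, 10):.1f} imp/s")
```

The window length trades responsiveness for smoothness, which is why the study compared this estimator against Hanning-window and boxcar alternatives.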
NASA Astrophysics Data System (ADS)
Kim-Hak, D.; Hoffnagle, J.; Rella, C.; Sun, M.
2016-12-01
Oxygen is a major and vital component of the Earth's atmosphere, representing about 21% of its composition. It is consumed or produced through biochemical processes such as combustion, respiration, and photosynthesis. Although atmospheric oxygen is not a greenhouse gas, it can be used as a top-down constraint on the carbon cycle. Observed variations of oxygen in the atmosphere are very small, on the order of a few ppm. This presents the main technical challenge for measurement, as a very high level of precision is required, and only a few methods, including mass spectrometry, fuel cells, and paramagnetic analyzers, are capable of achieving it. Here we present new developments of a high-precision gas analyzer that utilizes the technique of Cavity Ring-Down Spectroscopy to measure oxygen concentration and oxygen isotopes. Its compact and rugged design, combined with high precision and long-term stability, allows the user to deploy the instrument in the field for continuous monitoring of atmospheric oxygen levels. Measurements have a 1-σ 5-minute averaging precision of 1-2 ppm for O2 over a dynamic range of 0-20%. We will present supplemental data acquired from our 10 m tower measurements in Santa Clara, CA.
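An "N-minute averaging precision" figure like the one quoted is typically obtained by averaging a steady-gas time series into fixed-length blocks and reporting the 1-σ scatter of the block means. A sketch under that assumption, with a simulated 1 Hz white-noise series standing in for real analyzer data:

```python
import random
import statistics

# Sketch of a block-averaging precision estimate: average a steady-gas
# time series into 5-minute blocks and report the 1-sigma scatter of
# the block means. The 1 Hz white-noise series is simulated; the real
# analyzer's noise spectrum and drift are not modeled.

random.seed(1)
raw = [209460.0 + random.gauss(0.0, 20.0) for _ in range(3600)]  # ppm, 1 Hz

block = 300  # 5 minutes of 1 Hz samples
means = [sum(raw[i:i + block]) / block for i in range(0, len(raw), block)]
sigma = statistics.stdev(means)
print(f"5-minute averaging precision: {sigma:.2f} ppm (1-sigma)")
```

For white noise the block-mean scatter shrinks as 1/√N with averaging length; drift in a real instrument eventually breaks that scaling, which is why long-term stability is reported separately.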
Goble, Daniel J; Khan, Ehran; Baweja, Harsimran S; O'Connor, Shawn M
2018-04-11
Changes in postural sway measured via force plate center of pressure have been associated with many aspects of human motor ability. A previous study validated the accuracy and precision of a relatively new, low-cost and portable force plate called the Balance Tracking System (BTrackS). This work compared a laboratory-grade force plate versus BTrackS during human-like dynamic sway conditions generated by an inverted pendulum device. The present study sought to extend previous validation attempts for BTrackS using a more traditional point of application (POA) approach. Computer numerical control (CNC) guided application of ∼155 N of force was applied five times to each of 21 points on five different BTrackS Balance Plate (BBP) devices with a hex-nose plunger. Results showed excellent agreement (ICC > 0.999) between the POAs and measured COP by the BBP devices, as well as high accuracy (<1% average percent error) and precision (<0.1 cm average standard deviation of residuals). The ICC between BBP devices was exceptionally high (ICC > 0.999) providing evidence of almost perfect inter-device reliability. Taken together, these results provide an important, static corollary to the previously obtained dynamic COP results from inverted pendulum testing of the BBP. Copyright © 2018 Elsevier Ltd. All rights reserved.
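The two summary statistics reported, average percent error for accuracy and the standard deviation of residuals for precision, can be sketched from paired applied/measured positions. The coordinate values below are illustrative, not data from the study:

```python
import statistics

# Sketch of the accuracy/precision summary described above: residuals
# between applied points of application (POA) and plate-reported COP
# give a percent-error accuracy figure and an SD-of-residuals precision
# figure. All coordinate values (cm) are illustrative assumptions.

poa      = [0.0, 5.0, 10.0, 15.0, 20.0]      # applied positions (cm)
measured = [0.02, 5.03, 9.97, 15.05, 19.98]  # plate-reported COP (cm)

residuals = [m - p for m, p in zip(measured, poa)]
precision_sd = statistics.stdev(residuals)
pct_errors = [abs(r) / p * 100 for r, p in zip(residuals, poa) if p != 0]
avg_pct_error = sum(pct_errors) / len(pct_errors)

print(f"precision (SD of residuals): {precision_sd:.3f} cm")
print(f"average percent error: {avg_pct_error:.2f}%")
```

The study's thresholds (<1% average percent error, <0.1 cm SD of residuals) correspond directly to these two quantities.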
Rapid, Time-Division Multiplexed, Direct Absorption- and Wavelength Modulation-Spectroscopy
Klein, Alexander; Witzel, Oliver; Ebert, Volker
2014-01-01
We present a tunable diode laser spectrometer with a novel, rapid time multiplexed direct absorption- and wavelength modulation-spectroscopy operation mode. The new technique allows enhancing the precision and dynamic range of a tunable diode laser absorption spectrometer without sacrificing accuracy. The spectroscopic technique combines the benefits of absolute concentration measurements using calibration-free direct tunable diode laser absorption spectroscopy (dTDLAS) with the enhanced noise rejection of wavelength modulation spectroscopy (WMS). In this work we demonstrate for the first time a 125 Hz time division multiplexed (TDM-dTDLAS-WMS) spectroscopic scheme by alternating the modulation of a DFB-laser between a triangle-ramp (dTDLAS) and an additional 20 kHz sinusoidal modulation (WMS). The absolute concentration measurement via the dTDLAS-technique allows one to simultaneously calibrate the normalized 2f/1f-signal of the WMS-technique. A dTDLAS/WMS-spectrometer at 1.37 μm for H2O detection was built for experimental validation of the multiplexing scheme over a concentration range from 50 to 3000 ppmV (0.1 MPa, 293 K). A precision of 190 ppbV was achieved with an absorption length of 12.7 cm and an averaging time of two seconds. Our results show a five-fold improvement in precision over the entire concentration range and a significantly decreased averaging time of the spectrometer. PMID:25405508
Vongsak, Boonyadist; Sithisarn, Pongtip; Gritsanapan, Wandee
2013-01-01
Moringa oleifera Lamarck (Moringaceae) is used as a multipurpose medicinal plant for the treatment of various diseases. Isoquercetin, astragalin, and crypto-chlorogenic acid have been previously found to be major active components in the leaves of this plant. In this study, a thin-layer-chromatography (TLC-)densitometric method was developed and validated for simultaneous quantification of these major components in the 70% ethanolic extracts of M. oleifera leaves collected from 12 locations. The average amounts of crypto-chlorogenic acid, isoquercetin, and astragalin were found to be 0.0473, 0.0427, and 0.0534% dry weight, respectively. The method was validated for linearity, precision, accuracy, limit of detection, limit of quantitation, and robustness. The linearity was obtained in the range of 100–500 ng/spot with a correlation coefficient (r) over 0.9961. Intraday and interday precisions demonstrated relative standard deviations of less than 5%. The accuracy of the method was confirmed by determining the recovery. The average recoveries of each component from the extracts were in the range of 98.28 to 99.65%. Additionally, the leaves from Chiang Mai province contained the highest amounts of all active components. The proposed TLC-densitometric method was simple, accurate, precise, and cost-effective for routine quality controlling of M. oleifera leaf extracts. PMID:23533530
Dai, Wujiao; Shi, Qiang; Cai, Changsheng
2017-01-01
The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including sidereal filtering algorithm and multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error in observation equation to the adjustment model to improve the precision of BDS static baseline solutions. And the results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively. PMID:28387744
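Coordinate-domain sidereal filtering of the kind mentioned exploits the fact that multipath errors repeat with the satellite ground-track period, so the previous day's residuals, shifted by that period, can be subtracted from today's series. A minimal sketch under assumed values; the ~86164 s shift is the GPS-like repeat period, and BDS GEO/IGSO repeat periods differ slightly from it:

```python
# Minimal sketch of coordinate-domain sidereal filtering: subtract the
# previous day's coordinate residuals, advanced by the ground-track
# repeat period, from today's series. The repeat period, sampling rate,
# and residual series are all illustrative assumptions.

def sidereal_filter(today, yesterday, rate_hz=1.0, shift_s=86164.0):
    """Subtract yesterday's residuals, advanced by the repeat period."""
    # Offset between a solar day (86400 s) and the repeat period, in samples.
    lag = round((86400.0 - shift_s) * rate_hz)
    return [t - yesterday[i + lag] if 0 <= i + lag < len(yesterday) else t
            for i, t in enumerate(today)]

# Toy case: identical multipath on both days, zero-lag repeat period,
# so the filter removes the systematic error entirely.
yesterday = [0.01 * (i % 30) for i in range(600)]
today = list(yesterday)
filtered = sidereal_filter(today, yesterday, shift_s=86400.0)
print(max(abs(v) for v in filtered))
```

Real series have day-to-day changes in geometry and random noise, which is why the abstract reports partial (15-52%) rather than complete RMS reductions.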
Bednarczyk, Robert A.; Richards, Jennifer L.; Allen, Kristen E.; Warraich, Gohar J.; Omer, Saad B.
2017-01-01
Objectives. To evaluate trends in rates of personal belief exemptions (PBEs) to immunization requirements for private kindergartens in California that practice alternative educational methods. Methods. We used California Department of Public Health data on kindergarten PBE rates from 2000 to 2014 to compare annual average increases in PBE rates between schools. Results. Alternative schools had an average PBE rate of 8.7%, compared with 2.1% among public schools. Waldorf schools had the highest average PBE rate of 45.1%, which was 19 times higher than in public schools (incidence rate ratio = 19.1; 95% confidence interval = 16.4, 22.2). Montessori and holistic schools had the highest average annual increases in PBE rates, slightly higher than Waldorf schools (Montessori: 8.8%; holistic: 7.1%; Waldorf: 3.6%). Conclusions. Waldorf schools had exceptionally high average PBE rates, and Montessori and holistic schools had higher annual increases in PBE rates. Children in these schools may be at higher risk for spreading vaccine-preventable diseases if trends are not reversed. PMID:27854520
Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.
2015-01-01
Objectives The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. 
This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health. PMID:26098858
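The rank and rank-quartile computation described above can be sketched from posterior samples alone; in this illustration the draws are synthetic stand-ins for output of the fitted hierarchical Bayesian model, and the county count and rate scale are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of a health-outcome rate for 12 counties
# (rows = posterior draws, columns = counties); real draws would come from
# the fitted hierarchical model's fixed and random effects.
samples = rng.normal(loc=np.arange(12), scale=2.0, size=(4000, 12))

# Rank counties within each posterior draw (rank 1 = lowest rate)
ranks = samples.argsort(axis=1).argsort(axis=1) + 1

# Quartile of each rank (12 counties -> quartiles of size 3)
n = samples.shape[1]
quartiles = np.ceil(ranks / (n / 4)).astype(int)

# Assigned quartile from the median rank, and the probability that a
# county's rank falls in its assigned quartile across posterior draws
assigned = np.ceil(np.median(ranks, axis=0) / (n / 4)).astype(int)
prob_in_quartile = (quartiles == assigned).mean(axis=0)
print(prob_in_quartile.round(2))
```

Counties whose rates are well separated from their neighbors get probabilities near 1; counties near a quartile boundary get lower probabilities, which is exactly the rank-precision signal the study compares across models.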
Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker; Fredenslund, Anders M; Scheutz, Charlotte
2018-09-01
The tracer gas dispersion method (TDM) is a remote sensing method for quantifying fugitive emissions that relies on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio) resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument and tracer gas combinations. Analytical instruments with a high detection frequency and good precision were established as the most suitable for successful TDM application. The application of an instrument with poor precision could be overcome only to some extent by applying a higher tracer gas release rate. A sideward misplacement of the tracer gas release point of about 250 m resulted in an emission rate comparable to those obtained using a tracer gas correctly simulating the methane emission. Conversely, an upwind misplacement of about 150 m resulted in an emission rate overestimation of almost 50%, showing the importance of proper emission source simulation when applying the TDM. Copyright © 2018 Elsevier B.V. All rights reserved.
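The TDM calculation itself reduces to scaling the known tracer release rate by the plume-integrated target-to-tracer concentration ratio and the molar-mass ratio. A minimal sketch, with a hypothetical transect (acetylene as tracer, methane as target; all values invented for illustration):

```python
import numpy as np

def tdm_emission_rate(c_target, c_tracer, q_tracer, m_target, m_tracer):
    """Tracer gas dispersion method: target-gas emission rate (kg/h) from
    plume-transect mixing ratios (above background) of target and tracer,
    the known tracer release rate q_tracer (kg/h), and molar masses (g/mol)."""
    # Plume-integrated concentration ratio; with uniform sampling along the
    # transect, the spacing cancels, so plain sums suffice.
    ratio = c_target.sum() / c_tracer.sum()
    return q_tracer * ratio * (m_target / m_tracer)

# Hypothetical transect: acetylene (26 g/mol) released at 2 kg/h, and a
# methane (16 g/mol) plume exactly twice as strong as the tracer plume.
x = np.linspace(-1.0, 1.0, 201)
tracer = np.exp(-x**2 / 0.1)
methane = 2.0 * tracer
q_ch4 = tdm_emission_rate(methane, tracer, q_tracer=2.0, m_target=16.0, m_tracer=26.0)
print(round(q_ch4, 3))  # 2 * 2.0 kg/h * 16/26 ≈ 2.462 kg/h
```

The release-precision ratio discussed in the abstract enters through the noise on `c_tracer`: a noisier instrument needs a larger `q_tracer` to keep the integrated ratio stable.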
3D reconstruction optimization using imagery captured by unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bassie, Abby L.; Meacham, Sean; Young, David; Turnage, Gray; Moorhead, Robert J.
2017-05-01
Because unmanned air vehicles (UAVs) are emerging as an indispensable image acquisition platform in precision agriculture, it is vitally important that researchers understand how to optimize UAV camera payloads for analysis of surveyed areas. In this study, imagery captured by a Nikon RGB camera attached to a Precision Hawk Lancaster was used to survey an agricultural field from six different altitudes ranging from 45.72 m (150 ft.) to 121.92 m (400 ft.). After collecting imagery, two different software packages (MeshLab and AgiSoft) were used to measure predetermined reference objects within six three-dimensional (3-D) point clouds (one per altitude scenario). In-silico measurements were then compared to actual reference object measurements, as recorded with a tape measure. Deviations of in-silico measurements from actual measurements were recorded as Δx, Δy, and Δz. The average measurement deviation in each coordinate direction was then calculated for each of the six flight scenarios. Results from MeshLab vs. AgiSoft offered insight into the effectiveness of GPS-defined point cloud scaling in comparison to user-defined point cloud scaling. In three of the six flight scenarios flown, MeshLab's 3D imaging software (user-defined scale) was able to measure object dimensions from 50.8 to 76.2 cm (20-30 inches) with greater than 93% accuracy. The largest average deviation in any flight scenario from actual measurements was 14.77 cm (5.82 in.). Analysis of the point clouds in AgiSoft (GPS-defined scale) yielded even smaller Δx, Δy, and Δz than the MeshLab measurements in over 75% of the flight scenarios. The precision of these results is satisfactory for a wide variety of precision agriculture applications focused on differentiating and identifying objects using remote imagery.
Simulated retrievals for the remote sensing of CO2, CH4, CO, and H2O from geostationary orbit
NASA Astrophysics Data System (ADS)
Xi, X.; Natraj, V.; Shia, R. L.; Luo, M.; Zhang, Q.; Newman, S.; Sander, S. P.; Yung, Y. L.
2015-11-01
The Geostationary Fourier Transform Spectrometer (GeoFTS) is designed to measure high-resolution spectra of reflected sunlight in three near-infrared bands centered around 0.76, 1.6, and 2.3 μm and to deliver simultaneous retrievals of column-averaged dry air mole fractions of CO2, CH4, CO, and H2O (denoted XCO2, XCH4, XCO, and XH2O, respectively) at different times of day over North America. In this study, we perform radiative transfer simulations over both clear-sky and all-sky scenes expected to be observed by GeoFTS and estimate the prospective performance of retrievals based on results from Bayesian error analysis and characterization. We find that, for simulated clear-sky retrievals, the average retrieval biases and single-measurement precisions are < 0.2 % for XCO2, XCH4, and XH2O, and < 2 % for XCO, when the a priori values have a bias of 3 % and an uncertainty of 3 %. In addition, an increase in the amount of aerosols and ice clouds leads to a notable increase in the retrieval biases and slight worsening of the retrieval precisions. Furthermore, retrieval precision is a strong function of signal-to-noise ratio and spectral resolution. This simulation study can help guide decisions on the design of the GeoFTS observing system, which can result in cost-effective measurement strategies while achieving satisfactory levels of retrieval precisions and biases. The simultaneous retrievals at different times of day will be important for more accurate estimation of carbon sources and sinks on fine spatiotemporal scales and for studies related to the atmospheric component of the water cycle.
[Navigated drilling for femoral head necrosis. Experimental and clinical results].
Beckmann, J; Tingart, M; Perlick, L; Lüring, C; Grifka, J; Anders, S
2007-05-01
In the early stages of osteonecrosis of the femoral head, core decompression by exact drilling into the ischemic areas can reduce pain and achieve reperfusion. Using computer-aided surgery, the precision of the drilling can be improved while simultaneously lowering radiation exposure time for both staff and patients. We describe the experimental and clinical results of drilling under the guidance of the fluoroscopically based VectorVision navigation system (BrainLAB, Munich, Germany). A total of 70 sawbones were prepared mimicking an osteonecrosis of the femoral head. In two experimental models, bone only and obesity, as well as in a clinical setting involving ten patients with osteonecrosis of the femoral head, the precision and the duration of radiation exposure were compared between the VectorVision system and conventional drilling. No target was missed. For both models, there was a statistically significant difference in precision, the number of drilling corrections, and radiation exposure time. The average distance to the desired midpoint of the lesion was 0.48 mm for navigated drilling and 1.06 mm for conventional drilling, the average number of drilling corrections was 0.175 versus 2.1, and the radiation exposure time was less than 1 s versus 3.6 s, respectively. In the clinical setting, the reduction in radiation exposure (below 1 s for navigation compared to 56 s for the conventional technique) as well as in drilling corrections (0.2 compared to 3.4) was also significant. Computer-guided drilling using the fluoroscopically based VectorVision navigation system shows clearly improved precision with an enormous simultaneous reduction in radiation exposure. It is therefore recommended for clinical routine.
Williams, C.T.; Sheriff, M.J.; Schmutz, J.A.; Kohl, F.; Toien, O.; Buck, C.L.; Barnes, B.M.
2011-01-01
Precise measures of phenology are critical to understanding how animals organize their annual cycles and how individuals and populations respond to climate-induced changes in physical and ecological stressors. We show that patterns of core body temperature (Tb) can be used to precisely determine the timing of key seasonal events including hibernation, mating and parturition, and immergence into and emergence from the hibernacula in free-living arctic ground squirrels (Urocitellus parryii). Using temperature loggers that recorded Tb every 20 min for up to 18 months, we monitored core Tb from three females that subsequently gave birth in captivity and from 66 female and 57 male ground squirrels free-living in the northern foothills of the Brooks Range, Alaska. In addition, dates of emergence from hibernation were visually confirmed for four free-living male squirrels. Average Tb in captive females decreased by 0.5–1.0°C during gestation and abruptly increased by 1–1.5°C on the day of parturition. In free-living females, similar shifts in Tb were observed in 78% (n = 9) of yearlings and 94% (n = 31) of adults; females without the shift are assumed not to have given birth. Three of four ground squirrels for which dates of emergence from hibernation were visually confirmed did not exhibit obvious diurnal rhythms in Tb until they first emerged onto the surface, when Tb patterns became diurnal. In free-living males undergoing reproductive maturation, this pre-emergence euthermic interval averaged 20.4 days (n = 56). Tb loggers represent a cost-effective and logistically feasible method to precisely investigate the phenology of reproduction and hibernation in ground squirrels.
Improving Precision, Maintaining Accuracy, and Reducing Acquisition Time for Trace Elements in EPMA
NASA Astrophysics Data System (ADS)
Donovan, J.; Singer, J.; Armstrong, J. T.
2016-12-01
Trace element precision in electron probe microanalysis (EPMA) is limited by intrinsic random variation in the x-ray continuum. Traditionally we characterize background intensity by measuring on either side of the emission line and interpolating the intensity underneath the peak to obtain the net intensity. Alternatively, we can measure the background intensity at the on-peak spectrometer position using a number of standard materials that do not contain the element of interest. This so-called mean atomic number (MAN) background calibration (Donovan et al., 2016) uses a set of standard measurements, covering an appropriate range of average atomic number, to iteratively estimate the continuum intensity for the unknown composition (and hence average atomic number). We will demonstrate that, at least for materials with a relatively simple matrix such as SiO2, TiO2, ZrSiO4, etc., where one may obtain a matrix-matched standard for use in the so-called "blank correction", we can obtain trace element accuracy comparable to traditional off-peak methods, and with improved precision, in about half the time. Reference: Donovan, Singer and Armstrong, "A New EPMA Method for Fast Trace Element Analysis in Simple Matrices", American Mineralogist, v. 101, p. 1839-1853, 2016. Figure 1 shows uranium concentration line profiles from quantitative x-ray maps (20 keV, 100 nA, 5 um beam size and 4000 msec per pixel) for both off-peak and MAN background methods without (a) and with (b) the blank correction applied: precision is significantly improved compared with traditional off-peak measurements while, in this case, the blank correction provides a small but discernible improvement in accuracy.
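The core of the MAN calibration can be sketched as a fit of on-peak continuum intensity against mean atomic number across element-free standards, then evaluated at the unknown's mean atomic number to predict the background under the peak. The standards, intensities, and the linear fit below are hypothetical stand-ins for a real calibration set (the published method iterates, since the unknown's mean atomic number depends on its composition):

```python
import numpy as np

# Hypothetical MAN calibration: on-peak continuum intensities (cps/nA)
# measured on standards that do NOT contain the element of interest,
# against their mean atomic number Zbar.
zbar_std = np.array([10.8, 12.0, 14.1, 16.6, 20.2, 25.4])
continuum = np.array([0.52, 0.60, 0.71, 0.85, 1.04, 1.33])

# Fit a low-order polynomial (here linear) to the calibration curve
coeffs = np.polyfit(zbar_std, continuum, deg=1)

# Predict the background under the peak for an unknown with Zbar ~ 15.0,
# then subtract it from the measured on-peak intensity to get net counts.
zbar_unknown = 15.0
background = np.polyval(coeffs, zbar_unknown)
on_peak = 0.83  # hypothetical measured on-peak intensity (cps/nA)
net = on_peak - background
print(round(background, 3), round(net, 3))
```

Because no off-peak measurements are needed, all counting time goes to the peak position, which is the source of the time saving and precision gain the abstract reports.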
Rothschild, Adam S.; Lehmann, Harold P.
2005-01-01
Objective: The aim of this study was to preliminarily determine the feasibility of probabilistically generating problem-specific computerized provider order entry (CPOE) pick-lists from a database of explicitly linked orders and problems from actual clinical cases. Design: In a pilot retrospective validation, physicians reviewed internal medicine cases consisting of the admission history and physical examination and orders placed using CPOE during the first 24 hours after admission. They created coded problem lists and linked orders from individual cases to the problem for which they were most indicated. Problem-specific order pick-lists were generated by including a given order in a pick-list if the probability of linkage of order and problem (PLOP) equaled or exceeded a specified threshold. PLOP for a given linked order-problem pair was computed as its prevalence among the other cases in the experiment with the given problem. The orders that the reviewer linked to a given problem instance served as the reference standard to evaluate its system-generated pick-list. Measurements: Recall, precision, and length of the pick-lists. Results: Average recall reached a maximum of .67 with a precision of .17 and pick-list length of 31.22 at a PLOP threshold of 0. Average precision reached a maximum of .73 with a recall of .09 and pick-list length of .42 at a PLOP threshold of .9. Recall varied inversely with precision in classic information retrieval behavior. Conclusion: We preliminarily conclude that it is feasible to generate problem-specific CPOE pick-lists probabilistically from a database of explicitly linked orders and problems. Further research is necessary to determine the usefulness of this approach in real-world settings. PMID:15684134
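The pick-list generation can be sketched as follows. This simplified version computes PLOP as an order's prevalence among all cases with the problem (the study excluded the case under evaluation, i.e. leave-one-out), and the problems and orders are invented for illustration:

```python
from collections import defaultdict

# Hypothetical linked order-problem database: problem -> one order set per
# case (the orders a physician linked to that problem in that case).
cases = {
    "pneumonia": [
        {"chest x-ray", "blood cultures", "ceftriaxone"},
        {"chest x-ray", "ceftriaxone"},
        {"chest x-ray", "blood cultures"},
        {"chest x-ray"},
    ],
}

def plop_picklist(problem, threshold):
    """Pick-list of orders whose probability of linkage to the problem
    (prevalence among cases with that problem) meets the threshold."""
    case_sets = cases[problem]
    counts = defaultdict(int)
    for orders in case_sets:
        for order in orders:
            counts[order] += 1
    n = len(case_sets)
    return sorted(o for o, c in counts.items() if c / n >= threshold)

# Low threshold -> long, high-recall list; high threshold -> short,
# high-precision list, mirroring the trade-off reported in the abstract.
print(plop_picklist("pneumonia", 0.5))   # chest x-ray 4/4, blood cultures 2/4, ceftriaxone 2/4
print(plop_picklist("pneumonia", 0.75))  # only chest x-ray survives
```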
Maui Optical Tracking and Identification Facility Transition Program.
1981-08-01
...precision at a rate of 50 samples per second on the system digital recorder along with time, housekeeping and ten selected dc channels which are recorded...with 12-bit precision at a rate of 90 samples per second (45 per mirror state). The digitally recorded ac and dc data can be retrieved post-mission and
Precision of hard structures used to estimate age of mountain Whitefish (Prosopium williamsoni)
Watkins, Carson J.; Ross, Tyler J.; Hardy, Ryan S.; Quist, Michael C.
2015-01-01
The mountain whitefish (Prosopium williamsoni) is a widely distributed salmonid in western North America that has decreased in abundance over portions of its distribution due to anthropogenic disturbances. In this investigation, we examined precision of age estimates derived from scales, pectoral fin rays, and sagittal otoliths from 167 mountain whitefish. Otoliths and pectoral fin rays were mounted in epoxy and cross-sectioned before examination. Scales were pressed onto acetate slides and resulting impressions were examined. Between-reader precision (i.e., between 2 readers), between-reader variability, and reader confidence ratings were compared among hard structures. Coefficient of variation (CV) in age estimates was lowest and percentage of exact agreement (PA-0) was highest for scales (CV = 5.9; PA-0 = 70%) compared to pectoral fin rays (CV =11.0; PA-0 = 58%) and otoliths (CV = 12.3; PA-0 = 55%). Median confidence ratings were significantly different (P ≤ 0.05) among all structures, with scales having the highest median confidence. Reader confidence decreased with fish age for scales and pectoral fin rays, but reader confidence increased with fish age for otoliths. In general, age estimates were more precise and reader confidence was higher for scales compared to pectoral fin rays and otoliths. This research will help fisheries biologists in selecting the most appropriate hard structure to use for future age and growth studies on mountain whitefish. In turn, selection of the most precise hard structure will lead to better estimates of dynamic rate functions.
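The two between-reader precision measures used above, percent exact agreement (PA-0) and the mean coefficient of variation across fish, can be sketched with hypothetical paired age estimates (the ages below are invented, not the study's data):

```python
import statistics

# Hypothetical age estimates from two independent readers for the same fish
reader1 = [3, 4, 4, 5, 6, 7, 8, 9, 10, 12]
reader2 = [3, 4, 5, 5, 6, 8, 8, 9, 11, 12]

# Percent exact agreement (PA-0): both readers assign the same age
pa0 = 100 * sum(a == b for a, b in zip(reader1, reader2)) / len(reader1)

# Mean CV: for each fish, SD of the two reads divided by their mean
# (x100), then averaged over all fish
cvs = []
for a, b in zip(reader1, reader2):
    m = statistics.mean([a, b])
    sd = statistics.stdev([a, b])
    cvs.append(100 * sd / m)
cv = statistics.mean(cvs)

print(f"PA-0 = {pa0:.0f}%, CV = {cv:.1f}")  # PA-0 = 70%
```

A lower CV and higher PA-0, as scales showed here, indicate that two readers recover the same age more consistently from that structure.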
Development of digital flow control system for multi-channel variable-rate sprayers
USDA-ARS?s Scientific Manuscript database
Precision modulation of nozzle flow rates is a critical step for variable-rate spray applications in orchards and ornamental nurseries. An automatic flow rate control system activated with microprocessors and pulse width modulation (PWM) controlled solenoid valves was developed to control flow rates...
Computation of breast ptosis from 3D surface scans of the female torso
Li, Danni; Cheong, Audrey; Reece, Gregory P.; Crosby, Melissa A.; Fingeret, Michelle C.; Merchant, Fatima A.
2016-01-01
Stereophotography is now finding a niche in clinical breast surgery, and several methods for quantitatively measuring breast morphology from 3D surface images have been developed. Breast ptosis (sagging of the breast), which refers to the extent by which the nipple is lower than the inframammary fold (the contour along which the inferior part of the breast attaches to the chest wall), is an important morphological parameter that is frequently used for assessing the outcome of breast surgery. This study presents a novel algorithm that utilizes three-dimensional (3D) features such as surface curvature and orientation for the assessment of breast ptosis from 3D scans of the female torso. The performance of the computational approach proposed was compared against the consensus of manual ptosis ratings by nine plastic surgeons, and that of current 2D photogrammetric methods. Compared to the 2D methods, the average accuracy for 3D features was ~13% higher, with an increase in precision, recall, and F-score of 37%, 29%, and 33%, respectively. The computational approach proposed provides an improved and unbiased objective method for rating ptosis when compared to qualitative visualization by observers, and distance based 2D photogrammetry approaches. PMID:27643463
Combining multiple ChIP-seq peak detection systems using combinatorial fusion.
Schweikert, Christina; Brown, Stuart; Tang, Zuojian; Smith, Phillip R; Hsu, D Frank
2012-01-01
Due to the recent rapid development of ChIP-seq technologies, which use high-throughput next-generation DNA sequencing to identify the targets of chromatin immunoprecipitation, an increasing amount of sequencing data is being generated, providing greater opportunity to analyze genome-wide protein-DNA interactions. In particular, we are interested in evaluating and enhancing computational and statistical techniques for locating protein binding sites. Many peak detection systems have been developed; in this study, we utilize the following six: CisGenome, MACS, PeakSeq, QuEST, SISSRs, and TRLocator. We define two methods to merge and rescore the regions of two peak detection systems and analyze the performance based on average precision and coverage of transcription start sites. The results indicate that ChIP-seq peak detection can be improved by fusion using score or rank combination. Our method of combination and fusion analysis provides a means for generic assessment of available technologies and systems and assists researchers in choosing an appropriate system (or fusion method) for analyzing ChIP-seq data. This analysis offers an alternate approach for increasing true positive rates while decreasing false positive rates, thereby improving the ChIP-seq peak identification process.
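Rank combination, one of the two fusion strategies, can be sketched as follows; the regions and scores are hypothetical, ties are not handled, and only regions called by both systems are fused, all for brevity:

```python
# Hypothetical scored peak regions from two detection systems (region -> score).
# Rank combination converts each system's scores to ranks and averages the
# ranks; score combination would average normalized scores instead.
sys_a = {"chr1:100-300": 9.1, "chr1:500-700": 7.4, "chr2:40-260": 3.2}
sys_b = {"chr1:100-300": 0.80, "chr2:40-260": 0.75, "chr1:500-700": 0.10}

def rank_map(scores):
    """Map each region to its rank, with rank 1 = highest score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {region: i + 1 for i, region in enumerate(ordered)}

ranks_a, ranks_b = rank_map(sys_a), rank_map(sys_b)
fused = {r: (ranks_a[r] + ranks_b[r]) / 2 for r in ranks_a}

# Re-ranked peak list: smaller fused rank = stronger consensus peak
for region in sorted(fused, key=fused.get):
    print(region, fused[region])
```

A region ranked highly by both systems (here chr1:100-300) rises to the top of the fused list, which is the mechanism by which fusion raises true positives relative to either system alone.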
NASA Astrophysics Data System (ADS)
Aoyama, Yuichi; Kim, Tae-Hee; Doi, Koichiro; Hayakawa, Hideaki; Higashi, Toshihiro; Ohsono, Shingo; Shibuya, Kazuo
2016-06-01
A dual-frequency GPS receiver was deployed on a floating iceberg downstream of the calving front of Shirase Glacier, East Antarctica, on 28 December 2011 for use as a floating buoy. The three-dimensional position of the buoy was obtained by GPS every 30 s with 4-5 cm precision for ca. 25 days. The height uncertainty of the 1-h averaged vertical position was ∼0.5 cm, even considering the uncertainties of un-modeled ocean loading effects. The daily evolution of north-south (NS), east-west (EW), and up-down (UD) motions shows periodic UD variations sometimes attaining an amplitude of 1 m. Observed amplitudes of tidal harmonics of major constituents were 88%-93% (O1) and 85%-88% (M2) of the values in the global ocean tide models FES2004 and TPXO-8 Atlas. The basal melting rate of the iceberg is estimated to be ∼0.6 m/day, based on a firn densification model and a quasi-linear sinking rate of the iceberg surface. The 30-s sampling, geodetic-mode GPS buoy helps to reveal ice-ocean dynamics around the calving fronts of Antarctic glaciers.
Zhou, Pinghong; Yao, Liqing; Qin, Xinyu; Xu, Meidong; Zhong, Yunshi; Chen, Weifeng
2009-02-01
The objective of this study was to determine the efficacy and safety of endoscopic submucosal dissection for locally recurrent colorectal cancer after previous endoscopic mucosal resection. A total of 16 patients with locally recurrent colorectal lesions were enrolled. A needle knife, an insulated-tip knife and a hook knife were used to resect the lesion along the submucosa. The rate of the curative resection, procedure time, and incidence of complications were evaluated. Of 16 lesions, 15 were completely resected with endoscopic submucosal dissection, yielding an en bloc resection rate of 93.8 percent. Histologic examination confirmed that lateral and basal margins were cancer-free in 14 patients (87.5 percent). The average procedure time was 87.2 +/- 60.7 minutes. None of the patients had immediate or delayed bleeding during or after endoscopic submucosal dissection. Perforation in one patient (6.3 percent) was the only complication and was managed conservatively. The mean follow-up period was 15.5 +/- 6.8 months; none of the patients experienced lesion residue or recurrence. Endoscopic submucosal dissection appears to be effective for locally recurrent colorectal cancer after previous endoscopic mucosal resection, making it possible to resect whole lesions and provide precise histologic information.
Detailed study of oxidation/wear mechanism in lox turbopump bearings
NASA Technical Reports Server (NTRS)
Chase, T. J.; Mccarty, J. P.
1993-01-01
Wear of 440C angular contact ball bearings of the phase 2 high pressure oxygen turbopump (HPOTP) of the space shuttle main engine (SSME) has been studied by means of various advanced nondestructive techniques (NDT) and modeled with reference to all known material, design, and operation variables. Three modes were found to dominate the wear scenario: adhesive/shear peeling (ASP), oxidation, and abrasion. Bearing wear was modeled in terms of the three modes. Lacking a comprehensive theory of rolling contact wear to date, each mode is modeled after well-established theories of sliding wear, while sliding velocity and distance are related to microsliding in ball-to-ring contacts. Microsliding, stress, temperature, and other contact variables are evaluated with the analytical software packages SHABERTH(TM)/SINDA(TM) and ADORE(TM). Empirical constants for the models are derived from NIST experiments by applying the models to the NIST wear data. The bearing wear model so established predicts the average ball wear rate for the HPOTP bearings quite well. The wear rate has been statistically determined for the entire population of flight and development bearings based on Rocketdyne records to date. Numerous illustrations are given.
A soft kinetic data structure for lesion border detection.
Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal
2010-06-15
Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become one of the main components of diagnostic procedures to assist dermatologists in their medical decision-making processes. Computer-aided segmentation and border detection in dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field, mainly because of inter- and intra-observer variations in human interpretation. In this study, a novel approach, the graph spanner, is proposed for automatic border detection in dermoscopic images. The approach uses a proximity graph representation of dermoscopic images to detect regions and borders in skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose borders, manually drawn by a dermatologist, are used as the ground truth. Error rates, false positives, and false negatives, along with true positives and true negatives, are quantified by digitally comparing results with the dermatologist's manually determined borders. The results show that the highest precision and recall rates obtained in determining lesion boundaries are 100%; over the whole dataset, accuracy averages 97.72% and the mean border error is 2.28%.
NASA Technical Reports Server (NTRS)
Currie, J. R.; Kissel, R. R.
1986-01-01
A system for the measurement of shaft angles is disclosed wherein a synchro resolver is sequentially pulsed and, alternately, a sine and then a cosine representative voltage output are sampled. Two succeeding outputs of like type, sine or cosine (V_S1, V_S2), are averaged and algebraically related to the opposite-type output pulse (V_c) occurring between the averaged pulses, providing a precise indication of the angle of a shaft coupled to the resolver at the instant of the intermediate pulse (V_c).
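The averaging scheme can be sketched numerically: averaging the two sine samples that bracket the cosine pulse cancels, to first order, the shaft motion between pulses, and the angle then follows from the two-argument arctangent. The constant rotation rate and inter-pulse increment below are assumptions for illustration, not values from the disclosure:

```python
import math

def shaft_angle(v_s1, v_c, v_s2):
    """Estimate the shaft angle at the instant of the cosine sample V_c by
    averaging the bracketing sine samples V_S1 and V_S2 (first-order
    compensation for shaft motion between successive pulses)."""
    v_s = (v_s1 + v_s2) / 2.0
    return math.atan2(v_s, v_c)

# Hypothetical scenario: shaft turning at a constant rate, so the three
# samples are taken at angles theta - d, theta, theta + d for a small
# inter-pulse increment d.
theta, d = math.radians(40.0), math.radians(0.5)
v_s1 = math.sin(theta - d)
v_c = math.cos(theta)
v_s2 = math.sin(theta + d)
est = math.degrees(shaft_angle(v_s1, v_c, v_s2))
print(round(est, 4))  # close to 40.0 degrees
```

The identity (sin(θ−d) + sin(θ+d))/2 = sin(θ)·cos(d) shows why the average recovers sin(θ) almost exactly for small d, which is the algebraic relation the disclosure exploits.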
Fluctuation Dynamics of Exchange Rates on Indian Financial Market
NASA Astrophysics Data System (ADS)
Sarkar, A.; Barat, P.
Here we investigate the scaling behavior and the complexity of the average daily exchange rate returns of the Indian Rupee against four foreign currencies namely US Dollar, Euro, Great Britain Pound and Japanese Yen. Our analysis revealed that the average daily exchange rate return of the Indian Rupee against the US Dollar exhibits a persistent scaling behavior and follow Levy stable distribution. On the contrary the average daily exchange rate returns of the other three foreign currencies show randomness and follow Gaussian distribution. Moreover, it is seen that the complexity of the average daily exchange rate return of the Indian Rupee against US Dollar is less than the other three exchange rate returns.
14 CFR 36.6 - Incorporation by reference.
Code of Federal Regulations, 2010 CFR
2010-01-01
... No. 179, entitled “Precision Sound Level Meters,” dated 1973. (ii) IEC Publication No. 225, entitled... 1966. (iii) IEC Publication No. 651, entitled “Sound Level Meters,” first edition, dated 1979. (iv) IEC... edition, dated 1976. (v) IEC Publication No. 804, entitled “Integrating-averaging Sound Level Meters...
Problems for the Average Adult in Understanding Medical Language.
ERIC Educational Resources Information Center
Crismore, Avon
Like legal language, medical language is a private language, a separate stratum containing some words specially defined for medical purposes, some existing only in the medical vocabulary, and some adding precision or solemnity. These characteristics often cause a breakdown in patient-doctor communication. Analysis of data obtained from prototype…
77 FR 31574 - Executive-Led Trade Mission to South Africa and Zambia
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-29
... storage and handling [cir] Precision farming technologies Transportation Equipment and Infrastructure [cir... with a growing middle class, particularly in urban areas. Its relatively open economy has averaged more...- economy standards, South Africa continues to lag far behind in its adoption of green building practices...
Wei, Fang; Lu, Bin; Wang, Jian; Xu, Dan; Pan, Zhengqing; Chen, Dijun; Cai, Haiwen; Qu, Ronghui
2015-02-23
A precision and broadband laser frequency sweep technique is experimentally demonstrated. Using synchronous current compensation, a slave diode laser is dynamically injection-locked to a specific high-order modulation sideband of a narrow-linewidth master laser modulated by an electro-optic modulator (EOM), whose driving radio frequency (RF) signal can be agilely and precisely controlled by a frequency synthesizer; the high-order modulation sideband enables a multiplied sweep range and tuning rate. Using 5th-order sideband injection-locking, the original tuning range of 3 GHz and tuning rate of 0.5 THz/s are multiplied by 5 times, to 15 GHz and 2.5 THz/s respectively. The slave laser has a 3 dB linewidth of 2.5 kHz, the same as the master laser. The settling time for a 10 MHz frequency switch is 2.5 µs. With a higher-order modulation sideband and optimized experimental parameters, an extended sweep range and rate could be expected.
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load, calculated by the flow-duration, rating-curve method, that are more accurate and precise than those obtained for the non-linear model.
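The bias-corrected, transformed-linear fit can be sketched with Duan's smearing estimator, one common back-transform bias correction (the abstract does not specify which correction was used), on synthetic rating-curve data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rating-curve data: sediment load L = a * Q**b with multiplicative
# lognormal error, the setting where a naive back-transform of the log-log
# fit underestimates the mean load.
a_true, b_true = 0.05, 1.8
q = rng.uniform(5.0, 500.0, 200)
load = a_true * q**b_true * rng.lognormal(mean=0.0, sigma=0.5, size=q.size)

# Transformed-linear fit: ln L = ln a + b ln Q
b_hat, ln_a_hat = np.polyfit(np.log(q), np.log(load), 1)
resid = np.log(load) - (ln_a_hat + b_hat * np.log(q))

# Duan "smearing" factor corrects the retransformation bias: mean of the
# exponentiated residuals, always > 1 when residuals vary (Jensen's inequality)
smear = np.mean(np.exp(resid))

def predicted_load(q_new):
    """Bias-corrected rating-curve prediction of mean load at discharge q_new."""
    return smear * np.exp(ln_a_hat) * q_new**b_hat

print(b_hat, smear)  # slope near the true exponent; smearing factor > 1
```

Applied over a flow-duration curve, `predicted_load` gives the bias-corrected mean load estimates that the study found more accurate than the naive back-transformed fit.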
Shah, Ajit
2009-07-01
Suicides may be misclassified as accidental deaths in countries with strict legal definitions of suicide, with cultural and religious factors leading to poor registration of suicide and stigma attached to suicide. The concordance between four different definitions of suicide was evaluated by examining the relationship between pure suicide and accidental death rates, gender differences, age-associated trends, and potential distal risk and protective factors, by conducting secondary analysis of the latest World Health Organisation data on elderly death rates. The four definitions of suicide were: (i) one-year pure suicide rates; (ii) one-year combined suicide rates (pure suicide rates combined with accidental death rates); (iii) five-year average pure suicide rates; and (iv) five-year average combined suicide rates (pure suicide rates combined with accidental death rates). The predicted negative correlation between pure suicide and accidental death rates was not observed. Gender differences were similar for all four definitions of suicide. There was a highly significant concordance in the findings of age-associated trends between one-year pure and combined suicide rates, one-year and five-year average pure suicide rates, and five-year average pure and combined suicide rates. There was poor concordance between pure and combined suicide rates for both one-year and five-year average data for the 14 potential distal risk and protective factors, but the concordance between one-year and five-year average pure suicide rates was highly significant. The use of one-year pure suicide rates in cross-national ecological studies examining gender differences, age-associated trends and potential distal risk and protective factors is likely to be practical, pragmatic and resource-efficient.
The Rate of Core Collapse Supernovae to Redshift 2.5 from the CANDELS and CLASH Supernova Surveys
NASA Astrophysics Data System (ADS)
Strolger, Louis-Gregory; Dahlen, Tomas; Rodney, Steven A.; Graur, Or; Riess, Adam G.; McCully, Curtis; Ravindranath, Swara; Mobasher, Bahram; Shahady, A. Kristin
2015-11-01
The Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) and Cluster Lensing And Supernova survey with Hubble (CLASH) multi-cycle treasury programs with the Hubble Space Telescope (HST) have provided new opportunities to probe the rate of core-collapse supernovae (CCSNe) at high redshift, now extending to z ≈ 2.5. Here we use a sample of approximately 44 CCSNe to determine volumetric rates, R_CC, in six redshift bins in the range 0.1 < z < 2.5. Together with rates from our previous HST program, and rates from the literature, we trace a more complete history of R_CC(z), with R_CC = (0.72 ± 0.06) × 10^-4 yr^-1 Mpc^-3 h_70^3 at z < 0.08, increasing to (3.7 +3.1/-1.6) × 10^-4 yr^-1 Mpc^-3 h_70^3 at z ≈ 2.0. The statistical precision in each bin is several factors better than the systematic error, with significant contributions from host extinction and the average peak absolute magnitudes of the assumed luminosity functions for CCSN types. Assuming negligible time delays from stellar formation to explosion, we find these composite CCSN rates to be in excellent agreement with cosmic star formation rate densities (SFRs) derived largely from dust-corrected rest-frame UV emission, with a scaling factor of k = 0.0091 ± 0.0017 M⊙^-1, and inconsistent (to > 95% confidence) with SFRs from IR luminous galaxies, or with SFR models that include simple evolution in the initial mass function over time. This scaling factor is expected if the fraction of the IMF contributing to CCSN progenitors is in the 8-50 M⊙ range. It is not, however, supportive of an upper mass limit for progenitors at < 20 M⊙.
Delay times of a LiDAR-guided precision sprayer control system
USDA-ARS?s Scientific Manuscript database
Accurate flow control systems in triggering sprays against detected targets are needed for precision variable-rate sprayer development. System delay times due to the laser-sensor data buffer, software operation, and hydraulic-mechanical component response were determined for a control system used fo...
High-rate RTK and PPP multi-GNSS positioning for small-scale dynamic displacements monitoring
NASA Astrophysics Data System (ADS)
Paziewski, Jacek; Sieradzki, Rafał; Baryła, Radosław; Wielgosz, Pawel
2017-04-01
The monitoring of dynamic displacements and deformations of engineering structures such as buildings, towers, and bridges is of great interest for both practical and theoretical reasons, most importantly to provide the information required for safe maintenance of such constructions. The high temporal resolution and precision of GNSS observations make this technology well suited to the most demanding applications in terms of accuracy, availability, and reliability. Supported by appropriate processing methodology, GNSS can meet the specific demands of ground and structural monitoring; high-rate multi-GNSS signals may thus serve as a reliable source of information on dynamic displacements of the ground and of engineering structures, including in real-time applications. In this study we present initial results of applying precise relative GNSS positioning to the detection of small-scale (cm-level), high-temporal-resolution dynamic displacements. We also describe the methodology and algorithms implemented in our self-developed software for relative positioning using high-rate dual-frequency phase and pseudorange GPS + Galileo observations, and we additionally applied the Precise Point Positioning (PPP) technique to the same task. The experiment used observations from high-rate (20 Hz) geodetic receivers, with dynamic displacements simulated by a purpose-built device that moved the GNSS antenna with a prescribed amplitude and frequency. The results indicate that dynamic displacements of the GNSS antenna can be detected even at the level of a few millimetres, using both relative and Precise Point Positioning techniques, after suitable signal processing.
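The signal-processing step can be illustrated with a toy example: recovering a millimetre-level sinusoidal displacement from a simulated 20 Hz position series via a discrete Fourier transform. The amplitude, frequency, and noise level below are illustrative assumptions, not the experiment's actual values:

```python
import math, cmath, random

# Simulate a 20 Hz position series: 5 mm oscillation at 2.5 Hz plus 1 mm noise.
fs = 20.0                      # receiver sampling rate [Hz]
n = 256
amp_mm, f_true = 5.0, 2.5
random.seed(1)
series = [amp_mm * math.sin(2 * math.pi * f_true * t / fs)
          + random.gauss(0.0, 1.0) for t in range(n)]

# Plain DFT (O(n^2), fine for a short window): locate the dominant bin.
spec = [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
               for t, x in enumerate(series))) / n for k in range(1, n // 2)]
k_peak = max(range(len(spec)), key=spec.__getitem__) + 1
f_est = k_peak * fs / n
a_est = 2 * spec[k_peak - 1]   # single-sided amplitude [mm]
print(f"estimated {a_est:.1f} mm at {f_est:.2f} Hz")
```

Even with noise comparable to the signal amplitude, the oscillation frequency and its few-millimetre amplitude are recovered, mirroring how a periodic antenna motion stands out after processing.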
NASA Astrophysics Data System (ADS)
Various papers on the mechanical technology of inertial devices are presented. The topics addressed include: development of a directional gyroscope for remotely piloted vehicles and similar applications; a two-degree-of-freedom gyroscope with frictionless inner and outer gimbal pickoffs; oscillogyro design, manufacture, and performance; development of miniature two-axis rate gyroscope; mechanical design aspects of the electrostatically suspended gyroscope; role of gas-lubricated bearings in current and future sensors; development of a new microporous retainer material for precision ball bearings; design study for a high-stability, large-centrifuge test bed; evaluation of a two-axis rate gyro; operating principles of a two-axis angular rate transducer; and nutation frequency analysis. Also considered are: triaxial laser gyro; mechanical design considerations for a ring laser gyro dither mechanism; environmental considerations in the design of fiberoptic gyroscopes; manufacturing aspects of some critical high-precision mechanical components of inertial devices; dynamics and control of a gyroscopic force measurement system; high precision and high performance motion systems; use of multiple acceleration references to obtain high precision centrifuge data at low cost; gyro testing and evaluation at the Communications Research Centre; review of the mechanical design and development of a high-performance accelerometer; and silicon microengineering for accelerometers.
Kontosic, I; Vukelić, M; Pancić, M; Kunisek, J
1994-12-01
Physical work load was estimated for a female conveyor-belt worker in a bottling plant. The estimation was based on continuous measurement and on calculation of average heart rate values over three-minute and one-hour periods and over the total measuring period. The thermal component of the heart rate was calculated from the corrected effective temperature for the one-hour periods, and the average heart rate at rest was also determined. The work component of the heart rate was calculated by subtracting the resting heart rate, using a regression equation based on the heart rate measured at 50 W. The average estimated gross energy expenditure during the work was 9.6 +/- 1.3 kJ/min, corresponding to the category of light industrial work. The average estimated oxygen uptake was 0.42 +/- 0.06 L/min. The average mechanical work performed was 12.2 +/- 4.2 W, i.e. 8.3 +/- 1.5% of the energy expenditure.
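The heart-rate method described above can be sketched as a two-point linear calibration: a resting heart rate anchors the intercept, the heart rate at 50 W fixes the slope, and the relation is inverted to map an observed working heart rate back to mechanical power. All numbers below are illustrative, not the subject's actual data:

```python
# Two-point calibration of the heart-rate / workload relation.
hr_rest, hr_50w = 72.0, 98.0        # beats/min at rest and at 50 W (assumed)
slope = (hr_50w - hr_rest) / 50.0   # beats/min per watt

def work_from_hr(hr_work):
    # Work component of heart rate mapped back to mechanical power [W].
    return (hr_work - hr_rest) / slope

hr_measured = 78.3                  # assumed average on-the-job heart rate
print(f"estimated mechanical work: {work_from_hr(hr_measured):.1f} W")
```

With these assumed values the estimate lands near the paper's 12.2 W average, though the study used a fitted regression equation rather than a bare two-point line.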
Jin, Chunfen; Viidanoja, Jyrki
2017-01-15
An existing liquid chromatography-mass spectrometry method for the analysis of short-chain carboxylic acids was expanded and validated to also cover the measurement of glycerol in oils and fats. The method employs chloride anion attachment and two ions, [glycerol + ³⁵Cl]⁻ and [glycerol + ³⁷Cl]⁻, as alternative quantifiers for improved selectivity of the glycerol measurement. The averaged within-run precision, between-run precision, and accuracy ranged between 0.3-7%, 0.4-6%, and 94-99%, respectively, depending on the analyte ion and sample matrix. Selected renewable diesel feedstocks were analyzed with the method. Copyright © 2016 Elsevier B.V. All rights reserved.
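One way the two chloride-adduct quantifiers support selectivity is that their peak areas should follow the natural ³⁵Cl/³⁷Cl abundance ratio (roughly 75.76 : 24.24); a large deviation suggests an interference on one of the ions. A minimal sketch of such a check, with an illustrative tolerance that is not the validated method's actual acceptance criterion:

```python
# Natural chlorine isotope abundance ratio, 35Cl : 37Cl ≈ 75.76 : 24.24.
NATURAL_RATIO = 75.76 / 24.24  # ≈ 3.125

def isotope_ratio_ok(area_35cl, area_37cl, tol=0.15):
    # Flag a possible interference if the measured adduct-area ratio
    # deviates from natural abundance by more than the tolerance.
    ratio = area_35cl / area_37cl
    return abs(ratio - NATURAL_RATIO) / NATURAL_RATIO <= tol

print(isotope_ratio_ok(3.1e6, 1.0e6))   # areas consistent with natural abundance
print(isotope_ratio_ok(3.1e6, 0.4e6))   # ratio far off: possible interference
```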
Technical Note: Treatment of Sacroiliac Joint Pain with Peripheral Nerve Stimulation.
Guentchev, Marin; Preuss, Christian; Rink, Rainer; Peter, Levente; Wocker, Ernst-Ludwig; Tuettenberg, Jochen
2015-07-01
Sacroiliac joint (SIJ) pain affects older adults, with a prevalence of up to 20% among patients with chronic low back pain. While pain medication, joint blocks, and denervation procedures achieve pain relief in most patients, some cases fail to improve. Our goal was to determine the effectiveness of SIJ peripheral nerve stimulation in patients with severe, conservative-therapy-refractory SIJ pain. Here we present 12 such patients who received SIJ peripheral nerve stimulation. Patient satisfaction, pain, and quality of life were evaluated by means of the International Patient Satisfaction Index (IPSI), visual analog scale (VAS), and Oswestry Disability Index 2.0 (ODI), using standard questionnaires. For stimulation, we placed an eight-pole peripheral nerve electrode parallel to the SIJ. Two weeks postoperatively, our patients reported an average ODI reduction from 57% to 32% and VAS from 9 to 2.1; IPSI was 1.1. After six months, the therapy was rated as effective by seven of the eight patients reporting at that time point. The average ODI remained low at 34% (p = 0.0006), while VAS rose to 3.8 (p < 0.0001) and IPSI to 1.9. Twelve months after stimulation, six of seven patients considered their treatment a success, with an average ODI of 21% (p < 0.0005), VAS 1.7 (p < 0.0001), and IPSI 1.3. We conclude that SIJ stimulation is a promising therapeutic strategy for the treatment of intractable SIJ pain. Further studies are required to determine the precise target group and the long-term effect of this novel treatment method. © 2014 International Neuromodulation Society.
Exposure assessment to ochratoxin A in Chinese wine.
Zhong, Qi Ding; Li, Guo Hui; Wang, Dao Bing; Shao, Yi; Li, Jing Guang; Xiong, Zheng He; Wu, Yong Ning
2014-09-03
A rapid, sensitive, reproducible, and inexpensive method was developed for the analysis of ochratoxin A (OTA) in Chinese wine by high-performance liquid chromatography with fluorescence detection, following an anion-exchange solid-phase extraction cleanup step. The average recovery was 97.47%, with an average RSD of recovery of about 4%. The relative standard deviations of the interday and intraday precision were 6.7% and 12.6%, respectively. The limits of detection and quantitation were 0.01 and 0.03 μg/L, respectively. A total of 223 samples from the major wine-producing areas of China were analyzed; OTA was detected at levels of 0.01-0.98 μg/L, with a mean of 0.15 μg/L. Representative inhabitants were then invited to answer a questionnaire on the quantity and frequency of their wine consumption, and all data were combined by point evaluation to assess the risk of OTA contamination from wine. The results indicated that the intake of OTA for the average adult consumer varies between 0.86 and 1.08 ng/kg bw per week, which is lower than all the reference standards. However, the intake at the high percentile (97.5th), 4.38-5.54 ng/kg bw per week, was slightly above 5% of the PTWI (100 ng/kg bw per week) set by the JECFA. In conclusion, OTA exposure from Chinese wine poses no appreciable risk of harm. This research provides a scientific basis for determining the maximum limit of OTA content in Chinese wine.
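The weekly intake estimate is a simple point evaluation: OTA concentration times wine consumed, divided by body weight. A minimal sketch using the study's mean concentration; the weekly consumption and body weight below are illustrative assumptions, not the questionnaire results:

```python
# Weekly intake per kg body weight:
#   intake = concentration [ng/L] * consumption [L/week] / body weight [kg]
def weekly_intake(conc_ug_per_l, litres_per_week, body_weight_kg):
    return conc_ug_per_l * 1000.0 * litres_per_week / body_weight_kg  # ng/kg bw/week

mean_conc = 0.15                    # µg/L, survey mean from the study
intake = weekly_intake(mean_conc, 0.35, 60.0)   # assumed consumption and weight
ptwi = 100.0                        # JECFA PTWI, ng/kg bw per week
print(f"intake = {intake:.2f} ng/kg bw/week ({100 * intake / ptwi:.1f}% of PTWI)")
```

With these assumed inputs the result falls inside the 0.86-1.08 ng/kg bw per week range reported for the average consumer, well under the PTWI.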
Use of controlled vocabularies to improve biomedical information retrieval tasks.
Pasche, Emilie; Gobeill, Julien; Vishnyakova, Dina; Ruch, Patrick; Lovis, Christian
2013-01-01
The high heterogeneity of biomedical vocabulary is a major obstacle to information retrieval in large biomedical collections, so using biomedical controlled vocabularies is crucial for managing such content. We investigate the impact of query expansion based on controlled vocabularies on the effectiveness of two search engines. Our strategy relies on enriching users' queries with additional terms derived directly from such vocabularies, applied to infectious diseases and to chemical patents. We observed that query expansion based on pathogen names improved the top precision of our first search engine, while normalization of disease names degraded it. Expansion of chemical entities, performed on the second search engine, positively affected the mean average precision. We have shown that query expansion for some types of biomedical entities has great potential to improve search effectiveness; fine-tuning of query expansion strategies could therefore help improve the performance of search engines.
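The enrichment strategy can be sketched as a lookup against a term map: each query term that appears in the vocabulary contributes its preferred names to the expanded query. The tiny vocabulary below is a hypothetical stand-in, not an actual biomedical thesaurus:

```python
# Hypothetical controlled-vocabulary fragment mapping surface forms
# to additional preferred terms.
VOCAB = {
    "mrsa": ["methicillin-resistant staphylococcus aureus"],
    "flu": ["influenza"],
    "tb": ["tuberculosis", "mycobacterium tuberculosis"],
}

def expand_query(query):
    # Keep the user's terms and append any vocabulary expansions.
    terms = query.lower().split()
    expanded = list(terms)
    for t in terms:
        expanded.extend(VOCAB.get(t, []))
    return " ".join(expanded)

print(expand_query("MRSA outbreak"))
# → "mrsa outbreak methicillin-resistant staphylococcus aureus"
```

Real systems weight expansion terms lower than original ones; as the abstract notes, the same mechanism can hurt (disease normalization) as well as help, so per-entity-type tuning matters.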
High-precision tracking of Brownian boomerang colloidal particles confined in quasi two dimensions.
Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo
2013-11-26
In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
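The immobilized-particle test works because a truly static particle has no real displacement: the apparent mean square displacement (MSD) at any lag is just twice the localization noise variance per axis, so the precision can be read directly off the trajectory. A minimal sketch with a simulated static particle at an assumed 10 nm noise level:

```python
import random, math

# Simulated positions of an immobilized particle: pure localization noise.
random.seed(7)
sigma_true = 10.0  # nm, assumed localization noise
xs = [random.gauss(0.0, sigma_true) for _ in range(20000)]

def msd(track, lag):
    # Mean square displacement at the given lag.
    return sum((track[i + lag] - track[i]) ** 2
               for i in range(len(track) - lag)) / (len(track) - lag)

# For a static particle, MSD ≈ 2*sigma^2 independent of lag.
sigma_est = math.sqrt(msd(xs, 1) / 2.0)
print(f"estimated precision: {sigma_est:.1f} nm")
```

The same MSD machinery, applied to a mobile particle, yields the diffusion coefficients; the zero correlation between successive displacements is also what licenses merging separate videos into one long trajectory.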
Position and volume estimation of atmospheric nuclear detonations from video reconstruction
NASA Astrophysics Data System (ADS)
Schmitt, Daniel T.
Recent work in digitizing films of foundational atmospheric nuclear detonations from the 1950s provides an opportunity to perform deeper analysis of these historical tests. This work leverages multi-view geometry and computer vision techniques to provide an automated means of three-dimensional analysis of the blasts at several points in time. Accomplishing this requires careful alignment of the films in time, detection of features in the images, matching of features, and multi-view reconstruction. Sub-explosion features can be detected with a 67% hit rate and a 22% false alarm rate. Hotspot features can be detected with a 71.95% hit rate, 86.03% precision, and a 0.015% false positive rate. Detected hotspots are matched across 57-109 degree viewpoints with 76.63% average correct matching, by defining their locations relative to the center of the explosion, rotating them to the alternative viewpoint, and matching them collectively. Applying 3D reconstruction to the matched hotspots completes an automated process that has been used to create 168 3D point clouds averaging 31.6 points per reconstruction, with each point accurate to 0.62 meters overall (0.35, 0.24, and 0.34 meters in the x-, y-, and z-directions, respectively). As a demonstration of using the point clouds for analysis, volumes are estimated and shown to be consistent with radius-based models, in some cases improving on the level of uncertainty in the yield calculation.
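The detector scores quoted above follow from raw true/false positive and negative counts. A minimal sketch of the three metrics, with illustrative counts chosen to land near the reported hotspot figures (they are not the actual confusion-matrix counts from the work):

```python
# Standard detection metrics from confusion-matrix counts.
def detection_metrics(tp, fp, fn, tn):
    hit_rate = tp / (tp + fn)        # recall: fraction of real features found
    precision = tp / (tp + fp)       # fraction of detections that are real
    false_alarm = fp / (fp + tn)     # fraction of non-features flagged
    return hit_rate, precision, false_alarm

# Illustrative counts approximating the reported hotspot performance.
hr, p, fa = detection_metrics(tp=720, fp=117, fn=280, tn=900000)
print(f"hit rate {hr:.1%}, precision {p:.1%}, false alarm {fa:.3%}")
```

Note how a tiny false-positive rate is compatible with a noticeable 1 − precision: the pool of candidate non-feature pixels (tn) is enormous compared with the number of true hotspots.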
Performance analysis of a new hypersonic vitrector system.
Stanga, Paulo Eduardo; Pastor-Idoate, Salvador; Zambrano, Isaac; Carlin, Paul; McLeod, David
2017-01-01
To evaluate porcine vitreous and water flow rates in a new prototype hypersonic vitrectomy system compared with currently available pneumatic guillotine vitrector (GV) systems, two vitrectors were tested: a prototype ultrasound-powered hypersonic vitrector (HV) and a GV. Porcine vitreous was obtained within 12 to 24 h of sacrifice and kept at 4°C. A vial of vitreous or water was placed on a precision balance and its weight measured before and after the use of each vitrector. Test parameters included changes in aspiration level, vitrector gauge, cut rate for GVs, % ultrasound (US) power for HVs, and port size for HVs. Data were analysed using linear regression and t-tests. There was no difference in total average mean water flow between the 25-gauge GV and the 25-gauge HV (t-test: P = 0.363); however, the 25-gauge GV was superior in vitreous flow (t-test: P < 0.001). The 23-gauge GV was more efficient in water and vitreous removal only relative to 23-gauge HV needle-1 (Port 0.0055) (t-test: P < 0.001). For the HV, wall thickness and gauge had no effect on flow rates. Water and vitreous flows showed a direct correlation with increasing aspiration level and % US power (p < 0.05). The HV produced consistent water and vitreous flow rates across the range of US power and aspiration levels tested. Hypersonic vitrectomy may be a promising new alternative to currently available guillotine-based technologies.
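The balance method reduces to a simple calculation: flow is the mass removed per unit time, converted to volume via the fluid density (≈ 1.00 g/mL for water, and close to it for vitreous). A minimal sketch with illustrative measurement values, not data from the study:

```python
# Flow rate from before/after vial weights on a precision balance.
def flow_rate_ml_per_min(mass_before_g, mass_after_g, duration_s, density_g_per_ml=1.00):
    volume_ml = (mass_before_g - mass_after_g) / density_g_per_ml
    return volume_ml / (duration_s / 60.0)

# Illustrative run: 2.25 g removed over a 30 s activation.
q = flow_rate_ml_per_min(mass_before_g=25.40, mass_after_g=23.15, duration_s=30.0)
print(f"flow: {q:.2f} mL/min")
```

Repeating this across aspiration levels, cut rates, and % US power settings yields the flow-versus-parameter relations that the study analysed by linear regression.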