Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline
2013-06-01
What is the intra-rater and inter-rater relative reliability of the Berg Balance Scale? What is the absolute reliability of the Berg Balance Scale? Does the absolute reliability of the Berg Balance Scale vary across the scale? Systematic review with meta-analysis of reliability studies. Any clinical population that has undergone assessment with the Berg Balance Scale. Relative intra-rater reliability, relative inter-rater reliability, and absolute reliability. Eleven studies involving 668 participants were included in the review. The relative intrarater reliability of the Berg Balance Scale was high, with a pooled estimate of 0.98 (95% CI 0.97 to 0.99). Relative inter-rater reliability was also high, with a pooled estimate of 0.97 (95% CI 0.96 to 0.98). A ceiling effect of the Berg Balance Scale was evident for some participants. In the analysis of absolute reliability, all of the relevant studies had an average score of 20 or above on the 0 to 56 point Berg Balance Scale. The absolute reliability across this part of the scale, as measured by the minimal detectable change with 95% confidence, varied between 2.8 points and 6.6 points. The Berg Balance Scale has a higher absolute reliability when close to 56 points due to the ceiling effect. We identified no data that estimated the absolute reliability of the Berg Balance Scale among participants with a mean score below 20 out of 56. The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects. The review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance. Copyright © 2013 Australian Physiotherapy Association. Published by .. All rights reserved.
Radtke, Valentin; Himmel, Daniel; Pütz, Katharina; Goll, Sascha K; Krossing, Ingo
2014-04-07
We introduce the protoelectric potential map (PPM) as a novel, two-dimensional plot of the absolute reduction potential (peabs scale) combined with the absolute protochemical potential (Brønsted acidity: pHabs scale). The validity of this thermodynamically derived PPM is solvent-independent due to the scale zero points, which were chosen as the ideal electron gas and the ideal proton gas at standard conditions. To tie a chemical environment to these reference states, the standard Gibbs energies for the transfer of the gaseous electrons/protons to the medium are needed as anchor points. Thereby, the thermodynamics of any redox, acid-base or combined system in any medium can be related to any other, resulting in a predictability of reactions even over different media or phase boundaries. Instruction is given on how to construct the PPM from the anchor points derived and tabulated with this work. Since efforts to establish "absolute" reduction potential scales and also "absolute" pH scales already exist, a short review in this field is given and brought into relation to the PPM. Some comments on the electrochemical validation and realization conclude this concept article. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An absolute interval scale of order for point patterns
Protonotarios, Emmanouil D.; Baum, Buzz; Johnston, Alan; Hunter, Ginger L.; Griffin, Lewis D.
2014-01-01
Human observers readily make judgements about the degree of order in planar arrangements of points (point patterns). Here, based on pairwise ranking of 20 point patterns by degree of order, we have been able to show that judgements of order are highly consistent across individuals and the dimension of order has an interval scale structure spanning roughly 10 just-notable-differences (jnd) between disorder and order. We describe a geometric algorithm that estimates order to an accuracy of half a jnd by quantifying the variability of the size and shape of spaces between points. The algorithm is 70% more accurate than the best available measures. By anchoring the output of the algorithm so that Poisson point processes score on average 0, perfect lattices score 10 and unit steps correspond closely to jnds, we construct an absolute interval scale of order. We demonstrate its utility in biology by using this scale to quantify order during the development of the pattern of bristles on the dorsal thorax of the fruit fly. PMID:25079866
Mind and body therapy for fibromyalgia.
Theadom, Alice; Cropley, Mark; Smith, Helen E; Feigin, Valery L; McPherson, Kathryn
2015-04-09
Mind-body interventions are based on the holistic principle that mind, body and behaviour are all interconnected. Mind-body interventions incorporate strategies that are thought to improve psychological and physical well-being, aim to allow patients to take an active role in their treatment, and promote people's ability to cope. Mind-body interventions are widely used by people with fibromyalgia to help manage their symptoms and improve well-being. Examples of mind-body therapies include psychological therapies, biofeedback, mindfulness, movement therapies and relaxation strategies. To review the benefits and harms of mind-body therapies in comparison to standard care and attention placebo control groups for adults with fibromyalgia, post-intervention and at three and six month follow-up. Electronic searches of the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (Ovid), EMBASE (Ovid), PsycINFO (Ovid), AMED (EBSCO) and CINAHL (Ovid) were conducted up to 30 October 2013. Searches of reference lists were conducted and authors in the field were contacted to identify additional relevant articles. All relevant randomised controlled trials (RCTs) of mind-body interventions for adults with fibromyalgia were included. Two authors independently selected studies, extracted the data and assessed trials for low, unclear or high risk of bias. Any discrepancy was resolved through discussion and consensus. Continuous outcomes were analysed using mean difference (MD) where the same outcome measure and scoring method was used and standardised mean difference (SMD) where different outcome measures were used. For binary data standard estimation of the risk ratio (RR) and its 95% confidence interval (CI) was used. Seventy-four papers describing 61 trials were identified, with 4234 predominantly female participants. The nature of fibromyalgia varied from mild to severe across the study populations. Twenty-six studies were classified as having a low risk of bias for all domains assessed. The findings of mind-body therapies compared with usual care were prioritised.There is low quality evidence that in comparison to usual care controls psychological therapies have favourable effects on physical functioning (SMD -0.4, 95% CI -0.6 to -0.3, -7.5% absolute change, 2 point shift on a 0 to 100 scale), pain (SMD -0.3, 95% CI -0.5 to -0.2, -3.5% absolute change, 2 point shift on a 0 to 100 scale) and mood (SMD -0.5, 95% CI -0.6 to -0.3, -4.8% absolute change, 3 point shift on a 20 to 80 scale). There is very low quality evidence of more withdrawals in the psychological therapy group in comparison to usual care controls (RR 1.38, 95% CI 1.12 to 1.69, 6% absolute risk difference). There is lack of evidence of a difference between the number of adverse events in the psychological therapy and control groups (RR 0.38, 95% CI 0.06 to 2.50, 4% absolute risk difference).There was very low quality evidence that biofeedback in comparison to usual care controls had an effect on physical functioning (SMD -0.1, 95% CI -0.4 to 0.3, -1.2% absolute change, 1 point shift on a 0 to 100 scale), pain (SMD -2.6, 95% CI -91.3 to 86.1, -2.6% absolute change) and mood (SMD 0.1, 95% CI -0.3 to 0.5, 1.9% absolute change, less than 1 point shift on a 0 to 90 scale) post-intervention. In view of the quality of evidence we cannot be certain that biofeedback has a little or no effect on these outcomes. There was very low quality evidence that biofeedback led to more withdrawals from the study (RR 4.08, 95% CI 1.43 to 11.62, 20% absolute risk difference). No adverse events were reported.There was no advantage observed for mindfulness in comparison to usual care for physical functioning (SMD -0.3, 95% CI -0.6 to 0.1, -4.8% absolute change, 4 point shift on a scale 0 to 100), pain (SMD -0.1, CI -0.4 to 0.3, -1.3% absolute change, less than 1 point shift on a 0 to 10 scale), mood (SMD -0.2, 95% CI -0.5 to 0.0, -3.7% absolute change, 2 point shift on a 20 to 80 scale) or withdrawals (RR 1.07, 95% CI 0.67 to 1.72, 2% absolute risk difference) between the two groups post-intervention. However, the quality of the evidence was very low for pain and moderate for mood and number of withdrawals. No studies reported any adverse events.Very low quality evidence revealed that movement therapies in comparison to usual care controls improved pain (MD -2.3, CI -4.2 to -0.4, -23% absolute change) and mood (MD -9.8, 95% CI -18.5 to -1.2, -16.4% absolute change) post-intervention. There was no advantage for physical functioning (SMD -0.2, 95% CI -0.5 to 0.2, -3.4% absolute change, 2 point shift on a 0 to 100 scale), participant withdrawals (RR 1.95, 95% CI 1.13 to 3.38, 11% absolute difference) or adverse events (RR 4.62, 95% CI 0.23 to 93.92, 4% absolute risk difference) between the two groups, however rare adverse events may include worsening of pain.Low quality evidence revealed that relaxation based therapies in comparison to usual care controls showed an advantage for physical functioning (MD -8.3, 95% CI -10.1 to -6.5, -10.4% absolute change) and pain (SMD -1.0, 95% CI -1.6 to -0.5, -3.5% absolute change, 2 point shift on a 0 to 78 scale) but not for mood (SMD -4.4, CI -14.5 to 5.6, -7.4% absolute change) post-intervention. There was no difference between the groups for number of withdrawals (RR 4.40, 95% CI 0.59 to 33.07, 31% absolute risk difference) and no adverse events were reported. Psychological interventions therapies may be effective in improving physical functioning, pain and low mood for adults with fibromyalgia in comparison to usual care controls but the quality of the evidence is low. Further research on the outcomes of therapies is needed to determine if positive effects identified post-intervention are sustained. The effectiveness of biofeedback, mindfulness, movement therapies and relaxation based therapies remains unclear as the quality of the evidence was very low or low. The small number of trials and inconsistency in the use of outcome measures across the trials restricted the analysis.
Note: An absolute X-Y-Θ position sensor using a two-dimensional phase-encoded binary scale
NASA Astrophysics Data System (ADS)
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan
2018-04-01
This Note presents a new absolute X-Y-Θ position sensor for measuring planar motion of a precision multi-axis stage system. By analyzing the rotated image of a two-dimensional phase-encoded binary scale (2D), the absolute 2D position values at two separated points were obtained and the absolute X-Y-Θ position could be calculated combining these values. The sensor head was constructed using a board-level camera, a light-emitting diode light source, an imaging lens, and a cube beam-splitter. To obtain the uniform intensity profiles from the vignette scale image, we selected the averaging directions deliberately, and higher resolution in the angle measurement could be achieved by increasing the allowable offset size. The performance of a prototype sensor was evaluated in respect of resolution, nonlinearity, and repeatability. The sensor could resolve 25 nm linear and 0.001° angular displacements clearly, and the standard deviations were less than 18 nm when 2D grid positions were measured repeatedly.
Acupuncture for peripheral joint osteoarthritis
Manheimer, Eric; Cheng, Ke; Linde, Klaus; Lao, Lixing; Yoo, Junghee; Wieland, Susan; van der Windt, Daniëlle AWM; Berman, Brian M; Bouter, Lex M
2011-01-01
Background Peripheral joint osteoarthritis is a major cause of pain and functional limitation. Few treatments are safe and effective. Objectives To assess the effects of acupuncture for treating peripheral joint osteoarthritis. Search strategy We searched the Cochrane Central Register of Controlled Trials (The Cochrane Library 2008, Issue 1), MEDLINE, and EMBASE (both through December 2007), and scanned reference lists of articles. Selection criteria Randomized controlled trials (RCTs) comparing needle acupuncture with a sham, another active treatment, or a waiting list control group in people with osteoarthritis of the knee, hip, or hand. Data collection and analysis Two authors independently assessed trial quality and extracted data. We contacted study authors for additional information. We calculated standardized mean differences using the differences in improvements between groups. Main results Sixteen trials involving 3498 people were included. Twelve of the RCTs included only people with OA of the knee, 3 only OA of the hip, and 1 a mix of people with OA of the hip and/or knee. In comparison with a sham control, acupuncture showed statistically significant, short-term improvements in osteoarthritis pain (standardized mean difference -0.28, 95% confidence interval -0.45 to -0.11; 0.9 point greater improvement than sham on 20 point scale; absolute percent change 4.59%; relative percent change 10.32%; 9 trials; 1835 participants) and function (-0.28, -0.46 to -0.09; 2.7 point greater improvement on 68 point scale; absolute percent change 3.97%; relative percent change 8.63%); however, these pooled short-term benefits did not meet our predefined thresholds for clinical relevance (i.e. 1.3 points for pain; 3.57 points for function) and there was substantial statistical heterogeneity. Additionally, restriction to sham-controlled trials using shams judged most likely to adequately blind participants to treatment assignment (which were also the same shams judged most likely to have physiological activity), reduced heterogeneity and resulted in pooled short-term benefits of acupuncture that were smaller and non-significant. In comparison with sham acupuncture at the six-month follow-up, acupuncture showed borderline statistically significant, clinically irrelevant improvements in osteoarthritis pain (-0.10, -0.21 to 0.01; 0.4 point greater improvement than sham on 20 point scale; absolute percent change 1.81%; relative percent change 4.06%; 4 trials;1399 participants) and function (-0.11, -0.22 to 0.00; 1.2 point greater improvement than sham on 68 point scale; absolute percent change 1.79%; relative percent change 3.89%). In a secondary analysis versus a waiting list control, acupuncture was associated with statistically significant, clinically relevant short-term improvements in osteoarthritis pain (-0.96, -1.19 to -0.72; 14.5 point greater improvement than sham on 100 point scale; absolute percent change 14.5%; relative percent change 29.14%; 4 trials; 884 participants) and function (-0.89, -1.18 to -0.60; 13.0 point greater improvement than sham on 100 point scale; absolute percent change 13.0%; relative percent change 25.21%). In the head-on comparisons of acupuncture with the ‘supervised osteoarthritis education’ and the ‘physician consultation’ control groups, acupuncture was associated with clinically relevant short- and long-term improvements in pain and function. In the head on comparisons of acupuncture with ‘home exercises/advice leaflet’ and ‘supervised exercise’, acupuncture was associated with similar treatment effects as the controls. Acupuncture as an adjuvant to an exercise based physiotherapy program did not result in any greater improvements than the exercise program alone. Information on safety was reported in only 8 trials and even in these trials there was limited reporting and heterogeneous methods. Authors' conclusions Sham-controlled trials show statistically significant benefits; however, these benefits are small, do not meet our pre-defined thresholds for clinical relevance, and are probably due at least partially to placebo effects from incomplete blinding. Waiting list-controlled trials of acupuncture for peripheral joint osteoarthritis suggest statistically significant and clinically relevant benefits, much of which may be due to expectation or placebo effects. PMID:20091527
Absolute and relative educational inequalities in depression in Europe.
Dudal, Pieter; Bracke, Piet
2016-09-01
To investigate (1) the size of absolute and relative educational inequalities in depression, (2) their variation between European countries, and (3) their relationship with underlying prevalence rates. Analyses are based on the European Social Survey, rounds three and six (N = 57,419). Depression is measured using the shortened Centre of Epidemiologic Studies Depression Scale. Education is coded by use of the International Standard Classification of Education. Country-specific logistic regressions are applied. Results point to an elevated risk of depressive symptoms among the lower educated. The cross-national patterns differ between absolute and relative measurements. For men, large relative inequalities are found for countries including Denmark and Sweden, but are accompanied by small absolute inequalities. For women, large relative and absolute inequalities are found in Belgium, Bulgaria, and Hungary. Results point to an empirical association between inequalities and the underlying prevalence rates. However, the strength of the association is only moderate. This research stresses the importance of including both measurements for comparative research and suggests the inclusion of the level of population health in research into inequalities in health.
Acupuncture for treating fibromyalgia
Deare, John C; Zheng, Zhen; Xue, Charlie CL; Liu, Jian Ping; Shang, Jingsheng; Scott, Sean W; Littlejohn, Geoff
2014-01-01
Background One in five fibromyalgia sufferers use acupuncture treatment within two years of diagnosis. Objectives To examine the benefits and safety of acupuncture treatment for fibromyalgia. Search methods We searched CENTRAL, PubMed, EMBASE, CINAHL, National Research Register, HSR Project and Current Contents, as well as the Chinese databases VIP and Wangfang to January 2012 with no language restrictions. Selection criteria Randomised and quasi-randomised studies evaluating any type of invasive acupuncture for fibromyalgia diagnosed according to the American College of Rheumatology (ACR) criteria, and reporting any main outcome: pain, physical function, fatigue, sleep, total well-being, stiffness and adverse events. Data collection and analysis Two author pairs selected trials, extracted data and assessed risk of bias. Treatment effects were reported as standardised mean differences (SMD) and 95%confidence intervals (CI) for continuous outcomes using different measurement tools (pain, physical function, fatigue, sleep, total well-being and stiffness) and risk ratio (RR) and 95% CI for dichotomous outcomes (adverse events).We pooled data using the random-effects model. Main results Nine trials (395 participants) were included. All studies except one were at low risk of selection bias; five were at risk of selective reporting bias (favouring either treatment group); two were subject to attrition bias (favouring acupuncture); three were subject to performance bias (favouring acupuncture) and one to detection bias (favouring acupuncture). Three studies utilised electro-acupuncture (EA) with the remainder using manual acupuncture (MA) without electrical stimulation. All studies used ’formula acupuncture’ except for one, which used trigger points. Low quality evidence from one study (13 participants) showed EA improved symptoms with no adverse events at one month following treatment. Mean pain in the non-treatment control group was 70 points on a 100 point scale; EA reduced pain by a mean of 22 points (95% confidence interval (CI) 4 to 41), or 22% absolute improvement. Control group global well-being was 66.5 points on a 100 point scale; EA improved well-being by a mean of 15 points (95% CI 5 to 26 points). Control group stiffness was 4.8 points on a 0 to 10 point; EA reduced stiffness by a mean of 0.9 points (95% CI 0.1 to 2 points; absolute reduction 9%, 95% CI 4% to 16%). Fatigue was 4.5 points (10 point scale) without treatment; EA reduced fatigue by a mean of 1 point (95% CI 0.22 to 2 points), absolute reduction 11% (2% to 20%). There was no difference in sleep quality (MD 0.4 points, 95% CI −1 to 0.21 points, 10 point scale), and physical function was not reported. Moderate quality evidence from six studies (286 participants) indicated that acupuncture (EA or MA) was no better than sham acupuncture, except for less stiffness at one month. Subgroup analysis of two studies (104 participants) indicated benefits of EA. Mean pain was 70 points on 0 to 100 point scale with sham treatment; EA reduced pain by 13% (5% to 22%); (SMD −0.63, 95% CI −1.02 to −0.23). Global well-being was 5.2 points on a 10 point scale with sham treatment; EA improved well-being: SMD 0.65, 95% CI 0.26 to 1.05; absolute improvement 11% (4% to 17%). EA improved sleep, from 3 points on a 0 to 10 point scale in the sham group: SMD 0.40 (95% CI 0.01 to 0.79); absolute improvement 8% (0.2% to 16%). Low-quality evidence from one study suggested that MA group resulted in poorer physical function: mean function in the sham group was 28 points (100 point scale); treatment worsened function by a mean of 6 points (95% CI −10.9 to −0.7). Low-quality evidence from three trials (289 participants) suggested no difference in adverse events between real (9%) and sham acupuncture (35%); RR 0.44 (95% CI 0.12 to 1.63). Moderate quality evidence from one study (58 participants) found that compared with standard therapy alone (antidepressants and exercise), adjunct acupuncture therapy reduced pain at one month after treatment: mean pain was 8 points on a 0 to 10 point scale in the standard therapy group; treatment reduced pain by 3 points (95% CI −3.9 to −2.1), an absolute reduction of 30% (21% to 39%). Two people treated with acupuncture reported adverse events; there were none in the control group (RR 3.57; 95% CI 0.18 to 71.21). Global well-being, sleep, fatigue and stiffness were not reported. Physical function data were not usable. Low quality evidence from one study (38 participants) showed a short-term benefit of acupuncture over antidepressants in pain relief: mean pain was 29 points (0 to 100 point scale) in the antidepressant group; acupuncture reduced pain by 17 points (95% CI −24.1 to −10.5). Other outcomes or adverse events were not reported. Moderate-quality evidence from one study (41 participants) indicated that deep needling with or without deqi did not differ in pain, fatigue, function or adverse events. Other outcomes were not reported. Four studies reported no differences between acupuncture and control or other treatments described at six to seven months follow-up. No serious adverse events were reported, but there were insufficient adverse events to be certain of the risks. Authors’ conclusions There is low tomoderate-level evidence that compared with no treatment and standard therapy, acupuncture improves pain and stiffness in people with fibromyalgia. There is moderate-level evidence that the effect of acupuncture does not differ from sham acupuncture in reducing pain or fatigue, or improving sleep or global well-being. EA is probably better than MA for pain and stiffness reduction and improvement of global well-being, sleep and fatigue. The effect lasts up to one month, but is not maintained at six months follow-up. MA probably does not improve pain or physical functioning. Acupuncture appears safe. People with fibromyalgia may consider using EA alone or with exercise and medication. The small sample size, scarcity of studies for each comparison, lack of an ideal sham acupuncture weaken the level of evidence and its clinical implications. Larger studies are warranted. PMID:23728665
A comparison of phone-based and onsite-based fidelity for Assertive Community Treatment in Indiana
McGrew, John H.; Stull, Laura G.; Rollins, Angela L.; Salyers, Michelle P.; Hicks, Lia J.
2014-01-01
Objective This study investigated the reliability, validity, and role of rater expertise in phone-administered fidelity assessment instrument based on the Dartmouth Assertive Community Treatment Scale (DACTS). Methods An experienced rater paired with a research assistant without fidelity assessment experience or a consultant familiar with the treatment site conducted phone based assessments of 23 teams providing assertive community treatment in Indiana. Using the DACTS, consultants conducted on-site evaluations of the programs. Results The pairs of phone raters revealed high levels of consistency [intraclass correlation coefficient (ICC)=.92] and consensus (mean absolute difference of .07). Phone and on-site assessment showed strong agreement (ICC=.87) and consensus (mean absolute difference of .07) and agreed within .1 scale point, or 2% of the scoring range, for 83% of sites and within .15 scale point for 91% of sites. Results were unaffected by the expertise level of the rater. Conclusions Phone based assessment could help agencies monitor faithful implementation of evidence-based practices. PMID:21632738
Estimating a just-noticeable difference for ocular comfort in contact lens wearers.
Papas, Eric B; Keay, Lisa; Golebiowski, Blanka
2011-06-21
To estimate the just-noticeable difference (JND) in ocular comfort rating by human, contact lens-wearing subjects using 1 to 100 numerical scales. Ostensibly identical, new contact lenses were worn simultaneously in both eyes by 40 subjects who made individual comfort ratings for each eye using a 100-point numerical ratings scale (NRS). Concurrently, interocular preference was indicated on a five-point Likert scale (1 to 5: strongly prefer right, slightly prefer right, no preference, slightly prefer left, strongly prefer left, respectively). Differences in NRS comfort score (ΔC) between the right and left eyes were determined for each Likert scale preference criteria. The distribution of group ΔC scores was examined relative to alternative definitions of JND as a means of estimating its value. For Likert scores indicating the presence of a slight interocular preference, absolute ΔC ranged from 1 to 30 units with a mean of 7.4 ± 1.3 (95% confidence interval) across all lenses and trials. When there was no Likert scale preference expressed between the eyes, absolute ΔC did not exceed 5 units. For ratings of comfort using a 100-point numerical rating scale, the inter-ocular JND is unlikely to be less than 5 units. The estimate for the average value in the population was approximately 7 to 8 units. These numbers indicate the lowest level at which changes in comfort measured with such scales are likely to be clinically significant.
NASA Astrophysics Data System (ADS)
Battuello, M.; Girard, F.; Florio, M.
2009-02-01
Four independent radiation temperature scales approximating the ITS-90 at 900 nm, 950 nm and 1.6 µm have been realized from the indium point (429.7485 K) to the copper point (1357.77 K) which were used to derive by extrapolation the transition temperature T90(Co-C) of the cobalt-carbon eutectic fixed point. An INRIM cell was investigated and an average value T90(Co-C) = 1597.20 K was found with the four values lying within 0.25 K. Alternatively, thermodynamic approximated scales were realized by assigning to the fixed points the best presently available thermodynamic values and deriving T(Co-C). An average value of 1597.27 K was found (four values lying within 0.25 K). The standard uncertainties associated with T90(Co-C) and T(Co-C) were 0.16 K and 0.17 K, respectively. INRIM determinations are compatible with recent thermodynamic determinations on three different cells (values lying between 1597.11 K and 1597.25 K) and with the result of a comparison on the same cell by an absolute radiation thermometer and an irradiance measurement with filter radiometers which give values of 1597.11 K and 1597.43 K, respectively (Anhalt et al 2006 Metrologia 43 S78-83). The INRIM approach allows the determination of both ITS-90 and thermodynamic temperature of a fixed point in a simple way and can provide valuable support to absolute radiometric methods in defining the transition temperature of new high-temperature fixed points.
The Rational Zero Point on Incentive-Object Preference Scales: A Developmental Study
ERIC Educational Resources Information Center
Haaf, Robert A.
1971-01-01
Preference judgments made by 20 males and 20 females (grades K-4) about the incentive value of 10 objects (i.e. bubble gum, Chiclet, candy corn, dried lima bean) helped determine relative and absolute scales for use of these objects as rewards. The assumption that the same object is equally rewarding at different age levels may be unwarranted.…
The Dynamics of Scaling: A Memory-Based Anchor Model of Category Rating and Absolute Identification
ERIC Educational Resources Information Center
Petrov, Alexander A.; Anderson, John R.
2005-01-01
A memory-based scaling model--ANCHOR--is proposed and tested. The perceived magnitude of the target stimulus is compared with a set of anchors in memory. Anchor selection is probabilistic and sensitive to similarity, base-level strength, and recency. The winning anchor provides a reference point near the target and thereby converts the global…
Kociolek, Aaron M; Keir, Peter J
2011-07-07
A detailed musculoskeletal model of the human hand is needed to investigate the pathomechanics of tendon disorders and carpal tunnel syndrome. The purpose of this study was to develop a biomechanical model with realistic flexor tendon excursions and moment arms. An existing upper extremity model served as a starting point, which included programmed movement of the index finger. Movement capabilities were added for the other fingers. Metacarpophalangeal articulations were modelled as universal joints to simulate flexion/extension and abduction/adduction while interphalangeal articulations used hinges to represent flexion. Flexor tendon paths were modelled using two approaches. The first method constrained tendons with control points, representing annular pulleys. The second technique used wrap objects at the joints as tendon constraints. Both control point and joint wrap models were iteratively adjusted to coincide with tendon excursions and moment arms from a anthropometric regression model using inputs for a 50th percentile male. Tendon excursions from the joint wrap method best matched the regression model even though anatomic features of the tendon paths were not preserved (absolute differences: mean<0.33 mm, peak<0.74 mm). The joint wrap model also produced similar moment arms to the regression (absolute differences: mean<0.63 mm, peak<1.58 mm). When a scaling algorithm was used to test anthropometrics, the scaled joint wrap models better matched the regression than the scaled control point models. Detailed patient-specific anatomical data will improve model outcomes for clinical use; however, population studies may benefit from simplified geometry, especially with anthropometric scaling. Copyright © 2011 Elsevier Ltd. All rights reserved.
High-resolution absolute position detection using a multiple grating
NASA Astrophysics Data System (ADS)
Schilling, Ulrich; Drabarek, Pawel; Kuehnle, Goetz; Tiziani, Hans J.
1996-08-01
To control electro-mechanical engines, high-resolution linear and rotary encoders are needed. Interferometric methods (grating interferometers) promise a resolution of a few nanometers, but have an ambiguity range of some microns. Incremental encoders increase the absolute measurement range by counting the signal periods starting from a defined initial point. In many applications, however, it is not possible to move to this initial point, so that absolute encoders have to be used. Absolute encoders generally have a scale with two or more tracks placed next to each other. Therefore, they use a two-dimensional grating structure to measure a one-dimensional position. We present a new method, which uses a one-dimensional structure to determine the position in one dimension. It is based on a grating with a large grating period up to some millimeters, having the same diffraction efficiency in several predefined diffraction orders (multiple grating). By combining the phase signals of the different diffraction orders, it is possible to establish the position in an absolute range of the grating period with a resolution like incremental grating interferometers. The principal functionality was demonstrated by applying the multiple grating in a heterodyne grating interferometer. The heterodyne frequency was generated by a frequency modulated laser in an unbalanced interferometer. In experimental measurements an absolute range of 8 mm was obtained while achieving a resolution of 10 nm.
Development and Validation of a Photonumeric Scale for Assessment of Chin Retrusion.
Sykes, Jonathan M; Carruthers, Alastair; Hardas, Bhushan; Murphy, Diane K; Jones, Derek; Carruthers, Jean; Donofrio, Lisa; Creutz, Lela; Marx, Ann; Dill, Sara
2016-10-01
A validated scale is needed for objective and reproducible comparisons of chin appearance before and after chin augmentation in practice and clinical studies. To describe the development and validation of the 5-point photonumeric Allergan Chin Retrusion Scale. The Allergan Chin Retrusion Scale was developed to include an assessment guide, verbal descriptors, morphed images, and real subject images for each scale grade. The clinical significance of a 1-point score difference was evaluated in a review of multiple image pairs representing varying differences in severity. Interrater and intrarater reliability was evaluated in a live-subject validation study (N = 298) completed during 2 sessions occurring 3 weeks apart. A difference of ≥1 point on the scale was shown to reflect a clinically meaningful difference (mean [95% confidence interval] absolute score difference, 1.07 [0.94-1.20] for clinically different image pairs and 0.51 [0.39-0.63] for not clinically different pairs). Intrarater agreement between the 2 live-subject validation sessions was substantial (mean weighted kappa = 0.79). Interrater agreement was substantial during the second rating session (0.68, primary end point). The Allergan Chin Retrusion Scale is a validated and reliable scale for physician rating of severity of chin retrusion.
Fawkner, Samantha; Henretty, Joan; Knowles, Ann-Marie; Nevill, Alan; Niven, Ailsa
2014-01-01
The aim of this study was to adopt a longitudinal design to explore the direct effects of both absolute and relative maturation and changes in body size on physical activity, and explore if, and how, physical self-perceptions might mediate this effect. We recruited 208 girls (11.8 ± 0.4 years) at baseline. Data were collected at three subsequent time points, each 6 months apart. At 18 months, 119 girls remained in the study. At each time point, girls completed the Physical Activity Questionnaire for Children, the Pubertal Development Scale (from which, both a measure of relative and absolute maturation were defined) and the Physical Self-Perception Profile, and had physical size characteristics assessed. Multilevel modelling for physical activity indicated a significant negative effect of age, positive effect for physical condition and sport competence and positive association for relatively early maturers. Absolute maturation, body mass, waist circumference and sum of skinfolds did not significantly contribute to the model. Contrary to common hypotheses, relatively more mature girls may, in fact, be more active than their less mature peers. However, neither changes in absolute maturation nor physical size appear to directly influence changes in physical activity in adolescent girls.
Measurement of the Am 242 m neutron-induced reaction cross sections
Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; ...
2017-02-17
The neutron-induced reaction cross sections of 242mAm were measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. A new neutron-capture cross section was determined, and the absolute scale was set according to a concurrent measurement of the well-known 242mAm(n,f) cross section. The (n,γ) cross section was measured from thermal energy to an incident energy of 1 eV at which point the data quality was limited by the reaction yield in the laboratory. Our new 242mAm fission cross section was normalized to ENDF/B-VII.1 tomore » set the absolute scale, and it agreed well with the (n,f) cross section from thermal energy to 1 keV. Lastly, the average absolute capture-to-fission ratio was determined from thermal energy to E n = 0.1 eV, and it was found to be 26(4)% as opposed to the ratio of 19% from the ENDF/B-VII.1 evaluation.« less
IQ Discrepancies between the Binet and WISC-R in Children with Developmental Problems.
ERIC Educational Resources Information Center
Bloom, Allan S.; And Others
1983-01-01
Administered the Stanford-Binet and Wechsler Intelligence Scale for Children (Revised) to 121 children with developmental problems. Results showed 28 children received absolute differences of 12 points or greater between the Binet and the WISC-R. There were 10 instances of complete incongruence between the Binet and all the WISC-R IQs. (JAC)
Development and Validation of a Photonumeric Scale for Evaluation of Volume Deficit of the Hand
Donofrio, Lisa; Hardas, Bhushan; Murphy, Diane K.; Carruthers, Jean; Carruthers, Alastair; Sykes, Jonathan M.; Creutz, Lela; Marx, Ann; Dill, Sara
2016-01-01
BACKGROUND A validated scale is needed for objective and reproducible comparisons of hand appearance before and after treatment in practice and clinical studies. OBJECTIVE To describe the development and validation of the 5-point photonumeric Allergan Hand Volume Deficit Scale. METHODS The scale was developed to include an assessment guide, verbal descriptors, morphed images, and real-subject images for each grade. The clinical significance of a 1-point score difference was evaluated in a review of image pairs representing varying differences in severity. Interrater and intrarater reliability was evaluated in a live-subject validation study (N = 296) completed during 2 sessions occurring 3 weeks apart. RESULTS A score difference of ≥1 point was shown to reflect a clinically significant difference (mean [95% confidence interval] absolute score difference, 1.12 [0.99–1.26] for clinically different image pairs and 0.45 [0.33–0.57] for not clinically different pairs). Intrarater agreement between the 2 validation sessions was almost perfect (mean weighted kappa = 0.83). Interrater agreement was almost perfect during the second session (0.82, primary end point). CONCLUSION The Allergan Hand Volume Deficit Scale is a validated and reliable scale for physician rating of hand volume deficit. PMID:27661741
Development and Validation of a Photonumeric Scale for Evaluation of Facial Skin Texture
Carruthers, Alastair; Hardas, Bhushan; Murphy, Diane K.; Carruthers, Jean; Jones, Derek; Sykes, Jonathan M.; Creutz, Lela; Marx, Ann; Dill, Sara
2016-01-01
BACKGROUND A validated scale is needed for objective and reproducible comparisons of facial skin roughness before and after aesthetic treatment in practice and in clinical studies. OBJECTIVE To describe the development and validation of the 5-point photonumeric Allergan Skin Roughness Scale. METHODS The scale was developed to include an assessment guide, verbal descriptors, morphed images, and real subject images for each grade. The clinical significance of a 1-point score difference was evaluated in a review of image pairs representing varying differences in severity. Interrater and intrarater reliability was evaluated in a live-subject validation study (N = 290) completed during 2 sessions occurring 3 weeks apart. RESULTS A score difference of ≥1 point was shown to reflect a clinically meaningful difference (mean [95% confidence interval] absolute score difference 1.09 [0.96–1.23] for clinically different image pairs and 0.53 [0.38–0.67] for not clinically different pairs). Intrarater agreement between the 2 validation sessions was almost perfect (weighted kappa = 0.83). Interrater agreement was almost perfect during the second rating session (0.81, primary end point). CONCLUSION The Allergan Skin Roughness Scale is a validated and reliable scale for physician rating of midface skin roughness. PMID:27661744
A review of the different techniques for solid surface acid-base characterization.
Sun, Chenhang; Berg, John C
2003-09-18
In this work, various techniques for solid surface acid-base (AB) characterization are reviewed. Different techniques employ different scales to rank acid-base properties. Based on the results from literature and the authors' own investigations for mineral oxides, these scales are compared. The comparison shows that Isoelectric Point (IEP), the most commonly used AB scale, is not a description of the absolute basicity or acidity of a surface, but a description of their relative strength. That is, a high IEP surface shows more basic functionality comparing with its acidic functionality, whereas a low IEP surface shows less basic functionality comparing with its acidic functionality. The choice of technique and scale for AB characterization depends on the specific application. For the cases in which the overall AB property is of interest, IEP (by electrokinetic titration) and H(0,max) (by indicator dye adsorption) are appropriate. For the cases in which the absolute AB property is of interest such as in the study of adhesion, it is more pertinent to use chemical shift (by XPS) and the heat of adsorption of probe gases (by calorimetry or IGC).
A new lunar absolute control point: established by images from the landing camera on Chang'e-3
NASA Astrophysics Data System (ADS)
Wang, Fen-Fei; Liu, Jian-Jun; Li, Chun-Lai; Ren, Xin; Mu, Ling-Li; Yan, Wei; Wang, Wen-Rui; Xiao, Jing-Tao; Tan, Xu; Zhang, Xiao-Xia; Zou, Xiao-Duan; Gao, Xing-Ye
2014-12-01
The establishment of a lunar control network is one of the core tasks in selenodesy, in which defining an absolute control point on the Moon is the most important step. However, up to now, the number of absolute control points has been very sparse. These absolute control points have mainly been lunar laser ranging retroreflectors, whose geographical location can be observed by observations on Earth and also identified in high resolution lunar satellite images. The Chang'e-3 (CE-3) probe successfully landed on the Moon, and its geographical location has been monitored by an observing station on Earth. Since its positional accuracy is expected to reach the meter level, the CE-3 landing site can become a new high precision absolute control point. We use a sequence of images taken from the landing camera, as well as satellite images taken by CE-1 and CE-2, to identify the location of the CE-3 lander. With its geographical location known, the CE-3 landing site can be established as a new absolute control point, which will effectively expand the current area of the lunar absolute control network by 22%, and can greatly facilitate future research in the field of lunar surveying and mapping, as well as selenodesy.
Strand, Bjørn Heine; Steingrímsdóttir, Ólöf Anna; Grøholt, Else-Karin; Ariansen, Inger; Graff-Iversen, Sidsel; Næss, Øyvind
2014-11-24
Educational inequalities in total mortality in Norway have widened during 1960-2000. We wanted to investigate if inequalities have continued to increase in the post millennium decade, and which causes of deaths were the main drivers. All deaths (total and cause specific) in the adult Norwegian population aged 45-74 years over five decades, until 2010 were included; in all 708,449 deaths and over 62 million person years. Two indices of inequalities were used to measure inequality and changes in inequalities over time, on the relative scale (Relative Index of Inequality, RII) and on the absolute scale (Slope Index of Inequality, SII). Relative inequalities in total mortality increased over the five decades in both genders. Among men absolute inequalities stabilized during 2000-2010, after steady, significant increases each decade back to the 1960s, while in women, absolute inequalities continued to increase significantly during the last decade. The stabilization in absolute inequalities among men in the last decade was mostly due to a fall in inequalities in cardiovascular disease (CVD) mortality and lung cancer and respiratory disease mortality. Still, in this last decade, the absolute inequalities in cause-specific mortality among men were mostly due to cardiovascular diseases (CVD) (34% of total mortality inequality), lung cancer and respiratory diseases (21%). Among women the absolute inequalities in mortality were mostly due to lung cancer and chronic lower respiratory tract diseases (30%) and CVD (27%). In men, absolute inequalities in mortality have stopped increasing, seemingly due to reduction in inequalities in CVD mortality. Absolute inequality in mortality continues to widen among women, mostly due to death from lung cancer and chronic lung disease. Relative educational inequalities in mortality are still on the rise for Norwegian men and women.
NASA Astrophysics Data System (ADS)
Corwin, Ivan; Dimitrov, Evgeni
2018-05-01
We consider the ASEP and the stochastic six vertex model started with step initial data. After a long time, T, it is known that the one-point height function fluctuations for these systems are of order T 1/3. We prove the KPZ prediction of T 2/3 scaling in space. Namely, we prove tightness (and Brownian absolute continuity of all subsequential limits) as T goes to infinity of the height function with spatial coordinate scaled by T 2/3 and fluctuations scaled by T 1/3. The starting point for proving these results is a connection discovered recently by Borodin-Bufetov-Wheeler between the stochastic six vertex height function and the Hall-Littlewood process (a certain measure on plane partitions). Interpreting this process as a line ensemble with a Gibbsian resampling invariance, we show that the one-point tightness of the top curve can be propagated to the tightness of the entire curve.
Bobo, William V; Angleró, Gabriela C; Jenkins, Gregory; Hall-Flavin, Daniel K; Weinshilboum, Richard; Biernacka, Joanna M
2016-05-01
The study aimed to define thresholds of clinically significant change in 17-item Hamilton Depression Rating Scale (HDRS-17) scores using the Clinical Global Impression-Improvement (CGI-I) Scale as a gold standard. We conducted a secondary analysis of individual patient data from the Pharmacogenomic Research Network Antidepressant Medication Pharmacogenomic Study, an 8-week, single-arm clinical trial of citalopram or escitalopram treatment of adults with major depression. We used equipercentile linking to identify levels of absolute and percent change in HDRS-17 scores that equated with scores on the CGI-I at 4 and 8 weeks. Additional analyses equated changes in the HDRS-7 and Bech-6 scale scores with CGI-I scores. A CGI-I score of 2 (much improved) corresponded to an absolute decrease (improvement) in HDRS-17 total score of 11 points and a percent decrease of 50-57%, from baseline values. Similar results were observed for percent change in HDRS-7 and Bech-6 scores. Larger absolute (but not percent) decreases in HDRS-17 scores equated with CGI-I scores of 2 in persons with higher baseline depression severity. Our results support the consensus definition of response based on HDRS-17 scores (>50% decrease from baseline). A similar definition of response may apply to the HDRS-7 and Bech-6. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
A New Sclerosing Agent in the Treatment of Venous Malformations
Sannier, K.; Dompmartin, A.; Théron, J.; Labbé, D.; Barrellier, M.T.; Leroyer, R.; Touré, P.; Leroy, D.
2004-01-01
Summary Absolute ethanol is the most effective agent in the treatment of venous malformation (VM) although it is quite risky to use because of the danger of diffusion beyond the target. To reduce this risk, we have developed an alcoholic sclerosing solution that is less diffusible. The viscosity of absolute ethanol was enhanced with monographic ethyl-cellulose at a concentration of 5.88% ie 0.75 g in 15 ml of absolute ethanol 95%. 23 patients with VM located on the buttock (1), hand (2), leg (1) and face (19) were treated. A mean volume of 1.99 ml of the solution was injected directly into the VM. Each patient had an average of 2.8 procedures. Sixteen patients were done under general anaesthesia and seven with local anaesthesia. Evaluation was performed by the patient, the dermatologist of the treating multidisciplinary team and a dermatological group not involved in the treatment of the patients. Patients were evaluated after a mean delay of 24.52 months. Evaluation of the cosmetic result was made with a five point scale and the global result with a three point scale. VM pain was evaluated by the patients with a Visual Analogue Scale. The aesthetic results were graded as satisfactory (> 3) for the patient and the dermatologist of the multidisciplinary team. However the results were not as good with the independent dermatological group evaluation. The pain was significantly less important after the treatment (p << 0.001). Among the 23 patients, the local adverse events were nine necrosis with or without ethylcellulose fistula followed by only two surgical procedures. There were no systemic adverse events. Sclerotherapy of VM is usually performed with absolute ethanol or ethibloc. The main advantage of our sclerosing mixture is that it expands like a balloon when injected slowly in a aqueous media. Because of the important increase in viscosity the volume of injected solution is much lower than ethanol alone and the risk of systemic reactions is lower. Contrary to ethibloc, post-sclerosing surgery is not necessary because sub-cutaneous ethylcellulose disappears secondarily. PMID:20587223
Topical herbal therapies for treating osteoarthritis
Cameron, Melainie; Chrubasik, Sigrun
2014-01-01
Background Before extraction and synthetic chemistry were invented, musculoskeletal complaints were treated with preparations from medicinal plants. They were either administered orally or topically. In contrast to the oral medicinal plant products, topicals act in part as counterirritants or are toxic when given orally. Objectives To update the previous Cochrane review of herbal therapy for osteoarthritis from 2000 by evaluating the evidence on effectiveness for topical medicinal plant products. Search methods Databases for mainstream and complementary medicine were searched using terms to include all forms of arthritis combined with medicinal plant products. We searched electronic databases (Cochrane Central Register of Controlled Trials (CENTRAL),MEDLINE, EMBASE, AMED, CINAHL, ISI Web of Science, World Health Organization Clinical Trials Registry Platform) to February 2013, unrestricted by language. We also searched the reference lists from retrieved trials. Selection criteria Randomised controlled trials of herbal interventions used topically, compared with inert (placebo) or active controls, in people with osteoarthritis were included. Data collection and analysis Two review authors independently selected trials for inclusion, assessed the risk of bias of included studies and extracted data. Main results Seven studies (seven different medicinal plant interventions; 785 participants) were included. Single studies (five studies, six interventions) and non-comparable studies (two studies, one intervention) precluded pooling of results. Moderate evidence from a single study of 174 people with hand osteoarthritis indicated that treatment with Arnica extract gel probably results in similar benefits as treatment with ibuprofen (non-steroidal anti-inflammatory drug) with a similar number of adverse events. Mean pain in the ibuprofen group was 44.2 points on a 100 point scale; treatment with Arnica gel reduced the pain by 4 points after three weeks: mean difference (MD) −3.8 points (95% confidence intervals (CI) −10.1 to 2.5), absolute reduction 4% (10% reduction to 3% increase). Hand function was 7.5 points on a 30 point scale in the ibuprofen-treated group; treatment with Arnica gel reduced function by 0.4 points (MD −0.4, 95% CI −1.75 to 0.95), absolute improvement 1% (6% improvement to 3% decline)). Total adverse events were higher in the Arnica gel group (13% compared to 8% in the ibuprofen group): relative risk (RR) 1.65 (95% CI 0.72 to 3.76). Moderate quality evidence from a single trial of 99 people with knee osteoarthritis indicated that compared with placebo, Capsicum extract gel probably does not improve pain or knee function, and is commonly associated with treatment-related adverse events including skin irritation and a burning sensation. At four weeks follow-up, mean pain in the placebo group was 46 points on a 100 point scale; treatment with Capsicum extract reduced pain by 1 point (MD −1, 95%CI −6.8 to 4.8), absolute reduction of 1%(7%reduction to 5% increase). Mean knee function in the placebo group was 34.8 points on a 96 point scale at four weeks; treatment with Capsicum extract improved function by a mean of 2.6 points (MD −2.6, 95% CI −9.5 to 4.2), an absolute improvement of 3% (10% improvement to 4% decline). Adverse event rates were greater in the Capsicum extract group (80% compared with 20% in the placebo group, rate ratio 4.12, 95% CI 3.30 to 5.17). The number needed to treat to result in adverse events was 2 (95% CI 1 to 2). Moderate evidence from a single trial of 220 people with knee osteoarthritis suggested that comfrey extract gel probably improves pain without increasing adverse events. At three weeks, the mean pain in the placebo group was 83.5 points on a 100 point scale. Treatment with comfrey reduced pain by a mean of 41.5 points (MD −41.5, 95% CI −48 to −34), an absolute reduction of 42% (34% to 48% reduction). Function was not reported. Adverse events were similar: 6%(7/110) reported adverse events in the comfrey group compared with 14% (15/110) in the placebo group (RR 0.47, 95% CI 0.20 to 1.10). Although evidence from a single trial indicated that adhesive patches containing Chinese herbal mixtures FNZG and SJG may improve pain and function, the clinical applicability of these findings are uncertain because participants were only treated and followed up for seven days. We are also uncertain if other topical herbal products (Marhame-Mafasel compress, stinging nettle leaf) improve osteoarthritis symptoms due to the very low quality evidence from single trials. No serious side effects were reported. Authors’ conclusions Although the mechanism of action of the topical medicinal plant products provides a rationale basis for their use in the treatment of osteoarthritis, the quality and quantity of current research studies of effectiveness are insufficient. Arnica gel probably improves symptoms as effectively as a gel containing non-steroidal anti-inflammatory drug, but with no better (and possibly worse) adverse event profile. Comfrey extract gel probably improves pain, and Capsicum extract gel probably will not improve pain or function at the doses examined in this review. Further high quality, fully powered studies are required to confirm the trends of effectiveness identifed in studies so far. PMID:23728701
Development and Validation of a Photonumeric Scale for Evaluation of Volume Deficit of the Temple
Jones, Derek; Hardas, Bhushan; Murphy, Diane K.; Donofrio, Lisa; Sykes, Jonathan M.; Carruthers, Alastair; Creutz, Lela; Marx, Ann; Dill, Sara
2016-01-01
BACKGROUND A validated scale is needed for objective and reproducible comparisons of temple appearance before and after aesthetic treatment in practice and clinical studies. OBJECTIVE To describe the development and validation of the 5-point photonumeric Allergan Temple Hollowing Scale. METHODS The scale was developed to include an assessment guide, verbal descriptors, morphed images, and real subject images for each grade. The clinical significance of a 1-point score difference was evaluated in a review of image pairs representing varying differences in severity. Interrater and intrarater reliability was evaluated in a live-subject validation study (N = 298) completed during 2 sessions occurring 3 weeks apart. RESULTS A score difference of ≥1 point was shown to reflect a clinically significant difference (mean [95% confidence interval] absolute score difference, 1.1 [0.94–1.26] for clinically different image pairs and 0.67 [0.51–0.83] for not clinically different pairs). Intrarater agreement between the 2 validation sessions was almost perfect (mean weighted kappa = 0.86). Interrater agreement was almost perfect during the second session (0.81, primary endpoint). CONCLUSION The Allergan Temple Hollowing Scale is a validated and reliable scale for physician rating of temple volume deficit. PMID:27661742
NASA Astrophysics Data System (ADS)
Wähmer, M.; Anhalt, K.; Hollandt, J.; Klein, R.; Taubert, R. D.; Thornagel, R.; Ulm, G.; Gavrilov, V.; Grigoryeva, I.; Khlevnoy, B.; Sapritsky, V.
2017-10-01
Absolute spectral radiometry is currently the only established primary thermometric method for the temperature range above 1300 K. Up to now, the ongoing improvements of high-temperature fixed points and their formal implementation into an improved temperature scale with the mise en pratique for the definition of the kelvin, rely solely on single-wavelength absolute radiometry traceable to the cryogenic radiometer. Two alternative primary thermometric methods, yielding comparable or possibly even smaller uncertainties, have been proposed in the literature. They use ratios of irradiances to determine the thermodynamic temperature traceable to blackbody radiation and synchrotron radiation. At PTB, a project has been established in cooperation with VNIIOFI to use, for the first time, all three methods simultaneously for the determination of the phase transition temperatures of high-temperature fixed points. For this, a dedicated four-wavelengths ratio filter radiometer was developed. With all three thermometric methods performed independently and in parallel, we aim to compare the potential and practical limitations of all three methods, disclose possibly undetected systematic effects of each method and thereby confirm or improve the previous measurements traceable to the cryogenic radiometer. This will give further and independent confidence in the thermodynamic temperature determination of the high-temperature fixed point's phase transitions.
Abadlia, L; Gasser, F; Khalouk, K; Mayoufi, M; Gasser, J G
2014-09-01
In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in this paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce with a good accuracy the thermal conductivity using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new very accurate results for pure copper, platinum, and nickel especially at very high temperatures. But resistivity and absolute thermopower measurement can be more than an objective in itself. Resistivity characterizes the bulk of a material while absolute thermoelectric power characterizes the material at the point where the electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that the measurement of resistivity and absolute thermoelectric power characterizes advantageously the (change of) phase, probably as well as DSC (if not better), since the change of phases can be easily followed during several hours/days at constant temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abadlia, L.; Mayoufi, M.; Gasser, F.
2014-09-15
In this paper we describe an experimental setup designed to measure simultaneously and very accurately the resistivity and the absolute thermoelectric power, also called absolute thermopower or absolute Seebeck coefficient, of solid and liquid conductors/semiconductors over a wide range of temperatures (room temperature to 1600 K in present work). A careful analysis of the existing experimental data allowed us to extend the absolute thermoelectric power scale of platinum to the range 0-1800 K with two new polynomial expressions. The experimental device is controlled by a LabView program. A detailed description of the accurate dynamic measurement methodology is given in thismore » paper. We measure the absolute thermoelectric power and the electrical resistivity and deduce with a good accuracy the thermal conductivity using the relations between the three electronic transport coefficients, going beyond the classical Wiedemann-Franz law. We use this experimental setup and methodology to give new very accurate results for pure copper, platinum, and nickel especially at very high temperatures. But resistivity and absolute thermopower measurement can be more than an objective in itself. Resistivity characterizes the bulk of a material while absolute thermoelectric power characterizes the material at the point where the electrical contact is established with a couple of metallic elements (forming a thermocouple). In a forthcoming paper we will show that the measurement of resistivity and absolute thermoelectric power characterizes advantageously the (change of) phase, probably as well as DSC (if not better), since the change of phases can be easily followed during several hours/days at constant temperature.« less
Absolute and relative height-pixel accuracy of SRTM-GL1 over the South American Andean Plateau
NASA Astrophysics Data System (ADS)
Satge, Frédéric; Denezine, Matheus; Pillco, Ramiro; Timouk, Franck; Pinel, Sébastien; Molina, Jorge; Garnier, Jérémie; Seyler, Frédérique; Bonnet, Marie-Paule
2016-11-01
Previously available only over the Continental United States (CONUS), the 1 arc-second mesh size (spatial resolution) SRTM-GL1 (Shuttle Radar Topographic Mission - Global 1) product has been freely available worldwide since November 2014. With a relatively small mesh size, this digital elevation model (DEM) provides valuable topographic information over remote regions. SRTM-GL1 is assessed for the first time over the South American Andean Plateau in terms of both the absolute and relative vertical point-to-point accuracies at the regional scale and for different slope classes. For comparison, SRTM-v4 and GDEM-v2 Global DEM version 2 (GDEM-v2) generated by ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) are also considered. A total of approximately 160,000 ICESat/GLAS (Ice, Cloud and Land Elevation Satellite/Geoscience Laser Altimeter System) data are used as ground reference measurements. Relative error is often neglected in DEM assessments due to the lack of reference data. A new methodology is proposed to assess the relative accuracies of SRTM-GL1, SRTM-v4 and GDEM-v2 based on a comparison with ICESat/GLAS measurements. Slope values derived from DEMs and ICESat/GLAS measurements from approximately 265,000 ICESat/GLAS point pairs are compared using quantitative and categorical statistical analysis introducing a new index: the False Slope Ratio (FSR). Additionally, a reference hydrological network is derived from Google Earth and compared with river networks derived from the DEMs to assess each DEM's potential for hydrological applications over the region. In terms of the absolute vertical accuracy on a global scale, GDEM-v2 is the most accurate DEM, while SRTM-GL1 is more accurate than SRTM-v4. However, a simple bias correction makes SRTM-GL1 the most accurate DEM over the region in terms of vertical accuracy. The relative accuracy results generally did not corroborate the absolute vertical accuracy. GDEM-v2 presents the lowest statistical results based on the relative accuracy, while SRTM-GL1 is the most accurate. Vertical accuracy and relative accuracy are two independent components that must be jointly considered when assessing a DEM's potential. DEM accuracies increased with slope. In terms of hydrological potential, SRTM products are more accurate than GDEM-v2. However, the DEMs exhibit river extraction limitations over the region due to the low regional slope gradient.
Spectral Processing Analysis System (SPANS).
1980-11-01
Approximately 750 pounds Temperature Range: 60 - 80 degrees Farenheit Humidity: 40 - 70 percent (relative) Duty Cycle: Continuous Power Requirements: 5 wire, 3...displayed per display frame, local or absolute scaling, number of display points per line and waveform av- A eraging. A typical display is shown in Figure 3...the waveform. In the case of white noise, a high degree of correlation is found at zero lag only with the remaining lags showing little correlation
Performance Evaluation of sUAS Equipped with Velodyne HDL-32E LiDAR Sensor
NASA Astrophysics Data System (ADS)
Jozkow, G.; Wieczorek, P.; Karpina, M.; Walicka, A.; Borkowski, A.
2017-08-01
The Velodyne HDL-32E laser scanner is used more frequently as main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work evaluates empirically the accuracy of the point cloud collected with such UAS. Accuracy assessment was conducted in four aspects: impact of sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating 3D position error knowing errors of used sensors. The quality of trajectory reconstruction was assessed by comparing position and attitude differences from forward and reverse EKF solution. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating point 3D distances between LiDAR UAS and reference TLS point clouds. Test data consisted of point clouds collected in two separate flights performed over the same area. Executed experiments showed that in tested UAS, the trajectory reconstruction, especially attitude, has significant impact on point cloud accuracy. Estimated absolute accuracy of point clouds collected during both test flights was better than 10 cm, thus investigated UAS fits mapping-grade category.
NASA Astrophysics Data System (ADS)
Chesley, J. T.; Leier, A. L.; White, S.; Torres, R.
2017-06-01
Recently developed data collection techniques allow for improved characterization of sedimentary outcrops. Here, we outline a workflow that utilizes unmanned aerial vehicles (UAV) and structure-from-motion (SfM) photogrammetry to produce sub-meter-scale outcrop reconstructions in 3-D. SfM photogrammetry uses multiple overlapping images and an image-based terrain extraction algorithm to reconstruct the location of individual points from the photographs in 3-D space. The results of this technique can be used to construct point clouds, orthomosaics, and digital surface models that can be imported into GIS and related software for further study. The accuracy of the reconstructed outcrops, with respect to an absolute framework, is improved with geotagged images or independently gathered ground control points, and the internal accuracy of 3-D reconstructions is sufficient for sub-meter scale measurements. We demonstrate this approach with a case study from central Utah, USA, where UAV-SfM data can help delineate complex features within Jurassic fluvial sandstones.
Recall of patterns using binary and gray-scale autoassociative morphological memories
NASA Astrophysics Data System (ADS)
Sussner, Peter
2005-08-01
Morphological associative memories (MAM's) belong to a class of artificial neural networks that perform the operations erosion or dilation of mathematical morphology at each node. Therefore we speak of morphological neural networks. Alternatively, the total input effect on a morphological neuron can be expressed in terms of lattice induced matrix operations in the mathematical theory of minimax algebra. Neural models of associative memories are usually concerned with the storage and the retrieval of binary or bipolar patterns. Thus far, the emphasis in research on morphological associative memory systems has been on binary models, although a number of notable features of autoassociative morphological memories (AMM's) such as optimal absolute storage capacity and one-step convergence have been shown to hold in the general, gray-scale setting. In previous papers, we gained valuable insight into the storage and recall phases of AMM's by analyzing their fixed points and basins of attraction. We have shown in particular that the fixed points of binary AMM's correspond to the lattice polynomials in the original patterns. This paper extends these results in the following ways. In the first place, we provide an exact characterization of the fixed points of gray-scale AMM's in terms of combinations of the original patterns. Secondly, we present an exact expression for the fixed point attractor that represents the output of either a binary or a gray-scale AMM upon presentation of a certain input. The results of this paper are confirmed in several experiments using binary patterns and gray-scale images.
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensorfor large-scale 3D reconstruction. The proposed system is designed to capture data on afast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor,and they are synchronized by a hardware trigger. Reconstruction of 3D structures is doneby estimating frame-by-frame motion and accumulating vertical laser scans, as in previousworks. However, our approach does not assume near 2D motion, but estimates free motion(including absolute scale) in 3D space using both laser data and image features. In orderto avoid the degeneration associated with typical three-point algorithms, we present a newalgorithm that selects 3D points from two frames captured by multiple cameras. The problemof error accumulation is solved by loop closing, not by GPS. The experimental resultsshow that the estimated path is successfully overlaid on the satellite images, such that thereconstruction result is very accurate.
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley
2017-04-01
Present-day precipitation-temperature scaling relations indicate that hourly precipitation extremes may have a response to warming exceeding the Clausius-Clapeyron (CC) relation; for The Netherlands the dependency on surface dew point temperature follows two times the CC relation corresponding to 14 % per degree. Our hypothesis - as supported by a simple physical argument presented here - is that this 2CC behaviour arises from the physics of convective clouds. So, we think that this response is due to local feedbacks related to the convective activity, while other large scale atmospheric forcing conditions remain similar except for the higher temperature (approximately uniform warming with height) and absolute humidity (corresponding to the assumption of unchanged relative humidity). To test this hypothesis, we analysed the large-scale atmospheric conditions accompanying summertime afternoon precipitation events using surface observations combined with a regional re-analysis for the data in The Netherlands. Events are precipitation measurements clustered in time and space derived from approximately 30 automatic weather stations. The hourly peak intensities of these events again reveal a 2CC scaling with the surface dew point temperature. The temperature excess of moist updrafts initialized at the surface and the maximum cloud depth are clear functions of surface dew point temperature, confirming the key role of surface humidity on convective activity. Almost no differences in relative humidity and the dry temperature lapse rate were found across the dew point temperature range, supporting our theory that 2CC scaling is mainly due to the response of convection to increases in near surface humidity, while other atmospheric conditions remain similar. Additionally, hourly precipitation extremes are on average accompanied by substantial large-scale upward motions and therefore large-scale moisture convergence, which appears to accelerate with surface dew point. This increase in large-scale moisture convergence appears to be consequence of latent heat release due to the convective activity as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperature range, suggesting potentially strong impacts of climatic warming.
NASA Astrophysics Data System (ADS)
Langousis, Andreas; Kaleris, Vassilios; Xeygeni, Vagia; Magkou, Foteini
2017-04-01
Assessing the availability of groundwater reserves at a regional level, requires accurate and robust hydraulic head estimation at multiple locations of an aquifer. To that extent, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the spatial distribution of the hydraulic conductivity in the aquifer, and it is usually determined through trial-and-error, by solving the groundwater flow based on a properly selected set of alternative but physically plausible geologic structures. In this work, we use: 1) dimensional analysis, and b) a pulse-based stochastic model for simulation of synthetic aquifer structures, to calculate the distribution of the absolute error in hydraulic head estimation as a function of the standardized distance from the nearest measuring locations. The resulting distributions are proved to encompass all possible small-scale structural dependencies, exhibiting characteristics (bounds, multi-modal features etc.) that can be explained using simple geometric arguments. The obtained results are promising, pointing towards the direction of establishing design criteria based on large-scale geologic maps.
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Barbielini, G; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2012-01-01
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron- plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between approx. 6 and approx. 13 GeV with an estimated uncertainty of approx. 2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.; /Stanford U., HEPL /SLAC /KIPAC, Menlo Park; Ajello, M.
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in themore » Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between {approx}6 and {approx}13 GeV with an estimated uncertainty of {approx}2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.« less
Method and apparatus for two-dimensional absolute optical encoding
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2004-01-01
This invention presents a two-dimensional absolute optical encoder and a method for determining position of an object in accordance with information from the encoder. The encoder of the present invention comprises a scale having a pattern being predetermined to indicate an absolute location on the scale, means for illuminating the scale, means for forming an image of the pattern; and detector means for outputting signals derived from the portion of the image of the pattern which lies within a field of view of the detector means, the field of view defining an image reference coordinate system, and analyzing means, receiving the signals from the detector means, for determining the absolute location of the object. There are two types of scale patterns presented in this invention: grid type and starfield type.
Anatomic motor point localization for partial quadriceps block in spasticity.
Albert, T; Yelnik, A; Colle, F; Bonan, I; Lassau, J P
2000-03-01
To identify the location of the vastus intermedius nerve and its motor point (point M) and to precisely identify its coordinates in relation to anatomic surface landmarks. Descriptive study. Anatomy institute of a university school of medicine. Twenty-nine adult cadaver limbs immobilized in anatomic position. Anatomic dissection to identify point M. Anatomic surface landmarks were point F, the issuing point of femoral nerve under the inguinal ligament; point R, the middle of superior edge of the patella; segment FR, which corresponds to thigh length; point M', point M orthogonal projection on segment FR. Absolute vertical coordinate, distance FM, relative vertical coordinate compared to the thigh length, FM'/FR ratio; absolute horizontal coordinate, distance MM'. The absolute vertical coordinate was 11.7+/-2 cm. The relative vertical coordinate was at .29+/-.04 of thigh length. The horizontal coordinate was at 2+/-.5 cm lateral to the FR line. Point M can be defined with relative precision by two coordinates. Application and clinical interest of nerve blocking using these coordinates in quadriceps spasticity should be studied.
Calibrated Tully-fisher Relations For Improved Photometric Estimates Of Disk Rotation Velocities
NASA Astrophysics Data System (ADS)
Reyes, Reinabelle; Mandelbaum, R.; Gunn, J. E.; Pizagno, J.
2011-01-01
We present calibrated scaling relations (also referred to as Tully-Fisher relations or TFRs) between rotation velocity and photometric quantities-- absolute magnitude, stellar mass, and synthetic magnitude (a linear combination of absolute magnitude and color)-- of disk galaxies at z 0.1. First, we selected a parent disk sample of 170,000 galaxies from SDSS DR7, with redshifts between 0.02 and 0.10 and r band absolute magnitudes between -18.0 and -22.5. Then, we constructed a child disk sample of 189 galaxies that span the parameter space-- in absolute magnitude, color, and disk size-- covered by the parent sample, and for which we have obtained kinematic data. Long-slit spectroscopy were obtained from the Dual Imaging Spectrograph (DIS) at the Apache Point Observatory 3.5 m for 99 galaxies, and from Pizagno et al. (2007) for 95 galaxies (five have repeat observations). We find the best photometric estimator of disk rotation velocity to be a synthetic magnitude with a color correction that is consistent with the Bell et al. (2003) color-based stellar mass ratio. The improved rotation velocity estimates have a wide range of scientific applications, and in particular, in combination with weak lensing measurements, they enable us to constrain the ratio of optical-to-virial velocity in disk galaxies.
Using, Seeing, Feeling, and Doing Absolute Value for Deeper Understanding
ERIC Educational Resources Information Center
Ponce, Gregorio A.
2008-01-01
Using sticky notes and number lines, a hands-on activity is shared that anchors initial student thinking about absolute value. The initial point of reference should help students successfully evaluate numeric problems involving absolute value. They should also be able to solve absolute value equations and inequalities that are typically found in…
Effect of helicity on the correlation time of large scales in turbulent flows
NASA Astrophysics Data System (ADS)
Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne
2017-11-01
Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numeric simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E-1 /2k-1 scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time as τ (k ) ˜H-1 /2k-1 /2 . This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium including a τ (k ) ˜E-1 /2k-1 scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.
Absolute mass of neutrinos and the first unique forbidden {beta} decay of {sup 187}Re
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dvornicky, Rastislav; Simkovic, Fedor; Bogoliubov Laboratory of Theoretical Physics, JINR Dubna, 141980 Dubna, Moscow region
2011-04-15
The planned rhenium {beta}-decay experiment, called the ''Microcalorimeter Arrays for a Rhenium Experiment'' (MARE), might probe the absolute mass scale of neutrinos with the same sensitivity as the Karlsruhe tritium neutrino mass (KATRIN) experiment, which will take commissioning data in 2011 and will proceed for 5 years. We present the energy distribution of emitted electrons for the first unique forbidden {beta} decay of {sup 187}Re. It is found that the p-wave emission of electron dominates over the s wave. By assuming mixing of three neutrinos, the Kurie function for the rhenium {beta} decay is derived. It is shown that themore » Kurie plot near the end point is within a good accuracy linear in the limit of massless neutrinos like the Kurie plot of the superallowed {beta} decay of {sup 3}H.« less
Absolute Points for Multiple Assignment Problems
ERIC Educational Resources Information Center
Adlakha, V.; Kowalski, K.
2006-01-01
An algorithm is presented to solve multiple assignment problems in which a cost is incurred only when an assignment is made at a given cell. The proposed method recursively searches for single/group absolute points to identify cells that must be loaded in any optimal solution. Unlike other methods, the first solution is the optimal solution. The…
Kite, Benjamin A.; Pearson, Matthew R.; Henson, James M.
2016-01-01
The purpose of the present studies was to examine the effects of response scale on the observed relationships between protective behavioral strategies (PBS) measures and alcohol-related outcomes. We reasoned that an ‘absolute frequency’ scale (stem: “how many times…”; response scale: 0 times to 11+ times) conflates the frequency of using PBS with the frequency of consuming alcohol; thus, we hypothesized that the use of an absolute frequency response scale would result in positive relationships between types of PBS and alcohol-related outcomes. Alternatively, a ‘contingent frequency’ scale (stem: “When drinking…how often…”; response scale: never to always) does not conflate frequency of alcohol use with use of PBS; therefore, we hypothesized that use of a contingent frequency scale would result in negative relationships between use of PBS and alcohol-related outcomes. Two published measures of PBS were used across studies: the Protective Behavioral Strategies Survey (PBSS) and the Strategy Questionnaire (SQ). Across three studies, we demonstrate that when measured using a contingent frequency response scale, PBS measures relate negatively to alcohol-related outcomes in a theoretically consistent manner; however, when PBS measures were measured on an absolute frequency response scale, they were non-significantly or positively related to alcohol-related outcomes. We discuss the implications of these findings for the assessment of PBS. PMID:23438243
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong
2017-12-01
The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, which is of effective guiding significance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency problem caused by lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie points (TPs) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracies of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
The learning effect of intraoperative video-enhanced surgical procedure training.
van Det, M J; Meijerink, W J H J; Hoff, C; Middel, L J; Koopal, S A; Pierie, J P E N
2011-07-01
The transition from basic skills training in a skills lab to procedure training in the operating theater using the traditional master-apprentice model (MAM) lacks uniformity and efficiency. When the supervising surgeon performs parts of a procedure, training opportunities are lost. To minimize this intervention by the supervisor and maximize the actual operating time for the trainee, we created a new training method called INtraoperative Video-Enhanced Surgical Training (INVEST). Ten surgical residents were trained in laparoscopic cholecystectomy either by the MAM or with INVEST. Each trainee performed six cholecystectomies that were objectively evaluated on an Objective Structured Assessment of Technical Skills (OSATS) global rating scale. Absolute and relative improvements during the training curriculum were compared between the groups. A questionnaire evaluated the trainee's opinion on this new training method. Skill improvement on the OSATS global rating scale was significantly greater for the trainees in the INVEST curriculum compared to the MAM, with mean absolute improvement 32.6 versus 14.0 points and mean relative improvement 59.1 versus 34.6% (P=0.02). INVEST significantly enhances technical and procedural skill development during the early learning curve for laparoscopic cholecystectomy. Trainees were positive about the content and the idea of the curriculum.
Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan
2014-01-01
Purpose The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. Conclusions Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT. PMID:24586971
Lee, Tsair-Fwu; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan
2014-01-01
The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3(+) xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R(2), chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R(2) was satisfactory and corresponded well with the expected values. Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT.
Study of multi-functional precision optical measuring system for large scale equipment
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi
2017-10-01
The effective application of high performance measurement technology can greatly improve the large-scale equipment manufacturing ability. Therefore, the geometric parameters measurement, such as size, attitude and position, requires the measurement system with high precision, multi-function, portability and other characteristics. However, the existing measuring instruments, such as laser tracker, total station, photogrammetry system, mostly has single function, station moving and other shortcomings. Laser tracker needs to work with cooperative target, but it can hardly meet the requirement of measurement in extreme environment. Total station is mainly used for outdoor surveying and mapping, it is hard to achieve the demand of accuracy in industrial measurement. Photogrammetry system can achieve a wide range of multi-point measurement, but the measuring range is limited and need to repeatedly move station. The paper presents a non-contact opto-electronic measuring instrument, not only it can work by scanning the measurement path but also measuring the cooperative target by tracking measurement. The system is based on some key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatically target recognition and accurate aiming, precision control, assembly of complex mechanical system and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the twodimensional angle measuring module provides precision angle measurement. The system is suitable for the case of noncontact measurement of large-scale equipment, it can ensure the quality and performance of large-scale equipment throughout the process of manufacturing and improve the manufacturing ability of large-scale and high-end equipment.
Lundman, Berit; Årestedt, Kristofer; Norberg, Astrid; Norberg, Catharina; Fischer, Regina Santamäki; Lövheim, Hugo
2015-01-01
This study tested the psychometric properties of a Swedish version of the Self-Transcendence Scale (STS). Cohen's weighted kappa, agreement, absolute reliability, relative reliability, and internal consistency were calculated, and the underlying structure of the STS was established by exploratory factor analysis. There were 2 samples available: 1 including 194 people aged 85-103 years and a convenience sample of 60 people aged 21-69 years. Weighted kappa values ranged from .40 to .89. The intraclass correlation coefficient for the original STS was .763, and the least significant change between repeated tests was 6.25 points. The revised STS was found to have satisfactory psychometric properties, and 2 of the 4 underlying dimensions in Reed's self-transcendence theory were supported.
ERIC Educational Resources Information Center
Koskey, Kristin L. K.; Stewart, Victoria C.
2014-01-01
This small "n" observational study used a concurrent mixed methods approach to address a void in the literature with regard to the qualitative meaningfulness of the data yielded by absolute magnitude estimation scaling (MES) used to rate subjective stimuli. We investigated whether respondents' scales progressed from less to more and…
Allan deviation analysis of financial return series
NASA Astrophysics Data System (ADS)
Hernández-Pérez, R.
2012-05-01
We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in the time and frequency metrology to characterize quantitatively the stability of frequency standards since it has demonstrated to be a robust quantity to analyze fluctuations of non-stationary time series for different observation intervals. The data used are opening price daily series for assets from different markets during a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for absolute return series for short scales (first one or two decades) decrease following approximately a scaling relation up to a point that is different for almost each asset, after which the ADEV deviates from scaling, which suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drive the results for large observation intervals.
Goldman, D; Kohn, P M; Hunt, R W
1983-08-01
The following measures were obtained from 42 student volunteers: the General and the Disinhibition subscales of the Sensation Seeking Scale (Form IV), the Reducer-Augmenter Scale, and the Absolute Auditory Threshold. General sensation seeking correlated significantly with the Reducer-Augmenter Scale, r(40) = .59, p less than .001, and the Absolute Auditory Threshold, r(40) = .45, p less than .005. Both results proved general across sex. These findings, that high-sensation seekers tend to be reducers and to lack sensitivity to weak stimulation, were interpreted as supporting strength-of-the-nervous-system theory more than the formulation of Zuckerman and his associates.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-01
...) a change of at least five absolute percentage points in, but not less than 25 percent of, the... between a countervailable subsidy rate of zero (or de minimis) and a countervailable subsidy rate of... absolute points and not less than 25 percent of the originally calculated margin. Thus, the ministerial...
Wang, Guochao; Tan, Lilong; Yan, Shuhua
2018-02-07
We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He-Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10 -8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions.
Tan, Lilong; Yan, Shuhua
2018-01-01
We report on a frequency-comb-referenced absolute interferometer which instantly measures long distance by integrating multi-wavelength interferometry with direct synthetic wavelength interferometry. The reported interferometer utilizes four different wavelengths, simultaneously calibrated to the frequency comb of a femtosecond laser, to implement subwavelength distance measurement, while direct synthetic wavelength interferometry is elaborately introduced by launching a fifth wavelength to extend a non-ambiguous range for meter-scale measurement. A linearity test performed comparatively with a He–Ne laser interferometer shows a residual error of less than 70.8 nm in peak-to-valley over a 3 m distance, and a 10 h distance comparison is demonstrated to gain fractional deviations of ~3 × 10−8 versus 3 m distance. Test results reveal that the presented absolute interferometer enables precise, stable, and long-term distance measurements and facilitates absolute positioning applications such as large-scale manufacturing and space missions. PMID:29414897
Effects of extended-release niacin with laropiprant in high-risk patients.
Landray, Martin J; Haynes, Richard; Hopewell, Jemma C; Parish, Sarah; Aung, Theingi; Tomson, Joseph; Wallendszus, Karl; Craig, Martin; Jiang, Lixin; Collins, Rory; Armitage, Jane
2014-07-17
Patients with evidence of vascular disease are at increased risk for subsequent vascular events despite effective use of statins to lower the low-density lipoprotein (LDL) cholesterol level. Niacin lowers the LDL cholesterol level and raises the high-density lipoprotein (HDL) cholesterol level, but its clinical efficacy and safety are uncertain. After a prerandomization run-in phase to standardize the background statin-based LDL cholesterol-lowering therapy and to establish participants' ability to take extended-release niacin without clinically significant adverse effects, we randomly assigned 25,673 adults with vascular disease to receive 2 g of extended-release niacin and 40 mg of laropiprant or a matching placebo daily. The primary outcome was the first major vascular event (nonfatal myocardial infarction, death from coronary causes, stroke, or arterial revascularization). During a median follow-up period of 3.9 years, participants who were assigned to extended-release niacin-laropiprant had an LDL cholesterol level that was an average of 10 mg per deciliter (0.25 mmol per liter as measured in the central laboratory) lower and an HDL cholesterol level that was an average of 6 mg per deciliter (0.16 mmol per liter) higher than the levels in those assigned to placebo. Assignment to niacin-laropiprant, as compared with assignment to placebo, had no significant effect on the incidence of major vascular events (13.2% and 13.7% of participants with an event, respectively; rate ratio, 0.96; 95% confidence interval [CI], 0.90 to 1.03; P=0.29). Niacin-laropiprant was associated with an increased incidence of disturbances in diabetes control that were considered to be serious (absolute excess as compared with placebo, 3.7 percentage points; P<0.001) and with an increased incidence of diabetes diagnoses (absolute excess, 1.3 percentage points; P<0.001), as well as increases in serious adverse events associated with the gastrointestinal system (absolute excess, 1.0 percentage point; P<0.001), musculoskeletal system (absolute excess, 0.7 percentage points; P<0.001), skin (absolute excess, 0.3 percentage points; P=0.003), and unexpectedly, infection (absolute excess, 1.4 percentage points; P<0.001) and bleeding (absolute excess, 0.7 percentage points; P<0.001). Among participants with atherosclerotic vascular disease, the addition of extended-release niacin-laropiprant to statin-based LDL cholesterol-lowering therapy did not significantly reduce the risk of major vascular events but did increase the risk of serious adverse events. (Funded by Merck and others; HPS2-THRIVE ClinicalTrials.gov number, NCT00461630.).
Bowers, G N; Inman, S R
1977-01-01
We are impressed with the ease and certainty of calibration electronic thermometers with thermistor probes to +/- 0.01 degree C at the gallium melting point, 29.771(4) degrees C. The IFCC reference method for measuring aspartate aminotransferase activity in serum was run at the reaction temperature of 29.771(4) degrees C. By constantly referencing to gallium as an integral part of the assay procedure, we determined the absolute reaction temperature to IPTS-68 (International Practical Temperature Scale of 1968) to +/- 0.02 degrees C. This unique temperature calibration standard near the center of the range of temperatures commonly used in the clinical laboratory is a valuable addition and can be expected to improve the accuracy of measurements, especially in clinical enzymology.
Absolute and Relative Reliability of Percentage of Syllables Stuttered and Severity Rating Scales
ERIC Educational Resources Information Center
Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark
2014-01-01
Purpose: Percentage of syllables stuttered (%SS) and severity rating (SR) scales are measures in common use to quantify stuttering severity and its changes during basic and clinical research conditions. However, their reliability has not been assessed with indices measuring both relative and absolute reliability. This study was designed to provide…
Absolute Distance Measurement with the MSTAR Sensor
NASA Technical Reports Server (NTRS)
Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert; Burger, Johan; Ahn, Seh-Won; Steier, William H.; Fetterman, Harrold R.; Chang, Yian
2003-01-01
The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. The sensor uses a single laser in conjunction with fast phase modulators and low frequency detectors. We describe the design of the system - the principle of operation, the metrology source, beamlaunching optics, and signal processing - and show results for target distances up to 1 meter. We then demonstrate how the system can be scaled to kilometer-scale distances.
40 CFR 92.105 - General equipment specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... accuracy and precision of 0.1 percent of absolute pressure at point or better. (2) Gauges and transducers used to measure any other pressures shall have an accuracy and precision of 1 percent of absolute...
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1988-01-01
Direct estimation of the absolute dynamic topography from satellite altimetry has been confined to the largest scales (basically the basin-scale) owing to the fact that the signal-to-noise ratio is more unfavorable everywhere else. But even for the largest scales, the results are contaminated by the orbit error and geoid uncertainties. Recently a more accurate Earth gravity model (GEM-T1) became available, providing the opportunity to examine the whole question of direct estimation under a more critical limelight. It is found that our knowledge of the Earth's gravity field has indeed improved a great deal. However, it is not yet possible to claim definitively that our knowledge of the ocean circulation has improved through direct estimation. Yet, the improvement in the gravity model has come to the point that it is no longer possible to attribute the discrepancy at the basin scales between altimetric and hydrographic results as mostly due to geoid uncertainties. A substantial part of the difference must be due to other factors; i.e., the orbit error, or the uncertainty of the hydrographically derived dynamic topography.
Articulated Multimedia Physics, Lesson 14, Gases, The Gas Laws, and Absolute Temperature.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
As the fourteenth lesson of the Articulated Multimedia Physics Course, instructional materials are presented in this study guide with relation to gases, gas laws, and absolute temperature. The topics are concerned with the kinetic theory of gases, thermometric scales, Charles' law, ideal gases, Boyle's law, absolute zero, and gas pressures. The…
Trends in Racial and Ethnic Disparities in Infant Mortality Rates in the United States, 1989–2006
Rossen, Lauren M.; Schoendorf, Kenneth C.
2014-01-01
Objectives. We sought to measure overall disparities in pregnancy outcome, incorporating data from the many race and ethnic groups that compose the US population, to improve understanding of how disparities may have changed over time. Methods. We used Birth Cohort Linked Birth–Infant Death Data Files from US Vital Statistics from 1989–1990 and 2005–2006 to examine multigroup indices of racial and ethnic disparities in the overall infant mortality rate (IMR), preterm birth rate, and gestational age–specific IMRs. We calculated selected absolute and relative multigroup disparity metrics weighting subgroups equally and by population size. Results. Overall IMR decreased on the absolute scale, but increased on the population-weighted relative scale. Disparities in the preterm birth rate decreased on both the absolute and relative scales, and across equally weighted and population-weighted indices. Disparities in preterm IMR increased on both the absolute and relative scales. Conclusions. Infant mortality is a common bellwether of general and maternal and child health. Despite significant decreases in disparities in the preterm birth rate, relative disparities in overall and preterm IMRs increased significantly over the past 20 years. PMID:24028239
A post-processing system for automated rectification and registration of spaceborne SAR imagery
NASA Technical Reports Server (NTRS)
Curlander, John C.; Kwok, Ronald; Pang, Shirley S.
1987-01-01
An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.
Muir-Hunter, Susan W; Graham, Laura; Montero Odasso, Manuel
2015-08-01
To measure test-retest and interrater reliability of the Berg Balance Scale (BBS) in community-dwelling adults with mild to moderate Alzheimer disease (AD). Method : A sample of 15 adults (mean age 80.20 [SD 5.03] years) with AD performed three balance tests: the BBS, timed up-and-go test (TUG), and Functional Reach Test (FRT). Both relative reliability, using the intra-class correlation coefficient (ICC), and absolute reliability, using standard error of measurement (SEM) and minimal detectable change (MDC95) values, were calculated; Bland-Altman plots were constructed to evaluate inter-tester agreement. The test-retest interval was 1 week. Results : For the BBS, relative reliability values were 0.95 (95% CI, 0.85-0.98) for test-retest reliability and 0.72 (95% CI, 0.31-0.91) for interrater reliability; SEM was 6.01 points and MDC95 was 16.66 points; and interrater agreement was 16.62 points. The BBS performed better in test-retest reliability than the TUG and FRT, tests with established reliability in AD. Between 33% and 50% of participants required cueing beyond standardized instructions because they were unable to remember test instructions. Conclusions : The BBS achieved relative reliability values that support its clinical utility, but MDC95 and agreement values indicate the scale has performance limitations in AD. Further research to optimize balance assessment for people with AD is required.
Absolute Calibration of Si iRMs used for Measurements of Si Paleo-nutrient proxies
NASA Astrophysics Data System (ADS)
Vocke, R. D., Jr.; Rabb, S. A.
2016-12-01
Silicon isotope variations (reported as δ30Si and δ29Si, relative to NBS28) in silicic acid dissolved in ocean waters, in biogenic silica and in diatoms are extremely informative paleo-nutrient proxies. The resolution and comparability of such measurements depend on the quality of the isotopic Reference Materials (iRMs) defining the delta scale. We report new absolute Si isotopic measurements on the iRMs NBS28 (RM 8546 - Silica Sand), Diatomite, and Big Batch using the Avogadro measurement approach and comparing them with prior assessments of these iRMs. The Avogadro Si measurement technique was developed by the German Physikalish-Technische Bundesanstalt (PTB) to provide a precise and highly accurate method to measure absolute isotopic ratios in highly enriched 28Si (99.996%) material. These measurements are part of an international effort to redefine the kg and mole based on the Planck constant h and the Avogadro constant NA, respectively (Vocke et al., 2014 Metrologia 51, 361, Azuma et al., 2015 Metrologia 52 360). This approach produces absolute Si isotope ratio data with lower levels of uncertainty when compared to the traditional "Atomic Weights" method of absolute isotope ratio measurement calibration. This is illustrated in Fig. 1 where absolute Si isotopic measurements on SRM 990, separated by 40+ years of advances in instrumentation, are compared. The availability of this new technique does not say that absolute Si isotopic ratios are or ever will be better for normal Si isotopic measurements when seeking isotopic variations in nature, because they are not. However, by determining the absolute isotopic ratios of all the Si iRM scale artifacts, such iRMs become traceable to the metric system (SI); thereby automatically conferring on all the artifact-based δ30Si and δ29Si measurements traceability to the base SI unit, the mole. Such traceability should help reduce the potential of bias between different iRMs and facilitate the replacement of delta-scale artefacts when they run out. Fig. 1 Comparison of absolute isotopic measurements of SRM 990 using two radically different approaches to absolute calibration and mass bias corrections.
Quantification of Treatment Effect Modification on Both an Additive and Multiplicative Scale
Girerd, Nicolas; Rabilloud, Muriel; Pibarot, Philippe; Mathieu, Patrick; Roy, Pascal
2016-01-01
Background In both observational and randomized studies, associations with overall survival are by and large assessed on a multiplicative scale using the Cox model. However, clinicians and clinical researchers have an ardent interest in assessing absolute benefit associated with treatments. In older patients, some studies have reported lower relative treatment effect, which might translate into similar or even greater absolute treatment effect given their high baseline hazard for clinical events. Methods The effect of treatment and the effect modification of treatment were respectively assessed using a multiplicative and an additive hazard model in an analysis adjusted for propensity score in the context of coronary surgery. Results The multiplicative model yielded a lower relative hazard reduction with bilateral internal thoracic artery grafting in older patients (Hazard ratio for interaction/year = 1.03, 95%CI: 1.00 to 1.06, p = 0.05) whereas the additive model reported a similar absolute hazard reduction with increasing age (Delta for interaction/year = 0.10, 95%CI: -0.27 to 0.46, p = 0.61). The number needed to treat derived from the propensity score-adjusted multiplicative model was remarkably similar at the end of the follow-up in patients aged < = 60 and in patients >70. Conclusions The present example demonstrates that a lower treatment effect in older patients on a relative scale can conversely translate into a similar treatment effect on an additive scale due to large baseline hazard differences. Importantly, absolute risk reduction, either crude or adjusted, can be calculated from multiplicative survival models. We advocate for a wider use of the absolute scale, especially using additive hazard models, to assess treatment effect and treatment effect modification. PMID:27045168
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Data base (NLCD). Both GPS comparisons prove an absolute vertical mean error of TanDEM-X DEM smaller than ±0.20 m, a Root Means Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at 90% confidence level of the global TanDEM-X DEM outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interests for scientific and commercial applications.
Anchoring effects in the judgment of confidence: semantic or numeric priming?
Carroll, Steven R; Petrusic, William M; Leth-Steensen, Craig
2009-02-01
Over the last decade, researchers have debated whether anchoring effects are the result of semantic or numeric priming. The present study tested both hypotheses. In four experiments involving a sensory detection task, participants first made a relative confidence judgment by deciding whether they were more or less confident than an anchor value in the correctness of their decision. Subsequently, they expressed an absolute level of confidence. In two of these experiments, the relative confidence anchor values represented the midpoints between the absolute confidence scale values, which were either explicitly numeric or semantic, nonnumeric representations of magnitude. In two other experiments, the anchor values were drawn from a scale modally different from that used to express the absolute confidence (i.e., nonnumeric and numeric, respectively, or vice versa). Regardless of the nature of the anchors, the mean confidence ratings revealed anchoring effects only when the relative and absolute confidence values were drawn from identical scales. Together, the results of these four experiments limit the conditions under which both numeric and semantic priming would be expected to lead to anchoring effects.
Quantifying Biomass from Point Clouds by Connecting Representations of Ecosystem Structure
NASA Astrophysics Data System (ADS)
Hendryx, S. M.; Barron-Gafford, G.
2017-12-01
Quantifying terrestrial ecosystem biomass is an essential part of monitoring carbon stocks and fluxes within the global carbon cycle and optimizing natural resource management. Point cloud data such as from lidar and structure from motion can be effective for quantifying biomass over large areas, but significant challenges remain in developing effective models that allow for such predictions. Inference models that estimate biomass from point clouds are established in many environments, yet, are often scale-dependent, needing to be fitted and applied at the same spatial scale and grid size at which they were developed. Furthermore, training such models typically requires large in situ datasets that are often prohibitively costly or time-consuming to obtain. We present here a scale- and sensor-invariant framework for efficiently estimating biomass from point clouds. Central to this framework, we present a new algorithm, assignPointsToExistingClusters, that has been developed for finding matches between in situ data and clusters in remotely-sensed point clouds. The algorithm can be used for assessing canopy segmentation accuracy and for training and validating machine learning models for predicting biophysical variables. We demonstrate the algorithm's efficacy by using it to train a random forest model of above ground biomass in a shrubland environment in Southern Arizona. We show that by learning a nonlinear function to estimate biomass from segmented canopy features we can reduce error, especially in the presence of inaccurate clusterings, when compared to a traditional, deterministic technique to estimate biomass from remotely measured canopies. Our random forest on cluster features model extends established methods of training random forest regressions to predict biomass of subplots but requires significantly less training data and is scale invariant. The random forest on cluster features model reduced mean absolute error, when evaluated on all test data in leave one out cross validation, by 40.6% from deterministic mesquite allometry and 35.9% from the inferred ecosystem-state allometric function. Our framework should allow for the inference of biomass more efficiently than common subplot methods and more accurately than individual tree segmentation methods in densely vegetated environments.
Fundamental principles of absolute radiometry and the philosophy of this NBS program (1968 to 1971)
NASA Technical Reports Server (NTRS)
Geist, J.
1972-01-01
A description is given work performed on a program to develop an electrically calibrated detector (also called absolute radiometer, absolute detector, and electrically calibrated radiometer) that could be used to realize, maintain, and transfer a scale of total irradiance. The program includes a comprehensive investigation of the theoretical basis of absolute detector radiometry, as well as the design and construction of a number of detectors. A theoretical analysis of the sources of error is also included.
Energy dispersive X-ray analysis on an absolute scale in scanning transmission electron microscopy.
Chen, Z; D'Alfonso, A J; Weyland, M; Taplin, D J; Allen, L J; Findlay, S D
2015-10-01
We demonstrate absolute scale agreement between the number of X-ray counts in energy dispersive X-ray spectroscopy using an atomic-scale coherent electron probe and first-principles simulations. Scan-averaged spectra were collected across a range of thicknesses with precisely determined and controlled microscope parameters. Ionization cross-sections were calculated using the quantum excitation of phonons model, incorporating dynamical (multiple) electron scattering, which is seen to be important even for very thin specimens. Copyright © 2015 Elsevier B.V. All rights reserved.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William [Overland Park, KS
2007-07-03
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William [Overland Park, KS
2008-10-21
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William [Overland Park, KS
2009-09-01
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William [Overland Park, KS
2007-07-17
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Method of differential-phase/absolute-amplitude QAM
Dimsdle, Jeffrey William
2007-10-02
A method of quadrature amplitude modulation involving encoding phase differentially and amplitude absolutely, allowing for a high data rate and spectral efficiency in data transmission and other communication applications, and allowing for amplitude scaling to facilitate data recovery; amplitude scale tracking to track-out rapid and severe scale variations and facilitate successful demodulation and data retrieval; 2.sup.N power carrier recovery; incoherent demodulation where coherent carrier recovery is not possible or practical due to signal degradation; coherent demodulation; multipath equalization to equalize frequency dependent multipath; and demodulation filtering.
Kussman, Barry D; Wypij, David; DiNardo, James A; Newburger, Jane; Jonas, Richard A; Bartlett, Jodi; McGrath, Ellen; Laussen, Peter C
2005-11-01
Cerebral oximetry is a technique that enables monitoring of regional cerebral oxygenation during cardiac surgery. In this study, we evaluated differences in bi-hemispheric measurement of cerebral oxygen saturation using near-infrared spectroscopy in 62 infants undergoing biventricular repair without aortic arch reconstruction. Left and right regional cerebral oxygen saturation index (rSO2i) were recorded continuously after the induction of anesthesia, and data were analyzed at 12 time points. Baseline rSO2i measurements were left 65 +/- 13 and right 66 +/- 13 (P = 0.17). Mean left and right rSO2i measurements were similar (< or =2 percentage points/absolute scale units) before, during, and after cardiopulmonary bypass, irrespective of the use of deep hypothermic circulatory arrest. Further longitudinal neurological outcome studies are required to determine whether uni- or bi-hemispheric monitoring is required in this patient population.
Flosadottir, Vala; Roos, Ewa M; Ageberg, Eva
2017-09-01
The Activity Rating Scale (ARS) for disorders of the knee evaluates the level of activity by the frequency of participation in 4 separate activities with high demands on knee function, with a score ranging from 0 (none) to 16 (pivoting activities 4 times/wk). To translate and cross-culturally adapt the ARS into Swedish and to assess measurement properties of the Swedish version of the ARS. Cohort study (diagnosis); Level of evidence, 2. The COSMIN guidelines were followed. Participants (N = 100 [55 women]; mean age, 27 years) who were undergoing rehabilitation for a knee injury completed the ARS twice for test-retest reliability. The Knee injury and Osteoarthritis Outcome Score (KOOS), Tegner Activity Scale (TAS), and modernized Saltin-Grimby Physical Activity Level Scale (SGPALS) were administered at baseline to validate the ARS. Construct validity and responsiveness of the ARS were evaluated by testing predefined hypotheses regarding correlations between the ARS, KOOS, TAS, and SGPALS. The Cronbach alpha, intraclass correlation coefficients, absolute reliability, standard error of measurement, smallest detectable change, and Spearman rank-order correlation coefficients were calculated. The ARS showed good internal consistency (α ≈ 0.96), good test-retest reliability (intraclass correlation coefficient >0.9), and no systematic bias between measurements. The standard error of measurement was less than 2 points, and the smallest detectable change was less than 1 point at the group level and less than 5 points at the individual level. More than 75% of the hypotheses were confirmed, indicating good construct validity and good responsiveness of the ARS. The Swedish version of the ARS is valid, reliable, and responsive for evaluating the level of activity based on the frequency of participation in high-demand knee sports activities in young adults with a knee injury.
Novakovic, A M; Thorsted, A; Schindler, E; Jönsson, S; Munafo, A; Karlsson, M O
2018-05-10
The aim of this work was to assess the relationship between the absolute lymphocyte count (ALC), and disability (as measured by the Expanded Disability Status Scale [EDSS]) and occurrence of relapses, 2 efficacy endpoints, respectively, in patients with remitting-relasping multiple sclerosis. Data for ALC, EDSS, and relapse rate were available from 1319 patients receiving placebo and/or cladribine tablets. Pharmacodynamic models were developed to characterize the time course of the endpoints. ALC-related measures were then evaluated as predictors of the efficacy endpoints. EDSS data were best fitted by a model where the logit-linear disease progression is affected by the dynamics of ALC change from baseline. Relapse rate data were best described by the Weibull hazard function, and the ALC change from baseline was also found to be a significant predictor of time to relapse. Presented models have shown that once cladribine exposure driven ALC-derived measures are included in the model, the need for drug effect components is of less importance (EDSS) or disappears (relapse rate). This simplifies the models and theoretically makes them mechanism specific rather than drug specific. Having a reliable mechanism-specific model would allow leveraging historical data across compounds, to support decision making in drug development and possibly shorten the time to market. © 2018, The American College of Clinical Pharmacology.
Vroland-Nordstrand, Kristina; Krumlinde-Sundholm, Lena
2012-11-01
to evaluate the test-retest reliability of children's perceptions of their own competence in performing daily tasks and of their choice of goals for intervention using the Swedish version of the perceived efficacy and goal setting system (PEGS). A second aim was to evaluate agreement between children's and parents' perceptions of the child's competence and choices of intervention goals. Forty-four children with disabilities and their parents completed the Swedish version of the PEGS. Thirty-six of the children completed a retest session allocated into one of two groups: (A) for evaluation of perceived competence and (B) for evaluation of choice of goals. Cohen's kappa, weighted kappa and absolute agreement were calculated. Test-retest reliability for children's perceived competence showed good agreement for the dichotomized scale of competent/non-competent performance; however, using the four-point scale the agreement varied. The children's own goals were relatively stable over time; 78% had an absolute agreement ranging from 50% to 100%. There was poor agreement between the children's and their parents' ratings. Goals identified by the children differed from those identified by their parents, with 48% of the children having no goals identical to those chosen by their parents. These results indicate that the Swedish version of the PEGS produces reliable outcomes comparable to the original version.
NASA Astrophysics Data System (ADS)
Casagrande, L.; VandenBerg, Don A.
2018-04-01
We use MARCS model atmosphere fluxes to compute synthetic colours, bolometric corrections and reddening coefficients for the Hipparcos/Tycho, Pan-STARRS1, SkyMapper, and JWST systems. Tables and interpolation subroutines are provided to transform isochrones from the theoretical to various observational planes, to derive bolometric corrections, synthetic colours and colour-temperature relations at nearly any given point of the HR diagram for 2600 K ≤ Teff ≤ 8000 K, and different values of reddening in 85 photometric filters. We use absolute spectrophotometry from the CALSPEC library to show that bolometric fluxes can be recovered to ˜2 per cent from bolometric corrections in a single band, when input stellar parameters are well known for FG dwarfs at various metallicities. This sole source of uncertainty impacts interferometric Teff to ≃0.5 per cent (or 30 K at the solar temperature). Uncertainties are halved when combining bolometric corrections in more bands, and limited by the fundamental uncertainty of the current absolute flux scale at 1 per cent. Stars in the RAVE DR5 catalogue are used to validate the quality of our MARCS synthetic photometry in selected filters across the optical and infrared range. This investigation shows that extant MARCS synthetic fluxes are able to reproduce the main features observed in stellar populations across the Galactic disc.
AMS radiocarbon dating and varve chronology of Lake Soppensee: 6000 to 12000 14C years BP
NASA Astrophysics Data System (ADS)
Hajdas, Irena; Ivy, Susan D.; Beer, Jürg; Bonani, Georges; Imboden, Dieter; Lotted, André F.; Sturm, Michael; Suter, Martin
1993-12-01
For the extension of the radiocarbon calibration curve beyond 10000 14C y BP, laminated sediment from Lake Soppensee (central Switzerland) was dated. The radiocarbon time scale was obtained using accelerator mass spectrometry (AMS) dating of terrestrial macrofossils selected from the Soppensee sediment. Because of an unlaminated sediment section during the Younger Dryas (10000 11000 14C y BP), the absolute time scale, based on counting annual layers (varves), had to be corrected for missing varves. The Soppensee radiocarbon-verve chronology covers the time period from 6000 to 12000 14C y BP on the radiocarbon time scale and 7000 to 13000 calendar y BP on the absolute time scale. The good agreement with the tree ring curve in the interval from 7000 to 11450 cal y BP (cal y indicates calendar year) proves the annual character of the laminations. The ash layer of the Vasset/Killian Tephra (Massif Central, France) is dated at 8230±140 14C y BP and 9407±44 cal y BP. The boundaries of the Younger Dryas biozone are placed at 10986±69 cal y BP (Younger Dryas/Preboreal) and 1212±86 cal y BP (Alleröd/Younger Dryas) on the absolute time scale. The absolute age of the Laacher See Tephra layer, dated with the radiocarbon method at 10 800 to 11200 14C y BP, is estimated at 12350 ± 135 cal y BP. The oldest radiocarbon age of 14190±120 14C y BP was obtained on macrofossils of pioneer vegetation which were found in the lowermost part of the sediment profile. For the late Glacial, the offset between the radiocarbon (10000 12000 14C y BP) and the absolute time scale (11400 13000 cal y BP) in the Soppensee chronology is not greater than 1000 years, which differs from the trend of the U/Th-radiocarbon curve derived from corals.
Allometric scaling of UK urban emissions: interpretation and implications for air quality management
NASA Astrophysics Data System (ADS)
MacKenzie, Rob; Barnes, Matt; Whyatt, Duncan; Hewitt, Nick
2016-04-01
Allometry uncovers structures and patterns by relating the characteristics of complex systems to a measure of scale. We present an allometric analysis of air quality for UK urban settlements, beginning with emissions and moving on to consider air concentrations. We consider both airshed-average 'urban background' concentrations (cf. those derived from satellites for NO2) and local pollution 'hotspots'. We show that there is a strong and robust scaling (with respect to population) of the non-point-source emissions of the greenhouse gases carbon dioxide and methane, as well as the toxic pollutants nitrogen dioxide, PM2.5, and 1,3-butadiene. The scaling of traffic-related emissions is not simply a reflection of road length, but rather results from the socio-economic patterning of road-use. The recent controversy regarding diesel vehicle emissions is germane to our study but does not affect our overall conclusions. We next develop an hypothesis for the population-scaling of airshed-average air concentrations, with which we demonstrate that, although average air quality is expected to be worse in large urban centres compared to small urban centres, the overall effect is an economy of scale (i.e., large cities reduce the overall burden of emissions compared to the same population spread over many smaller urban settlements). Our hypothesis explains satellite-derived observations of airshed-average urban NO2 concentrations. The theory derived also explains which properties of nature-based solutions (urban greening) can make a significant contribution at city scale, and points to a hitherto unforeseen opportunity to make large cities cleaner than smaller cities in absolute terms with respect to their airshed-average pollutant concentration.
242Pu absolute neutron-capture cross section measurement
NASA Astrophysics Data System (ADS)
Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; Bucher, B.; Chyzh, A.; Bredeweg, T. A.; Baramsai, B.; Couture, A.; Jandel, M.; Mosby, S.; O'Donnell, J. M.; Ullmann, J. L.
2017-09-01
The absolute neutron-capture cross section of 242Pu was measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. During target fabrication, a small amount of 239Pu was added to the active target so that the absolute scale of the 242Pu(n,γ) cross section could be set according to the known 239Pu(n,f) resonance at En,R = 7.83 eV. The relative scale of the 242Pu(n,γ) cross section covers four orders of magnitude for incident neutron energies from thermal to ≈ 40 keV. The cross section reported in ENDF/B-VII.1 for the 242Pu(n,γ) En,R = 2.68 eV resonance was found to be 2.4% lower than the new absolute 242Pu(n,γ) cross section.
Harper, Sam; Lynch, John; Meersman, Stephen C.; Breen, Nancy; Davis, William W.; Reichman, Marsha E.
2008-01-01
The authors provide an overview of methods for summarizing social disparities in health using the example of lung cancer. They apply four measures of relative disparity and three measures of absolute disparity to trends in US lung cancer incidence by area-socioeconomic position and race-ethnicity from 1992 to 2004. Among females, measures of absolute and relative disparity suggested that area-socioeconomic and race-ethnic disparities increased over these 12 years but differed widely with respect to the magnitude of the change. Among males, the authors found substantial disagreement among summary measures of relative disparity with respect to the magnitude and the direction of change in disparities. Among area-socioeconomic groups, the index of disparity increased by 47% and the relative concentration index decreased by 116%, while for race-ethnicity the index of disparity increased by 36% and the Theil index increased by 13%. The choice of a summary measure of disparity may affect the interpretation of changes in health disparities. Important issues to consider are the reference point from which differences are measured, whether to measure disparity on the absolute or relative scale, and whether to weight disparity measures by population size. A suite of indicators is needed to provide a clear picture of health disparity change. PMID:18344513
Adegbija, Odewumi; Hoy, Wendy E; Wang, Zhiqiang
2015-11-13
There have been suggestions that currently recommended waist circumference (WC) cut-off points for Australians of European origin may not be applicable to Aboriginal people who have different body habitus profiles. We aimed to generate equivalent WC values that correspond to body mass index (BMI) points for identifying absolute cardiovascular disease (CVD) risks. Prospective cohort study. An Aboriginal community in Australia's Northern Territory. From 1992 to 1998, 920 adults without CVD, with age, WC and BMI measurements were followed-up for up to 20 years. Incident CVD, coronary artery disease (CAD) and heart failure (HF) events during the follow-up period ascertained from hospitalisation data. We generated WC values with 10-year absolute risks equivalent for the development of CVD as BMI values (20-34 kg/m(2)) using the Weibull accelerated time-failure model. There were 211 incident cases of CVD over 13,669 person-years of follow-up. At the average age of 35 years, WC values with absolute CVD, CAD and HF risks equivalent to BMI of 25 kg/m(2) were 91.5, 91.8 and 91.7 cm, respectively, for males, and corresponding WC values were 92.5, 92.7 and 93 cm for females. WC values with equal absolute CVD, CAD and HF risks to BMI of 30 kg/m(2) were 101.7, 103.1 and 102.6 cm, respectively, for males, and corresponding values were 99.2, 101.6 and 101.5 cm for females. Association between WC and CVD did not depend on gender (p=0.54). WC ranging from 91 to 93 cm was equivalent to BMI 25 kg/m(2) for overweight, and 99 to 103 cm was equivalent to BMI of 30 kg/m(2) for obesity in terms of predicting 10-year absolute CVD risk. Replicating the absolute risk method in other Aboriginal communities will further validate the WC values generated for future development of WC cut-off points for Aboriginal people. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malys, S.; Jensen, P.A.
1990-04-01
The Global Positioning System (GPS) carrier beat phase data collected by the TI4100 GPS receiver has been successfully utilized by the US Defense Mapping Agency in an algorithm which is designed to estimate individual absolute geodetic point positions from data collected over a few hours. The algorithm uses differenced data from one station and two to four GPS satellites at a series of epochs separated by 30 second intervals. The precise GPS ephemerides and satellite clock states, held fixed in the estimation process, are those estimated by the Naval Surface Warfare Center (NSWC). Broadcast ephemerides and clock states are alsomore » utilized for comparative purposes. An outline of the data corrections applied, the mathematical model and the estimation algorithm are presented. Point positioning results and statistics are presented for a globally-distributed set of stations which contributed to the CASA Uno experiment. Statistical assessment of 114 GPS point positions at 11 CASA Uno stations indicates that the overall standard deviation of a point position component, estimated from a few hours of data, is 73 centimeters. Solution of the long line geodetic inverse problem using repeated point positions such as these can potentially offer a new tool for those studying geodynamics on a global scale.« less
The existence of negative absolute temperatures in Axelrod’s social influence model
NASA Astrophysics Data System (ADS)
Villegas-Febres, J. C.; Olivares-Rivas, W.
2008-06-01
We introduce the concept of temperature as an order parameter in the standard Axelrod’s social influence model. It is defined as the relation between suitably defined entropy and energy functions, T=(. We show that at the critical point, where the order/disorder transition occurs, this absolute temperature changes in sign. At this point, which corresponds to the transition homogeneous/heterogeneous culture, the entropy of the system shows a maximum. We discuss the relationship between the temperature and other properties of the model in terms of cultural traits.
Independent control of joint stiffness in the framework of the equilibrium-point hypothesis.
Latash, M L
1992-01-01
In the framework of the equilibrium-point hypothesis, virtual trajectories and joint stiffness patterns have been reconstructed during two motor tasks practiced against a constant bias torque. One task required a voluntary increase in joint stiffness while preserving the original joint position. The other task involved fast elbow flexions over 36 degrees. Joint stiffness gradually subsided after the termination of fast movements. In both tasks, the external torque could slowly and unexpectedly change. The subjects were required not to change their motor commands if the torque changed, i.e. "to do the same no matter what the motor did". In both tasks, changes in joint stiffness were accompanied by unchanged virtual trajectories that were also independent of the absolute value of the bias torque. By contrast, the intercept of the joint compliant characteristic with the angle axis, r(t)-function, has demonstrated a clear dependence upon both the level of coactivation and external load. We assume that a template virtual trajectory is generated at a certain level of the motor hierarchy and is later scaled taking into account some commonly changing dynamic factors of the movement execution, for example, external load. The scaling leads to the generation of commands to the segmental structures that can be expressed, according to the equilibrium-point hypothesis, as changes in the thresholds of the tonic stretch reflex for corresponding muscles.
The Kelvin and Temperature Measurements
Mangum, B. W.; Furukawa, G. T.; Kreider, K. G.; Meyer, C. W.; Ripple, D. C.; Strouse, G. F.; Tew, W. L.; Moldover, M. R.; Johnson, B. Carol; Yoon, H. W.; Gibson, C. E.; Saunders, R. D.
2001-01-01
The International Temperature Scale of 1990 (ITS-90) is defined from 0.65 K upwards to the highest temperature measurable by spectral radiation thermometry, the radiation thermometry being based on the Planck radiation law. When it was developed, the ITS-90 represented thermodynamic temperatures as closely as possible. Part I of this paper describes the realization of contact thermometry up to 1234.93 K, the temperature range in which the ITS-90 is defined in terms of calibration of thermometers at 15 fixed points and vapor pressure/temperature relations which are phase equilibrium states of pure substances. The realization is accomplished by using fixed-point devices, containing samples of the highest available purity, and suitable temperature-controlled environments. All components are constructed to achieve the defining equilibrium states of the samples for the calibration of thermometers. The high quality of the temperature realization and measurements is well documented. Various research efforts are described, including research to improve the uncertainty in thermodynamic temperatures by measuring the velocity of sound in gas up to 800 K, research in applying noise thermometry techniques, and research on thermocouples. Thermometer calibration services and high-purity samples and devices suitable for “on-site” thermometer calibration that are available to the thermometry community are described. Part II of the paper describes the realization of temperature above 1234.93 K for which the ITS-90 is defined in terms of the calibration of spectroradiometers using reference blackbody sources that are at the temperature of the equilibrium liquid-solid phase transition of pure silver, gold, or copper. The realization of temperature from absolute spectral or total radiometry over the temperature range from about 60 K to 3000 K is also described. The dissemination of the temperature scale using radiation thermometry from NIST to the customer is achieved by calibration of blackbody sources, tungsten-strip lamps, and pyrometers. As an example of the research efforts in absolute radiometry, which impacts the NIST spectral irradiance and radiance scales, results with filter radiometers and a high-temperature blackbody are summarized. PMID:27500019
ERIC Educational Resources Information Center
Bockris, J. O'M.
1983-01-01
Suggests various methods for teaching the double layer in electrochemistry courses. Topics addressed include measuring change in absolute potential difference (PD) at interphase, conventional electrode potential scale, analyzing absolute PD, metal-metal and overlap electron PDs, accumulation of material at interphase, thermodynamics of electrified…
Ladin, Keren; Daniels, Norman; Kawachi, Ichiro
2010-01-01
Purpose: Socioeconomic inequality has been associated with higher levels of morbidity and mortality. This study explores the role of absolute and relative deprivation in predicting late-life depression on both individual and country levels. Design and Methods: Country- and individual-level inequality indicators were used in multivariate logistic regression and in relative indexes of inequality. Data obtained from the Survey of Health, Ageing and Retirement in Europe (SHARE, Wave 1, Release 2) included 22,777 men and women (aged 50–104 years) from 10 European countries. Late-life depression was measured using the EURO-D scale and corresponding clinical cut point. Absolute deprivation was measured using gross domestic product and median household income at the country level and socioeconomic status at the individual level. Relative deprivation was measured by Gini coefficients at the country level and educational attainment at the individual level. Results: Rates of depression ranged from 18.10% in Denmark to 36.84% in Spain reflecting a clear north–south gradient. Measures of absolute and relative deprivation were significant in predicting depression at both country and individual levels. Findings suggest that the adverse impact of societal inequality cannot be overcome by increased individual-level or country-level income. Increases in individual-level income did not mitigate the effect of country-level relative deprivation. Implications: Mental health disparities persist throughout later life whereby persons exposed to higher levels of country-level inequality suffer greater morbidity compared with those in countries with less inequality. Cross-national variation in the relationship between inequality and depression illuminates the need for further research. PMID:19515635
NASA Astrophysics Data System (ADS)
Christodoulou, L.; Eminian, C.; Loveday, J.; Norberg, P.; Baldry, I. K.; Hurley, P. D.; Driver, S. P.; Bamford, S. P.; Hopkins, A. M.; Liske, J.; Peacock, J. A.; Bland-Hawthorn, J.; Brough, S.; Cameron, E.; Conselice, C. J.; Croom, S. M.; Frenk, C. S.; Gunawardhana, M.; Jones, D. H.; Kelvin, L. S.; Kuijken, K.; Nichol, R. C.; Parkinson, H.; Pimbblet, K. A.; Popescu, C. C.; Prescott, M.; Robotham, A. S. G.; Sharp, R. G.; Sutherland, W. J.; Taylor, E. N.; Thomas, D.; Tuffs, R. J.; van Kampen, E.; Wijesinghe, D.
2012-09-01
We measure the two-point angular correlation function of a sample of 4289 223 galaxies with r < 19.4 mag from the Sloan Digital Sky Survey (SDSS) as a function of photometric redshift, absolute magnitude and colour down to Mr - 5 log h = -14 mag. Photometric redshifts are estimated from ugriz model magnitudes and two Petrosian radii using the artificial neural network package ANNz, taking advantage of the Galaxy And Mass Assembly (GAMA) spectroscopic sample as our training set. These photometric redshifts are then used to determine absolute magnitudes and colours. For all our samples, we estimate the underlying redshift and absolute magnitude distributions using Monte Carlo resampling. These redshift distributions are used in Limber's equation to obtain spatial correlation function parameters from power-law fits to the angular correlation function. We confirm an increase in clustering strength for sub-L* red galaxies compared with ˜L* red galaxies at small scales in all redshift bins, whereas for the blue population the correlation length is almost independent of luminosity for ˜L* galaxies and fainter. A linear relation between relative bias and log luminosity is found to hold down to luminosities L ˜ 0.03L*. We find that the redshift dependence of the bias of the L* population can be described by the passive evolution model of Tegmark & Peebles. A visual inspection of a random sample from our r < 19.4 sample of SDSS galaxies reveals that about 10 per cent are spurious, with a higher contamination rate towards very faint absolute magnitudes due to over-deblended nearby galaxies. We correct for this contamination in our clustering analysis.
NASA Astrophysics Data System (ADS)
Vocke, Robert; Rabb, Savelas
2015-04-01
All isotope amount ratios (hereafter referred to as isotope ratios) produced and measured on any mass spectrometer are biased. This unfortunate situation results mainly from the physical processes in the source area where ions are produced. Because the ionized atoms in poly-isotopic elements have different masses, such processes are typically mass dependent and lead to what is commonly referred to as mass fractionation (for thermal ionization and electron impact sources) and mass bias (for inductively coupled plasma sources.) This biasing process produces a measured isotope ratio that is either larger or smaller than the "true" ratio in the sample. This has led to the development of numerous fractionation "laws" that seek to correct for these effects, many of which are not based on the physical processes giving rise to the biases. The search for tighter and reproducible precisions has led to two isotope ratio measurement systems that exist side-by-side. One still seeks to measure "absolute" isotope ratios while the other utilizes an artifact based measurement system called a delta-scale. The common element between these two measurement systems is the utilization of isotope reference materials (iRMs). These iRMs are used to validate a fractionation "law" in the former case and function as a scale anchor in the latter. Many value assignments of iRMs are based on "best measurements" by the original groups producing the reference material, a not entirely satisfactory approach. Other iRMs, with absolute isotope ratio values, have been produced by calibrated measurements following the Atomic Weight approach (AW) pioneered by NBS nearly 50 years ago. Unfortunately, the AW is not capable of calibrating the new generation of iRMs to sufficient precision. So how do we get iRMs with isotope ratios of sufficient precision and without bias? Such a focus is not to denigrate the extremely precise delta-scale measurements presently being made on non-traditional and tradition stable isotope systems. But even absolute isotope ratio measurements have an important role to play in delta-scale schemes. Highly precise and unbiased measurements of the artifact anchor for any scale facilitates the replacement of that scale's anchor once the initial supply of the iRM is exhausted. Absolute isotope ratio measurements of artifacts at the positive and negative extremes of a delta-scale will allow the appropriate assignment of delta-values to these normalizing iRMs, thereby minimizing any scale contractions or expansions to either side of the anchor artifact. And finally, absolute values for critical iRMs with also allow delta-scale results to be used in other scientific disciplines that employ other units of measure. Precise absolute isotope ratios of Si has been one of the consequences of the Avogadro Project (an international effort to replace the original kilogram artifact with a natural constant, the Planck constant.) We will present the results of applying such measurements to the principal iRMs for the Si isotope system (SRM 990, Big Batch and Diatomite) and its consequences for their delta-Si29 and delta-Si30 values.
The Long-Wave Infrared Earth Image as a Pointing Reference for Deep-Space Optical Communications
NASA Astrophysics Data System (ADS)
Biswas, A.; Piazzolla, S.; Peterson, G.; Ortiz, G. G.; Hemmati, H.
2006-11-01
Optical communications from space require an absolute pointing reference. Whereas at near-Earth and even planetary distances out to Mars and Jupiter a laser beacon transmitted from Earth can serve as such a pointing reference, for farther distances extending to the outer reaches of the solar system, the means for meeting this requirement remains an open issue. We discuss in this article the prospects and consequences of utilizing the Earth image sensed in the long-wave infrared (LWIR) spectral band as a beacon to satisfy the absolute pointing requirements. We have used data from satellite-based thermal measurements of Earth to synthesize images at various ranges and have shown the centroiding accuracies that can be achieved with prospective LWIR image sensing arrays. The nonuniform emissivity of Earth causes a mispointing bias error term that exceeds a provisional pointing budget allocation when using simple centroiding algorithms. Other issues related to implementing thermal imaging of Earth from deep space for the purposes of providing a pointing reference are also reported.
A model of the 8-25 micron point source infrared sky
NASA Technical Reports Server (NTRS)
Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.
1992-01-01
We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
Absolute instability of the Gaussian wake profile
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.; Aggarwal, Arun K.
1987-01-01
Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.
Sellers, Michael S; Lísal, Martin; Brennan, John K
2016-03-21
We present an extension of various free-energy methodologies to determine the chemical potential of the solid and liquid phases of a fully-flexible molecule using classical simulation. The methods are applied to the Smith-Bharadwaj atomistic potential representation of cyclotrimethylene trinitramine (RDX), a well-studied energetic material, to accurately determine the solid and liquid phase Gibbs free energies, and the melting point (Tm). We outline an efficient technique to find the absolute chemical potential and melting point of a fully-flexible molecule using one set of simulations to compute the solid absolute chemical potential and one set of simulations to compute the solid-liquid free energy difference. With this combination, only a handful of simulations are needed, whereby the absolute quantities of the chemical potentials are obtained, for use in other property calculations, such as the characterization of crystal polymorphs or the determination of the entropy. Using the LAMMPS molecular simulator, the Frenkel and Ladd and pseudo-supercritical path techniques are adapted to generate 3rd order fits of the solid and liquid chemical potentials. Results yield the thermodynamic melting point Tm = 488.75 K at 1.0 atm. We also validate these calculations and compare this melting point to one obtained from a typical superheated simulation technique.
Liang, Shanshan; Yuan, Fusong; Luo, Xu; Yu, Zhuoren; Tang, Zhihui
2018-04-05
Marginal discrepancy is key to evaluating the accuracy of fixed dental prostheses. An improved method of evaluating marginal discrepancy is needed. The purpose of this in vitro study was to evaluate the absolute marginal discrepancy of ceramic crowns fabricated using conventional and digital methods with a digital method for the quantitative evaluation of absolute marginal discrepancy. The novel method was based on 3-dimensional scanning, iterative closest point registration techniques, and reverse engineering theory. Six standard tooth preparations for the right maxillary central incisor, right maxillary second premolar, right maxillary second molar, left mandibular lateral incisor, left mandibular first premolar, and left mandibular first molar were selected. Ten conventional ceramic crowns and 10 CEREC crowns were fabricated for each tooth preparation. A dental cast scanner was used to obtain 3-dimensional data of the preparations and ceramic crowns, and the data were compared with the "virtual seating" iterative closest point technique. Reverse engineering software used edge sharpening and other functional modules to extract the margins of the preparations and crowns. Finally, quantitative evaluation of the absolute marginal discrepancy of the ceramic crowns was obtained from the 2-dimensional cross-sectional straight-line distance between points on the margin of the ceramic crowns and the standard preparations based on the circumferential function module along the long axis. The absolute marginal discrepancy of the ceramic crowns fabricated using conventional methods was 115 ±15.2 μm, and 110 ±14.3 μm for those fabricated using the digital technique was. ANOVA showed no statistical difference between the 2 methods or among ceramic crowns for different teeth (P>.05). The digital quantitative evaluation method for the absolute marginal discrepancy of ceramic crowns was established. The evaluations determined that the absolute marginal discrepancies were within a clinically acceptable range. This method is acceptable for the digital evaluation of the accuracy of complete crowns. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Külahci, Fatih; Sen, Zekâi
2009-09-15
The classical solid/liquid distribution coefficient, K(d), for radionuclides in water-sediment systems is dependent on many parameters such as flow, geology, pH, acidity, alkalinity, total hardness, radioactivity concentration, etc. in a region. Considerations of all these effects require a regional analysis with an effective methodology, which has been based on the concept of the cumulative semivariogram concept in this paper. Although classical K(d) calculations are punctual and cannot represent regional pattern, in this paper a regional calculation methodology is suggested through the use of Absolute Point Cumulative SemiVariogram (APCSV) technique. The application of the methodology is presented for (137)Cs and (90)Sr measurements at a set of points in Keban Dam reservoir, Turkey.
Parent education and biologic factors influence on cognition in sickle cell anemia
King, Allison A.; Strouse, John J.; Rodeghier, Mark J.; Compas, Bruce E.; Casella, James F.; McKinstry, Robert C.; Noetzel, Michael J.; Quinn, Charles T.; Ichord, Rebecca; Dowling, Michael M.; Miller, J. Philip; DeBaun, Michael R.
2015-01-01
Children with sickle cell anemia have a high prevalence of silent cerebral infarcts (SCIs) that are associated with decreased full-scale intelligence quotient (FSIQ). While the educational attainment of parents is a known strong predictor of the cognitive development of children in general, the role of parental education in sickle cell anemia along with other factors that adversely affect cognitive function (anemia, cerebral infarcts) is not known. We tested the hypothesis that both the presence of SCI and parental education would impact FSIQ in children with sickle cell anemia. A multicenter, cross-sectional study was conducted in 19 US sites of the Silent Infarct Transfusion Trial among children with sickle cell anemia, age 5–15 years. All were screened for SCIs. Participants with and without SCI were administered the Wechsler Abbreviated Scale of Intelligence. A total of 150 participants (107 with and 43 without SCIs) were included in the analysis. In a multivariable linear regression model for FSIQ, the absence of college education for the head of household was associated with a decrease of 6.2 points (P=0.005); presence of SCI with a 5.2 point decrease (P=0.017); each $1000 of family income per capita with a 0.33 point increase (P=0.023); each increase of 1 year in age with a 0.96 point decrease (P=0.023); and each 1% (absolute) decrease in hemoglobin oxygen saturation with 0.75 point decrease (P=0.030). In conclusion, FSIQ in children with sickle cell anemia is best accounted for by a multivariate model that includes both biologic and socioenvironmental factors. PMID:24123128
Absolute calibration of optical flats
Sommargren, Gary E.
2005-04-05
The invention uses the phase shifting diffraction interferometer (PSDI) to provide a true point-by-point measurement of absolute flatness over the surface of optical flats. Beams exiting the fiber optics in a PSDI have perfect spherical wavefronts. The measurement beam is reflected from the optical flat and passed through an auxiliary optic to then be combined with the reference beam on a CCD. The combined beams include phase errors due to both the optic under test and the auxiliary optic. Standard phase extraction algorithms are used to calculate this combined phase error. The optical flat is then removed from the system and the measurement fiber is moved to recombine the two beams. The newly combined beams include only the phase errors due to the auxiliary optic. When the second phase measurement is subtracted from the first phase measurement, the absolute phase error of the optical flat is obtained.
Experimental Estimating Deflection of a Simple Beam Bridge Model Using Grating Eddy Current Sensors
Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui
2012-01-01
A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring. PMID:23112583
Experimental estimating deflection of a simple beam bridge model using grating eddy current sensors.
Lü, Chunfeng; Liu, Weiwen; Zhang, Yongjie; Zhao, Hui
2012-01-01
A novel three-point method using a grating eddy current absolute position sensor (GECS) for bridge deflection estimation is proposed in this paper. Real spatial positions of the measuring points along the span axis are directly used as relative reference points of each other rather than using any other auxiliary static reference points for measuring devices in a conventional method. Every three adjacent measuring points are defined as a measuring unit and a straight connecting bar with a GECS fixed on the center section of it links the two endpoints. In each measuring unit, the displacement of the mid-measuring point relative to the connecting bar measured by the GECS is defined as the relative deflection. Absolute deflections of each measuring point can be calculated from the relative deflections of all the measuring units directly without any correcting approaches. Principles of the three-point method and displacement measurement of the GECS are introduced in detail. Both static and dynamic experiments have been carried out on a simple beam bridge model, which demonstrate that the three-point deflection estimation method using the GECS is effective and offers a reliable way for bridge deflection estimation, especially for long-term monitoring.
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
Assessment of uncertainty in ROLO lunar irradiance for on-orbit calibration
Stone, T.C.; Kieffer, H.H.; Barnes, W.L.; Butler, J.J.
2004-01-01
A system to provide radiometric calibration of remote sensing imaging instruments on-orbit using the Moon has been developed by the US Geological Survey RObotic Lunar Observatory (ROLO) project. ROLO has developed a model for lunar irradiance which treats the primary geometric variables of phase and libration explicitly. The model fits hundreds of data points in each of 23 VNIR and 9 SWIR bands; input data are derived from lunar radiance images acquired by the project's on-site telescopes, calibrated to exoatmospheric radiance and converted to disk-equivalent reflectance. Experimental uncertainties are tracked through all stages of the data processing and modeling. Model fit residuals are ???1% in each band over the full range of observed phase and libration angles. Application of ROLO lunar calibration to SeaWiFS has demonstrated the capability for long-term instrument response trending with precision approaching 0.1% per year. Current work involves assessing the error in absolute responsivity and relative spectral response of the ROLO imaging systems, and propagation of error through the data reduction and modeling software systems with the goal of reducing the uncertainty in the absolute scale, now estimated at 5-10%. This level is similar to the scatter seen in ROLO lunar irradiance comparisons of multiple spacecraft instruments that have viewed the Moon. A field calibration campaign involving NASA and NIST has been initiated that ties the ROLO lunar measurements to the NIST (SI) radiometric scale.
NASA Astrophysics Data System (ADS)
Capra, B. R.; Morgan, R. G.; Leyland, P.
2005-02-01
The present study focused on simulating a trajectory point towards the end of the first experimental heatshield of the FIRE II vehicle, at a total flight time of 1639.53s. Scale replicas were sized according to binary scaling and instrumented with thermocouples for testing in the X1 expansion tube, located at The University of Queensland. Correlation of flight to experimental data was achieved through the separation, and independent treatment of the heat modes. Preliminary investigation indicates that the absolute value of radiant surface flux is conserved between two binary scaled models, whereas convective heat transfer increases with the length scale. This difference in the scaling techniques result in the overall contribution of radiative heat transfer diminishing to less than 1% in expansion tubes from a flight value of approximately 9-17%. From empirical correlation's it has been shown that the St √ Re number decreases, under special circumstances, in expansion tubes by the percentage radiation present on the flight vehicle. Results obtained in this study give a strong indication that the relative radiative heat transfer contribution in the expansion tube tests is less than that in flight, supporting the analysis that the absolute value remains constant with binary scaling. Key words: Heat Transfer, Fire II Flight Vehicle, Expansion Tubes, Binary Scaling. NOMENCLATURE dA elemental surface area, m2 H0 stagnation enthalpy, MJ/kg L arbitrary length, m ls scale factor equal to Lf /Le M Mach Number ˙m mass flow rate, kg/s p pressure, kPa ˙q heat transfer rate, W/m2 ¯q averaged heat transfer rate W/m2 RN nose radius m Re Reynolds number, equal to ρURN µ s/RD radial distance from symmetry axis St Stanton number, equal to ˙q ρUH0 St √ Re = ˙qR 1/2 N (ρU)1/2 µ1/2H0 over radius of forebody (D/2) T temperature, K U velocity, m/s Ue equivalent velocity m/s, equal to √ 2H0 U1 primary shock speed m/s U2 secondary shock speed m/s ρ density, kg/m3 ρL binary scaling parameter, kg/m2 subscripts c convective exp experiment f flight r radiative s post shock T total ∞ freestream
Measuring Growth with Vertical Scales
ERIC Educational Resources Information Center
Briggs, Derek C.
2013-01-01
A vertical score scale is needed to measure growth across multiple tests in terms of absolute changes in magnitude. Since the warrant for subsequent growth interpretations depends upon the assumption that the scale has interval properties, the validation of a vertical scale would seem to require methods for distinguishing interval scales from…
Localization of an Underwater Control Network Based on Quasi-Stable Adjustment.
Zhao, Jianhu; Chen, Xinhua; Zhang, Hongmei; Feng, Jie
2018-03-23
There exists a common problem in the localization of underwater control networks that the precision of the absolute coordinates of known points obtained by marine absolute measurement is poor, and it seriously affects the precision of the whole network in traditional constraint adjustment. Therefore, considering that the precision of underwater baselines is good, we use it to carry out quasi-stable adjustment to amend known points before constraint adjustment so that the points fit the network shape better. In addition, we add unconstrained adjustment for quality control of underwater baselines, the observations of quasi-stable adjustment and constrained adjustment, to eliminate the unqualified baselines and improve the results' accuracy of the two adjustments. Finally, the modified method is applied to a practical LBL (Long Baseline) experiment and obtains a mean point location precision of 0.08 m, which improves by 38% compared with the traditional method.
Localization of an Underwater Control Network Based on Quasi-Stable Adjustment
Chen, Xinhua; Zhang, Hongmei; Feng, Jie
2018-01-01
There exists a common problem in the localization of underwater control networks that the precision of the absolute coordinates of known points obtained by marine absolute measurement is poor, and it seriously affects the precision of the whole network in traditional constraint adjustment. Therefore, considering that the precision of underwater baselines is good, we use it to carry out quasi-stable adjustment to amend known points before constraint adjustment so that the points fit the network shape better. In addition, we add unconstrained adjustment for quality control of underwater baselines, the observations of quasi-stable adjustment and constrained adjustment, to eliminate the unqualified baselines and improve the results’ accuracy of the two adjustments. Finally, the modified method is applied to a practical LBL (Long Baseline) experiment and obtains a mean point location precision of 0.08 m, which improves by 38% compared with the traditional method. PMID:29570627
NASA Astrophysics Data System (ADS)
Subhash, Hrebesh M.; Choudhury, Niloy; Jacques, Steven L.; Wang, Ruikang K.; Chen, Fangyi; Zha, Dingjun; Nuttall, Alfred L.
2012-01-01
Direct measurement of absolute vibration parameters from different locations within the mammalian organ of Corti is crucial for understanding the hearing mechanics such as how sound propagates through the cochlea and how sound stimulates the vibration of various structures of the cochlea, namely, basilar membrane (BM), recticular lamina, outer hair cells and tectorial membrane (TM). In this study we demonstrate the feasibility a modified phase-sensitive spectral domain optical coherence tomography system to provide subnanometer scale vibration information from multiple angles within the imaging beam. The system has the potential to provide depth resolved absolute vibration measurement of tissue microstructures from each of the delay-encoded vibration images with a noise floor of ~0.3nm at 200Hz.
Absolute irradiance of the Moon for on-orbit calibration
Stone, T.C.; Kieffer, H.H.; ,
2002-01-01
The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an absolute radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ???500 pixels) and 9 SWIR (???250 pixels) passbands at phase angles within ??90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to absolute radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit time-dependent component abundances to nightly observations of standard stars. The absolute radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the absolute solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the absolute scale for lunar irradiance.
Reconstructing 3D coastal cliffs from airborne oblique photographs without ground control points
NASA Astrophysics Data System (ADS)
Dewez, T. J. B.
2014-05-01
Coastal cliff collapse hazard assessment requires measuring cliff face topography at regular intervals. Terrestrial laser scanner techniques have proven useful so far but are expensive to use either through purchasing the equipment or through survey subcontracting. In addition, terrestrial laser surveys take time which is sometimes incompatible with the time during with the beach is accessible at low-tide. By comparison, structure from motion techniques (SFM) are much less costly to implement, and if airborne, acquisition of several kilometers of coastline can be done in a matter of minutes. In this paper, the potential of GPS-tagged oblique airborne photographs and SFM techniques is examined to reconstruct chalk cliff dense 3D point clouds without Ground Control Points (GCP). The focus is put on comparing the relative 3D point of views reconstructed by Visual SFM with their synchronous Solmeta Geotagger Pro2 GPS locations using robust estimators. With a set of 568 oblique photos, shot from the open door of an airplane with a triplet of synchronized Nikon D7000, GPS and SFM-determined view point coordinates converge to X: ±31.5 m; Y: ±39.7 m; Z: ±13.0 m (LE66). Uncertainty in GPS position affects the model scale, angular attitude of the reference frame (the shoreline ends up tilted by 2°) and absolute positioning. Ground Control Points cannot be avoided to orient such models.
NASA Astrophysics Data System (ADS)
Habib, A. S.; Shutt, A. L.; Regan, P. H.; Matthews, M. C.; Alsulaiti, H.; Bradley, D. A.
2014-02-01
Radioactive scale formation in various oil production facilities is acknowledged to pose a potential significant health and environmental issue. The presence of such an issue in Libyan oil fields was recognized as early as 1998. The naturally occurring radioactive materials (NORM) involved in this matter are radium isotopes (226Ra and 228Ra) and their decay products, precipitating into scales formed on the surfaces of production equipment. A field trip to a number of onshore Libyan oil fields has indicated the existence of elevated levels of specific activity in a number of locations in some of the more mature oil fields. In this study, oil scale samples collected from different parts of Libya have been characterized using gamma spectroscopy through use of a well shielded HPGe spectrometer. To avoid potential alpha-bearing dust inhalation and in accord with safe working practices at this University, the samples, contained in plastic bags and existing in different geometries, are not permitted to be opened. MCNP, a Monte Carlo simulation code, is being used to simulate the spectrometer and the scale samples in order to obtain the system absolute efficiency and then to calculate sample specific activities. The samples are assumed to have uniform densities and homogeneously distributed activity. Present results are compared to two extreme situations that were assumed in a previous study: (i) with the entire activity concentrated at a point on the sample surface proximal to the detector, simulating the sample lowest activity, and; (ii) with the entire activity concentrated at a point on the sample surface distal to the detector, simulating the sample highest activity.
Absolute color scale for improved diagnostics with wavefront error mapping.
Smolek, Michael K; Klyce, Stephen D
2007-11-01
Wavefront data are expressed in micrometers and referenced to the pupil plane, but current methods to map wavefront error lack standardization. Many use normalized or floating scales that may confuse the user by generating ambiguous, noisy, or varying information. An absolute scale that combines consistent clinical information with statistical relevance is needed for wavefront error mapping. The color contours should correspond better to current corneal topography standards to improve clinical interpretation. Retrospective analysis of wavefront error data. Historic ophthalmic medical records. Topographic modeling system topographical examinations of 120 corneas across 12 categories were used. Corneal wavefront error data in micrometers from each topography map were extracted at 8 Zernike polynomial orders and for 3 pupil diameters expressed in millimeters (3, 5, and 7 mm). Both total aberrations (orders 2 through 8) and higher-order aberrations (orders 3 through 8) were expressed in the form of frequency histograms to determine the working range of the scale across all categories. The standard deviation of the mean error of normal corneas determined the map contour resolution. Map colors were based on corneal topography color standards and on the ability to distinguish adjacent color contours through contrast. Higher-order and total wavefront error contour maps for different corneal conditions. An absolute color scale was produced that encompassed a range of +/-6.5 microm and a contour interval of 0.5 microm. All aberrations in the categorical database were plotted with no loss of clinical information necessary for classification. In the few instances where mapped information was beyond the range of the scale, the type and severity of aberration remained legible. When wavefront data are expressed in micrometers, this absolute scale facilitates the determination of the severity of aberrations present compared with a floating scale, particularly for distinguishing normal from abnormal levels of wavefront error. The new color palette makes it easier to identify disorders. The corneal mapping method can be extended to mapping whole eye wavefront errors. When refraction data are expressed in diopters, the previously published corneal topography scale is suggested.
On the calculation of the absolute grand potential of confined smectic-A phases
NASA Astrophysics Data System (ADS)
Huang, Chien-Cheng; Baus, Marc; Ryckaert, Jean-Paul
2015-09-01
We determine the absolute grand potential Λ along a confined smectic-A branch of a calamitic liquid crystal system enclosed in a slit pore of transverse area A and width L, using the rod-rod Gay-Berne potential and a rod-wall potential favouring perpendicular orientation at the walls. For a confined phase with an integer number of smectic layers sandwiched between the opposite walls, we obtain the excess properties (excess grand potential Λexc, solvation force fs and adsorption Γ) with respect to the bulk phase at the same μ (chemical potential) and T (temperature) state point. While usual thermodynamic integration methods are used along the confined smectic branch to estimate the grand potential difference as μ is varied at fixed L, T, the absolute grand potential at one reference state point is obtained via the evaluation of the absolute Helmholtz free energy in the (N, L, A, T) canonical ensemble. It proceeds via a sequence of free energy difference estimations involving successively the cost of localising rods on layers and the switching on of a one-dimensional harmonic field to keep layers integrity coupled to the elimination of inter-layers and wall interactions. The absolute free energy of the resulting set of fully independent layers of interacting rods is finally estimated via the existing procedures. This work opens the way to the computer simulation study of phase transitions implying confined layered phases.
Supercontinent cycles and the calculation of absolute palaeolongitude in deep time.
Mitchell, Ross N; Kilian, Taylor M; Evans, David A D
2012-02-08
Traditional models of the supercontinent cycle predict that the next supercontinent--'Amasia'--will form either where Pangaea rifted (the 'introversion' model) or on the opposite side of the world (the 'extroversion' models). Here, by contrast, we develop an 'orthoversion' model whereby a succeeding supercontinent forms 90° away, within the great circle of subduction encircling its relict predecessor. A supercontinent aggregates over a mantle downwelling but then influences global-scale mantle convection to create an upwelling under the landmass. We calculate the minimum moment of inertia about which oscillatory true polar wander occurs owing to the prolate shape of the non-hydrostatic Earth. By fitting great circles to each supercontinent's true polar wander legacy, we determine that the arc distances between successive supercontinent centres (the axes of the respective minimum moments of inertia) are 88° for Nuna to Rodinia and 87° for Rodinia to Pangaea--as predicted by the orthoversion model. Supercontinent centres can be located back into Precambrian time, providing fixed points for the calculation of absolute palaeolongitude over billion-year timescales. Palaeogeographic reconstructions additionally constrained in palaeolongitude will provide increasingly accurate estimates of ancient plate motions and palaeobiogeographic affinities.
Exploring of PST-TBPM in Monitoring Dynamic Deformation of Steel Structure in Vibration
NASA Astrophysics Data System (ADS)
Chen, Mingzhi; Zhao, Yongqian; Hai, Hua; Yu, Chengxin; Zhang, Guojian
2018-01-01
In order to monitor the dynamic deformation of steel structure in the real-time, digital photography is used in this paper. Firstly, the grid method is used correct the distortion of digital camera. Then the digital cameras are used to capture the initial and experimental images of steel structure to obtain its relative deformation. PST-TBPM (photographing scale transformation-time baseline parallax method) is used to eliminate the parallax error and convert the pixel change value of deformation points into the actual displacement value. In order to visualize the deformation trend of steel structure, the deformation curves are drawn based on the deformation value of deformation points. Results show that the average absolute accuracy and relative accuracy of PST-TBPM are 0.28mm and 1.1‰, respectively. Digital photography used in this study can meet accuracy requirements of steel structure deformation monitoring. It also can warn the safety of steel structure and provide data support for managers’ safety decisions based on the deformation curves on site.
Forbidden unique beta-decays and neutrino mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dvornický, Rastislav; Šimkovic, Fedor
2013-12-30
The measurement of the electron spectrum in beta-decays provides a robust direct determination of the values of neutrino masses. The planned rhenium beta-decay experiment, called the “Microcalorimeter Arrays for a Rhenium Experiment” (MARE), might probe the absolute mass scale of neutrinos with the same sensitivity as the Karlsruhe tritium neutrino mass (KATRIN) experiment, which is expected to collect data in a near future. In this contribution we discuss the spectrum of emitted electrons close to the end point in the case of the first unique forbidden beta-decay of {sup 79}Se, {sup 107}Pd and {sup 187}Re. It is found that themore » p{sub 3/2}-wave emission dominates over the s{sub 1/2}-wave. It is shown that the Kurie plot near the end point is within a good accuracy linear in the limit of massless neutrinos like the Kurie plot of the superallowed beta-decay of {sup 3}H.« less
The Ontological Representation of Death: A Scale to Measure the Idea of Annihilation Versus Passage.
Testoni, Ines; Ancona, Dorella; Ronconi, Lucia
2015-01-01
Since the borders between natural life and death have been blurred by technique, in Western societies discussions and practices regarding death have became infinite. The studies in this area include all the most important topics of psychology, sociology, and philosophy. From a psychological point of view, the research has created many instruments for measuring death anxiety, fear, threat, depression, meaning of life, and among them, the profiles on death attitude are innumerable. This research presents the validation of a new attitude scale, which conjoins psychological dimensions and philosophical ones. This scale may be useful because the ontological idea of death has not yet been considered in research. The hypothesis is that it is different to believe that death is absolute annihilation than to be sure that it is a passage or a transformation of one's personal identity. The hypothetical difference results in a greater inner suffering caused by the former idea. In order to measure this possibility, we analyzed the correlation between Testoni Death Representation Scale and Beck Hopelessness Scale, Suicide Resilience Inventory-25, and Reasons for Living Inventory. The results confirm the hypothesis, showing that the representation of death as total annihilation is positively correlated to hopelessness and negatively correlated to resilience.
The recalibration of the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Casatella, Angelo; Lloyd, Christopher
1988-01-01
The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.
ERIC Educational Resources Information Center
Struyf, Jef
2011-01-01
The boiling point of a monofunctional organic compound is expressed as the sum of two parts: a contribution to the boiling point due to the R group and a contribution due to the functional group. The boiling point in absolute temperature of the corresponding RH hydrocarbon is chosen for the contribution to the boiling point of the R group and is a…
Powell, T; Brooker, D J; Papadopolous, A
1993-05-01
Relative and absolute test-retest reliability of the MEAMS was examined in 12 subjects with probable dementia and 12 matched controls. Relative reliability was good. Measures of absolute reliability showed scores changing by up to 3 points over an interval of a week. A version effect was found to be in evidence.
Absolute calibration of the mass scale in the inverse problem of the physical theory of fireballs
NASA Astrophysics Data System (ADS)
Kalenichenko, V. V.
1992-08-01
A method of the absolute calibration of the mass scale is proposed for solving the inverse problem of the physical theory of fireballs. The method is based on data on the masses of fallen meteorites whose fireballs have been photographed in flight. The method can be applied to fireballs whose bodies have not experienced significant fragmentation during their flight in the atmosphere and have kept their shape relatively well. Data on the Lost City and Innisfree meteorites are used to calculate the calibration coefficients.
Exponential bound in the quest for absolute zero
NASA Astrophysics Data System (ADS)
Stefanatos, Dionisis
2017-10-01
In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.
Exponential bound in the quest for absolute zero.
Stefanatos, Dionisis
2017-10-01
In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.
Developments in the realization of diffuse reflectance scales at NPL
NASA Astrophysics Data System (ADS)
Chunnilall, Christopher J.; Clarke, Frank J. J.; Shaw, Michael J.
2005-08-01
The United Kingdom scales for diffuse reflectance are realized using two primary instruments. In the 360 nm to 2.5 μm spectral region the National Reference Reflectometer (NRR) realizes absolute measurement of reflectance and radiance factor by goniometric measurements. Hemispherical reflectance scales are obtained through the spatial integration of these goniometric measurements. In the mid-infrared region (2.5 μm - 55 μm) the hemispherical reflectance scale is realized by the Absolute Hemispherical Reflectometer (AHR). This paper describes some of the uncertainties resulting from errors in aligning the NRR and non-ideality in sample topography, together with its use to carry out measurements in the 1 - 1.6 μm region. The AHR has previously been used with grating spectrometers, and has now been coupled to a Fourier transform spectrometer.
The demise of superfluid density in overdoped La 2-xSr xCuO 4 films grown by molecular beam epitaxy
Bozovic, I.; He, X.; Wu, J.; ...
2016-09-30
Here, we synthesize La 2–xSr xCuO 4 thin films using atomic layer-by-layer molecular beam epitaxy (ALL-MBE). The films are high-quality—singe crystal, atomically smooth, and very homogeneous. The critical temperature (T c) shows a very little (<1 K) variation within a film of 10×10 mm 2 area. The large statistics (over 2000 films) is crucial to discern intrinsic properties. We measured the absolute value of the magnetic penetration depth λ with the accuracy better than 1 % and mapped densely the entire overdoped side of the La 2–xSr xCuO 4 phase diagram. A new scaling law is established accurately for themore » dependence of T c on the superfluid density. The scaling we observe is incompatible with the standard Bardeen-Cooper-Schrieffer picture and points to local pairing.« less
Gyrofluid modeling and phenomenology of low-βe Alfvén wave turbulence
NASA Astrophysics Data System (ADS)
Passot, T.; Sulem, P. L.; Tassi, E.
2018-04-01
A two-field reduced gyrofluid model including electron inertia, ion finite Larmor radius corrections, and parallel magnetic field fluctuations is derived from the model of Brizard [Brizard, Phys. Fluids B 4, 1213 (1992)]. It assumes low βe, where βe indicates the ratio between the equilibrium electron pressure and the magnetic pressure exerted by a strong uniform magnetic guide field, but permits an arbitrary ion-to-electron equilibrium temperature ratio. It is shown to have a noncanonical Hamiltonian structure and provides a convenient framework for studying kinetic Alfvén wave turbulence, from magnetohydrodynamics to sub-de scales (where de holds for the electron skin depth). Magnetic energy spectra are phenomenologically determined within energy and generalized cross-helicity cascades in the perpendicular spectral plane. Arguments based on absolute statistical equilibria are used to predict the direction of the transfers, pointing out that, within the sub-ion range, the generalized cross-helicity could display an inverse cascade if injected at small scales, for example by reconnection processes.
Responsiveness of the VISA-P scale for patellar tendinopathy in athletes.
Hernandez-Sanchez, Sergio; Hidalgo, Ma Dolores; Gomez, Antonia
2014-03-01
Patient-reported outcome measures are increasingly used in sports medicine to assess results after treatment, but interpretability of change for many instruments remains unclear. To define the minimum clinically important difference (MCID) for the Victorian Institute of Sport Assessment scale (VISA-P) in athletes with patellar tendinopathy (PT) who underwent conservative treatment. Ninety-eight athletes with PT were enrolled in the study. Each participant completed the VISA-P at admission, after 1 week, and at the final visit. Athletes also assessed their clinical change at discharge on a 15-point Likert scale. We equated important change with a score of ≥3 (somewhat better). Receiver-operating characteristic (ROC) curve analysis and mean change score were used to determine MCID. Minimal detectable change was calculated. The effect of baseline scores on MCID and different criteria used to define important change were investigated. A Bayesian analysis was used to establish the posterior probability of reporting clinical changes related to MCID value. Athletes with PT who showed an absolute change greater than 13 points in the VISA-P score or 15.4-27% of relative change achieved a minimal important change in their clinical status. This value depended on baseline scores. The probability of a clinical change in a patient was 98% when this threshold was achieved and 45% when MCID was not achieved. Definition of the MCID will enhance the interpretability of changes in the VISA-P score in the athletes with PT, but caution is required when these values are used.
Flosadottir, Vala; Roos, Ewa M.; Ageberg, Eva
2017-01-01
Background: The Activity Rating Scale (ARS) for disorders of the knee evaluates the level of activity by the frequency of participation in 4 separate activities with high demands on knee function, with a score ranging from 0 (none) to 16 (pivoting activities 4 times/wk). Purpose: To translate and cross-culturally adapt the ARS into Swedish and to assess measurement properties of the Swedish version of the ARS. Study Design: Cohort study (diagnosis); Level of evidence, 2. Methods: The COSMIN guidelines were followed. Participants (N = 100 [55 women]; mean age, 27 years) who were undergoing rehabilitation for a knee injury completed the ARS twice for test-retest reliability. The Knee injury and Osteoarthritis Outcome Score (KOOS), Tegner Activity Scale (TAS), and modernized Saltin-Grimby Physical Activity Level Scale (SGPALS) were administered at baseline to validate the ARS. Construct validity and responsiveness of the ARS were evaluated by testing predefined hypotheses regarding correlations between the ARS, KOOS, TAS, and SGPALS. The Cronbach alpha, intraclass correlation coefficients, absolute reliability, standard error of measurement, smallest detectable change, and Spearman rank-order correlation coefficients were calculated. Results: The ARS showed good internal consistency (α ≈ 0.96), good test-retest reliability (intraclass correlation coefficient >0.9), and no systematic bias between measurements. The standard error of measurement was less than 2 points, and the smallest detectable change was less than 1 point at the group level and less than 5 points at the individual level. More than 75% of the hypotheses were confirmed, indicating good construct validity and good responsiveness of the ARS. Conclusion: The Swedish version of the ARS is valid, reliable, and responsive for evaluating the level of activity based on the frequency of participation in high-demand knee sports activities in young adults with a knee injury. PMID:28979920
Ortiz-Catalan, Max; Guðmundsdóttir, Rannveig A; Kristoffersen, Morten B; Zepeda-Echavarria, Alejandra; Caine-Winterberger, Kerstin; Kulbacka-Ortiz, Katarzyna; Widehammar, Cathrine; Eriksson, Karin; Stockselius, Anita; Ragnö, Christina; Pihlar, Zdenka; Burger, Helena; Hermansson, Liselotte
2016-12-10
Phantom limb pain is a debilitating condition for which no effective treatment has been found. We hypothesised that re-engagement of central and peripheral circuitry involved in motor execution could reduce phantom limb pain via competitive plasticity and reversal of cortical reorganisation. Patients with upper limb amputation and known chronic intractable phantom limb pain were recruited at three clinics in Sweden and one in Slovenia. Patients received 12 sessions of phantom motor execution using machine learning, augmented and virtual reality, and serious gaming. Changes in intensity, frequency, duration, quality, and intrusion of phantom limb pain were assessed by the use of the numeric rating scale, the pain rating index, the weighted pain distribution scale, and a study-specific frequency scale before each session and at follow-up interviews 1, 3, and 6 months after the last session. Changes in medication and prostheses were also monitored. Results are reported using descriptive statistics and analysed by non-parametric tests. The trial is registered at ClinicalTrials.gov, number NCT02281539. Between Sept 15, 2014, and April 10, 2015, 14 patients with intractable chronic phantom limb pain, for whom conventional treatments failed, were enrolled. After 12 sessions, patients showed statistically and clinically significant improvements in all metrics of phantom limb pain. Phantom limb pain decreased from pre-treatment to the last treatment session by 47% (SD 39; absolute mean change 1·0 [0·8]; p=0·001) for weighted pain distribution, 32% (38; absolute mean change 1·6 [1·8]; p=0·007) for the numeric rating scale, and 51% (33; absolute mean change 9·6 [8·1]; p=0·0001) for the pain rating index. The numeric rating scale score for intrusion of phantom limb pain in activities of daily living and sleep was reduced by 43% (SD 37; absolute mean change 2·4 [2·3]; p=0·004) and 61% (39; absolute mean change 2·3 [1·8]; p=0·001), respectively. Two of four patients who were on medication reduced their intake by 81% (absolute reduction 1300 mg, gabapentin) and 33% (absolute reduction 75 mg, pregabalin). Improvements remained 6 months after the last treatment. Our findings suggest potential value in motor execution of the phantom limb as a treatment for phantom limb pain. Promotion of phantom motor execution aided by machine learning, augmented and virtual reality, and gaming is a non-invasive, non-pharmacological, and engaging treatment with no identified side-effects at present. Promobilia Foundation, VINNOVA, Jimmy Dahlstens Fond, PicoSolve, and Innovationskontor Väst. Copyright © 2016 Elsevier Ltd. All rights reserved.
Whiteley, William N; Emberson, Jonathan; Lees, Kennedy R; Blackwell, Lisa; Albers, Gregory; Bluhmki, Erich; Brott, Thomas; Cohen, Geoff; Davis, Stephen; Donnan, Geoffrey; Grotta, James; Howard, George; Kaste, Markku; Koga, Masatoshi; von Kummer, Rüdiger; Lansberg, Maarten G; Lindley, Richard I; Lyden, Patrick; Olivot, Jean Marc; Parsons, Mark; Toni, Danilo; Toyoda, Kazunori; Wahlgren, Nils; Wardlaw, Joanna; Del Zoppo, Gregory J; Sandercock, Peter; Hacke, Werner; Baigent, Colin
2016-08-01
Randomised trials have shown that alteplase improves the odds of a good outcome when delivered within 4·5 h of acute ischaemic stroke. However, alteplase also increases the risk of intracerebral haemorrhage; we aimed to determine the proportional and absolute effects of alteplase on the risks of intracerebral haemorrhage, mortality, and functional impairment in different types of patients. We used individual patient data from the Stroke Thrombolysis Trialists' (STT) meta-analysis of randomised trials of alteplase versus placebo (or untreated control) in patients with acute ischaemic stroke. We prespecified assessment of three classifications of intracerebral haemorrhage: type 2 parenchymal haemorrhage within 7 days; Safe Implementation of Thrombolysis in Stroke Monitoring Study's (SITS-MOST) haemorrhage within 24-36 h (type 2 parenchymal haemorrhage with a deterioration of at least 4 points on National Institutes of Health Stroke Scale [NIHSS]); and fatal intracerebral haemorrhage within 7 days. We used logistic regression, stratified by trial, to model the log odds of intracerebral haemorrhage on allocation to alteplase, treatment delay, age, and stroke severity. We did exploratory analyses to assess mortality after intracerebral haemorrhage and examine the absolute risks of intracerebral haemorrhage in the context of functional outcome at 90-180 days. Data were available from 6756 participants in the nine trials of intravenous alteplase versus control. Alteplase increased the odds of type 2 parenchymal haemorrhage (occurring in 231 [6·8%] of 3391 patients allocated alteplase vs 44 [1·3%] of 3365 patients allocated control; odds ratio [OR] 5·55 [95% CI 4·01-7·70]; absolute excess 5·5% [4·6-6·4]); of SITS-MOST haemorrhage (124 [3·7%] of 3391 vs 19 [0·6%] of 3365; OR 6·67 [4·11-10·84]; absolute excess 3·1% [2·4-3·8]); and of fatal intracerebral haemorrhage (91 [2·7%] of 3391 vs 13 [0·4%] of 3365; OR 7·14 [3·98-12·79]; absolute excess 2·3% [1·7-2·9]). However defined, the proportional increase in intracerebral haemorrhage was similar irrespective of treatment delay, age, or baseline stroke severity, but the absolute excess risk of intracerebral haemorrhage increased with increasing stroke severity: for SITS-MOST intracerebral haemorrhage the absolute excess risk ranged from 1·5% (0·8-2·6%) for strokes with NIHSS 0-4 to 3·7% (2·1-6·3%) for NIHSS 22 or more (p=0·0101). For patients treated within 4·5 h, the absolute increase in the proportion (6·8% [4·0% to 9·5%]) achieving a modified Rankin Scale of 0 or 1 (excellent outcome) exceeded the absolute increase in risk of fatal intracerebral haemorrhage (2·2% [1·5% to 3·0%]) and the increased risk of any death within 90 days (0·9% [-1·4% to 3·2%]). Among patients given alteplase, the net outcome is predicted both by time to treatment (with faster time increasing the proportion achieving an excellent outcome) and stroke severity (with a more severe stroke increasing the absolute risk of intracerebral haemorrhage). Although, within 4·5 h of stroke, the probability of achieving an excellent outcome with alteplase treatment exceeds the risk of death, early treatment is especially important for patients with severe stroke. UK Medical Research Council, British Heart Foundation, University of Glasgow, University of Edinburgh. Copyright © 2016 Elsevier Ltd. All rights reserved.
Equipercentile linking of the BPRS and the PANSS.
Leucht, S; Rothe, P; Davis, J M; Engel, R R
2013-08-01
The Positive and Negative Syndrome Scale (PANSS) and the Brief Psychiatric Rating Scale (BPRS) are the most frequently used scales to rate the symptoms of schizophrenia. There are many situations in which it is important to know what a given total score or a percent reduction from baseline score of one scale means in terms of the other scale. We used the equipercentile linking method to identify corresponding scores of simultaneous BPRS and PANSS ratings in 3767 patients from antipsychotic drug trials. Data were collected at baseline and at weeks 1, 2, 4 and 6. BPRS total scores of 18, 30, 40 and 50 roughly corresponded to PANSS total scores of 31, 55, 73 and 90, respectively. An absolute BPRS improvement of 10, 20, 30, 40 points corresponded to a PANSS improvement of 15, 32, 50, and 67. A percentage improvement of the BPRS total score from baseline of 19%, 30%, 40% and 50% roughly corresponded to percentage PANSS improvement of 16%, 25%, 35%, and 44%. Thus a given PANSS percent improvement was always lower than the corresponding BPRS percent improvement, on the average by 4-5%. A reason may be the higher number of items used in the PANSS. These results are important for the comparison of trials that used these rating scales. We present a detailed conversion table in an online supplement. Copyright © 2012 Elsevier B.V. and ECNP. All rights reserved.
Scale covariant gravitation. V - Kinetic theory. VI - Stellar structure and evolution
NASA Technical Reports Server (NTRS)
Hsieh, S.-H.; Canuto, V. M.
1981-01-01
A scale covariant kinetic theory for particles and photons is developed. The mathematical framework of the theory is given by the tangent bundle of a Weyl manifold. The Liouville equation is derived, and solutions to corresponding equilibrium distributions are presented and shown to yield thermodynamic results identical to the ones obtained previously. The scale covariant theory is then used to derive results of interest to stellar structure and evolution. A radiative transfer equation is derived that can be used to study stellar evolution with a variable gravitational constant. In addition, it is shown that the sun's absolute luminosity scales as L approximately equal to GM/kappa, where kappa is the stellar opacity. Finally, a formula is derived for the age of globular clusters as a function of the gravitational constant using a previously derived expression for the absolute luminosity.
Design and laboratory testing of a prototype linear temperature sensor
NASA Astrophysics Data System (ADS)
Dube, C. M.; Nielsen, C. M.
1982-07-01
This report discusses the basic theory, design, and laboratory testing of a prototype linear temperature sensor (or "line sensor'), which is an instrument for measuring internal waves in the ocean. The operating principle of the line sensor consists of measuring the average resistance change of a vertically suspended wire (or coil of wire) induced by the passage of an internal wave in a thermocline. The advantage of the line sensor over conventional internal wave measurement techniques is that it is insensitive to thermal finestructure which contaminates point sensor measurements, and its output is approximately linearly proportional to the internal wave displacement. An approximately one-half scale prototype line sensor module was teste in the laboratory. The line sensor signal was linearly related to the actual fluid displacement to within 10%. Furthermore, the absolute output was well predicted (within 25%) from the theoretical model and the sensor material properties alone. Comparisons of the line sensor and a point sensor in a wavefield with superimposed turbulence (finestructure) revealed negligible distortion in the line sensor signal, while the point sensor signal was swamped by "turbulent noise'. The effects of internal wave strain were also found to be negligible.
Thermodynamic Temperature Measurement to the Indium Point Based on Radiance Comparison
NASA Astrophysics Data System (ADS)
Yamaguchi, Y.; Yamada, Y.
2017-04-01
A multi-national project (the EMRP InK project) was completed recently, which successfully determined the thermodynamic temperatures of several of the high-temperature fixed points above the copper point. The National Metrology Institute of Japan contributed to this project with its newly established absolute spectral radiance calibration capability. In the current study, we have extended the range of thermodynamic temperature measurement to below the copper point and measured the thermodynamic temperatures of the indium point (T_{90} = 429.748 5 K), tin point (505.078 K), zinc point (692.677 K), aluminum point (933.473 K) and the silver point (1 234.93 K) by radiance comparison against the copper point, with a set of radiation thermometers having center wavelengths ranging from 0.65 μm to 1.6 μm. The copper-point temperature was measured by the absolute radiation thermometer which was calibrated by radiance method traceable to the electrical substitution cryogenic radiometer. The radiance of the fixed-point blackbodies was measured by standard radiation thermometers whose spectral responsivity and nonlinearity are precisely evaluated, and then the thermodynamic temperatures were determined from radiance ratios to the copper point. The values of T-T_{90} for the silver-, aluminum-, zinc-, tin- and indium-point cells were determined as -4 mK (U = 104 mK, k=2), -99 mK (88 mK), -76 mK (76 mK), -68 mK (163 mK) and -42 mK (279 mK), respectively.
Photogrammetry Tool for Forensic Analysis
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
A system allows crime scene and accident scene investigators the ability to acquire visual scene data using cameras for processing at a later time. This system uses a COTS digital camera, a photogrammetry calibration cube, and 3D photogrammetry processing software. In a previous instrument developed by NASA, the laser scaling device made use of parallel laser beams to provide a photogrammetry solution in 2D. This device and associated software work well under certain conditions. In order to make use of a full 3D photogrammetry system, a different approach was needed. When using multiple cubes, whose locations relative to each other are unknown, a procedure that would merge the data from each cube would be as follows: 1. One marks a reference point on cube 1, then marks points on cube 2 as unknowns. This locates cube 2 in cube 1 s coordinate system. 2. One marks reference points on cube 2, then marks points on cube 1 as unknowns. This locates cube 1 in cube 2 s coordinate system. 3. This procedure is continued for all combinations of cubes. 4. The coordinate of all of the found coordinate systems is then merged into a single global coordinate system. In order to achieve maximum accuracy, measurements are done in one of two ways, depending on scale: when measuring the size of objects, the coordinate system corresponding to the nearest cube is used, or when measuring the location of objects relative to a global coordinate system, a merged coordinate system is used. Presently, traffic accident analysis is time-consuming and not very accurate. Using cubes with differential GPS would give absolute positions of cubes in the accident area, so that individual cubes would provide local photogrammetry calibration to objects near a cube.
a Weighted Closed-Form Solution for Rgb-D Data Registration
NASA Astrophysics Data System (ADS)
Vestena, K. M.; Dos Santos, D. R.; Oilveira, E. M., Jr.; Pavan, N. L.; Khoshelham, K.
2016-06-01
Existing 3D indoor mapping of RGB-D data are prominently point-based and feature-based methods. In most cases iterative closest point (ICP) and its variants are generally used for pairwise registration process. Considering that the ICP algorithm requires an relatively accurate initial transformation and high overlap a weighted closed-form solution for RGB-D data registration is proposed. In this solution, we weighted and normalized the 3D points based on the theoretical random errors and the dual-number quaternions are used to represent the 3D rigid body motion. Basically, dual-number quaternions provide a closed-form solution by minimizing a cost function. The most important advantage of the closed-form solution is that it provides the optimal transformation in one-step, it does not need to calculate good initial estimates and expressively decreases the demand for computer resources in contrast to the iterative method. Basically, first our method exploits RGB information. We employed a scale invariant feature transformation (SIFT) for extracting, detecting, and matching features. It is able to detect and describe local features that are invariant to scaling and rotation. To detect and filter outliers, we used random sample consensus (RANSAC) algorithm, jointly with an statistical dispersion called interquartile range (IQR). After, a new RGB-D loop-closure solution is implemented based on the volumetric information between pair of point clouds and the dispersion of the random errors. The loop-closure consists to recognize when the sensor revisits some region. Finally, a globally consistent map is created to minimize the registration errors via a graph-based optimization. The effectiveness of the proposed method is demonstrated with a Kinect dataset. The experimental results show that the proposed method can properly map the indoor environment with an absolute accuracy around 1.5% of the travel of a trajectory.
Graphs to estimate an individualized risk of breast cancer.
Benichou, J; Gail, M H; Mulvihill, J J
1996-01-01
Clinicians who counsel women about their risk for developing breast cancer need a rapid method to estimate individualized risk (absolute risk), as well as the confidence limits around that point. The Breast Cancer Detection Demonstration Project (BCDDP) model (sometimes called the Gail model) assumes no genetic model and simultaneously incorporates five risk factors, but involves cumbersome calculations and interpolations. This report provides graphs to estimate the absolute risk of breast cancer from the BCDDP model. The BCDDP recruited 280,000 women from 1973 to 1980 who were monitored for 5 years. From this cohort, 2,852 white women developed breast cancer and 3,146 controls were selected, all with complete risk-factor information. The BCDDP model, previously developed from these data, was used to prepare graphs that relate a specific summary relative-risk estimate to the absolute risk of developing breast cancer over intervals of 10, 20, and 30 years. Once a summary relative risk is calculated, the appropriate graph is chosen that shows the 10-, 20-, or 30-year absolute risk of developing breast cancer. A separate graph gives the 95% confidence limits around the point estimate of absolute risk. Once a clinician rules out a single gene trait that predisposes to breast cancer and elicits information on age and four risk factors, the tables and figures permit an estimation of a women's absolute risk of developing breast cancer in the next three decades. These results are intended to be applied to women who undergo regular screening. They should be used only in a formal counseling program to maximize a woman's understanding of the estimates and the proper use of them.
Electrotherapy modalities for rotator cuff disease.
Page, Matthew J; Green, Sally; Mrocki, Marshall A; Surace, Stephen J; Deitch, Jessica; McBain, Brodwen; Lyttle, Nicolette; Buchbinder, Rachelle
2016-06-10
Management of rotator cuff disease may include use of electrotherapy modalities (also known as electrophysical agents), which aim to reduce pain and improve function via an increase in energy (electrical, sound, light, or thermal) into the body. Examples include therapeutic ultrasound, low-level laser therapy (LLLT), transcutaneous electrical nerve stimulation (TENS), and pulsed electromagnetic field therapy (PEMF). These modalities are usually delivered as components of a physical therapy intervention. This review is one of a series of reviews that form an update of the Cochrane review, 'Physiotherapy interventions for shoulder pain'. To synthesise available evidence regarding the benefits and harms of electrotherapy modalities for the treatment of people with rotator cuff disease. We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 3), Ovid MEDLINE (January 1966 to March 2015), Ovid EMBASE (January 1980 to March 2015), CINAHL Plus (EBSCOhost, January 1937 to March 2015), ClinicalTrials.gov and the WHO ICTRP clinical trials registries up to March 2015, unrestricted by language, and reviewed the reference lists of review articles and retrieved trials, to identify potentially relevant trials. We included randomised controlled trials (RCTs) and quasi-randomised trials, including adults with rotator cuff disease (e.g. subacromial impingement syndrome, rotator cuff tendinitis, calcific tendinitis), and comparing any electrotherapy modality with placebo, no intervention, a different electrotherapy modality or any other intervention (e.g. glucocorticoid injection). Trials investigating whether electrotherapy modalities were more effective than placebo or no treatment, or were an effective addition to another physical therapy intervention (e.g. manual therapy or exercise) were the main comparisons of interest. Main outcomes of interest were overall pain, function, pain on motion, patient-reported global assessment of treatment success, quality of life and the number of participants experiencing adverse events. Two review authors independently selected trials for inclusion, extracted the data, performed a risk of bias assessment and assessed the quality of the body of evidence for the main outcomes using the GRADE approach. We included 47 trials (2388 participants). Most trials (n = 43) included participants with rotator cuff disease without calcification (four trials included people with calcific tendinitis). Sixteen (34%) trials investigated the effect of an electrotherapy modality delivered in isolation. Only 23% were rated at low risk of allocation bias, and 49% were rated at low risk of both performance and detection bias (for self-reported outcomes). The trials were heterogeneous in terms of population, intervention and comparator, so none of the data could be combined in a meta-analysis.In one trial (61 participants; low quality evidence), pulsed therapeutic ultrasound (three to five times a week for six weeks) was compared with placebo (inactive ultrasound therapy) for calcific tendinitis. At six weeks, the mean reduction in overall pain with placebo was -6.3 points on a 52-point scale, and -14.9 points with ultrasound (MD -8.60 points, 95% CI -13.48 to -3.72 points; absolute risk difference 17%, 7% to 26% more). Mean improvement in function with placebo was 3.7 points on a 100-point scale, and 17.8 points with ultrasound (mean difference (MD) 14.10 points, 95% confidence interval (CI) 5.39 to 22.81 points; absolute risk difference 14%, 5% to 23% more). Ninety-one per cent (29/32) of participants reported treatment success with ultrasound compared with 52% (15/29) of participants receiving placebo (risk ratio (RR) 1.75, 95% CI 1.21 to 2.53; absolute risk difference 39%, 18% to 60% more). Mean improvement in quality of life with placebo was 0.40 points on a 10-point scale, and 2.60 points with ultrasound (MD 2.20 points, 95% CI 0.91 points to 3.49 points; absolute risk difference 22%, 9% to 35% more). Between-group differences were not important at nine months. No participant reported adverse events.Therapeutic ultrasound produced no clinically important additional benefits when combined with other physical therapy interventions (eight clinically heterogeneous trials, low quality evidence). We are uncertain whether there are differences in patient-important outcomes between ultrasound and other active interventions (manual therapy, acupuncture, glucocorticoid injection, glucocorticoid injection plus oral tolmetin sodium, or exercise) because the quality of evidence is very low. Two placebo-controlled trials reported results favouring LLLT up to three weeks (low quality evidence), however combining LLLT with other physical therapy interventions produced few additional benefits (10 clinically heterogeneous trials, low quality evidence). We are uncertain whether transcutaneous electrical nerve stimulation (TENS) is more or less effective than glucocorticoid injection with respect to pain, function, global treatment success and active range of motion because of the very low quality evidence from a single trial. In other single, small trials, no clinically important benefits of pulsed electromagnetic field therapy (PEMF), microcurrent electrical stimulation (MENS), acetic acid iontophoresis and microwave diathermy were observed (low or very low quality evidence).No adverse events of therapeutic ultrasound, LLLT, TENS or microwave diathermy were reported by any participants. Adverse events were not measured in any trials investigating the effects of PEMF, MENS or acetic acid iontophoresis. Based on low quality evidence, therapeutic ultrasound may have short-term benefits over placebo in people with calcific tendinitis, and LLLT may have short-term benefits over placebo in people with rotator cuff disease. Further high quality placebo-controlled trials are needed to confirm these results. In contrast, based on low quality evidence, PEMF may not provide clinically relevant benefits over placebo, and therapeutic ultrasound, LLLT and PEMF may not provide additional benefits when combined with other physical therapy interventions. We are uncertain whether TENS is superior to placebo, and whether any electrotherapy modality provides benefits over other active interventions (e.g. glucocorticoid injection) because of the very low quality of the evidence. Practitioners should communicate the uncertainty of these effects and consider other approaches or combinations of treatment. Further trials of electrotherapy modalities for rotator cuff disease should be based upon a strong rationale and consideration of whether or not they would alter the conclusions of this review.
A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.
Rutledge, Robert G
2011-03-02
Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
A Java Program for LRE-Based Real-Time qPCR that Enables Large-Scale Absolute Quantification
Rutledge, Robert G.
2011-01-01
Background Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Findings Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. Conclusions The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples. PMID:21407812
NASA Technical Reports Server (NTRS)
Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.
1980-01-01
Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 + or - 192 K), Venus (357.5 + or - 13.1 K), Jupiter (179.4 + or - 4.7 K), and Saturn (153.4 + or - 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.
Strong thermal leptogenesis and the absolute neutrino mass scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bari, Pasquale Di; King, Sophie E.; Fiorentin, Michele Re, E-mail: pdb1d08@soton.ac.uk, E-mail: sk1806@soton.ac.uk, E-mail: m.re-fiorentin@soton.ac.uk
We show that successful strong thermal leptogenesis, where the final asymmetry is independent of the initial conditions and in particular a large pre-existing asymmetry is efficiently washed-out, favours values of the lightest neutrino mass m{sub 1}∼>10 meV for normal ordering (NO) and m{sub 1}∼>3 meV for inverted ordering (IO) for models with orthogonal matrix entries respecting |Ω{sub ij}{sup 2}|∼<2. We show analytically why lower values of m{sub 1} require a higher level of fine tuning in the seesaw formula and/or in the flavoured decay parameters (in the electronic for NO, in the muonic for IO). We also show how this constraint existsmore » thanks to the measured values of the neutrino mixing angles and could be tightened by a future determination of the Dirac phase. Our analysis also allows us to place a more stringent constraint for a specific model or class of models, such as SO(10)-inspired models, and shows that some models cannot realise strong thermal leptogenesis for any value of m{sub 1}. A scatter plot analysis fully supports the analytical results. We also briefly discuss the interplay with absolute neutrino mass scale experiments concluding that they will be able in the coming years to either corner strong thermal leptogenesis or find positive signals pointing to a non-vanishing m{sub 1}. Since the constraint is much stronger for NO than for IO, it is very important that new data from planned neutrino oscillation experiments will be able to solve the ambiguity.« less
Absolute calibration of ultraviolet filter photometry
NASA Technical Reports Server (NTRS)
Bless, R. C.; Fairchild, T.; Code, A. D.
1972-01-01
The essential features of the calibration procedure can be divided into three parts. First, the shape of the bandpass of each photometer was determined by measuring the transmissions of the individual optical components and also by measuring the response of the photometer as a whole. Secondly, each photometer was placed in the essentially-collimated synchrotron radiation bundle maintained at a constant intensity level, and the output signal was determined from about 100 points on the objective. Finally, two or three points on the objective were illuminated by synchrotron radiation at several different intensity levels covering the dynamic range of the photometers. The output signals were placed on an absolute basis by the electron counting technique described earlier.
NASA Astrophysics Data System (ADS)
Best, Fred A.; Revercomb, Henry E.; Knuteson, Robert O.; Tobin, David C.; Ellington, Scott D.; Werner, Mark W.; Adler, Douglas P.; Garcia, Raymond K.; Taylor, Joseph K.; Ciganovich, Nick N.; Smith, William L., Sr.; Bingham, Gail E.; Elwell, John D.; Scott, Deron K.
2005-01-01
The NASA New Millennium Program's Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) instrument provides enormous advances in water vapor, wind, temperature, and trace gas profiling from geostationary orbit. The top-level instrument calibration requirement is to measure brightness temperature to better than 1 K (3 sigma) over a broad range of atmospheric brightness temperatures, with a reproducibility of +/-0.2 K. For in-flight radiometric calibration, GIFTS uses views of two on-board blackbody sources (290 K and 255 K) along with cold space, sequenced at regular programmable intervals. The blackbody references are cavities that follow the UW Atmospheric Emitted Radiance Interferometer (AERI) design, scaled to the GIFTS beam size. The cavity spectral emissivity is better than 0.998 with an absolute uncertainty of less than 0.001. Absolute blackbody temperature uncertainties are estimated at 0.07 K. This paper describes the detailed design of the GIFTS on-board calibration system that recently underwent its Critical Design Review. The blackbody cavities use ultra-stable thermistors to measure temperature, and are coated with high emissivity black paint. Monte Carlo modeling has been performed to calculate the cavity emissivity. Both absolute temperature and emissivity measurements are traceable to NIST, and detailed uncertainty budgets have been developed and used to show the overall system meets accuracy requirements. The blackbody controller is housed on a single electronics board and provides precise selectable set point temperature control, thermistor resistance measurement, and the digital interface to the GIFTS instrument. Plans for the NIST traceable ground calibration of the on-board blackbody system have also been developed and are presented in this paper.
Microcounseling Skill Discrimination Scale: A Methodological Note
ERIC Educational Resources Information Center
Stokes, Joseph; Romer, Daniel
1977-01-01
Absolute ratings on the Microcounseling Skill Discrimination Scale (MSDS) confound the individual's use of the rating scale and actual ability to discriminate effective and ineffective counselor behaviors. This note suggests methods of scoring the MSDS that will eliminate variability attributable to response language and improve the validity of…
Cultural Differences in Justificatory Reasoning
ERIC Educational Resources Information Center
Soong, Hannah; Lee, Richard; John, George
2012-01-01
Justificatory reasoning, the ability to justify one's beliefs and actions, is an important goal of education. We develop a scale to measure the three forms of justificatory reasoning--absolutism, relativism, and evaluativism--before validating the scale across two cultures and domains. The results show that the scale possessed validity and…
Improved dewpoint-probe calibration
NASA Technical Reports Server (NTRS)
Stephenson, J. G.; Theodore, E. A.
1978-01-01
Relatively-simple pressure-control apparatus calibrates dewpoint probes considerably faster than conventional methods, with no loss of accuracy. Technique requires only pressure measurement at each calibration point and single absolute-humidity measurement at beginning of run. Several probes can be calibrated simultaneously and points can be checked above room temperature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, D; Gutierrez, A
Purpose: The ScandiDos Discover has obtained FDA clearance and is now clinically released. We studied the essential attenuation and beam hardening components as well as tested the diode array’s ability to detect changes in absolute dose and MLC leaf positions. Methods: The ScandiDos Discover was mounted on the heads of an Elekta VersaHD and a Varian 23EX. Beam attenuation measurements were made at 10 cm depth for 6 MV and 18 MV beam energies. The PDD(10) was measured as a metric for the effect on beam quality. Next, a plan consisting of two orthogonal 10 × 10 cm2 fields wasmore » used to adjust the dose per fraction by scaling monitor units to test the absolute dose detection sensitivity of the Discover. A second plan (conformal arc) was then delivered several times independently on the Elekta VersaHD. Artificially introduced MLC position errors in the four central leaves were then added. The errors were incrementally increased from 1 mm to 4 mm and back across seven control points. Results: The absolute dose measured at 10 cm depth decreased by 1.2% and 0.7% for 6 MV and 18 MV beam with the Discover, respectively. Attenuation depended slightly on the field size but only changed the attenuation by 0.1% across 5 × 5 cm{sup 2} and 20 − 20 cm{sup 2} fields. The change in PDD(10) for a 10 − 10 cm{sup 2} field was +0.1% and +0.6% for 6 MV and 18 MV, respectively. Changes in monitor units from −5.0% to 5.0% were faithfully detected. Detected leaf errors were within 1.0 mm of intended errors. Conclusion: A novel in-vivo dosimeter monitoring the radiation beam during treatment was examined through its attenuation and beam hardening characteristics. The device tracked with changes in absolute dose as well as introduced leaf position deviations.« less
Investigation of scale effects in the TRF determined by VLBI
NASA Astrophysics Data System (ADS)
Wahl, Daniel; Heinkelmann, Robert; Schuh, Harald
2017-04-01
The improvement of the International Terrestrial Reference Frame (ITRF) is of great significance for Earth sciences and one of the major tasks in geodesy. The translation, rotation and the scale-factor, as well as their linear rates, are solved in a 14-parameter transformation between individual frames of each space geodetic technique and the combined frame. In ITRF2008, as well as in the current release ITRF2014, the scale-factor is provided by Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) in equal shares. Since VLBI measures extremely precise group delays that are transformed to baseline lengths by the velocity of light, a natural constant, VLBI is the most suitable method for providing the scale. The aim of the current work is to identify possible shortcomings in the VLBI scale contribution to ITRF2008. For developing recommendations for an enhanced estimation, scale effects in the Terrestrial Reference Frame (TRF) determined with VLBI are considered in detail and compared to ITRF2008. In contrast to station coordinates, where the scale is defined by a geocentric position vector, pointing from the origin of the reference frame to the station, baselines are not related to the origin. They are describing the absolute scale independently from the datum. The more accurate a baseline length, and consequently the scale, is estimated by VLBI, the better the scale contribution to the ITRF. Considering time series of baseline length between different stations, a non-linear periodic signal can clearly be recognized, caused by seasonal effects at observation sites. Modeling these seasonal effects and subtracting them from the original data enhances the repeatability of single baselines significantly. Other effects influencing the scale strongly, are jumps in the time series of baseline length, mainly evoked by major earthquakes. Co- and post-seismic effects can be identified in the data, having a non-linear character likewise. Modeling the non-linear motion or completely excluding affected stations is another important step for an improved scale determination. In addition to the investigation of single baseline repeatabilities also the spatial transformation, which is performed for determining parameters of the ITRF2008, are considered. Since the reliability of the resulting transformation parameters is higher the more identical points are used in the transformation, an approach where all possible stations are used as control points is comprehensible. Experiments that examine the scale-factor and its spatial behavior between control points in ITRF2008 and coordinates determined by VLBI only showed that the network geometry has a large influence on the outcome as well. Introducing an unequally distributed network for the datum configuration, the correlations between translation parameters and the scale-factor can become remarkably high. Only a homogeneous spatial distribution of participating stations yields a maximally uncorrelated scale-factor that can be interpreted independent from other parameters. In the current release of the ITRF, the ITRF2014, for the first time, non-linear effects in the time series of station coordinates are taken into account. The present work shows the importance and the right direction of the modification of the ITRF calculation. But also further improvements were found which lead to an enhanced scale determination.
NASA Astrophysics Data System (ADS)
Svitlov, S. M.
2010-06-01
A recent paper (Baumann et al 2009 Metrologia 46 178-86) presents a method to evaluate the free-fall acceleration at a desired point in space, as required for the watt balance experiment. The claimed uncertainty of their absolute gravity measurements is supported by two bilateral comparisons using two absolute gravimeters of the same type. This comment discusses the case where absolute gravity measurements are traceable to a key comparison reference value. Such an approach produces a more complete uncertainty budget and reduces the risk of the results of different watt balance experiments not being compatible.
NASA Astrophysics Data System (ADS)
Skaloud, J.; Rehak, M.; Lichti, D.
2014-03-01
This study highlights the benefit of precise aerial position control in the context of mapping using frame-based imagery taken by small UAVs. We execute several flights with a custom Micro Aerial Vehicle (MAV) octocopter over a small calibration field equipped with 90 signalized targets and 25 ground control points. The octocopter carries a consumer grade RGB camera, modified to insure precise GPS time stamping of each exposure, as well as a multi-frequency/constellation GNSS receiver. The GNSS antenna and camera are rigidly mounted together on a one-axis gimbal that allows control of the obliquity of the captured imagery. The presented experiments focus on including absolute and relative aerial control. We confirm practically that both approaches are very effective: the absolute control allows omission of ground control points while the relative requires only a minimum number of control points. Indeed, the latter method represents an attractive alternative in the context of MAVs for two reasons. First, the procedure is somewhat simplified (e.g. the lever-arm between the camera perspective and antenna phase centers does not need to be determined) and, second, its principle allows employing a single-frequency antenna and carrier-phase GNSS receiver. This reduces the cost of the system as well as the payload, which in turn increases the flying time.
Kwon, Sae Kwang; Kang, Yeon Gwi; Kim, Sung Ju; Chang, Chong Bum; Seong, Sang Cheol; Kim, Tae Kyun
2010-10-01
Patient satisfaction is becoming increasingly important as a crucial outcome measure for total knee arthroplasty. We aimed to determine how well commonly used clinical outcome scales correlate with patient satisfaction after total knee arthroplasty. In particular, we sought to determine whether patient satisfaction correlates better with absolute postoperative scores or preoperative to 12-month postoperative changes. Patient satisfaction was evaluated using 4 grades (enthusiastic, satisfied, noncommittal, and disappointed) for 438 replaced knees that were followed for longer than 1 year. Outcomes scales used the American Knee Society, Western Ontario McMaster University Osteoarthritis Index scales, and Short Form-36 scores. Correlation analyses were performed to investigate the relation between patient satisfaction and the 2 different aspects of the outcome scales: postoperative scores evaluated at latest follow-ups and preoperative to postoperative changes. The Western Ontario McMaster University Osteoarthritis Index scales function score was most strongly correlated with satisfaction (correlation coefficient=0.45). Absolute postoperative scores were better correlated with satisfaction than the preoperative to postoperative changes for all scales. Level IV (retrospective case series). Copyright © 2010 Elsevier Inc. All rights reserved.
Pande, Aakanksha H; Ross-Degnan, Dennis; Zaslavsky, Alan M; Salomon, Joshua A
2011-07-01
The 2010 Patient Protection and Affordable Care Act (PPACA) has been controversial. The potential impact of national healthcare reform may be considered using a similar set of state-level reforms including exchanges and a mandate, enacted in 2006 in Massachusetts. To evaluate the effects of reforms on healthcare access, affordability, and disparities. Interrupted time series with comparison series. Longitudinal survey data from 2002 to 2009 from the Behavioral Risk Factor Surveillance System including 178,040 nonelderly adults residing in Massachusetts, Vermont, New Hampshire, Rhode Island, and Connecticut. Analysis was conducted from January to August 2010. Massachusetts 2006 healthcare reform, which included an individual health insurance mandate. Being uninsured, having no personal doctor, and forgoing care because of cost, evaluated in Massachusetts and four comparison states before (2002-2005) and after (2007-2009) the healthcare reform. Effects on disparities defined by race, education, income, and employment also were assessed. Living in Massachusetts in 2009 was associated with a 7.6 percentage point (95% CI=3.9, 11.3) higher probability of being insured; 4.8 percentage point (-0.9, 10.6) lower probability of forgoing care because of cost; and a 6.6 percentage point (1.9, 11.3) higher probability of having a personal doctor, compared to expected levels in the absence of reform, defined by trends in control states and adjusting for socioeconomic factors. The effects of the reform on insurance coverage attenuated from 2008 to 2009. In a socioeconomically disadvantaged group, the reforms had a greater effect in improving outcomes on the absolute but not relative scale. Healthcare reforms in Massachusetts, which included a health insurance mandate, were associated with significant increases in insurance coverage and access. The absolute effects of the reform were greater for disadvantaged populations. This is important evidence to consider as debate over national healthcare reform continues. Copyright © 2011 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Mikkonen, Hannah G; Clarke, Bradley O; Dasika, Raghava; Wallis, Christian J; Reichman, Suzie M
2017-02-15
Understanding ambient background concentrations in soil, at a local scale, is an essential part of environmental risk assessment. Where high resolution geochemical soil surveys have not been undertaken, soil data from alternative sources, such as environmental site assessment reports, can be used to support an understanding of ambient background conditions. Concentrations of metals/metalloids (As, Mn, Ni, Pb and Zn) were extracted from open-source environmental site assessment reports, for soils derived from the Newer Volcanics basalt, of Melbourne, Victoria, Australia. A manual screening method was applied to remove samples that were indicated to be contaminated by point sources and hence not representative of ambient background conditions. The manual screening approach was validated by comparison to data from a targeted background soil survey. Statistical methods for exclusion of contaminated samples from background soil datasets were compared to the manual screening method. The statistical methods tested included the Median plus Two Median Absolute Deviations, the upper whisker of a normal and log transformed Tukey boxplot, the point of inflection on a cumulative frequency plot and the 95th percentile. We have demonstrated that where anomalous sample results cannot be screened using site information, the Median plus Two Median Absolute Deviations is a conservative method for derivation of ambient background upper concentration limits (i.e. expected maximums). The upper whisker of a boxplot and the point of inflection on a cumulative frequency plot, were also considered adequate methods for deriving ambient background upper concentration limits, where the percentage of contaminated samples is <25%. Median ambient background concentrations of metals/metalloids in the Newer Volcanic soils of Melbourne were comparable to ambient background concentrations in Europe and the United States, except for Ni, which was naturally enriched in the basalt-derived soils of Melbourne. Copyright © 2016 Elsevier B.V. All rights reserved.
Eye height scaling of absolute size in immersive and nonimmersive displays
NASA Technical Reports Server (NTRS)
Dixon, M. W.; Wraga, M.; Proffitt, D. R.; Williams, G. C.; Kaiser, M. K. (Principal Investigator)
2000-01-01
Eye-height (EH) scaling of absolute height was investigated in three experiments. In Experiment 1, standing observers viewed cubes in an immersive virtual environment. Observers' center of projection was placed at actual EH and at 0.7 times actual EH. Observers' size judgments revealed that the EH manipulation was 76.8% effective. In Experiment 2, seated observers viewed the same cubes on an interactive desktop display; however, no effect of EH was found in response to the simulated EH manipulation. Experiment 3 tested standing observers in the immersive environment with the field of view reduced to match that of the desktop. Comparable to Experiment 1, the effect of EH was 77%. These results suggest that EH scaling is not generally used when people view an interactive desktop display because the altitude of the center of projection is indeterminate. EH scaling is spontaneously evoked, however, in immersive environments.
Laser guide star pointing camera for ESO LGS Facilities
NASA Astrophysics Data System (ADS)
Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.
2014-08-01
Every observatory using LGS-AO routinely has the experience of the long time needed to bring and acquire the laser guide star in the wavefront sensor field of view. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes are normally sitting on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer in case of multiple LGS systems. In this framework the optimization of the LGS systems absolute pointing accuracy is relevant to boost the time efficiency of both science and technical observations. In this paper we show the rationale, the design and the feasibility tests of a LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS, while the VLT is doing the initial active optics cycles to adjust its own optics on a natural star target, after a preset. The LPC allows minimizing the needed accuracy for LGS pointing model calibrations, while allowing to reach sub-arcsec LGS absolute pointing accuracy. This considerably reduces the LGS acquisition time and observations operation overheads. The LPC is a smart CCD camera, fed by a 150mm diameter aperture of a Maksutov telescope, mounted on the top ring of the VLT UT4, running Linux and acting as server for the client 4LGSF. The smart camera is able to recognize within few seconds the sky field using astrometric software, determining the stars and the LGS absolute positions. Upon request it returns the offsets to give to the LGS, to position them at the required sky coordinates. As byproduct goal, once calibrated the LPC can calculate upon request for each LGS, its return flux, its fwhm and the uplink beam scattering levels.
Budoff, Matthew J; Nasir, Khurram; McClelland, Robyn L; Detrano, Robert; Wong, Nathan; Blumenthal, Roger S; Kondos, George; Kronmal, Richard A
2009-01-27
In this study, we aimed to establish whether age-sex-specific percentiles of coronary artery calcium (CAC) predict cardiovascular outcomes better than the actual (absolute) CAC score. The presence and extent of CAC correlates with the overall magnitude of coronary atherosclerotic plaque burden and with the development of subsequent coronary events. MESA (Multi-Ethnic Study of Atherosclerosis) is a prospective cohort study of 6,814 asymptomatic participants followed for coronary heart disease (CHD) events including myocardial infarction, angina, resuscitated cardiac arrest, or CHD death. Time to incident CHD was modeled with Cox regression, and we compared models with percentiles based on age, sex, and/or race/ethnicity to categories commonly used (0, 1 to 100, 101 to 400, 400+ Agatston units). There were 163 (2.4%) incident CHD events (median follow-up 3.75 years). Expressing CAC in terms of age- and sex-specific percentiles had significantly lower area under the receiver-operating characteristic curve (AUC) than when using absolute scores (women: AUC 0.73 versus 0.76, p = 0.044; men: AUC 0.73 versus 0.77, p < 0.001). Akaike's information criterion indicated better model fit with the overall score. Both methods robustly predicted events (>90th percentile associated with a hazard ratio [HR] of 16.4, 95% confidence interval [CI]: 9.30 to 28.9, and score >400 associated with HR of 20.6, 95% CI: 11.8 to 36.0). Within groups based on age-, sex-, and race/ethnicity-specific percentiles there remains a clear trend of increasing risk across levels of the absolute CAC groups. In contrast, once absolute CAC category is fixed, there is no increasing trend across levels of age-, sex-, and race/ethnicity-specific categories. Patients with low absolute scores are low-risk, regardless of age-, sex-, and race/ethnicity-specific percentile rank. Persons with an absolute CAC score of >400 are high risk, regardless of percentile rank. Using absolute CAC in standard groups performed better than age-, sex-, and race/ethnicity-specific percentiles in terms of model fit and discrimination. We recommend using cut points based on the absolute CAC amount, and the common CAC cut points of 100 and 400 seem to perform well.
Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping
2016-01-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neuro biology research and has demonstrated the ability to quantify cerebral blood flow velocity at a high accuracy. However, the measurement of total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach could achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement. PMID:26977365
Qi, Li; Zhu, Jiang; Hancock, Aneeka M; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D; Chen, Zhongping
2016-02-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neuro biology research and has demonstrated the ability to quantify cerebral blood flow velocity at a high accuracy. However, the measurement of total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach could achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement.
40 CFR 1065.659 - Removed water correction.
Code of Federal Regulations, 2011 CFR
2011-07-01
... know that saturated water vapor conditions exist. Use good engineering judgment to measure the... absolute pressure based on an alarm set point, a pressure regulator set point, or good engineering judgment... from raw exhaust, you may determine the amount of water based on intake-air humidity, plus a chemical...
40 CFR 1065.659 - Removed water correction.
Code of Federal Regulations, 2010 CFR
2010-07-01
... know that saturated water vapor conditions exist. Use good engineering judgment to measure the... absolute pressure based on an alarm set point, a pressure regulator set point, or good engineering judgment... from raw exhaust, you may determine the amount of water based on intake-air humidity, plus a chemical...
ERIC Educational Resources Information Center
Gerlach, Karrie; Trate, Jaclyn; Blecking, Anja; Geissinger, Peter; Murphy, Kristen
2014-01-01
Scale as a theme in science instruction is not a new idea. As early as the mid-1980s, scale was identified as an important component of a student's overall science literacy. However, the study of scale and the scale literacy of students in varying levels of education have received less attention than other science-literacy components. Foremost…
Starting Processes of High Contraction Ratio Scramjet Inlets
2012-01-01
shortly before injection, at which point the boxes were switched to relative mode via the “ Taka Taka ” box, shown on Fig. 22. This absolute mode...camera used for the Schlieren visualisation, as well as the trigger for the 32 channel data acquisition system used. Figure 22: Taka ... taka box, used to manipulate the resistance mode during testing Figure 23: Typical raw thin film array signal, showing both absolute and relative
Computation of fluid flow and pore-space properties estimation on micro-CT images of rock samples
NASA Astrophysics Data System (ADS)
Starnoni, M.; Pokrajac, D.; Neilson, J. E.
2017-09-01
Accurate determination of the petrophysical properties of rocks, namely REV, mean pore and grain size and absolute permeability, is essential for a broad range of engineering applications. Here, the petrophysical properties of rocks are calculated using an integrated approach comprising image processing, statistical correlation and numerical simulations. The Stokes equations of creeping flow for incompressible fluids are solved using the Finite-Volume SIMPLE algorithm. Simulations are then carried out on three-dimensional digital images obtained from micro-CT scanning of two rock formations: one sandstone and one carbonate. Permeability is predicted from the computed flow field using Darcy's law. It is shown that REV, REA and mean pore and grain size are effectively estimated using the two-point spatial correlation function. Homogeneity and anisotropy are also evaluated using the same statistical tools. A comparison of different absolute permeability estimates is also presented, revealing a good agreement between the numerical value and the experimentally determined one for the carbonate sample, but a large discrepancy for the sandstone. Finally, a new convergence criterion for the SIMPLE algorithm, and more generally for the family of pressure-correction methods, is presented. This criterion is based on satisfaction of bulk momentum balance, which makes it particularly useful for pore-scale modelling of reservoir rocks.
NASA Astrophysics Data System (ADS)
Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping
2017-10-01
A novel method for daily temperature and precipitation downscaling is proposed in this study which combines the Ensemble Optimal Interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day to day seasonal cycle of high resolution temperature of the NCEP climate forecast system reanalysis (CFSR) is used as background state. An enlarged ensemble of daily temperature anomaly relative to this seasonal cycle and information from global climate models (GCMs) are used to construct a gain matrix for each calendar day. Consequently, the relationship between large and local-scale processes represented by the gain matrix will change accordingly. The gain matrix contains information of realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. Therefore, this downscaling method keeps spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of the non-Gaussianity issue, a logarithmic transformation is used to daily total precipitation prior to conducting downscaling. Cross validation and independent data validation are used to evaluate this algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high resolution details without changing large scale characteristics. It results in much lower absolute errors in local scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from GCMs are corrected with a linear method for temperatures and distribution mapping for precipitation. The downscaled ensemble projects significant warming with amplitudes of 3.9 and 6.5 °C for 2050s and 2080s relative to 1990s in Ontario, respectively; Cooling degree days and hot days will significantly increase over southern Ontario and heating degree days and cold days will significantly decrease in northern Ontario. Annual total precipitation will increase over Ontario and heavy precipitation events will increase as well. These results are consistent with conclusions in many other studies in the literature.
Large-Scale Simulation of Multi-Asset Ising Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2017-03-01
We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering and exhibits unstable periods indicated by the volatility index measured as the average of absolute-returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes at high volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change at high volatility periods.
Absolute Scale Quantitative Off-Axis Electron Holography at Atomic Resolution
NASA Astrophysics Data System (ADS)
Winkler, Florian; Barthel, Juri; Tavabi, Amir H.; Borghardt, Sven; Kardynal, Beata E.; Dunin-Borkowski, Rafal E.
2018-04-01
An absolute scale match between experiment and simulation in atomic-resolution off-axis electron holography is demonstrated, with unknown experimental parameters determined directly from the recorded electron wave function using an automated numerical algorithm. We show that the local thickness and tilt of a pristine thin WSe2 flake can be measured uniquely, whereas some electron optical aberrations cannot be determined unambiguously for a periodic object. The ability to determine local specimen and imaging parameters directly from electron wave functions is of great importance for quantitative studies of electrostatic potentials in nanoscale materials, in particular when performing in situ experiments and considering that aberrations change over time.
Soon, Aun Woon; Toney, Amanda Greene; Stidham, Timothy; Kendall, John; Roosevelt, Genie
2018-04-24
To assess whether Web-based teaching is at least as effective as traditional classroom didactic in improving the proficiency of pediatric novice learners in the image acquisition and interpretation of pneumothorax and pleural effusion using point-of-care ultrasound (POCUS). We conducted a randomized controlled noninferiority study comparing the effectiveness of Web-based teaching to traditional classroom didactic. The participants were randomized to either group A (live classroom lecture) or group B (Web-based lecture) and completed a survey and knowledge test. They also received hands-on training and completed an objective structured clinical examination. The participants were invited to return 2 months later to test for retention of knowledge and skills. There were no significant differences in the mean written test scores between the classroom group and Web group for the precourse test (absolute difference, -2.5; 95% confidence interval [CI], -12 to 6.9), postcourse test (absolute difference, 2.0; 95% CI, -1.4, 5.3), and postcourse 2-month retention test (absolute difference, -0.8; 95% CI, -9.6 to 8.1). Similarly, no significant differences were noted in the mean objective structured clinical examination scores for both intervention groups in postcourse (absolute difference, 1.9; 95% CI, -4.7 to 8.5) and 2-month retention (absolute difference, -0.6; 95% CI, -10.7 to 9.5). Web-based teaching is at least as effective as traditional classroom didactic in improving the proficiency of novice learners in POCUS. The usage of Web-based tutorials allows a more efficient use of time and a wider dissemination of knowledge.
Muir, Keith W; Ford, Gary A; Messow, Claudia-Martina; Ford, Ian; Murray, Alicia; Clifton, Andrew; Brown, Martin M; Madigan, Jeremy; Lenthall, Rob; Robertson, Fergus; Dixit, Anand; Cloud, Geoffrey C; Wardlaw, Joanna; Freeman, Janet; White, Philip
2017-01-01
Objective The Pragmatic Ischaemic Thrombectomy Evaluation (PISTE) trial was a multicentre, randomised, controlled clinical trial comparing intravenous thrombolysis (IVT) alone with IVT and adjunctive intra-arterial mechanical thrombectomy (MT) in patients who had acute ischaemic stroke with large artery occlusive anterior circulation stroke confirmed on CT angiography (CTA). Design Eligible patients had IVT started within 4.5 hours of stroke symptom onset. Those randomised to additional MT underwent thrombectomy using any Conformité Européene (CE)-marked device, with target interval times for IVT start to arterial puncture of <90 min. The primary outcome was the proportion of patients achieving independence defined by a modified Rankin Scale (mRS) score of 0–2 at day 90. Results Ten UK centres enrolled 65 patients between April 2013 and April 2015. Median National Institutes of Health Stroke Scale score was 16 (IQR 13–21). Median stroke onset to IVT start was 120 min. In the intention-to-treat analysis, there was no significant difference in disability-free survival at day 90 with MT (absolute difference 11%, adjusted OR 2.12, 95% CI 0.65 to 6.94, p=0.20). Secondary analyses showed significantly greater likelihood of full neurological recovery (mRS 0–1) at day 90 (OR 7.6, 95% CI 1.6 to 37.2, p=0.010). In the per-protocol population (n=58), the primary and most secondary clinical outcomes significantly favoured MT (absolute difference in mRS 0–2 of 22% and adjusted OR 4.9, 95% CI 1.2 to 19.7, p=0.021). Conclusions The trial did not find a significant difference between treatment groups for the primary end point. However, the effect size was consistent with published data and across primary and secondary end points. Proceeding as fast as possible to MT after CTA confirmation of large artery occlusion on a background of intravenous alteplase is safe, improves excellent clinical outcomes and, in the per-protocol population, improves disability-free survival. Trial registration number NCT01745692; Results. PMID:27756804
Performance of Different Light Sources for the Absolute Calibration of Radiation Thermometers
NASA Astrophysics Data System (ADS)
Martín, M. J.; Mantilla, J. M.; del Campo, D.; Hernanz, M. L.; Pons, A.; Campos, J.
2017-09-01
The evolving mise en pratique for the definition of the kelvin (MeP-K) [1, 2] will, in its forthcoming edition, encourage the realization and dissemination of the thermodynamic temperature either directly (primary thermometry) or indirectly (relative primary thermometry) via fixed points with assigned reference thermodynamic temperatures. In the last years, the Centro Español de Metrología (CEM), in collaboration with the Instituto de Óptica of Consejo Superior de Investigaciones Científicas (IO-CSIC), has developed several setups for absolute calibration of standard radiation thermometers using the radiance method to allow CEM the direct dissemination of the thermodynamic temperature and the assignment of the thermodynamic temperatures to several fixed points. Different calibration facilities based on a monochromator and/or a laser and an integrating sphere have been developed to calibrate CEM's standard radiation thermometers (KE-LP2 and KE-LP4) and filter radiometer (FIRA2). This system is based on the one described in [3] placed in IO-CSIC. Different light sources have been tried and tested for measuring absolute spectral radiance responsivity: a Xe-Hg 500 W lamp, a supercontinuum laser NKT SuperK-EXR20 and a diode laser emitting at 6473 nm with a typical maximum power of 120 mW. Their advantages and disadvantages have been studied such as sensitivity to interferences generated by the laser inside the filter, flux stability generated by the radiant sources and so forth. This paper describes the setups used, the uncertainty budgets and the results obtained for the absolute temperatures of Cu, Co-C, Pt-C and Re-C fixed points, measured with the three thermometers with central wavelengths around 650 nm.
A Meta-Analysis of Growth Trends from Vertically Scaled Assessments
ERIC Educational Resources Information Center
Dadey, Nathan; Briggs, Derek C.
2012-01-01
A vertical scale, in principle, provides a common metric across tests with differing difficulties (e.g., spanning multiple grades) so that statements of "absolute" growth can be made. This paper compares 16 states' 2007-2008 effect size growth trends on vertically scaled reading and math assessments across grades 3 to 8. Two patterns…
Hyperresonance Unifying Theory and the resulting Law
NASA Astrophysics Data System (ADS)
Omerbashich, Mensur
2012-07-01
Hyperresonance Unifying Theory (HUT) is herein conceived based on theoretical and experimental geophysics, as that absolute extension of both Multiverse and String Theories, in which all universes (the Hyperverse) - of non-prescribed energies and scales - mutually orbit as well as oscillate in tune. The motivation for this is to explain oddities of "attraction at a distance" and physical unit(s) attached to the Newtonian gravitational constant G. In order to make sure HUT holds absolutely, we operate over non-temporal, unitless and quantities with derived units only. A HUT's harmonic geophysical localization (here for the Earth-Moon system; the Georesonator) is indeed achieved for mechanist and quantum scales, in form of the Moon's Equation of Levitation (of Anti-gravity). HUT holds true for our Solar system the same as its localized equation holds down to the precision of terrestrial G-experiments, regardless of the scale: to 10^-11 and 10^-39 for mechanist and quantum scales, respectively. Due to its absolute accuracy (within NIST experimental limits), the derived equation is regarded a law. HUT can indeed be demonstrated for our entire Solar system in various albeit empirical ways. In summary, HUT shows: (i) how classical gravity can be expressed in terms of scale and the speed of light; (ii) the tuning-forks principle is universal; (iii) the body's fundamental oscillation note is not a random number as previously believed; (iv) earthquakes of about M6 and stronger arise mainly due to Earth's alignments longer than three days to two celestial objects in our Solar system, whereas M7+ earthquakes occur mostly during two simultaneous such alignments; etc. HUT indicates: (v) quantum physics is objectocentric, i.e. trivial in absolute terms so it cannot be generalized beyond classical mass-bodies; (vi) geophysics is largely due to the magnification of mass resonance; etc. HUT can be extended to multiverse (10^17) and string scales (10^-67) too, providing a constraint to String Theory. HUT is the unifying theory as it demotes classical forces to states of stringdom. The String Theory's paradigm on vibrational rather than particlegenic reality has thus been confirmed.
Application of psychometric theory to the measurement of voice quality using rating scales.
Shrivastav, Rahul; Sapienza, Christine M; Nandur, Vuday
2005-04-01
Rating scales are commonly used to study voice quality. However, recent research has demonstrated that perceptual measures of voice quality obtained using rating scales suffer from poor interjudge agreement and reliability, especially in the mid-range of the scale. These findings, along with those obtained using multidimensional scaling (MDS), have been interpreted to show that listeners perceive voice quality in an idiosyncratic manner. Based on psychometric theory, the present research explored an alternative explanation for the poor interlistener agreement observed in previous research. This approach suggests that poor agreement between listeners may result, in part, from measurement errors related to a variety of factors rather than true differences in the perception of voice quality. In this study, 10 listeners rated breathiness for 27 vowel stimuli using a 5-point rating scale. Each stimulus was presented to the listeners 10 times in random order. Interlistener agreement and reliability were calculated from these ratings. Agreement and reliability were observed to improve when multiple ratings of each stimulus from each listener were averaged and when standardized scores were used instead of absolute ratings. The probability of exact agreement was found to be approximately .9 when using averaged ratings and standardized scores. In contrast, the probability of exact agreement was only .4 when a single rating from each listener was used to measure agreement. These findings support the hypothesis that poor agreement reported in past research partly arises from errors in measurement rather than individual differences in the perception of voice quality.
Progress in Noise Thermometry at 505 K and 693 K Using Quantized Voltage Noise Ratio Spectra
NASA Astrophysics Data System (ADS)
Tew, W. L.; Benz, S. P.; Dresselhaus, P. D.; Coakley, K. J.; Rogalla, H.; White, D. R.; Labenski, J. R.
2010-09-01
Technical advances and new results in noise thermometry at temperatures near the tin freezing point and the zinc freezing point using a quantized voltage noise source (QVNS) are reported. The temperatures are derived by comparing the power spectral density of QVNS synthesized noise with that of Johnson noise from a known resistance at both 505 K and 693 K. Reference noise is digitally synthesized so that the average power spectra of the QVNS match those of the thermal noise, resulting in ratios of power spectra close to unity in the low-frequency limit. Three-parameter models are used to account for differences in impedance-related time constants in the spectra. Direct comparison of noise temperatures to the International Temperature Scale of 1990 (ITS-90) is achieved in a comparison furnace with standard platinum resistance thermometers. The observed noise temperatures determined by operating the noise thermometer in both absolute and relative modes, and related statistics together with estimated uncertainties are reported. The relative noise thermometry results are combined with results from other thermodynamic determinations at temperatures near the tin freezing point to calculate a value of T - T 90 = +4(18) mK for temperatures near the zinc freezing point. These latest results achieve a lower uncertainty than that of our earlier efforts. The present value of T - T 90 is compared to other published determinations from noise thermometry and other methods.
Barry, Mamadou S; Auger, Nathalie; Burrows, Stephanie
2012-06-01
To determine the age and cause groups contributing to absolute and relative socio-economic inequalities in paediatric mortality, hospitalisation and tumour incidence over time. Deaths (n= 9559), hospitalisations (n= 834,932) and incident tumours (n= 4555) were obtained for five age groupings (<1, 1-4, 5-9, 10-14, 15-19 years) and four periods (1990-1993, 1994-1997, 1998-2001, 2002-2005) for Québec, Canada. Age- and cause-specific morbidity and mortality rates for males and females were calculated across socio-economic status decile based on a composite deprivation score for 89 urban communities. Absolute and relative measures of inequality were computed for each age and cause. Mortality and morbidity rates tended to decrease over time, as did absolute and relative socio-economic inequalities for most (but not all) causes and age groups, although precision was low. Socio-economic inequalities persisted in the last period and were greater on the absolute scale for mortality and hospitalisation in early childhood, and on the relative scale for mortality in adolescents. Four causes (respiratory, digestive, infectious, genito-urinary diseases) contributed to the majority of absolute inequality in hospitalisation (males 85%, females 98%). Inequalities were not pronounced for cause-specific mortality and not apparent for tumour incidence. Socio-economic inequalities in Québec tended to narrow for most but not all outcomes. Absolute socio-economic inequalities persisted for children <10 years, and several causes were responsible for the majority of inequality in hospitalisation. Public health policies and prevention programs aiming to reduce socio-economic inequalities in paediatric health should account for trends that differ across age and cause of disease. © 2011 The Authors. Journal of Paediatrics and Child Health © 2011 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
The Absolute Magnitude of the Sun in Several Filters
NASA Astrophysics Data System (ADS)
Willmer, Christopher N. A.
2018-06-01
This paper presents a table with estimates of the absolute magnitude of the Sun and the conversions from vegamag to the AB and ST systems for several wide-band filters used in ground-based and space-based observatories. These estimates use the dustless spectral energy distribution (SED) of Vega, calibrated absolutely using the SED of Sirius, to set the vegamag zero-points and a composite spectrum of the Sun that coadds space-based observations from the ultraviolet to the near-infrared with models of the Solar atmosphere. The uncertainty of the absolute magnitudes is estimated by comparing the synthetic colors with photometric measurements of solar analogs and is found to be ∼0.02 mag. Combined with the uncertainty of ∼2% in the calibration of the Vega SED, the errors of these absolute magnitudes are ∼3%–4%. Using these SEDs, for three of the most utilized filters in extragalactic work the estimated absolute magnitudes of the Sun are M B = 5.44, M V = 4.81, and M K = 3.27 mag in the vegamag system and M B = 5.31, M V = 4.80, and M K = 5.08 mag in AB.
Kim, Sujin; Kwon, Soonman; Subramanian, S V
2015-11-01
In 1999, the Korean government introduced the National Cancer Screening Program (NCSP) to increase the cancer-screening rate, particularly among the low-income population. This study investigates how the NCSP has decreased both relative and absolute income inequalities in the uptake of cancer screening in South Korea. A nationally representative cross-sectional repeated data from the Korea National Health and Nutrition Examination Survey 1998-2012, managed by the Ministry of Health and Welfare, was used to assess changes over time and the extent of discontinuity at the NCSP-recommended initiation age in the uptake of screening for breast, colorectal, and gastric cancers across income quartiles. Relative inequalities in the uptake of screening for all cancers decreased significantly over the policy period. Absolute inequalities did not change for most cancers, but marginally increased from 9 to 14% points in the uptake of screening for colorectal cancer among men. At the recommended initiation age, absolute inequalities did not change for breast and colorectal cancers but increased from 5 to 16% points for gastric cancer, for which relative inequality significantly decreased. The NCSP, which reduced out-of-pocket payment, may not decrease absolute gap although it leads to overall increases in the uptake of cancer screening and decreases in relative inequalities. Further investigations are needed to understand barriers that prevent the low-income population from attending cancer screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antušek, A., E-mail: andrej.antusek@stuba.sk; Holka, F., E-mail: filip.holka@stuba.sk
2015-08-21
We present coupled cluster calculations of NMR shielding constants of aluminum, gallium, and indium in water-ion clusters. In addition, relativistic and dynamical corrections and the influence of the second solvation shell are evaluated. The final NMR shielding constants define new absolute shielding scales, 600.0 ± 4.1 ppm, 2044.4 ± 31.4 ppm, and 4507.7 ± 63.7 ppm for aluminum, gallium, and indium, respectively. The nuclear magnetic dipole moments for {sup 27}Al, {sup 69}Ga, {sup 71}Ga, {sup 113}In, and {sup 115}In isotopes are corrected by combining the computed shielding constants with experimental NMR frequencies. The absolute magnitude of the correction increases alongmore » the series and for indium isotopes it reaches approximately −8.0 × 10{sup −3} of the nuclear magneton.« less
Low frequency ac waveform generator
Bilharz, O.W.
1983-11-22
Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stablization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangular waveform, raising this absolute value to a predetermined power, multiplying the raised absolute value of the triangle wave with the triangle wave itself and properly scaling the resultant waveform and subtracting it from the triangular waveform to a predetermined power and adding the squared waveform raised to the predetermined power with a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.
System and method for calibrating a rotary absolute position sensor
NASA Technical Reports Server (NTRS)
Davis, Donald R. (Inventor); Permenter, Frank Noble (Inventor); Radford, Nicolaus A (Inventor)
2012-01-01
A system includes a rotary device, a rotary absolute position (RAP) sensor generating encoded pairs of voltage signals describing positional data of the rotary device, a host machine, and an algorithm. The algorithm calculates calibration parameters usable to determine an absolute position of the rotary device using the encoded pairs, and is adapted for linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters. A method of calibrating the RAP sensor includes measuring the rotary position as encoded pairs of voltage signals, linearly-mapping an ellipse defined by the encoded pairs to thereby calculate the calibration parameters, and calculating an absolute position of the rotary device using the calibration parameters. The calibration parameters include a positive definite matrix (A) and a center point (q) of the ellipse. The voltage signals may include an encoded sine and cosine of a rotary angle of the rotary device.
Residual Stress Analysis Based on Acoustic and Optical Methods.
Yoshida, Sanichiro; Sasaki, Tomohiro; Usui, Masaru; Sakamoto, Shuichi; Gurney, David; Park, Ik-Keun
2016-02-16
Co-application of acoustoelasticity and optical interferometry to residual stress analysis is discussed. The underlying idea is to combine the advantages of both methods. Acoustoelasticity is capable of evaluating a residual stress absolutely but it is a single point measurement. Optical interferometry is able to measure deformation yielding two-dimensional, full-field data, but it is not suitable for absolute evaluation of residual stresses. By theoretically relating the deformation data to residual stresses, and calibrating it with absolute residual stress evaluated at a reference point, it is possible to measure residual stresses quantitatively, nondestructively and two-dimensionally. The feasibility of the idea has been tested with a butt-jointed dissimilar plate specimen. A steel plate 18.5 mm wide, 50 mm long and 3.37 mm thick is braze-jointed to a cemented carbide plate of the same dimension along the 18.5 mm-side. Acoustoelasticity evaluates the elastic modulus at reference points via acoustic velocity measurement. A tensile load is applied to the specimen at a constant pulling rate in a stress range substantially lower than the yield stress. Optical interferometry measures the resulting acceleration field. Based on the theory of harmonic oscillation, the acceleration field is correlated to compressive and tensile residual stresses qualitatively. The acoustic and optical results show reasonable agreement in the compressive and tensile residual stresses, indicating the feasibility of the idea.
Windt, Carel W; Blümler, Peter
2015-04-01
Nuclear magnetic resonance (NMR) and NMR imaging (magnetic resonance imaging) offer the possibility to quantitatively and non-invasively measure the presence and movement of water. Unfortunately, traditional NMR hardware is expensive, poorly suited for plants, and because of its bulk and complexity, not suitable for use in the field. But does it need to be? We here explore how novel, small-scale portable NMR devices can be used as a flow sensor to directly measure xylem sap flow in a poplar tree (Populus nigra L.), or in a dendrometer-like fashion to measure dynamic changes in the absolute water content of fruit or stems. For the latter purpose we monitored the diurnal pattern of growth, expansion and shrinkage in a model fruit (bean pod, Phaseolus vulgaris L.) and in the stem of an oak tree (Quercus robur L.). We compared changes in absolute stem water content, as measured by the NMR sensor, against stem diameter variations as measured by a set of conventional point dendrometers, to test how well the sensitivities of the two methods compare and to investigate how well diurnal changes in trunk absolute water content correlate with the concomitant diurnal variations in stem diameter. Our results confirm the existence of a strong correlation between the two parameters, but also suggest that dynamic changes in oak stem water content could be larger than is apparent on the basis of the stem diameter variation alone. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A Tamarisk Habitat Suitability Map for the Continental US
NASA Technical Reports Server (NTRS)
Morisette, Jeffrey T.; Jernevich, Catherine S.; Ullah, Asad; Cai, Weijie; Pedelty, Jeffrey A.; Gentle, Jim; Stohlgren, Thomas J.; Schnase, John L.
2005-01-01
This paper presents a national-scale map of habitat suitability for a high-priority invasive species, Tamarisk (Tamarisk spp., salt cedar). We successfully integrate satellite data and tens of thousands of field sampling points through logistic regression modeling to create a habitat suitability map that is 90% accurate. This interagency effort uses field data collected and coordinated through the US Geological Survey and nation-wide environmental data layers derived from NASA s MODerate Resolution Imaging Spectroradiometer (MODIS). We demonstrate the utilization of the map by ranking the lower 48 US states (and the District of Columbia) based upon their absolute, as well as proportional, areas of highly likely and moderately likely habitat for Tamarisk. The interagency effort and modeling approach presented here could be applied to map other harmful species in the US and globally.
Hohlraum energetics scaling to 520 TW on the National Ignition Facilitya)
NASA Astrophysics Data System (ADS)
Kline, J. L.; Callahan, D. A.; Glenzer, S. H.; Meezan, N. B.; Moody, J. D.; Hinkel, D. E.; Jones, O. S.; MacKinnon, A. J.; Bennedetti, R.; Berger, R. L.; Bradley, D.; Dewald, E. L.; Bass, I.; Bennett, C.; Bowers, M.; Brunton, G.; Bude, J.; Burkhart, S.; Condor, A.; Di Nicola, J. M.; Di Nicola, P.; Dixit, S. N.; Doeppner, T.; Dzenitis, E. G.; Erbert, G.; Folta, J.; Grim, G.; Glenn, S.; Hamza, A.; Haan, S. W.; Heebner, J.; Henesian, M.; Hermann, M.; Hicks, D. G.; Hsing, W. W.; Izumi, N.; Jancaitis, K.; Jones, O. S.; Kalantar, D.; Khan, S. F.; Kirkwood, R.; Kyrala, G. A.; LaFortune, K.; Landen, O. L.; Lagin, L.; Larson, D.; Pape, S. Le; Ma, T.; MacPhee, A. G.; Michel, P. A.; Miller, P.; Montincelli, M.; Moore, A. S.; Nikroo, A.; Nostrand, M.; Olson, R. E.; Pak, A.; Park, H. S.; Patel, J. P.; Pelz, L.; Ralph, J.; Regan, S. P.; Robey, H. F.; Rosen, M. D.; Ross, J. S.; Schneider, M. B.; Shaw, M.; Smalyuk, V. A.; Strozzi, D. J.; Suratwala, T.; Suter, L. J.; Tommasini, R.; Town, R. P. J.; Van Wonterghem, B.; Wegner, P.; Widmann, K.; Widmayer, C.; Wilkens, H.; Williams, E. A.; Edwards, M. J.; Remington, B. A.; MacGowan, B. J.; Kilkenny, J. D.; Lindl, J. D.; Atherton, L. J.; Batha, S. H.; Moses, E.
2013-05-01
Indirect drive experiments have now been carried out with laser powers and energies up to 520 TW and 1.9 MJ. These experiments show that the energy coupling to the target is nearly constant at 84% ± 3% over a wide range of laser parameters from 350 to 520 TW and 1.2 to 1.9 MJ. Experiments at 520 TW with depleted uranium hohlraums achieve radiation temperatures of ˜330 ± 4 eV, enough to drive capsules 20 μm thicker than the ignition point design to velocities near the ignition goal of 370 km/s. A series of three symcap implosion experiments with nearly identical target, laser, and diagnostics configurations show the symmetry and drive are reproducible at the level of ±8.5% absolute and ±2% relative, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casagrande, L.; Asplund, M.; Ramirez, I.
Solar infrared colors provide powerful constraints on the stellar effective temperature scale, but they must be measured with both accuracy and precision in order to do so. We fulfill this requirement by using line-depth ratios to derive in a model-independent way the infrared colors of the Sun, and we use the latter to test the zero point of the Casagrande et al. effective temperature scale, confirming its accuracy. Solar colors in the widely used Two Micron All Sky Survey (2MASS) JHK{sub s} and WISE W1-4 systems are provided: (V - J){sub Sun} = 1.198, (V - H){sub Sun} = 1.484,more » (V - K{sub s} ){sub Sun} = 1.560, (J - H){sub Sun} = 0.286, (J - K{sub s} ){sub Sun} = 0.362, (H - K{sub s} ){sub Sun} = 0.076, (V - W1){sub Sun} = 1.608, (V - W2){sub Sun} = 1.563, (V - W3){sub Sun} = 1.552, and (V - W4){sub Sun} = 1.604. A cross-check of the effective temperatures derived implementing 2MASS or WISE magnitudes in the infrared flux method confirms that the absolute calibration of the two systems agrees within the errors, possibly suggesting a 1% offset between the two, thus validating extant near- and mid-infrared absolute calibrations. While 2MASS magnitudes are usually well suited to derive T{sub eff}, we find that a number of bright, solar-like stars exhibit anomalous WISE colors. In most cases, this effect is spurious and can be attributed to lower-quality measurements, although for a couple of objects (3% {+-} 2% of the total sample) it might be real, and may hint at the presence of warm/hot debris disks.« less
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Acquaviva, V.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Bassistelli, E. S.; Bond, J. R.; Brown, B.;
2010-01-01
We present a measurement of the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz. The measurement uses maps with 1.4' angular resolution made with data from the Atacama Cosmology Telescope (ACT). The observations cover 228 deg(sup 2) of the southern sky, in a 4 deg. 2-wide strip centered on declination 53 deg. South. The CMB at arc minute angular scales is particularly sensitive to the Silk damping scale, to the Sunyaev-Zel'dovich (SZ) effect from galaxy dusters, and to emission by radio sources and dusty galaxies. After masking the 108 brightest point sources in our maps, we estimate the power spectrum between 600 less than l less than 8000 using the adaptive multi-taper method to minimize spectral leakage and maximize use of the full data set. Our absolute calibration is based on observations of Uranus. To verify the calibration and test the fidelity of our map at large angular scales, we cross-correlate the ACT map to the WMAP map and recover the WMAP power spectrum from 250 less than l less than 1150. The power beyond the Silk damping tail of the CMB (l approximately 5000) is consistent with models of the emission from point sources. We quantify the contribution of SZ clusters to the power spectrum by fitting to a model normalized to sigma 8 = 0.8. We constrain the model's amplitude A(sub sz) less than 1.63 (95% CL). If interpreted as a measurement of as, this implies sigma (sup SZ) (sub 8) less than 0.86 (95% CL) given our SZ model. A fit of ACT and WMAP five-year data jointly to a 6-parameter ACDM model plus point sources and the SZ effect is consistent with these results.
Lunar Cratering Chronology: Calibrating Degree of Freshness of Craters to Absolute Ages
NASA Astrophysics Data System (ADS)
Trang, D.; Gillis-Davis, J.; Boyce, J. M.
2013-12-01
The use of impact craters to age-date surfaces of and/or geomorphological features on planetary bodies is a decades old practice. Various dating techniques use different aspects of impact craters in order to determine ages. One approach is based on the degree of freshness of primary-impact craters. This method examines the degradation state of craters through visual inspection of seven criteria: polygonality, crater ray, continuous ejecta, rim crest sharpness, satellite craters, radial channels, and terraces. These criteria are used to rank craters in order of age from 0.0 (oldest) to 7.0 (youngest). However, the relative decimal scale used in this technique has not been tied to a classification of absolute ages. In this work, we calibrate the degree of freshness to absolute ages through crater counting. We link the degree of freshness to absolute ages through crater counting of fifteen craters with diameters ranging from 5-22 km and degree of freshness from 6.3 to 2.5. We use the Terrain Camera data set on Kaguya to count craters on the continuous ejecta of each crater in our sample suite. Specifically, we divide the crater's ejecta blanket into quarters and count craters between the rim of the main crater out to one crater radii from the rim for two of the four sections. From these crater counts, we are able to estimate the absolute model age of each main crater using the Craterstats2 tool in ArcGIS. Next, we compare the degree of freshness for the crater count-derived age of our main craters to obtain a linear inverse relation that links these two metrics. So far, for craters with degree of freshness from 6.3 to 5.0, the linear regression has an R2 value of 0.7, which corresponds to a relative uncertainty of ×230 million years. At this point, this tool that links degree of freshness to absolute ages cannot be used with craters <8km because this class of crater degrades quicker than larger craters. A graphical solution exists for correcting the degree of freshness for craters <8 km in diameter. We convert this graphical solution to a single function of two independent variables, observed degree of freshness and crater diameter. This function, which results in a corrected degree of freshness is found through a curve-fitting routine and corrects the degree of freshness for craters <8 km in diameter. As a result, we are able to derive absolute ages from the degree of freshness of craters with diameters from about ≤20 km down to a 1 km in diameter with a precision of ×230 million years.
NASA Astrophysics Data System (ADS)
Krajíček, Zdeněk; Bergoglio, Mercede; Jousten, Karl; Otal, Pierre; Sabuga, Wladimir; Saxholm, Sari; Pražák, Dominik; Vičar, Martin
2014-01-01
This report describes a EURAMET comparison of five European National Metrology Institutes in low gauge and absolute pressure in gas (nitrogen), denoted as EURAMET.M.P-K4.2010. Its main intention is to state equivalence of the pressure standards, in particular those based on the technology of force-balanced piston gauges such as e.g. FRS by Furness Controls, UK and FPG8601 by DHI-Fluke, USA. It covers the range from 1 Pa to 15 kPa, both gauge and absolute. The comparison in absolute mode serves as a EURAMET Key Comparison which can be linked to CCM.P-K4 and CCM.P-K2 via PTB. The comparison in gauge mode is a supplementary comparison. The comparison was carried out from September 2008 till October 2012. The participating laboratories were the following: CMI, INRIM, LNE, MIKES, PTB-Berlin (absolute pressure 1 kPa and below) and PTB-Braunschweig (absolute pressure 1 kPa and above and gauge pressure). CMI was the pilot laboratory and provided a transfer standard for the comparison. This transfer standard was also the laboratory standard of CMI at the same time, which resulted in a unique and logistically difficult star comparison. Both in gauge and absolute pressures all the participating institutes successfully proved their equivalence with respect to the reference value and all also proved mutual bilateral equivalences in all the points. All the participating laboratories are also equivalent with the reference values of CCM.P-K4 and CCM.P-K2 in the relevant points. The comparison also proved the ability of FPG8601 to serve as a transfer standard. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Does my high blood pressure improve your survival? Overall and subgroup learning curves in health.
Van Gestel, Raf; Müller, Tobias; Bosmans, Johan
2017-09-01
Learning curves in health are of interest for a wide range of medical disciplines, healthcare providers, and policy makers. In this paper, we distinguish between three types of learning when identifying overall learning curves: economies of scale, learning from cumulative experience, and human capital depreciation. In addition, we approach the question of how treating more patients with specific characteristics predicts provider performance. To soften collinearity problems, we explore the use of least absolute shrinkage and selection operator regression as a variable selection method and Theil-Goldberger mixed estimation to augment the available information. We use data from the Belgian Transcatheter Aorta Valve Implantation (TAVI) registry, containing information on the first 860 TAVI procedures in Belgium. We find that treating an additional TAVI patient is associated with an increase in the probability of 2-year survival by about 0.16%-points. For adverse events like renal failure and stroke, we find that an extra day between procedures is associated with an increase in the probability for these events by 0.12%-points and 0.07%-points, respectively. Furthermore, we find evidence for positive learning effects from physicians' experience with defibrillation, treating patients with hypertension, and the use of certain types of replacement valves during the TAVI procedure. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hiller, R.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Nehls, S.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.
2016-02-01
LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. Recently, we obtained improved reference values by the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of 2.6 ± 0.2, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published by Apel et al. (2013) : With the revised calibration, LOPES measurements now are compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations now is in agreement with experimental data.
Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.
NASA Astrophysics Data System (ADS)
Kalenichenko, V. V.
A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.
Marques, Alda; Almeida, Sara; Carvalho, Joana; Cruz, Joana; Oliveira, Ana; Jácome, Cristina
2016-12-01
To assess the reliability, validity, and ability to identify fall status of the Balance Evaluation Systems Test (BESTest), Mini-BESTest, and Brief-BESTest, compared with the Berg Balance Scale (BBS), in older people living in the community. Cross-sectional. Community centers. Older adults (N=122; mean age ± SD, 76±9y). Not applicable. Participants reported on falls history in the preceding year and completed the Activities-Specific Balance Confidence (ABC) Scale. The BBS, BESTest, and the Five Times Sit-To-Stand Test were administered. Interrater (2 physiotherapists) and test-retest relative (48-72h) and absolute reliabilities were explored with the intraclass correlation coefficient (ICC) equation (2,1) and the Bland and Altman method. Minimal detectable changes at the 95% confidence level (MDC 95 ) were established. Validity was assessed by correlating the balance tests with each other and with the ABC Scale (Spearman correlation coefficients-ρ). Receiver operating characteristics assessed the ability of each balance test to differentiate between people with and without a history of falls. All balance tests presented good to excellent interrater (ICC=.71-.93) and test-retest (ICC=.50-.82) relative reliability, with no evidence of bias. MDC 95 values were 4.6, 9, 3.8, and 4.1 points for the BBS, BESTest, Mini-BESTest, and Brief-BESTest, respectively. All tests were significantly correlated with each other (ρ=.83-.96) and with the ABC Scale (ρ=.46-.61). Acceptable ability to identify fall status (areas under the curve, .71-.78) was found for all tests. Cutoff points were 48.5, 82, 19.5, and 12.5 points for the BBS, BESTest, Mini-BESTest, and Brief-BESTest, respectively. All balance tests are reliable, valid, and able to identify fall status in older people living in the community. Therefore, the choice of which test to use will depend on the level of balance impairment, purpose, and time availability. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Cook, M. J.; Sasagawa, G. S.; Roland, E. C.; Schmidt, D. A.; Wilcock, W. S. D.; Zumberge, M. A.
2017-12-01
Seawater pressure can be used to measure vertical seafloor deformation since small seafloor height changes produce measurable pressure changes. However, resolving secular vertical deformation near subduction zones can be difficult due to pressure gauge drift. A typical gauge drift rate of about 10 cm/year exceeds the expected secular rate of 1 cm/year or less in Cascadia. The absolute self-calibrating pressure recorder (ASCPR) was developed to solve the issue of gauge drift by using a deadweight calibrator to make campaign-style measurements of the absolute seawater pressure. Pressure gauges alternate between observing the ambient seawater pressure and the deadweight calibrator pressure, which is an accurately known reference value, every 10-20 minutes for several hours. The difference between the known reference pressure and the observed seafloor pressure allows offsets and transients to be corrected to determine the true, absolute seafloor pressure. Absolute seafloor pressure measurements provide a great utility for geodetic deformation studies. The measurements provide instrument-independent, benchmark values that can be used far into the future as epoch points in long-term time series or as important calibration points for other continuous pressure records. The ASCPR was first deployed in Cascadia in 2014 and 2015, when seven concrete seafloor benchmarks were placed along a trench-perpendicular profile extending from 20 km to 105 km off the central Oregon coast. Two benchmarks have ASCPR measurements that span three years, one benchmark spans two years, and four benchmarks span one year. Measurement repeatability is currently 3 to 4 cm, but we anticipate accuracy on the order of 1 cm with improvements to the instrument metrology and processing tidal and non-tidal oceanographic signals.
On the Use of the Main-sequence Knee (Saddle) to Measure Globular Cluster Ages
NASA Astrophysics Data System (ADS)
Saracino, S.; Dalessandro, E.; Ferraro, F. R.; Lanzoni, B.; Origlia, L.; Salaris, M.; Pietrinferni, A.; Geisler, D.; Kalirai, J. S.; Correnti, M.; Cohen, R. E.; Mauro, F.; Villanova, S.; Moni Bidin, C.
2018-06-01
In this paper, we review the operational definition of the so-called main-sequence knee (MS-knee), a feature in the color-magnitude diagram (CMD) occurring at the low-mass end of the MS. The magnitude of this feature is predicted to be independent of age at fixed chemical composition. For this reason, its difference in magnitude with respect to the MS turn-off (MS-TO) point has been suggested as a possible diagnostic to estimate absolute globular cluster (GC) ages. We first demonstrate that the operational definition of the MS-knee currently adopted in the literature refers to the inflection point of the MS (which we here more appropriately named MS-saddle), a feature that is well distinct from the knee and which cannot be used as its proxy. The MS-knee is only visible in near-infrared CMDs, while the MS-saddle can be also detected in optical–NIR CMDs. By using different sets of isochrones, we then demonstrate that the absolute magnitude of the MS-knee varies by a few tenths of a dex from one model to another, thus showing that at the moment stellar models may not capture the full systematic error in the method. We also demonstrate that while the absolute magnitude of the MS-saddle is almost coincident in different models, it has a systematic dependence on the adopted color combinations which is not predicted by stellar models. Hence, it cannot be used as a reliable reference for absolute age determination. Moreover, when statistical and systematic uncertainties are properly taken into account, the difference in magnitude between the MS-TO and the MS-saddle does not provide absolute ages with better accuracy than other methods like the MS-fitting.
Potential energy hypersurface and molecular flexibility
NASA Astrophysics Data System (ADS)
Koča, Jaroslav
1993-02-01
The molecular flexibility phenomenon is discussed from the conformational potential energy(hyper) surface (PES) point of view. Flexibility is considered as a product of three terms: thermodynamic, kinetic and geometrical. Several expressions characterizing absolute and relative molecular flexibility are introduced, depending on a subspace studied of the entire conformational space, energy level E of PES as well as absolute temperature. Results obtained by programs DAISY, CICADA and PANIC in conjunction with molecular mechanics program MMX for flexibility analysis of isopentane, 2,2-dimethylpentane and isohexane molecules are introduced.
1990-06-01
Layer Manipulator is placed AP differential pressure across the surface fence e, IC, mean and turbulent viscous dissipation Rt absolute viscosity of...feet long. The zero point for the traversing system is situated 3.3 feet from the inlet end of the blockhouse and ranges over 90% of the semi-open...tenth the absolute air pressure in millimeters of water. A voltage divider further reduces CD23 output voltage by one-half to accommodate the MASSCOMP
Code of Federal Regulations, 2011 CFR
2011-01-01
...—Shipping Point 1 (A) For 1 through 20 Samples Factor Grades AL 2 Number of 33-count samples 3 1 2 3 4 5 6 7.... 2 Russet. Table I—Shipping Point 1 (Continued) (B) For 21 through 40 Samples Factor Grades AL 2... outside the continental United States, the port of entry into the United States. 2 AL—Absolute limit...
Code of Federal Regulations, 2012 CFR
2012-01-01
...—Shipping Point 1 (A) For 1 through 20 Samples Factor Grades AL 2 Number of 33-count samples 3 1 2 3 4 5 6 7.... 2 Russet. Table I—Shipping Point 1 (Continued) (B) For 21 through 40 Samples Factor Grades AL 2... outside the continental United States, the port of entry into the United States. 2 AL—Absolute limit...
Code of Federal Regulations, 2010 CFR
2010-01-01
...—Shipping Point 1 (A) For 1 through 20 Samples Factor Grades AL 2 Number of 33-count samples 3 1 2 3 4 5 6 7.... 2 Russet. Table I—Shipping Point 1 (Continued) (B) For 21 through 40 Samples Factor Grades AL 2... outside the continental United States, the port of entry into the United States. 2 AL—Absolute limit...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient ofmore » variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ration, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.« less
Neutrino footprint in large scale structure
NASA Astrophysics Data System (ADS)
Garay, Carlos Peña; Verde, Licia; Jimenez, Raul
2017-03-01
Recent constrains on the sum of neutrino masses inferred by analyzing cosmological data, show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free streaming scale: massive neutrinos erase power at these scales. However, detection of a lack of small-scale power from cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independent on hierarchy, neutrinos always show a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and atmospheric neutrinos mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or modifications in the cosmological model. Therefore the measurement of such a feature, up to 1% relative change in the power spectrum for extreme differences in the mass eigenstates mass ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.
Electrotherapy modalities for adhesive capsulitis (frozen shoulder).
Page, Matthew J; Green, Sally; Kramer, Sharon; Johnston, Renea V; McBain, Brodwen; Buchbinder, Rachelle
2014-10-01
Adhesive capsulitis (also termed frozen shoulder) is a common condition characterised by spontaneous onset of pain, progressive restriction of movement of the shoulder and disability that restricts activities of daily living, work and leisure. Electrotherapy modalities, which aim to reduce pain and improve function via an increase in energy (electrical, sound, light, thermal) into the body, are often delivered as components of a physical therapy intervention. This review is one in a series of reviews which form an update of the Cochrane review 'Physiotherapy interventions for shoulder pain'. To synthesise the available evidence regarding the benefits and harms of electrotherapy modalities, delivered alone or in combination with other interventions, for the treatment of adhesive capsulitis. We searched CENTRAL, MEDLINE, EMBASE, CINAHL Plus and the ClinicalTrials.gov and World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) clinical trials registries up to May 2014, unrestricted by language, and reviewed the reference lists of review articles and retrieved trials to identify any other potentially relevant trials. We included randomised controlled trials (RCTs) and controlled clinical trials using a quasi-randomised method of allocation that included adults with adhesive capsulitis and compared any electrotherapy modality to placebo, no treatment, a different electrotherapy modality, or any other intervention. The two main questions of the review focused on whether electrotherapy modalities are effective compared to placebo or no treatment, or if they are an effective adjunct to manual therapy or exercise (or both). The main outcomes of interest were participant-reported pain relief of 30% or greater, overall pain, function, global assessment of treatment success, active shoulder abduction, quality of life, and the number of participants experiencing any adverse event. Two review authors independently selected trials for inclusion, extracted the data, performed a risk of bias assessment, and assessed the quality of the body of evidence for the main outcomes using the GRADE approach. Nineteen trials (1249 participants) were included in the review. Four trials reported using an adequate method of allocation concealment and six trials blinded participants and personnel. Only two electrotherapy modalities (low-level laser therapy (LLLT) and pulsed electromagnetic field therapy (PEMF)) have been compared to placebo. No trial has compared an electrotherapy modality plus manual therapy and exercise to manual therapy and exercise alone. The two main questions of the review were investigated in nine trials.Low quality evidence from one trial (40 participants) indicated that LLLT for six days may result in improvement at six days. Eighty per cent (16/20) of participants reported treatment success with LLLT compared with 10% (2/20) of participants receiving placebo (risk ratio (RR) 8.00, 95% confidence interval (CI) 2.11 to 30.34; absolute risk difference 70%, 95% CI 48% to 92%). No participants in either group reported adverse events.We were uncertain whether PEMF for two weeks improved pain or function more than placebo at two weeks because of the very low quality evidence from one trial (32 participants). Seventy-five per cent (15/20) of participants reported pain relief of 30% or more with PEMF compared with 0% (0/12) of participants receiving placebo (RR 19.19, 95% CI 1.25 to 294.21; absolute risk difference 75%, 95% CI 53% to 97%). Fifty-five per cent (11/20) of participants reported total recovery of joint function with PEMF compared with 0% (0/12) of participants receiving placebo (RR 14.24, 95% CI 0.91 to 221.75; absolute risk difference 55%, 95% CI 31 to 79).Moderate quality evidence from one trial (63 participants) indicated that LLLT plus exercise for eight weeks probably results in greater improvement when measured at the fourth week of treatment, but a similar number of adverse events, compared with placebo plus exercise. The mean pain score at four weeks was 51 points with placebo plus exercise, while with LLLT plus exercise the mean pain score was 32 points on a 100 point scale (mean difference (MD) 19 points, 95% CI 15 to 23; absolute risk difference 19%, 95% CI 15% to 23%). The mean function impairment score was 48 points with placebo plus exercise, while with LLLT plus exercise the mean function impairment score was 36 points on a 100 point scale (MD 12 points, 95% CI 6 to 18; absolute risk difference 12%, 95% CI 6 to 18). Mean active abduction was 70 degrees with placebo plus exercise, while with LLLT plus exercise mean active abduction was 79 degrees (MD 9 degrees, 95% CI 2 to 16; absolute risk difference 5%, 95% CI 1% to 9%). No participants in either group reported adverse events. LLLT's benefits on function were maintained at four months.Based on very low quality evidence from six trials, we were uncertain whether therapeutic ultrasound, PEMF, continuous short wave diathermy, Iodex phonophoresis, a combination of Iodex iontophoresis with continuous short wave diathermy, or a combination of therapeutic ultrasound with transcutaneous electrical nerve stimulation (TENS) were effective adjuncts to exercise. Based on low or very low quality evidence from 12 trials, we were uncertain whether a diverse range of electrotherapy modalities (delivered alone or in combination with manual therapy, exercise, or other active interventions) were more or less effective than other active interventions (for example glucocorticoid injection). Based upon low quality evidence from one trial, LLLT for six days may be more effective than placebo in terms of global treatment success at six days. Based upon moderate quality evidence from one trial, LLLT plus exercise for eight weeks may be more effective than exercise alone in terms of pain up to four weeks, and function up to four months. It is unclear whether PEMF is more or less effective than placebo, or whether other electrotherapy modalities are an effective adjunct to exercise. Further high quality randomised controlled trials are needed to establish the benefits and harms of physical therapy interventions (that comprise electrotherapy modalities, manual therapy and exercise, and are reflective of clinical practice) compared to interventions with evidence of benefit (for example glucocorticoid injection or arthrographic joint distension).
Method and apparatus for ultra-high-sensitivity, incremental and absolute optical encoding
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1999-01-01
An absolute optical linear or rotary encoder which encodes the motion of an object (3) with increased resolution and encoding range and decreased sensitivity to damage to the scale includes a scale (5), which moves with the object and is illuminated by a light source (11). The scale carries a pattern (9) which is imaged by a microscope optical system (13) on a CCD array (17) in a camera head (15). The pattern includes both fiducial markings (31) which are identical for each period of the pattern and code areas (33) which include binary codings of numbers identifying the individual periods of the pattern. The image of the pattern formed on the CCD array is analyzed by an image processor (23) to locate the fiducial marking, decode the information encoded in the code area, and thereby determine the position of the object.
Statistical properties of edge plasma turbulence in the Large Helical Device
NASA Astrophysics Data System (ADS)
Dewhurst, J. M.; Hnat, B.; Ohno, N.; Dendy, R. O.; Masuzaki, S.; Morisaki, T.; Komori, A.
2008-09-01
Ion saturation current (Isat) measurements made by three tips of a Langmuir probe array in the Large Helical Device are analysed for two plasma discharges. Absolute moment analysis is used to quantify properties on different temporal scales of the measured signals, which are bursty and intermittent. Strong coherent modes in some datasets are found to distort this analysis and are consequently removed from the time series by applying bandstop filters. Absolute moment analysis of the filtered data reveals two regions of power-law scaling, with the temporal scale τ ≈ 40 µs separating the two regimes. A comparison is made with similar results from the Mega-Amp Spherical Tokamak. The probability density function is studied and a monotonic relationship between connection length and skewness is found. Conditional averaging is used to characterize the average temporal shape of the largest intermittent bursts.
Living beyond the edge: Higgs inflation and vacuum metastability
Bezrukov, Fedor; Rubio, Javier; Shaposhnikov, Mikhail
2015-10-13
The measurements of the Higgs mass and top Yukawa coupling indicate that we live in a very special universe, at the edge of the absolute stability of the electroweak vacuum. If fully stable, the Standard Model (SM) can be extended all the way up to the inflationary scale and the Higgs field, nonminimally coupled to gravity with strength ξ, can be responsible for inflation. We show that the successful Higgs inflation scenario can also take place if the SM vacuum is not absolutely stable. This conclusion is based on two effects that were overlooked previously. The first one is associatedmore » with the effective renormalization of the SM couplings at the energy scale M P/ξ, where M P is the Planck scale. Lastly, the second one is a symmetry restoration after inflation due to high temperature effects that leads to the (temporary) disappearance of the vacuum at Planck values of the Higgs field.« less
Deformation Monitoring of the Submillimetric UPV Calibration Baseline
NASA Astrophysics Data System (ADS)
García-Asenjo, Luis; Baselga, Sergio; Garrigues, Pascual
2017-06-01
A 330 m calibration baseline was established at the Universitat Politècnica de València (UPV) in 2007. Absolute scale was subsequently transferred in 2012 from the Nummela Standard Baseline in Finland and distances between pillars were determined with uncertainties ranging from 0.1 mm to 0.3 mm. In order to assess the long-term stability of the baseline three field campaigns were carried out from 2013 to 2015 in a co-operative effort with the Universidad Complutense de Madrid (UCM), which provided the only Mekometer ME5000 distance meter available in Spain. Since the application of the ISO17123-4 full procedure did not suffice to come to a definite conclusion about possible displacements of the pillars, we opted for the traditional geodetic network approach. This approach had to be adapted to the case at hand in order to deal with problems such as the geometric weakness inherent to calibration baselines and scale uncertainty derived from both the use of different instruments and the high correlation between the meteorological correction and scale determination. Additionally, the so-called the maximum number of stable points method was also tested. In this contribution it is described the process followed to assess the stability of the UPV submillimetric calibration baseline during the period of time from 2012 to 2015.
Wang, G; Wu, K; Hu, H; Li, G; Wang, L J
2016-10-01
To reduce seismic and environmental vibration noise, ultra-low-frequency vertical vibration isolation systems play an important role in absolute gravimetry. For this purpose, an isolator based on a two-stage beam structure is proposed and demonstrated. The isolator has a simpler and more robust structure than the present ultra-low-frequency vertical active vibration isolators. In the system, two beams are connected to a frame using flexural pivots. The upper beam is suspended from the frame with a normal hex spring and the lower beam is suspended from the upper one using a zero-length spring. The pivot of the upper beam is not vertically above the pivot of the lower beam. With this special design, the attachment points of the zero-length spring to the beams can be moved to adjust the effective stiffness. A photoelectric detector is used to detect the angle between the two beams, and a voice coil actuator attached to the upper beam is controlled by a feedback circuit to keep the angle at a fixed value. The system can achieve a natural period of 100 s by carefully moving the attachment points of the zero-length spring to the beams and tuning the feedback parameters. The system has been used as an inertial reference in the T-1 absolute gravimeter. The experiment results demonstrate that the system has significant vibration isolation performance that holds promise in applications such as absolute gravimeters.
NASA Astrophysics Data System (ADS)
Wang, G.; Wu, K.; Hu, H.; Li, G.; Wang, L. J.
2016-10-01
To reduce seismic and environmental vibration noise, ultra-low-frequency vertical vibration isolation systems play an important role in absolute gravimetry. For this purpose, an isolator based on a two-stage beam structure is proposed and demonstrated. The isolator has a simpler and more robust structure than the present ultra-low-frequency vertical active vibration isolators. In the system, two beams are connected to a frame using flexural pivots. The upper beam is suspended from the frame with a normal hex spring and the lower beam is suspended from the upper one using a zero-length spring. The pivot of the upper beam is not vertically above the pivot of the lower beam. With this special design, the attachment points of the zero-length spring to the beams can be moved to adjust the effective stiffness. A photoelectric detector is used to detect the angle between the two beams, and a voice coil actuator attached to the upper beam is controlled by a feedback circuit to keep the angle at a fixed value. The system can achieve a natural period of 100 s by carefully moving the attachment points of the zero-length spring to the beams and tuning the feedback parameters. The system has been used as an inertial reference in the T-1 absolute gravimeter. The experiment results demonstrate that the system has significant vibration isolation performance that holds promise in applications such as absolute gravimeters.
Laser Truss Sensor for Segmented Telescope Phasing
NASA Technical Reports Server (NTRS)
Liu, Duncan T.; Lay, Oliver P.; Azizi, Alireza; Erlig, Herman; Dorsky, Leonard I.; Asbury, Cheryl G.; Zhao, Feng
2011-01-01
A paper describes the laser truss sensor (LTS) for detecting piston motion between two adjacent telescope segment edges. LTS is formed by two point-to-point laser metrology gauges in a crossed geometry. A high-resolution (<30 nm) LTS can be implemented with existing laser metrology gauges. The distance change between the reference plane and the target plane is measured as a function of the phase change between the reference and target beams. To ease the bandwidth requirements for phase detection electronics (or phase meter), homodyne or heterodyne detection techniques have been used. The phase of the target beam also changes with the refractive index of air, which changes with the air pressure, temperature, and humidity. This error can be minimized by enclosing the metrology beams in baffles. For longer-term (weeks) tracking at the micron level accuracy, the same gauge can be operated in the absolute metrology mode with an accuracy of microns; to implement absolute metrology, two laser frequencies will be used on the same gauge. Absolute metrology using heterodyne laser gauges is a demonstrated technology. Complexity of laser source fiber distribution can be optimized using the range-gated metrology (RGM) approach.
Transit dosimetry in IMRT with an a-Si EPID in direct detection configuration
NASA Astrophysics Data System (ADS)
Sabet, Mahsheed; Rowshanfarzad, Pejman; Vial, Philip; Menk, Frederick W.; Greer, Peter B.
2012-08-01
In this study an amorphous silicon electronic portal imaging device (a-Si EPID) converted to direct detection configuration was investigated as a transit dosimeter for intensity modulated radiation therapy (IMRT). After calibration to dose and correction for a background offset signal, the EPID-measured absolute IMRT transit doses for 29 fields were compared to a MatriXX two-dimensional array of ionization chambers (as reference) using Gamma evaluation (3%, 3 mm). The MatriXX was first evaluated as reference for transit dosimetry. The accuracy of EPID measurements was also investigated by comparison of point dose measurements by an ionization chamber on the central axis with slab and anthropomorphic phantoms in a range of simple to complex fields. The uncertainty in ionization chamber measurements in IMRT fields was also investigated by its displacement from the central axis and comparison with the central axis measurements. Comparison of the absolute doses measured by the EPID and MatriXX with slab phantoms in IMRT fields showed that on average 96.4% and 97.5% of points had a Gamma index<1 in head and neck and prostate fields, respectively. For absolute dose comparisons with anthropomorphic phantoms, the values changed to an average of 93.6%, 93.7% and 94.4% of points with Gamma index<1 in head and neck, brain and prostate fields, respectively. Point doses measured by the EPID and ionization chamber were within 3% difference for all conditions. The deviations introduced in the response of the ionization chamber in IMRT fields were<1%. The direct EPID performance for transit dosimetry showed that it has the potential to perform accurate, efficient and comprehensive in vivo dosimetry for IMRT.
An Automatic Procedure for Combining Digital Images and Laser Scanner Data
NASA Astrophysics Data System (ADS)
Moussa, W.; Abdel-Wahab, M.; Fritsch, D.
2012-07-01
Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques directs at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric details. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods for data fusion. The first method starts by a marker-free registration of digital images based on a point-based environment model (PEM) of a scene which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Precedent to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Then, using the determined transformation parameters results in having absolute oriented images in relation to the laser scanner data. With the resulting absolute orientations we have employed robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete detailed representation of a scene. Examples of different data sets are shown and experimental results demonstrate the effectiveness of the presented procedures.
The major influence of the atmosphere on intracranial pressure: an observational study.
Herbowski, Leszek
2017-01-01
The impact of the atmosphere on human physiology has been studied widely within the last years. In practice, intracranial pressure is a pressure difference between intracranial compartments and the surrounding atmosphere. This means that gauge intracranial pressure uses atmospheric pressure as its zero point, and therefore, this method of pressure measurement excludes the effects of barometric pressure's fluctuation. The comparison of these two physical quantities can only take place through their absolute value relationship. The aim of this study is to investigate the direct effect of barometric pressure on the absolute intracranial pressure homeostasis. A prospective observational cross-sectional open study was conducted in Szczecin, Poland. In 28 neurosurgical patients with suspected normal-pressure hydrocephalus, intracranial intraventricular pressure was monitored in a sitting position. A total of 168 intracranial pressure and atmospheric pressure measurements were performed. Absolute atmospheric pressure was recorded directly. All values of intracranial gauge pressure were converted to absolute pressure (the sum of gauge intracranial pressure and local absolute atmospheric pressure). The average absolute mean intracranial pressure in the patients is 1006.6 hPa (95 % CI 1004.5 to 1008.8 hPa, SEM 1.1), and the mean absolute atmospheric pressure is 1007.9 hPa (95 % CI 1006.3 to 1009.6 hPa, SEM 0.8). The observed association between atmospheric and intracranial pressure is strongly significant (Spearman correlation r = 0.87, p < 0.05) and all the measurements are perfectly reliable (Bland-Altman coefficient is 4.8 %). It appears from this study that changes in absolute intracranial pressure are related to seasonal variation. Absolute intracranial pressure is shown to be impacted positively by atmospheric pressure.
The major influence of the atmosphere on intracranial pressure: an observational study
NASA Astrophysics Data System (ADS)
Herbowski, Leszek
2017-01-01
The impact of the atmosphere on human physiology has been studied widely within the last years. In practice, intracranial pressure is a pressure difference between intracranial compartments and the surrounding atmosphere. This means that gauge intracranial pressure uses atmospheric pressure as its zero point, and therefore, this method of pressure measurement excludes the effects of barometric pressure's fluctuation. The comparison of these two physical quantities can only take place through their absolute value relationship. The aim of this study is to investigate the direct effect of barometric pressure on the absolute intracranial pressure homeostasis. A prospective observational cross-sectional open study was conducted in Szczecin, Poland. In 28 neurosurgical patients with suspected normal-pressure hydrocephalus, intracranial intraventricular pressure was monitored in a sitting position. A total of 168 intracranial pressure and atmospheric pressure measurements were performed. Absolute atmospheric pressure was recorded directly. All values of intracranial gauge pressure were converted to absolute pressure (the sum of gauge intracranial pressure and local absolute atmospheric pressure). The average absolute mean intracranial pressure in the patients is 1006.6 hPa (95 % CI 1004.5 to 1008.8 hPa, SEM 1.1), and the mean absolute atmospheric pressure is 1007.9 hPa (95 % CI 1006.3 to 1009.6 hPa, SEM 0.8). The observed association between atmospheric and intracranial pressure is strongly significant (Spearman correlation r = 0.87, p < 0.05) and all the measurements are perfectly reliable (Bland-Altman coefficient is 4.8 %). It appears from this study that changes in absolute intracranial pressure are related to seasonal variation. Absolute intracranial pressure is shown to be impacted positively by atmospheric pressure.
2013-10-01
structure reveals four distinct purely refracted acoustic paths: One with a single upper turning point near 80 m depth, two with a pair of upper turning... points at a depth of roughly 300 m, and one with three upper turning points at 420 m. Individual path intensity, defined as the absolute square of...contribu- tion to acoustic scattering is thought to occur at upper turning points (UTP) (Flatte et al., 1979). Here, the acoustic path is horizontal
Investigating the Luminous Environment of SDSS Data Release 4 Mg II Absorption Line Systems
NASA Astrophysics Data System (ADS)
Caler, Michelle A.; Ravi, Sheth K.
2018-01-01
We investigate the luminous environment within a few hundred kiloparsecs of 3760 Mg II absorption line systems. These systems lie along 3760 lines of sight to Sloan Digital Sky Survey (SDSS) Data Release 4 QSOs, have redshifts that range between 0.37 ≤ z ≤ 0.82, and have rest equivalent widths greater than 0.18 Å. We use the SDSS Catalog Archive Server to identify galaxies projected near 3 arcminutes of the absorbing QSO’s position, and a background subtraction technique to estimate the absolute magnitude distribution and luminosity function of galaxies physically associated with these Mg II absorption line systems. The Mg II absorption system sample is split into two parts, with the split occurring at rest equivalent width 0.8 Å, and the resulting absolute magnitude distributions and luminosity functions compared on scales ranging from 50 h-1 kpc to 880 h-1 kpc. We find that, on scales of 100 h-1 kpc and smaller, the two distributions differ: the absolute magnitude distribution of galaxies associated with systems of rest frame equivalent width ≥ 0.8 Å (2750 lines of sight) seems to be approximated by that of elliptical-Sa type galaxies, whereas the absolute magnitude distribution of galaxies associated with systems of rest frame equivalent width < 0.8 Å (1010 lines of sight) seems to be approximated by that of Sa-Sbc type galaxies. However, on larger scales greater than 200 h-1 kpc, both distributions are broadly consistent with that of elliptical-Sa type galaxies. We note that, in a broader context, these results represent an estimate of the bright end of the galaxy luminosity function at a median redshift of z ˜ 0.65.
Audio steganography by amplitude or phase modification
NASA Astrophysics Data System (ADS)
Gopalan, Kaliappan; Wenndt, Stanley J.; Adams, Scott F.; Haddad, Darren M.
2003-06-01
This paper presents the results of embedding short covert message utterances on a host, or cover, utterance by modifying the phase or amplitude of perceptually masked or significant regions of the host. In the first method, the absolute phase at selected, perceptually masked frequency indices was changed to fixed, covert data-dependent values. Embedded bits were retrieved at the receiver from the phase at the selected frequency indices. Tests on embedding a GSM-coded covert utterance on clean and noisy host utterances showed no noticeable difference in the stego compared to the hosts in speech quality or spectrogram. A bit error rate of 2 out of 2800 was observed for a clean host utterance while no error occurred for a noisy host. In the second method, the absolute phase of 10 or fewer perceptually significant points in the host was set in accordance with covert data. This resulted in a stego with successful data retrieval and a slightly noticeable degradation in speech quality. Modifying the amplitude of perceptually significant points caused perceptible differences in the stego even with small changes of amplitude made at five points per frame. Finally, the stego obtained by altering the amplitude at perceptually masked points showed barely noticeable differences and excellent data recovery.
Yan, Yunzhi; Xu, Yinsheng; Chu, Ling; He, Shan; Chen, Yifeng
2012-06-01
Identifying the life-history strategies of fish and their associations with the surrounding environment is the basic foundation in the conservation and sustainable utilization of fish species. We examined the age, growth, and reproduction of Sarcocheilichthys nigripinnis using 352 specimens collected monthly from May 2009 to April 2010 in the Qingyi Stream. We found the sex ratio of this study population was 0.58:1 (female: male), significantly different from expected 1:1. Females and males both comprised four age groups. The annuli on the scales were formed during February and March. No obvious between-sex difference was observed in length-weight and length-scale-radius relationships. The total length in back-calculation significantly increased with age for both sexes, but did not differ significantly at each age between the two sexes. An inflection point was observed in the growth curves given by the von Bertalanffy growth function for total weight. At this inflection point, fish were 3.95 years. Both sexes reach their 50% sex maturity at age 2, when females and males were 94.7 mm and 103.0 mm total length. The temporal pattern of the gonado-somatic index corresponded to a spawning period that occurred from April through July. The non-synchronicity of egg diameter in each mature ovary during the breeding period suggested these fish may be batch spawners. The absolute fecundity increased significantly with total length and weight, whereas no significant correlation was observed between the relative fecundity and body size.
Turbulent mass inhomogeneities induced by a point-source
NASA Astrophysics Data System (ADS)
Thalabard, Simon
2018-03-01
We describe how turbulence distributes tracers away from a localized source of injection, and analyze how the spatial inhomogeneities of the concentration field depend on the amount of randomness in the injection mechanism. For that purpose, we contrast the mass correlations induced by purely random injections with those induced by continuous injections in the environment. Using the Kraichnan model of turbulent advection, whereby the underlying velocity field is assumed to be shortly correlated in time, we explicitly identify scaling regions for the statistics of the mass contained within a shell of radius r and located at a distance ρ away from the source. The two key parameters are found to be (i) the ratio s 2 between the absolute and the relative timescales of dispersion and (ii) the ratio Λ between the size of the cloud and its distance away from the source. When the injection is random, only the former is relevant, as previously shown by Celani et al (2007 J. Fluid Mech. 583 189–98) in the case of an incompressible fluid. It is argued that the space partition in terms of s 2 and Λ is a robust feature of the injection mechanism itself, which should remain relevant beyond the Kraichnan model. This is for instance the case in a generalized version of the model, where the absolute dispersion is prescribed to be ballistic rather than diffusive.
NASA Astrophysics Data System (ADS)
Cheng, Jieyu; Qiu, Wu; Yuan, Jing; Fenster, Aaron; Chiu, Bernard
2016-03-01
Registration of longitudinally acquired 3D ultrasound (US) images plays an important role in monitoring and quantifying progression/regression of carotid atherosclerosis. We introduce an image-based non-rigid registration algorithm to align the baseline 3D carotid US with longitudinal images acquired over several follow-up time points. This algorithm minimizes the sum of absolute intensity differences (SAD) under a variational optical-flow perspective within a multi-scale optimization framework to capture local and global deformations. Outer wall and lumen were segmented manually on each image, and the performance of the registration algorithm was quantified by Dice similarity coefficient (DSC) and mean absolute distance (MAD) of the outer wall and lumen surfaces after registration. In this study, images for 5 subjects were registered initially by rigid registration, followed by the proposed algorithm. Mean DSC generated by the proposed algorithm was 79:3+/-3:8% for lumen and 85:9+/-4:0% for outer wall, compared to 73:9+/-3:4% and 84:7+/-3:2% generated by rigid registration. Mean MAD of 0:46+/-0:08mm and 0:52+/-0:13mm were generated for lumen and outer wall respectively by the proposed algorithm, compared to 0:55+/-0:08mm and 0:54+/-0:11mm generated by rigid registration. The mean registration time of our method per image pair was 143+/-23s.
NASA Astrophysics Data System (ADS)
Xiaoyu, Lai; Renxin, Xu
2017-06-01
The nature of pulsar-like compact stars is essentially a central question of the fundamental strong interaction (explained in quantum chromo-dynamics) at low energy scale, the solution of which still remains a challenge though tremendous efforts have been tried. This kind of compact objects could actually be strange quark stars if strange quark matter in bulk may constitute the true ground state of the strong-interaction matter rather than 56Fe (the so-called Witten’s conjecture). From astrophysical points of view, however, it is proposed that strange cluster matter could be absolutely stable and thus those compact stars could be strange cluster stars in fact. This proposal could be regarded as a general Witten’s conjecture: strange matter in bulk could be absolutely stable, in which quarks are either free (for strange quark matter) or localized (for strange cluster matter). Strange cluster with three-light-flavor symmetry is renamed strangeon, being coined by combining “strange nucleon” for the sake of simplicity. A strangeon star can then be thought as a 3-flavored gigantic nucleus, and strangeons are its constituent as an analogy of nucleons which are the constituent of a normal (micro) nucleus. The observational consequences of strangeon stars show that different manifestations of pulsarlike compact stars could be understood in the regime of strangeon stars, and we are expecting more evidence for strangeon star by advanced facilities (e.g., FAST, SKA, and eXTP).
Pionnier, Raphaël; Découfour, Nicolas; Barbier, Franck; Popineau, Christophe; Simoneau-Buessinger, Emilie
2016-03-01
The purpose of this study was to quantitatively and qualitatively assess dynamic balance with accuracy in individuals with chronic ankle instability (CAI). To this aim, a motion capture system was used while participants performed the Star Excursion Balance Test (SEBT). Reached distances for the 8 points of the star were automatically computed, thereby excluding any dependence to the experimenter. In addition, new relevant variables were also computed, such as absolute time needed to reach each distance, lower limb ranges of motion during unipodal stance, as well as absolute error of pointing. Velocity of the center of pressure and range of variation of ground reaction forces have also been assessed during the unipodal phase of the SEBT thanks to force plates. CAI group exhibited smaller reached distances and greater absolute error of pointing than the control group (p<0.05). Moreover, the ranges of motion of lower limbs joints, the velocity of the center of pressure and the range of variation of the ground reaction forces were all significantly smaller in the CAI group (p<0.05). These reduced quantitative and qualitative performances highlighted a lower dynamic postural control. The limited body movements and accelerations during the unipodal stance in the CAI group could highlight a protective strategy. The present findings could help clinicians to better understand the motor strategies used by CAI patients during dynamic balance and may guide the rehabilitation process. Copyright © 2016 Elsevier B.V. All rights reserved.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
Low frequency AC waveform generator
Bilharz, Oscar W.
1986-01-01
Low frequency sine, cosine, triangle and square waves are synthesized in circuitry which allows variation in the waveform amplitude and frequency while exhibiting good stability and without requiring significant stabilization time. A triangle waveform is formed by a ramped integration process controlled by a saturation amplifier circuit which produces the necessary hysteresis for the triangle waveform. The output of the saturation circuit is tapped to produce the square waveform. The sine waveform is synthesized by taking the absolute value of the triangular waveform, raising this absolute value to a predetermined power, multiplying the raised absolute value of the triangle wave with the triangle wave itself and properly scaling the resultant waveform and subtracting it from the triangular waveform itself. The cosine is synthesized by squaring the triangular waveform, raising the triangular waveform to a predetermined power and adding the squared waveform raised to the predetermined power with a DC reference and subtracting the squared waveform therefrom, with all waveforms properly scaled. The resultant waveform is then multiplied with a square wave in order to correct the polarity and produce the resultant cosine waveform.
NASA Astrophysics Data System (ADS)
Chen, Chen; Klämpfl, Florian; Stelzle, Florian; Schmidt, Michael
2014-11-01
An imging resolution of micron-scale has not yet been discovered by diffuse optical imaging (DOI), while a superficial response was eliminated. In this work, we report on a new approach of DOI with a local off-set alignment to subvert the common boundary conditions of the modified Beer-Lambert Law (MBLL). It can resolve a superficial target in micron scale under a turbid media. To validate both major breakthroughs, this system was used to recover a subsurface microvascular mimicking structure under an skin equivalent phantom. This microvascular was included with oxy-hemoglobin solution in variant concentrations to distiguish the absolute values of CtRHb and CtHbO2 . Experimental results confirmed the feasibility of recovering the target vascular of 50 µm in diameter, and graded the values of the concentrations of oxy-hemoglobin from 10 g/L to 50 g/L absolutely. Ultimately, this approach could evolve into a non-invasive imaging system to map the microvascular pattern and the associated oximetry under a human skin in-vivo.
NASA Technical Reports Server (NTRS)
Klimas, Alex; Uritsky, Vadim; Donovan, Eric
2010-01-01
We provide indirect evidence for turbulent reconnection in Earth's midtail plasma sheet by reexamining the statistical properties of bright, nightside auroral emission events as observed by the UVI experiment on the Polar spacecraft and discussed previously by Uritsky et al. The events are divided into two groups: (1) those that map to absolute value of (X(sub GSM)) < 12 R(sub E) in the magnetotail and do not show scale-free statistics and (2) those that map to absolute value of (X(sub GSM)) > 12 R(sub E) and do show scale-free statistics. The absolute value of (X(sub GSM)) dependence is shown to most effectively organize the events into these two groups. Power law exponents obtained for group 2 are shown to validate the conclusions of Uritsky et al. concerning the existence of critical dynamics in the auroral emissions. It is suggested that the auroral dynamics is a reflection of a critical state in the magnetotail that is based on the dynamics of turbulent reconnection in the midtail plasma sheet.
An absolute photometric system at 10 and 20 microns
NASA Technical Reports Server (NTRS)
Rieke, G. H.; Lebofsky, M. J.; Low, F. J.
1985-01-01
Two new direct calibrations at 10 and 20 microns are presented in which terrestrial flux standards are referred to infrared standard stars. These measurements give both good agreement and higher accuracy when compared with previous direct calibrations. As a result, the absolute calibrations at 10 and 20 microns have now been determined with accuracies of 3 and 8 percent, respectively. A variety of absolute calibrations based on extrapolation of stellar spectra from the visible to 10 microns are reviewed. Current atmospheric models of A-type stars underestimate their fluxes by about 10 percent at 10 microns, whereas models of solar-type stars agree well with the direct calibrations. The calibration at 20 microns can probably be determined to about 5 percent by extrapolation from the more accurate result at 10 microns. The photometric system at 10 and 20 microns is updated to reflect the new absolute calibration, to base its zero point directly on the colors of A0 stars, and to improve the accuracy in the comparison of the standard stars.
Millicharge or decay: a critical take on Minimal Dark Matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobile, Eugenio Del; Dipartimento di Fisica e Astronomia “G. Galilei”, Università di Padova and INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova; Nardecchia, Marco
2016-04-26
Minimal Dark Matter (MDM) is a theoretical framework highly appreciated for its minimality and yet its predictivity. Of the two only viable candidates singled out in the original analysis, the scalar eptaplet has been found to decay too quickly to be around today, while the fermionic quintuplet is now being probed by indirect Dark Matter (DM) searches. It is therefore timely to critically review the MDM paradigm, possibly pointing out generalizations of this framework. We propose and explore two distinct directions. One is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates whichmore » are part of SU(2){sub L} multiplets with integer isospin. Another possibility is to lower the cutoff of the model, which was originally fixed at the Planck scale, to allow for DM decays. We find new viable MDM candidates and study their phenomenology in detail.« less
Sputter deposition of a spongelike morphology in metal coatings
NASA Astrophysics Data System (ADS)
Jankowski, A. F.; Hayes, J. P.
2003-03-01
Metallic films are grown with a ``spongelike'' morphology in the as-deposited condition using planar magnetron sputtering. The morphology of the deposit is characterized by metallic continuity in three dimensions with continuous and open porosity on the submicron scale. The stabilization of the spongelike morphology is found over a limited range of the sputter deposition parameters, that is, of working gas pressure and substrate temperature. This spongelike morphology is an extension of the features as generally represented in the classic zone models of growth for physical vapor deposits. Nickel coatings are deposited with working gas pressures up to 4 Pa and for substrate temperatures up to 1100 K. The morphology of the deposits is examined in plan and in cross section views with scanning electron microscopy. The parametric range of gas pressure and substrate temperature (relative to absolute melt point) under which the spongelike metal deposits are produced appear universal for other metals including gold, silver, and aluminum.
Broadband, high-resolution investigation of advanced absorption line shapes at high temperature
NASA Astrophysics Data System (ADS)
Schroeder, Paul J.; Cich, Matthew J.; Yang, Jinyu; Swann, William C.; Coddington, Ian; Newbury, Nathan R.; Drouin, Brian J.; Rieker, Gregory B.
2017-08-01
Spectroscopic studies of planetary atmospheres and high-temperature processes (e.g., combustion) require absorption line-shape models that are accurate over extended temperature ranges. To date, advanced line shapes, like the speed-dependent Voigt and Rautian profiles, have not been tested above room temperature with broadband spectrometers. We investigate pure water vapor spectra from 296 to 1305 K acquired with a dual-frequency comb spectrometer spanning from 6800 to 7200 c m-1 at a point spacing of 0.0033 c m-1 and absolute frequency accuracy of <3.3 ×10-6c m-1 . Using a multispectral fitting analysis, we show that only the speed-dependent Voigt accurately models this temperature range with a single power-law temperature-scaling exponent for the broadening coefficients. Only the data from the analysis using this profile fall within theoretical predictions, suggesting that this mechanism captures the dominant narrowing physics for these high-temperature conditions.
The interstellar medium of M31. III - Narrow-band imagery in H alpha and (SII)
NASA Technical Reports Server (NTRS)
Walterbos, R. A. M.; Braun, R.
1992-01-01
Deep CCD imagery in H alpha and (SII) is presented of the major spiral arms of M31 with particular attention given to the data reduction and the analysis of the (SII)/H alpha flux ratios. A diffuse ionized gas noted in the images is analyzed which shows higher (SII)/H alpha ratios, and 967 discrete nebulae are listed with gray-scale images, finding charts, and absolute fluxes. The differential H-alpha luminosity function is found to have a slope of -0.95 for brighter objects and flattens out below a critical level. The curve is shown to correspond to the point at which single-star ionization accounts for the H alpha luminosities and is consistent with previous observations. The catalog of objects and fluxes is the largest existing sample of this type, and the unresolved objects in the sample are considered to be planetary nebulae.
The structure of the nearby universe traced by theIRAS galaxies
NASA Technical Reports Server (NTRS)
Yahil, Amos
1993-01-01
One of the most important discoveries of the Infrared Astronomical Satellite (IRAS) has been the detection of about 20,000 galaxies with 60 microns fluxes above 0.5 Jy. From the observational point of view, the IRAS galaxies are ideal tracers of density, since they are homogeneously detected over most of the sky, and their fluxes are unaffected by galactic extinction. The nearby universe was mapped by the IRAS galaxies to a distance of approximately 200 h(exp -1) Mpc for the absolute value of b less than 5 deg. The ability to map down to such low galactic latitudes has proven to be particularly imporant, since some of the most important nearby large-scale structures, such as the Great Attractor, the Perseus-Pisces region, and the Shapley concentration, all lie there. Two major results of the U.S. IRAS redshift survey are discussed.
Vapor-deposited porous films for energy conversion
Jankowski, Alan F.; Hayes, Jeffrey P.; Morse, Jeffrey D.
2005-07-05
Metallic films are grown with a "spongelike" morphology in the as-deposited condition using planar magnetron sputtering. The morphology of the deposit is characterized by metallic continuity in three dimensions with continuous and open porosity on the submicron scale. The stabilization of the spongelike morphology is found over a limited range of the sputter deposition parameters, that is, of working gas pressure and substrate temperature. This spongelike morphology is an extension of the features as generally represented in the classic zone models of growth for physical vapor deposits. Nickel coatings were deposited with working gas pressures up 4 Pa and for substrate temperatures up to 1000 K. The morphology of the deposits is examined in plan and in cross section views with scanning electron microscopy (SEM). The parametric range of gas pressure and substrate temperature (relative to absolute melt point) under which the spongelike metal deposits are produced appear universal for other metals including gold, silver, and aluminum.
Fixing the reference frame for PPMXL proper motions using extragalactic sources
Grabowski, Kathleen; Carlin, Jeffrey L.; Newberg, Heidi Jo; ...
2015-05-27
In this study, we quantify and correct systematic errors in PPMXL proper motions using extragalactic sources from the first two LAMOST data releases and the Vèron-Cetty & Vèron Catalog of Quasars. Although the majority of the sources are from the Vèron catalog, LAMOST makes important contributions in regions that are not well-sampled by previous catalogs, particularly at low Galactic latitudes and in the south Galactic cap. We show that quasars in PPMXL have measurable and significant proper motions, which reflect the systematic zero-point offsets present in the catalog. We confirm the global proper motion shifts seen by Wu et al.,more » and additionally find smaller-scale fluctuations of the QSO-derived corrections to an absolute frame. Finally, we average the proper motions of 158 106 extragalactic objects in bins of 3° × 3° and present a table of proper motion corrections.« less
Millicharge or decay: a critical take on Minimal Dark Matter
Nobile, Eugenio Del; Nardecchia, Marco; Panci, Paolo
2016-04-26
Minimal Dark Matter (MDM) is a theoretical framework highly appreciated for its minimality and yet its predictivity. Of the two only viable candidates singled out in the original analysis, the scalar eptaplet has been found to decay too quickly to be around today, while the fermionic quintuplet is now being probed by indirect Dark Matter (DM) searches. It is therefore timely to critically review the MDM paradigm, possibly pointing out generalizations of this framework. We propose and explore two distinct directions. One is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates whichmore » are part of SU(2)L multiplets with integer isospin. Another possibility is to lower the cutoff of the model, which was originally fixed at the Planck scale, to allow for DM decays. We find new viable MDM candidates and study their phenomenology in detail.« less
Multiscaling and clustering of volatility
NASA Astrophysics Data System (ADS)
Pasquini, Michele; Serva, Maurizio
1999-07-01
The dynamics of prices in stock markets has been studied intensively both experimentally (data analysis) and theoretically (models). Nevertheless, while the distribution of returns of the most important indices is known to be a truncated Lévy, the behaviour of volatility correlations is still poorly understood. What is well known is that absolute returns have memory on a long time range, this phenomenon is known in financial literature as clustering of volatility. In this paper we show that volatility correlations are power laws with a non-unique scaling exponent. This kind of multiscale phenomenology is known to be relevant in fully developed turbulence and in disordered systems and it is pointed out here for the first time for a financial series. In our study we consider the New York Stock Exchange (NYSE) daily index, from January 1966 to June 1998, for a total of 8180 working days.
Deutsch, Diana; Henthorn, Trevor; Marvin, Elizabeth; Xu, HongShuai
2006-02-01
Absolute pitch is extremely rare in the U.S. and Europe; this rarity has so far been unexplained. This paper reports a substantial difference in the prevalence of absolute pitch in two normal populations, in a large-scale study employing an on-site test, without self-selection from within the target populations. Music conservatory students in the U.S. and China were tested. The Chinese subjects spoke the tone language Mandarin, in which pitch is involved in conveying the meaning of words. The American subjects were nontone language speakers. The earlier the age of onset of musical training, the greater the prevalence of absolute pitch; however, its prevalence was far greater among the Chinese than the U.S. students for each level of age of onset of musical training. The findings suggest that the potential for acquiring absolute pitch may be universal, and may be realized by enabling infants to associate pitches with verbal labels during the critical period for acquisition of features of their native language.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, J., E-mail: radiant@ferrodevices.com; Chapman, S., E-mail: radiant@ferrodevices.com
Piezoresponse Force Microscopy (PFM) is a popular tool for the study of ferroelectric and piezoelectric materials at the nanometer level. Progress in the development of piezoelectric MEMS fabrication is highlighting the need to characterize absolute displacement at the nanometer and Ångstrom scales, something Atomic Force Microscopy (AFM) might do but PFM cannot. Absolute displacement is measured by executing a polarization measurement of the ferroelectric or piezoelectric capacitor in question while monitoring the absolute vertical position of the sample surface with a stationary AFM cantilever. Two issues dominate the execution and precision of such a measurement: (1) the small amplitude ofmore » the electrical signal from the AFM at the Ångstrom level and (2) calibration of the AFM. The authors have developed a calibration routine and test technique for mitigating the two issues, making it possible to use an atomic force microscope to measure both the movement of a capacitor surface as well as the motion of a micro-machine structure actuated by that capacitor. The theory, procedures, pitfalls, and results of using an AFM for absolute piezoelectric measurement are provided.« less
NASA Astrophysics Data System (ADS)
Breytenbach, A.
2016-10-01
Conducted in the City of Tshwane, South Africa, this study set about to test the accuracy of DSMs derived from different remotely sensed data locally. VHR digital mapping camera stereo-pairs, tri-stereo imagery collected by a Pléiades satellite and data detected from the Tandem-X InSAR satellite configuration were fundamental in the construction of seamless DSM products at different postings, namely 2 m, 4 m and 12 m. The three DSMs were sampled against independent control points originating from validated airborne LiDAR data. The reference surfaces were derived from the same dense point cloud at grid resolutions corresponding to those of the samples. The absolute and relative positional accuracies were computed using well-known DEM error metrics and accuracy statistics. Overall vertical accuracies were also assessed and compared across seven slope classes and nine primary land cover classes. Although all three DSMs displayed significantly more vertical errors where solid waterbodies, dense natural and/or alien woody vegetation and, in a lesser degree, urban residential areas with significant canopy cover were encountered, all three surpassed their expected positional accuracies overall.
Gardiner, T D; Coleman, M; Browning, H; Tallis, L; Ptashnik, I V; Shine, K P
2012-06-13
Solar-pointing Fourier transform infrared (FTIR) spectroscopy offers the capability to measure both the fine scale and broadband spectral structure of atmospheric transmission simultaneously across wide spectral regions. It is therefore suited to the study of both water vapour monomer and continuum absorption behaviours. However, in order to properly address this issue, it is necessary to radiatively calibrate the FTIR instrument response. A solar-pointing high-resolution FTIR spectrometer was deployed as part of the 'Continuum Absorption by Visible and Infrared radiation and its Atmospheric Relevance' (CAVIAR) consortium project. This paper describes the radiative calibration process using an ultra-high-temperature blackbody and the consideration of the related influence factors. The result is a radiatively calibrated measurement of the solar irradiation at the ground across the IR region from 2000 to 10 000 cm(-1) with an uncertainty of between 3.3 and 5.9 per cent. This measurement is shown to be in good general agreement with a radiative-transfer model. The results from the CAVIAR field measurements are being used in ongoing studies of atmospheric absorbers, in particular the water vapour continuum.
The Theory of Intelligence and Its Measurement
ERIC Educational Resources Information Center
Jensen, A. R.
2011-01-01
Mental chronometry (MC) studies cognitive processes measured by time. It provides an absolute, ratio scale. The limitations of instrumentation and statistical analysis caused the early studies in MC to be eclipsed by the "paper-and-pencil" psychometric tests started by Binet. However, they use an age-normed, rather than a ratio scale, which…
Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies
Papanikolaou, Panagiotis N.; Christidi, Georgia D.; Ioannidis, John P.A.
2006-01-01
Background Information on major harms of medical interventions comes primarily from epidemiologic studies performed after licensing and marketing. Comparison with data from large-scale randomized trials is occasionally feasible. We compared evidence from randomized trials with that from epidemiologic studies to determine whether they give different estimates of risk for important harms of medical interventions. Methods We targeted well-defined, specific harms of various medical interventions for which data were already available from large-scale randomized trials (> 4000 subjects). Nonrandomized studies involving at least 4000 subjects addressing these same harms were retrieved through a search of MEDLINE. We compared the relative risks and absolute risk differences for specific harms in the randomized and nonrandomized studies. Results Eligible nonrandomized studies were found for 15 harms for which data were available from randomized trials addressing the same harms. Comparisons of relative risks between the study types were feasible for 13 of the 15 topics, and of absolute risk differences for 8 topics. The estimated increase in relative risk differed more than 2-fold between the randomized and nonrandomized studies for 7 (54%) of the 13 topics; the estimated increase in absolute risk differed more than 2-fold for 5 (62%) of the 8 topics. There was no clear predilection for randomized or nonrandomized studies to estimate greater relative risks, but usually (75% [6/8]) the randomized trials estimated larger absolute excess risks of harm than the nonrandomized studies did. Interpretation Nonrandomized studies are often conservative in estimating absolute risks of harms. It would be useful to compare and scrutinize the evidence on harms obtained from both randomized and nonrandomized studies. PMID:16505459
NASA Astrophysics Data System (ADS)
Fischer, Ulrich; Celia, Michael A.
1999-04-01
Functional relationships for unsaturated flow in soils, including those between capillary pressure, saturation, and relative permeabilities, are often described using analytical models based on the bundle-of-tubes concept. These models are often limited by, for example, inherent difficulties in prediction of absolute permeabilities, and in incorporation of a discontinuous nonwetting phase. To overcome these difficulties, an alternative approach may be formulated using pore-scale network models. In this approach, the pore space of the network model is adjusted to match retention data, and absolute and relative permeabilities are then calculated. A new approach that allows more general assignments of pore sizes within the network model provides for greater flexibility to match measured data. This additional flexibility is especially important for simultaneous modeling of main imbibition and drainage branches. Through comparisons between the network model results, analytical model results, and measured data for a variety of both undisturbed and repacked soils, the network model is seen to match capillary pressure-saturation data nearly as well as the analytical model, to predict water phase relative permeabilities equally well, and to predict gas phase relative permeabilities significantly better than the analytical model. The network model also provides very good estimates for intrinsic permeability and thus for absolute permeabilities. Both the network model and the analytical model lost accuracy in predicting relative water permeabilities for soils characterized by a van Genuchten exponent n≲3. Overall, the computational results indicate that reliable predictions of both relative and absolute permeabilities are obtained with the network model when the model matches the capillary pressure-saturation data well. The results also indicate that measured imbibition data are crucial to good predictions of the complete hysteresis loop.
High heat flux measurements and experimental calibrations/characterizations
NASA Technical Reports Server (NTRS)
Kidd, Carl T.
1992-01-01
Recent progress in techniques employed in the measurement of very high heat-transfer rates in reentry-type facilities at the Arnold Engineering Development Center (AEDC) is described. These advances include thermal analyses applied to transducer concepts used to make these measurements; improved heat-flux sensor fabrication methods, equipment, and procedures for determining the experimental time response of individual sensors; performance of absolute heat-flux calibrations at levels above 2,000 Btu/cu ft-sec (2.27 kW/cu cm); and innovative methods of performing in-situ run-to-run characterizations of heat-flux probes installed in the test facility. Graphical illustrations of the results of extensive thermal analyses of the null-point calorimeter and coaxial surface thermocouple concepts with application to measurements in aerothermal test environments are presented. Results of time response experiments and absolute calibrations of null-point calorimeters and coaxial thermocouples performed in the laboratory at intermediate to high heat-flux levels are shown. Typical AEDC high-enthalpy arc heater heat-flux data recently obtained with a Calspan-fabricated null-point probe model are included.
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low costs. Unfortunately the measurement of the camera's orientation with the necessary rate and precision is related with large effort, unless extensive camera stabilization is used. But also stabilization implicates high costs, weight, and power consumption. This contribution shows that it is possible to completely derive the absolute exterior orientation of an unstabilized line camera from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can reliably be determined using the SURF operator. Together with the position measurements these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested at a flight with the DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation the measurements of a high-end navigation system and ground control points are used.
Communicating data about the benefits and harms of treatment: a randomized trial.
Woloshin, Steven; Schwartz, Lisa M
2011-07-19
Despite limited evidence, it is often asserted that natural frequencies (for example, 2 in 1000) are the best way to communicate absolute risks. To compare comprehension of treatment benefit and harm when absolute risks are presented as natural frequencies, percents, or both. Parallel-group randomized trial with central allocation and masking of investigators to group assignment, conducted through an Internet survey in September 2009. (ClinicalTrials.gov registration number: NCT00950014) National sample of U.S. adults randomly selected from a professional survey firm's research panel of about 30,000 households. 2944 adults aged 18 years or older (all with complete follow-up). Tables presenting absolute risks in 1 of 5 numeric formats: natural frequency (x in 1000), variable frequency (x in 100, x in 1000, or x in 10,000, as needed to keep the numerator >1), percent, percent plus natural frequency, or percent plus variable frequency. Comprehension as assessed by 18 questions (primary outcome) and judgment of treatment benefit and harm. The average number of comprehension questions answered correctly was lowest in the variable frequency group and highest in the percent group (13.1 vs. 13.8; difference, 0.7 [95% CI, 0.3 to 1.1]). The proportion of participants who "passed" the comprehension test (≥13 correct answers) was lowest in the natural and variable frequency groups and highest in the percent group (68% vs. 73%; difference, 5 percentage points [CI, 0 to 10 percentage points]). The largest format effect was seen for the 2 questions about absolute differences: the proportion correct in the natural frequency versus percent groups was 43% versus 72% (P < 0.001) and 73% versus 87% (P < 0.001). Even when data were presented in the percent format, one third of participants failed the comprehension test. Natural frequencies are not the best format for communicating the absolute benefits and harms of treatment. The more succinct percent format resulted in better comprehension: Comprehension was slightly better overall and notably better for absolute differences. Attorney General Consumer and Prescriber Education grant program, the Robert Wood Johnson Pioneer Program, and the National Cancer Institute.
Age-corrected reference values for the Heidelberg multi-color anomaloscope.
Rüfer, Florian; Sauter, Benno; Klettner, Alexa; Göbel, Katja; Flammer, Josef; Erb, Carl
2012-09-01
To determine reference values for the HMC anomaloscope (Heidelberg multi-color anomaloscope) of healthy subjects. One hundred and thirteen healthy subjects were divided into four age groups: <20 years of age (ten female, five male), 20-39 years of age (23 female, 15 male), 40-59 years of age (23 female, ten male) and >60 years of age (nine female, 18 male). Match midpoint, matching range (MR) and anomaly quotient (AQ), according to the Moreland equation [blue (436 nm) + blue-green (490 nm) = cyan (480 nm) + yellow (589 nm)] and according to the Rayleigh equation [green (546 nm) + red (671 nm) = yellow (589 nm)] were determined. The neutral adaptation was done showing white light every 5 seconds in absolute mode and every 15 seconds in relative mode. The mean match midpoint according to the Rayleigh equation was 43.9 ± 2.6 scale units in absolute mode. It was highest between 20-39 years (45.2 ± 2.2) and lowest in subjects >60 years of age (42.2 ± 2.2). The mean MR in absolute mode was 3.1 ± 3.5 scale units with a maximum >60 years (4.4 ± 4.4). The MR in relative mode was between 1.6 ± 1.9 (20-39 years) and 4.4 ± 3.8 (>60 years). The resulting mean AQ was 1.01 ± 0.15 in both modes. The mean match midpoint of the Moreland equation was 51.0 ± 5.2 scale units in absolute mode. It was highest between 20-39 years (52.5 ± 5.7), and lowest in subjects >60 years of age (48.7 ± 3.6). The mean MR according to the Moreland equation was lower in absolute mode (13.4 ± 15.6) than in relative mode (16.2 ± 15.2). The mean resulting AQ was 1.02 ± 0.21 in both modes. The values of this study can be used as references for the diagnosis of red-green and blue perception impairment with the HMC anomaloscope.
Frahm Olsen, Mette; Bjerre, Eik; Hansen, Maria Damkjær; Tendal, Britta; Hilden, Jørgen; Hróbjartsson, Asbjørn
2018-05-21
The minimum clinically important difference (MCID) is used to interpret the relevance of treatment effects, e.g., when developing clinical guidelines, evaluating trial results or planning sample sizes. There is currently no agreement on an appropriate MCID in chronic pain and little is known about which contextual factors cause variation. This is a systematic review. We searched PubMed, EMBASE, and Cochrane Library. Eligible studies determined MCID for chronic pain based on a one-dimensional pain scale, a patient-reported transition scale of perceived improvement, and either a mean change analysis (mean difference in pain among minimally improved patients) or a threshold analysis (pain reduction associated with best sensitivity and specificity for identifying minimally improved patients). Main results were descriptively summarized due to considerable heterogeneity, which were quantified using meta-analyses and explored using subgroup analyses and metaregression. We included 66 studies (31.254 patients). Median absolute MCID was 23 mm on a 0-100 mm scale (interquartile range [IQR] 12-39) and median relative MCID was 34% (IQR 22-45) among studies using the mean change approach. In both cases, heterogeneity was very high: absolute MCID I 2 = 99% and relative MCID I 2 = 96%. High variation was also seen among studies using the threshold approach: median absolute MCID was 20 mm (IQR 15-30) and relative MCID was 32% (IQR 15-41). Absolute MCID was strongly associated with baseline pain, explaining approximately two-thirds of the variation, and to a lesser degree with the operational definition of minimum pain relief and clinical condition. A total of 15 clinical and methodological factors were assessed as possible causes for variation in MCID. MCID for chronic pain relief vary considerably. Baseline pain is strongly associated with absolute, but not relative, measures. To a much lesser degree, MCID is also influenced by the operational definition of relevant pain relief and possibly by clinical condition. Explicit and conscientious reflections on the choice of an MCID are required when classifying effect sizes as clinically important or trivial. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klemt, M.
Relative oscillator strengths of 139 Til lines were determined from emission measurements of a three chamber electric arc burning in an argon atmosphere. Introducing a small admixture of titanium chloride into the center of the arc, spectra of titanium could be observed end-on with no self-absorption and no selfreversal of the measured lines. The relative oscillator strengths were obtained from the Til line intensities and the measured arc temperature. Using absolute oscillator strengths of three resonance lines which had been measured by Reinke (1967), and several life time measurements from Hese (1970), Witt et al. (1971) and Andersen and Sorensenmore » (1972), the relative oscillator strengths were converted to an absolute scale. The accuracy of these absolute values is in the range of 20% to 40%. (auth)« less
Spectral irradiance calibration in the infrared. I - Ground-based and IRAS broadband calibrations
NASA Technical Reports Server (NTRS)
Cohen, Martin; Walker, Russell G.; Barlow, Michael J.; Deacon, John R.
1992-01-01
Absolutely calibrated versions of realistic model atmosphere calculations for Sirius and Vega by Kurucz (1991) are presented and used as a basis to offer a new absolute calibration of infrared broad and narrow filters. In-band fluxes for Vega are obtained and defined to be zero magnitude at all wavelengths shortward of 20 microns. Existing infrared photometry is used differentially to establish an absolute scale of the new Sirius model, yielding an angular diameter within 1 sigma of the mean determined interferometrically by Hanbury Brown et al. (1974). The use of Sirius as a primary infrared stellar standard beyond the 20 micron region is suggested. Isophotal wavelengths and monochromatic flux densities for both Vega and Sirius are tabulated.
NASA Astrophysics Data System (ADS)
Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping
2017-02-01
Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neuro biology research and has demonstrated the ability to quantify cerebral blood flow velocity at a high accuracy. However, the measurement of total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it not only relates to the properties of the laser and the scattering particles, but also relates to the geometry of both directions of the laser beam and the flow. In this paper, focusing on the analysis of cerebral hemodynamics, we presents a method to quantify the total absolute blood flow velocity in middle cerebral artery (MCA) based on volumetric vessel reconstruction from pure DOCT images. A modified region growing segmentation method is first used to localize the MCA on successive DOCT B-scan images. Vessel skeletonization, followed by an averaging gradient angle calculation method, is then carried out to obtain Doppler angles along the entire MCA. Once the Doppler angles are determined, the absolute blood flow velocity of each position on the MCA is easily found. Given a seed point position on the MCA, our approach could achieve automatic quantification of the fully distributed absolute BFV. Based on experiments conducted using a swept-source optical coherence tomography system, our approach could achieve automatic quantification of the fully distributed absolute BFV across different vessel branches in the rodent brain.
Geolocation Accuracy Evaluations of OrbView-3, EROS-A, and SPOT-5 Imagery
NASA Technical Reports Server (NTRS)
Bresnahan, Paul
2007-01-01
This viewgraph presentation evaluates absolute geolocation accuracy of OrbView-3, EROS-A, and SPOT-5 by comparing test imagery-derived ground coordinates to Ground Control Points using SOCET set photogrammetric software.
Absolute Determination of High DC Voltages by Means of Frequency Measurement
NASA Astrophysics Data System (ADS)
Peier, Dirk; Schulz, Bernd
1983-01-01
A novel absolute measuring procedure is presented for the definition of fixed points of the voltage in the 100 kV range. The method is based on transit time measurements with accelerated electrons. By utilizing the selective interaction of a monoenergetic electron beam with the electromagnetic field of a special cavity resonator, the voltage is referred to fundamental constants and the base unit second. Possible balance voltages are indicated by a current detector. Experimental investigations are carried out with resonators in the normal conducting range. With a copper resonator operating at the temperature of boiling nitrogen (77 K), the relative uncertainty of the voltage points is estimated to be +/- 4 × 10-4. The technically realizable uncertainty can be reduced to +/- 1 × 10-5 by the proposed application of a superconducting niobium resonator. Thus this measuring device becomes suitable as a primary standard for the high-voltage range.
The LPSP instrument on OSO 8. II - In-flight performance and preliminary results
NASA Technical Reports Server (NTRS)
Bonnet, R. M.; Lemaire, P.; Vial, J. C.; Artzner, G.; Gouttebroze, P.; Jouchoux, A.; Vidal-Madjar, A.; Leibacher, J. W.; Skumanich, A.
1978-01-01
The paper describes the in-flight performance for the first 18 months of operation of the LPSP (Laboratoire de Physique Stellaire et Planetaire) instrument incorporated in the OSO 8 launched June 1975. By means of the instrument, an absolute pointing accuracy of nearly one second was achieved in orbit during real-time operations. The instrument uses a Cassegrain telescope and a spectrometer simultaneously observing six wavelengths. In-flight performance is discussed with attention to angular resolution, spectral resolution, dispersion and grating mechanism (spectral scanner) stability, scattered light background and dark current, photometric standardization, and absolute calibration. Real-time operation and problems are considered with reference to pointing system problems, target acquisition, and L-alpha modulation. Preliminary results involving the observational program, quiet sun and chromospheric studies, quiet chromospheric oscillation and transients, sunspots and active regions, prominences, and aeronomy investigations are reported.
Electron line shape and transmission function of the KATRIN monitor spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slezák, M.
Knowledge of the neutrino mass is of particular interest in modern neutrino physics. Besides the neutrinoless double beta decay and cosmological observation information about the neutrino mass is obtained from single beta decay by observing the shape of the electron spectrum near the endpoint. The KATRIN β decay experiment aims to push the limit on the effective electron antineutrino mass down to 0.2 eV/c{sup 2}. To reach this sensitivity several systematic effects have to be under control. One of them is the fluctuations of the absolute energy scale, which therefore has to be continuously monitored at very high precision. Thismore » paper shortly describes KATRIN, the technique for continuous monitoring of the absolute energy scale and recent improvements in analysis of the monitoring data.« less
Empirical Prediction of Aircraft Landing Gear Noise
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Guo, Yue-Ping
2005-01-01
This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.
Lombardi, Raúl; Nin, Nicolás; Lorente, José A; Frutos-Vivar, Fernando; Ferguson, Niall D; Hurtado, Javier; Apezteguia, Carlos; Desmery, Pablo; Raymondos, Konstantinos; Tomicic, Vinko; Cakar, Nahit; González, Marco; Elizalde, José; Nightingale, Peter; Abroug, Fekri; Jibaja, Manuel; Arabi, Yaseen; Moreno, Rui; Matamis, Dimitros; Anzueto, Antonio; Esteban, Andrés
2011-07-01
The aim of our study was to assess the new diagnostic criteria of acute kidney injury (AKI) proposed by the Acute Kidney Injury Network (AKIN) in a large cohort of mechanically ventilated patients. This is a prospective observational cohort study enrolling 2783 adult intensive care unit patients under mechanical ventilation (MV) with data on serum creatinine concentration (SCr) in the first 48 hours. The absolute and the relative AKIN diagnostic criteria (changes in SCr ≥ 0.3 mg/dl or ≥ 50% over the first 48 hours of MV, respectively) were analyzed separately. In addition, patients were classified into three groups according to their change in SCr (ΔSCr) over the first day on MV (ΔSCr): group 1, ΔSCr ≤ -0.3 mg/dl; group 2, ΔSCr between -0.3 and +0.29 mg/dl; and group 3, ΔSCr ≥ +0.3 mg/dl). The primary end point was in-hospital mortality, and secondary end points were intensive care unit and hospital length of stay, and duration of MV. Of 2783 patients, 803 (28.8%) had AKI according to both criteria: 431 only absolute (AKI(A)), 362 both relative and absolute (AKI(R+A)), and 10 only relative. The relative criterion identified more patients when baseline SCr (SCr₀) was <0.9 mg/dl and the absolute when SCr₀ was >1.5 mg/dl. The diagnosis of AKI was associated with mortality. Our study confirms the validity of the AKIN criteria in a population of mechanically patients and the criteria's relationship with the baseline SCr.
Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model
NASA Astrophysics Data System (ADS)
Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.
2017-10-01
The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm, at lunar phase angles of 6.6° and 16.9° , are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between +/-80° and libration angles from 7° to 51° . The uncertainties in the empirically corrected output from the ROLO model are approximately 1 % from 440 nm to 865 nm and increase to almost 3 % at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO model.
An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun
2014-05-01
Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaic is a indispensable link in surface measurement and digital terrain map generation. With respect to the mosaic problem of the local unorganized cloud points with rude registration and mass mismatched points, a new mosaic method for 3-D surface based on RANSAC is proposed. Every circular of this method is processed sequentially by random sample with additional shape constraint, data normalization of cloud points, absolute orientation, data denormalization of cloud points, inlier number statistic, etc. After N random sample trials the largest consensus set is selected, and at last the model is re-estimated using all the points in the selected subset. The minimal subset is composed of three non-colinear points which form a triangle. The shape of triangle is considered in random sample selection in order to make the sample selection reasonable. A new coordinate system transformation algorithm presented in this paper is used to avoid the singularity. The whole rotation transformation between the two coordinate systems can be solved by twice rotations expressed by Euler angle vector, each rotation has explicit physical means. Both simulation and real data are used to prove the correctness and validity of this mosaic method. This method has better noise immunity due to its robust estimation property, and has high accuracy as the shape constraint is added to random sample and the data normalization added to the absolute orientation. This method is applicable for high precision measurement of three-dimensional surface and also for the 3-D terrain mosaic.
Gravity and the geoid in the Nepal Himalaya
NASA Technical Reports Server (NTRS)
Bilham, Roger
1992-01-01
Materials within the Himalaya are rising due to convergence between India and Asia. If the rate of erosion is comparable to the rate of uplift the mean surface elevation will remain constant. Any slight imbalance in these two processes will lead to growth or attrition of the Himalaya. The process of uplift of materials within the Himalaya coupled with surface erosion is similar to the advance of a glacier into a region of melting. If the melting rate exceeds the rate of downhill motion of the glacier then the terminus of the glacier will receed up-valley despite the downhill motion of the bulk of the glacier. Thus although buried rocks, minerals and surface control points in the Himalaya are undoubtably rising, the growth or collapse of the Himalaya depends on the erosion rate which is invisible to geodetic measurements. Erosion rates are currently estimated from suspended sediment loads in rivers in the Himalaya. These typically underestimate the real erosion rate since bed-load is not measured during times of heavy flood, and it is difficult to integrate widely varying suspended load measurements over many years. An alternative way to measure erosion rate is to measure the rate of change of gravity in a region of uplift. If a control point moves vertically it should be accompanied by a reduction in gravity as the point moves away from the Earth's center of mass. There is a difference in the change of gravity between uplift with and without erosion corresponding to the difference between the free-air gradient and the gradient in the acceleration due to gravity caused by a corresponding thickness of rock. Essentially gravity should change precisely in accord with a change in elevation of the point in a free-air gradient if erosion equals uplift rate. We were funded by NASA to undertake a measurement of absolute gravity simultaneously with measurements of GPS height within the Himalaya. Since both absolute gravity and time are known in an absolute sense to 1 part in 10(exp 10) it is possible to estimate gravity with a precision of 0.1 mu gal. Known systematic errors reduce the measurement to an absolute uncertainty of 6 mu gal. The free air gradient at the point of measurement is typically about 3 mu gals/cm. At Simikot where our experiment was conducted we determined a vertical gravity gradient of 4.4 mu gals/cm.
Paybins, Katherine S.
2003-01-01
Characteristics of perennial and intermittent headwater streams were documented in the mountaintop removal coal-mining region of southern West Virginia in 2000?01. The perennial-flow origin points were identified in autumn during low base-flow conditions. The intermittent-flow origin points were identified in late winter and early spring during high base-flow conditions. Results of this investigation indicate that the median drainage area upstream of the origin of intermittent flow was 14.5 acres, and varied by an absolute median of 3.4 acres between the late winter measurements of 2000 and early spring measurements of 2001. Median drainage area in the northeastern part of the study unit was generally larger (20.4 acres), with a lower median basin slope (322 feet per mile) than the southwestern part of the study unit (12.9 acres and 465 feet per mile, respectively). Both of the seasons preceding the annual intermittent flow visits were much drier than normal. The West Virginia Department of Environmental Protection reports that the median size of permitted valley fills in southern West Virginia is 12.0 acres, which is comparable to the median drainage area upstream of the ephemeralintermittent flow point (14.5 acres). The maximum size of permitted fills (480 acres), however, is more than 10 times the observed maximum drainage area upstream of the ephemeral-intermittent flow point (45.3 acres), although a single valley fill may cover more than one drainage area. The median drainage area upstream of the origin of perennial flow was 40.8 acres, and varied by an absolute median of 18.0 acres between two annual autumn measurements. Only basins underlain with mostly sandstone bedrock produced perennial flow. Perennial points in the northeast part of the study unit had a larger median drainage area (70.0 acres) and a smaller median basin slope (416 feet per mile) than perennial points in the southwest part of the study unit (35.5 acres and 567 feet per mile, respectively). Some streams were totally dry for one or both of the annual October visits. Both of the seasons preceding the October visits had near normal to higher than normal precipitation. These dry streams were adjacent to perennial streams draining similarly sized areas, suggesting that local conditions at a firstorder- stream scale determine whether or not there will be perennial flow. Headwater-flow rates varied little from year to year, but there was some variation between late winter and early spring and autumn. Flow rates at intermittent points of flow origin ranged from 0.001 to 0.032 cubic feet per second, with a median of 0.017 cubic feet per second. Flow rates at perennial points of flow origin ranged from 0.001 to 0.14 cubic feet per second, with a median of 0.003 cubic feet per second.
Boulton, David W.; Kasichayanula, Sreeneeranj; Keung, Chi Fung (Anther); Arnold, Mark E.; Christopher, Lisa J.; Xu, Xiaohui (Sophia); LaCreta, Frank
2013-01-01
Aim To determine the absolute oral bioavailability (Fp.o.) of saxagliptin and dapagliflozin using simultaneous intravenous 14C‐microdose/therapeutic oral dosing (i.v.micro + oraltherap). Methods The Fp.o. values of saxagliptin and dapagliflozin were determined in healthy subjects (n = 7 and 8, respectively) following the concomitant administration of single i.v. micro doses with unlabelled oraltherap doses. Accelerator mass spectrometry and liquid chromatography‐tandem mass spectrometry were used to quantify the labelled and unlabelled drug, respectively. Results The geometric mean point estimates (90% confidence interval) Fp.o. values for saxagliptin and dapagliflozin were 50% (48, 53%) and 78% (73, 83%), respectively. The i.v.micro had similar pharmacokinetics to oraltherap. Conclusions Simultaneous i.v.micro + oraltherap dosing is a valuable tool to assess human absolute bioavailability. PMID:22823746
OARE flight maneuvers and calibration measurements on STS-58
NASA Technical Reports Server (NTRS)
Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.; Larman, Kevin T.
1994-01-01
The Orbital Acceleration Research Experiment (OARE), which has flown on STS-40, STS-50, and STS-58, contains a three axis accelerometer with a single, nonpendulous, electrostatically suspended proofmass which can resolve accelerations to the nano-g level. The experiment also contains a full calibration station to permit in situ bias and scale factor calibration. This on-orbit calibration capability eliminates the large uncertainty of ground-based calibrations encountered with accelerometers flown in the past on the orbiter, thus providing absolute acceleration measurement accuracy heretofore unachievable. This is the first time accelerometer scale factor measurements have been performed on orbit. A detailed analysis of the calibration process is given along with results of the calibration factors from the on-orbit OARE flight measurements on STS-58. In addition, the analysis of OARE flight maneuver data used to validate the scale factor measurements in the sensor's most sensitive range is also presented. Estimates on calibration uncertainties are discussed. This provides bounds on the STS-58 absolute acceleration measurements for future applications.
Brew, Christopher J; Simpson, Philip M; Whitehouse, Sarah L; Donnelly, William; Crawford, Ross W; Hubble, Matthew J W
2012-04-01
We describe a scaling method for templating digital radiographs using conventional acetate templates independent of template magnification without the need for a calibration marker. The mean magnification factor for the radiology department was determined (119.8%; range, 117%-123.4%). This fixed magnification factor was used to scale the radiographs by the method described. Thirty-two femoral heads on postoperative total hip arthroplasty radiographs were then measured and compared with the actual size. The mean absolute accuracy was within 0.5% of actual head size (range, 0%-3%) with a mean absolute difference of 0.16 mm (range, 0-1 mm; SD, 0.26 mm). Intraclass correlation coefficient showed excellent reliability for both interobserver and intraobserver measurements with intraclass correlation coefficient scores of 0.993 (95% CI, 0.988-0.996) for interobserver measurements and intraobserver measurements ranging between 0.990 and 0.993 (95% CI, 0.980-0.997). Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
Determination of quality factors by microdosimetry
NASA Astrophysics Data System (ADS)
Al-Affan, I. A. M.; Watt, D. E.
1987-03-01
The application of microdose parameters for the specification of a revised scale of quality factors which would be applicable at low doses and dose rates is examined in terms of an original proposal by Rossi. Two important modifications are suggested to enable an absolute scale of quality factors to be constructed. Allowance should be made to allow for the dependence of the saturation threshold of lineal energy on the type of heavy charged particle. Also, an artificial saturation threshold should be introduced for electron tracks as a mean of modifying the measurements made in the microdosimeter to the more realistic site sizes of nanometer dimensions. The proposed absolute scale of quality factors nicely encompasses the high RBEs of around 3 observed at low doses for tritium β rays and is consistent with the recent recommendation of the ICRP that the quality factor for fast neutrons be increased by a factor of two, assuming that there is no biological repair for the reference radiation.
Chen, Lei; Peeters, Anna; Magliano, Dianna J; Shaw, Jonathan E; Welborn, Timothy A; Wolfe, Rory; Zimmet, Paul Z; Tonkin, Andrew M
2007-12-01
Framingham risk functions are widely used for prediction of future cardiovascular disease events. They do not, however, include anthropometric measures of overweight or obesity, now considered a major cardiovascular disease risk factor. We aimed to establish the most appropriate anthropometric index and its optimal cutoff point for use as an ancillary measure in clinical practice when identifying people with increased absolute cardiovascular risk estimates. Analysis of a population-based, cross-sectional survey was carried out. The 1991 Framingham prediction equations were used to compute 5 and 10-year risks of cardiovascular or coronary heart disease in 7191 participants from the Australian Diabetes, Obesity and Lifestyle Study (1999-2000). Receiver operating characteristic curve analysis was used to compare measures of body mass index (BMI), waist circumference, and waist-to-hip ratio in identifying participants estimated to be at 'high', or at 'intermediate or high' absolute risk. After adjustment for BMI and age, waist-to-hip ratio showed stronger correlation with absolute risk estimates than waist circumference. The areas under the receiver operating characteristic curve for waist-to-hip ratio (0.67-0.70 in men, 0.64-0.74 in women) were greater than those for waist circumference (0.60-0.65, 0.59-0.71) or BMI (0.52-0.59, 0.53-0.66). The optimal cutoff points of BMI, waist circumference and waist-to-hip ratio to predict people at 'high', or at 'intermediate or high' absolute risk estimates were 26 kg/m2, 95 cm and 0.90 in men, and 25-26 kg/m2, 80-85 cm and 0.80 in women, respectively. Measurement of waist-to-hip ratio is more useful than BMI or waist circumference in the identification of individuals estimated to be at increased risk for future primary cardiovascular events.
The absolute magnitudes of RR Lyraes from HIPPARCOS parallaxes and proper motions
NASA Astrophysics Data System (ADS)
Fernley, J.; Barnes, T. G.; Skillen, I.; Hawley, S. L.; Hanley, C. J.; Evans, D. W.; Solano, E.; Garrido, R.
1998-02-01
We have used HIPPARCOS proper motions and the method of Statistical Parallax to estimate the absolute magnitude of RR Lyrae stars. In addition we used the HIPPARCOS parallax of RR Lyrae itself to determine it's absolute magnitude. These two results are in excellent agreement with each other and give a zero-point for the RR Lyrae M_v,[Fe/H] relation of 0.77+/-0.15 at [Fe/H]=-1.53. This zero-point is in good agreement with that obtained recently by several groups using Baade-Wesselink methods which, averaged over the results from the different groups, gives M_v = 0.73+/-0.14 at [Fe/H]=-1.53. Taking the HIPPARCOS based zero-point and a value of 0.18+/-0.03 for the slope of the M_v,[Fe/H] relation from the literature we find firstly, the distance modulus of the LMC is 18.26+/-0.15 and secondly, the mean age of the Globular Clusters is 17.4+/-3.0 GYrs. These values are compared with recent estimates based on other "standard candles" that have also been calibrated with HIPPARCOS data. It is clear that, in addition to astrophysical problems, there are also problems in the application of HIPPARCOS data that are not yet fully understood. Table 1, which contains the basic data for the RR Lyraes, is available only at CDS. It may be retrieved via anonymous FTP at cdsarc.u-strasbg.fr (130.79.128.5) or via the Web at http://cdsweb.u-strasbg.fr/Abstract.html
Yeo, Leonard L L; Paliwal, Prakash; Teoh, Hock L; Seet, Raymond C; Chan, Bernard P L; Wakerley, Benjamin; Liang, Shen; Rathakrishnan, Rahul; Chong, Vincent F; Ting, Eric Y S; Sharma, Vijay K
2013-11-01
Intravenously administered tissue plasminogen activator (IV tPA) remains the only approved therapeutic agent for arterial recanalization in acute ischemic stroke (AIS). Considerable proportion of AIS patients demonstrate changes in their neurologic status within the first 24 hours of intravenous thrombolysis with IV tPA. However, there are little available data on the course of clinical recovery in subacute 2- to 24-hour window and its impact. We evaluated whether neurologic improvement at 2 and 24 hours after IV tPA bolus can predict functional outcomes in AIS patients at 3 months. Data for consecutive AIS patients treated with IV tPA within 4.5 hours of symptom onset during 2007-2011 were prospectively entered in our thrombolyzed registry. National Institutes of Health Stroke Scale (NIHSS) scores were recorded before IV tPA bolus, at 2 and 24 hours. Early neurologic improvement (ENI) at 2 hours was defined as a reduction in NIHSS score by 10 or more points from baseline or an absolute score of 4 or less points at 2 hours. Continuous neurologic improvement (CNI) was defined as a reduction of NIHSS score by 8 or more points between 2 and 24 hours or an absolute score of 4 or less points at 24 hours. Favorable functional outcomes at 3 months were determined by modified Rankin Scale (mRS) score of 0-1. Of 2460 AIS patients admitted during the study period, 263 (10.7%) received IV tPA within the time window; median age was 64 years (range 19-92), with 63.9% being men, a median NIHSS score of 17 points (range 5-35), and a median onset-to-treatment time of 145 minutes (range 57-270). Overall, 130 (49.4%) thrombolyzed patients achieved an mRS score of 0-1 at 3 months. The female gender, age, and baseline NIHSS score were found to be significantly associated with CNI on univariate analysis. On multivariate analysis, NIHSS score at onset and female gender (odds ratio [OR]: 2.218, 95% confidence interval [CI]: 1.140-4.285; P=.024) were found to be independent predictors of CNI. Factors associated with favorable outcomes at 3 months on univariate analysis were younger age, female gender, hypertension, NIHSS score at onset, recanalization on transcranial Doppler (TCD) monitoring or repeat computed tomography (CT) angiography, ENI at 2 hours, and CNI. On multivariate analysis, NIHSS score at onset (OR per 1-point increase: .835, 95% CI: .751-.929, P<.001), 2-hour TCD recanalization (OR: 3.048, 95% CI: 1.537-6.046; P=.001), 24-hour CT angiographic recanalization (OR: 4.329, 95% CI: 2.382-9.974; P=.001), ENI at 2 hours (OR: 2.536, 95% CI: 1.321-5.102; P=.004), and CNI (OR: 7.253, 95% CI: 3.682-15.115; P<.001) were independent predictors of favorable outcomes at 3 months. Women are twice as likely to have CNI from the 2- to 24-hour period after IV tPA. ENI and CNI within the first 24 hours are strong predictors of favorable functional outcomes in thrombolyzed AIS patients. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Hemilä, Harri
2017-05-12
The relative scale has been used for decades in analysing binary data in epidemiology. In contrast, there has been a long tradition of carrying out meta-analyses of continuous outcomes on the absolute, original measurement, scale. The biological rationale for using the relative scale in the analysis of binary outcomes is that it adjusts for baseline variations; however, similar baseline variations can occur in continuous outcomes and relative effect scale may therefore be often useful also for continuous outcomes. The aim of this study was to determine whether the relative scale is more consistent with empirical data on treating the common cold than the absolute scale. Individual patient data was available for 2 randomized trials on zinc lozenges for the treatment of the common cold. Mossad (Ann Intern Med 125:81-8, 1996) found 4.0 days and 43% reduction, and Petrus (Curr Ther Res 59:595-607, 1998) found 1.77 days and 25% reduction, in the duration of colds. In both trials, variance in the placebo group was significantly greater than in the zinc lozenge group. The effect estimates were applied to the common cold distributions of the placebo groups, and the resulting distributions were compared with the actual zinc lozenge group distributions. When the absolute effect estimates, 4.0 and 1.77 days, were applied to the placebo group common cold distributions, negative and zero (i.e., impossible) cold durations were predicted, and the high level variance remained. In contrast, when the relative effect estimates, 43 and 25%, were applied, impossible common cold durations were not predicted in the placebo groups, and the cold distributions became similar to those of the zinc lozenge groups. For some continuous outcomes, such as the duration of illness and the duration of hospital stay, the relative scale leads to a more informative statistical analysis and more effective communication of the study findings. The transformation of continuous data to the relative scale is simple with a spreadsheet program, after which the relative scale data can be analysed using standard meta-analysis software. The option for the analysis of relative effects of continuous outcomes directly from the original data should be implemented in standard meta-analysis programs.
KaDonna Randolph
2010-01-01
The use of the geometric and arithmetic means for estimating tree crown diameter and crown cross-sectional area were examined for trees with crown width measurements taken at the widest point of the crown and perpendicular to the widest point of the crown. The average difference between the geometric and arithmetic mean crown diameters was less than 0.2 ft in absolute...
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models of multi linear regression (MLR) and Levenberg-Marquardt (LM) feed-forward neural network was examined to estimate the hourly dew point temperature. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and conceptual input of weather condition were employed as other input variables. The three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient ( {| {{{Log}}({{NS}})} |} ) were employed to evaluate the performances of the developed models. The results showed that applying wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to MLR model and the best performance was obtained by considering all potential input variables in terms of different evaluation criteria.
Katz, C M
1991-04-01
Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to details and solidly based scientific principles is absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.
A Numerical Fit of Analytical to Simulated Density Profiles in Dark Matter Haloes
NASA Astrophysics Data System (ADS)
Caimmi, R.; Marmo, C.; Valentinuzzi, T.
2005-06-01
Analytical and geometrical properties of generalized power-law (GPL) density profiles are investigated in detail. In particular, a one-to-one correspondence is found between mathematical parameters (a scaling radius, r_0, a scaling density, rho_0, and three exponents, alpha, beta, gamma), and geometrical parameters (the coordinates of the intersection of the asymptotes, x_C, y_C, and three vertical intercepts, b, b_beta, b_gamma, related to the curve and the asymptotes, respectively): (r_0,rho_0,alpha,beta,gamma) <--> (x_C,y_C,b,b_beta,b_gamma). Then GPL density profiles are compared with simulated dark haloes (SDH) density profiles, and nonlinear least-absolute values and least-squares fits involving the above mentioned five parameters (RFSM5 method) are prescribed. More specifically, the sum of absolute values or squares of absolute logarithmic residuals, R_i= log rhoSDH(r_i)-log rhoGPL(r_i), is evaluated on 10^5 points making a 5- dimension hypergrid, through a few iterations. The size is progressively reduced around a fiducial minimum, and superpositions on nodes of earlier hypergrids are avoided. An application is made to a sample of 17 SDHs on the scale of cluster of galaxies, within a flat LambdaCDM cosmological model (Rasia et al. 2004). In dealing with the mean SDH density profile, a virial radius, rvir, averaged over the whole sample, is assigned, which allows the calculation of the remaining parameters. Using a RFSM5 method provides a better fit with respect to other methods. The geometrical parameters, averaged over the whole sample of best fitting GPL density profiles, yield (alpha,beta,gamma) approx(0.6,3.1,1.0), to be compared with (alpha,beta,gamma)=(1,3,1), i.e. the NFW density profile (Navarro et al. 1995, 1996, 1997), (alpha,beta,gamma)=(1.5,3,1.5) (Moore et al. 1998, 1999), (alpha,beta,gamma)=(1,2.5,1) (Rasia et al. 2004); and, in addition, gamma approx 1.5 (Hiotelis 2003), deduced from the application of a RFSM5 method, but using a different definition of scaled radius, or concentration; and gamma approx 1.2-1.3 deduced from more recent high-resolution simulations (Diemand et al. 2004, Reed et al. 2005). No evident correlation is found between SDH dynamical state (relaxed or merging) and asymptotic inner slope of the fitting logarithmic density profile or (for SDH comparable virial masses) scaled radius. Mean values and standard deviations of some parameters are calculated, and in particular the decimal logarithm of the scaled radius, xivir, reads < log xivir >=0.74 and sigma_s log xivir=0.15-0.17, consistent with previous results related to NFW density profiles. It provides additional support to the idea, that NFW density profiles may be considered as a convenient way to parametrize SDH density profiles, without implying that it necessarily produces the best possible fit (Bullock et al. 2001). A certain degree of degeneracy is found in fitting GPL to SDH density profiles. If it is intrinsic to the RFSM5 method or it could be reduced by the next generation of high-resolution simulations, still remains an open question.
Patankar, S.; Gumbrell, E. T.; Robinson, T. S.; ...
2017-08-17
Here we report a new method using high stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at fastest sweep speeds. A stable, pulsed white light source based around the use of self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650nm). Calibrations performed with two Axis Photonique streak cameras using the Photonismore » P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is less than the static by up to an order of magnitude. We attribute this to the dynamic response of the phosphor being lower.« less
VUV photoionization cross sections of HO2, H2O2, and H2CO.
Dodson, Leah G; Shen, Linhan; Savee, John D; Eddingsaas, Nathan C; Welz, Oliver; Taatjes, Craig A; Osborn, David L; Sander, Stanley P; Okumura, Mitchio
2015-02-26
The absolute vacuum ultraviolet (VUV) photoionization spectra of the hydroperoxyl radical (HO2), hydrogen peroxide (H2O2), and formaldehyde (H2CO) have been measured from their first ionization thresholds to 12.008 eV. HO2, H2O2, and H2CO were generated from the oxidation of methanol initiated by pulsed-laser-photolysis of Cl2 in a low-pressure slow flow reactor. Reactants, intermediates, and products were detected by time-resolved multiplexed synchrotron photoionization mass spectrometry. Absolute concentrations were obtained from the time-dependent photoion signals by modeling the kinetics of the methanol oxidation chemistry. Photoionization cross sections were determined at several photon energies relative to the cross section of methanol, which was in turn determined relative to that of propene. These measurements were used to place relative photoionization spectra of HO2, H2O2, and H2CO on an absolute scale, resulting in absolute photoionization spectra.
Split-step eigenvector-following technique for exploring enthalpy landscapes at absolute zero.
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra
2006-03-16
The mapping of enthalpy landscapes is complicated by the coupling of particle position and volume coordinates. To address this issue, we have developed a new split-step eigenvector-following technique for locating minima and transition points in an enthalpy landscape at absolute zero. Each iteration is split into two steps in order to independently vary system volume and relative atomic coordinates. A separate Lagrange multiplier is used for each eigendirection in order to provide maximum flexibility in determining step sizes. This technique will be useful for mapping the enthalpy landscapes of bulk systems such as supercooled liquids and glasses.
Absolute spectrophotometry of Wolf-Rayet stars from 1200 to 7000 A - A cautionary tale
NASA Technical Reports Server (NTRS)
Garmany, C. D.; Conti, P. S.; Massey, P.
1984-01-01
It is demonstrated that absolute spectrophotometry of the continua of Wolf-Rayet stars may be obtained over the wavelength range 1200-7000 A using IUE and optical measurements. It is shown that the application of a 'standard' reddening law to the observed data gives spurious results in many cases. Additional UV extinction is apparently necessary and may well be circumstellar in origin. In such hot stars, the long-wavelength 'tail' of the emergent stellar continuum are measured. The inadequacy of previous attempts to determine intrinsic continua and effective temperatures of Wolf-Rayet stars is pointed out.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Desai, S.; Mohr, J. J.; Semler, D. R.
2012-09-20
The Blanco Cosmology Survey (BCS) is a 60 night imaging survey of {approx}80 deg{sup 2} of the southern sky located in two fields: ({alpha}, {delta}) = (5 hr, -55 Degree-Sign ) and (23 hr, -55 Degree-Sign ). The survey was carried out between 2005 and 2008 in griz bands with the Mosaic2 imager on the Blanco 4 m telescope. The primary aim of the BCS survey is to provide the data required to optically confirm and measure photometric redshifts for Sunyaev-Zel'dovich effect selected galaxy clusters from the South Pole Telescope and the Atacama Cosmology Telescope. We process and calibrate themore » BCS data, carrying out point-spread function-corrected model-fitting photometry for all detected objects. The median 10{sigma} galaxy (point-source) depths over the survey in griz are approximately 23.3 (23.9), 23.4 (24.0), 23.0 (23.6), and 21.3 (22.1), respectively. The astrometric accuracy relative to the USNO-B survey is {approx}45 mas. We calibrate our absolute photometry using the stellar locus in grizJ bands, and thus our absolute photometric scale derives from the Two Micron All Sky Survey, which has {approx}2% accuracy. The scatter of stars about the stellar locus indicates a systematic floor in the relative stellar photometric scatter in griz that is {approx}1.9%, {approx}2.2%, {approx}2.7%, and {approx}2.7%, respectively. A simple cut in the AstrOmatic star-galaxy classifier spread{sub m}odel produces a star sample with good spatial uniformity. We use the resulting photometric catalogs to calibrate photometric redshifts for the survey and demonstrate scatter {delta}z/(1 + z) = 0.054 with an outlier fraction {eta} < 5% to z {approx} 1. We highlight some selected science results to date and provide a full description of the released data products.« less
Techniques in Altitude Registration for Limb Scatter Instruments
NASA Astrophysics Data System (ADS)
Moy, L.; Jaross, G.; Bhartia, P. K.; Kramarova, N. A.
2017-12-01
One of the largest constraints to the retrieval of accurate ozone profiles from limb sounding sensors is altitude registration. As described in Moy et al. (2017) two methods applicable to UV limb scattering, the Rayleigh Scattering Attitude Sensing (RSAS) and Absolute Radiance Residual Method (ARRM), have been used to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique using wavelengths near 300 nm, can be applied across all seasons and altitudes, but its sensitivity to accurate instrument calibration means it may be inappropriate for anything but monitoring change. These characteristics make the two techniques complementary. Both methods have been applied to Limb Profiler instrument measurements from the Ozone Mapping and Profiler Suite (OMPS) onboard the Suomi NPP (SNPP) satellite. The results from RSAS and ARRM differ by as much as 500 m over orbital and seasonal time scales, but long-term pointing trends derived from the two indicate changes within 100 m over the 5 year data record. In this paper we further discuss what these methods are revealing about the stability of LP's altitude registration. An independent evaluation of pointing errors using VIIRS, another sensor onboard the Suomi NPP satellite, indicates changes of as much as 80 m over the course of the mission. The correlations between VIIRS and the ARRM time series suggest a high degree of precision in this limb technique. We have therefore relied upon ARRM to evaluate error sources in more widespread altitude registration techniques such as RSAS and lunar observations. These techniques can be more readily applied to other limb scatter missions such as SAGE III and ALTIUS
The micron- to kilometer-scale Moon: linking samples to orbital observations, Apollo to LRO
NASA Astrophysics Data System (ADS)
Crites, S.; Lucey, P. G.; Taylor, J.; Martel, L.; Sun, L.; Honniball, C.; Lemelin, M.
2017-12-01
The Apollo missions have shaped the field of lunar science and our understanding of the Moon, from global-scale revelations like the magma ocean hypothesis, to providing ground truth for compositional remote sensing and absolute ages to anchor cratering chronologies. While lunar meteorite samples can provide a global- to regional-level view of the Moon, samples returned from known locations are needed to directly link orbital-scale observations with laboratory measurements-a link that can be brought to full fruition with today's extremely high spatial resolution observations from Lunar Reconnaissance Orbiter and other recent missions. Korotev et al. (2005) described a scenario of the Moon without Apollo to speculate about our understanding of the Moon if our data were confined to lunar meteorites and remote sensing. I will review some of the major points discussed by Korotev et al. (2005), and focus on some of the ways in which spectroscopic remote sensing in particular has benefited from the Apollo samples. For example, could the causes and effects of lunar-style space weathering have been unraveled without the Apollo samples? What would be the limitations on remote sensing compositional measurements that rely on Apollo samples for calibration and validation? And what new opportunities to bring together orbital and sample analyses now exist, in light of today's high spatial and spectral resolution remote sensing datasets?
Trends in mean and extreme temperatures over Ibadan, Southwest Nigeria
NASA Astrophysics Data System (ADS)
Abatan, Abayomi A.; Osayomi, Tolulope; Akande, Samuel O.; Abiodun, Babatunde J.; Gutowski, William J.
2018-02-01
In recent times, Ibadan has been experiencing an increase in mean temperature which appears to be linked to anthropogenic global warming. Previous studies have indicated that the warming may be accompanied by changes in extreme events. This study examined trends in mean and extreme temperatures over Ibadan during 1971-2012 at annual and seasonal scales using the high-resolution atmospheric reanalysis from European Centre for Medium-Range Weather Forecasts (ECMWF) twentieth-century dataset (ERA-20C) at 15 grid points. Magnitudes of linear trends in mean and extreme temperatures and their statistical significance were calculated using ordinary least squares and Mann-Kendall rank statistic tests. The results show that Ibadan has witnessed an increase in annual and seasonal mean minimum temperatures. The annual mean maximum temperature exhibited a non-significant decline in most parts of Ibadan. While trends in cold extremes at annual scale show warming, trends in coldest night show greater warming than in coldest day. At the seasonal scale, we found that Ibadan experienced a mix of positive and negative trends in absolute extreme temperature indices. However, cold extremes show the largest trend magnitudes, with trends in coldest night showing the greatest warming. The results compare well with those obtained from a limited number of stations. This study should inform decision-makers and urban planners about the ongoing warming in Ibadan.
New calibrators for the Cepheid period-luminosity relation
NASA Technical Reports Server (NTRS)
Evans, Nancy R.
1992-01-01
IUE spectra of six Cepheids have been used to determine their absolute magnitudes from the spectral types of their binary companions. The stars observed are U Aql, V659 Cen, Y Lac, S Nor, V350 Sgr, and V636 Sco. The absolute magnitude for V659 Cen is more uncertain than for the others because its reddening is poorly determined and the spectral type is hotter than those of the others. In addition, a reddening law with extra absorption in the 2200 A region is necessary, although this has a negligible effect on the absolute magnitude. For the other Cepheids, and also Eta Aql and W Sgr, the standard deviation from the Feast and Walker period-luminosity-color (PLC) relation is 0.37 mag, confirming the previously estimated internal uncertainty. The absolute magnitudes for S Nor from the binary companion and from cluster membership are very similar. The preliminary PLC zero point is less than 2 sigma (+0.21 mag) different from that of Feast and Walker. The same narrowing of the instability strip at low luminosities found by Fernie is seen.
Design considerations and validation of the MSTAR absolute metrology system
NASA Astrophysics Data System (ADS)
Peters, Robert D.; Lay, Oliver P.; Dubovitsky, Serge; Burger, Johan; Jeganathan, Muthu
2004-08-01
Absolute metrology measures the actual distance between two optical fiducials. A number of methods have been employed, including pulsed time-of-flight, intensity-modulated optical beam, and two-color interferometry. The rms accuracy is currently limited to ~5 microns. Resolving the integer number of wavelengths requires a 1-sigma range accuracy of ~0.1 microns. Closing this gap has a large pay-off: the range (length measurement) accuracy can be increased substantially using the unambiguous optical phase. The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. In this paper, we present recent experiments that use dispersed white light interferometry to independently validate the zero-point of the system. We also describe progress towards reducing the size of optics, and stabilizing the laser wavelength for operation over larger target ranges. MSTAR is a general-purpose tool for conveniently measuring length with much greater accuracy than was previously possible, and has a wide range of possible applications.
Concentration Independent Calibration of β-γ Coincidence Detector Using 131mXe and 133Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntyre, Justin I.; Cooper, Matthew W.; Carman, April J.
Absolute efficiency calibration of radiometric detectors is frequently difficult and requires careful detector modeling and accurate knowledge of the radioactive source used. In the past we have calibrated the b-g coincidence detector of the Automated Radioxenon Sampler/Analyzer (ARSA) using a variety of sources and techniques which have proven to be less than desirable.[1] A superior technique has been developed that uses the conversion-electron (CE) and x-ray coincidence of 131mXe to provide a more accurate absolute gamma efficiency of the detector. The 131mXe is injected directly into the beta cell of the coincident counting system and no knowledge of absolute sourcemore » strength is required. In addition, 133Xe is used to provide a second independent means to obtain the absolute efficiency calibration. These two data points provide the necessary information for calculating the detector efficiency and can be used in conjunction with other noble gas isotopes to completely characterize and calibrate the ARSA nuclear detector. In this paper we discuss the techniques and results that we have obtained.« less
Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration
Deng, Mingjun; Li, Jiansong
2017-01-01
The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675
NASA Astrophysics Data System (ADS)
Tallec, T.; Rivalland, V.; Jarosz, N.; Boulet, G.; Gentine, P.; Ceschia, E.
2012-04-01
In the current context of climate change, intra- and inter-annual variability of precipitation can lead to major modifications of water budgets and water use efficiencies (WUE). Obtaining greater insight into how climatic variability and agricultural practices affect water budgets and their components in croplands is, thus, important for adapting crop management and limiting water losses. The principal aims of this study were 1) to assess the contribution of different components to the agro-ecosystem water budget and 2) to analyze and compare the WUE calculated from ecophysiological (WUEplt), environmental (WUEeco) and agronomical (WUEagro) points of view for various crops during the growing season and for the annual time scale. Eddy covariance (EC) measurements of CO2 and water flux were performed on winter wheat, maize and sunflower crops at two sites in southwest France: Auradé and Lamasquère. To infer WUEplt, an estimation of plant transpiration (TR) is needed. We then tested a new method for partitioning evapotranspiration (ETR), measured by means of the EC method, into soil evaporation (E) and plant transpiration (TR) based on marginal distribution sampling (MDS). We compared these estimations with calibrated simulations of the ICARE-SVAT double source mechanistic model. The two partitioning methods showed good agreement, demonstrating that MDS is a convenient, simple and robust tool for estimating E with reasonable associated uncertainties. During the growing season, the proportion of E in ETR was approximately one-third and varied mainly with crop leaf area. When calculated on an annual time scale, the proportion of E in ETR reached more than 50%, depending on crop leaf area and the duration and distribution of bare soil within the year. WUEplt values ranged between -4.1 and -5.6 g C kg-1 H2O for maize and winter wheat, respectively, and were strongly dependent on meteorological conditions at the half-hourly, daily and seasonal time scales. When normalized by the vapor pressure deficit to reduce the effect of seasonal climatic variability on WUEplt, maize had the highest efficiency. Absolute WUEeco values on the ecosystem level, including water loss through evaporation and carbon release through ecosystem respiration, were consequently lower than on the stand level. This observation was even more pronounced on an annual time scale than on the growing-season time scale because of bare soil periods. Winter wheat showed the highest absolute values of WUEeco, and sunflower showed the lowest. To account for carbon input into WUE through organic fertilization and output through biomass exportation during harvest, net biome production (NBP) was considered in the calculation of an ecosystem-level WUE (WUENBP). Considering WUENBP instead of WUEeco markedly decreased the efficiency of the ecosystem, especially for crops with important carbon exports, as observed for the maize used for silaging and pointed out the profits of organic C input. From an agronomic perspective, maize showed the best WUE, with exported (marketable) carbon per unit of water used exceeding that of other crops. Thus, the environmental and agronomical WUE approaches should be considered together in the context of global climate change and sustainable development.
Toward an integrated ice core chronology using relative and orbital tie-points
NASA Astrophysics Data System (ADS)
Bazin, L.; Landais, A.; Lemieux-Dudon, B.; Toyé Mahamadou Kele, H.; Blunier, T.; Capron, E.; Chappellaz, J.; Fischer, H.; Leuenberger, M.; Lipenkov, V.; Loutre, M.-F.; Martinerie, P.; Parrenin, F.; Prié, F.; Raynaud, D.; Veres, D.; Wolff, E.
2012-04-01
Precise ice cores chronologies are essential to better understand the mechanisms linking climate change to orbital and greenhouse gases concentration forcing. A tool for ice core dating (DATICE [developed by Lemieux-Dudon et al., 2010] permits to generate a common time-scale integrating relative and absolute dating constraints on different ice cores, using an inverse method. Nevertheless, this method has only been applied for a 4-ice cores scenario and for the 0-50 kyr time period. Here, we present the bases for an extension of this work back to 800 ka using (1) a compilation of published and new relative and orbital tie-points obtained from measurements of air trapped in ice cores and (2) an adaptation of the DATICE inputs to 5 ice cores for the last 800 ka. We first present new measurements of δ18Oatm and δO2/N2 on the Talos Dome and EPICA Dome C (EDC) ice cores with a particular focus on Marine Isotopic Stages (MIS) 5, and 11. Then, we show two tie-points compilations. The first one is based on new and published CH4 and δ18Oatm measurements on 5 ice cores (NorthGRIP, EPICA Dronning Maud Land, EDC, Talos Dome and Vostok) in order to produce a table of relative gas tie-points over the last 400 ka. The second one is based on new and published records of δO2/N2, δ18Oatm and air content to provide a table of orbital tie-points over the last 800 ka. Finally, we integrate the different dating constraints presented above in the DATICE tool adapted to 5 ice cores to cover the last 800 ka and show how these constraints compare with the established gas chronologies of each ice core.
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2012 CFR
2012-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2010 CFR
2010-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2014 CFR
2014-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2013 CFR
2013-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
19 CFR 351.224 - Disclosure of calculations and procedures for the correction of ministerial errors.
Code of Federal Regulations, 2011 CFR
2011-04-01
... least five absolute percentage points in, but not less than 25 percent of, the weighted-average dumping... margin or countervailable subsidy rate (whichever is applicable) of zero (or de minimis) and a weighted...
Primary care and health inequality: Difference-in-difference study comparing England and Ontario.
Cookson, Richard; Mondor, Luke; Asaria, Miqdad; Kringos, Dionne S; Klazinga, Niek S; Wodchis, Walter P
2017-01-01
It is not known whether equity-oriented primary care investment that seeks to scale up the delivery of effective care in disadvantaged communities can reduce health inequality within high-income settings that have pre-existing universal primary care systems. We provide some non-randomised controlled evidence by comparing health inequality trends between two similar jurisdictions-one of which implemented equity-oriented primary care investment in the mid-to-late 2000s as part of a cross-government strategy for reducing health inequality (England), and one which invested in primary care without any explicit equity objective (Ontario, Canada). We analysed whole-population data on 32,482 neighbourhoods (with mean population size of approximately 1,500 people) in England, and 18,961 neighbourhoods (with mean population size of approximately 700 people) in Ontario. We examined trends in mortality amenable to healthcare by decile groups of neighbourhood deprivation within each jurisdiction. We used linear models to estimate absolute and relative gaps in amenable mortality between most and least deprived groups, considering the gradient between these extremes, and evaluated difference-in-difference comparisons between the two jurisdictions. Inequality trends were comparable in both jurisdictions from 2004-6 but diverged from 2007-11. Compared with Ontario, the absolute gap in amenable mortality in England fell between 2004-6 and 2007-11 by 19.8 per 100,000 population (95% CI: 4.8 to 34.9); and the relative gap in amenable mortality fell by 10 percentage points (95% CI: 1 to 19). The biggest divergence occurred in the most deprived decile group of neighbourhoods. In comparison to Ontario, England succeeded in reducing absolute socioeconomic gaps in mortality amenable to healthcare from 2007 to 2011, and preventing them from growing in relative terms. Equity-oriented primary care reform in England in the mid-to-late 2000s may have helped to reduce socioeconomic inequality in health, though other explanations for this divergence are possible and further research is needed on the specific causal mechanisms.
Simulations of VLBI observations of a geodetic satellite providing co-location in space
NASA Astrophysics Data System (ADS)
Anderson, James M.; Beyerle, Georg; Glaser, Susanne; Liu, Li; Männel, Benjamin; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald
2018-02-01
We performed Monte Carlo simulations of very-long-baseline interferometry (VLBI) observations of Earth-orbiting satellites incorporating co-located space-geodetic instruments in order to study how well the VLBI frame and the spacecraft frame can be tied using such measurements. We simulated observations of spacecraft by VLBI observations, time-of-flight (TOF) measurements using a time-encoded signal in the spacecraft transmission, similar in concept to precise point positioning, and differential VLBI (D-VLBI) observations using angularly nearby quasar calibrators to compare their relative performance. We used the proposed European Geodetic Reference Antenna in Space (E-GRASP) mission as an initial test case for our software. We found that the standard VLBI technique is limited, in part, by the present lack of knowledge of the absolute offset of VLBI time to Coordinated Universal Time at the level of microseconds. TOF measurements are better able to overcome this problem and provide frame ties with uncertainties in translation and scale nearly a factor of three smaller than those yielded from VLBI measurements. If the absolute time offset issue can be resolved by external means, the VLBI results can be significantly improved and can come close to providing 1 mm accuracy in the frame tie parameters. D-VLBI observations with optimum performance assumptions provide roughly a factor of two higher uncertainties for the E-GRASP orbit. We additionally simulated how station and spacecraft position offsets affect the frame tie performance.
Simplest little Higgs model revisited: Hidden mass relation, unitarity, and naturalness
NASA Astrophysics Data System (ADS)
Cheung, Kingman; He, Shi-Ping; Mao, Ying-nan; Zhang, Chen; Zhou, Yang
2018-06-01
We analyze the scalar potential of the simplest little Higgs (SLH) model in an approach consistent with the spirit of continuum effective field theory (CEFT). By requiring correct electroweak symmetry breaking (EWSB) with the 125 GeV Higgs boson, we are able to derive a relation between the pseudoaxion mass mη and the heavy top mass mT, which serves as a crucial test of the SLH mechanism. By requiring mη2>0 an upper bound on mT can be obtained for any fixed SLH global symmetry breaking scale f . We also point out that an absolute upper bound on f can be obtained by imposing partial wave unitarity constraint, which in turn leads to absolute upper bounds of mT≲19 TeV , mη≲1.5 TeV , and mZ'≲48 TeV . We present the allowed region in the three-dimensional parameter space characterized by f ,tβ,mT, taking into account the requirement of valid EWSB and the constraint from perturbative unitarity. We also propose a strategy of analyzing the fine-tuning problem consistent with the spirit of CEFT and apply it to the SLH. We suggest that the scalar potential and fine-tuning analysis strategies adopted here should also be applicable to a wide class of little Higgs and twin Higgs models, which may reveal interesting relations as crucial tests of the related EWSB mechanism and provide a new perspective on assessing their degree of fine-tuning.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
ACCESS: Design and Sub-System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary Elizabeth; Morris, Matthew J.; McCandliss, Stephan R.; Rasucher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Pelton, Russell; Mott, D. Brent; Wen, Hiting; Foltz, Roger;
2012-01-01
Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35 -1.7 micrometer bandpass.
Kenney, Terry A.
2010-01-01
Operational procedures at U.S. Geological Survey gaging stations include periodic leveling checks to ensure that gages are accurately set to the established gage datum. Differential leveling techniques are used to determine elevations for reference marks, reference points, all gages, and the water surface. The techniques presented in this manual provide guidance on instruments and methods that ensure gaging-station levels are run to both a high precision and accuracy. Levels are run at gaging stations whenever differences in gage readings are unresolved, stations may have been damaged, or according to a pre-determined frequency. Engineer's levels, both optical levels and electronic digital levels, are commonly used for gaging-station levels. Collimation tests should be run at least once a week for any week that levels are run, and the absolute value of the collimation error cannot exceed 0.003 foot/100 feet (ft). An acceptable set of gaging-station levels consists of a minimum of two foresights, each from a different instrument height, taken on at least two independent reference marks, all reference points, all gages, and the water surface. The initial instrument height is determined from another independent reference mark, known as the origin, or base reference mark. The absolute value of the closure error of a leveling circuit must be less than or equal to ft, where n is the total number of instrument setups, and may not exceed |0.015| ft regardless of the number of instrument setups. Closure error for a leveling circuit is distributed by instrument setup and adjusted elevations are determined. Side shots in a level circuit are assessed by examining the differences between the adjusted first and second elevations for each objective point in the circuit. The absolute value of these differences must be less than or equal to 0.005 ft. Final elevations for objective points are determined by averaging the valid adjusted first and second elevations. If final elevations indicate that the reference gage is off by |0.015| ft or more, it must be reset.
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
Consistent Long-Time Series of GPS Satellite Antenna Phase Center Corrections
NASA Astrophysics Data System (ADS)
Steigenberger, P.; Schmid, R.; Rothacher, M.
2004-12-01
The current IGS processing strategy disregards satellite antenna phase center variations (pcvs) depending on the nadir angle and applies block-specific phase center offsets only. However, the transition from relative to absolute receiver antenna corrections presently under discussion necessitates the consideration of satellite antenna pcvs. Moreover, studies of several groups have shown that the offsets are not homogeneous within a satellite block. Manufacturer specifications seem to confirm this assumption. In order to get best possible antenna corrections, consistent ten-year time series (1994-2004) of satellite-specific pcvs and offsets were generated. This challenging effort became possible as part of the reprocessing of a global GPS network currently performed by the Technical Universities of Munich and Dresden. The data of about 160 stations since the official start of the IGS in 1994 have been reprocessed, as today's GPS time series are mostly inhomogeneous and inconsistent due to continuous improvements in the processing strategies and modeling of global GPS solutions. An analysis of the signals contained in the time series of the phase center offsets demonstrates amplitudes on the decimeter level, at least one order of magnitude worse than the desired accuracy. The periods partly arise from the GPS orbit configuration, as the orientation of the orbit planes with regard to the inertial system repeats after about 350 days due to the rotation of the ascending nodes. In addition, the rms values of the X- and Y-offsets show a high correlation with the angle between the orbit plane and the direction to the sun. The time series of the pcvs mainly point at the correlation with the global terrestrial scale. Solutions with relative and absolute phase center corrections, with block- and satellite-specific satellite antenna corrections demonstrate the effect of this parameter group on other global GPS parameters such as the terrestrial scale, station velocities, the geocenter position or the tropospheric delays. Thus, deeper insight into the so-called `Bermuda triangle' of several highly correlated parameters is given.
Corticosteroid use in the intensive care unit: a survey of intensivists.
Lamontagne, François; Quiroz Martinez, Hector; Adhikari, Neill K J; Cook, Deborah J; Koo, Karen K Y; Lauzier, François; Turgeon, Alexis F; Kho, Michelle E; Burns, Karen E A; Chant, Clarence; Fowler, Rob; Douglas, Ivor; Poulin, Yannick; Choong, Karen; Ferguson, Niall D; Meade, Maureen O
2013-07-01
The efficacy of systemic corticosteroids in many critical illnesses remains uncertain. Our primary objective was to survey intensivists in North America about their perceived use of corticosteroids in clinical practice. Self-administered paper survey. Intensivists in academic hospitals with clinical trial expertise in critical illness. We generated questionnaire items in focus groups and refined them after assessments of clinical sensibility and test-retest reliability and pilot testing. We administered the survey to experienced intensivists practicing in selected North American centres actively enrolling patients in the multicentre Oscillation for ARDS Treated Early (OSCILLATE) Trial (ISRCTN87124254). Respondents used a four-point scale to grade how frequently they would administer corticosteroids in 14 clinical settings. They also reported their opinions on 16 potential near-absolute indications or contraindications for the use of corticosteroids. Our response rate was 82% (103/125). Respondents were general internists (50%), respirologists (22%), anesthesiologists (21%), and surgeons (7%) who practiced in mixed medical-surgical units. A majority of respondents reported almost always prescribing corticosteroids in the setting of significant bronchospasm in a mechanically ventilated patient (94%), recent corticosteroid use and low blood pressure (93%), and vasopressor-refractory septic shock (52%). Although more than half of respondents stated they would almost never prescribe corticosteroids in severe community-acquired pneumonia (81%), acute lung injury (ALI, 76%), acute respiratory distress syndrome (ARDS, 65%), and severe ARDS (51%), variability increased with severity of acute lung injury. Near-absolute indications selected by most respondents included known adrenal insufficiency (99%) and suspicion of cryptogenic organizing pneumonia (89%), connective tissue disease (85%), or other potentially corticosteroid-responsive illnesses (85%). Respondents reported rarely prescribing corticosteroids for ALI, but accepted them for bronchospasm, suspected adrenal insufficiency due to previous corticosteroid use, and vasopressor-refractory septic shock. These competing indications will complicate the design and interpretation of any future large-scale trial of corticosteroids in critical illness.
Comparisons of absolute gravimeters (COOMET.M.G-S1)
NASA Astrophysics Data System (ADS)
Vinnichenko, Mr Alexander; Germak, Alessandro, Dr
2017-01-01
This report describes the results of the RMO supplementary comparison COOMET.M.G-S1 (also known as bilateral comparison COOMET 634/UA/14). The comparison measurements between the two participants NSC 'IM' (pilot laboratory) and INRIM were started in December 2015 and finished in January 2016. Participants of comparisons were conducted at their national standards the measurements of the free fall acceleration in gravimetric point laboratory of absolute gravimetry of INRIM named INRiM.2. Absolute measurements of gravimetric acceleration were conducted by ballistic gravimeters. The agreement between the two participants is good. Main text To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolff, Wania, E-mail: wania@if.ufrj.br; Luna, Hugo; Sigaud, Lucas
Absolute total non-dissociative and partial dissociative cross sections of pyrimidine were measured for electron impact energies ranging from 70 to 400 eV and for proton impact energies from 125 up to 2500 keV. MOs ionization induced by coulomb interaction were studied by measuring both ionization and partial dissociative cross sections through time of flight mass spectrometry and by obtaining the branching ratios for fragment formation via a model calculation based on the Born approximation. The partial yields and the absolute cross sections measured as a function of the energy combined with the model calculation proved to be a useful toolmore » to determine the vacancy population of the valence MOs from which several sets of fragment ions are produced. It was also a key point to distinguish the dissociation regimes induced by both particles. A comparison with previous experimental results is also presented.« less
Absolute measurement of the extreme UV solar flux
NASA Technical Reports Server (NTRS)
Carlson, R. W.; Ogawa, H. S.; Judge, D. L.; Phillips, E.
1984-01-01
A windowless rare-gas ionization chamber has been developed to measure the absolute value of the solar extreme UV flux in the 50-575-A region. Successful results were obtained on a solar-pointing sounding rocket. The ionization chamber, operated in total absorption, is an inherently stable absolute detector of ionizing UV radiation and was designed to be independent of effects from secondary ionization and gas effusion. The net error of the measurement is + or - 7.3 percent, which is primarily due to residual outgassing in the instrument, other errors such as multiple ionization, photoelectron collection, and extrapolation to the zero atmospheric optical depth being small in comparison. For the day of the flight, Aug. 10, 1982, the solar irradiance (50-575 A), normalized to unit solar distance, was found to be 5.71 + or - 0.42 x 10 to the 10th photons per sq cm sec.
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
Martín Fernández, Jesús; Gómez Gascón, Tomás; Martínez García-Olalla, Carlos; del Cura González, María Isabel; Cabezas Peña, María Carmen; García Sánchez, Salvador
2008-07-01
To establish the CVP-35 evaluative properties to measure the professional quality of life (PQL). Prospective, observational study. A primary care area in the Community of Madrid, Spain. A total of 149 sanitary workers with some burnout sign measured by Maslach Burnout Inventory (MBI) participated. They fulfilled MBI, Goldberg Health Questionnaire (GHQ-28), and CVP-35 questionnaires at the beginning and after a year of follow-up, in which 73 subjects took part in activities for coping stress. It was assessed the change of PQL and their domains managerial support (PQL-MS), work load (PQL-WL), intrinsic motivation (PQL-IM) for the subjects with variations at the MBI, or GHQ-28 punctuation greater than 0.5 SD of the initial distribution. Variations in CVP-35 and their domains correlate weakly with changes in MBI and GHQ-28 (r<0.500), but they are congruent with the conceptual model. In the individuals with significant variations in the GHQ-28, they appreciate an average change in PQL and their domains between 0.18 and 0.55 points (absolute value). In those with significant variations in the MBI domains, PQL presented average absolute variations between 0.23 and 0.45 points, PQL-MS between 0.30 and 0.67, PQL-WL between 0.01 and 0.55 and PQL-IM between 0.22 and 0.83 points. CVP-35 is a sensitive-to-change instrument under population point of view. Changes in PQL perception or in any of their domains of 0.5 points could be pointed as relevant.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-21
... other errors, would result in (1) a change of at least five absolute percentage points in, but not less...) preliminary determination, or (2) a difference between a weighted-average dumping margin of zero or de minimis...
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defies analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
A soft-computing methodology for noninvasive time-spatial temperature estimation.
Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A
2008-02-01
The safe and effective application of thermal therapies is restricted due to lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and assigned with the past temperature values as radial basis functions neural networks input information. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were assigned to estimate the temperature at different intensities and points arranged across the therapeutic transducer radial line (60 mm apart from the transducer face). Model inputs, as well as the number of neurons were selected using the multiobjective genetic algorithm (MOGA). The best attained models present, in average, a maximum absolute error less than 0.5 degrees C, which is pointed as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not yet assessed, and some of them presented a maximum absolute error inferior to 0.5 degrees C, being "elected" as the best models. It should be also stressed that these best models present implementational low-complexity, as desired for real-time applications.
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are being extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase correction compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses phase-gradient-corrected data and extracts the absolute phase value, removing in this way phase ambiguities and reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated in its applications to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets. They are then used to focus the scene and determine relative target-target distances.
Chit, Ayman; Zivaripiran, Hossein; Shin, Thomas; Lee, Jason K. H.; Tomovici, Antigona; Macina, Denis; Johnson, David R.; Decker, Michael D.; Wu, Jianhong
2018-01-01
Background Acellular pertussis vaccine studies postulate that waning protection, particularly after the adolescent booster, is a major contributor to the increasing US pertussis incidence. However, these studies reported relative (ie, vs a population given prior doses of pertussis vaccine), not absolute (ie, vs a pertussis vaccine naïve population) efficacy following the adolescent booster. We aim to estimate the absolute protection offered by acellular pertussis vaccines. Methods We conducted a systematic review of acellular pertussis vaccine effectiveness (VE) publications. Studies had to comply with the US schedule, evaluate clinical outcomes, and report VE over discrete time points. VE after the 5-dose childhood series and after the adolescent sixth-dose booster were extracted separately and pooled. All relative VE estimates were transformed to absolute estimates. VE waning was estimated using meta-regression modeling. Findings Three studies reported VE after the childhood series and four after the adolescent booster. All booster studies reported relative VE (vs acellular pertussis vaccine-primed population). We estimate initial childhood series absolute VE is 91% (95% CI: 87% to 95%) and declines at 9.6% annually. Initial relative VE after adolescent boosting is 70% (95% CI: 54% to 86%) and declines at 45.3% annually. Initial absolute VE after adolescent boosting is 85% (95% CI: 84% to 86%) and declines at 11.7% (95% CI: 11.1% to 12.3%) annually. Interpretation Acellular pertussis vaccine efficacy is initially high and wanes over time. Observational VE studies of boosting failed to recognize that they were measuring relative, not absolute, VE and the absolute VE in the boosted population is better than appreciated. PMID:29912887
Garratt, Elisabeth A; Chandola, Tarani; Purdam, Kingsley; Wood, Alex M
2016-10-01
Parents face an increased risk of psychological distress compared with adults without children, and families with children also have lower average household incomes. Past research suggests that absolute income (material position) and income status (psychosocial position) influence psychological distress, but their combined effects on changes in psychological distress have not been examined. Whether absolute income interacts with income status to influence psychological distress are also key questions. We used fixed-effects panel models to examine longitudinal associations between psychological distress (measured on the Kessler scale) and absolute income, distance from the regional mean income, and regional income rank (a proxy for status) using data from 29,107 parents included in the UK Millennium Cohort Study (2003-2012). Psychological distress was determined by an interaction between absolute income and income rank: higher absolute income was associated with lower psychological distress across the income spectrum, while the benefits of higher income rank were evident only in the highest income parents. Parents' psychological distress was, therefore, determined by a combination of income-related material and psychosocial factors. Both material and psychosocial factors contribute to well-being. Higher absolute incomes were associated with lower psychological distress across the income spectrum, demonstrating the importance of material factors. Conversely, income status was associated with psychological distress only at higher absolute incomes, suggesting that psychosocial factors are more relevant to distress in more advantaged, higher income parents. Clinical interventions could, therefore, consider both the material and psychosocial impacts of income on psychological distress.
Dunn, Philip J H; Malinovsky, Dmitry; Goenaga-Infante, Heidi
2015-04-01
We report a methodology for the determination of the stable carbon absolute isotope ratio of a glycine candidate reference material with natural carbon isotopic composition using EA-IRMS. For the first time, stable carbon absolute isotope ratios have been reported using continuous flow rather than dual inlet isotope ratio mass spectrometry. Also for the first time, a calibration strategy based on the use of synthetic mixtures gravimetrically prepared from well characterised, highly (13)C-enriched and (13)C-depleted glycines was developed for EA-IRMS calibration and generation of absolute carbon isotope ratio values traceable to the SI through calibration standards of known purity. A second calibration strategy based on converting the more typically determined delta values on the Vienna PeeDee Belemnite (VPDB) scale using literature values for the absolute carbon isotope ratio of VPDB itself was used for comparison. Both calibration approaches provided results consistent with those previously reported for the same natural glycine using MC-ICP-MS; absolute carbon ratios of 10,649 × 10(-6) with an expanded uncertainty (k = 2) of 24 × 10(-6) and 10,646 × 10(-6) with an expanded uncertainty (k = 2) of 88 × 10(-6) were obtained, respectively. The absolute carbon isotope ratio of the VPDB standard was found to be 11,115 × 10(-6) with an expanded uncertainty (k = 2) of 27 × 10(-6), which is in excellent agreement with previously published values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamberto, M; Chen, H; Huang, K
2015-06-15
Purpose To characterize the Cyberknife (CK) robotic system’s dosimetric accuracy of the delivery of MultiPlan’s Monte Carlo dose calculations using EBT3 radiochromic film inserted in a thorax phantom. Methods The CIRS XSight Lung Tracking (XLT) Phantom (model 10823) was used in this study with custom cut EBT3 film inserted in the horizontal (coronal) plane inside the lung tissue equivalent phantom. CK MultiPlan v3.5.3 with Monte Carlo dose calculation algorithm (1.5 mm grid size, 2% statistical uncertainty) was used to calculate a clinical plan for a 25-mm lung tumor lesion, as contoured by the physician, and then imported onto the XLTmore » phantom CT. Using the same film batch, the net OD to dose calibration curve was obtained using CK with the 60 mm fixed cone by delivering 0– 800 cGy. The test films (n=3) were irradiated using 325 cGy to the prescription point. Films were scanned 48 hours after irradiation using an Epson v700 scanner (48 bits color scan, extracted red channel only, 96 dpi). Percent absolute dose and relative isodose distribution difference relative to the planned dose were quantified using an in-house QA software program. Multiplan Monte Carlo dose calculation was validated using RCF dosimetry (EBT3) and gamma index criteria of 3%/3mm and 2%/2mm for absolute dose and relative isodose distribution measurement comparisons. Results EBT3 film measurements of the patient plans calculated with Monte Carlo in MultiPlan resulted in an absolute dose passing rate of 99.6±0.4% for the Gamma Index of 3%/3mm, 10% dose threshold, and 95.6±4.4% for 2%/2mm, 10% threshold criteria. The measured central axis absolute dose was within 1.2% (329.0±2.5 cGy) of the Monte Carlo planned dose (325.0±6.5 cGy) for that same point. Conclusion MultiPlan’s Monte Carlo dose calculation was validated using the EBT3 film absolute dosimetry for delivery in a heterogeneous thorax phantom.« less
Measurements of the Acoustic Speaking Voice After Vocal Warm-up and Cooldown in Choir Singers.
Onofre, Fernanda; Prado, Yuka de Almeida; Rojas, Gleidy Vannesa E; Garcia, Denny Marco; Aguiar-Ricz, Lílian
2017-01-01
The aim of this study was to evaluate the acoustic measurements of the vowel /a/ in modal recording before and after a singing voice resistance test and after 30 minutes of absolute rest in female choir singers. This is a prospective cohort study. A total of 13 soprano choir singers with experience in choir singing were evaluated through analysis of acoustic voice parameters at three points in time: before continuous use of the voice, after vocal warm-up and a singing test 60 minutes in duration respecting the pauses for breathing, and after vocal cooldown and an absolute voice rest for 30 minutes. The fundamental frequency increased after the voice resistance test (P = 0.012) and remained elevated after the 30 minutes of voice rest (P = 0.01). The jitter decreased after the voice resistance test (P = 0.02) and after the 30 minutes of voice rest. A significant difference was detected for the acoustic voice parameters relative average perturbation (RAP), (P = 0.05), and pitch perturbation quotient (PPQ), (P = 0.04), compared with the initial time point. The fundamental frequency increased after 60 minutes of singing and remained elevated after vocal cooldown and absolute rest for 30 minutes, proving an efficient parameter for identifying the changes inherent to voice demand during singing. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Morlanes, Tomas; de la Pena, Jose L.; Sanchez-Brea, Luis M.; Alonso, Jose; Crespo, Daniel; Saez-Landete, Jose B.; Bernabeu, Eusebio
2005-07-01
In this work, an optoelectronic device that provides the absolute position of a measurement element with respect to a pattern scale upon switch-on is presented. That means that there is not a need to perform any kind of transversal displacement after the startup of the system. The optoelectronic device is based on the process of light propagation passing through a slit. A light source with a definite size guarantees the relation of distances between the different elements that constitute our system and allows getting a particular optical intensity profile that can be measured by an electronic post-processing device providing the absolute location of the system with a resolution of 1 micron. The accuracy of this measuring device is restricted to the same limitations of any incremental position optical encoder.
Minimally important difference of the Treatment Satisfaction with Medicines Questionnaire (SATMED-Q)
2011-01-01
Background A previous study has documented the reliability and validity of the Treatment Satisfaction with Medicines Questionnaire (SATMED-Q) in exploring patient satisfaction with medicines for chronic health conditions in routine medical practice, but the minimally important difference (MID) of this tool is as yet unknown. The objective of this research was to estimate the MID for the SATMED-Q total score and six constituent domains. Methods The sample of patients (456 subjects, mean age 59 years, 53% male) used for testing psychometric properties was also used to assess MID. Item #14 of the Treatment Satisfaction Questionnaire for Medication (TSQM) was used as an anchor reference since it directly explores satisfaction with medicine on a 7-point ordinal scale (from extremely satisfied to extremely dissatisfied, with a neutral category). Patients were classified into four categories according to responses to this item (extremely satisfied/dissatisfied, very satisfied/dissatisfied, satisfied/dissatisfied, neither satisfied nor dissatisfied (neutral), and calculations were made for the total score and each domain of the SATMED-Q using standardised scores. The mean absolute differences in total score (and domains) between the neutral category and the satisfied/dissatisfied category were considered to be the MID. Effect sizes (ES) were also computed. Results The MID for the total score was 13.4 (ES = 0.91), while the domain values ranged from 10.3 (medical care domain, ES = 0.43) to 20.6 (impact on daily living, ES = 0.85). Mean differences in satisfaction (as measured by the total SATMED-Q score and domain scores) using the levels of satisfaction established by item #14 were significantly different, with F values ranging from 12.2 to 88.8 (p < 0.001 in all cases). Conclusion The SATMED-Q was demonstrated to be responsive to different levels of patient satisfaction with therapy in chronically ill subjects. The MID obtained was 13.4 points for the overall normalised scoring scale, and between 10.3 and 20.6 points for domains. PMID:22014277
1986-10-01
35 ~- 2.3.12. Remark: Let X. Y be the processes given in the example :after Lemma 2.3.3. Take the same probability space as in that example and...z(s) =zjO) + ’ A zis).1als) + ’~ zsMs 0O<s<t 0O<s<t 0<~ Fix n>1I; then if s Is a point of increase of a, (that is. if Aa(s)=Al). then ri(s) = q(s...Absolute Continuity and Singularity of Locally Absolutely Continuous Probability Distributions. I. Math USSR Sbornik Vol. 35 , No 5, 631-680. Kabanov
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
Evaluating Dense 3d Reconstruction Software Packages for Oblique Monitoring of Crop Canopy Surface
NASA Astrophysics Data System (ADS)
Brocks, S.; Bareth, G.
2016-06-01
Crop Surface Models (CSMs) are 2.5D raster surfaces representing absolute plant canopy height. Using multiple CMSs generated from data acquired at multiple time steps, a crop surface monitoring is enabled. This makes it possible to monitor crop growth over time and can be used for monitoring in-field crop growth variability which is useful in the context of high-throughput phenotyping. This study aims to evaluate several software packages for dense 3D reconstruction from multiple overlapping RGB images on field and plot-scale. A summer barley field experiment located at the Campus Klein-Altendorf of University of Bonn was observed by acquiring stereo images from an oblique angle using consumer-grade smart cameras. Two such cameras were mounted at an elevation of 10 m and acquired images for a period of two months during the growing period of 2014. The field experiment consisted of nine barley cultivars that were cultivated in multiple repetitions and nitrogen treatments. Manual plant height measurements were carried out at four dates during the observation period. The software packages Agisoft PhotoScan, VisualSfM with CMVS/PMVS2 and SURE are investigated. The point clouds are georeferenced through a set of ground control points. Where adequate results are reached, a statistical analysis is performed.
Jimenez-Soto, Eliana; Durham, Jo; Hodge, Andrew
2014-01-01
Cambodia has made considerable improvements in mortality rates for children under the age of five and neonates. These improvements may, however, mask considerable disparities between subnational populations. In this paper, we examine the extent of the country's child mortality inequalities. Mortality rates for children under-five and neonates were directly estimated using the 2000, 2005 and 2010 waves of the Cambodian Demographic Health Survey. Disparities were measured on both absolute and relative scales using rate differences and ratios, and where applicable, slope and relative indices of inequality by levels of rural/urban location, regions and household wealth. Since 2000, considerable reductions in under-five and to a lesser extent in neonatal mortality rates have been observed. This mortality decline has, however, been accompanied by an increase in relative inequality in both rates of child mortality for geography-related stratifying markers. For absolute inequality amongst regions, most trends are increasing, particularly for neonatal mortality, but are not statistically significant. The only exception to this general pattern is the statistically significant positive trend in absolute inequality for under-five mortality in the Coastal region. For wealth, some evidence for increases in both relative and absolute inequality for neonates is observed. Despite considerable gains in reducing under-five and neonatal mortality at a national level, entrenched and increased geographical and wealth-based inequality in mortality, at least on a relative scale, remain. As expected, national progress seems to be associated with the period of political and macroeconomic stability that started in the early 2000s. However, issues of quality of care and potential non-inclusive economic growth might explain remaining disparities, particularly across wealth and geography markers. A focus on further addressing key supply and demand side barriers to accessing maternal and child health care and on the social determinants of health will be essential in narrowing inequalities.
Hühn, M
1995-05-01
Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
Albin, Thomas J
2017-07-01
Occasionally practitioners must work with single dimensions defined as combinations (sums or differences) of percentile values, but lack information (e.g. variances) to estimate the accommodation achieved. This paper describes methods to predict accommodation proportions for such combinations of percentile values, e.g. two 90th percentile values. Kreifeldt and Nah z-score multipliers were used to estimate the proportions accommodated by combinations of percentile values of 2-15 variables; two simplified versions required less information about variance and/or correlation. The estimates were compared to actual observed proportions; for combinations of 2-15 percentile values the average absolute differences ranged between 0.5 and 1.5 percentage points. The multipliers were also used to estimate adjusted percentile values, that, when combined, estimate a desired proportion of the combined measurements. For combinations of two and three adjusted variables, the average absolute difference between predicted and observed proportions ranged between 0.5 and 3.0 percentage points. Copyright © 2017 Elsevier Ltd. All rights reserved.
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
Principles of estimation of Radiative danger
NASA Astrophysics Data System (ADS)
Korogodin, V. I.
1990-08-01
The main principles of the estimation of Radiative danger has been discussed. Two main particularities of the danger were pointed out: negatve consequencies of small doses, which does not lead to radiation sickness, but lead to disfunctions of sanguine organs and thin intestines; absolute estimation of biological anomalies, which was forwarded by A.D. Sakharov (1921-1989). The ethic aspects of the use of Nuclear weapons on the fate of Human civilization were pointed out by A.D. Sakharov (1921-1990).
Communication: The absolute shielding scales of oxygen and sulfur revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komorovsky, Stanislav; Repisky, Michal; Malkin, Elena
2015-03-07
We present an updated semi-experimental absolute shielding scale for the {sup 17}O and {sup 33}S nuclei. These new shielding scales are based on accurate rotational microwave data for the spin–rotation constants of H{sub 2}{sup 17}O [Puzzarini et al., J. Chem. Phys. 131, 234304 (2009)], C{sup 17}O [Cazzoli et al., Phys. Chem. Chem. Phys. 4, 3575 (2002)], and H{sub 2}{sup 33}S [Helgaker et al., J. Chem. Phys. 139, 244308 (2013)] corrected both for vibrational and temperature effects estimated at the CCSD(T) level of theory as well as for the relativistic corrections to the relation between the spin–rotation constant and the absolutemore » shielding constant. Our best estimate for the oxygen shielding constants of H{sub 2}{sup 17}O is 328.4(3) ppm and for C{sup 17}O −59.05(59) ppm. The relativistic correction for the sulfur shielding of H{sub 2}{sup 33}S amounts to 3.3%, and the new sulfur shielding constant for this molecule is 742.9(4.6) ppm.« less
Camera system considerations for geomorphic applications of SfM photogrammetry
Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John
2017-01-01
The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. This article is protected by copyright. All rights reserved.A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating accuracies of four SfM datasets conducted over multiple years of a gravel bed river floodplain using independent ground check points with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principle components of an SfM error budget. This article is protected by copyright. All rights reserved.Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. This article is protected by copyright. All rights reserved.Regression analysis of 67 reviewed datasets revealed that the best explanatory variable to predict accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
Spatiotemporal exposure modeling of ambient erythemal ultraviolet radiation.
VoPham, Trang; Hart, Jaime E; Bertrand, Kimberly A; Sun, Zhibin; Tamimi, Rulla M; Laden, Francine
2016-11-24
Ultraviolet B (UV-B) radiation plays a multifaceted role in human health, inducing DNA damage and representing the primary source of vitamin D for most humans; however, current U.S. UV exposure models are limited in spatial, temporal, and/or spectral resolution. Area-to-point (ATP) residual kriging is a geostatistical method that can be used to create a spatiotemporal exposure model by downscaling from an area- to point-level spatial resolution using fine-scale ancillary data. A stratified ATP residual kriging approach was used to predict average July noon-time erythemal UV (UV Ery ) (mW/m 2 ) biennially from 1998 to 2012 by downscaling National Aeronautics and Space Administration (NASA) Total Ozone Mapping Spectrometer (TOMS) and Ozone Monitoring Instrument (OMI) gridded remote sensing images to a 1 km spatial resolution. Ancillary data were incorporated in random intercept linear mixed-effects regression models. Modeling was performed separately within nine U.S. regions to satisfy stationarity and account for locally varying associations between UV Ery and predictors. Cross-validation was used to compare ATP residual kriging models and NASA grids to UV-B Monitoring and Research Program (UVMRP) measurements (gold standard). Predictors included in the final regional models included surface albedo, aerosol optical depth (AOD), cloud cover, dew point, elevation, latitude, ozone, surface incoming shortwave flux, sulfur dioxide (SO 2 ), year, and interactions between year and surface albedo, AOD, cloud cover, dew point, elevation, latitude, and SO 2 . ATP residual kriging models more accurately estimated UV Ery at UVMRP monitoring stations on average compared to NASA grids across the contiguous U.S. (average mean absolute error [MAE] for ATP, NASA: 15.8, 20.3; average root mean square error [RMSE]: 21.3, 25.5). ATP residual kriging was associated with positive percent relative improvements in MAE (0.6-31.5%) and RMSE (3.6-29.4%) across all regions compared to NASA grids. ATP residual kriging incorporating fine-scale spatial predictors can provide more accurate, high-resolution UV Ery estimates compared to using NASA grids and can be used in epidemiologic studies examining the health effects of ambient UV.
An absolute scale for measuring the utility of money
NASA Astrophysics Data System (ADS)
Thomas, P. J.
2010-07-01
Measurement of the utility of money is essential in the insurance industry, for prioritising public spending schemes and for the evaluation of decisions on protection systems in high-hazard industries. Up to this time, however, there has been no universally agreed measure for the utility of money, with many utility functions being in common use. In this paper, we shall derive a single family of utility functions, which have risk-aversion as the only free parameter. The fact that they return a utility of zero at their low, reference datum, either the utility of no money or of one unit of money, irrespective of the value of risk-aversion used, qualifies them to be regarded as absolute scales for the utility of money. Evidence of validation for the concept will be offered based on inferential measurements of risk-aversion, using diverse measurement data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, T.; Gatchell, M.; Stockett, M. H.
2014-06-14
We present scaling laws for absolute cross sections for non-statistical fragmentation in collisions between Polycyclic Aromatic Hydrocarbons (PAH/PAH{sup +}) and hydrogen or helium atoms with kinetic energies ranging from 50 eV to 10 keV. Further, we calculate the total fragmentation cross sections (including statistical fragmentation) for 110 eV PAH/PAH{sup +} + He collisions, and show that they compare well with experimental results. We demonstrate that non-statistical fragmentation becomes dominant for large PAHs and that it yields highly reactive fragments forming strong covalent bonds with atoms (H and N) and molecules (C{sub 6}H{sub 5}). Thus nonstatistical fragmentation may be an effectivemore » initial step in the formation of, e.g., Polycyclic Aromatic Nitrogen Heterocycles (PANHs). This relates to recent discussions on the evolution of PAHNs in space and the reactivities of defect graphene structures.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tu, Guangde; Rinkevicius, Zilvinas; Vahtras, Olav
We outline an approach within time-dependent density functional theory that predicts x-ray spectra on an absolute scale. The approach rests on a recent formulation of the resonant-convergent first-order polarization propagator [P. Norman et al., J. Chem. Phys. 123, 194103 (2005)] and corrects for the self-interaction energy of the core orbital. This polarization propagator approach makes it possible to directly calculate the x-ray absorption cross section at a particular frequency without explicitly addressing the excited-state spectrum. The self-interaction correction for the employed density functional accounts for an energy shift of the spectrum, and fully correlated absolute-scale x-ray spectra are thereby obtainedmore » based solely on optimization of the electronic ground state. The procedure is benchmarked against experimental spectra of a set of small organic molecules at the carbon, nitrogen, and oxygen K edges.« less
ERIC Educational Resources Information Center
Nam, Younkyeong; Karahan, Engin; Roehrig, Gillian
2016-01-01
Geologic time scale is a very important concept for understanding long-term earth system events such as climate change. This study examines forty-three 4th-8th grade Native American--particularly Ojibwe tribe--students' understanding of relative ordering and absolute time of Earth's significant geological and biological events. This study also…
Hassan, Namir; Ismail, Hairul Nizam
2004-06-01
In a study of irrational beliefs within a university population, 282 male and 238 female students responded to the 33-item Students' Irrational Beliefs Scale, and their responses were factor analyzed. Analysis suggested six dimensions could explain 39.5% of the variance. These dimensions were Perfectionism, Negativism, Blame Proneness, Escapism, Anxious Over Concern, and Absolute Demands.
Admire, Brittany; Lian, Bo; Yalkowsky, Samuel H
2015-01-01
The UPPER (Unified Physicochemical Property Estimation Relationships) model uses enthalpic and entropic parameters to estimate 20 biologically relevant properties of organic compounds. The model has been validated by Lian and Yalkowsky on a data set of 700 hydrocarbons. The aim of this work is to expand the UPPER model to estimate the boiling and melting points of polyhalogenated compounds. In this work, 19 new group descriptors are defined and used to predict the transition temperatures of an additional 1288 compounds. The boiling points of 808 and the melting points of 742 polyhalogenated compounds are predicted with average absolute errors of 13.56 K and 25.85 K, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
Southern California Earthquake Center Geologic Vertical Motion Database
NASA Astrophysics Data System (ADS)
Niemi, Nathan A.; Oskin, Michael; Rockwell, Thomas K.
2008-07-01
The Southern California Earthquake Center Geologic Vertical Motion Database (VMDB) integrates disparate sources of geologic uplift and subsidence data at 104- to 106-year time scales into a single resource for investigations of crustal deformation in southern California. Over 1800 vertical deformation rate data points in southern California and northern Baja California populate the database. Four mature data sets are now represented: marine terraces, incised river terraces, thermochronologic ages, and stratigraphic surfaces. An innovative architecture and interface of the VMDB exposes distinct data sets and reference frames, permitting user exploration of this complex data set and allowing user control over the assumptions applied to convert geologic and geochronologic information into absolute uplift rates. Online exploration and download tools are available through all common web browsers, allowing the distribution of vertical motion results as HTML tables, tab-delimited GIS-compatible text files, or via a map interface through the Google Maps™ web service. The VMDB represents a mature product for research of fault activity and elastic deformation of southern California.
Blood-gas analyzer calibration and quality control using a precision gas-mixing instrument.
Wallace, W D; Clark, J S; Cutler, C A
1981-08-01
We describe a new instrument that performs on-site mixing of oxygen (O2), carbon dioxide (CO2), and nitrogen (N2) to create compositions that can replace gases from standard premixed cylinders. This instrument yields accurate and predictable gas mixtures that can be used for two-point gas calibration of blood gas/pH analyzers or for liquid tonometry of either an aqueous buffer or blood used as quality-control material on blood-gas electrodes. The desired mixture of O2, CO2, and N2 is produced by microprocessor control of the sequential open-times on three solenoid valves that meter these pure gases through a common small-bore orifice. Any combination of O2 and CO2 can be chosen by dialing the front panel thumbwheels and pressing a button. Gas chromatographic evaluation of this gas-mixing instrument demonstrates its accuracy and precision to be better than +/- 0.1% absolute full scale for O2, CO2, and N2, making this instrument calibration and tonometry.
The absolute threshold of cone vision
Koeing, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
Geurts, Marjolein; van der Worp, H Bart; Kappelle, L Jaap; Amelink, G Johan; Algra, Ale; Hofmeijer, Jeannette
2013-09-01
We assessed whether the effects of surgical decompression for space-occupying hemispheric infarction, observed at 1 year, are sustained at 3 years. Patients with space-occupying hemispheric infarction, who were enrolled in the Hemicraniectomy After Middle cerebral artery infarction with Life-threatening Edema Trial within 4 days after stroke onset, were followed up at 3 years. Outcome measures included functional outcome (modified Rankin Scale), death, quality of life, and place of residence. Poor functional outcome was defined as modified Rankin Scale >3. Of 64 included patients, 32 were randomized to decompressive surgery and 32 to best medical treatment. Just as at 1 year, surgery had no effect on the risk of poor functional outcome at 3 years (absolute risk reduction, 1%; 95% confidence interval, -21 to 22), but it reduced case fatality (absolute risk reduction, 37%; 95% confidence interval, 14-60). Sixteen surgically treated patients and 8 controls lived at home (absolute risk reduction, 27%; 95% confidence interval, 4-50). Quality of life improved between 1 and 3 years in patients treated with surgery. In patients with space-occupying hemispheric infarction, the effects of decompressive surgery on case fatality and functional outcome observed at 1 year are sustained at 3 years. http://www.controlled-trials.com. Unique identifier: ISRCTN94237756.
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability is holds back to their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and easiness of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, many strategies were used to reduce the computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available at GitHub under a licence GLP Version 2.0 and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, d) allow to add or delete points to/from an existing point pattern.
NASA Astrophysics Data System (ADS)
Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Sperling, A.; Schuster, M.; Nevas, S.
2013-09-01
An LP3 radiation thermometer was absolutely calibrated at a newly developed monochromator-based set-up and the TUneable Lasers in Photometry (TULIP) facility of PTB in the wavelength range from 400 nm to 1100 nm. At both facilities, the spectral radiation of the respective sources irradiates an integrating sphere, thus generating uniform radiance across its precision aperture. The spectral irradiance of the integrating sphere is determined via an effective area of a precision aperture and a Si trap detector, traceable to the primary cryogenic radiometer of PTB. Due to the limited output power from the monochromator, the absolute calibration was performed with the measurement uncertainty of 0.17 % (k = 1), while the respective uncertainty at the TULIP facility is 0.14 %. Calibration results obtained by the two facilities were compared in terms of spectral radiance responsivity, effective wavelength and integral responsivity. It was found that the measurement results in integral responsivity at the both facilities are in agreement within the expanded uncertainty (k = 2). To verify the calibration accuracy, the absolutely calibrated radiation thermometer was used to measure the thermodynamic freezing temperatures of the PTB gold fixed-point blackbody.
Absolute continuity for operator valued completely positive maps on C∗-algebras
NASA Astrophysics Data System (ADS)
Gheondea, Aurelian; Kavruk, Ali Şamil
2009-02-01
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
The slippery slope from contraception to euthanasia.
Kippley, J F
1978-01-01
The key element in natural family planning that keeps it from being the 1st to abortion is the emphasis on natural. A purely secular form of noncontraceptive birth control fails to avoid being the 1st step down the slippery slope toward abortion and then euthanasia. It is felt that the fundamental difference is in what is absolutized. The Western culture has absolutized family planning, thus, when people think that their right to plan the size of their family is an absolute right, and things do not go according to plans, they pursue their absolutized plans even if it means invading some other person's right to life. As Malcom Muggeridge has pointed out, as soon as a culture accepts the killing of the defenseless and innocent, the principle has been established for killing anyone who is socially inconvenient. However, when doing things according to God's laws, all individual plans are made relative. We do not attempt test-tube techniques and we do not resort to abortion or to sterilization. Some will reject the inherently religious overtones of the full meaning of natural (defined as acting in accord with the nature God has given each person), but at least, they have been given something to think about.
Ervin, Kent M; Nickel, Alex A; Lanorio, Jerry G; Ghale, Surja B
2015-07-16
A meta-analysis of experimental information from a variety of sources is combined with statistical thermodynamics calculations to refine the gas-phase acidity scale from hydrogen sulfide to pyrrole. The absolute acidities of hydrogen sulfide, methanethiol, and pyrrole are evaluated from literature R-H bond energies and radical electron affinities to anchor the scale. Relative acidities from proton-transfer equilibrium experiments are used in a local thermochemical network optimized by least-squares analysis to obtain absolute acidities of 14 additional acids in the region. Thermal enthalpy and entropy corrections are applied using molecular parameters from density functional theory, with explicit calculation of hindered rotor energy levels for torsional modes. The analysis reduces the uncertainties of the absolute acidities of the 14 acids to within ±1.2 to ±3.3 kJ/mol, expressed as estimates of the 95% confidence level. The experimental gas-phase acidities are compared with calculations, with generally good agreement. For nitromethane, ethanethiol, and cyclopentadiene, the refined acidities can be combined with electron affinities of the corresponding radicals from photoelectron spectroscopy to obtain improved values of the C-H or S-H bond dissociation energies, yielding D298(H-CH2NO2) = 423.5 ± 2.2 kJ mol(-1), D298(C2H5S-H) = 364.7 ± 2.2 kJ mol(-1), and D298(C5H5-H) = 347.4 ± 2.2 kJ mol(-1). These values represent the best-available experimental bond dissociation energies for these species.
Quantum Bath Refrigeration towards Absolute Zero: Challenging the Unattainability Principle
NASA Astrophysics Data System (ADS)
Kolář, M.; Gelbwaser-Klimovsky, D.; Alicki, R.; Kurizki, G.
2012-08-01
A minimal model of a quantum refrigerator, i.e., a periodically phase-flipped two-level system permanently coupled to a finite-capacity bath (cold bath) and an infinite heat dump (hot bath), is introduced and used to investigate the cooling of the cold bath towards absolute zero (T=0). Remarkably, the temperature scaling of the cold-bath cooling rate reveals that it does not vanish as T→0 for certain realistic quantized baths, e.g., phonons in strongly disordered media (fractons) or quantized spin waves in ferromagnets (magnons). This result challenges Nernst’s third-law formulation known as the unattainability principle.
NASA Technical Reports Server (NTRS)
Deyoung, James A.; Klepczynski, William J.; Mckinley, Angela Davis; Powell, William M.; Mai, Phu V.; Hetzel, P.; Bauch, A.; Davis, J. A.; Pearce, P. R.; Baumont, Francoise S.
1995-01-01
The international transatlantic time and frequency transfer experiment was designed by participating laboratories and has been implemented during 1994 to test the international communications path involving a large number of transmitting stations. This paper will present empirically determined clock and time scale differences, time and frequency domain instabilities, and a representative power spectral density analysis. The experiments by the method of co-location which will allow absolute calibration of the participating laboratories have been performed. Absolute time differences and accuracy levels of this experiment will be assessed in the near future.
NASA Astrophysics Data System (ADS)
Tarrío, Diego; Prokofiev, Alexander V.; Gustavsson, Cecilia; Jansson, Kaj; Andersson-Sundén, Erik; Al-Adili, Ali; Pomp, Stephan
2017-09-01
Neutron-induced fission cross sections of 235U and 238U are widely used as standards for monitoring of neutron beams and fields. An absolute measurement of these cross sections at an absolute scale, i.e., versus the H(n,p) scattering cross section, is planned with the white neutron beam under construction at the Neutrons For Science (NFS) facility in GANIL. The experimental setup, based on PPACs and ΔE-ΔE-E telescopes containing Silicon and CsI(Tl) detectors, is described. The expected uncertainties are discussed.
Ricotta, Carlo
2003-01-01
Traditional diversity measures such as the Shannon entropy are generally computed from the species' relative abundance vector of a given community to the exclusion of species' absolute abundances. In this paper, I first mention some examples where the total information content associated with a given community may be more adequate than Shannon's average information content for a better understanding of ecosystem functioning. Next, I propose a parametric measure of statistical information that contains both Shannon's entropy and total information content as special cases of this more general function.
Quantum bath refrigeration towards absolute zero: challenging the unattainability principle.
Kolář, M; Gelbwaser-Klimovsky, D; Alicki, R; Kurizki, G
2012-08-31
A minimal model of a quantum refrigerator, i.e., a periodically phase-flipped two-level system permanently coupled to a finite-capacity bath (cold bath) and an infinite heat dump (hot bath), is introduced and used to investigate the cooling of the cold bath towards absolute zero (T=0). Remarkably, the temperature scaling of the cold-bath cooling rate reveals that it does not vanish as T→0 for certain realistic quantized baths, e.g., phonons in strongly disordered media (fractons) or quantized spin waves in ferromagnets (magnons). This result challenges Nernst's third-law formulation known as the unattainability principle.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
... errors, (1) would result in a change of at least five absolute percentage points in, but not less than 25... determination; or (2) would result in a difference between a weighted-average dumping margin of zero or de...
Elastic scattering and soft diffraction with ALFA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puzo, P.
The ALFA detector in ATLAS aims at measuring the absolute luminosity and the total cross-section with 2-3% accuracy. Its uses elastically scattered protons whose impact position on a fiber detector, located 240 m away from the interaction point, allow a measurement of the scattering angle.
Characterization of MreB polymers in E. coli and their correlations to cell shape
NASA Astrophysics Data System (ADS)
Nguyen, Jeffrey; Ouzonov, Nikolay; Gitai, Zemer; Shaevitz, Joshua
2015-03-01
Shape influences all facets of how bacteria interact with their environment. The size of E. coli is determined by the peptidoglycan cell wall and internal turgor pressure. The cell wall is patterned by MreB, an actin homolog that forms short polymers on the cytoplasmic membrane. MreB coordinates the breaking of old material and the insertion of new material for growth, but it is currently unknown what mechanism sets the absolute diameter of the cell. Using new techniques in fluorescence microscopy and image processing, we are able to quantify cell shape in 3- dimensions and access previously unattainable data on the conformation of MreB polymers. To study how MreB affects the diameter of bacteria, we analyzed the shapes and polymers of cells that have had MreB perturbed by one of two methods. We first treated cells with the MreB polymerization-inhibiting drug A22. Secondly, we created point mutants in MreB that change MreB polymer conformation and the cell shape. By analyzing the correlations between different shape and polymer metrics, we find that under both treatments, the average helical pitch angle of the polymers correlates strongly with the cell diameter. This observation links the micron scale shape of the cell to the nanometer scale MreB cytoskeleton.
Surface Rupture Map of the 2002 M7.9 Denali Fault Earthquake, Alaska: Digital Data
Haeussler, Peter J.
2009-01-01
The November 3, 2002, Mw7.9 Denali Fault earthquake produced about 340 km of surface rupture along the Susitna Glacier Thrust Fault and the right-lateral, strike-slip Denali and Totschunda Faults. Digital photogrammetric methods were primarily used to create a 1:500-scale, three-dimensional surface rupture map, and 1:6,000-scale aerial photographs were used for three-dimensional digitization in ESRI's ArcMap GIS software, using Leica's StereoAnalyst plug in. Points were digitized 4.3 m apart, on average, for the entire surface rupture. Earthquake-induced landslides, sackungen, and unruptured Holocene fault scarps on the eastern Denali Fault were also digitized where they lay within the limits of air photo coverage. This digital three-dimensional fault-trace map is superior to traditional maps in terms of relative and absolute accuracy, completeness, and detail and is used as a basis for three-dimensional visualization. Field work complements the air photo observations in locations of dense vegetation, on bedrock, or in areas where the surface trace is weakly developed. Seventeen km of the fault trace, which broke through glacier ice, were not digitized in detail due to time constraints, and air photos missed another 10 km of fault rupture through the upper Black Rapids Glacier, so that was not mapped in detail either.
He, J; Gao, H; Xu, P; Yang, R
2015-12-01
Body weight, length, width and depth at two growth stages were observed for a total of 5015 individuals of GIFT strain, along with a pedigree including 5588 individuals from 104 sires and 162 dams was collected. Multivariate animal models and a random regression model were used to genetically analyse absolute and relative growth scales of these growth traits. In absolute growth scale, the observed growth traits had moderate heritabilities ranging from 0.321 to 0.576, while pairwise ratios between body length, width and depth were lowly inherited and maximum heritability was only 0.146 for length/depth. All genetic correlations were above 0.5 between pairwise growth traits and genetic correlation between length/width and length/depth varied between both growth stages. Based on those estimates, selection index of multiple traits of interest can be formulated in future breeding program to improve genetically body weight and morphology of the GIFT strain. In relative growth scale, heritabilities in relative growths of body length, width and depth to body weight were 0.257, 0.412 and 0.066, respectively, while genetic correlations among these allometry scalings were above 0.8. Genetic analysis for joint allometries of body weight to body length, width and depth will contribute to genetically regulate the growth rate between body shape and body weight. © 2015 Blackwell Verlag GmbH.
NASA Astrophysics Data System (ADS)
Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bezyazeekov, P. A.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Budnev, N. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fedorov, O.; Fuchs, B.; Gemmeke, H.; Gress, O. A.; Grupen, C.; Haungs, A.; Heck, D.; Hiller, R.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kazarina, Y.; Kleifges, M.; Korosteleva, E. E.; Kostunin, D.; Krömer, O.; Kuijpers, J.; Kuzmichev, L. A.; Link, K.; Lubsandorzhiev, N.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Mirgazov, R. R.; Monkhoev, R.; Morello, C.; Oehlschläger, J.; Osipova, E. A.; Pakhorukov, A.; Palmieri, N.; Pankov, L.; Pierog, T.; Prosin, V. V.; Rautenberg, J.; Rebel, H.; Roth, M.; Rubtsov, G. I.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wischnewski, R.; Wochele, J.; Zabierowski, J.; Zagorodnikov, A.; Zensus, J. A.; Tunka-Rex; Lopes Collaborations
2016-12-01
The radio technique is a promising method for detection of cosmic-ray air showers of energies around 100PeV and higher with an array of radio antennas. Since the amplitude of the radio signal can be measured absolutely and increases with the shower energy, radio measurements can be used to determine the air-shower energy on an absolute scale. We show that calibrated measurements of radio detectors operated in coincidence with host experiments measuring air showers based on other techniques can be used for comparing the energy scales of these host experiments. Using two approaches, first via direct amplitude measurements, and second via comparison of measurements with air shower simulations, we compare the energy scales of the air-shower experiments Tunka-133 and KASCADE-Grande, using their radio extensions, Tunka-Rex and LOPES, respectively. Due to the consistent amplitude calibration for Tunka-Rex and LOPES achieved by using the same reference source, this comparison reaches an accuracy of approximately 10% - limited by some shortcomings of LOPES, which was a prototype experiment for the digital radio technique for air showers. In particular we show that the energy scales of cosmic-ray measurements by the independently calibrated experiments KASCADE-Grande and Tunka-133 are consistent with each other on this level.
Hill, Peter B
2015-06-01
Grading of erythema in clinical practice is a subjective assessment that cannot be confirmed using a definitive test; nevertheless, erythema scores are typically measured in clinical trials assessing the response to treatment interventions. Most commonly, ordinal scales are used for this purpose, but the optimal number of categories in such scales has not been determined. This study aimed to compare the reliability and agreement of a four-point and a six-point ordinal scale for the assessment of erythema in digital images of canine skin. Fifteen digital images showing varying degrees of erythema were assessed by specialist dermatologists and laypeople, using either the four-point or the six-point scale. Reliability between the raters was assessed using intraclass correlation coefficients and Cronbach's α. Agreement was assessed using the variation ratio (the percentage of respondents who chose the mode, the most common answer). Intraobserver variability was assessed by comparing the results of two grading sessions, at least 6 weeks apart. Both scales demonstrated high reliability, with intraclass correlation coefficient values and Cronbach's α above 0.99. However, the four-point scale demonstrated significantly superior agreement, with variation ratios for the four-point scale averaging 74.8%, compared with 56.2% for the six-point scale. Intraobserver consistency for the four-point scale was very high. Although both scales demonstrated high reliability, the four-point scale was superior in terms of agreement. For the assessment of erythema in clinical trials, a four-point ordinal scale is recommended. © 2014 ESVD and ACVD.
2010-12-01
Air Force Reseach Laboratory, Hanscom AFB, MA 928, 2010 December © 2010, The American Astronomical Society. 14. ABSTRACT The absolutely calibrated...the visible and Sirius (a CMa) in the infrared. The resulting zero-point SED tests well against solar analog data presented by Rieke et al. while also...resulting zero-point SED tests well against solar analog data presented by Rieke et al. while also maintaining an unambiguous link to specific
On the Photometric Calibration of FORS2 and the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Bramich, D.; Moehler, S.; Coccato, L.; Freudling, W.; Garcia-Dabó, C. E.; Müller, P.; Saviane, I.
2012-09-01
An accurate absolute calibration of photometric data to place them on a standard magnitude scale is very important for many science goals. Absolute calibration requires the observation of photometric standard stars and analysis of the observations with an appropriate photometric model including all relevant effects. In the FORS Absolute Photometry (FAP) project, we have developed a standard star observing strategy and modelling procedure that enables calibration of science target photometry to better than 3% accuracy on photometrically stable nights given sufficient signal-to-noise. In the application of this photometric modelling to large photometric databases, we have investigated the Sloan Digital Sky Survey (SDSS) and found systematic trends in the published photometric data. The amplitudes of these trends are similar to the reported typical precision (˜1% and ˜2%) of the SDSS photometry in the griz- and u-bands, respectively.
NASA Astrophysics Data System (ADS)
Parker, David H.
2017-04-01
By using three, or more, electronic distance measurement (EDM) instruments, such as commercially available laser trackers, in an unconventional trilateration architecture, 3-D coordinates of specialized retroreflector targets attached to cardinal points on a structure can be measured with absolute uncertainty of less than one part-permillion. For example, 3-D coordinates of a structure within a 100 meter cube can be measured within a volume of a 0.1 mm cube (the thickness of a sheet of paper). Relative dynamic movements, such as vibrations at 30 Hz, are typically measured 10 times better, i.e., within a 0.01 mm cube. Measurements of such accuracy open new areas for nondestructive testing and finite element model confirmation of stiff, large-scale structures, such as: buildings, bridges, cranes, boilers, tank cars, nuclear power plant containment buildings, post-tensioned concrete, and the like by measuring the response to applied loads, changes over the life of the structure, or changes following an accident, fire, earthquake, modification, etc. The sensitivity of these measurements makes it possible to measure parameters such as: linearity, hysteresis, creep, symmetry, damping coefficient, and the like. For example, cracks exhibit a highly non-linear response when strains are reversed from compression to tension. Due to the measurements being 3-D, unexpected movements, such as transverse motion produced by an axial load, could give an indication of an anomaly-such as an asymmetric crack or materials property in a beam, delamination of concrete, or other asymmetry due to failures. Details of the specialized retroreflector are included.
NASA Astrophysics Data System (ADS)
Shean, D. E.; Arendt, A. A.; Whorton, E.; Riedel, J. L.; O'Neel, S.; Fountain, A. G.; Joughin, I. R.
2016-12-01
We adapted the open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline an automated processing workflow for 0.5 m GSD DigitalGlobe WorldView-1/2/3 and GeoEye-1 along-track and cross-track stereo image data. Output DEM products are posted at 2, 8, and 32 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of 0.1-0.5 m for overlapping, co-registered DEMs (n=14,17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We have leveraged these resources to produce dense time series and regional mosaics for the Earth's ice sheets. We are now processing and analyzing all available 2008-2016 commercial stereo DEMs over glaciers and perennial snowfields in the contiguous US. We are using these records to study long-term, interannual, and seasonal volume change and glacier mass balance. This analysis will provide a new assessment of regional climate change, and will offer basin-scale analyses of snowpack evolution and snow/ice melt runoff for water resource applications.
[Validation of a clinical prediction rule to distinguish bacterial from aseptic meningitis].
Agüero, Gonzalo; Davenport, María C; Del Valle, María de la P; Gallegos, Paulina; Kannemann, Ana L; Bokser, Vivian; Ferrero, Fernando
2010-02-01
Despite most meningitis are not bacterial, antibiotics are usually administered on admission because bacterial meningitis is difficult to be rule-out. Distinguishing bacterial from aseptic meningitis on admission could avoid inappropriate antibiotic use and hospitalization. We aimed to validate a clinical prediction rule to distinguish bacterial from aseptic meningitis in children, on arriving to the emergency room. This prospective study included patients aged < 19 years with meningitis. Cerebrospinal fluid (CSF) and peripheral blood neutrophil count were obtained from all patients. The BMS (Bacterial Meningitis Score) described by Nigrovic (Pediatrics 2002; 110: 712), was calculated: positive CSF Gram stain= 2 points, CSF absolute neutrophil count > or = 1000 cells/mm(3), CSF protein > or = 80 mg/dl, peripheral blood absolute neutrophil count > or = 10.000/mm(3), seizure = 1 point each. Sensitivity (S), specificity (E), positive and negative predictive values (PPV and NPV), positive and negative likelihood ratios (PLR and NLR) of the BMS to predict bacterial meningitis were calculated. Seventy patients with meningitis were included (14 bacterial meningitis). When BMS was calculated, 25 patients showed a BMS= 0 points, 11 BMS= 1 point, and 34 BMS > or = 2 points. A BMS = 0 showed S: 100%, E: 44%, VPP: 31%, VPN: 100%, RVP: 1,81 RVN: 0. A BMS > or = 2 predicted bacterial meningitis with S: 100%, E: 64%, VPP: 41%, VPN: 100%, PLR: 2.8, NLR:0. Using BMS was simple, and allowed identifying children with very low risk of bacterial meningitis. It could be a useful tool to assist clinical decision making.
Inequalities, Signum Functions and Wrinkles in Wiggle Graphs.
ERIC Educational Resources Information Center
Priest, Dean B.; Wood, Dianne
Presented is a graphical approach to teaching higher degree, rational function, and absolute value inequalities that simplifies the solution of these inequalities and thereby reduces the amount of classroom time that has to be devoted to this topic. Applications are also given for signum functions, maximum-minimum, and points of inflection…
Alaska national hydrography dataset positional accuracy assessment study
Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy
2013-01-01
Initial visual assessments Wide range in the quality of fit between features in NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy Determining absolute accuracy is cost prohibitive (must collect independent, well defined test points) Quantitative analysis of relative positional error is feasible.
Avoiding Degeneracy in Multidimensional Unfolding by Penalizing on the Coefficient of Variation
ERIC Educational Resources Information Center
Busing, Frank M. T. A.; Groenen, Patrick J. K.; Heiser, Willem J.
2005-01-01
Multidimensional unfolding methods suffer from the degeneracy problem in almost all circumstances. Most degeneracies are easily recognized: the solutions are perfect but trivial, characterized by approximately equal distances between points from different sets. A definition of an absolutely degenerate solution is proposed, which makes clear that…
ERIC Educational Resources Information Center
Ryan, James J.; Holmes, Mark
1988-01-01
Two articles comment on the debate over the utility of science in educational administration. Critiques of various positions on the topic point out the possible effects of conservatism and positivism on inequality and inequity in educational administration. (CB)
NASA Astrophysics Data System (ADS)
Rutzinger, Martin; Bremer, Magnus; Ragg, Hansjörg
2013-04-01
Recently, terrestrial laser scanning (TLS) and matching of images acquired by unmanned arial vehicles (UAV) are operationally used for 3D geodata acquisition in Geoscience applications. However, the two systems cover different application domains in terms of acquisition conditions and data properties i.e. accuracy and line of sight. In this study we investigate the major differences between the two platforms for terrain roughness estimation. Terrain roughness is an important input for various applications such as morphometry studies, geomorphologic mapping, and natural process modeling (e.g. rockfall, avalanche, and hydraulic modeling). Data has been collected simultaneously by TLS using an Optech ILRIS3D and a rotary UAV using an octocopter from twins.nrn for a 900 m² test site located in a riverbed in Tyrol, Austria (Judenbach, Mieming). The TLS point cloud has been acquired from three scan positions. These have been registered using iterative closest point algorithm and a target-based referencing approach. For registration geometric targets (spheres) with a diameter of 20 cm were used. These targets were measured with dGPS for absolute georeferencing. The TLS point cloud has an average point density of 19,000 pts/m², which represents a point spacing of about 5 mm. 15 images where acquired by UAV in a height of 20 m using a calibrated camera with focal length of 18.3 mm. A 3D point cloud containing RGB attributes was derived using APERO/MICMAC software, by a direct georeferencing approach based on the aircraft IMU data. The point cloud is finally co-registered with the TLS data to guarantee an optimal preparation in order to perform the analysis. The UAV point cloud has an average point density of 17,500 pts/m², which represents a point spacing of 7.5 mm. After registration and georeferencing the level of detail of roughness representation in both point clouds have been compared considering elevation differences, roughness and representation of different grain sizes. UAV closes the gap between aerial and terrestrial surveys in terms of resolution and acquisition flexibility. This is also true for the data accuracy. Considering these data collection and data quality properties of both systems they have their merit on its own in terms of scale, data quality, data collection speed and application.
NASA Astrophysics Data System (ADS)
Hoyt, Taylor J.; Freedman, Wendy L.; Madore, Barry F.; Seibert, Mark; Beaton, Rachael L.; Hatt, Dylan; Jang, In Sung; Lee, Myung Gyoon; Monson, Andrew J.; Rich, Jeffrey A.
2018-05-01
We present a new empirical JHK absolute calibration of the tip of the red giant branch (TRGB) in the Large Magellanic Cloud (LMC). We use published data from the extensive Near-Infrared Synoptic Survey containing 3.5 million stars, 65,000 of which are red giants that fall within one magnitude of the TRGB. Adopting the TRGB slopes from a companion study of the isolated dwarf galaxy IC 1613, as well as an LMC distance modulus of μ 0 = 18.49 mag from (geometric) detached eclipsing binaries, we derive absolute JHK zero points for the near-infrared TRGB. For a comparison with measurements in the bar alone, we apply the calibrated JHK TRGB to a 500 deg2 area of the 2MASS survey. The TRGB reveals the 3D structure of the LMC with a tilt in the direction perpendicular to the major axis of the bar, which is in agreement with previous studies.
A geometric performance assessment of the EO-1 advanced land imager
Storey, James C.; Choate, M.J.; Meyer, D.J.
2004-01-01
The Earth Observing 1 (EO-1) Advanced Land Imager (ALI) demonstrates technology applicable to a successor system to the Landsat Thematic Mapper series. A study of the geometric performance characteristics of the ALI was conducted under the auspices of the EO-1 Science Validation Team. This study evaluated ALI performance with respect to absolute pointing knowledge, focal plane sensor chip assembly alignment, and band-to-band registration for purposes of comparing this new technology to the heritage Landsat systems. On-orbit geometric calibration procedures were developed that allowed the generation of ALI geometrically corrected products that compare favorably with their Landsat 7 counterparts with respect to absolute geodetic accuracy, internal image geometry, and band registration.
Microwave measurements of the absolute values of absorption by water vapour in the atmosphere.
Hogg, D C; Guiraud, F O
1979-05-31
MEASUREMENT of the absolute value of absorption by water vapour at microwave frequencies is difficult because the effect is so small. Far in the wings of the absorption lines, in the so-called 'windows' of the spectrum, it is especially difficult to achieve high accuracy in the free atmosphere. But it is in these windows that the behaviour of the absorption is important from both applied and scientific points of view. Satellite communications, remote sensing of the atmosphere, and radioastronomy, are all influenced by this behaviour. Measurements on an Earth-space path are reported here; the results indicate a nonlinear relationship between absorption and water-vapour content.
NASA Astrophysics Data System (ADS)
Yi, Huili; Tian, Jianxiang
2014-07-01
A new simple correlation based on the principle of corresponding state is proposed to estimate the temperature-dependent surface tension of normal saturated liquids. The correlation is a linear one and strongly stands for 41 saturated normal liquids. The new correlation requires only the triple point temperature, triple point surface tension and critical point temperature as input and is able to represent the experimental surface tension data for these 41 saturated normal liquids with a mean absolute average percent deviation of 1.26% in the temperature regions considered. For most substances, the temperature covers the range from the triple temperature to the one beyond the boiling temperature.
Upper Limit of Weights in TAI Computation
NASA Technical Reports Server (NTRS)
Thomas, Claudine; Azoubib, Jacques
1996-01-01
The international reference time scale International Atomic Time (TAI) computed by the Bureau International des Poids et Mesures (BIPM) relies on a weighted average of data from a large number of atomic clocks. In it, the weight attributed to a given clock depends on its long-term stability. In this paper the TAI algorithm is used as the basis for a discussion of how to implement an upper limit of weight for clocks contributing to the ensemble time. This problem is approached through the comparison of two different techniques. In one case, a maximum relative weight is fixed: no individual clock can contribute more than a given fraction to the resulting time scale. The weight of each clock is then adjusted according to the qualities of the whole set of contributing elements. In the other case, a parameter characteristic of frequency stability is chosen: no individual clock can appear more stable than the stated limit. This is equivalent to choosing an absolute limit of weight and attributing this to to the most stable clocks independently of the other elements of the ensemble. The first technique is more robust than the second and automatically optimizes the stability of the resulting time scale, but leads to a more complicated computatio. The second technique has been used in the TAI algorithm since the very beginning. Careful analysis of tests on real clock data shows that improvement of the stability of the time scale requires revision from time to time of the fixed value chosen for the upper limit of absolute weight. In particular, we present results which confirm the decision of the CCDS Working Group on TAI to increase the absolute upper limit by a factor of 2.5. We also show that the use of an upper relative contribution further helps to improve the stability and may be a useful step towards better use of the massive ensemble of HP 507IA clocks now contributing to TAI.
The PMA Catalogue: 420 million positions and absolute proper motions
NASA Astrophysics Data System (ADS)
Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.
2017-07-01
We present a catalogue that contains about 420 million absolute proper motions of stars. It was derived from the combination of positions from Gaia DR1 and 2MASS, with a mean difference of epochs of about 15 yr. Most of the systematic zonal errors inherent in the 2MASS Catalogue were eliminated before deriving the absolute proper motions. The absolute calibration procedure (zero-pointing of the proper motions) was carried out using about 1.6 million positions of extragalactic sources. The mean formal error of the absolute calibration is less than 0.35 mas yr-1. The derived proper motions cover the whole celestial sphere without gaps for a range of stellar magnitudes from 8 to 21 mag. In the sky areas where the extragalactic sources are invisible (the avoidance zone), a dedicated procedure was used that transforms the relative proper motions into absolute ones. The rms error of proper motions depends on stellar magnitude and ranges from 2-5 mas yr-1 for stars with 10 mag < G < 17 mag to 5-10 mas yr-1 for faint ones. The present catalogue contains the Gaia DR1 positions of stars for the J2015 epoch. The system of the PMA proper motions does not depend on the systematic errors of the 2MASS positions, and in the range from 14 to 21 mag represents an independent realization of a quasi-inertial reference frame in the optical and near-infrared wavelength range. The Catalogue also contains stellar magnitudes taken from the Gaia DR1 and 2MASS catalogues. A comparison of the PMA proper motions of stars with similar data from certain recent catalogues has been undertaken.
Huang, S.; Young, Caitlin; Feng, M.; Heidemann, Hans Karl; Cushing, Matthew; Mushet, D.M.; Liu, S.
2011-01-01
Recent flood events in the Prairie Pothole Region of North America have stimulated interest in modeling water storage capacities of wetlands and their surrounding catchments to facilitate flood mitigation efforts. Accurate estimates of basin storage capacities have been hampered by a lack of high-resolution elevation data. In this paper, we developed a 0.5 m bare-earth model from Light Detection And Ranging (LiDAR) data and, in combination with National Wetlands Inventory data, delineated wetland catchments and their spilling points within a 196 km2 study area. We then calculated the maximum water storage capacity of individual basins and modeled the connectivity among these basins. When compared to field survey results, catchment and spilling point delineations from the LiDAR bare-earth model captured subtle landscape features very well. Of the 11 modeled spilling points, 10 matched field survey spilling points. The comparison between observed and modeled maximum water storage had an R2 of 0.87 with mean absolute error of 5564 m3. Since maximum water storage capacity of basins does not translate into floodwater regulation capability, we further developed a Basin Floodwater Regulation Index. Based upon this index, the absolute and relative water that could be held by wetlands over a landscape could be modeled. This conceptual model of floodwater downstream contribution was demonstrated with water level data from 17 May 2008.
Humbert, P; Faivre, B; Véran, Y; Debure, C; Truchetet, F; Bécherel, P-A; Plantin, P; Kerihuel, J-C; Eming, SA; Dissemond, J; Weyandt, G; Kaspar, D; Smola, H; Zöllner, P
2014-01-01
Background Stringent control of proteolytic activity represents a major therapeutic approach for wound-bed preparation. Objectives We tested whether a protease-modulating polyacrylate- (PA-) containing hydrogel resulted in a more efficient wound-bed preparation of venous leg ulcers when compared to an amorphous hydrogel without known protease-modulating properties. Methods Patients were randomized to the polyacrylate-based hydrogel (n = 34) or to an amorphous hydrogel (n = 41). Wound beds were evaluated by three blinded experts using photographs taken on days 0, 7 and 14. Results After 14 days of treatment there was an absolute decrease in fibrin and necrotic tissue of 37.6 ± 29.9 percentage points in the PA-based hydrogel group and by 16.8 ± 23.0 percentage points in the amorphous hydrogel group. The absolute increase in the proportion of ulcer area covered by granulation tissue was 36.0 ± 27.4 percentage points in the PA-based hydrogel group and 14.5 ± 22.0 percentage points in the control group. The differences between the groups were significant (decrease in fibrin and necrotic tissue P = 0.004 and increase in granulation tissue P = 0.0005, respectively). Conclusion In particular, long-standing wounds profited from the treatment with the PA-based hydrogel. These data suggest that PA-based hydrogel dressings can stimulate normalization of the wound environment, particularly in hard-to-heal ulcers. PMID:24612304
Vacuum ultraviolet photoionization cross section of the hydroxyl radical.
Dodson, Leah G; Savee, John D; Gozem, Samer; Shen, Linhan; Krylov, Anna I; Taatjes, Craig A; Osborn, David L; Okumura, Mitchio
2018-05-14
The absolute photoionization spectrum of the hydroxyl (OH) radical from 12.513 to 14.213 eV was measured by multiplexed photoionization mass spectrometry with time-resolved radical kinetics. Tunable vacuum ultraviolet (VUV) synchrotron radiation was generated at the Advanced Light Source. OH radicals were generated from the reaction of O( 1 D) + H 2 O in a flow reactor in He at 8 Torr. The initial O( 1 D) concentration, where the atom was formed by pulsed laser photolysis of ozone, was determined from the measured depletion of a known concentration of ozone. Concentrations of OH and O( 3 P) were obtained by fitting observed time traces with a kinetics model constructed with literature rate coefficients. The absolute cross section of OH was determined to be σ(13.436 eV) = 3.2 ± 1.0 Mb and σ(14.193 eV) = 4.7 ± 1.6 Mb relative to the known cross section for O( 3 P) at 14.193 eV. The absolute photoionization spectrum was obtained by recording a spectrum at a resolution of 8 meV (50 meV steps) and scaling to the single-energy cross sections. We computed the absolute VUV photoionization spectrum of OH and O( 3 P) using equation-of-motion coupled-cluster Dyson orbitals and a Coulomb photoelectron wave function and found good agreement with the observed absolute photoionization spectra.
Vacuum ultraviolet photoionization cross section of the hydroxyl radical
NASA Astrophysics Data System (ADS)
Dodson, Leah G.; Savee, John D.; Gozem, Samer; Shen, Linhan; Krylov, Anna I.; Taatjes, Craig A.; Osborn, David L.; Okumura, Mitchio
2018-05-01
The absolute photoionization spectrum of the hydroxyl (OH) radical from 12.513 to 14.213 eV was measured by multiplexed photoionization mass spectrometry with time-resolved radical kinetics. Tunable vacuum ultraviolet (VUV) synchrotron radiation was generated at the Advanced Light Source. OH radicals were generated from the reaction of O(1D) + H2O in a flow reactor in He at 8 Torr. The initial O(1D) concentration, where the atom was formed by pulsed laser photolysis of ozone, was determined from the measured depletion of a known concentration of ozone. Concentrations of OH and O(3P) were obtained by fitting observed time traces with a kinetics model constructed with literature rate coefficients. The absolute cross section of OH was determined to be σ(13.436 eV) = 3.2 ± 1.0 Mb and σ(14.193 eV) = 4.7 ± 1.6 Mb relative to the known cross section for O(3P) at 14.193 eV. The absolute photoionization spectrum was obtained by recording a spectrum at a resolution of 8 meV (50 meV steps) and scaling to the single-energy cross sections. We computed the absolute VUV photoionization spectrum of OH and O(3P) using equation-of-motion coupled-cluster Dyson orbitals and a Coulomb photoelectron wave function and found good agreement with the observed absolute photoionization spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Andujar, A; Cheung, J; Chuang, C
Purpose: To investigate the effect of dynamic and static jaw tracking on patient peripheral doses. Materials and Methods: A patient plan with a large sacral metastasis (volume 800cm3, prescription 600cGyx5) was selected for this study. The plan was created using 2-field RapidArc with jaw tracking enabled (Eclipse, V11.0.31). These fields were then exported and edited in MATLAB with static jaw positions using the control point with the largest field size for each respective arc, but preserving the optimized leaf sequences for delivery. These fields were imported back into Eclipse for dose calculation and comparison and copied to a Rando phantommore » for delivery analysis. Points were chosen in the phantom at depth and on the phantom surface at locations outside the primary radiation field, at distances of 12cm, 20cm, and 30cm from the isocenter. Measurements were acquired with OSLDs placed at these positions in the phantom with both the dynamic and static jaw deliveries for comparison. Surface measurements included an additional 1cm bolus over the OSLDs to ensure electron equilibrium. Results: The static jaw deliveries resulted in cumulative jaw-defined field sizes of 17.3% and 17.4% greater area than the dynamic jaw deliveries for each arc. The static jaw plan resulted in very small differences in calculated dose in the treatment planning system ranging from 0–16cGy. The measured dose differences were larger than calculated, but the differences in absolute dose were small. The measured dose differences at depth (surface) between the two deliveries showed an increase for the static jaw delivery of 2.2%(11.4%), 15.6%(20.0%), and 12.7%(12.7%) for distances of 12cm, 20cm, and 30cm, respectively. Eclipse calculates a difference of 0–3.1% for all of these points. The largest absolute dose difference between all points was 6.2cGy. Conclusion: While we demonstrated larger than expected differences in peripheral dose, the absolute dose differences were small.« less
First Impressions of CARTOSAT-1
NASA Technical Reports Server (NTRS)
Lutes, James
2007-01-01
CARTOSAT-1 RPCs need special handling. Absolute accuracy of uncontrolled scenes is poor (biases > 300 m). Noticeable cross-track scale error (+/- 3-4 m across stereo pair). Most errors are either biases or linear in line/sample (These are easier to correct with ground control).
New statistical scission-point model to predict fission fragment observables
NASA Astrophysics Data System (ADS)
Lemaître, Jean-François; Panebianco, Stefano; Sida, Jean-Luc; Hilaire, Stéphane; Heinrich, Sophie
2015-09-01
The development of high performance computing facilities makes possible a massive production of nuclear data in a full microscopic framework. Taking advantage of the individual potential calculations of more than 7000 nuclei, a new statistical scission-point model, called SPY, has been developed. It gives access to the absolute available energy at the scission point, which allows the use of a parameter-free microcanonical statistical description to calculate the distributions and the mean values of all fission observables. SPY uses the richness of microscopy in a rather simple theoretical framework, without any parameter except the scission-point definition, to draw clear answers based on perfect knowledge of the ingredients involved in the model, with very limited computing cost.
[The Correlation Between MicroRNAs in Serum and the Extent of Liver Injury].
Zuo, Yi-Nan; He, Xue-Ling; Shi, Xue-Ni; Wei, Shi-Hang; Yin, Hai-Lin
2017-05-01
To investigate the correlation between the absolute quantification of the microRNAs (miR-122, miR-451, miR-92a, miR-192) in serum during acute liver injury and the extent of liver injury on rat models of CCl 4 induced acute liver injury and mice models of acetaminophen (APAP) induced acute liver injury. Furthermore, to investigate the correlation between the absolute quantification of microRNAs in serum and the drug induced liver injury pathological scoring system (DILI-PSS). The acute liver injury model in rat by CCl 4 (1.5 mL/kg), and the acute liver injury model in mice by APAP (160 mg/kg) were established. The serum at different time points on both models were collected respectively. The absolute quantification of microRNAs in serum were detected by using MiRbay TM SV miRNA Assay kit. Meanwhile, the pathological sections of liver tissue of the mice at each time point were collected to analyze the correlation between microRNAs and the degree of liver injury. In CCl 4 -induced rat acute liver injury model and APAP induced mouse acute liver injury, miR-122 and miR-192 appeared to be rising significantly, which remained the highest level at 24 h after treatment, and declined to the normal level after 72 h. In CCl 4 -induced rat acute liver injury model, the change of miR-92a was fluctuated and had no apparent rules, miR-451 declined gradually, but not obviously. In mice acute liver injury model induced by APAP, miR-92a and miR-451 in the progress of liver injury declined gradually, reached the lowest point at 48 h, and then recovered. The result of correlation analysis indicated that miR-122 and miR-192 presented a good positive correlation with the DILI-PSS ( r =0.741 3, P <0.05; r =0.788 3, P <0.01). The absolute quantification of miR-122 and miR-192 in serum has the highest level in 24 h, then decrease in 72 h, in both drug-induced and chemical liver injury. In addition, both the two microRNAs have good correlation with DILI-PSS in APAP-induced liver injury models.
Measurement of optical to electrical and electrical to optical delays with ps-level uncertainty.
Peek, H Z; Pinkert, T J; Jansweijer, P P M; Koelemeij, J C J
2018-05-28
We present a new measurement principle to determine the absolute time delay of a waveform from an optical reference plane to an electrical reference plane and vice versa. We demonstrate a method based on this principle with 2 ps uncertainty. This method can be used to perform accurate time delay determinations of optical transceivers used in fiber-optic time-dissemination equipment. As a result the time scales in optical and electrical domain can be related to each other with the same uncertainty. We expect this method will be a new breakthrough in high-accuracy time transfer and absolute calibration of time-transfer equipment.
Radulović, Vladimir; Štancar, Žiga; Snoj, Luka; Trkov, Andrej
2014-02-01
The calculation of axial neutron flux distributions with the MCNP code at the JSI TRIGA Mark II reactor has been validated with experimental measurements of the (197)Au(n,γ)(198)Au reaction rate. The calculated absolute reaction rate values, scaled according to the reactor power and corrected for the flux redistribution effect, are in good agreement with the experimental results. The effect of different cross-section libraries on the calculations has been investigated and shown to be minor. Copyright © 2013 Elsevier Ltd. All rights reserved.
Accuracy of free energies of hydration using CM1 and CM3 atomic charges.
Udier-Blagović, Marina; Morales De Tirado, Patricia; Pearlman, Shoshannah A; Jorgensen, William L
2004-08-01
Absolute free energies of hydration (DeltaGhyd) have been computed for 25 diverse organic molecules using partial atomic charges derived from AM1 and PM3 wave functions via the CM1 and CM3 procedures of Cramer, Truhlar, and coworkers. Comparisons are made with results using charges fit to the electrostatic potential surface (EPS) from ab initio 6-31G* wave functions and from the OPLS-AA force field. OPLS Lennard-Jones parameters for the organic molecules were used together with the TIP4P water model in Monte Carlo simulations with free energy perturbation theory. Absolute free energies of hydration were computed for OPLS united-atom and all-atom methane by annihilating the solutes in water and in the gas phase, and absolute DeltaGhyd values for all other molecules were computed via transformation to one of these references. Optimal charge scaling factors were determined by minimizing the unsigned average error between experimental and calculated hydration free energies. The PM3-based charge models do not lead to lower average errors than obtained with the EPS charges for the subset of 13 molecules in the original study. However, improvement is obtained by scaling the CM1A partial charges by 1.14 and the CM3A charges by 1.15, which leads to average errors of 1.0 and 1.1 kcal/mol for the full set of 25 molecules. The scaled CM1A charges also yield the best results for the hydration of amides including the E/Z free-energy difference for N-methylacetamide in water. Copyright 2004 Wiley Periodicals, Inc.
Lin, G.; Thurber, C.H.; Zhang, H.; Hauksson, E.; Shearer, P.M.; Waldhauser, F.; Brocher, T.M.; Hardebeck, J.
2010-01-01
We obtain a seismic velocity model of the California crust and uppermost mantle using a regional-scale double-difference tomography algorithm. We begin by using absolute arrival-time picks to solve for a coarse three-dimensional (3D) P velocity (VP) model with a uniform 30 km horizontal node spacing, which we then use as the starting model for a finer-scale inversion using double-difference tomography applied to absolute and differential pick times. For computational reasons, we split the state into 5 subregions with a grid spacing of 10 to 20 km and assemble our final statewide VP model by stitching together these local models. We also solve for a statewide S-wave model using S picks from both the Southern California Seismic Network and USArray, assuming a starting model based on the VP results and a VP=VS ratio of 1.732. Our new model has improved areal coverage compared with previous models, extending 570 km in the SW-NE directionand 1320 km in the NW-SE direction. It also extends to greater depth due to the inclusion of substantial data at large epicentral distances. Our VP model generally agrees with previous separate regional models for northern and southern California, but we also observe some new features, such as high-velocity anomalies at shallow depths in the Klamath Mountains and Mount Shasta area, somewhat slow velocities in the northern Coast Ranges, and slow anomalies beneath the Sierra Nevada at midcrustal and greater depths. This model can be applied to a variety of regional-scale studies in California, such as developing a unified statewide earthquake location catalog and performing regional waveform modeling.
Absolute measurement of the 242Pu neutron-capture cross section
Buckner, M. Q.; Wu, C. Y.; Henderson, R. A.; ...
2016-04-21
Here, the absolute neutron-capture cross section of 242Pu was measured at the Los Alamos Neutron Science Center using the Detector for Advanced Neutron-Capture Experiments array along with a compact parallel-plate avalanche counter for fission-fragment detection. The first direct measurement of the 242Pu(n,γ) cross section was made over the incident neutron energy range from thermal to ≈ 6 keV, and the absolute scale of the (n,γ) cross section was set according to the known 239Pu(n,f) resonance at E n,R = 7.83 eV. This was accomplished by adding a small quantity of 239Pu to the 242Pu sample. The relative scale of themore » cross section, with a range of four orders of magnitude, was determined for incident neutron energies from thermal to ≈ 40 keV. Our data, in general, are in agreement with previous measurements and those reported in ENDF/B-VII.1; the 242Pu(n,γ) cross section at the E n,R = 2.68 eV resonance is within 2.4% of the evaluated value. However, discrepancies exist at higher energies; our data are ≈30% lower than the evaluated data at E n ≈ 1 keV and are approximately 2σ away from the previous measurement at E n ≈ 20 keV.« less
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18 camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm due to scaling error was managed to be reduced to 0.77 mm while the correlation of errors to their distance from the origin reduced from 0.855 to 0.209. A simply feasible but less accurate absolute accuracy compensation method using tape measure on large distances was also tested, which resulted in similar scaling compensation compared to the surveying method or direct wand size compensation by a high precision 3D scanner. The presented validation methods can be less precise in some respects as compared to previous techniques, but they address an error type, which has not been and cannot be studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
[Continuity and discontinuity of the geomerida: the bionomic and biotic aspects].
Kafanov, A I
2005-01-01
The view of the spatial structure of the geomerida (Earth's life cover) as a continuum that prevails in modern phytocoenology is mostly determined by a physiognomic (landscape-bionomic) discrimination of vegetation components. In this connection, geography of life forms appears as subject of the landscapebionomic biogeography. In zoocoenology there is a tendency of synthesis of alternative concepts based on the assumption that there are no absolute continuum and absolute discontinuum in the organic nature. The problem of continuum and discontinuum of living cover being problem of scale aries from fractal structure of geomerida. This problem arises from fractal nature of the spatial structure of geomerida. The continuum mainly belongs to regularities of topological order. At regional and subregional scale the continuum of biochores is rather rare. The objective evidences of relative discontinuity of the living cover are determined by significant alterations of species diversity at the regional, subregional and even topological scale Alternatively to conventionally discriminated units in physionomically continuous vegetation, the same biotic complexes, represented as operational units of biogeographical and biocenological zoning, are distinguished repeatedly and independently by different researchers. An area occupied by certain flora (fauna, biota) could be considered as elementary unit of biotic diversity (elementary biotic complex).
Using Blur to Affect Perceived Distance and Size
HELD, ROBERT T.; COOPER, EMILY A.; O’BRIEN, JAMES F.; BANKS, MARTIN S.
2011-01-01
We present a probabilistic model of how viewers may use defocus blur in conjunction with other pictorial cues to estimate the absolute distances to objects in a scene. Our model explains how the pattern of blur in an image together with relative depth cues indicates the apparent scale of the image’s contents. From the model, we develop a semiautomated algorithm that applies blur to a sharply rendered image and thereby changes the apparent distance and scale of the scene’s contents. To examine the correspondence between the model/algorithm and actual viewer experience, we conducted an experiment with human viewers and compared their estimates of absolute distance to the model’s predictions. We did this for images with geometrically correct blur due to defocus and for images with commonly used approximations to the correct blur. The agreement between the experimental data and model predictions was excellent. The model predicts that some approximations should work well and that others should not. Human viewers responded to the various types of blur in much the way the model predicts. The model and algorithm allow one to manipulate blur precisely and to achieve the desired perceived scale efficiently. PMID:21552429
Absolute gravimetry as an operational tool for geodynamics research
NASA Astrophysics Data System (ADS)
Torge, W.
Relative gravimetric techniques have been used for nearly 30 years for measuring non-tidal gravity variations with time, and thus have contributed to geodynamics research by monitoring vertical crustal movements and internal mass shifts. With today's accuracy of about ± 0.05µms-2 (or 5µGal), significant results have been obtained in numerous control nets of local extension, especially in connection with seismic and volcanic events. Nevertheless, the main drawbacks of relative gravimetry, which are deficiencies in absolute datum and calibration, set a limit for its application, especially with respect to large-scale networks and long-term investigations. These problems can now be successfully attacked by absolute gravimetry, with transportable gravimeters available since about 20 years. While the absolute technique during the first two centuries of gravimetry's history was based on the pendulum method, the free-fall method can now be employed taking advantage of laser-interferometry, electronic timing, vacuum and shock absorbing techniques, and on-line computer-control. The accuracy inherent in advanced instruments is about ± 0.05 µms-2. In field work, generally an accuracy of ±0.1 µms-2 may be expected, strongly depending on local environmental conditions.
Nanoseismic sources made in the laboratory: source kinematics and time history
NASA Astrophysics Data System (ADS)
McLaskey, G.; Glaser, S. D.
2009-12-01
When studying seismic signals in the field, the analysis of source mechanisms is always obscured by propagation effects such as scattering and reflections due to the inhomogeneous nature of the earth. To get around this complication, we measure seismic waves (wavelengths from 2 mm to 300 mm) in laboratory-sized specimens of extremely homogeneous isotropic materials. We are able to study the focal mechanism and time history of nanoseismic sources produced by fracture, impact, and sliding friction, roughly six orders of magnitude smaller and more rapid than typical earthquakes. Using very sensitive broadband conical piezoelectric sensors, we are able to measure surface normal displacements down to a few pm (10^-12 m) in amplitude. Thick plate specimens of homogeneous materials such as glass, steel, gypsum, and polymethylmethacrylate (PMMA) are used as propagation media in the experiments. Recorded signals are in excellent agreement with theoretically determined Green’s functions obtained from a generalized ray theory code for an infinite plate geometry. Extremely precise estimates of the source time history are made via full waveform inversion from the displacement time histories recorded by an array of at least ten sensors. Each channel is sampled at a rate of 5 MHz. The system is absolutely calibrated using the normal impact of a tiny (~1 mm) ball on the surface of the specimen. The ball impact induces a force pulse into the specimen a few ms in duration. The amplitude, duration, and shape of the force pulse were found to be well approximated by Hertzian-derived impact theory, while the total change in momentum of the ball is independently measured from its incoming and rebound velocities. Another calibration source, the sudden fracture of a thin-walled glass capillary tube laid on its side and loaded against the surface of the specimen produces a similar point force, this time with a source function very nearly a step in time with rise time of less than 500 ns. The force at which the capillary breaks is recorded using a force sensor and is used for absolute calibration. A third set of nanoseismic sources were generated from frictional sliding. In this case, the location and spatial extent of the source along the cm-scale fault is not precisely known and must be determined. These sources are much more representative of earthquakes and the determination of their focal mechanisms is the subject of ongoing research. Sources of this type have been observed on a great range of time scales with rise times ranging from 500 ns to hundreds of ms. This study tests the generality of the seismic source representation theory. The unconventional scale, geometry, and experimental arrangement facilitates the discussion of issues such as the point source approximation, the origin of uncertainty in moment tensor inversions, the applicability of magnitude calculations for non-double-couple sources, and the relationship between momentum and seismic moment.
Zhou, Zhongliang; Su, Yanfang; Campbell, Benjamin; Zhou, Zhiying; Gao, Jianmin; Yu, Qiang; Chen, Jiuhao; Pan, Yishan
2015-01-01
Objective With a quasi-experimental design, this study aims to assess whether the Zero-markup Policy for Essential Drugs (ZPED) reduces the medical expense for patients at county hospitals, the major healthcare provider in rural China. Methods Data from Ningshan county hospital and Zhenping county hospital, China, include 2014 outpatient records and 9239 inpatient records. Quantitative methods are employed to evaluate ZPED. Both hospital-data difference-in-differences and individual-data regressions are applied to analyze the data from inpatient and outpatient departments. Results In absolute terms, the total expense per visit reduced by 19.02 CNY (3.12 USD) for outpatient services and 399.6 CNY (65.60 USD) for inpatient services. In relative terms, the expense per visit was reduced by 11% for both outpatient and inpatient services. Due to the reduction of inpatient expense, the estimated reduction of outpatient visits is 2% among the general population and 3.39% among users of outpatient services. The drug expense per visit dropped by 27.20 CNY (4.47 USD) for outpatient services and 278.7 CNY (45.75 USD) for inpatient services. The proportion of drug expense out of total expense per visit dropped by 11.73 percentage points in outpatient visits and by 3.92 percentage points in inpatient visits. Conclusion Implementation of ZPED is a benefit for patients in both absolute and relative terms. The absolute monetary reduction of the per-visit inpatient expense is 20 times of that in outpatient care. According to cross-price elasticity, the substitution between inpatient and outpatient due to the change in inpatient price is small. Furthermore, given that the relative reductions are the same for outpatient and inpatient visits, according to relative thinking theory, the incentive to utilize outpatient or inpatient care attributed to ZPED is equivalent, regardless of the 20-times price difference in absolute terms. PMID:25790443
Zhou, Zhongliang; Su, Yanfang; Campbell, Benjamin; Zhou, Zhiying; Gao, Jianmin; Yu, Qiang; Chen, Jiuhao; Pan, Yishan
2015-01-01
With a quasi-experimental design, this study aims to assess whether the Zero-markup Policy for Essential Drugs (ZPED) reduces the medical expense for patients at county hospitals, the major healthcare provider in rural China. Data from Ningshan county hospital and Zhenping county hospital, China, include 2014 outpatient records and 9239 inpatient records. Quantitative methods are employed to evaluate ZPED. Both hospital-data difference-in-differences and individual-data regressions are applied to analyze the data from inpatient and outpatient departments. In absolute terms, the total expense per visit reduced by 19.02 CNY (3.12 USD) for outpatient services and 399.6 CNY (65.60 USD) for inpatient services. In relative terms, the expense per visit was reduced by 11% for both outpatient and inpatient services. Due to the reduction of inpatient expense, the estimated reduction of outpatient visits is 2% among the general population and 3.39% among users of outpatient services. The drug expense per visit dropped by 27.20 CNY (4.47 USD) for outpatient services and 278.7 CNY (45.75 USD) for inpatient services. The proportion of drug expense out of total expense per visit dropped by 11.73 percentage points in outpatient visits and by 3.92 percentage points in inpatient visits. Implementation of ZPED is a benefit for patients in both absolute and relative terms. The absolute monetary reduction of the per-visit inpatient expense is 20 times of that in outpatient care. According to cross-price elasticity, the substitution between inpatient and outpatient due to the change in inpatient price is small. Furthermore, given that the relative reductions are the same for outpatient and inpatient visits, according to relative thinking theory, the incentive to utilize outpatient or inpatient care attributed to ZPED is equivalent, regardless of the 20-times price difference in absolute terms.
Díez, P; Aird, E G A; Sander, T; Gouldstone, C A; Sharpe, P H G; Lee, C D; Lowe, G; Thomas, R A S; Simnor, T; Bownes, P; Bidmead, M; Gandon, L; Eaton, D; Palmer, A L
2017-11-09
A UK multicentre audit to evaluate HDR and PDR brachytherapy has been performed using alanine absolute dosimetry. This is the first national UK audit performing an absolute dose measurement at a clinically relevant distance (20 mm) from the source. It was performed in both INTERLACE (a phase III multicentre trial in cervical cancer) and non-INTERLACE brachytherapy centres treating gynaecological tumours. Forty-seven UK centres (including the National Physical Laboratory) were visited. A simulated line source was generated within each centre's treatment planning system and dwell times calculated to deliver 10 Gy at 20 mm from the midpoint of the central dwell (representative of Point A of the Manchester system). The line source was delivered in a water-equivalent plastic phantom (Barts Solid Water) encased in blocks of PMMA (polymethyl methacrylate) and charge measured with an ion chamber at 3 positions (120° apart, 20 mm from the source). Absorbed dose was then measured with alanine at the same positions and averaged to reduce source positional uncertainties. Charge was also measured at 50 mm from the source (representative of Point B of the Manchester system). Source types included 46 HDR and PDR 192 Ir sources, (7 Flexisource, 24 mHDR-v2, 12 GammaMed HDR Plus, 2 GammaMed PDR Plus, 1 VS2000) and 1 HDR 60 Co source, (Co0.A86). Alanine measurements when compared to the centres' calculated dose showed a mean difference (±SD) of +1.1% (±1.4%) at 20 mm. Differences were also observed between source types and dose calculation algorithm. Ion chamber measurements demonstrated significant discrepancies between the three holes mainly due to positional variation of the source within the catheter (0.4%-4.9% maximum difference between two holes). This comprehensive audit of absolute dose to water from a simulated line source showed all centres could deliver the prescribed dose to within 5% maximum difference between measurement and calculation.
Manual therapy and exercise for rotator cuff disease.
Page, Matthew J; Green, Sally; McBain, Brodwen; Surace, Stephen J; Deitch, Jessica; Lyttle, Nicolette; Mrocki, Marshall A; Buchbinder, Rachelle
2016-06-10
Management of rotator cuff disease often includes manual therapy and exercise, usually delivered together as components of a physical therapy intervention. This review is one of a series of reviews that form an update of the Cochrane review, 'Physiotherapy interventions for shoulder pain'. To synthesise available evidence regarding the benefits and harms of manual therapy and exercise, alone or in combination, for the treatment of people with rotator cuff disease. We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 3), Ovid MEDLINE (January 1966 to March 2015), Ovid EMBASE (January 1980 to March 2015), CINAHL Plus (EBSCO, January 1937 to March 2015), ClinicalTrials.gov and the WHO ICTRP clinical trials registries up to March 2015, unrestricted by language, and reviewed the reference lists of review articles and retrieved trials, to identify potentially relevant trials. We included randomised and quasi-randomised trials, including adults with rotator cuff disease, and comparing any manual therapy or exercise intervention with placebo, no intervention, a different type of manual therapy or exercise or any other intervention (e.g. glucocorticoid injection). Interventions included mobilisation, manipulation and supervised or home exercises. Trials investigating the primary or add-on effect of manual therapy and exercise were the main comparisons of interest. Main outcomes of interest were overall pain, function, pain on motion, patient-reported global assessment of treatment success, quality of life and the number of participants experiencing adverse events. Two review authors independently selected trials for inclusion, extracted the data, performed a risk of bias assessment and assessed the quality of the body of evidence for the main outcomes using the GRADE approach. We included 60 trials (3620 participants), although only 10 addressed the main comparisons of interest. Overall risk of bias was low in three, unclear in 14 and high in 43 trials. We were unable to perform any meta-analyses because of clinical heterogeneity or incomplete outcome reporting. One trial compared manual therapy and exercise with placebo (inactive ultrasound therapy) in 120 participants with chronic rotator cuff disease (high quality evidence). At 22 weeks, the mean change in overall pain with placebo was 17.3 points on a 100-point scale, and 24.8 points with manual therapy and exercise (adjusted mean difference (MD) 6.8 points, 95% confidence interval (CI) -0.70 to 14.30 points; absolute risk difference 7%, 1% fewer to 14% more). Mean change in function with placebo was 15.6 points on a 100-point scale, and 22.4 points with manual therapy and exercise (adjusted MD 7.1 points, 95% CI 0.30 to 13.90 points; absolute risk difference 7%, 1% to 14% more). Fifty-seven per cent (31/54) of participants reported treatment success with manual therapy and exercise compared with 41% (24/58) of participants receiving placebo (risk ratio (RR) 1.39, 95% CI 0.94 to 2.03; absolute risk difference 16% (2% fewer to 34% more). Thirty-one per cent (17/55) of participants reported adverse events with manual therapy and exercise compared with 8% (5/61) of participants receiving placebo (RR 3.77, 95% CI 1.49 to 9.54; absolute risk difference 23% (9% to 37% more). However adverse events were mild (short-term pain following treatment).Five trials (low quality evidence) found no important differences between manual therapy and exercise compared with glucocorticoid injection with respect to overall pain, function, active shoulder abduction and quality of life from four weeks up to 12 months. However, global treatment success was more common up to 11 weeks in people receiving glucocorticoid injection (low quality evidence). One trial (low quality evidence) showed no important differences between manual therapy and exercise and arthroscopic subacromial decompression with respect to overall pain, function, active range of motion and strength at six and 12 months, or global treatment success at four to eight years. One trial (low quality evidence) found that manual therapy and exercise may not be as effective as acupuncture plus dietary counselling and Phlogenzym supplement with respect to overall pain, function, active shoulder abduction and quality life at 12 weeks. We are uncertain whether manual therapy and exercise improves function more than oral non-steroidal anti-inflammatory drugs (NSAID), or whether combining manual therapy and exercise with glucocorticoid injection provides additional benefit in function over glucocorticoid injection alone, because of the very low quality evidence in these two trials.Fifty-two trials investigated effects of manual therapy alone or exercise alone, and the evidence was mostly very low quality. There was little or no difference in patient-important outcomes between manual therapy alone and placebo, no treatment, therapeutic ultrasound and kinesiotaping, although manual therapy alone was less effective than glucocorticoid injection. Exercise alone led to less improvement in overall pain, but not function, when compared with surgical repair for rotator cuff tear. There was little or no difference in patient-important outcomes between exercise alone and placebo, radial extracorporeal shockwave treatment, glucocorticoid injection, arthroscopic subacromial decompression and functional brace. Further, manual therapy or exercise provided few or no additional benefits when combined with other physical therapy interventions, and one type of manual therapy or exercise was rarely more effective than another. Despite identifying 60 eligible trials, only one trial compared a combination of manual therapy and exercise reflective of common current practice to placebo. We judged it to be of high quality and found no clinically important differences between groups in any outcome. Effects of manual therapy and exercise may be similar to those of glucocorticoid injection and arthroscopic subacromial decompression, but this is based on low quality evidence. Adverse events associated with manual therapy and exercise are relatively more frequent than placebo but mild in nature. Novel combinations of manual therapy and exercise should be compared with a realistic placebo in future trials. Further trials of manual therapy alone or exercise alone for rotator cuff disease should be based upon a strong rationale and consideration of whether or not they would alter the conclusions of this review.
OD-ing on Reality: An Interview with Alex Flinn
ERIC Educational Resources Information Center
Lesesne, Teri S.
2005-01-01
Alex Flinn discusses her debut novel, Breathing Underwater, which received much critical acclaim. She absolutely felt like her second book was being compared to Breathing Underwater at first. Breaking Point (which was completed before Breathing Underwater was published) was very different in tone. It was darker. The voice was different. The ending…
Goethe's Phenomenological Optics: The Point Where Language Ends and Experience Begins in Science.
ERIC Educational Resources Information Center
Junker, Kirk
This paper explores whether phenomenology, in general, and the case of Johann Wolfgang von Goethe's phenomenological optics in particular, provides a case and a location for "minimal realism," located between the extreme positions of absolute scientific realists and "radical rhetoricians." The paper begins with a description of…
49 CFR 178.338-9 - Holding time.
Code of Federal Regulations, 2014 CFR
2014-10-01
... cryogenic liquid having a boiling point, at a pressure of one atmosphere, absolute, no lower than the design... that liquid and stabilized to the lowest practical pressure, which must be equal to or less than the... combined liquid and vapor lading at the pressure offered for transportation, and the set pressure of the...
Counter Examples as Starting Points for Reasoning and Sense Making
ERIC Educational Resources Information Center
Yopp, David A.
2013-01-01
This article describes a classroom activity with college sophomores in a methods-of-proof course in which students reasoned about absolute value inequalities. The course was designed to meet the needs of both mathematics majors and secondary school mathematics teaching majors early in their college studies. Asked to "fix" a false…
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2005-05-01
We consider dynamics of financial markets as dynamics of expectations and discuss such a dynamics from the point of view of phenomenological thermodynamics. We describe a financial Carnot cycle and the financial analog of a heat machine. We see, that while in physics a perpetuum mobile is absolutely impossible, in economics such mobile may exist under some conditions.
Estimation of the lower flammability limit of organic compounds as a function of temperature.
Rowley, J R; Rowley, R L; Wilding, W V
2011-02-15
A new method of estimating the lower flammability limit (LFL) of general organic compounds is presented. The LFL is predicted at 298 K for gases and the lower temperature limit for solids and liquids from structural contributions and the ideal gas heat of formation of the fuel. The average absolute deviation from more than 500 experimental data points is 10.7%. In a previous study, the widely used modified Burgess-Wheeler law was shown to underestimate the effect of temperature on the lower flammability limit when determined in a large-diameter vessel. An improved version of the modified Burgess-Wheeler law is presented that represents the temperature dependence of LFL data determined in large-diameter vessels more accurately. When the LFL is estimated at increased temperatures using a combination of this model and the proposed structural-contribution method, an average absolute deviation of 3.3% is returned when compared with 65 data points for 17 organic compounds determined in an ASHRAE-style apparatus. Copyright © 2010 Elsevier B.V. All rights reserved.
Vection: the contributions of absolute and relative visual motion.
Howard, I P; Howard, A
1994-01-01
Inspection of a visual scene rotating about the vertical body axis induces a compelling sense of self rotation, or circular vection. Circular vection is suppressed by stationary objects seen beyond the moving display but not by stationary objects in the foreground. We hypothesised that stationary objects in the foreground facilitate vection because they introduce a relative-motion signal into what would otherwise be an absolute-motion signal. Vection latency and magnitude were measured with a full-field moving display and with stationary objects of various sizes and at various positions in the visual field. The results confirmed the hypothesis. Vection latency was longer when there were no stationary objects in view than when stationary objects were in view. The effect of stationary objects was particularly evident at low stimulus velocities. At low velocities a small stationary point significantly increased vection magnitude in spite of the fact that, at higher stimulus velocities and with other stationary objects in view, fixation on a stationary point, if anything, reduced vection. Changing the position of the stationary objects in the field of view did not affect vection latencies or magnitudes.
Hurst, Robert B; Mayerbacher, Marinus; Gebauer, Andre; Schreiber, K Ulrich; Wells, Jon-Paul R
2017-02-01
Large ring lasers have exceeded the performance of navigational gyroscopes by several orders of magnitude and have become useful tools for geodesy. In order to apply them to tests in fundamental physics, remaining systematic errors have to be significantly reduced. We derive a modified expression for the Sagnac frequency of a square ring laser gyro under Earth rotation. The modifications include corrections for dispersion (of both the gain medium and the mirrors), for the Goos-Hänchen effect in the mirrors, and for refractive index of the gas filling the cavity. The corrections were measured and calculated for the 16 m2 Grossring laser located at the Geodetic Observatory Wettzell. The optical frequency and the free spectral range of this laser were measured, allowing unique determination of the longitudinal mode number, and measurement of the dispersion. Ultimately we find that the absolute scale factor of the gyroscope can be estimated to an accuracy of approximately 1 part in 108.
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Hay, C.; Mitrovica, J. X.; Little, C. M.; Ponte, R. M.; Tingley, M.
2017-12-01
Understanding observed spatial variations in centennial relative sea level trends on the United States east coast has important scientific and societal applications. Past studies based on models and proxies variously suggest roles for crustal displacement, ocean dynamics, and melting of the Greenland ice sheet. Here we perform joint Bayesian inference on regional relative sea level, vertical land motion, and absolute sea level fields based on tide gauge records and GPS data. Posterior solutions show that regional vertical land motion explains most (80% median estimate) of the spatial variance in the large-scale relative sea level trend field on the east coast over 1900-2016. The posterior estimate for coastal absolute sea level rise is remarkably spatially uniform compared to previous studies, with a spatial average of 1.4-2.3 mm/yr (95% credible interval). Results corroborate glacial isostatic adjustment models and reveal that meaningful long-period, large-scale vertical velocity signals can be extracted from short GPS records.
Schroder, Kerstin E. E.; Carey, Michael P.; Vanable, Peter A.
2008-01-01
Investigation of sexual behavior involves many challenges, including how to assess sexual behavior and how to analyze the resulting data. Sexual behavior can be assessed using absolute frequency measures (also known as “counts”) or with relative frequency measures (e.g., rating scales ranging from “never” to “always”). We discuss these two assessment approaches in the context of research on HIV risk behavior. We conclude that these two approaches yield non-redundant information and, more importantly, that only data yielding information about the absolute frequency of risk behavior have the potential to serve as valid indicators of HIV contraction risk. However, analyses of count data may be challenging due to non-normal distributions with many outliers. Therefore, we identify new and powerful data analytical solutions that have been developed recently to analyze count data, and discuss limitations of a commonly applied method (viz., ANCOVA using baseline scores as covariates). PMID:14534027
Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang
2016-10-14
First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5-60 m³/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2-60 m³/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.
Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang
2016-01-01
First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5–60 m3/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2–60 m3/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412
NASA Astrophysics Data System (ADS)
Reaney, S. M.; Heathwaite, L.; Lane, S. N.; Buckley, C.
2007-12-01
Pollution of rivers from agricultural phosphorus is recognised as a significant global problem and is a major management challenge as it involves processes that are small in magnitude, distributed over large areas, operating at fine spatial scales and associated with certain land use types when they are well connected to the receiving waters. Whilst some of these processes have been addressed in terms of water quality forecasting models and field measurements, we lack effective tools to prioritise where action should be taken to remediate the diffuse pollution problem. From a management perspective, the required information is on 'what to do where' rather than absolute values. This change in focus opens up the problem to be considered in a probabilistic / relative framework rather than concentrating on absolute values. The SCIMAP risk management framework is based on the critical source area concept whereby a risk and a connection are required to generate a problem. Treatments of both surface and subsurface hydrological connectivity have been developed. The approach is based on the philosophy that for a point to be considered connected there needs to be a continuous flow path to the receiving water. This information is calculated by simulating the possible flow paths from the source cell to the receiving water and recording the required catchment wetness to allow flow along that route. This algorithm gives information on the ease at which each point in the landscape can export risk along surface and subsurface pathways to the receiving waters. To understand the annual dynamics of the locational diffuse P risk, a temporal risk framework has been developed. This risk framework accounts for land management activies within the agricultural calendar. These events include the application of fertiliser, the P additions from livestock and the offtake of P in crops. Changes to these risks can be made to investigate management options. The SCIMAP risk mapping framework has been applied to 12 catchments in England as part of the DEFRA / Environment Agency's Catchment Sensitive Farming programme. Result from these catchments will be presented.
Primary care and health inequality: Difference-in-difference study comparing England and Ontario
Cookson, Richard; Mondor, Luke; Kringos, Dionne S.; Klazinga, Niek S.
2017-01-01
Background It is not known whether equity-oriented primary care investment that seeks to scale up the delivery of effective care in disadvantaged communities can reduce health inequality within high-income settings that have pre-existing universal primary care systems. We provide some non-randomised controlled evidence by comparing health inequality trends between two similar jurisdictions–one of which implemented equity-oriented primary care investment in the mid-to-late 2000s as part of a cross-government strategy for reducing health inequality (England), and one which invested in primary care without any explicit equity objective (Ontario, Canada). Methods We analysed whole-population data on 32,482 neighbourhoods (with mean population size of approximately 1,500 people) in England, and 18,961 neighbourhoods (with mean population size of approximately 700 people) in Ontario. We examined trends in mortality amenable to healthcare by decile groups of neighbourhood deprivation within each jurisdiction. We used linear models to estimate absolute and relative gaps in amenable mortality between most and least deprived groups, considering the gradient between these extremes, and evaluated difference-in-difference comparisons between the two jurisdictions. Results Inequality trends were comparable in both jurisdictions from 2004–6 but diverged from 2007–11. Compared with Ontario, the absolute gap in amenable mortality in England fell between 2004–6 and 2007–11 by 19.8 per 100,000 population (95% CI: 4.8 to 34.9); and the relative gap in amenable mortality fell by 10 percentage points (95% CI: 1 to 19). The biggest divergence occurred in the most deprived decile group of neighbourhoods. Discussion In comparison to Ontario, England succeeded in reducing absolute socioeconomic gaps in mortality amenable to healthcare from 2007 to 2011, and preventing them from growing in relative terms. Equity-oriented primary care reform in England in the mid-to-late 2000s may have helped to reduce socioeconomic inequality in health, though other explanations for this divergence are possible and further research is needed on the specific causal mechanisms. PMID:29182652
NASA Technical Reports Server (NTRS)
Hendershott, M. C.; Munk, W. H.; Zetler, B. D.
1974-01-01
Two procedures for the evaluation of global tides from SEASAT-A altimetry data are elaborated: an empirical method leading to the response functions for a grid of about 500 points from which the tide can be predicted for any point in the oceans, and a dynamic method which consists of iteratively modifying the parameters in a numerical solution to Laplace tide equations. It is assumed that the shape of the received altimeter signal can be interpreted for sea state and that orbit calculations are available so that absolute sea levels can be obtained.
Smith, Zachary J; Strombom, Sven; Wachsmann-Hogiu, Sebastian
2011-08-29
A multivariate optical computer has been constructed consisting of a spectrograph, digital micromirror device, and photomultiplier tube that is capable of determining absolute concentrations of individual components of a multivariate spectral model. We present experimental results on ternary mixtures, showing accurate quantification of chemical concentrations based on integrated intensities of fluorescence and Raman spectra measured with a single point detector. We additionally show in simulation that point measurements based on principal component spectra retain the ability to classify cancerous from noncancerous T cells.
Radical Prostatectomy versus Observation for Localized Prostate Cancer
Wilt, Timothy J.; Brawer, Michael K.; Jones, Karen M.; Barry, Michael J.; Aronson, William J.; Fox, Steven; Gingrich, Jeffrey R.; Wei, John T.; Gilhooly, Patricia; Grob, B. Mayer; Nsouli, Imad; Iyer, Padmini; Cartagena, Ruben; Snider, Glenn; Roehrborn, Claus; Sharifi, Roohollah; Blank, William; Pandya, Parikshit; Andriole, Gerald L.; Culkin, Daniel; Wheeler, Thomas
2012-01-01
BACKGROUND The effectiveness of surgery versus observation for men with localized prostate cancer detected by means of prostate-specific antigen (PSA) testing is not known. METHODS From November 1994 through January 2002, we randomly assigned 731 men with localized prostate cancer (mean age, 67 years; median PSA value, 7.8 ng per milliliter) to radical prostatectomy or observation and followed them through January 2010. The primary outcome was all-cause mortality; the secondary outcome was prostate-cancer mortality. RESULTS During the median follow-up of 10.0 years, 171 of 364 men (47.0%) assigned to radical prostatectomy died, as compared with 183 of 367 (49.9%) assigned to observation (hazard ratio, 0.88; 95% confidence interval [CI], 0.71 to 1.08; P = 0.22; absolute risk reduction, 2.9 percentage points). Among men assigned to radical prostatectomy, 21 (5.8%) died from prostate cancer or treatment, as compared with 31 men (8.4%) assigned to observation (hazard ratio, 0.63; 95% CI, 0.36 to 1.09; P = 0.09; absolute risk reduction, 2.6 percentage points). The effect of treatment on all-cause and prostate-cancer mortality did not differ according to age, race, coexisting conditions, self-reported performance status, or histologic features of the tumor. Radical prostatectomy was associated with reduced all-cause mortality among men with a PSA value greater than 10 ng per milliliter (P = 0.04 for interaction) and possibly among those with intermediate-risk or high-risk tumors (P = 0.07 for interaction). Adverse events within 30 days after surgery occurred in 21.4% of men, including one death. CONCLUSIONS Among men with localized prostate cancer detected during the early era of PSA testing, radical prostatectomy did not significantly reduce all-cause or prostate-cancer mortality, as compared with observation, through at least 12 years of follow-up. Absolute differences were less than 3 percentage points. (Funded by the Department of Veterans Affairs Cooperative Studies Program and others; PIVOT ClinicalTrials.gov number, NCT00007644.) PMID:22808955
Gravity and Displacement Variations in the Areas of Strong Earthquakes in the East of Russia
NASA Astrophysics Data System (ADS)
Timofeev, V. Yu.; Kalish, E. N.; Stus', Yu. F.; Ardyukov, D. G.; Valitov, M. G.; Timofeev, A. V.; Nosov, D. A.; Sizikov, I. S.; Boiko, E. V.; Gornov, P. Yu.; Kulinich, R. G.; Kolpashchikova, T. N.; Proshkina, Z. N.; Nazarov, E. O.; Kolmogorov, V. G.
2018-05-01
The modern gravimetry methods are capable of measuring gravity with an accuracy of up to 10-10 of the normal value, which is commensurate with the accuracy of the up-to-date methods of displacement measurements by satellite geodesy. Significant changes, e.g., in the coseismic displacements of the Earth's surface are recorded in the zones of large earthquakes. These changes should manifest themselves in the variations of gravity. Absolute measurements have been conducted by various modifications of absolute ballistic gravimeters GABL since the mid-1970s at the Klyuchi point (Novosibirsk) in the south of the West Siberian plate. Monitoring observations have been taking place in the seismically active regions since the 1990s. In this paper we consider the results of the long-term measurements of the variations in gravity and recent crustal displacements for different types of earthquakes (the zones of shear, extension, and compression). In the seismically active areas in the east of Russia, the longest annual series of absolute measurements starting from 1992 was recorded in the southeastern segment of Baikal region. In this area, the Kultuk earthquake with magnitude 6.5 occurred on August 27, 2008, at a distance of 25 km from the observation point of the Talaya seismic station. The measurements in Gornyi (Mountainous) Altai have been conducted since 2000. A strikeslip earthquake with magnitude 7.5 took place in the southern segment of the region on September 27, 2003. The effects of the catastrophic M = 9.0 Tohoku, Japan, earthquake of March 11, 2011 were identified in Primor'e in the far zone of the event. The empirical data are consistent with the results of modeling based on the seismological data. The coseismic variations in gravity are caused by the combined effect of the changes in the elevation of the observation point and crustal deformation.
Surgery versus physiotherapy for stress urinary incontinence.
Labrie, Julien; Berghmans, Bary L C M; Fischer, Kathelijn; Milani, Alfredo L; van der Wijk, Ileana; Smalbraak, Dina J C; Vollebregt, Astrid; Schellart, René P; Graziosi, Giuseppe C M; van der Ploeg, J Marinus; Brouns, Joseph F G M; Tiersma, E Stella M; Groenendijk, Annette G; Scholten, Piet; Mol, Ben Willem; Blokhuis, Elisabeth E; Adriaanse, Albert H; Schram, Aaltje; Roovers, Jan-Paul W R; Lagro-Janssen, Antoine L M; van der Vaart, Carl H
2013-09-19
Physiotherapy involving pelvic-floor muscle training is advocated as first-line treatment for stress urinary incontinence; midurethral-sling surgery is generally recommended when physiotherapy is unsuccessful. Data are lacking from randomized trials comparing these two options as initial therapy. We performed a multicenter, randomized trial to compare physiotherapy and midurethral-sling surgery in women with stress urinary incontinence. Crossover between groups was allowed. The primary outcome was subjective improvement, measured by means of the Patient Global Impression of Improvement at 12 months. We randomly assigned 230 women to the surgery group and 230 women to the physiotherapy group. A total of 49.0% of women in the physiotherapy group and 11.2% of women in the surgery group crossed over to the alternative treatment. In an intention-to-treat analysis, subjective improvement was reported by 90.8% of women in the surgery group and 64.4% of women in the physiotherapy group (absolute difference, 26.4 percentage points; 95% confidence interval [CI], 18.1 to 34.5). The rates of subjective cure were 85.2% in the surgery group and 53.4% in the physiotherapy group (absolute difference, 31.8 percentage points; 95% CI, 22.6 to 40.3); rates of objective cure were 76.5% and 58.8%, respectively (absolute difference, 17.8 percentage points; 95% CI, 7.9 to 27.3). A post hoc per-protocol analysis showed that women who crossed over to the surgery group had outcomes similar to those of women initially assigned to surgery and that both these groups had outcomes superior to those of women who did not cross over to surgery. For women with stress urinary incontinence, initial midurethral-sling surgery, as compared with initial physiotherapy, results in higher rates of subjective improvement and subjective and objective cure at 1 year. (Funded by ZonMw, the Netherlands Organization for Health Research and Development; Dutch Trial Register number, NTR1248.).
Point Positioning Service for Natural Hazard Monitoring
NASA Astrophysics Data System (ADS)
Bar-Sever, Y. E.
2014-12-01
In an effort to improve natural hazard monitoring, JPL has invested in updating and enlarging its global real-time GNSS tracking network, and has launched a unique service - real-time precise positioning for natural hazard monitoring, entitled GREAT Alert (GNSS Real-Time Earthquake and Tsunami Alert). GREAT Alert leverages the full technological and operational capability of the JPL's Global Differential GPS System [www.gdgps.net] to offer owners of real-time dual-frequency GNSS receivers: Sub-5 cm (3D RMS) real-time, absolute positioning in ITRF08, regardless of location Under 5 seconds turnaround time Full covariance information Estimates of ancillary parameters (such as troposphere) optionally provided This service enables GNSS networks operators to instantly have access to the most accurate and reliable real-time positioning solutions for their sites, and also to the hundreds of participating sites globally, assuring inter-consistency and uniformity across all solutions. Local authorities with limited technical and financial resources can now access to the best technology, and share environmental data to the benefit of the entire pacific region. We will describe the specialized precise point positioning techniques employed by the GREAT Alert service optimized for natural hazard monitoring, and in particular Earthquake monitoring. We address three fundamental aspects of these applications: 1) small and infrequent motion, 2) the availability of data at a central location, and 3) the need for refined solutions at several time scales
NASA Astrophysics Data System (ADS)
Shean, David E.; Alexandrov, Oleg; Moratto, Zachary M.; Smith, Benjamin E.; Joughin, Ian R.; Porter, Claire; Morin, Paul
2016-06-01
We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ˜0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ˜2 m with direct geolocation accuracy of <5.0 m CE90/LE90. An automated iterative closest-point (ICP) co-registration tool reduces absolute vertical and horizontal error to <0.5 m where appropriate ground-control data are available, with observed standard deviation of ˜0.1-0.5 m for overlapping, co-registered DEMs (n = 14, 17). While ASP can be used to process individual stereo pairs on a local workstation, the methods presented here were developed for large-scale batch processing in a high-performance computing environment. We are leveraging these resources to produce dense time series and regional mosaics for the Earth's polar regions.
Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A
2007-12-01
The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching EDSS of a certain level (absolute progression) or increasing of one point of EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
Accuracy Study of a 2-Component Point Doppler Velocimeter (PDV)
NASA Technical Reports Server (NTRS)
Kuhlman, John; Naylor, Steve; James, Kelly; Ramanath, Senthil
1997-01-01
A two-component Point Doppler Velocimeter (PDV) which has recently been developed is described, and a series of velocity measurements which have been obtained to quantify the accuracy of the PDV system are summarized. This PDV system uses molecular iodine vapor cells as frequency discriminating filters to determine the Doppler shift of laser light which is scattered off of seed particles in a flow. The majority of results which have been obtained to date are for the mean velocity of a rotating wheel, although preliminary data are described for fully-developed turbulent pipe flow. Accuracy of the present wheel velocity data is approximately +/- 1 % of full scale, while linearity of a single channel is on the order of +/- 0.5 % (i.e., +/- 0.6 m/sec and +/- 0.3 m/sec, out of 57 m/sec, respectively). The observed linearity of these results is on the order of the accuracy to which the speed of the rotating wheel has been set for individual data readings. The absolute accuracy of the rotating wheel data is shown to be consistent with the level of repeatability of the cell calibrations. The preliminary turbulent pipe flow data show consistent turbulence intensity values, and mean axial velocity profiles generally agree with pitot probe data. However, there is at present an offset error in the radial velocity which is on the order of 5-10 % of the mean axial velocity.
NASA Technical Reports Server (NTRS)
Cook, A. F.; Forti, G.; Mccrosky, R. E.; Posen, A.; Southworth, R. B.; Williams, J. T.
1973-01-01
Observations from multiple sites of a radar network and by television of 29 individual meteors from February 1969 through June 1970 are reported. Only 12 of the meteors did not appear to fragment over all the observed portion of their trajectories. From these 12, the relation for the radar magnitude to the panchromatic absolute magnitude was found in terms of velocity of the meteor. A very tentative fit to the data on the duration of long enduring echoes versus visual absolute magnitude is made. The exponential decay characteristics of the later parts of several of the light curves are pointed out as possible evidence of mutual coalescence of droplets into which the meteoroid has completely broken.
Superslow relaxation in identical phase oscillators with random and frustrated interactions
NASA Astrophysics Data System (ADS)
Daido, H.
2018-04-01
This paper is concerned with the relaxation dynamics of a large population of identical phase oscillators, each of which interacts with all the others through random couplings whose parameters obey the same Gaussian distribution with the average equal to zero and are mutually independent. The results obtained by numerical simulation suggest that for the infinite-size system, the absolute value of Kuramoto's order parameter exhibits superslow relaxation, i.e., 1/ln t as time t increases. Moreover, the statistics on both the transient time T for the system to reach a fixed point and the absolute value of Kuramoto's order parameter at t = T are also presented together with their distribution densities over many realizations of the coupling parameters.
Absolute judgment for one- and two-dimensional stimuli embedded in Gaussian noise
NASA Technical Reports Server (NTRS)
Kvalseth, T. O.
1977-01-01
This study examines the effect on human performance of adding Gaussian noise or disturbance to the stimuli in absolute judgment tasks involving both one- and two-dimensional stimuli. For each selected stimulus value (both an X-value and a Y-value were generated in the two-dimensional case), 10 values (or 10 pairs of values in the two-dimensional case) were generated from a zero-mean Gaussian variate, added to the selected stimulus value and then served as the coordinate values for the 10 points that were displayed sequentially on a CRT. The results show that human performance, in terms of the information transmitted and rms error as functions of stimulus uncertainty, was significantly reduced as the noise variance increased.
Automatic solar image motion measurements. [electronic disk flux monitoring
NASA Technical Reports Server (NTRS)
Colgate, S. A.; Moore, E. P.
1975-01-01
The solar seeing image motion has been monitored electronically and absolutely with a 25 cm telescope at three sites along the ridge at the southern end of the Magdalena Mountains west of Socorro, New Mexico. The uncorrelated component of the variations of the optical flux from two points at opposite limbs of the solar disk was continually monitored in 3 frequencies centered at 0.3, 3 and 30 Hz. The frequency band of maximum signal centered at 3 Hz showed the average absolute value of image motion to be somewhat less than 2sec. The observer estimates of combined blurring and image motion were well correlated with electronically measured image motion, but the observer estimates gave a factor 2 larger value.
Assessing Multi-scale Reptile and Amphibian Biodiversity: Mojave Ecoregion Case Study
The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions to maintain the sustainable nature of our environment now and into the future. Because of the variability among living orga...
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; McAndrew, Brendan
2013-01-01
A goal of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission is to observe highaccuracy, long-term climate change trends over decadal time scales. The key to such a goal is to improving the accuracy of SI traceable absolute calibration across infrared and reflected solar wavelengths allowing climate change to be separated from the limit of natural variability. The advances required to reach on-orbit absolute accuracy to allow climate change observations to survive data gaps exist at NIST in the laboratory, but still need demonstration that the advances can move successfully from to NASA and/or instrument vendor capabilities for spaceborne instruments. The current work describes the radiometric calibration error budget for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS) which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The resulting SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climatequality data collections is given. Key components in the error budget are geometry differences between the solar and earth views, knowledge of attenuator behavior when viewing the sun, and sensor behavior such as detector linearity and noise behavior. Methods for demonstrating this error budget are also presented.
Veridical mapping in savant abilities, absolute pitch, and synesthesia: an autism case study
Bouvet, Lucie; Donnadieu, Sophie; Valdois, Sylviane; Caron, Chantal; Dawson, Michelle; Mottron, Laurent
2014-01-01
An enhanced role and autonomy of perception are prominent in autism. Furthermore, savant abilities, absolute pitch, and synesthesia are all more commonly found in autistic individuals than in the typical population. The mechanism of veridical mapping has been proposed to account for how enhanced perception in autism leads to the high prevalence of these three phenomena and their structural similarity. Veridical mapping entails functional rededication of perceptual brain regions to higher order cognitive operations, allowing the enhanced detection and memorization of isomorphisms between perceptual and non-perceptual structures across multiple scales. In this paper, we present FC, an autistic individual who possesses several savant abilities in addition to both absolute pitch and synesthesia-like associations. The co-occurrence in FC of abilities, some of them rare, which share the same structure, as well as FC’s own accounts of their development, together suggest the importance of veridical mapping in the atypical range and nature of abilities displayed by autistic people. PMID:24600416
NASA Astrophysics Data System (ADS)
King, Matt A.; Keshin, Maxim; Whitehouse, Pippa L.; Thomas, Ian D.; Milne, Glenn; Riva, Riccardo E. M.
2012-07-01
The only vertical land movement signal routinely corrected for when estimating absolute sea-level change from tide gauge data is that due to glacial isostatic adjustment (GIA). We compare modeled GIA uplift (ICE-5G + VM2) with vertical land movement at ˜300 GPS stations located near to a global set of tide gauges, and find regionally coherent differences of commonly ±0.5-2 mm/yr. Reference frame differences and signal due to present-day mass trends cannot reconcile these differences. We examine sensitivity to the GIA Earth model by fitting to a subset of the GPS velocities and find substantial regional sensitivity, but no single Earth model is able to reduce the disagreement in all regions. We suggest errors in ice history and neglected lateral Earth structure dominate model-data differences, and urge caution in the use of modeled GIA uplift alone when interpreting regional- and global- scale absolute (geocentric) sea level from tide gauge data.
Hagiwara, Akifumi; Warntjes, Marcel; Hori, Masaaki; Andica, Christina; Nakazawa, Misaki; Kumamaru, Kanako Kunishima; Abe, Osamu; Aoki, Shigeki
2017-01-01
Abstract Conventional magnetic resonance images are usually evaluated using the image signal contrast between tissues and not based on their absolute signal intensities. Quantification of tissue parameters, such as relaxation rates and proton density, would provide an absolute scale; however, these methods have mainly been performed in a research setting. The development of rapid quantification, with scan times in the order of 6 minutes for full head coverage, has provided the prerequisites for clinical use. The aim of this review article was to introduce a specific quantification method and synthesis of contrast-weighted images based on the acquired absolute values, and to present automatic segmentation of brain tissues and measurement of myelin based on the quantitative values, along with application of these techniques to various brain diseases. The entire technique is referred to as “SyMRI” in this review. SyMRI has shown promising results in previous studies when used for multiple sclerosis, brain metastases, Sturge-Weber syndrome, idiopathic normal pressure hydrocephalus, meningitis, and postmortem imaging. PMID:28257339
Takemori, Nobuaki; Takemori, Ayako; Tanaka, Yuki; Endo, Yaeta; Hurst, Jane L.; Gómez-Baena, Guadalupe; Harman, Victoria M.; Beynon, Robert J.
2017-01-01
A major challenge in proteomics is the absolute accurate quantification of large numbers of proteins. QconCATs, artificial proteins that are concatenations of multiple standard peptides, are well established as an efficient means to generate standards for proteome quantification. Previously, QconCATs have been expressed in bacteria, but we now describe QconCAT expression in a robust, cell-free system. The new expression approach rescues QconCATs that previously were unable to be expressed in bacteria and can reduce the incidence of proteolytic damage to QconCATs. Moreover, it is possible to cosynthesize QconCATs in a highly-multiplexed translation reaction, coexpressing tens or hundreds of QconCATs simultaneously. By obviating bacterial culture and through the gain of high level multiplexing, it is now possible to generate tens of thousands of standard peptides in a matter of weeks, rendering absolute quantification of a complex proteome highly achievable in a reproducible, broadly deployable system. PMID:29055021
NASA Technical Reports Server (NTRS)
Thome, Kurtis; Barnes, Robert; Baize, Rosemary; O'Connell, Joseph; Hair, Jason
2010-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) plans to observe climate change trends over decadal time scales to determine the accuracy of climate projections. The project relies on spaceborne earth observations of SI-traceable variables sensitive to key decadal change parameters. The mission includes a reflected solar instrument retrieving at-sensor reflectance over the 320 to 2300 nm spectral range with 500-m spatial resolution and 100-km swath. Reflectance is obtained from the ratio of measurements of the earth s surface to those while viewing the sun relying on a calibration approach that retrieves reflectance with uncertainties less than 0.3%. The calibration is predicated on heritage hardware, reduction of sensor complexity, adherence to detector-based calibration standards, and an ability to simulate in the laboratory on-orbit sources in both size and brightness to provide the basis of a transfer to orbit of the laboratory calibration including a link to absolute solar irradiance measurements.
Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes
Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.
2004-01-01
We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are ??? 50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.
Lindhiem, Oliver; Shaffer, Anne; Kolko, David J
2014-01-01
In the parent intervention outcome literatures, discipline practices are generally quantified as absolute frequencies or, less commonly, as relative frequencies. These differences in methodology warrant direct comparison as they have critical implications for study results and conclusions among treatments targeted at reducing parental aggression and harsh discipline. In this study, we directly compared the absolute frequency method and the relative frequency method for quantifying physically aggressive, psychologically aggressive, and nonaggressive discipline practices. Longitudinal data over a 3-year period came from an existing data set of a clinical trial examining the effectiveness of a psychosocial treatment in reducing parental physical and psychological aggression and improving child behavior (N = 139). Discipline practices (aggressive and nonaggressive) were assessed using the Conflict Tactics Scale. The two methods yielded different patterns of results, particularly for nonaggressive discipline strategies. We suggest that each method makes its own unique contribution to a more complete understanding of the association between parental aggression and intervention effects.
Interpretation of the COBE FIRAS CMBR spectrum
NASA Technical Reports Server (NTRS)
Wright, E. L.; Mather, J. C.; Fixsen, D. J.; Kogut, A.; Shafer, R. A.; Bennett, C. L.; Boggess, N. W.; Cheng, E. S.; Silverberg, R. F.; Smoot, G. F.
1994-01-01
The cosmic microwave background radiation (CMBR) spectrum measured by the Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on NASA's Cosmic Background Explorer (COBE) is indistinguishable from a blackbody, implying stringent limits on energy release in the early universe later than the time t = 1 yr after the big bang. We compare the FIRAS data to previous precise measurements of the cosmic microwave background spectrum and find a reasonable agreement. We discuss the implications of the absolute value of y is less than 2.5 x 10(exp -5) and the absolute value of mu is less than 3.3 x 10(exp -4) 95% confidence limits found by Mather et al. (1994) on many processes occurring after t = 1 yr, such as explosive structure formation, reionization, and dissipation of small-scale density perturbations. We place limits on models with dust plus Population III stars, or evolving populations of IR galaxies, by directly comparing the Mather et al. spectrum to the model predictions.
Donald, William A.; Leib, Ryan D.; O'Brien, Jeremy T.; Bush, Matthew F.; Williams, Evan R.
2008-01-01
In solution, half-cell potentials are measured relative to those of other half cells, thereby establishing a ladder of thermochemical values that are referenced to the standard hydrogen electrode (SHE), which is arbitrarily assigned a value of exactly 0 V. Although there has been considerable interest in, and efforts toward, establishing an absolute electrochemical half-cell potential in solution, there is no general consensus regarding the best approach to obtain this value. Here, ion-electron recombination energies resulting from electron capture by gas-phase nanodrops containing individual [M(NH3)6]3+, M = Ru, Co, Os, Cr, and Ir, and Cu2+ ions are obtained from the number of water molecules that are lost from the reduced precursors. These experimental data combined with nanodrop solvation energies estimated from Born theory and solution-phase entropies estimated from limited experimental data provide absolute reduction energies for these redox couples in bulk aqueous solution. A key advantage of this approach is that solvent effects well past two solvent shells, that are difficult to model accurately, are included in these experimental measurements. By evaluating these data relative to known solution-phase reduction potentials, an absolute value for the SHE of 4.2 ± 0.4 V versus a free electron is obtained. Although not achieved here, the uncertainty of this method could potentially be reduced to below 0.1 V, making this an attractive method for establishing an absolute electrochemical scale that bridges solution and gas-phase redox chemistry. PMID:18288835
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
Monte-Carlo method (method of statistical trials) as an application for meteor observations processing was developed in author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of using the method consists in that if we generate random values of input data - equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions we get a possibility to plot the probability density distributions for all its kinematical parameters, and to obtain their mean values and dispersions. At that the theoretical possibility appears to precise the most important parameter - geocentric velocity of a meteor - which has the highest influence onto precision of meteor heliocentric orbit elements calculation. In classical approach the velocity vector was calculated in two stages: first we calculate the vector direction as a vector multiplication of vectors of poles of meteor trajectory big circles, calculated from two observational points. Then we calculated the absolute value of velocity independently from each observational point selecting any of them from some reasons as a final parameter. In the given method we propose to obtain a statistical distribution of velocity absolute value as an intersection of two distributions corresponding to velocity values obtained from different points. We suppose that such an approach has to substantially increase the precision of meteor velocity calculation and remove any subjective inaccuracies.
An investigation of rotor harmonic noise by the use of small scale wind tunnel models
NASA Technical Reports Server (NTRS)
Sternfeld, H., Jr.; Schaffer, E. G.
1982-01-01
Noise measurements of small scale helicopter rotor models were compared with noise measurements of full scale helicopters to determine what information about the full scale helicopters could be derived from noise measurements of small scale helicopter models. Comparisons were made of the discrete frequency (rotational) noise for 4 pairs of tests. Areas covered were tip speed effects, isolated rotor, tandem rotor, and main rotor/tail rotor interaction. Results show good comparison of noise trends with configuration and test condition changes, and good comparison of absolute noise measurements with the corrections used except for the isolated rotor case. Noise measurements of the isolated rotor show a great deal of scatter reflecting the fact that the rotor in hover is basically unstable.
Characterization of Ice Roughness Variations in Scaled Glaze Icing Conditions
NASA Technical Reports Server (NTRS)
McClain, Stephen T.; Vargas, Mario; Tsao, Jen-Ching
2016-01-01
Because of the significant influence of surface tension in governing the stability and breakdown of the liquid film in flooded stagnation regions of airfoils exposed to glaze icing conditions, the Weber number is expected to be a significant parameter governing the formation and evolution of ice roughness. To investigate the influence of the Weber number on roughness formation, 53.3-cm (21-in.) and 182.9-cm (72-in.) NACA 0012 airfoils were exposed to flow conditions with essentially the same Weber number and varying stagnation collection efficiency to illuminate similarities of the ice roughness created on the different airfoils. The airfoils were exposed to icing conditions in the Icing Research Tunnel (IRT) at the NASA Glenn Research Center. Following exposure to the icing event, the airfoils were then scanned using a ROMER Absolute Arm scanning system. The resulting point clouds were then analyzed using the self-organizing map approach of McClain and Kreeger (2013) to determine the spatial roughness variations along the surfaces of the iced airfoils. The roughness characteristics on each airfoil were then compared using the relative geometries of the airfoil. The results indicate that features of the ice shape and roughness such as glaze-ice plateau limits and maximum airfoil roughness were captured well by Weber number and collection efficiency scaling of glaze icing conditions. However, secondary ice roughness features relating the instability and waviness of the liquid film on the glaze-ice plateau surface are scaled based on physics that were not captured by the local collection efficiency variations.
Ice Roughness and Thickness Evolution on a Swept NACA 0012 Airfoil
NASA Technical Reports Server (NTRS)
McClain, Stephen T.; Vargas, Mario; Tsao, Jen-Ching
2017-01-01
Several recent studies have been performed in the Icing Research Tunnel (IRT) at NASA Glenn Research Center focusing on the evolution, spatial variations, and proper scaling of ice roughness on airfoils without sweep exposed to icing conditions employed in classical roughness studies. For this study, experiments were performed in the IRT to investigate the ice roughness and thickness evolution on a 91.44-cm (36-in.) chord NACA 0012 airfoil, swept at 30-deg with 0deg angle of attack, and exposed to both Appendix C and Appendix O (SLD) icing conditions. The ice accretion event times used in the study were less than the time required to form substantially three-dimensional structures, such as scallops, on the airfoil surface. Following each ice accretion event, the iced airfoils were scanned using a ROMER Absolute Arm laser-scanning system. The resulting point clouds were then analyzed using the self-organizing map approach of McClain and Kreeger to determine the spatial roughness variations along the surfaces of the iced airfoils. The resulting measurements demonstrate linearly increasing roughness and thickness parameters with ice accretion time. Further, when compared to dimensionless or scaled results from unswept airfoil investigations, the results of this investigation indicate that the mechanisms for early stage roughness and thickness formation on swept wings are similar to those for unswept wings.
NASA Astrophysics Data System (ADS)
Mansuy, N. R.; Paré, D.; Thiffault, E.
2015-12-01
Large-scale mapping of soil properties is increasingly important for environmental resource management. Whileforested areas play critical environmental roles at local and global scales, forest soil maps are typically at lowresolution.The objective of this study was to generate continuous national maps of selected soil variables (C, N andsoil texture) for the Canadian managed forest landbase at 250 m resolution. We produced these maps using thekNN method with a training dataset of 538 ground-plots fromthe National Forest Inventory (NFI) across Canada,and 18 environmental predictor variables. The best predictor variables were selected (7 topographic and 5 climaticvariables) using the Least Absolute Shrinkage and Selection Operator method. On average, for all soil variables,topographic predictors explained 37% of the total variance versus 64% for the climatic predictors. Therelative root mean square error (RMSE%) calculated with the leave-one-out cross-validation method gave valuesranging between 22% and 99%, depending on the soil variables tested. RMSE values b 40% can be considered agood imputation in light of the low density of points used in this study. The study demonstrates strong capabilitiesfor mapping forest soil properties at 250m resolution, compared with the current Soil Landscape of CanadaSystem, which is largely oriented towards the agricultural landbase. The methodology used here can potentiallycontribute to the national and international need for spatially explicit soil information in resource managementscience.
Instrument Pointing Capabilities: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam;
2011-01-01
This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further sub-divided into DC, that is, steady state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time window averages. With this taxonomy and for sixteen past, present, and future missions, pointing accuracies and stabilities, both required and achieved, are presented. In addition, we describe the attitude control technologies used to and, for future missions, planned to achieve these pointing performances.
A MOLA-controlled RAND-USGS Control Network for Mars
NASA Technical Reports Server (NTRS)
Archinal, B. A.; Colvin, T. R.; Davies, M. E.; Kirk, R. L.; Duxbury, T. C.; Lee, E. M.; Cook, D.; Gitlin, A. R.
2002-01-01
We are undertaking, in support of the Mars Digital Image Mosaic (MDIM) 2.1, many improvements in the RAND-USGS photogrammetric control network for Mars, primarily involving the use of Mars Orbiter Laser Altimeter (MOLA)-derived radii and DIMs to improve control point absolute radii and horizontal positions. Additional information is contained in the original extended abstract.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-18
... M. White Physical Education Program (PEP) provides grants to local educational agencies (LEAs) and... this competition, this priority is an absolute priority. Under 34 CFR 75.105(c)(3), we consider only...(c)(2)(i), we will award an additional 3 points to an application that meets this priority. This...
Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction
NASA Astrophysics Data System (ADS)
Zang, Y.; Yang, B.
2018-04-01
3D laser technology is widely used to collocate the surface information of object. For various applications, we need to extract a good perceptual quality point cloud from the scanned points. To solve the problem, most of existing methods extract important points based on a fixed scale. However, geometric features of 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis function. For each scale, important points are extracted from the point cloud based on their importance. We apply a perception metric Just-Noticeable-Difference to measure degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effective of the proposed method, suggesting a reliable solution for optimal information extraction of object.
Absolute IGS antenna phase center model igs08.atx: status and potential improvements
NASA Astrophysics Data System (ADS)
Schmid, R.; Dach, R.; Collilieux, X.; Jäggi, A.; Schmitz, M.; Dilssner, F.
2016-04-01
On 17 April 2011, all analysis centers (ACs) of the International GNSS Service (IGS) adopted the reference frame realization IGS08 and the corresponding absolute antenna phase center model igs08.atx for their routine analyses. The latter consists of an updated set of receiver and satellite antenna phase center offsets and variations (PCOs and PCVs). An update of the model was necessary due to the difference of about 1 ppb in the terrestrial scale between two consecutive realizations of the International Terrestrial Reference Frame (ITRF2008 vs. ITRF2005), as that parameter is highly correlated with the GNSS satellite antenna PCO components in the radial direction.
Hannula, Manne; Huttunen, Kerttu; Koskelo, Jukka; Laitinen, Tomi; Leino, Tuomo
2008-01-01
In this study, the performances of artificial neural network (ANN) analysis and multilinear regression (MLR) model-based estimation of heart rate were compared in an evaluation of individual cognitive workload. The data comprised electrocardiography (ECG) measurements and an evaluation of cognitive load that induces psychophysiological stress (PPS), collected from 14 interceptor fighter pilots during complex simulated F/A-18 Hornet air battles. In our data, the mean absolute error of the ANN estimate was 11.4 as a visual analog scale score, being 13-23% better than the mean absolute error of the MLR model in the estimation of cognitive workload.
An absolute chronology for early Egypt using radiocarbon dating and Bayesian statistical modelling
Dee, Michael; Wengrow, David; Shortland, Andrew; Stevenson, Alice; Brock, Fiona; Girdland Flink, Linus; Bronk Ramsey, Christopher
2013-01-01
The Egyptian state was formed prior to the existence of verifiable historical records. Conventional dates for its formation are based on the relative ordering of artefacts. This approach is no longer considered sufficient for cogent historical analysis. Here, we produce an absolute chronology for Early Egypt by combining radiocarbon and archaeological evidence within a Bayesian paradigm. Our data cover the full trajectory of Egyptian state formation and indicate that the process occurred more rapidly than previously thought. We provide a timeline for the First Dynasty of Egypt of generational-scale resolution that concurs with prevailing archaeological analysis and produce a chronometric date for the foundation of Egypt that distinguishes between historical estimates. PMID:24204188
ACCESS: integration and pre-flight performance
NASA Astrophysics Data System (ADS)
Kaiser, Mary Elizabeth; Morris, Matthew J.; Aldoroty, Lauren N.; Pelton, Russell; Kurucz, Robert; Peacock, Grant O.; Hansen, Jason; McCandliss, Stephan R.; Rauscher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Wright, Edward L.; Orndorff, Joseph D.; Feldman, Paul D.; Moos, H. Warren; Riess, Adam G.; Gardner, Jonathan P.; Bohlin, Ralph; Deustua, Susana E.; Dixon, W. V.; Sahnow, David J.; Perlmutter, Saul
2017-09-01
Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35 - 1.7μm bandpass. This paper describes the sub-system testing, payload integration, avionics operations, and data transfer for the ACCESS instrument.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald
2016-01-01
The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
NASA Astrophysics Data System (ADS)
Świetoń, Agnieszka; Pollo, Agnieszka; VVDS Team
2014-12-01
We discuss the dependence of galaxy clustering according to their colours up to z˜ 1.2. For that purpose we used one of the wide fields (F22) from the VIMOS-VLT Deep Survey (VVDS). For galaxies with absolute luminosities close to the characteristic Schechter luminosities M^* at a given redshift, we measured the projected two-point correlation function w_{p}(r_{p}) and we estimated the best-fit parameters for a single power-law model: ξ(r) = (r/r_0)^{-γ} , where r_0 is the correlation length and γ is the slope of correlation function. Our results show that red galaxies exhibit the strongest clustering in all epochs up to z˜ 1.2. Green valley represents the "intermediate" population and blue cloud shows the weakest clustering strength. We also compared the shape of w_p(r_p) for different galaxy populations. All three populations have different clustering properties on the small scales, similarly to the behaviour observed in the local catalogues.
Strong Ground Motion Prediction By Composite Source Model
NASA Astrophysics Data System (ADS)
Burjanek, J.; Irikura, K.; Zahradnik, J.
2003-12-01
A composite source model, incorporating different sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R-2. The subevents do not overlap with each other, and the sum of their areas equals to the area of the target event (e.g. mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled either as a finite or point source, differences between these choices are shown. The final slip and duration of each subevent is related to its characteristic dimension, using constant stress-drop scaling. Absolute value of subevents' stress drop is free parameter. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model. An estimation of subevents' stress drop is based on fitting empirical attenuation relations for PGA and PGV, as they represent robust information on strong ground motion caused by earthquakes, including both path and source effect. We use the 2000 M6.6 Western Tottori, Japan, earthquake as validation event, providing comparison between predicted and observed waveforms.
NASA Technical Reports Server (NTRS)
Freedman, Wendy L.; Madore, Barry F.; Scowcroft, Vicky; Mnso, Andy; Persson, S. E.; Rigby, Jane; Sturch, Laura; Stetson, Peter
2011-01-01
We present an overview of and preliminary results from an ongoing comprehensive program that has a goal of determining the Hubble constant to a systematic accuracy of 2%. As part of this program, we are currently obtaining 3.6 micron data using the Infrared Array Camera (IRAC) on Spitzer, and the program is designed to include JWST in the future. We demonstrate that the mid-infrared period-luminosity relation for Cepheids at 3.6 microns is the most accurate means of measuring Cepheid distances to date. At 3.6 microns, it is possible to minimize the known remaining systematic uncertainties in the Cepheid extragalactic distance scale. We discuss the advantages of 3.6 micron observations in minimizing systematic effects in the Cepheid calibration of the Hubble constant including the absolute zero point, extinction corrections, and the effects of metallicity on the colors and magnitudes of Cepheids. We are undertaking three independent tests of the sensitivity of the mid-IR Cepheid Leavitt Law to metallicity, which when combined will allow a robust constraint on the effect. Finally, we are providing a new mid-IR Tully-Fisher relation for spiral galaxies.
NASA Astrophysics Data System (ADS)
Muraviev, A. V.; Smolski, V. O.; Loparo, Z. E.; Vodopyanov, K. L.
2018-04-01
Mid-infrared spectroscopy offers supreme sensitivity for the detection of trace gases, solids and liquids based on tell-tale vibrational bands specific to this spectral region. Here, we present a new platform for mid-infrared dual-comb Fourier-transform spectroscopy based on a pair of ultra-broadband subharmonic optical parametric oscillators pumped by two phase-locked thulium-fibre combs. Our system provides fast (7 ms for a single interferogram), moving-parts-free, simultaneous acquisition of 350,000 spectral data points, spaced by a 115 MHz intermodal interval over the 3.1-5.5 µm spectral range. Parallel detection of 22 trace molecular species in a gas mixture, including isotopologues containing isotopes such as 13C, 18O, 17O, 15N, 34S, 33S and deuterium, with part-per-billion sensitivity and sub-Doppler resolution is demonstrated. The technique also features absolute optical frequency referencing to an atomic clock, a high degree of mutual coherence between the two mid-infrared combs with a relative comb-tooth linewidth of 25 mHz, coherent averaging and feasibility for kilohertz-scale spectral resolution.
Factoring handedness data: I. Item analysis.
Messinger, H B; Messinger, M I
1995-12-01
Recently in this journal Peters and Murphy challenged the validity of factor analyses done on bimodal handedness data, suggesting instead that right- and left-handers be studied separately. But bimodality may be avoidable if attention is paid to Oldfield's questionnaire format and instructions for the subjects. Two characteristics appear crucial: a two-column LEFT-RIGHT format for the body of the instrument and what we call Oldfield's Admonition: not to indicate strong preference for handedness item, such as write, unless "... the preference is so strong that you would never try to use the other hand unless absolutely forced to...". Attaining unimodality of an item distribution would seem to overcome the objections of Peters and Murphy. In a 1984 survey in Boston we used Oldfield's ten-item questionnaire exactly as published. This produced unimodal item distributions. With reflection of the five-point item scale and a logarithmic transformation, we achieved a degree of normalization for the items. Two surveys elsewhere based on Oldfield's 20-item list but with changes in the questionnaire format and the instructions, yielded markedly different item distributions with peaks at each extreme and sometimes in the middle as well.
Motta Dos Santos, Luiz Fernando; Coutte, François; Ravallec, Rozenn; Dhulster, Pascal; Tournier-Couturier, Lucie; Jacques, Philippe
2016-10-01
Culture medium elements were analysed by a screening DoE to identify their influence in surfactin specific production by a surfactin constitutive overproducing Bacillus subtilis strain. Statistics pointed the major enhancement caused by high glutamic acid concentrations, as well as a minor positive influence of tryptophan and glucose. Successively, a central composite design was performed in microplate bioreactors using a BioLector®, in which variations of these impressive parameters, glucose, glutamic acid and tryptophan concentrations were selected for optimization of product-biomass yield (YP/X). Results were exploited in combination with a RSM. In absolute terms, experiments attained an YP/X 3.28-fold higher than those obtained in Landy medium, a usual culture medium used for lipopeptide production by B. subtilis. Therefore, two medium compositions for enhancing biomass and surfactin specific production were proposed and tested in continuous regime in a bubbleless membrane bioreactor. An YP/X increase of 2.26-fold was observed in bioreactor scale. Copyright © 2016 Elsevier Ltd. All rights reserved.
Effects of Nonlinear Inhomogeneity on the Cosmic Expansion with Numerical Relativity.
Bentivegna, Eloisa; Bruni, Marco
2016-06-24
We construct a three-dimensional, fully relativistic numerical model of a universe filled with an inhomogeneous pressureless fluid, starting from initial data that represent a perturbation of the Einstein-de Sitter model. We then measure the departure of the average expansion rate with respect to this homogeneous and isotropic reference model, comparing local quantities to the predictions of linear perturbation theory. We find that collapsing perturbations reach the turnaround point much earlier than expected from the reference spherical top-hat collapse model and that the local deviation of the expansion rate from the homogeneous one can be as high as 28% at an underdensity, for an initial density contrast of 10^{-2}. We then study, for the first time, the exact behavior of the backreaction term Q_{D}. We find that, for small values of the initial perturbations, this term exhibits a 1/a scaling, and that it is negative with a linearly growing absolute value for larger perturbation amplitudes, thereby contributing to an overall deceleration of the expansion. Its magnitude, on the other hand, remains very small even for relatively large perturbations.
Legal and Political Considerations in Large-Scale Adaptive Testing,
One thing we can be absolutely sure of is that once personnel selection and classification decisions begin to be made using CAT , there will be legal...and to understand enough about legal processes and judgments to ’sell’ the benefits of CAT to the courts and the public.
ERIC Educational Resources Information Center
Lenkeit, Jenny; Caro, Daniel H.
2014-01-01
Reports of international large-scale assessments tend to evaluate and compare education system performance based on absolute scores. And policymakers refer to high-performing and economically prosperous education systems to enhance their own systemic features. But socioeconomic differences between systems compromise the plausibility of those…
Lagestad, Pål; Floan, Oddbjørn; Moa, Ivar Fossland
2017-01-01
The purpose of the study was to examine differences in physical activity level, physical fitness, body mass index, and overweight among adolescents in vocational and non-vocational studies, at the age of 14, 16, and 19, using a 5-year longitudinal design. Students in sport studies had the highest absoluteVO2peak and higher physical activity levels, than students in vocational subjects and students with a specialization in general studies. However, there were no significant differences between students in vocational subjects and students with a specialization in general studies according to absoluteVO2peak and physical activity levels. Students in vocational subjects were significantly more overweight/obese at 19 years of age, compared with the other students. Our findings support previous research pointing to overweightedness as being more widespread among adolescents in vocational programs than in non-vocational programs. However, differences in the physical activity level and physical fitness do not seem to explain these differences. PMID:28871279
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valenta, J., E-mail: jan.valenta@mff.cuni.cz; Greben, M.
2015-04-15
Application capabilities of optical microscopes and microspectroscopes can be considerably enhanced by a proper calibration of their spectral sensitivity. We propose and demonstrate a method of relative and absolute calibration of a microspectroscope over an extraordinary broad spectral range covered by two (parallel) detection branches in visible and near-infrared spectral regions. The key point of the absolute calibration of a relative spectral sensitivity is application of the standard sample formed by a thin layer of Si nanocrystals with stable and efficient photoluminescence. The spectral PL quantum yield and the PL spatial distribution of the standard sample must be characterized bymore » separate experiments. The absolutely calibrated microspectroscope enables to characterize spectral photon emittance of a studied object or even its luminescence quantum yield (QY) if additional knowledge about spatial distribution of emission and about excitance is available. Capabilities of the calibrated microspectroscope are demonstrated by measuring external QY of electroluminescence from a standard poly-Si solar-cell and of photoluminescence of Er-doped Si nanocrystals.« less
Absolute or relative? A comparative analysis of the relationship between poverty and mortality.
Fritzell, Johan; Rehnberg, Johan; Bacchus Hertzman, Jennie; Blomgren, Jenni
2015-01-01
We aimed to examine the cross-national and cross-temporal association between poverty and mortality, in particular differentiating the impact of absolute and relative poverty. We employed pooled cross-sectional time series analysis. Our measure of relative poverty was based upon the standard 60% of median income. The measure of absolute, or fixed, poverty was based upon the US poverty threshold. Our analyses were conducted on data for 30 countries between 1978 and 2010, a total of 149 data points. We separately studied infant, child, and adult mortality. Our findings highlight the importance of relative poverty for mortality. Especially for infant and child mortality, we found that our estimates of fixed poverty is close to zero either in the crude models, or when adjusting for gross domestic product. Conversely, the relative poverty estimates increased when adjusting for confounders. Our results seemed robust to a number of sensitivity tests. If we agree that risk of death is important, the public policy implication of our findings is that relative poverty, which has close associations to overall inequality, should be a major concern also among rich countries.
Evidence for the timing of sea-level events during MIS 3
NASA Astrophysics Data System (ADS)
Siddall, M.
2005-12-01
Four large sea-level peaks of millennial-scale duration occur during MIS 3. In addition smaller peaks may exist close to the sensitivity of existing methods to derive sea level during these periods. Millennial-scale changes in temperature during MIS 3 are well documented across much of the planet and are linked in some unknown, yet fundamental way to changes in ice volume / sea level. It is therefore highly likely that the timing of the sea level events during MIS 3 will prove to be a `Rosetta Stone' for understanding millennial scale climate variability. I will review observational and mechanistic arguments for the variation of sea level on Antarctic, Greenland and absolute time scales.
Intensity correlation-based calibration of FRET.
Bene, László; Ungvári, Tamás; Fedor, Roland; Sasi Szabó, László; Damjanovich, László
2013-11-05
Dual-laser flow cytometric resonance energy transfer (FCET) is a statistically efficient and accurate way of determining proximity relationships for molecules of cells even under living conditions. In the framework of this algorithm, absolute fluorescence resonance energy transfer (FRET) efficiency is determined by the simultaneous measurement of donor-quenching and sensitized emission. A crucial point is the determination of the scaling factor α responsible for balancing the different sensitivities of the donor and acceptor signal channels. The determination of α is not simple, requiring preparation of special samples that are generally different from a double-labeled FRET sample, or by the use of sophisticated statistical estimation (least-squares) procedures. We present an alternative, free-from-spectral-constants approach for the determination of α and the absolute FRET efficiency, by an extension of the presented framework of the FCET algorithm with an analysis of the second moments (variances and covariances) of the detected intensity distributions. A quadratic equation for α is formulated with the intensity fluctuations, which is proved sufficiently robust to give accurate α-values on a cell-by-cell basis in a wide system of conditions using the same double-labeled sample from which the FRET efficiency itself is determined. This seemingly new approach is illustrated by FRET measurements between epitopes of the MHCI receptor on the cell surface of two cell lines, FT and LS174T. The figures show that whereas the common way of α determination fails at large dye-per-protein labeling ratios of mAbs, this presented-as-new approach has sufficient ability to give accurate results. Although introduced in a flow cytometer, the new approach can also be straightforwardly used with fluorescence microscopes. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Use of consensus development to establish national research priorities in critical care
Vella, Keryn; Goldfrad, Caroline; Rowan, Kathy; Bion, Julian; Black, Nick
2000-01-01
Objectives To test the feasibility of using a nominal group technique to establish clinical and health services research priorities in critical care and to test the representativeness of the group's views. Design Generation of topics by means of a national survey; a nominal group technique to establish the level of consensus; a survey to test the representativeness of the results. Setting United Kingdom and Republic of Ireland. Subjects Nominal group composed of 10 doctors (8 consultants, 2 trainees) and 2 nurses. Main outcome measure Level of support (median) and level of agreement (mean absolute deviation from the median) derived from a 9 point Likert scale. Results Of the 325 intensive care units approached, 187 (58%) responded, providing about 1000 suggestions for research. Of the 106 most frequently suggested topics considered by the nominal group, 37 attracted strong support, 48 moderate support and 21 weak support. There was more agreement after the group had met—overall mean of the mean absolute deviations from the median fell from 1.41 to 1.26. The group's views represented the views of the wider community of critical care staff (r=0.73, P<0.01). There was no significant difference in the views of staff from teaching or from non-teaching hospitals. Of the 37 topics that attracted the strongest support, 24 were concerned with organisational aspects of critical care and only 13 with technology assessment or clinical research. Conclusions A nominal group technique is feasible and reliable for determining research priorities among clinicians. This approach is more democratic and transparent than the traditional methods used by research funding bodies. The results suggest that clinicians perceive research into the best ways of delivering and organising services as a high priority. PMID:10753149
Absolute Timing of the Crab Pulsar with RXTE
NASA Technical Reports Server (NTRS)
Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.
2004-01-01
We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun) and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 plus or minus 0.00120 period in phase, or 344 plus or minus 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 plus or minus 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses ma be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
Within-day variability on short and long walking tests in persons with multiple sclerosis.
Feys, Peter; Bibby, Bo; Romberg, Anders; Santoyo, Carme; Gebara, Benoit; de Noordhout, Benoit Maertens; Knuts, Kathy; Bethoux, Francois; Skjerbæk, Anders; Jensen, Ellen; Baert, Ilse; Vaney, Claude; de Groot, Vincent; Dalgas, Ulrik
2014-03-15
To compare within-day variability of short (10 m walking test at usual and fastest speed; 10MWT) and long (2 and 6-minute walking test; 2MWT/6MWT) tests in persons with multiple sclerosis. Observational study. MS rehabilitation and research centers in Europe and US within RIMS (European network for best practice and research in MS rehabilitation). Ambulatory persons with MS (Expanded Disability Status Scale 0-6.5). Subjects of different centers performed walking tests at 3 time points during a single day. 10MWT, 2MWT and 6MWT at fastest speed and 10MWT at usual speed. Ninety-five percent limits of agreement were computed using a random effects model with individual pwMS as random effect. Following this model, retest scores are with 95% certainty within these limits of baseline scores. In 102 subjects, within-day variability was constant in absolute units for the 10MWT, 2MWT and 6MWT at fastest speed (+/-0.26, 0.16 and 0.15m/s respectively, corresponding to +/-19.2m and +/-54 m for the 2MWT and 6MWT) independent on the severity of ambulatory dysfunction. This implies a greater relative variability with increasing disability level, often above 20% depending on the applied test. The relative within-day variability of the 10MWT at usual speed was +/-31% independent of ambulatory function. Absolute values of within-day variability on walking tests at fastest speed were independent of disability level and greater with short compared to long walking tests. Relative within-day variability remained overall constant when measured at usual speed. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Heil, Kurt
2017-01-01
Fast and accurate assessment of within-field variation is essential for detecting field-wide heterogeneity and contributing to improvements in the management of agricultural lands. The goal of this paper is to provide an overview of field scale characterization by electromagnetic induction, firstly with a focus on the applications of EM38 to salinity, soil texture, water content and soil water turnover, soil types and boundaries, nutrients and N-turnover and soil sampling designs. Furthermore, results concerning special applications in agriculture, horticulture and archaeology are included. In addition to these investigations, this survey also presents a wide range of practical methods for use. Secondly, the effectiveness of conductivity readings for a specific target in a specific locality is determined by the intensity at which soil factors influence these values in relationship to the desired information. The interpretation and utility of apparent electrical conductivity (ECa) readings are highly location- and soil-specific, so soil properties influencing the measurement of ECa must be clearly understood. From the various calibration results, it appears that regression constants for the relationships between ECa, electrical conductivity of aqueous soil extracts (ECe), texture, yield, etc., are not necessarily transferable from one region to another. The modelling of ECa, soil properties, climate and yield are important for identifying the location to which specific utilizations of ECa technology (e.g., ECa−texture relationships) can be appropriately applied. In general, the determination of absolute levels of ECa is frequently not possible, but it appears to be quite a robust method to detect relative differences, both spatially and temporally. Often, the use of ECa is restricted to its application as a covariate or the use of the readings in a relative sense rather than as absolute terms. PMID:29113048
New generalized corresponding states correlation for surface tension of normal saturated liquids
NASA Astrophysics Data System (ADS)
Yi, Huili; Tian, Jianxiang
2015-08-01
A new simple correlation based on the principle of corresponding state is proposed to estimate the temperature-dependent surface tension of normal saturated liquids. The new correlation contains three coefficients obtained by fitting 17,051 surface tension data of 38 saturated normal liquids. These 38 liquids contain refrigerants, hydrocarbons and some other inorganic liquids. The new correlation requires only the triple point temperature, triple point surface tension and critical point temperature as input and is able to well represent the experimental surface tension data for each of the 38 saturated normal liquids from the triple temperature up to the point near the critical point. The new correlation gives absolute average deviations (AAD) values below 3% for all of these 38 liquids with the only exception being octane with AAD=4.30%. Thus, the new correlation gives better overall results in comparison with other correlations for these 38 normal saturated liquids.
Pointing at targets by children with congenital and transient blindness.
Gaunet, Florence; Ittyerah, Miriam; Rossetti, Yves
2007-04-01
The study investigated pointing at memorized targets in reachable space in congenitally blind (CB) and blindfolded sighted (BS) children (6, 8, 10 and 12 years; ten children in each group). The target locations were presented on a sagittal plane by passive positioning of the left index finger. A go signal for matching the target location with the right index finger was provided 0 or 4 s after demonstration. An age effect was found only for absolute distance errors and the surface area of pointing was smaller for the CB children. Results indicate that early visual experience and age are not predictive factors for pointing in children. The delay was an important factor at all ages and for both groups, indicating distinct spatial representations such as egocentric and allocentric frames of reference, for immediate and delayed pointing, respectively. Therefore, the CB like the BS children are able to use both ego- and allocentric frames of reference.
Endovascular Therapy after Intravenous t-PA versus t-PA Alone for Stroke
Broderick, Joseph P.; Palesch, Yuko Y.; Demchuk, Andrew M.; Yeatts, Sharon D.; Khatri, Pooja; Hill, Michael D.; Jauch, Edward C.; Jovin, Tudor G.; Yan, Bernard; Silver, Frank L.; von Kummer, Rüdiger; Molina, Carlos A.; Demaerschalk, Bart M.; Budzik, Ronald; Clark, Wayne M.; Zaidat, Osama O.; Malisch, Tim W.; Goyal, Mayank; Schonewille, Wouter J.; Mazighi, Mikael; Engelter, Stefan T.; Anderson, Craig; Spilker, Judith; Carrozzella, Janice; Ryckborst, Karla J.; Janis, L. Scott; Martin, Renée H.; Foster, Lydia D.; Tomsick, Thomas A.
2013-01-01
BACKGROUND Endovascular therapy is increasingly used after the administration of intravenous tissue plasminogen activator (t-PA) for patients with moderate-to-severe acute ischemic stroke, but whether a combined approach is more effective than intravenous t-PA alone is uncertain. METHODS We randomly assigned eligible patients who had received intravenous t-PA within 3 hours after symptom onset to receive additional endovascular therapy or intravenous t-PA alone, in a 2:1 ratio. The primary outcome measure was a modified Rankin scale score of 2 or less (indicating functional independence) at 90 days (scores range from 0 to 6, with higher scores indicating greater disability). RESULTS The study was stopped early because of futility after 656 participants had undergone randomization (434 patients to endovascular therapy and 222 to intravenous t-PA alone). The proportion of participants with a modified Rankin score of 2 or less at 90 days did not differ significantly according to treatment (40.8% with endovascular therapy and 38.7% with intravenous t-PA; absolute adjusted difference, 1.5 percentage points; 95% confidence interval [CI], −6.1 to 9.1, with adjustment for the National Institutes of Health Stroke Scale [NIHSS] score [8–19, indicating moderately severe stroke, or ≥20, indicating severe stroke]), nor were there significant differences for the predefined subgroups of patients with an NIHSS score of 20 or higher (6.8 percentage points; 95% CI, −4.4 to 18.1) and those with a score of 19 or lower (−1.0 percentage point; 95% CI, −10.8 to 8.8). Findings in the endovascular-therapy and intravenous t-PA groups were similar for mortality at 90 days (19.1% and 21.6%, respectively; P = 0.52) and the proportion of patients with symptomatic intracerebral hemorrhage within 30 hours after initiation of t-PA (6.2% and 5.9%, respectively; P = 0.83). CONCLUSIONS The trial showed similar safety outcomes and no significant difference in functional independence with endovascular therapy after intravenous t-PA, as compared with intravenous t-PA alone. (Funded by the National Institutes of Health and others; ClinicalTrials.gov number, NCT00359424.) PMID:23390923
An integrated method for atherosclerotic carotid plaque segmentation in ultrasound image.
Qian, Chunjun; Yang, Xiaoping
2018-01-01
Carotid artery atherosclerosis is an important cause of stroke. Ultrasound imaging has been widely used in the diagnosis of atherosclerosis. Therefore, segmenting atherosclerotic carotid plaque in ultrasound image is an important task. Accurate plaque segmentation is helpful for the measurement of carotid plaque burden. In this paper, we propose and evaluate a novel learning-based integrated framework for plaque segmentation. In our study, four different classification algorithms, along with the auto-context iterative algorithm, were employed to effectively integrate features from ultrasound images and later also the iteratively estimated and refined probability maps together for pixel-wise classification. The four classification algorithms were support vector machine with linear kernel, support vector machine with radial basis function kernel, AdaBoost and random forest. The plaque segmentation was implemented in the generated probability map. The performance of the four different learning-based plaque segmentation methods was tested on 29 B-mode ultrasound images. The evaluation indices for our proposed methods were consisted of sensitivity, specificity, Dice similarity coefficient, overlap index, error of area, absolute error of area, point-to-point distance, and Hausdorff point-to-point distance, along with the area under the ROC curve. The segmentation method integrated the random forest and an auto-context model obtained the best results (sensitivity 80.4 ± 8.4%, specificity 96.5 ± 2.0%, Dice similarity coefficient 81.0 ± 4.1%, overlap index 68.3 ± 5.8%, error of area -1.02 ± 18.3%, absolute error of area 14.7 ± 10.9%, point-to-point distance 0.34 ± 0.10 mm, Hausdorff point-to-point distance 1.75 ± 1.02 mm, and area under the ROC curve 0.897), which were almost the best, compared with that from the existed methods. Our proposed learning-based integrated framework investigated in this study could be useful for atherosclerotic carotid plaque segmentation, which will be helpful for the measurement of carotid plaque burden. Copyright © 2017 Elsevier B.V. All rights reserved.
Holme, Øyvind; Løberg, Magnus; Kalager, Mette; Bretthauer, Michael; Hernán, Miguel A; Aas, Eline; Eide, Tor J; Skovlund, Eva; Lekven, Jon; Schneede, Jörn; Tveit, Kjell Magne; Vatn, Morten; Ursin, Giske; Hoff, Geir
2018-06-05
The long-term effects of sigmoidoscopy screening on colorectal cancer (CRC) incidence and mortality in women and men are unclear. To determine the effectiveness of flexible sigmoidoscopy screening after 15 years of follow-up in women and men. Randomized controlled trial. (ClinicalTrials.gov: NCT00119912). Oslo and Telemark County, Norway. Adults aged 50 to 64 years at baseline without prior CRC. Screening (between 1999 and 2001) with flexible sigmoidoscopy with and without additional fecal blood testing versus no screening. Participants with positive screening results were offered colonoscopy. Age-adjusted CRC incidence and mortality stratified by sex. Of 98 678 persons, 20 552 were randomly assigned to screening and 78 126 to no screening. Adherence rates were 64.7% in women and 61.4% in men. Median follow-up was 14.8 years. The absolute risks for CRC in women were 1.86% in the screening group and 2.05% in the control group (risk difference, -0.19 percentage point [95% CI, -0.49 to 0.11 percentage point]; HR, 0.92 [CI, 0.79 to 1.07]). In men, the corresponding risks were 1.72% and 2.50%, respectively (risk difference, -0.78 percentage point [CI, -1.08 to -0.48 percentage points]; hazard ratio [HR], 0.66 [CI, 0.57 to 0.78]) (P for heterogeneity = 0.004). The absolute risks for death from CRC in women were 0.60% in the screening group and 0.59% in the control group (risk difference, 0.01 percentage point [CI, -0.16 to 0.18 percentage point]; HR, 1.01 [CI, 0.77 to 1.33]). The corresponding risks for death from CRC in men were 0.49% and 0.81%, respectively (risk difference, -0.33 percentage point [CI, -0.49 to -0.16 percentage point]; HR, 0.63 [CI, 0.47 to 0.83]) (P for heterogeneity = 0.014). Follow-up through national registries. Offering sigmoidoscopy screening in Norway reduced CRC incidence and mortality in men but had little or no effect in women. Norwegian government and Norwegian Cancer Society.
Lang, Paul Z; Thulasi, Praneetha; Khandelwal, Sumitra S; Hafezi, Farhad; Randleman, J Bradley
2018-05-02
To evaluate the correlation between anterior axial curvature difference maps following corneal cross-linking (CXL) for progressive keratoconus obtained from Scheimpflug-based tomography and Placido-based topography. Between-devices reliability analysis of randomized clinical trial data METHODS: Corneal imaging was collected at a single center institution pre-operatively and at 3, 6, and 12 months post-operatively using Scheimpflug-based tomography (Pentacam, Oculus Inc., Lynnwood, WA) and Scanning-slit, Placido-based topography (Orbscan II, Bausch & Lomb, Rochester, NY) in patients with progressive keratoconus receiving standard protocol CXL (3mW/cm 2 for 30 minutes). Regularization index (RI), absolute maximum keratometry (K Max), and change in (ΔK Max) were compared between the two devices at each time point. 51 eyes from 36 patients were evaluated at all time points. values were significantly different at all time points [56.01±5.3D Scheimpflug vs. 55.04±5.1D scanning-slit pre-operatively (p=0.003); 54.58±5.3D Scheimpflug vs. 53.12±4.9D scanning-slit at 12 months (p<0.0001)] but strongly correlated between devices (r=0.90-0.93) at all time points. The devices were not significantly different at any time point for either ΔK Max or RI but were poorly correlated at all time points (r=0.41-0.53 for ΔK Max, r=0.29-0.48 for RI). At 12 months, 95% LOA was 7.51D for absolute, 8.61D for ΔK Max, and 19.86D for RI. Measurements using Scheimpflug and scanning-slit Placido-based technology are correlated but not interchangeable. Both devices appear reasonable for separately monitoring the cornea's response to CXL; however, caution should be used when comparing results obtained with one measuring technology to the other. Copyright © 2018 Elsevier Inc. All rights reserved.
A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services
NASA Astrophysics Data System (ADS)
Malinowski, Marcin; Kwiecień, Janusz
2016-12-01
Precise Point Positioning (PPP) is a technique used to determine the position of receiver antenna without communication with the reference station. It may be an alternative solution to differential measurements, where maintaining a connection with a single RTK station or a regional network of reference stations RTN is necessary. This situation is especially common in areas with poorly developed infrastructure of ground stations. A lot of research conducted so far on the use of the PPP technique has been concerned about the development of entire day observation sessions. However, this paper presents the results of a comparative analysis of accuracy of absolute determination of position from observations which last between 1 to 7 hours with the use of four permanent services which execute calculations with PPP technique such as: Automatic Precise Positioning Service (APPS), Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP), GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). On the basis of acquired results of measurements, it can be concluded that at least two-hour long measurements allow acquiring an absolute position with an accuracy of 2-4 cm. An evaluation of the impact on the accuracy of simultaneous positioning of three points test network on the change of the horizontal distance and the relative height difference between measured triangle vertices was also conducted. Distances and relative height differences between points of the triangular test network measured with a laser station Leica TDRA6000 were adopted as references. The analyses of results show that at least two hours long measurement sessions can be used to determine the horizontal distance or the difference in height with an accuracy of 1-2 cm. Rapid products employed in calculations conducted with PPP technique reached the accuracy of determining coordinates on a close level as in elaborations which employ Final products.
Nakamura, Kenji; Hirayama-Kurogi, Mio; Ito, Shingo; Kuno, Takuya; Yoneyama, Toshihiro; Obuchi, Wataru; Terasaki, Tetsuya; Ohtsuki, Sumio
2016-08-01
The purpose of the present study was to examine simultaneously the absolute protein amounts of 152 membrane and membrane-associated proteins, including 30 metabolizing enzymes and 107 transporters, in pooled microsomal fractions of human liver, kidney, and intestine by means of SWATH-MS with stable isotope-labeled internal standard peptides, and to compare the results with those obtained by MRM/SRM and high resolution (HR)-MRM/PRM. The protein expression levels of 27 metabolizing enzymes, 54 transporters, and six other membrane proteins were quantitated by SWATH-MS; other targets were below the lower limits of quantitation. Most of the values determined by SWATH-MS differed by less than 50% from those obtained by MRM/SRM or HR-MRM/PRM. Various metabolizing enzymes were expressed in liver microsomes more abundantly than in other microsomes. Ten, 13, and eight transporters listed as important for drugs by International Transporter Consortium were quantified in liver, kidney, and intestinal microsomes, respectively. Our results indicate that SWATH-MS enables large-scale multiplex absolute protein quantification while retaining similar quantitative capability to MRM/SRM or HR-MRM/PRM. SWATH-MS is expected to be useful methodology in the context of drug development for elucidating the molecular mechanisms of drug absorption, metabolism, and excretion in the human body based on protein profile information. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Relative and absolute reliability of measures of linoleic acid-derived oxylipins in human plasma.
Gouveia-Figueira, Sandra; Bosson, Jenny A; Unosson, Jon; Behndig, Annelie F; Nording, Malin L; Fowler, Christopher J
2015-09-01
Modern analytical techniques allow for the measurement of oxylipins derived from linoleic acid in biological samples. Most validatory work has concerned extraction techniques, repeated analysis of aliquots from the same biological sample, and the influence of external factors such as diet and heparin treatment upon their levels, whereas less is known about the relative and absolute reliability of measurements undertaken on different days. A cohort of nineteen healthy males were used, where samples were taken at the same time of day on two occasions, at least 7 days apart. Relative reliability was assessed using Lin's concordance correlation coefficients (CCC) and intraclass correlation coefficients (ICC). Absolute reliability was assessed by Bland-Altman analyses. Nine linoleic acid oxylipins were investigated. ICC and CCC values ranged from acceptable (0.56 [13-HODE]) to poor (near zero [9(10)- and 12(13)-EpOME]). Bland-Altman limits of agreement were in general quite wide, ranging from ±0.5 (12,13-DiHOME) to ±2 (9(10)-EpOME; log10 scale). It is concluded that relative reliability of linoleic acid-derived oxylipins varies between lipids with compounds such as the HODEs showing better relative reliability than compounds such as the EpOMEs. These differences should be kept in mind when designing and interpreting experiments correlating plasma levels of these lipids with factors such as age, body mass index, rating scales etc. Copyright © 2015 Elsevier Inc. All rights reserved.
Absolute dual-comb spectroscopy at 1.55 μm by free-running Er:fiber lasers
NASA Astrophysics Data System (ADS)
Cassinerio, Marco; Gambetta, Alessio; Coluccelli, Nicola; Laporta, Paolo; Galzerano, Gianluca
2014-06-01
We report on a compact scheme for absolute referencing and coherent averaging for dual-comb based spectrometers, exploiting a single continuous-wave (CW) laser in a transfer oscillator configuration. The same CW laser is used for both absolute calibration of the optical frequency axis and the generation of a correction signal which is used for a real-time jitter compensation in a fully electrical feed-forward scheme. The technique is applied to a near-infrared spectrometer based on a pair of free-running mode-locked Er:fiber lasers, allowing to perform real-time absolute-frequency measurements over an optical bandwidth of more than 25 nm, with coherent interferogram averaging over 1-s acquisition time, leading to a signal-to-noise ratio improvement of 29 dB over the 50 μs single shot acquisition. Using 10-cm single pass cell, a value of 1.9 × 10-4 cm-1 Hz-0.5 noise-equivalent-absorption over 1 s integration time is obtained, which can be further scaled down with a multi-pass or resonant cavity. The adoption of a single CW laser, together with the absence of optical locks, and the full-fiber design makes this spectrometer a robust and compact system to be employed in gas-sensing applications.
The impact of the resolution of meteorological datasets on catchment-scale drought studies
NASA Astrophysics Data System (ADS)
Hellwig, Jost; Stahl, Kerstin
2017-04-01
Gridded meteorological datasets provide the basis to study drought at a range of scales, including catchment scale drought studies in hydrology. They are readily available to study past weather conditions and often serve real time monitoring as well. As these datasets differ in spatial/temporal coverage and spatial/temporal resolution, for most studies there is a tradeoff between these features. Our investigation examines whether biases occur when studying drought on catchment scale with low resolution input data. For that, a comparison among the datasets HYRAS (covering Central Europe, 1x1 km grid, daily data, 1951 - 2005), E-OBS (Europe, 0.25° grid, daily data, 1950-2015) and GPCC (whole world, 0.5° grid, monthly data, 1901 - 2013) is carried out. Generally, biases in precipitation increase with decreasing resolution. Most important variations are found during summer. In low mountain range of Central Europe the datasets of sparse resolution (E-OBS, GPCC) overestimate dry days and underestimate total precipitation since they are not able to describe high spatial variability. However, relative measures like the correlation coefficient reveal good consistencies of dry and wet periods, both for absolute precipitation values and standardized indices like the Standardized Precipitation Index (SPI) or Standardized Precipitation Evaporation Index (SPEI). Particularly the most severe droughts derived from the different datasets match very well. These results indicate that absolute values of sparse resolution datasets applied to catchment scale might be critical to use for an assessment of the hydrological drought at catchment scale, whereas relative measures for determining periods of drought are more trustworthy. Therefore, studies on drought, that downscale meteorological data, should carefully consider their data needs and focus on relative measures for dry periods if sufficient for the task.
NASA Astrophysics Data System (ADS)
Schroeer, K.; Kirchengast, G.
2016-12-01
Relating precipitation intensity to temperature is a popular approach to assess potential changes of extreme events in a warming climate. Potential increases in extreme rainfall induced hazards, such as flash flooding, serve as motivation. It has not been addressed whether the temperature-precipitation scaling approach is meaningful on a regional to local level, where the risk of climate and weather impact is dealt with. Substantial variability of temperature sensitivity of extreme precipitation has been found that results from differing methodological assumptions as well as from varying climatological settings of the study domains. Two aspects are consistently found: First, temperature sensitivities beyond the expected consistency with the Clausius-Clapeyron (CC) equation are a feature of short-duration, convective, sub-daily to sub-hourly high-percentile rainfall intensities at mid-latitudes. Second, exponential growth ceases or reverts at threshold temperatures that vary from region to region, as moisture supply becomes limited. Analyses of pooled data, or of single or dispersed stations over large areas make it difficult to estimate the consequences in terms of local climate risk. In this study we test the meaningfulness of the scaling approach from an impact scale perspective. Temperature sensitivities are assessed using quantile regression on hourly and sub-hourly precipitation data from 189 stations in the Austrian south-eastern Alpine region. The observed scaling rates vary substantially, but distinct regional and seasonal patterns emerge. High sensitivity exceeding CC-scaling is seen on the 10-minute scale more than on the hourly scale, in storms shorter than 2 hours duration, and in shoulder seasons, but it is not necessarily a significant feature of the extremes. To be impact relevant, change rates need to be linked to absolute rainfall amounts. We show that high scaling rates occur in lower temperature conditions and thus have smaller effect on absolute precipitation intensities. While reporting of mere percentage numbers can be misleading, scaling studies can add value to process understanding on the local scale, if the factors that influence scaling rates are considered from both a methodological and a physical perspective.
Angular scale expansion theory and the misperception of egocentric distance in locomotor space.
Durgin, Frank H
Perception is crucial for the control of action, but perception need not be scaled accurately to produce accurate actions. This paper reviews evidence for an elegant new theory of locomotor space perception that is based on the dense coding of angular declination so that action control may be guided by richer feedback. The theory accounts for why so much direct-estimation data suggests that egocentric distance is underestimated despite the fact that action measures have been interpreted as indicating accurate perception. Actions are calibrated to the perceived scale of space and thus action measures are typically unable to distinguish systematic (e.g., linearly scaled) misperception from accurate perception. Whereas subjective reports of the scaling of linear extent are difficult to evaluate in absolute terms, study of the scaling of perceived angles (which exist in a known scale, delimited by vertical and horizontal) provides new evidence regarding the perceptual scaling of locomotor space.
Adrait, Annie; Lebert, Dorothée; Trauchessec, Mathieu; Dupuis, Alain; Louwagie, Mathilde; Masselon, Christophe; Jaquinod, Michel; Chevalier, Benoît; Vandenesch, François; Garin, Jérôme; Bruley, Christophe; Brun, Virginie
2012-06-06
Enterotoxin A (SEA) is a staphylococcal virulence factor which is suspected to worsen septic shock prognosis. However, the presence of SEA in the blood of sepsis patients has never been demonstrated. We have developed a mass spectrometry-based assay for the targeted and absolute quantification of SEA in serum. To enhance sensitivity and specificity, we combined an immunoaffinity-based sample preparation with mass spectrometry analysis in the selected reaction monitoring (SRM) mode. Absolute quantification of SEA was performed using the PSAQ™ method (Protein Standard Absolute Quantification), which uses a full-length isotope-labeled SEA as internal standard. The lower limit of detection (LLOD) and lower limit of quantification (LLOQ) were estimated at 352pg/mL and 1057pg/mL, respectively. SEA recovery after immunocapture was determined to be 7.8±1.4%. Therefore, we assumed that less than 1femtomole of each SEA proteotypic peptide was injected on the liquid chromatography column before SRM analysis. From a 6-point titration experiment, quantification accuracy was determined to be 77% and precision at LLOQ was lower than 5%. With this sensitive PSAQ-SRM assay, we expect to contribute to decipher the pathophysiological role of SEA in severe sepsis. This article is part of a Special Issue entitled: Proteomics: The clinical link. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.
NASA Astrophysics Data System (ADS)
Alvarez, L. V.; Grams, P.
2017-12-01
We present a parallelized, three-dimensional, turbulence-resolving model using the Detached-Eddy Simulation (DES) technique, tested at the scale of the river-reach in the Colorado River. DES is a hybrid large eddy simulation (LES) and Reynolds-averaged Navier Stokes (RANS). RANS is applied to the near-bed grid cells, where grid resolution is not sufficient to fully resolve wall turbulence. LES is applied in the flow interior. We utilize the Spalart-Allmaras one equation turbulence closure with a rough wall extension. The model resolves large-scale turbulence using DES and simultaneously integrates the suspended sediment advection-diffusion equation. The Smith and McLean suspended sediment boundary condition is used to calculate the upward and downward settling of sediment fluxes in the grid cells attached to the bed. Model results compare favorably with ADCP measurements of flow taken on the Colorado River in Grand Canyon during the High Flow Experiment (HFE) of 2008. The model accurately reproduces the size and position of the major recirculation currents, and the error in velocity magnitude was found to be less than 17% or 0.22 m/s absolute error. The mean deviation of the direction of velocity with respect to the measured velocity was found to be 20 degrees. Large-scale turbulence structures with vorticity predominantly in the vertical direction are produced at the shear layer between the main channel and the separation zone. However, these structures rapidly become three-dimensional with no preferred orientation of vorticity. Cross-stream velocities, into the main recirculation zone just upstream of the point of reattachment and out of the main recirculation region just downstream of the point of separation, are highest near the bed. Lateral separation eddies are more efficient at storing and exporting sediment than previously modeled. The input of sediment to the eddy recirculation zone occurs in the interface of the eddy and main channel. Pulsation of the strength of the return current becomes a key factor to determine the rates of erosion and deposition in the main recirculation zone.
NASA Astrophysics Data System (ADS)
Blume, T.; Hassler, S. K.; Weiler, M.
2017-12-01
Hydrological science still struggles with the fact that while we wish for spatially continuous images or movies of state variables and fluxes at the landscape scale, most of our direct measurements are point measurements. To date regional measurements resolving landscape scale patterns can only be obtained by remote sensing methods, with the common drawback that they remain near the earth surface and that temporal resolution is generally low. However, distributed monitoring networks at the landscape scale provide the opportunity for detailed and time-continuous pattern exploration. Even though measurements are spatially discontinuous, the large number of sampling points and experimental setups specifically designed for the purpose of landscape pattern investigation open up new avenues of regional hydrological analyses. The CAOS hydrological observatory in Luxembourg offers a unique setup to investigate questions of temporal stability, pattern evolution and persistence of certain states. The experimental setup consists of 45 sensor clusters. These sensor clusters cover three different geologies, two land use classes, five different landscape positions, and contrasting aspects. At each of these sensor clusters three soil moisture/soil temperature profiles, basic climate variables, sapflow, shallow groundwater, and stream water levels were measured continuously for the past 4 years. We will focus on characteristic landscape patterns of various hydrological state variables and fluxes, studying their temporal stability on the one hand and the dependence of patterns on hydrological states on the other hand (e.g. wet vs dry). This is extended to time-continuous pattern analysis based on time series of spatial rank correlation coefficients. Analyses focus on the absolute values of soil moisture, soil temperature, groundwater levels and sapflow, but also investigate the spatial pattern of the daily changes of these variables. The analysis aims at identifying hydrologic signatures of the processes or landscape characteristics acting as major controls. While groundwater, soil water and transpiration are closely linked by the water cycle, they are controlled by different processes and we expect this to be reflected in interlinked but not necessarily congruent patterns and responses.
Absolute/convective secondary instabilities and the role of confinement in free shear layers
NASA Astrophysics Data System (ADS)
Arratia, Cristóbal; Mowlavi, Saviz; Gallaire, François
2018-05-01
We study the linear spatiotemporal stability of an infinite row of equal point vortices under symmetric confinement between parallel walls. These rows of vortices serve to model the secondary instability leading to the merging of consecutive (Kelvin-Helmholtz) vortices in free shear layers, allowing us to study how confinement limits the growth of shear layers through vortex pairings. Using a geometric construction akin to a Legendre transform on the dispersion relation, we compute the growth rate of the instability in different reference frames as a function of the frame velocity with respect to the vortices. This approach is verified and complemented with numerical computations of the linear impulse response, fully characterizing the absolute/convective nature of the instability. Similar to results by Healey on the primary instability of parallel tanh profiles [J. Fluid Mech. 623, 241 (2009), 10.1017/S0022112008005284], we observe a range of confinement in which absolute instability is promoted. For a parallel shear layer with prescribed confinement and mixing length, the threshold for absolute/convective instability of the secondary pairing instability depends on the separation distance between consecutive vortices, which is physically determined by the wavelength selected by the previous (primary or pairing) instability. In the presence of counterflow and moderate to weak confinement, small (large) wavelength of the vortex row leads to absolute (convective) instability. While absolute secondary instabilities in spatially developing flows have been previously related to an abrupt transition to a complex behavior, this secondary pairing instability regenerates the flow with an increased wavelength, eventually leading to a convectively unstable row of vortices. We argue that since the primary instability remains active for large wavelengths, a spatially developing shear layer can directly saturate on the wavelength of such a convectively unstable row, by-passing the smaller wavelengths of absolute secondary instability. This provides a wavelength selection mechanism, according to which the distance between consecutive vortices should be sufficiently large in comparison with the channel width in order for the row of vortices to persist. We argue that the proposed wavelength selection criteria can serve as a guideline for experimentally obtaining plane shear layers with counterflow, which has remained an experimental challenge.
NASA Astrophysics Data System (ADS)
José Polo, María; José Pérez-Palazón, María; Saénz de Rodrigáñez, Marta; Pimentel, Rafael; Arheimer, Berit
2017-04-01
Global hydrological models provide scientists and technicians with distributed data over medium to large areas from which assessment of water resource planning and use can be easily performed. However, scale conflicts between global models' spatial resolution and the local significant spatial scales in heterogeneous areas usually pose a constraint for the direct use and application of these models' results. The SWICCA (Service for Water Indicators in Climate Change Adaptation) Platform developed under the Copernicus Climate Change Service (C3S) offers a wide range of both climate and hydrological indicators obtained on a global scale with different time and spatial resolutions. Among the different study cases supporting the SWICCA demonstration of local impact assessment, the Sierra Nevada study case (South Spain) is a representative example of mountainous coastal catchments in the Mediterranean region. This work shows the lessons learnt during the study case development to derive local impact indicator tailored to suit the local end-users of water resource in this snow-dominated area. Different approaches were followed to select the most accurate method to downscale the global data and variables to the local level in a highly abrupt topography, in a sequential step approach. 1) SWICCA global climate variable downscaling followed by river flow simulation from a local hydrological model in selected control points in the catchment, together with 2) SWICCA global river flow values downscaling to the control points followed by corrections with local transfer functions were both tested against the available local river flow series of observations during the reference period. This test was performed for the different models and the available spatial resolutions included in the SWICCA platform. From the results, the second option, that is, the use of SWICCA river flow variables, performed the best approximations, once the local transfer functions were applied to the global values and an additional correction was performed based on the relative anomalies obtained instead of the absolute values. This approach was used to derive the future projections of selected local indicators for each end-user in the area under different climate change scenarios. Despite the spatial scale conflicts, the SWICCA river flow indicators (simulated by the E-HYPEv3.1.2 model) succeeded in approximating the observations during the reference period 1970-2000 when provided on a catchment scale, once local transfer functions and further anomaly correction were performed. Satisfactory results were obtained on a monthly scale for river flow in the main stream of the watershed, and on a daily scale for the headwater streams. The accessibility to the hydrological model WiMMed, which includes a snow module, locally validated in the study area has been crucial to downscale the SWICCA results and prove their usefulness.
Dual-comb spectroscopy of molecular electronic transitions in condensed phases
NASA Astrophysics Data System (ADS)
Cho, Byungmoon; Yoon, Tai Hyun; Cho, Minhaeng
2018-03-01
Dual-comb spectroscopy (DCS) utilizes two phase-locked optical frequency combs to allow scanless acquisition of spectra using only a single point detector. Although recent DCS measurements demonstrate rapid acquisition of absolutely calibrated spectral lines with unprecedented precision and accuracy, complex phase-locking schemes and multiple coherent averaging present significant challenges for widespread adoption of DCS. Here, we demonstrate Global Positioning System (GPS) disciplined DCS of a molecular electronic transition in solution at around 800 nm, where the absorption spectrum is recovered by using a single time-domain interferogram. We anticipate that this simplified dual-comb technique with absolute time interval measurement and ultrabroad bandwidth will allow adoption of DCS to tackle molecular dynamics investigation through its implementation in time-resolved nonlinear spectroscopic studies and coherent multidimensional spectroscopy of coupled chromophore systems.
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC 0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC 0-t of saroglitazar. Only models with regression coefficients (R 2 ) >0.90 were screened for further evaluation. The best R 2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC 0-t of saroglitazar and verification of precision and bias using Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R 2 > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R 2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R 2 = 0.9323) and 0.75, 2, and 8 hours (R 2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC 0-t prediction of saroglitazar. The same models, when applied to the AUC 0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30% and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC 0-t ) within predefined acceptable bias and imprecision limit. Same model was also used to predict AUC 0-∞ . The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
Point defect weakened thermal contraction in monolayer graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zha, Xian-Hu; Department of Physics, University of Science and Technology of China, Hefei; USTC-CityU Joint Advanced Research Centre, Suzhou 215123
We investigate the thermal expansion behaviors of monolayer graphene and three configurations of graphene with point defects, namely the replacement of one carbon atom with a boron or nitrogen atom, or of two neighboring carbon atoms by boron-nitrogen atoms, based on calculations using first-principles density functional theory. It is found that the thermal contraction of monolayer graphene is significantly decreased by point defects. Moreover, the corresponding temperature for negative linear thermal expansion coefficient with the maximum absolute value is reduced. The cause is determined to be point defects that enhance the mechanical strength of graphene and then reduce the amplitudemore » and phonon frequency of the out-of-plane acoustic vibration mode. Such defect weakening of graphene thermal contraction will be useful in nanotechnology to diminish the mismatching or strain between the graphene and its substrate.« less
NASA Astrophysics Data System (ADS)
Fonstad, M. A.; Dietrich, J. T.
2014-12-01
At the very smallest spatial scales of fluvial field analysis, measurements made historically in situ are often now supplemented, or even replaced by, remote sensing methods. This is particularly true in the case of topographic and particle size measurement. In the field, the scales of in situ observation usually range from millimeters up to hundreds of meters. Two recent approaches for remote mapping of river environments at the scales of historical in situ observations are (1) camera-based structure from motion (SfM), and (2) active patterned-light measurement with devices such as the Kinect. Even if only carried by hand, these two approaches can produce topographic datasets over three to four orders of magnitude of spatial scale. Which approach is most useful? Previous studies have demonstrated that both SfM and the Kinect are precise and accurate over in situ field measurement scales; we instead turn to alternate comparative metrics to help determine which tools might be best for our river measurement tasks. These metrics might include (1) the ease of field use, (2) which general environments are or are not amenable to measurement, (3) robustness to changing environmental conditions, (4) ease of data processing, and (5) cost. We test these metrics in a variety of bar-scale fluvial field environments, including a large-river cobble bar, a sand-bedded river point bar, and a complex mountain stream bar. The structure from motion approach is field-equipment inexpensive, is viable over a wide range of environmental conditions, and is highly spatially scalable. The approach requires some type of spatial referencing to make the data useful. The Kinect has the advantages of an almost real-time display of collected data, so problems can be detected quickly, being fast and easy to use, and the data are collected with arbitrary but metric coordinates, so absolute referencing isn't needed to use the data for many problems. It has the disadvantages of its light field generally being unable to penetrate water surfaces, becoming unusable in strong sunlight, and providing so much data as to be sometimes unwieldy in the data processing stage.
Declining Foreign Enrollment at Higher Education Institutions in the United States: A Research Note
ERIC Educational Resources Information Center
Naidoo, Vik
2007-01-01
When the Institute of International Education reported a drop of 2.4% in international student enrollment in the United States in 2003/2004, the first absolute decline in foreign enrollments since 1971/1972 (Open Doors, 2004), many were quick to point fingers at visa policies instituted after the September 11, 2001 attacks. The "Visas…
ERIC Educational Resources Information Center
Clare, Michael
2006-01-01
The dreams and predictions of a digital classroom never quite materialized in the social studies history area. For a variety of reasons teachers keep the technology just outside the door peeking in but never truly welcomed. Not welcomed because of the nature of courseware initially offered, not welcomed because the technology was advanced for the…