Search strategies to identify information on adverse effects: a systematic review
Golder, Su; Loke, Yoon
2009-01-01
Objectives: The review evaluated studies of electronic database search strategies designed to retrieve adverse effects data for systematic reviews. Methods: Studies of adverse effects were located in ten databases as well as by checking references, hand-searching, searching citations, and contacting experts. Two reviewers screened the retrieved records for potentially relevant papers. Results: Five thousand three hundred thirteen citations were retrieved, yielding 19 studies designed to develop or evaluate adverse effect filters, of which 3 met the inclusion criteria. All 3 studies identified highly sensitive search strategies capable of retrieving over 95% of relevant records. However, 1 study did not evaluate precision, while the level of precision in the other 2 studies ranged from 0.8% to 2.8%. Methodological issues in these papers included the relatively small number of records, absence of a validation set of records for testing, and limited evaluation of precision. Conclusions: The results indicate the difficulty of achieving highly sensitive searches for information on adverse effects with a reasonable level of precision. Researchers who intend to locate studies on adverse effects should allow for the amount of resources and time required to conduct a highly sensitive search. PMID:19404498
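The performance measures discussed above (sensitivity, precision, and the screening burden they imply) can be computed directly. This is a minimal sketch with hypothetical counts, not the review's actual data; the function names are illustrative.

```python
# Sketch: the performance measures used to evaluate search filters
# (sensitivity, precision, number needed to read). Counts are hypothetical.

def filter_metrics(relevant_retrieved, total_retrieved, total_relevant):
    """Return sensitivity, precision and number needed to read (NNR)."""
    sensitivity = relevant_retrieved / total_relevant
    precision = relevant_retrieved / total_retrieved
    nnr = 1.0 / precision  # records screened per relevant record found
    return sensitivity, precision, nnr

# A highly sensitive adverse-effects filter: >95% recall, ~1% precision.
sens, prec, nnr = filter_metrics(relevant_retrieved=96,
                                 total_retrieved=9600,
                                 total_relevant=100)
print(f"sensitivity={sens:.1%} precision={prec:.1%} NNR={nnr:.0f}")
```

With ~1% precision, roughly 100 records must be screened per relevant record found, which is why the review stresses the time and resources such searches require.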
Boeker, Martin; Vach, Werner; Motschall, Edith
2013-10-26
Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar have rarely used the search interface in a realistic way, but instead merely checked for the existence of gold-standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of Google Scholar searches under conditions derived from the structured search procedures conventional in scientific literature retrieval, and to provide an overview of the current advantages and disadvantages of the Google Scholar search interface for scientific literature retrieval. General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. The Cochrane search strategies were translated into Google Scholar search expressions as faithfully as possible while preserving the original search semantics. The references of the studies included in the Cochrane reviews were checked for their presence in the result sets of the Google Scholar searches. Relative recall and precision were calculated. The investigated Cochrane reviews had between 11 and 70 included references each, 396 in total. The Google Scholar searches produced result sets of between 4,320 and 67,800 hits, 291,190 hits in total. The relative recall of the individual Google Scholar searches ranged from a minimum of 76.2% to a maximum of 100% (7 searches). Their precision ranged from a minimum of 0.05% to a maximum of 0.92%. The overall relative recall across all searches was 92.9%; the overall precision was 0.13%. The reported relative recall must be interpreted with care.
It is a quality indicator of Google Scholar confined to an experimental setting that cannot be reproduced in routine systematic retrieval because of the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide elements necessary for systematic scientific literature retrieval, such as tools for incremental query optimization, export of large numbers of references, a visual search builder, or a history function. Google Scholar is not ready to serve as a professional search tool for tasks where a structured retrieval methodology is necessary. PMID:24160679
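The aggregate figures reported in the abstract can be cross-checked against each other. This is a consistency check only; the abstract's values are rounded, so agreement is approximate.

```python
# Consistency check on the aggregate figures reported above (the abstract's
# values are rounded, so agreement is only approximate).
total_refs = 396        # included references across the 14 Cochrane reviews
total_hits = 291_190    # summed Google Scholar result-set sizes

found = round(0.929 * total_refs)             # overall relative recall 92.9%
precision = found / total_hits

print(f"references found: {found}")           # ~368 of 396
print(f"overall precision: {precision:.2%}")  # ~0.13%, matching the abstract
```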
Glanville, Julie M; Duffy, Steven; McCool, Rachael; Varley, Danielle
2014-07-01
Since 2005, International Committee of Medical Journal Editors (ICMJE) member journals have required that clinical trials be registered in publicly available trials registers before they are considered for publication. This research explores whether it is adequate, when searching to inform systematic reviews, to search for relevant clinical trials using only public trials registers, and seeks to identify the optimal search approaches in trials registers. A search was conducted in ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP) for research studies that had been included in eight systematic reviews. Four search approaches (highly sensitive, sensitive, precise, and highly precise) were performed using the basic and advanced interfaces in both resources. On average, 84% of studies were not listed in either resource. The largest number of included studies was retrieved in ClinicalTrials.gov and ICTRP when a sensitive search approach was used in the basic interface. The use of the advanced interface maintained or improved sensitivity in 16 of 19 strategies for ClinicalTrials.gov and in 8 of 18 for ICTRP. No single search approach was sensitive enough to identify all of the studies included in the six reviews. Trials registers cannot yet be relied upon as the sole means of locating trials for systematic reviews. Trials registers lag behind the major bibliographic databases in terms of their search interfaces. For systematic reviews, both trials registers and major bibliographic databases should be searched. Trials registers should be searched using sensitive approaches, and both of the registers consulted in this study should be searched.
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) plays an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP over a short period of time has recently been reported, on the basis of experiments, to reach a few millimeters in the horizontal components and sub-centimeter level in the vertical component when measuring seismic motion, several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly surprising short-period performance of high-rate PPP, we have carried out a theoretical error analysis of PPP and conducted the corresponding short-period simulations. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with the reported high precision of high-rate PPP and thus unambiguously confirm it; this is further affirmed here by real-data experiments, indicating that high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component when measuring motion over a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP over short periods.
The experiments with real data have also indicated that the precision of PPP solutions can degrade to the cm level in both the horizontal and vertical components, if the geometry of satellites is rather poor with a large DOP value.
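The degradation under poor geometry noted above follows the standard first-order GNSS error budget, in which position precision scales roughly as DOP times the ranging error. This is a generic illustration, not the paper's error model; the noise level is an assumed value.

```python
# First-order illustration (not the paper's error model): positioning
# precision scales roughly as DOP x ranging error, which is why PPP
# solutions degrade toward the cm level under poor satellite geometry.

def position_sigma(dop, ranging_sigma_m):
    """Approximate 1-sigma position error for a given DOP."""
    return dop * ranging_sigma_m

phase_noise_m = 0.002          # a few mm of carrier-phase-level noise (assumed)
for dop in (1.5, 3.0, 6.0):    # good -> poor satellite geometry
    sigma_mm = position_sigma(dop, phase_noise_m) * 1000
    print(f"DOP={dop}: sigma ~ {sigma_mm:.0f} mm")
```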
Kedzior, Karina Karolina; Schuchinsky, Maria; Gerkensmeier, Imke; Loo, Colleen
2017-08-01
The present study aimed to systematically compare the cognitive outcomes of high-frequency repetitive transcranial magnetic stimulation (HF-rTMS) and electroconvulsive therapy (ECT) in head-to-head studies of patients with major depressive disorder (MDD). A systematic literature search identified six studies, with 219 MDD patients in total, that were too heterogeneous to reliably detect meaningful differences in acute cognitive outcomes after ECT vs. HF-rTMS. Cognitive effects of brain stimulation vary depending on the timeframe and methods of assessment, stimulation parameters, and maintenance treatment. Thus, acute and longer-term differences in cognitive outcomes both need to be investigated at precisely defined timeframes and with similar instruments assessing comparable functions.
High-Precision Measurement of the Ne19 Half-Life and Implications for Right-Handed Weak Currents
NASA Astrophysics Data System (ADS)
Triambak, S.; Finlay, P.; Sumithrarachchi, C. S.; Hackman, G.; Ball, G. C.; Garrett, P. E.; Svensson, C. E.; Cross, D. S.; Garnsworthy, A. B.; Kshetri, R.; Orce, J. N.; Pearson, M. R.; Tardiff, E. R.; Al-Falou, H.; Austin, R. A. E.; Churchman, R.; Djongolov, M. K.; D'Entremont, R.; Kierans, C.; Milovanovic, L.; O'Hagan, S.; Reeve, S.; Sjue, S. K. L.; Williams, S. J.
2012-07-01
We report a precise determination of the Ne19 half-life to be T1/2=17.262±0.007s. This result disagrees with the most recent precision measurements and is important for placing bounds on predicted right-handed interactions that are absent in the current standard model. We are able to identify and disentangle two competing systematic effects that influence the accuracy of such measurements. Our findings prompt a reassessment of results from previous high-precision lifetime measurements that used similar equipment and methods.
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals cluster spatially because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, separately and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10,000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Restriction of organisms to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision of systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10,000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (ν₈ and ν(W)) that corrected for inter-transect correlation were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys.
Similar variance estimator performance rankings were found with a second, differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, none of the systematic variance estimators tested was free from bias. On balance, systematic designs bring narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
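The precision advantage of a one-start aligned systematic grid over random allocation can be seen in a toy replication. The patchy population below is synthetic, not the paper's simulation; patch sizes and densities are arbitrary choices.

```python
import numpy as np

# Toy replication of the comparison above: estimate mean density with 100
# transects allocated randomly vs. on a one-start aligned 10 x 10 grid.
# The patchy population is synthetic, not the paper's data.
rng = np.random.default_rng(1)

side = 100
density = np.zeros((side, side))
for _ in range(20):                        # habitat patches of elevated density
    cx, cy = rng.integers(10, 90, size=2)
    density[cx - 5:cx + 5, cy - 5:cy + 5] = rng.uniform(5, 15)
counts = rng.poisson(density)              # organism counts per transect cell

def random_survey():
    rows = rng.integers(0, side, size=100)
    cols = rng.integers(0, side, size=100)
    return counts[rows, cols].mean()

def systematic_survey():
    ox, oy = rng.integers(0, 10, size=2)   # one random start, then aligned grid
    return counts[ox::10, oy::10].mean()

rand_var = np.var([random_survey() for _ in range(2000)])
sys_var = np.var([systematic_survey() for _ in range(2000)])
print(f"variance ratio (random / systematic): {rand_var / sys_var:.1f}")
```

Because the systematic grid spreads transects evenly across every patch, its survey means vary far less between replicates than those of random allocation, mirroring the three-to-five-fold variance reduction reported above.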
NASA Astrophysics Data System (ADS)
Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.
We extend the physics case for a new high-energy, ultra-high-statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived parton distribution functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of deep inelastic scattering (DIS) events that is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift, which yields reduced systematic uncertainties. High-statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations that may hint at physics beyond the Standard Model.
Atom Interferometry with Ultracold Quantum Gases in a Microgravity Environment
NASA Astrophysics Data System (ADS)
Williams, Jason; D'Incao, Jose; Chiow, Sheng-Wey; Yu, Nan
2015-05-01
Precision atom interferometers (AI) in space promise exciting technical capabilities for fundamental physics research, with proposals including unprecedented tests of the weak equivalence principle, precision measurements of the fine-structure and gravitational constants, and detection of gravitational waves and dark energy. Consequently, multiple AI-based missions have been proposed to NASA, including a dual-atomic-species interferometer that is to be integrated into the Cold Atom Laboratory (CAL) onboard the International Space Station. In this talk, I will discuss our plans and preparation at JPL for the proposed flight experiments to use the CAL facility to study the leading-order systematics expected to corrupt future high-precision measurements of fundamental physics with AIs in microgravity. The project centers on the physics of pairwise interactions and molecular dynamics in these quantum systems as a means to overcome uncontrolled shifts associated with the gravity gradient and few-particle collisions. We will further utilize the CAL AI for proof-of-principle tests of systematic mitigation and phase-readout techniques for use in the next generation of precision metrology experiments based on AIs in microgravity. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
An optimal search filter for retrieving systematic reviews and meta-analyses
2012-01-01
Background Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. Search filters have been developed to assist in searching the extensive volume of indexed published literature. Search filters can be designed to find literature related to a certain subject (i.e., a content-specific filter) or to particular study designs (i.e., a methodological filter). The objective of this paper is to describe the development and validation of the health-evidence.ca Systematic Review search filter and to compare its performance to other available systematic review filters. Methods This analysis of search filters was conducted in MEDLINE, EMBASE, and CINAHL. The performance of thirty-one search filters in total was assessed. A validation data set of 219 articles indexed between January 2004 and December 2005 was used to evaluate each filter's sensitivity, specificity, precision, and number needed to read. Results Nineteen of the 31 search filters were effective in retrieving a high proportion of relevant articles (sensitivity scores greater than 85%). The majority achieved high sensitivity at the expense of precision and yielded large result sets. The main advantage of the health-evidence.ca Systematic Review search filter over the other filters was that it maintained the same level of sensitivity while reducing the number of articles that needed to be screened. Conclusions The health-evidence.ca Systematic Review search filter is a useful tool for identifying published systematic reviews, with further screening to identify those evaluating the effectiveness of public health interventions. This focus-narrowing filter saves considerable time and resources during updates of this online resource, without sacrificing sensitivity. PMID:22512835
Mn-Cr isotopic systematics of Chainpur chondrules and bulk ordinary chondrites
NASA Technical Reports Server (NTRS)
Nyquist, L.; Lindstrom, D.; Wiesmann, H.; Bansal, B.; Shih, C.-Y.; Mittlefehldt, D.; Martinez, R.; Wentworth, S.
1994-01-01
We report on an ongoing study of the Mn-Cr systematics of individual Chainpur (LL3.4) chondrules and compare the results to those for bulk ordinary chondrites. Twenty-eight chondrules were surveyed for abundances of Mn, Cr, Na, Fe, Sc, Hf, Ir, and Zn by INAA. Twelve were chosen for SEM/EDX and high-precision Cr-isotopic studies on the basis of their LL-chondrite-normalized Mn(LL), Sc(LL), (Mn/Fe)(LL), and (Sc/Fe)(LL) values, as well as their Mn/Cr ratios. Classification into textural types follows from SEM/EDX examination of interior surfaces.
Dylla, Daniel P.; Megison, Susan D.
2015-01-01
Objective. We compared the precision of a search strategy designed specifically to retrieve randomized controlled trials (RCTs) and systematic reviews of RCTs with search strategies designed for broader purposes. Methods. We designed an experimental search strategy that automatically revised searches up to five times, using increasingly restrictive queries as long as at least 50 citations were retrieved. We compared the ability of the experimental and alternative strategies to retrieve studies relevant to 312 test questions. The primary outcome, search precision, was defined for each strategy as the proportion of relevant, high-quality citations among the first 50 citations retrieved. Results. The experimental strategy had the highest median precision (5.5%; interquartile range [IQR]: 0%–12%), followed by the narrow strategy of the PubMed Clinical Queries (4.0%; IQR: 0%–10%). The experimental strategy found the most high-quality citations (median 2; IQR: 0–6) and was the strategy most likely to find at least one high-quality citation (73% of searches; 95% confidence interval 68%–78%). All comparisons were statistically significant. Conclusions. The experimental strategy performed best on all outcomes, although all strategies had low precision. PMID:25922798
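The auto-revising strategy described above can be sketched as a loop that keeps tightening the query while the result set stays at or above 50 citations. This is one plausible reading of the abstract's description, with a stand-in `run_query` function and hypothetical query tiers and result sizes, not the study's actual implementation.

```python
# Sketch of the auto-revising strategy: apply increasingly restrictive query
# tiers, keeping the most restrictive tier that still returns >= 50 citations.
# `run_query` is a stand-in for a real literature search, not an actual API.

def run_query(query):
    # Hypothetical result-set sizes keyed by query restrictiveness.
    sizes = {"broad": 4200, "tier1": 930, "tier2": 180, "tier3": 35}
    return list(range(sizes[query]))  # fake citation IDs

def auto_revised_search(tiers, floor=50, max_revisions=5):
    results = run_query(tiers[0])
    for query in tiers[1:max_revisions + 1]:
        candidate = run_query(query)
        if len(candidate) < floor:    # next tier is too restrictive: stop
            break
        results = candidate
    return results

citations = auto_revised_search(["broad", "tier1", "tier2", "tier3"])
print(len(citations))  # 180: the last tier that still met the 50-citation floor
```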
NASA Astrophysics Data System (ADS)
MacPherson, G. J.; Defouilloy, C.; Kita, N. T.
2017-07-01
High-precision SIMS analysis of Al-Mg isotopes in USNM 3898, the CAI on which ALL is based, yields 26Al/27Al = (4.88 ± 0.14) × 10-5 in its interior vs. 26Al/27Al = (4.56 ± 0.11) × 10-5 in its outer mantle, suggesting later partial re-melting.
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ±20%. There is uncertainty about its true precision, and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig was assembled in which water circulated at different constant rates, with ports for inserting catheters into a flow chamber. Flow rate was measured by an externally placed Transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. The coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-system (systematic) variability was derived by plotting calibration lines for sets of catheters, whose slopes were used to estimate the systematic component. The performances of three cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and five Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) was 5.4% for Arrow and 4.8% for Edwards catheters. The random precision error was ±10.0% (95% confidence limits). The CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ±11.6%. The total precision error of a single thermodilution reading was ±15.3%, and ±13.0% for triplicate readings.
Precision error increased by 45% when using the Sirecust monitor and by 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus to calculate the precision error. Using the Siemens monitor, we established a precision error of ±15.3% for single and ±13.0% for triplicate readings, similar to the previous estimate of ±20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output depends on the catheter and monitor model selected.
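The totals quoted above follow from combining the random and systematic components in quadrature, with averaging of triplicate readings shrinking only the random component by √3. A short check of that arithmetic:

```python
import math

# Reproducing the arithmetic above: random and systematic precision errors
# combine in quadrature; averaging triplicate readings shrinks only the
# random component (by sqrt(3)).
random_err = 10.0      # %, 95% limits, single reading
systematic_err = 11.6  # %

single = math.hypot(random_err, systematic_err)
triplicate = math.hypot(random_err / math.sqrt(3), systematic_err)
print(f"single reading: +/-{single:.1f}%")      # 15.3%
print(f"triplicate:     +/-{triplicate:.1f}%")  # 13.0%
```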
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, research in this field has lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved with the calibration model. In summary, the error analysis approach and the calibration method proved adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Right colic artery anatomy: a systematic review of cadaveric studies.
Haywood, M; Molyneux, C; Mahadevan, V; Srinivasaiah, N
2017-12-01
Complete mesocolic excision for right-sided colon cancer may offer an oncologically superior excision compared with traditional right hemicolectomy, through high vascular tie and adherence to embryonic planes during dissection, supported by preoperative scanning to accurately define the tumour's lymphovascular supply and drainage. The authors support and recommend precision oncosurgery based on these principles, with an emphasis on the importance of understanding the vascular anatomy. However, the anatomical variability of the right colic artery (RCA) has resulted in significant discord in the literature regarding its precise arrangement. We systematically reviewed the literature on the incidence of the different origins of the RCA in cadaveric studies. An electronic search was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations up to October 2016 using the MeSH terms 'right colic artery' and 'anatomy' (PROSPERO registration number CRD42016041578). Ten studies involving 1073 cadavers were identified as suitable for analysis from the 211 articles retrieved. The weighted mean incidence with which the RCA arose from each parent vessel was 36.8% for the superior mesenteric artery, 31.9% for the ileocolic artery, 27.7% for the root of the middle colic artery, and 2.5% for the right branch of the middle colic artery. In 1.1% of individuals the RCA shared a trunk with the middle colic and ileocolic arteries. The weighted mean incidence of two RCAs was 7.0%, and in 8.9% of cadavers the RCA was absent. This anatomical information will add to the technical nuances of precision oncosurgery in right-sided colon resections.
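A weighted mean incidence across cadaveric studies, as used above, weights each study's reported incidence by its number of cadavers. The sketch below uses hypothetical study sizes and incidences, not the review's actual data.

```python
# Sketch of the weighted-mean-incidence calculation: each study's reported
# incidence is weighted by its number of cadavers. The data are hypothetical.

def weighted_mean_incidence(studies):
    """studies: list of (n_cadavers, incidence_percent) tuples."""
    total_n = sum(n for n, _ in studies)
    return sum(n * p for n, p in studies) / total_n

studies = [(200, 40.0), (150, 35.0), (100, 30.0)]  # hypothetical studies
print(f"{weighted_mean_incidence(studies):.1f}%")  # 36.1%
```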
Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum
NASA Astrophysics Data System (ADS)
Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun
2018-04-01
The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 104 with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10-9 T in the transverse direction so that the dipole-interaction limited WEP test is expected at η ≲ 10-14 for a pendulum dipole less than 10-9 A m2. The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ˜ 10-14.
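The shielding figures above can be sanity-checked against each other: a residual field of 1.9 × 10⁻⁹ T after attenuation by a factor of 1.1 × 10⁴ implies an ambient field of about 2 × 10⁻⁵ T, i.e. the order of the geomagnetic field. A back-of-envelope check, not part of the paper's analysis:

```python
# Back-of-envelope consistency check on the shielding figures above.
residual_field = 1.9e-9   # T, transverse field inside the shield shells
attenuation = 1.1e4       # reduction factor of the multi-layer shield

ambient = residual_field * attenuation
print(f"implied ambient field: {ambient:.2e} T")  # ~2.1e-5 T, geomagnetic order
```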
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in northwestern Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss, and False Alarm (FA) estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is also analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably for heavier rainfalls.
It is also important to note that PERSIANN's error characteristics vary from season to season due to the conditions and rainfall patterns of each season, which shows the need for a seasonally distinct approach to the calibration of this product. Overall, we believe that the error-component analyses performed in this study can substantially help further local studies on the post-calibration and bias reduction of PERSIANN estimations.
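The Hit/Miss/FA residual decomposition described in this abstract has a simple structure. Below is a minimal sketch of this standard decomposition (the rain/no-rain threshold and the sample data are hypothetical illustrations, not the authors' code or data):

```python
def decompose_bias(sat, gauge, thresh=0.0):
    """Split the total satellite-minus-gauge bias into Hit, Miss,
    and False Alarm (FA) components. The three parts sum exactly
    to the total bias, sum(sat) - sum(gauge)."""
    hit = miss = fa = 0.0
    for s, g in zip(sat, gauge):
        s_rain, g_rain = s > thresh, g > thresh
        if s_rain and g_rain:
            hit += s - g   # both detect rain: over/underestimation
        elif g_rain:
            miss += -g     # satellite misses observed rain
        elif s_rain:
            fa += s        # satellite reports rain that did not occur
    return hit, miss, fa

# Hypothetical daily totals (mm) for one pixel:
sat   = [5.0, 0.0, 2.0, 0.0, 1.0]
gauge = [4.0, 3.0, 0.0, 0.0, 1.5]
h, m, f = decompose_bias(sat, gauge)
total = sum(s - g for s, g in zip(sat, gauge))
# h + m + f reproduces the total bias exactly
```

The decomposition is useful precisely because the three components are additive, so a small total bias can still hide large, compensating Miss and FA errors.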
NASA Astrophysics Data System (ADS)
Archer, Gregory J.
Highly siderophile element (HSE) abundances and 187Re-187Os isotopic systematics for H chondrites and ungrouped achondrites, as well as 182Hf-182W isotopic systematics of H and CR chondrites, are reported. Achondrite fractions with higher HSE abundances show little disturbance of 187Re-187Os isotopic systematics. By contrast, isotopic systematics for lower-abundance fractions are consistent with minor Re mobilization. For magnetically separated H chondrite fractions, the magnitudes of disturbance for the 187Re-187Os isotopic system follow the trend coarse-metal
Reliable inference of light curve parameters in the presence of systematics
NASA Astrophysics Data System (ADS)
Gibson, Neale P.
2016-10-01
Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the required precision to extract atmospheric information surpasses the design specifications of most general purpose instrumentation. This results in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light-curve parameters conditioned on the subjective choice of systematics models and model-selection criteria. Here, I briefly review the use of systematics models commonly used for transmission and emission spectroscopy, including model selection, marginalisation over models, and stochastic processes. These form a hierarchy of models with increasing degree of objectivity. I argue that marginalisation over many systematics models is a minimal requirement for robust inference. Stochastic models provide even more flexibility and objectivity, and therefore produce the most reliable results. However, no systematics models are perfect, and the best strategy is to compare multiple methods and repeat observations where possible.
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
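The design comparison described above can be illustrated with a toy Monte Carlo. This is a hedged sketch only: the patchy population, the sample size, and the two designs (simple random vs. 1-in-k systematic sampling) are hypothetical stand-ins, not the SAMPLE program or the field data:

```python
import random
import statistics

random.seed(1)

# Hypothetical aggregated population: 100 quadrats, mostly empty,
# with a minority of high-count patches (mimicking patchiness).
population = [0] * 80 + [random.randint(20, 60) for _ in range(20)]
random.shuffle(population)
true_density = statistics.mean(population)

def srs_estimate(pop, n):
    """Density estimate from a simple random sample of n quadrats."""
    return statistics.mean(random.sample(pop, n))

def systematic_estimate(pop, n):
    """Density estimate from a 1-in-k systematic sample, random start."""
    k = len(pop) // n
    start = random.randrange(k)
    return statistics.mean(pop[start::k][:n])

# Repeat each design many times; the spread of the estimates is a
# direct (simulation-based) measure of each design's precision.
srs  = [srs_estimate(population, 20) for _ in range(2000)]
sys_ = [systematic_estimate(population, 20) for _ in range(2000)]
print(round(statistics.stdev(srs), 2), round(statistics.stdev(sys_), 2))
```

Comparing the standard deviations of the replicated estimates is the same logic the study applies when ranking designs by the precision of density estimates at a fixed sampling fraction.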
High-precision 41K/39K measurements by MC-ICP-MS indicate terrestrial variability of δ41K
Morgan, Leah; Santiago Ramos, Danielle P.; Davidheiser-Kroll, Brett; Faithfull, John; Lloyd, Nicholas S.; Ellam, Rob M.; Higgins, John A.
2018-01-01
Potassium is a major component in continental crust, the fourth-most abundant cation in seawater, and a key element in biological processes. Until recently, difficulties with existing analytical techniques hindered our ability to identify natural isotopic variability of potassium isotopes in terrestrial materials. However, measurement precision has greatly improved and a range of K isotopic compositions has now been demonstrated in natural samples. In this study, we present a new technique for high-precision measurement of K isotopic ratios using high-resolution, cold plasma multi-collector mass spectrometry. We apply this technique to demonstrate natural variability in the ratio of 41K to 39K in a diverse group of geological and biological samples, including silicate and evaporite minerals, seawater, and plant and animal tissues. The total range in 41K/39K ratios is ca. 2.6‰, with a long-term external reproducibility of 0.17‰ (2σ, N = 108). Seawater and seawater-derived evaporite minerals are systematically enriched in 41K compared to silicate minerals by ca. 0.6‰, a result consistent with recent findings [1,2]. Although our average bulk-silicate Earth value (-0.54‰) is indistinguishable from previously published values, we find systematic δ41K variability in some high-temperature sample suites, particularly those with evidence for the presence of fluids. The δ41K values of biological samples span a range of ca. 1.2‰ between terrestrial mammals, plants, and marine organisms. Implications of terrestrial K isotope variability for the atomic weight of K and K-based geochronology are discussed. Our results indicate that high-precision measurements of stable K isotopes, made using commercially available mass spectrometers, can provide unique insights into the chemistry of potassium in geological and biological systems.
Homogeneous Characterization of Transiting Exoplanet Systems
NASA Astrophysics Data System (ADS)
Gomez Maqueo Chew, Yilen; Faedi, Francesca; Hebb, Leslie; Pollacco, Don; Stassun, Keivan; Ghezzi, Luan; Cargile, Phillip; Barros, Susana; Smalley, Barry; Mack, Claude
2012-02-01
We aim to obtain a homogeneous set of high-resolution, high signal-to-noise (S/N) spectra for a large and diverse sample of stars with transiting planets, using the Kitt Peak 4-m echelle spectrograph for bright Northern targets (7.7
Kagami, Saya; Yokoyama, Tetsuya
2016-09-21
Sm-Nd dating, which involves long-lived (147)Sm-(143)Nd and short-lived (146)Sm-(142)Nd systematics, has been widely used in the field of geosciences. To obtain precise and accurate ages of geological samples, the determination of highly precise Nd isotope ratios with nearly complete removal of Ce and Sm is indispensable to avoid mass spectral interference. In this study, we developed a three-step column chemistry procedure for separating Nd from geological samples that includes cation exchange chromatography for separating major elements from rare earth elements (REEs), oxidative extraction chromatography using Ln Resin coupled with HNO3 + KBrO3 for separating tetravalent Ce from the remaining REEs, and final purification of Nd using Ln Resin. This method enables high recovery of Nd (>91%) with effective separation of Nd from Ce and Sm (Ce/Nd < 1.2 × 10(-5) and Sm/Nd < 5.2 × 10(-6)). In addition, we devised a new method for determining Sm/Nd ratios by isotope dilution inductively coupled plasma mass spectrometry using (145)Nd- and (149)Sm-enriched spikes coupled with a group separation of REEs using TRU Resin. Applying the techniques developed in this study, we determined the Sm-Nd whole-rock isochron ages of basaltic eucrites, yielding 4577 +55/−88 Ma and 4558 ± 300 Ma for (146)Sm-(142)Nd and (147)Sm-(143)Nd systematics, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Process influences and correction possibilities for high precision injection molded freeform optics
NASA Astrophysics Data System (ADS)
Dick, Lars; Risse, Stefan; Tünnermann, Andreas
2016-08-01
Modern injection molding processes offer a cost-efficient method for manufacturing high-precision plastic optics for high-volume applications. Besides the form deviation of molded freeform optics, internal material stress is a relevant factor influencing the functionality of a freeform optic in an optical system. This paper illustrates the dominant influence parameters of an injection molding process on form deviation and internal material stress, based on a freeform demonstrator geometry. Furthermore, a deterministic and efficient method of 3D mold correction for systematic, asymmetrical shrinkage errors is shown to reach micrometer-range shape accuracy at diameters up to 40 mm. In a second case, a stress-optimized parameter combination using unconventional molding conditions was 3D corrected to achieve high-precision, low-stress freeform polymer optics.
Testing the effectiveness of simplified search strategies for updating systematic reviews.
Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana
2017-08-01
The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated: sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened, resulting in the final inclusion of 163 randomized controlled trials. Compared with the reference search, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80%, while precision and NNR were 5.4 times better than for the reference search. The SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic review, particularly considering budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable when updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.
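The three performance measures used in this study have simple definitions. A minimal sketch follows; the counts are hypothetical, chosen only to resemble the reported totals (8,239 hits, 163 relevant trials), not taken from the paper's per-review data:

```python
def search_performance(n_relevant_retrieved, n_retrieved, n_relevant_total):
    """Standard search-filter metrics: sensitivity (recall),
    precision, and number needed to read (NNR)."""
    sensitivity = n_relevant_retrieved / n_relevant_total
    precision = n_relevant_retrieved / n_retrieved
    nnr = 1.0 / precision  # records screened per relevant record found
    return sensitivity, precision, nnr

# Hypothetical: a filter retrieving 8,239 records that contain
# 136 of the 163 relevant trials.
sens, prec, nnr = search_performance(136, 8239, 163)
print(f"sensitivity={sens:.1%}, precision={prec:.2%}, NNR={nnr:.0f}")
```

Because NNR is the reciprocal of precision, the paper's statement that "precision and NNR were 4.5 times better" describes a single underlying improvement viewed two ways.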
EVEREST: Pixel Level Decorrelation of K2 Light Curves
NASA Astrophysics Data System (ADS)
Luger, Rodrigo; Agol, Eric; Kruse, Ethan; Barnes, Rory; Becker, Andrew; Foreman-Mackey, Daniel; Deming, Drake
2016-10-01
We present EPIC Variability Extraction and Removal for Exoplanet Science Targets (EVEREST), an open-source pipeline for removing instrumental noise from K2 light curves. EVEREST employs a variant of pixel level decorrelation to remove systematics introduced by the spacecraft's pointing error and a Gaussian process to capture astrophysical variability. We apply EVEREST to all K2 targets in campaigns 0-7, yielding light curves with precision comparable to that of the original Kepler mission for stars brighter than Kp ≈ 13, and within a factor of two of the Kepler precision for fainter targets. We perform cross-validation and transit injection and recovery tests to validate the pipeline, and compare our light curves to the other de-trended light curves available for download at the MAST High Level Science Products archive. We find that EVEREST achieves the highest average precision of any of these pipelines for unsaturated K2 stars. The improved precision of these light curves will aid in exoplanet detection and characterization, investigations of stellar variability, asteroseismology, and other photometric studies. The EVEREST pipeline can also easily be applied to future surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets. The EVEREST light curves and the source code used to generate them are freely available online.
PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews.
Katchamart, Wanruchada; Faulkner, Amy; Feldman, Brian; Tomlinson, George; Bombardier, Claire
2011-07-01
To compare the performance of Ovid-MEDLINE vs. PubMed for identifying randomized controlled trials of methotrexate (MTX) in patients with rheumatoid arthritis (RA). We created search strategies for Ovid-MEDLINE and PubMed for a systematic review of MTX in RA. Their performance was evaluated using sensitivity, precision, and number needed to read (NNR). Comparing searches in Ovid-MEDLINE vs. PubMed, PubMed retrieved more citations overall than Ovid-MEDLINE; however, of the 20 citations that met eligibility criteria for the review, Ovid-MEDLINE retrieved 17 and PubMed 18. The sensitivity was 85% for Ovid-MEDLINE vs. 90% for PubMed, whereas the precision and NNR were comparable (precision: 0.881% for Ovid-MEDLINE vs. 0.884% for PubMed and NNR: 114 for Ovid-MEDLINE vs. 113 for PubMed). In systematic reviews of RA, PubMed has higher sensitivity than Ovid-MEDLINE with comparable precision and NNR. This study highlights the importance of well-designed database-specific search strategies. Copyright © 2010 Elsevier Inc. All rights reserved.
Tests of a Fast Plastic Scintillator for High-Precision Half-Life Measurements
NASA Astrophysics Data System (ADS)
Laffoley, A. T.; Dunlop, R.; Finlay, P.; Leach, K. G.; Michetti-Wilson, J.; Rand, E. T.; Svensson, C. E.; Grinyer, G. F.; Thomas, J. C.; Ball, G.; Garnsworthy, A. B.; Hackman, G.; Orce, J. N.; Triambak, S.; Williams, S. J.; Andreoiu, C.; Cross, D.
2013-03-01
A fast plastic scintillator detector is evaluated for possible use in an ongoing program of high-precision half-life measurements of short-lived β emitters. Using data taken at TRIUMF's Isotope Separator and Accelerator Facility with a radioactive 26Na beam, a detailed investigation of potential systematic effects with this new detector setup is being performed. The technique will then be applied to other β-decay half-life measurements, including the superallowed Fermi β emitters 10C and 14O, and the T = 1/2 decay of 15O.
A systematic and efficient method to compute multi-loop master integrals
NASA Astrophysics Data System (ADS)
Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations with almost trivial boundary conditions. It can thus be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method not only achieves high-precision results but is also much faster than sector decomposition, the only other existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
Publication bias in dermatology systematic reviews and meta-analyses.
Atakpo, Paul; Vassar, Matt
2016-05-01
Systematic reviews and meta-analyses in dermatology provide high-level evidence for clinicians and policy makers that influences clinical decision making and treatment guidelines. One methodological problem with systematic reviews is the underrepresentation of unpublished studies, due in part to publication bias. Omission of statistically non-significant data from meta-analyses may result in overestimation of treatment effect sizes, which may lead to clinical consequences. Our goal was to assess whether systematic reviewers in dermatology evaluate and report publication bias. Further, we wanted to conduct our own evaluation of publication bias on meta-analyses that failed to do so. Our study considered systematic reviews and meta-analyses from ten dermatology journals from 2006 to 2016. A PubMed search was conducted, and all full-text articles that met our inclusion criteria were retrieved and coded by the primary author. 293 articles were included in our analysis. Additionally, we formally evaluated publication bias in meta-analyses that had failed to do so, using the trim-and-fill and cumulative meta-analysis by precision methods. Publication bias was mentioned in 107 articles (36.5%) and was formally evaluated in 64 articles (21.8%). Visual inspection of a funnel plot was the most common method of evaluating publication bias. Publication bias was present in 45 articles (15.3%), not present in 57 articles (19.5%), and not determined in 191 articles (65.2%). Using the trim-and-fill method, 7 meta-analyses (33.3%) showed evidence of publication bias. Although the trim-and-fill method found evidence of publication bias in only 7 meta-analyses, the cumulative meta-analysis by precision method found evidence of publication bias in 15 meta-analyses (71.4%). Many of the reviews in our study did not mention or evaluate publication bias. Further, of the 42 articles that stated they followed PRISMA reporting guidelines, 19 (45.2%) evaluated for publication bias.
In comparison to other studies, we found that systematic reviews in dermatology were less likely to evaluate for publication bias. Evaluating and reporting the likelihood of publication bias should be standard practice in systematic reviews when appropriate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
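Of the two formal methods this study applies, cumulative meta-analysis by precision is straightforward to sketch. The following is a minimal, hypothetical fixed-effect illustration (the effect sizes and standard errors are invented; real implementations also report confidence intervals at each step):

```python
def cumulative_meta_by_precision(effects_ses):
    """Fixed-effect cumulative meta-analysis ordered by precision.
    effects_ses: list of (effect, standard_error) pairs.
    Returns the pooled estimate after each study is added, starting
    from the most precise study. Systematic drift of the pooled
    estimate as imprecise studies enter is an informal signal of
    small-study effects such as publication bias."""
    ordered = sorted(effects_ses, key=lambda es: es[1])  # smallest SE first
    pooled, wsum, esum = [], 0.0, 0.0
    for effect, se in ordered:
        w = 1.0 / se ** 2        # inverse-variance weight
        wsum += w
        esum += w * effect
        pooled.append(esum / wsum)
    return pooled

# Hypothetical log odds ratios: the small, imprecise studies report
# larger effects, a classic funnel-asymmetry pattern.
studies = [(0.10, 0.05), (0.15, 0.10), (0.40, 0.25), (0.80, 0.40)]
trajectory = cumulative_meta_by_precision(studies)
# the pooled estimate drifts upward as imprecise studies are added
```

A flat trajectory suggests the small studies agree with the large ones; a monotone drift, as in this toy data, is the pattern the method flags.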
Semiempirical studies of atomic structure. Progress report, 1 July 1984-1 January 1985
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L.J.
1985-01-01
Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities have been and are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, much new information has become available since this program was begun in 1980. The purpose of the project is to perform needed measurements and to utilize the available data through parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences to provide predictions for large classes of quantities with a precision that is sharpened by subsequent measurements.
GOCE Precise Science Orbits for the Entire Mission and their Use for Gravity Field Recovery
NASA Astrophysics Data System (ADS)
Jäggi, Adrian; Bock, Heike; Meyer, Ulrich; Weigelt, Matthias
The Gravity field and steady-state Ocean Circulation Explorer (GOCE), ESA's first Earth Explorer Core Mission, was launched on March 17, 2009 into a sun-synchronous dusk-dawn orbit and re-entered into the Earth's atmosphere on November 11, 2013. It was equipped with a three-axis gravity gradiometer for high-resolution recovery of the Earth's gravity field, as well as with a 12-channel, dual-frequency Global Positioning System (GPS) receiver for precise orbit determination (POD), instrument time-tagging, and the determination of the long wavelength part of the Earth’s gravity field. A precise science orbit (PSO) product was provided during the entire mission by the GOCE High-level Processing Facility (HPF) from the GPS high-low Satellite-to-Satellite Tracking (hl-SST) data. We present the reduced-dynamic and kinematic PSO results for the entire mission period. Orbit comparisons and validations with independent Satellite Laser Ranging (SLR) measurements demonstrate the high quality of both orbit products being close to 2 cm 1-D RMS, but also reveal a correlation between solar activity, GPS data availability, and the quality of the orbits. We use the 1-sec kinematic positions of the GOCE PSO product for gravity field determination and present GPS-only solutions covering the entire mission period. The generated gravity field solutions reveal severe systematic errors centered along the geomagnetic equator, which may be traced back to the GPS carrier phase observations used for the kinematic orbit determination. The nature of the systematic errors is further investigated and reprocessed orbits free of systematic errors along the geomagnetic equator are derived. Eventually, the potential of recovering time variable signals from GOCE kinematic positions is assessed.
Determination of the pion-nucleon coupling constant and scattering lengths
NASA Astrophysics Data System (ADS)
Ericson, T. E.; Loiseau, B.; Thomas, A. W.
2002-07-01
We critically evaluate the isovector Goldberger-Miyazawa-Oehme (GMO) sum rule for forward πN scattering using the recent precision measurements of π⁻p and π⁻d scattering lengths from pionic atoms. We deduce the charged-pion-nucleon coupling constant, with careful attention to systematic and statistical uncertainties. This determination gives, directly from data, g²c(GMO)/4π = 14.11 ± 0.05 (statistical) ± 0.19 (systematic), or f²c/4π = 0.0783(11). This value is intermediate between that of indirect methods and the direct determination from backward np differential scattering cross sections. We also use the pionic atom data to deduce the coherent symmetric and antisymmetric sums of the pion-proton and pion-neutron scattering lengths with high precision, namely (aπ⁻p + aπ⁻n)/2 = [−12 ± 2 (statistical) ± 8 (systematic)] × 10⁻⁴ mπ⁻¹ and (aπ⁻p − aπ⁻n)/2 = [895 ± 3 (statistical) ± 13 (systematic)] × 10⁻⁴ mπ⁻¹. For the needs of the present analysis, we improve the theoretical description of the pion-deuteron scattering length.
Re-187 Os-187 Isotopic and Highly Siderophile Element Systematics of Group IVB Irons
NASA Technical Reports Server (NTRS)
Honesto, J.; McDonough, W. F.; Walker, R. J.; McCoy, T. J.; Ash, R. D.
2005-01-01
Study of the magmatic iron meteorite groups permits constraints to be placed on the chemical and isotopic compositions of parent bodies, and on the timing of, and the crystal-liquid fractionation processes involved in, the crystallization of asteroidal cores. Here we examine the Re-Os isotopic and trace elemental systematics of group IVB irons. Compared to most irons, the irons comprising this group are enriched in some of the most refractory siderophile elements, yet highly depleted in most volatile siderophile elements. These characteristics have been attributed to processes such as high-temperature condensation of precursor materials and oxidation in the parent body. Most recently, it has been suggested that both processes may be involved in the chemical complexity of the group. Here, high-precision isotopic data and highly siderophile element (HSE) concentrations are used to further examine these possible origins, and the crystallization history of the group. In addition, we have begun to assess the possibility of relating certain ungrouped irons with major groups via multi-element, trace element modeling. In a companion abstract, the isotopic and trace element systematics of the ungrouped iron Tishomingo are compared with the IVB irons.
Ayiku, Lynda; Levay, Paul; Hudson, Tom; Craven, Jenny; Barrett, Elizabeth; Finnegan, Amy; Adams, Rachel
2017-07-13
A validated geographic search filter for the retrieval of research about the United Kingdom (UK) from bibliographic databases had not previously been published. The aim was to develop and validate a geographic search filter to retrieve research about the UK from OVID medline with high recall and precision. Three gold standard sets of references were generated using the relative recall method. The sets contained references to studies about the UK which had informed National Institute for Health and Care Excellence (NICE) guidance. The first and second sets were used to develop and refine the medline UK filter. The third set was used to validate the filter. Recall, precision and number-needed-to-read (NNR) were calculated using a case study. The validated medline UK filter demonstrated 87.6% relative recall against the third gold standard set. In the case study, the medline UK filter demonstrated 100% recall, 11.4% precision and a NNR of nine. A validated geographic search filter to retrieve research about the UK with high recall and precision has been developed. The medline UK filter can be applied to systematic literature searches in OVID medline for topics with a UK focus. © 2017 Crown copyright. Health Information and Libraries Journal © 2017 Health Libraries Group. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
Observational evidence and strength of evidence domains: case examples
2014-01-01
Background Systematic reviews of healthcare interventions most often focus on randomized controlled trials (RCTs). However, certain circumstances warrant consideration of observational evidence, and such studies are increasingly being included as evidence in systematic reviews. Methods To illustrate the use of observational evidence, we present case examples of systematic reviews in which observational evidence was considered as well as case examples of individual observational studies, and how they demonstrate various strength of evidence domains in accordance with current Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) methods guidance. Results In the presented examples, observational evidence is used when RCTs are infeasible or raise ethical concerns, lack generalizability, or provide insufficient data. Individual study case examples highlight how observational evidence may fulfill required strength of evidence domains, such as study limitations (reduced risk of selection, detection, performance, and attrition); directness; consistency; precision; and reporting bias (publication, selective outcome reporting, and selective analysis reporting), as well as additional domains of dose-response association, plausible confounding that would decrease the observed effect, and strength of association (magnitude of effect). Conclusions The cases highlighted in this paper demonstrate how observational studies may provide moderate to (rarely) high strength evidence in systematic reviews. PMID:24758494
The application of proteomics in different aspects of hepatocellular carcinoma research.
Xing, Xiaohua; Liang, Dong; Huang, Yao; Zeng, Yongyi; Han, Xiao; Liu, Xiaolong; Liu, Jingfeng
2016-08-11
Hepatocellular carcinoma (HCC) is one of the most common malignant tumors and the second leading cause of cancer-related death worldwide. With the significant advances in high-throughput protein analysis techniques, proteomics offers an extremely useful and versatile analytical platform for biomedical research. In recent years, different proteomic strategies have been widely applied to various aspects of HCC studies, ranging from screening for early diagnostic and prognostic biomarkers to in-depth investigation of the underlying molecular mechanisms. In this review, we systematically summarize the current applications of proteomics in hepatocellular carcinoma research, discuss the challenges of applying proteomics to the study of clinical samples, and consider the possible applications of proteomics in precision medicine. We believe that this review will help readers become better acquainted with recent progress in clinical proteomics, especially in the field of hepatocellular carcinoma research. Copyright © 2016 Elsevier B.V. All rights reserved.
Exploiting the systematic review protocol for classification of medical abstracts.
Frunza, Oana; Inkpen, Diana; Matwin, Stan; Klement, William; O'Blenis, Peter
2011-01-01
To determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically whether the performance of automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers. The test collection is the data used in a large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As the machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest), but it is also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce the human workload. For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2% and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6% and a precision value of 17.8%.
The per-question method that combines classifiers following the specific protocol of the review leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review. Copyright © 2010 Elsevier B.V. All rights reserved.
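One plausible way to combine per-question classifier outputs is an OR rule that keeps an abstract if any question-specific classifier includes it. The paper tested four combination methods; the OR rule below is a toy illustration of why per-question combination favors recall, not the authors' implementation, and all classifier outputs are invented:

```python
def combine_or(predictions):
    """Combine per-question classifier outputs: an abstract is kept
    for human screening if ANY question-specific classifier includes
    it. This trades precision for the high recall a review needs."""
    return [any(row) for row in zip(*predictions)]

def recall_precision(predicted, actual):
    """Recall and precision for the 'include' class."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    return tp / sum(actual), tp / sum(predicted)

# Toy outputs of three hypothetical question-specific classifiers
# over six abstracts (True = include), with gold labels:
q1 = [True, False, False, True, False, False]
q2 = [False, True, False, True, False, True]
q3 = [True, False, True, False, False, False]
gold = [True, True, True, True, False, False]

combined = combine_or([q1, q2, q3])
r, p = recall_precision(combined, gold)
# combined recall reaches 1.0 here even though no single
# classifier finds all four relevant abstracts
```

Each individual classifier misses relevant abstracts, but their union does not, which mirrors the paper's finding that per-question combination lifts recall to a level acceptable for screening.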
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhou; Adams, Rachel M; Chourey, Karuna
2012-01-01
A variety of quantitative proteomics methods have been developed, including label-free, metabolic labeling, and isobaric chemical labeling using iTRAQ or TMT. Here, these methods were compared in terms of the depth of proteome coverage, quantification accuracy, precision, and reproducibility using a high-performance hybrid mass spectrometer, the LTQ Orbitrap Velos. Our results show that (1) the spectral counting method provides the deepest proteome coverage for identification, but its quantification performance is worse than that of labeling-based approaches, especially in quantification reproducibility; (2) metabolic labeling and isobaric chemical labeling are capable of accurate, precise, and reproducible quantification and provide deep proteome coverage for quantification, with isobaric chemical labeling surpassing metabolic labeling in quantification precision and reproducibility; and (3) iTRAQ and TMT perform similarly in all aspects compared in the current study using a CID-HCD dual scan configuration. Based on the unique advantages of each method, we provide guidance for selecting the appropriate method for a quantitative proteomics study.
Extended HFSE systematics of Apollo samples - wrenching further Secrets from the Lunar Mantle
NASA Astrophysics Data System (ADS)
Thiemens, M. M.; Sprung, P.; Munker, C.
2016-12-01
As Earth's intimate companion, the Moon provides a close extraterrestrial view on planetary differentiation. In turn, investigating chemical and isotopic compositions of lunar rocks for traces of a putative crystallizing Lunar Magma Ocean (LMO) provides a better understanding of the evolution and differentiation of infant planetary bodies. We expand on high-precision extended High Field Strength Element (HFSE) observations of Münker [1]. In detail, we investigate if the HFSE systematics of low- and high-Ti basalts, KREEPy basalts and breccias, soils, and ferroan anorthosites (FAN) are consistent with their formation from the LMO (FAN, KREEP) or mantle sources comprising mixtures of primary LMO products [2] (mare basalts). Of particular interest is the recently discovered dependence of HFSE partitioning on the Ti concentration of co-existing melts [3] and that of W partitioning on oxygen fugacity [3,4]. Our data form a positively correlated array in Zr/Hf vs. Nb/Ta space, similar to previous high-precision data [1] but unlike lower-precision data. The HFSE systematics of different rock types from the Apollo missions mostly form distinct groups. High-Ti and some Apollo 12 low-Ti mare basalts form the lower end of the array, while KREEPy samples form its upper end. Low Zr/Nb in most high-Ti mare basalts and the globally highest Hf/W confirm involvement of Ti-rich-oxide-bearing cumulates in high-Ti basalt formation [e.g., 1,2]. No global lunar trends exist for Hf/W vs. Zr/Nb. Overall, the composition of KREEPy samples agrees reasonably well with model KREEP compositions assuming a LMO below IW-1 [1,4]. The clearly distinct groupings observed for the various rock types and the lack of a global trend in Hf/W vs. Zr/Nb call for melting of distinct ultramafic sources [1]. The HFSE systematics of Apollo rocks tend to support a LMO scenario, setting the stage for more detailed petrogenetic modeling. 
Initial modeling suggests that the lunar mantle must possess residual metal to reconcile the HFSE systematics of Apollo rocks within an LMO-scenario, providing an alternative explanation for the very low abundances of HSE in the lunar crust [5].[1] Münker, C. (2010) GCA 74, 7340-7361. [2] Snyder et al. (1992) GCA 56, 3809-3823. [3] Leitzke et al. (in press) Chem. Geol. [4] Fonseca et al. (2014) EPSL 404, 1-13. [5] Day & Walker (2015) EPSL 423, 114-124
Semi-empirical studies of atomic structure. Progress report, 1 July 1982-1 February 1983
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L.J.
1983-01-01
A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast-ion-beam excitation with semi-empirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many-electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.
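The parametrized extrapolation along an isoelectronic sequence can be sketched as a low-order fit in nuclear charge Z: along such a sequence, level energies vary smoothly, roughly as (Z − σ)² for a screening constant σ. The energies and σ below are invented purely for illustration.

```python
import numpy as np

# Invented "measured" excitation energies (arbitrary units) for members of
# an isoelectronic sequence at nuclear charges Z; the smooth Z-dependence
# means a low-order polynomial captures the trend and can be extrapolated.
Z_known = np.array([10, 12, 14, 16, 18], dtype=float)
E_known = (Z_known - 3.2) ** 2 + 0.5          # synthetic data, sigma = 3.2

coeffs = np.polyfit(Z_known, E_known, deg=2)   # quadratic fit in Z
E_pred = np.polyval(coeffs, 26.0)              # extrapolate to Z = 26 (Fe)
print(f"extrapolated energy at Z=26: {E_pred:.2f}")
```

Because the synthetic data are exactly quadratic, the fit recovers the extrapolated value exactly; with real data, the residuals would set the precision of the prediction, which subsequent measurements then sharpen.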
Semiempirical studies of atomic structure. Progress report, 1 July 1983-1 June 1984
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L.J.
1984-01-01
A program of studies of the properties of the heavy and highly ionized atomic systems which often occur as contaminants in controlled fusion devices is continuing. The project combines experimental measurements by fast ion beam excitation with semiempirical data parametrizations to identify and exploit regularities in the properties of these very heavy and very highly ionized systems. The increasing use of spectroscopic line intensities as diagnostics for determining thermonuclear plasma temperatures and densities requires laboratory observation and analysis of such spectra, often to accuracies that exceed the capabilities of ab initio theoretical methods for these highly relativistic many-electron systems. Through the acquisition and systematization of empirical data, remarkably precise methods for predicting excitation energies, transition wavelengths, transition probabilities, level lifetimes, ionization potentials, core polarizabilities, and core penetrabilities are being developed and applied. Although the data base for heavy, highly ionized atoms is still sparse, parametrized extrapolations and interpolations along isoelectronic, homologous, and Rydberg sequences are providing predictions for large classes of quantities, with a precision that is sharpened by subsequent measurements.
Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju
2016-03-01
The current studies entail systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for estimation of ketoprofen. The analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished by isocratic, reversed-phase chromatography on a C-18 column with pH 6.8 phosphate buffer-methanol (50:50 v/v) as the mobile phase, at a flow rate of 1.0 mL/min and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated per International Conference on Harmonization guidelines, demonstrating high sensitivity and specificity, with linearity ranging between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated by a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD for optimizing the chromatographic conditions and developing a highly sensitive liquid chromatographic method for ketoprofen. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
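The precision figure quoted above is a relative standard deviation (RSD), the standard ICH repeatability metric. The sketch below computes it for invented replicate peak areas; the acceptance threshold of ~2% is a common convention, not a value from this paper.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%), the usual ICH precision metric."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical six replicate peak areas from repeat injections of one
# ketoprofen standard; an RSD below ~2% is conventionally taken as precise.
peak_areas = [152.1, 150.8, 151.9, 153.0, 151.2, 152.4]
rsd = rsd_percent(peak_areas)
print(f"RSD = {rsd:.2f}%")
```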
Unbiased methods for removing systematics from galaxy clustering measurements
NASA Astrophysics Data System (ADS)
Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.
2016-02-01
Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.
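The template subtraction step can be sketched in one dimension with NumPy: fit the template's amplitude to the observed map by least squares and remove the best-fit contribution. This is a simplified illustration with synthetic Gaussian maps, not the authors' spherical-harmonic analysis; note that the fitted amplitude also absorbs any chance correlation between the template and the true signal, which is the origin of the bias discussed above.

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 5000

signal = rng.normal(0.0, 1.0, npix)          # true clustering fluctuations
template = rng.normal(0.0, 1.0, npix)        # systematics template map
observed = signal + 0.3 * template           # contaminated observation

# Template subtraction: fit the template amplitude by least squares,
# then remove the best-fit contribution from the observed map.
alpha = template @ observed / (template @ template)
cleaned = observed - alpha * template

print(f"fitted amplitude: {alpha:.3f}")      # close to the injected 0.3
```

The cleaned map sits much closer to the true signal than the observed one, at the cost of the small residual term proportional to the chance signal-template correlation.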
Systematic review of scope and quality of electronic patient record data in primary care
Thiru, Krish; Hassey, Alan; Sullivan, Frank
2003-01-01
Objective To systematically review measures of data quality in electronic patient records (EPRs) in primary care. Design Systematic review of English language publications, 1980-2001. Data sources Bibliographic searches of medical databases, specialist medical informatics databases, conference proceedings, and institutional contacts. Study selection Studies selected according to a predefined framework for categorising review papers. Data extraction Reference standards and measurements used to judge quality. Results Bibliographic searches identified 4589 publications. After primary exclusions 174 articles were classified, 52 of which met the inclusion criteria for review. Selected studies were primarily descriptive surveys. Variability in methods prevented meta-analysis of results. Forty-eight publications were concerned with diagnostic data, 37 studies measured data quality, and 15 scoped EPR quality. Reliability of data was assessed with rate comparison. Measures of sensitivity were highly dependent on the element of EPR data being investigated, while the positive predictive value was consistently high, indicating good validity. Prescribing data were generally of better quality than diagnostic or lifestyle data. Conclusion The lack of standardised methods for assessment of quality of data in electronic patient records makes it difficult to compare results between studies. Studies should present data quality measures with clear numerators, denominators, and confidence intervals. Ambiguous terms such as “accuracy” should be avoided unless precisely defined. PMID:12750210
Working memory retrieval as a decision process
Pearson, Benjamin; Raškevičius, Julius; Bays, Paul M.; Pertzov, Yoni; Husain, Masud
2014-01-01
Working memory (WM) is a core cognitive process fundamental to human behavior, yet the mechanisms underlying it remain highly controversial. Here we provide a new framework for understanding retrieval of information from WM, conceptualizing it as a decision based on the quality of internal evidence. Recent findings have demonstrated that precision of WM decreases with memory load. If WM retrieval uses a decision process that depends on memory quality, systematic changes in response time distribution should occur as a function of WM precision. We asked participants to view sample arrays and, after a delay, report the direction of change in location or orientation of a probe. As WM precision deteriorated with increasing memory load, retrieval time increased systematically. Crucially, the shape of reaction time distributions was consistent with a linear accumulator decision process. Varying either task relevance of items or maintenance duration influenced memory precision, with corresponding shifts in retrieval time. These results provide strong support for a decision-making account of WM retrieval based on noisy storage of items. Furthermore, they show that encoding, maintenance, and retrieval in WM need not be considered as separate processes, but may instead be conceptually unified as operations on the same noise-limited, neural representation. PMID:24492597
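The linear accumulator account can be caricatured in a few lines of Python: evidence accumulates at a noisy rate until it crosses a threshold, so response time is threshold divided by rate, and lower memory precision is modelled as a lower mean rate. The parameter values are invented, and this is a toy simulation, not the authors' fitted model.

```python
import random

random.seed(1)

def simulate_rts(evidence_quality, n_trials=20000, threshold=1.0):
    """Linear accumulator caricature: evidence builds at a noisy rate until
    a threshold is crossed, so RT = threshold / rate. Lower memory precision
    is modelled as a lower mean accumulation rate."""
    rts = []
    for _ in range(n_trials):
        rate = random.gauss(evidence_quality, 0.2)
        if rate > 0.05:                      # discard non-terminating rates
            rts.append(threshold / rate)
    return sum(rts) / len(rts)

high_precision_rt = simulate_rts(evidence_quality=1.5)   # low memory load
low_precision_rt  = simulate_rts(evidence_quality=0.8)   # high memory load
print(f"mean RT (high precision): {high_precision_rt:.2f}")
print(f"mean RT (low precision):  {low_precision_rt:.2f}")
```

Two qualitative features of the data fall out of the toy model: mean RT rises as precision falls, and the RT distribution is right-skewed, as accumulator models predict.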
Giving cosmic redshift drift a whirl
NASA Astrophysics Data System (ADS)
Kim, Alex G.; Linder, Eric V.; Edelstein, Jerry; Erskine, David
2015-03-01
Redshift drift provides a direct kinematic measurement of cosmic acceleration, but it occurs with a characteristic time scale of a Hubble time. Thus redshift observations with a challenging precision of 10⁻⁹ require a 10 year time span to obtain a signal-to-noise of 1. We discuss theoretical and experimental approaches to address this challenge, potentially requiring less observer time and having greater immunity to common systematics. On the theoretical side we explore allowing the universe, rather than the observer, to provide long time spans; speculative methods include radial baryon acoustic oscillations, cosmic pulsars, and strongly lensed quasars. On the experimental side, we explore beating down the redshift precision using differential interferometric techniques, including externally dispersed interferometers and spatial heterodyne spectroscopy. Low-redshift emission line galaxies are identified as having high cosmology leverage and systematics control, with an 8 h exposure on a 10-m telescope (1000 h of exposure on a 40-m telescope) potentially capable of measuring the redshift of a galaxy to a precision of 10⁻⁸ (few ×10⁻¹⁰). Low-redshift redshift drift also has very strong complementarity with cosmic microwave background measurements, with the combination achieving a dark energy figure of merit of nearly 300 (1400) for 5% (1%) precision on drift.
Heterodyne range imaging as an alternative to photogrammetry
NASA Astrophysics Data System (ADS)
Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard
2007-01-01
Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.
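In amplitude-modulated range cameras of this kind, each pixel converts a measured modulation phase into a distance. A minimal sketch of that conversion follows, assuming an illustrative 30 MHz modulation frequency (not a value from this paper).

```python
import math

C = 2.998e8              # speed of light, m/s
F_MOD = 30e6             # assumed amplitude-modulation frequency, Hz

def phase_to_distance(phase_rad):
    """AMCW range imaging: light travels out and back, so distance is
    c * phase / (4 * pi * f_mod), modulo the ambiguity interval."""
    return C * phase_rad / (4 * math.pi * F_MOD)

ambiguity = C / (2 * F_MOD)                 # unambiguous range, ~5 m at 30 MHz
d = phase_to_distance(math.pi / 2)          # quarter-cycle phase shift
print(f"unambiguous range: {ambiguity:.3f} m")
print(f"distance for pi/2 phase: {d:.3f} m")
```

Since distance scales linearly with phase, sub-millimeter precision over a ~5 m ambiguity interval corresponds to resolving the phase to roughly one part in ten thousand of a cycle.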
2010-01-01
Background The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline. Methods A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop a sensitive, precise and balanced search strategy. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries). Results The sensitivities of the newly developed search strategies were almost 100% in all of the three test sets applied; precision ranged from 6.1% to 32.0%. PubMed's HSR queries were less sensitive (83.3% to 88.2%) than the new search strategies. Only minor differences in precision were found (5.0% to 32.0%). Conclusions As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity, which retrieves a large number of references, and high precision, which carries an increased risk of missing relevant references. 
More standardized terminology (e.g. by consistent use of the term "nurse staffing") could improve the precision of future searches in this field. Empirically selected search terms can help to develop effective search strategies. The high consistency between all test sets confirmed the validity of our approach. PMID:20731858
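The two metrics used throughout this record, sensitivity (recall) and precision of a search strategy, reduce to simple set arithmetic over record identifiers. The counts below are invented to illustrate the typical shape of the trade-off: near-complete sensitivity with single-digit precision.

```python
def search_metrics(retrieved, relevant):
    """Sensitivity (recall) and precision of a bibliographic search,
    computed from sets of record identifiers."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    sensitivity = len(hits) / len(relevant)
    precision = len(hits) / len(retrieved)
    return sensitivity, precision

# Hypothetical example: a broad filter retrieves 1000 records, of which 58
# belong to a 60-record gold standard of relevant references.
retrieved = set(range(1000))
relevant = set(range(942, 1002))            # 58 fall inside the retrieved set

sens, prec = search_metrics(retrieved, relevant)
print(f"sensitivity={sens:.1%} precision={prec:.1%}")
```

Here sensitivity is 96.7% while precision is only 5.8%: almost every relevant record is caught, but screening 1000 records is the price, which is exactly the choice described above.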
Simon, Michael; Hausner, Elke; Klaus, Susan F; Dunton, Nancy E
2010-08-23
The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline. A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop a sensitive, precise and balanced search strategy. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries). The sensitivities of the newly developed search strategies were almost 100% in all of the three test sets applied; precision ranged from 6.1% to 32.0%. PubMed's HSR queries were less sensitive (83.3% to 88.2%) than the new search strategies. Only minor differences in precision were found (5.0% to 32.0%). As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity, which retrieves a large number of references, and high precision, which carries an increased risk of missing relevant references. More standardized terminology (e.g. 
by consistent use of the term "nurse staffing") could improve the precision of future searches in this field. Empirically selected search terms can help to develop effective search strategies. The high consistency between all test sets confirmed the validity of our approach.
NASA Astrophysics Data System (ADS)
Brogaard, K.; Hansen, C. J.; Miglio, A.; Slumstrup, D.; Frandsen, S.; Jessen-Hansen, J.; Lund, M. N.; Bossini, D.; Thygesen, A.; Davies, G. R.; Chaplin, W. J.; Arentoft, T.; Bruntt, H.; Grundahl, F.; Handberg, R.
2018-05-01
We aim to establish and improve the accuracy level of asteroseismic estimates of mass, radius, and age of giant stars. This can be achieved by measuring independent, accurate, and precise masses, radii, effective temperatures and metallicities of long-period eclipsing binary stars with a red giant component that displays solar-like oscillations. We measured precise properties of the three eclipsing binary systems KIC 7037405, KIC 9540226, and KIC 9970396 and estimated their ages to be 5.3 ± 0.5, 3.1 ± 0.6, and 4.8 ± 0.5 Gyr. The measurements of the giant stars were compared to corresponding measurements of mass, radius, and age using asteroseismic scaling relations and grid modelling. We found that asteroseismic scaling relations without corrections to Δν systematically overestimate the masses of the three red giants by 11.7 per cent, 13.7 per cent, and 18.9 per cent, respectively. However, by applying theoretical correction factors fΔν according to Rodrigues et al. (2017), we reached general agreement between dynamical and asteroseismic mass estimates, and no indications of systematic differences at the precision level of the asteroseismic measurements. The larger sample investigated by Gaulme et al. (2016) showed a much more complicated situation, where some stars show agreement between the dynamical and corrected asteroseismic measures while others suggest significant overestimates of the asteroseismic measures. We found no simple explanation for this, but indications of several potential problems, some theoretical, others observational. Therefore, an extension of the present precision study to a larger sample of eclipsing systems is crucial for establishing and improving the accuracy of asteroseismology of giant stars.
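The scaling-relation mass and the effect of a Δν correction factor can be sketched directly. The solar reference values below are one common choice and the observables and fΔν = 0.97 are illustrative, not the paper's; the point is that the mass scales as fΔν⁴, so a correction factor a few per cent below unity lowers the mass by roughly the 10-20 per cent overestimates quoted above.

```python
def scaling_mass(nu_max, delta_nu, teff, f_dnu=1.0,
                 nu_max_sun=3090.0, delta_nu_sun=135.1, teff_sun=5777.0):
    """Asteroseismic scaling-relation mass (solar units) with an optional
    correction factor f_dnu applied to the large frequency separation.
    Solar reference values are one common choice, not the paper's."""
    return ((nu_max / nu_max_sun) ** 3
            * (delta_nu / (f_dnu * delta_nu_sun)) ** -4
            * (teff / teff_sun) ** 1.5)

# Hypothetical red-giant observables: nu_max (muHz), delta_nu (muHz), Teff (K).
m_raw = scaling_mass(30.0, 3.8, 4800.0)
m_corr = scaling_mass(30.0, 3.8, 4800.0, f_dnu=0.97)
print(f"uncorrected: {m_raw:.3f} Msun, corrected: {m_corr:.3f} Msun")
print(f"mass reduced by {(1 - m_corr / m_raw) * 100:.1f}%")
```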
A time projection chamber for high accuracy and precision fission cross-section measurements
Heffner, M.; Asner, D. M.; Baker, R. G.; ...
2014-05-22
The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC, inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This study provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.
NASA Astrophysics Data System (ADS)
Lee, H. W.; Lim, H. W.; Jeon, D. H.; Park, C. K.; Cho, H. S.; Seo, C. W.; Lee, D. Y.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Woo, T. H.; Oh, J. E.
2018-06-01
This study investigated the effectiveness of a new method for measuring the actual focal spot position of a diagnostic x-ray tube using a high-precision antiscatter grid and a digital x-ray detector, in which the grid magnification, which is directly related to the focal spot position, was determined from the Fourier spectrum of the acquired grid image. A systematic experiment was performed to demonstrate the viability of the proposed measurement method. The hardware system used in the experiment consisted of an x-ray tube run at 50 kVp and 1 mA, a flat-panel detector with a pixel size of 49.5 µm, and a high-precision carbon-interspaced grid with a strip density of 200 lines/inch. The results indicated that the focal spot of the x-ray tube (Jupiter 5000, Oxford Instruments) used in the experiment was located approximately 31.10 mm inside from the exit flange, in good agreement with the nominal value of 31.05 mm, which demonstrates the viability of the proposed measurement method. Thus, the proposed method can be utilized for system performance optimization in many x-ray imaging applications.
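The core of the method, recovering the grid magnification from the Fourier spectrum of the grid image, can be sketched with a synthetic one-dimensional detector profile. The pixel pitch and grid pitch match the hardware quoted above, but the simulated magnification of 1.05 and the sinusoidal profile are assumptions of this sketch, not values from the paper.

```python
import numpy as np

PIXEL = 0.0495          # detector pixel pitch, mm (as in the text)
P_GRID = 25.4 / 200.0   # grid period for 200 lines/inch, mm
N = 2048

# Simulate the detector profile of a grid shadow magnified by an unknown
# factor; in practice this profile comes from the acquired grid image.
m_true = 1.05
x = np.arange(N) * PIXEL
profile = 1 + 0.5 * np.sin(2 * np.pi * x / (m_true * P_GRID))

# Locate the grid frequency as the dominant non-DC Fourier peak, then
# recover the magnification from the detected grid period.
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(N, d=PIXEL)          # cycles per mm
f_peak = freqs[np.argmax(spectrum)]
m_est = 1.0 / (f_peak * P_GRID)              # detected period / grid period
print(f"estimated grid magnification: {m_est:.4f}")
```

Once the magnification M is known, the source-to-grid distance follows from similar triangles (e.g. source-to-grid = grid-to-detector / (M − 1)), which is what ties the Fourier peak to the focal spot position.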
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
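Matrix effects are commonly quantified with a post-extraction spike comparison: the analyte's response in blank matrix extract versus the same concentration in neat solvent. The peak areas below are invented for illustration.

```python
def matrix_effect_percent(area_post_spiked, area_neat):
    """Post-extraction spike comparison: analyte peak area spiked into blank
    matrix extract vs. the same concentration in neat solvent. Values below
    100% indicate ion suppression; above 100%, ion enhancement."""
    return 100.0 * area_post_spiked / area_neat

# Hypothetical peak areas for one analyte.
me = matrix_effect_percent(area_post_spiked=7.2e5, area_neat=9.0e5)
print(f"matrix effect: {me:.0f}%  ({100 - me:.0f}% ion suppression)")
```

Characterizing this percentage per analyte and per matrix lot is one of the control strategies the review describes for keeping accuracy and precision within acceptance limits.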
An empirical definition of clinical supervision.
Milne, Derek
2007-11-01
The growing recognition of clinical supervision as the basis for high-quality mental health services is apparent in policy, research and clinical practice, but an empirical definition is required to progress research and practice. A logical analysis was used to draft a working definition, and then a systematic review of 24 empirical studies of clinical supervision produced a best evidence synthesis, which was used to test and improve this definition. The logical analysis indicated that the most popular definition (Bernard & Goodyear, 1992) failed all four necessary tests of a good definition: precision, specification, operationalization and corroboration. The systematic review synthesis was then used to test the working definition, which passed these tests (with two amendments). These two complementary review approaches created a firmer basis for advancing research and practice.
Independent Component Analysis applied to Ground-based observations
NASA Astrophysics Data System (ADS)
Martins-Filho, Walter; Griffith, Caitlin; Pearson, Kyle; Waldmann, Ingo; Alvarez-Candal, Alvaro; Zellem, Robert Thomas
2018-01-01
Transit measurements of Jovian-sized exoplanetary atmospheres allow one to study the composition of exoplanets, largely independent of the planet’s temperature profile. However, measurements of hot-Jupiter transits must achieve a level of accuracy in the flux to determine the spectral modulation of the exoplanetary atmosphere. To accomplish this level of precision, we need to extract systematic errors, and, for ground-based measurements, the effects of Earth’s atmosphere, from the signal due to the exoplanet, which is several orders of magnitude smaller. The effects of the terrestrial atmosphere and some of the time-dependent systematic errors of ground-based transit measurements are treated mainly by dividing the host star by a reference star at each wavelength and time step of the transit. Recently, Independent Component Analysis (ICA) has been used to remove systematic effects from the raw data of space-based observations (Waldmann, 2014, 2012; Morello et al., 2016, 2015). ICA is a statistical method born from the ideas of the blind-source separation literature, which can be used to de-trend several independent source signals of a data set (Hyvarinen and Oja, 2000). This technique requires no additional prior knowledge of the data set. In addition, this technique has the advantage of requiring no reference star. Here we apply the ICA to ground-based photometry of the exoplanet XO-2b recorded by the 61” Kuiper Telescope and compare the results of the ICA to those of a previous analysis from Zellem et al. (2015), which does not use ICA. We also simulate the effects of various conditions (concerning the systematic errors, noise and the stability of the object on the detector) to determine the conditions under which an ICA can be used with high precision to extract the light curve of exoplanetary photometry measurements.
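The blind-source-separation idea behind ICA can be demonstrated with a compact FastICA implementation in NumPy. The two sources below (a sinusoid standing in for an astrophysical signal and a square wave standing in for a systematic trend) and the mixing matrix are invented; this is a generic FastICA sketch with a tanh nonlinearity, not the specific pipeline used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sin(2 * np.pi * 5 * t)                 # stand-in "transit-like" signal
s2 = np.sign(np.sin(2 * np.pi * 3 * t))        # stand-in systematic trend
S = np.c_[s1, s2]
X = S @ np.array([[1.0, 0.4], [0.6, 1.0]])     # unknown instrumental mixing

# Centre and whiten the observations.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Z = X @ (E @ np.diag(d ** -0.5) @ E.T).T

# FastICA fixed-point iterations with a tanh nonlinearity (deflation scheme).
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z @ w)
        w_new = (Z * g[:, None]).mean(axis=0) - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)     # decorrelate from found rows
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-12:
            w = w_new
            break
        w = w_new
    W[i] = w
recovered = Z @ W.T                            # estimated independent sources
```

One of the recovered components correlates almost perfectly with the injected sinusoid, illustrating how ICA can separate an astrophysical signal from a systematic trend without a reference star, up to the usual sign and scale ambiguities.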
Higher-Order Systematic Effects in the Muon Beam-Spin Dynamics for Muon g-2
NASA Astrophysics Data System (ADS)
Crnkovic, Jason; Brown, Hugh; Krouppa, Brandon; Metodiev, Eric; Morse, William; Semertzidis, Yannis; Tishchenko, Vladimir
2016-03-01
The BNL Muon g-2 Experiment (E821) produced a precision measurement of the muon anomalous magnetic moment, whereas the Fermilab Muon g-2 Experiment (E989) is an upgraded version of E821 with the goal of producing a measurement approximately 4 times more precise. Improving the precision requires a more detailed understanding of the experimental systematic effects, and so three higher-order systematic effects in the muon beam-spin dynamics have recently been found and estimated for E821. The beamline systematic effect originates from muon production in beamline spectrometers, as well as from muons traversing beamline bending magnets. The kicker systematic effect comes from a combination of the variation in time spent inside the muon storage ring across a muon bunch and the temporal structure of the storage ring kicker waveform. Finally, the detector systematic effect arises from a combination of the energy-dependent muon equilibrium orbit in the storage ring, muon decay electron drift time, and decay electron detector acceptance effects. Brookhaven Natl Lab.
High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring
NASA Astrophysics Data System (ADS)
Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.
2018-04-01
We present an astrometric method for calculating the positions of orbiters in the GEO ring with high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope, which are part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈25 m at the mean distance of the GEO ring, and are thus good-quality results.
The accuracy and precision of radiostereometric analysis in upper limb arthroplasty.
Ten Brinke, Bart; Beumer, Annechien; Koenraadt, Koen L M; Eygendaal, Denise; Kraan, Gerald A; Mathijssen, Nina M C
2017-06-01
Background and purpose - Radiostereometric analysis (RSA) is an accurate method for measurement of early migration of implants. Since a relation has been shown between early migration and future loosening of total knee and hip prostheses, RSA plays an important role in the development and evaluation of prostheses. However, there have been few RSA studies of the upper limb, and the value of RSA of the upper limb is not yet clear. We therefore performed a systematic review to investigate the accuracy and precision of RSA of the upper limb. Patients and methods - PRISMA guidelines were followed and the protocol for this review was published online at PROSPERO under registration number CRD42016042014. A systematic search of the literature was performed in the databases Embase, Medline, Cochrane, Web of Science, Scopus, Cinahl, and Google Scholar on April 25, 2015 based on the keywords radiostereometric analysis, shoulder prosthesis, elbow prosthesis, wrist prosthesis, trapeziometacarpal joint prosthesis, humerus, ulna, radius, carpus. Articles concerning RSA for the analysis of early migration of prostheses of the upper limb were included. Quality assessment was performed using the MINORS score, Downs and Black checklist, and the ISO RSA standard. Results - 23 studies were included. Precision values were in the 0.06-0.88 mm and 0.05-10.7° range for the shoulder, the 0.05-0.34 mm and 0.16-0.76° range for the elbow, and the 0.16-1.83 mm and 11-124° range for the TMC joint. Accuracy data from marker- and model-based RSA were not reported in the studies included. Interpretation - RSA is a highly precise method for measurement of early migration of orthopedic implants in the upper limb. However, the precision of rotation measurement is poor in some components. Challenges with RSA in the upper limb include the symmetrical shape of prostheses and the limited size of surrounding bone, leading to over-projection of the markers by the prosthesis.
We recommend higher adherence to RSA guidelines and encourage investigators to publish long-term follow-up RSA studies.
High accuracy transit photometry of the planet OGLE-TR-113b with a new deconvolution-based method
NASA Astrophysics Data System (ADS)
Gillon, M.; Pont, F.; Moutou, C.; Bouchy, F.; Courbin, F.; Sohy, S.; Magain, P.
2006-11-01
A high accuracy photometry algorithm is needed to take full advantage of the potential of the transit method for the characterization of exoplanets, especially in deep crowded fields. It has to reduce to the lowest possible level the negative influence of systematic effects on the photometric accuracy. It should also be able to cope with a high level of crowding and with large-scale variations of the spatial resolution from one image to another. A recent deconvolution-based photometry algorithm fulfills all these requirements, and it also increases the resolution of astronomical images, which is an important advantage for the detection of blends and the discrimination of false positives in transit photometry. We made some changes to this algorithm to optimize it for transit photometry and used it to reduce NTT/SUSI2 observations of two transits of OGLE-TR-113b. This reduction has led to two very high precision transit light curves with a low level of systematic residuals, used together with former photometric and spectroscopic measurements to derive new stellar and planetary parameters in excellent agreement with previous ones, but significantly more precise.
Confirmation of radial velocity variability in Arcturus
NASA Technical Reports Server (NTRS)
Cochran, William D.
1988-01-01
The paper presents results of high-precision measurements of radial-velocity variations in Alpha Boo. Significant radial-velocity variability is detected well in excess of the random and systematic measurement errors. The radial velocity varies by an amount greater than 200 m/sec with a period of around 2 days.
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.
Alves Junior, Carlos As; Mocellin, Michel C; Gonçalves, Eliane C Andrade; Silva, Diego As; Trindade, Erasmo Bsm
2017-09-01
We analyzed the discriminatory capacity of anthropometric indicators for body fat in children and adolescents. This systematic review and meta-analysis included cross-sectional and clinical studies comprising children and adolescents aged 2-19 y that tested the discriminatory value for body fat measured by anthropometric methods or indexes generated by anthropometric variables compared with precision methods in the diagnosis of body fat [dual-energy X-ray absorptiometry (DXA), computed tomography, air displacement plethysmography (ADP), or MRI]. Five studies met the eligibility criteria and presented high methodologic quality. The anthropometric indicators that had high discriminatory power to identify high body fat were body mass index (BMI) in males [area under the curve (AUC): 0.975] and females (AUC: 0.947), waist circumference (WC) in males (AUC: 0.975) and females (AUC: 0.959), and the waist-to-height ratio (WTHR) in males (AUC: 0.897) and females (AUC: 0.914). BMI, WC, and WTHR can be used by health professionals to assess body fat in children and adolescents. © 2017 American Society for Nutrition.
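As a hedged illustration of what the reported AUC values mean, the sketch below computes the area under the ROC curve for an invented BMI-like indicator against a reference classification; the group sizes, means, and spreads are hypothetical and are not the review's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# 1 = high body fat by a reference method (e.g. DXA), 0 = normal (toy labels)
reference = np.concatenate([np.ones(50), np.zeros(50)])

# Hypothetical BMI values: higher on average in the high-body-fat group
bmi = np.concatenate([rng.normal(27.0, 2.0, 50), rng.normal(20.0, 2.0, 50)])

# AUC near 1.0 means the indicator ranks almost all high-fat subjects
# above normal subjects, i.e. strong discriminatory power
auc = roc_auc_score(reference, bmi)
```

An AUC of roughly 0.95-0.98, as reported for BMI and WC in the review, corresponds to an indicator whose overlap between groups is small.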
NASA Astrophysics Data System (ADS)
Peters, Andre; Durner, Wolfgang; Schrader, Frederik; Groh, Jannis; Pütz, Thomas
2017-04-01
Weighing lysimeters are known to be the best means for a precise and unbiased measurement of water fluxes at the interface between the soil-plant system and the atmosphere. The measured data need to be filtered to separate evapotranspiration (ET) and precipitation (P) from noise. Such filter routines typically apply two steps: (i) a low-pass filter, like a moving average, which is used to smooth noisy data, and (ii) a threshold filter to separate significant from insignificant mass changes. Recent developments of these filters have revealed and solved many problems regarding bias in the data processing. A remaining problem is that each change in flow direction is accompanied by a systematic flow underestimation due to the threshold scheme. In this contribution we show and analyze this systematic effect and propose a heuristic solution by introducing a so-called snap routine. The routine is calibrated and tested with synthetic flux data and applied to real data from a precision lysimeter for a 10-month period. We show that the absolute systematic effect is independent of the magnitude of a certain flux event. Thus, for small events, like dew or rime formation, the relative error is highest and can be of the same order of magnitude as the flux itself. The heuristic snap routine effectively overcomes these problems and yields an almost unbiased representation of the real signal.
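A minimal sketch of the two-step filter described above, with moving-average smoothing followed by a significance threshold. The function name, window length, and threshold value are illustrative assumptions, not the authors' routine; the threshold step is what discards sub-threshold mass changes and thereby causes the systematic underestimation the snap routine corrects:

```python
import numpy as np

def filter_mass_changes(mass, window=5, delta=0.01):
    """Toy two-step lysimeter filter (illustrative, not the published routine)."""
    # (i) low-pass filter: centered moving average to suppress noise
    kernel = np.ones(window) / window
    smoothed = np.convolve(mass, kernel, mode="same")
    # (ii) threshold filter: keep only step-to-step changes >= delta,
    # set the rest to zero (this is where small real fluxes are lost)
    steps = np.diff(smoothed)
    return np.where(np.abs(steps) >= delta, steps, 0.0)
```

For a small signal such as dew formation, every step can fall below `delta` and the filtered flux is zero, which illustrates why the relative error is largest for small events.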
NASA Astrophysics Data System (ADS)
Wang, Yubing; Yin, Weihong; Han, Qin; Yang, Xiaohong; Ye, Han; Lü, Qianqian; Yin, Dongdong
2017-04-01
Graphene field-effect transistors have been intensively studied. However, in order to fabricate devices with more complicated structures, such as the integration with waveguides and other two-dimensional materials, we need to transfer the exfoliated graphene samples to a target position. Due to the small area of exfoliated graphene and its random distribution, the transfer method requires rather high precision. In this paper, we systematically study a method to selectively transfer mechanically exfoliated graphene samples to a target position with a precision of sub-micrometer. To characterize the doping level of this method, we transfer graphene flakes to pre-patterned metal electrodes, forming graphene field-effect transistors. The hole doping of graphene is calculated to be 2.16 × 10^12 cm^-2. In addition, we fabricate a waveguide-integrated multilayer graphene photodetector to demonstrate the viability and accuracy of this method. A photocurrent as high as 0.4 μA is obtained, corresponding to a photoresponsivity of 0.48 mA/W. The device performs uniformly in nine illumination cycles. Project supported by the National Key Research and Development Program of China (No. 2016YFB0402404), the High-Tech Research and Development Program of China (Nos. 2013AA031401, 2015AA016902, 2015AA016904), and the National Natural Science Foundation of China (Nos. 61674136, 61176053, 61274069, 61435002).
Systematic Molecular Phenotyping: A Path Toward Precision Emergency Medicine?
Limkakeng, Alexander T; Monte, Andrew A; Kabrhel, Christopher; Puskarich, Michael; Heitsch, Laura; Tsalik, Ephraim L; Shapiro, Nathan I
2016-10-01
Precision medicine is an emerging approach to disease treatment and prevention that considers variability in patient genes, environment, and lifestyle. However, little has been written about how such research impacts emergency care. Recent advances in analytical techniques have made it possible to characterize patients in a more comprehensive and sophisticated fashion at the molecular level, promising highly individualized diagnosis and treatment. Among these techniques are various systematic molecular phenotyping analyses (e.g., genomics, transcriptomics, proteomics, and metabolomics). Although a number of emergency physicians use such techniques in their research, widespread discussion of these approaches has been lacking in the emergency care literature and many emergency physicians may be unfamiliar with them. In this article, we briefly review the underpinnings of such studies, note how they already impact acute care, discuss areas in which they might soon be applied, and identify challenges in translation to the emergency department (ED). While such techniques hold much promise, it is unclear whether the obstacles to translating their findings to the ED will be overcome in the near future. Such obstacles include validation, cost, turnaround time, user interface, decision support, standardization, and adoption by end-users. © 2016 by the Society for Academic Emergency Medicine.
Systematic Molecular Phenotyping: A Path Towards Precision Emergency Medicine?
Limkakeng, Alexander T.; Monte, Andrew; Kabrhel, Christopher; Puskarich, Michael; Heitsch, Laura; Tsalik, Ephraim L.; Shapiro, Nathan I.
2016-01-01
Precision medicine is an emerging approach to disease treatment and prevention that considers variability in patient genes, environment, and lifestyle. However, little has been written about how such research impacts emergency care. Recent advances in analytical techniques have made it possible to characterize patients in a more comprehensive and sophisticated fashion at the molecular level, promising highly individualized diagnosis and treatment. Among these techniques are various systematic molecular phenotyping analyses (e.g., genomics, transcriptomics, proteomics, and metabolomics). Although a number of emergency physicians use such techniques in their research, widespread discussion of these approaches has been lacking in the emergency care literature and many emergency physicians may be unfamiliar with them. In this article, we briefly review the underpinnings of such studies, note how they already impact acute care, discuss areas in which they might soon be applied, and identify challenges in translation to the emergency department. While such techniques hold much promise, it is unclear whether the obstacles to translating their findings to the emergency department will be overcome in the near future. Such obstacles include validation, cost, turnaround time, user interface, decision support, standardization, and adoption by end users. PMID:27288269
Designing Successful Next-Generation Instruments to Detect the Epoch of Reionization
NASA Astrophysics Data System (ADS)
Thyagarajan, Nithyanandan; Hydrogen Epoch of Reionization Array (HERA) team, Murchison Widefield Array (MWA) team
2018-01-01
The Epoch of Reionization (EoR) signifies a period of intense evolution of the Inter-Galactic Medium (IGM) in the early Universe caused by the first generations of stars and galaxies, wherein they turned the neutral IGM to be completely ionized by redshift ≥ 6. This important epoch is poorly explored to date. Measurement of redshifted 21 cm line from neutral Hydrogen during the EoR is promising to provide the most direct constraints of this epoch. Ongoing experiments to detect redshifted 21 cm power spectrum during reionization, including the Murchison Widefield Array (MWA), Precision Array for Probing the Epoch of Reionization (PAPER), and the Low Frequency Array (LOFAR), appear to be severely affected by bright foregrounds and unaccounted instrumental systematics. For example, the spectral structure introduced by wide-field effects, aperture shapes and angular power patterns of the antennas, electrical and geometrical reflections in the antennas and electrical paths, and antenna position errors can be major limiting factors. These mimic the 21 cm signal and severely degrade the instrument performance. It is imperative for the next-generation of experiments to eliminate these systematics at their source via robust instrument design. I will discuss a generic framework to set cosmologically motivated antenna performance specifications and design strategies using the Precision Radio Interferometry Simulator (PRISim) -- a high-precision tool that I have developed for simulations of foregrounds and the instrument transfer function intended primarily for 21 cm EoR studies, but also broadly applicable to interferometer-based intensity mapping experiments. The Hydrogen Epoch of Reionization Array (HERA), designed in-part based on this framework, is expected to detect the 21 cm signal with high significance. I will present this framework and the simulations, and their potential for designing upcoming radio instruments such as HERA and the Square Kilometre Array (SKA).
MICKENAUTSCH, Steffen; YENGOPAL, Veerasamy
2013-01-01
Objective: To demonstrate the application of the modified Ottawa method by establishing the update need of a systematic review focused on the caries-preventive effect of GIC versus resin pit and fissure sealants; to answer the question of whether the existing conclusions of this systematic review are still current; and to establish whether a new update of this systematic review was needed. Methods: Application of the modified Ottawa method. Application date: April/May 2012. Results: Four signals aligned with the criteria of the modified Ottawa method were identified. The content of these signals suggests that higher precision of the current systematic review results might be achieved if an update of the current review were conducted at this point in time. However, these signals further indicate that such a systematic review update, despite its higher precision, would only confirm the existing review conclusion that no statistically significant difference exists in the caries-preventive effect of GIC and resin-based fissure sealants. Conclusion: This study demonstrated the modified Ottawa method to be an effective tool for establishing the update need of a systematic review. In addition, it was established that the conclusions of the systematic review in relation to the caries-preventive effect of GIC versus resin-based fissure sealants are still current, and that no update of this systematic review was warranted at the date of application. PMID:24212996
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
The mira-titan universe. Precision predictions for dark energy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Bingham, Derek; Lawrence, Earl
2016-03-28
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
Systematic Error Mitigation for the PIXIE Instrument
NASA Technical Reports Server (NTRS)
Kogut, Alan; Fixsen, Dale J.; Nagler, Peter; Tucker, Gregory
2016-01-01
The Primordial Inflation Explorer (PIXIE) uses a nulling Fourier Transform Spectrometer to measure the absolute intensity and linear polarization of the cosmic microwave background and diffuse astrophysical foregrounds. PIXIE will search for the signature of primordial inflation and will characterize distortions from a blackbody spectrum, both to a precision of a few parts per billion. Rigorous control of potential instrumental effects is required to take advantage of the raw sensitivity. PIXIE employs a highly symmetric design using multiple differential nulling to reduce the instrumental signature to negligible levels. We discuss the systematic error budget and mitigation strategies for the PIXIE mission.
NASA Astrophysics Data System (ADS)
DesOrmeau, J. W.; Ibanez-Mejia, M.; Lafuente Valverde, B.; Eddy, M. P.; Trail, D.; Wang, Y.
2017-12-01
The accessory mineral baddeleyite (monoclinic ZrO2) has great potential as a high-precision U-Pb geochronometer in silica-undersaturated rocks. However, due to a lack of understanding of alpha-recoil damage to the crystal structure and the influences on chemical diffusion properties and closed-system behavior, limitations still exist. Studies have shown phase transformations in baddeleyite (monoclinic to tetragonal) resulting from radiation damage due to ion bombardment [e.g., 1], but the effects of self-irradiation on the baddeleyite crystal structure over geologic timescales remain poorly understood. A recent study reported confocal laser-Raman spectra suggesting the presence of a tetragonal phase in a Phalaborwa baddeleyite [2], which has crucial implications for Pb mobility and high-precision U-Pb results. To better understand the physical effects of self-irradiation in baddeleyite over geologic timescales, samples of various ages (ca. 2060-32 Ma) and calculated alpha-doses (0.001-0.901 x 10^16 α/mg) were imaged by cathodoluminescence (CL) and subsequently analyzed via electron backscatter diffraction (EBSD) and confocal laser-Raman spectroscopy. Synthetic baddeleyite grown in a one-atmosphere furnace was also analyzed to allow a comparison of EBSD and Raman results of the pure monoclinic phase and the natural unknown samples. Whole grain EBSD maps (n=75) of baddeleyite from the Phalaborwa and Kovdor carbonatites, the Ammänpelto sill, and the Duluth, Ogden and Yinmawanshan gabbros show complex twinning that is loosely correlated to CL zoning and identification of the monoclinic phase only, suggesting no evidence for phase transformations within grains of contrasting age and amounts of alpha-doses. Preliminary Raman results show evidence of systematic shifts in the vibrational modes of Phalaborwa baddeleyite with respect to the younger crystals, which may be the result of radiation-induced damage accumulation rather than phase transformation.
These results aim to better characterize the poorly known effects of radiation damage in natural baddeleyite and assist in improving methods for high-precision U-Pb dating of mafic systems on Earth and other planetary bodies. [1] Valdez et al. (2008), J. Nucl. Mater. 381, pp. 259-266. [2] Schaltegger and Davies (2017), RIMG 83, pp. 297-328.
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao
2017-12-01
Combining high-precision numerical analysis methods with optimization algorithms to systematically explore a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time of a high-precision analysis. By these means, the contradiction between precision and efficiency is resolved effectively. Based on parametric geometry modeling, mesh generation, and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiments (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the underwater glider with the optimized flying-wing structure increases by 9.1%.
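The optimization step can be sketched as follows. A cheap analytic function stands in for the CFD-trained Gaussian-kernel surrogate, and the PSO coefficients (inertia 0.7, cognitive and social weights 1.5) are generic textbook values, not those of the paper:

```python
import numpy as np

def surrogate(x):
    # Stand-in for the trained surrogate: a drag-like objective with
    # its minimum at design point (1, 2); evaluating it is cheap,
    # unlike the CFD analysis it replaces
    return (x[..., 0] - 1.0) ** 2 + (x[..., 1] - 2.0) ** 2

def pso(objective, n_particles=30, n_iters=200, seed=0):
    """Minimal Particle Swarm Optimization over a 2-D design space."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                  # each particle's best position
    pbest_val = objective(pos)
    gbest = pbest[np.argmin(pbest_val)].copy()  # swarm's best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

best = pso(surrogate)  # converges near the design optimum (1, 2)
```

Because each PSO iteration evaluates the objective for the whole swarm, substituting a surrogate for the CFD solver is what makes the thousands of required evaluations affordable.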
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
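The chromatic-error mechanism can be illustrated with a toy synthetic-photometry sketch: a magnitude is an integral of the source SED through the system throughput, so the same throughput perturbation shifts blue and red sources by different amounts. All curves below are invented toy functions, not DES bandpasses:

```python
import numpy as np

wave = np.linspace(4000.0, 6000.0, 500)  # wavelength grid, Angstrom

def synth_mag(sed, throughput):
    # Toy synthetic magnitude from the photon-weighted mean flux
    # through the band (arbitrary zeropoint)
    flux = np.sum(sed * throughput * wave) / np.sum(throughput * wave)
    return -2.5 * np.log10(flux)

nominal = np.exp(-0.5 * ((wave - 5000.0) / 400.0) ** 2)  # toy bandpass
shifted = np.exp(-0.5 * ((wave - 5050.0) / 400.0) ** 2)  # perturbed throughput

blue_sed = (wave / 5000.0) ** -2.0  # blue power-law source
red_sed = (wave / 5000.0) ** 2.0    # red power-law source

# The magnitude shift induced by the throughput change is color dependent:
# a gray zeropoint correction cannot remove it for both sources at once
d_blue = synth_mag(blue_sed, shifted) - synth_mag(blue_sed, nominal)
d_red = synth_mag(red_sed, shifted) - synth_mag(red_sed, nominal)
```

Shifting the band redward dims the blue source and brightens the red one, so the two magnitude offsets have opposite signs; this is the color-dependent systematic that the frame-by-frame zeropoint calibration misses.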
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahadevan, Suvrath; Halverson, Samuel; Ramsey, Lawrence
2014-05-01
Modal noise in optical fibers imposes limits on the signal-to-noise ratio (S/N) and velocity precision achievable with the next generation of astronomical spectrographs. This is an increasingly pressing problem for precision radial velocity spectrographs in the near-infrared (NIR) and optical that require both high stability of the observed line profiles and high S/N. Many of these spectrographs plan to use highly coherent emission-line calibration sources like laser frequency combs and Fabry-Perot etalons to achieve precision sufficient to detect terrestrial-mass planets. These high-precision calibration sources often use single-mode fibers or highly coherent sources. Coupling light from single-mode fibers to multi-mode fibers leads to only a very low number of modes being excited, thereby exacerbating the modal noise measured by the spectrograph. We present a commercial off-the-shelf solution that significantly mitigates modal noise at all optical and NIR wavelengths, and which can be applied to spectrograph calibration systems. Our solution uses an integrating sphere in conjunction with a diffuser that is moved rapidly using electrostrictive polymers, and is generally superior to most tested forms of mechanical fiber agitation. We demonstrate a high level of modal noise reduction with a narrow bandwidth 1550 nm laser. Our relatively inexpensive solution immediately enables spectrographs to take advantage of the innate precision of bright state-of-the-art calibration sources by removing a major source of systematic noise.
Selective Influences of Precision and Power Grips on Speech Categorization.
Tiainen, Mikko; Tiippana, Kaisa; Vainio, Martti; Peromaa, Tarja; Komeilipoor, Naeem; Vainio, Lari
2016-01-01
Recent studies have shown that articulatory gestures are systematically associated with specific manual grip actions. Here we show that executing such actions can influence performance on a speech-categorization task. Participants watched and/or listened to speech stimuli while executing either a power or a precision grip. Grip performance influenced the syllable categorization by increasing the proportion of responses of the syllable congruent with the executed grip (power grip-[ke] and precision grip-[te]). Two follow-up experiments indicated that the effect was based on action-induced bias in selecting the syllable.
Printing Proteins as Microarrays for High-Throughput Function Determination
NASA Astrophysics Data System (ADS)
MacBeath, Gavin; Schreiber, Stuart L.
2000-09-01
Systematic efforts are currently under way to construct defined sets of cloned genes for high-throughput expression and purification of recombinant proteins. To facilitate subsequent studies of protein function, we have developed miniaturized assays that accommodate extremely low sample volumes and enable the rapid, simultaneous processing of thousands of proteins. A high-precision robot designed to manufacture complementary DNA microarrays was used to spot proteins onto chemically derivatized glass slides at extremely high spatial densities. The proteins attached covalently to the slide surface yet retained their ability to interact specifically with other proteins, or with small molecules, in solution. Three applications for protein microarrays were demonstrated: screening for protein-protein interactions, identifying the substrates of protein kinases, and identifying the protein targets of small molecules.
Neutron Decay with PERC: a Progress Report
NASA Astrophysics Data System (ADS)
Konrad, G.; Abele, H.; Beck, M.; Drescher, C.; Dubbers, D.; Erhart, J.; Fillunger, H.; Gösselsberger, C.; Heil, W.; Horvath, M.; Jericha, E.; Klauser, C.; Klenke, J.; Märkisch, B.; Maix, R. K.; Mest, H.; Nowak, S.; Rebrova, N.; Roick, C.; Sauerzopf, C.; Schmidt, U.; Soldner, T.; Wang, X.; Zimmer, O.; Perc Collaboration
2012-02-01
The PERC collaboration will perform high-precision measurements of angular correlations in neutron beta decay at the beam facility MEPHISTO of the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich, Germany. The new beam station PERC, a clean, bright, and versatile source of neutron decay products, is designed to improve the sensitivity of neutron decay studies by one order of magnitude. The charged decay products are collected by a strong longitudinal magnetic field directly from inside a neutron guide. This combination provides the highest phase space density of decay products. A magnetic mirror serves to perform precise cuts in phase space, reducing related systematic errors. The new instrument PERC is under development by an international collaboration. The physics motivation, sensitivity, and applications of PERC as well as the status of the design and preliminary results on uncertainties in proton spectroscopy are presented in this paper.
NASA Astrophysics Data System (ADS)
Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric
2018-02-01
The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least-squares fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
An Open-Source Galaxy Redshift Survey Simulator for next-generation Large Scale Structure Surveys
NASA Astrophysics Data System (ADS)
Seijak, Uros
Galaxy redshift surveys produce three-dimensional maps of the galaxy distribution. On large scales these maps trace the underlying matter fluctuations in a relatively simple manner, so that the properties of the primordial fluctuations along with the overall expansion history and growth of perturbations can be extracted. The BAO standard ruler method to measure the expansion history of the universe using galaxy redshift surveys is thought to be robust to observational artifacts and understood theoretically with high precision. These same surveys can offer a host of additional information, including a measurement of the growth rate of large scale structure through redshift space distortions, the possibility of measuring the sum of neutrino masses, tighter constraints on the expansion history through the Alcock-Paczynski effect, and constraints on the scale-dependence and non-Gaussianity of the primordial fluctuations. Extracting this broadband clustering information hinges on both our ability to minimize and subtract observational systematics to the observed galaxy power spectrum, and our ability to model the broadband behavior of the observed galaxy power spectrum with exquisite precision. Rapid development on both fronts is required to capitalize on WFIRST's data set. We propose to develop an open-source computational toolbox that will propel development in both areas by connecting large scale structure modeling and instrument and survey modeling with the statistical inference process. We will use the proposed simulator to both tailor perturbation theory and fully non-linear models of the broadband clustering of WFIRST galaxies and discover novel observables in the non-linear regime that are robust to observational systematics and able to distinguish between a wide range of spatial and dynamic biasing models for the WFIRST galaxy redshift survey sources. 
We have demonstrated the utility of this approach in a pilot study of the SDSS-III BOSS galaxies, in which we improved the redshift space distortion growth rate measurement precision by a factor of 2.5 using customized clustering statistics in the non-linear regime that were immunized against observational systematics. We look forward to addressing the unique challenges of modeling and empirically characterizing the WFIRST galaxies and observational systematics.
Constraining the mass–richness relationship of redMaPPer clusters with angular clustering
Baxter, Eric J.; Rozo, Eduardo; Jain, Bhuvnesh; ...
2016-08-04
The potential of using cluster clustering for calibrating the mass–richness relation of galaxy clusters has been recognized theoretically for over a decade. In this paper, we demonstrate the feasibility of this technique to achieve high-precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis, we significantly improve the statistical precision of our mass constraints. The amplitude of the mass–richness relation is constrained to 7 per cent statistical precision by our analysis. However, the error budget is systematics dominated, reaching a 19 per cent total error that is dominated by theoretical uncertainty in the bias–mass relation for dark matter haloes. We confirm the result from Miyatake et al. that the clustering amplitude of redMaPPer clusters depends on galaxy concentration as defined therein, and we provide additional evidence that this dependence cannot be sourced by mass dependences: some other effect must account for the observed variation in clustering amplitude with galaxy concentration. Assuming that the observed dependence of redMaPPer clustering on galaxy concentration is a form of assembly bias, we find that such effects introduce a systematic error on the amplitude of the mass–richness relation that is comparable to the error bar from statistical noise. Finally, the results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.
Penning trap mass spectrometry Q-value determinations for highly forbidden β-decays
NASA Astrophysics Data System (ADS)
Sandler, Rachel; Bollen, Georg; Eibach, Martin; Gamage, Nadeesha; Gulyuz, Kerim; Hamaker, Alec; Izzo, Chris; Kandegedara, Rathnayake; Redshaw, Matt; Ringle, Ryan; Valverde, Adrian; Yandow, Isaac; Low Energy Beam Ion Trap Team
2017-09-01
Over the last several decades, extremely sensitive, ultra-low-background beta and gamma detection techniques have been developed. These techniques have enabled the observation of very rare processes, such as the highly forbidden beta decays of 113Cd, 50V, and 138La. Half-life measurements of highly forbidden beta decays provide a testing ground for theoretical nuclear models, and the comparison of calculated and measured energy spectra could enable a determination of the values of the weak coupling constants. Precision Q-value measurements also allow for systematic tests of the beta-particle detection techniques. We will present the results and current status of Q-value determinations for highly forbidden beta decays. The Q-values, the mass differences between parent and daughter nuclides, are measured using the high-precision Penning trap mass spectrometer LEBIT at the National Superconducting Cyclotron Laboratory.
NASA Astrophysics Data System (ADS)
Redshaw, Matthew
This dissertation describes high precision measurements of atomic masses by measuring the cyclotron frequency of ions trapped singly, or in pairs, in a precision, cryogenic Penning trap. By building on techniques developed at MIT for measuring the cyclotron frequency of single trapped ions, the atomic masses of 84,86Kr and 129,132,136Xe have been measured to less than a part in 10^10 fractional precision. By developing a new technique for measuring the cyclotron frequency ratio of a pair of simultaneously trapped ions, the atomic masses of 28Si, 31P and 32S have been measured to 2 or 3 parts in 10^11. This new technique has also been used to measure the dipole moment of PH+. During the course of these measurements, two significant, but previously unsuspected sources of systematic error were discovered, characterized and eliminated. Extensive tests for other sources of systematic error were performed and are described in detail. The mass measurements presented here provide a significant increase in precision over previous values for these masses, by factors of 3 to 700. The results have a broad range of physics applications: the mass of 136Xe is important for searches for neutrinoless double-beta decay; the mass of 28Si is relevant to the re-definition of the artifact kilogram in terms of an atomic mass standard; the masses of 84,86Kr and 129,132,136Xe provide convenient reference masses for less precise mass spectrometers in diverse fields such as nuclear physics and chemistry; and the dipole moment of PH+ provides a test of molecular structure calculations.
High precision tungsten isotope analysis using MC-ICP-MS and application for terrestrial samples
NASA Astrophysics Data System (ADS)
Suzuki, K.; Takamasa, A.
2017-12-01
Tungsten has five isotopes (M = 180, 182, 183, 184, 186), and 182W is a radiogenic isotope produced by β-decay of 182Hf. Its half-life is short (8.9 m.y.), and 182W has been investigated to understand the geochemical evolution of the early Earth. Both Hf and W are highly refractory elements. As Hf is a lithophile and W is a siderophile element, the 182Hf-182W system can place constraints on metal-silicate (core-mantle) differentiation, especially in the early Earth, because of the large Hf/W fractionation between core and mantle and the short half-life. Improvements in the analytical techniques of W isotope analysis have led to findings of W isotope anomalies (mostly positive) in old komatiites (2.4 - 3.8 Ga) and young volcanic rocks (the 12 Ma Ontong Java Plateau and 6 Ma Baffin Bay). In our study, we performed high-precision W isotope ratio measurements with MC-ICP-MS (Thermo Co. Ltd., NEPTUNE PLUS). We have measured the W standard solution (SRM 3163) and obtained the isotopic compositions with a precision of ± 5 ppm. However, the standard solution, when separated by cation or anion exchange resin, shows a systematic 183W/184W drift of up to -5 ppm. This phenomenon was also reported by Willbold et al. (2011). Therefore, to correct for the isotopic fractionation of the samples, we used a standard solution processed by the same separation method as the samples. We will present the data of terrestrial samples obtained by the technique developed in this study.
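The fractionation correction mentioned above is commonly done in MC-ICP-MS work with the exponential mass-bias law. The sketch below is one generic version of that correction, not necessarily the authors' exact procedure; the integer masses and ratio values in the usage are illustrative only (real analyses use exact atomic masses).

```python
import math

def exp_mass_bias_correct(r_meas, r_norm_meas, r_norm_true,
                          m_num, m_den, m_norm_num, m_norm_den):
    """Correct a measured isotope ratio for instrumental mass bias using the
    exponential law: the bias exponent beta is derived from a normalizing
    ratio whose true value is known, then applied to the ratio of interest."""
    beta = math.log(r_norm_true / r_norm_meas) / math.log(m_norm_num / m_norm_den)
    return r_meas * (m_num / m_den) ** beta
```

For example, a measured 182W/184W ratio can be corrected by deriving beta from the measured and accepted 186W/184W ratios. As the abstract notes, column chemistry can itself shift ratios, which is why a standard processed through the same separation as the samples is used for the correction.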
Bundle Block Adjustment of Airborne Three-Line Array Imagery Based on Rotation Angles
Zhang, Yongjun; Zheng, Maoteng; Huang, Xu; Xiong, Jinxin
2014-01-01
In the midst of the rapid developments in electronic instruments and remote sensing technologies, airborne three-line array sensors and their applications are being widely promoted and plentiful research related to data processing and high precision geo-referencing technologies is under way. The exterior orientation parameters (EOPs), which are measured by the integrated positioning and orientation system (POS) of airborne three-line sensors, however, have inevitable systematic errors, so the level of precision of direct geo-referencing is not sufficiently accurate for surveying and mapping applications. Consequently, a few ground control points are necessary to refine the exterior orientation parameters, and this paper will discuss bundle block adjustment models based on the systematic error compensation and the orientation image, considering the principle of an image sensor and the characteristics of the integrated POS. Unlike the models available in the literature, which mainly use a quaternion to represent the rotation matrix of exterior orientation, three rotation angles are directly used in order to effectively model and eliminate the systematic errors of the POS observations. Very good experimental results have been achieved with several real datasets that verify the correctness and effectiveness of the proposed adjustment models. PMID:24811075
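The core idea of parameterizing the exterior orientation directly with three rotation angles, rather than a quaternion, amounts to composing three elementary rotations. The sketch below uses one common photogrammetric convention (rotations about x, y, z); the paper's exact angle order and sign conventions may differ.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix from three angles, R = Rx(omega) @ Ry(phi) @ Rz(kappa).
    One common photogrammetric convention; others permute the axis order."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    rx = np.array([[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rz = np.array([[ck, -sk, 0.0], [sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return rx @ ry @ rz
```

Working with the angles directly lets angle-dependent systematic errors in the POS observations be modeled and compensated as explicit terms in the bundle adjustment, rather than indirectly through quaternion components.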
Karystianis, George; Thayer, Kristina; Wolfe, Mary; Tsafnat, Guy
2017-06-01
Most data extraction efforts in epidemiology are focused on obtaining targeted information from clinical trials. In contrast, limited research has been conducted on the identification of information from observational studies, a major source of human evidence in many fields, including environmental health. The recognition of key epidemiological information (e.g., exposures) through text mining techniques can assist in the automation of systematic reviews and other evidence summaries. We designed and applied a knowledge-driven, rule-based approach to identify targeted information (study design, participant population, exposure, outcome, confounding factors, and the country where the study was conducted) from abstracts of epidemiological studies included in several systematic reviews of environmental health exposures. The rules were based on common syntactical patterns observed in text and are thus not specific to any systematic review. To validate the general applicability of our approach, we compared the data extracted using our approach against hand curation for 35 epidemiological study abstracts manually selected for inclusion in two systematic reviews. The resulting F-score, precision, and recall ranged from 70% to 98%, 81% to 100%, and 54% to 97%, respectively. The highest precision was observed for exposure, outcome and population (100%), while recall was best for exposure and study design with 97% and 89%, respectively. The lowest recall was observed for the population (54%), which also had the lowest F-score (70%). The performance of our text-mining approach demonstrates encouraging results for the identification of targeted information from observational epidemiological study abstracts related to environmental exposures.
We have demonstrated that rules based on generic syntactic patterns in one corpus can be applied to other observational study designs by simply interchanging the dictionaries that identify certain characteristics (i.e., outcomes, exposures). At the document level, the recognised information can assist in the selection and categorization of studies included in a systematic review.
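The reported precision, recall, and F-score follow their standard definitions; a minimal helper makes the relationships explicit. The counts in the usage line are hypothetical, not taken from the study.

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive, and
    false-negative counts of extracted items."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# Hypothetical: 90 correct extractions, 10 spurious, 10 missed
p, r, f = prf(90, 10, 10)  # precision = recall = 0.9
```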
Correcting systematic bias and instrument measurement drift with mzRefinery
Gibbons, Bryson C.; Chambers, Matthew C.; Monroe, Matthew E.; ...
2015-08-04
Systematic bias in mass measurement adversely affects data quality and negates the advantages of high-precision instruments. We introduce the mzRefinery tool into the ProteoWizard package for calibration of mass spectrometry data files. Using confident peptide spectrum matches, three different calibration methods are explored and the optimal transform function is chosen. After calibration, systematic bias is removed and the mass measurement errors are centered at zero ppm. Because it is part of the ProteoWizard package, mzRefinery can read and write a wide variety of file formats. The mzRefinery tool is part of msConvert and is available with the ProteoWizard open-source package at http://proteowizard.sourceforge.net/
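The notion of mass errors "centered at zero ppm" can be made concrete with a toy recalibration. This is a bare global-shift sketch, not mzRefinery's actual procedure (which selects among several calibration transform functions); the m/z values in the usage are invented.

```python
import statistics

def ppm_error(measured_mz, theoretical_mz):
    """Signed mass measurement error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def recalibrate(mz_values, errors_ppm):
    """Remove the systematic component of the error by subtracting the
    median ppm error from every observed m/z."""
    shift = statistics.median(errors_ppm)
    return [mz * (1 - shift / 1e6) for mz in mz_values], shift
```

After such a correction, the error distribution is re-centered at zero ppm; the residual scatter reflects the instrument's random error.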
Production and detection of atomic hexadecapole at Earth's magnetic field.
Acosta, V M; Auzinsh, M; Gawlik, W; Grisins, P; Higbie, J M; Jackson Kimball, D F; Krzemien, L; Ledbetter, M P; Pustelny, S; Rochester, S M; Yashchuk, V V; Budker, D
2008-07-21
Optical magnetometers measure magnetic fields with extremely high precision and without cryogenics. However, at geomagnetic fields, important for applications from landmine removal to archaeology, they suffer from nonlinear Zeeman splitting, leading to systematic dependence on sensor orientation. We present experimental results on a method of eliminating this systematic error, using the hexadecapole atomic polarization moment. In particular, we demonstrate selective production of the atomic hexadecapole moment at Earth's magnetic field and verify its immunity to nonlinear Zeeman splitting. This technique promises to eliminate directional errors in all-optical atomic magnetometers, potentially improving their measurement accuracy by several orders of magnitude.
Regulation of Conduction Time along Axons
Seidl, Armin H.
2013-01-01
Timely delivery of information is essential for proper function of the nervous system. Precise regulation of nerve conduction velocity is needed for correct exertion of motor skills, sensory integration and cognitive functions. In vertebrates, the rapid transmission of signals along nerve fibers is made possible by the myelination of axons and the resulting saltatory conduction in between nodes of Ranvier. Myelin is a specialization of glia cells and is provided by oligodendrocytes in the central nervous system. Myelination not only maximizes conduction velocity, but also provides a means to systematically regulate conduction times in the nervous system. Systematic regulation of conduction velocity along axons, and thus systematic regulation of conduction time in between neural areas, is a common occurrence in the nervous system. To date, little is understood about the mechanism that underlies systematic conduction velocity regulation and conduction time synchrony. Node assembly, internode distance (node spacing) and axon diameter - all parameters determining the speed of signal propagation along axons - are controlled by myelinating glia. Therefore, an interaction between glial cells and neurons has been suggested. This review summarizes examples of neural systems in which conduction velocity is regulated by anatomical variations along axons. While functional implications in these systems are not always clear, recent studies in the auditory system of birds and mammals present examples of conduction velocity regulation in systems with high temporal precision and a defined biological function. Together these findings suggest an active process that shapes the interaction between axons and myelinating glia to control conduction velocity along axons. Future studies involving these systems may provide further insight into how specific conduction times in the brain are established and maintained in development. 
Throughout the text, conduction velocity is used for the speed of signal propagation, i.e. the speed at which an action potential travels. Conduction time refers to the time it takes for a specific signal to travel from its origin to its target, i.e. neuronal cell body to axonal terminal. PMID:23820043
Selva, Anna; Solà, Ivan; Zhang, Yuan; Pardo-Hernandez, Hector; Haynes, R Brian; Martínez García, Laura; Navarro, Tamara; Schünemann, Holger; Alonso-Coello, Pablo
2017-08-30
Identifying scientific literature addressing patients' views and preferences is complex due to the wide range of studies that can be informative and the poor indexing of this evidence. Given the lack of guidance, we developed a search strategy to retrieve this type of evidence. We assembled an initial list of terms from several sources, including the terms and indexing of topic-related studies, the methods research literature, and other relevant projects and systematic reviews. We used the relative recall approach, evaluating the capacity of the designed search strategy to retrieve studies included in relevant systematic reviews on the topic. We implemented the final version of the search strategy in practice when conducting systematic reviews and guidelines, and calculated the search's precision and the number of references needed to read (NNR). The initial version of the search strategy had a relative recall of 87.4% (132 out of 151 studies). We then added some additional terms from the studies not initially identified, and re-tested this improved version against the studies included in a new set of systematic reviews, reaching a relative recall of 85.8% (151 out of 176 studies, 95% CI 79.9 to 90.2). This final version of the strategy includes two sets of terms related to two domains: "Patient Preferences and Decision Making" and "Health State Utilities Values". When we used the search strategy for the development of systematic reviews and clinical guidelines, we obtained low precision values (ranging from 2% to 5%) and NNRs from 20 to 50. This search strategy fills an important research gap in this field. It will help systematic reviewers, clinical guideline developers, and policy-makers to retrieve published research on patients' views and preferences. In turn, this will facilitate the inclusion of this critical aspect when formulating health care decisions, including recommendations.
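The two summary statistics used above are simple ratios, and the figures reported in the abstract drop straight in.

```python
def relative_recall(retrieved_included, total_included):
    """Fraction of the gold-standard (included) studies that the search
    strategy retrieved."""
    return retrieved_included / total_included

def number_needed_to_read(precision):
    """References that must be screened per relevant record found
    (NNR = 1 / precision)."""
    return 1 / precision
```

With the abstract's numbers, relative_recall(151, 176) is about 0.858, matching the reported 85.8%, and precisions of 0.05 and 0.02 give NNRs of 20 and 50.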
Influence of various water quality sampling strategies on load estimates for small streams
Robertson, Dale M.; Roerish, Eric D.
1999-01-01
Extensive streamflow and water-quality data from eight small streams were systematically subsampled to represent various water-quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression with daily average streamflow was imprecise regardless of the sampling strategy used; for the most effective strategy, median absolute errors were ∼30% relative to the loads estimated with an integration method and all available data. The most effective sampling strategy depends on the length of the study. For 1-year studies, fixed-period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed-period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high-flow samples, typically collected to help define the relation between high streamflow and high loads, resulted in imprecise, overestimated annual loads when these samples were consistently collected early in high-flow events.
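A minimal version of the regression (rating-curve) approach evaluated above can be sketched as follows: fit a log-log relation between load and flow on the sampled days, then apply it to every day's mean flow and sum. This is a deliberately simplified sketch; it omits the retransformation bias correction that production load estimators apply, and the power-law example data are invented.

```python
import numpy as np

def annual_load_regression(sample_flow, sample_conc, daily_flow):
    """Rating-curve load estimate: fit ln(load) = a + b*ln(flow) on the
    sampled days, apply the fit to daily average flows, and sum.
    (No retransformation bias correction, for brevity.)"""
    sample_flow = np.asarray(sample_flow, dtype=float)
    daily_flow = np.asarray(daily_flow, dtype=float)
    sample_load = sample_flow * np.asarray(sample_conc, dtype=float)
    b, a = np.polyfit(np.log(sample_flow), np.log(sample_load), 1)
    return float(np.exp(a + b * np.log(daily_flow)).sum())

# Invented power-law data: conc = 0.1 * sqrt(flow), so load = 0.1 * flow**1.5
q = np.arange(1.0, 11.0)
est = annual_load_regression(q, 0.1 * np.sqrt(q), np.linspace(0.5, 20.0, 365))
```

On small flashy streams, the weakness of this approach is exactly what the study documents: a handful of samples rarely pins down the high-flow end of the rating curve, so annual loads are imprecise however the samples are scheduled.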
Submillisecond fireball timing using de Bruijn timecodes
NASA Astrophysics Data System (ADS)
Howie, Robert M.; Paxman, Jonathan; Bland, Philip A.; Towner, Martin C.; Sansom, Eleanor K.; Devillepoix, Hadrien A. R.
2017-08-01
Long-exposure fireball photographs have been used to systematically record meteoroid trajectories, calculate heliocentric orbits, and determine meteorite fall positions since the mid-20th century. Periodic shuttering is used to determine meteoroid velocity, but up until this point, a separate method of precisely determining the arrival time of a meteoroid was required. We show it is possible to encode precise arrival times directly into the meteor image by driving the periodic shutter according to a particular pattern—a de Bruijn sequence—and eliminate the need for a separate subsystem to record absolute fireball timing. The Desert Fireball Network has implemented this approach using a microcontroller driven electro-optic shutter synchronized with GNSS UTC time to create small, simple, and cost-effective high-precision fireball observatories with submillisecond timing accuracy.
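A de Bruijn sequence B(k, n) contains every length-n string over a k-symbol alphabet exactly once as a cyclic substring, which is what lets any observed window of shutter states pin down an absolute position in the timecode. The standard Lyndon-word (FKM) construction generates one:

```python
def de_bruijn(k, n):
    """Return a de Bruijn sequence B(k, n) as a list of symbols 0..k-1,
    built by concatenating Lyndon words (the classic FKM construction)."""
    sequence = []
    a = [0] * (k * n)

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence
```

For a binary shutter pattern, de_bruijn(2, 3) yields an 8-symbol sequence in which every 3-bit window is unique, so any 3 consecutive shutter states observed in a fireball trail identify an absolute slot in the timecode (the Desert Fireball Network's actual sequence parameters are not specified here).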
Hetley, Richard; Dosher, Barbara Anne; Lu, Zhong-Lin
2014-01-01
Attention precues improve the performance of perceptual tasks in many but not all circumstances. These spatial attention effects may depend upon display set size or workload, and have been variously attributed to external noise filtering, stimulus enhancement, contrast gain, or response gain, or to uncertainty or other decision effects. In this study, we document systematically different effects of spatial attention in low- and high-precision judgments, with and without external noise, and in different set sizes in order to contribute to the development of a taxonomy of spatial attention. An elaborated perceptual template model (ePTM) provides an integrated account of a complex set of effects of spatial attention with just two attention factors: a set-size dependent exclusion or filtering of external noise and a narrowing of the perceptual template to focus on the signal stimulus. These results are related to the previous literature by classifying the judgment precision and presence of external noise masks in those experiments, suggesting a taxonomy of spatially cued attention in discrimination accuracy. PMID:24939234
A Precise Measurement of the Deuteron Elastic Structure Function A(Q²)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Honegger, Andrian
1999-12-07
During summer 1997, experiment 394-018 measured the deuteron tensor polarization in D(e,e'd⃗) scattering in Hall C at Jefferson Laboratory. In a momentum transfer range between 0.66 and 1.8 (GeV/c)², with slight changes in the experimental setup, the collaboration performed six precision measurements of the deuteron structure function A(Q²) in elastic D(e,e'd) scattering. Scattered electrons and recoil deuterons were detected in coincidence in the High Momentum Spectrometer and the recoil polarimeter POLDER, respectively. At every kinematic setting, H(e,e') data were taken to study systematic effects of the measurement. These new precise measurements resolve discrepancies between older data sets and put significant constraints on existing models of the deuteron electromagnetic structure. This work was supported by the Swiss National Science Foundation, the French Centre National de la Recherche Scientifique and the Commissariat à l'Energie Atomique, the U.S. Department of Energy and the National Science Foundation, and the K.C. Wong Foundation.
Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander
2011-01-01
This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
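Of the benchmarked algorithms, the centroid detector is the simplest to state: a thresholded center-of-mass over the reflected spectrum. The sketch below is a generic version (the 0.5 threshold default is an assumption, not the paper's setting); it also illustrates why an asymmetric FBG reflection profile produces the systematic errors noted above, since only a symmetric profile is recovered exactly.

```python
import math

def centroid_peak(wavelengths, intensities, threshold=0.5):
    """Estimate the Bragg peak wavelength as the intensity-weighted mean of
    samples above a fraction of the maximum. Symmetric profiles are recovered
    exactly; asymmetric profiles bias the estimate."""
    peak = max(intensities)
    num = den = 0.0
    for w, i in zip(wavelengths, intensities):
        if i >= threshold * peak:
            num += w * i
            den += i
    return num / den
```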
Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J
2015-01-01
Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
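The precision gain from informative priors can be seen in the simplest conjugate setting. The sketch below is a generic beta-binomial example with invented counts and an invented informative prior, not the paper's tree-mortality models:

```python
def beta_posterior(deaths, survivors, prior_a=1.0, prior_b=1.0):
    """Posterior mean and variance of a mortality probability under a
    Beta(prior_a, prior_b) prior; the defaults give a vague uniform prior."""
    a = prior_a + deaths
    b = prior_b + survivors
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Same data under a vague prior and a (hypothetical) informative prior
data = (3, 47)  # 3 deaths among 50 trees
vague_mean, vague_var = beta_posterior(*data)
inf_mean, inf_var = beta_posterior(*data, prior_a=6.0, prior_b=94.0)
```

The informative posterior always has the smaller variance (greater precision); whether its mean is more accurate depends on how well the prior matches the truth, mirroring the mixed accuracy results reported above.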
Yokoo, Takeshi; Serai, Suraj D; Pirasteh, Ali; Bashir, Mustafa R; Hamilton, Gavin; Hernando, Diego; Hu, Houchun H; Hetterich, Holger; Kühn, Jens-Peter; Kukuk, Guido M; Loomba, Rohit; Middleton, Michael S; Obuchowski, Nancy A; Song, Ji Soo; Tang, An; Wu, Xinhuai; Reeder, Scott B; Sirlin, Claude B
2018-02-01
Purpose To determine the linearity, bias, and precision of hepatic proton density fat fraction (PDFF) measurements by using magnetic resonance (MR) imaging across different field strengths, imager manufacturers, and reconstruction methods. Materials and Methods This meta-analysis was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search identified studies that evaluated the linearity and/or bias of hepatic PDFF measurements by using MR imaging (hereafter, MR imaging-PDFF) against PDFF measurements by using colocalized MR spectroscopy (hereafter, MR spectroscopy-PDFF) or the precision of MR imaging-PDFF. The quality of each study was evaluated by using the Quality Assessment of Studies of Diagnostic Accuracy 2 tool. De-identified original data sets from the selected studies were pooled. Linearity was evaluated by using linear regression between MR imaging-PDFF and MR spectroscopy-PDFF measurements. Bias, defined as the mean difference between MR imaging-PDFF and MR spectroscopy-PDFF measurements, was evaluated by using Bland-Altman analysis. Precision, defined as the agreement between repeated MR imaging-PDFF measurements, was evaluated by using a linear mixed-effects model, with field strength, imager manufacturer, reconstruction method, and region of interest as random effects. Results Twenty-three studies (1679 participants) were selected for linearity and bias analyses and 11 studies (425 participants) were selected for precision analyses. MR imaging-PDFF was linear with MR spectroscopy-PDFF (R² = 0.96). Regression slope (0.97; P < .001) and mean Bland-Altman bias (-0.13%; 95% limits of agreement: -3.95%, 3.40%) indicated minimal underestimation by using MR imaging-PDFF. MR imaging-PDFF was precise at the region-of-interest level, with repeatability and reproducibility coefficients of 2.99% and 4.12%, respectively.
Field strength, imager manufacturer, and reconstruction method each had minimal effects on reproducibility. Conclusion MR imaging-PDFF has excellent linearity, bias, and precision across different field strengths, imager manufacturers, and reconstruction methods. © RSNA, 2017 Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on October 2, 2017.
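The bias and limits-of-agreement figures quoted in the record above come from Bland-Altman analysis: bias is the mean paired difference, and the 95% limits of agreement are bias ± 1.96 × SD of the differences. A minimal sketch of that computation; the paired PDFF values below are illustrative, not the study's data.

```python
import statistics

def bland_altman(method_a, method_b):
    # Bias = mean of the paired differences; 95% limits of agreement
    # span bias +/- 1.96 standard deviations of those differences.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

mri_pdff = [5.1, 10.2, 15.0, 20.3, 30.1]  # hypothetical MR imaging-PDFF (%)
mrs_pdff = [5.4, 10.0, 15.5, 20.1, 30.6]  # hypothetical MR spectroscopy-PDFF (%)
bias, loa_low, loa_high = bland_altman(mri_pdff, mrs_pdff)
```

A negative bias, as in the pooled study result (-0.13%), indicates the first method reads slightly lower on average than the second.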
Emura, Fabian; Gralnek, Ian; Baron, Todd H
2013-01-01
Despite extensive worldwide use of standard esophagogastroduodenoscopy (EGD) examinations, gastric cancer (GC) is one of the most common forms of cancer and ranks as the most common malignant tumor in East Asia, Eastern Europe and parts of Latin America. Current limitations of using nonsystematic examination during standard EGD could be at least partially responsible for the low incidence of early GC diagnosis in countries with a high prevalence of the disease. Originally proposed by Emura et al., systematic alphanumeric-coded endoscopy (SACE) is a novel method that facilitates complete examination of the upper GI tract based on sequential systematic overlapping photo-documentation using an endoluminal alphanumeric-coded nomenclature comprising eight regions and 28 areas covering the entire upper GI surface. For precise localization of normal or abnormal areas, SACE incorporates a simple coordinate system based on the identification of certain natural axes, walls, curvatures and anatomical endoluminal landmarks. Effectiveness of SACE was recently demonstrated in a screening study that diagnosed early GC at a frequency of 0.30% (2/650) in healthy, average-risk volunteer subjects. Such a novel approach, if uniformly implemented worldwide, could significantly change the way we practice upper endoscopy in our lifetimes.
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss Raman duration and frequency step-size-dependent magnetic field measurement uncertainty, present vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention."
Purification of functionalized DNA origami nanostructures.
Shaw, Alan; Benson, Erik; Högberg, Björn
2015-05-26
The high programmability of DNA origami has provided tools for precise manipulation of matter at the nanoscale. This manipulation of matter opens up the possibility to arrange functional elements for a diverse range of applications that utilize the nanometer precision provided by these structures. However, the realization of functionalized DNA origami still suffers from imperfect production methods, in particular in the purification step, where excess material is separated from the desired functionalized DNA origami. In this article we demonstrate and optimize two purification methods that have not previously been applied to DNA origami. In addition, we provide a systematic study comparing the purification efficacy of these and five other commonly used purification methods. Three types of functionalized DNA origami were used as model systems in this study. DNA origami was patterned with either small molecules, antibodies, or larger proteins. With the results of our work we aim to provide a guideline in quality fabrication of various types of functionalized DNA origami and to provide a route for scalable production of these promising tools.
NASA Astrophysics Data System (ADS)
Peters, Andre; Groh, Jannis; Schrader, Frederik; Durner, Wolfgang; Vereecken, Harry; Pütz, Thomas
2017-06-01
Weighing lysimeters are considered to be the best means for a precise measurement of water fluxes at the interface between the soil-plant system and the atmosphere. Any decrease of the net mass of the lysimeter can be interpreted as evapotranspiration (ET), any increase as precipitation (P). However, the measured raw data need to be filtered to separate real mass changes from noise. Such filter routines typically apply two steps: (i) a low pass filter, like moving average, which smooths noisy data, and (ii) a threshold filter that separates significant from insignificant mass changes. Recent developments of these filters have identified and solved some problems regarding bias in the data processing. A remaining problem is that each change in flow direction is accompanied with a systematic flow underestimation due to the threshold scheme. In this contribution, we analyze this systematic effect and show that the absolute underestimation is independent of the magnitude of a flux event. Thus, for small events, like dew or rime formation, the relative error is high and can reach the same magnitude as the flux itself. We develop a heuristic solution to the problem by introducing a so-called "snap routine". The routine is calibrated and tested with synthetic flux data and applied to real measurements obtained with a precision lysimeter for a 10-month period. The heuristic snap routine effectively overcomes these problems and yields an almost unbiased representation of the real signal.
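The two-step filter described above (a low-pass moving average, then a threshold separating significant mass changes from noise) can be sketched as follows. The window size, threshold, and mass series are illustrative assumptions; the authors' "snap routine" itself is not reproduced here. The sketch also exhibits the systematic effect the paper analyzes: small changes below the threshold are zeroed out, underestimating small flux events.

```python
def moving_average(series, window=3):
    # Centered moving average; the window shrinks at the boundaries.
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def threshold_changes(series, threshold=0.05):
    # Keep only mass changes whose magnitude exceeds the threshold;
    # smaller changes are treated as noise. This is the step that
    # systematically underestimates small events such as dew formation.
    changes = []
    for prev, cur in zip(series, series[1:]):
        delta = cur - prev
        changes.append(delta if abs(delta) > threshold else 0.0)
    return changes

mass = [100.00, 100.01, 100.20, 100.19, 99.90]  # hypothetical lysimeter mass (kg)
smoothed = moving_average(mass)
fluxes = threshold_changes(smoothed)
```

Positive entries in `fluxes` would be read as precipitation, negative entries as evapotranspiration, and the zeroed entries are exactly the small signals the paper's snap routine is designed to recover.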
De-Trending K2 Exoplanet Targets for High Spacecraft Motion
NASA Astrophysics Data System (ADS)
Saunders, Nicholas; Luger, Rodrigo; Barnes, Rory
2018-01-01
After the failure of two reaction wheels, the Kepler space telescope lost its fine pointing ability and entered a new phase of observation, K2. Targets observed by K2 have high motion relative to the detector and K2 light curves have higher noise than Kepler observations. Despite the increased noise, systematics removal pipelines such as K2SFF and EVEREST have enabled continued high-precision transiting planet science with the telescope, resulting in the detection of hundreds of new exoplanets. However, as the spacecraft begins to run out of fuel, sputtering will drive large and random variations in pointing that can prevent detection of exoplanets during the remaining 5 campaigns. In general, higher motion will spread the stellar point spread function (PSF) across more pixels during a campaign, which increases the number of degrees of freedom in the noise component and significantly reduces the de-trending power of traditional systematics removal methods. We use a model of the Kepler CCD combined with pixel-level information of a large number of stars across the detector to improve the performance of the EVEREST pipeline at high motion. We also consider the problem of increased crowding for static apertures in the high-motion regime and develop pixel response function (PRF)-fitting techniques to mitigate contamination and maximize the de-trending power. We assess the performance of our code by simulating sputtering events and assessing exoplanet detection efficiency with transit injection/recovery tests. We find that targets with roll amplitudes of up to 8 pixels, approximately 15 times the K2 roll, can be de-trended to within a factor of 2 to 3 of current K2 photometric precision for stars up to 14th magnitude. Achieved recovery precision allows detection of small planets around 11th and 12th magnitude stars.
These methods can be applied to the light curves of K2 targets for existing and future campaigns to ensure that precision exoplanet science can still be performed despite increased motion. We further discuss how these methods can be applied to upcoming space telescope missions, such as the Transiting Exoplanet Survey Satellite (TESS), to improve future detection and characterization of exoplanet candidates.
Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James
2017-09-01
Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Edwards, R. Lawrence; Chen, J. H.; Wasserburg, G. J.
1987-01-01
A method is presented for the high-precision measurement of the Th-230 abundance in corals by isotope-dilution mass spectrometry using techniques developed by Chen and Wasserburg (1980, 1981) and Chen et al. (1986). It is shown that 6 × 10⁸ atoms of Th-230 can be measured to ±30 percent (2 sigma) and 2 × 10¹⁰ atoms of Th-230 to ±2 percent. The time over which useful age data on corals can be obtained ranges from a few years to about 500 ky, with the uncertainty in age ranging from 5 y for a 180-y-old coral, to 44 y for an 8294-y-old coral, to 1.1 ky for a 123.1-ky-old coral. Ages were determined with high analytical precision for several corals that grew during high sea-level stands about 120 ky ago, supporting the view that the dominant cause of Pleistocene climate change was Milankovitch forcing.
VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)
NASA Astrophysics Data System (ADS)
Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.
2016-02-01
Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).
Is the coverage of Google Scholar enough to be used alone for systematic reviews?
2013-01-01
Background In searches for clinical trials and systematic reviews, it is said that Google Scholar (GS) should never be used in isolation, but in addition to PubMed, Cochrane, and other trusted sources of information. We therefore performed a study to assess the coverage of GS specifically for the studies included in systematic reviews and evaluate if GS was sensitive enough to be used alone for systematic reviews. Methods All the original studies included in 29 systematic reviews published in the Cochrane Database Syst Rev or in the JAMA in 2009 were gathered in a gold standard database. GS was searched for all these studies one by one to assess the percentage of studies which could have been identified by searching only GS. Results All the 738 original studies included in the gold standard database were retrieved in GS (100%). Conclusion The coverage of GS for the studies included in the systematic reviews is 100%. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed. With some improvement in the research options, to increase its precision, GS could become the leading bibliographic database in medicine and could be used alone for systematic reviews. PMID:23302542
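The coverage and precision measures used in studies like the one above reduce to simple ratios over a gold-standard set. A hedged sketch follows: the 738/738 figure reproduces the coverage reported in the abstract, while the precision denominator is a hypothetical total hit count, since the study does not report one.

```python
def relative_recall(gold_found, gold_total):
    # Fraction of gold-standard references retrieved by the search.
    return gold_found / gold_total

def precision(relevant_hits, total_hits):
    # Fraction of all retrieved records that are relevant.
    return relevant_hits / total_hits

coverage = relative_recall(738, 738)  # all gold-standard studies found in GS
prec = precision(738, 73800)          # hypothetical total hit count
```

The tension the head and body abstracts both describe is visible here: coverage (relative recall) can be perfect while precision stays tiny whenever the total result set is orders of magnitude larger than the relevant set.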
Precision electron-beam polarimetry at 1 GeV using diamond microstrip detectors
Narayan, A.; Jones, D.; Cornejo, J. C.; ...
2016-02-16
We report on the highest precision yet achieved in the measurement of the polarization of a low-energy, O(1 GeV), continuous-wave (CW) electron beam, accomplished using a new polarimeter based on electron-photon scattering, in Hall C at Jefferson Lab. A number of technical innovations were necessary, including a novel method for precise control of the laser polarization in a cavity and a novel diamond microstrip detector that was able to capture most of the spectrum of scattered electrons. The data analysis technique exploited track finding, the high granularity of the detector, and its large acceptance. The polarization of the 180-μA, 1.16-GeV electron beam was measured with a statistical precision of <1% per hour and a systematic uncertainty of 0.59%. This exceeds the level of precision required by the Qweak experiment, a measurement of the weak vector charge of the proton. Proposed future low-energy experiments require polarization uncertainty < 0.4%, and this result represents an important demonstration of that possibility. This measurement is the first use of diamond detectors for particle tracking in an experiment. As a result, it demonstrates the stable operation of a diamond-based tracking detector in a high radiation environment, for two years.
Cancer Survivor Eric Dishman Is On a Precision Medicine Mission | NIH MedlinePlus the Magazine
Automatic indexing and retrieval of encounter-specific evidence for point-of-care support.
O'Sullivan, Dympna M; Wilk, Szymon A; Michalowski, Wojtek J; Farion, Ken J
2010-08-01
Evidence-based medicine relies on repositories of empirical research evidence that can be used to support clinical decision making for improved patient care. However, retrieving evidence from such repositories at local sites presents many challenges. This paper describes a methodological framework for automatically indexing and retrieving empirical research evidence in the form of the systematic reviews and associated studies from The Cochrane Library, where retrieved documents are specific to a patient-physician encounter and thus can be used to support evidence-based decision making at the point of care. Such an encounter is defined by three pertinent groups of concepts - diagnosis, treatment, and patient, and the framework relies on these three groups to steer indexing and retrieval of reviews and associated studies. An evaluation of the indexing and retrieval components of the proposed framework was performed using documents relevant for the pediatric asthma domain. Precision and recall values for automatic indexing of systematic reviews and associated studies were 0.93 and 0.87, and 0.81 and 0.56, respectively. Moreover, precision and recall for the retrieval of relevant systematic reviews and associated studies were 0.89 and 0.81, and 0.92 and 0.89, respectively. With minor modifications, the proposed methodological framework can be customized for other evidence repositories. Copyright 2010 Elsevier Inc. All rights reserved.
Automatic evidence retrieval for systematic reviews.
Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G; Tsafnat, Guy
2014-10-01
Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
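The recall and F1 figures in the abstract above can be reproduced from its raw counts. A quick check, taking precision as the reported 97.7%:

```python
def f1(precision, recall):
    # F1 score: harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

recall_included = 633 / 949  # correct identifications / included citations
recall_in_mas = 633 / 740    # correct identifications / citations present in MAS
f1_included = f1(0.977, recall_included)
f1_in_mas = f1(0.977, recall_in_mas)
```

Both F1 values match the abstract (79.3% and 91.2%), illustrating how the same system scores very differently depending on whether the denominator is all included citations or only those actually indexed in the searched database.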
Engineering Heteromaterials to Control Lithium Ion Transport Pathways
Liu, Yang; Vishniakou, Siarhei; Yoo, Jinkyoung; ...
2015-12-21
Safe and efficient operation of lithium ion batteries requires precisely directed flow of lithium ions and electrons to control the directional volume changes in anode and cathode materials. Understanding and controlling the lithium ion transport in battery electrodes becomes crucial to the design of high performance and durable batteries. Some recent work revealed that the chemical potential barriers encountered at the surfaces of heteromaterials play an important role in directing lithium ion transport at nanoscale. We utilize in situ transmission electron microscopy to demonstrate that we can switch lithiation pathways from radial to axial to grain-by-grain lithiation through the systematic creation of heteromaterial combinations in the Si-Ge nanowire system. Furthermore, our systematic studies show that engineered materials at nanoscale can overcome the intrinsic orientation-dependent lithiation, and open new pathways to aid in the development of compact, safe, and efficient batteries.
Engineering Heteromaterials to Control Lithium Ion Transport Pathways
Liu, Yang; Vishniakou, Siarhei; Yoo, Jinkyoung; Dayeh, Shadi A.
2015-01-01
Safe and efficient operation of lithium ion batteries requires precisely directed flow of lithium ions and electrons to control the first directional volume changes in anode and cathode materials. Understanding and controlling the lithium ion transport in battery electrodes becomes crucial to the design of high performance and durable batteries. Recent work revealed that the chemical potential barriers encountered at the surfaces of heteromaterials play an important role in directing lithium ion transport at nanoscale. Here, we utilize in situ transmission electron microscopy to demonstrate that we can switch lithiation pathways from radial to axial to grain-by-grain lithiation through the systematic creation of heteromaterial combinations in the Si-Ge nanowire system. Our systematic studies show that engineered materials at nanoscale can overcome the intrinsic orientation-dependent lithiation, and open new pathways to aid in the development of compact, safe, and efficient batteries. PMID:26686655
High-precision measurements of 26Na β- decay
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Svensson, C. E.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A.; Ball, G. C.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hardy, J. C.; Hyland, B.; Iacob, V. E.; Koopmans, K. A.; Kulp, W. D.; Leslie, J. R.; MacDonald, J. A.; Morton, A. C.; Ormand, W. E.; Osborne, C. J.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Scraggs, H. C.; Schwarzenberg, J.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Wood, J. L.; Zganjar, E. F.
2005-04-01
High-precision measurements of the half-life and β-branching ratios for the β- decay of 26Na to 26Mg were performed in β-counting and γ-decay experiments, respectively. A 4π proportional counter and fast tape transport system were employed for the half-life measurement, whereas the γ rays emitted by the daughter nucleus 26Mg were detected with the 8π γ-ray spectrometer, both located at TRIUMF's isotope separator and accelerator radioactive beam facility. The half-life of 26Na was determined to be T1/2 = 1.07128 ± 0.00013 ± 0.00021 s, where the first error is statistical and the second systematic. The logft values derived from these experiments are compared with theoretical values from a full sd-shell model calculation.
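The half-life above is quoted with separate statistical and systematic uncertainties. When the two are independent, a common convention is to combine them in quadrature; whether the authors quote a combined value this way is an assumption, so the sketch below is illustrative.

```python
import math

def combine_in_quadrature(stat, syst):
    # Total uncertainty for independent statistical and systematic errors.
    return math.sqrt(stat**2 + syst**2)

half_life = 1.07128  # s, from the abstract
total_unc = combine_in_quadrature(0.00013, 0.00021)  # s, uncertainties from the abstract
```

The combined uncertainty (about 0.00025 s) is dominated by the systematic term, which is why such experiments report the two components separately.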
Demystifying Kepler Data: A Primer for Systematic Artifact Mitigation
NASA Astrophysics Data System (ADS)
Kinemuchi, K.; Barclay, T.; Fanelli, M.; Pepper, J.; Still, M.; Howell, Steve B.
2012-09-01
The Kepler spacecraft has collected data of high photometric precision and cadence almost continuously since operations began on 2009 May 2. Primarily designed to detect planetary transits and asteroseismological signals from solar-like stars, Kepler has provided high-quality data for many areas of investigation. Unconditioned simple aperture time-series photometry is, however, affected by systematic structure. Examples of these systematics include differential velocity aberration, thermal gradients across the spacecraft, and pointing variations. While exhibiting some impact on Kepler’s primary science, these systematics can critically handicap potentially ground-breaking scientific gains in other astrophysical areas, especially over long timescales greater than 10 days. As the data archive grows to provide light curves for 105 stars of many years in length, Kepler will only fulfill its broad potential for stellar astrophysics if these systematics are understood and mitigated. Post-launch developments in the Kepler archive, data reduction pipeline and open source data analysis software have helped to remove or reduce systematic artifacts. This paper provides a conceptual primer to help users of the Kepler data archive understand and recognize systematic artifacts within light curves and some methods for their removal. Specific examples of artifact mitigation are provided using data available within the archive. Through the methods defined here, the Kepler community will find a road map to maximizing the quality and employment of the Kepler legacy archive.
Demystifying Kepler Data: A Primer for Systematic Artifact Mitigation
NASA Technical Reports Server (NTRS)
Kinemuchi, K.; Barclay, T.; Fanelli, M.; Pepper, J.; Still, M.; Howell, B.
2012-01-01
The Kepler spacecraft has collected data of high photometric precision and cadence almost continuously since operations began on 2009 May 2. Primarily designed to detect planetary transits and asteroseismological signals from solar-like stars, Kepler has provided high quality data for many areas of investigation. Unconditioned simple aperture time-series photometry is, however, affected by systematic structure. Examples of these systematics are differential velocity aberration, thermal gradients across the spacecraft, and pointing variations. While exhibiting some impact on Kepler's primary science, these systematics can critically handicap potentially ground-breaking scientific gains in other astrophysical areas, especially over long timescales greater than 10 days. As the data archive grows to provide light curves for 10^5 stars of many years in length, Kepler will only fulfill its broad potential for stellar astrophysics if these systematics are understood and mitigated. Post-launch developments in the Kepler archive, data reduction pipeline and open source data analysis software have helped to remove or reduce systematic artifacts. This paper provides a conceptual primer for users of the Kepler data archive to understand and recognize systematic artifacts within light curves and some methods for their removal. Specific examples of artifact mitigation are provided using data available within the archive. Through the methods defined here, the Kepler community will find a road map to maximizing the quality and employment of the Kepler legacy archive.
Buus, Niels; Gonge, Henrik
2009-08-01
The objective of this paper was to systematically review and critically evaluate all English language research papers reporting empirical studies of clinical supervision in psychiatric nursing. The first part of the search strategy was a combination of brief and building block strategies in the PubMed, CINAHL, and PsycINFO databases. The second part was a citation pearl growing strategy with reviews of 179 reference lists. In total, the search strategy demonstrated a low level of precision and a high level of recall. Thirty four articles met the criteria of the review and were systematically evaluated using three checklists. The findings were summarized by using a new checklist with nine overall questions regarding the studies' design, methods, findings, and limitations. The studies were categorized as: (i) effect studies; (ii) survey studies; (iii) interview studies; and (iv) case studies. In general, the studies were relatively small scale; they used relatively new and basic methods for data collection and analysis, and rarely included sufficient strategies for identifying confounding factors or how the researchers' preconceptions influenced the analyses. Empirical research of clinical supervision in psychiatric nursing was characterized by a basic lack of agreement about which models and instruments to use. Challenges and recommendations for future research are discussed. Clinical supervision in psychiatric nursing was commonly perceived as a good thing, but there was limited empirical evidence supporting this claim.
Morrison, Andra; Polisena, Julie; Husereau, Don; Moulton, Kristen; Clark, Michelle; Fiander, Michelle; Mierzwinski-Urban, Monika; Clifford, Tammy; Hutton, Brian; Rabb, Danielle
2012-04-01
The English language is generally perceived to be the universal language of science. However, the exclusive reliance on English-language studies may not represent all of the evidence. Excluding languages other than English (LOE) may introduce a language bias and lead to erroneous conclusions. We conducted a comprehensive literature search using bibliographic databases and grey literature sources. Studies were eligible for inclusion if they measured the effect of excluding randomized controlled trials (RCTs) reported in LOE from systematic review-based meta-analyses (SR/MA) for one or more outcomes. None of the included studies found major differences between summary treatment effects in English-language restricted meta-analyses and LOE-inclusive meta-analyses. Findings differed about the methodological and reporting quality of trials reported in LOE. The precision of pooled estimates improved with the inclusion of LOE trials. Overall, we found no evidence of a systematic bias from the use of language restrictions in systematic review-based meta-analyses in conventional medicine. Further research is needed to determine the impact of language restriction on systematic reviews in particular fields of medicine.
Quantitative spectroscopy of Galactic BA-type supergiants. I. Atmospheric parameters
NASA Astrophysics Data System (ADS)
Firnstein, M.; Przybilla, N.
2012-07-01
Context. BA-type supergiants show a high potential as versatile indicators for modern astronomy. This paper constitutes the first in a series that aims at a systematic spectroscopic study of Galactic BA-type supergiants. Various problems will be addressed, including in particular observational constraints on the evolution of massive stars and a determination of abundance gradients in the Milky Way. Aims: The focus here is on the determination of accurate and precise atmospheric parameters for a sample of Galactic BA-type supergiants as prerequisite for all further analysis. Some first applications include a recalibration of functional relationships between spectral-type, intrinsic colours, bolometric corrections and effective temperature, and an exploration of the reddening-free Johnson Q and Strömgren [c1] and β-indices as photometric indicators for effective temperatures and gravities of BA-type supergiants. Methods: An extensive grid of theoretical spectra is computed based on a hybrid non-LTE approach, covering the relevant parameter space in effective temperature, surface gravity, helium abundance, microturbulence and elemental abundances. The atmospheric parameters are derived spectroscopically by line-profile fits of our theoretical models to high-resolution and high-S/N spectra obtained at various observatories. Ionization equilibria of multiple metals and the Stark-broadened hydrogen and the neutral helium lines constitute our primary indicators for the parameter determination, supplemented by (spectro-)photometry from the UV to the near-IR. Results: We obtain accurate atmospheric parameters for 35 sample supergiants from a homogeneous analysis. Data on effective temperatures, surface gravities, helium abundances, microturbulence, macroturbulence and rotational velocities are presented. The interstellar reddening and the ratio of total-to-selective extinction towards the stars are determined. 
Our empirical spectral-type-Teff scale is steeper than reference relations from the literature, the stars are significantly bluer than usually assumed, and bolometric corrections differ significantly from established literature values. Photometric Teff-determinations based on the reddening-free Q-index are found to be of limited use for studies of BA-type supergiants because of large errors of typically ±5% (1σ statistical) ±3% (1σ systematic), compared to a spectroscopically achieved precision of 1-2% (combined statistical and systematic uncertainty with our methodology). The reddening-free [c1]-index and β, on the other hand, are found to provide useful starting values for high-precision/accuracy analyses, with uncertainties of ±1% ± 2.5% in Teff, and ±0.04 ± 0.13 dex in log g (1σ statistical and 1σ systematic, respectively). Based on observations collected at the Centro Astronómico Hispano Alemán at Calar Alto (CAHA), operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC), proposals H2001-2.2-011 and H2005-2.2-016. Based on observations obtained at the European Southern Observatory, proposals 62.H-0176 and 079.B-0856(A). Additional data were adopted from the UVES Paranal Observatory Project (ESO DDT Program ID 266.D-5655).
The Nab Spectrometer, Precision Field Mapping, and Associated Systematic Effects
NASA Astrophysics Data System (ADS)
Fry, Jason; Nab Collaboration
2017-09-01
The Nab experiment will make precision measurements of a, the e-ν correlation parameter, and b, the Fierz interference term, in neutron beta decay, aiming to deliver an independent determination of the ratio λ = GA/GV to sensitively test CKM unitarity. Nab utilizes a novel, long asymmetric spectrometer to measure the proton time of flight (TOF) and the electron energy. We extract a from the slope of the measured TOF distribution for different electron energies. A reliable relation of the measured proton TOF to a requires detailed knowledge of the effective proton path length, which in turn imposes further requirements on the precision of the magnetic fields in the Nab spectrometer. The Nab spectrometer, magnetometry, and associated systematics will be discussed.
Nanoscale Inhomogeneous Superconductivity in Fe(Te1-xSex) Probed by Nanostructure Transport.
Yue, Chunlei; Hu, Jin; Liu, Xue; Sanchez, Ana M; Mao, Zhiqiang; Wei, Jiang
2016-01-26
Among iron-based superconductors, the layered iron chalcogenide Fe(Te1-xSex) is structurally the simplest and has attracted considerable attention. It has been speculated from bulk studies that nanoscale inhomogeneous superconductivity may inherently exist in this system. However, this has not been directly observed from nanoscale transport measurements. In this work, through simple micromechanical exfoliation and high-precision low-energy ion milling thinning, we prepared Fe(Te0.5Se0.5) nanoflakes with various thicknesses and systematically studied the correlation between the thickness and superconducting phase transition. Our result revealed a systematic thickness-dependent evolution of superconducting transition. When the thickness of the Fe(Te0.5Se0.5) flake is reduced to less than the characteristic inhomogeneity length (around 12 nm), both the superconducting current path and the metallicity of the normal state in Fe(Te0.5Se0.5) atomic sheets are suppressed. This observation provides the first transport evidence for the nanoscale inhomogeneous nature of superconductivity in Fe(Te1-xSex).
Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O.P.; Singh, Bhupinder
2016-01-01
The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. Face-centered cubic design was selected for optimization of volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett–Burman design studies, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50–800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. PMID:26912808
On Test-Wiseness and Some Related Constructs
ERIC Educational Resources Information Center
Nilsson, Ingvar; Wedman, Ingemar
1976-01-01
States that, due to confusion of concepts and lack of systemization, "previous studies are often difficult to interpret and consequently...afford little possibility of formulating more precise statements about those errors the concepts represent...." A proposal for systematization is presented. (Author/RW)
We introduce and validate a new precision oncology framework for the systematic prioritization of drugs targeting mechanistic tumor dependencies in individual patients. Compounds are prioritized on the basis of their ability to invert the concerted activity of master regulator proteins that mechanistically regulate tumor cell state, as assessed from systematic drug perturbation assays. We validated the approach on a cohort of 212 gastroenteropancreatic neuroendocrine tumors (GEP-NETs), a rare malignancy originating in the pancreas and gastrointestinal tract.
Scaling up the precision in a ytterbium Bose-Einstein condensate interferometer
NASA Astrophysics Data System (ADS)
McAlpine, Katherine; Plotkin-Swing, Benjamin; Gochnauer, Daniel; Saxberg, Brendan; Gupta, Subhadeep
2016-05-01
We report on progress toward a high-precision ytterbium (Yb) Bose-Einstein condensate (BEC) interferometer, with the goal of measuring h/m and thus the fine structure constant α. Here h is Planck's constant and m is the mass of a Yb atom. The use of the non-magnetic Yb atom makes our experiment insensitive to magnetic field noise. Our chosen symmetric 3-path interferometer geometry suppresses errors from vibration, rotation, and acceleration. The precision scales with the phase accrued due to the kinetic energy difference between the interferometer arms, resulting in a quadratic sensitivity to the momentum difference. We are installing and testing the laser pulses for large momentum transfer via Bloch oscillations. We will report on Yb BEC production in a new apparatus and progress toward realizing the atom optical elements for high precision measurements. We will also discuss approaches to mitigate two important systematics: (i) atom interaction effects can be suppressed by creating the BEC in a dynamically shaped optical trap to reduce the density; (ii) diffraction phase effects from the various atom-optical elements can be accounted for through an analysis of the light-atom interaction for each pulse.
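The quadratic sensitivity quoted above follows from the phase φ = ΔE·T/ħ accrued over the free-evolution time T, with ΔE = p²/2m for an arm carrying momentum p = 2nħk relative to the rest arm, so doubling the momentum transfer n quadruples the phase. A sketch with illustrative values (the wavelength and evolution time below are assumptions, not the experiment's parameters):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def interferometer_phase(n, k, m, T):
    """Phase from the kinetic-energy difference between an arm carrying
    momentum p = 2*n*hbar*k and an arm at rest, over evolution time T."""
    p = 2 * n * HBAR * k
    return (p ** 2 / (2 * m)) * T / HBAR

m_yb = 174 * AMU                  # mass of a 174Yb atom
k = 2 * math.pi / 759e-9          # optical wavenumber (assumed wavelength)
phi_1 = interferometer_phase(1, k, m_yb, 1e-3)
phi_2 = interferometer_phase(2, k, m_yb, 1e-3)
assert abs(phi_2 / phi_1 - 4.0) < 1e-9  # quadratic scaling: 2x momentum -> 4x phase
```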
NASA Technical Reports Server (NTRS)
Pollmeier, Vincent M.; Kallemeyn, Pieter H.; Thurman, Sam W.
1993-01-01
The application of high-accuracy S/S-band (2.1 GHz uplink/2.3 GHz downlink) ranging to orbit determination with relatively short data arcs is investigated for the approach phase of each of the Galileo spacecraft's two Earth encounters (8 December 1990 and 8 December 1992). Analysis of S-band ranging data from Galileo indicated that under favorable signal levels, meter-level precision was attainable. It is shown that ranginging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. Explicit modeling of ranging bias parameters for each station pass is used to largely remove systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle finding capabilities of the data. The accuracy achieved using the precision range filtering strategy proved markedly better when compared to post-flyby reconstructions than did solutions utilizing a traditional Doppler/range filter strategy. In addition, the navigation accuracy achieved with precision ranging was comparable to that obtained using delta-Differenced One-Way Range, an interferometric measurement of spacecraft angular position relative to a natural radio source, which was also used operationally.
Developing an item bank to measure the coping strategies of people with hereditary retinal diseases.
Prem Senthil, Mallika; Khadka, Jyoti; De Roach, John; Lamey, Tina; McLaren, Terri; Campbell, Isabella; Fenwick, Eva K; Lamoureux, Ecosse L; Pesudovs, Konrad
2018-05-05
Our understanding of the coping strategies used by people with visual impairment to manage stress related to visual loss is limited. This study aims to develop a sophisticated coping instrument in the form of an item bank implemented via computerised adaptive testing (CAT) for hereditary retinal diseases. Items on coping were extracted from qualitative interviews with patients and supplemented by items from a literature review. A systematic multi-stage process of item refinement was carried out, followed by expert panel discussion and cognitive interviews. The final coping item bank had 30 items. Rasch analysis was used to assess the psychometric properties. A CAT simulation was carried out to estimate the average number of items required to gain precise measurement of hereditary retinal disease-related coping. One hundred eighty-nine participants answered the coping item bank (median age = 58 years). The coping scale demonstrated good precision and targeting. The standardised residual loadings for items revealed six items grouped together. Removal of the six items reduced the precision of the main coping scale and worsened the variance explained by the measure. Therefore, the six items were retained within the main scale. Our CAT simulation indicated that, on average, fewer than 10 items are required to gain a precise measurement of coping. This is the first study to develop a psychometrically robust coping instrument for hereditary retinal diseases. The CAT simulation indicated that, on average, only four and nine items were required to gain measurement at moderate and high precision, respectively.
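The CAT logic behind the item savings reported above, administering at each step the unused item most informative at the current trait estimate, can be sketched for the dichotomous Rasch model (real coping items are typically polytomous, and the item difficulties below are hypothetical):

```python
import math

def rasch_prob(theta, b):
    """Probability of endorsing an item of difficulty b at trait level theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a dichotomous Rasch item: I = P * (1 - P)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

def next_item(theta, difficulties, administered):
    """CAT step: pick the not-yet-administered item most informative at theta."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, difficulties[i]))

bank = [-2.0, -1.0, 0.0, 1.0, 2.0]      # hypothetical item difficulties
assert next_item(0.1, bank, set()) == 2  # item closest to theta is most informative
```

Information peaks where the endorsement probability is 0.5, so the algorithm homes in on items matched to the respondent, which is why a handful of items can suffice.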
Swallowing function and chronic respiratory diseases: Systematic review.
Ghannouchi, Ines; Speyer, Renée; Doma, Kenji; Cordier, Reinie; Verin, Eric
2016-08-01
The precise coordination between breathing and swallowing is an important mechanism to prevent pulmonary aspiration. Factors that alter breathing patterns and ventilation, such as chronic respiratory diseases, may influence that precise coordination of breathing and swallowing. The purpose of this systematic literature review is to examine the effects of chronic respiratory diseases on swallowing function. Literature searches were performed using the electronic databases PubMed and Embase. All articles meeting the eligibility criteria up to March 2016 were included. All included articles studied Chronic Obstructive Pulmonary Disease (COPD) or Obstructive Sleep Apnea (OSA); no studies involving other respiratory diseases were found. A total of 1069 abstracts were retrieved, of which twenty-six studies met the inclusion criteria; eleven studies dealt with OSA and fifteen studies dealt with COPD. The outcome data indicate that chronic respiratory diseases increase the prevalence of oropharyngeal dysphagia (OD) in patients. However, the relatively small number of studies, differences in selection criteria, and the definitions and assessment techniques used for diagnosing OSA, COPD, and OD point to the need for further research. Copyright © 2016 Elsevier Ltd. All rights reserved.
Precision determination of the πN scattering lengths and the charged πNN coupling constant
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.
2000-01-01
We critically evaluate the isovector GMO sum rule for the charged πNN coupling constant using recent precision data from π⁻p and π⁻d atoms and with careful attention to systematic errors. From the π⁻d scattering length we deduce the pion-proton scattering lengths 1/2(aπ⁻p + aπ⁻n) = (−20 ± 6(stat) ± 10(syst)) × 10⁻⁴ mπ⁻¹ and 1/2(aπ⁻p − aπ⁻n) = (903 ± 14) × 10⁻⁴ mπ⁻¹. From this a direct evaluation gives g²c(GMO)/4π = 14.20 ± 0.07(stat) ± 0.13(syst), or f²c/4π = 0.0786 ± 0.0008.
Huang, Yuansheng; Yang, Zhirong; Wang, Jing; Zhuo, Lin; Li, Zhixia; Zhan, Siyan
2016-05-06
To compare the performance of search strategies to retrieve systematic reviews of diagnostic test accuracy from The Cochrane Library, the CDSR and DARE databases in The Cochrane Library were searched for systematic reviews of diagnostic test accuracy published between 2008 and 2012 using nine search strategies. Each strategy consisted of one group, or a combination of groups, of search filters about diagnostic test accuracy. Four groups of diagnostic filters were used. The strategy combining all the filters was used as the reference to determine the sensitivity, precision, and the sensitivity × precision product for the other eight strategies. The reference strategy retrieved 8029 records, of which 832 were eligible. The strategy composed only of MeSH terms about "accuracy measures" achieved the highest values in both precision (69.71%) and product (52.45%) with a moderate sensitivity (75.24%). The combination of MeSH terms and free text words about "accuracy measures" contributed little to increasing the sensitivity. Strategies composed of filters about "diagnosis" had similar sensitivity but lower precision and product than those composed of filters about "accuracy measures". The MeSH term "exp 'diagnosis'" achieved the lowest precision (9.78%) and product (7.91%), while its hyponym retrieved only half the number of records at the expense of missing 53 target articles. Precision was negatively correlated with sensitivity among the nine strategies. Compared to the filters about "diagnosis", the filters about "accuracy measures" achieved similar sensitivities but higher precision. When filters for both concepts were combined, the sensitivity of the strategy was clearly enhanced. The combination of MeSH terms and free text words about the same concept, however, appeared to add little sensitivity. This article is protected by copyright. All rights reserved.
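The three reported metrics relate as product = sensitivity × precision. Taking the "accuracy measures" MeSH strategy as an example, the underlying counts (about 626 eligible reviews retrieved among about 898 total hits, both inferred here from the reported percentages and the 832-review reference set) reproduce the published figures:

```python
def search_performance(n_retrieved, n_relevant_retrieved, n_relevant_total):
    """Sensitivity (recall), precision, and their product for a search filter."""
    sensitivity = n_relevant_retrieved / n_relevant_total
    precision = n_relevant_retrieved / n_retrieved
    return sensitivity, precision, sensitivity * precision

# Counts inferred from the reported 75.24% sensitivity and 69.71% precision:
sens, prec, product = search_performance(898, 626, 832)
assert round(sens * 100, 2) == 75.24
assert round(prec * 100, 2) == 69.71
assert round(product * 100, 2) == 52.45
```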
Design and study of water supply system for supercritical unit boiler in thermal power station
NASA Astrophysics Data System (ADS)
Du, Zenghui
2018-04-01
In order to design and optimize the boiler feedwater system of a supercritical unit, the establishment of a highly accurate model of the controlled object and its dynamic characteristics is a prerequisite for developing an effective thermal control system. Mechanism-based modeling methods often lead to large systematic errors. Drawing on the information contained in the historical operation data of the boiler's typical thermal systems, a modern intelligent identification method is used to establish a high-precision quantitative model. This approach avoids the difficulties caused by disturbance-experiment modeling on the actual system in the field, and provides a strong reference for the design and optimization of thermal automation control systems in thermal power plants.
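One common realization of such data-driven identification, offered here as a hedged sketch rather than the paper's actual method, is a least-squares ARX fit to recorded input/output sequences; the synthetic first-order plant below stands in for real boiler data:

```python
import numpy as np

def fit_arx(u, y, na=1, nb=1):
    """Least-squares fit of an ARX model
    y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]
    from recorded input u and output y."""
    start = max(na, nb)
    rows = [[y[t - i] for i in range(1, na + 1)] +
            [u[t - i] for i in range(1, nb + 1)] for t in range(start, len(y))]
    targets = y[start:]
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

# Simulate "historical" data from a known plant, then recover its coefficients.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + 0.4 * u[t - 1]

theta = fit_arx(u, y)
assert np.allclose(theta, [0.6, 0.4], atol=1e-6)  # coefficients recovered
```

With noisy plant data the same fit returns unbiased estimates only under suitable noise assumptions, which is where the "intelligent" identification methods referenced in the abstract improve on plain least squares.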
NASA Astrophysics Data System (ADS)
Reggiani, Henrique; Meléndez, Jorge
2018-04-01
Recent studies of chemical abundances in metal-poor halo stars show the existence of different populations, which is important for studies of Galaxy formation and evolution. Here, we revisit the twin pair of chemically anomalous stars HD 134439 and HD 134440, using high resolution (R ~ 72 000) and high S/N ratio (S/N ~ 250) HDS/Subaru spectra. We compare them to the well-studied halo star HD 103095, using the line-by-line differential technique to estimate precise stellar parameters and LTE chemical abundances. We present the abundances of C, O, Na, Mg, Si, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Cu, Zn, Sr, Y, Ba, La, Ce, Nd, and Sm. We compare our results to the precise abundance patterns of Nissen & Schuster (2010) and data from dwarf spheroidal galaxies (dSphs). We show that the abundance pattern of these stars appears to be closely linked to that of dSphs with an [α/Fe] knee below [Fe/H] < -1.5. We also find a systematic difference of 0.06 ± 0.01 dex between the abundances of these twin binary stars, which could be explained by the engulfment of a planet, thus suggesting that planet formation is possible at low metallicities ([Fe/H] = -1.4).
Towards accurate radial velocities from early type spectra in the framework of an ESO key programme
NASA Astrophysics Data System (ADS)
Verschueren, Werner; David, M.; Hensberge, Herman
In order to elucidate the internal kinematics in very young stellar groups, dedicated machinery was set up which made it possible to proceed from actual observations through reduction and correlation analysis to the ultimate derivation of early-type stellar radial velocities (RVs) with the requisite precision. The following ingredients are found to be essential to obtain RVs of early-type stars at the 1-km/s level of precision: high-resolution, high-S/N spectra covering a large wavelength range; maximal reduction of observational errors and the use of optimal reduction procedures; the intelligent use of a versatile cross-correlation package; and comparison of velocities derived from different regions of the spectrum in order to detect systematic mismatches between object and template spectrum in some of the lines.
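The cross-correlation step can be illustrated on a uniform log-wavelength grid, where a non-relativistic Doppler shift becomes a rigid translation. This synthetic sketch uses an integer-bin shift; a real pipeline would additionally interpolate the peak to sub-bin precision:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def rv_from_ccf(template, observed, dlnlam):
    """Radial velocity from the cross-correlation peak of two spectra sampled
    on the same uniform log-wavelength grid with step dlnlam per bin."""
    t = template - template.mean()
    o = observed - observed.mean()
    ccf = np.correlate(o, t, mode="full")
    lag = int(ccf.argmax()) - (len(t) - 1)   # bins the observed spectrum is shifted
    return lag * dlnlam * C_KMS

# Synthetic Gaussian absorption line, shifted redward by 3 bins.
dlnlam = 1e-5                                # ~3 km/s per bin
x = np.arange(2000)
template = 1.0 - 0.5 * np.exp(-0.5 * ((x - 1000) / 5.0) ** 2)
observed = 1.0 - 0.5 * np.exp(-0.5 * ((x - 1003) / 5.0) ** 2)
v = rv_from_ccf(template, observed, dlnlam)
assert abs(v - 3 * dlnlam * C_KMS) < 1e-9    # ~9 km/s redshift recovered
```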
21st century toolkit for optimizing population health through precision nutrition.
O'Sullivan, Aifric; Henrick, Bethany; Dixon, Bonnie; Barile, Daniela; Zivkovic, Angela; Smilowitz, Jennifer; Lemay, Danielle; Martin, William; German, J Bruce; Schaefer, Sara Elizabeth
2017-07-05
Scientific, technological, and economic progress over the last 100 years all but eradicated problems of widespread food shortage and nutrient deficiency in developed nations. But now society is faced with a new set of nutrition problems related to energy imbalance and metabolic disease, which require new kinds of solutions. Recent developments in the area of new analytical tools enable us to systematically study large quantities of detailed and multidimensional metabolic and health data, providing the opportunity to address current nutrition problems through an approach called Precision Nutrition. This approach integrates different kinds of "big data" to expand our understanding of the complexity and diversity of human metabolism in response to diet. With these tools, we can more fully elucidate each individual's unique phenotype, or the current state of health, as determined by the interactions among biology, environment, and behavior. The tools of precision nutrition include genomics, metabolomics, microbiomics, phenotyping, high-throughput analytical chemistry techniques, longitudinal tracking with body sensors, informatics, data science, and sophisticated educational and behavioral interventions. These tools are enabling the development of more personalized and predictive dietary guidance and interventions that have the potential to transform how the public makes food choices and greatly improve population health.
Automatic Evidence Retrieval for Systematic Reviews
Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G
2014-01-01
Background: Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Objective: Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Methods: Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. Results: The automatic method was able to correctly identify 633 citations (as a proportion of included citations: recall=66.7%, F1 score=79.3%; as a proportion of citations in MAS: recall=85.5%, F1 score=91.2%) with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly identified citations. Conclusions: The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews. PMID:25274020
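The reported figures are internally consistent under the standard definitions, with recall computed against two different denominators and F1 as the harmonic mean of precision and recall:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# 949 cited articles, 740 indexed in MAS, 633 correctly identified at 97.7% precision.
recall_all = 633 / 949   # against all included citations
recall_mas = 633 / 740   # against citations actually indexed in MAS
assert round(recall_all * 100, 1) == 66.7
assert round(f1(0.977, recall_all) * 100, 1) == 79.3
assert round(recall_mas * 100, 1) == 85.5
assert round(f1(0.977, recall_mas) * 100, 1) == 91.2
```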
Precise determination of the 113Cd fourth-forbidden non-unique β-decay Q value
NASA Astrophysics Data System (ADS)
Gamage, N. D.; Bollen, G.; Eibach, M.; Gulyuz, K.; Izzo, C.; Kandegedara, R. M. E. B.; Redshaw, M.; Ringle, R.; Sandler, R.; Valverde, A. A.
2016-08-01
Using Penning trap mass spectrometry, we have performed a precise determination of the Q value for the highly forbidden β decay of 113Cd. An independent measurement of the Q value fixes the end-point energy in a fit to the 113Cd β-decay spectrum. This provides a strong test of systematics for detectors that have observed this decay, such as those developed for ββ-decay searches in cadmium and other isotopes. It will also aid in the theoretical description of the β-decay spectrum. The result, Qβ = 323.89(27) keV, agrees at the 1.3σ level with the value obtained from the 2012 Atomic Mass Evaluation [Chin. Phys. C 36, 1603 (2012), 10.1088/1674-1137/36/12/003], but is a factor of almost four more precise. We also report improved values for the atomic masses of 113Cd, 113In, and 112Cd.
SATELLITE-MOUNTED LIGHT SOURCES AS PHOTOMETRIC CALIBRATION STANDARDS FOR GROUND-BASED TELESCOPES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J., E-mail: jalbert@uvic.ca
2012-01-15
A significant and growing portion of systematic error on a number of fundamental parameters in astrophysics and cosmology is due to uncertainties from absolute photometric and flux standards. A path toward achieving major reduction in such uncertainties may be provided by satellite-mounted light sources, resulting in improvement in the ability to precisely characterize atmospheric extinction, and thus helping to usher in the coming generation of precision results in astronomy. Using a campaign of observations of the 532 nm pulsed laser aboard the CALIPSO satellite, collected using a portable network of cameras and photodiodes, we obtain initial measurements of atmospheric extinction, which can apparently be greatly improved by further data of this type. For a future satellite-mounted precision light source, a high-altitude balloon platform under development (together with colleagues) can provide testing as well as observational data for calibration of atmospheric uncertainties.
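Characterizing extinction from such a calibrated source reduces, in the simplest case, to fitting the standard extinction law m(X) = m0 + k·X over airmass X. A minimal sketch with synthetic, noiseless photometry (the numbers are hypothetical):

```python
import numpy as np

def fit_extinction(airmass, mag):
    """Least-squares fit of m(X) = m0 + k*X; returns the zero-airmass
    magnitude m0 and the extinction coefficient k (mag per airmass)."""
    A = np.vstack([np.ones_like(airmass), airmass]).T
    (m0, k), *_ = np.linalg.lstsq(A, mag, rcond=None)
    return m0, k

X = np.array([1.0, 1.2, 1.5, 2.0, 2.5])   # airmasses of repeated observations
m = 12.0 + 0.15 * X                        # synthetic magnitudes, true k = 0.15
m0, k = fit_extinction(X, m)
assert abs(m0 - 12.0) < 1e-6 and abs(k - 0.15) < 1e-6
```

A source of precisely known brightness pins down m0 independently, which is exactly what removes the degeneracy that limits extinction solutions from stars alone.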
Digital identification of cartographic control points
NASA Technical Reports Server (NTRS)
Gaskell, R. W.
1988-01-01
Techniques have been developed for the sub-pixel location of control points in satellite images returned by the Voyager spacecraft. The procedure uses digital imaging data in the neighborhood of the point to form a multipicture model of a piece of the surface. Comparison of this model with the digital image in each picture determines the control point locations to about a tenth of a pixel. At this level of precision, previously insignificant effects must be considered, including chromatic aberration, high level imaging distortions, and systematic errors due to navigation uncertainties. Use of these methods in the study of Jupiter's satellite Io has proven very fruitful.
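Tenth-of-a-pixel localization of a matching peak is commonly achieved by parabolic interpolation through the three correlation samples around the integer maximum; this generic sketch is not the Voyager pipeline's actual estimator:

```python
def subpixel_peak(c_minus, c0, c_plus):
    """Sub-pixel offset of a correlation maximum from the samples at
    -1, 0, +1 pixels, via the vertex of the interpolating parabola."""
    denom = c_minus - 2.0 * c0 + c_plus
    return 0.5 * (c_minus - c_plus) / denom

# A parabola peaking at +0.3 px, sampled at integer offsets -1, 0, +1:
samples = [1.0 - (dx - 0.3) ** 2 for dx in (-1, 0, 1)]
offset = subpixel_peak(*samples)
assert abs(offset - 0.3) < 1e-12  # true sub-pixel offset recovered
```

At this level of refinement, as the abstract notes, residual biases from optics and navigation begin to dominate the error budget rather than the peak-finding itself.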
QED Tests and Search for New Physics in Molecular Hydrogen
NASA Astrophysics Data System (ADS)
Salumbides, E. J.; Niu, M. L.; Dickenson, G. D.; Eikema, K. S. E.; Komasa, J.; Pachucki, K.; Ubachs, W.
2013-06-01
The hydrogen molecule has been the benchmark system for quantum chemistry, and may provide a test ground for new physics. We present our high-resolution spectroscopic studies on the X ¹Σg⁺ electronic ground state rotational series and fundamental vibrational tones in molecular hydrogen. In combination with recent accurate ab initio calculations, we demonstrate systematic tests of quantum electrodynamical (QED) effects in molecules. Moreover, the precise comparison between theory and experiment can provide stringent constraints on possible new interactions that extend beyond the Standard Model. E. J. Salumbides, G. D. Dickenson, T. I. Ivanov and W. Ubachs, Phys. Rev. Lett. 107, 043005 (2011).
Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.
Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim
2017-09-01
Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicas for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). 
Conventional impressions had higher accuracy, with a mean trueness of 17.0 ± 6.6 μm and mean precision of 16.9 ± 5.8 μm, than optical impressions, with a mean trueness of 46.2 ± 11.4 μm and mean precision of 61.1 ± 4.9 μm. Complete arch (first molar-to-first molar) optical impressions were less accurate than conventional impressions but may be adequate for quadrant impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
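The aggregation described above, mean absolute deviations from the 3D reference for trueness and pairwise among replicas for precision, can be sketched with toy one-dimensional deviation profiles (three replicas instead of the study's five, and hypothetical micrometer values):

```python
from itertools import combinations

def mean_abs_dev(a, b):
    """Mean absolute deviation between two aligned deviation profiles."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def trueness(replicas, reference):
    """Average deviation of each impression replica from the reference model."""
    return sum(mean_abs_dev(r, reference) for r in replicas) / len(replicas)

def precision(replicas):
    """Average pairwise deviation among the replicas themselves."""
    pairs = list(combinations(replicas, 2))
    return sum(mean_abs_dev(a, b) for a, b in pairs) / len(pairs)

ref = [0.0, 0.0, 0.0, 0.0]                 # reference: zero deviation everywhere
reps = [[15.0, 18.0, 16.0, 19.0],
        [17.0, 16.0, 18.0, 17.0],
        [16.0, 17.0, 15.0, 18.0]]
t, p = trueness(reps, ref), precision(reps)
```

Note how trueness needs the reference appliance while precision does not, which is why in vivo precision had been reported before and in vivo trueness had not.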
Transportable Optical Lattice Clock with 7×10⁻¹⁷ Uncertainty.
Koller, S B; Grotti, J; Vogt, St; Al-Masoudi, A; Dörscher, S; Häfner, S; Sterr, U; Lisdat, Ch
2017-02-17
We present a transportable optical clock (TOC) with ⁸⁷Sr. Its complete characterization against a stationary lattice clock resulted in a systematic uncertainty of 7.4×10⁻¹⁷, which is currently limited by the statistics of the determination of the residual lattice light shift, and an instability of 1.3×10⁻¹⁵/√τ with an averaging time τ in seconds. Measurements confirm that the systematic uncertainty can be reduced to below the design goal of 1×10⁻¹⁷. To our knowledge, these are the best uncertainties and instabilities reported for any transportable clock to date. For autonomous operation, the TOC has been installed in an air-conditioned car trailer. It is suitable for chronometric leveling with submeter resolution as well as for intercontinental cross-linking of optical clocks, which is essential for a redefinition of the International System of Units (SI) second. In addition, the TOC will be used for high precision experiments for fundamental science that are commonly tied to precise frequency measurements, and its development is an important step toward space-borne optical clocks.
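The two quoted numbers fix how long the clock must average before statistics stop dominating: with σ(τ) = 1.3×10⁻¹⁵/√τ, white-noise averaging reaches the 7.4×10⁻¹⁷ systematic floor in roughly five minutes.

```python
def averaging_time(instability, target):
    """Averaging time for sigma(tau) = instability / sqrt(tau)
    to reach the target fractional frequency uncertainty."""
    return (instability / target) ** 2

tau = averaging_time(1.3e-15, 7.4e-17)
assert 300 < tau < 315   # about 309 s, i.e. roughly 5 minutes
```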
Quantifying the Precision of Single-Molecule Torque and Twist Measurements Using Allan Variance.
van Oene, Maarten M; Ha, Seungkyu; Jager, Tessa; Lee, Mina; Pedaci, Francesco; Lipfert, Jan; Dekker, Nynke H
2018-04-24
Single-molecule manipulation techniques have provided unprecedented insights into the structure, function, interactions, and mechanical properties of biological macromolecules. Recently, the single-molecule toolbox has been expanded by techniques that enable measurements of rotation and torque, such as the optical torque wrench (OTW) and several different implementations of magnetic (torque) tweezers. Although systematic analyses of the position and force precision of single-molecule techniques have attracted considerable attention, their angle and torque precision have been treated in much less detail. Here, we propose Allan deviation as a tool to systematically quantitate angle and torque precision in single-molecule measurements. We apply the Allan variance method to experimental data from our implementations of (electro)magnetic torque tweezers and an OTW and find that both approaches can achieve a torque precision better than 1 pN · nm. The OTW, capable of measuring torque on (sub)millisecond timescales, provides the best torque precision for measurement times ≲10 s, after which drift becomes a limiting factor. For longer measurement times, magnetic torque tweezers with their superior stability provide the best torque precision. Use of the Allan deviation enables critical assessments of the torque precision as a function of measurement time across different measurement modalities and provides a tool to optimize measurement protocols for a given instrument and application. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
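The Allan deviation the authors propose for quantifying torque precision is straightforward to compute from a torque time series; for white noise it falls as 1/√m with bin size m, which is how averaging improves precision until drift takes over. A minimal generic implementation (illustrative, not the authors' code; the noise level is made up):

```python
import numpy as np

def allan_deviation(x, m):
    """Non-overlapping Allan deviation of series x for a bin size of m samples."""
    n = len(x) // m
    bins = x[:n * m].reshape(n, m).mean(axis=1)        # average within each bin
    return np.sqrt(0.5 * np.mean(np.diff(bins) ** 2))  # sqrt of the Allan variance

rng = np.random.default_rng(0)
torque = rng.normal(0.0, 5.0, 100_000)  # white torque noise, in pN·nm
# For pure white noise the Allan deviation scales as sigma/sqrt(m):
print(allan_deviation(torque, 100))     # close to 5/sqrt(100) = 0.5 pN·nm
```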
Can Systematic Reviews Inform GMO Risk Assessment and Risk Management?
Kohl, Christian; Frampton, Geoff; Sweet, Jeremy; Spök, Armin; Haddaway, Neal Robert; Wilhelm, Ralf; Unger, Stefan; Schiemann, Joachim
2015-01-01
Systematic reviews represent powerful tools to identify, collect, synthesize, and evaluate primary research data on specific research questions in a highly standardized and reproducible manner. They enable the defensible synthesis of outcomes by increasing precision and minimizing bias whilst ensuring transparency of the methods used. This makes them especially valuable to inform evidence-based risk analysis and decision making in various topics and research disciplines. Although seen as a "gold standard" for synthesizing primary research data, systematic reviews are not without limitations as they are often cost, labor and time intensive and the utility of synthesis outcomes depends upon the availability of sufficient and robust primary research data. In this paper, we (1) consider the added value systematic reviews could provide when synthesizing primary research data on genetically modified organisms (GMO) and (2) critically assess the adequacy and feasibility of systematic review for collating and analyzing data on potential impacts of GMOs in order to better inform specific steps within GMO risk assessment and risk management. The regulatory framework of the EU is used as an example, although the issues we discuss are likely to be more widely applicable.
NASA Astrophysics Data System (ADS)
Su, Peng; Khreishi, Manal A. H.; Su, Tianquan; Huang, Run; Dominguez, Margaret Z.; Maldonado, Alejandro; Butel, Guillaume; Wang, Yuhao; Parks, Robert E.; Burge, James H.
2014-03-01
A software configurable optical test system (SCOTS) based on deflectometry was developed at the University of Arizona for rapidly, robustly, and accurately measuring precision aspheric and freeform surfaces. SCOTS uses a camera with an external stop to realize a Hartmann test in reverse. With the external camera stop as the reference, a coordinate measuring machine can be used to calibrate the SCOTS test geometry to a high accuracy. Systematic errors from the camera are carefully investigated and controlled. Camera pupil imaging aberration is removed with the external aperture stop. Imaging aberration and other inherent errors are suppressed with an N-rotation test. The performance of the SCOTS test is demonstrated with the measurement results from a 5-m-diameter Large Synoptic Survey Telescope tertiary mirror and an 8.4-m diameter Giant Magellan Telescope primary mirror. The results show that SCOTS can be used as a large-dynamic-range, high-precision, and non-null test method for precision aspheric and freeform surfaces. The SCOTS test can achieve measurement accuracy comparable to traditional interferometric tests.
NASA Astrophysics Data System (ADS)
Perdelwitz, V.; Huke, P.
2018-06-01
Absorption cells filled with diatomic iodine are frequently employed as a wavelength reference for high-precision stellar radial velocity determination due to their long-term stability and low cost. Despite their widespread usage in the community, there is little documentation on how to determine the ideal operating temperature of an individual cell. We have developed a new approach to measuring the effective molecular temperature inside a gas absorption cell and searching for effects detrimental to a high-precision wavelength reference, utilizing the Boltzmann distribution of relative line depths within absorption bands of single vibrational transitions. With a high-resolution Fourier transform spectrometer, we took a series of 632 spectra at temperatures between 23 °C and 66 °C. These spectra provide a sufficient basis to test the algorithm and demonstrate the stability and repeatability of the temperature determination via molecular lines on a single iodine absorption cell. The achievable radial velocity precision σRV is found to be independent of the cell temperature, and a detailed analysis shows a wavelength dependency, which originates in the resolving power of the spectrometer in use and the signal-to-noise ratio. Two effects were found to cause apparent absolute shifts in radial velocity: a temperature-induced shift of the order of ~1 m s⁻¹ K⁻¹, and a more significant effect causing abrupt jumps of ≥50 m s⁻¹, which is attributed to the temperature crossing the dew point of the molecular iodine.
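The temperature determination described above rests on the Boltzmann distribution: within a single vibrational band, the depth of a line from a rotational level of energy E_J scales as g_J·exp(−E_J/kT), so a straight-line fit of ln(depth/g_J) against E_J has slope −1/kT. A toy illustration with synthetic line depths (the rotational constant and the code are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

K_B = 0.695  # Boltzmann constant in cm^-1 per kelvin

def fit_temperature(energies, depths, g):
    """Infer T from relative line depths, assuming depth ∝ g · exp(-E / kT)."""
    slope, _ = np.polyfit(energies, np.log(depths / g), 1)
    return -1.0 / (K_B * slope)

# Synthetic rotational band at T = 300 K (B ≈ 0.037 cm^-1, roughly iodine-like)
J = np.arange(5, 61)
g = 2 * J + 1                  # rotational statistical weights
E = 0.037 * J * (J + 1)        # level energies E_J = B·J(J+1), in cm^-1
depths = g * np.exp(-E / (K_B * 300.0))
print(fit_temperature(E, depths, g))  # recovers ~300 K
```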
The use of robotics in otolaryngology-head and neck surgery: a systematic review.
Maan, Zeshaan N; Gibbins, Nick; Al-Jabri, Talal; D'Souza, Alwyn R
2012-01-01
Robotic surgery has become increasingly used due to its enhancement of visualization, precision, and articulation. It eliminates many of the problems encountered with conventional minimally invasive techniques and has been shown to result in reduced blood loss and complications. The rise in endoscopic procedures in otolaryngology-head and neck surgery, and associated difficulties, suggests that robotic surgery may have a role to play. To determine whether robotic surgery conveys any benefits compared to conventional minimally invasive approaches, specifically looking at precision, operative time, and visualization. A systematic review of the literature with a defined search strategy. Searches of MEDLINE, EMBASE and CENTRAL using strategy: ((robot* OR (robot*AND surgery)) AND (ent OR otolaryngology)) to November 2010. Articles reviewed by authors and data compiled in tables for analysis. There were 33 references included in the study. Access and visualization were regularly mentioned as key benefits, though no objective data has been recorded in any study. Once initial setup difficulties were overcome, operative time was shown to decrease with robotic surgery, except in one controlled series of thyroid surgeries. Precision was also highlighted as an advantage, particularly in otological and skull base surgery. Postoperative outcomes were considered equivalent to or better than conventional surgery. Cost was the biggest drawback. The evidence base to date suggests there are benefits to robotic surgery in OHNS, particularly with regards to access, precision, and operative time but there is a lack of controlled, prospective studies with objective outcome measures. In addition, economic feasibility studies must be carried out before a robotic OHNS service is established. Copyright © 2012 Elsevier Inc. All rights reserved.
Schatzl, Magdalena; Hackl, Florian; Glaser, Martin; Rauter, Patrick; Brehm, Moritz; Spindlberger, Lukas; Simbula, Angelica; Galli, Matteo; Fromherz, Thomas; Schäffler, Friedrich
2017-03-15
Efficient coupling to integrated high-quality-factor cavities is crucial for the employment of germanium quantum dot (QD) emitters in future monolithic silicon-based optoelectronic platforms. We report on strongly enhanced emission from single Ge QDs into L3 photonic crystal resonator (PCR) modes based on precise positioning of these dots at the maximum of the respective mode field energy density. Perfect site control of Ge QDs grown on prepatterned silicon-on-insulator substrates was exploited to fabricate in one processing run almost 300 PCRs containing single QDs in systematically varying positions within the cavities. Extensive photoluminescence studies on this cavity chip enable a direct evaluation of the position-dependent coupling efficiency between single dots and selected cavity modes. The experimental results demonstrate the great potential of the approach allowing CMOS-compatible parallel fabrication of arrays of spatially matched dot/cavity systems for group-IV-based data transfer or quantum optical systems in the telecom regime.
Percy, Andrew J; Yang, Juncong; Hardie, Darryl B; Chambers, Andrew G; Tamura-Wells, Jessica; Borchers, Christoph H
2015-06-15
Spurred on by the growing demand for panels of validated disease biomarkers, increasing efforts have focused on advancing qualitative and quantitative tools for more highly multiplexed and sensitive analyses of a multitude of analytes in various human biofluids. In quantitative proteomics, evolving strategies involve the use of the targeted multiple reaction monitoring (MRM) mode of mass spectrometry (MS) with stable isotope-labeled standards (SIS) used for internal normalization. Using that preferred approach with non-invasive urine samples, we have systematically advanced and rigorously assessed the methodology toward the precise quantitation of the largest multiplexed panel of candidate protein biomarkers in human urine to date. The concentrations of the 136 proteins span >5 orders of magnitude (from 8.6 μg/mL to 25 pg/mL), with average CVs of 8.6% across process triplicates. Detailed here is our quantitative method, the analysis strategy, a feasibility application to prostate cancer samples, and a discussion of the utility of this method in translational studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O P; Singh, Bhupinder
2016-01-01
The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. A face-centered cubic design was selected for optimization of the volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett-Burman designs, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50-800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Revisiting the Corrosion of the Aluminum Current Collector in Lithium-Ion Batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Tianyuan; Xu, Gui-Liang; Li, Yan
The corrosion of aluminum current collectors and the oxidation of solvents at a relatively high potential have been widely investigated with an aim to stabilize the electrochemical performance of lithium-ion batteries using such components. The corrosion behavior of aluminum current collectors was revisited using a home-built high-precision electrochemical measurement system, and the impact of electrolyte components and the surface protection layer on aluminum foil was systematically studied. The electrochemical results showed that the corrosion of aluminum foil was triggered by the electrochemical oxidation of solvent molecules, like ethylene carbonate, at a relatively high potential. The organic radical cations generated from the electrochemical oxidation are energetically unstable, and readily undergo a deprotonation reaction that generates protons and promotes the dissolution of Al3+ from the aluminum foil. This new reaction mechanism can also shed light on the dissolution of transition metals at high potentials.
Revisiting the Corrosion of the Aluminum Current Collector in Lithium-Ion Batteries
Ma, Tianyuan; Xu, Gui -Liang; Li, Yan; ...
2017-02-16
The corrosion of aluminum current collectors and the oxidation of solvents at a relatively high potential have been widely investigated with an aim to stabilize the electrochemical performance of lithium-ion batteries using such components. The corrosion behavior of aluminum current collectors was revisited using a home-built high-precision electrochemical measurement system, and the impact of electrolyte components and the surface protection layer on aluminum foil was systematically studied. The electrochemical results showed that the corrosion of aluminum foil was triggered by the electrochemical oxidation of solvent molecules, like ethylene carbonate, at a relatively high potential. The organic radical cations generated from the electrochemical oxidation are energetically unstable, and readily undergo a deprotonation reaction that generates protons and promotes the dissolution of Al3+ from the aluminum foil. Finally, this new reaction mechanism can also shed light on the dissolution of transition metals at high potentials.
Revisiting the Corrosion of the Aluminum Current Collector in Lithium-Ion Batteries.
Ma, Tianyuan; Xu, Gui-Liang; Li, Yan; Wang, Li; He, Xiangming; Zheng, Jianming; Liu, Jun; Engelhard, Mark H; Zapol, Peter; Curtiss, Larry A; Jorne, Jacob; Amine, Khalil; Chen, Zonghai
2017-03-02
The corrosion of aluminum current collectors and the oxidation of solvents at a relatively high potential have been widely investigated with an aim to stabilize the electrochemical performance of lithium-ion batteries using such components. The corrosion behavior of aluminum current collectors was revisited using a home-built high-precision electrochemical measurement system, and the impact of electrolyte components and the surface protection layer on aluminum foil was systematically studied. The electrochemical results showed that the corrosion of aluminum foil was triggered by the electrochemical oxidation of solvent molecules, like ethylene carbonate, at a relatively high potential. The organic radical cations generated from the electrochemical oxidation are energetically unstable and readily undergo a deprotonation reaction that generates protons and promotes the dissolution of Al3+ from the aluminum foil. This new reaction mechanism can also shed light on the dissolution of transition metals at high potentials.
NASA Astrophysics Data System (ADS)
Singh, Upendra N.; Refaat, Tamer F.; Ismail, Syed; Petros, Mulugeta; Davis, Kenneth J.; Kawa, Stephan R.; Menzies, Robert T.
2018-04-01
Modeling of a space-based high-energy 2-μm triple-pulse Integrated Path Differential Absorption (IPDA) lidar was conducted to demonstrate carbon dioxide (CO2) measurement capability and to evaluate random and systematic errors. A high pulse energy laser and an advanced MCT e-APD detector were incorporated in this model. Projected performance shows 0.5 ppm precision and 0.3 ppm bias in low-tropospheric column CO2 mixing ratio measurements from space for 10 second signal averaging over Railroad Valley (RRV) reference surface.
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high-throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney-transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
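The probability of correct classification (PCC) used above as the objective function can be made concrete with a toy two-class Gaussian model: with p independent unit-variance features each shifted by δ between classes, the optimal linear rule achieves PCC = Φ(δ√p/2), so adding informative features raises PCC. A Monte-Carlo sketch (a generic illustration of PCC, not the authors' higher-criticism method):

```python
import numpy as np

def pcc_two_gaussians(delta, n_features, n_sim=20_000, seed=1):
    """Monte-Carlo PCC for two Gaussian classes whose means differ by `delta`
    in each of `n_features` independent unit-variance features, classified
    with the optimal linear (log-likelihood-ratio) rule."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, (n_sim, n_features))       # draws from class 0
    score = delta * x.sum(axis=1) - 0.5 * n_features * delta**2
    return np.mean(score < 0)  # fraction correctly assigned to class 0

print(pcc_two_gaussians(0.3, 5))   # ~0.63
print(pcc_two_gaussians(0.3, 50))  # ~0.86: feature augmentation raises PCC
```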
Quantifying systematics from the shear inversion on weak-lensing peak counts
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2018-06-01
Weak-lensing peak counts provide a straightforward way to constrain cosmology by linking local maxima of the lensing signal to the mass function. Recent applications to data have already been numerous and fruitful. However, the importance of understanding and dealing with systematics increases as data quality reaches an unprecedented level. One of the sources of systematics is the convergence-shear inversion. This effect, inevitable when a convergence field is derived from observations, is usually neglected by theoretical peak models. Thus, it could have an impact on cosmological results. In this paper, we study the bias from neglecting (mis-modeling) the inversion. Our tests show a small but non-negligible bias. The cosmological dependence of this bias seems to be related to the parameter Σ8 ≡ (Ωm/(1 − α))^(1−α) (σ8/α)^α, where α = 2/3. When this bias propagates to the parameter estimation, we find that constraint contours involving the dark energy equation of state can differ by 2σ. Such an effect can be even larger for future high-precision surveys, and we argue that the inversion should be properly modeled for theoretical peak models.
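For reference, the derived parameter written out is Σ8 ≡ (Ωm/(1−α))^(1−α) · (σ8/α)^α with α = 2/3. A one-liner to evaluate it (the fiducial Ωm and σ8 values below are illustrative):

```python
def sigma8_derived(omega_m, sigma8, alpha=2.0 / 3.0):
    """Sigma_8 = (Omega_m / (1 - alpha))^(1 - alpha) * (sigma_8 / alpha)^alpha."""
    return (omega_m / (1.0 - alpha)) ** (1.0 - alpha) * (sigma8 / alpha) ** alpha

print(sigma8_derived(0.3, 0.8))  # ≈ 1.09 for these fiducial values
```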
Ghetmiri, Seyed Amir; Zhou, Yiyin; Margetis, Joe; Al-Kabi, Sattar; Dou, Wei; Mosleh, Aboozar; Du, Wei; Kuchuk, Andrian; Liu, Jifeng; Sun, Greg; Soref, Richard A; Tolle, John; Naseem, Hameed A; Li, Baohua; Mortazavi, Mansour; Yu, Shui-Qing
2017-02-01
A SiGeSn/GeSn/SiGeSn single quantum well structure was grown using an industry standard chemical vapor deposition reactor with low-cost commercially available precursors. The material characterization revealed the precisely controlled material growth process. Temperature-dependent photoluminescence spectra were correlated with band structure calculation for a structure accurately determined by high-resolution x-ray diffraction and transmission electron microscopy. Based on the result, a systematic study of SiGeSn and GeSn bandgap energy separation and barrier heights versus material compositions and strain was conducted, leading to a practical design of a type-I direct bandgap quantum well.
Group Matching: Is This a Research Technique to Be Avoided?
ERIC Educational Resources Information Center
Ross, Donald C.; Klein, Donald F.
1988-01-01
The variance of the sample difference and the power of the "F" test for mean differences were studied under group matching on covariates and also under random assignment. Results shed light on systematic assignment procedures advocated to provide more precise estimates of treatment effects than simple random assignment. (TJH)
Heavy flavor results at RHIC - A comparative overview
Dong, Xin
2012-01-01
I review the latest heavy flavor measurements at the RHIC experiments. Measurements from RHIC, together with preliminary results from the LHC, offer an opportunity to systematically study the properties of the sQGP medium. Finally, I give an outlook on future precision heavy flavor measurements enabled by detector upgrades at RHIC.
Malan-Müller, Stefanie; Kilian, Sanja; van den Heuvel, Leigh L; Bardien, Soraya; Asmal, Laila; Warnich, Louise; Emsley, Robin A; Hemmings, Sîan M J; Seedat, Soraya
2016-01-01
Metabolic syndrome (MetS) is a cluster of factors that increases the risk of cardiovascular disease (CVD), one of the leading causes of mortality in patients with schizophrenia. Incidence rates of MetS are significantly higher in patients with schizophrenia compared to the general population. Several factors contribute to this high comorbidity. This systematic review focuses on genetic factors and interrogates data from association studies of genes implicated in the development of MetS in patients with schizophrenia. We aimed to identify variants that potentially contribute to the high comorbidity between these disorders. PubMed, Web of Science and Scopus databases were accessed and a systematic review of published studies was conducted. Several genes showed strong evidence for an association with MetS in patients with schizophrenia, including the fat mass and obesity associated gene (FTO), leptin and leptin receptor genes (LEP, LEPR), methylenetetrahydrofolate reductase (MTHFR) gene and the serotonin receptor 2C gene (HTR2C). Genetic association studies in complex disorders are convoluted by the multifactorial nature of these disorders, further complicating investigations of comorbidity. Recommendations for future studies include assessment of larger samples, inclusion of healthy controls, longitudinal rather than cross-sectional study designs, detailed capturing of data on confounding variables for both disorders and verification of significant findings in other populations. In future, big genomic datasets may allow for the calculation of polygenic risk scores in risk prediction of MetS in patients with schizophrenia. This could ultimately facilitate early, precise, and patient-specific pharmacological and non-pharmacological interventions to minimise CVD associated morbidity and mortality. Copyright © 2015 Elsevier B.V. All rights reserved.
Systematic characterization of maturation time of fluorescent proteins in living cells
Balleza, Enrique; Kim, J. Mark; Cluzel, Philippe
2017-01-01
Slow maturation time of fluorescent proteins limits accurate measurement of rapid gene expression dynamics and effectively reduces fluorescence signal in growing cells. We used high-precision time-lapse microscopy to characterize, at two different temperatures in E. coli, the maturation kinetics of 50 FPs that span the visible spectrum. We identified fast-maturing FPs that yield the highest signal-to-noise ratio and temporal resolution in individual growing cells. PMID:29320486
NASA Astrophysics Data System (ADS)
D'Incao, Jose P.; Willians, Jason R.
2015-05-01
Precision atom interferometers (AI) in space are a key element for several applications of interest to NASA. Our proposal for participating in the Cold Atom Laboratory (CAL) onboard the International Space Station is dedicated to mitigating the leading-order systematics expected to corrupt future high-precision AI-based measurements of fundamental physics in microgravity. One important focus of our proposal is to enhance initial state preparation for dual-species AIs. Our proposed filtering scheme uses Feshbach molecular states to create highly correlated mixtures of heteronuclear atomic gases in both their position and momentum distributions. We will detail our filtering scheme along with the main factors that determine its efficiency. We also show that the atomic and molecular heating and loss rates can be mitigated at the unique temperature and density regimes accessible on CAL. This research is supported by the National Aeronautics and Space Administration.
Towards a Precision Measurement of the Lamb Shift in Hydrogen-Like Nitrogen
NASA Astrophysics Data System (ADS)
Myers, E. G.; Tarbutt, M. R.
Measurements of the 2S1/2-2P1/2 and 2S1/2-2P3/2 transitions in moderate-Z hydrogen-like ions can test quantum-electrodynamic calculations relevant to the interpretation of high-precision spectroscopy of atomic hydrogen. There is now particular interest in testing calculations of the two-loop self-energy. Experimental conditions are favorable for a measurement of the 2S1/2-2P3/2 transition in N6+ using a carbon dioxide laser. As a preliminary experiment, we have observed the 2S1/2-2P3/2 transition in 14N6+ using a 2.5 MeV ion beam and a laser operating on the hot band of 12C16O2. The measured value of the transition centroid, 834.94(7) cm-1, agrees with, but is less precise than, theory. However, the counting rate and signal-to-background ratio obtained indicate that, with careful control of systematics, a precision test of the theory is practical. Work towards constructing such a set-up is in progress.
Precision Photometry and Astrometry from Pan-STARRS
NASA Astrophysics Data System (ADS)
Magnier, Eugene A.; Pan-STARRS Team
2018-01-01
The Pan-STARRS 3pi Survey has been calibrated with excellent precision for both astrometry and photometry. The Pan-STARRS Data Release 1, opened to the public on 2016 Dec 16, provides photometry in 5 well-calibrated, well-defined bandpasses (grizy) astrometrically registered to the Gaia frame. Comparisons with other surveys illustrate the high quality of the calibration and provide tests of remaining systematic errors in both Pan-STARRS and those external surveys. With photometry and astrometry of roughly 3 billion astronomical objects, the Pan-STARRS DR1 has substantial overlap with Gaia, SDSS, 2MASS and other surveys. I will discuss the astrometric tie between Pan-STARRS DR1 and Gaia and show comparisons between Pan-STARRS and other large-scale surveys.
Procedure for the systematic orientation of digitised cranial models. Design and validation.
Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L
2015-12-01
Comparison of bony pieces requires that they are oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner, and subsequently the positions of certain characteristic landmarks were determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieve results at least as accurate, if not more accurate, using the assisted method than those obtained using manual orientation, while inexperienced evaluators achieved more accurate orientation using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automated spatial orientations of virtual cranial models relative to standardised anatomical planes, independent of the operator and operator experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A two-phase model of resource allocation in visual working memory.
Ye, Chaoxiong; Hu, Zhonghua; Li, Hong; Ristaniemi, Tapani; Liu, Qiang; Liu, Taosheng
2017-10-01
Two broad theories of visual working memory (VWM) storage have emerged from current research: a discrete slot-based theory and a continuous resource theory. However, neither the discrete slot-based theory nor the continuous resource theory clearly stipulates how the mental commodity for VWM (discrete slots or continuous resource) is allocated. Allocation may be based on the number of items via stimulus-driven factors, or it may be based on task demands via voluntary control. Previous studies have obtained conflicting results regarding the automaticity versus controllability of such allocation. In the current study, we propose a two-phase allocation model, in which the mental commodity can be allocated only by stimulus-driven factors in the early consolidation phase. However, when there is sufficient time to complete the early phase, allocation can enter the late consolidation phase, where it can be flexibly and voluntarily controlled according to task demands. In an orientation recall task, we instructed participants to store either fewer items at high precision or more items at low precision. In 3 experiments, we systematically manipulated memory set size and exposure duration. We did not find an effect of task demands when the set size was high and the exposure duration was short. However, when we either decreased the set size or increased the exposure duration, we found a trade-off between the number and precision of VWM representations. These results can be explained by a two-phase model, which can also account for previous conflicting findings in the literature. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Tissue-Specific Analysis of Pharmacological Pathways.
Hao, Yun; Quinnies, Kayla; Realubit, Ronald; Karan, Charles; Tatonetti, Nicholas P
2018-06-19
Understanding the downstream consequences of pharmacologically targeted proteins is essential to drug design. Current approaches investigate molecular effects under tissue-naïve assumptions. Many target proteins, however, have tissue-specific expression. A systematic study connecting drugs to target pathways in human tissues in vivo is needed. We introduced a data-driven method that integrates drug-target relationships with gene expression, protein-protein interaction, and pathway annotation data. We applied our method to four independent genome-wide expression datasets and built 467,396 connections between 1,034 drugs and 954 pathways in 259 human tissues or cell lines. We validated our results using data from L1000 and the Pharmacogenomics Knowledgebase (PharmGKB), and observed high precision and recall. We predicted previously unknown anticoagulant effects of 22 compounds, tested them experimentally, and used clinical data to validate these effects retrospectively. Our systematic study provides a better understanding of the cellular response to drugs and can be applied to many research topics in systems pharmacology. © 2018 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Precise time series photometry for the Kepler-2.0 mission
NASA Astrophysics Data System (ADS)
Aigrain, S.; Hodgkin, S. T.; Irwin, M. J.; Lewis, J. R.; Roberts, S. J.
2015-03-01
The recently approved NASA K2 mission has the potential to multiply by an order of magnitude the number of short-period transiting planets found by Kepler around bright and low-mass stars, and to revolutionize our understanding of stellar variability in open clusters. However, the data processing is made more challenging by the reduced pointing accuracy of the satellite, which has only two functioning reaction wheels. We present a new method to extract precise light curves from K2 data, combining list-driven, soft-edged aperture photometry with a star-by-star correction of systematic effects associated with the drift in the roll angle of the satellite about its boresight. The systematics are modelled simultaneously with the stars' intrinsic variability using a semiparametric Gaussian process model. We test this method on a week of data collected during an engineering test in 2014 January, perform checks to verify that our method does not alter intrinsic variability signals, and compute the precision as a function of magnitude on long-cadence (30 min) and planetary transit (2.5 h) time-scales. In both cases, we reach photometric precisions close to the precision reached during the nominal Kepler mission for stars fainter than 12th magnitude, and between 40 and 80 parts per million for brighter stars. These results confirm the bright prospects for planet detection and characterization, asteroseismology and stellar variability studies with K2. Finally, we perform a basic transit search on the light curves, detecting two bona fide transit-like events, seven detached eclipsing binaries and 13 classical variables.
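The star-by-star decomposition described above (intrinsic variability plus roll-angle systematics) can be sketched as a toy additive Gaussian process in plain NumPy. The kernel forms, length-scales, and noise level below are illustrative assumptions, not the paper's actual semiparametric model:

```python
import numpy as np

def rbf(x1, x2, amp, scale):
    # Squared-exponential kernel between two 1-D coordinate vectors.
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / scale) ** 2)

def correct_roll_systematics(time, roll, flux, noise=1e-3):
    """Model flux as GP(time) + GP(roll angle) + white noise, then subtract
    the posterior mean of the roll-dependent (systematic) component."""
    K_t = rbf(time, time, amp=1.0, scale=5.0)   # slow intrinsic variability
    K_r = rbf(roll, roll, amp=1.0, scale=0.5)   # pointing-drift systematics
    K = K_t + K_r + noise**2 * np.eye(len(flux))
    alpha = np.linalg.solve(K, flux - flux.mean())
    systematics = K_r @ alpha                   # posterior mean of the roll part
    return flux - systematics
```

Because the systematic component is a single-valued function of roll angle while the intrinsic signal is not, the additive kernel can separate the two components and the roll-dependent part can be subtracted.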
Chowdhury, M A K; Sharif Ullah, A M M; Anwar, Saqib
2017-09-12
Ti6Al4V alloys are difficult-to-cut materials that have extensive applications in the automotive and aerospace industries. A great deal of effort has been made to develop and improve the machining operations of Ti6Al4V alloys. This paper presents an experimental study that systematically analyzes the effects of the machining conditions (ultrasonic power, feed rate, spindle speed, and tool diameter) on the performance parameters (cutting force, tool wear, overcut error, and cylindricity error) while drilling high-precision holes in workpieces made of Ti6Al4V alloys using rotary ultrasonic machining (RUM). Numerical results were obtained by conducting experiments following a design-of-experiments procedure. The effects of the machining conditions on each performance parameter have been determined by constructing a set of possibility distributions (i.e., trapezoidal fuzzy numbers) from the experimental data. A possibility distribution is a probability-distribution-neutral representation of uncertainty, and is effective in quantifying the uncertainty underlying physical quantities when there is a limited number of data points, which is the case here. Lastly, the optimal machining conditions have been identified using these possibility distributions.
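As a rough illustration of the possibility-distribution idea, the sketch below builds a trapezoidal fuzzy number from a small sample. The induction rule (support from the sample extremes, core from the middle half of the sorted data) is a common simple choice, not necessarily the construction used in the paper:

```python
def trapezoid(a, b, c, d):
    """Membership function of a trapezoidal fuzzy number with a <= b <= c <= d."""
    def mu(x):
        if x < a or x > d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a) if b > a else 1.0
        return (d - x) / (d - c) if d > c else 1.0
    return mu

def induce_trapezoid(data):
    """Induce (a, b, c, d) from a small sample: support = [min, max],
    core = middle half of the sorted sample (illustrative rule only)."""
    xs = sorted(data)
    n = len(xs)
    return xs[0], xs[n // 4], xs[-(n // 4) - 1], xs[-1]
```

For example, eight repeated measurements of a cutting force yield a trapezoid whose plateau covers the central half of the observations and whose sloping sides express the remaining possibility.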
Dunet, Vincent; Klein, Ran; Allenbach, Gilles; Renaud, Jennifer; deKemp, Robert A; Prior, John O
2016-06-01
Several analysis software packages for myocardial blood flow (MBF) quantification from cardiac PET studies exist, but they have not been compared using concordance analysis, which can characterize precision and bias separately. Reproducible measurements are needed for quantification to fully develop its clinical potential. Fifty-one patients underwent dynamic Rb-82 PET at rest and during adenosine stress. Data were processed with PMOD and FlowQuant (Lortie model). MBF and myocardial flow reserve (MFR) polar maps were quantified and analyzed using a 17-segment model. Comparisons used Pearson's correlation ρ (measuring precision), Bland and Altman limit-of-agreement and Lin's concordance correlation ρc = ρ·C b (C b measuring systematic bias). Lin's concordance and Pearson's correlation values were very similar, suggesting no systematic bias between software packages with an excellent precision ρ for MBF (ρ = 0.97, ρc = 0.96, C b = 0.99) and good precision for MFR (ρ = 0.83, ρc = 0.76, C b = 0.92). On a per-segment basis, no mean bias was observed on Bland-Altman plots, although PMOD provided slightly higher values than FlowQuant at higher MBF and MFR values (P < .0001). Concordance between software packages was excellent for MBF and MFR, despite higher values by PMOD at higher MBF values. Both software packages can be used interchangeably for quantification in daily practice of Rb-82 cardiac PET.
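The decomposition used above, ρc = ρ·Cb, factors Lin's concordance correlation into Pearson precision ρ and the bias-correction (accuracy) factor Cb. A minimal NumPy sketch of that calculation, using the standard population-moment formula for Lin's coefficient:

```python
import numpy as np

def lins_concordance(x, y):
    """Return (rho, rho_c, C_b): Pearson precision, Lin's concordance
    correlation, and the bias correction factor, with rho_c = rho * C_b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()           # population variances
    sxy = ((x - mx) * (y - my)).mean()    # population covariance
    rho = sxy / np.sqrt(sx2 * sy2)        # precision
    rho_c = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)  # concordance
    c_b = rho_c / rho                     # accuracy (systematic bias)
    return rho, rho_c, c_b
```

On two nearly identical series the function returns ρ ≈ 1 with Cb slightly below 1, i.e. perfect precision with a small systematic offset, which is exactly the pattern the concordance analysis above is designed to expose.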
Rathbone, John; Carter, Matt; Hoffmann, Tammy; Glasziou, Paul
2016-02-09
Bibliographic databases are the primary resource for identifying systematic reviews of health care interventions. Reliable retrieval of systematic reviews depends on the scope of indexing used by database providers. Therefore, searching one database may be insufficient, but it is unclear how many need to be searched. We sought to evaluate the performance of seven major bibliographic databases for the identification of systematic reviews for hypertension. We searched seven databases (Cochrane library, Database of Abstracts of Reviews of Effects (DARE), Excerpta Medica Database (EMBASE), Epistemonikos, Medical Literature Analysis and Retrieval System Online (MEDLINE), PubMed Health and Turning Research Into Practice (TRIP)) from 2003 to 2015 for systematic reviews of any intervention for hypertension. Citations retrieved were screened for relevance, coded and checked for screening consistency using a fuzzy text matching query. The performance of each database was assessed by calculating its sensitivity, precision, the number of missed reviews and the number of unique records retrieved. Four hundred systematic reviews were identified for inclusion from 11,381 citations retrieved from seven databases. No single database identified all the retrieved systematic reviews for hypertension. EMBASE identified the most reviews (sensitivity 69 %) but also retrieved the most irrelevant citations with 7.2 % precision (Pr). The sensitivity of the Cochrane library was 60 %, DARE 57 %, MEDLINE 57 %, PubMed Health 53 %, Epistemonikos 49 % and TRIP 33 %. EMBASE contained the highest number of unique records (n = 43). The Cochrane library identified seven unique records and had the highest precision (Pr = 30 %), followed by Epistemonikos (n = 2, Pr = 19 %). No unique records were found in PubMed Health (Pr = 24 %), DARE (Pr = 21 %), TRIP (Pr = 10 %) or MEDLINE (Pr = 10 %).
Searching EMBASE and the Cochrane library identified 88 % of all systematic reviews in the reference set, and searching the freely available databases (Cochrane, Epistemonikos, MEDLINE) identified 83 % of all the reviews. The databases were re-analysed after systematic reviews of non-conventional interventions (e.g. yoga, acupuncture) were removed. Similarly, no database identified all the retrieved systematic reviews. EMBASE identified the most relevant systematic reviews (sensitivity 73 %) but also retrieved the most irrelevant citations with Pr = 5 %. The sensitivity of the Cochrane database was 62 %, followed by MEDLINE (60 %), DARE (55 %), PubMed Health (54 %), Epistemonikos (50 %) and TRIP (31 %). The precision of the Cochrane library was the highest (20 %), followed by PubMed Health (Pr = 16 %), DARE (Pr = 13 %), Epistemonikos (Pr = 12 %), MEDLINE (Pr = 6 %), TRIP (Pr = 6 %) and EMBASE (Pr = 5 %). EMBASE contained the most unique records (n = 34). The Cochrane library identified seven unique records. The other databases held no unique records. The coverage of bibliographic databases varies considerably due to differences in their scope and content. Researchers wishing to identify systematic reviews should not rely on one database but search multiple databases.
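The sensitivity and precision figures above follow directly from the screening counts; a small helper makes the arithmetic explicit. The EMBASE-like counts in the example are illustrative back-calculations from the reported percentages, not the study's raw numbers:

```python
def search_performance(relevant_retrieved, total_retrieved, total_relevant):
    """Sensitivity (recall), precision, and missed-review count for one
    database, given its screening tallies."""
    sensitivity = relevant_retrieved / total_relevant  # share of the reference set found
    precision = relevant_retrieved / total_retrieved   # share of hits that were relevant
    missed = total_relevant - relevant_retrieved
    return sensitivity, precision, missed
```

For a database that retrieved 276 of the 400 reference reviews among 3,833 total hits (hypothetical counts consistent with a 69 % sensitivity and ~7.2 % precision), the helper reproduces both percentages and shows 124 reviews would be missed by searching that database alone.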
[High Precision Identification of Igneous Rock Lithology by Laser Induced Breakdown Spectroscopy].
Wang, Chao; Zhang, Wei-gang; Yan, Zhi-quan
2015-09-01
In the field of petroleum exploration, lithology identification of fine cuttings samples, especially high-precision identification of igneous rocks with similar properties, has become a persistent geological problem. To solve this problem, a new method is proposed based on elemental analysis by Laser-Induced Breakdown Spectroscopy (LIBS) and the Total Alkali versus Silica (TAS) diagram. Using an independent LIBS system, factors influencing the spectral signal, such as pulse energy, acquisition time delay, spectrum acquisition method and pre-ablation, were systematically investigated through contrast experiments. The best analysis conditions for igneous rock were determined: a pulse energy of 50 mJ, an acquisition time delay of 2 μs, and averaging the integrated spectra of 20 different points on the sample's surface; pre-ablation proved unsuitable for igneous rock samples. The repeatability of the spectral data was thereby improved effectively. Characteristic lines of 7 elements (Na, Mg, Al, Si, K, Ca, Fe) commonly used for lithology identification of igneous rock were determined, and igneous rock samples of different lithologies were analyzed and compared. Calibration curves of Na, K and Si were generated using national standard series of rock samples, and all the linear correlation coefficients were greater than 0.9. The accuracy of the quantitative analysis was verified against national standard samples. The element content of igneous rock was analyzed quantitatively from the calibration curves, and its lithology was identified accurately by the TAS diagram method, with an accuracy rate of 90.7%. The study indicates that LIBS can effectively achieve high-precision identification of the lithology of igneous rocks.
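The calibration-curve step described above (a least-squares line per element, judged by its linear correlation coefficient, then inverted to quantify unknowns) can be sketched as follows; the example values are synthetic, not LIBS measurements:

```python
import numpy as np

def calibration_curve(intensity, concentration):
    """Fit the least-squares line I = m*c + b from standard samples and
    return (slope, intercept, R^2), R^2 being the linear fit quality."""
    inten = np.asarray(intensity, float)
    conc = np.asarray(concentration, float)
    m, b = np.polyfit(conc, inten, 1)
    resid = inten - (m * conc + b)
    r2 = 1 - (resid ** 2).sum() / ((inten - inten.mean()) ** 2).sum()
    return m, b, r2

def quantify(i_sample, m, b):
    """Invert the calibration line to estimate element content."""
    return (i_sample - b) / m
```

With standards whose intensities are perfectly linear in concentration, R^2 comes out at 1; real LIBS curves, as reported above, need only exceed 0.9 to be usable for quantification.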
Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...
2016-06-01
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%.
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
Wilson, Lisa M; Avila Tang, Erika; Chander, Geetanjali; Hutton, Heidi E; Odelola, Olaide A; Elf, Jessica L; Heckman-Stoddard, Brandy M; Bass, Eric B; Little, Emily A; Haberl, Elisabeth B; Apelberg, Benjamin J
2012-01-01
Policymakers need estimates of the impact of tobacco control (TC) policies to set priorities and targets for reducing tobacco use. We systematically reviewed the independent effects of TC policies on smoking behavior. We searched MEDLINE (through January 2012) and EMBASE and other databases through February 2009, looking for studies published after 1989 in any language that assessed the effects of each TC intervention on smoking prevalence, initiation, cessation, or price participation elasticity. Paired reviewers extracted data from studies that isolated the impact of a single TC intervention. We included 84 studies. The strength of evidence quantifying the independent effect on smoking prevalence was high for increasing tobacco prices and moderate for smoking bans in public places and antitobacco mass media campaigns. Limited direct evidence was available to quantify the effects of health warning labels and bans on advertising and sponsorship. Studies were too heterogeneous to pool effect estimates. We found evidence of an independent effect for several TC policies on smoking prevalence. However, we could not derive precise estimates of the effects across different settings because of variability in the characteristics of the intervention, level of policy enforcement, and underlying tobacco control environment.
High-precision ground-based photometry of exoplanets
NASA Astrophysics Data System (ADS)
de Mooij, Ernst J. W.; Jayawardhana, Ray
2013-04-01
High-precision photometry of transiting exoplanet systems has contributed significantly to our understanding of the properties of their atmospheres. The best targets are the bright exoplanet systems, for which the large number of photons allows very high signal-to-noise ratios. Most current instruments are not optimised for these high-precision measurements: either they have a large read-out overhead to reduce the read noise, and/or their field of view is too limited to allow simultaneous observations of both the target and a reference star. Recently we have proposed a new wide-field imager for the Observatoire du Mont-Mégantic optimised for these bright systems (PI: Jayawardhana). The instrument has a dual-beam design and a field of view of 17' by 17'. The cameras have a read-out time of 2 seconds, significantly reducing read-out overheads. Over the past years we have gained significant experience with how to reach the high precision required for the characterisation of exoplanet atmospheres. Based on our experience we provide the following advice:
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pajek, Daniel, E-mail: dpajek@sri.utoronto.ca; Hynynen, Kullervo
2013-12-15
Purpose: Transcranial focused ultrasound is an emerging therapeutic modality that can be used to perform noninvasive neurosurgical procedures. The current clinical transcranial phased array operates at 650 kHz; however, the development of a higher frequency array would enable more precision, while reducing the risk of standing waves. However, the smaller wavelength and the skull's increased distortion at this frequency are problematic: it would require an order of magnitude more elements to create such an array. Random sparse arrays enable steering of a therapeutic array with fewer elements. However, the tradeoffs inherent in the use of sparsity in a transcranial phased array have not been systematically investigated, and so the objective of this simulation study is to investigate the effect of sparsity on transcranial arrays at a frequency of 1.5 MHz, which provides small focal spots for precise exposure control. Methods: Transcranial sonication simulations were conducted using a multilayer Rayleigh-Sommerfeld propagation model. Element size and element population were varied and the phased array's ability to steer was assessed. Results: The focal pressures decreased proportionally as elements were removed. However, off-focus hotspots were generated if a high degree of steering was attempted with very sparse arrays. A phased array consisting of 1588 elements 3 mm in size, a 10% population, was appropriate for steering up to 4 cm in all directions. However, a higher element population would be required if near-skull sonication is desired. Conclusions: This study demonstrated that the development of a sparse, hemispherical array at 1.5 MHz could enable more precision in therapies that utilize lower intensity sonications.
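The reported proportional drop in focal pressure with element count can be reproduced in a toy coherent-summation model: elements on a hemisphere, perfectly phased toward a steered focus, each contributing an amplitude proportional to 1/r. The geometry and element counts below mirror the abstract's numbers, but the model is far simpler than the study's multilayer Rayleigh-Sommerfeld simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def hemisphere_elements(n, radius=0.15):
    # Random element positions on a hemisphere of the given radius (metres).
    phi = rng.uniform(0, 2 * np.pi, n)
    cos_t = rng.uniform(0, 1, n)          # upper hemisphere only
    sin_t = np.sqrt(1 - cos_t**2)
    return radius * np.column_stack((sin_t * np.cos(phi),
                                     sin_t * np.sin(phi),
                                     cos_t))

def focal_pressure(elements, focus):
    # Coherent sum with perfect phase compensation: every element's
    # contribution arrives in phase, with amplitude falling off as 1/r.
    r = np.linalg.norm(elements - focus, axis=1)
    return np.sum(1.0 / r)

full = hemisphere_elements(1588)
sparse = full[rng.choice(len(full), size=159, replace=False)]  # ~10% population
focus = np.array([0.02, 0.0, 0.0])        # focus steered 2 cm off-centre
ratio = focal_pressure(sparse, focus) / focal_pressure(full, focus)
```

With roughly 10% of the elements populated, the focal-pressure ratio comes out close to 0.1, matching the proportional scaling reported above; what this toy model cannot show is the off-focus hotspot behaviour, which requires the full propagation simulation.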
High-precision isotopic characterization of USGS reference materials by TIMS and MC-ICP-MS
NASA Astrophysics Data System (ADS)
Weis, Dominique; Kieffer, Bruno; Maerschalk, Claude; Barling, Jane; de Jong, Jeroen; Williams, Gwen A.; Hanano, Diane; Pretorius, Wilma; Mattielli, Nadine; Scoates, James S.; Goolaerts, Arnaud; Friedman, Richard M.; Mahoney, J. Brian
2006-08-01
The Pacific Centre for Isotopic and Geochemical Research (PCIGR) at the University of British Columbia has undertaken a systematic analysis of the isotopic (Sr, Nd, and Pb) compositions and concentrations of a broad compositional range of U.S. Geological Survey (USGS) reference materials, including basalt (BCR-1, 2; BHVO-1, 2), andesite (AGV-1, 2), rhyolite (RGM-1, 2), syenite (STM-1, 2), granodiorite (GSP-2), and granite (G-2, 3). USGS rock reference materials are geochemically well characterized, but there is neither a systematic methodology nor a database for radiogenic isotopic compositions, even for the widely used BCR-1. This investigation represents the first comprehensive, systematic analysis of the isotopic composition and concentration of USGS reference materials and provides an important database for the isotopic community. In addition, the range of equipment at the PCIGR, including a Nu Instruments Plasma MC-ICP-MS, a Thermo Finnigan Triton TIMS, and a Thermo Finnigan Element2 HR-ICP-MS, permits an assessment and comparison of the precision and accuracy of isotopic analyses determined by both the TIMS and MC-ICP-MS methods (e.g., Nd isotopic compositions). For each of the reference materials, 5 to 10 complete replicate analyses provide coherent isotopic results, all with external precision below 30 ppm (2 SD) for Sr and Nd isotopic compositions (27 and 24 ppm for TIMS and MC-ICP-MS, respectively). Our results also show that the first- and second-generation USGS reference materials have homogeneous Sr and Nd isotopic compositions. Nd isotopic compositions by MC-ICP-MS and TIMS agree to within 15 ppm for all reference materials. Interlaboratory MC-ICP-MS comparisons show excellent agreement for Pb isotopic compositions; however, the reproducibility is not as good as for Sr and Nd. 
A careful, sequential leaching experiment of three first- and second-generation reference materials (BCR, BHVO, AGV) indicates that the heterogeneity in Pb isotopic compositions, and concentrations, could be directly related to contamination by the steel (mortar/pestle) used to process the materials. Contamination also accounts for the high concentrations of certain other trace elements (e.g., Li, Mo, Cd, Sn, Sb, W) in various USGS reference materials.
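The "external precision (2 SD)" figures quoted above in ppm are simply twice the relative standard deviation of the replicate analyses; a one-liner for the conversion (the replicate values in the test are hypothetical, not PCIGR data):

```python
import numpy as np

def external_precision_ppm(ratios):
    """External precision of replicate isotope-ratio measurements:
    two sample standard deviations, in parts per million of the mean."""
    r = np.asarray(ratios, float)
    return 2.0 * r.std(ddof=1) / r.mean() * 1e6
```

Five to ten replicate ratios per reference material, fed through this conversion, is all the 30 ppm criterion above amounts to.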
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Amrendra
2015-05-01
The Q-weak experiment aims to measure the weak charge of the proton with a precision of 4.2%. The proposed precision on the weak charge required a 2.5% measurement of the parity-violating asymmetry in elastic electron-proton scattering. Polarimetry was the largest experimental contribution to this uncertainty, and a new Compton polarimeter was installed in Hall C at Jefferson Lab to make the goal achievable. In this polarimeter the electron beam collides with green laser light in a low-gain Fabry-Perot cavity; the scattered electrons are detected in 4 planes of a novel diamond micro-strip detector while the back-scattered photons are detected in lead tungstate crystals. This diamond micro-strip detector is the first such device to be used as a tracking detector in a nuclear and particle physics experiment. The diamond detectors are read out using custom-built electronic modules that include a preamplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use field programmable gate array (FPGA) based general purpose logic modules for event selection and histogramming. Extensive Monte Carlo simulations and data acquisition simulations were performed to estimate the systematic uncertainties. Additionally, the Moller and Compton polarimeters were cross calibrated at low electron beam currents using a series of interleaved measurements. In this dissertation, we describe all the subsystems of the Compton polarimeter with emphasis on the electron detector. We focus on the FPGA-based data acquisition system built by the author and the data analysis methods implemented by the author. The simulations of the data acquisition and the polarimeter that helped rigorously establish the systematic uncertainties of the polarimeter are also elaborated, resulting in the first sub-1% measurement of low energy (~1 GeV) electron beam polarization with a Compton electron detector.
We have demonstrated that diamond-based micro-strip detectors can be used for tracking in a high radiation environment, which has enabled us to achieve the desired precision in the measurement of the electron beam polarization, which in turn has allowed the most precise determination of the weak charge of the proton.
Radial Velocity Detection of Extra-Solar Planetary Systems
NASA Technical Reports Server (NTRS)
Cochran, William D.
1998-01-01
The McDonald Observatory Planetary Search (MOPS) was designed to search for Jovian-mass planets in orbit around solar-type stars by making high-precision measurements of the Radial Velocity (RV) of a star, to attempt to detect the reflex orbital motion of the star around the star-planet barycenter. In our solar system, the velocity of the Sun around the Sun-Jupiter barycenter averages 12.3 m/s. The MOPS survey started operation in September 1987, and searches 36 bright, nearby, solar-type dwarfs to 10 m/s precision. The survey was started using telluric O2 absorption lines as the velocity reference metric. Observations use the McDonald Observatory 2.7-m Harlan Smith Telescope coude spectrograph with the six-foot camera. This spectrograph configuration isolates a single order of the echelle grating on a Texas Instruments 800 x 800 CCD. The telluric line method gave us a routine radial velocity precision of about 15 m/s for stars down to about 5th magnitude. However, the data obtained with this technique suffered from some source of long-term systematic error, which was probably the intrinsic velocity variability of the terrestrial atmosphere, i.e. winds. In order to eliminate this systematic error and to improve our overall measurement precision, we installed a stabilized I2 gas absorption cell as the velocity metric for the MOPS in October 1990. In use at the telescope, the cell is placed directly in front of the spectrograph entrance slit, with starlight passing through the cell. The use of this sealed, stabilized I2 cell removes potential problems with possible long-term drifts in the velocity metric. The survey now includes a sample of 36 nearby F, G, and K type stars of luminosity class V or IV-V.
Moradi, Yousef; Baradaran, Hamid Reza; Yazdandoost, Maryam; Atrak, Shahla; Kashanian, Maryam
2015-01-01
Background: Burnout is currently a major concern among physicians due to their high level of stress at work. There are several reports on various levels of burnout in residency programs due to several predisposing factors. The aim of this systematic review was to estimate a more precise prevalence of burnout among residents of obstetrics and gynecology. Methods: PubMed, Science Direct and Scopus were searched to identify peer-reviewed English-language studies published from January 1974 to 2005 reporting burnout among residents of obstetrics and gynecology. The key words used in the search were as follows: residents, gynecology and obstetrics, professional burnout, depersonalization, distress, anxiety, or emotional exhaustion. Relevant additional articles were identified from the reference lists of the retrieved articles. Results: We identified 12 studies which met our criteria. A total of 2509 participants were included in this meta-analysis. The overall prevalence rate of burnout on all three subscales was 44% (95% CI: 30 to 57) in this group of residents. Conclusion: This meta-analysis revealed a high prevalence of burnout syndrome among residents during the obstetrics and gynecology residency program. It is therefore recommended that this important issue be considered and addressed, in order to develop solutions and interventions that could improve the working conditions of medical residents. PMID:26793673
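A pooled prevalence with a 95% CI of the kind reported above can be sketched with fixed-effect inverse-variance weighting. Published meta-analyses of this type typically use random-effects models to handle between-study heterogeneity, so this is a simplified illustration with made-up study counts:

```python
import math

def pooled_prevalence(events, totals, z=1.96):
    """Fixed-effect inverse-variance pooling of study prevalences with a
    95% confidence interval (no heterogeneity term; proportions at 0 or 1
    would need a continuity correction before use)."""
    props = [e / n for e, n in zip(events, totals)]
    weights = [n / (p * (1 - p)) for p, n in zip(props, totals)]  # 1/variance
    w_sum = sum(weights)
    pooled = sum(w * p for w, p in zip(weights, props)) / w_sum
    se = math.sqrt(1.0 / w_sum)
    return pooled, pooled - z * se, pooled + z * se
```

Two hypothetical studies reporting 40/100 and 60/100 burnout cases pool to 50% with a CI of roughly 43% to 57%; widening CIs like the 30 to 57 range above reflect fewer participants and greater between-study variability.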
Chaos and the Double Function of Communication
NASA Astrophysics Data System (ADS)
Aula, P. S.
Since at least the era of the hypodermic needle model, communication researchers have systematically sought means to explain, control and predict communication behavior between people. For many reasons, the accuracy of the constructed models, and of the studies based upon them, has remained low. It can be argued that the inaccuracy of communication models, and thus the poor predictability of everyday interaction, originates in the innate chaos of these processes, apparent beneath their behavior. This leads to the argument that communication systems which appear stable, and which have precisely identical starting points and identical operating environments, can nevertheless behave in exceptional and completely different ways, despite the fact that their behavior is governed by the same rules or laws.
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder from the non-null interferometer, obtained by the approach of error storage subtraction. Experimental results show that, after the systematic error is removed from the testing result of the non-null aspheric testing system, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.
Planck 2015 results: XI. CMB power spectra, likelihoods, and robustness of parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghanim, N.; Arnaud, M.; Ashdown, M.
This study presents the Planck 2015 likelihoods, statistical descriptions of the 2-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (ℓ < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, n_s, is confirmed smaller than unity at more than 5σ from Planck alone.
We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck’s wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology this requires τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK² level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.
Can Systematic Reviews Inform GMO Risk Assessment and Risk Management?
Kohl, Christian; Frampton, Geoff; Sweet, Jeremy; Spök, Armin; Haddaway, Neal Robert; Wilhelm, Ralf; Unger, Stefan; Schiemann, Joachim
2015-01-01
Systematic reviews represent powerful tools to identify, collect, synthesize, and evaluate primary research data on specific research questions in a highly standardized and reproducible manner. They enable the defensible synthesis of outcomes by increasing precision and minimizing bias whilst ensuring transparency of the methods used. This makes them especially valuable to inform evidence-based risk analysis and decision making in various topics and research disciplines. Although seen as a “gold standard” for synthesizing primary research data, systematic reviews are not without limitations, as they are often costly, labor-intensive and time-consuming, and the utility of synthesis outcomes depends upon the availability of sufficient and robust primary research data. In this paper, we (1) consider the added value systematic reviews could provide when synthesizing primary research data on genetically modified organisms (GMO) and (2) critically assess the adequacy and feasibility of systematic review for collating and analyzing data on potential impacts of GMOs in order to better inform specific steps within GMO risk assessment and risk management. The regulatory framework of the EU is used as an example, although the issues we discuss are likely to be more widely applicable. PMID:26322307
Precise Lamb Shift Measurements in Hydrogen-Like Heavy Ions—Status and Perspectives
NASA Astrophysics Data System (ADS)
Andrianov, V.; Beckert, K.; Bleile, A.; Chatterjee, Ch.; Echler, A.; Egelhof, P.; Gumberidze, A.; Ilieva, S.; Kiselev, O.; Kilbourne, C.; Kluge, H.-J.; Kraft-Bermuth, S.; McCammon, D.; Meier, J. P.; Reuschl, R.; Stöhlker, T.; Trassinelli, M.
2009-12-01
The precise determination of the energy of the Lyman α1 and α2 lines in hydrogen-like heavy ions provides a sensitive test of quantum electrodynamics in very strong Coulomb fields. For the first time, a calorimetric low-temperature detector was applied in an experiment to precisely determine the transition energy of the Lyman lines of lead ions 207Pb81+ at the Experimental Storage Ring (ESR) at GSI. The detectors consist of silicon thermistors, provided by the NASA/Goddard Space Flight Center, and Pb or Sn absorbers to obtain high quantum efficiency in the energy range of 40-80 keV, where the Doppler-shifted Lyman lines are located. The measured energy of the Lyman α1 line, E(Ly-α1, 207Pb81+) = (77937±12stat±23syst) eV, agrees within errors with theoretical predictions. The systematic error is mainly due to uncertainties in the non-linear energy calibration of the detectors as well as the relative position of detector and gas-jet target.
QED Effects in Molecules: Test on Rotational Quantum States of H2
NASA Astrophysics Data System (ADS)
Salumbides, E. J.; Dickenson, G. D.; Ivanov, T. I.; Ubachs, W.
2011-07-01
Quantum electrodynamic effects have been systematically tested in the progression of rotational quantum states in the X¹Σg⁺, v = 0 vibronic ground state of molecular hydrogen. High-precision Doppler-free spectroscopy of the EF¹Σg⁺–X¹Σg⁺ (0,0) band was performed with 0.005 cm⁻¹ accuracy on rotationally hot H2 (with rotational quantum states J up to 16). QED and relativistic contributions to rotational level energies as high as 0.13 cm⁻¹ are extracted, and are in perfect agreement with recent calculations of QED and high-order relativistic effects for the H2 ground state.
Fiber Mode Scrambler for the Subaru Infrared Doppler Instrument (IRD)
NASA Astrophysics Data System (ADS)
Ishizuka, Masato; Kotani, Takayuki; Nishikawa, Jun; Kurokawa, Takashi; Mori, Takahiro; Kokubo, Tsukasa; Tamura, Motohide
2018-06-01
We report the results of fiber mode scrambler experiments for the Infra-Red Doppler instrument (IRD) on the Subaru 8.2-m telescope. IRD is a fiber-fed, high-precision radial velocity (RV) instrument to search for exoplanets around nearby M dwarfs at near-infrared wavelengths. It is a high-resolution spectrograph with an echelle grating. The expected RV measurement precision is ∼1 m s‑1 with a state-of-the-art laser frequency comb for wavelength calibration. In IRD observations, one of the most significant instrumental noise sources is a change in the intensity distribution at the multi-mode fiber exit, which degrades RV measurement precision. To stabilize the intensity distribution at the fiber exit, the introduction of a fiber mode scrambler is mandatory. Several kinds of mode scramblers have been suggested in previous research, but it is necessary to determine the most appropriate mode scrambler system for IRD. Thus, we conducted systematic measurements of the performance of a variety of mode scramblers, both static and dynamic. We tested multi-mode fibers of various lengths, an octagonal fiber, a double fiber scrambler, and two kinds of dynamic scramblers, as well as their combinations. We report the performances of these mode scramblers and propose candidate mode scrambler systems for IRD.
NASA Technical Reports Server (NTRS)
Richards, Paul L.
1998-01-01
Precise measurements of the angular power spectrum of the Cosmic Microwave Background (CMB) anisotropy will revolutionize cosmology. These measurements will discriminate between competing cosmological models and, if the standard inflationary scenario is correct, will determine each of the fundamental cosmological parameters with high precision. The astrophysics community has recognized this potential: the orbital experiments MAP and PLANCK have been approved to measure CMB anisotropy. Balloon-borne experiments can realize much of this potential before these missions are launched. Additionally, properly designed balloon-borne experiments can complement MAP in frequency and angular resolution and can give the first realistic test of the instrumentation proposed for the high frequency instrument on PLANCK. The MAXIMA experiment is part of the MAXIMA/BOOMERANG collaboration, which is making balloon observations of the angular power spectrum of the Cosmic Microwave Background from l = 10 to l = 800. These experiments are designed to use the benefits of both North American and Antarctic long-duration ballooning to full advantage. We have developed several new technologies that together allow the power spectrum to be measured with an unprecedented combination of angular resolution, beam throw, sensitivity, sky coverage and control of systematic effects. These technologies are the basis for the high frequency instrument for the PLANCK mission. Our measurements will strongly discriminate between models of the origin and evolution of structure in the universe and, for many models, will determine the value of the basic cosmological parameters to high precision.
Centroiding Experiment for Determining the Positions of Stars with High Precision
NASA Astrophysics Data System (ADS)
Yano, T.; Araki, H.; Hanada, H.; Tazawa, S.; Gouda, N.; Kobayashi, Y.; Yamada, Y.; Niwa, Y.
2010-12-01
We have experimented with determining the positions of star images on a detector with a precision as high as 10 microarcseconds, as required by the space astrometry satellite JASMINE. In order to accomplish such precision, we take the following two procedures. (1) We determine the positions of star images on the detector with a precision of about 0.01 pixel for one measurement, using an algorithm that estimates them from photon-weighted means of the star images. (2) We determine the positions of star images with a precision of about 0.0001-0.00001 pixel, which corresponds to 10 microarcseconds, using a large amount of data, over 10000 measurements; that is, the error of the positions decreases with the amount of data. Here, we note that procedure 2 cannot be accomplished if the systematic error in our data is not adequately excluded, even when we use a large amount of data. We first show the method for determining the positions of star images on the detector using photon-weighted means of star images. This algorithm, used in this experiment, is very useful because it is easy to calculate the photon-weighted mean from the data, which is very important when treating a large amount of data. Furthermore, we need not assume the shape of the point spread function in deriving the centroid of star images. Second, we show the results of the laboratory experiment on the precision of determining the positions of star images. We find that the precision of the estimated positions of star images on the detector is below 0.01 pixel for one measurement (procedure 1). We also find that the precision of the positions of star images reaches about 0.0001 pixel using about 10000 measurements (procedure 2).
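The photon-weighted-mean procedure described above can be sketched numerically. This is a minimal illustration with entirely synthetic photon data (the true position, photon counts and PSF width are invented), showing that the centroid is simply the mean photon position and that averaging many independent measurements shrinks the statistical error, provided systematic errors are excluded:

```python
import random
import statistics

random.seed(0)
TRUE_CENTER = 12.345  # true star position in pixels (invented for illustration)

def measure_centroid(n_photons=2000, psf_sigma=1.5):
    """One measurement: the photon-weighted mean of photon arrival positions.
    No point-spread-function shape needs to be assumed to form the centroid."""
    photons = [random.gauss(TRUE_CENTER, psf_sigma) for _ in range(n_photons)]
    return statistics.mean(photons)

# Scatter of a single measurement: roughly psf_sigma / sqrt(n_photons).
single = [measure_centroid() for _ in range(60)]

# Averaging M measurements reduces the error by a further factor ~ 1/sqrt(M).
averaged = [statistics.mean(measure_centroid() for _ in range(16))
            for _ in range(60)]

print(statistics.stdev(single), statistics.stdev(averaged))
```

The second printed scatter is about a quarter of the first, consistent with the 1/sqrt(M) reduction that the experiment relies on when accumulating over 10000 measurements.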
Yousefi-Nooraie, Reza; Irani, Shirin; Mortaz-Hedjri, Soroush; Shakiba, Behnam
2013-10-01
The aim of this study was to compare the performance of three search methods in the retrieval of relevant clinical trials from PubMed to answer specific clinical questions. The included studies of a sample of 100 Cochrane reviews which were recorded in PubMed were considered as the reference standard. The search queries were formulated based on the systematic review titles. Precision, recall and the number of retrieved records were compared for three strategies: limiting the results to the clinical trial publication type, and using the sensitive and the specific clinical queries filters. The number of keywords and the presence of a specific intervention or syndrome name among the search keywords were used in a model to predict recall and precision. The clinical queries-sensitive search strategy retrieved the largest number of records (33) and had the highest recall (41.6%) and lowest precision (4.8%). The presence of a specific intervention name was the only significant predictor of all recalls and precisions (P = 0.016). The recall and precision of combining simple clinical search queries with methodological search filters to find clinical trials on various subjects were considerably low. The limit-field strategy yielded higher precision and fewer retrieved records with approximately similar recall, compared with the clinical queries-sensitive strategy. The presence of a specific intervention name in the search keywords increased both recall and precision. © 2010 John Wiley & Sons Ltd.
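For readers unfamiliar with the two metrics used throughout these retrieval studies, recall and precision against a gold standard of included studies can be computed as below. This is a minimal sketch; the record identifiers and counts are invented for illustration and are not the study's data:

```python
def recall_precision(retrieved: set, relevant: set) -> tuple:
    """Recall = fraction of relevant records retrieved;
    precision = fraction of retrieved records that are relevant."""
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

# Toy example: 1000 retrieved records, a 12-record gold standard, 5 found.
relevant = {f"pmid{i}" for i in range(12)}
retrieved = {f"pmid{i}" for i in range(5)} | {f"noise{i}" for i in range(995)}

r, p = recall_precision(retrieved, relevant)
print(f"recall={r:.1%}, precision={p:.2%}")  # recall=41.7%, precision=0.50%
```

The asymmetry visible here, in which a sensitive strategy buys recall at the cost of drowning precision in non-relevant hits, is exactly the trade-off the studies above quantify.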
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, and to increase the precision, efficiency and economy of the snail survey. A 50 m × 50 m quadrat in the Chayegang marshland near Henghu farm in the Poyang Lake region was selected as the experimental field, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling were applied to calculate the minimum sample size, the relative sampling error and the absolute sampling error. The minimum sample sizes for the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
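The advantage of stratified sampling reported above can be illustrated with a small simulation. The data are entirely synthetic (stratum means, spreads and sizes are invented): when the stratum variable, such as altitude, explains part of the variance in snail density, the stratified estimator of the mean shows a visibly smaller sampling error than simple random sampling at the same total sample size:

```python
import random
import statistics

random.seed(1)

# Synthetic snail densities in three equal-size altitude strata whose means differ.
strata = {
    "low":  [random.gauss(2.0, 0.5) for _ in range(1000)],
    "mid":  [random.gauss(5.0, 0.5) for _ in range(1000)],
    "high": [random.gauss(9.0, 0.5) for _ in range(1000)],
}
population = [x for values in strata.values() for x in values]

def srs_mean(n):
    """Simple random sample of n units from the whole field."""
    return statistics.mean(random.sample(population, n))

def stratified_mean(n_per_stratum):
    """Equal-size strata, so the overall mean is the average of stratum means."""
    return statistics.mean(
        statistics.mean(random.sample(s, n_per_stratum)) for s in strata.values()
    )

# Repeat each design 200 times and compare the spread of the estimates.
srs = [srs_mean(300) for _ in range(200)]
strat = [stratified_mean(100) for _ in range(200)]
print(statistics.stdev(srs), statistics.stdev(strat))
```

Because the between-stratum variance dominates here, the stratified design's error is several times smaller, mirroring the study's finding of a much lower absolute sampling error (0.047 8) for stratified sampling.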
Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.
Ender, Andreas; Mehl, Albert
2013-02-01
A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
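The trueness/precision distinction used in this study can be made concrete with a toy computation. The per-scan deviation values below are invented, not the study's data: trueness is measured against the reference model, while precision is the mutual agreement of repeated scans within a group:

```python
import itertools
import statistics

# Invented mean surface deviations of five repeated scans from the
# reference model, in micrometers.
scans = [20.1, 22.5, 18.9, 21.3, 19.8]

# Trueness: average deviation from the reference (systematic offset).
trueness = statistics.mean(scans)

# Precision: average pairwise disagreement between repeated scans.
pairwise = [abs(a - b) for a, b in itertools.combinations(scans, 2)]
precision = statistics.mean(pairwise)

print(round(trueness, 2), round(precision, 2))  # → 20.52 1.74
```

A method can thus be precise (scans agree with each other) yet not true (all scans share the same offset from the reference), which is why the study evaluates both quantities separately.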
Renne, Walter; Ludlow, Mark; Fryml, John; Schurch, Zach; Mennito, Anthony; Kessler, Ray; Lauer, Abigail
2017-07-01
As digital impressions become more common and more digital impression systems are released onto the market, it is essential to systematically and objectively evaluate their accuracy. The purpose of this in vitro study was to evaluate and compare the trueness and precision of 6 intraoral scanners and 1 laboratory scanner in both sextant and complete-arch scenarios. Furthermore, time of scanning was evaluated and correlated with trueness and precision. A custom complete-arch model was fabricated with a refractive index similar to that of tooth structure. Seven digital impression systems were used to scan the custom model for both posterior sextant and complete-arch scenarios. Analysis was performed using 3-dimensional metrology software to measure discrepancies between the master model and experimental casts. Of the intraoral scanners, the Planscan was found to have the best trueness and precision while the 3Shape Trios was found to have the poorest for sextant scanning (P<.001). The order of trueness for complete-arch scanning was as follows: 3Shape D800 >iTero >3Shape TRIOS 3 >Carestream 3500 >Planscan >CEREC Omnicam >CEREC Bluecam. The order of precision for complete-arch scanning was as follows: CS3500 >iTero >3Shape D800 >3Shape TRIOS 3 >CEREC Omnicam >Planscan >CEREC Bluecam. For the secondary outcome evaluating the effect of time on trueness and precision, the complete-arch scan time was highly correlated with both trueness (r=0.771) and precision (r=0.771). For sextant scanning, the Planscan was found to be the most precise and true scanner. For complete-arch scanning, the 3Shape Trios was found to have the best balance of speed and accuracy. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
U-Th-Pb isotopic systematics of lunar norite 78235
NASA Technical Reports Server (NTRS)
Premo, W. R.; Tatsumoto, M.
1991-01-01
A pristine high-Mg noritic cumulate thought to be relict deep-seated lunar crust is studied with an eye to obtaining evidence of the initial Pb isotopic composition and U/Pb ratios of early lunar magma sources, and possibly of a primary magma ocean. A leaching procedure was conducted on polymineralic separates to ensure the removal of secondary Pb components. The Pb from the leached separates does not form a linear trend on the Pb-Pb diagram, indicating open-system behavior, either from mixtures of Pb or from postcrystallization disturbances. Calculated initial Pb compositions and corresponding U-238/Pb-204 (mu) values are presented, assuming reasonably precise radiometric ages from the literature for norite 78236. The results obtained support the contention that the high-Mg suite rocks are coeval with the ferroan anorthosites, both being produced during the earliest stages of lunar evolution.
Hassanpour, Gholmreza; Mohebali, Mehdi; Zeraati, Hojjat; Raeisi, Ahmad; Keshavarz, Hossein
2017-06-01
The objective of this study was to find an appropriate approach to asymptomatic malaria in an elimination setting through a systematic review. A broad search was conducted to find articles with the word 'malaria' in their titles and 'asymptomatic' or 'submicroscopic' in their texts, irrespective of the type of study conducted. The Cochrane, Medline/PubMed and Scopus databases, as well as Google Scholar, were systematically searched for English articles and reports, and Iran's databases (IranMedex, SID and Magiran) were searched for Persian reports and articles, with no time limitation. A study was qualitatively summarized if it contained precise information on the role of asymptomatic malaria in the elimination phase. Six articles were selected from the initial 2645 articles. The results all re-emphasize the significance of asymptomatic malaria in the elimination phase, and emphasize the need for diagnostic tests of higher sensitivity to locate these patients and for interventions to reduce the asymptomatic parasite reservoir, particularly in regions of low transmission. However, we may infer from the results that the current evidence cannot yet specify an accurate strategy on the role of asymptomatic malaria in the elimination phase. To eliminate malaria, alongside vector control and the treatment of symptomatic and asymptomatic patients, both active and passive methods of case detection need to be employed. The precise monitoring of asymptomatic individuals and submicroscopic cases of malaria through molecular assays and valid serological methods, especially in regions with seasonal and low transmission, can be very helpful in this phase.
Xu, Man; Wachters, Arthur J H; van Deelen, Joop; Mourad, Maurice C D; Buskens, Pascal J P
2014-03-10
We present a systematic study of the effect of variation of the zinc oxide (ZnO) and copper indium gallium (di)selenide (CIGS) layer thickness on the absorption characteristics of CIGS solar cells using a simulation program based on finite element method (FEM). We show that the absorption in the CIGS layer does not decrease monotonically with its layer thickness due to interference effects. Ergo, high precision is required in the CIGS production process, especially when using ultra-thin absorber layers, to accurately realize the required thickness of the ZnO, cadmium sulfide (CdS) and CIGS layer. We show that patterning the ZnO window layer can strongly suppress these interference effects allowing a higher tolerance in the production process.
Molecular engineering of colloidal liquid crystals using DNA origami
NASA Astrophysics Data System (ADS)
Siavashpouri, Mahsa; Wachauf, Christian; Zakhary, Mark; Praetorius, Florian; Dietz, Hendrik; Dogic, Zvonimir
Understanding the microscopic origin of the cholesteric phase remains a foundational, yet unresolved, problem in the field of liquid crystals. The lack of an experimental model system that allows for systematic control of the microscopic chiral structure has made this problem difficult to investigate for many years. Here, using DNA origami technology, we systematically vary the chirality of colloidal particles with molecular precision and establish a quantitative relationship between the microscopic structure of the particles and the macroscopic cholesteric pitch. Our study presents a new methodology for predicting the bulk behavior of diverse phases based on the microscopic architectures of the constituent molecules.
[Core muscle chains activation during core exercises determined by EMG-a systematic review].
Rogan, Slavko; Riesen, Jan; Taeymans, Jan
2014-10-15
Good core muscle strength is essential for daily life and sports activities. However, the mechanism by which core muscles can be effectively activated by exercise is not yet precisely described in the literature. The aim of this systematic review was to evaluate the rate of activation, as measured by electromyography, of the ventral, lateral and dorsal core muscle chains during core (trunk) muscle exercises. A total of 16 studies were included. Exercises with a vertical starting position, such as the deadlift or squat, activated significantly more core muscles than exercises in a horizontal initial position.
Vestergaard, Rikke Falsig; Søballe, Kjeld; Hasenkam, John Michael; Stilling, Maiken
2018-05-18
A small, but unstable, saw-gap may hinder bone-bridging and induce the development of painful sternal dehiscence. We propose the use of Radiostereometric Analysis (RSA) for the evaluation of sternal instability and present a method validation. Four bone analogs (phantoms) were sternotomized and tantalum beads were inserted in each half. The models were reunited with wire cerclage and placed in a radiolucent separation device. Stereoradiographs (n = 48) of the phantoms in 3 positions were recorded at 4 imposed separation points. The accuracy and precision were compared statistically and presented as translations along the 3 orthogonal axes. Seven sternotomized patients were evaluated for clinical RSA precision by double-examination stereoradiographs (n = 28). In the phantom study, we found no systematic error (p > 0.3) between the three phantom positions, and the precision for evaluation of sternal separation was 0.02 mm. Phantom accuracy was mean 0.13 mm (SD 0.25). In the clinical study, we found a detection limit of 0.42 mm for sternal separation and of 2 mm for anterior-posterior dislocation of the sternal halves for the individual patient. RSA is a precise, low-dose imaging modality feasible for clinical evaluation of sternal stability in research. ClinicalTrials.gov Identifier: NCT02738437, retrospectively registered.
Carpenter, Danielle; Walker, Susan; Prescott, Natalie; Schalkwijk, Joost; Armour, John Al
2011-08-18
Copy number variation (CNV) contributes to the variation observed between individuals and can influence human disease progression, but the accurate measurement of individual copy numbers is technically challenging. In the work presented here we describe a modification to a previously described paralogue ratio test (PRT) method for genotyping the CCL3L1/CCL4L1 copy variable region, which we use to ascertain CCL3L1/CCL4L1 copy number in 1581 European samples. As the products of CCL3L1 and CCL4L1 potentially play a role in autoimmunity we performed case control association studies with Crohn's disease, rheumatoid arthritis and psoriasis clinical cohorts. We evaluate the PRT methodology used, paying particular attention to accuracy and precision, and highlight the problems of differential bias in copy number measurements. Our PRT methods for measuring copy number were of sufficient precision to detect very slight but systematic differential bias between results from case and control DNA samples in one study. We find no evidence for an association between CCL3L1 copy number and Crohn's disease, rheumatoid arthritis or psoriasis. Differential bias of this small magnitude, but applied systematically across large numbers of samples, would create a serious risk of false positive associations in copy number, if measured using methods of lower precision, or methods relying on single uncorroborated measurements. In this study the small differential bias detected by PRT in one sample set was resolved by a simple pre-treatment by restriction enzyme digestion.
PMID:21851606
NASA Technical Reports Server (NTRS)
Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.
2016-01-01
Wide-field (greater than or approximately equal to 100 square degrees) hard X-ray coded-aperture telescopes with high angular resolution (approximately 2 arcminutes or better) will enable a wide range of time-domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes, enabling rapid follow-up studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or to model the systematic uncertainty on a timescale over which the model remains invariant. We introduce two new techniques to improve detection sensitivity, designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme that utilizes continuous scan or dithering operations, and a Poisson-statistics-based probabilistic approach that evaluates the significance of source detection without background subtraction. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
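The Poisson-based significance evaluation mentioned above can be sketched as follows. This is an illustrative re-implementation of the general statistical idea, not the authors' code; the background expectation and observed counts are invented numbers.

```python
import math

def poisson_detection_pvalue(n_obs, mu_bkg):
    """Probability of seeing n_obs or more counts in a detector pixel
    if only the expected background mu_bkg is present. A small p-value
    signals a significant source detection without any background
    subtraction step."""
    # P(N >= n_obs) = 1 - sum_{k=0}^{n_obs-1} e^{-mu} mu^k / k!
    cdf = sum(math.exp(-mu_bkg) * mu_bkg**k / math.factorial(k)
              for k in range(n_obs))
    return 1.0 - cdf

# Example: 5 counts in a pixel where only ~1 background count is expected
p = poisson_detection_pvalue(5, 1.0)
```

For sparse count maps this avoids estimating and subtracting a per-pixel background model, which is exactly the regime the abstract describes.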
Spectral purity study for IPDA lidar measurement of CO2
NASA Astrophysics Data System (ADS)
Ma, Hui; Liu, Dong; Xie, Chen-Bo; Tan, Min; Deng, Qian; Xu, Ji-Wei; Tian, Xiao-Min; Wang, Zhen-Zhu; Wang, Bang-Xin; Wang, Ying-Jian
2018-02-01
Space-borne integrated path differential absorption (IPDA) lidar, designed as a next-generation measurement, is expected to provide high-sensitivity, globally covering observations of carbon dioxide (CO2). A stringent precision for space-borne CO2 data, for example 1 ppm or better, is required to address the largest number of carbon cycle science questions. Spectral purity, defined as the ratio of the effective absorbed energy to the total energy transmitted, is one of the most important system parameters of IPDA lidar and directly influences the precision of the CO2 measurement. Because the column-averaged dry-air mixing ratio of CO2 is inferred from a comparison of the two echo pulse signals, a laser output accompanied by unwanted spectrally broadband background radiation poses a significant systematic error. In this study, the spectral energy density line shape and the spectral impurity line shape are modeled as Lorentzian for the simulation, and the impurity component is assumed to be unabsorbed by CO2. An error equation is deduced from IPDA detection theory to calculate the systematic error caused by spectral impurity. For a spectral purity of 99%, the induced error can reach up to 8.97 ppm.
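The mechanism of the impurity-induced bias can be illustrated with a minimal sketch. This is not the paper's error equation: the optical depth, background mixing ratio, and the assumption that the off-line channel is unabsorbed are all hypothetical simplifications. A fraction (1 - purity) of the transmitted on-line energy passes through unabsorbed, inflating the on-line echo and biasing the differential absorption optical depth (DAOD) low.

```python
import math

def xco2_bias_ppm(tau_on, purity, xco2_true_ppm=400.0):
    """Systematic error in retrieved CO2 caused by spectral impurity.
    tau_on: one-way on-line optical depth (off-line absorption taken as ~0).
    purity: fraction of transmitted energy inside the absorption line."""
    p_off = 1.0                          # normalized off-line echo
    p_on = math.exp(-2.0 * tau_on)       # two-way on-line absorption
    # The impure fraction is unabsorbed by CO2 and adds to the on-line echo.
    p_on_measured = purity * p_on + (1.0 - purity) * p_off
    daod_true = 0.5 * math.log(p_off / p_on)
    daod_meas = 0.5 * math.log(p_off / p_on_measured)
    # The retrieved mixing ratio scales linearly with the DAOD.
    return xco2_true_ppm * (daod_meas - daod_true) / daod_true

bias = xco2_bias_ppm(tau_on=0.5, purity=0.99)
```

With these illustrative numbers a 99% purity produces a low bias of several ppm, the same order of magnitude as the 8.97 ppm quoted above.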
The role of social media in online weight management: systematic review.
Chang, Tammy; Chopra, Vineet; Zhang, Catherine; Woolford, Susan J
2013-11-28
Social media applications are promising adjuncts to online weight management interventions through facilitating education, engagement, and peer support. However, the precise impact of social media on weight management is unclear. The objective of this study was to systematically describe the use and impact of social media in online weight management interventions. PubMed, PsycINFO, EMBASE, Web of Science, and Scopus were searched for English-language studies published through March 25, 2013. Additional studies were identified by searching bibliographies of electronically retrieved articles. Randomized controlled trials of online weight management interventions that included a social media component for individuals of all ages were selected. Studies were evaluated using 2 systematic scales to assess risk of bias and study quality. Of 517 citations identified, 20 studies met eligibility criteria. All study participants were adults. Because the included studies varied greatly in study design and reported outcomes, meta-analysis of interventions was not attempted. Although message boards and chat rooms were the most common social media component included, their effect on weight outcomes was not reported in most studies. Only one study measured the isolated effect of social media. It found greater engagement of participants, but no difference in weight-related outcomes. In all, 65% of studies were of high quality; 15% of studies were at low risk of bias. Despite the widespread use of social media, few studies have quantified the effect of social media in online weight management interventions; thus, its impact is still unknown. Although social media may play a role in retaining and engaging participants, studies that are designed to measure its effect are needed to understand whether and how social media may meaningfully improve weight management.
PMID:24287455
Automated estimation of leaf distribution for individual trees based on TLS point clouds
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus
2017-04-01
Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning - TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High-resolution point clouds from TLS already represent single leaves, which can be used for a more precise estimation of Leaf Area Index (LAI) and for more accurate biomass estimation. However, the methodology for extracting single leaves from unclassified point clouds of individual trees is still missing. The aim of this study is to present a novel segmentation approach in order to extract single leaves and derive features related to leaf morphology (such as area, slope, length and width) of each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned with a discrete-return-recording Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign, a reference dataset was measured in parallel with the scanning: 230 leaves were randomly collected around the lower branches of the trees and photos were taken. The developed workflow steps were the following: in the first step, normal vectors and eigenvalues were calculated based on a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e., ghost points) were removed. After that, region-growing segmentation based on curvature and the angles between normal vectors was applied to the filtered point cloud. To each segment a RANSAC plane-fitting algorithm was applied in order to extract segment-based normal vectors. Using the related features of the calculated segments, the stem and branches were labeled as non-leaf and all other segments were classified as leaf.
The different segmentation parameters were validated as follows: i) the summed area of the collected leaves was compared with that of the point cloud, ii) the segmented leaf length-width ratios were compared, and iii) the distributions of leaf area for the segmented and the reference leaves were compared, and the ideal parameter set was determined. The results show that the leaves can be captured with the developed workflow and that the slope can be determined robustly for the segmented leaves. However, area, length and width values depend systematically on the angle and distance from the scanner. To correct for this systematic underestimation, more systematic measurements or LiDAR simulation are required for further detailed analysis. The results of the leaf segmentation algorithm show high potential for generating more precise tree models with correctly located leaves, in order to provide more precise input models for biological modeling of LAI or for atmospheric correction studies. The presented workflow can also be used to monitor changes in leaf angle due to sun irradiation, water balance, and the day-night rhythm.
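The RANSAC plane-fitting step of the workflow above can be sketched in a few lines. This is an illustrative re-implementation, not the authors' code; the distance threshold, iteration count, and synthetic leaf segment are arbitrary choices.

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.01,
                 rng=np.random.default_rng(0)):
    """Fit a plane to a 3-D point segment; returns (unit normal, point on plane)."""
    best_normal, best_point, best_inliers = None, None, -1
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:               # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = int((dist < threshold).sum())
        if inliers > best_inliers:
            best_normal, best_point, best_inliers = normal, sample[0], inliers
    return best_normal, best_point

# Synthetic near-planar "leaf" segment (z ~ 0) with two outlier points
gen = np.random.default_rng(1)
leaf = np.column_stack([gen.uniform(0, 1, 200),
                        gen.uniform(0, 1, 200),
                        gen.normal(0, 0.002, 200)])
leaf = np.vstack([leaf, [[0.5, 0.5, 0.5], [0.2, 0.8, 0.7]]])
normal, origin = ransac_plane(leaf)
```

The recovered segment normal can then be used to derive per-leaf slope, as in the study.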
Pulse energy dependence of subcellular dissection by femtosecond laser pulses
NASA Technical Reports Server (NTRS)
Heisterkamp, A.; Maxwell, I. Z.; Mazur, E.; Underwood, J. M.; Nickerson, J. A.; Kumar, S.; Ingber, D. E.
2005-01-01
Precise dissection of cells with ultrashort laser pulses requires a clear understanding of how the onset and extent of ablation (i.e., the removal of material) depends on pulse energy. We carried out a systematic study of the energy dependence of the plasma-mediated ablation of fluorescently-labeled subcellular structures in the cytoskeleton and nuclei of fixed endothelial cells using femtosecond, near-infrared laser pulses focused through a high-numerical aperture objective lens (1.4 NA). We find that the energy threshold for photobleaching lies between 0.9 and 1.7 nJ. By comparing the changes in fluorescence with the actual material loss determined by electron microscopy, we find that the threshold for true material ablation is about 20% higher than the photobleaching threshold. This information makes it possible to use the fluorescence to determine the onset of true material ablation without resorting to electron microscopy. We confirm the precision of this technique by severing a single microtubule without disrupting the neighboring microtubules, less than 1 micrometer away. © 2005 Optical Society of America.
[Biobanking and the further development of precision medicine].
Dahl, E
2018-06-06
Over the last 15 years, an estimated 3000 large centralized biobanks have been established worldwide, making important contributions to the further development of precision medicine. In many cases, these biobanks are affiliated with pathology institutes or work closely with them. In which translational research projects, and during which phases of the development of new drugs, are human biosamples being used, and can their use be easily traced in the literature? PubMed, Internet research, and information from the German Biobank Alliance and the European initiative BBMRI-ERIC. High-quality biosamples from centralized biobanks are increasingly used in clinical research and development projects. Success stories in which biosamples have contributed to the further development of precision medicine are shown in this paper using, among others, the example of the discovery of RET gene fusions in lung cancer. Interestingly, many key publications in the field of precision medicine do not contain exact references to the biobanks involved. The importance of centralized biobanks in translational research and clinical development is constantly increasing. However, in order to ensure the acceptance and visibility of biobanks, their participation in success stories of biomedical progress must be systematically documented and published.
'Toxgnostics': an unmet need in cancer medicine.
Church, David; Kerr, Rachel; Domingo, Enric; Rosmarin, Dan; Palles, Claire; Maskell, Kevin; Tomlinson, Ian; Kerr, David
2014-06-01
If we were to summarize the rationale that underpins medical oncology in a Latin aphorism, it might be 'veneno ergo sum'; that is, I poison, therefore I am. The burden of chemotherapy-associated toxicity is well recognized, but we have relatively few tools that increase the precision of anticancer drug prescribing. We propose a shift in emphasis from the focussed study of polymorphisms in drug metabolic pathways in small sets of patients to broader agnostic analyses to systematically correlate germline genetic variants with adverse events in large, well-defined cancer populations. Thus, we propose the new science of 'toxgnostics' (that is, the systematic, agnostic study of genetic predictors of toxicity from anticancer therapy).
U-Th-Pb, Sm-Nd, Rb-Sr, and Lu-Hf systematics of returned Mars samples
NASA Technical Reports Server (NTRS)
Tatsumoto, M.; Premo, W. R.
1988-01-01
The advantage of studying returned planetary samples cannot be overstated. A wider range of analytical techniques with higher sensitivities and accuracies can be applied to returned samples. Measurement of U-Th-Pb, Sm-Nd, Rb-Sr, and Lu-Hf isotopic systematics for chronology and isotopic tracer studies of planetary specimens cannot be done in situ with desirable precision. Returned Mars samples will be examined using all the physical, chemical, and geologic methods necessary to gain information on the origin and evolution of Mars. A returned Martian sample would provide ample information regarding the accretionary and evolutionary history of the Martian planetary body and possibly other planets of our solar system.
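As a worked illustration of how one of these isotopic systems yields an age (a generic textbook calculation, not taken from the abstract; the sample ratios below are synthetic), a Rb-Sr isochron age follows from the slope of 87Sr/86Sr versus 87Rb/86Sr, since slope = exp(lambda*t) - 1 with lambda(87Rb) about 1.42e-11 per year.

```python
import math
import numpy as np

LAMBDA_RB87 = 1.42e-11     # 87Rb decay constant, per year

def isochron_age(rb_sr, sr_sr):
    """Least-squares isochron: the slope of 87Sr/86Sr vs 87Rb/86Sr
    gives the age; the intercept gives the initial 87Sr/86Sr ratio."""
    slope, intercept = np.polyfit(rb_sr, sr_sr, 1)
    return math.log(1.0 + slope) / LAMBDA_RB87, intercept

# Synthetic cogenetic samples, 4.5 Gyr old, initial 87Sr/86Sr = 0.699
t_true = 4.5e9
rb = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
sr = 0.699 + rb * (math.exp(LAMBDA_RB87 * t_true) - 1.0)
age, initial = isochron_age(rb, sr)
```

The precision argument in the abstract is essentially about how well these slopes and intercepts can be measured, which is why returned samples analyzed in terrestrial laboratories are so valuable.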
Bravi, Francesca; Tavani, Alessandra; Bosetti, Cristina; Boffetta, Paolo; La Vecchia, Carlo
2017-09-01
An inverse association has been reported between coffee drinking and the risk of hepatocellular carcinoma (HCC) and chronic liver disease (CLD), but its magnitude is still unclear. Thus, we carried out a systematic review and meta-analysis of prospective cohort studies that investigated the association between coffee consumption and the risk of HCC or CLD. We separately estimated the relative risk (RR) of the two conditions, for regular, low, and high consumption compared with no or occasional coffee consumption; we also calculated the summary RR for an increment of one cup of coffee per day. Twelve studies on HCC (3414 cases) and six studies on CLD (1463 cases) were identified. The summary RRs for HCC were 0.66 [95% confidence interval (CI): 0.55-0.78] for regular, 0.78 (95% CI: 0.66-0.91) for low, and 0.50 (95% CI: 0.43-0.58) for high coffee consumption, respectively. The summary RR for an increment of one cup per day was 0.85 (95% CI: 0.81-0.90). The summary RRs for CLD were 0.62 (95% CI: 0.47-0.82) for regular, 0.72 (95% CI: 0.59-0.88) for low, 0.35 (95% CI: 0.22-0.56) for high, and 0.74 (95% CI: 0.65-0.83) for an increment of one cup per day. The present meta-analysis provides a precise quantification of the inverse relation between coffee consumption and the risk of HCC, and adds evidence to the presence of an even stronger negative association with CLD.
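Summary relative risks like those above come from inverse-variance pooling of study-level log RRs. A minimal fixed-effect sketch follows; the two input studies are hypothetical, not those in the review.

```python
import math

def pool_rr(studies):
    """Fixed-effect inverse-variance pooling of relative risks.
    studies: list of (rr, ci_low, ci_high) with 95% confidence intervals."""
    num = den = 0.0
    for rr, lo, hi in studies:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI
        w = 1.0 / se**2                                  # inverse-variance weight
        num += w * log_rr
        den += w
    pooled_log = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(pooled_log - 1.96 * se_pooled),
          math.exp(pooled_log + 1.96 * se_pooled))
    return math.exp(pooled_log), ci

summary, ci95 = pool_rr([(0.60, 0.50, 0.72), (0.70, 0.55, 0.89)])
```

A random-effects model (e.g. DerSimonian-Laird) would additionally widen the interval to absorb between-study heterogeneity.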
Digital PCR: A Sensitive and Precise Method for KIT D816V Quantification in Mastocytosis.
Greiner, Georg; Gurbisz, Michael; Ratzinger, Franz; Witzeneder, Nadine; Simonitsch-Klupp, Ingrid; Mitterbauer-Hohendanner, Gerlinde; Mayerhofer, Matthias; Müllauer, Leonhard; Sperr, Wolfgang R; Valent, Peter; Hoermann, Gregor
2018-03-01
The analytically sensitive detection of KIT D816V in blood and bone marrow is important for diagnosing systemic mastocytosis (SM). Additionally, precise quantification of the KIT D816V variant allele fraction (VAF) is relevant clinically because it helps to predict multilineage involvement and prognosis in cases of advanced SM. Digital PCR (dPCR) is a promising new method for sensitive detection and accurate quantification of somatic mutations. We performed a validation study of dPCR for KIT D816V on 302 peripheral blood and bone marrow samples from 156 patients with mastocytosis for comparison with melting curve analysis after peptide nucleic acid-mediated PCR clamping (clamp-PCR) and allele-specific quantitative real-time PCR (qPCR). dPCR showed a limit of detection of 0.01% VAF with a mean CV of 8.5% and identified the mutation in 90% of patients compared with 70% for clamp-PCR (P < 0.001). Moreover, dPCR for KIT D816V was highly concordant with qPCR without systematic deviation of results, and confirmed the clinical value of KIT D816V VAF measurements. Thus, patients with advanced SM showed a significantly higher KIT D816V VAF (median, 2.43%) compared with patients with indolent SM (median, 0.14%; P < 0.001). Moreover, dPCR confirmed the prognostic significance of a high KIT D816V VAF regarding survival (P < 0.001). dPCR for KIT D816V provides a high degree of precision and sensitivity combined with the potential for interlaboratory standardization, which is crucial for the implementation of KIT D816V allele burden measurement. Thus, dPCR is suitable as a new method for KIT D816V testing in patients with mastocytosis. © 2017 American Association for Clinical Chemistry.
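Digital PCR quantification rests on Poisson correction of the fraction of negative partitions. A schematic VAF calculation follows; the partition counts are invented for illustration and do not come from the study.

```python
import math

def dpcr_lambda(negative, total):
    """Mean target copies per partition, from the fraction of
    negative partitions (Poisson correction)."""
    return -math.log(negative / total)

def variant_allele_fraction(neg_mut, neg_wt, total):
    """VAF from mutant-assay and wild-type-assay partition counts."""
    lam_mut = dpcr_lambda(neg_mut, total)
    lam_wt = dpcr_lambda(neg_wt, total)
    return lam_mut / (lam_mut + lam_wt)

# A low-burden sample: very few mutant-positive partitions (~1% VAF)
fraction = variant_allele_fraction(neg_mut=9900, neg_wt=3679, total=10000)
```

Because the estimate is an absolute count rather than a standard-curve comparison, this is the property that makes dPCR attractive for interlaboratory standardization.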
López-Miguel, Alberto; Martínez-Almeida, Loreto; González-García, María J; Coco-Martín, María B; Sobrado-Calvo, Paloma; Maldonado, Miguel J
2013-02-01
To assess the intrasession and intersession precision of ocular, corneal, and internal higher-order aberrations (HOAs) measured using an integrated topographer and Hartmann-Shack wavefront sensor (Topcon KR-1W) in refractive surgery candidates. IOBA-Eye Institute, Valladolid, Spain. Evaluation of diagnostic technology. To analyze intrasession repeatability, 1 experienced examiner measured eyes 9 times successively. To study intersession reproducibility, the same clinician obtained measurements from another set of eyes in 2 consecutive sessions 1 week apart. Ocular, corneal, and internal HOAs were obtained. Coma and spherical aberrations, 3rd- and 4th-order aberrations, and total HOAs were calculated for a 6.0 mm pupil diameter. For intrasession repeatability (75 eyes), excellent intraclass correlation coefficients (ICCs) were obtained (ICC >0.87), except for internal primary coma (ICC = 0.75) and 3rd-order (ICC = 0.72) HOAs. Repeatability precision (1.96 × S(w)) values ranged from 0.03 μm (corneal primary spherical) to 0.08 μm (ocular primary coma). For intersession reproducibility (50 eyes), ICCs were good (>0.8) for ocular primary spherical, 3rd-order, and total higher-order aberrations; reproducibility precision values ranged from 0.06 μm (corneal primary spherical) to 0.21 μm (internal 3rd order), with internal HOAs having the lowest precision (≥0.12 μm). No systematic bias was found between examinations on different days. The intrasession repeatability was high; therefore, the device's ability to measure HOAs in a reliable way was excellent. Under intersession reproducibility conditions, dependable corneal primary spherical aberrations were provided. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
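The repeatability precision quoted above (1.96 x Sw) is derived from the within-subject standard deviation across repeated measurements. A sketch with invented repeated measurements (not the study's data):

```python
import statistics

def within_subject_sd(repeats):
    """Sw: square root of the mean within-subject variance.
    repeats: list of per-subject lists of repeated measurements."""
    variances = [statistics.variance(r) for r in repeats]
    return (sum(variances) / len(variances)) ** 0.5

def repeatability_precision(repeats):
    # 1.96 x Sw, as defined in the abstract
    return 1.96 * within_subject_sd(repeats)

# Three hypothetical eyes, each measured three times (aberration in microns)
data = [[0.10, 0.12, 0.11], [0.20, 0.21, 0.19], [0.05, 0.07, 0.06]]
prec = repeatability_precision(data)
```

Intraclass correlation coefficients, also reported in the abstract, additionally relate this within-subject variance to the between-subject variance.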
The paradox of sham therapy and placebo effect in osteopathy: A systematic review.
Cerritelli, Francesco; Verzella, Marco; Cicchitti, Luca; D'Alessandro, Giandomenico; Vanacore, Nicola
2016-08-01
Placebo, defined as "false treatment," is a common gold-standard method to assess the validity of a therapy both in pharmacological trials and in manual medicine research, where placebo is also referred to as "sham therapy." In the medical literature, guidelines have been proposed on how to conduct robust placebo-controlled trials, but mainly in a drug-based scenario. In contrast, there are no precise guidelines on how to conduct a placebo-controlled trial in manual medicine (particularly osteopathy). The aim of the present systematic review was to report how and what types of sham methods, dosages, operator characteristics, and patient types were used in osteopathic clinical trials and, eventually, to assess sham clinical effectiveness. A systematic Cochrane-based review was conducted by analyzing the osteopathic trials that used manual or nonmanual placebo controls. Searches were conducted in 8 databases from journal inception to December 2015 using a pragmatic literature search approach. Two independent reviewers conducted the study selection and data extraction for each study. The risk of bias was evaluated according to the Cochrane methods. A total of 64 studies were eligible for analysis, including a total of 5024 participants. More than half (43 studies) used a manual placebo; 9 studies used a nonmanual placebo; and 12 studies used both manual and nonmanual placebos. The data showed a lack of reporting of sham therapy information across studies. Risk of bias analysis demonstrated a high risk of bias for allocation, blinding of personnel and participants, selective reporting, and other bias. To explore the clinical effects of the sham therapies used, a quantitative analysis was planned. However, due to the high heterogeneity of the sham approaches used, no further analyses were performed.
High heterogeneity regarding placebo used between studies, lack of reporting information on placebo methods and within-study variability between sham and real treatment procedures suggest prudence in reading and interpreting study findings in manual osteopathic randomized controlled trials (RCTs). Efforts must be made to promote guidelines to design the most reliable placebo for manual RCTs as a means of increasing the internal validity and improve external validity of findings.
A Low-cost Environmental Control System for Precise Radial Velocity Spectrometers
NASA Astrophysics Data System (ADS)
Sliski, David H.; Blake, Cullen H.; Halverson, Samuel
2017-12-01
We present an environmental control system (ECS) designed to achieve milliKelvin (mK) level temperature stability for small-scale astronomical instruments. This ECS is inexpensive and is primarily built from commercially available components. The primary application for our ECS is the high-precision Doppler spectrometer MINERVA-Red, where the thermal variations of the optical components within the instrument represent a major source of systematic error. We demonstrate ±2 mK temperature stability within a 0.5 m3 thermal enclosure using resistive heaters in conjunction with a commercially available PID controller and off-the-shelf thermal sensors. The enclosure is maintained above ambient temperature, enabling rapid cooling through heat dissipation into the surrounding environment. We demonstrate peak-to-valley (PV) temperature stability of better than 5 mK within the MINERVA-Red vacuum chamber, which is located inside the thermal enclosure, despite large temperature swings in the ambient laboratory environment. During periods of stable laboratory conditions, the PV variations within the vacuum chamber are less than 3 mK. This temperature stability is comparable to the best stability demonstrated for Doppler spectrometers currently achieving m s-1 radial velocity precision. We discuss the challenges of using commercially available thermoelectrically cooled CCD cameras in a temperature-stabilized environment, and demonstrate that the effects of variable heat output from the CCD camera body can be mitigated using PID-controlled chilled water systems. The ECS presented here could potentially provide the stable operating environment required for future compact “astrophotonic” precise radial velocity (PRV) spectrometers to achieve high Doppler measurement precision with a modest budget.
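The control loop described above can be sketched as a PI controller driving a lumped first-order thermal model of the enclosure. The plant constants, gains, heater limit, and setpoint below are invented for illustration; this is not the MINERVA-Red implementation.

```python
def simulate_enclosure(setpoint=303.0, t_ambient=293.0, dt=1.0, steps=20000):
    """PI control of resistive heater power for a lumped thermal enclosure
    held above ambient, so that cooling happens passively through losses."""
    heat_capacity = 500.0     # J/K, lumped thermal mass of the enclosure
    loss_coeff = 5.0          # W/K, conductive/convective loss to the lab
    kp, ki = 200.0, 1.0       # proportional and integral gains
    temp, integral = t_ambient, 0.0
    for _ in range(steps):
        error = setpoint - temp
        power = kp * error + ki * integral
        if 0.0 <= power <= 500.0:       # anti-windup: integrate only unsaturated
            integral += error * dt
        power = min(max(power, 0.0), 500.0)  # heater can only heat, max 500 W
        # lumped-element plant: heater input minus losses to ambient
        temp += (power - loss_coeff * (temp - t_ambient)) / heat_capacity * dt
    return temp

final_temp = simulate_enclosure()
```

Holding the setpoint above ambient, as in the paper, means the integral term settles at the steady heater power that balances the losses, and disturbances are rejected by dumping more or less heat into the surroundings.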
NASA Astrophysics Data System (ADS)
Jeong, U.; Kim, J.; Liu, X.; Lee, K. H.; Chance, K.; Song, C. H.
2015-12-01
The predicted accuracy of trace gas and aerosol retrievals from the Geostationary Environment Monitoring Spectrometer (GEMS) was investigated. GEMS is one of the first sensors to monitor NO2, SO2, HCHO, O3, and aerosols from geostationary earth orbit (GEO) over Asia. Since GEMS has not yet been launched, simulated measurements and their precision were used in this study. The random and systematic components of the measurement error were estimated based on the instrument design. The atmospheric profiles were obtained from Model for Ozone And Related chemical Tracers (MOZART) simulations, and surface reflectances were obtained from a climatology of OMI Lambertian equivalent reflectance. The uncertainties of the GEMS trace gas and aerosol products were estimated with the optimal estimation (OE) method using these atmospheric profiles and surface reflectances. Most of the estimated uncertainties of the NO2, HCHO, and stratospheric and total O3 products satisfied the user requirements with sufficient margin. However, about 26% of the estimated uncertainties of SO2 and about 30% of those of tropospheric O3 do not meet the required precision. In particular, the estimated uncertainty of SO2 is high in winter, when emissions are strong in East Asia. Further efforts are necessary to improve the retrieval accuracy of SO2 and tropospheric O3 in order to reach the scientific goals of GEMS. The random measurement error of GEMS was important for the NO2, SO2, and HCHO retrievals, while both random and systematic measurement errors were important for the O3 retrievals. The degrees of freedom for signal were 0.8 ± 0.2 for tropospheric O3 and 2.9 ± 0.5 for stratospheric O3. The estimated uncertainties of the aerosol retrievals from GEMS measurements were predicted to be lower than the required precision over the SZA range of the trace gas retrievals.
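In the optimal estimation formalism, the degrees of freedom for signal (DFS) quoted above is the trace of the averaging kernel A = (K'Se^-1 K + Sa^-1)^-1 K'Se^-1 K. A toy numerical sketch follows; the Jacobian and covariance matrices are invented, not GEMS values.

```python
import numpy as np

def averaging_kernel(K, S_e, S_a):
    """Optimal-estimation averaging kernel for Jacobian K,
    measurement-error covariance S_e and a priori covariance S_a."""
    S_e_inv = np.linalg.inv(S_e)
    S_a_inv = np.linalg.inv(S_a)
    gain = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv) @ K.T @ S_e_inv
    return gain @ K

# Toy 3-layer retrieval: identity Jacobian, noise much smaller than the prior
K = np.eye(3)
A = averaging_kernel(K, S_e=0.1 * np.eye(3), S_a=np.eye(3))
dfs = np.trace(A)   # between 0 and 3; larger = more information from measurement
```

A DFS near the number of layers means the measurement, not the prior, controls the retrieval; values like 0.8 for tropospheric O3 indicate the retrieval resolves less than one independent piece of information in that layer range.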
Research gaps identified during systematic reviews of clinical trials: glass-ionomer cements.
Mickenautsch, Steffen
2012-06-29
To report the results of an audit concerning research gaps in clinical trials that were accepted for appraisal in authored and published systematic reviews regarding the application of glass-ionomer cements (GIC) in dental practice. Information concerning research gaps in trial precision was extracted following a framework that included classification of the reasons for research gaps: 'imprecision of information (results)', 'biased information', 'inconsistency or unknown consistency' and 'not the right information', as well as research gap characterization using PICOS elements: population (P), intervention (I), comparison (C), outcomes (O) and setting (S). Internal trial validity assessment was based on the understanding that successful control for systematic error cannot be assured on the basis of inclusion of adequate methods alone, but also requires empirical evidence about whether such an attempt was successful. A comprehensive and interconnected coverage of GIC-related clinical topics was established. The most common reasons found for gaps in trial precision were a lack of sufficient trials and a lack of sufficiently large sample sizes. Only a few research gaps were ascribed to 'lack of information' caused by a focus on mainly surrogate trial outcomes. According to the chosen assessment criteria, a lack of adequate randomisation, allocation concealment and blinding/masking was noted in trials covering all reviewed GIC topics (selection- and detection/performance-bias risk). Trial results appear to be less affected by loss to follow-up (attrition-bias risk). This audit represents an adjunct to the systematic review articles it has covered. Its results do not change the systematic reviews' conclusions but highlight in detail existing research gaps concerning the precision and internal validity of the reviewed trials. These gaps should be addressed in future GIC-related clinical research.
Larbi, A; Pesquer, L; Reboul, G; Omoumi, P; Perozziello, A; Abadie, P; Loriaut, P; Copin, P; Ducouret, E; Dallaudière, B
2016-10-01
Recent studies have described MRI as a good examination to assess damage in chronic athletic pubalgia (AP). However, to our knowledge, no study has focused on the systematic correlation of precise tendinous or parietal lesions on MRI with surgical and histological assessment. We therefore performed a case-control study to determine whether MRI can precisely assess adductor longus (AL) tendinopathy and parietal lesions compared with surgery and histology, and whether MRI can determine if AP originates from the pubic symphysis, musculotendinous structures, or the inguinal orifice. Eighteen consecutive patients with chronic AP were enrolled from November 2011 to April 2013. To constitute a control group, we also enrolled 18 asymptomatic men. All MRI examinations were reviewed in consensus by 2 skeletal radiologists for pubic symphysis, musculotendinous, and abdominal wall assessment and compared with the surgical and histological findings. Regarding the pubic symphysis, we found 4 cases of symmetric bone marrow oedema (14%), 2 secondary clefts (7%) and 2 superior ligament lesions (7%). For the AL tendon, we mainly found 13 cases of asymmetric bone marrow oedema (46%) and 15 of hyperaemia (54%). Regarding the abdominal wall, the deep inguinal orifice size in the symptomatic and control groups was 27.3 ± 6.4 mm and 23.8 ± 6.3 mm, respectively. The correlation between MRI and surgery/histology was low: 20% for the AL tendon and 9% for the abdominal wall. When the criterion "affected versus unaffected" was chosen, this correlation became higher: 100% for the AL tendon and 73% for the abdominal wall. In chronic AP, MRI preferentially shows AL tendinopathy and deep inguinal canal dehiscence, with high correlation to surgery/histology when only the item "affected versus unaffected" is considered, despite low correlation when these lesions are graded precisely. III: case-control study. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
High-pressure electronic phase diagrams in FeSe1-xSx superconductors
NASA Astrophysics Data System (ADS)
Matsuura, Kohei; Arai, Yuki; Hosoi, Suguru; Ishida, Kousuke; Mizukami, Yuta; Watashige, Tatsuya; Kasahara, Shigeru; Matsuda, Yuji; Maejima, Naoyuki; Machida, Akihiko; Watanuki, Tetsu; Fukuda, Tatsuo; Uwatoko, Yoshiya; Shibauchi, Takasada
Spin fluctuations are believed to be related to the mechanism of unconventional superconductivity. On the other hand, many recent studies suggest that a nematic order that spontaneously breaks the rotational symmetry of the system exists in Fe-based superconductors, and that its quantum fluctuations may play an essential role in the superconductivity. However, this remains unclear because the nematic order usually coexists with magnetic order. To resolve this issue, FeSe, which exhibits a nonmagnetic nematic order, is a key system. Under pressure, this order is suppressed and magnetic order concurrently appears, which competes with the high-Tc superconducting phase. In the isovalent substitution system FeSe1-xSx, we found a nonmagnetic nematic quantum critical point. Here we report our recent high-pressure studies of high-quality single-crystalline FeSe1-xSx up to 8 GPa. We find a systematic change of the pressure phase diagram of FeSe with S substitution. Our results imply that the respective roles of nematic and magnetic fluctuations can be elucidated from the precise control of pressure and substitution in this system.
NASA Technical Reports Server (NTRS)
Voss, P. B.; Stimpfle, R. M.; Cohen, R. C.; Hanisco, T. F.; Bonne, G. P.; Perkins, K. K.; Lanzendorf, E. J.; Anderson, J. G.; Salawitch, R. J.
2001-01-01
We examine inorganic chlorine (Cly) partitioning in the summer lower stratosphere using in situ ER-2 aircraft observations made during the Photochemistry of Ozone Loss in the Arctic Region in Summer (POLARIS) campaign. New steady state and numerical models estimate [ClONO2]/[HCl] using currently accepted photochemistry. These models are tightly constrained by observations with OH (parameterized as a function of solar zenith angle) substituting for modeled HO2 chemistry. We find that inorganic chlorine photochemistry alone overestimates observed [ClONO2]/[HCl] by approximately 55-60% at mid and high latitudes. On the basis of POLARIS studies of the inorganic chlorine budget, [ClO]/[ClONO2], and an intercomparison with balloon observations, the most direct explanation for the model-measurement discrepancy in Cly partitioning is an error in the reactions, rate constants, and measured species concentrations linking HCl and ClO (simulated [ClO]/[HCl] too high) in combination with a possible systematic error in the ER-2 ClONO2 measurement (too low). The high precision of our simulation (+/-15% 1-sigma for [ClONO2]/[HCl], which is compared with observations) increases confidence in the observations, photolysis calculations, and laboratory rate constants. These results, along with other findings, should lead to improvements in both the accuracy and precision of stratospheric photochemical models.
Inter-satellite links: A versatile tool for geodesy and planetary and interplanetary navigation
NASA Astrophysics Data System (ADS)
Schlicht, Anja; Hugentobler, Urs; Hauk, Markus; Murböck, Michael; Pail, Roland
2016-07-01
With the use of low-low satellite-to-satellite tracking, gravity field recovery made a big step forward. Based on this technique, the Gravity Recovery And Climate Experiment (GRACE) mission delivers monthly gravity fields with high precision, allowing effects in Earth's water storage basins and variations in ice mass in Greenland and Antarctica to be measured from space. GRACE uses a Ka-band inter-satellite ranging technique; GRACE Follow-On will in addition test optical ranging. In fundamental physics, high-precision optical inter-satellite tracking will be used to detect gravitational waves in space; as a first step, LISA Pathfinder was launched recently. Inter-satellite links are not only used for ranging: data transfer in space is also based on such links. ESA's European Data Relay System will be established in upcoming years to collect data from the low-orbiting Sentinel satellites and transfer the high data rate to ground. The same link may be used for ranging, data transfer, and time transfer, a functionality that is discussed for next-generation Galileo satellites. But to exploit this synergy, a common concept for all three tasks has to be developed. In this paper we show that inter-satellite ranging techniques with µm accuracy can overcome the limited accuracy of GNSS-based orbit determination of low Earth orbiters (LEO), which is due to the limitations of one-way microwave tracking (unsynchronized clocks, phase center variations, and offsets of the sending and receiving antennas). The ESA study GETRIS answers the following question: how can a highly accurate and precise GEO-based two-way ranging method support GNSS tracking?
The reduction of systematic errors in LEO precise orbit determination (POD) by exploiting the synergy between ranging, data transfer, and time transfer is assessed in a concept consisting of precise two-way GEO-LEO tracking (as used for data transfer) and an ultra-stable oscillator on board the geostationary satellite (GEO), synchronized from ground. We now want to go a step further and design a versatile concept for the use of this synergy in a satellite constellation based on existing and planned ESA infrastructure, and to highlight the benefits in different disciplines from geodesy to interplanetary ranging, with emphasis on gravity field recovery.
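The key advantage of two-way over one-way tracking mentioned above, namely that unsynchronized clocks drop out, can be illustrated with a toy calculation; the range and clock-offset values below are hypothetical, chosen only to be GEO-LEO-like in scale.

```python
C = 299_792_458.0  # speed of light, m/s

# Hypothetical true geometry and clock error between satellites A and B.
R = 40_000_000.0   # true inter-satellite range, m
dt = 1.0e-6        # unsynchronized clock offset between A and B, s

# Each one-way pseudorange is biased by +/- c*dt depending on direction.
rho_ab = R + C * dt   # A-to-B measurement
rho_ba = R - C * dt   # B-to-A measurement

# Two-way combination: the average cancels the clock offset entirely
# (giving the clean range), and half the difference recovers the offset
# itself, which is what makes combined ranging and time transfer possible.
range_est = 0.5 * (rho_ab + rho_ba)       # equals R, offset-free
offset_est = 0.5 * (rho_ab - rho_ba) / C  # equals dt
```

The same cancellation is why one-way GNSS tracking must solve for receiver clock errors while a two-way link does not, at the cost of requiring a transponder at both ends.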
MISR CMVs and Multiangular Views of Tropical Cyclone Inner-Core Dynamics
NASA Technical Reports Server (NTRS)
Wu, Dong L.; Diner, David J.; Garay, Michael J; Jovanovic, Veljko M.; Lee, Jae N.; Moroney, Catherine M.; Mueller, Kevin J.; Nelson, David L.
2010-01-01
Multi-camera stereo imaging of cloud features from the MISR (Multiangle Imaging SpectroRadiometer) instrument on NASA's Terra satellite provides accurate and precise measurements of cloud top heights (CTH) and cloud motion vector (CMV) winds. MISR observes each cloudy scene from nine viewing angles (nadir, ±26°, ±46°, ±60°, ±70°) with approximately 275-m pixel resolution. This paper provides an update on MISR CMV and CTH algorithm improvements, and explores a high-resolution retrieval of tangential winds inside the eyewall of tropical cyclones (TC). The MISR CMV and CTH retrievals from the updated algorithm are significantly improved in terms of spatial coverage and systematic errors. A new product, the 1.1-km cross-track wind, provides high accuracy and precision in measuring convective outflows. Preliminary results obtained from the 1.1-km tangential wind retrieval inside the TC eyewall show that the inner-core rotation is often faster near the eyewall, and this faster rotation appears to be related linearly to cyclone intensity.
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
Real-time, accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of systematically analyzing the sources of geomagnetic-field measurement error, built a complete measurement model into which the previously unconsidered geomagnetic daily variation field was introduced. This paper proposed an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and that the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
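A minimal sketch of the estimation idea described above, under a much simpler error model than the paper's (a constant hard-iron bias only, with no daily variation field); the reference field strength, noise levels, and filter tuning below are all hypothetical.

```python
import numpy as np

# Hypothetical setup: estimate a constant 3-axis hard-iron bias b from
# scalar magnitude measurements |m_k - b| = B_ref (known local field
# strength) with an extended Kalman filter, linearizing h(b) = |m - b|.
def ekf_bias_estimate(measurements, b_ref, r=25.0, p0=1.0e4):
    b = np.zeros(3)              # state: bias estimate, nT
    P = np.eye(3) * p0           # state covariance
    for m in measurements:
        d = m - b
        h = np.linalg.norm(d)            # predicted scalar field strength
        H = (-d / h).reshape(1, 3)       # Jacobian dh/db at current estimate
        S = H @ P @ H.T + r              # innovation covariance (1x1)
        K = P @ H.T / S                  # Kalman gain (3x1)
        b = b + (K * (b_ref - h)).ravel()
        P = (np.eye(3) - K @ H) @ P
    return b

# Synthetic check: true bias (100, -50, 30) nT in a 50000 nT field,
# sampled over 500 well-distributed sensor orientations with noise.
rng = np.random.default_rng(0)
true_b = np.array([100.0, -50.0, 30.0])
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
meas = 50000.0 * dirs + true_b + rng.normal(scale=5.0, size=(500, 3))
est = ekf_bias_estimate(meas, 50000.0)   # should land within a few nT of true_b
```

Because scalar magnitudes over varied orientations suffice to observe the bias, no external attitude reference is needed, which echoes the paper's point that this style of compensation removes the dependence on a high-precision reference instrument.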
Fisher, Brian T; Robinson, Paula D; Lehrnbecher, Thomas; Steinbach, William J; Zaoutis, Theoklis E; Phillips, Bob; Sung, Lillian
2017-05-26
Although a number of risk factors have been associated with invasive fungal disease (IFD), a systematic review of the literature to document pediatric-specific factors has not been performed. We used the Ovid SP platform to search Medline, Medline In-Process, and Embase for studies that identified risk factors for IFD in children with cancer or those who undergo hematopoietic stem cell transplantation (HSCT). We included studies if they consisted of children or adolescents (<25 years) who were receiving treatment for cancer or undergoing HSCT and if the study evaluated risk factors among patients with and those without IFD. Among the 3566 studies screened, 22 studies were included. A number of pediatric factors commonly associated with an increased risk for IFD were confirmed, including prolonged neutropenia, high-dose steroid exposure, intensive-timing chemotherapy for acute myeloid leukemia, and acute and chronic graft-versus-host disease. Increasing age, a factor not commonly associated with IFD risk, was identified as a risk factor in multiple published cohorts. With this systematic review, we have confirmed IFD risk factors that are considered routinely in daily clinical practice. Increasing age should also be considered when assessing patient risk for IFD. Future efforts should focus on defining more precise thresholds for a particular risk factor (ie, age, neutropenia duration) and on development of prediction rules inclusive of individual factors to further refine the risk prediction. © The Author 2017. Published by Oxford University Press on behalf of The Journal of the Pediatric Infectious Diseases Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
First Local Ties from Data of the Wettzell Triple Radio Telescope Array
NASA Astrophysics Data System (ADS)
Schüler, T.; Plötz, C.; Mähler, S.; Klügel, T.; Neidhardt, A.; Bertarini, A.; Halsig, S.; Nothnagel, A.; Lösler, M.; Eschelbach, C.; Anderson, J.
2016-12-01
The Geodetic Observatory Wettzell features three radio telescopes. Local ties between the reference points are available from terrestrial precision surveying with an expected accuracy below 0.7 mm. In addition, local VLBI data analysis is currently being investigated to provide independent vectors and to provide quality feedback to the engineers. The preliminary results presented in this paper show a deviation from the local survey at the level of one millimeter with a clear systematic component. Sub-millimeter precision is reached after removal of this bias. This systematic effect is likely caused by the omission of thermal expansion and gravity deformation, which are not yet implemented in our local VLBI analysis software.
Rivero-Santana, Amado; Del Pino-Sedeño, Tasmania; Ramallo-Fariña, Yolanda; Vergara, Itziar; Serrano-Aguilar, Pedro
2017-02-01
A considerable proportion of the geriatric population experiences unfavorable outcomes of hospital emergency department care. An assessment of the risk of adverse outcomes would facilitate changes in clinical management by adjusting available resources to needs according to an individual patient's risk. Risk assessment tools are available, but their prognostic precision varies. This systematic review sought to quantify the prognostic precision of 2 geriatric screening and risk assessment tools commonly used in emergency settings to identify patients at high risk of adverse outcomes (revisits, functional deterioration, readmissions, or death): the Identification of Seniors at Risk (ISAR) scale and the Triage Risk Screening Tool (TRST). We searched PubMed, EMBASE, the Cochrane Central Register of Controlled Trials, and SCOPUS, with no date limits, to find relevant studies. Quality was assessed with the QUADAS-2 checklist (for quality assessment of diagnostic accuracy studies). We pooled the prognostic yield reported for the ISAR and TRST scores for each short- and medium-term outcome using bivariate random-effects modeling. The sensitivity of the ISAR scoring system as a whole ranged between 67% and 99%; specificity fell between 21% and 41%. TRST sensitivity ranged between 52% and 75% and specificity between 39% and 51%. We conclude that the tools currently used to assess the risk of adverse outcomes in patients of advanced age attended in hospital emergency departments do not have adequate prognostic precision to be clinically useful.
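As a reminder of how the pooled metrics above are defined at the study level, sensitivity and specificity come straight from each study's 2×2 table; the counts below are invented for illustration and are not data from the review.

```python
# Illustrative 2x2 table for a risk-screening tool (hypothetical counts):
# rows = screening result (high/low risk), columns = adverse outcome (yes/no).
tp, fn = 80, 20    # patients with an adverse outcome: flagged vs missed
fp, tn = 300, 200  # patients without one: falsely flagged vs correctly cleared

sensitivity = tp / (tp + fn)   # proportion of adverse outcomes detected
specificity = tn / (tn + fp)   # proportion of uneventful patients cleared

print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```

The trade-off visible here (a tool can reach high sensitivity only by flagging many patients who will do fine) is exactly the pattern the review reports for ISAR and TRST.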
A High-precision Trigonometric Parallax to an Ancient Metal-poor Globular Cluster
NASA Astrophysics Data System (ADS)
Brown, T. M.; Casertano, S.; Strader, J.; Riess, A.; VandenBerg, D. A.; Soderblom, D. R.; Kalirai, J.; Salinas, R.
2018-03-01
Using the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST), we have obtained a direct trigonometric parallax for the nearest metal-poor globular cluster, NGC 6397. Although trigonometric parallaxes have been previously measured for many nearby open clusters, this is the first parallax for an ancient metal-poor population—one that is used as a fundamental template in many stellar population studies. This high-precision measurement was enabled by the HST/WFC3 spatial-scanning mode, providing hundreds of astrometric measurements for dozens of stars in the cluster and also for Galactic field stars along the same sightline. We find a parallax of 0.418 ± 0.013 ± 0.018 mas (statistical, systematic), corresponding to a true distance modulus of 11.89 ± 0.07 ± 0.09 mag (2.39 ± 0.07 ± 0.10 kpc). The V luminosity at the stellar main-sequence turnoff implies an absolute cluster age of 13.4 ± 0.7 ± 1.2 Gyr. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-13817, GO-14336, and GO-14773.
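The distance and distance modulus quoted above follow from the standard parallax relations; this small sketch reproduces the central values only (uncertainties are not propagated).

```python
import math

def parallax_to_distance_pc(parallax_mas):
    """Distance in parsecs from a parallax in milliarcseconds: d = 1000 / p."""
    return 1000.0 / parallax_mas

def distance_modulus(d_pc):
    """True (geometric) distance modulus: mu = 5 * log10(d / 10 pc)."""
    return 5.0 * math.log10(d_pc) - 5.0

d = parallax_to_distance_pc(0.418)   # ~2392 pc, i.e. ~2.39 kpc
mu = distance_modulus(d)             # ~11.89 mag
```

Note that the quoted statistical and systematic errors on the parallax map into the asymmetric distance errors through the same 1/p relation.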
Carbohydrate malabsorption in acutely malnourished children and infants: a systematic review
Kvissberg, Matilda A.; Dalvi, Prasad S.; Kerac, Marko; Voskuijl, Wieger; Berkley, James A.; Priebe, Marion G.
2016-01-01
Context: Severe acute malnutrition (SAM) accounts for approximately 1 million child deaths per year. High mortality is linked with comorbidities, such as diarrhea and pneumonia. Objective: The aim of this systematic review was to determine the extent to which carbohydrate malabsorption occurs in children with SAM. Data Sources: The PubMed and Embase databases were searched. Reference lists of selected articles were checked. Data Extraction: All observational and controlled intervention studies involving children with SAM in which direct or indirect measures of carbohydrate absorption were analyzed were eligible for inclusion. A total of 20 articles were selected for this review. Data Synthesis: Most studies reported carbohydrate malabsorption, particularly lactose malabsorption, and suggested an increase in diarrhea and reduced weight gain in children on a lactose-containing diet. As most studies reviewed were observational, there was no conclusive scientific evidence of a causal relationship between lactose malabsorption and a worse clinical outcome among malnourished children. Conclusion: The combined data indicate that carbohydrate malabsorption is prevalent in children with SAM. Additional well-designed intervention studies are needed to determine whether outcomes of SAM complicated by carbohydrate malabsorption could be improved by altering the carbohydrate/lactose content of therapeutic feeds and to elucidate the precise mechanisms involved. PMID:26578625
Faggion, C M; Liu, J; Huda, F; Atieh, M
2014-04-01
Proper scientific reporting is necessary to ensure the correct interpretation of study results by readers. The main objective of this study was to assess the quality of reporting in abstracts of systematic reviews (SRs) with meta-analyses in periodontology and implant dentistry. Differences in the reporting of abstracts between Cochrane and paper-based reviews were also assessed. The PubMed electronic database and the Cochrane database of SRs were searched on November 11, 2012, independently and in duplicate, for SRs with meta-analyses related to interventions in periodontology and implant dentistry. Assessment of the quality of reporting was performed independently and in duplicate, taking into account items related to effect direction, numerical estimates of effect size, measures of precision, probability, and consistency. We initially screened 433 papers and included 146 (127 paper-based and 19 Cochrane reviews, respectively). The direction of evidence was reported in two-thirds of the abstracts, while the strength of evidence and a measure of precision (i.e., a confidence interval) were reported in less than half of the selected abstracts. Measures of consistency such as the I² statistic were reported in only 5% of the selected sample of abstracts. Cochrane abstracts reported the limitations of the evidence and precision better than paper-based ones. Two items (the term "meta-analysis" in the title and in the abstract, respectively) were nevertheless better reported in paper-based abstracts. Abstracts of SRs with meta-analyses in periodontology and implant dentistry currently have no uniform standard of reporting, which may hinder readers' understanding of study outcomes. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Human Activity Recognition Supported on Indoor Localization: A Systematic Review.
Cerón, Jesús; López, Diego M
2018-01-01
The number of older adults is growing worldwide. This has a social and economic impact in all countries because of the increased number of older adults affected by chronic diseases, health emergencies, and disabilities, which ultimately represents a high cost for the health system. To face this problem, the Ambient Assisted Living (AAL) domain has emerged. Its main objective is to extend the time that older adults can live independently in their homes. AAL is supported by different fields and technologies, with Human Activity Recognition (HAR), vital-sign monitoring, and location tracking being the three of most interest during the last years. Objective: to perform a systematic review of Human Activity Recognition (HAR) approaches supported on Indoor Localization (IL) and vice versa, describing the methods they have used, the accuracy they have obtained, and whether or not they have been directed towards the AAL domain. A systematic review of five databases was carried out (ACM, IEEE Xplore, PubMed, Science Direct and Springer). 27 papers were found. They were categorised into three groups according to their approach: papers focusing on (1) HAR, (2) IL, or (3) HAR and IL. A detailed analysis of the following factors was performed: the types of methods and technologies used for HAR, IL, and data fusion, as well as the precision obtained with them. This systematic review shows that the relationship between HAR and IL has been studied very little, and it therefore provides insights into their potential mutual support in providing AAL solutions.
Study on manufacturing method of optical surface with high precision in angle and surface
NASA Astrophysics Data System (ADS)
Yu, Xin; Li, Xin; Yu, Ze; Zhao, Bin; Zhang, Xuebin; Sun, Lipeng; Tong, Yi
2016-10-01
This paper studied a manufacturing process for optical surfaces with high precision in angle and surface. Through theoretical analysis of the relationship between angle precision and surface precision, measurement conversion of the technical indicators, application of the optical-cement method, and design of the optical-cement tooling, the experiment was completed successfully and the processing method was verified; the method can also be used in the manufacturing of optical surfaces with similarly high precision in angle and surface.
Can new doctors be prepared for practice? A review.
Alexander, Cameron; Cameron, Alexander; Millar, James; Szmidt, Natasha; Hanlon, Katie; Cleland, Jennifer
2014-06-01
The transition from medical student to junior doctor is an important period of change. Research shows junior doctors often experience high levels of stress, and consequently burnout. Understanding how to prepare for the transition may allow individuals who are likely to struggle to be identified and assisted. The aim of this paper is to systematically review the literature on preparedness for practice in newly qualified junior doctors. This was a systematic review of literature concerning the transition from student to junior doctor, published in the last 10 years, and that measured or explored one or more factors affecting preparedness. Nine papers were included in this review. These varied in design and methodological quality. Most used survey methodology (n = 7). Six found knowledge and skills, particularly deficiencies in prescribing and practical procedures, relevant in terms of preparedness. Five looked at personal traits, with high levels of neuroticism and low confidence deemed to be important. Medical school and workplace factors, including early clinical experience and shadowing, positively affected preparedness. A lack of senior support proved detrimental. The influence of demographics was inconclusive. The studies reviewed indicate that both personal and organisational factors are pertinent to managing the transition from student to junior doctor. Further prospective studies, both qualitative and quantitative, drawing on theories of change, are required to identify what precise factors would make a difference to this transition. © 2014 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandegedara, R. M. E. B.; Bollen, G.; Eibach, M.; ...
2017-10-20
This manuscript describes a measurement of the Q value for the highly forbidden beta-decays of 50V and the double electron capture decay of 50Cr. The Q value corresponds to the total energy released during the decay and is equivalent to the mass difference between the parent and daughter atoms. This mass difference was measured using high-precision Penning trap mass spectrometry with the Low Energy Beam and Ion Trap facility at the National Superconducting Cyclotron Laboratory. The Q value enters into theoretical calculations of the half-life and beta-decay spectrum for the decay, so a more precise Q value improves these calculations. In addition, the Q value corresponds to the end-point energy of the beta-decay spectrum, which has been precisely measured for several highly forbidden decays using modern low-background detector techniques; hence, our Q value measurements provide a test of systematics for these detectors. In addition, we have measured the absolute atomic masses of 46,47,49,50Ti, 50,51V, and 50,52-52Cr, providing improvements in precision by factors of up to 3. These atomic masses help to strengthen global evaluations of all atomic mass data, such as the Atomic Mass Evaluation.
Speck, Thomas; Bohn, Holger F.
2018-01-01
The surfaces of plant leaves are rarely smooth and often possess species-specific micro- and/or nano-structuring. These structures usually influence surface functionalities of the leaves such as wettability, optical properties, and friction and adhesion in insect–plant interactions. This work presents a simple, convenient, inexpensive and precise two-step micro-replication technique to transfer the surface microstructures of plant leaves onto a highly transparent soft polymer material. Leaves of three different plants with variable size (0.5–100 µm), shape and complexity (hierarchical levels) of their surface microstructures were selected as model bio-templates. A thermoset epoxy resin was used under ambient conditions to produce negative moulds directly from fresh plant leaves. An alkaline chemical treatment was established to remove the entirety of the leaf material from the cured negative epoxy mould when necessary, i.e. for highly complex hierarchical structures. The moulds obtained were afterwards filled with a low-viscosity silicone elastomer (PDMS) to obtain positive surface replicas. Comparative scanning electron microscopy investigations (of the original plant leaves and the replicated polymeric surfaces) reveal the high precision and versatility of this replication technique. The technique has promising future applications in the development of bioinspired functional surfaces. Additionally, the fabricated polymer replicas provide a model with which to systematically investigate the structural key points of surface functionalities. PMID:29765666
NASA Astrophysics Data System (ADS)
Ho, Shirley; Agarwal, Nishant; Myers, Adam D.; Lyons, Richard; Disbrow, Ashley; Seo, Hee-Jong; Ross, Ashley; Hirata, Christopher; Padmanabhan, Nikhil; O'Connell, Ross; Huff, Eric; Schlegel, David; Slosar, Anže; Weinberg, David; Strauss, Michael; Ross, Nicholas P.; Schneider, Donald P.; Bahcall, Neta; Brinkmann, J.; Palanque-Delabrouille, Nathalie; Yèche, Christophe
2015-05-01
The Sloan Digital Sky Survey has surveyed 14,555 square degrees of the sky and delivered over a trillion pixels of imaging data. We present the large-scale clustering of 1.6 million quasars between z = 0.5 and z = 2.5 that have been classified from this imaging, representing the highest density of quasars ever studied for clustering measurements. This data set spans ~11,000 square degrees and probes a volume of 80 h-3 Gpc3. In principle, such a large volume and medium density of tracers should facilitate high-precision cosmological constraints. We measure the angular clustering of photometrically classified quasars using an optimal quadratic estimator in four redshift slices, with an accuracy of ~25% over a bin width of δl ~ 10-15, on scales corresponding to matter-radiation equality and larger (l ~ 2-3). Observational systematics can strongly bias clustering measurements on large scales, which can mimic cosmologically relevant signals such as deviations from Gaussianity in the spectrum of primordial perturbations. We account for systematics by applying a new method recently proposed by Agarwal et al. (2014) to the clustering of photometrically classified quasars. We carefully apply our methodology to mitigate known observational systematics and further remove angular bins that are contaminated by unknown systematics. Combining the quasar data with the photometric luminous red galaxy (LRG) sample of Ross et al. (2011) and Ho et al. (2012), and marginalizing over all bias and shot-noise-like parameters, we obtain a constraint on local primordial non-Gaussianity of fNL = -113 (+154/-154) (1σ error). We next assume that the bias of the quasar and galaxy distributions can be obtained independently from quasar/galaxy-CMB lensing cross-correlation measurements (such as those in Sherwin et al. (2013)).
This can be facilitated by spectroscopic observations of the sources, enabling the redshift distribution to be completely determined and allowing precise estimates of the bias parameters. In this paper, if the bias and shot-noise parameters are fixed to their known values (which we model by fixing them to their best-fit Gaussian values), we find that the error bar reduces to 1σ ≃ 65. We expect this error bar to reduce further by at least another factor of five if the data are free of any observational systematics. We therefore emphasize that in order to make the best use of large-scale structure data, we need accurate modeling of known systematics, a method to mitigate unknown systematics, and, additionally, independent theoretical models or observations to probe the bias of dark matter halos.
Improved half-life determination and β-delayed γ-ray spectroscopy for 18Ne decay
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Ball, G. C.; Bouzomita, H.; Ettenauer, S.; Finlay, P.; Garnsworthy, A. B.; Garrett, P. E.; Green, K. L.; Hackman, G.; Leslie, J. R.; Pearson, C. J.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Thomas, J. C.; Triambak, S.; Williams, S. J.
2013-04-01
The half-life of the superallowed Fermi β+ emitter 18Ne has been determined to ±0.07% precision by counting 1042 keV delayed γ rays that follow approximately 8% of all β decays. The deduced half-life, T1/2=1.6648(11) s, includes a 0.7% correction that accounts for systematic losses associated with rate-dependent detector pulse pileup, determined using a recently developed γ-ray photopeak-counting technique. This result is a factor of two more precise than, and in excellent agreement with, a previous lower-statistics measurement that employed the same experimental setup. High-resolution β-delayed γ-ray spectroscopy results for the relative γ-ray intensities and β-decay branching ratios to excited states in the daughter 18F are also presented.
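As a schematic of how a half-life is extracted from time-binned γ-ray counts (deliberately ignoring the pileup and dead-time corrections that are the actual focus of this measurement), one can fit a straight line to the logarithm of the counts; the data below are noiseless and synthetic.

```python
import math

# Simplified sketch: recover a half-life from ideal counts-per-bin data
# by a linear least-squares fit to ln(counts) versus time, since
# ln N(t) = ln N0 - lambda * t and T1/2 = ln 2 / lambda.
def half_life_from_counts(times, counts):
    ys = [math.log(c) for c in counts]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return math.log(2) / -slope

# Noiseless synthetic data generated with the measured T1/2 = 1.6648 s.
t_half = 1.6648
lam = math.log(2) / t_half
times = [0.2 * i for i in range(20)]
counts = [1.0e6 * math.exp(-lam * t) for t in times]
```

With real data, rate-dependent pileup suppresses counts most in the early, high-rate bins, which is exactly why an uncorrected fit of this kind would bias the deduced half-life and why the 0.7% correction above matters.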
Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions
NASA Astrophysics Data System (ADS)
Cairncross, William B.; Gresh, Daniel N.; Grau, Matt; Cossel, Kevin C.; Roussy, Tanya S.; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A.
2017-10-01
We describe the first precision measurement of the electron's electric dipole moment (de) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on 180Hf19F+ in its metastable 3Δ1 electronic state, we obtain de = (0.9 ± 7.7stat ± 1.7syst) × 10-29 e cm, resulting in an upper bound of |de| < 1.3 × 10-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |de| < 9.4 × 10-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.
Use of big data in drug development for precision medicine
Kim, Rosa S.; Goossens, Nicolas; Hoshida, Yujin
2016-01-01
Summary Drug development has been a costly and lengthy process with an extremely low success rate and lack of consideration of individual diversity in drug response and toxicity. Over the past decade, an alternative “big data” approach has been expanding at an unprecedented pace based on the development of electronic databases of chemical substances, disease gene/protein targets, functional readouts, and clinical information covering inter-individual genetic variations and toxicities. This paradigm shift has enabled systematic, high-throughput, and accelerated identification of novel drugs or repurposed indications of existing drugs for pathogenic molecular aberrations specifically present in each individual patient. The exploding interest from the information technology and direct-to-consumer genetic testing industries has been further facilitating the use of big data to achieve personalized Precision Medicine. Here we overview currently available resources and discuss future prospects. PMID:27430024
High power tests of an electroforming cavity operating at 11.424 GHz
NASA Astrophysics Data System (ADS)
Dolgashev, V. A.; Gatti, G.; Higashi, Y.; Leonardi, O.; Lewandowski, J. R.; Marcelli, A.; Rosenzweig, J.; Spataro, B.; Tantawi, S. G.; Yeremian, D. A.
2016-03-01
The achievement of ultra-high accelerating gradients is mandatory for fabricating compact 11.424 GHz accelerators for scientific and industrial applications. An extensive experimental and theoretical program to establish reliable ultra-high-gradient operation of future linear accelerators is under way in many laboratories. In particular, systematic studies of 11.424 GHz accelerator structures, R&D on new materials, and the associated microwave technology are in progress to achieve accelerating gradients well above 120 MeV/m. Among these, electroforming is a promising approach for manufacturing high-performance RF devices, since it avoids high-temperature brazing and yields precise RF structures. We report here the characterization of a hard, high-gradient RF accelerating structure at 11.424 GHz fabricated using the electroforming technique. Low-level RF measurements and high-power RF tests carried out at the SLAC National Accelerator Laboratory on this prototype are presented and discussed. In addition, we present a possible layout in which water-cooling of the irises based on the electroforming process has been considered for the first time.
Zijlstra, Eric; Heinemann, Lutz; Fischer, Annelie; Kapitza, Christoph
2016-01-01
Background: The objective was to evaluate the performance (in terms of accuracy, precision, and trueness) of 5 CE-certified and commercially available blood glucose (BG) systems (meters plus test strips) using an innovative clinical-experimental study design with a 3-step glucose clamp approach and frequent capillary BG measurements. Methods: Sixteen subjects with type 1 diabetes participated in this open label, single center trial. BG was clamped at 3 levels for 60 minutes each: 60-100-200 mg/dL. Medical staff performed regular finger pricks (up to 10 per BG level) to obtain capillary blood samples for paired BG measurements with the 5 BG systems and a laboratory method as comparison. Results: Three BG systems displayed significantly lower mean absolute relative deviations (MARD) (ACCU-Chek® Aviva Nano [5.4%], BGStar® [5.1%], iBGStar® [5.3%]) than 2 others (FreeStyle InsuLinx® [7.7%], OneTouch Verio®IQ [10.3%]). The measurement precision of all BG systems was comparable, but relative bias was also lower for the 3 systems with lower MARD (ACCU-Chek [1.3%], BGStar [–0.9%], iBGStar [1.0%]) compared with the 2 others (FreeStyle [–7.2%], OneTouch [8.9%]). Conclusions: This 3 range glucose clamp approach enables a systematic performance evaluation of BG systems under controlled and reproducible conditions. The random error of the tested BG systems was comparable, but some showed a lower systematic error than others. These BG systems allow an accurate glucose measurement at low, normal and high BG levels. PMID:27605592
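The two headline metrics in this evaluation, MARD and relative bias, follow directly from paired meter/reference readings. A minimal sketch (the readings below are invented for illustration, not data from the study):

```python
def mard(meter, reference):
    """Mean absolute relative deviation (%) of meter vs. laboratory reference."""
    devs = [abs(m - r) / r for m, r in zip(meter, reference)]
    return 100.0 * sum(devs) / len(devs)

def relative_bias(meter, reference):
    """Mean signed relative deviation (%): the systematic over/under-reading."""
    devs = [(m - r) / r for m, r in zip(meter, reference)]
    return 100.0 * sum(devs) / len(devs)

reference = [60.0, 100.0, 200.0]   # mg/dL, the three clamp targets
meter     = [63.0, 104.0, 196.0]   # paired meter readings (made up)

print(f"MARD = {mard(meter, reference):.1f}%")   # magnitude of total error
print(f"bias = {relative_bias(meter, reference):+.1f}%")  # systematic part
```

MARD mixes random and systematic error magnitude, while the signed relative bias isolates the systematic component, which is why the study reports both.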
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croc, Aurelien
The top quark is the heaviest standard model quark. Discovered in 1995 by the two Tevatron experiments, it has atypical properties. In particular, its lifetime is so short that it decays before hadronizing, so the top quark mass can be measured with high precision. Data collected by the DØ experiment between 2002 and 2009, representing an integrated luminosity of 5.4 fb⁻¹, are used to measure the top quark mass with the matrix element method in the three dilepton channels: dielectron, electron-muon and dimuon. The measured mass, 174.0 ± 1.8 (stat.) ± 2.4 (syst.) GeV, is in good agreement with other measurements and is limited by systematic uncertainties for the first time in these channels. In this thesis different approaches have been studied to improve the accuracy of this measurement: the use of b-quark jet identification to optimize the selection of top-antitop events, and a better determination of the main systematic uncertainties. Particular attention has been paid to the Monte Carlo simulation of muons in DØ: the improved smearing procedure for the simulated muons, discussed in this thesis, will be used to increase the accuracy of top-property measurements as well as the precision of many other DØ measurements.
Cannabis and anxiety: a critical review of the evidence.
Crippa, José Alexandre; Zuardi, Antonio Waldo; Martín-Santos, Rocio; Bhattacharyya, Sagnik; Atakan, Zerrin; McGuire, Philip; Fusar-Poli, Paolo
2009-10-01
Anxiety reactions and panic attacks are the acute symptoms most frequently associated with cannabis use. Understanding the relationship between cannabis and anxiety may clarify the mechanism of action of cannabis and the pathophysiology of anxiety. Aims of the present study were to review the nature of the relationship between cannabis use and anxiety, as well as the possible clinical, diagnostic and causal implications. Systematic review of the Medline, PsycLIT and EMBASE literature. Frequent cannabis users consistently have a high prevalence of anxiety disorders and patients with anxiety disorders have relatively high rates of cannabis use. However, it is unclear if cannabis use increases the risk of developing long-lasting anxiety disorders. Many hypotheses have been proposed in an attempt to explain these relationships, including neurobiological, environmental and social influences. The precise relationship between cannabis use and anxiety has yet to be established. Research is needed to fully clarify the mechanisms of this association.
Past, Present and Future of UHECR Observations
NASA Astrophysics Data System (ADS)
Dawson, B. R.; Fukushima, M.; Sokolsky, P.
2017-12-01
Great advances have been made in the study of ultra-high energy cosmic rays (UHECR) in the past two decades. These include the discovery of the spectral cut-off near 5 x 10^19 eV and complex structure at lower energies, as well as increasingly precise information about the composition of cosmic rays as a function of energy. Important improvements in techniques, including extensive surface detector arrays and high resolution air fluorescence detectors, have been instrumental in facilitating this progress. We discuss the status of the field, including the open questions about the nature of spectral structure, systematic issues related to our understanding of composition, and emerging evidence for anisotropy at the highest energies. We review prospects for upgraded and future observatories including Telescope Array, Pierre Auger and JEM-EUSO and other space-based proposals, and discuss promising new technologies based on radio emission from extensive air showers produced by UHECR.
NASA Astrophysics Data System (ADS)
Ponder, Kara A.
In the late 1990s, Type Ia supernovae (SNeIa) led to the discovery that the Universe is expanding at an accelerating rate due to dark energy. Since then, many different tracers of acceleration have been used to characterize dark energy, but the source of cosmic acceleration has remained a mystery. To better understand dark energy, future surveys such as the ground-based Large Synoptic Survey Telescope and the space-based Wide-Field Infrared Survey Telescope will collect thousands of SNeIa to use as a primary dark energy probe. These large surveys will be systematics-limited, which makes it imperative for our insight regarding systematics to dramatically increase over the next decade for SNeIa to continue to contribute to precision cosmology. I approach this problem by improving statistical methods in the likelihood analysis and collecting near infrared (NIR) SNeIa with their host galaxies to improve the nearby data set and search for additional systematics. Using more statistically robust methods to account for systematics within the likelihood function can increase accuracy in cosmological parameters with a minimal precision loss. Though a sample of at least 10,000 SNeIa is necessary to confirm multiple populations of SNeIa, the bias in cosmology is ~2 sigma with only 2,500 SNeIa. This work focused on an example systematic (host galaxy correlations), but it can be generalized for any systematic that can be represented by a distribution of multiple Gaussians. The SweetSpot survey gathered 114 low-redshift, NIR SNeIa that will act as a crucial anchor sample for the future high redshift surveys. NIR observations are not as affected by dust contamination, which may lead to increased understanding of systematics seen in optical wavelengths. We obtained spatially resolved spectra for 32 SweetSpot host galaxies to test for local host galaxy correlations.
For the first time, we probe global host galaxy correlations with NIR brightnesses from the current literature sample of SNeIa with host galaxy data from publicly available catalogs. We find inconclusive evidence that more massive galaxies host SNeIa that are brighter in the NIR than SNeIa hosted in less massive galaxies.
Hassanpour, Gholmreza; Mohebali, Mehdi; Zeraati, Hojjat; Raeisi, Ahmad; Keshavarz, Hossein
2017-01-01
Background: The objective of this study was to find an appropriate approach to asymptomatic malaria in an elimination setting through a systematic review. Methods: A broad search was conducted to find articles with the word ‘malaria’ in their titles and ‘asymptomatic’ or ‘submicroscopic’ in their texts, irrespective of the type of study conducted. The Cochrane, Medline/PubMed, and Scopus databases, as well as Google Scholar, were systematically searched for English articles and reports, and Iran’s databases (IranMedex, SID, and Magiran) were searched for Persian reports and articles, with no time limitation. A study was qualitatively summarized if it contained precise information on the role of asymptomatic malaria in the elimination phase. Results: Six articles were selected from the initial 2645 articles. The results all re-emphasize the significance of asymptomatic malaria in the elimination phase, and emphasize the significance of diagnostic tests of higher sensitivity to locate these patients, and of interventions to reduce the asymptomatic parasite reservoir, particularly in regions of low transmission. However, we may infer from the results that the current evidence cannot yet specify an accurate strategy on the role of asymptomatic malaria in the elimination phase. Conclusion: To eliminate malaria, alongside vector control and treatment of symptomatic and asymptomatic patients, active and passive case detection need to be employed. The precise monitoring of asymptomatic individuals and submicroscopic cases of malaria through molecular assays and valid serological methods, especially in regions of seasonal and low transmission, can be very helpful at this phase. PMID:29062842
Azarmehr, Iman; Stokbro, Kasper; Bell, R Bryan; Thygesen, Torben
2017-09-01
This systematic review investigates the most common indications, treatments, and outcomes of surgical navigation (SN) published from 2010 to 2015. The evolution of SN and its application in oral and maxillofacial surgery have rapidly developed over recent years, and therapeutic indications are discussed. A systematic search in relevant electronic databases, journals, and bibliographies of the included articles was carried out. Clinical studies with 5 or more patients published between 2010 and 2015 were included. Traumatology, orthognathic surgery, cancer and reconstruction surgery, skull-base surgery, and foreign body removal were the areas of interests. The search generated 13 articles dealing with traumatology; 5, 6, 2, and 0 studies were found that dealt with the topics of orthognathic surgery, cancer and reconstruction surgery, skull-base surgery, and foreign body removal, respectively. The average technical system accuracy and intraoperative precision reported were less than 1 mm and 1 to 2 mm, respectively. In general, SN is reported to be a useful tool for surgical planning, execution, evaluation, and research. The largest numbers of studies and patients were identified in the field of traumatology. Treatment of complex orbital fractures was considerably improved by the use of SN compared with traditionally treated control groups. SN seems to be a very promising addition to the surgical toolkit. Planning details of the surgical procedure in a 3-dimensional virtual environment and execution with real-time guidance can significantly improve precision. Among factors to be considered are the financial investments necessary and the learning curve. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. All rights reserved.
XMM-Subaru:Complete High Precision Study of Galaxy Clusters for Modern Cosmology
NASA Astrophysics Data System (ADS)
Zhang, Yu-Ying
2011-10-01
We request 382 ks of data for 12 clusters to complete our survey of a volume-limited sample of 55 clusters. We investigated the existing data, which hint at a mass-dependent bias in the X-ray to weak-lensing mass ratios for disturbed clusters. X-ray mass proxies, e.g., Yx, show low scatter, but the best fits, particularly the slopes, of the mass-observable relations may be biased due to this mass dependence. Our program will quantify any mass/radial dependent bias based on three independent probes (X-ray/lensing/velocity dispersion) for such a volume-limited sample, and deliver definitive constraints on systematics for upcoming cluster cosmology surveys. The dataset will be a major asset for programs aiming to measure dark energy and programs adding a multi-wavelength focus to studies of cluster physics.
Reinforcement learning in complementarity game and population dynamics
NASA Astrophysics Data System (ADS)
Jost, Jürgen; Li, Wei
2014-02-01
We systematically test and compare different reinforcement learning schemes in a complementarity game [J. Jost and W. Li, Physica A 345, 245 (2005), 10.1016/j.physa.2004.07.005] played between members of two populations. More precisely, we study the Roth-Erev, Bush-Mosteller, and SoftMax reinforcement learning schemes. A modified version of Roth-Erev with a power exponent of 1.5, as opposed to 1 in the standard version, performs best. We also compare these reinforcement learning strategies with evolutionary schemes. This gives insight into aspects like the issue of quick adaptation as opposed to systematic exploration or the role of learning rates.
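The cumulative Roth-Erev scheme compared above can be sketched in a few lines. The two-action game and its payoffs below are toy assumptions, not the complementarity game of the paper, and one plausible reading of the power-exponent modification is used: choice probabilities proportional to propensity raised to the 1.5 power rather than to the propensity itself.

```python
import random

def choose(propensities, power=1.5):
    """Pick an action with probability proportional to propensity**power.
    power=1 recovers standard Roth-Erev choice; 1.5 mirrors the variant
    reported to perform best."""
    weights = [p ** power for p in propensities]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

def roth_erev_update(propensities, action, payoff):
    """Cumulative Roth-Erev: reinforce the chosen action by its payoff."""
    return [p + (payoff if i == action else 0.0)
            for i, p in enumerate(propensities)]

random.seed(0)
q = [1.0, 1.0]                        # initial propensities, two actions
for _ in range(200):
    a = choose(q, power=1.5)
    payoff = 1.0 if a == 0 else 0.0   # toy payoffs: only action 0 pays
    q = roth_erev_update(q, a, payoff)
print(q)  # the rewarded action's propensity dominates
```

Over repeated play the rewarded action accumulates propensity and is chosen with growing probability, which is the basic reinforcement dynamic the paper benchmarks.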
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for other factors that may influence outcomes. This adjustment is useful for obtaining unbiased estimates of true effects or for predicting future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
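The split into a systematic component (the fitted coefficients) and an error component (the residual variability that drives the confidence interval) can be illustrated with ordinary least squares; the data below are invented for illustration.

```python
import math

def fit_with_ci(x, y, z=1.96):
    """Ordinary least-squares fit y = a + b*x.
    The fitted line is the systematic component; the residuals are the
    error component, whose spread yields the confidence interval on b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                     # slope (effect estimate)
    a = my - b * mx                   # intercept
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in residuals) / (n - 2)   # residual variance
    se_b = math.sqrt(s2 / sxx)                     # standard error of slope
    return b, (b - z * se_b, b + z * se_b)

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]   # roughly y = 2x plus noise
slope, (lo, hi) = fit_with_ci(x, y)
print(slope, lo, hi)
```

The point estimate comes entirely from the systematic fit; the width of the interval comes entirely from the unexplained residual variance.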
Schlösser, Magnus; Seitz, Hendrik; Rupp, Simone; Herwig, Philipp; Alecu, Catalin Gabriel; Sturm, Michael; Bornschein, Beate
2013-03-05
Highly accurate, in-line, and real-time composition measurements of gases are mandatory in many processing applications. The quantitative analysis of mixtures of hydrogen isotopologues (H2, D2, T2, HD, HT, and DT) is of high importance in such fields as DT fusion, neutrino mass measurements using tritium β-decay or photonuclear experiments where HD targets are used. Raman spectroscopy is a favorable method for these tasks. In this publication we present a method for the in-line calibration of Raman systems for the nonradioactive hydrogen isotopologues. It is based on precise volumetric gas mixing of the homonuclear species H2/D2 and a controlled catalytic production of the heteronuclear species HD. Systematic effects like spurious exchange reactions with wall materials and others are considered with care during the procedure. A detailed discussion of statistical and systematic uncertainties is presented which finally yields a calibration accuracy of better than 0.4%.
Modeling and Implementation of Multi-Position Non-Continuous Rotation Gyroscope North Finder.
Luo, Jun; Wang, Zhiqian; Shen, Chengwu; Kuijper, Arjan; Wen, Zhuoman; Liu, Shaojin
2016-09-20
Even when the Global Positioning System (GPS) signal is blocked, a rate gyroscope (gyro) north finder is capable of providing the required azimuth reference information to a certain extent. In order to measure the azimuth between the observer and the north direction very accurately, we propose a multi-position non-continuous rotation gyro north finding scheme. Our new generalized mathematical model analyzes the elements that affect the azimuth measurement precision and can thus provide high precision azimuth reference information. Based on the gyro's principle of detecting a projection of the earth rotation rate on its sensitive axis and the proposed north finding scheme, we derive an accurate mathematical model of the gyro outputs against azimuth in the presence of gyro and shaft misalignments. Combining the gyro output model and the theory of propagation of uncertainty, some approaches to optimize north finding are provided, including reducing the gyro bias error, constraining the gyro random error, increasing the number of rotation points, improving rotation angle measurement precision, and decreasing the gyro and shaft misalignment angles. Following these approaches, a north finder setup was built and an azimuth uncertainty of 18″ was obtained. This paper provides systematic theory for analyzing the details of the gyro north finder scheme from simulation to implementation. The proposed theory can guide both applied researchers in academia and advanced practitioners in industry in designing high-precision, robust north finders based on different types of rate gyroscopes.
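The core idea, recovering azimuth from the projection of the horizontal earth-rate component on the gyro axis at several rotation positions, can be sketched as follows. The latitude, bias, and number of positions are illustrative assumptions, and the simple harmonic fit stands in for the paper's full model (no misalignment terms).

```python
import math

EARTH_RATE = 7.292115e-5               # earth rotation rate, rad/s
LAT = math.radians(45.0)               # observer latitude (assumed)
OMEGA_H = EARTH_RATE * math.cos(LAT)   # horizontal earth-rate component

def gyro_output(theta, azimuth, bias):
    """Gyro senses the projection of the horizontal earth rate on its axis."""
    return OMEGA_H * math.cos(theta - azimuth) + bias

def find_north(thetas, outputs):
    """Recover azimuth and bias from multi-position outputs by a harmonic fit.
    With evenly spaced positions, the least-squares solution reduces to
    discrete Fourier coefficients of the output sequence."""
    n = len(thetas)
    a = 2.0 / n * sum(y * math.cos(t) for t, y in zip(thetas, outputs))
    b = 2.0 / n * sum(y * math.sin(t) for t, y in zip(thetas, outputs))
    bias = sum(outputs) / n            # cosine/sine sums vanish, leaving bias
    return math.atan2(b, a), bias

true_az, true_bias = math.radians(30.0), 1e-6
thetas = [2 * math.pi * k / 8 for k in range(8)]   # 8 rotation positions
outputs = [gyro_output(t, true_az, true_bias) for t in thetas]
az, bias = find_north(thetas, outputs)
print(math.degrees(az))   # ≈ 30
```

Because the fit separates the constant term from the harmonic term, the gyro bias drops out of the azimuth estimate, which is the motivation for multi-position schemes.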
Precision of natural satellite ephemerides from observations of different types
NASA Astrophysics Data System (ADS)
Emelyanov, N. V.
2017-08-01
Currently, various types of observations of natural planetary satellites are used to refine their ephemerides. A new type of measurement - determining the instants of apparent satellite encounters - has recently been proposed by Morgado and co-workers. The problem that arises is which type of measurement to choose in order to obtain an ephemeris precision that is as high as possible. The answer can be obtained only by modelling the entire process: observations, obtaining the measured values, refining the satellite motion parameters, and generating the ephemeris. The explicit dependence of the ephemeris precision on observational accuracy as well as on the type of observations is unknown. In this paper, such a dependence is investigated using the Monte Carlo statistical method. The relationship between the ephemeris precision for different types of observations is then assessed. The possibility of using the instants of apparent satellite encounters to obtain an ephemeris is investigated. A method is proposed that can be used to fit the satellite orbital parameters to this type of measurement. It is shown that, in the absence of systematic scale errors in the CCD frame, the use of the instants of apparent encounters leads to less precise ephemerides. However, in the presence of significant scale errors, which is often the case, this type of measurement becomes effective because the instants of apparent satellite encounters do not depend on scale errors.
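The Monte Carlo approach described, simulating observations, refitting parameters, and measuring the spread of the resulting ephemeris, can be illustrated in miniature with a linear along-track model; the model, epochs, and noise levels here are assumptions for illustration, not the satellite dynamics used in the paper.

```python
import math
import random

def mc_ephemeris_precision(times, sigma, t_pred, trials=2000, seed=1):
    """Monte Carlo estimate of ephemeris precision: simulate noisy
    observations of a linear model p(t) = p0 + v*t, refit (p0, v) each
    trial by least squares, and return the spread of the predicted
    position at a future epoch t_pred."""
    rng = random.Random(seed)
    true_p0, true_v = 0.0, 1.0
    n = len(times)
    preds = []
    for _ in range(trials):
        obs = [true_p0 + true_v * t + rng.gauss(0.0, sigma) for t in times]
        mt, mo = sum(times) / n, sum(obs) / n
        stt = sum((t - mt) ** 2 for t in times)
        sto = sum((t - mt) * (o - mo) for t, o in zip(times, obs))
        v = sto / stt                  # refitted motion parameter
        p0 = mo - v * mt
        preds.append(p0 + v * t_pred)  # extrapolated ephemeris position
    mean = sum(preds) / len(preds)
    return math.sqrt(sum((p - mean) ** 2 for p in preds) / (len(preds) - 1))

times = [0, 1, 2, 3, 4, 5]             # observation epochs (arbitrary units)
print(mc_ephemeris_precision(times, sigma=0.05, t_pred=10.0))
```

The same machinery lets one swap in a different measured quantity (e.g., encounter instants instead of positions) and compare the resulting ephemeris spreads, which is the comparison the paper carries out.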
Patient safety and systematic reviews: finding papers indexed in MEDLINE, EMBASE and CINAHL.
Tanon, A A; Champagne, F; Contandriopoulos, A-P; Pomey, M-P; Vadeboncoeur, A; Nguyen, H
2010-10-01
To develop search strategies for identifying papers on patient safety in MEDLINE, EMBASE and CINAHL. Six journals were electronically searched for papers on patient safety published between 2000 and 2006. Identified papers were divided into two gold standards: one to build and the other to validate the search strategies. Candidate terms for strategy construction were identified using a word frequency analysis of titles, abstracts and keywords used to index the papers in the databases. Searches were run for each one of the selected terms independently in every database. Sensitivity, precision and specificity were calculated for each candidate term. Terms with sensitivity greater than 10% were combined to form the final strategies. The search strategies developed were run against the validation gold standard to assess their performance. A final step in the validation process was to compare the performance of each strategy to those of other strategies found in the literature. We developed strategies for all three databases that were highly sensitive (range 95%-100%), precise (range 40%-60%) and balanced (the product of sensitivity and precision being in the range of 30%-40%). The strategies were very specific and outperformed those found in the literature. The strategies we developed can meet the needs of users aiming to maximise either sensitivity or precision, or seeking a reasonable compromise between sensitivity and precision, when searching for papers on patient safety in MEDLINE, EMBASE or CINAHL.
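Sensitivity, precision, and specificity of a search strategy follow directly from the retrieved set, the gold-standard relevant set, and the database size. A minimal sketch with invented counts, not figures from the study:

```python
def retrieval_metrics(retrieved, gold, indexed_total):
    """Sensitivity (recall), precision, and specificity of a search strategy,
    given the retrieved record set, the gold-standard relevant set, and the
    total number of records indexed in the database."""
    tp = len(retrieved & gold)         # relevant records retrieved
    fp = len(retrieved - gold)         # irrelevant records retrieved
    fn = len(gold - retrieved)         # relevant records missed
    tn = indexed_total - tp - fp - fn  # irrelevant records excluded
    return {
        "sensitivity": tp / (tp + fn),
        "precision":   tp / (tp + fp),
        "specificity": tn / (tn + fp),
    }

gold = set(range(100))                               # 100 relevant records
retrieved = set(range(95)) | set(range(1000, 1150))  # 95 hits + 150 noise
m = retrieval_metrics(retrieved, gold, indexed_total=10_000)
print(m)
```

Note how specificity stays high even for noisy searches, because the database is vastly larger than any result set; this is why search-filter studies report sensitivity and precision as the decisive metrics.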
Resolving the neutron lifetime puzzle
NASA Astrophysics Data System (ADS)
Mumm, Pieter
2018-05-01
Free electrons and protons are stable, but outside atomic nuclei, free neutrons decay into a proton, electron, and antineutrino through the weak interaction, with a lifetime of ∼880 s (see the figure). The most precise measurements have stated uncertainties below 1 s (0.1%), but different techniques, although internally consistent, disagree by 4 standard deviations given the quoted uncertainties. Resolving this “neutron lifetime puzzle” has spawned much experimental effort as well as exotic theoretical mechanisms, thus far without a clear explanation. On page 627 of this issue, Pattie et al. (1) present the most precise measurement of the neutron lifetime to date. A new method of measuring trapped neutrons in situ allows a more detailed exploration of one of the more pernicious systematic effects in neutron traps, neutron phase-space evolution (the changing orbits of neutrons in the trap), than do previous methods. The precision achieved, combined with a very different set of systematic uncertainties, gives hope that experiments such as this one can help resolve the current situation with the neutron lifetime.
Mapping coastal marine debris using aerial imagery and spatial analysis.
Moy, Kirsten; Neilson, Brian; Chung, Anne; Meadows, Amber; Castrence, Miguel; Ambagis, Stephen; Davidson, Kristine
2017-12-19
This study is the first to systematically quantify, categorize, and map marine macro-debris across the main Hawaiian Islands (MHI), including remote areas (e.g., Niihau, Kahoolawe, and northern Molokai). Aerial surveys were conducted over each island to collect high resolution photos, which were processed into orthorectified imagery and visually analyzed in GIS. The technique provided precise measurements of the quantity, location, type, and size of macro-debris (>0.05 m²), identifying 20,658 total debris items. Northeastern (windward) shorelines had the highest density of debris. Plastics, including nets, lines, buoys, floats, and foam, comprised 83% of the total count. In addition, the study located six vessels from the 2011 Tōhoku tsunami. These results created a baseline of the location, distribution, and composition of marine macro-debris across the MHI. Resource managers and communities may target high priority areas, particularly along remote coastlines where macro-debris counts were largely undocumented. Copyright © 2017 Elsevier Ltd. All rights reserved.
Demonstration of improved sensitivity of echo interferometers to gravitational acceleration
NASA Astrophysics Data System (ADS)
Mok, C.; Barrett, B.; Carew, A.; Berthiaume, R.; Beattie, S.; Kumarakrishnan, A.
2013-08-01
We have developed two configurations of an echo interferometer that rely on standing-wave excitation of a laser-cooled sample of rubidium atoms. Both configurations can be used to measure acceleration a along the axis of excitation. For a two-pulse configuration, the signal from the interferometer is modulated at the recoil frequency and exhibits a sinusoidal frequency chirp as a function of pulse spacing. In comparison, for a three-pulse stimulated-echo configuration, the signal is observed without recoil modulation and exhibits a modulation at a single frequency as a function of pulse spacing. The three-pulse configuration is less sensitive to effects of vibrations and magnetic field curvature, leading to a longer experimental time scale. For both configurations of the atom interferometer (AI), we show that a measurement of acceleration with a statistical precision of 0.5% can be realized by analyzing the shape of the echo envelope that has a temporal duration of a few microseconds. Using the two-pulse AI, we obtain measurements of acceleration that are statistically precise to 6 parts per million (ppm) on a 25 ms time scale. In comparison, using the three-pulse AI, we obtain measurements of acceleration that are statistically precise to 0.4 ppm on a time scale of 50 ms. A further statistical enhancement is achieved by analyzing the data across the echo envelope so that the statistical error is reduced to 75 parts per billion (ppb). The inhomogeneous field of a magnetized vacuum chamber limited the experimental time scale and resulted in prominent systematic effects. Extended time scales and improved signal-to-noise ratio observed in recent echo experiments using a nonmagnetic vacuum chamber suggest that echo techniques are suitable for a high-precision measurement of gravitational acceleration g. We discuss methods for reducing systematic effects and improving the signal-to-noise ratio. 
Simulations of both AI configurations with a time scale of 300 ms suggest that an optimized experiment with improved vibration isolation and atoms selected in the mF=0 state can result in measurements of g statistically precise to 0.3 ppb for the two-pulse AI and 0.6 ppb for the three-pulse AI.
Half-life of the superallowed β+ emitter Ne18
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Smith, M. B.; Andreoiu, C.; Andreyev, A. N.; Ball, G. C.; Bricault, P.; Chakrawarthy, R. S.; Daoud, J. J.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hyland, B.; Leslie, J. R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Schumaker, M. A.; Svensson, C. E.; Valiente-Dobón, J. J.; Williams, S. J.; Zganjar, E. F.
2007-08-01
The half-life of Ne18 has been determined by detecting 1042-keV γ rays in the daughter F18 following the superallowed-Fermi β+ decay of samples implanted at the center of the 8πγ-ray spectrometer, a spherical array of 20 HPGe detectors. Radioactive Ne18 beams were produced on-line, mass-separated, and ionized using an electron-cyclotron-resonance ionization source at the ISAC facility at TRIUMF in Vancouver, Canada. This is the first high-precision half-life measurement of a superallowed Fermi β decay to utilize both a large-scale HPGe spectrometer and the isotope separation on-line technique. The half-life of Ne18, 1.6656 ± 0.0019 s, deduced following a 1.4σ correction for detector pulse pile-up, is four times more precise than the previous world average. As part of an investigation into potential systematic effects, the half-life of the heavier isotope Ne23 was determined to be 37.11 ± 0.06 s, a factor of 2 improvement over the previous precision.
Search for new physics in a precise 20F beta spectrum shape measurement
NASA Astrophysics Data System (ADS)
George, Elizabeth; Voytas, Paul; Chuna, Thomas; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Paulauskas, Stanley; Weisshaar, Dirk; Ban, Gilles; Flechard, Xavier; Lienard, Etienne
2015-10-01
We are carrying out a measurement of the shape of the energy spectrum of β particles from 20F decay. We aim to achieve a relative precision below 3%, representing an order of magnitude improvement compared to previous experiments. This level of precision will enable a test of the so-called strong form of the conserved vector current (CVC) hypothesis, and should also enable us to place competitive limits on the contributions of exotic tensor couplings in beta decay. In order to control systematic effects, we are using a technique that takes advantage of high energy radioactive beams at the NSCL to implant the decaying nuclei in a scintillation detector deep enough that the emitted beta particles cannot escape. The β-particle energy is measured with the implantation detector after switching off the beam implantation. Ancillary detectors are used to tag the 1.633-MeV γ-rays following the β decay for coincidence measurements in order to reduce backgrounds. We will give an overview and report on the status of the experiment.
Markstein, Michele; Pitsouli, Chrysoula; Villalta, Christians; Celniker, Susan E; Perrimon, Norbert
2008-04-01
A major obstacle to creating precisely expressed transgenes lies in the epigenetic effects of the host chromatin that surrounds them. Here we present a strategy to overcome this problem, employing a Gal4-inducible luciferase assay to systematically quantify position effects of host chromatin and the ability of insulators to counteract these effects at phiC31 integration loci randomly distributed throughout the Drosophila genome. We identify loci that can be exploited to deliver precise doses of transgene expression to specific tissues. Moreover, we uncover a previously unrecognized property of the gypsy retrovirus insulator to boost gene expression to levels severalfold greater than at most or possibly all un-insulated loci, in every tissue tested. These findings provide the first opportunity to create a battery of transgenes that can be reliably expressed at high levels in virtually any tissue by integration at a single locus, and conversely, to engineer a controlled phenotypic allelic series by exploiting several loci. The generality of our approach makes it adaptable to other model systems to identify and modify loci for optimal transgene expression.
Influence of volunteer and project characteristics on data quality of biological surveys.
Lewandowski, Eva; Specht, Hannah
2015-06-01
Volunteer involvement in biological surveys is becoming common in conservation and ecology, prompting questions on the quality of data collected in such surveys. In a systematic review of the peer-reviewed literature on the quality of data collected by volunteers, we examined the characteristics of volunteers (e.g., age, prior knowledge) and projects (e.g., systematic vs. opportunistic monitoring schemes) that affect data quality with regard to standardization of sampling, accuracy and precision of data collection, spatial and temporal representation of data, and sample size. Most studies (70%, n = 71) focused on the act of data collection. The majority of assessments of volunteer characteristics (58%, n = 93) examined the effect of prior knowledge and experience on quality of the data collected, often by comparing volunteers with experts or professionals, who were usually assumed to collect higher quality data. However, when both groups' data were compared with the same accuracy standard, professional data were more accurate in only 4 of 7 cases. The few studies that measured precision of volunteer and professional data did not conclusively show that professional data were less variable than volunteer data. To improve data quality, studies recommended changes to survey protocols, volunteer training, statistical analyses, and project structure (e.g., volunteer recruitment and retention). © 2015, Society for Conservation Biology.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal, and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying the time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by the residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
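The time-domain Allan variance mentioned above can be illustrated with a minimal sketch (the simple non-overlapping estimator; the function name is hypothetical and this is not the authors' implementation):

```python
def allan_variance(series, m):
    """Non-overlapping Allan variance of `series` at averaging factor m:
    sigma^2(m) = 0.5 * mean((ybar[k+1] - ybar[k])**2),
    where ybar[k] are consecutive block averages of length m."""
    n_blocks = len(series) // m
    means = [sum(series[i * m:(i + 1) * m]) / m for i in range(n_blocks)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n_blocks - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# A constant series has zero Allan variance; an alternating one does not.
print(allan_variance([5.0] * 16, 2))       # → 0.0
print(allan_variance([1.0, -1.0] * 8, 1))  # → 2.0
```

Plotting sigma^2(m) against m on log–log axes separates white noise (which falls off as 1/m) from correlated, unmodeled error components that decay more slowly.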
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolphin, Andrew E., E-mail: adolphin@raytheon.com
The combination of spectroscopic stellar metallicities and resolved star color–magnitude diagrams (CMDs) has the potential to constrain the entire star formation history (SFH) of a galaxy better than fitting CMDs alone (as is most common in SFH studies using resolved stellar populations). In this paper, two approaches to incorporating external metallicity information into CMD-fitting techniques are presented. Overall, the joint fitting of metallicity and CMD information can increase the precision of measured age–metallicity relationships (AMRs) and star formation rates by 10% over CMD fitting alone. However, systematics in stellar isochrones and mismatches between spectroscopic and photometric determinations of metallicity can reduce the accuracy of the recovered SFHs. I present a simple mitigation of these systematics that can reduce their amplitude to the level obtained from CMD fitting alone, while ensuring that the AMR is consistent with spectroscopic metallicities. As is the case in CMD-fitting analysis, improved stellar models and calibrations between spectroscopic and photometric metallicities are currently the primary impediment to gains in SFH precision from jointly fitting stellar metallicities and CMDs.
Precision Measurement of the Beryllium-7 Solar Neutrino Interaction Rate in Borexino
NASA Astrophysics Data System (ADS)
Saldanha, Richard Nigel
Solar neutrinos, since their first detection nearly forty years ago, have revealed valuable information regarding the source of energy production in the Sun, and have demonstrated that neutrino oscillations are well described by the Large Mixing Angle (LMA) oscillation parameters with matter interactions due to the Mikheyev-Smirnov-Wolfenstein (MSW) effect. This thesis presents a precision measurement of the 7Be solar neutrino interaction rate within Borexino, an underground liquid scintillator detector that is designed to measure solar neutrino interactions through neutrino-electron elastic scattering. The thesis includes a detailed description of the analysis techniques developed and used for this measurement as well as an evaluation of the relevant systematic uncertainties that affect the precision of the result. The rate of neutrino-electron elastic scattering from 0.862 MeV 7Be neutrinos is determined to be 45.4 +/- 1.6 (stat) +/- 1.5 (sys) counts/day/100 ton. Due to extensive detector calibrations and improved analysis methods, the systematic uncertainty in the interaction rate has been reduced by more than a factor of two from the previous evaluation. In the no-oscillation hypothesis, the interaction rate corresponds to a 0.862 MeV 7Be electron neutrino flux of (2.75 +/- 0.13) x 10 9 cm-2 sec-1. Including the predicted neutrino flux from the Standard Solar Model yields an electron neutrino survival probability of Pee = 0.51 +/- 0.07 and rules out the no-oscillation hypothesis at 5.1 sigma. The LMA-MSW neutrino oscillation model predicts a transition in the solar Pee value between low (< 1 MeV) and high (> 10 MeV) energies which has not yet been experimentally confirmed. This result, in conjunction with the Standard Solar Model, represents the most precise measurement of the electron neutrino survival probability for solar neutrinos at sub-MeV energies.
O'Brien, Anthony Terrence; Torrealba Acosta, Gabriel; Huerta, Rodrigo; Thibaut, Aurore
2017-06-22
Dexterity is described as coordinated hand and finger movement for precision tasks. It is essential for day-to-day activities like computer use, writing or buttoning a shirt. Integrity of brain motor networks is crucial to properly execute these fine hand tasks. When these networks are damaged, interventions to enhance recovery are frequently accompanied by unwanted side effects or limited in their effect. Non-invasive brain stimulation (NIBS) techniques are postulated to target affected motor areas and improve hand motor function with few side effects. However, the results across studies vary, and the current literature does not allow us to draw clear conclusions on the use of NIBS to promote hand function recovery. Therefore, we developed a protocol for a systematic review and meta-analysis on the effects of different NIBS technologies on dexterity in diverse populations. This study will potentially help future evidence-based research and guidelines that use these NIBS technologies for recovering hand dexterity. This protocol will compare the effects of active versus sham NIBS on precise hand activity. Records will be obtained by searching relevant databases. Included articles will be randomised clinical trials in adults, testing the therapeutic effects of NIBS on continuous dexterity data. Records will be studied for risk of bias. Narrative and quantitative synthesis will be done. No private health information is included; the study is not interventional. Ethical approval is not required. The results will be reported in a peer-review journal. PROSPERO International prospective register of systematic reviews registration number: CRD42016043809. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Lin, Yan-Cheng; Chou, Wu-Ching; Susha, Andrei S.; Kershaw, Stephen V.; Rogach, Andrey L.
2013-03-01
The application of static high pressure provides a method for precisely controlling and investigating many fundamental and unique properties of semiconductor nanocrystals (NCs). This study systematically investigates the high-pressure photoluminescence (PL) and time-resolved carrier dynamics of thiol-capped CdTe NCs of different sizes, at different concentrations, and in various stress environments. The zincblende-to-rocksalt phase transition in thiol-capped CdTe NCs is observed at a pressure far in excess of the bulk phase transition pressure. Additionally, the process of transformation depends strongly on NC size, and the phase transition pressure increases with NC size. These peculiar phenomena are attributed to the distinctive bonding of thiols to the NC surface. In a nonhydrostatic environment, considerable flattening of the PL energy of CdTe NC powder is observed above 3.0 GPa. Furthermore, asymmetric and double-peak PL emissions are obtained from a concentrated solution of CdTe NCs under hydrostatic pressure, implying the feasibility of pressure-induced interparticle coupling.
Water-soluble CdTe nanocrystals under high pressure
NASA Astrophysics Data System (ADS)
Lin, Yan-Cheng
2015-02-01
The application of static high pressure provides a method for precisely controlling and investigating many fundamental and unique properties of semiconductor nanocrystals (NCs). This study systematically investigates the high-pressure photoluminescence (PL) and time-resolved carrier dynamics of thiol-capped CdTe NCs of different sizes, at different concentrations, and in various stress environments. The zincblende-to-rocksalt phase transition in thiol-capped CdTe NCs is observed at a pressure far in excess of the bulk phase transition pressure. Additionally, the process of transformation depends strongly on NC size, and the phase transition pressure increases with NC size. These peculiar phenomena are attributed to the distinctive bonding of thiols to the NC surface. In a nonhydrostatic environment, considerable flattening of the PL energy of CdTe NC powder is observed above 3.0 GPa. Furthermore, asymmetric and double-peak PL emissions are obtained from a concentrated solution of CdTe NCs under hydrostatic pressure, implying the feasibility of pressure-induced interparticle coupling.
Results from the HARP Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catanesi, M. G.
2008-02-21
Hadron production is a key ingredient in many aspects of {nu} physics. Precise prediction of atmospheric {nu} fluxes, characterization of accelerator {nu} beams, quantification of {pi} production and capture for {nu}-factory designs, all of these would profit from hadron production measurements. HARP at the CERN PS was the first hadron production experiment designed specifically to match all these requirements. It combines a large, full phase space acceptance with low systematic errors and high statistics. HARP was operated in the range from 3 GeV to 15 GeV. We briefly describe here the most recent results.
Measurement of optical Feshbach resonances in an ideal gas.
Blatt, S; Nicholson, T L; Bloom, B J; Williams, J R; Thomsen, J W; Julienne, P S; Ye, J
2011-08-12
Using a narrow intercombination line in alkaline earth atoms to mitigate large inelastic losses, we explore the optical Feshbach resonance effect in an ultracold gas of bosonic (88)Sr. A systematic measurement of three resonances allows precise determinations of the optical Feshbach resonance strength and scaling law, in agreement with coupled-channel theory. Resonant enhancement of the complex scattering length leads to thermalization mediated by elastic and inelastic collisions in an otherwise ideal gas. Optical Feshbach resonance could be used to control atomic interactions with high spatial and temporal resolution.
Using PS1 and Type Ia Supernovae To Make Most Precise Measurement of Dark Energy To Date
NASA Astrophysics Data System (ADS)
Scolnic, Daniel; Pan-STARRS
2018-01-01
I will review recent results that present optical light curves, redshifts, and classifications for 361 spectroscopically confirmed Type Ia supernovae (SNeIa) discovered by the Pan-STARRS1 (PS1) Medium Deep Survey. I will go over improvements to the PS1 SN photometry, astrometry and calibration that reduce the systematic uncertainties in the PS1 SN Ia distances. We combined distances of PS1 SNe with distance estimates of SNIa from SDSS, SNLS, various low-z and HST samples to form the largest combined sample of SN Ia, consisting of a total of ~1050 SN Ia spanning 0.01 < z < 2.3, which we call the ‘Pantheon Sample’. Photometric calibration uncertainties have long dominated the systematic error budget of every major analysis of cosmological parameters with SNIa. Using the PS1 relative calibration, we have reduced these calibration systematics to the point where they are similar in magnitude to the other major sources of known systematic uncertainties: the nature of the intrinsic scatter of SNIa and modeling of selection effects. I will present measurements of dark energy which are now the most precise measurements of dark energy to date.
Precision cosmology with time delay lenses: High resolution imaging requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiao-Lei; Treu, Tommaso; Agnello, Adriano
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ``Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ{sub tot} ∝ r{sup −γ'} for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems, that will be discovered by current and future surveys, targeted follow-up will be required.
Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT, will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.
Precision cosmology with time delay lenses: high resolution imaging requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiao-Lei; Liao, Kai; Treu, Tommaso
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as ''Einstein Rings'' in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ{sub tot}∝ r{sup −γ'} for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems, that will be discovered by current and future surveys, targeted follow-up will be required.
However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT, will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.
Beyond MEDLINE for literature searches.
Conn, Vicki S; Isaramalai, Sang-arun; Rath, Sabyasachi; Jantarakupt, Peeranuch; Wadhawan, Rohini; Dash, Yashodhara
2003-01-01
To describe strategies for a comprehensive literature search. MEDLINE searches result in limited numbers of studies that are often biased toward statistically significant findings. Diversified search strategies are needed. Empirical evidence about the recall and precision of diverse search strategies is presented. Challenges and strengths of each search strategy are identified. Search strategies vary in recall and precision. Often sensitivity and specificity are inversely related. Valuable search strategies include examination of multiple diverse computerized databases, ancestry searches, citation index searches, examination of research registries, journal hand searching, contact with the "invisible college," examination of abstracts, Internet searches, and contact with sources of synthesized information. Extending searches beyond MEDLINE enables researchers to conduct more systematic comprehensive searches.
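For the record above (and the recall/precision figures quoted throughout these abstracts), the two metrics are computed against a gold-standard set of relevant records; a minimal sketch follows (record identifiers and the function name are illustrative):

```python
def recall_precision(retrieved, relevant):
    """Recall = |retrieved ∩ relevant| / |relevant|;
    precision = |retrieved ∩ relevant| / |retrieved|."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(relevant), hits / len(retrieved)

# e.g. a search returning 4 records, 2 of which are in a 3-record gold standard:
recall, precision = recall_precision([101, 102, 103, 104], [102, 103, 200])
print(round(recall, 3), precision)  # → 0.667 0.5
```

In the retrieval literature, sensitivity is synonymous with recall, which is why a highly sensitive search filter (recall above 95%) can still have precision well below 1% when the result set is large.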
Study on high-precision measurement of long radius of curvature
NASA Astrophysics Data System (ADS)
Wu, Dongcheng; Peng, Shijun; Gao, Songtao
2016-09-01
High-precision measurement of the radius of curvature (ROC) is difficult because many factors affect the measurement accuracy, and for long radii some factors matter more than others. This paper first investigates which factors are related to the long measurement distance and analyses the uncertainty of the measurement accuracy. It then studies the influence of the support configuration and of the adjustment errors at the cat's-eye and confocal positions. Finally, a convex surface with a 1055-micrometer radius of curvature is measured in a high-precision laboratory. Experimental results show that a proper steady support (three-point support) can guarantee high-precision ROC measurement, and that calibrating the gain at the cat's-eye and confocal positions helps locate these positions precisely and thereby increases the measurement accuracy. With these measures, high-precision measurement of long ROC is realized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, B.; Erni, W.; Krusche, B.
Simulation results for future measurements of electromagnetic proton form factors at $$\\overline{\\rm P}$$ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p¯p → e +e – is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p¯p → π +π –, is studied. Furthermore, the background versus signal efficiency, and the statistical and systematic uncertainties on the extracted proton form factors, are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
Singh, B.; Erni, W.; Krusche, B.; ...
2016-10-28
Simulation results for future measurements of electromagnetic proton form factors at $$\\overline{\\rm P}$$ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p¯p → e +e – is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p¯p → π +π –, is studied. Furthermore, the background versus signal efficiency, and the statistical and systematic uncertainties on the extracted proton form factors, are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
A Systematic Approach to the Study of Accelerated weathering of Building Joint Sealants
Christopher C. White; Donald L. Hunston; Kar Tean Tan; James J. Filliben; Adam L. Pintar; Greg Schueneman
2012-01-01
An accurate service life prediction model is needed for building joint sealants in order to greatly reduce the time to market of a new product and reduce the risk of introducing a poorly performing product into the marketplace. A stepping stone to the success of this effort is the precise control of environmental variables in a laboratory accelerated test apparatus in...
Cosmic ray measurements with LOPES: Status and recent results
NASA Astrophysics Data System (ADS)
Schröder, F. G.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Fuhrmann, D.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.
2013-05-01
LOPES is a digital antenna array at the Karlsruhe Institute of Technology, Germany, for cosmic-ray air-shower measurements. Triggered by the co-located KASCADE-Grande air-shower array, LOPES detects the radio emission of air showers via digital radio interferometry. We summarize the status of LOPES and recent results. In particular, we present an update on the reconstruction of the primary-particle properties based on almost 500 events above 100 PeV. With LOPES, the arrival direction can be reconstructed with a precision of at least 0.65°, and the energy with a precision of at least 20%, which, however, does not include systematic uncertainties on the absolute energy scale. For many particle and astrophysics questions the reconstruction of the atmospheric depth of the shower maximum, Xmax, is important, since it yields information on the type of the primary particle and its interaction with the atmosphere. Recently, we found experimental evidence that the slope of the radio lateral distribution is indeed sensitive to the longitudinal development of the air shower, but unfortunately, the Xmax precision at LOPES is limited by the high level of anthropogenic radio background. Nevertheless, the developed methods can be transferred to next generation experiments with lower background, which should provide an Xmax precision competitive to other detection technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Straten, W., E-mail: vanstraten.willem@gmail.com
2013-01-15
A new method of polarimetric calibration is presented in which the instrumental response is derived from regular observations of PSR J0437-4715 based on the assumption that the mean polarized emission from this millisecond pulsar remains constant over time. The technique is applicable to any experiment in which high-fidelity polarimetry is required over long timescales; it is demonstrated by calibrating 7.2 years of high-precision timing observations of PSR J1022+1001 made at the Parkes Observatory. Application of the new technique followed by arrival time estimation using matrix template matching yields post-fit residuals with an uncertainty-weighted standard deviation of 880 ns, two times smaller than that of arrival time residuals obtained via conventional methods of calibration and arrival time estimation. The precision achieved by this experiment yields the first significant measurements of the secular variation of the projected semimajor axis, the precession of periastron, and the Shapiro delay; it also places PSR J1022+1001 among the 10 best pulsars regularly observed as part of the Parkes Pulsar Timing Array (PPTA) project. It is shown that the timing accuracy of a large fraction of the pulsars in the PPTA is currently limited by the systematic timing error due to instrumental polarization artifacts. More importantly, long-term variations of systematic error are correlated between different pulsars, which adversely affects the primary objectives of any pulsar timing array experiment. These limitations may be overcome by adopting the techniques presented in this work, which relax the demand for instrumental polarization purity and thereby have the potential to reduce the development cost of next-generation telescopes such as the Square Kilometre Array.
Precision medicine: opportunities, possibilities, and challenges for patients and providers.
Adams, Samantha A; Petersen, Carolyn
2016-07-01
Precision medicine approaches disease treatment and prevention by taking patients' individual variability in genes, environment, and lifestyle into account. Although the ideas underlying precision medicine are not new, opportunities for its more widespread use in practice have been enhanced by the development of large-scale databases, new methods for categorizing and representing patients, and computational tools for analyzing large datasets. New research methods may create uncertainty for both healthcare professionals and patients. In such situations, frameworks that address ethical, legal, and social challenges can be instrumental for facilitating trust between patients and providers, but must protect patients while not stifling progress or overburdening healthcare professionals. In this perspective, we outline several ethical, legal, and social issues related to the Precision Medicine Initiative's proposed changes to current institutions, values, and frameworks. This piece is not an exhaustive overview, but is intended to highlight areas meriting further study and action, so that precision medicine's goal of facilitating systematic learning and research at the point of care does not overshadow healthcare's goal of providing care to patients. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Dielectric inspection of erythrocyte morphology.
Hayashi, Yoshihito; Oshige, Ikuya; Katsumoto, Yoichi; Omori, Shinji; Yasuda, Akio; Asami, Koji
2008-05-21
We performed a systematic study of the sensitivity of dielectric spectroscopy to erythrocyte morphology. Namely, rabbit erythrocytes of four different shapes were prepared by precisely controlling the pH of the suspending medium, and their complex permittivities over the frequency range from 0.1 to 110 MHz were measured and analyzed. Their quantitative analysis shows that the characteristic frequency and the broadening parameter of the dielectric relaxation of interfacial polarization are highly specific to the erythrocyte shape, while they are insensitive to the cell volume fraction. Therefore, these two dielectric parameters can be used to differentiate erythrocytes of different shapes, if dielectric spectroscopy is applied to flow-cytometric inspection of single blood cells. In addition, we revealed the applicability and limitations of the analytical theory of interfacial polarization to explain the experimental permittivities of non-spherical erythrocytes.
Cosmology with the Large Synoptic Survey Telescope: an overview
NASA Astrophysics Data System (ADS)
Zhan, Hu; Tyson, J. Anthony
2018-06-01
The Large Synoptic Survey Telescope (LSST) is a high étendue imaging facility that is being constructed atop Cerro Pachón in northern Chile. It is scheduled to begin science operations in 2022. With a large effective aperture, a novel three-mirror design achieving a seeing-limited field of view, and a 3.2 gigapixel camera, the LSST has the deep-wide-fast imaging capability necessary to carry out a wide and deep survey in six passbands (ugrizy) over 10 years, using the majority of its observational time. The remaining time will be devoted to considerably deeper and faster time-domain observations and smaller surveys. In total, each patch of the sky in the main survey will receive 800 visits allocated across the six passbands. The huge volume of high-quality LSST data will provide a wide range of science opportunities and, in particular, open a new era of precision cosmology with unprecedented statistical power and tight control of systematic errors. In this review, we give a brief account of the LSST cosmology program with an emphasis on dark energy investigations. The LSST will address dark energy physics and cosmology in general by exploiting diverse precision probes including large-scale structure, weak lensing, type Ia supernovae, galaxy clusters, and strong lensing. Combined with the cosmic microwave background data, these probes form interlocking tests on the cosmological model and the nature of dark energy in the presence of various systematics. The LSST data products will be made available to the US and Chilean scientific communities and to international partners with no proprietary period. Close collaborations with contemporaneous imaging and spectroscopy surveys observing at a variety of wavelengths, resolutions, depths, and timescales will be a vital part of the LSST science program, which will not only enhance specific studies but, more importantly, also allow a more complete understanding of the Universe through different windows.
Targeted Quantitation of Proteins by Mass Spectrometry
2013-01-01
Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement. PMID:23517332
A disturbance observer-based adaptive control approach for flexure beam nano manipulators.
Zhang, Yangming; Yan, Peng; Zhang, Zhen
2016-01-01
This paper presents a systematic modeling and control methodology for a two-dimensional flexure beam-based servo stage supporting micro/nano manipulation. Compared with conventional mechatronic systems, such systems pose major control challenges, including cross-axis coupling, dynamical uncertainties, and input saturation, which may have adverse effects on system performance unless effectively eliminated. A novel disturbance observer-based adaptive backstepping-like control approach is developed for high-precision servo manipulation, which effectively accommodates model uncertainties and coupling dynamics. An auxiliary system is also introduced, on top of the proposed control scheme, to compensate for input saturation. The proposed control architecture is deployed on a custom-designed nano manipulating system featuring a flexure beam structure and voice coil actuators (VCAs). Real-time experiments on various manipulation tasks, such as trajectory/contour tracking, demonstrate precision errors of less than 1%. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Probing the N = 14 subshell closure: g factor of the ^{26}Mg(2_1^+) state
NASA Astrophysics Data System (ADS)
McCormick, B. P.; Stuchbery, A. E.; Kibédi, T.; Lane, G. J.; Reed, M. W.; Eriksen, T. K.; Hota, S. S.; Lee, B. Q.; Palalani, N.
2018-04-01
The first-excited-state g factor of ^{26}Mg has been measured relative to the g factor of the ^{24}Mg(2_1^+) state using the high-velocity transient-field technique, giving g = +0.86 ± 0.10. This new measurement is in strong disagreement with the currently adopted value, but in agreement with the sd-shell model using the USDB interaction. The newly measured g factor, along with E(2_1^+) and B(E2) systematics, signals the closure of the νd_{5/2} subshell at N = 14. The possibility that precise g-factor measurements may indicate the onset of neutron pf admixtures in the first-excited states of even-even magnesium isotopes below ^{32}Mg is discussed, and the importance of precise excited-state g-factor measurements on sd-shell nuclei with N ≠ Z for testing shell-model wavefunctions is noted.
First β-ν correlation measurement from the recoil-energy spectrum of Penning-trapped ^{35}Ar ions
NASA Astrophysics Data System (ADS)
Van Gorp, S.; Breitenfeldt, M.; Tandecki, M.; Beck, M.; Finlay, P.; Friedag, P.; Glück, F.; Herlert, A.; Kozlov, V.; Porobic, T.; Soti, G.; Traykov, E.; Wauters, F.; Weinheimer, Ch.; Zákoucký, D.; Severijns, N.
2014-08-01
We demonstrate a novel method to search for physics beyond the standard model by determining the β-ν angular correlation from the recoil-ion energy distribution after β decay of ions stored in a Penning trap. This recoil-ion energy distribution is measured with a retardation spectrometer. The unique combination of the spectrometer with a Penning trap provides a number of advantages, e.g., a high recoil-ion count rate, low sensitivity to the initial position and velocity distribution of the ions, and completely different sources of systematic errors compared with other state-of-the-art experiments. Results of a first measurement with the isotope ^{35}Ar are presented. Although currently of limited precision, we show that a statistical precision of about 0.5% is achievable with this unique method, thereby opening up the possibility of contributing to state-of-the-art searches for exotic currents in weak interactions.
Targeted quantitation of proteins by mass spectrometry.
Liebler, Daniel C; Zimmerman, Lisa J
2013-06-04
Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement.
Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions.
Cairncross, William B; Gresh, Daniel N; Grau, Matt; Cossel, Kevin C; Roussy, Tanya S; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A
2017-10-13
We describe the first precision measurement of the electron's electric dipole moment (d_{e}) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ^{180}Hf^{19}F^{+} in its metastable ^{3}Δ_{1} electronic state, we obtain d_{e}=(0.9±7.7_{stat}±1.7_{syst})×10^{-29} e cm, resulting in an upper bound of |d_{e}|<1.3×10^{-28} e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_{e}|<9.4×10^{-29} e cm [J. Baron et al., New J. Phys. 19, 073029 (2017)NJOPFM1367-263010.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.
NASA Astrophysics Data System (ADS)
Siciliani de Cumis, M.; Eramo, R.; Coluccelli, N.; Galzerano, G.; Laporta, P.; Cancio Pastor, P.
2018-03-01
We investigated a set of nineteen ^{12}C^{16}O_2 transitions of the 2ν_1 + ν_3 ro-vibrational band in the spectral region from 5064 to 5126 cm^{-1} at different pressures, using frequency-comb Vernier spectroscopy. Our spectrometer enabled the systematic acquisition of molecular absorption profiles with high precision. Spectroscopic parameters, namely, transition frequency, linestrength, and self-pressure-broadening coefficient, have been accurately determined by using a global fit procedure. These data are in agreement with theoretical values contained in the HITRAN2016 database [I. E. Gordon et al., J. Quant. Spectrosc. Radiat. Transfer 203, 3-69 (2017)] at the same precision level. A moderate improvement in the line-intensity determinations, by a factor of 1.5 in the best case [the P(10) transition at 5091.6 cm^{-1}], should be noted, pointing to direct-comb Vernier spectroscopy as a suitable tool for spectral intensity calibration.
Evaluation of the accuracy of LF and TV synchronization techniques in China via portable clock.
NASA Astrophysics Data System (ADS)
Miao, Y.-R.; Pan, X.-P.; Song, J.-A.; Bian, Y.-J.; Luo, D.-C.; Zhuang, Q.-X.
The Shanxi, Beijing, and Shanghai observatories cooperated with the U.S. Naval Observatory in two portable clock experiments in 1981 and 1982. A high-performance cesium clock was compared with the 1 pps signals from the master clock, a Loran-C receiver, and a TV Line-6 receiver at the different observatories. Comparison of the experimental results with the predicted time delay between the transmitter and each observatory indicates that the accuracy of the LF synchronization technique in China can reach ±1 μs, with a timing precision of 0.05-0.2 μs at a distance of 2000 km. (It has been shown that there is a systematic error in the Daily Relative Phase Values, Ser. 4 of the U.S. Naval Observatory for the Northwest Pacific Loran-C chain.) For passive CCTV synchronization, timing accuracy is 2 μs or better and daily frequency calibration precision is (2-20)×10^{-13}.
Chen, Ming; Wu, Si; Lu, Haidong D.; Roe, Anna W.
2013-01-01
Interpreting population responses in the primary visual cortex (V1) remains a challenge, especially with the advent of techniques measuring activations of large cortical areas simultaneously with high precision. For successful interpretation, a quantitatively precise model prediction is of great importance. In this study, we investigate how accurately a spatiotemporal filter (STF) model predicts average response profiles to coherently drifting random dot motion obtained by optical imaging of intrinsic signals in V1 of anesthetized macaques. We establish that orientation difference maps, obtained by subtracting responses to orthogonal axes of motion, invert with increasing drift speed, consistent with the motion streak effect. Consistent with perception, the speed at which the map inverts (the critical speed) depends on cortical eccentricity and systematically increases from foveal to parafoveal locations. We report that critical speeds and response maps to drifting motion are reproduced excellently by the STF model. Our study thus suggests that the STF model is quantitatively accurate enough to be used as a first model of choice for interpreting responses obtained with intrinsic imaging methods in V1. We show further that this good quantitative correspondence opens the possibility of inferring otherwise not easily accessible population receptive field properties from responses to complex stimuli, such as drifting random dot motion. PMID:23197457
Severens, J L; Mulder, J; Laheij, R J; Verbeek, A L
2000-07-01
The impact of disease on a person's ability to perform work should be part of an economic evaluation when a societal viewpoint is used for the analysis. This impact is reflected by calculating productivity costs, which are often measured retrospectively. The purpose of our study was to assess the precision and accuracy of a retrospective self-administered questionnaire on sick leave. Employees of a company were asked to indicate the number of days absent from work due to illness during the past 2 weeks, 4 weeks, 2 months, 6 months, and 12 months. The percentage of respondents with an absolute difference of at most 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9 or more days, respectively, between reported and company-registered absence due to illness was determined. In addition, the proportional difference was calculated, and systematic differences were tested with a signed-rank test. Of the reported data, 95% matched the registered data perfectly when the recall period was limited to 2 or 4 weeks. This percentage decreased to 87%, 57%, and 51% for 2 months, 6 months, and 12 months, respectively. The weighted mean proportional differences for the recall periods were 32.9%, 35.2%, 45.3%, 34.9%, and 113.6%, respectively. No systematic positive or negative difference was found between registered and reported sick leave. The results suggest that the recall period for retrospective measurement of sick leave should be limited according to the level of precision appropriate to the subject and purpose of the study. We recommend using a recall period of no more than 2 months.
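The agreement measures described above can be sketched as follows; the day counts are invented, and the simple proportional difference used here is one plausible definition, not necessarily the study's exact weighting.

```python
# Illustrative agreement measures for self-reported vs. company-registered
# sick leave. Data are invented; the paper's exact weighting for the
# "weighted mean proportional difference" is an assumption here.
reported   = [0, 2, 5, 0, 10]   # days reported by each respondent
registered = [0, 2, 3, 1, 8]    # days in the company register

abs_diff = [abs(r, ) if False else abs(r - g) for r, g in zip(reported, registered)]

# Share of respondents whose report matches the register exactly.
exact_match_pct = 100.0 * sum(d == 0 for d in abs_diff) / len(abs_diff)

# Net discrepancy relative to the registered total.
prop_diff_pct = 100.0 * abs(sum(reported) - sum(registered)) / sum(registered)

print(exact_match_pct)          # 40.0
print(round(prop_diff_pct, 1))  # 21.4
```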
High accuracy position response calibration method for a micro-channel plate ion detector
NASA Astrophysics Data System (ADS)
Hong, R.; Leredde, A.; Bagdasarova, Y.; Fléchard, X.; García, A.; Müller, P.; Knecht, A.; Liénard, E.; Kossin, M.; Sternberg, M. G.; Swanson, H. E.; Zumwalt, D. W.
2016-11-01
We have developed a position-response calibration method for a micro-channel plate (MCP) detector with a delay-line anode position readout scheme. Using an in situ calibration mask, an accuracy of 8 μm and a resolution of 85 μm (FWHM) have been achieved for MeV-scale α particles and for ions with energies of ∼10 keV. At this level of accuracy, the difference between the MCP position responses to high-energy α particles and to low-energy ions is significant. The improved performance of the MCP detector can find applications in many fields of AMO and nuclear physics. In our case, it helps reduce systematic uncertainties in a high-precision nuclear β-decay experiment.
The future of RICH detectors through the light of the LHCb RICH
NASA Astrophysics Data System (ADS)
D'Ambrosio, C.; LHCb RICH Collaboration
2017-12-01
The limitations in performance of the present RICH system in the LHCb experiment are given by the natural chromatic dispersion of the gaseous Cherenkov radiator, the aberrations of the optical system and the pixel size of the photon detectors. Moreover, the overall PID performance can be affected by high detector occupancy as the pattern recognition becomes more difficult with high particle multiplicities. This paper shows a way to improve performance by systematically addressing each of the previously mentioned limitations. These ideas are applied in the present and future upgrade phases of the LHCb experiment. Although applied to specific circumstances, they are used as a paradigm on what is achievable in the development and realisation of high precision RICH detectors.
Fabricating binary optics: An overview of binary optics process technology
NASA Technical Reports Server (NTRS)
Stern, Margaret B.
1993-01-01
A review of binary optics processing technology is presented. Pattern replication techniques have been optimized to generate high-quality efficient microoptics in visible and infrared materials. High resolution optical photolithography and precision alignment is used to fabricate maximally efficient fused silica diffractive microlenses at lambda = 633 nm. The degradation in optical efficiency of four-phase-level fused silica microlenses resulting from an intentional 0.35 micron translational error has been systematically measured as a function of lens speed (F/2 - F/60). Novel processes necessary for high sag refractive IR microoptics arrays, including deep anisotropic Si-etching, planarization of deep topography and multilayer resist techniques, are described. Initial results are presented for monolithic integration of photonic and microoptic systems.
Automatically finding relevant citations for clinical guideline development.
Bui, Duy Duc An; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2015-10-01
Literature database search is a crucial step in the development of clinical practice guidelines and systematic reviews. Even in the age of information technology, literature search is still conducted manually, and is therefore costly, slow, and subject to human error. In this research, we sought to improve the traditional search approach using innovative query expansion and citation ranking approaches. We developed a citation retrieval system composed of query expansion and citation ranking methods. The methods are unsupervised and easily integrated over the PubMed search engine. To validate the system, we developed a gold standard consisting of citations that were systematically searched and screened to support the development of cardiovascular clinical practice guidelines. The expansion and ranking methods were evaluated separately and compared with baseline approaches. Compared with the baseline PubMed expansion, the query expansion algorithm improved recall (80.2% vs. 51.5%) with a small loss in precision (0.4% vs. 0.6%). The algorithm could find all citations used to support a larger number of guideline recommendations than the baseline approach (64.5% vs. 37.2%, p<0.001). In addition, the citation ranking approach performed better than PubMed's "most recent" ranking (average precision +6.5%, recall@k +21.1%, p<0.001), PubMed's rank by "relevance" (average precision +6.1%, recall@k +14.8%, p<0.001), and a machine-learning classifier that identifies scientifically sound studies from MEDLINE citations (average precision +4.9%, recall@k +4.2%, p<0.001). Our unsupervised query expansion and ranking techniques are more flexible and effective than PubMed's default search engine behavior and the machine-learning classifier. Automated citation finding is a promising way to augment the traditional literature search. Copyright © 2015 Elsevier Inc. All rights reserved.
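The recall and precision figures quoted in this and several of the surrounding abstracts follow the standard set-based definitions, which can be sketched as follows (the PMIDs are invented placeholders):

```python
# Standard set-based recall and precision, as used to evaluate citation
# retrieval against a gold standard. PMIDs are invented placeholders.
def recall(retrieved: set, relevant: set) -> float:
    """Fraction of the gold-standard citations that were retrieved."""
    return len(retrieved & relevant) / len(relevant)

def precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved citations that are in the gold standard."""
    return len(retrieved & relevant) / len(retrieved)

gold = {"pmid:101", "pmid:102", "pmid:103", "pmid:104"}
hits = {"pmid:101", "pmid:102", "pmid:201", "pmid:202", "pmid:203"}

print(recall(hits, gold))     # 0.5
print(precision(hits, gold))  # 0.4
```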
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellens, N; Farahani, K
2015-06-15
Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications, including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high-field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small-bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error.
Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many preclinical applications including focused drug delivery and thermal therapy. Funding support provided by Philips Healthcare.
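The distinction the study draws between accuracy (mean offset from the intended target) and precision (spread of repeated shots about that mean) can be sketched for 2-D targeting data; the coordinates below are invented, not measured values from the study.

```python
# Accuracy = mean radial offset from the intended target;
# precision = standard deviation of those offsets.
# Coordinates (in mm) are invented for illustration.
import math

targets  = [(0.0, 0.0), (5.0, 5.0), (0.0, 5.0), (5.0, 0.0)]
achieved = [(0.4, 0.1), (5.5, 4.8), (0.6, 5.0), (5.5, 0.1)]

errors = [math.hypot(ax - tx, ay - ty)
          for (tx, ty), (ax, ay) in zip(targets, achieved)]

accuracy = sum(errors) / len(errors)  # mean offset, mm
precision = math.sqrt(sum((e - accuracy) ** 2 for e in errors) / len(errors))

print(round(accuracy, 2))  # 0.52
```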
NASA Astrophysics Data System (ADS)
Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.
2015-12-01
The full resolution of the shallow-water equations for modeling flash floods may have a high computational cost, so most flood-forecasting software uses a simplification of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models with non-physical free parameters. These approximations save a great deal of computational time while sacrificing, in an unquantified way, the precision of the simulations. To reduce drastically the cost of such 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk [1], which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme, which is second order in time and space, and treats the friction and rain terms implicitly in a finite-volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK) in July 2007, as shown in Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed, and the criterion with the best computational time/precision ratio is proposed. Finally, we present the power law giving the computational time with respect to the maximum resolution, and show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
What implementation interventions increase cancer screening rates? a systematic review
2011-01-01
Background: Appropriate screening may reduce the mortality and morbidity of colorectal, breast, and cervical cancers. However, effective implementation strategies are warranted if the full benefits of screening are to be realized. As part of a larger agenda to create an implementation guideline, we conducted a systematic review to evaluate interventions designed to increase the rate of breast, cervical, and colorectal cancer (CRC) screening. The interventions considered were: client reminders, client incentives, mass media, small media, group education, one-on-one education, reduction in structural barriers, reduction in out-of-pocket costs, provider assessment and feedback interventions, and provider incentives. Our primary outcome, screening completion, was calculated as the overall median post-intervention absolute percentage point (PP) change in completed screening tests. Methods: Our first step was to conduct an iterative scoping review in the research area. This yielded three relevant high-quality systematic reviews. With these serving as our evidentiary foundation, we conducted a formal update. Randomized controlled trials and cluster randomized controlled trials published between 2004 and 2010 were searched in MEDLINE, EMBASE, and PsycINFO. Results: The update yielded 66 new eligible studies with 74 comparisons. The new studies ranged considerably in quality. Client reminders, small media, and provider audit and feedback appear to be effective interventions to increase the uptake of screening for the three cancers. One-on-one education and reduction of structural barriers also appear effective, but their roles in CRC and cervical screening, respectively, are less established. More study is required to assess client incentives, mass media, group education, reduction of out-of-pocket costs, and provider incentive interventions. Conclusion: The new evidence generally aligns with the evidence and conclusions from the original systematic reviews.
This review served as the evidentiary foundation for an implementation guideline. Poor reporting, lack of precision and consistency in defining operational elements, and insufficient consideration of context and differences among populations are areas for additional research. PMID:21958556
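The review's primary outcome, the median post-intervention absolute percentage-point (PP) change in completed screening tests, can be sketched with invented per-study values:

```python
# Median absolute percentage-point (PP) change in completed screening
# tests across studies of one intervention type. Values are invented.
from statistics import median

pp_changes = [3.2, 7.5, 1.1, 12.4, 6.0]  # per-study PP change
print(median(pp_changes))  # 6.0
```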
Local band gap measurements by VEELS of thin film solar cells.
Keller, Debora; Buecheler, Stephan; Reinhard, Patrick; Pianezzi, Fabian; Pohl, Darius; Surrey, Alexander; Rellinghaus, Bernd; Erni, Rolf; Tiwari, Ayodhya N
2014-08-01
This work presents a systematic study that evaluates the feasibility and reliability of local band gap measurements of Cu(In,Ga)Se2 thin films by valence electron energy-loss spectroscopy (VEELS). The compositional gradients across the Cu(In,Ga)Se2 layer cause variations in the band gap energy, which are experimentally determined using a monochromated scanning transmission electron microscope (STEM). The results reveal the expected band gap variation across the Cu(In,Ga)Se2 layer and therefore confirm the feasibility of local band gap measurements of Cu(In,Ga)Se2 by VEELS. The precision and accuracy of the results are discussed based on an analysis of individual error sources, which leads to the conclusion that the precision of our measurements is most limited by the acquisition reproducibility, provided the signal-to-noise ratio of the spectrum is high enough. Furthermore, we simulate the impact of radiation losses on the measured band gap value and propose a thickness-dependent correction. In future work, band gap variations will be measured on a more localized length scale to investigate, e.g., the influence of chemical inhomogeneities and dopant accumulation at grain boundaries.
Średnicka-Tober, Dominika; Barański, Marcin; Seal, Chris; Sanderson, Roy; Benbrook, Charles; Steinshamn, Håvard; Gromadzka-Ostrowska, Joanna; Rembiałkowska, Ewa; Skwarło-Sońta, Krystyna; Eyre, Mick; Cozzi, Giulio; Krogh Larsen, Mette; Jordon, Teresa; Niggli, Urs; Sakowski, Tomasz; Calder, Philip C; Burdge, Graham C; Sotiraki, Smaragda; Stefanakis, Alexandros; Yolcu, Halil; Stergiadis, Sokratis; Chatzidimitriou, Eleni; Butler, Gillian; Stewart, Gavin; Leifert, Carlo
2016-03-28
Demand for organic meat is partially driven by consumer perceptions that organic foods are more nutritious than non-organic foods. However, there have been no systematic reviews comparing specifically the nutrient content of organic and conventionally produced meat. In this study, we report results of a meta-analysis based on sixty-seven published studies comparing the composition of organic and non-organic meat products. For many nutritionally relevant compounds (e.g. minerals, antioxidants and most individual fatty acids (FA)), the evidence base was too weak for meaningful meta-analyses. However, significant differences in FA profiles were detected when data from all livestock species were pooled. Concentrations of SFA and MUFA were similar or slightly lower, respectively, in organic compared with conventional meat. Larger differences were detected for total PUFA and n-3 PUFA, which were an estimated 23 (95 % CI 11, 35) % and 47 (95 % CI 10, 84) % higher in organic meat, respectively. However, for these and many other composition parameters, for which meta-analyses found significant differences, heterogeneity was high, and this could be explained by differences between animal species/meat types. Evidence from controlled experimental studies indicates that the high grazing/forage-based diets prescribed under organic farming standards may be the main reason for differences in FA profiles. Further studies are required to enable meta-analyses for a wider range of parameters (e.g. antioxidant, vitamin and mineral concentrations) and to improve both precision and consistency of results for FA profiles for all species. Potential impacts of composition differences on human health are discussed.
Measurements of the vector boson production with the ATLAS detector
NASA Astrophysics Data System (ADS)
Lapertosa, A.
2018-01-01
Measurements of the Drell-Yan production of W and Z bosons at the LHC provide a benchmark of our understanding of perturbative QCD and probe the proton structure in a unique way. The ATLAS collaboration has performed new high-precision measurements at a center-of-mass energy of 7 TeV. The measurements are performed for W+, W-, and Z bosons, both integrated and as a function of the boson or lepton rapidity and the Z mass. Unprecedented precision is reached, and strong constraints on parton distribution functions, in particular on the strange-quark density, are found. Z boson cross sections are also measured at center-of-mass energies of 8 TeV and 13 TeV, and cross-section ratios to top-quark pair production have been derived. This ratio measurement leads to a cancellation of systematic effects and allows for a high-precision comparison to theory predictions. The production of jets in association with vector bosons is a further important process for studying perturbative QCD in a multi-scale environment. The ATLAS collaboration has performed new measurements of Z boson plus jets cross sections, differential in several kinematic variables, in proton-proton collision data taken at a center-of-mass energy of 13 TeV. The measurements are compared to state-of-the-art theory predictions. They are sensitive to higher-order pQCD effects, probe flavour and mass schemes, and can be used to constrain the proton structure. In addition, a new measurement of the splitting scales of the kt jet-clustering algorithm for final states containing a Z boson candidate at a center-of-mass energy of 8 TeV is presented.
Crossover from Incoherent to Coherent Phonon Scattering in Epitaxial Oxide Superlattices
2013-12-08
…as a function of interface density. We do so by synthesizing superlattices of electrically insulating perovskite oxides and systematically varying the interface density, with unit-cell precision, using two…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoang, André H.; Lepenik, Christopher; Preisser, Moritz
2017-09-20
Here, we provide a systematic renormalization group formalism for the mass effects in the relation of the pole mass m_Q^{pole} and short-distance masses such as the \overline{MS} mass \overline{m}_Q of a heavy quark Q, coming from virtual loop insertions of massive quarks lighter than Q. The formalism reflects the constraints from heavy quark symmetry and entails a combined matching and evolution procedure that allows one to disentangle and successively integrate out the corrections coming from the lighter massive quarks and the momentum regions between them, and to precisely control the large-order asymptotic behavior. With the formalism we systematically sum logarithms of ratios of the lighter quark masses and m_Q, relate the QCD corrections for different external heavy quarks to each other, predict the O(α_s^4) virtual quark mass corrections in the pole-\overline{MS} mass relation, calculate the pole mass differences for the top, bottom, and charm quarks with a precision of around 20 MeV, and analyze the decoupling of the lighter massive quark flavors at large orders. The summation of logarithms is most relevant for the top quark pole mass m_t^{pole}, where the hierarchy to the bottom and charm quarks is large. We determine the ambiguity of the pole mass for top, bottom, and charm quarks in different scenarios with massive or massless bottom and charm quarks in a way consistent with heavy quark symmetry, and we find that it is 250 MeV. The ambiguity is larger than current projections for the precision of top quark mass measurements in the high-luminosity phase of the LHC.
Sensitivity and systematics of calorimetric neutrino mass experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nucciotti, A.; Cremonesi, O.; Ferri, E.
2009-12-16
A large calorimetric neutrino mass experiment using thermal detectors is expected to play a crucial role in the challenge of directly assessing the neutrino mass. We discuss and compare here two approaches for the estimation of the experimental sensitivity of such an experiment. The first method uses an analytic formulation and readily yields a close estimate over a wide range of experimental configurations. The second method is based on a Monte Carlo technique and is more precise and reliable. The Monte Carlo approach is then exploited to study some sources of systematic uncertainty peculiar to calorimetric experiments. Finally, the tools are applied to investigate the optimal experimental configuration of the MARE project.
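The two estimation routes contrasted above (a quick analytic formula versus a slower but more reliable Monte Carlo) can be illustrated on a generic counting experiment. Everything below, including the Poisson model and the expected count `lam`, is an invented toy for illustration, not the MARE configuration:

```python
import numpy as np

def analytic_sigma(expected_counts):
    """Analytic (Poisson) estimate of the statistical uncertainty."""
    return np.sqrt(expected_counts)

def monte_carlo_sigma(expected_counts, n_trials=20_000, seed=0):
    """Monte Carlo estimate: simulate many pseudo-experiments and
    take the spread of the observed counts."""
    rng = np.random.default_rng(seed)
    draws = rng.poisson(expected_counts, size=n_trials)
    return draws.std()

lam = 10_000.0                      # expected counts (illustrative)
sig_analytic = analytic_sigma(lam)  # = 100.0 exactly
sig_mc = monte_carlo_sigma(lam)     # ≈ 100, with Monte Carlo scatter
```

The analytic route is instantaneous but only as good as its assumptions; the Monte Carlo route converges to the true spread at the cost of many pseudo-experiments, which is why the abstract reserves it for the systematic studies.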
Marks, Lianna J.; Munube, Deogratias; Kasirye, Philip; Mupere, Ezekiel; Jin, Zhezhen; LaRussa, Philip; Idro, Richard; Green, Nancy S.
2018-01-01
Objectives. The prevalence of stroke among children with sickle cell disease (SCD) in sub-Saharan Africa was systematically reviewed. Methods. Comprehensive searches of PubMed, Embase, and Web of Science were performed for articles published between 1980 and 2016 (English or French) reporting stroke prevalence. Using preselected inclusion criteria, titles and abstracts were screened and full-text articles were reviewed. Results. Ten full-text articles met selection criteria. Cross-sectional clinic-based data reported 2.9% to 16.9% stroke prevalence among children with SCD. Using available sickle gene frequencies by country, estimated pediatric mortality, and fixed- and random-effects models, the number of affected individuals is projected as 29 800 (95% confidence interval = 25 571-34 027) and 59 732 (37 004-82 460), respectively. Conclusion. Systematic review enabled the estimation of the number of children with SCD stroke in sub-Saharan Africa. High disease mortality, inaccurate diagnosis, and regional variability of risk hamper more precise estimates. Adopting standardized stroke assessments may provide more accurate determination of numbers affected to inform preventive interventions. PMID:29785408
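The fixed- and random-effects pooling behind projections like the one above can be sketched with standard inverse-variance weighting and the DerSimonian-Laird between-study variance estimator. The input estimates and variances below are invented for illustration, not the review's data:

```python
def fixed_effect(estimates, variances):
    # Inverse-variance weighted pooled estimate and its variance.
    w = [1.0 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, estimates)) / sum(w)
    return pooled, 1.0 / sum(w)

def random_effects_dl(estimates, variances):
    # DerSimonian-Laird: widen the weights by the between-study
    # variance tau^2 estimated from Cochran's Q statistic.
    w = [1.0 / v for v in variances]
    fe, _ = fixed_effect(estimates, variances)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_star, estimates)) / sum(w_star)
    return pooled, 1.0 / sum(w_star)

# Toy prevalence estimates (%) with within-study variances
y = [2.9, 8.0, 16.9]
v = [0.5, 0.8, 1.2]
fe, fe_var = fixed_effect(y, v)
re, re_var = random_effects_dl(y, v)
```

With heterogeneous inputs like these, the random-effects variance `re_var` exceeds `fe_var`, which is why the two models in the abstract give such different projected totals.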
Williams, S.J.; Penland, S.; Sallenger, A.H.; McBride, R.A.; Kindlinger, J.L.
1991-01-01
A study of the barrier islands and wetlands in the deltaic plain of Louisiana is presented. Its purpose was to document rapid changes and to learn more about the processes responsible and the geologic framework within which they operate. It included systematic collection and analysis of precision nearshore hydrographic data, high resolution seismic profiles, surface sediment samples, continuous vibracores, digital shoreline plots, records of storm overwash events, and analysis of tide gage records to quantify the rise in relative sea level. Results from these studies demonstrate that deltaic progradation, river channel switching, and subsequent rapid erosion accompanying the marine transgression are regular and predictable events along the Mississippi River delta plain and will likely continue in the future. Mitigation measures, such as shoreline nourishment and barrier restoration, that mimic the natural processes may slow the land loss.
A Color-locus Method for Mapping R_V Using Ensembles of Stars
NASA Astrophysics Data System (ADS)
Lee, Albert; Green, Gregory M.; Schlafly, Edward F.; Finkbeiner, Douglas P.; Burgett, William; Chambers, Ken; Flewelling, Heather; Hodapp, Klaus; Kaiser, Nick; Kudritzki, Rolf-Peter; Magnier, Eugene; Metcalfe, Nigel; Wainscoat, Richard; Waters, Christopher
2018-02-01
We present a simple but effective technique for measuring angular variation in R_V across the sky. We divide stars from the Pan-STARRS1 catalog into HEALPix pixels and determine the posterior distribution of reddening and R_V for each pixel using two independent Monte Carlo methods. We find the two methods to be self-consistent in the limits where they are expected to perform similarly. We also find some agreement with high-precision photometric studies of R_V in Perseus and Ophiuchus, as well as with a map of reddening near the Galactic plane based on stellar spectra from APOGEE. While current studies of R_V are mostly limited to isolated clouds, we have developed a systematic method for comparing R_V values for the majority of observable dust. This is a proof of concept for a more rigorous Galactic reddening map.
Lunny, Carole; McKenzie, Joanne E; McDonald, Steve
2016-06-01
Locating overviews of systematic reviews is difficult because of an absence of appropriate indexing terms and inconsistent terminology used to describe overviews. Our objective was to develop a validated search strategy to retrieve overviews in MEDLINE. We derived a test set of overviews from the references of two method articles on overviews. Two population sets were used to identify discriminating terms, that is, terms that appear frequently in the test set but infrequently in two population sets of references found in MEDLINE. We used text mining to conduct a frequency analysis of terms appearing in the titles and abstracts. Candidate terms were combined and tested in MEDLINE in various permutations, and the performance of strategies measured using sensitivity and precision. Two search strategies were developed: a sensitivity-maximizing strategy, achieving 93% sensitivity (95% confidence interval [CI]: 87, 96) and 7% precision (95% CI: 6, 8), and a sensitivity-and-precision-maximizing strategy, achieving 66% sensitivity (95% CI: 58, 74) and 21% precision (95% CI: 17, 25). The developed search strategies enable users to more efficiently identify overviews of reviews compared to current strategies. Consistent language in describing overviews would aid in their identification, as would a specific MEDLINE Publication Type. Copyright © 2015 Elsevier Inc. All rights reserved.
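Sensitivity and precision as used in the search-filter study above are simple proportions over a test set of known relevant records. A minimal sketch, with counts invented to roughly match the reported sensitivity-maximizing strategy (93% sensitivity, 7% precision), not taken from the paper:

```python
def sensitivity(relevant_retrieved, total_relevant):
    # Fraction of the known relevant records the strategy finds (recall).
    return relevant_retrieved / total_relevant

def precision(relevant_retrieved, total_retrieved):
    # Fraction of the retrieved records that are actually relevant.
    return relevant_retrieved / total_retrieved

# Illustrative counts: 93 of a 100-record test set retrieved,
# buried within 1329 total hits returned by the filter.
sens = sensitivity(93, 100)    # 0.93
prec = precision(93, 1329)     # ≈ 0.07
```

The trade-off the abstract describes falls straight out of these definitions: broadening the query can only raise `relevant_retrieved` toward `total_relevant`, but it usually raises `total_retrieved` much faster, driving precision down.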
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10^-12 rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10^-12 rad/sec.
NASA Astrophysics Data System (ADS)
Dong, Yibo; Xie, Yiyang; Xu, Chen; Li, Xuejian; Deng, Jun; Fan, Xing; Pan, Guanzhong; Wang, Qiuhua; Xiong, Fangzhu; Fu, Yafei; Sun, Jie
2018-02-01
A method for producing large-area continuous graphene directly on SiO2 by chemical vapor deposition is systematically developed. Cu thin-film catalysts are sputtered onto the SiO2 and pre-patterned. During graphene deposition, the high temperature induces evaporation and balling of the Cu, and the graphene "lands onto" the SiO2. Because of the high heating and growth rate, the continuous graphene is largely completed before the Cu evaporation and balling. A Cu thickness of 60 nm is identified as optimal for successful graphene growth with μm-scale feature sizes in the graphene. An all-carbon device is demonstrated based on this technique.
Time-resolved x-ray spectra from laser-generated high-density plasmas
NASA Astrophysics Data System (ADS)
Andiel, U.; Eidmann, Klaus; Witte, Klaus-Juergen
2001-04-01
We focused frequency-doubled ultrashort laser pulses on solid C, F, Na and Al targets; the K-shell emission was systematically investigated by time-resolved spectroscopy using a sub-ps streak camera. A large number of laser shots can be accumulated when triggering the camera with an Auston switch system at very high temporal precision. The system provides an outstanding time resolution of 1.7 ps while accumulating thousands of laser shots. The duration of the He-α K-shell resonance lines was observed to be in the range of 2-4 ps and decreases with atomic number. The experimental results are well reproduced by hydrodynamic code simulations post-processed with an atomic kinetics code.
Liu, Wanpeng; Zhou, Zhitao; Zhang, Shaoqing; Shi, Zhifeng; Tabarini, Justin; Lee, Woonsoo; Zhang, Yeshun; Gilbert Corder, S. N.; Li, Xinxin; Dong, Fei; Cheng, Liang; Liu, Mengkun; Kaplan, David L.; Omenetto, Fiorenzo G.
2017-01-01
Precise patterning of biomaterials has widespread applications, including drug release, degradable implants, tissue engineering, and regenerative medicine. Patterning of protein-based microstructures using UV photolithography has been demonstrated using protein as the resist material. The Achilles heel of existing protein-based biophotoresists is the inevitably wide molecular weight distribution produced during the protein extraction/regeneration process, hindering their practical use in the semiconductor industry, where reliability and repeatability are paramount. A wafer-scale, high-resolution patterning of bio-microstructures using well-defined silk fibroin light chain as the resist material is presented, showing unprecedented performance. The lithographic and etching performance of silk fibroin light chain resists is evaluated systematically and the underlying mechanisms are thoroughly discussed. The micropatterned silk structures are tested as cellular substrates for the successful spatial guidance of fetal neural stem cells seeded on the patterned substrates. The enhanced patterning resolution, the improved etch resistance, and the inherent biocompatibility of such a protein-based photoresist provide new opportunities for fabricating large-scale biocompatible functional microstructures. PMID:28932678
Zhou, Hui-Ting; Chen, Hsin-Chang; Ding, Wang-Hsien
2018-02-20
An analytical method that utilizes isotope-dilution ultrahigh-performance liquid chromatography coupled with hybrid quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS, also referred to as UHPLC-HRMS) was developed and validated to be highly precise and accurate for the detection of nine parabens (methyl-, ethyl-, propyl-, isopropyl-, butyl-, isobutyl-, pentyl-, hexyl-, and benzyl-parabens) in human urine samples. After sample preparation by ultrasound-assisted emulsification microextraction (USAEME), the extract was directly injected into the UHPLC-HRMS system. By using negative electrospray ionization in the multiple reaction monitoring (MRM) mode and measuring the peak area ratios of both the natural and the labeled analogues in the samples and calibration standards, the target analytes could be accurately identified and quantified. Another use for the labeled analogues was to correct for systematic errors associated with the analysis, such as the matrix effect and other variations. The limits of quantitation (LOQs) ranged from 0.3 to 0.6 ng/mL. High precision was obtained for both repeatability and reproducibility, ranging from 1 to 8%. High trueness (mean extraction recovery, also called accuracy) ranged from 93 to 107% at two concentration levels. According to preliminary results, the total concentrations of the four most frequently detected parabens (methyl-, ethyl-, propyl- and butyl-) ranged from 0.5 to 79.1 ng/mL in male urine samples, and from 17 to 237 ng/mL in female urine samples. Interestingly, the two infrequently detected pentyl- and hexyl-parabens were found in one of the male samples in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
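Isotope-dilution quantification as described above rests on the analyte-to-labeled-analogue peak-area ratio being linear in concentration: the ratio is calibrated against standards and then inverted for unknowns, which cancels matrix and injection variability. A minimal sketch with an invented, perfectly linear calibration series (not the paper's data):

```python
def fit_line(x, y):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def quantify(area_ratio, slope, intercept):
    # Invert the calibration line to recover the concentration.
    return (area_ratio - intercept) / slope

# Calibration standards: concentration (ng/mL) vs. analyte/label area ratio
conc = [1.0, 2.0, 5.0, 10.0]
ratio = [0.10, 0.20, 0.50, 1.00]          # toy linear response
slope, intercept = fit_line(conc, ratio)
unknown = quantify(0.50, slope, intercept)  # → 5.0 ng/mL
```

Because the labeled analogue experiences the same extraction losses and ion suppression as the analyte, the ratio-based calibration is what corrects the "systematic errors" the abstract mentions.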
Bahar, Muh Akbar; Setiawan, Didik; Hak, Eelko; Wilffert, Bob
2017-05-01
Currently, most guidelines on drug-drug interaction (DDI) neither consider the potential effect of genetic polymorphism in the strength of the interaction nor do they account for the complex interaction caused by the combination of DDI and drug-gene interaction (DGI) where there are multiple biotransformation pathways, which is referred to as drug-drug-gene interaction (DDGI). In this systematic review, we report the impact of pharmacogenetics on DDI and DDGI in which three major drug-metabolizing enzymes - CYP2C9, CYP2C19 and CYP2D6 - are central. We observed that several DDI and DDGI are highly gene-dependent, leading to a different magnitude of interaction. Precision drug therapy should take pharmacogenetics into account when drug interactions in clinical practice are expected.
Boaz, Annette; Baeza, Juan; Fraser, Alec
2011-06-22
The gap between research findings and clinical practice is well documented, and a range of interventions has been developed to increase the implementation of research into clinical practice. We conducted a review of systematic reviews of the effectiveness of interventions designed to increase the use of research in clinical practice. A search for relevant systematic reviews was conducted in Medline and the Cochrane Database of Reviews for 1998-2009. Thirteen systematic reviews containing 313 primary studies were included. Four strategy types are identified: audit and feedback; computerised decision support; opinion leaders; and multifaceted interventions. Nine of the reviews reported on multifaceted interventions. This review highlights the small effects of single interventions such as audit and feedback, computerised decision support and opinion leaders. Systematic reviews of multifaceted interventions claim an improvement in effectiveness over single interventions, with effect sizes ranging from small to moderate. This review found that a number of published systematic reviews fail to state whether the recommended practice change is based on the best available research evidence. This overview of systematic reviews updates the body of knowledge relating to the effectiveness of key mechanisms for improving clinical practice and service development. Multifaceted interventions are more likely to improve practice than single interventions such as audit and feedback. This review identified a small literature focusing explicitly on getting research evidence into clinical practice. It emphasizes the importance of ensuring that primary studies and systematic reviews are precise about the extent to which the reported interventions focus on changing practice based on research evidence (as opposed to other information codified in guidelines and education materials).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu
2015-12-14
The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both the cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.
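Of the controlled approximations named above, the time-step error is conventionally removed by computing the energy at several finite time steps and extrapolating to zero. A generic sketch of that extrapolation with fabricated energies following an assumed linear bias (not the ZnSe/ZnO results):

```python
import numpy as np

def extrapolate_to_zero_timestep(dts, energies):
    # Fit E(dt) = E0 + k * dt and report the dt -> 0 intercept E0.
    k, e0 = np.polyfit(dts, energies, 1)
    return e0

dts = [0.02, 0.01, 0.005]             # time steps, illustrative units
energies = [-10.90, -10.95, -10.975]  # toy data: E = -11.0 + 5 * dt
e0 = extrapolate_to_zero_timestep(dts, energies)   # → -11.0
```

The same fit-and-extrapolate pattern applies to the supercell finite-size effect (fitting versus 1/N of the simulation cell), whereas the "uncontrolled" approximations listed in the abstract have no such tunable parameter to extrapolate away.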
A systems biology approach to study systemic inflammation.
Chen, Bor-Sen; Wu, Chia-Chou
2014-01-01
Systemic inflammation requires precise control of the sequence and magnitude of the events that occur. High-throughput data on host-pathogen interactions give us an opportunity to glimpse systemic inflammation. In this article, a dynamic Candida albicans-zebrafish interactive infectious network is built as an example to demonstrate how a systems biology approach can be used to study systemic inflammation. In particular, based on microarray data of C. albicans and zebrafish during infection, the hyphal growth, zebrafish, and host-pathogen intercellular PPI networks were combined to form an integrated infectious PPI network that helps us understand the systemic mechanisms underlying the pathogenicity of C. albicans and the immune response of the host. The signaling pathways for morphogenesis and hyphal growth of C. albicans were 2 significant interactions found in the intercellular PPI network. Two cellular networks were also developed corresponding to the different infection stages (adhesion and invasion), and then compared with each other to identify proteins that give more insight into the pathogenic role of hyphal development in the C. albicans infection process. Important defense-related proteins in zebrafish were predicted using the same approach. This integrated network, consisting of intercellular invasion and cellular defense processes during infection, can improve medical therapies and facilitate the development of new antifungal drugs.
Occupational therapy and return to work: a systematic literature review
2011-01-01
Background The primary aim of this review study was to gather evidence on the effectiveness in terms of return to work (RTW) of occupational therapy interventions (OTIs) in rehabilitation patients with non-congenital disorders. A secondary aim was to be able to select the most efficient OTI. Methods A systematic literature review of peer-reviewed papers was conducted using electronic databases (Cinahl, Cochrane Library, Ebsco, Medline (Pubmed), and PsycInfo). The search focussed on randomised controlled trials and cohort studies published in English from 1980 until September 2010. Scientific validity of the studies was assessed. Results Starting from 1532 papers with pertinent titles, six studies met the quality criteria. Results show systematic reviewing of OTIs on RTW was challenging due to varying populations, different outcome measures, and poor descriptions of methodology. There is evidence that OTIs as part of rehabilitation programs, increase RTW rates, although the methodological evidence of most studies is weak. Conclusions Analysis of the selected papers indicated that OTIs positively influence RTW; two studies described precisely what the content of their OTI was. In order to identify the added value of OTIs on RTW, studies with well-defined OT intervention protocols are necessary. PMID:21810228
NASA Astrophysics Data System (ADS)
Zhou, Yifan; Apai, Dániel; Lew, Ben W. P.; Schneider, Glenn
2017-06-01
The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near photon noise limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
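The ramp systematic and its model-based correction can be caricatured with a single-timescale charge-trapping picture in which the measured flux recovers exponentially toward the true flux as the traps fill. This is a simplified sketch; the `depth` and `tau` values are invented, not the fitted WFC3 detector parameters from the paper:

```python
import math

def ramp_profile(t, depth, tau):
    # Fraction of the true flux measured at time t after exposures start:
    # newly generated charge feeds the empty traps, and the resulting
    # flux deficit decays away exponentially as the traps saturate.
    return 1.0 - depth * math.exp(-t / tau)

def correct_ramp(times, counts, depth, tau):
    # Divide out the model profile to recover a flat light curve.
    return [c / ramp_profile(t, depth, tau) for t, c in zip(times, counts)]

true_flux = 1000.0
depth, tau = 0.01, 300.0            # illustrative trap parameters (s)
times = [float(t) for t in range(0, 2400, 100)]
observed = [true_flux * ramp_profile(t, depth, tau) for t in times]
corrected = correct_ramp(times, observed, depth, tau)   # flat at 1000.0
```

A physically motivated model like this is what lets the first orbit of a visit be corrected rather than discarded: the ramp shape is predicted from detector parameters instead of being fit empirically per visit.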
Desikan, Prabha; Khan, Zeba
2017-01-01
Hepatitis B virus (HBV) and hepatitis C virus (HCV) share several important features, including worldwide distribution, hepatotropism, similar modes of transmission and the ability to induce chronic infection that may lead to liver cirrhosis and hepatocellular carcinoma. Since both viruses are individually known to cause the pathologies mentioned above, co-infection with both HBV and HCV would be expected to be linked with higher morbidity and mortality and to affect healthcare resource utilisation. A precise estimate of the prevalence of HBV/HCV co-infection is needed to formulate policy decisions and plan community health interventions. This systematic review and meta-analysis, therefore, aims to estimate the prevalence of HBV and HCV co-infection in India based on the available literature. Following PRISMA guidelines, primary studies reporting the prevalence of HBV/HCV co-infection in India were retrieved through searches conducted in PubMed, Google Scholar, Medline, the Cochrane Library, WHO reports, and Indian and international journals online. All online searches were conducted between December 2016 and February 2017. Meta-analysis was carried out using StatsDirect statistical software. Thirty studies published between 2000 and 2016, conducted across six regions of India, were included in this review. The pooled HBV/HCV co-infection prevalence rate across the thirty studies was 1.89% (95% confidence interval [CI] = 1.2%-2.4%). High heterogeneity was observed between prevalence estimates. The HBV/HCV co-infection prevalence in different subgroups varied from 0.02% (95% CI = 0.0019%-0.090%) to 3.2% (95% CI = 1.3%-5.9%). The pooled prevalence of HBV/HCV co-infection in India was found to be 1.89%. This systematic review and meta-analysis revealed a high prevalence of HBV/HCV co-infection in chronic liver disease patients, followed by HIV-positive patients, then persons who inject drugs and kidney disease patients.
A new detector at RHIC, sPHENIX goals and status
NASA Astrophysics Data System (ADS)
Reed, Rosi
2017-01-01
The study of heavy-ion collisions, which can create a new form of matter called the Quark Gluon Plasma (QGP), a nearly ideal strongly interacting fluid in which quarks and gluons are no longer confined into nucleons, is at the frontier of QCD studies. The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Lab (BNL) has had a long and successful program of QGP study since 2000, with many upgrades that have increased the delivered luminosity considerably in the last decade. The sPHENIX proposal is for a second-generation experiment at RHIC, which will take advantage of the increased luminosity and allow measurements of jets, jet correlations and Upsilons (ϒs) with a kinematic reach that will overlap with measurements made at the Large Hadron Collider (LHC). Complementary measurements at RHIC and at the LHC probe the QGP at different temperatures and densities, which are necessary to determine the temperature dependence of the transport coefficients of the QGP. The sPHENIX detector will have large-acceptance electromagnetic and hadronic calorimetry, as well as precision tracking and the high rate capability necessary for precision jet and ϒ observables. The experiment will enable a program of systematic measurements at RHIC, with a detector capable of acquiring a large sample of events in p+p, p+A, and A+A collisions. This proceedings contribution outlines the key measurements enabled by the new detector and the status of the project itself.
[Robot-assisted minimally invasive lobectomy with systematic lymphadenectomy for lung cancer].
Egberts, J-H; Schlemminger, M; Schafmayer, C; Dohrmann, P; Becker, T
2015-02-01
Lobectomy is the standard therapy for lung cancer in limited stages. The adoption of minimally invasive lobectomy (video-assisted thoracic surgery, or VATS, lobectomy) has increased worldwide since its first description more than 15 years ago. However, the VATS technique has a long learning curve and sometimes limitations in terms of precise preparation and exposure of the central structures of the lung hilum due to the limited mobility of standard thoracoscopic instruments. By using a four-arm robotic platform (DaVinci®), not only the preparation of the hilar structures but also the central lymphadenectomy can be performed in a comfortable and safe way under a clear and precise view. Surgical treatment of locally limited lung cancer in the right lower lobe (squamous cell carcinoma). Robot-assisted, minimally invasive right lower lobectomy with systematic lymphadenectomy. Robot-assisted minimally invasive lobectomy is feasible with special regard to oncological and technical aspects. In particular, the precise intrathoracic dissection of the tissue under a perfect view allows a comfortable and safe operating technique. Georg Thieme Verlag KG Stuttgart · New York.
Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombo, L. P. L.; Combet, C.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dunkley, J.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Gerbino, M.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hamann, J.; Hansen, F. K.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Holmes, W. A.; Hornstrup, A.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Lilley, M.; Linden-Vørnle, M.; Lindholm, V.; López-Caniego, M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Meinhold, P. R.; Melchiorri, A.; Migliaccio, M.; Millea, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Mottet, S.; Munshi, D.; Murphy, J. A.; Narimani, A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rouillé d'Orfeuil, B.; Rubiño-Martín, J. A.; Rusholme, B.; Salvati, L.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Serra, P.; Spencer, L. D.; Spinelli, M.; Stolyarov, V.; Stompor, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
This paper presents the Planck 2015 likelihoods, statistical descriptions of the two-point correlation functions of the cosmic microwave background (CMB) temperature and polarization fluctuations that account for relevant uncertainties, both instrumental and astrophysical in nature. They are based on the same hybrid approach used for the previous release, i.e., a pixel-based likelihood at low multipoles (ℓ < 30) and a Gaussian approximation to the distribution of cross-power spectra at higher multipoles. The main improvements are the use of more and better processed data and of Planck polarization information, along with more detailed models of foregrounds and instrumental uncertainties. The increased redundancy brought by more than doubling the amount of data analysed enables further consistency checks and enhanced immunity to systematic effects. It also improves the constraining power of Planck, in particular with regard to small-scale foreground properties. Progress in the modelling of foreground emission enables the retention of a larger fraction of the sky to determine the properties of the CMB, which also contributes to the enhanced precision of the spectra. Improvements in data processing and instrumental modelling further reduce uncertainties. Extensive tests establish the robustness and accuracy of the likelihood results, from temperature alone, from polarization alone, and from their combination. For temperature, we also perform a full likelihood analysis of realistic end-to-end simulations of the instrumental response to the sky, which were fed into the actual data processing pipeline; this does not reveal biases from residual low-level instrumental systematics. Even with the increase in precision and robustness, the ΛCDM cosmological model continues to offer a very good fit to the Planck data. The slope of the primordial scalar fluctuations, ns, is confirmed smaller than unity at more than 5σ from Planck alone.
We further validate the robustness of the likelihood results against specific extensions to the baseline cosmology, which are particularly sensitive to data at high multipoles. For instance, the effective number of neutrino species remains compatible with the canonical value of 3.046. For this first detailed analysis of Planck polarization spectra, we concentrate at high multipoles on the E modes, leaving the analysis of the weaker B modes to future work. At low multipoles we use temperature maps at all Planck frequencies along with a subset of polarization data. These data take advantage of Planck's wide frequency coverage to improve the separation of CMB and foreground emission. Within the baseline ΛCDM cosmology we find τ = 0.078 ± 0.019 for the reionization optical depth, which is significantly lower than estimates without the use of high-frequency data for explicit monitoring of dust emission. At high multipoles we detect residual systematic errors in E polarization, typically at the μK² level; we therefore choose to retain temperature information alone for high multipoles as the recommended baseline, in particular for testing non-minimal models. Nevertheless, the high-multipole polarization spectra from Planck are already good enough to enable a separate high-precision determination of the parameters of the ΛCDM model, showing consistency with those established independently from temperature information alone.
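As a schematic illustration of the high-multipole Gaussian approximation mentioned above, the likelihood of a set of band powers reduces to a χ² form. The band-power values and covariance below are invented for illustration; a real Planck analysis uses full covariance matrices and foreground nuisance parameters:

```python
import numpy as np

def gaussian_loglike(cl_data, cl_model, cov):
    """Return -2 ln L for band powers under a Gaussian approximation (schematic)."""
    d = cl_data - cl_model              # data-minus-model residual vector
    return d @ np.linalg.solve(cov, d)  # chi-squared: d^T C^-1 d

# Toy example: three band powers with a diagonal covariance.
cl_hat = np.array([1000.0, 980.0, 1015.0])   # "measured" band powers (invented)
cl_th  = np.array([1000.0, 1000.0, 1000.0])  # model prediction (invented)
cov    = np.diag([100.0, 100.0, 100.0])
print(gaussian_loglike(cl_hat, cl_th, cov))  # ≈ 6.25 = (400 + 225) / 100
```

With a diagonal covariance this is just the familiar sum of squared residuals over variances.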
Methodology for the systematic reviews on an adjacent segment pathology.
Norvell, Daniel C; Dettori, Joseph R; Skelly, Andrea C; Riew, K Daniel; Chapman, Jens R; Anderson, Paul A
2012-10-15
A systematic review. To provide a detailed description of the methods undertaken in the systematic search and analytical summary of adjacent segment pathology (ASP) issues and to describe the process used to develop consensus statements and clinical recommendations regarding factors associated with the prevention and treatment of ASP. We present methods used in conducting the systematic, evidence-based reviews and development of expert panel consensus statements and clinical recommendations on the classification, natural history, risk factors, and treatment of radiographical and clinical ASP. Our intent is that clinicians will combine the information from these reviews with an understanding of their own capacities and experience to better manage patients at risk of ASP and consider future research for the prevention and treatment of ASP. A systematic search and critical review of the English-language literature was undertaken for articles published on the classification, risk, risk factors, and treatment of radiographical and clinical ASP. Articles were screened for relevance using a priori criteria, and relevant articles were critically reviewed. Whether an article was included for review depended on whether the study question was descriptive, one of therapy, or one of prognosis. The strength of evidence for the overall body of literature in each topic area was determined by 2 independent reviewers considering risk of bias, consistency, directness, and precision of results using a modification of the Grades of Recommendation Assessment, Development and Evaluation (GRADE) criteria. Disagreements were resolved by consensus. Findings from articles meeting inclusion criteria were summarized. From these summaries, consensus statements or clinical recommendations were formulated among subject experts through a modified Delphi process using the GRADE approach. 
A total of 3382 articles were identified and screened on 14 topics relating to the classification, risks, risk factors, and treatment of radiographical and clinical ASP. Of these, 127 met our predetermined inclusion criteria and were used to answer specific clinical questions within each topic. Lack of precision in the terminology related to adjacent segment disease and critical evaluation of definitions used across included articles led to a consensus to use ASP and suggest it as a standard. No validated comprehensive classification system for ASP currently exists. The expert panel developed a consensus definition of radiographical and clinical ASP (RASP and CASP). Some of the highlights from the analyses included the annual, 5- and 10-year risks of developing cervical and lumbar ASP after surgery, several important risk factors associated with the development of cervical and lumbar ASP, and the possibility that some motion sparing procedures may be associated with a lower risk of ASP compared with fusion despite kinematic studies demonstrating similar adjacent segment mobility following these procedures. Other highlights included a high risk of proximal junctional kyphosis (PJK) following long fusions for deformity correction, postsurgical malalignment as a potential risk factor for RASP and the paucity of studies on treatment of cervical and lumbar ASP. Systematic reviews were undertaken to understand the classification, risks, risk factors, and treatment of RASP and CASP and to provide consensus statements and clinical recommendations. This article reports the methods used in the reviews.
NASA Astrophysics Data System (ADS)
Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming
2018-03-01
To investigate the effect of abrasive flow polishing on variable-diameter pipe parts, high-precision dispensing needles were taken as the research object and the polishing process was simulated numerically. The distributions of dynamic pressure and turbulence viscosity of the abrasive flow field within the needle were analysed under different volume-fraction conditions. Comparative analysis confirmed the effectiveness of abrasive-grain polishing of high-precision dispensing needles and showed that controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during polishing, so that the polishing quality of the abrasive grains can be controlled.
Long-term impact of precision agriculture on a farmer’s field
USDA-ARS?s Scientific Manuscript database
Targeting management practices and inputs with precision agriculture has high potential to meet some of the grand challenges of sustainability in the coming century. Although potential is high, few studies have documented long-term effects of precision agriculture on crop production and environmenta...
Shared processing of planning articulatory gestures and grasping.
Vainio, L; Tiainen, M; Tiippana, K; Vainio, M
2014-07-01
It has been proposed that articulatory gestures are shaped by tight integration in planning mouth and hand acts. This hypothesis is supported by recent behavioral evidence showing that response selection between the precision and power grip is systematically influenced by simultaneous articulation of a syllable. For example, precision grip responses are performed relatively fast when the syllable articulation employs the tongue tip (e.g., [te]), whereas power grip responses are performed relatively fast when the syllable articulation employs the tongue body (e.g., [ke]). However, this correspondence effect, and other similar effects that demonstrate the interplay between grasping and articulatory gestures, has been found when the grasping is performed during overt articulation. The present study demonstrates that merely reading the syllables silently (Experiment 1) or hearing them (Experiment 2) results in a similar correspondence effect. The results suggest that the correspondence effect is based on integration in planning articulatory gestures and grasping rather than requiring an overt articulation of the syllables. We propose that this effect reflects partially overlapped planning of goal shapes of the two distal effectors: a vocal tract shape for articulation and a hand shape for grasping. In addition, the paper shows a pitch-grip correspondence effect in which the precision grip is associated with a high-pitched vocalization of the auditory stimuli and the power grip is associated with a low-pitched vocalization. The underlying mechanisms of this phenomenon are discussed in relation to the articulation-grip correspondence.
Dreier, Maren; Borutta, Birgit; Stahmeyer, Jona; Krauth, Christian; Walter, Ulla
2010-06-14
HEALTH CARE POLICY BACKGROUND: Findings from scientific studies form the basis for evidence-based health policy decisions. Quality assessments to evaluate the credibility of study results are an essential part of health technology assessment reports and systematic reviews. Quality assessment tools (QAT) for assessing study quality examine to what extent study results are systematically distorted by confounding or bias (internal validity). The tools can be divided into checklists, scales and component ratings. Which QAT are available to assess the quality of interventional studies or studies in the field of health economics, how do they differ from each other, and what conclusions can be drawn from these results for quality assessments? A systematic search of relevant databases from 1988 onwards was carried out, supplemented by screening of the references, of the HTA reports of the German Agency for Health Technology Assessment (DAHTA), and an internet search. The selection of relevant literature, the data extraction and the quality assessment were carried out by two independent reviewers. The substantive elements of the QAT were extracted using a modified criteria list consisting of items and domains specific to randomized trials, observational studies, diagnostic studies, systematic reviews and health economic studies. Based on the number of covered items and domains, more and less comprehensive QAT were distinguished. In order to exchange experiences regarding problems in the practical application of the tools, a workshop was hosted. A total of eight systematic methodological reviews was identified, as well as 147 QAT: 15 for systematic reviews, 80 for randomized trials, 30 for observational studies, 17 for diagnostic studies and 22 for health economic studies. The tools vary considerably with regard to their content, performance and quality of operationalisation.
Some tools include not only items on internal validity but also items on the quality of reporting and external validity. No tool covers all elements or domains. Design-specific generic tools are presented which cover most of the content criteria. The evaluation of QAT by content criteria is difficult, because there is no scientific consensus on the necessary elements of internal validity, and not all of the generally accepted elements are based on empirical evidence. Comparing QAT with regard to contents neglects the operationalisation of the respective parameters, whose quality and precision are important for transparency, replicability, correct assessment and interrater reliability. QAT that mix items on the quality of reporting and internal validity should be avoided. Different design-specific tools are available and can be preferred for quality assessment because of their wider coverage of the substantive elements of internal validity. To minimise the subjectivity of the assessment, tools with a detailed and precise operationalisation of the individual elements should be applied. For health economic studies, tools should be developed and complemented with instructions which define the appropriateness of the criteria. Further research is needed to identify study characteristics that influence the internal validity of studies.
New Advances in Re-Os Geochronology of Organic-rich Sedimentary Rocks.
NASA Astrophysics Data System (ADS)
Creaser, R. A.; Selby, D.; Kendall, B. S.
2003-12-01
Geochronology using 187Re-187Os is applicable to limited rock and mineral matrices, but one valuable application is the determination of depositional ages for organic-rich clastic sedimentary rocks like black shales. Clastic sedimentary rocks, in most cases, do not yield depositional ages using other radioactive isotope methods, but host much of Earth's fossil record upon which the relative geological timescale is based. As such, Re-Os dating of black shales has potentially wide application in timescale calibration studies and basin analysis, if sufficiently high precision and accuracy could be achieved. This goal requires detailed, systematic studies and evaluation of factors like standard compound stoichiometry, geologic effects, and the 187Re decay constant. Ongoing studies have resulted in an improved understanding of the abilities, limitations and systematics of the Re-Os geochronometer in black shales. First-order knowledge of the effects of processes like hydrocarbon maturation and low-grade metamorphism is now established. Hydrocarbon maturation does not impact the ability of the Re-Os geochronometer to determine depositional ages from black shales. The Re-Os age determined for the Exshaw Fm of western Canada is accurate within 2σ analytical uncertainty of the known age of the unit (U-Pb monazite from ash, conodont biostratigraphy). This suggests that the large improvement in precision attained for Re-Os dating of black shales by Cohen et al (ESPL 1999) over the pioneering work of Ravizza & Turekian (GCA 1989), relates to advances in analytical methodologies and sampling strategies, rather than a lack of disturbance by hydrocarbon maturation. We have found that a significant reduction in isochron scatter can be achieved by using an alternate dissolution medium, which preferentially attacks organic matter in which Re and Os are largely concentrated. 
This likely results from a more limited release of detrital Os and Re held in silicate materials during dissolution, compared with the inverse aqua regia medium used for Carius tube analysis. Using these "organic-selective" dissolution techniques, precise depositional ages have now been obtained from samples with very low TOC contents (~0.5%), meaning that a greater range of clastic sedimentary rocks is amenable to Re-Os age dating. Well-fitted Re-Os isochrons of plausible geological age have also been determined from low-TOC shales subjected to chlorite-grade regional metamorphism. These results further illustrate the wide, but currently underutilized, potential of the Re-Os geochronometer in shales. The precision of age data attainable by the Re-Os system directly from black shales can be better than ±1% uncertainty (2σ, derived from isochron regression analysis), and the derived ages are demonstrably accurate.
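The depositional ages and isochron slopes discussed above are related through the standard 187Re-187Os decay equation, age = ln(1 + slope)/λ. A minimal sketch follows; the decay constant is the commonly used literature value, and the 361 Myr age is chosen only to echo the Exshaw Fm discussion, not taken from any real regression:

```python
import math

LAMBDA_RE187 = 1.666e-11  # 187Re decay constant in 1/yr (commonly used literature value)

def isochron_slope(age_years):
    """Expected isochron slope, e^(lambda*t) - 1, for a given depositional age."""
    return math.exp(LAMBDA_RE187 * age_years) - 1.0

def isochron_age(slope):
    """Depositional age in years from the regressed isochron slope."""
    return math.log(1.0 + slope) / LAMBDA_RE187

# Round trip for a ca. 361 Myr shale (illustrative age only):
slope = isochron_slope(361e6)
print(round(isochron_age(slope) / 1e6, 1))  # → 361.0
```

In practice the slope and its uncertainty come from isochron regression of 187Re/188Os against 187Os/188Os across samples; the 2σ age uncertainty quoted above propagates from that fit.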
Chung, Mei; Ma, Jiantao; Patel, Kamal; Berger, Samantha; Lau, Joseph; Lichtenstein, Alice H
2014-01-01
Background: Concerns have been raised about the concurrent temporal trend between simple sugar intakes, especially of fructose or high-fructose corn syrup (HFCS), and rates of nonalcoholic fatty liver disease (NAFLD) in the United States. Objective: We examined the effect of different amounts and forms of dietary fructose on the incidence or prevalence of NAFLD and indexes of liver health in humans. Design: We conducted a systematic review of English-language, human studies of any design in children and adults with low to no alcohol intake and that reported at least one predetermined measure of liver health. The strength of the evidence was evaluated by considering risk of bias, consistency, directness, and precision. Results: Six observational studies and 21 intervention studies met the inclusion criteria. The overall strength of evidence for observational studies was rated insufficient because of high risk of biases and inconsistent study findings. Of 21 intervention studies, 19 studies were in adults without NAFLD (predominantly healthy, young men) and 1 study each in adults or children with NAFLD. We found a low level of evidence that a hypercaloric fructose diet (supplemented by pure fructose) increases liver fat and aspartate aminotransferase (AST) concentrations in healthy men compared with the consumption of a weight-maintenance diet. In addition, there was a low level of evidence that hypercaloric fructose and glucose diets have similar effects on liver fat and liver enzymes in healthy adults. There was insufficient evidence to draw a conclusion for effects of HFCS or sucrose on NAFLD. Conclusions: On the basis of indirect comparisons across study findings, the apparent association between indexes of liver health (ie, liver fat, hepatic de novo lipogenesis, alanine aminotransferase, AST, and γ-glutamyl transpeptidase) and fructose or sucrose intake appears to be confounded by excessive energy intake.
Overall, the available evidence is not sufficiently robust to draw conclusions regarding effects of fructose, HFCS, or sucrose consumption on NAFLD. PMID:25099546
House, Alisoun; Balkwill, Kevin
2016-03-01
External pollen grain morphology has been widely used in the taxonomy and systematics of flowering plants, especially the Acanthaceae, which are noted for pollen diversity. However, internal pollen wall features have received far less attention due to the difficulty of examining the wall structure. Advancing technology in the field of microscopy has made it possible, with the use of a focused ion beam-scanning electron microscope (FIB-SEM), to view the structure of pollen grain walls in far greater detail and in three dimensions. In this study, the wall structures of 13 species from the Acanthaceae were investigated for features of potential systematic relevance. FIB-SEM was applied to obtain precise cross sections of pollen grains at selected positions for examining the wall ultrastructure. Exploratory studies of the exine have thus far identified five basic structural types. The investigations also show that similar external pollen wall features may have a distinctly different internal structure. FIB-SEM studies have revealed diverse internal pollen wall features which may now be investigated for their systematic and functional significance.
Admiraal, M M; van Rootselaar, A-F; Horn, J
2017-02-01
Electroencephalographic (EEG) reactivity testing is often presented as a clear-cut element of electrophysiological testing. Absence of EEG reactivity is generally considered an indicator of poor outcome, especially in patients after cardiac arrest. However, guidelines do not clearly describe how to test for reactivity and how to evaluate the results. In a quest for clear guidelines, we performed a systematic review aimed at identifying testing methods and definitions of EEG reactivity. We systematically searched the literature between 1970 and May 2016. Methodological quality of the studies was assessed using the QUality In Prognostic Studies tool. Quality of the descriptions of stimulus protocol and reactivity definition was rated on a four-category grading scale based on reproducibility. We found that protocols for EEG reactivity testing vary greatly and descriptions of protocols are almost never replicable. Furthermore, replicable definitions of presence or absence of EEG reactivity are never provided. In order to draw firm conclusions on EEG reactivity as a prognostic factor, future studies should include a precise stimulation protocol and reactivity definition to facilitate guideline formation. © 2016 EAN.
An absolute calibration system for millimeter-accuracy APOLLO measurements
NASA Astrophysics Data System (ADS)
Adelberger, E. G.; Battat, J. B. R.; Birkmeier, K. J.; Colmenares, N. R.; Davis, R.; Hoyle, C. D.; Huang, L. R.; McMillan, R. J.; Murphy, T. W., Jr.; Schlerman, E.; Skrobol, C.; Stubbs, C. W.; Zach, A.
2017-12-01
Lunar laser ranging provides a number of leading experimental tests of gravitation—important in our quest to unify general relativity and the standard model of physics. The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has for years achieved median range precision at the ∼2 mm level. Yet residuals in model-measurement comparisons are an order of magnitude larger, raising the question of whether the ranging data are not nearly as accurate as they are precise, or if the models are incomplete or ill-conditioned. This paper describes a new absolute calibration system (ACS) intended both as a tool for exposing and eliminating sources of systematic error, and as a means to directly calibrate ranging data in situ. The system consists of a high-repetition-rate (80 MHz) laser emitting short (<10 ps) pulses that are locked to a cesium clock. In essence, the ACS delivers photons to the APOLLO detector at exquisitely well-defined time intervals as a 'truth' input against which APOLLO's timing performance may be judged and corrected. Preliminary analysis indicates no inaccuracies in APOLLO data beyond the ∼3 mm level, suggesting that historical APOLLO data are of high quality and motivating continued work on model capabilities. The ACS provides the means to deliver APOLLO data both accurate and precise below the 2 mm level.
Enabling Characteristics Of Optical Autocovariance Lidar For Global Wind And Aerosol Profiling
NASA Astrophysics Data System (ADS)
Grund, C. J.; Stephens, M.; Lieber, M.; Weimer, C.
2008-12-01
Systematic global wind measurements with 70 km horizontal resolution and, depending on altitude from the PBL to the stratosphere, 250 m-2 km vertical resolution and 0.5-2 m/s velocity precision are recognized as key to the understanding and monitoring of complex climate modulations, validation of models, and improved precision and range for weather forecasts. Optical Autocovariance Wind Lidar (OAWL) is a relatively new interferometric direct-detection Doppler lidar approach that promises to meet the required wind profile resolution at substantial mass, cost, and power savings, and at reduced technical risk for a space-based system meeting the most demanding velocity precision and spatial and temporal resolution requirements. A proof-of-concept OAWL has been demonstrated, and a robust multi-wavelength, field-widened (more than 100 μrad) lidar system suitable for high-altitude (over 16 km) aircraft demonstration is under construction. Other advantages of the OAWL technique include insensitivity to the aerosol/molecular backscatter mixing ratio, freedom from complex receiver/transmitter optical frequency lock loops, prospects for practical continuous large-area coverage wind profiling from GEO, and the availability of simultaneous multiple-wavelength High Spectral Resolution Lidar (OA-HSRL) for aerosol identification and optical property measurements. We will discuss the theory, development and demonstration status, advantages, limitations, and space-based performance of OAWL and OA-HSRL, as well as the potential for combined mission synergies.
Protein Domain-Level Landscape of Cancer-Type-Specific Somatic Mutations
Yang, Fan; Petsalaki, Evangelia; Rolland, Thomas; Hill, David E.; Vidal, Marc; Roth, Frederick P.
2015-01-01
Identifying driver mutations and their functional consequences is critical to our understanding of cancer. Towards this goal, and because domains are the functional units of a protein, we explored the protein domain-level landscape of cancer-type-specific somatic mutations. Specifically, we systematically examined tumor genomes from 21 cancer types to identify domains with high mutational density in specific tissues, the positions of mutational hotspots within these domains, and the functional and structural context where possible. While hotspots corresponding to specific gain-of-function mutations are expected for oncoproteins, we found that tumor suppressor proteins also exhibit strong biases toward being mutated in particular domains. Within domains, however, we observed the expected patterns of mutation, with recurrently mutated positions for oncogenes and evenly distributed mutations for tumor suppressors. For example, we identified both known and new endometrial cancer hotspots in the tyrosine kinase domain of the FGFR2 protein, one of which is also a hotspot in breast cancer, and found two new hotspots in the Immunoglobulin I-set domain in colon cancer. Thus, to prioritize cancer mutations for further functional studies aimed at more precise cancer treatments, we have systematically correlated mutations and cancer types at the protein domain level. PMID:25794154
PHYSICAL PROPERTIES OF THE 0.94-DAY PERIOD TRANSITING PLANETARY SYSTEM WASP-18
DOE Office of Scientific and Technical Information (OSTI.GOV)
Southworth, John; Anderson, D. R.; Maxted, P. F. L.
2009-12-10
We present high-precision photometry of five consecutive transits of WASP-18, an extrasolar planetary system with one of the shortest orbital periods known. Through the use of telescope defocusing we achieve a photometric precision of 0.47-0.83 mmag per observation over complete transit events. The data are analyzed using the JKTEBOP code and three different sets of stellar evolutionary models. We find the mass and radius of the planet to be M_b = 10.43 ± 0.30 ± 0.24 M_Jup and R_b = 1.165 ± 0.055 ± 0.014 R_Jup (statistical and systematic errors), respectively. The systematic errors in the orbital separation and the stellar and planetary masses, arising from the use of theoretical predictions, are of a similar size to the statistical errors and set a limit on our understanding of the WASP-18 system. We point out that seven of the nine known massive transiting planets (M_b > 3 M_Jup) have eccentric orbits, whereas significant orbital eccentricity has been detected for only four of the 46 less-massive planets. This may indicate that there are two different populations of transiting planets, but could also be explained by observational biases. Further radial velocity observations of low-mass planets will make it possible to choose between these two scenarios.
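The abstract quotes statistical and systematic uncertainties separately (e.g. M_b = 10.43 ± 0.30 ± 0.24 M_Jup). If a single error bar were wanted, adding the two in quadrature is a common convention, although the authors deliberately keep them separate; a minimal sketch:

```python
import math

def combine_quadrature(stat, syst):
    """Combine independent statistical and systematic uncertainties in quadrature."""
    return math.sqrt(stat ** 2 + syst ** 2)

# Planet mass: 10.43 with 0.30 (statistical) and 0.24 (systematic) M_Jup.
print(round(combine_quadrature(0.30, 0.24), 2))  # → 0.38
```

Quadrature addition assumes the two error sources are independent, which is why quoting them separately, as here, preserves more information.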
A systematic review of microwave-based therapy for axillary hyperhidrosis.
Hsu, Tzu-Herng; Chen, Yu-Tsung; Tu, Yu-Kang; Li, Chien-Nien
2017-10-01
To systematically analyse the literature on the use of microwave-based devices for subdermal thermolysis of the axilla and their efficacy for the treatment of axillary hyperhidrosis. A systematic review was conducted using the PubMed, Embase, SCOPUS and Cochrane databases on 2 June 2016. The inclusion criteria were: (1) studies with human subjects, (2) full-text articles published in English, (3) a microwave-based device used to treat axillary hyperhidrosis and (4) trials that precisely evaluated axillary hyperhidrosis. The exclusion criteria were: (1) studies that did not fit the inclusion criteria mentioned above and (2) case reports and reviews. We reviewed five clinical trials, published between 2012 and 2016, comprising 189 patients. There was one randomized controlled trial, one retrospective study, and the remainder were prospective studies. Although all of the studies were conducted with small sample sizes, the results indicated that microwave-based device treatment of axillary hyperhidrosis had long-term efficacy with mild adverse effects. In addition, most patients were satisfied with the outcomes in these studies. Microwave-based device treatment may be an effective alternative treatment for axillary hyperhidrosis. However, further investigation is necessary to determine its long-term efficacy and safety.
High Resolution Seamless Dom Generation Over CHANG'E-5 Landing Area Using Lroc Nac Images
NASA Astrophysics Data System (ADS)
Di, K.; Jia, M.; Xin, X.; Liu, B.; Liu, Z.; Peng, M.; Yue, Z.
2018-04-01
Chang'e-5, China's first sample return lunar mission, will be launched in 2019, and the planned landing area is near Mons Rümker in Oceanus Procellarum. High-resolution and high-precision mapping of the landing area is of great importance for supporting scientific analysis and safe landing. This paper proposes a systematic method for large area seamless digital orthophoto map (DOM) generation, and presents the mapping result of Chang'e-5 landing area using over 700 LROC NAC images. The developed method mainly consists of two stages of data processing: stage 1 includes subarea block adjustment with rational function model (RFM) and seamless subarea DOM generation; stage 2 includes whole area adjustment through registration of the subarea DOMs with thin plate spline model and seamless DOM mosaicking. The resultant seamless DOM covers a large area (20° longitude × 4° latitude) and is tied to the widely used reference DEM, SLDEM2015. As a result, the RMS errors of the tie points are all around half pixel in image space, indicating a high internal precision; the RMS errors of the control points are about one grid cell size of SLDEM2015, indicating that the resultant DOM is tied to SLDEM2015 well.
NASA Astrophysics Data System (ADS)
Henry, William; Jefferson Lab Hall A Collaboration
2017-09-01
Jefferson Lab's cutting-edge parity-violating electron scattering program has increasingly stringent requirements for systematic errors. Beam polarimetry is often one of the dominant systematic errors in these experiments. A new Møller Polarimeter in Hall A of Jefferson Lab (JLab) was installed in 2015 and has taken first measurements for a polarized scattering experiment. Upcoming parity violation experiments in Hall A include CREX, PREX-II, MOLLER and SOLID, with the latter two requiring <0.5% precision on beam polarization measurements. The polarimeter measures the Møller scattering rates of the polarized electron beam incident upon an iron target placed in a saturating magnetic field. The spectrometer consists of four focusing quadrupoles and one momentum-selection dipole. The detector is designed to measure the scattered and knocked-out target electrons in coincidence. Beam polarization is extracted by constructing an asymmetry from the scattering rates when the incident electron spin is parallel and anti-parallel to the target electron spin. Initial data will be presented. Sources of systematic errors include target magnetization, spectrometer acceptance, the Levchuk effect, and radiative corrections, which will be discussed. National Science Foundation.
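The polarization extraction described in this abstract reduces to a simple ratio. A minimal sketch, assuming illustrative values for the effective analyzing power (the 7/9 longitudinal maximum) and the iron-foil target polarization; neither number is taken from the Hall A apparatus:

```python
# Sketch of extracting beam polarization from Moller scattering rates.
# The default analyzing power and target polarization are illustrative
# assumptions, not values from the Hall A polarimeter.

def beam_polarization(rate_parallel, rate_antiparallel,
                      analyzing_power=-7.0 / 9.0, target_polarization=0.08):
    """Form the spin asymmetry and divide out the analyzing power
    and the target-foil polarization."""
    asym = (rate_parallel - rate_antiparallel) / (rate_parallel + rate_antiparallel)
    return asym / (analyzing_power * target_polarization)
```

In practice the analyzing power is averaged over the spectrometer acceptance and the target polarization is inferred from the foil magnetization; both enter the systematic error budget the abstract lists.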
Kassianos, Angelos P; Ioannou, Myria; Koutsantoni, Marianna; Charalambous, Haris
2018-01-01
Specialized palliative care (SPC) is currently underutilized or provided late in cancer care. The aim of this systematic review and meta-analysis is to critically evaluate the impact of SPC on patients' health-related quality of life (HRQoL). Five databases were searched through June 2016. Randomized controlled trials (RCTs) and prospective studies using a pre- and post- assessment of HRQoL were included. The PRISMA reporting statement was followed. Criteria from available checklists were used to evaluate the studies' quality. A meta-analysis followed using random-effect models separately for RCTs and non-RCTs. Eleven studies including five RCTs and 2939 cancer patients published between 2001 and 2014 were identified. There was improved HRQoL in patients with cancer following SPC, especially in symptoms like pain, nausea, and fatigue, as well as improvement of physical and psychological functioning. Little or no improvement was observed in social and spiritual domains. In general, studies of inpatients showed a larger benefit from SPC than studies of outpatients, whereas patients' age and treatment duration did not moderate the impact of SPC. Methodological shortcomings of included studies include high attrition rates, low precision and power, and poor reporting of control procedures. The methodological problems and publication bias call for higher-quality studies to be designed, funded, and published. However, there is a clear message that SPC is multi-disciplinary and aims at palliation of symptoms and burden in line with current recommendations.
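The random-effects pooling used in such meta-analyses can be sketched with the standard DerSimonian-Laird estimator; the function below is a generic illustration of that estimator, not the authors' analysis code:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2.
    effects: per-study effect sizes; variances: their sampling variances."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

With homogeneous studies (Q below its degrees of freedom) tau^2 truncates to zero and the result coincides with the fixed-effect estimate.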
Nonradial and radial period changes of the δ Scuti star 4 CVn. II. Systematic behavior over 40 years
NASA Astrophysics Data System (ADS)
Breger, M.; Montgomery, M. H.; Lenz, P.; Pamyatnykh, A. A.
2017-03-01
Aims: Radial and nonradial pulsators on and near the main sequence show period and amplitude changes that are too large to be the product of stellar evolution. The multiperiodic δ Sct stars are well suited to study this, as the period changes of different modes excited in the same star can be compared. This requires a very large amount of photometric data covering years and decades as well as mode identifications. Methods: We have examined over 800 nights of high-precision photometry of the multiperiodic pulsator 4 CVn obtained from 1966 through 2012. Because most of the data were obtained in adjacent observing seasons, it is possible to derive very accurate period values for a number of the excited pulsation modes and to study their systematic changes from 1974 to 2012. Results: Most pulsation modes show systematic significant period and amplitude changes on a timescale of decades. For the well-studied modes, around 1986 a general reversal of the directions of both the positive and negative period changes occurred. Furthermore, the period changes between the different modes are strongly correlated, although they differ in size and sign. For the modes with known values of the spherical degree and azimuthal order, we find a correlation between the direction of the period changes and the identified azimuthal order, m. The associated amplitude changes generally have similar timescales of years or decades, but show little systematic or correlated behavior from mode to mode. Conclusions: A natural explanation for the opposite behavior of the prograde and retrograde modes is that their period changes are driven by a changing rotation profile. The changes in the rotation profile could in turn be driven by processes, perhaps the pulsations themselves, that redistribute angular momentum within the star. In general, different modes have different rotation kernels, so this will produce period shifts of varying magnitude for different modes.
Miyaoka, Yuichiro; Berman, Jennifer R; Cooper, Samantha B; Mayerl, Steven J; Chan, Amanda H; Zhang, Bin; Karlin-Neumann, George A; Conklin, Bruce R
2016-03-31
Precise genome-editing relies on the repair of sequence-specific nuclease-induced DNA nicking or double-strand breaks (DSBs) by homology-directed repair (HDR). However, nonhomologous end-joining (NHEJ), an error-prone repair, acts concurrently, reducing the rate of high-fidelity edits. The identification of genome-editing conditions that favor HDR over NHEJ has been hindered by the lack of a simple method to measure HDR and NHEJ directly and simultaneously at endogenous loci. To overcome this challenge, we developed a novel, rapid, digital PCR-based assay that can simultaneously detect one HDR or NHEJ event out of 1,000 copies of the genome. Using this assay, we systematically monitored genome-editing outcomes of CRISPR-associated protein 9 (Cas9), Cas9 nickases, catalytically dead Cas9 fused to FokI, and transcription activator-like effector nuclease at three disease-associated endogenous gene loci in HEK293T cells, HeLa cells, and human induced pluripotent stem cells. Although it is widely thought that NHEJ generally occurs more often than HDR, we found that more HDR than NHEJ was induced under multiple conditions. Surprisingly, the HDR/NHEJ ratios were highly dependent on gene locus, nuclease platform, and cell type. The new assay system, and our findings based on it, will enable mechanistic studies of genome-editing and help improve genome-editing technology.
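Digital PCR quantification of rare events such as HDR or NHEJ edits rests on Poisson statistics over droplet partitions. A hedged sketch of the standard Poisson correction (the function names and the reference-assay setup are illustrative, not the authors' assay design):

```python
import math

def copies_per_droplet(n_positive, n_total):
    """Poisson correction: mean target copies per droplet, estimated
    from the fraction of droplets containing no target."""
    frac_negative = (n_total - n_positive) / n_total
    return -math.log(frac_negative)

def edit_fraction(pos_edit, pos_reference, n_total):
    """Fraction of genome copies carrying a given edit (e.g. HDR),
    relative to a reference assay that counts all genome copies."""
    return copies_per_droplet(pos_edit, n_total) / copies_per_droplet(pos_reference, n_total)
```

The Poisson correction matters because a positive droplet may contain more than one copy; simply dividing positive counts would bias the edit fraction at high loading.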
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-05-25
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated; the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. To improve the results, external high-accuracy gravity information, e.g. from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau, we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect.
The analysis results show that the proposed semi-parametric method combined with regularization is efficient to address such modelling problems.
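Downward continuation amplifies high-frequency noise, which is why the abstract pairs it with regularization. A minimal Tikhonov sketch of the regularization component (this is a generic illustration, not the authors' semi-parametric method):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Regularized least squares: x = argmin ||Ax - b||^2 + alpha ||x||^2.
    This is the standard remedy for ill-posed continuation operators,
    trading a small bias for strongly damped noise amplification."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

With alpha = 0 this reduces to ordinary least squares; increasing alpha shrinks the solution toward zero, suppressing the random-error blow-up at the cost of resolution, so alpha is normally chosen by an L-curve or cross-validation criterion.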
Piera-Mardemootoo, Carole; Lambert, Philippe; Faillie, Jean-Luc
2018-02-21
Metformin is recommended as the first-line treatment of type 2 diabetes mellitus. Despite its common use, few studies have been conducted to precisely measure the efficacy of metformin versus placebo as a first-line treatment. This study aims to assess the precise effects of metformin monotherapy on glycemic control and weight in drug-naive patients with type 2 diabetes mellitus. Medline® and Cochrane databases were searched until March 19, 2016 to perform a systematic review and meta-analysis of placebo-controlled randomized trials evaluating metformin monotherapy in drug-naive patients with type 2 diabetes mellitus. Assessed outcomes include glycemic control (fasting plasma glucose, glycated hemoglobin) and weight. Overall, 16 studies (1140 patients) were selected. Compared to placebo, metformin monotherapy was associated with decreased glycated hemoglobin by 0.95% at 3 months (95% CI: 0.50 to 1.39, I²=87%) and 1.32% at 6 months (95% CI: 1.01 to 1.62, I²=71%), and decreased fasting plasma glucose by 1.92 mmol/L at 1 month (95% CI: 0.11 to 3.74, I²=88%), 1.79 mmol/L at 3 months (95% CI: 0.92 to 2.66, I²=88%) and 2.14 mmol/L at 6 months (95% CI: 1.17 to 3.12, I²=82%). No significant difference was demonstrated for the comparisons of weight, due to the relatively small number of studies retrieved from the literature, resulting in insufficient statistical power. This study provides the precise effects of metformin monotherapy on fasting plasma glucose and glycated hemoglobin that physicians can expect in drug-naive patients with type 2 diabetes mellitus. No evidence was found for the effects on weight. Copyright © 2018 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
Experimental Constraints on Neutrino Spectra Following Fission
NASA Astrophysics Data System (ADS)
Napolitano, Jim; Daya Bay Collaboration
2016-09-01
We discuss new initiatives to constrain predictions of fission neutrino spectra from nuclear reactors. These predictions are germane to the understanding of reactor flux anomalies; are needed to reduce systematic uncertainty in neutrino oscillation spectra; and inform searches for the diffuse supernova neutrino background. The initiatives include a search for very high-Q beta decay components to the neutrino spectrum from the Daya Bay power plant; plans for a measurement of the β⁻ spectrum from 252Cf fission products; and precision measurements of the 235U fission neutrino spectrum from PROSPECT and other very short baseline reactor experiments.
John, George; Mason, Megan; Ajayan, Pulickel M; Dordick, Jonathan S
2004-11-24
A limited combinatorial strategy was used to synthesize a small library of soft lipid-based materials ranging from structurally unordered fibers to highly uniform nanotubes. The latter nanotubes are comprised of a bilayer structure with interdigitated alkyl chains associated through hydrophobic interactions. These tubes contain accessible 2,6-diaminopyridine linkers that can interact with thymidine and related nucleosides through multipoint hydrogen bonding, thereby quenching the intrinsic fluorescence of the aromatic linker. These results are the first example of a systematic strategy to design functional lipid nanotubes with precise structural and functional features.
Immobilisation precision in VMAT for oral cancer patients
NASA Astrophysics Data System (ADS)
Norfadilah, M. N.; Ahmad, R.; Heng, S. P.; Lam, K. S.; Radzi, A. B. Ahmad; John, L. S. H.
2017-05-01
A study was conducted to evaluate and quantify the precision of the interfraction setup with different immobilisation devices throughout the treatment time. Local setup accuracy was analysed for 8 oral cancer patients receiving radiotherapy; 4 with a HeadFIX® mouthpiece moulded with wax (HFW) and 4 with a 10 ml/cc syringe barrel (SYR). Each patient underwent Image Guided Radiotherapy (IGRT), with a total of 209 cone-beam computed tomography (CBCT) data sets for measurement of setup errors. The setup variations in the mediolateral (ML), craniocaudal (CC), and anteroposterior (AP) dimensions were measured. The overall mean displacement (M), the population systematic (Σ) and random (σ) errors and the 3D vector length were calculated. Clinical target volume to planning target volume (CTV-PTV) margins were calculated according to the van Herk formula (2.5Σ+0.7σ). The M values for both groups were < 1 mm and < 1° in all translational and rotational directions, indicating no significant imprecision in the equipment (lasers) or during the procedure. The interfraction translational 3D vectors for HFW and SYR were 1.93±0.66 mm and 3.84±1.34 mm, respectively. The interfraction average rotational errors were 0.00°±0.65° and 0.34°±0.59°, respectively. The CTV-PTV margins calculated along the 3 translational axes (Right-Left, Superior-Inferior, Anterior-Posterior) were 3.08, 2.22 and 0.81 mm for HFW and 3.76, 6.24 and 5.06 mm for SYR. The results of this study demonstrate that HFW is more precise in reproducing patient position than the conventionally used SYR (p<0.001). All calculated margins were within the hospital protocol (5 mm) except along the S-I and A-P axes using the syringe. For this reason, daily IGRT is highly recommended to improve the immobilisation precision.
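The van Herk margin recipe cited in the abstract is a one-line formula; assuming Σ and σ are the population systematic and random setup errors in mm along one axis:

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """CTV-to-PTV margin per axis via the van Herk recipe:
    2.5*Sigma + 0.7*sigma (both inputs and the result in mm)."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random
```

For example, equal systematic and random errors of 1 mm give a 3.2 mm margin; the per-axis margins quoted in the study follow from the measured Σ and σ in each direction. Note the systematic component dominates, which is why reducing it with better immobilisation shrinks margins fastest.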
Ratio of He²⁺/He⁺ from 80 to 800 eV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samson, J.A.R.; Stolte, W.C.; He, Z.X.
1997-04-01
The importance of studying the double ionization of He by single photons lies in the fact that He presents the simplest structure for the study of electron correlation processes. Even so, it has proved a challenging problem to understand and describe theoretically. Surprisingly, it has also proved difficult to agree experimentally on the absolute values of the He²⁺/He⁺ ratios. The availability of new synchrotron facilities with high intensity light outputs has increased the experimental activity in this area. However, by the very nature of these continuum sources, systematic errors occur due to the presence of higher-order spectra, and great care must be exercised. The authors have measured the He²⁺/He⁺ ratios over a period of 5 years, the last three at the ALS utilizing beamlines 9.0.1 and 6.3.2. The sources of systematic errors that they have considered include: scattered light, higher-order spectra, detector sensitivity to differently charged ions, discriminator levels in the counting equipment, gas purity, and stray electrons from filters and metal supports. The measurements have been made at three different synchrotron facilities with different types of monochromators and their potential for different sources of systematic errors. However, the authors' data from all these different measurements agree within a few percent of each other. From the above results and their precise total photoionization cross sections for He, the authors can obtain the absolute photoionization cross section for He²⁺. They find similarly near-perfect agreement with several of the latest calculations.
NASA Astrophysics Data System (ADS)
Brennecka, Gregory A.; Borg, Lars E.; Romaniello, Stephen J.; Souders, Amanda K.; Shollenberger, Quinn R.; Marks, Naomi E.; Wadhwa, Meenakshi
2017-03-01
Although there is limited direct evidence for supernova input into the nascent Solar System, many models suggest it formed by the gravitational collapse of a molecular cloud that was triggered by a nearby supernova. Existing lines of evidence, mostly in the form of short-lived radionuclides present in the early Solar System, are potentially consistent with this hypothesis, but still allow for alternative explanations. Since the natural production of 126Sn is thought to occur only in supernovae and this isotope has a short half-life (126Sn→126Te, t1/2 = 235 ky), the discovery of extant 126Sn would provide unequivocal proof of supernova input to the early Solar System. Previous attempts to quantify the initial abundance of 126Sn by examining Sn-Te systematics in early solids have been hampered by difficulties in precisely measuring Te isotope ratios in these materials. Thus, here we describe a novel technique that uses hydride generation to dramatically increase the ionization efficiency of Te, an approximately 30-fold increase over previous work. This introduction system, when coupled to a MC-ICPMS, enables high-precision Te isotopic analyses on samples with <10 ng of Te. We used this technique to analyze Te from a unique set of calcium-aluminum-rich inclusions (CAIs) that exhibit an exceptionally large range in Sn/Te ratios, facilitating the search for the short-lived isotope 126Sn. This sample set shows no evidence of live 126Sn, implying at most minor input of supernova material during the time at which the CAIs formed. However, the petrology of this sample set, combined with the higher than expected concentrations of Sn and Te and the lack of nucleosynthetic anomalies in other isotopes of Te, suggests that the bulk of the Sn and Te recovered from these particular refractory inclusions is not of primary origin and thus does not represent a primary signature of Sn-Te systematics of the protosolar nebula during condensation of CAIs or their precursors.
Although no evidence of supernova input was found based on Sn-Te systematics in this sample set, hydride generation represents a powerful tool that can now be used to further explore Te isotope systematics in less altered materials.
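The case for 126Sn as an unambiguous supernova tracer comes from its short half-life (235 kyr, per the abstract): a simple decay calculation shows how quickly any supernova contribution vanishes.

```python
def fraction_remaining(t_myr, half_life_myr=0.235):
    """Surviving fraction of 126Sn after t megayears;
    half-life taken from the abstract (235 kyr)."""
    return 0.5 ** (t_myr / half_life_myr)
```

After one half-life (0.235 Myr) half the inventory survives; after ten half-lives (about 2.35 Myr) less than 0.1% remains, so any detectable 126Sn would pin supernova input to essentially the moment of Solar System formation.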
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eifler, Tim; Krause, Elisabeth; Dodelson, Scott
2014-05-28
Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
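The core of PCA marginalization is extracting a few dominant directions from many simulated systematic scenarios and marginalizing over one nuisance amplitude per direction. A generic numpy sketch of that extraction step (the variable layout is an assumption for illustration, not CosmoLike's API):

```python
import numpy as np

def leading_systematic_modes(residuals, n_modes):
    """Given rows = data-vector residuals under different systematic
    scenarios (e.g. baryonic feedback models), return the n_modes
    principal directions and their singular values. A likelihood
    analysis would then marginalize over one amplitude per mode."""
    centered = residuals - residuals.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_modes], s[:n_modes]
```

When a handful of singular values dominate, a few nuisance amplitudes capture the whole family of scenarios, which is why the abstract needs only 3 to 4 nuisance parameters.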
NASA Astrophysics Data System (ADS)
Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.
2016-02-01
The accuracy, precision, and reliability of ultrasonic thickness structural health monitoring systems are discussed, including the influence of systematic and environmental factors. To quantify some of these factors, a compression wave ultrasonic thickness structural health monitoring experiment is conducted on a flat calibration block at ambient temperature with forty four thin-film sol-gel transducers and various time-of-flight thickness calculation methods. As an initial calibration, the voltage response signals from each sensor are used to determine the common material velocity as well as the signal offset unique to each calculation method. Next, the measurement precision of the thickness error of each method is determined with a proposed weighted censored relative maximum likelihood analysis technique incorporating the propagation of asymmetric measurement uncertainty. The results are presented as upper and lower confidence limits analogous to the a90/95 terminology used in industry recognized Probability-of-Detection assessments. Future work is proposed to apply the statistical analysis technique to quantify measurement precision of various thickness calculation methods under different environmental conditions such as high temperature, rough back-wall surface, and system degradation with an intended application to monitor naphthenic acid corrosion in oil refineries.
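All of the compared time-of-flight methods feed the same pulse-echo thickness relation. A sketch with an illustrative material velocity (5.9 mm/µs is a typical compression-wave value for steel, not a number from this experiment):

```python
def thickness_from_tof(time_of_flight_us, velocity_mm_per_us, offset_us=0.0):
    """Pulse-echo thickness: the wave crosses the part twice, so
    d = v * (t - t0) / 2, where t0 is the method-specific signal
    offset determined during calibration."""
    return velocity_mm_per_us * (time_of_flight_us - offset_us) / 2.0
```

For instance, a 3.39 µs echo delay at 5.9 mm/µs corresponds to roughly 10 mm of material; the calibration step in the abstract exists precisely to pin down the shared velocity and the per-method offset t0.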
Accuracy of the HST Standard Astrometric Catalogs w.r.t. Gaia
NASA Astrophysics Data System (ADS)
Kozhurina-Platais, V.; Grogin, N.; Sabbi, E.
2018-02-01
The goal of astrometric calibration of the HST ACS/WFC and WFC3/UVIS imaging instruments is to provide a coordinate system free of distortion to the precision level of 0.1 pixel (4-5 mas) or better. This astrometric calibration is based on two HST astrometric standard fields in the vicinity of the globular clusters 47 Tuc and omega Cen, respectively. The derived calibration of the geometric distortion is assumed to be accurate down to 2-3 mas. Is this accuracy in agreement with the true value? Now, with access to globally accurate positions from the first Gaia data release (DR1), we found that there are measurable offsets, rotation, scale and other deviations of distortion parameters in the two HST standard astrometric catalogs. These deviations from the distortion-free and properly aligned coordinate system should be accounted for and corrected, so that the high precision HST positions are free of any systematic errors. We also found that the precision of the HST pixel coordinates is substantially better than the accuracy listed in the Gaia DR1. Therefore, in order to finalize the components of distortion in the HST standard catalogs, the next release of Gaia data is needed.
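Measuring "offsets, rotation, scale" between two catalogs is a linear least-squares problem. A minimal 4-parameter similarity-transform fit (an illustrative sketch, not the calibration pipeline's code):

```python
import numpy as np

def fit_similarity(xy_ref, xy_cat):
    """Least-squares 4-parameter fit (scale, rotation, x/y offset)
    mapping catalog positions onto reference (e.g. Gaia) positions:
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty
    with scale = hypot(a, b) and rotation = atan2(b, a)."""
    x, y = xy_cat[:, 0], xy_cat[:, 1]
    n = len(x)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([x, -y, np.ones(n), np.zeros(n)])   # x' rows
    A[1::2] = np.column_stack([y, x, np.zeros(n), np.ones(n)])    # y' rows
    rhs = xy_ref.reshape(-1)                                      # interleaved x', y'
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), tx, ty
```

The residuals left after removing this global transform are what expose the "other deviations of distortion parameters" the abstract mentions.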
Patient similarity for precision medicine: a systematic review.
Parimbelli, E; Marini, S; Sacchi, L; Bellazzi, R
2018-06-01
Evidence-based medicine is the most prevalent paradigm adopted by physicians. Clinical practice guidelines typically define a set of recommendations together with eligibility criteria that restrict their applicability to a specific group of patients. The ever-growing size and availability of health-related data is currently challenging the broad definitions of guideline-defined patient groups. Precision medicine leverages genetic, phenotypic, or psychosocial characteristics to provide precise identification of patient subsets for treatment targeting. Defining a patient similarity measure is thus an essential step to allow stratification of patients into clinically-meaningful subgroups. The present review investigates the use of patient similarity as a tool to enable precision medicine. 279 articles were analyzed along four dimensions: data types considered, clinical domains of application, data analysis methods, and translational stage of findings. Cancer-related research employing molecular profiling and standard data analysis techniques such as clustering constitute the majority of the retrieved studies. Chronic and psychiatric diseases follow as the second most represented clinical domains. Interestingly, almost one quarter of the studies analyzed presented a novel methodology, with the most advanced employing data integration strategies and being portable to different clinical domains. Integration of such techniques into decision support systems constitutes an interesting trend for future research. Copyright © 2018. Published by Elsevier Inc.
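At the heart of the reviewed methods is some pairwise patient similarity measure; one of the simplest common choices is cosine similarity over patient feature vectors (a generic sketch, not taken from any reviewed study):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two patient feature vectors
    (e.g. molecular profiles or phenotype encodings).
    Returns 1.0 for identical direction, 0.0 for orthogonal vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

A full similarity matrix built from such a measure is exactly what clustering methods like those dominating the reviewed studies take as input for patient stratification.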
Schlippenbach, Trixi von; Oefner, Peter J; Gronwald, Wolfram
2018-03-09
Non-uniform sampling (NUS) allows the accelerated acquisition of multidimensional NMR spectra. The aim of this contribution was the systematic evaluation of the impact of various quantitative NUS parameters on the accuracy and precision of 2D NMR measurements of urinary metabolites. Urine aliquots spiked with varying concentrations (15.6-500.0 µM) of tryptophan, tyrosine, glutamine, glutamic acid, lactic acid, and threonine, which can only be resolved fully by 2D NMR, were used to assess the influence of the sampling scheme, reconstruction algorithm, amount of omitted data points, and seed value on the quantitative performance of NUS in 1H,1H-TOCSY and 1H,1H-COSY45 NMR spectroscopy. Sinusoidal Poisson-gap sampling and a compressed sensing approach employing the iterative re-weighted least squares method for spectral reconstruction allowed a 50% reduction in measurement time while maintaining sufficient quantitative accuracy and precision for both types of homonuclear 2D NMR spectroscopy. Together with other advances in instrument design, such as state-of-the-art cryogenic probes, use of 2D NMR spectroscopy in large biomedical cohort studies seems feasible.
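Sinusoidal Poisson-gap sampling (after Hyberts and Wagner) can be sketched as follows. This is an illustrative reimplementation: the rate-adjustment loop and its constants are assumptions for the sketch, not the published algorithm's exact procedure.

```python
import math
import random

def _poisson(rng, lam):
    # Knuth's multiplicative method for a Poisson-distributed integer.
    if lam <= 0.0:
        return 0
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_gap(n_total, n_keep, seed=1):
    """Pick n_keep of n_total indirect-dimension increments. Gaps
    between kept points are Poisson-distributed with a sinusoidally
    growing mean, so early (high-signal) increments are sampled
    densely. The gap rate lam is rescaled until the schedule has
    the requested size (illustrative search, not the published code)."""
    lam = 2.0 * (n_total / n_keep - 1.0)
    best = None
    for _ in range(500):
        rng = random.Random(seed)          # same stream for every trial rate
        sched, i = [], 0
        while i < n_total:
            sched.append(i)
            weight = math.sin((i + 0.5) * math.pi / (2.0 * n_total))
            i += 1 + _poisson(rng, lam * weight)
        if len(sched) == n_keep:
            return sched
        if len(sched) > n_keep and best is None:
            best = sched                    # oversized fallback, truncated below
        lam *= len(sched) / n_keep          # too many points -> bigger gaps
    return (best or sched)[:n_keep]
```

The seed dependence of such schedules is exactly one of the NUS parameters whose quantitative impact the abstract evaluates.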
Gao, Chan; Zhang, Xueyong; Zhang, Chuanchao; Sui, Zhilei; Hou, Meng; Dai, Rucheng; Wang, Zhongping; Zheng, Xianxu; Zhang, Zengming
2018-05-17
Herein, pressure-induced phase transitions of RDX up to 50 GPa were systematically studied under different compression conditions. Precise phase transition points were obtained based on high-quality Raman spectra with small pressure intervals. These data support the validation of theoretical detonation formulas and the design of precision weapons. The experimental results indicated that α-RDX immediately transformed to γ-RDX at 3.5 GPa, due to hydrostatic conditions and a possible interaction between the penetrating helium and RDX, when helium gas was used as the pressure-transmitting medium (PTM). Mapping of the pressure distribution in the samples demonstrates that a pressure gradient is generated in the chamber with the other PTMs, regardless of which PTM is used. This gradient induced the first phase transition, which started at 2.3 GPa and completed at 4.1 GPa; a larger pressure gradient promoted the phase transition at an earlier stage of compression. Experimental results supported the existence of two conformers, AAI and AAE, of γ-RDX, as proposed by another group. δ-RDX was previously considered to occur only in a hydrostatic environment around 18 GPa using helium as the PTM; this study confirms that δ-RDX is independent of the PTM and also exists under non-hydrostatic conditions. Evidence for a new phase (ζ) was found at about 28 GPa. These four phases have also been verified via XRD under high pressures. In addition, another new phase (η) may exist above 38 GPa, which needs to be confirmed in future work. Moreover, all the phase transitions were reversible after the pressure was released, and the original α-RDX was always recovered at ambient pressure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, J.; Hao, H.; Li, J. Y.
We report a systematic experimental study of a storage ring two-color free-electron laser (FEL) operating simultaneously in the infrared (IR) and ultraviolet (UV) wavelength regions. The two-color FEL lasing has been realized using a pair of dual-band high-reflectivity FEL mirrors with two different undulator configurations. We have demonstrated independent wavelength tuning in a wide range for each lasing color, as well as harmonically locked wavelength tuning when the UV lasing occurs at the second harmonic of the IR lasing. Precise power control of two-color lasing with good power stability has also been achieved. In addition, the impact of the degradation of FEL mirrors on the two-color FEL operation is reported. Moreover, we have investigated the temporal structures of the two-color FEL beams, showing simultaneous two-color micropulses with their intensity modulations displayed as FEL macropulses.
Accessory Mineral Records of Early Earth Crust-Mantle Systematics: an Example From West Greenland
NASA Astrophysics Data System (ADS)
Storey, C. D.; Hawkesworth, C. J.
2008-12-01
Conditions for the formation and the nature of Earth's early crust are enigmatic due to poor preservation. Before c.4 Ga the only archives are detrital minerals eroded from earlier crust, such as the Jack Hills zircons in western Australia, or extinct isotope systematics. Zircons are particularly powerful since they retain precise records of their ages of crystallisation, and the Lu-Hf radiogenic isotope and O stable isotope systematics of the reservoir from which they crystallised. In principle, this allows insight into the nature of the crust, the mantle reservoir from which the melt was extracted and any reworked material incorporated into that melt. We have used in situ methods to measure U-Pb, O and Lu-Hf within single zircon crystals from tonalitic gneisses from West Greenland in the vicinity of the Isua Supracrustal Belt. They have little disturbed ages of c.3.8 Ga, mantle-like O isotope signatures and Lu-Hf isotope signatures that lie on the CHUR evolution line at 3.8 Ga. These samples have previously been subjected to Pb isotope feldspar and 142Nd whole rock analysis and have helped constrain models in which early differentiation of a proto-crust must have occurred. The CHUR-like Lu-Hf signature, along with mantle-like O signature from these zircons suggests juvenile melt production at 3.8 Ga from undifferentiated mantle, yet the other isotope systems preclude this possibility. Alternatively, this is further strong evidence for a heterogeneous mantle in the early Earth. Whilst zircons afford insight into the nature of the early crust and mantle, it is through the Sm-Nd system that the mantle has traditionally been viewed. Titanite often contains several thousand ppm Nd, making it amenable to precise analysis, and is a common accessory phase. It has a reasonably high closure temperature for Pb and O, and it can retain cores with older ages and distinct REE chemistry. 
It is often the main accessory phase alongside zircon, and it is the main carrier of Nd within the whole rock, such that Nd isotope analysis of titanite may be able to see through later alteration that may have partially reset the whole-rock system. We present new in situ U-Pb, O and Sm-Nd and high-precision U-Pb ID-TIMS and Sm-Nd MC-ICPMS data from individual or fragmented titanite grains. We discuss how these data complement the zircon data and may help to resolve long-standing debates in ancient gneiss terranes, with relevance to the nature and formation of crust on the early Earth.
Bucher Della Torre, Sophie; Keller, Amélie; Laure Depeyre, Jocelyne; Kruseman, Maaike
2016-04-01
In the context of a worldwide high prevalence of childhood obesity, the role of sugar-sweetened beverage (SSB) consumption as a cause of excess weight gain remains controversial. Conflicting results may be due to methodological issues in original studies and in reviews. The aim of this review was to systematically analyze the methodology of studies investigating the influence of SSB consumption on risk of overweight and obesity among children and adolescents, and the studies' ability to answer this research question. A systematic review of cohort and experimental studies published until December 2013 in peer-reviewed journals was performed on Medline, CINAHL, Web of Knowledge, and ClinicalTrials.gov. Studies investigating the influence of SSB consumption on risk of overweight and obesity among children and adolescents were included, and methodological quality to answer this question was assessed independently by two investigators using the Academy of Nutrition and Dietetics Quality Criteria Checklist. Among the 32 identified studies, nine had positive quality ratings and 23 studies had at least one major methodological issue. Main methodological issues included SSB definition and inadequate measurement of exposure. Studies with positive quality ratings found an association between SSB consumption and risk of overweight or obesity (n=5) (ie, when SSB consumption increased so did obesity) or mixed results (n=4). Studies with a neutral quality rating found a positive association (n=7), mixed results (n=9), or no association (n=7). The present review shows that the majority of studies with strong methodology indicated a positive association between SSB consumption and risk of overweight or obesity, especially among overweight children. In addition, study findings highlight the need for the careful and precise measurement of the consumption of SSBs and of important confounders. Copyright © 2016 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Photometric Type Ia supernova surveys in narrow-band filters
NASA Astrophysics Data System (ADS)
Xavier, Henrique S.; Abramo, L. Raul; Sako, Masao; Benítez, Narciso; Calvão, Maurício O.; Ederoclite, Alessandro; Marín-Franch, Antonio; Molino, Alberto; Reis, Ribamar R. R.; Siffert, Beatriz B.; Sodré, Laerte.
2014-11-01
We study the characteristics of a narrow-band Type Ia supernova (SN) survey through simulations based on the upcoming Javalambre Physics of the accelerating Universe Astrophysical Survey. This unique survey has the capabilities of obtaining distances, redshifts and the SN type from a single experiment thereby circumventing the challenges faced by the resource-intensive spectroscopic follow-up observations. We analyse the flux measurements signal-to-noise ratio and bias, the SN typing performance, the ability to recover light-curve parameters given by the SALT2 model, the photometric redshift precision from Type Ia SN light curves and the effects of systematic errors on the data. We show that such a survey is not only feasible but may yield large Type Ia SN samples (up to 250 SNe at z < 0.5 per month of search) with low core-collapse contamination (~1.5 per cent), good precision on the SALT2 parameters (average σ_mB = 0.063, σ_x1 = 0.47 and σ_c = 0.040) and on the distance modulus (average σ_μ = 0.16, assuming an intrinsic scatter σ_int = 0.14), with identified systematic uncertainties σ_sys ≲ 0.10 σ_stat. Moreover, the filters are narrow enough to detect most spectral features and obtain excellent photometric redshift precision of σ_z = 0.005, apart from ~2 per cent of outliers. We also present a few strategies for optimizing the survey's outcome. Together with the detailed host galaxy information, narrow-band surveys can be very valuable for the study of SN rates, spectral feature relations, intrinsic colour variations and correlations between SN and host galaxy properties, all of which are important information for SN cosmological applications.
Bullet trajectory reconstruction - Methods, accuracy and precision.
Mattijssen, Erwin J A T; Kerkhoff, Wim
2016-05-01
Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained when the probing method is applied. Only at the lowest angles of incidence was performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to express the precision of a specific measurement as a confidence interval. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
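The ellipse method mentioned above rests on the classical geometric relation between the shape of an elliptical bullet defect and the angle of incidence, θ = arcsin(minor axis / major axis). The sketch below illustrates only that basic relation; the function name and values are illustrative, and the study's method variants and empirical corrections are not reproduced.

```python
import math

def ellipse_method_angle(minor_axis_mm, major_axis_mm):
    """Estimate the angle of incidence (degrees, measured from the target
    surface) from the width (minor axis) and length (major axis) of an
    elliptical bullet defect: theta = arcsin(minor / major)."""
    if not 0 < minor_axis_mm <= major_axis_mm:
        raise ValueError("minor axis must be positive and <= major axis")
    return math.degrees(math.asin(minor_axis_mm / major_axis_mm))

# A defect twice as long as it is wide implies a 30-degree angle of incidence.
print(round(ellipse_method_angle(5.0, 10.0), 1))  # 30.0
```

In practice the paper's point is precisely that this idealized relation carries method- and material-dependent systematic error, which is why the probing method usually outperformed it.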
Estimating Rates of Motor Vehicle Crashes Using Medical Encounter Data: A Feasibility Study
2015-11-05
used to develop more detailed predictive risk models as well as strategies for preventing specific types of MVCs. Systematic Review of Evidence... used to estimate rates of accident-related injuries more generally, but not with specific reference to MVCs. For the present report, rates of... precise rate estimates based on person-years rather than active duty strength, (e) multivariable effects of specific risk/protective factors after
García-Cabezas, Miguel Á.; John, Yohan J.; Barbas, Helen; Zikopoulos, Basilis
2016-01-01
The estimation of the number or density of neurons and types of glial cells and their relative proportions in different brain areas are at the core of rigorous quantitative neuroanatomical studies. Unfortunately, the lack of detailed, updated, systematic and well-illustrated descriptions of the cytology of neurons and glial cell types, especially in the primate brain, makes such studies especially demanding, often limiting their scope and broad use. Here, following an extensive analysis of histological materials and the review of current and classical literature, we compile a list of precise morphological criteria that can facilitate and standardize identification of cells in stained sections examined under the microscope. We describe systematically and in detail the cytological features of neurons and glial cell types in the cerebral cortex of the macaque monkey and the human using semithin and thick sections stained for Nissl. We used this classical staining technique because it labels all cells in the brain in distinct ways. In addition, we corroborate key distinguishing characteristics of different cell types in sections immunolabeled for specific markers counterstained for Nissl and in ultrathin sections processed for electron microscopy. Finally, we summarize the core features that distinguish each cell type in easy-to-use tables and sketches, and structure these key features in an algorithm that can be used to systematically distinguish cellular types in the cerebral cortex. Moreover, we report high inter-observer algorithm reliability, which is a crucial test for obtaining consistent and reproducible cell counts in unbiased stereological studies. This protocol establishes a consistent framework that can be used to reliably identify and quantify cells in the cerebral cortex of primates as well as other mammalian species in health and disease. PMID:27847469
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
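The failure mode described above (slowly evolving quantities whose per-step increments are systematically rounded to zero in low precision) and the proposed fix (keeping a small high-precision part for the slow accumulator) can be illustrated with a generic numerical sketch. This is not the authors' soil heat diffusion model; the increment and step count are arbitrary values chosen to expose the effect with NumPy's float16.

```python
import numpy as np

def integrate(increment, steps, dtype):
    """Accumulate a small per-step increment entirely in one precision.
    In float16 the running total eventually stalls: once the total is large
    enough, adding the tiny increment rounds back to the same value."""
    total = dtype(0.0)
    for _ in range(steps):
        total = dtype(total + dtype(increment))
    return float(total)

def integrate_split(increment, steps):
    """Mixed precision: the per-step increment is stored in float16 (the
    'fast', cheap part) but the slow accumulator is kept in float64."""
    total = np.float64(0.0)
    for _ in range(steps):
        total += np.float64(np.float16(increment))
    return float(total)

inc, steps = 1e-5, 10000          # exact sum would be 0.1
print(integrate(inc, steps, np.float16))  # stalls near 0.031, far short of 0.1
print(integrate_split(inc, steps))        # ~0.1001, close to the exact sum
```

The split retains most of the memory and bandwidth benefit of reduced precision, since only the accumulator is widened, which is the strategy the paper applies to the deep soil heat diffusion scheme.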
Planar-Structure Perovskite Solar Cells with Efficiency beyond 21%.
Jiang, Qi; Chu, Zema; Wang, Pengyang; Yang, Xiaolei; Liu, Heng; Wang, Ye; Yin, Zhigang; Wu, Jinliang; Zhang, Xingwang; You, Jingbi
2017-12-01
Low-temperature solution-processed planar-structure perovskite solar cells have gained great attention recently, although their power conversion efficiencies are still lower than those of their high-temperature mesoporous counterparts. Previous reports mainly focused on perovskite morphology control and interface engineering to improve performance. Here, this study systematically investigates the effect of precise stoichiometry, especially the PbI2 content, on device performance, including efficiency, hysteresis, and stability. This study finds that a moderate residual of PbI2 can deliver stable, high-efficiency solar cells without hysteresis, while too much residual PbI2 leads to serious hysteresis and poor transient stability. Solar cells with efficiencies of 21.6% in small size (0.0737 cm2) and 20.1% in large size (1 cm2) with a moderate residual of PbI2 in the perovskite layer are obtained. The certified efficiency for the small size is 20.9%, the highest ever recorded for planar-structure perovskite solar cells, showing that planar-structure perovskite solar cells are very promising. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gewirtz-Meydan, Ateret; Hafford-Letchfield, Trish; Ayalon, Liat; Benyamini, Yael; Biermann, Violetta; Coffey, Alice; Jackson, Jeanne; Phelan, Amanda; Voß, Peggy; Geiger Zeman, Marija; Zeman, Zdenko
2018-06-04
This study captured older people's attitudes and concerns about sex and sexuality in later life by synthesising qualitative research published on this issue. The systematic review was conducted between November 2015 and June 2016 based on a pre-determined protocol. Key words were used to ensure a precise search strategy. Empirically based, qualitative literature from 18 databases was found. Twenty studies met the inclusion criteria. Thomas and Harden's thematic synthesis was used to generate 'analytical themes' which summarise this body of literature. Three main themes were identified: (a) social legitimacy for sexuality in later life; (b) health, not age, is what truly impacts sexuality, and (c) the hegemony of penetrative sex. The themes illustrate the complex and delicate relation between ageing and sexuality. Older adults facing health issues that affect sexual function adopt broader definitions of sexuality and sexual activity.
A precise measurement of the B_s^0 meson oscillation frequency.
Aaij, R; Abellán Beteta, C; Adeva, B; Adinolfi, M; Affolder, A; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Anderson, J; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Aquines Gutierrez, O; Archilli, F; d'Argent, P; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Batozskaya, V; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Bettler, M-O; van Beuzekom, M; Bien, A; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Bondar, A; Bondar, N; Bonivento, W; Borghi, S; Borsato, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Brook, N H; Buchanan, E; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Campana, P; Campora Perez, D; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chefdeville, M; Chen, S; Cheung, S-F; Chiapolini, N; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombes, M; Coquereau, S; Corti, G; Corvo, M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, 
M; Del Buono, L; Déléage, N; Demmer, M; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Di Ruscio, F; Dijkstra, H; Donleavy, S; Dordei, F; Dorigo, M; Dosil Suárez, A; Dossett, D; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dupertuis, F; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Easo, S; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; El Rifai, I; Elsasser, Ch; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Färber, C; Farley, N; Farry, S; Fay, R; Ferguson, D; Fernandez Albor, V; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fohl, K; Fol, P; Fontana, M; Fontanelli, F; C Forshaw, D; Forty, R; Frank, M; Frei, C; Frosini, M; Fu, J; Furfaro, E; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; García Pardiñas, J; Garra Tico, J; Garrido, L; Gascon, D; Gaspar, C; Gauld, R; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gligorov, V V; Göbel, C; Golubkov, D; Golutvin, A; Gomes, A; Gotti, C; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Greening, E; Gregson, S; Griffith, P; Grillo, L; Grünberg, O; Gui, B; Gushchin, E; Guz, Yu; Gys, T; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; He, J; Head, T; Heijne, V; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hoballah, M; Hombach, C; Hulsbergen, W; Humair, T; Hussain, N; Hutchcroft, D; Hynds, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jing, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Karbach, T M; Karodia, S; Kecke, M; 
Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Kochebina, O; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krocker, G; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; K Kuonen, A; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lambert, D; Lanfranchi, G; Langenbruch, C; Langhans, B; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Lefèvre, R; Leflat, A; Lefrançois, J; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Liles, M; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusardi, N; Lusiani, A; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Manca, G; Mancinelli, G; Manning, P; Mapelli, A; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Müller, D; Müller, J; Müller, K; Müller, V; Mussini, M; Muster, B; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; Nguyen, T D; Nguyen-Mau, C; Niess, V; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Okhrimenko, O; Oldeman, R; Onderwater, C J G; Osorio Rodrigues, B; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Palano, A; Palombo, F; Palutan, 
M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Pappenheimer, C; Parkes, C; Passaleva, G; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pilař, T; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Popov, A; Popov, D; Popovici, B; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Rangel, M S; Raniuk, I; Rauschmayr, N; Raven, G; Redi, F; Reichert, S; Reid, M M; Dos Reis, A C; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Roiser, S; Romanovsky, V; Romero Vidal, A; W Ronayne, J; Rotondo, M; Rouvinet, J; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sagidova, N; Sail, P; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Shires, A; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Soomro, F; Souza, D; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefkova, S; Steinkamp, O; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; 
Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Swientek, S; Syropoulos, V; Szczekowski, M; Szczypka, P; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, C; Thomas, E; van Tilburg, J; Tisserand, V; Tobin, M; Todd, J; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Torr, N; Tournefier, E; Tourneur, S; Trabelsi, K; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vázquez Sierra, C; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Vilasis-Cardona, X; Vollhardt, A; Volyanskyy, D; Voong, D; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wandernoth, S; Wang, J; Ward, D R; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wright, S; Wyllie, K; Xie, Y; Xu, Z; Yang, Z; Yu, J; Yuan, X; Yushchenko, O; Zangoli, M; Zavertyaev, M; Zhang, L; Zhang, Y; Zhelezov, A; Zhokhov, A; Zhong, L; Zhukov, V; Zucchelli, S
2016-01-01
The oscillation frequency, Δm_s, of B_s^0 mesons is measured using semileptonic decays with a [Formula: see text] or [Formula: see text] meson in the final state. The data sample corresponds to 3.0 fb^-1 of pp collisions, collected by the LHCb experiment at centre-of-mass energies √s = 7 and 8 TeV. A combination of the two decay modes gives [Formula: see text], where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
Peeling, Rosanna W.; Sollis, Kimberly A.; Glover, Sarah; Crowe, Suzanne M.; Landay, Alan L.; Cheng, Ben; Barnett, David; Denny, Thomas N.; Spira, Thomas J.; Stevens, Wendy S.; Crowley, Siobhan; Essajee, Shaffiq; Vitoria, Marco; Ford, Nathan
2015-01-01
Background Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. Methods and Findings Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which fewer than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification, resulting in over-treatment. Fewer than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. Conclusions A wide range of bias and percent misclassification around treatment thresholds were reported on the CD4 enumeration technologies included in this review, with few studies reporting assay precision.
The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed. PMID:25790185
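The accuracy outcomes this review extracts (bias, limits of agreement with a reference standard, and misclassification around a treatment threshold) follow the standard Bland-Altman approach and can be computed as in the minimal sketch below. The function names and toy CD4 counts are illustrative assumptions, not data from any included study.

```python
import numpy as np

def bland_altman(test, reference):
    """Bias and 95% limits of agreement between a test CD4 technology and a
    reference technology (cells/uL), after Bland & Altman: the mean and
    mean +/- 1.96 SD of the paired differences."""
    diff = np.asarray(test, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def misclassification(test, reference, threshold=350.0):
    """Fractions of patients classified on the wrong side of a treatment
    threshold: 'up' = test above but reference below (under-treatment),
    'down' = test below but reference above (over-treatment)."""
    test, reference = np.asarray(test, float), np.asarray(reference, float)
    up = float(np.mean((test >= threshold) & (reference < threshold)))
    down = float(np.mean((test < threshold) & (reference >= threshold)))
    return up, down

# Toy paired counts from a hypothetical test vs. reference technology
test_counts = [300, 400, 360, 340]
ref_counts = [320, 380, 340, 360]
bias, loa = bland_altman(test_counts, ref_counts)
up, down = misclassification(test_counts, ref_counts, threshold=350.0)
print(bias, loa, up, down)
```

The review's point that different studies used different reference standards means such numbers are only comparable when the reference technology is held fixed.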
Lattice study of finite volume effect in HVP for muon g-2
NASA Astrophysics Data System (ADS)
Izubuchi, Taku; Kuramashi, Yoshinobu; Lehner, Christoph; Shintani, Eigo
2018-03-01
We study the finite volume effect of the hadronic vacuum polarization contribution to muon g-2, aμhvp, in lattice QCD by comparing two different volumes, L^4 = (5.4)^4 and (8.1)^4 fm^4, at the physical pion mass. We perform the lattice computation of a highly precise vector-vector current correlator with the optimized AMA technique on Nf = 2 + 1 PACS gauge configurations, using a Wilson-clover fermion action and a stout-smeared gluon action at one lattice cut-off, a^-1 = 2.33 GeV. We compare two integral representations of aμhvp, the momentum integral and the time-slice summation, on the lattice, and show numerically that the two methods exhibit finite volume effects of different size. We also discuss the effect of backward-state propagation on the result for aμhvp under different boundary conditions. Our model-independent study suggests that lattice computation at the physical pion mass is important for a correct estimate of the finite volume effect and other lattice systematics in aμhvp.
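The two representations compared in the abstract can be sketched in their conventional forms (a hedged reconstruction from the standard time-momentum representation literature, not from this record; the weight functions f(Q²) and w̃(t) are known QED kernels whose explicit forms are omitted here):

```latex
% Momentum-space integral over the subtracted hadronic vacuum polarization
a_\mu^{\mathrm{hvp}} = \left(\frac{\alpha}{\pi}\right)^2
  \int_0^\infty dQ^2 \, f(Q^2) \, \hat{\Pi}(Q^2)

% Time-slice summation over the spatially summed vector-vector correlator
a_\mu^{\mathrm{hvp}} = \left(\frac{\alpha}{\pi}\right)^2
  \sum_{t} \tilde{w}(t) \, C(t),
\qquad
C(t) = \frac{1}{3}\sum_{i=1}^{3}\sum_{\vec{x}}
  \langle j_i(\vec{x}, t) \, j_i(0) \rangle
```

The two forms are equivalent in infinite volume, which is why comparing them on a finite lattice isolates finite volume (and backward-propagation) effects, as the study does.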
Travtek Global Evaluation And Executive Summary
DOT National Transportation Integrated Search
2000-09-01
Several measures have been carried out in the Long Term Pavement Performance (LTPP) Program to ensure uniform distress data collection and interpretation. However, no systematic evaluation has been done to quantify the variability (bias and precision...
Labrie, Nanon H M; Ludolph, Ramona; Schulz, Peter J
2017-03-21
The scientific and public debate concerning organized mammography screening is unprecedentedly strong. With research evidence concerning its efficacy being ambiguous, the recommendations pertaining to the age-thresholds for program inclusion vary between - and even within - countries. Data show that young women who are not yet eligible for systematic screening have opportunistic mammograms relatively often and, moreover, want to be included in organized programs. Yet, to date, little is known about the precise motivations underlying young women's desire and intentions to undergo mammographic screening that is not medically indicated. A cross-sectional online survey was carried out among women aged 30-49 years (n = 918) from Switzerland. The findings show that high fear (β = .08, p ≤ .05), perceived susceptibility (β = .10, p ≤ .05), and ego-involvement (β = .34, p ≤ .001) are the main predictors of screening intentions among women who are not yet eligible for the systematic program. Also, geographical location (Swiss-French group: β = .15, p ≤ .001; Swiss-Italian group: β = .26, p ≤ .001) and age (β = .11, p ≤ .001) play a role. In turn, breast cancer knowledge, risk perceptions, and educational status do not have a significant impact. Young women seem to differ inherently from those who are already eligible for systematic screening in terms of the factors underlying their intentions to engage in mammographic screening. Thus, when striving to promote adherence to systematic screening guidelines - whether based on unequivocal scientific evidence or policy decisions - and to allow women to make evidence-based, informed decisions about mammography, differential strategies are needed to reach different age-groups.
NASA Astrophysics Data System (ADS)
Boizelle, Benjamin
2018-01-01
ALMA is now capable of providing the most precise determinations of the masses of supermassive black holes in early-type galaxies (ETGs). In ALMA Cycle 2 we began a program to map the molecular gas kinematics in nearby ETGs that host central dust disks as seen in Hubble Space Telescope imaging. These initial observations targeted CO(2-1) emission at ~0.3" resolution, corresponding roughly to the projected radii of influence of the central black holes. In all cases we detect significant (~10⁸ M⊙) molecular gas reservoirs that are in dynamically cold rotation, providing the most sensitive probes of the inner gravitational potentials of luminous ETGs. Using these gas kinematics, we verify that these molecular disks are formally stable against gravitational fragmentation and collapse. In several galaxies we detect central high-velocity gas rotation that provides direct kinematic evidence for a black hole. For two of these targets, NGC 1332 and NGC 3258, we have obtained higher-resolution observations (0.044" and 0.09") in Cycles 3 and 4 that more fully map out the gas rotation within the gravitational sphere of influence. We present dynamical modeling results for these targets, demonstrating that ALMA observations can enable black hole mass measurements at a precision of 10% or better, with minimal susceptibility to the systematic uncertainties that affect other methods of black hole mass measurement in ETGs. We discuss the impact of future high-resolution ALMA observations on black hole demographics and their potential to refine the high-mass end of the black hole-host galaxy scaling relationships.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croll, Bryce; Albert, Loic; Lafreniere, David
We present detections of the near-infrared thermal emission of three hot Jupiters and one brown dwarf using the Wide-field Infrared Camera (WIRCam) on the Canada-France-Hawaii Telescope (CFHT). These include Ks-band secondary eclipse detections of the hot Jupiters WASP-3b and Qatar-1b and the brown dwarf KELT-1b. We also report Y-band, K_CONT-band, and two new and one reanalyzed Ks-band detections of the thermal emission of the hot Jupiter WASP-12b. We present a new reduction pipeline for CFHT/WIRCam data, which is optimized for high precision photometry. We also describe novel techniques for constraining systematic errors in ground-based near-infrared photometry, so as to return reliable secondary eclipse depths and uncertainties. We discuss the noise properties of our ground-based photometry for wavelengths spanning the near-infrared (the YJHK bands), for faint and bright stars, and for the same object on several occasions. For the hot Jupiters WASP-3b and WASP-12b we demonstrate the repeatability of our eclipse depth measurements in the Ks band; we therefore place stringent limits on the systematics of ground-based, near-infrared photometry, and also rule out violent weather changes in the deep, high pressure atmospheres of these two hot Jupiters at the epochs of our observations.
Investing in updating: how do conclusions change when Cochrane systematic reviews are updated?
French, Simon D; McDonald, Steve; McKenzie, Joanne E; Green, Sally E
2005-01-01
Background Cochrane systematic reviews aim to provide readers with the most up-to-date evidence on the effects of healthcare interventions. The policy of updating Cochrane reviews every two years consumes valuable time and resources and may not be appropriate for all reviews. The objective of this study was to examine the effect of updating Cochrane systematic reviews over a four year period. Methods This descriptive study examined all completed systematic reviews in the Cochrane Database of Systematic Reviews (CDSR) Issue 2, 1998. The latest version of each of these reviews was then identified in CDSR Issue 2, 2002 and changes in the review were described. For reviews that were updated within this time period and had additional studies, we determined whether their conclusion had changed and if there were factors that were predictive of this change. Results A total of 377 complete reviews were published in CDSR Issue 2, 1998. In Issue 2, 2002, 14 of these reviews were withdrawn and one was split, leaving 362 reviews to examine for the purpose of this study. Of these reviews, 254 (70%) were updated. Of these updated reviews, 23 (9%) had a change in conclusion. Both an increase in precision and a change in statistical significance of the primary outcome were predictive of a change in conclusion of the review. Conclusion The concerns around a lack of updating for some reviews may not be justified considering the small proportion of updated reviews that resulted in a changed conclusion. A priority-setting approach to the updating of Cochrane systematic reviews may be more appropriate than a time-based approach. Updating all reviews as frequently as every two years may not be necessary, however some reviews may need to be updated more often than every two years. PMID:16225692
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial and high-digital resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.
New CA-TIMS and LA-ICP-MS Analyses of GJ-1, Plesovice, and FC-1 Reference Materials
NASA Astrophysics Data System (ADS)
Feldman, J. D.; Möller, A.; Walker, J. D.
2014-12-01
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) U-Pb zircon geochronology relies on external reference standards to monitor and correct for different mass fractionation effects and instrument drift. Common zircon reference materials used within the community, including the KU Isotope Geochemistry Laboratory, are GJ-1 (207Pb/206Pb age: 608.53 +/- 0.37 Ma; Jackson et al., 2004), Plesovice (337.13 +/- 0.37 Ma; Slama et al., 2008), and FC-1 (1099.0 +/- 0.6 Ma; Paces and Miller, 1993). The age distribution of zircon reference material varies slightly from sample fraction to sample fraction, and the published results for GJ-1 are slightly discordant. As a result, using the published data for the distributed standard splits can lead to small systematic variations when comparing datasets from different labs, and more high precision data are needed to evaluate potential inhomogeneity of sample splits used in different laboratories. Here we characterize these reference materials with cathodoluminescence, LA-ICP-MS traverses across grains, and high precision CA-TIMS to better constrain the ages and assess zoning of these standards, and present the data for comparison with other laboratories. Reducing systematic error by dating our own reference material lends confidence to our analyses and allows for inter-laboratory age reproducibility of unknowns. Additionally, the reduction in propagated uncertainties (especially in GJ-1, for which both the red and yellow variety will be analyzed) will be used to improve long-term reproducibility, comparisons between samples of similar age, detrital populations and composite pluton zircons. Jackson, S.E., et al., 2004, Chemical Geology, v. 211, p. 47-69. Paces, J.B. & Miller, J.D., 1993, Journal of Geophysical Research, v. 98, p. 13997-14013. Slama, J., et al., 2008, Chemical Geology, v. 249, p. 1-35.
NASA Astrophysics Data System (ADS)
Capozzi, F.; Lisi, E.; Marrone, A.
2015-11-01
Nuclear reactors provide intense sources of electron antineutrinos, characterized by few-MeV energy E and unoscillated spectral shape Φ(E). High-statistics observations of reactor neutrino oscillations over medium-baseline distances L ~ O(50) km would provide unprecedented opportunities to probe both the long-wavelength mass-mixing parameters (δm² and θ12) and the short-wavelength ones (Δm²ee and θ13), together with the subtle interference effects associated with the neutrino mass hierarchy (either normal or inverted). In a given experimental setting—here taken as in the JUNO project for definiteness—the achievable hierarchy sensitivity and parameter accuracy depend not only on the accumulated statistics but also on systematic uncertainties, which include (but are not limited to) the mass-mixing priors and the normalizations of signals and backgrounds. We examine, in addition, the effect of introducing smooth deformations of the detector energy scale, E → E'(E), and of the reactor flux shape, Φ(E) → Φ'(E), within reasonable error bands inspired by state-of-the-art estimates. It turns out that energy-scale and flux-shape systematics can noticeably affect the performance of a JUNO-like experiment, both on the hierarchy discrimination and on precision oscillation physics. It is shown that a significant reduction of the assumed energy-scale and flux-shape uncertainties (by, say, a factor of 2) would be highly beneficial to the physics program of medium-baseline reactor projects. Our results also shed some light on the role of the inverse-beta decay threshold, of geoneutrino backgrounds, and of matter effects in the analysis of future reactor oscillation data.
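The oscillation physics summarized above rests on the standard three-flavor vacuum survival probability for reactor electron antineutrinos. A minimal sketch follows; the mass-mixing parameter values are illustrative defaults, not JUNO's fitted inputs, and matter effects and detector response are ignored.

```python
import numpy as np

def pee_vacuum(E_MeV, L_km, dm21=7.5e-5, dm31=2.5e-3,
               s12sq=0.31, s13sq=0.022):
    """Three-flavor vacuum electron-antineutrino survival probability.

    E_MeV : neutrino energy in MeV; L_km : baseline in km.
    dm21, dm31 : mass-squared splittings in eV^2 (illustrative values).
    s12sq, s13sq : sin^2(theta_12) and sin^2(theta_13).
    """
    c13sq = 1.0 - s13sq
    s2_2th12 = 4.0 * s12sq * (1.0 - s12sq)   # sin^2(2*theta_12)
    s2_2th13 = 4.0 * s13sq * c13sq           # sin^2(2*theta_13)
    dm32 = dm31 - dm21
    # Oscillation phase: 1.267 * dm^2[eV^2] * L[km] / E[GeV]
    E_GeV = E_MeV * 1e-3
    d21 = np.sin(1.267 * dm21 * L_km / E_GeV) ** 2
    d31 = np.sin(1.267 * dm31 * L_km / E_GeV) ** 2
    d32 = np.sin(1.267 * dm32 * L_km / E_GeV) ** 2
    return (1.0
            - c13sq**2 * s2_2th12 * d21
            - s2_2th13 * ((1.0 - s12sq) * d31 + s12sq * d32))
```

The slow δm²-driven term and the fast Δm²-driven wiggles in this expression are exactly the "long-wavelength" and "short-wavelength" structures the abstract refers to; the hierarchy information sits in the tiny interference between the d31 and d32 phases.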
The Foundation Supernova Survey: motivation, design, implementation, and first data release
NASA Astrophysics Data System (ADS)
Foley, Ryan J.; Scolnic, Daniel; Rest, Armin; Jha, S. W.; Pan, Y.-C.; Riess, A. G.; Challis, P.; Chambers, K. C.; Coulter, D. A.; Dettman, K. G.; Foley, M. M.; Fox, O. D.; Huber, M. E.; Jones, D. O.; Kilpatrick, C. D.; Kirshner, R. P.; Schultz, A. S. B.; Siebert, M. R.; Flewelling, H. A.; Gibson, B.; Magnier, E. A.; Miller, J. A.; Primak, N.; Smartt, S. J.; Smith, K. W.; Wainscoat, R. J.; Waters, C.; Willman, M.
2018-03-01
The Foundation Supernova Survey aims to provide a large, high-fidelity, homogeneous, and precisely calibrated low-redshift Type Ia supernova (SN Ia) sample for cosmology. The calibration of the current low-redshift SN sample is the largest component of systematic uncertainties for SN cosmology, and new data are necessary to make progress. We present the motivation, survey design, observation strategy, implementation, and first results for the Foundation Supernova Survey. We are using the Pan-STARRS telescope to obtain photometry for up to 800 SNe Ia at z ≲ 0.1. This strategy has several unique advantages: (1) the Pan-STARRS system is a superbly calibrated telescopic system, (2) Pan-STARRS has observed 3/4 of the sky in grizyP1 making future template observations unnecessary, (3) we have a well-tested data-reduction pipeline, and (4) we have observed ˜3000 high-redshift SNe Ia on this system. Here, we present our initial sample of 225 SN Ia grizP1 light curves, of which 180 pass all criteria for inclusion in a cosmological sample. The Foundation Supernova Survey already contains more cosmologically useful SNe Ia than all other published low-redshift SN Ia samples combined. We expect that the systematic uncertainties for the Foundation Supernova Sample will be two to three times smaller than other low-redshift samples. We find that our cosmologically useful sample has an intrinsic scatter of 0.111 mag, smaller than other low-redshift samples. We perform detailed simulations showing that simply replacing the current low-redshift SN Ia sample with an equally sized Foundation sample will improve the precision on the dark energy equation-of-state parameter by 35 per cent, and the dark energy figure of merit by 72 per cent.
Planet Detectability in the Alpha Centauri System
NASA Astrophysics Data System (ADS)
Zhao, Lily; Fischer, Debra A.; Brewer, John; Giguere, Matt; Rojas-Ayala, Bárbara
2018-01-01
We use more than a decade of radial-velocity measurements for α Cen A, B, and Proxima Centauri from the High Accuracy Radial Velocity Planet Searcher, CTIO High Resolution Spectrograph, and the Ultraviolet and Visual Echelle Spectrograph to identify the M sin i and orbital periods of planets that could have been detected if they existed. At each point in a mass–period grid, we sample a simulated Keplerian signal with the precision and cadence of existing data and assess the probability that the signal could have been produced by noise alone. Existing data place detection thresholds in the classically defined habitable zones at about M sin i of 53 M⊕ for α Cen A, 8.4 M⊕ for α Cen B, and 0.47 M⊕ for Proxima Centauri. Additionally, we examine the impact of systematic errors, or "red noise," in the data. A comparison of white- and red-noise simulations highlights quasi-periodic variability in the radial velocities that may be caused by systematic errors, photospheric velocity signals, or planetary signals. For example, the red-noise simulations show a peak above white-noise simulations at the period of Proxima Centauri b. We also carry out a spectroscopic analysis of the chemical composition of the α Centauri stars. The stars have super-solar metallicity with ratios of C/O and Mg/Si that are similar to the Sun, suggesting that any small planets in the α Cen system may be compositionally similar to our terrestrial planets. Although the small projected separation of α Cen A and B currently hampers extreme-precision radial-velocity measurements, the angular separation is now increasing. By 2019, α Cen A and B will be ideal targets for renewed Doppler planet surveys.
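The grid-based injection test described above can be illustrated with a simplified sketch: inject a circular-orbit (sinusoidal) radial-velocity signal at the real timestamps with the real per-point uncertainties, then ask how often a detection statistic exceeds its noise-only distribution. Everything below is a hypothetical simplification; the published analysis uses a more careful false-alarm treatment than this scatter-based statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_probability(t, sigma, K, period, n_trials=200, n_noise=200):
    """Fraction of injected signals distinguishable from pure noise.

    t      : observation times in days
    sigma  : per-point RV uncertainties in m/s
    K      : injected semi-amplitude in m/s (circular orbit assumed)
    period : injected orbital period in days
    """
    def stat(y):
        # Crude detection statistic: overall scatter of the series
        return np.std(y)

    # Null distribution from noise-only realizations at the same
    # cadence and precision as the real data
    null = np.array([stat(rng.normal(0.0, sigma)) for _ in range(n_noise)])
    threshold = np.quantile(null, 0.99)  # ~1% false-alarm rate

    detected = 0
    for _ in range(n_trials):
        phase = rng.uniform(0.0, 2.0 * np.pi)
        signal = K * np.sin(2.0 * np.pi * t / period + phase)
        y = signal + rng.normal(0.0, sigma)
        if stat(y) > threshold:
            detected += 1
    return detected / n_trials
```

Scanning K (converted to M sin i for a given stellar mass) and period over a grid, and marking where the detection probability crosses some completeness level, yields detection-threshold contours of the kind quoted in the abstract.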
Image Subtraction Reduction of Open Clusters M35 & NGC 2158 in the K2 Campaign 0 Super Stamps
NASA Astrophysics Data System (ADS)
Soares-Furtado, M.; Hartman, J. D.; Bakos, G. Á.; Huang, C. X.; Penev, K.; Bhatti, W.
2017-04-01
We observed the open clusters M35 and NGC 2158 during the initial K2 campaign (C0). Reducing these data to high-precision photometric timeseries is challenging due to the wide point-spread function (PSF) and the blending of stellar light in such dense regions. We developed an image-subtraction-based K2 reduction pipeline that is applicable to both crowded and sparse stellar fields. We applied our pipeline to the data-rich C0 K2 super stamp, containing the two open clusters, as well as to the neighboring postage stamps. In this paper, we present our image subtraction reduction pipeline and demonstrate that this technique achieves ultra-high photometric precision for sources in the C0 super stamp. We extract the raw light curves of 3960 stars taken from the UCAC4 and EPIC catalogs and de-trend them for systematic effects. We compare our photometric results with the prior reductions published in the literature. For de-trended TFA-corrected sources in the 12-12.25 Kp magnitude range, we achieve a best 6.5-hour window running rms of 35 ppm, falling to 100 ppm for fainter stars in the 14-14.25 Kp magnitude range. For stars with Kp > 14, our de-trended and 6.5-hour binned light curves achieve the highest photometric precision. Moreover, all our TFA-corrected sources have higher precision on all timescales investigated. This work represents the first published image subtraction analysis of a K2 super stamp. This method will be particularly useful for analyzing the Galactic bulge observations carried out during K2 campaign 9. The raw light curves and the final results of our de-trending processes are publicly available at http://k2.hatsurveys.org/archive/.
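The 6.5-hour window running rms figure of merit quoted above can be sketched as follows, assuming K2 long-cadence sampling of about 29.4 minutes (so the window spans roughly 13 cadences). This is an illustrative metric only, not the pipeline's exact implementation.

```python
import numpy as np

def running_rms(flux, cadence_min=29.4, window_hr=6.5):
    """Running rms of a normalized light curve in a sliding window.

    flux : relative flux, median-normalized to 1. Returns the rms in
    parts per million (ppm) for each contiguous window; the window
    length is converted from hours to a whole number of cadences
    (~13 long-cadence points for 6.5 hr).
    """
    n = max(2, int(round(window_hr * 60.0 / cadence_min)))
    resid = flux / np.median(flux) - 1.0
    n_win = len(resid) - n + 1
    rms = np.array([np.sqrt(np.mean(resid[i:i + n] ** 2))
                    for i in range(n_win)])
    return rms * 1e6  # ppm
```

For pure white noise of fractional amplitude 1e-4, this statistic hovers near 100 ppm, matching the scale of the precision figures reported for the fainter stars.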
Revolution of Alzheimer Precision Neurology: Passageway of Systems Biology and Neurophysiology
Hampel, Harald; Toschi, Nicola; Babiloni, Claudio; Baldacci, Filippo; Black, Keith L.; Bokde, Arun L.W.; Bun, René S.; Cacciola, Francesco; Cavedo, Enrica; Chiesa, Patrizia A.; Colliot, Olivier; Coman, Cristina-Maria; Dubois, Bruno; Duggento, Andrea; Durrleman, Stanley; Ferretti, Maria-Teresa; George, Nathalie; Genthon, Remy; Habert, Marie-Odile; Herholz, Karl; Koronyo, Yosef; Koronyo-Hamaoui, Maya; Lamari, Foudil; Langevin, Todd; Lehéricy, Stéphane; Lorenceau, Jean; Neri, Christian; Nisticò, Robert; Nyasse-Messene, Francis; Ritchie, Craig; Rossi, Simone; Santarnecchi, Emiliano; Sporns, Olaf; Verdooner, Steven R.; Vergallo, Andrea; Villain, Nicolas; Younesi, Erfan; Garaci, Francesco; Lista, Simone
2018-01-01
The Precision Neurology development process implements systems theory with system biology and neurophysiology in a parallel, bidirectional research path: a combined hypothesis-driven investigation of systems dysfunction within distinct molecular, cellular and large-scale neural network systems in both animal models as well as through tests for the usefulness of these candidate dynamic systems biomarkers in different diseases and subgroups at different stages of pathophysiological progression. This translational research path is paralleled by an “omics”-based, hypothesis-free, exploratory research pathway, which will collect multimodal data from progressing asymptomatic, preclinical and clinical neurodegenerative disease (ND) populations, within the wide continuous biological and clinical spectrum of ND, applying high-throughput and high-content technologies combined with powerful computational and statistical modeling tools, aimed at identifying novel dysfunctional systems and predictive marker signatures associated with ND. The goals are to identify common biological denominators or differentiating classifiers across the continuum of ND during detectable stages of pathophysiological progression, characterize systems-based intermediate endophenotypes, validate multi-modal novel diagnostic systems biomarkers, and advance clinical intervention trial designs by utilizing systems-based intermediate endophenotypes and candidate surrogate markers. Achieving these goals is key to the ultimate development of early and effective individualized treatment of ND, such as Alzheimer’s disease (AD). 
The Alzheimer Precision Medicine Initiative (APMI) and cohort program (APMI-CP), as well as the Paris based core of the Sorbonne University Clinical Research Group “Alzheimer Precision Medicine” (GRC-APM) were recently launched to facilitate the passageway from conventional clinical diagnostic and drug development towards breakthrough innovation based on the investigation of the comprehensive biological nature of aging individuals. The APMI movement is gaining momentum to systematically apply both systems neurophysiology and systems biology in exploratory translational neuroscience research on ND. PMID:29562524
Madanat, Rami; Mäkinen, Tatu J; Aro, Hannu T; Bragdon, Charles; Malchau, Henrik
2014-09-01
Guidelines for standardization of radiostereometry (RSA) of implants were published in 2005 to facilitate comparison of outcomes between various research groups. In this systematic review, we determined how well studies have adhered to these guidelines. We carried out a literature search to identify all articles published between January 2000 and December 2011 that used RSA in the evaluation of hip or knee prosthesis migration. 2 investigators independently evaluated each of the studies for adherence to the 13 individual guideline items. Since some of the 13 points included more than 1 criterion, studies were assessed on whether each point was fully met, partially met, or not met. 153 studies that met our inclusion criteria were identified. 61 of these were published before the guidelines were introduced (2000-2005) and 92 after the guidelines were introduced (2006-2011). The methodological quality of RSA studies clearly improved from 2000 to 2011. None of the studies fully met all 13 guidelines. Nearly half (43) of the studies published after the guidelines demonstrated a high methodological quality and adhered at least partially to 10 of the 13 guidelines, whereas less than one-fifth (11) of the studies published before the guidelines had the same methodological quality. Commonly unaddressed guideline items were related to imaging methodology, determination of precision from double examinations, and also mean error of rigid-body fitting and condition number cutoff levels. The guidelines have improved methodological reporting in RSA studies, but adherence to these guidelines is still relatively low. There is a need to update and clarify the guidelines for clinical hip and knee arthroplasty RSA studies.
LYSO based precision timing calorimeters
NASA Astrophysics Data System (ADS)
Bornheim, A.; Apresyan, A.; Ronzhin, A.; Xie, S.; Duarte, J.; Spiropulu, M.; Trevor, J.; Anderson, D.; Pena, C.; Hassanshahi, M. H.
2017-11-01
In this report we outline the study of the development of calorimeter detectors using bright scintillating crystals. We discuss how timing information with a precision of a few tens of picoseconds and below can significantly improve the reconstruction of the physics events under challenging high pileup conditions to be faced at the High-Luminosity LHC or a future hadron collider. The particular challenge in measuring the time of arrival of a high energy photon lies in the stochastic component of the distance of initial conversion and the size of the electromagnetic shower. We present studies and measurements from test beams for calorimeter based timing measurements to explore the ultimate timing precision achievable for high energy photons of 10 GeV and above. We focus on techniques to measure the timing with a high precision in association with the energy of the photon. We present test-beam studies and results on the timing performance and characterization of the time resolution of LYSO-based calorimeters. We demonstrate that a time resolution of 30 ps is achievable for a particular design.
A precision medicine approach for psychiatric disease based on repeated symptom scores.
Fojo, Anthony T; Musliner, Katherine L; Zandi, Peter P; Zeger, Scott L
2017-12-01
For psychiatric diseases, rich information exists in the serial measurement of mental health symptom scores. We present a precision medicine framework for using the trajectories of multiple symptoms to make personalized predictions about future symptoms and related psychiatric events. Our approach fits a Bayesian hierarchical model that estimates a population-average trajectory for all symptoms and individual deviations from the average trajectory, then fits a second model that uses individual symptom trajectories to estimate the risk of experiencing an event. The fitted models are used to make clinically relevant predictions for new individuals. We demonstrate this approach on data from a study of antipsychotic therapy for schizophrenia, predicting future scores for positive, negative, and general symptoms, and the risk of treatment failure in 522 schizophrenic patients with observations over 8 weeks. While precision medicine has focused largely on genetic and molecular data, the complementary approach we present illustrates that innovative analytic methods for existing data can extend its reach more broadly. The systematic use of repeated measurements of psychiatric symptoms offers the promise of precision medicine in the field of mental health. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
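The core idea, a population-average trajectory plus shrunken individual deviations, can be sketched with a toy empirical-Bayes random-intercept model. The linear trajectory and the variance values below are illustrative assumptions, not the paper's actual Bayesian specification.

```python
import numpy as np

def predict_future_score(times, scores, pop_intercept, pop_slope,
                         tau2=4.0, sigma2=9.0, t_future=8.0):
    """Empirical-Bayes sketch of a hierarchical trajectory prediction.

    The population-average trajectory is pop_intercept + pop_slope * t.
    An individual's deviation (a random intercept here) is shrunk
    toward 0 in proportion to how much data they contribute: with n
    observations, shrinkage = tau2 / (tau2 + sigma2 / n), where tau2
    is the between-patient variance and sigma2 the within-patient
    noise variance (both assumed known for illustration).
    """
    times = np.asarray(times, dtype=float)
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    resid = scores - (pop_intercept + pop_slope * times)
    shrink = tau2 / (tau2 + sigma2 / n)
    indiv_offset = shrink * resid.mean()
    return pop_intercept + pop_slope * t_future + indiv_offset
```

A new patient with few observations is predicted close to the population average; as their symptom series accumulates, the prediction moves toward their own trajectory. In the full framework, these individual deviations then feed a second model for event risk.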
Lost Muon Study for the Muon g-2 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganguly, S.; Crnkovic, J.; Morse, W. M.
The Fermilab Muon g-2 Experiment has a goal of measuring the muon anomalous magnetic moment to a precision of 140 ppb - a fourfold improvement over the 540 ppb precision obtained by the BNL Muon g-2 Experiment. Some muons in the storage ring will interact with material and undergo bremsstrahlung, emitting radiation and losing energy. These so-called lost muons will curl in towards the center of the ring and be lost, but some of them will be detected by the calorimeters. A systematic error will arise if the lost muons have a different average spin phase than the stored muons. Algorithms are being developed to estimate the relative number of lost muons, so as to optimize the stored muon beam. This study presents initial testing of algorithms that can be used to estimate the lost muons by using either double or triple detection coincidences in the calorimeters.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742
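The variance advantage of stratified sampling reported above can be illustrated with a toy comparison of estimators over hypothetical block-level deforestation values; the "hotspot" stratum stands in for the MODIS-derived deforestation classes used to build strata.

```python
import numpy as np

rng = np.random.default_rng(1)

def srs_estimate(values, n):
    """Simple-random-sample estimate of the population total:
    scale the sample mean by the number of blocks."""
    sample = rng.choice(values, size=n, replace=False)
    return len(values) * sample.mean()

def stratified_estimate(values, strata, n_per_stratum):
    """Stratified estimate: sample within each stratum separately and
    sum the per-stratum totals. Strata would come from, e.g.,
    MODIS-derived deforestation hotspot classes."""
    total = 0.0
    for s in np.unique(strata):
        vals = values[strata == s]
        n = min(n_per_stratum, len(vals))
        sample = rng.choice(vals, size=n, replace=False)
        total += len(vals) * sample.mean()
    return total
```

When strata are internally homogeneous (hotspot blocks with heavy deforestation versus near-zero blocks elsewhere), fixing the sample count per stratum removes the dominant source of sampling variance, which is why the stratified design outperforms simple random and systematic sampling in the study.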
Interactions Between Genetics, Lifestyle, and Environmental Factors for Healthcare.
Lin, Yuxin; Chen, Jiajia; Shen, Bairong
2017-01-01
The occurrence and progression of diseases are strongly associated with a combination of genetic, lifestyle, and environmental factors. Understanding the interplay between genetic and nongenetic components provides deep insights into disease pathogenesis and promotes personalized healthcare strategies. Recently, the paradigm of systems medicine, which integrates biomedical data and knowledge at multidimensional levels, is considered to be an optimal way for disease management and clinical decision-making in the era of precision medicine. In this chapter, epigenetic-mediated genetics-lifestyle-environment interactions within specific diseases and different ethnic groups are systematically discussed, and data sources, computational models, and translational platforms for systems medicine research are sequentially presented. Moreover, feasible suggestions on precision healthcare and healthy longevity are proposed based on a comprehensive review of current studies.
Area estimation of crops by digital analysis of Landsat data
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Hixson, M. M.; Davis, B. J.
1978-01-01
The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identification data obtained from the Landsat identifications; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels.
The influence of clerkship on students' stigma towards mental illness: a meta-analysis.
Petkari, Eleni; Masedo Gutiérrez, Ana I; Xavier, Miguel; Moreno Küstner, Berta
2018-07-01
In university programmes preparing students to work with patients with mental illness, clerkship is proposed as a component that may contribute to the battle against stigma by bringing students into contact with patients' reality. Yet the precise contribution of clerkship remains unclear, perhaps because of the variety of university programmes, clerkship characteristics or types of stigma explored. This is the first systematic meta-analysis of available evidence determining the precise effect size of the influence of clerkship on stigma and its potential moderators. We carried out a systematic literature review in ERIC, PsycINFO, PubMed, Scopus, UMI and ProQuest dissertations, aiming to identify all studies published from 2000 to 2017 that explored health care students' stigma of mental illness (measured as overall stigma or as attitudes, affect and behavioural intentions) before and after a clerkship. Twenty-two studies were included in the meta-analysis, providing data from 22 independent samples. The total sample consisted of 3161 students. The effects of programme (medicine, nursing, occupational therapy, and their combination), study design (paired-unpaired samples), publication year, sex, age and clerkship context, and inclusion of theoretical training and duration, were examined as potential moderators. Our analyses yielded a highly significant medium effect size for overall stigma (Hedges' g = 0.35; p < 0.001; 95% confidence interval [CI], 0.20, 0.42), attitudes (Hedges' g = 0.308; p = 0.003; 95% CI, 0.10, 0.51) and behavioural intentions (Hedges' g = 0.247; p < 0.001; 95% CI, 0.17, 0.33), indicating a considerable change, whereas there was no significant change in the students' affect. Moderator analyses provided evidence for the distinct nature of each stigma outcome, as the outcomes were influenced by different clerkship and student characteristics such as clerkship context, theoretical training, age and sex.
The robust effect of clerkship on students' stigma of mental illness established by the present meta-analysis highlights its role as a crucial curriculum component for experiential learning and as a necessary agent for the battle against stigma. © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
An exacting transition probability measurement - a direct test of atomic many-body theories.
Dutta, Tarun; De Munshi, Debashis; Yum, Dahyun; Rebhi, Riadh; Mukherjee, Manas
2016-07-19
A new protocol for measuring the branching fraction of hydrogenic atoms with only statistically limited uncertainty is proposed and demonstrated for the decay of the P3/2 level of the barium ion, with a precision below 0.5%. Heavy hydrogenic atoms like the barium ion are test beds for fundamental physics such as atomic parity violation, and they also hold the key to understanding nucleosynthesis in stars. To draw definitive conclusions about possible physics beyond the standard model by measuring atomic parity violation in the barium ion, it is necessary to measure the dipole transition probabilities of low-lying excited states with a precision better than 1%. Furthermore, enhancing our understanding of the barium puzzle in barium stars requires branching fraction data for proper modelling of nucleosynthesis. Our measurements are the first to provide a direct test of quantum many-body calculations on the barium ion with a precision below one percent and, more importantly, with no known systematic uncertainties. The unique measurement protocol proposed here can easily be extended to any decay with more than two channels and hence paves the way for measuring the branching fractions of other hydrogenic atoms with no significant systematic uncertainties.
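The "statistically limited" uncertainty emphasized above reduces to simple binomial counting statistics on the observed decays. A minimal sketch, with invented counts rather than the paper's data:

```python
import math

# Illustrative arithmetic only: a branching fraction estimated as the ratio
# of decays observed in one channel to all observed decays, with its binomial
# (purely statistical) standard error. The counts below are invented.

def branching_fraction(n_channel, n_total):
    """Return the branching fraction and its binomial standard error."""
    b = n_channel / n_total
    stat = math.sqrt(b * (1.0 - b) / n_total)  # binomial counting uncertainty
    return b, stat

b, db = branching_fraction(73_000, 100_000)
print(f"B = {b:.4f} +/- {db:.4f}")
```

With 10^5 observed decays the relative statistical precision is already a few parts per thousand, which is why the protocol's claim of no known systematic uncertainties matters: statistics alone do not limit such a measurement for long.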
Activation measurement of the 3He(alpha,gamma)7Be cross section at low energy.
Bemmerer, D; Confortola, F; Costantini, H; Formicola, A; Gyürky, Gy; Bonetti, R; Broggini, C; Corvisiero, P; Elekes, Z; Fülöp, Zs; Gervino, G; Guglielmetti, A; Gustavino, C; Imbriani, G; Junker, M; Laubenstein, M; Lemut, A; Limata, B; Lozza, V; Marta, M; Menegazzo, R; Prati, P; Roca, V; Rolfs, C; Alvarez, C Rossi; Somorjai, E; Straniero, O; Strieder, F; Terrasi, F; Trautvetter, H P
2006-09-22
The nuclear physics input from the 3He(alpha,gamma)7Be cross section is a major uncertainty in the fluxes of 7Be and 8B neutrinos from the Sun predicted by solar models and in the 7Li abundance obtained in big-bang nucleosynthesis calculations. The present work reports on a new precision experiment using the activation technique at energies directly relevant to big-bang nucleosynthesis. Previously such low energies had been reached experimentally only by the prompt-gamma technique and with inferior precision. Using a windowless gas target, high beam intensity, and low background gamma-counting facilities, the 3He(alpha,gamma)7Be cross section has been determined at 127, 148, and 169 keV center-of-mass energy with a total uncertainty of 4%. The sources of systematic uncertainty are discussed in detail. The present data can be used in big-bang nucleosynthesis calculations and to constrain the extrapolation of the 3He(alpha,gamma)7Be astrophysical S factor to solar energies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukhovoj, A. M., E-mail: suchovoj@nf.jinr.ru; Mitsyna, L. V., E-mail: mitsyna@nf.jinr.ru; Jovancevic, N., E-mail: nikola.jovancevic@uns.ac.rs
The intensities of two-step cascades in 43 nuclei with mass numbers in the range 28 ≤ A ≤ 200 were approximated to a high degree of precision within a modified version of the practical cascade gamma-decay model introduced earlier. In this version, the rate of the decrease in the model-dependent density of vibrational levels has the same value for any Cooper pair undergoing breakdown. The most probable values of radiative strength functions both for E1 and for M1 transitions are determined by using one or two peaks against a smooth model dependence on the gamma-transition energy. The statement that the thresholds for the breaking of Cooper pairs are higher for spherical than for deformed nuclei is a basic result of the respective analysis. The parameters of the cascade-decay process are now determined to a precision that makes it possible to observe the systematic distinctions between them for nuclei characterized by different parities of neutrons and protons.
A Bitter Pill: The Cosmic Lithium Problem
NASA Astrophysics Data System (ADS)
Fields, Brian
2014-03-01
Primordial nucleosynthesis describes the production of the lightest nuclides in the first three minutes of cosmic time. We will discuss the transformative influence of the WMAP and Planck determinations of the cosmic baryon density. Coupled with nucleosynthesis theory, these measurements make tight predictions for the primordial light element abundances: deuterium observations agree spectacularly with these predictions, helium observations are in good agreement, but lithium observations (in ancient halo stars) are significantly discrepant-this is the ``lithium problem.'' Over the past decade, the lithium discrepancy has become more severe, and very recently the solution space has shrunk. A solution due to new nuclear resonances has now been essentially ruled out experimentally. Stellar evolution solutions remain viable but must be finely tuned. Observational systematics are now being probed by qualitatively new methods of lithium observation. Finally, new physics solutions are now strongly constrained by the combination of the precision baryon determination by Planck, and the need to match the D/H abundances now measured to unprecedented precision at high redshift. Supported in part by NSF grant PHY-1214082.
A precision search for WIMPs with charged cosmic rays
NASA Astrophysics Data System (ADS)
Reinert, Annika; Winkler, Martin Wolfgang
2018-01-01
AMS-02 has reached the sensitivity to probe canonical thermal WIMPs by their annihilation into antiprotons. Due to the high precision of the data, uncertainties in the astrophysical background have become the most limiting factor for indirect dark matter detection. In this work we systematically quantify and—where possible—reduce uncertainties in the antiproton background. We constrain the propagation of charged cosmic rays through the combination of antiproton, B/C and positron data. Cross section uncertainties are determined from a wide collection of accelerator data and are—for the first time ever—fully taken into account. This allows us to robustly constrain even subdominant dark matter signals through their spectral properties. For a standard NFW dark matter profile we are able to exclude thermal WIMPs with masses up to 570 GeV which annihilate into bottom quarks. While we confirm a reported excess compatible with dark matter of mass around 80 GeV, its local (global) significance only reaches 2.2 σ (1.1 σ) in our analysis.
Probing the top-quark width using the charge identification of b jets
Giardino, Pier Paolo; Zhang, Cen
2017-07-18
We propose a new method for measuring the top-quark width based on the on-/off-shell ratio of b-charge asymmetry in pp → Wbj production at the LHC. The charge asymmetry removes virtually all backgrounds and related uncertainties, while the remaining systematic and theoretical uncertainties can be kept under control by taking the ratio of cross sections. Limited only by statistical error, in an optimistic scenario we find that our approach leads to good precision at high integrated luminosity, at a few hundred MeV assuming 300-3000 fb-1 at the LHC. The approach directly probes the total width, in such a way that model-dependence can be minimized. It is complementary to existing cross section measurements, which always leave a degeneracy between the total rate and the branching ratio, and provides valuable information about the properties of the top quark. The proposal opens up new opportunities for precision top measurements using a b-charge identification algorithm.
NASA Astrophysics Data System (ADS)
Smith, P.; Scheelbeek, P.; Bird, F.; Green, R.; Dangour, A.
2017-12-01
Background - Environmental changes—including climatic change, water scarcity, and biodiversity loss—threaten agricultural production and pose challenges to global food security. In this study, we review the evidence of the effects of environmental change on the yield and quality of fruits and vegetables - a food group that plays a highly important role in our diets - and assess possible implications for nutrition and health outcomes. Methods - We undertook two systematic reviews of the published literature on the effect of 8 different environmental stressors on yields and nutritional quality of (1) fruits and (2) vegetables, measured in greenhouse and field studies. We combined the review outcomes with Food Balance Sheet data to assess the potential consequences of changed availability and quality of fruits and vegetables for global nutrient deficiencies and related chronic diseases. Findings - Overall, fruits were affected more prominently by changing environmental patterns than vegetables. In tropical countries, there were largely adverse effects on yield of increased temperature and changing precipitation patterns, although in more temperate zones some beneficial effects were reported. In contrast, the effects on nutritional quality were mostly positive, especially in fruit crops, with higher vitamin and mineral content measured in most crops. Increased atmospheric CO2 concentrations had a predominantly positive effect on yield, especially in legumes, but a negative effect on nutritional quality of both fruits and vegetables. Adverse nutritional implications were estimated to be largest in areas characterised by high vulnerability to environmental change, high dependency on local markets, and high rates of food insecurity. Interpretation - Our study identified effects of environmental change on yields and quality of fruits and vegetables that might pose threats to population health, especially in areas vulnerable to climate-change and food insecurity. 
To obtain more precise estimates of the burden of disease attributable to these effects, further research is needed on farmers' and consumers' adaptation/substitution strategies, which would allow development of future food security scenarios.
Light curves of 213 Type Ia supernovae from the Essence survey
Narayan, G.; Rest, A.; Tucker, B. E.; ...
2016-05-06
The ESSENCE survey discovered 213 Type Ia supernovae at redshifts 0.1 < z < 0.81 between 2002 and 2008. We present their R- and I-band photometry, measured from images obtained using the MOSAIC II camera at the CTIO Blanco, along with rapid-response spectroscopy for each object. We use our spectroscopic follow-up observations to determine an accurate, quantitative classification and precise redshift. Through an extensive calibration program we have improved the precision of the CTIO Blanco natural photometric system. We use several empirical metrics to measure our internal photometric consistency and our absolute calibration of the survey. We assess the effect of various potential sources of systematic bias on our measured fluxes, and estimate that the dominant term in the systematic error budget from the photometric calibration on our absolute fluxes is ~1%.
Water line positions in the 782-840 nm region
NASA Astrophysics Data System (ADS)
Hu, S.-M.; Chen, B.; Tan, Y.; Wang, J.; Cheng, C.-F.; Liu, A.-W.
2015-10-01
A set of water transitions in the 782-840 nm region, including 38 H216O lines, 12 HD16O lines, and 30 D216O lines, were recorded with a cavity ring-down spectrometer calibrated using precise atomic lines. Absolute frequencies of the lines were determined with an accuracy of about 5 MHz. Systematic shifts were found in the line positions given in the HITRAN database and the upper energy levels given in recent MARVEL studies.
Validation of search filters for identifying pediatric studies in PubMed.
Leclercq, Edith; Leeflang, Mariska M G; van Dalen, Elvira C; Kremer, Leontien C M
2013-03-01
To identify and validate PubMed search filters for retrieving studies including children and to develop a new pediatric search filter for PubMed. We developed 2 different datasets of studies to evaluate the performance of the identified pediatric search filters, expressed in terms of sensitivity, precision, specificity, accuracy, and number needed to read (NNR). An optimal search filter will have a high sensitivity and high precision with a low NNR. In addition to the PubMed Limits: All Child: 0-18 years filter (in May 2012 renamed to PubMed Filter Child: 0-18 years), 6 search filters for identifying studies including children were identified: 3 developed by Kastner et al., 1 developed by BestBets, 1 by the Child Health Field, and 1 by the Cochrane Childhood Cancer Group. Three search filters (Cochrane Childhood Cancer Group, Child Health Field, and BestBets) had the highest sensitivity (99.3%, 99.5%, and 99.3%, respectively) but a lower precision (64.5%, 68.4%, and 66.6%, respectively) compared with the other search filters. Two Kastner search filters had a high precision (93.0% and 93.7%, respectively) but a low sensitivity (58.5% and 44.8%, respectively); they failed to identify many pediatric studies in our datasets. The search terms responsible for false-positive results in the reference dataset were determined. With these data, we developed a new search filter for identifying studies including children in PubMed with an optimal sensitivity (99.5%) and precision (69.0%). Search filters to identify studies including children either have a low sensitivity or a low precision with a high NNR. A new pediatric search filter with a high sensitivity and a low NNR has been developed. Copyright © 2013 Mosby, Inc. All rights reserved.
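The performance measures named in this abstract are all simple functions of the retrieval confusion matrix. A minimal sketch, with illustrative counts (not the study's data):

```python
# Sketch: standard diagnostic metrics used to compare search filters,
# computed from counts of true/false positives and negatives. NNR (number
# needed to read) is the reciprocal of precision: how many retrieved
# records must be screened per relevant record found.

def filter_metrics(tp, fp, fn, tn):
    """Return sensitivity, precision, specificity, accuracy, and NNR."""
    sensitivity = tp / (tp + fn)              # share of relevant records retrieved
    precision = tp / (tp + fp)                # share of retrieved records that are relevant
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    nnr = 1.0 / precision
    return sensitivity, precision, specificity, accuracy, nnr

sens, prec, spec, acc, nnr = filter_metrics(tp=199, fp=90, fn=1, tn=710)
print(f"sensitivity={sens:.1%} precision={prec:.1%} NNR={nnr:.2f}")
```

The trade-off the abstract describes falls out directly: broadening a filter raises tp (sensitivity) but usually raises fp faster, driving precision down and NNR up.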
Redundant Array Configurations for 21 cm Cosmology
NASA Astrophysics Data System (ADS)
Dillon, Joshua S.; Parsons, Aaron R.
2016-08-01
Realizing the potential of 21 cm tomography to statistically probe the intergalactic medium before and during the Epoch of Reionization requires large telescopes and precise control of systematics. Next-generation telescopes are now being designed and built to meet these challenges, drawing lessons from first-generation experiments that showed the benefits of densely packed, highly redundant arrays—in which the same mode on the sky is sampled by many antenna pairs—for achieving high sensitivity, precise calibration, and robust foreground mitigation. In this work, we focus on the Hydrogen Epoch of Reionization Array (HERA) as an interferometer with a dense, redundant core designed following these lessons to be optimized for 21 cm cosmology. We show how modestly supplementing or modifying a compact design like HERA’s can still deliver high sensitivity while enhancing strategies for calibration and foreground mitigation. In particular, we compare the imaging capability of several array configurations, both instantaneously (to address instrumental and ionospheric effects) and with rotation synthesis (for foreground removal). We also examine the effects that configuration has on calibratability using instantaneous redundancy. We find that improved imaging with sub-aperture sampling via “off-grid” antennas and increased angular resolution via far-flung “outrigger” antennas is possible with a redundantly calibratable array configuration.
Ultra-low frequency vertical vibration isolator based on LaCoste spring linkage.
Li, G; Hu, H; Wu, K; Wang, G; Wang, L J
2014-10-01
For the applications in precision measurement such as absolute gravimeter, we have designed and built an ultra-low frequency vertical vibration isolator based on LaCoste spring linkage. In the system, an arm with test mass is suspended by a mechanical extension spring, and one end of the arm is connected to the frame with flexible pivots. The displacement of the arm is detected by an optical reflection method. With the displacement signal, a feedback control force is exerted on the arm to keep it at the balance position. This method can also correct the systematic drift caused by temperature change. In order to study the vibration isolation performance of the system, we analyze the dynamic characteristics of the spring linkage in the general case, and present key methods to adjust the natural oscillating period of the system. With careful adjustment, the system can achieve a steady oscillation with a natural period up to 32 s. This isolator has been tested based on the T-1 absolute gravimeter. A statistical uncertainty of 2 μGal has been achieved within a typical 12 h measurement. The experimental results verify that the isolator has significant vibration isolation performance, and it is very suitable for applications in high precision absolute gravity measurement.
Use of Terrestrial Laser Scanning Technology for Long Term High Precision Deformation Monitoring
Vezočnik, Rok; Ambrožič, Tomaž; Sterle, Oskar; Bilban, Gregor; Pfeifer, Norbert; Stopar, Bojan
2009-01-01
The paper presents a new methodology for high precision monitoring of deformations with a long term perspective using terrestrial laser scanning technology. In order to solve the problem of a stable reference system and to assure the high quality of possible position changes of point clouds, scanning is integrated with two complementary surveying techniques, i.e., high quality static GNSS positioning and precise tacheometry. The case study object where the proposed methodology was tested is a high pressure underground pipeline situated in an area which is geologically unstable. PMID:22303152
Zhu, Zhonglin; Li, Guoan
2013-01-01
Fluoroscopic imaging, using either a single image or dual images, has been widely applied to measure in vivo human knee joint kinematics. However, few studies have compared the advantages of using single and dual fluoroscopic images. Furthermore, due to the size limitation of the image intensifiers, it is possible that only a portion of the knee joint is captured by the fluoroscopy during dynamic knee joint motion. In this paper, we present a systematic evaluation of an automatic 2D-3D image matching method for reproducing spatial knee joint positions using either single or dual fluoroscopic image techniques. The data indicated that the spatial positions of the femur and tibia could be determined with an accuracy and precision better than 0.2 mm in translation and 0.4° in orientation when dual fluoroscopic images were used. Using single fluoroscopic images, the method could produce satisfactory accuracy in joint positions in the imaging plane (on average up to 0.5 mm in translation and 1.3° in rotation), but large variations along the out-of-plane direction (on average up to 4.0 mm in translation and 2.2° in rotation). The precision of using single fluoroscopic images to determine the actual knee positions was worse than the corresponding accuracy. The data also indicated that, when using the dual fluoroscopic image technique, the algorithm could still reproduce the joint positions with high precision even if the knee joint outlines in one image were 80% incomplete. PMID:21806411
Matoza, Robin S.; Shearer, Peter M.; Okubo, Paul G.
2016-01-01
Long-period (0.5–5 Hz, LP) seismicity has been recorded for decades in the summit region of Kı̄lauea Volcano, Hawai‘i, and is postulated as linked with the magma transport and shallow hydrothermal systems. To better characterize its spatiotemporal occurrence, we perform a systematic analysis of 49,030 seismic events occurring in the Kı̄lauea summit region from January 1986 to March 2009 recorded by the ∼50-station Hawaiian Volcano Observatory permanent network. We estimate 215,437 P wave spectra, considering all events on all stations, and use a station-averaged spectral metric to consistently classify LP and non-LP seismicity. We compute high-precision relative relocations for 5327 LP events (43% of all classified LP events) using waveform cross correlation and cluster analysis with 6.4 million event pairs, combined with the source-specific station term method. The majority of intermediate-depth (5–15 km) LPs collapse to a compact volume, with remarkable source location stability over 23 years indicating a source process controlled by geological or conduit structure.
Precision determination of absolute neutron flux
Yue, A. T.; Anderson, E. S.; Dewey, M. S.; ...
2018-06-08
A technique for establishing the total neutron rate of a highly collimated monochromatic cold neutron beam was demonstrated using an alpha-gamma counter. The method involves only the counting of measured rates and is independent of neutron cross sections, decay chain branching ratios, and neutron beam energy. For the measurement, a target of 10B-enriched boron carbide totally absorbed the neutrons in a monochromatic beam, and the rate of absorbed neutrons was determined by counting 478 keV gamma rays from neutron capture on 10B with calibrated high-purity germanium detectors. A second measurement based on Bragg diffraction from a perfect silicon crystal was performed to determine the mean de Broglie wavelength of the beam to a precision of 0.024%. With these measurements, the detection efficiency of a neutron monitor based on neutron absorption on 6Li was determined to an overall uncertainty of 0.058%. We discuss the principle of the alpha-gamma method and present details of how the measurement was performed, including the systematic effects. We further describe how this method may be used for applications in neutron dosimetry and metrology, fundamental neutron physics, and neutron cross section measurements.
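Because the method "involves only the counting of measured rates", the monitor efficiency is a ratio of two counted rates, and for pure counting statistics the relative uncertainties combine in quadrature. A minimal sketch with invented rates (not the experiment's numbers):

```python
import math

# Sketch of the counting arithmetic behind a rate-ratio efficiency:
# efficiency = monitor rate / absolute neutron rate, with the standard
# propagated uncertainty for a ratio of independent measured quantities.
# All values below are illustrative.

def ratio_with_uncertainty(a, da, b, db):
    """Return a/b and its propagated standard uncertainty."""
    r = a / b
    dr = r * math.hypot(da / a, db / b)  # relative errors add in quadrature
    return r, dr

eff, deff = ratio_with_uncertainty(9_000.0, 4.0, 10_000.0, 3.0)
print(f"efficiency = {eff:.4f} +/- {deff:.5f}")
```

The same propagation shows why each input rate had to be pinned down so well: an overall efficiency uncertainty of 0.058% requires every contributing rate to be known to a comparable or better relative precision.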
Animal research as a basis for clinical trials.
Faggion, Clovis M
2015-04-01
Animal experiments are critical for the development of new human therapeutics because they provide mechanistic information, as well as important information on efficacy and safety. Some evidence suggests that authors of animal research in dentistry do not observe important methodological issues when planning animal experiments, for example sample-size calculation. Low-quality animal research directly interferes with development of the research process in which multiple levels of research are interconnected. For example, high-quality animal experiments generate sound information for the further planning and development of randomized controlled trials in humans. These randomized controlled trials are the main source for the development of systematic reviews and meta-analyses, which will generate the best evidence for the development of clinical guidelines. Therefore, adequate planning of animal research is a sine qua non condition for increasing efficacy and efficiency in research. Ethical concerns arise when animal research is not performed with high standards. This Focus article presents the latest information on the standards of animal research in dentistry, more precisely in the field of implant dentistry. Issues on precision and risk of bias are discussed, and strategies to reduce risk of bias in animal research are reported. © 2015 Eur J Oral Sci.
Uncertainty budgets for liquid waveguide CDOM absorption measurements.
Lefering, Ina; Röttgers, Rüdiger; Utschig, Christian; McKee, David
2017-08-01
Long path length liquid waveguide capillary cell (LWCC) systems using simple spectrometers to determine the spectral absorption by colored dissolved organic matter (CDOM) have previously been shown to have better measurement sensitivity compared to high-end spectrophotometers using 10 cm cuvettes. Information on the magnitude of measurement uncertainties for LWCC systems, however, has remained scarce. Cross-comparison of three different LWCC systems with three different path lengths (50, 100, and 250 cm) and two different cladding materials enabled quantification of measurement precision and accuracy, revealing strong wavelength dependency in both parameters. Stable pumping of the sample through the capillary cell was found to improve measurement precision over measurements made with the sample kept stationary. Results from the 50 and 100 cm LWCC systems, with higher refractive index cladding, showed systematic artifacts including small but unphysical negative offsets and high-frequency spectral perturbations due to limited performance of the salinity correction. In comparison, the newer 250 cm LWCC with lower refractive index cladding returned small positive offsets that may be physically correct. After null correction of measurements at 700 nm, overall agreement of CDOM absorption data at 440 nm was found to be within 5% root mean square percentage error.
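The two post-processing steps mentioned above, a null correction forcing CDOM absorption to zero at 700 nm and a root mean square percentage error between spectra, are straightforward to state in code. A minimal sketch with invented spectra; the function names are mine, not the paper's:

```python
import numpy as np

# Sketch: null-correct a CDOM absorption spectrum at a reference wavelength,
# then compare two instruments' corrected spectra via root mean square
# percentage error (RMSPE). Wavelengths and absorption values are invented.

def null_correct(wavelengths, absorption, null_wl=700.0):
    """Subtract the absorption at the null wavelength from the whole spectrum."""
    offset = absorption[np.argmin(np.abs(wavelengths - null_wl))]
    return absorption - offset

def rmspe(reference, measured):
    """Root mean square percentage error relative to the reference spectrum."""
    mask = reference != 0  # skip the null wavelength, where reference is zero
    pct = 100.0 * (measured[mask] - reference[mask]) / reference[mask]
    return float(np.sqrt(np.mean(pct ** 2)))

wl = np.array([440.0, 550.0, 700.0])
a_ref = null_correct(wl, np.array([0.50, 0.20, 0.02]))   # reference instrument
a_lwcc = null_correct(wl, np.array([0.53, 0.21, 0.03]))  # LWCC measurement
print(f"RMSPE = {rmspe(a_ref, a_lwcc):.2f}%")
```

Note that the null correction removes any constant offset (such as the small unphysical offsets the study reports) but cannot repair wavelength-dependent artifacts like the salinity-correction perturbations.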
NASA Astrophysics Data System (ADS)
Kuschenerus, Mieke; Cullen, Robert
2016-08-01
To ensure the reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel-6, reliable parameter retrieval algorithms that can extract significant wave heights of up to 20 m have to be established, and the retrieval methods need to be validated extensively on a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave heights up to 20 m that allow simulation of radar altimeter response echoes for extreme sea states in SAR and low resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing precision estimates for the applied parameter retrieval methods. We thus establish a validation method for significant wave height retrieval in sea states with high significant wave heights, allowing improved understanding and planning of the validation of future satellite altimetry missions.
Precision determination of absolute neutron flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, A. T.; Anderson, E. S.; Dewey, M. S.
A technique for establishing the total neutron rate of a highly-collimated monochromatic cold neutron beam was demonstrated using an alpha–gamma counter. The method involves only the counting of measured rates and is independent of neutron cross sections, decay chain branching ratios, and neutron beam energy. For the measurement, a target of 10B-enriched boron carbide totally absorbed the neutrons in a monochromatic beam, and the rate of absorbed neutrons was determined by counting 478 keV gamma rays from neutron capture on 10B with calibrated high-purity germanium detectors. A second measurement based on Bragg diffraction from a perfect silicon crystal was performed to determine the mean de Broglie wavelength of the beam to a precision of 0.024%. With these measurements, the detection efficiency of a neutron monitor based on neutron absorption on 6Li was determined to an overall uncertainty of 0.058%. We discuss the principle of the alpha–gamma method and present details of how the measurement was performed including the systematic effects. We further describe how this method may be used for applications in neutron dosimetry and metrology, fundamental neutron physics, and neutron cross section measurements.
Hypothesis testing for band size detection of high-dimensional banded precision matrices.
An, Baiguo; Guo, Jianhua; Liu, Yufeng
2014-06-01
Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
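The Cholesky-based route to a banded precision matrix mentioned above regresses each variable on its k immediate predecessors, so the band size k is the key tuning parameter. A minimal sketch on synthetic AR(1)-style data (illustrative only, not the paper's estimator or test):

```python
import numpy as np

def fit_banded_precision(X, k):
    """Cholesky-based estimator: regress each coordinate on its k predecessors.
    Returns a precision-matrix estimate with band size k."""
    n, p = X.shape
    T = np.eye(p)          # unit lower-triangular Cholesky factor
    d = np.empty(p)        # innovation variances
    d[0] = X[:, 0].var()
    for j in range(1, p):
        lo = max(0, j - k)
        A = X[:, lo:j]
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        T[j, lo:j] = -coef
        d[j] = (X[:, j] - A @ coef).var()
    return T.T @ np.diag(1.0 / d) @ T   # Omega = T' D^{-1} T, banded by construction

# AR(1)-type data whose true precision matrix is tridiagonal (band size 1)
rng = np.random.default_rng(0)
p, n = 10, 500
X = np.empty((n, p))
X[:, 0] = rng.normal(size=n)
for j in range(1, p):
    X[:, j] = 0.6 * X[:, j - 1] + rng.normal(size=n)

Omega = fit_banded_precision(X, k=1)
print(np.abs(Omega[0, 5]))   # entries far off the band are zero by construction
```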
Xerostomia, Hyposalivation, and Salivary Flow in Diabetes Patients
López-Pintor, Rosa María; Casañas, Elisabeth; González-Serrano, José; Serrano, Julia; Ramírez, Lucía; de Arriba, Lorenzo; Hernández, Gonzalo
2016-01-01
The presence of xerostomia and hyposalivation is frequent among diabetes mellitus (DM) patients. It is not clear if the presence of xerostomia and hyposalivation is greater in DM than non-DM patients. The aims of this systematic review are (1) to compare the prevalence rates of xerostomia, (2) to evaluate the salivary flow rate, and (3) to compare the prevalence rates of hyposalivation in DM versus non-DM populations. This systematic review was conducted according to the PRISMA group guidelines by performing systematic literature searches in biomedical databases from 1970 until January 18th, 2016. All studies showed a higher prevalence of xerostomia in DM patients relative to the non-DM population, 12.5%–53.5% versus 0–30%. Studies that analyzed the quantity of saliva in the DM population relative to non-DM patients reported higher flow rates in non-DM than in DM patients. The variation in flow rate among the different studies in each group (DM/control group) is very large. Only one existing study showed a higher hyposalivation prevalence in DM than non-DM patients (45% versus 2.5%). In addition, quality assessment showed the low quality of the existing studies. We recommend new studies that use more precise and current definitions concerning the determination and diagnosis of DM patients and salivary flow collection. PMID:27478847
Gerdel, Katharina; Spielmann, Felix Maximilian; Hammerle, Albin; Wohlfahrt, Georg
2017-01-01
The trace gas carbonyl sulphide (COS) has lately received growing interest in the eddy covariance (EC) community due to its potential to serve as an independent approach for constraining gross primary production and canopy stomatal conductance. Thanks to recent developments of fast-response high-precision trace gas analysers (e.g. quantum cascade laser absorption spectrometers (QCLAS)), a handful of EC COS flux measurements have been published since 2013. To date, however, a thorough methodological characterisation of QCLAS with regard to the requirements of the EC technique and the necessary processing steps has not been conducted. The objective of this study is to present a detailed characterisation of the COS measurement with the Aerodyne QCLAS in the context of the EC technique, and to recommend best EC processing practices for those measurements. Data were collected from May to October 2015 at a temperate mountain grassland in Tyrol, Austria. Analysis of the Allan variance of high-frequency concentration measurements revealed sensor drift to occur under field conditions after an averaging time of around 50 s. We thus explored the use of two high-pass filtering approaches (linear detrending and recursive filtering) as opposed to block averaging and linear interpolation of regular background measurements for covariance computation. Experimental low-pass filtering correction factors were derived from a detailed cospectral analysis. The CO2 and H2O flux measurements obtained with the QCLAS were compared against those obtained with a closed-path infrared gas analyser. Overall, our results suggest small, but systematic differences between the various high-pass filtering scenarios with regard to the fraction of data retained in the quality control and flux magnitudes. 
When COS and CO2 fluxes are combined in the so-called ecosystem relative uptake rate, systematic differences between the high-pass filtering scenarios largely cancel out, suggesting that this relative metric represents a robust key parameter comparable between studies relying on different post-processing schemes. PMID:29093762
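The Allan-variance diagnostic used above to detect sensor drift can be sketched in a few lines. The sample rate, noise level, and drift rate below are illustrative placeholders, not values from the study:

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of series x for an averaging window of m samples."""
    n = len(x) // m
    means = x[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(1)
fs = 10.0                                      # assumed sample rate, Hz
t = np.arange(60_000) / fs                     # 100 min of synthetic analyser signal
signal = rng.normal(size=t.size) + 5e-3 * t    # white noise plus slow sensor drift

# White noise averages down with window length; drift makes the variance
# turn back up at long averaging times, marking the onset of sensor drift.
for m in (10, 100, 1000):                      # averaging times of 1 s, 10 s, 100 s
    print(m / fs, allan_variance(signal, m))
```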
NASA Astrophysics Data System (ADS)
DeLong, S. B.; Avdievitch, N. N.
2014-12-01
As high-resolution topographic data become increasingly available, comparison of multitemporal and disparate datasets (e.g. airborne and terrestrial lidar) enable high-accuracy quantification of landscape change and detailed mapping of surface processes. However, if these data are not properly managed and aligned with maximum precision, results may be spurious. Often this is due to slight differences in coordinate systems that require complex geographic transformations and systematic error that is difficult to diagnose and correct. Here we present an analysis of four airborne and three terrestrial lidar datasets collected between 2003 and 2014 that we use to quantify change at an active earthflow in Mill Gulch, Sonoma County, California. We first identify and address systematic error internal to each dataset, such as registration offset between flight lines or scan positions. We then use a variant of an iterative closest point (ICP) algorithm to align point cloud data by maximizing use of stable portions of the landscape with minimal internal error. Using products derived from the aligned point clouds, we make our geomorphic analyses. These methods may be especially useful for change detection analyses in which accurate georeferencing is unavailable, as is often the case with some terrestrial lidar or "structure from motion" data. Our results show that the Mill Gulch earthflow has been active throughout the study period. We see continuous downslope flow, ongoing incorporation of new hillslope material into the flow, sediment loss from hillslopes, episodic fluvial erosion of the earthflow toe, and an indication of increased activity during periods of high precipitation.
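The inner step of an ICP-style alignment is a least-squares rigid transform between matched point sets (the Kabsch/Procrustes solution). A self-contained sketch on synthetic "stable terrain" points, not survey data:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (the closed-form step inside each ICP iteration)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(3)
P = rng.normal(size=(100, 3))           # synthetic stable-ground points
theta = 0.05                            # small rotation between surveys
Rtrue = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
Q = P @ Rtrue.T + np.array([0.2, -0.1, 0.05])   # rotated and shifted re-survey

R, t = best_rigid_transform(P, Q)
residual = np.abs(P @ R.T + t - Q).max()
print(residual)                         # near machine precision for exact matches
```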
A precise measurement of the $B^0$ meson oscillation frequency
Aaij, R.; Abellán Beteta, C.; Adeva, B.; ...
2016-07-21
The oscillation frequency, Δm_d, of B^0 mesons is measured using semileptonic decays with a D^- or D*^- meson in the final state. The data sample corresponds to 3.0 fb^-1 of pp collisions, collected by the LHCb experiment at centre-of-mass energies √s = 7 and 8 TeV. A combination of the two decay modes gives Δm_d = (505.0 ± 2.1 ± 1.0) ns^-1, where the first uncertainty is statistical and the second is systematic. This is the most precise single measurement of this parameter. It is consistent with the current world average and has similar precision.
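A quick check of the quoted precision, assuming the statistical and systematic uncertainties are independent and combine in quadrature:

```python
import math

dm_d = 505.0                       # central value, ns^-1
stat, syst = 2.1, 1.0              # statistical and systematic uncertainties, ns^-1
total = math.hypot(stat, syst)     # quadrature sum (independence assumed)
rel_percent = 100.0 * total / dm_d
print(total, rel_percent)          # total ~2.3 ns^-1, i.e. sub-0.5% precision
```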
Superallowed Fermi β decay studies at TRIUMF-ISAC
NASA Astrophysics Data System (ADS)
Svensson, C. E.; Dunlop, R.; Finlay, P.; Ball, G. C.; Ettenauer, S.; Leslie, J. R.; Towner, I. S.; Andreoiu, C.; Austin, R. A. E.; Bandyopadhyay, D.; Chagnon-Lessard, S.; Chester, A.; Cross, D. S.; Demand, G.; Djongolov, M.; Garnsworthy, A. B.; Garrett, P. E.; Green, K. L.; Glister, J.; Grinyer, G. F.; Hackman, G.; Hadinia, B.; Leach, K. G.; Pearson, C. J.; Phillips, A. A.; Rand, E. T.; Starosta, K.; Sumithrarachchi, C. S.; Tardiff, E. R.; Triambak, S.; Williams, S. J.; Wong, J.; Yates, S. W.; Zganjar, E. F.
2013-10-01
A program of high-precision superallowed Fermi β decay studies is being carried out at the Isotope Separator and Accelerator (ISAC) radioactive ion beam facility at TRIUMF. Recent high-precision branching ratio measurements for the superallowed decays of 74Rb and 26Alm, as well as a half-life measurement for 26Alm that is the most precise half-life measurement for any superallowed emitter to date, are reported. These results provide demanding tests of the theoretical isospin symmetry breaking corrections in superallowed Fermi β decays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.; DePoy, D. L.; Marshall, J. L.
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. 
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
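The chromatic-error mechanism described above can be reproduced with toy synthetic photometry: a small shift of the effective passband moves the magnitudes of blue and red sources by different amounts. The passbands and spectra below are illustrative stand-ins, not DES throughputs:

```python
import numpy as np

lam = np.linspace(400.0, 500.0, 1001)   # wavelength grid, nm
dlam = lam[1] - lam[0]

def synth_mag(flux, band):
    """Synthetic broadband magnitude from a flux spectrum and a passband."""
    return -2.5 * np.log10(np.sum(flux * band) * dlam)

# Toy Gaussian passbands: the "natural" system vs a slightly shifted throughput
band_natural = np.exp(-0.5 * ((lam - 450.0) / 20.0) ** 2)
band_shifted = np.exp(-0.5 * ((lam - 452.0) / 20.0) ** 2)

blue_src = (lam / 450.0) ** -2.0        # power-law spectra of different colours
red_src = (lam / 450.0) ** 2.0

dm_blue = synth_mag(blue_src, band_shifted) - synth_mag(blue_src, band_natural)
dm_red = synth_mag(red_src, band_shifted) - synth_mag(red_src, band_natural)
print(dm_blue - dm_red)   # colour-dependent residual: the chromatic error
```

A uniform zeropoint shift cancels in the blue-minus-red difference; what remains is exactly the source-color-dependent error that per-frame zeropoint calibration cannot remove.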
Prognostic significance of serum lactate dehydrogenase levels in Ewing's sarcoma: A meta-analysis.
Li, Suoyuan; Yang, Qing; Wang, Hongsheng; Wang, Zhuoying; Zuo, Dongqing; Cai, Zhengdong; Hua, Yingqi
2016-12-01
A number of studies have investigated the role of serum lactate dehydrogenase (LDH) levels in patients with Ewing's sarcoma, although these have yielded inconsistent and inconclusive results. Therefore, the present study aimed to systematically review the published studies and conduct a meta-analysis to assess its prognostic value more precisely. Cohort studies assessing the prognostic role of LDH levels in patients with Ewing's sarcoma were included. A pooled hazard ratio (HR) with 95% confidence intervals (CIs) of overall survival (OS) or 5-year disease-free survival (DFS) was used to assess the prognostic role of the levels of serum LDH. Nine studies published between 1980 and 2014, with a total of 1,412 patients with Ewing's sarcoma, were included. Six studies, with a total of 644 patients, used OS as the primary endpoint and four studies, with 795 patients, used 5-year DFS. Overall, the pooled HR evaluating high LDH levels was 2.90 (95% CI: 2.09-4.04) for OS and 2.40 (95% CI: 1.93-2.98) for 5-year DFS. This meta-analysis demonstrates that high levels of serum LDH are associated with lower OS and 5-year DFS rates in patients with Ewing's sarcoma. Therefore, serum LDH levels are an effective biomarker of Ewing's sarcoma prognosis.
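An inverse-variance random-effects pooling of hazard ratios of the kind reported above can be sketched as follows (DerSimonian-Laird heterogeneity estimate; the per-study values are hypothetical, not the nine included studies):

```python
import numpy as np

def pool_hazard_ratios(hr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of hazard ratios.
    Standard errors are recovered from the 95% CIs on the log scale."""
    y = np.log(hr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2.0 * 1.96)
    w = 1.0 / se**2
    y_fixed = np.sum(w * y) / np.sum(w)                 # fixed-effect mean
    Q = np.sum(w * (y - y_fixed) ** 2)                  # heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wr = 1.0 / (se**2 + tau2)                           # random-effects weights
    y_re = np.sum(wr * y) / np.sum(wr)
    se_re = np.sqrt(1.0 / np.sum(wr))
    return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

# Hypothetical per-study hazard ratios with 95% CIs
hr, lo, hi = pool_hazard_ratios([2.5, 3.4, 2.1], [1.4, 1.8, 1.1], [4.5, 6.4, 4.0])
print(hr, lo, hi)
```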
Black hole spectroscopy: Systematic errors and ringdown energy estimates
NASA Astrophysics Data System (ADS)
Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav
2018-02-01
The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental ℓ = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
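The ringdown model above is a superposition of exponentially damped sinusoids. A toy example, with placeholder frequencies and damping times rather than general-relativity values, shows the size of the error incurred by keeping only the fundamental mode:

```python
import numpy as np

t = np.linspace(0.0, 50.0, 2000)   # time in arbitrary units

def qnm(A, f, tau, phi):
    """One quasinormal mode: an exponentially damped sinusoid."""
    return A * np.exp(-t / tau) * np.cos(2.0 * np.pi * f * t + phi)

fundamental = qnm(1.0, 0.30, 12.0, 0.0)   # longest-lived mode
overtone = qnm(0.4, 0.28, 4.0, 1.0)       # faster-damped first overtone
full = fundamental + overtone

# Fractional error from modelling the ringdown with the fundamental alone
err = np.linalg.norm(full - fundamental) / np.linalg.norm(full)
print(err)   # a fraction-level misfit, far above a 1% target precision
```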
Musci, Marilena; Yao, Shicong
2017-12-01
Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision, and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea and aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time and allowed us to analyze a large number of samples, to discriminate among the different teas, and to assess the effect of the post-fermentation process on polyphenol content.
NASA Astrophysics Data System (ADS)
Li, Shanshan; Freymueller, Jeffrey T.
2018-04-01
We resurveyed preexisting campaign Global Positioning System (GPS) sites and estimated a highly precise GPS velocity field for the Alaska Peninsula. We use the TDEFNODE software to model the slip deficit distribution using the new GPS velocities. We find systematic misfits to the vertical velocities from the optimal model that fits the horizontal velocities well, which cannot be explained by altering the slip distribution, so we use only the horizontal velocities in the study. Locations of three boundaries that mark significant along-strike change in the locking distribution are identified. The Kodiak segment is strongly locked, the Semidi segment is intermediate, the Shumagin segment is weakly locked, and the Sanak segment is dominantly creeping. We suggest that a change in preexisting plate fabric orientation on the downgoing plate has an important control on the along-strike variation in the megathrust locking distribution and subduction seismicity.
Storage ring two-color free-electron laser
Yan, J.; Hao, H.; Li, J. Y.; ...
2016-07-05
We report a systematic experimental study of a storage ring two-color free-electron laser (FEL) operating simultaneously in the infrared (IR) and ultraviolet (UV) wavelength regions. The two-color FEL lasing has been realized using a pair of dual-band high-reflectivity FEL mirrors with two different undulator configurations. We have demonstrated independent wavelength tuning in a wide range for each lasing color, as well as harmonically locked wavelength tuning when the UV lasing occurs at the second harmonic of the IR lasing. Precise power control of two-color lasing with good power stability has also been achieved. In addition, the impact of the degradation of FEL mirrors on the two-color FEL operation is reported. Moreover, we have investigated the temporal structures of the two-color FEL beams, showing simultaneous two-color micropulses with their intensity modulations displayed as FEL macropulses.
uvby photometry in McCormick proper motion fields
NASA Technical Reports Server (NTRS)
Degewij, J.
1982-01-01
The Danish 50 cm telescope at the European Southern Observatory was used to obtain high-precision uvby photometry for 50 F2 to G2 stars, with V values in the 9.4-12.3 mag range, which were selected in the southern galactic polar regions of the McCormick proper motion fields and measured on six different nights. The brighter stars are found to systematically exhibit smaller m(1) indices, of about 0.02 mag, upon comparison with the earlier data of Blaauw et al (1976). Single measurements are given for 98 stars in eight McCormick fields at intermediate southern galactic latitudes.
Yan, Xintian; Zhao, Xinzhi; Li, Juxue; He, Lin; Xu, Mingqing
2018-04-20
Lines of evidence have demonstrated that early-life malnutrition is highly correlated with neurodevelopment and adulthood neuropsychiatric disorders, although some findings conflict with each other. In addition, the biological mechanisms are less investigated. We systematically reviewed the evidence linking early-life nutrition status with neurodevelopment and clinical observations in human and animal models. We summarized the effects of specific nutrients on neuropsychiatric disorders and explored the underlying potential mechanisms. A further understanding of the biological regulation of early-life nutritional status on neurodevelopment might shed light on precision nutrition within an integrative systems biology framework.
Toward Millimagnitude Photometric Calibration (Abstract)
NASA Astrophysics Data System (ADS)
Dose, E.
2014-12-01
(Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices, and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.
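For photon-noise-limited photometry, the magnitude uncertainty scales as (2.5/ln 10)/SNR, which sets the signal-to-noise needed for millimagnitude work; a quick illustration:

```python
import math

def mag_err_mmag(snr):
    """Photon-limited magnitude uncertainty in millimagnitudes: (2.5/ln 10)/SNR."""
    return 1000.0 * 2.5 / math.log(10) / snr

# Reaching ~1 mmag per measurement requires SNR ~ 1000; 10 mmag needs SNR ~ 100
for snr in (100, 500, 1000):
    print(snr, mag_err_mmag(snr))
```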
Single atom catalysts on amorphous supports: A quenched disorder perspective
NASA Astrophysics Data System (ADS)
Peters, Baron; Scott, Susannah L.
2015-03-01
Phenomenological models that invoke catalyst sites with different adsorption constants and rate constants are well-established, but computational and experimental methods are just beginning to provide atomically resolved details about amorphous surfaces and their active sites. This letter develops a statistical transformation from the quenched disorder distribution of site structures to the distribution of activation energies for sites on amorphous supports. We show that the overall kinetics are highly sensitive to the precise nature of the low energy tail in the activation energy distribution. Our analysis motivates further development of systematic methods to identify and understand the most reactive members of the active site distribution.
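The sensitivity to the low-energy tail can be illustrated by averaging Arrhenius rates over a quenched distribution of site activation energies (all values below are illustrative, not from the letter):

```python
import numpy as np

rng = np.random.default_rng(2)
kB_T = 0.05   # eV (~580 K), illustrative

# Quenched-disorder ensemble of site activation energies (eV)
Ea = rng.normal(loc=1.0, scale=0.1, size=100_000)
rates = np.exp(-Ea / kB_T)                # per-site Arrhenius rates, unit prefactor

mean_rate = rates.mean()                  # ensemble-average kinetics
naive_rate = np.exp(-Ea.mean() / kB_T)    # rate of the "average" site

# The low-energy tail dominates: the ensemble average far exceeds the naive
# single-site estimate (analytically by a factor exp(sigma^2 / (2 kB_T^2)))
print(mean_rate / naive_rate)
```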
Synthesis of mesoporous nano-hydroxyapatite by using zwitterions surfactant
Mesoporous nano-hydroxyapatite (mn-HAP) was successfully synthesized via a novel micelle-templating method using lauryl dimethylaminoacetic acid as a zwitterionic surfactant. The systematic use of such a surfactant in combination with microwave energy input enables the precise contr...
Altez-Fernandez, Carlos; Ortiz, Victor; Mirzazadeh, Majid; Zegarra, Luis; Seas, Carlos; Ugarte-Gil, Cesar
2017-06-05
Genitourinary tuberculosis is the third most common form of extrapulmonary tuberculosis. Diagnosis is difficult because of unspecific clinical manifestations and the low accuracy of conventional tests. Unfortunately, the delayed diagnosis impacts the urinary tract severely. Nucleic acid amplification tests yield fast results, and among these, new technologies can also detect drug resistance. There is a lack of consensus regarding the use of these tests in genitourinary tuberculosis; we therefore aimed to assess the accuracy of nucleic acid amplification tests in the diagnosis of genitourinary tuberculosis and to evaluate the heterogeneity between studies. We did a systematic review and meta-analysis of research articles comparing the accuracy of a reference standard and a nucleic acid amplification test for diagnosis of urinary tract tuberculosis. We searched Medline, EMBASE, Web of Science, LILACS, the Cochrane Library, and Scopus for articles published between Jan 1, 1990, and Apr 14, 2016. Two investigators identified eligible articles and extracted data for individual study sites. We analyzed data in groups with the same index test and then generated pooled summary estimates (95% CIs) for sensitivity and specificity by use of random-effects meta-analysis when studies were not heterogeneous. We identified eleven relevant studies from ten articles, giving information on PCR, LCR, and Xpert MTB/RIF tests. All PCR studies were "in-house" tests with different gene targets, were highly heterogeneous, and had several quality concerns; we therefore did not proceed with a pooled analysis. Only one study used LCR. Xpert studies were of good quality and not heterogeneous; pooled sensitivity was 0.87 (0.66-0.96) and specificity was 0.91 (0.84-0.95). Among Xpert MTB/RIF studies, specificity was favorable with an acceptable confidence interval; however, new studies can update the meta-analysis and yield more precise estimates. 
Further high-quality studies are urgently needed to improve diagnosis of genitourinary tuberculosis. PROSPERO CRD42016039020.
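The pooled summary estimates above come from a random-effects meta-analysis. As a hedged illustration only (this is not the authors' code; the function name, continuity correction, and example counts are assumptions), pooling per-study sensitivities on the logit scale with the DerSimonian-Laird estimator can be sketched as:

```python
import math


def logit(p):
    return math.log(p / (1 - p))


def inv_logit(x):
    return 1 / (1 + math.exp(-x))


def pool_random_effects(tp_fn_pairs):
    """DerSimonian-Laird random-effects pooling of sensitivities.

    tp_fn_pairs: list of (true_positives, false_negatives) per study.
    Returns (pooled_sensitivity, ci_lower, ci_upper), with the 95% CI
    computed on the logit scale and back-transformed.
    """
    ys, vs = [], []
    for tp, fn in tp_fn_pairs:
        # 0.5 continuity correction; variance of logit(sens) is ~ 1/tp + 1/fn
        tp, fn = tp + 0.5, fn + 0.5
        ys.append(logit(tp / (tp + fn)))
        vs.append(1 / tp + 1 / fn)
    w = [1 / v for v in vs]
    fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, ys))
    df = len(ys) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if df > 0 else 0.0
    w_star = [1 / (v + tau2) for v in vs]
    mu = sum(wi * yi for wi, yi in zip(w_star, ys)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return inv_logit(mu), inv_logit(mu - 1.96 * se), inv_logit(mu + 1.96 * se)
```

A review would apply this separately per index test, as the authors did for the Xpert group, and only when heterogeneity is acceptable.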
Burden of clinical infections due to S. pneumoniae during Hajj: A systematic review.
Alqahtani, Amani S; Tashani, Mohamed; Ridda, Iman; Gamil, Amgad; Booy, Robert; Rashid, Harunor
2018-06-20
The burden of pneumococcal disease at Hajj has not been precisely evaluated through a systematic review. To this end, we have conducted a systematic review on the burden of clinical infections due to Streptococcus pneumoniae among Hajj pilgrims. Major electronic databases including OVID Medline, Web of Science, OVID Embase, Social Sciences Citation Index, Google Scholar and relevant websites (e.g., the online Saudi Epidemiology Bulletin) were searched using MeSH terms and text words including, but not limited to, 'Hajj', 'pneumonia' and 'S. pneumoniae'. This was buttressed by hand-searching the reference lists of identified studies. Of 21 full-text papers reviewed, nine articles were included in this review. Seven studies reported the burden of pneumococcal pneumonia and the other two reported the burden of invasive pneumococcal diseases including meningitis and sepsis. The proportion of pneumonia that was pneumococcal ranged from 1% to 54% of bacteriologically confirmed pneumonias. The pneumococcus accounted for two-thirds of bacteriologically diagnosed meningitis cases, and one-third of confirmed cases of sepsis. The case fatality rate of pneumococcal pneumonia was reported in only two studies: 33.3% and 50%. Only one study provided data on antimicrobial susceptibility of S. pneumoniae isolates, reporting 33.3% to be penicillin resistant. None of the included studies provided data on the serotype distribution of S. pneumoniae. This systematic review highlights the significance of pneumococcal disease during Hajj, and demonstrates the paucity of data on its burden, particularly on disease-causing serotypes. Copyright © 2018. Published by Elsevier Ltd.
Bonaretti, Serena; Vilayphiou, Nicolas; Chan, Caroline Mai; Yu, Andrew; Nishiyama, Kyle; Liu, Danmei; Boutroy, Stephanie; Ghasem-Zadeh, Ali; Boyd, Steven K.; Chapurlat, Roland; McKay, Heather; Shane, Elizabeth; Bouxsein, Mary L.; Black, Dennis M.; Majumdar, Sharmila; Orwoll, Eric S.; Lang, Thomas F.; Khosla, Sundeep; Burghardt, Andrew J.
2017-01-01
Introduction: HR-pQCT is increasingly used to assess bone quality, fracture risk and anti-fracture interventions. The contribution of the operator has not been adequately accounted for in measurement precision. Operators acquire a 2D projection (“scout view image”) and define the region to be scanned by positioning a “reference line” on a standard anatomical landmark. In this study, we (i) evaluated the contribution of positioning variability to in vivo measurement precision, (ii) measured intra- and inter-operator positioning variability, and (iii) tested whether custom training software led to superior reproducibility in new operators compared to experienced operators. Methods: To evaluate the operator contribution to in vivo measurement precision, we compared precision errors calculated in 64 co-registered and non-co-registered scan-rescan images. To quantify operator variability, we developed software that simulates the positioning process of the scanner’s software. Eight experienced operators positioned reference lines on scout view images designed to test intra- and inter-operator reproducibility. Finally, we developed modules for training and evaluation of reference line positioning. We enrolled 6 new operators to participate in a common training, followed by the same reproducibility experiments performed by the experienced group. Results: In vivo precision errors were up to three-fold greater (Tt.BMD and Ct.Th) when variability in scan positioning was included. Inter-operator precision errors were significantly greater than short-term intra-operator precision (p<0.001). Newly trained operators achieved intra-operator reproducibility comparable to experienced operators, and lower inter-operator reproducibility (p<0.001). Precision errors were significantly greater for the radius than for the tibia. Conclusion: Operator reference line positioning contributes significantly to in vivo measurement precision, and its contribution is significantly greater for multi-operator datasets. 
Inter-operator variability can be significantly reduced using a systematic training platform, now available online (http://webapps.radiology.ucsf.edu/refline/). PMID:27475931
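Short-term in vivo precision errors of the kind reported above are conventionally summarized as a root-mean-square coefficient of variation (CV%) across subjects with repeat scans. A minimal sketch, not taken from the study itself (function name and sample data are illustrative):

```python
import math


def rms_cv_percent(repeat_measurements):
    """Root-mean-square coefficient of variation (CV%) across subjects.

    repeat_measurements: one list of repeat-scan values per subject.
    The RMS of the per-subject CVs is the usual short-term
    precision-error metric for scan-rescan studies.
    """
    cvs = []
    for values in repeat_measurements:
        m = len(values)
        mean = sum(values) / m
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (m - 1))
        cvs.append(sd / mean)
    return 100 * math.sqrt(sum(cv ** 2 for cv in cvs) / len(cvs))
```

Comparing this metric between co-registered and non-co-registered scan pairs is one way to isolate the positioning contribution the study describes.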
Cosmology with the Large Synoptic Survey Telescope: an overview.
Zhan, Hu; Anthony Tyson, J
2018-06-01
The Large Synoptic Survey Telescope (LSST) is a high-étendue imaging facility that is being constructed atop Cerro Pachón in northern Chile. It is scheduled to begin science operations in 2022. With an 8.4 m (6.5 m effective) aperture, a novel three-mirror design achieving a seeing-limited 9.6 deg² field of view, and a 3.2 gigapixel camera, the LSST has the deep-wide-fast imaging capability necessary to carry out an 18,000 deg² survey in six passbands (ugrizy) to a coadded depth of r ~ 27.5 over 10 years using about 90% of its observational time. The remaining ~10% of the time will be devoted to considerably deeper and faster time-domain observations and smaller surveys. In total, each patch of the sky in the main survey will receive 800 visits allocated across the six passbands with 30 s exposure visits. The huge volume of high-quality LSST data will provide a wide range of science opportunities and, in particular, open a new era of precision cosmology with unprecedented statistical power and tight control of systematic errors. In this review, we give a brief account of the LSST cosmology program with an emphasis on dark energy investigations. The LSST will address dark energy physics and cosmology in general by exploiting diverse precision probes including large-scale structure, weak lensing, type Ia supernovae, galaxy clusters, and strong lensing. Combined with the cosmic microwave background data, these probes form interlocking tests on the cosmological model and the nature of dark energy in the presence of various systematics. The LSST data products will be made available to the US and Chilean scientific communities and to international partners with no proprietary period. 
Close collaborations with contemporaneous imaging and spectroscopy surveys observing at a variety of wavelengths, resolutions, depths, and timescales will be a vital part of the LSST science program, which will not only enhance specific studies but, more importantly, also allow a more complete understanding of the Universe through different windows.
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Hong, Ie-Hong; Liao, Yung-Cheng; Tsai, Yung-Feng
2013-11-05
The perfectly ordered parallel arrays of periodic Ce silicide nanowires can self-organize with atomic precision on single-domain Si(110)-16 × 2 surfaces. The growth evolution of self-ordered parallel Ce silicide nanowire arrays is investigated over a broad range of Ce coverages on single-domain Si(110)-16 × 2 surfaces by scanning tunneling microscopy (STM). Three different types of well-ordered parallel arrays, consisting of uniformly spaced and atomically identical Ce silicide nanowires, are self-organized through the heteroepitaxial growth of Ce silicides on a long-range grating-like 16 × 2 reconstruction at the deposition of various Ce coverages. Each atomically precise Ce silicide nanowire consists of a bundle of chains and rows with different atomic structures. The atomic-resolution dual-polarity STM images reveal that the interchain coupling leads to the formation of the registry-aligned chain bundles within individual Ce silicide nanowire. The nanowire width and the interchain coupling can be adjusted systematically by varying the Ce coverage on a Si(110) surface. This natural template-directed self-organization of perfectly regular parallel nanowire arrays allows for the precise control of the feature size and positions within ±0.2 nm over a large area. Thus, it is a promising route to produce parallel nanowire arrays in a straightforward, low-cost, high-throughput process. PMID:24188092
van der Gronde, Toon; Uyl-de Groot, Carin A.; Pieters, Toine
2017-01-01
Context: Recent public outcry has highlighted the rising cost of prescription drugs worldwide, which in several disease areas outpaces other health care expenditures and results in a suboptimal global availability of essential medicines. Method: A systematic review of PubMed, the Financial Times, the New York Times, the Wall Street Journal and the Guardian was performed to identify articles related to the pricing of medicines. Findings: Changes in drug life cycles have dramatically affected patent medicine markets, which have long been considered a self-evident and self-sustainable source of income for highly profitable drug companies. Market failure in combination with high merger and acquisition activity in the sector have allowed price increases for even off-patent drugs. With market interventions and the introduction of QALY measures in health care, governments have tried to influence drug prices, but often encounter unintended consequences. Patent reform legislation, reference pricing, outcome-based pricing and incentivizing physicians and pharmacists to prescribe low-cost drugs are among the most promising short-term policy options. Due to the lack of systematic research on the effectiveness of policy measures, an increasing number of ad hoc decisions have been made with counterproductive effects on the availability of essential drugs. Future challenges demand new policies, for which recommendations are offered. Conclusion: A fertile ground for high-priced drugs has been created by changes in drug life-cycle dynamics, the unintended effects of patent legislation, government policy measures and orphan drug programs. There is an urgent need for regulatory reform to curtail prices and safeguard equitable access to innovative medicines. PMID:28813502
Discovery of functional elements in 12 Drosophila genomes using evolutionary signatures
Stark, Alexander; Lin, Michael F.; Kheradpour, Pouya; Pedersen, Jakob S.; Parts, Leopold; Carlson, Joseph W.; Crosby, Madeline A.; Rasmussen, Matthew D.; Roy, Sushmita; Deoras, Ameya N.; Ruby, J. Graham; Brennecke, Julius; Hodges, Emily; Hinrichs, Angie S.; Caspi, Anat; Paten, Benedict; Park, Seung-Won; Han, Mira V.; Maeder, Morgan L.; Polansky, Benjamin J.; Robson, Bryanne E.; Aerts, Stein; van Helden, Jacques; Hassan, Bassem; Gilbert, Donald G.; Eastman, Deborah A.; Rice, Michael; Weir, Michael; Hahn, Matthew W.; Park, Yongkyu; Dewey, Colin N.; Pachter, Lior; Kent, W. James; Haussler, David; Lai, Eric C.; Bartel, David P.; Hannon, Gregory J.; Kaufman, Thomas C.; Eisen, Michael B.; Clark, Andrew G.; Smith, Douglas; Celniker, Susan E.; Gelbart, William M.; Kellis, Manolis
2008-01-01
Sequencing of multiple related species followed by comparative genomics analysis constitutes a powerful approach for the systematic understanding of any genome. Here, we use the genomes of 12 Drosophila species for the de novo discovery of functional elements in the fly. Each type of functional element shows characteristic patterns of change, or ‘evolutionary signatures’, dictated by its precise selective constraints. Such signatures enable recognition of new protein-coding genes and exons, spurious and incorrect gene annotations, and numerous unusual gene structures, including abundant stop-codon readthrough. Similarly, we predict non-protein-coding RNA genes and structures, and new microRNA (miRNA) genes. We provide evidence of miRNA processing and functionality from both hairpin arms and both DNA strands. We identify several classes of pre- and post-transcriptional regulatory motifs, and predict individual motif instances with high confidence. We also study how discovery power scales with the divergence and number of species compared, and we provide general guidelines for comparative studies. PMID:17994088
Hydrothermal growth of ZnO nanowire arrays: fine tuning by precursor supersaturation
Yan, Danhua; Cen, Jiajie; Zhang, Wenrui; ...
2016-12-20
In this paper, we develop a technique that fine tunes the hydrothermal growth of ZnO nanowires to address the difficulties in controlling their growth in a conventional one-pot hydrothermal method. In our technique, precursors are separately and slowly supplied with the assistance of a syringe pump through the entire course of the growth. Compared to the one-pot method, the significantly lowered supersaturation of precursors helps eliminate competitive homogeneous nucleation and improves reproducibility. The supersaturation degree can be readily tuned by the precursor quantity and injection rate, thus forming ZnO nanowire arrays of various geometries and packing densities in a highly controllable fashion. The precise control of ZnO nanowire growth enables systematic studies on the correlation between the material's properties and its morphology. Finally, in this work, ZnO nanowire arrays of various morphologies are studied as photoelectrochemical (PEC) water-splitting photoanodes, in which we establish clear correlations between the water-splitting performance and the nanowires' size, shape, and packing density.
NASA Astrophysics Data System (ADS)
Lichti, Derek D.; Chow, Jacky; Lahamy, Hervé
One of the important systematic error parameters identified in terrestrial laser scanners is the collimation axis error, which models the non-orthogonality between two instrumental axes. The quality of this parameter determined by self-calibration, as measured by its estimated precision and its correlation with the tertiary rotation angle κ of the scanner exterior orientation, is strongly dependent on instrument architecture. While the quality is generally very high for panoramic-type scanners, it is comparably poor for hybrid-style instruments. Two methods for improving the quality of the collimation axis error in hybrid instrument self-calibration are proposed herein: (1) the inclusion of independent observations of the tertiary rotation angle κ; and (2) the use of a new collimation axis error model. Five real datasets were captured with two different hybrid-style scanners to test each method's efficacy. While the first method achieves the desired outcome of complete decoupling of the collimation axis error from κ, it is shown that the high correlation is simply transferred to other model variables. The second method achieves partial parameter de-correlation to acceptable levels. Importantly, it does so without any adverse, secondary correlations and is therefore the method recommended for future use. Finally, systematic error model identification has been greatly aided in previous studies by graphical analyses of self-calibration residuals. This paper presents results showing the architecture dependence of this technique, revealing its limitations for hybrid scanners.
NASA Astrophysics Data System (ADS)
Yu, Yan-mei; Sahoo, B. K.
2016-12-01
We investigate the transition between the fine-structure levels of the ground state, 3p 2P1/2 → 3p 2P3/2, of the highly charged Al-like ions 51V10+, 53Cr11+, 55Mn12+, 57Fe13+, 59Co14+, 61Ni15+, and 63Cu16+ for frequency standards. To assess them as prospective atomic clocks, we determine their transition wavelengths, quality factors, and various plausible systematics during the measurements. Since most of these ions have nuclear spin I = 3/2, uncertainties due to dominant quadrupole shifts can be evaded in the F = 0 hyperfine level of the 3p 2P3/2 state. Other dominant systematics, such as quadratic Stark and blackbody radiation shifts, have been evaluated precisely, demonstrating the feasibility of achieving high-accuracy atomic clocks, with fractional uncertainty below 10^-19, using the above transitions. Moreover, relativistic sensitivity coefficients are determined to find out the aptness of these proposed clocks to investigate a possible temporal variation of the fine-structure constant. To carry out these analyses, a relativistic coupled-cluster method considering the Dirac-Coulomb-Breit Hamiltonian along with lower-order quantum electrodynamics interactions is employed and many spectroscopic properties are evaluated. These properties are also of immense interest for astrophysical studies.
LYSO based precision timing calorimeters
Bornheim, A.; Apresyan, A.; Ronzhin, A.; ...
2017-11-01
In this report we outline the development of calorimeter detectors using bright scintillating crystals. We discuss how timing information with a precision of a few tens of picoseconds and below can significantly improve the reconstruction of physics events under the challenging high-pileup conditions to be faced at the High-Luminosity LHC or a future hadron collider. The particular challenge in measuring the time of arrival of a high-energy photon lies in the stochastic component of the distance of initial conversion and the size of the electromagnetic shower. We present studies and measurements from test beams for calorimeter-based timing measurements to explore the ultimate timing precision achievable for high-energy photons of 10 GeV and above. We focus on techniques to measure the timing with high precision in association with the energy of the photon. We present test-beam studies and results on the timing performance and characterization of the time resolution of LYSO-based calorimeters, and demonstrate that a time resolution of 30 ps is achievable for a particular design.
NASA Astrophysics Data System (ADS)
Ishino, Hirokazu
2016-07-01
We present LiteBIRD, a satellite project dedicated to the detection of the CMB B-mode polarization. The purpose of LiteBIRD is to measure the tensor-to-scalar ratio r with a precision of σ_r < 0.001, testing large single-field slow-roll inflation models, by scanning the whole sky for three years at the Sun-Earth L2 point with a sensitivity of 3.2 μK·arcmin. We report an overview and the status of the project, including the ongoing detector and systematics studies.
NASA Astrophysics Data System (ADS)
Shelly, D. R.; Ellsworth, W. L.; Prejean, S. G.; Hill, D. P.; Hardebeck, J.; Hsieh, P. A.
2015-12-01
Earthquake swarms, sequences of sustained seismicity, convey active subsurface processes that sometimes precede larger tectonic or volcanic episodes. Their extended activity and spatiotemporal migration can often be attributed to fluid pressure transients as migrating crustal fluids (typically water and CO2) interact with subsurface structures. Although the swarms analyzed here are interpreted to be natural in origin, the mechanisms of seismic activation likely mirror those observed for earthquakes induced by industrial fluid injection. Here, we use massive-scale waveform correlation to detect and precisely locate 3-10 times as many earthquakes as included in routine catalogs for recent (2014-2015) swarms beneath Mammoth Mountain, Long Valley Caldera, Lassen Volcanic Center, and Fillmore areas of California, USA. These enhanced catalogs, with location precision as good as a few meters, reveal signatures of fluid-faulting interactions, such as systematic migration, fault-valve behavior, and fracture mesh structures, not resolved in routine catalogs. We extend this analysis to characterize source mechanism similarity even for very small newly detected events using relative P and S polarity estimates. This information complements precise locations to define fault complexities that would otherwise be invisible. In particular, although swarms often consist of groups of highly similar events, some swarms contain a population of outliers with different slip and/or fault orientations. These events highlight the complexity of fluid-faulting interactions. Despite their different settings, the four swarms analyzed here share many similarities, including pronounced hypocenter migration suggestive of a fluid pressure trigger. This includes the July 2015 Fillmore swarm, which, unlike the others, occurred outside of an obvious volcanic zone. Nevertheless, it exhibited systematic westward and downdip migration on a ~1x1.5 km low-angle, NW-dipping reverse fault at midcrustal depth.
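The massive-scale waveform correlation used for detection in studies like this amounts to sliding a template waveform along continuous data and flagging windows with a high normalized correlation coefficient. A toy single-channel sketch, not the authors' pipeline (real implementations work on multi-station seismograms with FFT-based correlation):

```python
import math


def normalized_cross_correlation(template, trace):
    """Slide a template along a longer trace; return the maximum
    normalized correlation coefficient and the offset where it occurs."""
    n = len(template)
    tm = sum(template) / n
    t0 = [x - tm for x in template]
    tn = math.sqrt(sum(x * x for x in t0))
    if tn == 0:
        raise ValueError("template must not be constant")
    best, best_off = -1.0, 0
    for off in range(len(trace) - n + 1):
        seg = trace[off:off + n]
        sm = sum(seg) / n
        s0 = [x - sm for x in seg]
        sn = math.sqrt(sum(x * x for x in s0))
        if sn == 0:
            continue  # flat window: correlation undefined, skip
        cc = sum(a * b for a, b in zip(t0, s0)) / (tn * sn)
        if cc > best:
            best, best_off = cc, off
    return best, best_off
```

Detections above a correlation threshold can then be relocated relative to the template event, which is how catalogs reach the meter-scale relative location precision described above.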
Cancer Precision Medicine: Why More Is More and DNA Is Not Enough.
Schütte, Moritz; Ogilvie, Lesley A; Rieke, Damian T; Lange, Bodo M H; Yaspo, Marie-Laure; Lehrach, Hans
2017-01-01
Every tumour is different. They arise in patients with different genomes, from cells with different epigenetic modifications, and by random processes affecting the genome and/or epigenome of a somatic cell, allowing it to escape the usual controls on its growth. Tumours and patients therefore often respond very differently to the drugs they receive. Cancer precision medicine aims to characterise the tumour (and often also the patient) to be able to predict, with high accuracy, its response to different treatments, with options ranging from the selective characterisation of a few genomic variants considered particularly important to predict the response of the tumour to specific drugs, to deep genome analysis of both tumour and patient, combined with deep transcriptome analysis of the tumour. Here, we compare the expected results of carrying out such analyses at different levels, from different size panels to a comprehensive analysis incorporating both patient and tumour at the DNA and RNA levels. In doing so, we illustrate the additional power gained by this unusually deep analysis strategy, a potential basis for a future precision medicine first strategy in cancer drug therapy. However, this is only a step along the way of increasingly detailed molecular characterisation, which in our view will, in the future, introduce additional molecular characterisation techniques, including systematic analysis of proteins and protein modification states and different types of metabolites in the tumour, systematic analysis of circulating tumour cells and nucleic acids, the use of spatially resolved analysis techniques to address the problem of tumour heterogeneity as well as the deep analyses of the immune system of the patient to, e.g., predict the response of the patient to different types of immunotherapy. 
Such analyses will generate data sets of even greater complexity, requiring mechanistic modelling approaches to capture enough of the complex situation in the real patient to be able to accurately predict his/her responses to all available therapies. © 2017 S. Karger AG, Basel.
Takasaki, Hiroshi; Okuyama, Kousuke; Rosedale, Richard
2017-02-01
Mechanical Diagnosis and Therapy (MDT) is used in the treatment of extremity problems. Classifying clinical problems is one method of providing effective treatment to a target population. Classification reliability is a key factor to determine the precise clinical problem and to direct an appropriate intervention. To explore inter-examiner reliability of the MDT classification for extremity problems in three reliability designs: 1) vignette reliability using surveys with patient vignettes, 2) concurrent reliability, where multiple assessors decide a classification by observing someone's assessment, 3) successive reliability, where multiple assessors independently assess the same patient at different times. Systematic review with data synthesis in a quantitative format. Agreement of MDT subgroups was examined using the Kappa value, with the operational definition of acceptable reliability set at ≥ 0.6. The level of evidence was determined considering the methodological quality of the studies. Six studies were included and all studies met the criteria for high quality. Kappa values for the vignette reliability design (five studies) were ≥ 0.7. There was data from two cohorts in one study for the concurrent reliability design and the Kappa values ranged from 0.45 to 1.0. Kappa values for the successive reliability design (data from three cohorts in one study) were < 0.6. The current review found strong evidence of acceptable inter-examiner reliability of MDT classification for extremity problems in the vignette reliability design, limited evidence of acceptable reliability in the concurrent reliability design and unacceptable reliability in the successive reliability design. Copyright © 2017 Elsevier Ltd. All rights reserved.
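Inter-examiner agreement of the kind synthesized above is typically quantified with Cohen's kappa, which corrects observed agreement for chance agreement. A minimal two-rater sketch (illustrative only; the review's own computations and category labels are not reproduced here):

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's
    marginal label frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)
```

The ≥ 0.6 cutoff used in the review corresponds to the common "substantial agreement" band of the Landis and Koch scale.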
Precision experiments on mirror transitions at Notre Dame
NASA Astrophysics Data System (ADS)
Brodeur, Maxime; TwinSol Collaboration
2016-09-01
Thanks to extensive experimental efforts that led to precise determinations of the key experimental quantities of superallowed pure Fermi transitions, we now have a very precise value for Vud that provides a stringent test of CKM matrix unitarity. Despite this achievement, measurements in other systems remain relevant, as conflicting results could uncover unknown systematic effects or even new physics. One such system is the superallowed mixed transitions, which can help refine the theoretical corrections used for pure Fermi transitions and improve the accuracy of Vud. However, as a corrected Ft-value determination in these systems requires the more challenging determination of the Fermi to Gamow-Teller mixing ratio, only five transitions, spanning from 19Ne to 37Ar, are currently fully characterized. To rectify this situation, an experimental program of precision measurements on mirror transitions, which includes precision half-life measurements and, in the future, determinations of the Fermi to Gamow-Teller mixing ratio, has started at the University of Notre Dame. This work is supported in part by the National Science Foundation.
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Ettenauer, S.; Ball, G. C.; Leslie, J. R.; Svensson, C. E.; Andreoiu, C.; Austin, R. A. E.; Bandyopadhyay, D.; Cross, D. S.; Demand, G.; Djongolov, M.; Garrett, P. E.; Green, K. L.; Grinyer, G. F.; Hackman, G.; Leach, K. G.; Pearson, C. J.; Phillips, A. A.; Sumithrarachchi, C. S.; Triambak, S.; Williams, S. J.
2011-01-01
A high-precision half-life measurement for the superallowed β+ emitter 26Al^m was performed at the TRIUMF-ISAC radioactive ion beam facility, yielding T1/2 = 6346.54 ± 0.46(stat) ± 0.60(syst) ms, consistent with, but 2.5 times more precise than, the previous world average. The 26Al^m half-life and ft value, 3037.53(61) s, are now the most precisely determined for any superallowed β decay. Combined with recent theoretical corrections for isospin-symmetry-breaking and radiative effects, the corrected Ft value for 26Al^m, 3073.0(12) s, sets a new benchmark for the high-precision superallowed Fermi β-decay studies used to test the conserved vector current hypothesis and determine the Vud element of the Cabibbo-Kobayashi-Maskawa quark-mixing matrix.
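A result quoted with separate statistical and systematic components, as above, has a total uncertainty obtained by adding the independent components in quadrature; for the half-life here that gives roughly 0.76 ms. A one-line sketch (function and variable names are illustrative, not from the paper):

```python
import math


def combine_quadrature(*uncertainties):
    """Combine independent uncertainty components in quadrature."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))


# T1/2 = 6346.54 ms with 0.46 ms (stat) and 0.60 ms (syst)
total_ms = combine_quadrature(0.46, 0.60)  # about 0.756 ms
```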
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
Inertial navigation systems are core components of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but it cannot eliminate the errors caused by misalignment angles and scale factor errors. Moreover, discrete calibration methods cannot fulfill the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system mounted on shock absorbers. This paper analyzes the effect of calibration error over one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for its self-calibration. Self-calibration simulation experiments showed that this scheme can estimate all the errors in the calibration error model, with a calibration precision of less than 1 ppm for the inertial sensors' scale factor error and less than 5″ for the misalignment. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of a dual-axis rotation inertial navigation system with mechanically dithered ring laser gyroscopes.
Closing the wedge: Search strategies for extended Higgs sectors with heavy flavor final states
Gori, Stefania; Kim, Ian-Woo; Shah, Nausheen R.; ...
2016-04-29
We consider search strategies for an extended Higgs sector at the high-luminosity LHC14 utilizing multitop final states. In the framework of a two Higgs doublet model, the purely top final states ($t\bar{t}$, 4t) are important channels for heavy Higgs bosons with masses in the wedge above 2m_t and at low values of tanβ, while a 2b2t final state is most relevant at moderate values of tanβ. We find, in the $t\bar{t}H$ channel with H→$t\bar{t}$, that both single- and three-lepton final states can provide statistically significant constraints at low values of tanβ for m_A as high as ~750 GeV. When systematics on the $t\bar{t}$ background are taken into account, however, the three-lepton final state is more powerful, though the precise constraint depends fairly sensitively on lepton fake rates. We also find that neither 2b2t nor $t\bar{t}$ final states provide constraints on additional heavy Higgs bosons with couplings to tops smaller than the top Yukawa, due to expected systematic uncertainties in the $t\bar{t}$ background.
I too, am America: a review of research on systemic lupus erythematosus in African-Americans
Williams, Edith M; Bruner, Larisa; Adkins, Alyssa; Vrana, Caroline; Logan, Ayaba; Kamen, Diane; Oates, James C
2016-01-01
Systemic lupus erythematosus (SLE) is a multi-organ autoimmune disorder that can cause significant morbidity and mortality. A large body of evidence has shown that African-Americans experience the disease more severely than other racial-ethnic groups. Relevant literature for the years 2000 to August 2015 was obtained from systematic searches of PubMed, Scopus, and the EBSCOHost platform (including MEDLINE and CINAHL) to evaluate research focused on SLE in African-Americans. Thirty-six of the 1502 articles were classified according to their level of evidence. The systematic review of the literature reported a wide range of adverse outcomes in African-American SLE patients and risk factors observed in other mono- and multi-ethnic investigations. Studies limited to African-Americans with SLE identified novel methods for more precise ascertainment of risk and reported novel findings that had not previously been described in African-Americans with SLE. Both the environmental and genetic studies included in this review have highlighted unique African-American populations in an attempt to isolate risk attributable to African ancestry, and observed increased genetic influence on overall disease in this cohort. The review also revealed emerging research in the areas of quality of life, race-tailored interventions, and self-management. This review reemphasizes the importance of additional studies to better elucidate the natural history of SLE in African-Americans and optimize therapeutic strategies for those who are identified as being at high risk. PMID:27651918
A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series
NASA Astrophysics Data System (ADS)
Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team
2011-01-01
In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with timescales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
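The closed-form Gaussian MAP solution mentioned above amounts to blending the least-squares normal equations with a Gaussian prior on the co-trending coefficients, so a tight prior "brakes" runaway fits while a flat prior recovers plain LS. A minimal sketch of that idea (the function name, the simple scalar-noise model, and the toy design matrix are illustrative assumptions, not the actual PDC pipeline):

```python
import numpy as np

def map_cotrend(A, y, sigma2, mu_prior, Sigma_prior):
    """Closed-form Gaussian MAP fit for a linear model y = A @ theta + noise.

    Combines the least-squares normal equations with a Gaussian prior
    N(mu_prior, Sigma_prior) on theta; the posterior mode is the solution of
    (A^T A / sigma2 + Sigma_prior^-1) theta = A^T y / sigma2 + Sigma_prior^-1 mu_prior.
    """
    P = np.linalg.inv(Sigma_prior)          # prior precision matrix
    H = A.T @ A / sigma2 + P                # posterior precision
    b = A.T @ y / sigma2 + P @ mu_prior
    return np.linalg.solve(H, b)
```

With a very broad prior the result approaches the ordinary LS solution; with a very tight prior it is pulled to the prior mean, which is exactly the "braking" behavior the abstract describes.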
Liquid-Assisted Femtosecond Laser Precision-Machining of Silica.
Cao, Xiao-Wen; Chen, Qi-Dai; Fan, Hua; Zhang, Lei; Juodkazis, Saulius; Sun, Hong-Bo
2018-04-28
We report a systematic study of liquid-assisted femtosecond laser machining of quartz plates in water and in different etching solutions. Ablation in liquid showed better structuring quality and improved resolution, with features 1/3 to 1/2 smaller than those made in air. Laser-induced periodic structures were less pronounced when processing in water-based solutions. The redistribution of oxygen revealed a strong surface modification, which is related to the etching selectivity of laser-irradiated regions. Laser ablation in KOH and HF solutions produced very different morphologies, reflecting the role of the laser-induced plasma in the formation of micro/nano-features in liquid. This work extends laser precision fabrication of hard materials. The mechanism of strong absorption in regions with permittivity (ε) near zero is discussed.
Lee, Taehwa; Luo, Wei; Li, Qiaochu; Demirci, Hakan; Guo, L Jay
2017-10-01
Beyond the implementation of the photoacoustic effect to photoacoustic imaging and laser ultrasonics, this study demonstrates a novel application of the photoacoustic effect for high-precision cavitation treatment of tissue using laser-induced focused ultrasound. The focused ultrasound is generated by pulsed optical excitation of an efficient photoacoustic film coated on a concave surface, and its amplitude is high enough to produce controllable microcavitation within the focal region (lateral focus <100 µm). Such microcavitation is used to cut or ablate soft tissue in a highly precise manner. This work demonstrates precise cutting of tissue-mimicking gels as well as accurate ablation of gels and animal eye tissues. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Chang, Yu Min; Lu, Nien Hua; Wu, Tsung Chiang
2005-06-01
This study applies 3D laser scanning technology to develop a high-precision measuring system for the digital survey of historical buildings. It outperformed other methods in obtaining abundant high-precision measuring points and computing the data instantly. In this study, the Pei-tien Temple, a Chinese Taoist temple in southern Taiwan famous for its highly intricate architecture and more than 300-year history, was adopted as the target to prove the high accuracy and efficiency of the system. Using the French-made MENSI GS-100 laser scanner, numerous measuring points were precisely plotted to produce the plan, elevation, and 3D maps of the property. Accuracies of 0.1-1 mm in the digital data have consistently been achieved for historical heritage measurement.
[Estimation of desert vegetation coverage based on multi-source remote sensing data].
Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui
2012-12-01
Taking the lower reaches of the Tarim River in Xinjiang, Northwest China as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of the different estimation methods and models were compared. The results showed that the precision of the estimation models increased with the spatial resolution of the remote sensing data. The estimation precision of the models based on high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed how the estimation precision of desert vegetation coverage changes with the spatial resolution of remote sensing data, and realized the quantitative conversion of parameters and scales among high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which provides direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.
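The vegetation index method that the abstract compares against is commonly implemented as the dimidiate pixel model, in which fractional coverage is interpolated linearly between bare-soil and full-vegetation NDVI endmembers. A sketch under that assumption (the abstract does not name the specific index model, and the endmember values below are illustrative):

```python
def vegetation_coverage(ndvi, ndvi_soil=0.05, ndvi_veg=0.70):
    """Dimidiate pixel model: a pixel's fractional vegetation coverage is the
    linear mixing ratio between a pure-soil and a pure-vegetation NDVI
    endmember, clipped to the physical range [0, 1]."""
    fc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return max(0.0, min(1.0, fc))
```

The endmembers are usually taken from the NDVI histogram of the scene (e.g. its low and high percentiles), which is one reason higher-resolution imagery, with purer pixels, tends to yield higher estimation precision.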
Preparation and purification of organic samples for selenium isotope studies.
Banning, Helena; Stelling, Monika; König, Stephan; Schoenberg, Ronny; Neumann, Thomas
2018-01-01
Selenium (Se) is an important micronutrient but also a strong toxin with a narrow tolerance range for many organisms. As such, a globally heterogeneous Se distribution in soils is responsible for various disease patterns (i.e. Se excess and deficiency) and environmental problems, whereby plants play a key role in the entrance of Se into the biosphere. Selenium isotope variations have proved to be a powerful tracer for redox processes and are therefore promising for exploring the species-dependent Se metabolism in plants and the Se cycling within the Critical Zone. Plant cultivation setups enable systematic controlled investigations, but the samples derived from them (plant tissue and phytoagar) are particularly challenging and require specific preparation and purification steps to ensure precise and valid Se isotope analysis by HG-MC-ICP-MS. In this study, different methods for the entire process from solid tissue preparation to Se isotope measurement were tested, optimized, and validated. A particular microwave digestion procedure for plant tissue and a vacuum filtration method for phytoagar led to full Se recoveries, while reducing unfavorable organic residues to a minimum. Three purification methods predominantly described in the literature were systematically tested with pure Se solution, a highly concentrated multi-element standard solution, and plant and phytoagar target matrices. All of these methods efficiently remove critical matrix elements, but they differ in Se recovery and organic residues. Validation tests doping Se-free plant material and phytoagar with a reference material of known Se isotope composition revealed the high impact of organic residues on the accuracy of MC-ICP-MS measurements.
Only the purification method with no detectable organic residues, hydride generation and trapping, results in valid mass bias correction for plant samples, with an average deviation from the true δ82/76Se values of 0.2 ‰ and a reproducibility (2 SD) of ± 0.2 ‰. For phytoagar this test yields a higher deviation of 1.1 ‰ from the true value and a 2 SD of ± 0.1 ‰. The application of the developed methods to cultivated plants shows sufficient accuracy and precision and is a promising approach to resolving plant-internal Se isotope fractionation, for which respective δ82/76Se values of +2.3 to +3.5 ‰ for selenate and +1.2 to +1.9 ‰ for selenite were obtained.
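The δ82/76Se notation used above follows the standard isotope-ratio delta definition: the per-mil deviation of the sample's ⁸²Se/⁷⁶Se ratio from that of a reference standard (the abstract does not name the standard; NIST SRM 3149 is the one commonly used for Se):

```latex
\delta^{82/76}\mathrm{Se} \;=\;
\left(
  \frac{\bigl({}^{82}\mathrm{Se}/{}^{76}\mathrm{Se}\bigr)_{\text{sample}}}
       {\bigl({}^{82}\mathrm{Se}/{}^{76}\mathrm{Se}\bigr)_{\text{standard}}}
  \;-\; 1
\right) \times 1000\ \text{‰}
```

Positive values thus indicate enrichment of the heavy isotope in the sample relative to the standard, which is why the selenate values (+2.3 to +3.5 ‰) reported above sit above the selenite values (+1.2 to +1.9 ‰).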
Preparation and purification of organic samples for selenium isotope studies
Stelling, Monika; König, Stephan; Schoenberg, Ronny; Neumann, Thomas
2018-01-01
Selenium (Se) is an important micronutrient but also a strong toxin with a narrow tolerance range for many organisms. As such, a globally heterogeneous Se distribution in soils is responsible for various disease patterns (i.e. Se excess and deficiency) and environmental problems, whereby plants play a key role in the entrance of Se into the biosphere. Selenium isotope variations have proved to be a powerful tracer for redox processes and are therefore promising for exploring the species-dependent Se metabolism in plants and the Se cycling within the Critical Zone. Plant cultivation setups enable systematic controlled investigations, but the samples derived from them (plant tissue and phytoagar) are particularly challenging and require specific preparation and purification steps to ensure precise and valid Se isotope analysis by HG-MC-ICP-MS. In this study, different methods for the entire process from solid tissue preparation to Se isotope measurement were tested, optimized, and validated. A particular microwave digestion procedure for plant tissue and a vacuum filtration method for phytoagar led to full Se recoveries, while reducing unfavorable organic residues to a minimum. Three purification methods predominantly described in the literature were systematically tested with pure Se solution, a highly concentrated multi-element standard solution, and plant and phytoagar target matrices. All of these methods efficiently remove critical matrix elements, but they differ in Se recovery and organic residues. Validation tests doping Se-free plant material and phytoagar with a reference material of known Se isotope composition revealed the high impact of organic residues on the accuracy of MC-ICP-MS measurements.
Only the purification method with no detectable organic residues, hydride generation and trapping, results in valid mass bias correction for plant samples, with an average deviation from the true δ82/76Se values of 0.2 ‰ and a reproducibility (2 SD) of ± 0.2 ‰. For phytoagar this test yields a higher deviation of 1.1 ‰ from the true value and a 2 SD of ± 0.1 ‰. The application of the developed methods to cultivated plants shows sufficient accuracy and precision and is a promising approach to resolving plant-internal Se isotope fractionation, for which respective δ82/76Se values of +2.3 to +3.5 ‰ for selenate and +1.2 to +1.9 ‰ for selenite were obtained. PMID:29509798
The paradox of sham therapy and placebo effect in osteopathy
Cerritelli, Francesco; Verzella, Marco; Cicchitti, Luca; D’Alessandro, Giandomenico; Vanacore, Nicola
2016-01-01
Abstract Background: Placebo, defined as "false treatment," is a common gold-standard method to assess the validity of a therapy both in pharmacological trials and in manual medicine research, where placebo is also referred to as "sham therapy." In the medical literature, guidelines have been proposed on how to conduct robust placebo-controlled trials, but mainly in a drug-based scenario. In contrast, there are no precise guidelines on how to conduct a placebo-controlled trial in manual medicine (particularly osteopathy). The aim of the present systematic review was to report how and what types of sham methods, dosages, operator characteristics, and patient types were used in osteopathic clinical trials and, eventually, to assess the clinical effectiveness of sham therapy. Methods: A systematic Cochrane-based review was conducted by analyzing the osteopathic trials that used either a manual or a nonmanual placebo control. Searches were conducted in 8 databases from journal inception to December 2015 using a pragmatic literature search approach. Two independent reviewers conducted the study selection and data extraction for each study. The risk of bias was evaluated according to the Cochrane methods. Results: A total of 64 studies were eligible for analysis, comprising a total of 5024 participants. More than half (43 studies) used a manual placebo; 9 studies used a nonmanual placebo; and 12 studies used both manual and nonmanual placebos. The data showed a lack of reporting of sham therapy information across studies. The risk of bias analysis demonstrated a high risk of bias for allocation, blinding of personnel and participants, selective reporting, and other sources of bias. To explore the clinical effects of the sham therapies used, a quantitative analysis was planned. However, due to the high heterogeneity of the sham approaches used, no further analyses were performed.
Conclusion: The high heterogeneity of the placebos used across studies, the lack of reported information on placebo methods, and the within-study variability between sham and real treatment procedures suggest prudence in reading and interpreting study findings in manual osteopathic randomized controlled trials (RCTs). Efforts must be made to promote guidelines for designing the most reliable placebo for manual RCTs as a means of increasing the internal validity and improving the external validity of findings. PMID:27583913
Cognitive Task Analysis: Bringing Olympic Athlete Style Training to Surgical Education.
Wingfield, Laura R; Kulendran, Myutan; Chow, Andre; Nehme, Jean; Purkayastha, Sanjay
2015-08-01
Surgical training is changing and evolving as time, pressure, and legislative demands continue to mount on trainee surgeons. A paradigm change in the focus of training has resulted in experts examining the cognitive steps needed to perform complex and often highly pressurized surgical procedures. To provide an overview of the collective evidence on cognitive task analysis (CTA) as a surgical training method, and determine if CTA improves a surgeon's performance as measured by technical and nontechnical skills assessment, including precision, accuracy, and operative errors. A systematic literature review was performed. PubMed, Cochrane, and reference lists were analyzed for appropriate inclusion. A total of 595 surgical participants were identified through the literature review and a total of 13 articles were included. Of these articles, 6 studies focused on general surgery, 2 focused on practical procedures relevant to surgery (central venous catheterization placement), 2 studies focused on head and neck surgical procedures (cricothyroidotomy and percutaneous tracheostomy placement), 2 studies highlighted vascular procedures (endovascular aortic aneurysm repair and carotid artery stenting), and 1 detailed endovascular repair (abdominal aorta and thoracic aorta). Overall, 92.3% of studies showed that CTA improves surgical outcome parameters, including time, precision, accuracy, and error reduction in both simulated and real-world environments. CTA has been shown to be a more effective training tool when compared with traditional methods of surgical training. There is a need for the introduction of CTA into surgical curriculums as this can improve surgical skill and ultimately create better patient outcomes. © The Author(s) 2014.
Farhan, Nabeel; Motschall, Edith; Vach, Werner; Boeker, Martin
2017-01-01
Objectives: An increasing number of International Medical Graduates (IMGs), defined as physicians working in a country other than their country of origin and training, immigrate to Western countries. In order to ensure safe and high-quality patient care, they have to take medical and language tests. This systematic review aims to (1) collect all empiric research on intercultural communication of IMGs in medical settings, (2) identify and categorize all text passages mentioning intercultural issues in the included studies, and (3) describe the most commonly reported intercultural areas of communication of IMGs. Methods: This review was based on the PRISMA guidelines for systematic reviews. We conducted a broad and systematic electronic literature search for empiric research in the following databases: MEDLINE, BIOSIS Citation Index, BIOSIS Previews, KCI-Korean Journal Database and SciELO Citation Index. The search results were synthesized and analyzed with the aid of coding systems. These coding systems were based on textual analysis and derived from the themes and topics of the results and discussion sections of the included studies. A quality assessment was performed, comparing the studies with their corresponding checklist (COREQ or STROBE). Textual results of the studies were extracted and categorized. Results: Among 10,630 search results, 47 studies were identified for analysis. 31 studies were qualitative, 12 quantitative, and 4 used mixed methods. The quality assessment revealed a low level of quality of the studies in general. The following intercultural problems were identified: IMGs were not familiar with shared decision-making or with the generally flatter hierarchies in the new health care system. They had difficulties with patient-centered care, the subtleties of the foreign language, and the organizational structures of the new health care system.
In addition, they described the medical education in their home countries as science-oriented, without a focus on psychosocial aspects. Conclusion: There is a need for better training of IMGs on both culture-related and non-culture-related topics in the new workplace country. The topics that emerged in this review constitute a basis for developing such courses. Further empiric research is needed to describe the findings of this review more precisely and should be conducted in accordance with the existing reporting guidelines. PMID:28715467
Michalski, Kerstin; Farhan, Nabeel; Motschall, Edith; Vach, Werner; Boeker, Martin
2017-01-01
An increasing number of International Medical Graduates (IMGs), defined as physicians working in a country other than their country of origin and training, immigrate to Western countries. In order to ensure safe and high-quality patient care, they have to take medical and language tests. This systematic review aims to (1) collect all empiric research on intercultural communication of IMGs in medical settings, (2) identify and categorize all text passages mentioning intercultural issues in the included studies, and (3) describe the most commonly reported intercultural areas of communication of IMGs. This review was based on the PRISMA guidelines for systematic reviews. We conducted a broad and systematic electronic literature search for empiric research in the following databases: MEDLINE, BIOSIS Citation Index, BIOSIS Previews, KCI-Korean Journal Database and SciELO Citation Index. The search results were synthesized and analyzed with the aid of coding systems. These coding systems were based on textual analysis and derived from the themes and topics of the results and discussion sections of the included studies. A quality assessment was performed, comparing the studies with their corresponding checklist (COREQ or STROBE). Textual results of the studies were extracted and categorized. Among 10,630 search results, 47 studies were identified for analysis. 31 studies were qualitative, 12 quantitative, and 4 used mixed methods. The quality assessment revealed a low level of quality of the studies in general. The following intercultural problems were identified: IMGs were not familiar with shared decision-making or with the generally flatter hierarchies in the new health care system. They had difficulties with patient-centered care, the subtleties of the foreign language, and the organizational structures of the new health care system.
In addition, they described the medical education in their home countries as science-oriented, without a focus on psychosocial aspects. There is a need for better training of IMGs on both culture-related and non-culture-related topics in the new workplace country. The topics that emerged in this review constitute a basis for developing such courses. Further empiric research is needed to describe the findings of this review more precisely and should be conducted in accordance with the existing reporting guidelines.
PRECISE: PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare
Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian
2015-01-01
Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop interventions to improve the quality of care. However, the sharing of institutional information may be deterred by institutional privacy concerns, as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols, including homomorphic encryption and Yao's garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutions. We conducted experiments using the MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework. PMID:26146645
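PRECISE's secure pooling is built on homomorphic encryption and Yao's garbled circuits; as a much simpler illustration of the same cross-institution idea, here is a toy two-server additive secret-sharing aggregate, in which no single server ever sees any institution's raw statistic (an illustrative sketch only, not the paper's actual protocol, and not cryptographically hardened):

```python
import random

PRIME = 2**61 - 1  # field modulus for the additive shares

def share(value, n=2):
    """Split value into n additive shares mod PRIME; any n-1 shares
    alone are uniformly random and reveal nothing about value."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

def secure_sum(values):
    """Each institution secret-shares its local statistic between two
    non-colluding servers; each server sums its shares independently,
    and only the final aggregate is reconstructed."""
    server = [0, 0]
    for v in values:
        s = share(v)
        server[0] = (server[0] + s[0]) % PRIME
        server[1] = (server[1] + s[1]) % PRIME
    return reconstruct(server)
```

Ranking encrypted statistics, as PRECISE does, additionally requires secure comparison, which is where the garbled-circuit machinery comes in; plain additive sharing alone cannot compare values.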
PRECISE: PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare.
Chen, Feng; Wang, Shuang; Mohammed, Noman; Cheng, Samuel; Jiang, Xiaoqian
2014-10-01
Quality improvement (QI) requires systematic and continuous efforts to enhance healthcare services. A healthcare provider might wish to compare local statistics with those from other institutions in order to identify problems and develop interventions to improve the quality of care. However, the sharing of institutional information may be deterred by institutional privacy concerns, as publicizing such statistics could lead to embarrassment and even financial damage. In this article, we propose a PRivacy-prEserving Cloud-assisted quality Improvement Service in hEalthcare (PRECISE), which aims at enabling cross-institution comparison of healthcare statistics while protecting privacy. The proposed framework relies on a set of state-of-the-art cryptographic protocols, including homomorphic encryption and Yao's garbled circuit schemes. By securely pooling data from different institutions, PRECISE can rank the encrypted statistics to facilitate QI among participating institutions. We conducted experiments using the MIMIC II database and demonstrated the feasibility of the proposed PRECISE framework.
Sami, Musa Basser; Rabiner, Eugenii A; Bhattacharyya, Sagnik
2015-08-01
A significant body of epidemiological evidence has linked psychotic symptoms with both acute and chronic use of cannabis. Precisely how these effects of Δ9-tetrahydrocannabinol (THC), the main psychoactive ingredient of cannabis, are mediated at the neurochemical level is unclear. While abnormalities in multiple pathways may lead to schizophrenia, an abnormality in dopamine neurotransmission is considered to be the final common abnormality. One would thus expect cannabis use to be associated with dopamine signaling alterations. This is the first systematic review of all studies, both observational and experimental, examining the acute and chronic effects of cannabis or THC on the dopamine system in man. We aimed to review all studies conducted in man with any reported neurochemical outcomes related to the dopamine system after cannabis, cannabinoid or endocannabinoid administration or use. We identified 25 studies reporting outcomes on over 568 participants, of whom 244 belonged to the cannabis/cannabinoid exposure group. In man, there is as yet little direct evidence to suggest that cannabis use affects acute striatal dopamine release or chronic dopamine receptor status in healthy human volunteers. However, some work has suggested that acute cannabis exposure increases dopamine release in striatal and prefrontal areas in those genetically predisposed to, or at clinical high risk of, psychosis. Furthermore, recent studies suggest that chronic cannabis use blunts dopamine synthesis and dopamine release capacity. Further well-designed studies are required to definitively delineate the effects of cannabis use on the dopaminergic system in man. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.
Prochazka, Ivan; Kodet, Jan; Panek, Petr
2012-11-01
We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances with sub-picosecond precision and a systematic error of a few picoseconds. The design of the electronic circuit enables time tagging of pulses of interest to be carried out in parallel with the comparison of the time scales of the two timing devices. The key timing parameters of the circuit are: temperature dependence of the delay below 100 fs/K, timing stability (time deviation) better than 8 fs for averaging times from minutes to hours, sub-picosecond time transfer precision, and time transfer accuracy of a few picoseconds.
Revisiting the pH Effect on Orthophosphate Control of Plumbosolvency
Although solubility models for Pb(II) have largely been successful for giving corrosion control treatment guidance for over 2 decades, very little systematic research has been done to precisely define plumbosolvency responses to changes in pH, carbonate and phosphate concentratio...
Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan
2013-11-01
Volume quantification of lung nodules with multidetector computed tomography (CT) images provides useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition) and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted across dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with software A and for all slice thicknesses with software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exceptions of ASiR vs FBP at 1.25 mm and MBIR vs FBP at 0.625 mm, both with software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions.
In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithm, so that volumes quantified from scans with different reconstruction algorithms can be compared. The small difference found between the precision of FBP and iterative reconstructions could result both from iterative reconstruction's diminished noise reduction at the edges of the nodules and from its loss of resolution at high noise levels. The findings do not rule out a potential advantage of iterative reconstruction that might become evident in a study using a larger number of nodules or repeated scans.
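The accuracy and precision metrics analyzed above are conventionally computed from repeated measurements of a nodule of known volume: accuracy as the mean percent error against the reference, precision as the percent variability across repeats. A minimal sketch of that computation (function name and the coefficient-of-variation definition of precision are assumptions; the study's exact statistical definitions are not given in the abstract):

```python
import statistics

def volume_accuracy_precision(measured_volumes, true_volume):
    """Accuracy: mean percent error of the measured volumes vs. the
    known phantom volume. Precision: percent coefficient of variation
    (sample SD / mean) across the repeated measurements."""
    mean_v = statistics.mean(measured_volumes)
    accuracy = 100.0 * (mean_v - true_volume) / true_volume
    precision = 100.0 * statistics.stdev(measured_volumes) / mean_v
    return accuracy, precision
```

Under these definitions, a calibration step of the kind suggested above would shift the accuracy term (a per-algorithm bias correction) while leaving the precision term, which captures repeatability, essentially unchanged.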