Sample records for large threshold corrections

  1. Interplay of threshold resummation and hadron mass corrections in deep inelastic processes

    DOE PAGES

    Accardi, Alberto; Anderle, Daniele P.; Ringer, Felix

    2015-02-01

    We discuss hadron mass corrections and threshold resummation for deep-inelastic scattering lN → l'X and semi-inclusive annihilation e+e- → hX processes, and provide a prescription for how to consistently combine these two corrections while respecting all kinematic thresholds. We find an interesting interplay between threshold resummation and target mass corrections for deep-inelastic scattering at large values of Bjorken x_B. In semi-inclusive annihilation, on the contrary, the two considered corrections are relevant in different kinematic regions and do not affect each other. A detailed analysis is nonetheless of interest in the light of recent high-precision data from BaBar and Belle on pion and kaon production, with which we compare our calculations. For both deep-inelastic scattering and single-inclusive annihilation, the size of the combined corrections compared to the precision of world data is shown to be large. We therefore conclude that these theoretical corrections are relevant for global QCD fits in order to extract precise parton distributions at large Bjorken x_B, and fragmentation functions over the whole kinematic range.

  2. Threshold corrections to the bottom quark mass revisited

    DOE PAGES

    Anandakrishnan, Archana; Bryant, B. Charles; Raby, Stuart

    2015-05-19

    Threshold corrections to the bottom quark mass are often estimated under the approximation that the tan β enhanced contributions are dominant. In this work we revisit this common approximation made in estimating the supersymmetric threshold corrections to the bottom quark mass. We calculate the full one-loop supersymmetric corrections to the bottom quark mass and survey a large part of the phenomenological MSSM parameter space to study the validity of considering only the tan β enhanced corrections. Our analysis demonstrates that this approximation underestimates the size of the threshold corrections by ~12.5% for most of the considered parameter space. We discuss the consequences for fitting the bottom quark mass and for the effective couplings to Higgses. Here, we find that it is important to consider the additional contributions when fitting the bottom quark mass, but the modifications to the effective Higgs couplings are typically O(few)% for the majority of the parameter space considered.

  3. Threshold corrections to dimension-six proton decay operators in SUSY SU(5)

    NASA Astrophysics Data System (ADS)

    Kuwahara, Takumi

    2017-11-01

    Proton decay is a significant phenomenon to verify supersymmetric grand unified theories (SUSY GUTs). To predict the proton lifetime precisely, it is important to include the next-to-leading-order (NLO) corrections to the proton decay operators. In this talk, we have shown threshold corrections to the dimension-six proton decay operators in the minimal SUSY SU(5) GUT, its extended models with extra matter, and the missing partner SUSY SU(5) GUT. As a result, we have found that the threshold effects give rise to corrections of a few percent in the minimal setup and below 5% in its extension with extra matter, in spite of a large unified coupling at the GUT scale. On the other hand, in the missing partner model the correction to the proton decay rate is a suppression of about 60%, due to the number of component fields of the 75 and their mass splitting.

  4. A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer

    NASA Astrophysics Data System (ADS)

    Zheng, G.; Cheng, Y.; He, K.; Duan, F.; Ma, Y.

    2014-01-01

    The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, here we identified a new type of SCCA calculation discrepancy caused by the default multi-point baseline correction method. When exceeding a certain threshold carbon load, multi-point correction could cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples with three temperature protocols. For ambient samples, 22%, 36% and 12% of TC was underestimated by the three protocols, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such discrepancy was observed with only one of these protocols, indicating the need for a more refractory SCCA calibration substance. The discrepancy was less significant for the NIOSH (National Institute for Occupational Safety and Health)-like protocol compared with the other two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments). Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method was to use multi-point-corrected data below the determined threshold, and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.

  5. A newly identified calculation discrepancy of the Sunset semi-continuous carbon analyzer

    NASA Astrophysics Data System (ADS)

    Zheng, G. J.; Cheng, Y.; He, K. B.; Duan, F. K.; Ma, Y. L.

    2014-07-01

    The Sunset semi-continuous carbon analyzer (SCCA) is an instrument widely used for carbonaceous aerosol measurement. Despite previous validation work, in this study we identified a new type of SCCA calculation discrepancy caused by the default multipoint baseline correction method. When exceeding a certain threshold carbon load, multipoint correction could cause significant total carbon (TC) underestimation. This calculation discrepancy was characterized for both sucrose and ambient samples, with two protocols based on IMPROVE (Interagency Monitoring of PROtected Visual Environments) (i.e., IMPshort and IMPlong) and one NIOSH (National Institute for Occupational Safety and Health)-like protocol (rtNIOSH). For ambient samples, the IMPshort, IMPlong and rtNIOSH protocols underestimated 22, 36 and 12% of TC, respectively, with the corresponding thresholds being ~0, 20 and 25 μg C. For sucrose, however, such discrepancy was observed only with the IMPshort protocol, indicating the need for a more refractory SCCA calibration substance. Although the calculation discrepancy could be largely reduced by the single-point baseline correction method, the instrumental blanks of the single-point method were higher. The proposed correction method was to use multipoint-corrected data below the determined threshold, and single-point results beyond that threshold. The effectiveness of this correction method was supported by correlation with optical data.
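    The hybrid correction rule described in this record reduces to a simple conditional. The sketch below illustrates it with hypothetical function and variable names; the threshold value is protocol-dependent (the ~0, 20 and 25 μg C figures quoted above) and is passed in explicitly.

    ```python
    def corrected_tc(tc_multipoint, tc_singlepoint, carbon_load, threshold_ugc=20.0):
        """Hybrid SCCA total-carbon estimate (illustrative sketch, not the instrument firmware).

        Use the multipoint-corrected value below the protocol-specific threshold
        carbon load, where it is valid and has the lower blank, and fall back to
        the single-point value at higher loads, where multipoint correction
        underestimates TC.
        """
        if carbon_load < threshold_ugc:
            return tc_multipoint
        return tc_singlepoint

    # Example: a heavily loaded ambient sample analysed with an IMPlong-like protocol
    print(corrected_tc(tc_multipoint=24.1, tc_singlepoint=31.5, carbon_load=30.0))
    ```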

  6. Non-abelian factorisation for next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C. D.

    2016-12-01

    Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections. We define a nonabelian radiative jet function, organising collinear enhancements at NLP, and compute it for quark jets at one loop. We discuss in detail the issue of double counting between soft and collinear regions. Finally, we verify our prescription by reproducing all NLP logarithms in Drell-Yan production up to NNLO, including those associated with double real emission. Our results constitute an important step in the development of a fully general resummation formalism for NLP threshold effects.

  7. Overcoming the effects of false positives and threshold bias in graph theoretical analyses of neuroimaging data.

    PubMed

    Drakesmith, M; Caeyenberghs, K; Dutt, A; Lewis, G; David, A S; Jones, D K

    2015-09-01

    Graph theory (GT) is a powerful framework for quantifying topological features of neuroimaging-derived functional and structural networks. However, false positive (FP) connections arise frequently and influence the inferred topology of networks. Thresholding is often used to overcome this problem, but an appropriate threshold often relies on a priori assumptions, which will alter inferred network topologies. Four common network metrics (global efficiency, mean clustering coefficient, mean betweenness and small-worldness) were tested using a model tractography dataset. It was found that all four network metrics were significantly affected even by just one FP. Results also show that thresholding effectively dampens the impact of FPs, but at the expense of adding significant bias to network metrics. In a larger number (n=248) of tractography datasets, statistics were computed across random group permutations for a range of thresholds, revealing that statistics for network metrics varied significantly more than for non-network metrics (i.e., number of streamlines and number of edges). Varying degrees of network atrophy were introduced artificially to half the datasets, to test sensitivity to genuine group differences. For some network metrics, this atrophy was detected as significant (p<0.05, determined using permutation testing) only across a limited range of thresholds. We propose a multi-threshold permutation correction (MTPC) method, based on the cluster-enhanced permutation correction approach, to identify sustained significant effects across clusters of thresholds. This approach minimises requirements to determine a single threshold a priori. We demonstrate improved sensitivity of MTPC-corrected metrics to genuine group effects compared to an existing approach and demonstrate the use of MTPC on a previously published network analysis of tractography data derived from a clinical population. In conclusion, we show that there are large biases and instability induced by thresholding, making statistical comparisons of network metrics difficult. However, by testing for effects across multiple thresholds using MTPC, true group differences can be robustly identified. Copyright © 2015. Published by Elsevier Inc.
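    As a rough illustration of testing a network metric across many thresholds at once, the sketch below uses a simplified max-statistic permutation test over thresholds rather than the cluster-enhanced MTPC statistic of the paper; the function name, the group-mean-difference statistic, and the array shapes are assumptions for demonstration.

    ```python
    import numpy as np

    def multi_threshold_permutation_test(metric_a, metric_b, n_perm=1000, alpha=0.05, seed=0):
        """metric_a, metric_b: arrays of shape (n_subjects, n_thresholds) holding one
        network metric (e.g. global efficiency) computed at each threshold.
        Returns the per-threshold group difference, the permutation-based critical
        value, and a boolean mask of supra-threshold (candidate) thresholds."""
        rng = np.random.default_rng(seed)
        pooled = np.vstack([metric_a, metric_b])
        n_a = metric_a.shape[0]
        observed = metric_a.mean(axis=0) - metric_b.mean(axis=0)
        null_max = np.empty(n_perm)
        for p in range(n_perm):
            idx = rng.permutation(pooled.shape[0])          # shuffle group labels
            diff = pooled[idx[:n_a]].mean(axis=0) - pooled[idx[n_a:]].mean(axis=0)
            null_max[p] = np.abs(diff).max()                # max over thresholds controls family-wise error
        crit = np.quantile(null_max, 1 - alpha)
        return observed, crit, np.abs(observed) > crit      # clusters = runs of True in the mask
    ```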

  8. Theoretical studies of the potential surface for the F + H2 → HF + H reaction

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Walch, Stephen P.; Langhoff, Stephen R.; Taylor, Peter R.; Jaffe, Richard L.

    1987-01-01

    The F + H2 → HF + H potential energy hypersurface was studied in the saddle point and entrance channel regions. Using a large (5s 5p 3d 2f 1g/4s 3p 2d) atomic natural orbital basis set, a classical barrier height of 1.86 kcal/mole was obtained at the CASSCF/multireference CI (MRCI) level after correcting for basis set superposition error and including a Davidson correction (+Q) for higher excitations. Based upon an analysis of the computed results, the true classical barrier is estimated to be about 1.4 kcal/mole. The location of the bottleneck on the lowest vibrationally adiabatic potential curve was also computed and the translational energy threshold determined from a one-dimensional tunneling calculation. Using the difference between the calculated and experimental threshold to adjust the classical barrier height on the computed surface yields a classical barrier in the range of 1.0 to 1.5 kcal/mole. Combining the results of the direct estimates of the classical barrier height with the empirical values obtained from the approximate calculations of the dynamical threshold, it is predicted that the true classical barrier height is 1.4 ± 0.4 kcal/mole. Arguments are presented in favor of including the relatively large +Q correction obtained when nine electrons are correlated at the CASSCF/MRCI level.

  9. Large NLO corrections in $t\bar{t}W^{\pm}$ and $t\bar{t}t\bar{t}$ hadroproduction from supposedly subleading EW contributions

    NASA Astrophysics Data System (ADS)

    Frederix, Rikkert; Pagani, Davide; Zaro, Marco

    2018-02-01

    We calculate the complete-NLO predictions for $t\bar{t}W^{\pm}$ and $t\bar{t}t\bar{t}$ production in proton-proton collisions at 13 and 100 TeV. All the non-vanishing contributions of $O(\alpha_s^i \alpha^j)$ with $i+j=3,4$ for $t\bar{t}W^{\pm}$ and $i+j=4,5$ for $t\bar{t}t\bar{t}$ are evaluated without any approximation. For $t\bar{t}W^{\pm}$ we find that, due to the presence of $tW \to tW$ scattering, at 13 (100) TeV the $O(\alpha_s \alpha^3)$ contribution is about 12 (70)% of the LO, i.e., it is larger than the so-called NLO EW corrections (the $O(\alpha_s^2 \alpha^2)$ terms) and has opposite sign. In the case of $t\bar{t}t\bar{t}$ production, large contributions from electroweak $tt \to tt$ scattering are already present at LO in the $O(\alpha_s^3 \alpha)$ and $O(\alpha_s^2 \alpha^2)$ terms. For the same reason we find that both NLO terms of $O(\alpha_s^4 \alpha)$, i.e., the NLO EW corrections, and $O(\alpha_s^3 \alpha^2)$ are large (±15% of the LO) and their relative contributions strongly depend on the values of the renormalisation and factorisation scales. However, large accidental cancellations are present (away from the threshold region) between these two contributions. Moreover, the NLO corrections strongly depend on the kinematics and are particularly large at the threshold, where even the relative contribution from the $O(\alpha_s^2 \alpha^3)$ terms amounts to tens of percent.
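    As a compact way to read the power counting quoted above, the complete-NLO cross section for $t\bar{t}W^{\pm}$ can be organised schematically as below; the $\Sigma_{i+j,k}$ are just labels for the individual perturbative coefficients, and this generic decomposition is inferred from the abstract rather than copied from the paper.

    ```latex
    \Sigma^{t\bar{t}W^{\pm}}_{\mathrm{LO}}  \;=\; \alpha_s^{2}\alpha\,\Sigma_{3,0}
                                            \;+\; \alpha_s\alpha^{2}\,\Sigma_{3,1}
                                            \;+\; \alpha^{3}\,\Sigma_{3,2},
    \qquad
    \Sigma^{t\bar{t}W^{\pm}}_{\mathrm{NLO}} \;=\; \alpha_s^{3}\alpha\,\Sigma_{4,0}
                                            \;+\; \alpha_s^{2}\alpha^{2}\,\Sigma_{4,1}
                                            \;+\; \alpha_s\alpha^{3}\,\Sigma_{4,2}
                                            \;+\; \alpha^{4}\,\Sigma_{4,3},
    ```

    where $\Sigma_{4,1}$ corresponds to the usual "NLO EW" corrections and $\Sigma_{4,2}$ contains the $tW \to tW$ scattering contribution that the abstract finds to be unexpectedly large.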

  10. Superconducting quantum circuits at the surface code threshold for fault tolerance.

    PubMed

    Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M

    2014-04-24

    A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.

  11. Complete next-to-leading-order calculation for pion production in nucleon-nucleon collisions at threshold

    NASA Astrophysics Data System (ADS)

    Hanhart, C.; Kaiser, N.

    2002-11-01

    Based on a counting scheme that explicitly takes into account the large momentum √(Mm_π) characteristic for pion production in nucleon-nucleon collisions, we calculate all diagrams for the reaction NN → NNπ at threshold up to next-to-leading order. At this order there are no free parameters and the size of the next-to-leading-order contributions is in line with the expectation from power counting. The sum of loop corrections at that order vanishes for the process pp → ppπ0 at threshold. The total contribution at next-to-leading order from loop diagrams that include the delta degree of freedom vanishes at threshold in both reaction channels pp → ppπ0, pnπ+.

  12. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    PubMed

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005), 10.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.
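    A quick numerical counterpart to the analytical result above: integrate the finite-N Kuramoto model with evenly spaced (midpoint-rule) frequencies and bisect on the frequency half-width to locate the locking threshold. The integration parameters and locking criterion are rough assumptions; near the threshold the estimate sharpens only with longer integration times.

    ```python
    import numpy as np

    def locks(gamma, N=30, K=1.0, T=400.0, dt=0.05):
        """Return True if N oscillators with evenly spaced frequencies on
        [-gamma, gamma] (midpoint rule) phase-lock at coupling K."""
        omega = gamma * (2 * np.arange(N) + 1 - N) / N
        theta = np.zeros(N)
        for _ in range(int(T / dt)):                  # forward-Euler integration
            coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
            theta += dt * (omega + coupling)
        drift = omega + (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        return np.ptp(drift) < 1e-3                   # locked: all instantaneous frequencies agree

    lo, hi = 0.0, 1.0                                 # bracket for the critical half-width (K = 1)
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if locks(mid) else (lo, mid)
    print("estimated locking half-width:", round(0.5 * (lo + hi), 3))
    ```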

  13. A Study on a Microwave-Driven Smart Material Actuator

    NASA Technical Reports Server (NTRS)

    Choi, Sang H.; Chu, Sang-Hyon; Kwak, M.; Cutler, A. D.

    2001-01-01

    NASA's Next Generation Space Telescope (NGST) has a large deployable, fragmented optical surface (greater than or equal to 2.8 m in diameter) that requires autonomous correction of deployment misalignments and thermal effects. Its high and stringent resolution requirement imposes a great deal of challenge for optical correction. The threshold value for optical correction is dictated by lambda/20 (30 nm for NGST optics). Control of an adaptive optics array consisting of a large number of optical elements and smart material actuators is so complex that power distribution for activation and control of actuators must be done by other than hard-wired circuitry. The concept of microwave-driven smart actuators is envisioned as the best option to alleviate the complexity associated with hard-wiring. A microwave-driven actuator was studied to realize such a concept for future applications. Piezoelectric material was used as an actuator that shows dimensional change under a high electric field. The actuators were coupled with a microwave rectenna and tested to characterize the coupling with the electromagnetic wave. In experiments, a 3x3 rectenna patch array generated more than 50 volts, which is the threshold voltage for a 30-nm displacement of a single piezoelectric material. Overall, the test results indicate that the microwave-driven actuator concept can be adopted for NGST applications.

  14. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    NASA Astrophysics Data System (ADS)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  15. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate the GKP qubit with the required squeezing level, 14.8 dB, of the existing fault-tolerant quantum computation. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected, measurement-based, quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of the current experimental technology. Hence, this work can considerably alleviate this experimental requirement and take a step closer to the realization of large-scale quantum computation.

  16. Temporal integration property of stereopsis after higher-order aberration correction

    PubMed Central

    Kang, Jian; Dai, Yun; Zhang, Yudong

    2015-01-01

    Based on a binocular adaptive optics visual simulator, we investigated the effect of higher-order aberration correction on the temporal integration property of stereopsis. Stereo threshold for line stimuli, viewed in 550 nm monochromatic light, was measured as a function of exposure duration, with higher-order aberrations uncorrected, binocularly corrected or monocularly corrected. Under all optical conditions, stereo threshold decreased with increasing exposure duration until a steady-state threshold was reached. The critical duration was determined by a quadratic summation model, and the high goodness of fit suggested this model was reasonable. For normal subjects, the slope for stereo threshold versus exposure duration was about −0.5 on logarithmic coordinates, and the critical duration was about 200 ms. Both the slope and the critical duration were independent of the optical condition of the eye, showing no significant effect of higher-order aberration correction on the temporal integration property of stereopsis. PMID:26601010

  17. SU(6) GUT breaking on a projective plane

    NASA Astrophysics Data System (ADS)

    Anandakrishnan, Archana; Raby, Stuart

    2013-03-01

    We consider a 6-dimensional supersymmetric SU(6) gauge theory and compactify two extra-dimensions on a multiply-connected manifold with non-trivial topology. The SU(6) is broken down to the Standard Model gauge groups in two steps by an orbifold projection, followed by a Wilson line. The Higgs doublets of the low energy electroweak theory come from a chiral adjoint of SU(6). We thus have gauge-Higgs unification. The three families of the Standard Model can either be located in the 6D bulk or at 4D N=1 supersymmetric fixed points. We calculate the Kaluza-Klein spectrum of states arising as a result of the orbifolding. We also calculate the threshold corrections to the coupling constants due to this tower of states at the lowest compactification scale. We study the regions of parameter space of this model where the threshold corrections are consistent with low energy physics. We find that the couplings receive only logarithmic corrections at all scales. This feature can be attributed to the large N=2 6D SUSY of the underlying model.

  18. Beauty and charm production in fixed target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidonakis, Nikolaos; Vogt, Ramona

    We present calculations of NNLO threshold corrections for beauty and charm production in π⁻p and pp interactions at fixed-target experiments. Recent calculations for heavy quark hadroproduction have included next-to-next-to-leading-order (NNLO) soft-gluon corrections [1] to the double differential cross section from threshold resummation techniques [2]. These corrections are important for near-threshold beauty and charm production at fixed-target experiments, including HERA-B and some of the current and future heavy ion experiments.

  19. Unipolar Terminal-Attractor Based Neural Associative Memory with Adaptive Threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1996-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner-product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.

  20. Unipolar terminal-attractor based neural associative memory with adaptive threshold

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor); Barhen, Jacob (Inventor); Farhat, Nabil H. (Inventor); Wu, Chwan-Hwa (Inventor)

    1993-01-01

    A unipolar terminal-attractor based neural associative memory (TABAM) system with adaptive threshold for perfect convergence is presented. By adaptively setting the threshold values for the dynamic iteration for the unipolar binary neuron states with terminal-attractors for the purpose of reducing the spurious states in a Hopfield neural network for associative memory and using the inner product approach, perfect convergence and correct retrieval is achieved. Simulation is completed with a small number of stored states (M) and a small number of neurons (N) but a large M/N ratio. An experiment with optical exclusive-OR logic operation using LCTV SLMs shows the feasibility of optoelectronic implementation of the models. A complete inner-product TABAM is implemented using a PC for calculation of adaptive threshold values to achieve a unipolar TABAM (UIT) in the case where there is no crosstalk, and a crosstalk model (CRIT) in the case where crosstalk corrupts the desired state.
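    To make the inner-product retrieval with an adaptively set threshold concrete, here is a toy unipolar (0/1) associative-memory sketch. It omits the terminal-attractor dynamics and is only meant to illustrate the threshold-setting idea, not to reproduce the patented TABAM algorithm; the midpoint threshold rule is an assumption.

    ```python
    import numpy as np

    def recall(stored, probe, n_iter=10):
        """Iteratively clean up a unipolar probe pattern using inner products
        with the stored patterns and an adaptive threshold on the resulting field."""
        stored = np.asarray(stored, dtype=float)            # M x N unipolar patterns
        state = np.asarray(probe, dtype=float).copy()
        for _ in range(n_iter):
            overlaps = stored @ state                       # inner product with each stored pattern
            field = overlaps @ stored                       # superposition weighted by the overlaps
            threshold = 0.5 * (field.max() + field.min())   # adaptive threshold for this iteration
            state = (field > threshold).astype(float)       # unipolar (0/1) update
        return state

    patterns = [[1, 0, 1, 0, 1, 0, 1, 0],
                [1, 1, 0, 0, 1, 1, 0, 0]]
    noisy = [1, 0, 1, 0, 1, 0, 0, 0]                        # pattern 0 with one bit flipped
    print(recall(patterns, noisy))                          # recovers pattern 0
    ```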

  1. Comparison of memory thresholds for planar qudit geometries

    NASA Astrophysics Data System (ADS)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  2. Fuel cell flooding detection and correction

    DOEpatents

    DiPierno Bosco, Andrew; Fronk, Matthew Howard

    2000-08-15

    Method and apparatus for monitoring H2-O2 PEM fuel cells to detect and correct flooding. The pressure drop across a given H2 or O2 flow field is monitored and compared to predetermined thresholds of unacceptability. If the pressure drop exceeds a threshold of unacceptability, corrective measures are automatically initiated.
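    The detection-and-correction logic amounts to a threshold comparison on a measured pressure drop. The sketch below is a schematic of that logic only; the sensor readings, units, threshold value and the purge routine are hypothetical placeholders, not part of the patent.

    ```python
    def check_flow_field(pressure_in_kpa, pressure_out_kpa, threshold_kpa, purge):
        """Compare the flow-field pressure drop with a threshold of unacceptability
        and trigger a corrective action (e.g. a purge) when it is exceeded."""
        drop = pressure_in_kpa - pressure_out_kpa
        if drop > threshold_kpa:
            purge()
            return True          # corrective measure initiated
        return False

    flooded = check_flow_field(152.0, 138.0, threshold_kpa=10.0,
                               purge=lambda: print("purging cathode flow field"))
    print("flooding detected:", flooded)
    ```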

  3. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For detection of defects, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above some specific values of gray level in the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
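    The following sketch shows one way such a global adaptive percentile threshold could look: the percentile is switched to a lower value when a large fraction of gradient pixels is strong (large defects) and kept high otherwise (small defects). The two-level rule and all constants are illustrative assumptions, not the exact scheme of the paper.

    ```python
    import numpy as np

    def adaptive_percentile_threshold(gray, strong_level=64.0, strong_frac=0.01,
                                      pct_small=99.5, pct_large=97.0):
        """Segment defect candidates in a grayscale image via percentile
        thresholding of the gradient magnitude, with the percentile adapted to
        how much of the image carries a strong gradient."""
        gy, gx = np.gradient(gray.astype(float))
        grad = np.hypot(gx, gy)                           # gradient-magnitude image
        frac_strong = np.mean(grad > strong_level)        # share of "strong" gradient pixels
        pct = pct_large if frac_strong > strong_frac else pct_small
        return grad > np.percentile(grad, pct)            # binary defect mask

    image = np.random.randint(0, 256, (256, 256))
    mask = adaptive_percentile_threshold(image)
    print(mask.sum(), "pixels flagged")
    ```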

  4. Aging and Visual Counting

    PubMed Central

    Li, Roger W.; MacKeben, Manfred; Chat, Sandy W.; Kumar, Maya; Ngo, Charlie; Levi, Dennis M.

    2010-01-01

    Background Much previous work on how normal aging affects visual enumeration has been focused on the response time required to enumerate, with unlimited stimulus duration. There is a fundamental question, not yet addressed, of how many visual items the aging visual system can enumerate in a "single glance", without the confounding influence of eye movements. Methodology/Principal Findings We recruited 104 observers with normal vision across the age span (age 21–85). They were briefly (200 ms) presented with a number of well-separated black dots against a gray background on a monitor screen, and were asked to judge the number of dots. By limiting the stimulus presentation time, we can determine the maximum number of visual items an observer can correctly enumerate at a criterion level of performance (counting threshold, defined as the number of visual items at which there is a ≈63% correct rate on a psychometric curve), without confounding by eye movements. Our findings reveal a 30% decrease in the mean counting threshold of the oldest group (age 61–85: ∼5 dots) when compared with the youngest groups (age 21–40: 7 dots). Surprisingly, despite the decreased counting threshold, the average counting accuracy function (defined as the mean number of dots reported for each number tested) is largely unaffected by age, reflecting that the threshold loss can be primarily attributed to increased random errors. We further expanded this interesting finding to show that both young and old adults tend to over-count small numbers, but older observers over-count more. Conclusion/Significance Here we show that age reduces the ability to correctly enumerate in a glance, but the accuracy (veridicality), on average, remains unchanged with advancing age. Control experiments indicate that the degraded performance cannot be explained by optical, retinal or other perceptual factors, but is cortical in origin. PMID:20976149

  5. Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates

    PubMed Central

    Malone, Brian J.

    2017-01-01

    Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
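    The two-step cleanup described above (a per-pixel gain threshold followed by a cluster-mass threshold on contiguous surviving pixels) can be sketched in a few lines; the threshold values and the synthetic STRF below are arbitrary examples, not fitted to the paper's data.

    ```python
    import numpy as np
    from scipy import ndimage

    def cluster_mass_threshold(sta, gain_thresh, mass_thresh):
        """Zero out STA pixels below the gain threshold, then discard contiguous
        clusters whose summed absolute gain (cluster mass) is below the mass threshold."""
        supra = np.abs(sta) > gain_thresh                 # step 1: per-pixel gain threshold
        labels, n_clusters = ndimage.label(supra)         # contiguous time-frequency clusters
        cleaned = np.zeros_like(sta)
        for k in range(1, n_clusters + 1):
            region = labels == k
            if np.abs(sta[region]).sum() > mass_thresh:   # step 2: cluster-mass threshold
                cleaned[region] = sta[region]
        return cleaned

    strf = np.random.randn(40, 100) * 0.1                 # noise floor
    strf[10:14, 20:30] += 1.0                             # a synthetic excitatory subfield
    kept = cluster_mass_threshold(strf, gain_thresh=0.5, mass_thresh=5.0)
    print(np.count_nonzero(kept), "pixels retained")
    ```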

  6. 30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...

  7. 30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...

  8. 30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...

  9. 30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Follow-up corrective measures when a standard threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR UNIFORM MINE HEALTH REGULATIONS OCCUPATIONAL NOISE EXPOSURE § 62.174 Follow-up corrective measures when a standard...

  10. Assessment of statistical methods used in library-based approaches to microbial source tracking.

    PubMed

    Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D

    2003-12-01

    Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.

  11. Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection

    NASA Astrophysics Data System (ADS)

    Naveau, Philippe; Huser, Raphael; Ribereau, Pierre; Hannart, Alexis

    2016-04-01

    In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that a lot of observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and hourly precipitation recorded in Lyon, France.
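    One simple parametric family with the properties described above (EVT-compatible lower and upper tails, a smooth transition, no threshold, few parameters) can be written as a transform of the generalized Pareto distribution function; the specific transform $G(v)=v^\kappa$ below is given as an illustrative member of the family, not as the paper's full model class.

    ```latex
    F(x) \;=\; G\!\left\{ H_\xi\!\left(\tfrac{x}{\sigma}\right) \right\},
    \qquad
    H_\xi(z) \;=\; 1-\left(1+\xi z\right)^{-1/\xi},
    \qquad
    G(v) \;=\; v^{\kappa},
    ```

    so that $F(x)\propto x^{\kappa}$ for small rainfall intensities while the upper tail keeps the generalized Pareto shape parameter $\xi$.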

  12. Nature of collective decision-making by simple yes/no decision units.

    PubMed

    Hasegawa, Eisuke; Mizumoto, Nobuaki; Kobayashi, Kazuya; Dobata, Shigeto; Yoshimura, Jin; Watanabe, Saori; Murakami, Yuuka; Matsuura, Kenji

    2017-10-31

    The study of collective decision-making spans various fields such as brain and behavioural sciences, economics, management sciences, and artificial intelligence. Despite these interdisciplinary applications, little is known regarding how a group of simple 'yes/no' units, such as neurons in the brain, can select the best option among multiple options. One prerequisite for achieving such correct choices by the brain is correct evaluation of relative option quality, which enables a collective decision maker to efficiently choose the best option. Here, we applied a sensory discrimination mechanism using yes/no units with differential thresholds to a model for making a collective choice among multiple options. The performance corresponding to the correct choice was shown to be affected by various parameters. High performance can be achieved by tuning the threshold distribution with the options' quality distribution. The number of yes/no units allocated to each option and its variability profoundly affects performance. When this variability is large, a quorum decision becomes superior to a majority decision under some conditions. The general features of this collective decision-making by a group of simple yes/no units revealed in this study suggest that this mechanism may be useful in applications across various fields.
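    A toy simulation of the mechanism makes the idea concrete: each yes/no unit has its own fixed threshold, answers "yes" whenever its noisy percept of an option's quality exceeds that threshold, and the group takes the option with the most "yes" votes. All distributions, sizes and noise levels below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    qualities = np.array([0.55, 0.60, 0.70])                       # option 2 is objectively best
    n_units = 50                                                   # yes/no units allocated per option
    thresholds = rng.uniform(0.0, 1.0, (qualities.size, n_units))  # differential response thresholds

    n_trials, correct = 1000, 0
    for _ in range(n_trials):
        percept = qualities[:, None] + rng.normal(0.0, 0.1, thresholds.shape)
        yes_votes = (percept > thresholds).sum(axis=1)             # "yes" count for each option
        correct += int(np.argmax(yes_votes) == np.argmax(qualities))   # majority-style choice
    print("fraction of correct collective choices:", correct / n_trials)
    ```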

  13. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  14. Efficacy and workload analysis of a fixed vertical couch position technique and a fixed‐action–level protocol in whole‐breast radiotherapy

    PubMed Central

    Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2015-01-01

    Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no‐action–level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record‐and‐verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of the skin mobility on the anteroposterior patient setup reproducibility in whole‐breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position‐based patient setup in WBRT. The possibility to introduce a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed‐action–level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off‐line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections with 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off‐line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action–level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was proved to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s
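    The off-line rules compared in this study can be mimicked on simulated daily setup errors: the NAL protocol always corrects the mean error of the first few fractions, while the fixed-action-level variant applies that correction only when it exceeds a threshold (2.5 mm in the study). The error magnitudes below are invented for illustration, not the patients' data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def residual_errors(daily_errors, k=5, action_level=0.0):
        """Apply an off-line correction equal to the mean of the first k fractions,
        but only if it exceeds the action level (0.0 reproduces the NAL protocol)."""
        correction = daily_errors[:k].mean()
        if abs(correction) < action_level:
            correction = 0.0                      # below threshold: no correction applied
        out = daily_errors.copy()
        out[k:] -= correction                     # correction acts from fraction k+1 onward
        return out

    daily = 2.0 + rng.normal(0.0, 2.2, 25)        # 2 mm systematic + 2.2 mm random error, 25 fractions
    print("NAL residual mean (mm):", residual_errors(daily, action_level=0.0)[5:].mean().round(2))
    print("2.5 mm action level (mm):", residual_errors(daily, action_level=2.5)[5:].mean().round(2))
    ```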

  15. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    PubMed Central

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804

  16. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    PubMed

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
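    The regression step described above reduces to fitting a low-order polynomial to median thresholds as a function of age and reading off differences between ages. The sketch below uses made-up placeholder "medians" purely to show the mechanics; it does not reproduce the NHANES values or the proposed tables.

    ```python
    import numpy as np

    ages = np.arange(20, 76)
    median_thresh_db = 0.006 * (ages - 20) ** 2 + 0.05 * (ages - 20)   # placeholder medians, one frequency

    fit = np.poly1d(np.polyfit(ages, median_thresh_db, deg=2))         # simple polynomial fit

    def age_correction(age_now, age_baseline, curve=fit):
        """Expected ageing-related hearing threshold shift (dB) between two audiogram ages."""
        return curve(age_now) - curve(age_baseline)

    print(round(age_correction(75, 60), 1), "dB expected shift from age 60 to 75")
    ```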

  17. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses

    PubMed Central

    Hirasawa, Kazunori; Ito, Hikaru; Ohori, Yukari; Takano, Yui; Shoji, Nobuyuki

    2017-01-01

    AIM To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. METHODS Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). RESULTS The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB (P<0.001), while that of mCL-D with the 24-2 protocol significantly decreased by 1.5 dB (P=0.0427), as compared with that of baseline. Although there was no significant difference between the MD of baseline and mCL-D with the 24-2 and 10-2 protocols, the MD of mCL-N was significantly decreased by 1.0-1.3 dB (P<0.001) as compared with that of both baseline and mCL-D, with both 24-2 and 10-2 protocols. There was no significant difference in the PSD among the three refractive conditions with both the 24-2 and 10-2 protocols. CONCLUSION Despite the induced mydriasis and the optical design of the multifocal lens used in this study, our results indicated that, when the dome-shaped visual field test is performed with eyes with large pupils and wearing refractive multifocal CLs, distance correction without additional near correction is to be recommended. PMID:29062776

  18. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses.

    PubMed

    Hirasawa, Kazunori; Ito, Hikaru; Ohori, Yukari; Takano, Yui; Shoji, Nobuyuki

    2017-01-01

    To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB (P<0.001), while that of mCL-D with the 24-2 protocol significantly decreased by 1.5 dB (P=0.0427), as compared with that of baseline. Although there was no significant difference between the MD of baseline and mCL-D with the 24-2 and 10-2 protocols, the MD of mCL-N was significantly decreased by 1.0-1.3 dB (P<0.001) as compared with that of both baseline and mCL-D, with both 24-2 and 10-2 protocols. There was no significant difference in the PSD among the three refractive conditions with both the 24-2 and 10-2 protocols. Despite the induced mydriasis and the optical design of the multifocal lens used in this study, our results indicated that, when the dome-shaped visual field test is performed with eyes with large pupils and wearing refractive multifocal CLs, distance correction without additional near correction is to be recommended.

  19. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    PubMed

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  20. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics

    PubMed Central

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-01-01

    In femtosecond laser ophthalmic surgery tissue dissection is achieved by photodisruption based on laser induced optical breakdown. In order to minimize collateral damage to the eye laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as eye model and determined breakdown threshold in single pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wave front error of only one third of the wavelength the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery. PMID:23761849

  1. Double Resummation for Higgs Production

    NASA Astrophysics Data System (ADS)

    Bonvini, Marco; Marzani, Simone

    2018-05-01

    We present the first double-resummed prediction of the inclusive cross section for the main Higgs production channel in proton-proton collisions, namely, gluon fusion. Our calculation incorporates to all orders in perturbation theory two distinct towers of logarithmic corrections which are enhanced, respectively, at threshold, i.e., large x , and in the high-energy limit, i.e., small x . Large-x logarithms are resummed to next-to-next-to-next-to-leading logarithmic accuracy, while small-x ones to leading logarithmic accuracy. The double-resummed cross section is furthermore matched to the state-of-the-art fixed-order prediction at next-to-next-to-next-to-leading accuracy. We find that double resummation corrects the Higgs production rate by 2% at the currently explored center-of-mass energy of 13 TeV and its impact reaches 10% at future circular colliders at 100 TeV.

  2. Mapping Shallow Landslide Slope Instability at Large Scales Using Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Avalon Cullen, C.; Kashuk, S.; Temimi, M.; Suhili, R.; Khanbilvardi, R.

    2015-12-01

    Rainfall-induced landslides are one of the most frequent hazards on sloping terrain. They lead to great economic losses and fatalities worldwide. Most factors inducing shallow landslides are local and can only be mapped with high levels of uncertainty at larger scales. This work presents an attempt to determine slope instability at large scales. Buffer and threshold techniques are used to downscale areas and minimize uncertainties. Four static parameters (slope angle, soil type, land cover and elevation) for 261 shallow rainfall-induced landslides in the continental United States are examined. ASTER GDEM is used as the basis for topographical characterization of slope and buffer analysis. Slope angle threshold assessment at the 50, 75, 95, 98, and 99 percentiles is tested locally. Further analysis of each threshold in relation to other parameters is investigated in a logistic regression environment for the continental U.S. It is determined that lower than 95-percentile thresholds underestimate slope angles. Best regression fit can be achieved when utilizing the 99-threshold slope angle. This model predicts the highest number of cases correctly at 87.0% accuracy. A one-unit rise in the 99-threshold range increases landslide likelihood by 11.8%. The logistic regression model is carried over to ArcGIS where all variables are processed based on their corresponding coefficients. A regional slope instability map for the continental United States is created and analyzed against the available landslide records and their spatial distributions. It is expected that future inclusion of dynamic parameters like precipitation and other proxies like soil moisture into the model will further improve accuracy.
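
    As a rough illustration of the screening step described above (not the authors' actual model or data), the sketch below fits a logistic regression to synthetic cells with a 99th-percentile slope predictor and shows how a fitted coefficient translates into a per-unit change in landslide odds, the kind of interpretation behind the quoted 11.8% figure. All feature names and numbers are hypothetical.

```python
# Hypothetical sketch, not the study's model: logistic regression of landslide
# occurrence on static predictors, with the slope coefficient read off as an
# odds ratio per one-unit rise (cf. the 11.8% figure quoted above).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
slope99 = rng.uniform(0.0, 45.0, n)          # 99th-percentile slope angle (deg)
elevation_km = rng.uniform(0.0, 3.0, n)      # elevation (km)
X = np.column_stack([slope99, elevation_km])

# Synthetic labels: steeper cells are more likely to host a recorded landslide.
p_true = 1.0 / (1.0 + np.exp(-(0.11 * slope99 - 3.5)))
y = (rng.random(n) < p_true).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratio_per_degree = float(np.exp(model.coef_[0][0]))
print(f"training accuracy: {model.score(X, y):.2f}")
print(f"odds multiplier per degree of slope: {odds_ratio_per_degree:.3f}")
# e.g. a multiplier of ~1.12 would correspond to a ~12% increase in odds per unit.
```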

  3. Thermoelectricity near Anderson localization transitions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kaoru; Aharony, Amnon; Entin-Wohlman, Ora; Hatano, Naomichi

    2017-10-01

    The electronic thermoelectric coefficients are analyzed in the vicinity of one and two Anderson localization thresholds in three dimensions. For a single mobility edge, we correct and extend previous studies and find universal approximants which allow us to deduce the critical exponent for the zero-temperature conductivity from thermoelectric measurements. In particular, we find that at nonzero low temperatures the Seebeck coefficient and the thermoelectric efficiency can be very large on the "insulating" side, for chemical potentials below the (zero-temperature) localization threshold. Corrections to the leading power-law singularity in the zero-temperature conductivity are shown to introduce nonuniversal temperature-dependent corrections to the otherwise universal functions which describe the Seebeck coefficient, the figure of merit, and the Wiedemann-Franz ratio. Next, the thermoelectric coefficients are shown to have interesting dependences on the system size. While the Seebeck coefficient decreases with decreasing size, the figure of merit first decreases but then increases, while the Wiedemann-Franz ratio first increases but then decreases as the size decreases. Small (but finite) samples may thus have larger thermoelectric efficiencies. In the last part we study thermoelectricity in systems with a pair of localization edges, the ubiquitous situation in random systems near the centers of electronic energy bands. As the disorder increases, the two thresholds approach each other, and then the Seebeck coefficient and the figure of merit increase significantly, as expected from the general arguments of Mahan and Sofo [G. D. Mahan and J. O. Sofo, Proc. Natl. Acad. Sci. USA 93, 7436 (1996), 10.1073/pnas.93.15.7436] for a narrow energy range of the zero-temperature metallic behavior.
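
    For readers unfamiliar with the quantities named above, the standard textbook definitions (not taken from this paper) are $$ZT = \frac{S^{2}\sigma T}{\kappa}, \qquad L = \frac{\kappa_{e}}{\sigma T}, \qquad L_{0} = \frac{\pi^{2}}{3}\left(\frac{k_{B}}{e}\right)^{2},$$ where S is the Seebeck coefficient, σ the electrical conductivity, κ the total and κ_e the electronic thermal conductivity; ZT is the dimensionless figure of merit, and the Wiedemann-Franz ratio compares L to the Sommerfeld value L_0.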

  4. Network capability estimation. Vela network evaluation and automatic processing research. Technical report. [NETWORTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, N.S.

    1976-09-24

    NETWORTH is a computer program which calculates the detection and location capability of seismic networks. A modified version of NETWORTH has been developed. This program has been used to evaluate the effect of station 'downtime', the signal amplitude variance, and the station detection threshold upon network detection capability. In this version all parameters may be changed separately for individual stations. The capability of using signal amplitude corrections has been added. The function of amplitude corrections is to remove possible bias in the magnitude estimate due to inhomogeneous signal attenuation. These corrections may be applied to individual stations, individual epicenters, or individual station/epicenter combinations. An option has been added to calculate the effect of station 'downtime' upon network capability. This study indicates that, if capability loss due to detection errors can be minimized, then station detection threshold and station reliability will be the fundamental limits to network performance. An evaluation of a baseline network of thirteen stations has been performed. These stations are as follows: Alaskan Long Period Array (ALPA); Ankara (ANK); Chiang Mai (CHG); Korean Seismic Research Station (KSRS); Large Aperture Seismic Array (LASA); Mashhad (MSH); Mundaring (MUN); Norwegian Seismic Array (NORSAR); New Delhi (NWDEL); Red Knife, Ontario (RK-ON); Shillong (SHL); Taipei (TAP); and White Horse, Yukon (WH-YK).
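
    The record above does not include the program itself; the sketch below is only a generic illustration of the kind of calculation such a tool performs: per-station detection probabilities from an assumed Gaussian threshold model (with optional amplitude corrections and an uptime factor) combined into the probability that at least m stations detect an event. All station parameters are invented.

```python
# Illustrative sketch (not NETWORTH itself): probability that at least m stations
# of a network detect an event of magnitude mb, assuming each station detects when
# the station magnitude exceeds a Gaussian threshold, and including station uptime.
import numpy as np
from scipy.stats import norm

def station_detection_prob(mb, threshold, sigma, amp_correction=0.0, uptime=1.0):
    """P(detect) for one station: Gaussian threshold model with an optional
    station/epicenter amplitude (bias) correction and an availability factor."""
    return uptime * norm.cdf((mb + amp_correction - threshold) / sigma)

def network_detection_prob(mb, stations, m_required=4):
    """P(at least m_required stations detect), via the Poisson-binomial recursion."""
    probs = [station_detection_prob(mb, **s) for s in stations]
    dist = np.zeros(len(probs) + 1)
    dist[0] = 1.0
    for p in probs:                      # build distribution of number of detections
        dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p
        dist[0] *= (1 - p)
    return dist[m_required:].sum()

# Hypothetical 13-station network with varying thresholds, bias corrections, uptime.
rng = np.random.default_rng(1)
stations = [dict(threshold=t, sigma=0.35, amp_correction=c, uptime=u)
            for t, c, u in zip(rng.uniform(3.6, 4.4, 13),
                               rng.normal(0.0, 0.1, 13),
                               rng.uniform(0.85, 0.99, 13))]
for mb in (3.8, 4.2, 4.6):
    print(mb, round(network_detection_prob(mb, stations), 3))
```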

  5. Dealing with ocular artifacts on lateralized ERPs in studies of visual-spatial attention and memory: ICA correction versus epoch rejection.

    PubMed

    Drisdelle, Brandi Lee; Aubin, Sébrina; Jolicoeur, Pierre

    2017-01-01

    The objective of the present study was to assess the robustness and reliability of independent component analysis (ICA) as a method for ocular artifact correction in electrophysiological studies of visual-spatial attention and memory. The N2pc and sustained posterior contralateral negativity (SPCN), electrophysiological markers of visual-spatial attention and memory, respectively, are lateralized posterior ERPs typically observed following the presentation of lateral stimuli (targets and distractors) along with instructions to maintain fixation on the center of the visual search for the entire trial. Traditionally, trials in which subjects may have displaced their gaze are rejected based on a cutoff threshold, minimizing electrophysiological contamination by saccades. Given the loss of data resulting from rejection, we examined ocular correction by comparing results using standard fixation instructions against a condition where subjects were instructed to shift their gaze toward possible targets. Both conditions were analyzed using a rejection threshold and ICA correction for saccade activity management. Results demonstrate that ICA conserves data that would have otherwise been removed and leaves the underlying neural activity intact, as demonstrated by experimental manipulations previously shown to modulate the N2pc and the SPCN. Not only did ICA salvage data without distorting it, but large eye movements also had only subtle effects. Overall, the findings provide convincing evidence for ICA correction not only in special cases (e.g., when subjects did not follow the fixation instruction) but also as a candidate for standard ocular artifact management in electrophysiological studies interested in visual-spatial attention and memory. © 2016 Society for Psychophysiological Research.
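
    A minimal sketch of ICA-based ocular correction in the spirit of the study (not its actual pipeline): decompose the EEG, zero components whose time courses track an EOG-like reference, and reconstruct. The correlation threshold, channel count, and data below are illustrative assumptions.

```python
# Minimal sketch of ICA-based ocular artifact correction (not the authors' pipeline).
# Assumes a (channels x samples) EEG array and a horizontal-EOG reference channel.
import numpy as np
from sklearn.decomposition import FastICA

def ica_correct(eeg, eog, corr_threshold=0.7, random_state=0):
    """Remove independent components whose time course correlates with the EOG."""
    ica = FastICA(n_components=eeg.shape[0], random_state=random_state)
    sources = ica.fit_transform(eeg.T)              # (samples, components)
    corrs = np.array([abs(np.corrcoef(sources[:, k], eog)[0, 1])
                      for k in range(sources.shape[1])])
    sources[:, corrs > corr_threshold] = 0.0        # zero out ocular components
    return ica.inverse_transform(sources).T         # back to (channels, samples)

# Toy example: 8 channels, 2000 samples, with a shared "saccade-like" artifact.
rng = np.random.default_rng(0)
artifact = np.cumsum(rng.normal(0.0, 1.0, 2000))
eeg = rng.normal(0.0, 1.0, (8, 2000)) + np.outer(rng.uniform(0.5, 1.5, 8), artifact)
cleaned = ica_correct(eeg, eog=artifact)
print(cleaned.shape)
```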

  6. Threshold Dynamics of a Semiconductor Single Atom Maser

    NASA Astrophysics Data System (ADS)

    Liu, Yinyu

    Photon emission from single emitters provides fundamental insight into the detailed interaction between light and matter. Here we demonstrate a semiconductor single atom maser (SeSAM) that consists of a single InAs double quantum dot (DQD) that is coupled to a high quality factor microwave cavity. A finite bias results in population inversion in the DQD, enabling sizable cavity gain and stimulated emission. We develop a pulsed-gate approach that allows the SeSAM to be tuned across the masing threshold. The cavity output power as a function of DQD current is in good agreement with single atom maser theory once a small correction for lead emission is included. Photon statistics measurements show that the second-order correlation function of the intra-cavity photon number $n_c$ crosses over from $\langle n_c^2\rangle/\langle n_c\rangle^2 = 2.1$ below threshold to $\langle n_c^2\rangle/\langle n_c\rangle^2 = 1.2$ above threshold. Large fluctuations are observed at threshold. In collaboration with J. Stehlik, C. Eichler, X. Mi, T. R. Hartke, M. J. Gullans, J. M. Taylor and J. R. Petta. Supported by the NSF and the Gordon and Betty Moore Foundation's EPiQS initiative through Grant No. GBMF4535.
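
    The crossover quoted above is the normalized second moment of the intracavity photon number. A quick numerical check of that statistic on simulated photon-number records (thermal-like versus coherent-like, purely illustrative and not the experiment's data) reproduces values close to the quoted 2.1 and 1.2:

```python
# Numerical illustration of <n_c^2>/<n_c>^2 for thermal-like (below threshold)
# and coherent-like (above threshold) photon-number records. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def second_moment_ratio(n):
    return np.mean(n**2) / np.mean(n)**2

n_thermal = rng.geometric(p=1.0 / (1.0 + 5.0), size=200_000) - 1   # Bose-Einstein, mean ~5
n_coherent = rng.poisson(lam=5.0, size=200_000)                    # Poisson, mean 5

print("thermal-like :", round(second_moment_ratio(n_thermal), 2))   # ~2 + 1/<n> = 2.2
print("coherent-like:", round(second_moment_ratio(n_coherent), 2))  # ~1 + 1/<n> = 1.2
```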

  7. Scatter and beam hardening reduction in industrial computed tomography using photon counting detectors

    NASA Astrophysics Data System (ADS)

    Schumacher, David; Sharma, Ravi; Grager, Jan-Carl; Schrapp, Michael

    2018-07-01

    Photon counting detectors (PCD) offer new possibilities for x-ray micro computed tomography (CT) in the field of non-destructive testing. For large and/or dense objects with high atomic numbers the problem of scattered radiation and beam hardening severely influences the image quality. This work shows that using an energy discriminating PCD based on CdTe allows to address these problems by intrinsically reducing both the influence of scattering and beam hardening. Based on 2D-radiographic measurements it is shown that by energy thresholding the influence of scattered radiation can be reduced by up to in case of a PCD compared to a conventional energy-integrating detector (EID). To demonstrate the capabilities of a PCD in reducing beam hardening, cupping artefacts are analyzed quantitatively. The PCD results show that the higher the energy threshold is set, the lower the cupping effect emerges. But since numerous beam hardening correction algorithms exist, the results of the PCD are compared to EID results corrected by common techniques. Nevertheless, the highest energy thresholds yield lower cupping artefacts than any of the applied correction algorithms. As an example of a potential industrial CT application, a turbine blade is investigated by CT. The inner structure of the turbine blade allows for comparing the image quality between PCD and EID in terms of absolute contrast, as well as normalized signal-to-noise and contrast-to-noise ratio. Where the absolute contrast can be improved by raising the energy thresholds of the PCD, it is found that due to lower statistics the normalized contrast-to-noise-ratio could not be improved compared to the EID. These results might change to the contrary when discarding pre-filtering of the x-ray spectra and thus allowing more low-energy photons to reach the detectors. Despite still being in the early phase in technological progress, PCDs already allow to improve CT image quality compared to conventional detectors in terms of scatter and beam hardening reduction.

  8. Quantum Corrections to the 'Atomistic' MOSFET Simulations

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.

    2000-01-01

    We have introduced in a simple and efficient manner quantum mechanical corrections in our 3D 'atomistic' MOSFET simulator using the density gradient formalism. We have studied in comparison with classical simulations the effect of the quantum mechanical corrections on the simulation of random dopant induced threshold voltage fluctuations, the effect of the single charge trapping on interface states and the effect of the oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not affect significantly the amplitude of the random telegraph noise associated with single carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.

  9. Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks

    PubMed Central

    Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng

    2017-01-01

    High throughput, low latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the bad effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. Firstly, a differential characteristics function is presented based on the optimal multiuser detection decision function; then on the basis of differential characteristics, a preliminary threshold detection is utilized to find the potential wrongly received bits; after that an error bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Besides, BER and near far resistance performance are much better than traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328

  10. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    DOE PAGES

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; ...

    2017-11-06

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can yield erroneous results if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases for highly saturated species and dynamic range increased by 1–2 orders of magnitude for peptides in a blood serum sample.
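
    A compact sketch of the correction idea described in this record (illustrative only; the theoretical abundances, saturation threshold, and function names below are assumptions, not the tool's actual values or API): rescale saturated isotopic peaks from the most intense peak that remains below the saturation threshold.

```python
# Sketch of the saturation-correction idea described above (illustrative only):
# rescale saturated peaks of an isotopic envelope from the most intense isotope
# that is still below the detector's saturation threshold.
import numpy as np

def correct_saturated_intensity(observed, theoretical, saturation_threshold):
    """observed, theoretical: per-isotope intensities in the same order.
    Returns the observed envelope with saturated peaks re-estimated."""
    observed = np.asarray(observed, dtype=float)
    theoretical = np.asarray(theoretical, dtype=float)
    unsaturated = observed < saturation_threshold
    if not unsaturated.any():
        raise ValueError("entire envelope is saturated; cannot correct")
    # Reference = most intense *unsaturated* isotopic peak.
    ref = np.flatnonzero(unsaturated)[np.argmax(observed[unsaturated])]
    scale = observed[ref] / theoretical[ref]
    corrected = observed.copy()
    corrected[~unsaturated] = theoretical[~unsaturated] * scale
    return corrected

# Example: the first two isotopes are clipped at a (hypothetical) 1e6 threshold.
theoretical = np.array([1.00, 0.55, 0.20, 0.05])
observed = np.array([1.0e6, 1.0e6, 3.9e5, 1.0e5])
print(correct_saturated_intensity(observed, theoretical, saturation_threshold=1.0e6))
```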

  11. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can easily cause problems if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases with highly saturated species and dynamic range increased by 1-2 orders of magnitude for peptides in a blood serum sample.

  12. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can yield erroneous results if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases for highly saturated species and dynamic range increased by 1–2 orders of magnitude for peptides in a blood serum sample.

  13. Proton therapy for prostate cancer treatment employing online image guidance and an action level threshold.

    PubMed

    Vargas, Carlos; Falchook, Aaron; Indelicato, Daniel; Yeung, Anamaria; Henderson, Randall; Olivier, Kenneth; Keole, Sameer; Williams, Christopher; Li, Zuofeng; Palta, Jatinder

    2009-04-01

    The ability to determine the accuracy of the final prostate position within a determined action level threshold for image-guided proton therapy is unclear. Three thousand one hundred ten images for 20 consecutive patients treated in 1 of our 3 proton prostate protocols from February to May of 2007 were analyzed. Daily kV images and patient repositioning were performed employing an action-level threshold (ALT) of ≥ 2.5 mm for each beam. Isocentric orthogonal x-rays were obtained, and prostate position was defined via 3 gold markers for each patient in the 3 axes. To achieve and confirm our action level threshold, an average of 2 x-ray sets (median 2; range, 0-4) was taken daily for each patient. Based on our ALT, we made no corrections in 8.7% (range, 0%-54%), 1 correction in 82% (41%-98%), and 2 to 3 corrections in 9% (0-27%). No patient needed 4 or more corrections. All patients were treated with a confirmed error of < 2.5 mm for every beam delivered. After all corrections, the mean and standard deviations were: anterior-posterior (z): 0.003 ± 0.094 cm; superior-inferior (y): 0.028 ± 0.073 cm; and right-left (x): -0.013 ± 0.08 cm. It is feasible to limit all final prostate positions to less than 2.5 mm employing an action level image-guided radiation therapy (IGRT) process. The residual errors after corrections were very small.
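
    A toy sketch of the action-level loop described above, with invented numbers: measure the marker offset along the three axes, apply a correction, and re-image until every axis is confirmed below the 2.5 mm tolerance.

```python
# Toy sketch (invented numbers) of an action-level-threshold image-guidance loop:
# re-measure and correct until the per-axis offset is confirmed below 2.5 mm.
import numpy as np

ALT_MM = 2.5

def igrt_session(initial_offset_mm, residual_noise_mm=0.5, max_corrections=4, seed=0):
    rng = np.random.default_rng(seed)
    offset = np.asarray(initial_offset_mm, dtype=float)   # (AP, SI, LR) in mm
    corrections = 0
    while np.any(np.abs(offset) >= ALT_MM) and corrections < max_corrections:
        # Apply the full measured shift; what is left is setup/measurement noise.
        offset = rng.normal(0.0, residual_noise_mm, size=3)
        corrections += 1
    return corrections, offset

n_corr, residual = igrt_session([4.1, -1.0, 2.8])
print(f"corrections applied: {n_corr}, residual offset (mm): {np.round(residual, 2)}")
```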

  14. Non-Gaussianity and Excursion Set Theory: Halo Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adshead, Peter; Baxter, Eric J.; Dodelson, Scott

    2012-09-01

    We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recapture the familiar result that the bias scales as $$k^{-2}$$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.

  15. A surface code quantum computer in silicon

    PubMed Central

    Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.

    2015-01-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310

  16. A surface code quantum computer in silicon.

    PubMed

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel-posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  17. 30 CFR 62.174 - Follow-up corrective measures when a standard threshold shift is detected.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... threshold shift is detected. 62.174 Section 62.174 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION... measures when a standard threshold shift is detected. The mine operator must, within 30 calendar days of receiving evidence or confirmation of a standard threshold shift, unless a physician or audiologist...

  18. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    PubMed

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  19. Final state interactions at the threshold of Higgs boson pair production

    NASA Astrophysics Data System (ADS)

    Zhang, Zhentao

    2015-11-01

    We study the effect of final state interactions at the threshold of Higgs boson pair production in the Glashow-Weinberg-Salam model. We consider three major processes of the pair production in the model: lepton pair annihilation, ZZ fusion, and WW fusion. We find that the corrections caused by the effect for these processes are markedly different. According to our results, the effect can cause non-negligible corrections to the cross sections for lepton pair annihilation and small corrections for ZZ fusion, and this effect is negligible for WW fusion.

  20. Reconciling threshold and subthreshold expansions for pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Siemens, D.; Ruiz de Elvira, J.; Epelbaum, E.; Hoferichter, M.; Krebs, H.; Kubis, B.; Meißner, U.-G.

    2017-07-01

    Heavy-baryon chiral perturbation theory (ChPT) at one loop fails in relating the pion-nucleon amplitude in the physical region and for subthreshold kinematics due to loop effects enhanced by large low-energy constants. Studying the chiral convergence of threshold and subthreshold parameters up to fourth order in the small-scale expansion, we address the question to what extent this tension can be mitigated by including the Δ(1232) as an explicit degree of freedom and/or using a covariant formulation of baryon ChPT. We find that the inclusion of the Δ indeed reduces the low-energy constants to more natural values and thereby improves consistency between threshold and subthreshold kinematics. In addition, even in the Δ-less theory the resummation of $1/m_N$ corrections in the covariant scheme improves the results markedly over the heavy-baryon formulation, in line with previous observations in the single-baryon sector of ChPT that so far have evaded a profound theoretical explanation.

  1. Reconciling threshold and subthreshold expansions for pion–nucleon scattering

    DOE PAGES

    Siemens, D.; Ruiz de Elvira, J.; Epelbaum, E.; ...

    2017-04-21

    Heavy-baryon chiral perturbation theory (ChPT) at one loop fails in relating the pion–nucleon amplitude in the physical region and for subthreshold kinematics due to loop effects enhanced by large low-energy constants. Studying the chiral convergence of threshold and subthreshold parameters up to fourth order in the small-scale expansion, we address the question to what extent this tension can be mitigated by including the Δ(1232) as an explicit degree of freedom and/or using a covariant formulation of baryon ChPT. We find that the inclusion of the Δ indeed reduces the low-energy constants to more natural values and thereby improves consistency between threshold and subthreshold kinematics. In addition, even in the Δ-less theory the resummation of $1/m_N$ corrections in the covariant scheme improves the results markedly over the heavy-baryon formulation, in line with previous observations in the single-baryon sector of ChPT that so far have evaded a profound theoretical explanation.

  2. Gpm Level 1 Science Requirements: Science and Performance Viewed from the Ground

    NASA Technical Reports Server (NTRS)

    Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.

    2016-01-01

    GPM meets Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to L1 requirements. L1 FOV snow detection is largely verified but at an unknown SWE rate threshold (likely < 0.5–1 mm/hr liquid equivalent). Ongoing work to improve SWE rate estimation for both satellite and GV remote sensing.

  3. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE PAGES

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; ...

    2017-02-15

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10⁻⁴).

  4. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    PubMed Central

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik; Rudinger, Kenneth; Mizrahi, Jonathan; Fortier, Kevin; Maunz, Peter

    2017-01-01

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤6.7 × 10−4). PMID:28198466

  5. Demonstration of qubit operations below a rigorous fault tolerance threshold with gate set tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blume-Kohout, Robin; Gamble, John King; Nielsen, Erik

    Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if—and only if—the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different error rate that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography to completely characterize operations on a trapped-Yb+-ion qubit and demonstrate with greater than 95% confidence that they satisfy a rigorous threshold for FTQEC (diamond norm ≤ 6.7 × 10⁻⁴).

  6. Acoustic Reflexes in Normal-Hearing Adults, Typically Developing Children, and Children with Suspected Auditory Processing Disorder: Thresholds, Real-Ear Corrections, and the Role of Static Compliance on Estimates.

    PubMed

    Saxena, Udit; Allan, Chris; Allen, Prudence

    2017-06-01

    Previous studies have suggested elevated reflex thresholds in children with auditory processing disorders (APDs). However, some aspects of the child's ear such as ear canal volume and static compliance of the middle ear could possibly affect the measurements of reflex thresholds and thus impact its interpretation. Sound levels used to elicit reflexes in a child's ear may be higher than predicted by calibration in a standard 2-cc coupler, and lower static compliance could make visualization of very small changes in impedance at threshold difficult. For this purpose, it is important to evaluate threshold data with consideration of differences between children and adults. A set of studies were conducted. The first compared reflex thresholds obtained using standard clinical procedures in children with suspected APD to that of typically developing children and adults to test the replicability of previous studies. The second study examined the impact of ear canal volume on estimates of reflex thresholds by applying real-ear corrections. Lastly, the relationship between static compliance and reflex threshold estimates was explored. The research is a set of case-control studies with a repeated measures design. The first study included data from 20 normal-hearing adults, 28 typically developing children, and 66 children suspected of having an APD. The second study included 28 normal-hearing adults and 30 typically developing children. In the first study, crossed and uncrossed reflex thresholds were measured in 5-dB step size. Reflex thresholds were analyzed using repeated measures analysis of variance (RM-ANOVA). In the second study, uncrossed reflex thresholds, real-ear correction, ear canal volume, and static compliance were measured. Reflex thresholds were measured using a 1-dB step size. The effect of real-ear correction and static compliance on reflex threshold was examined using RM-ANOVA and Pearson correlation coefficient, respectively. Study 1 replicated previous studies showing elevated reflex thresholds in many children with suspected APD when compared to data from adults using standard clinical procedures, especially in the crossed condition. The thresholds measured in children with suspected APD tended to be higher than those measured in the typically developing children. There were no significant differences between the typically developing children and adults. However, when real-ear calibrated stimulus levels were used, it was found that children's thresholds were elicited at higher levels than in the adults. A significant relationship between reflex thresholds and static compliance was found in the adult data, showing a trend for higher thresholds in ears with lower static compliance, but no such relationship was found in the data from the children. This study suggests that reflex measures in children should be adjusted for real-ear-to-coupler differences before interpretation. The data in children with suspected APD support previous studies suggesting abnormalities in reflex thresholds. The lack of correlation between threshold and static compliance estimates in children as was observed in the adults may suggest a nonmechanical explanation for age and clinically related effects. American Academy of Audiology

  7. Cost-effectiveness of different strategies for diagnosis of uncomplicated urinary tract infections in women presenting in primary care

    PubMed Central

    Coupé, Veerle M. H.; Knottnerus, Bart J.; Geerlings, Suzanne E.; Moll van Charante, Eric P.; ter Riet, Gerben

    2017-01-01

    Background Uncomplicated Urinary Tract Infections (UTIs) are common in primary care resulting in substantial costs. Since antimicrobial resistance against antibiotics for UTIs is rising, accurate diagnosis is needed in settings with low rates of multidrug-resistant bacteria. Objective To compare the cost-effectiveness of different strategies to diagnose UTIs in women who contacted their general practitioner (GP) with painful and/or frequent micturition between 2006 and 2008 in and around Amsterdam, The Netherlands. Methods This is a model-based cost-effectiveness analysis using data from 196 women who underwent four tests: history, urine stick, sediment, dipslide, and the gold standard, a urine culture. Decision trees were constructed reflecting 15 diagnostic strategies comprising different parallel and sequential combinations of the four tests. Using the decision trees, for each strategy the costs and the proportion of women with a correct positive or negative diagnosis were estimated. Probabilistic sensitivity analysis was used to estimate uncertainty surrounding costs and effects. Uncertainty was presented using cost-effectiveness planes and acceptability curves. Results Most sequential testing strategies resulted in higher proportions of correctly classified women and lower costs than parallel testing strategies. For different willingness to pay thresholds, the most cost-effective strategies were: 1) performing a dipstick after a positive history for thresholds below €10 per additional correctly classified patient, 2) performing both a history and dipstick for thresholds between €10 and €17 per additional correctly classified patient, 3) performing a dipstick if history was negative, followed by a sediment if the dipstick was negative for thresholds between €17 and €118 per additional correctly classified patient, 4) performing a dipstick if history was negative, followed by a dipslide if the dipstick was negative for thresholds above €118 per additional correctly classified patient. Conclusion Depending on decision makers’ willingness to pay for one additional correctly classified woman, the strategy consisting of performing a history and dipstick simultaneously (ceiling ratios between €10 and €17) or performing a sediment if history and subsequent dipstick are negative (ceiling ratios between €17 and €118) are the most cost-effective strategies to diagnose a UTI. PMID:29186185

  8. Cost-effectiveness of different strategies for diagnosis of uncomplicated urinary tract infections in women presenting in primary care.

    PubMed

    Bosmans, Judith E; Coupé, Veerle M H; Knottnerus, Bart J; Geerlings, Suzanne E; Moll van Charante, Eric P; Ter Riet, Gerben

    2017-01-01

    Uncomplicated Urinary Tract Infections (UTIs) are common in primary care resulting in substantial costs. Since antimicrobial resistance against antibiotics for UTIs is rising, accurate diagnosis is needed in settings with low rates of multidrug-resistant bacteria. To compare the cost-effectiveness of different strategies to diagnose UTIs in women who contacted their general practitioner (GP) with painful and/or frequent micturition between 2006 and 2008 in and around Amsterdam, The Netherlands. This is a model-based cost-effectiveness analysis using data from 196 women who underwent four tests: history, urine stick, sediment, dipslide, and the gold standard, a urine culture. Decision trees were constructed reflecting 15 diagnostic strategies comprising different parallel and sequential combinations of the four tests. Using the decision trees, for each strategy the costs and the proportion of women with a correct positive or negative diagnosis were estimated. Probabilistic sensitivity analysis was used to estimate uncertainty surrounding costs and effects. Uncertainty was presented using cost-effectiveness planes and acceptability curves. Most sequential testing strategies resulted in higher proportions of correctly classified women and lower costs than parallel testing strategies. For different willingness to pay thresholds, the most cost-effective strategies were: 1) performing a dipstick after a positive history for thresholds below €10 per additional correctly classified patient, 2) performing both a history and dipstick for thresholds between €10 and €17 per additional correctly classified patient, 3) performing a dipstick if history was negative, followed by a sediment if the dipstick was negative for thresholds between €17 and €118 per additional correctly classified patient, 4) performing a dipstick if history was negative, followed by a dipslide if the dipstick was negative for thresholds above €118 per additional correctly classified patient. Depending on decision makers' willingness to pay for one additional correctly classified woman, the strategy consisting of performing a history and dipstick simultaneously (ceiling ratios between €10 and €17) or performing a sediment if history and subsequent dipstick are negative (ceiling ratios between €17 and €118) are the most cost-effective strategies to diagnose a UTI.
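
    The threshold-dependent recommendations above follow the usual net-benefit logic: at a willingness-to-pay λ per additional correctly classified patient, the preferred strategy maximizes λ × (proportion correct) − cost. The sketch below illustrates the mechanics with invented costs and proportions, not the study's estimates.

```python
# Illustrative net-benefit calculation for threshold-dependent strategy choice.
# Costs and proportions correctly classified are made-up numbers, NOT the study's.
strategies = {
    "history then dipstick if positive": (5.0, 0.74),
    "history + dipstick in parallel":    (7.0, 0.78),
    "history, dipstick, then sediment":  (11.0, 0.81),
    "history, dipstick, then dipslide":  (18.0, 0.83),
}  # strategy: (cost in EUR, proportion correctly classified)

def best_strategy(wtp_per_correct_classification):
    """Pick the strategy maximising net benefit = lambda * effect - cost."""
    nb = {name: wtp_per_correct_classification * eff - cost
          for name, (cost, eff) in strategies.items()}
    return max(nb, key=nb.get)

for wtp in (5, 15, 60, 200):   # EUR per additional correctly classified woman
    print(f"WTP €{wtp:>3}: {best_strategy(wtp)}")
```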

  9. Effects of SO(10)-inspired scalar non-universality on the MSSM parameter space at large tanβ

    NASA Astrophysics Data System (ADS)

    Ramage, M. R.

    2005-08-01

    We analyze the parameter space of the (μ > 0, A_0 = 0) CMSSM at large tanβ with a small degree of non-universality originating from D-terms and Higgs-sfermion splitting inspired by SO(10) GUT models. The effects of such non-universalities on the sparticle spectrum and observables such as B(b→Xγ), the SUSY threshold corrections to the bottom mass, and the relic density Ωh² are examined in detail, and the consequences for the allowed parameter space of the model are investigated. We find that even small deviations from universality can result in large qualitative differences compared to the universal case; for certain values of the parameters, we find, even at low m_0 and m_{1/2}, that radiative electroweak symmetry breaking fails as a consequence of either μ² < 0 or m_A² < 0. We find particularly large departures from the mSugra case for the neutralino relic density, which is sensitive to significant changes in the position and shape of the A resonance and a substantial increase in the Higgsino component of the LSP. However, we find that the corrections to the bottom mass are not sufficient to allow for Yukawa unification.

  10. Comparison of epicardial adipose tissue radiodensity threshold between contrast and non-contrast enhanced computed tomography scans: A cohort study of derivation and validation.

    PubMed

    Xu, Lingyu; Xu, Yuancheng; Coulden, Richard; Sonnex, Emer; Hrybouski, Stanislau; Paterson, Ian; Butler, Craig

    2018-05-11

    Epicardial adipose tissue (EAT) volume derived from contrast enhanced (CE) computed tomography (CT) scans is not well validated. We aim to establish a reliable threshold to accurately quantify EAT volume from CE datasets. We analyzed EAT volume on paired non-contrast (NC) and CE datasets from 25 patients to derive appropriate Hounsfield (HU) cutpoints to equalize two EAT volume estimates. The gold standard threshold (-190HU, -30HU) was used to assess EAT volume on NC datasets. For CE datasets, EAT volumes were estimated using three previously reported thresholds: (-190HU, -30HU), (-190HU, -15HU), (-175HU, -15HU) and were analyzed by a semi-automated 3D fat analysis software. Subsequently, we applied a threshold correction to (-190HU, -30HU) based on mean differences in radiodensity between NC and CE images (ΔEATrd = CE radiodensity - NC radiodensity). We then validated our findings on EAT threshold in 21 additional patients with paired CT datasets. EAT volume from CE datasets using previously published thresholds consistently underestimated EAT volume from NC dataset standard by a magnitude of 8.2%-19.1%. Using our corrected threshold (-190HU, -3HU) in CE datasets yielded statistically identical EAT volume to NC EAT volume in the validation cohort (186.1 ± 80.3 vs. 185.5 ± 80.1 cm³, Δ = 0.6 cm³, 0.3%, p = 0.374). Estimating EAT volume from contrast enhanced CT scans using a corrected threshold of -190HU, -3HU provided excellent agreement with EAT volume from non-contrast CT scans using a standard threshold of -190HU, -30HU. Copyright © 2018. Published by Elsevier B.V.
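
    A small sketch of the threshold-shift logic in this record. The voxel values below are synthetic; only the idea of raising the upper HU cutoff of the (-190, -30) fat window by the mean CE−NC radiodensity difference follows the abstract.

```python
# Sketch of the contrast-correction idea above: shift the upper HU cutoff of the
# fat window by the mean CE-vs-NC radiodensity difference, then integrate volume.
import numpy as np

NC_WINDOW = (-190.0, -30.0)          # standard EAT window on non-contrast CT (HU)

def eat_volume_cm3(hu_voxels, window, voxel_volume_mm3):
    lo, hi = window
    mask = (hu_voxels >= lo) & (hu_voxels <= hi)
    return mask.sum() * voxel_volume_mm3 / 1000.0

def corrected_ce_window(mean_hu_nc_fat, mean_hu_ce_fat):
    delta = mean_hu_ce_fat - mean_hu_nc_fat          # ΔEATrd, typically positive
    return (NC_WINDOW[0], NC_WINDOW[1] + delta)

# Toy numbers: contrast shifts fat radiodensity up by ~27 HU (as -30 -> -3 implies).
rng = np.random.default_rng(0)
nc_fat = rng.normal(-85.0, 25.0, 200_000)
ce_fat = nc_fat + 27.0
window_ce = corrected_ce_window(nc_fat.mean(), ce_fat.mean())
print("corrected CE window (HU):", tuple(round(w, 1) for w in window_ce))
print("NC volume:", round(eat_volume_cm3(nc_fat, NC_WINDOW, 0.8), 1), "cm^3")
print("CE volume:", round(eat_volume_cm3(ce_fat, window_ce, 0.8), 1), "cm^3")
```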

  11. Audiometric analyses confirm a cochlear component, disproportional to age, in stapedial otosclerosis.

    PubMed

    Topsakal, Vedat; Fransen, Erik; Schmerber, Sébastien; Declau, Frank; Yung, Matthew; Gordts, Frans; Van Camp, Guy; Van de Heyning, Paul

    2006-09-01

    To report the preoperative audiometric profile of surgically confirmed otosclerosis. Retrospective, multicenter study. Four tertiary referral centers. One thousand sixty-four surgically confirmed patients with otosclerosis. Therapeutic ear surgery for hearing improvement. Preoperative audiometric air conduction (AC) and bone conduction (BC) hearing thresholds were obtained retrospectively for 1064 patients with otosclerosis. A cross-sectional multiple linear regression analysis was performed on audiometric data of affected ears. Influences of age and sex were analyzed and age-related typical audiograms were created. Bone conduction thresholds were corrected for the Carhart effect and presbyacusis; in addition, we tested to see if a separate cochlear otosclerosis component existed. Corrected thresholds were then analyzed separately for progression of cochlear otosclerosis. The study population consisted of 35% men and 65% women (mean age, 44 yr). The mean pure-tone average at 0.5, 1, and 2 kHz was 57 dB hearing level. Multiple linear regression analysis showed significant progression for all measured AC and BC thresholds. The average annual threshold deterioration for AC was 0.45 dB/yr and the annual threshold deterioration for BC was 0.37 dB/yr. The average annual gap expansion was 0.08 dB/year. The BC thresholds corrected for the Carhart effect and presbyacusis remained significantly different from zero, but only showed progression at 2 kHz. The preoperative audiological profile of otosclerosis is described. There is a significant sensorineural component in patients with otosclerosis planned for stapedotomy, which is worse than age-related hearing loss by itself. Deterioration rates of AC and BC thresholds have been reported, which can be helpful in clinical practice and might also guide the characterization of allegedly different phenotypes for familial and sporadic otosclerosis.

  12. Additional studies of forest classification accuracy as influenced by multispectral scanner spatial resolution

    NASA Technical Reports Server (NTRS)

    Sadowski, F. E.; Sarno, J. E.

    1976-01-01

    First, an analysis of forest feature signatures was used to help explain the large variation in classification accuracy that can occur among individual forest features for any one case of spatial resolution and the inconsistent changes in classification accuracy that were demonstrated among features as spatial resolution was degraded. Second, the classification rejection threshold was varied in an effort to reduce the large proportion of unclassified resolution elements that previously appeared in the processing of coarse resolution data when a constant rejection threshold was used for all cases of spatial resolution. For the signature analysis, two-channel ellipse plots showing the feature signature distributions for several cases of spatial resolution indicated that the capability of signatures to correctly identify their respective features is dependent on the amount of statistical overlap among signatures. Reductions in signature variance that occur in data of degraded spatial resolution may not necessarily decrease the amount of statistical overlap among signatures having large variance and small mean separations. Features classified by such signatures may thus continue to have similar amounts of misclassified elements in coarser resolution data, and thus, not necessarily improve in classification accuracy.
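
    For context, a "classification rejection threshold" of the kind varied above is typically implemented as a cutoff on the squared Mahalanobis distance to the best-matching class signature. The sketch below is a generic minimum-Mahalanobis-distance classifier with a chi-square rejection quantile on synthetic two-channel signatures (a simplified stand-in for the Gaussian maximum-likelihood rule, not the study's actual processor).

```python
# Generic sketch: minimum-Mahalanobis-distance classification with a rejection
# threshold. A pixel is left unclassified when its squared distance to the best
# class exceeds a chi-square quantile for the number of channels.
import numpy as np
from scipy.stats import chi2

def classify_with_rejection(pixels, means, covs, reject_quantile=0.99):
    n_channels = pixels.shape[1]
    reject_d2 = chi2.ppf(reject_quantile, df=n_channels)
    labels = np.full(len(pixels), -1)          # -1 = unclassified
    best_d2 = np.full(len(pixels), np.inf)
    for k, (mu, cov) in enumerate(zip(means, covs)):
        diff = pixels - mu
        d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        better = d2 < best_d2
        best_d2[better] = d2[better]
        labels[better] = k
    labels[best_d2 > reject_d2] = -1           # apply the rejection threshold
    return labels

# Two synthetic 2-channel "signatures" with overlapping distributions.
rng = np.random.default_rng(0)
means = [np.array([40.0, 60.0]), np.array([48.0, 66.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 36.0]
pixels = np.vstack([rng.multivariate_normal(m, c, 500) for m, c in zip(means, covs)])
labels = classify_with_rejection(pixels, means, covs)
print("unclassified fraction:", round(float(np.mean(labels == -1)), 3))
```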

  13. Effective sextic superpotential and B-L violation in NMSGUT

    NASA Astrophysics Data System (ADS)

    Aulakh, C. S.; Awasthi, R. L.; Krishna, Shri

    2017-10-01

    We list operators of the superpotential of the effective MSSM that emerge from the NMSGUT up to sextic degree. We give illustrative expressions for the coefficients in terms of NMSGUT parameters. We also estimate the impact of GUT scale threshold corrections on these effective operators in view of the demonstration that B violation via quartic superpotential terms can be suppressed to acceptable levels after including such corrections in the NMSGUT. We find a novel B, B-L violating quintic operator that leads to the decay mode n → e^- K^+. We also remark that the threshold corrections to the Type-I seesaw mechanism make the deviation of right-handed neutrino masses from the GUT scale more natural, while Type-II seesaw neutrino masses, which earlier tended to be utterly negligible, receive a threshold enhancement. Our results are of relevance for analysing B-L violating, operator-based, sphaleron-safe baryogenesis.

  14. Genetically-Adjusted PSA Values May Prevent Delayed Biopsies in African-American Men

    PubMed Central

    Donin, Nicholas; Loeb, Stacy; Cooper, Phillip R.; Roehl, Kimberly A.; Baumann, Nikola A.; Catalona, William J.; Helfand, Brian T.

    2014-01-01

    Purpose Genetic variants called PSA-single nucleotide polymorphisms (PSA-SNPs) have been associated with serum PSA levels. We previously demonstrated that genetic correction of serum PSA in Caucasian men could reduce both potentially unnecessary biopsies by 15% to 20% and potentially delayed biopsies by 3%. Our objective was to evaluate whether genetic correction with the PSA-SNPs could reduce potentially unnecessary and/or delayed biopsies in African-American (AA) men. Materials and Methods We compared the genotypes of 4 PSA-SNPs between 964 Caucasian and 363 AA men without known PC. We adjusted PSA values based upon an individual's PSA-SNP carrier status, and calculated the percentage of men that would meet commonly used PSA thresholds for biopsy (≥2.5 or ≥4.0ng/mL) before and after genetic correction. Potentially unnecessary and delayed biopsies were defined as those men who went below and above the biopsy threshold after genetic correction, respectively. Results Overall, 349 (96.1%) and 354 (97.5%) AA men had measured PSA levels <2.5 and <4.0 ng/mL. Genetic correction in AA men did not avoid any potentially unnecessary biopsies, but resulted in a significant (p<0.001) reduction in potentially delayed biopsies by 2.5% and 3.9% based upon the biopsy threshold cutoff. Conclusions There are significant differences in the influence of the PSA-SNPs between AA and Caucasian men without known PC, as genetic correction resulted in an increased proportion of AA men crossing the threshold for biopsy. These results raise the question whether genetic differences in PSA might contribute to delayed PC diagnosis in AA patients. PMID:24712975
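
    A schematic of the genetic-correction arithmetic described above. The per-SNP multiplicative factors below are hypothetical placeholders (the published weights are not reproduced here); the sketch only shows how dividing a measured PSA by a combined genetic factor can move a value across a biopsy threshold in either direction.

```python
# Sketch of genotype-based PSA adjustment and threshold reclassification.
# The per-SNP correction factors are hypothetical placeholders, not the
# published weights.
HYPOTHETICAL_SNP_FACTORS = {          # SNP -> {risk-allele count: effect on PSA}
    "rsA": {0: 1.00, 1: 1.06, 2: 1.12},
    "rsB": {0: 0.95, 1: 1.00, 2: 1.05},
    "rsC": {0: 1.00, 1: 1.04, 2: 1.08},
    "rsD": {0: 0.97, 1: 1.00, 2: 1.03},
}

def genetically_adjusted_psa(measured_psa, genotypes):
    """Divide measured PSA by the combined multiplicative genetic effect."""
    factor = 1.0
    for snp, allele_count in genotypes.items():
        factor *= HYPOTHETICAL_SNP_FACTORS[snp][allele_count]
    return measured_psa / factor

def biopsy_reclassification(measured_psa, genotypes, threshold=4.0):
    adjusted = genetically_adjusted_psa(measured_psa, genotypes)
    before, after = measured_psa >= threshold, adjusted >= threshold
    if before and not after:
        return adjusted, "potentially unnecessary biopsy avoided"
    if not before and after:
        return adjusted, "potentially delayed biopsy flagged"
    return adjusted, "no change"

print(biopsy_reclassification(4.2, {"rsA": 2, "rsB": 2, "rsC": 2, "rsD": 2}))
print(biopsy_reclassification(3.8, {"rsA": 0, "rsB": 0, "rsC": 0, "rsD": 0}))
```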

  15. Higgs boson gluon-fusion production in QCD at three loops.

    PubMed

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; Herzog, Franz; Mistlberger, Bernhard

    2015-05-29

    We present the cross section for the production of a Higgs boson at hadron colliders at next-to-next-to-next-to-leading order (N^{3}LO) in perturbative QCD. The calculation is based on a method to perform a series expansion of the partonic cross section around the threshold limit to an arbitrary order. We perform this expansion to sufficiently high order to obtain the value of the hadronic cross section at N^{3}LO in the large top-mass limit. For renormalization and factorization scales equal to half the Higgs boson mass, the N^{3}LO corrections are of the order of +2.2%. The total scale variation at N^{3}LO is 3%, reducing the uncertainty due to missing higher order QCD corrections by a factor of 3.

  16. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used.

  17. Re: Request for Correction - IRIS Assessment for Trichloroethylene

    EPA Pesticide Factsheets

    Letter from Faye Graul providing supplemental information to her Request for Correction for Threshold of Trichloroethylene Contamination of Maternal Drinking Waters submitted under the Information Quality Act.

  18. Re: Supplement to Request for Correction - IRIS Assessment of Trichloroethylene

    EPA Pesticide Factsheets

    Letter from Faye Graul providing supplemental information to her Request for Correction for Threshold of Trichloroethylene Contamination of Maternal Drinking Waters submitted under the Information Quality Act.

  19. Local-duality QCD sum rules for strong isospin breaking in the decay constants of heavy-light mesons.

    PubMed

    Lucha, Wolfgang; Melikhov, Dmitri; Simula, Silvano

    2018-01-01

    We discuss the leptonic decay constants of heavy-light mesons by means of Borel QCD sum rules in the local-duality (LD) limit of infinitely large Borel mass parameter. In this limit, for an appropriate choice of the invariant structures in the QCD correlation functions, all vacuum-condensate contributions vanish and all nonperturbative effects are contained in only one quantity, the effective threshold. We study properties of the LD effective thresholds in the limits of a large heavy-quark mass and a small light-quark mass. In the heavy-quark limit, we clarify the role played by the radiative corrections in the effective threshold for reproducing the pQCD expansion of the decay constants of pseudoscalar and vector mesons. We show that the dependence of the meson decay constants on the light-quark mass arises predominantly (at the level of 70-80%) from the calculable light-quark-mass dependence of the perturbative spectral densities. Making use of the lattice QCD results for the decay constants of nonstrange and strange pseudoscalar and vector heavy mesons, we obtain solid predictions for the decay constants of heavy-light mesons as functions of the light-quark mass in the range from a few MeV to 100 MeV and evaluate the corresponding strong isospin-breaking effects in these decay constants.

  20. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing competition among companies in Indonesia means that every company needs proper planning in order to compete successfully. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportionate to the number of cars needed. One method that can be used to obtain accurate forecasts is Adaptive Spline Threshold Autoregression (ASTAR). This paper therefore focuses on the use of the ASTAR method to forecast the volume of Mitsubishi car sales at PT Srikandi Diamond Motors using time series data. In this study, the forecasts produced by the ASTAR method are found to be reasonably accurate.

  1. Respiration-Averaged CT for Attenuation Correction of PET Images – Impact on PET Texture Features in Non-Small Cell Lung Cancer Patients

    PubMed Central

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li

    2016-01-01

    Purpose We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F FDG PET texture parameters. Materials and Methods A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211

  2. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
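
    The core of the technique can be sketched as follows: exceedance indicators at a set of thresholds are regressed on an orthogonal transform (here, principal components) of the ensemble members, giving a discrete approximation of the conditional CDF. This is a deliberately simplified, ICK-inspired sketch, not the operational post-processor; the threshold choice, the least-squares estimator, and the monotonicity fix are assumptions made for brevity.

```python
import numpy as np

def fit_ick_like(X, y, thresholds, var_keep=0.95):
    """Simplified, ICK-inspired post-processor: regress threshold-exceedance
    indicators on the leading principal components of the ensemble members.
    Returns a function mapping new ensemble forecasts to a discrete conditional
    CDF evaluated at the supplied thresholds."""
    x_mean = X.mean(axis=0)
    Xc = X - x_mean
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    keep = int(np.searchsorted(explained, var_keep)) + 1
    Vk = Vt[:keep].T                                   # orthogonal transform of predictors
    Z = np.column_stack([np.ones(len(X)), Xc @ Vk])
    betas = []
    for t in thresholds:
        indicator = (y <= t).astype(float)             # indicator variable at this threshold
        beta, *_ = np.linalg.lstsq(Z, indicator, rcond=None)
        betas.append(beta)
    betas = np.array(betas)

    def predict_cdf(X_new):
        Zn = np.column_stack([np.ones(len(X_new)), (X_new - x_mean) @ Vk])
        cdf = np.clip(Zn @ betas.T, 0.0, 1.0)
        return np.maximum.accumulate(cdf, axis=1)      # enforce a monotone CDF
    return predict_cdf

# toy usage: a biased, internally correlated 5-member ensemble
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=500)
ens = 1.3 * obs[:, None] + rng.normal(0.0, 1.0, size=(500, 5))
thresholds = np.percentile(obs, [10, 25, 50, 75, 90])
predict_cdf = fit_ick_like(ens, obs, thresholds)
print(predict_cdf(ens[:3]))
```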

  3. Size determines antennal sensitivity and behavioral threshold to odors in bumblebee workers

    NASA Astrophysics Data System (ADS)

    Spaethe, Johannes; Brockmann, Axel; Halbig, Christine; Tautz, Jürgen

    2007-09-01

    The eusocial bumblebees exhibit pronounced size variation among workers of the same colony. Differently sized workers engage in different tasks (alloethism); large individuals are found to have a higher probability to leave the colony and search for food, whereas small workers tend to stay inside the nest and attend to nest duties. We investigated the effect of size variation on morphology and physiology of the peripheral olfactory system and the behavioral response thresholds to odors in workers of Bombus terrestris. Number and density of olfactory sensilla on the antennae correlate significantly with worker size. Consistent with these morphological changes, we found that antennal sensitivity to odors increases with body size. Antennae of large individuals show higher electroantennogram responses to a given odor concentration than those of smaller nestmates. This finding indicates that large antennae exhibit an increased capability to catch odor molecules and thus are more sensitive to odors than small antennae. We confirmed this prediction in a dual choice behavioral experiment showing that large workers indeed are able to respond correctly to much lower odor concentrations than small workers. Learning performance in these experiments did not differ between small and large bumblebees. Our results clearly show that, in the social bumblebees, variation in olfactory sensilla number due to size differences among workers strongly affects individual odor sensitivity. We speculate that superior odor sensitivity of large workers has favored size-related division of labor in bumblebee colonies.

  4. Estimating daily climatologies for climate indices derived from climate model data and observations

    PubMed Central

    Mahlstein, Irina; Spirig, Christoph; Liniger, Mark A; Appenzeller, Christof

    2015-01-01

    Climate indices help to describe the past, present, and future climate. They are usually more closely related to possible impacts and are therefore more illustrative to users than simple climate means. Indices are often based on daily data series and thresholds. It is shown that percentile-based thresholds are sensitive to the method of computation, and so are the climatological daily mean and the daily standard deviation, which are used for bias corrections of daily climate model data. Sample size issues in either the observed reference period or the model data lead to uncertainties in these estimates. A large number of past ensemble seasonal forecasts, called hindcasts, is used to explore these sampling uncertainties and to compare two different approaches. Based on a perfect model approach, it is shown that a fitting approach can substantially improve the estimates of daily climatologies of percentile-based thresholds over land areas, as well as of the mean and the variability. These improvements are relevant for bias removal in long-range forecasts or predictions of climate indices based on percentile thresholds. The method also shows potential for use in climate change studies. Key points: more robust estimates of daily climate characteristics; statistical fitting approach; based on a perfect model approach. PMID:26042192
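
    The abstract does not spell out the fitting approach, so the sketch below uses one plausible choice: raw day-of-year percentile estimates are smoothed with a low-order harmonic regression, which is a common way to stabilize daily climatologies estimated from limited samples.

```python
import numpy as np

def smoothed_daily_percentile(values, doy, q=90, n_harmonics=3):
    """Raw day-of-year percentile estimates smoothed by a low-order harmonic
    (Fourier) regression -- one plausible 'fitting approach', not necessarily
    the one used in the paper.

    values : daily data (e.g. Tmax); doy : day of year (1..365) for each value."""
    days = np.arange(1, 366)
    raw = np.array([np.percentile(values[doy == d], q) for d in days])
    omega = 2.0 * np.pi * days / 365.0
    cols = [np.ones_like(omega)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * omega), np.sin(k * omega)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, raw, rcond=None)
    return days, A @ beta                      # smoothed threshold for each calendar day

# toy usage: 30 years of synthetic daily temperatures
rng = np.random.default_rng(1)
doy = np.tile(np.arange(1, 366), 30)
temps = 10 + 12 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 3, doy.size)
days, thr90 = smoothed_daily_percentile(temps, doy, q=90)
print(thr90[:5])
```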

  5. Noise-induced escape in an excitable system

    NASA Astrophysics Data System (ADS)

    Khovanov, I. A.; Polovinkin, A. V.; Luchinsky, D. G.; McClintock, P. V. E.

    2013-03-01

    We consider the stochastic dynamics of escape in an excitable system, the FitzHugh-Nagumo (FHN) neuronal model, for different classes of excitability. We discuss, first, the threshold structure of the FHN model as an example of a system without a saddle state. We then develop a nonlinear (nonlocal) stability approach based on the theory of large fluctuations, including a finite-noise correction, to describe noise-induced escape in the excitable regime. We show that the threshold structure is revealed via patterns of most probable (optimal) fluctuational paths. The approach allows us to estimate the escape rate and the exit location distribution. We compare the responses of a monostable resonator and monostable integrator to stochastic input signals and to a mixture of periodic and stochastic stimuli. Unlike the commonly used local analysis of the stable state, our nonlocal approach based on optimal paths yields results that are in good agreement with direct numerical simulations of the Langevin equation.
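
    A direct Langevin simulation of this kind can be sketched in a few lines with the Euler-Maruyama scheme; the parameter values below are illustrative and are not taken from the paper, and the escape count is simply the number of upward threshold crossings of the fast variable.

```python
import numpy as np

def fhn_escape_rate(a=1.05, eps=0.05, D=0.01, dt=1e-3, T=500.0, v_th=1.0, seed=0):
    """Euler-Maruyama simulation of a noise-driven FitzHugh-Nagumo unit,
        dv = (v - v**3/3 - w) dt + sqrt(2 D) dW,   dw = eps (v + a) dt,
    in the excitable regime (a slightly above 1). Escapes (spikes) are counted
    as upward crossings of v_th. Parameters are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    v, w = -a, -a + a ** 3 / 3.0            # stable fixed point of the noiseless system
    escapes, above = 0, False
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
    for k in range(n_steps):
        v_new = v + (v - v ** 3 / 3.0 - w) * dt + noise[k]
        w_new = w + eps * (v + a) * dt
        v, w = v_new, w_new
        if v > v_th and not above:          # register each excursion once
            escapes, above = escapes + 1, True
        elif above and v < 0.0:             # re-arm the detector after the spike returns
            above = False
    return escapes / T                      # mean escape (spike) rate

print(fhn_escape_rate())                    # the rate depends strongly on D, a, and eps
```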

  6. Heavy-flavor parton distributions without heavy-flavor matching prescriptions

    NASA Astrophysics Data System (ADS)

    Bertone, Valerio; Glazov, Alexandre; Mitov, Alexander; Papanastasiou, Andrew S.; Ubiali, Maria

    2018-04-01

    We show that the well-known obstacle for working with the zero-mass variable flavor number scheme, namely, the omission of O(1) mass power corrections close to the conventional heavy flavor matching point (HFMP) μ_b = m, can be easily overcome. For this it is sufficient to take advantage of the freedom in choosing the position of the HFMP. We demonstrate that by choosing a sufficiently large HFMP, which could be as large as 10 times the mass of the heavy quark, one can achieve the following improvements: 1) above the HFMP the size of missing power corrections O(m) is restricted by the value of μ_b and, therefore, the error associated with their omission can be made negligible; 2) additional prescriptions for the definition of cross-sections are not required; 3) the resummation accuracy is maintained; and 4) contrary to the common lore we find that the discontinuity of α_s and PDFs across thresholds leads to improved continuity in predictions for observables. We have considered a large set of proton-proton and electron-proton collider processes, many through NNLO QCD, that demonstrate the broad applicability of our proposal.

  7. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs.

    PubMed

    Liu, Kuan-Yu; Herbert, John M

    2017-10-28

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.

  8. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs

    NASA Astrophysics Data System (ADS)

    Liu, Kuan-Yu; Herbert, John M.

    2017-10-01

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
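
    The bookkeeping behind a truncated many-body expansion with a distance-based cutoff is compact enough to sketch. In the code below the subsystem energy function is a placeholder standing in for a real electronic-structure call, and counterpoise corrections and the energy-based screening discussed above are deliberately omitted.

```python
import itertools
import numpy as np

def mbe_energy(fragments, energy_fn, max_order=3, cutoff=6.0):
    """Truncated many-body expansion with a simple distance-based cutoff.

    fragments : list of (n_atoms, 3) coordinate arrays
    energy_fn : callable(list of fragments) -> energy; a placeholder for a real
                quantum-chemistry call in this sketch
    cutoff    : skip n-mers (n >= 3) whose largest centroid separation exceeds
                this value (same units as the coordinates)."""
    centers = [frag.mean(axis=0) for frag in fragments]
    cache = {}

    def subset_energy(idx):
        if idx not in cache:
            cache[idx] = energy_fn([fragments[i] for i in idx])
        return cache[idx]

    def correction(idx):
        # n-body correction: subset energy minus all lower-order corrections
        e = subset_energy(idx)
        for k in range(1, len(idx)):
            for sub in itertools.combinations(idx, k):
                e -= correction(sub)
        return e

    total = sum(subset_energy((i,)) for i in range(len(fragments)))
    for n in range(2, max_order + 1):
        for combo in itertools.combinations(range(len(fragments)), n):
            if n >= 3:
                span = max(np.linalg.norm(centers[i] - centers[j])
                           for i, j in itertools.combinations(combo, 2))
                if span > cutoff:
                    continue               # distance-based screening of higher n-mers
            total += correction(combo)
    return total

# toy usage with a hypothetical pairwise "energy" standing in for a QM call
def toy_energy(frags):
    xyz = np.vstack(frags)
    return -sum(np.exp(-np.linalg.norm(xyz[i] - xyz[j]))
                for i, j in itertools.combinations(range(len(xyz)), 2))

rng = np.random.default_rng(2)
clusters = [rng.normal(scale=0.5, size=(3, 3)) + 4.0 * rng.normal(size=3)
            for _ in range(6)]
print(mbe_energy(clusters, toy_energy, max_order=3, cutoff=6.0))
```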

  9. Algorithmic detectability threshold of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  10. Discrimination thresholds of normal and anomalous trichromats: Model of senescent changes in ocular media density on the Cambridge Colour Test

    PubMed Central

    Shinomori, Keizo; Panorgias, Athanasios; Werner, John S.

    2017-01-01

    Age-related changes in chromatic discrimination along dichromatic confusion lines were measured with the Cambridge Colour Test (CCT). One hundred and sixty-two individuals (16 to 88 years old) with normal Rayleigh matches were the major focus of this paper. An additional 32 anomalous trichromats classified by their Rayleigh matches were also tested. All subjects were screened to rule out abnormalities of the anterior and posterior segments. Thresholds on all three chromatic vectors measured with the CCT showed age-related increases. Protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bilinear model. Analysis and modeling demonstrated that the nominal vectors of the CCT are shifted by senescent changes in ocular media density, and a method for correcting the CCT vectors is demonstrated. A correction for these shifts indicates that classification among individuals of different ages is unaffected. New vector thresholds for elderly observers and for all age groups are suggested based on calculated tolerance limits. PMID:26974943

  11. Exact ∇^4ℛ^4 couplings and helicity supertraces

    NASA Astrophysics Data System (ADS)

    Bossard, Guillaume; Pioline, Boris

    2017-01-01

    In type II string theory compactified on a d-dimensional torus T^d down to D = 10 − d dimensions, the ℛ^4 and ∇^4ℛ^4 four-graviton couplings are known exactly, for all values of the moduli, in terms of certain Eisenstein series of the U-duality group E_d(ℤ). In the limit where one circle in the torus becomes large, these couplings are expected to reduce to their counterparts in dimension D + 1, plus threshold effects and exponentially suppressed corrections corresponding to BPS black holes in dimension D + 1 whose worldline winds around the circle. By combining the weak coupling and large radius limits, we determine these exponentially suppressed corrections exactly, and demonstrate that the contributions of 1/4-BPS black holes to the ∇^4ℛ^4 coupling are proportional to the appropriate helicity supertrace. Mathematically, our results provide the complete Fourier expansion of the next-to-minimal theta series of E_{d+1}(ℤ) with respect to the maximal parabolic subgroup with Levi component E_d for d ≤ 6, and the complete Abelian part of the Fourier expansion of the same for d = 7.

  12. Long-range epidemic spreading in a random environment.

    PubMed

    Juhász, Róbert; Kovács, István A; Iglói, Ferenc

    2015-03-01

    Modeling long-range epidemic spreading in a random environment, we consider a quenched, disordered, d-dimensional contact process with infection rates decaying with distance as 1/r^{d+σ}. We study the dynamical behavior of the model at and below the epidemic threshold by a variant of the strong-disorder renormalization-group method and by Monte Carlo simulations in one and two spatial dimensions. Starting from a single infected site, the average survival probability is found to decay as P(t) ∼ t^{-d/z} up to multiplicative logarithmic corrections. Below the epidemic threshold, a Griffiths phase emerges, where the dynamical exponent z varies continuously with the control parameter and tends to z_c = d + σ as the threshold is approached. At the threshold, the spatial extension of the infected cluster (in surviving trials) is found to grow as R(t) ∼ t^{1/z_c} with a multiplicative logarithmic correction, and the average number of infected sites in surviving trials is found to increase as N_s(t) ∼ (ln t)^χ with χ = 2 in one dimension.

  13. Correcting Velocity Dispersions of Dwarf Spheroidal Galaxies for Binary Orbital Motion

    NASA Astrophysics Data System (ADS)

    Minor, Quinn E.; Martinez, Greg; Bullock, James; Kaplinghat, Manoj; Trainor, Ryan

    2010-10-01

    We show that the measured velocity dispersions of dwarf spheroidal galaxies from about 4 to 10 km s^{-1} are unlikely to be inflated by more than 30% due to the orbital motion of binary stars and demonstrate that the intrinsic velocity dispersions can be determined to within a few percent accuracy using two-epoch observations with 1-2 yr as the optimal time interval. The crucial observable is the threshold fraction—the fraction of stars that show velocity changes larger than a given threshold between measurements. The threshold fraction is tightly correlated with the dispersion introduced by binaries, independent of the underlying binary fraction and distribution of orbital parameters. We outline a simple procedure to correct the velocity dispersion to within a few percent accuracy by using the threshold fraction and provide fitting functions for this method. We also develop a methodology for constraining properties of binary populations from both single- and two-epoch velocity measurements by including the binary velocity distribution in a Bayesian analysis.

  14. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

    A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting pixel (HPC) detectors. This approach is based on the Photon Transfer Curve (PTC) corresponding to the measurement of the standard deviation of the signal in flat field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors in flat fielding techniques. The analytical expression of the signal to noise ratio curve is developed for HPC and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat fielding technique) and is demonstrated to be used in order to evaluate the best setting for having the best image quality from a commercial or a R&D detector.
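
    As a generic illustration of the photon-transfer idea (not the detector-specific SNR model fitted in the paper), the fixed-pattern term can be extracted by fitting the single-frame spatial variance of flat fields to a Poisson-plus-PRNU model across exposure levels:

```python
import numpy as np

def prnu_from_flats(flat_stacks):
    """Generic photon-transfer sketch for a photon-counting detector: for each
    exposure level, estimate the mean signal and the single-frame spatial
    variance of flat fields, then fit
        variance ≈ mean + (PRNU * mean)**2
    i.e. Poisson shot noise plus fixed-pattern noise. (The paper fits a
    detector-specific SNR curve; this is only the generic idea.)

    flat_stacks : list of arrays shaped (n_frames, ny, nx), one per exposure."""
    means = np.array([stack.mean() for stack in flat_stacks])
    variances = np.array([np.mean([f.var() for f in stack]) for stack in flat_stacks])
    y = np.clip(variances - means, 0, None)          # remove the shot-noise part
    prnu_sq, *_ = np.linalg.lstsq(means[:, None] ** 2, y, rcond=None)
    return float(np.sqrt(prnu_sq[0]))

# toy usage: simulate a detector with ~2% photon response non-uniformity
rng = np.random.default_rng(3)
gain_map = 1.0 + 0.02 * rng.normal(size=(64, 64))
stacks = [rng.poisson(level * gain_map, size=(20, 64, 64)).astype(float)
          for level in (50, 200, 800, 3200)]
print(prnu_from_flats(stacks))                       # should come out near 0.02
```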

  15. Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.

    2003-01-01

    A method for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage was developed. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. This method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false-alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined on test-rig data can be applied to flight data to correctly classify the transmission operation as normal.
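
    As a rough sketch of how such a threshold can be set, the code below computes FM4, commonly defined as the normalized kurtosis of the difference signal, and places the alarm threshold at a high percentile of FM4 over no-damage baseline runs; the paper's exact threshold-definition procedure based on damage progression is not reproduced here.

```python
import numpy as np

def fm4(difference_signal):
    """FM4 metric: normalized kurtosis of the 'difference signal' (the
    time-synchronously averaged vibration signal with the regular gear-mesh
    components removed). Values near 3 indicate a Gaussian, undamaged state."""
    d = np.asarray(difference_signal, dtype=float)
    d = d - d.mean()
    return d.size * np.sum(d ** 4) / np.sum(d ** 2) ** 2

def threshold_from_baseline(baseline_signals, false_alarm_rate=0.01):
    """Set the alarm threshold as a high percentile of FM4 over no-damage runs,
    roughly bounding the expected false-alarm rate on healthy data (one plausible
    implementation of threshold setting, not the paper's exact rule)."""
    values = np.array([fm4(sig) for sig in baseline_signals])
    return np.percentile(values, 100 * (1 - false_alarm_rate))

# toy usage: healthy runs are near-Gaussian, a damaged run has impulsive content
rng = np.random.default_rng(4)
healthy = [rng.normal(size=4096) for _ in range(200)]
damaged = rng.normal(size=4096)
damaged[::512] += 8.0                                # periodic impacts from a tooth fault
thr = threshold_from_baseline(healthy)
print(thr, fm4(damaged), fm4(damaged) > thr)
```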

  16. Lesion Generation Through Ribs Using Histotripsy Therapy Without Aberration Correction

    PubMed Central

    Kim, Yohan; Wang, Tzu-Yin; Xu, Zhen; Cain, Charles A.

    2012-01-01

    This study investigates the feasibility of using high-intensity pulsed therapeutic ultrasound, or histotripsy, to non-invasively generate lesions through the ribs. Histotripsy therapy mechanically ablates tissue through the generation of a cavitation bubble cloud, which occurs when the focal pressure exceeds a certain threshold. We hypothesize that histotripsy can generate precise lesions through the ribs without aberration correction if the main lobe retains its shape and exceeds the cavitation initiation threshold and the secondary lobes remain below the threshold. To test this hypothesis, a 750-kHz focused transducer was used to generate lesions in tissue-mimicking phantoms with and without the presence of rib aberrators. In all cases, 8000 pulses with 16 to 18 MPa peak rarefactional pressure at a repetition frequency of 100 Hz were applied without aberration correction. Despite the high secondary lobes introduced by the aberrators, high-speed imaging showed that bubble clouds were generated exclusively at the focus, resulting in well-confined lesions with comparable dimensions. Collateral damage from secondary lobes was negligible, caused by single bubbles that failed to form a cloud. These results support our hypothesis, suggesting that histotripsy has a high tolerance for aberrated fields and can generate confined focal lesions through rib obstacles without aberration correction. PMID:22083767

  17. Lesion generation through ribs using histotripsy therapy without aberration correction.

    PubMed

    Kim, Yohan; Wang, Tzu-Yin; Xu, Zhen; Cain, Charles A

    2011-11-01

    This study investigates the feasibility of using high-intensity pulsed therapeutic ultrasound, or histotripsy, to non-invasively generate lesions through the ribs. Histotripsy therapy mechanically ablates tissue through the generation of a cavitation bubble cloud, which occurs when the focal pressure exceeds a certain threshold. We hypothesize that histotripsy can generate precise lesions through the ribs without aberration correction if the main lobe retains its shape and exceeds the cavitation initiation threshold and the secondary lobes remain below the threshold. To test this hypothesis, a 750-kHz focused transducer was used to generate lesions in tissue-mimicking phantoms with and without the presence of rib aberrators. In all cases, 8000 pulses with 16 to 18 MPa peak rarefactional pressure at a repetition frequency of 100 Hz were applied without aberration correction. Despite the high secondary lobes introduced by the aberrators, high-speed imaging showed that bubble clouds were generated exclusively at the focus, resulting in well-confined lesions with comparable dimensions. Collateral damage from secondary lobes was negligible, caused by single bubbles that failed to form a cloud. These results support our hypothesis, suggesting that histotripsy has a high tolerance for aberrated fields and can generate confined focal lesions through rib obstacles without aberration correction.

  18. Simulation of rare events in quantum error correction

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Vargo, Alexander

    2013-12-01

    We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ∼ exp[-α(p) d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
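
    Given logical-error estimates at several code distances, the decay rate α(p) quoted above can be extracted with a simple log-linear fit; the data points below are synthetic placeholders, not the paper's results.

```python
import numpy as np

def fit_decay_rate(distances, p_logical):
    """Fit P_L ≈ A * exp(-alpha * d) by linear least squares in log space
    and return (alpha, A)."""
    d = np.asarray(distances, dtype=float)
    logp = np.log(np.asarray(p_logical, dtype=float))
    slope, intercept = np.polyfit(d, logp, 1)
    return -slope, np.exp(intercept)

# synthetic placeholder data (not the paper's numbers): alpha = 0.9, A = 0.3
d = np.arange(4, 21, 2)
true_alpha, true_A = 0.9, 0.3
rng = np.random.default_rng(5)
pl = true_A * np.exp(-true_alpha * d) * np.exp(rng.normal(0, 0.05, d.size))
alpha, A = fit_decay_rate(d, pl)
print(alpha, A)   # should recover roughly 0.9 and 0.3
```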

  19. The effects of electroacupuncture on analgesia and peripheral sensory thresholds in patients with burn scar pain.

    PubMed

    Cuignet, Olivier; Pirlot, A; Ortiz, S; Rose, T

    2015-09-01

    The aim of this study is to observe whether the effects of electro-acupuncture (EA) on analgesia and peripheral sensory thresholds are transposable from the model of heat pain in volunteers to the clinical setting of burn scar pain. After severe burns, pathological burn scars (PPBS) may occur with excruciating pain that responds poorly to treatment and prevents patients from wearing their pressure garments, thereby leading to unesthetic and function-limiting scars. EA might be of greater benefit in terms of analgesia and functional recovery, should it interrupt this vicious circle by counteracting the peripheral hyperalgesia characterizing PPBS. We therefore enrolled 32 patients (22 males/10 females) aged 46±11 years with clinical signs of PPBS and of neuropathic pain despite treatment. The study protocol consisted of 3 weekly 30-min sessions of standardized EA, with extra individual needles in accordance with Traditional Chinese Medicine, in addition to previous treatments. We assessed VAS for pain and quantitative sensory testing (QST) twice: one week before and one week after the protocol. QST measured electrical thresholds for non-nociceptive A-beta fibers and nociceptive A-delta and C fibers in 2 dermatomes, one in the PPBS and one in the contralateral pain-free area. Based on heat pain studies, EA consisted of sessions at the extremity points of the main meridian flowing through the PPBS (0.300 s, 5 Hz, subnoxious intensity, 15 min) and at the bilateral paravertebral points corresponding to the same metameric level (15 min). A VAS reduction of 3 points, or a VAS below 3 on a 10-point scale, was considered clinically relevant. Paired t-tests compared thresholds (mean [SD]) and Wilcoxon tests compared VAS (median [IQR]) before and after treatment, with p<0.05 considered significant. The reduction of VAS for pain reached statistical but not clinical relevance (6.8 [3] vs. 4.5 [3.6]). This was due to a large subgroup of 14 non-responders whose VAS did not change after treatment (6.6 [2.7] vs. 7.2 [3.8]). That subgroup exhibited significant differences in sensory thresholds when compared to the 18 responders (VAS from 7 [3] to 3 [1]). First, responders' thresholds for A-delta and C fibers in the PPBS area were significantly lower than those in the pain-free area before treatment but normalized after acupuncture (from 60 [30]% and 63 [10]%, respectively, to 91 [11]% and 106 [36]%). This might reflect a nociceptive hypersensitivity in the PPBS that resolved after treatment. By contrast, in non-responders the nociceptive thresholds were similar in both the PPBS and the pain-free areas before treatment and did not change after EA. However, absolute values of the thresholds in the pain-free areas were significantly lower for non-responders than for responders. The fact that non-responders had significant pain scores while presenting with lowered nociceptive thresholds even in the pain-free areas suggests the possibility of a generalized supra-spinal hyperalgesia. The fact that acupuncture corrected neither the pain nor the nociceptive thresholds in this subgroup requires further investigation. We also observed a statistically and clinically relevant reduction in VAS for pruritus in all patients, even those in the subgroup of non-responders for pain, which is worth mentioning and requires further studies for confirmation.
    This observational study is the first to confirm the effects of acupuncture on analgesia and nociceptive thresholds in the clinical setting of burn pain, but only for patients presenting with a burn-localized rather than a generalized hyperalgesia.

  20. Validity and reliability of in-situ air conduction thresholds measured through hearing aids coupled to closed and open instant-fit tips.

    PubMed

    O'Brien, Anna; Keidser, Gitte; Yeend, Ingrid; Hartley, Lisa; Dillon, Harvey

    2010-12-01

    Audiometric measurements through a hearing aid ('in-situ') may facilitate provision of hearing services where these are limited. This study investigated the validity and reliability of in-situ air conduction hearing thresholds measured with closed and open domes relative to thresholds measured with insert earphones, and explored sources of variability in the measures. Twenty-four adults with sensorineural hearing impairment attended two sessions in which thresholds and real-ear-to-dial-difference (REDD) values were measured. Without correction, significantly higher low-frequency thresholds in dB HL were measured in-situ than with insert earphones. Differences were due predominantly to differences in ear canal SPL, as measured with the REDD, which were attributed to leaking low-frequency energy. Test-retest data yielded higher variability with the closed dome coupling due to inconsistent seals achieved with this tip. For all three conditions, inter-participant variability in the REDD values was greater than intra-participant variability. Overall, in-situ audiometry is as valid and reliable as conventional audiometry provided appropriate REDD corrections are made and ambient sound in the test environment is controlled.

  1. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    PubMed

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  2. Laser induced damage thresholds and laser safety levels. Do the units of measurement matter?

    NASA Astrophysics Data System (ADS)

    Wood, R. M.

    1998-04-01

    The commonly used units of measurement for laser induced damage are those of peak energy or power density. However, the laser induced damage thresholds, LIDT, of all materials are well known to be absorption, wavelength, spot size and pulse length dependent. As workers using these values become divorced from the theory it becomes increasingly important to use the correct units and to understand the correct scaling factors. This paper summarizes the theory and highlights the danger of using the wrong LIDT units in the context of potentially hazardous materials, laser safety eyewear and laser safety screens.

  3. Self-dual random-plaquette gauge model and the quantum toric code

    NASA Astrophysics Data System (ADS)

    Takeda, Koujin; Nishimori, Hidetoshi

    2004-05-01

    We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as that of the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, pc=0.889972…, and leads to several nontrivial results including bounds on the accuracy threshold in three dimensions.

  4. Correcting Systemic Deficiencies in Our Scientific Infrastructure

    PubMed Central

    Doss, Mohan

    2014-01-01

    Scientific method is inherently self-correcting. When different hypotheses are proposed, their study would result in the rejection of the invalid ones. If the study of a competing hypothesis is prevented because of the faith in an unverified one, scientific progress is stalled. This has happened in the study of low dose radiation. Though radiation hormesis was hypothesized to reduce cancers in 1980, it could not be studied in humans because of the faith in the unverified linear no-threshold model hypothesis, likely resulting in over 15 million preventable cancer deaths worldwide during the past two decades, since evidence has accumulated supporting the validity of the phenomenon of radiation hormesis. Since our society has been guided by scientific advisory committees that ostensibly follow the scientific method, the long duration of such large casualties is indicative of systemic deficiencies in the infrastructure that has evolved in our society for the application of science. Some of these deficiencies have been identified in a few elements of the scientific infrastructure, and remedial steps suggested. Identifying and correcting such deficiencies may prevent similar tolls in the future. PMID:24910580

  5. 78 FR 6272 - Rules Relating to Additional Medicare Tax; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ... Rules Relating to Additional Medicare Tax; Correction AGENCY: Internal Revenue Service (IRS), Treasury... regulations are relating to Additional Hospital Insurance Tax on income above threshold amounts (``Additional Medicare Tax''), as added by the Affordable Care Act. Specifically, these proposed regulations provide...

  6. Methods to increase reproducibility in differential gene expression via meta-analysis

    PubMed Central

    Sweeney, Timothy E.; Haynes, Winston A.; Vallania, Francesco; Ioannidis, John P.; Khatri, Purvesh

    2017-01-01

    Findings from clinical and biological studies are often not reproducible when tested in independent cohorts. Due to the testing of a large number of hypotheses and relatively small sample sizes, results from whole-genome expression studies in particular are often not reproducible. Compared to single-study analysis, gene expression meta-analysis can improve reproducibility by integrating data from multiple studies. However, there are multiple choices in designing and carrying out a meta-analysis. Yet, clear guidelines on best practices are scarce. Here, we hypothesized that studying subsets of very large meta-analyses would allow for systematic identification of best practices to improve reproducibility. We therefore constructed three very large gene expression meta-analyses from clinical samples, and then examined meta-analyses of subsets of the datasets (all combinations of datasets with up to N/2 samples and K/2 datasets) compared to a ‘silver standard’ of differentially expressed genes found in the entire cohort. We tested three random-effects meta-analysis models using this procedure. We showed relatively greater reproducibility when more-stringent effect-size thresholds were combined with relaxed significance thresholds; relatively lower reproducibility when imposing extraneous constraints on residual heterogeneity; and an underestimation of the actual false positive rate by Benjamini–Hochberg correction. In addition, multivariate regression showed that the accuracy of a meta-analysis increased significantly with more included datasets even when controlling for sample size. PMID:27634930
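
    A minimal version of one such pipeline, a DerSimonian-Laird random-effects combination per gene followed by Benjamini-Hochberg correction across genes, is sketched below; it is only one of several possible random-effects models and is not necessarily among the three compared in the paper.

```python
import numpy as np
from scipy import stats

def dersimonian_laird(effects, variances):
    """Combine per-study effect sizes for one gene with a DerSimonian-Laird
    random-effects model; returns (pooled effect, its variance, two-sided p)."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    var_pooled = 1.0 / np.sum(w_star)
    z = pooled / np.sqrt(var_pooled)
    return pooled, var_pooled, 2 * stats.norm.sf(abs(z))

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries at FDR level q."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)
    thresh = q * np.arange(1, p.size + 1) / p.size
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(p.size, bool)
    mask[order[:k]] = True
    return mask

# toy usage: 3 studies x 1000 genes; the first 50 genes carry a real effect
rng = np.random.default_rng(6)
n_genes, n_studies = 1000, 3
true = np.zeros(n_genes); true[:50] = 0.8
se = rng.uniform(0.2, 0.4, size=(n_studies, n_genes))
eff = true + rng.normal(0, se)
pvals = [dersimonian_laird(eff[:, g], se[:, g] ** 2)[2] for g in range(n_genes)]
print(benjamini_hochberg(pvals).sum(), "genes pass BH at q=0.05")
```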

  7. Ex post facto assessment of diffusion tensor imaging metrics from different MRI protocols: preparing for multicentre studies in ALS.

    PubMed

    Rosskopf, Johannes; Müller, Hans-Peter; Dreyhaupt, Jens; Gorges, Martin; Ludolph, Albert C; Kassubek, Jan

    2015-03-01

    Diffusion tensor imaging (DTI) for assessing ALS-associated white matter alterations has still not reached the level of a neuroimaging biomarker. Since large-scale multicentre DTI studies in ALS may be hampered by differences in scanning protocols, an approach for pooling of DTI data acquired with different protocols was investigated. Three hundred and nine datasets from 170 ALS patients and 139 controls were collected ex post facto from a monocentric database reflecting different scanning protocols. A 3D correction algorithm was introduced for a combined analysis of DTI metrics despite different acquisition protocols, with the focus on the corticospinal tract (CST) as the tract correlate of ALS neuropathological stage 1. A homogeneous set of data was obtained by application of 3D correction matrices. Results showed that a fractional anisotropy (FA) threshold of 0.41 could be defined to discriminate ALS patients from controls (sensitivity/specificity, 74%/72%). For the remaining test sample, sensitivity/specificity values of 68%/74% were obtained. In conclusion, the objective was to merge data recorded with different DTI protocols with 3D correction matrices for analyses at group level. These post-processing tools might facilitate group-level analysis of large study samples in a multicentre setting and aid in establishing DTI as a non-invasive biomarker for ALS.

  8. Revised standards for statistical evidence.

    PubMed

    Johnson, Valen E

    2013-11-26

    Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
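
    The quoted correspondence can be reproduced numerically. Assuming the UMPBT rejection region for a one-sided z-test, z > sqrt(2 ln γ), the classical significance level implied by an evidence threshold γ follows directly:

```python
from math import log, sqrt
from scipy.stats import norm

# Classical one-sided significance level implied by a UMPBT evidence threshold
# gamma, assuming the z-test rejection region z > sqrt(2 * ln(gamma)).
for gamma in (5, 10, 25, 50, 100, 200):
    z_crit = sqrt(2.0 * log(gamma))
    alpha = norm.sf(z_crit)
    print(f"gamma = {gamma:3d}  ->  z_crit = {z_crit:.3f}, alpha = {alpha:.4f}")
# gamma of 25-50 lands near alpha = 0.005 and gamma of 100-200 near alpha = 0.001,
# matching the revised evidence standards advocated above.
```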

  9. GOTHiC, a probabilistic model to resolve complex biases and to identify real interactions in Hi-C data.

    PubMed

    Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M

    2017-01-01

    Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
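
    The heart of the model can be sketched as follows: under a random-ligation null the probability that a read pair joins two fragments is taken proportional to the product of their relative coverages, and each observed count is tested against the corresponding binomial distribution. This is a simplified stand-in, not the GOTHiC package itself, and the p-values can then be passed to a multiple-testing correction such as Benjamini-Hochberg.

```python
import numpy as np
from scipy import stats

def binomial_interaction_test(counts):
    """Simplified GOTHiC-style test. counts is a symmetric (n_frag, n_frag)
    matrix of Hi-C read-pair counts. Under a random-ligation null, the chance
    that a read pair links fragments i and j is taken proportional to the
    product of their relative coverages; each observed count is compared with
    Binomial(N, p_ij). Returns one-sided (upper-tail) p-values."""
    counts = np.asarray(counts, float)
    N = counts.sum() / 2.0                       # total read pairs (matrix is symmetric)
    coverage = counts.sum(axis=1)
    rel = coverage / coverage.sum()
    p_null = 2.0 * np.outer(rel, rel)            # unordered pair (i, j), i != j
    pvals = np.ones_like(counts)
    iu = np.triu_indices_from(counts, k=1)
    pvals[iu] = stats.binom.sf(counts[iu] - 1, int(N), p_null[iu])   # P(X >= observed)
    return pvals

# toy usage: random background plus one genuinely interacting pair (3, 17)
rng = np.random.default_rng(7)
n = 30
background = rng.poisson(5, size=(n, n))
mat = np.triu(background, 1)
mat = mat + mat.T
mat[3, 17] += 80
mat[17, 3] += 80
p = binomial_interaction_test(mat)
print(p[3, 17], (p < 1e-6).sum())
```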

  10. Red blood cell transfusion in newborn infants

    PubMed Central

    Whyte, Robin K; Jefferies, Ann L

    2014-01-01

    Red blood cell transfusion is an important and frequent component of neonatal intensive care. The present position statement addresses the methods and indications for red blood cell transfusion of the newborn, based on a review of the current literature. The most frequent indications for blood transfusion in the newborn are the acute treatment of perinatal hemorrhagic shock and the recurrent correction of anemia of prematurity. Perinatal hemorrhagic shock requires immediate treatment with large quantities of red blood cells; the effects of massive transfusion on other blood components must be considered. Some guidelines are now available from clinical trials investigating transfusion in anemia of prematurity; however, considerable uncertainty remains. There is weak evidence that cognitive impairment may be more severe at follow-up in extremely low birth weight infants transfused at lower hemoglobin thresholds; therefore, these thresholds should be maintained by transfusion therapy. Although the risks of transfusion have declined considerably in recent years, they can be minimized further by carefully restricting neonatal blood sampling. PMID:24855419

  11. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-01

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857

  12. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.

    PubMed

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-13

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods.
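
    For orientation, a classic neural-network (LMS) non-uniformity correction with an edge-gated learning rate looks like the sketch below. The gradient-threshold gate is only a rough analogue of the adaptive-threshold edge detection and temporal gate proposed in the paper, whose exact rule is not reproduced here.

```python
import numpy as np

def nn_nuc(frames, base_lr=0.05, k_sigma=3.0):
    """Scribner-style neural-network non-uniformity correction with an
    edge-gated learning rate (a rough analogue of the paper's rule).

    frames : (n_frames, ny, nx) raw IR sequence, assumed scaled to roughly
    unit range; base_lr must be adapted to the sensor's dynamic range.
    Returns the corrected frames."""
    _, ny, nx = frames.shape
    gain = np.ones((ny, nx))
    offset = np.zeros((ny, nx))
    out = np.empty_like(frames, dtype=float)
    for t, x in enumerate(frames):
        y = gain * x + offset                       # corrected frame
        # desired image: local spatial mean (3x3 box) of the corrected frame
        pad = np.pad(y, 1, mode="edge")
        desired = sum(pad[i:i + ny, j:j + nx] for i in range(3) for j in range(3)) / 9.0
        err = y - desired
        # edge gate: freeze the update where the local gradient is large
        gy, gx = np.gradient(desired)
        grad = np.hypot(gy, gx)
        noise = 1.4826 * np.median(np.abs(err - np.median(err)))   # robust noise scale
        lr = np.where(grad > k_sigma * noise, 0.0, base_lr)
        gain -= lr * err * x                        # LMS updates of per-pixel gain/offset
        offset -= lr * err
        out[t] = y
    return out

# toy usage: a moving ramp scene with per-pixel gain/offset non-uniformity
rng = np.random.default_rng(8)
scene = np.linspace(0, 1, 64)[None, :] * np.ones((64, 64))
true_gain = 1 + 0.1 * rng.normal(size=(64, 64))
true_off = 0.05 * rng.normal(size=(64, 64))
raw = np.stack([true_gain * np.roll(scene, s, axis=1) + true_off for s in range(200)])
corrected = nn_nuc(raw)
# column-wise spread (a rough fixed-pattern-noise proxy) before and after correction
print(raw[-1].std(axis=0).mean(), corrected[-1].std(axis=0).mean())
```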

  13. Effects of ocular aberrations on contrast detection in noise.

    PubMed

    Liang, Bo; Liu, Rong; Dai, Yun; Zhou, Jiawei; Zhou, Yifeng; Zhang, Yudong

    2012-08-06

    We use adaptive optics (AO) techniques to manipulate the ocular aberrations and elucidate the effects of these ocular aberrations on contrast detection in a noisy background. The detectability of sine wave gratings at frequencies of 4, 8, and 16 cycles per degree (cpd) was measured in a standard two-interval forced-choice staircase procedure against backgrounds of various levels of white noise. The observer's ocular aberrations were either corrected with AO or left uncorrected. In low levels of external noise, contrast detection thresholds are always lowered by AO correction, whereas in high levels of external noise, they are generally elevated by AO correction. Higher levels of external noise are required to make this threshold elevation observable when signal spatial frequencies increase from 4 to 16 cpd. The linear-amplifier-model fit shows that, in most cases, both sampling efficiency and equivalent noise decrease with AO correction. Our findings indicate that ocular aberrations could be beneficial for contrast detection in high levels of noise. The implications of these findings are discussed.
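
    Under the standard linear-amplifier-model parameterization, threshold energy grows linearly with external noise power spectral density, E_t = (d'^2/η)(N + N_eq), so sampling efficiency η and equivalent noise N_eq follow from a straight-line fit. The sketch below assumes that parameterization and uses made-up numbers; the d' value is a placeholder that must match the percent-correct level tracked by the staircase.

```python
import numpy as np

def fit_lam(noise_psd, threshold_energy, d_prime=1.16):
    """Fit the linear amplifier model  E_t = (d'^2 / eta) * (N + N_eq)
    by straight-line regression of threshold energy on external noise power
    spectral density. Returns (sampling efficiency eta, equivalent noise N_eq)."""
    N = np.asarray(noise_psd, float)
    E = np.asarray(threshold_energy, float)
    slope, intercept = np.polyfit(N, E, 1)
    eta = d_prime ** 2 / slope
    N_eq = intercept / slope
    return eta, N_eq

# toy usage with made-up numbers (arbitrary units, not the paper's data)
noise_levels = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
energies = np.array([2.2, 5.0, 7.8, 13.5, 24.7])
print(fit_lam(noise_levels, energies))   # roughly (0.48, 0.78) for these numbers
```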

  14. [Do prisms according to Hans-Joachim Haase influence ocular prevalence?].

    PubMed

    Kromeier, Miriam; Schmitt, Christina; Bach, Michael; Kommerell, Guntram

    2002-12-01

    Ocular prevalence is defined as an unequal weighting of the eyes in the directional perception of stereo objects. Opinions differ as to the cause and relevance of ocular prevalence. Hans-Joachim Haase suggested that ocular prevalence is due to fixation disparity, brought about by incomplete compensation of heterophoria. He further suggested that prismatic spectacles determined by his "measuring and correcting methodology" (MKH) could restore bicentral fixation and thus establish a perceptual balance between both eyes. We examined 10 non-strabismic subjects with a visual acuity of > or = 1.0 in both eyes. It turned out that all 10 had a "fixation disparity type II", characterised according to Haase by a "disparate retinal correspondence". All subjects underwent the automatic Freiburg Ocular Prevalence Test, without and with MKH prisms. In addition we examined ocular prevalence under forced vergence and compared ocular prevalence with stereoacuity. Spontaneous ocular prevalence ranged between 1 and 69 %. Averaged over all 10 subjects, ocular prevalence without and with the MKH prisms were not significantly different. Statistical evaluation of single subjects revealed only in one of the 10 a significant difference (Bonferroni-corrected p = 0.001). In the subgroup of 5 subjects who underwent forced vergence, ocular prevalence remained unaltered between 0 and 18 Delta base out. The stereoscopic threshold of all 10 subjects ranged between 1.5 and 14.5 arcsec. There was no correlation between ocular prevalence and stereoscopic threshold (r = - 0.2, p = 0.5). Our results indicate that ocular prevalence is largely independent of phoria correction and vergence stress. The excellent stereoacuity of all subjects suggests that ocular prevalence is abandoned for the sake of optimal resolution when very small differences in depth have to be judged.

  15. Automatic detection of cardiovascular risk in CT attenuation correction maps in Rb-82 PET/CTs

    NASA Astrophysics Data System (ADS)

    Išgum, Ivana; de Vos, Bob D.; Wolterink, Jelmer M.; Dey, Damini; Berman, Daniel S.; Rubeaux, Mathieu; Leiner, Tim; Slomka, Piotr J.

    2016-03-01

    CT attenuation correction (CTAC) images acquired with PET/CT visualize coronary artery calcium (CAC) and enable CAC quantification. CAC scores acquired with CTAC have been suggested as a marker of cardiovascular disease (CVD). In this work, an algorithm previously developed for automatic CAC scoring in dedicated cardiac CT was applied to automatic CAC detection in CTAC. The study included 134 consecutive patients undergoing 82-Rb PET/CT. Low-dose rest CTAC scans were acquired (100 kV, 11 mAs, 1.4 mm × 1.4 mm × 3 mm voxel size). An experienced observer defined the reference standard with the clinically used intensity level threshold for calcium identification (130 HU). Five scans were removed from analysis due to artifacts. The algorithm extracted potential CAC by intensity-based thresholding and 3D connected component labeling. Each candidate was described by location, size, shape and intensity features. An ensemble of extremely randomized decision trees was used to identify CAC. The data set was randomly divided into training and test sets. Automatically identified CAC was quantified using volume and Agatston scores. In 33 test scans, the system detected on average 469 mm^3/730 mm^3 (64%) of CAC with 36 mm^3 false positive volume per scan. The intraclass correlation coefficient for volume scores was 0.84. Each patient was assigned to one of four CVD risk categories based on the Agatston score (0-10, 11-100, 101-400, >400). The correct CVD category was assigned to 85% of patients (Cohen's linearly weighted κ = 0.82). Automatic detection of CVD risk based on CAC scoring in rest CTAC images is feasible. This may enable large scale studies evaluating the clinical value of CAC scoring in CTAC data.
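
    A skeleton of the described pipeline (130 HU thresholding, 3-D connected-component labeling, per-candidate features, and an extremely-randomized-trees classifier) is sketched below; the feature set and the training labels are simplified placeholders rather than the study's reference standard.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import ExtraTreesClassifier

HU_THRESHOLD = 130            # clinical calcium threshold used for the reference standard
VOXEL_MM3 = 1.4 * 1.4 * 3.0   # CTAC voxel volume quoted in the abstract

def candidate_features(volume_hu):
    """Threshold the CTAC volume at 130 HU, label 3-D connected components, and
    return (features, labels_image, n_candidates). Features per candidate:
    volume, centroid (z, y, x), mean and max intensity -- a simplified stand-in
    for the location/size/shape/intensity descriptors used in the paper."""
    mask = volume_hu >= HU_THRESHOLD
    labels, n = ndimage.label(mask)
    feats = []
    for idx in range(1, n + 1):
        voxels = volume_hu[labels == idx]
        cz, cy, cx = ndimage.center_of_mass(labels == idx)
        feats.append([voxels.size * VOXEL_MM3, cz, cy, cx, voxels.mean(), voxels.max()])
    return np.array(feats), labels, n

def cvd_risk_category(calcium_score):
    """Map a calcium score to the four risk categories (0-10, 11-100, 101-400, >400)."""
    return int(np.searchsorted([10, 100, 400], calcium_score, side="left"))

# Training the candidate classifier would look like this; the labels below are
# random placeholders standing in for the expert reference standard.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
rng = np.random.default_rng(9)
toy_volume = rng.normal(0, 40, size=(40, 64, 64))
toy_volume[20:23, 30:33, 30:33] = 400                 # a synthetic calcified lesion
X, _, n = candidate_features(toy_volume)
y_placeholder = rng.integers(0, 2, size=len(X))
if len(X) > 1 and len(set(y_placeholder)) > 1:
    clf.fit(X, y_placeholder)
    cac_volume = X[clf.predict(X) == 1, 0].sum()
    print(n, "candidates;", cac_volume, "mm^3 predicted CAC;",
          "risk category", cvd_risk_category(cac_volume))
```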

  16. 78 FR 4032 - Prompt Corrective Action, Requirements for Insurance, and Promulgation of NCUA Rules and Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-18

    ... interest rate risk requirements. The amended IRPS increases the asset threshold that identifies credit... asset threshold used to define a ``complex'' credit union for determining whether risk-based net worth... or credit unions) with assets of $50 million or less from interest rate risk rule requirements. To...

  17. Higgs boson gluon-fusion production beyond threshold in N³LO QCD

    DOE PAGES

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...

    2015-03-18

    In this study, we compute the gluon fusion Higgs boson cross-section at N³LO through the second term in the threshold expansion. This calculation constitutes a major milestone towards the full N³LO cross section. Our result has the best formal accuracy in the threshold expansion currently available, and includes contributions from collinear regions besides subleading corrections from soft and hard regions, as well as certain logarithmically enhanced contributions for general kinematics. We use our results to perform a critical appraisal of the validity of the threshold approximation at N³LO in perturbative QCD.
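
    For orientation, the generic structure of such a threshold expansion of the partonic cross section in the threshold variable z is sketched below. The normalization and coefficients are illustrative only and are not taken from the paper; the delta-function and plus-distribution pieces form the leading soft-virtual term, while the plain logarithms constitute the second term in the expansion.

```latex
% Schematic threshold expansion (illustrative normalization and coefficients)
\hat{\sigma}(z) \;\simeq\; \sigma_0 \Big[ c\,\delta(1-z)
  \;+\; \sum_{k=0}^{5} d_k \Big[\tfrac{\ln^k(1-z)}{1-z}\Big]_+
  \;+\; \sum_{k=0}^{5} e_k \,\ln^k(1-z)
  \;+\; \mathcal{O}\!\big(1-z\big) \Big],
\qquad z = \frac{m_H^2}{\hat{s}} .
```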

  18. Threshold region for Higgs boson production in gluon fusion.

    PubMed

    Bonvini, Marco; Forte, Stefano; Ridolfi, Giovanni

    2012-09-07

    We provide a quantitative determination of the effective partonic kinematics for Higgs boson production in gluon fusion in terms of the collider energy at the LHC. We use the result to assess, as a function of the Higgs boson mass, whether the large m_t approximation is adequate and Sudakov resummation advantageous. We argue that our results hold to all perturbative orders. Based on our results, we conclude that the full inclusion of finite top mass corrections is likely to be important for accurate phenomenology for a light Higgs boson with m_H ~ 125 GeV at the LHC with √s=14 TeV.

  19. Optimizing the rapid measurement of detection thresholds in infants

    PubMed Central

    Jones, Pete R.; Kalwarowsky, Sarah; Braddick, Oliver J.; Atkinson, Janette; Nardini, Marko

    2015-01-01

    Accurate measures of perceptual threshold are difficult to obtain in infants. In a clinical context, the challenges are particularly acute because the methods must yield meaningful results quickly and within a single individual. The present work considers how best to maximize speed, accuracy, and reliability when testing infants behaviorally and suggests some simple principles for improving test efficiency. Monte Carlo simulations, together with empirical (visual acuity) data from 65 infants, are used to demonstrate how psychophysical methods developed with adults can produce misleading results when applied to infants. The statistical properties of an effective clinical infant test are characterized, and based on these, it is shown that (a) a reduced (false-positive) guessing rate can greatly increase test efficiency, (b) the ideal threshold to target is often below 50% correct, and (c) simply taking the max correct response can often provide the best measure of an infant's perceptual sensitivity. PMID:26237298
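
    As a concrete illustration of points (a) and (b), a minimal psychometric-function sketch is given below. The Weibull-style form, the parameter values, and the comparison of guess rates are assumptions for illustration, not the simulation code used in the paper.

```python
import numpy as np

def p_correct(stim, threshold, slope=2.0, guess=0.5, lapse=0.02):
    """Weibull-style psychometric function with a lower asymptote at the guess rate."""
    return guess + (1.0 - guess - lapse) * (1.0 - np.exp(-(stim / threshold) ** slope))

# With a 2AFC-like guess rate of 0.5 the curve never drops below 50% correct, so a
# "50% correct" target is uninformative; with a low false-positive/guess rate (e.g. 0.1,
# as in many infant preferential-looking style procedures) the same stimulus range spans
# a much larger performance range, and targets below 50% correct become meaningful.
stims = np.linspace(0.1, 3.0, 7)
print(np.round(p_correct(stims, threshold=1.0, guess=0.5), 2))
print(np.round(p_correct(stims, threshold=1.0, guess=0.1), 2))
```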

  20. 3-D transcranial ultrasound imaging with bilateral phase aberration correction of multiple isoplanatic patches: a pilot human study with microbubble contrast enhancement.

    PubMed

    Lindsey, Brooks D; Nicoletto, Heather A; Bennett, Ellen R; Laskowitz, Daniel T; Smith, Stephen W

    2014-01-01

    With stroke currently the second-leading cause of death globally, and 87% of all strokes classified as ischemic, the development of a fast, accessible, cost-effective approach for imaging occlusive stroke could have a significant impact on health care outcomes and costs. Although clinical examination and standard computed tomography alone do not provide adequate information for understanding the complex temporal events that occur during an ischemic stroke, ultrasound imaging is well suited to the task of examining blood flow dynamics in real time and may allow for localization of a clot. A prototype bilateral 3-D ultrasound imaging system using two matrix array probes on either side of the head allows for correction of skull-induced aberration throughout two entire phased array imaging volumes. We investigated the feasibility of applying this custom correction technique in five healthy volunteers with Definity microbubble contrast enhancement. Subjects were scanned simultaneously via both temporal acoustic windows in 3-D color flow mode. The number of color flow voxels above a common threshold increased as a result of aberration correction in five of five subjects, with a mean increase of 33.9%. The percentage of large arteries visualized by 3-D color Doppler imaging increased from 46% without aberration correction to 60% with aberration correction. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  1. In vivo droplet vaporization for occlusion therapy and phase aberration correction.

    PubMed

    Kripfgans, Oliver D; Fowlkes, J Brian; Woydt, Michael; Eldevik, Odd P; Carson, Paul L

    2002-06-01

    The objective was to determine whether a transpulmonary droplet emulsion (90%, <6 μm diameter) could be used to form large gas bubbles (>30 μm) temporarily in vivo. Such bubbles could occlude a targeted capillary bed when used in a large number density. Alternatively, for a very sparse population of droplets, the resulting gas bubbles could serve as point beacons for phase aberration corrections in ultrasonic imaging. Gas bubbles can be made in vivo by acoustic droplet vaporization (ADV) of injected, superheated, dodecafluoropentane droplets. Droplets vaporize in an acoustic field whose peak rarefactional pressure exceeds a well-defined threshold. In this new work, it has been found that intraarterial and intravenous injections can be used to introduce the emulsion into the blood stream for subsequent ADV (B- and M-mode on a clinical scanner) in situ. Intravenous administration results in a lower gas bubble yield, possibly because of filtering in the lung, dilution in the blood volume, or other circulatory effects. Results show that for occlusion purposes, a reduction in regional blood flow of 34% can be achieved. Individual point beacons with a +24 dB backscatter amplitude relative to white matter were created by intravenous injection and ADV.

  2. Evaluation of liver fat in the presence of iron with MRI using T2* correction: a clinical approach.

    PubMed

    Henninger, Benjamin; Kremser, Christian; Rauch, Stefan; Eder, Robert; Judmaier, Werner; Zoller, Heinz; Michaely, Henrik; Schocke, Michael

    2013-06-01

    To assess magnetic resonance imaging (MRI) with conventional chemical shift-based sequences with and without T2* correction for the evaluation of steatosis hepatitis (SH) in the presence of iron. Thirty-one patients who underwent MRI and liver biopsy because of clinically suspected diffuse liver disease were retrospectively analysed. The signal intensity (SI) was calculated in co-localised regions of interest (ROIs) using conventional spoiled gradient-echo T1 FLASH in-phase and opposed-phase (IP/OP). T2* relaxation time was recorded in a fat-saturated multi-echo-gradient-echo sequence. The fat fraction (FF) was calculated with non-corrected and T2*-corrected SIs. Results were correlated with liver biopsy. There was a significant difference (P < 0.001) between uncorrected and T2*-corrected FF in patients with SH and concomitant hepatic iron overload (HIO). Using 5 % as a threshold resulted in eight false negative results with uncorrected FF, whereas T2*-corrected FF led to true positive results in 5/8 patients. ROC analysis calculated three threshold values (8.97 %, 5.3 % and 3.92 %) for T2*-corrected FF with accuracy 84 %, sensitivity 83-91 % and specificity 63-88 %. FF with T2* correction is accurate for the diagnosis of hepatic fat in the presence of HIO. Findings of our study suggest the use of IP/OP imaging in combination with T2* correction. • Magnetic resonance helps quantify both iron and fat content within the liver • T2* correction helps to predict the correct diagnosis of steatosis hepatitis • "Fat fraction" from T2*-corrected chemical shift-based sequences accurately quantifies hepatic fat • "Fat fraction" without T2* correction underestimates hepatic fat with iron overload.
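
    The quantity being thresholded here is the chemical-shift fat fraction; a minimal sketch of the standard two-point calculation, with an optional T2* correction of each echo, is shown below. The exponential-decay correction and the echo-time arguments describe the general approach only and are assumptions, not the authors' exact fitting procedure.

```python
import numpy as np

def fat_fraction_percent(s_ip, s_op, te_ip=None, te_op=None, t2star=None):
    """Two-point in-phase/opposed-phase fat fraction, optionally T2*-corrected.

    If a T2* map (same shape, in ms) is given, each echo is corrected for the
    signal decay accumulated at its echo time before the fat fraction is formed.
    """
    s_ip = np.asarray(s_ip, dtype=float)
    s_op = np.asarray(s_op, dtype=float)
    if t2star is not None:
        s_ip = s_ip * np.exp(te_ip / t2star)
        s_op = s_op * np.exp(te_op / t2star)
    return (s_ip - s_op) / (2.0 * s_ip) * 100.0

# e.g. classify steatosis with the 5 % threshold used in the abstract (echo times illustrative):
# steatosis_mask = fat_fraction_percent(sig_ip, sig_op, 4.8, 2.4, t2star_map) > 5.0
```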

  3. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data, so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used thresholds of 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
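
    A minimal sketch of such event metrics on an hourly temperature series is given below. The exact definitions of "area" and "magnitude" are assumptions consistent with the description (degree-hours above threshold and peak exceedance, respectively), not the authors' code.

```python
import numpy as np

def event_metrics(temps_c, threshold_c=20.0, dt_hours=1.0):
    """Frequency, duration, peak magnitude and area (degree-hours) of threshold-exceedance events."""
    t = np.asarray(temps_c, dtype=float)
    above = t > threshold_c
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1     # indices where an exceedance event begins
    ends = np.where(edges == -1)[0] + 1      # indices just past where an event ends
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    events = list(zip(starts, ends))
    return {
        "frequency": len(events),
        "duration_h": float(sum(e - s for s, e in events)) * dt_hours,
        "magnitude_c": float(max((t[s:e].max() - threshold_c for s, e in events), default=0.0)),
        "area_degree_hours": float(np.clip(t - threshold_c, 0, None).sum() * dt_hours),
    }

# print(event_metrics([18, 21, 23, 19, 25, 26, 22, 17], threshold_c=20.0))
```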

  4. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    PubMed Central

    Driscoll, Mark; Mac-Thiong, Jean-Marc; Labelle, Hubert; Parent, Stefan

    2013-01-01

    A large spectrum of medical devices exists that aims to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated with patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices. PMID:23991426

  5. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting more and more research into its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty, arising from many factors among which the beam hardening (BH) effect plays a vital role, severely limit the further utilization of CT for dimensional metrology. This paper mainly focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a punishment term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility.
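
    The parameter search described above can be sketched as an objective over the reconstructed gray values: the entropy of the gray-value histogram plus a penalty that preserves contrast and consistency. The exponential parameterization a * (1 - exp(-b * p)), the toy penalty term, and the user-supplied reconstruct operator below are assumptions for illustration only, not the paper's model.

```python
import numpy as np

def gray_entropy(volume, bins=256):
    """Shannon entropy of the gray-value histogram of a reconstructed volume."""
    hist, _ = np.histogram(volume, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

def bh_cost(params, projections, reconstruct, lam=0.1):
    """Entropy of the beam-hardening-corrected reconstruction plus a consistency penalty.

    'reconstruct' is a user-supplied FBP/FDK operator mapping projections to a volume;
    the correction p_corr = a * (1 - exp(-b * p)) and the contrast penalty are placeholders.
    """
    a, b = params
    p_corr = a * (1.0 - np.exp(-b * projections))
    vol = reconstruct(p_corr)
    penalty = (np.ptp(vol) - np.ptp(reconstruct(projections))) ** 2
    return gray_entropy(vol) + lam * penalty

# scipy.optimize.minimize(bh_cost, x0=[1.0, 1.0], args=(proj, fdk_reconstruct)) would then
# search for the correction parameters before applying the simple global threshold.
```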

  6. Fast genomic predictions via Bayesian G-BLUP and multilocus models of threshold traits including censored Gaussian data.

    PubMed

    Kärkkäinen, Hanni P; Sillanpää, Mikko J

    2013-09-04

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed.
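
    The liability/threshold idea underlying these models can be illustrated with a small probit-style sketch: an unobserved Gaussian liability is compared against one or more thresholds to produce a binary or ordinal phenotype. The threshold values, residual standard deviation, and category layout below are arbitrary illustrations, not the generalized EM estimation machinery of the paper.

```python
import numpy as np
from scipy.stats import norm

def binary_prob(genomic_value, threshold=0.0, residual_sd=1.0):
    """P(y = 1) under a threshold model: y = 1 iff the liability g + e exceeds the threshold."""
    return norm.sf(threshold, loc=genomic_value, scale=residual_sd)

def ordinal_probs(genomic_value, thresholds=(-1.0, 0.0, 1.5), residual_sd=1.0):
    """Probabilities of each ordered category delimited by successive thresholds."""
    t = np.r_[-np.inf, thresholds, np.inf]
    cdf = norm.cdf((t - genomic_value) / residual_sd)
    return np.diff(cdf)

# print(binary_prob(0.8), ordinal_probs(0.8))
```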

  7. Fast Genomic Predictions via Bayesian G-BLUP and Multilocus Models of Threshold Traits Including Censored Gaussian Data

    PubMed Central

    Kärkkäinen, Hanni P.; Sillanpää, Mikko J.

    2013-01-01

    Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with censored Gaussian data, while with binary or ordinal data the superiority of the threshold model could not be confirmed. PMID:23821618

  8. Measurement of $t\bar{t}$ production with a veto on additional central jet activity in pp collisions at $\sqrt{s}=7$ TeV using the ATLAS detector

    DOE PAGES

    Aad, G.; Abbott, B.; Abdallah, J.; ...

    2012-06-21

    A measurement of the jet activity in $t\bar{t}$ events produced in proton–proton collisions at a centre-of-mass energy of 7 TeV is presented, using 2.05 fb⁻¹ of integrated luminosity collected by the ATLAS detector at the Large Hadron Collider. The $t\bar{t}$ events are selected in the dilepton decay channel with two identified b-jets from the top quark decays. Events are vetoed if they contain an additional jet with transverse momentum above a threshold in a central rapidity interval. The fraction of events surviving the jet veto is presented as a function of this threshold for four different central rapidity interval definitions. An alternate measurement is also performed, in which events are vetoed if the scalar transverse momentum sum of the additional jets in each rapidity interval is above a threshold. In both measurements, the data are corrected for detector effects and compared to the theoretical models implemented in MC@NLO, Powheg, Alpgen and Sherpa. The experimental uncertainties are often smaller than the spread of theoretical predictions, allowing deviations between data and theory to be observed in some regions of phase space.
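
    The measured quantity, the gap fraction, is simply the fraction of selected events whose additional central jet activity stays below a veto threshold Q0. A minimal sketch for both variants described above (leading additional-jet pT, and scalar pT sum) is given below; the data layout is an assumption.

```python
import numpy as np

def gap_fraction(extra_jet_pts, q0_values, use_scalar_sum=False):
    """Fraction of events with additional central-jet activity below each veto threshold Q0.

    extra_jet_pts: list of 1-D arrays, the pT (GeV) of additional jets per event
                   falling inside the chosen central rapidity interval.
    """
    if use_scalar_sum:
        activity = np.array([jets.sum() if len(jets) else 0.0 for jets in extra_jet_pts])
    else:
        activity = np.array([jets.max() if len(jets) else 0.0 for jets in extra_jet_pts])
    return np.array([(activity < q0).mean() for q0 in q0_values])

# events = [np.array([35.0, 22.0]), np.array([]), np.array([60.0])]
# print(gap_fraction(events, q0_values=[25, 50, 75, 100]))
```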

  9. On thermal corrections to near-threshold annihilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seyong; Laine, M., E-mail: skim@sejong.ac.kr, E-mail: laine@itp.unibe.ch

    2017-01-01

    We consider non-relativistic "dark" particles interacting through gauge boson exchange. At finite temperature, gauge exchange is modified in many ways: virtual corrections lead to Debye screening; real corrections amount to frequent scatterings of the heavy particles on light plasma constituents; mixing angles change. In a certain temperature and energy range, these effects are of order unity. Taking them into account in a resummed form, we estimate the near-threshold spectrum of kinetically equilibrated annihilating TeV scale particles. Weakly bound states are shown to 'melt' below freeze-out, whereas with attractive strong interactions, relevant e.g. for gluinos, bound states boost the annihilation rate by a factor of 4-80 with respect to the Sommerfeld estimate, thereby perhaps helping to avoid overclosure of the universe. Modestly non-degenerate dark sector masses and a way to combine the contributions of channels with different gauge and spin structures are also discussed.

  10. Evaluating methods of correcting for multiple comparisons implemented in SPM12 in social neuroscience fMRI studies: an example from moral psychology.

    PubMed

    Han, Hyemin; Glenn, Andrea L

    2018-06-01

    In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections that allow for more sensitivity may be beneficial, although these also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional regions) than either clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
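
    For reference, the two simplest of the compared correction families can be sketched directly on a vector of voxel p-values, as below. Voxelwise FWE via Random Field Theory and nonparametric permutation testing, which the study also evaluates, require the spatial data and are not reproduced here; the thresholds shown are the standard Bonferroni and Benjamini-Hochberg FDR procedures.

```python
import numpy as np

def bonferroni_mask(p_values, alpha=0.05):
    """Familywise error control by Bonferroni: reject where p < alpha / number of tests."""
    p = np.asarray(p_values)
    return p < alpha / p.size

def bh_fdr_mask(p_values, q=0.05):
    """Benjamini-Hochberg false discovery rate control at level q."""
    p = np.asarray(p_values)
    order = np.sort(p.ravel())
    m = order.size
    crit = q * np.arange(1, m + 1) / m
    passed = np.where(order <= crit)[0]
    cutoff = order[passed.max()] if passed.size else -1.0
    return p <= cutoff

# pvals = np.random.default_rng(0).uniform(size=10_000)
# print(bonferroni_mask(pvals).sum(), bh_fdr_mask(pvals).sum())
```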

  11. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    NASA Astrophysics Data System (ADS)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is by using the plane-wave Born approximation. This is essentially a high-energy approximation and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves in an attempt to produce a cross-section similar to that from using the more time consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. We also look at some further modifications to our Born-Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions that more accurately approximate convergent close coupling calculations.
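
    One commonly quoted form of such an Elwert-Sommerfeld correction multiplies the plane-wave Born cross section by the ratio of Coulomb normalization factors for the incoming and outgoing electron. The expression below (atomic units, with η = Z/k) is given only as an illustration of the idea and is not necessarily the exact prescription adopted in the paper.

```latex
% Illustrative Elwert-Sommerfeld factor, with \eta_{i,f} = Z / k_{i,f} (atomic units)
f_{ES} \;=\; \frac{\eta_f}{\eta_i}\,
             \frac{1 - e^{-2\pi \eta_i}}{1 - e^{-2\pi \eta_f}},
\qquad
\sigma_{\mathrm{corr}} \;=\; f_{ES}\,\sigma_{\mathrm{Born}} .
```

    Near threshold the outgoing momentum k_f vanishes, so f_ES grows like k_i/k_f and compensates the phase-space suppression of the Born result, leaving a finite, nonzero cross section at threshold, as expected for excitation of a positive ion.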

  12. Spatial beam shaping for lowering the threshold energy for femtosecond laser pulse photodisruption

    NASA Astrophysics Data System (ADS)

    Hansen, Anja; Ripken, Tammo; Heisterkamp, Alexander

    2011-10-01

    High precision femtosecond laser surgery is achieved by focusing femtosecond (fs) laser pulses in transparent tissues to create an optical breakdown leading to tissue dissection through photodisruption. For moving applications in ophthalmology from corneal or lens applications in the anterior eye to vitreal or retinal surgery in the posterior eye, the applied pulse energy needs to be minimized in order to avoid harm to the retina. However, the aberrations of the anterior eye elements cause a distortion of the wave front and consequently an increase in size of the irradiated area and a decrease in photon density in the focal volume. Therefore, higher pulse energy is required to still surpass the threshold irradiance. In this work, aberrations in an eye model consisting of a plano-convex lens for focusing and 2-hydroxyethylmethacrylate (HEMA) in a water cuvette as eye tissue were corrected with a deformable mirror in combination with a Hartmann-Shack sensor. The influence of an adaptive optics aberration correction on the pulse energy required for photodisruption was investigated. A reduction of the threshold energy was shown in the aberration-corrected case, and the spatial confinement raised the irradiance at constant pulse energy. As less energy is required for photodisruption when correcting for wave front aberrations, the potential risk of peripheral damage is reduced, especially for the retina during laser surgery in the posterior eye segment. This offers new possibilities for high precision fs-laser surgery in the treatment of several vitreal and retinal pathologies.

  13. Measurement of aortic valve calcification using multislice computed tomography: correlation with haemodynamic severity of aortic stenosis and clinical implication for patients with low ejection fraction.

    PubMed

    Cueff, Caroline; Serfaty, Jean-Michel; Cimadevilla, Claire; Laissy, Jean-Pierre; Himbert, Dominique; Tubach, Florence; Duval, Xavier; Iung, Bernard; Enriquez-Sarano, Maurice; Vahanian, Alec; Messika-Zeitoun, David

    2011-05-01

    Measurement of the degree of aortic valve calcification (AVC) using electron beam computed tomography (EBCT) is an accurate and complementary method to transthoracic echocardiography (TTE) for assessment of the severity of aortic stenosis (AS). Whether threshold values of AVC obtained with EBCT could be extrapolated to multislice computed tomography (MSCT) was unclear, and the diagnostic value of AVC in patients with low ejection fraction (EF) has never been specifically evaluated. Patients with mild to severe AS prospectively underwent MSCT and TTE within 1 week. Severe AS was defined as an aortic valve area (AVA) of less than 1 cm². In 179 patients with EF greater than 40% (validation set), the relationship between AVC and AVA was evaluated. The best threshold of AVC for the diagnosis of severe AS was then evaluated in a second subset (testing set) of 49 patients with low EF (≤40%). In this subgroup, AS severity was defined based on mean gradient, natural history or dobutamine stress echocardiography. Correlation between AVC and AVA was good (r=-0.63, p<0.0001). A threshold of 1651 arbitrary units (AU) provided 82% sensitivity, 80% specificity, 88% negative-predictive value and 70% positive-predictive value. In the testing set (patients with low EF), this threshold correctly differentiated patients with severe AS from non-severe AS in all but three cases. These three patients had an AVC score close to the threshold (1206, 1436 and 1797 AU). In this large series of patients with a wide range of AS, AVC was shown to be well correlated to AVA and may be a useful adjunct for the evaluation of AS severity, especially in difficult cases such as patients with low EF.

  14. Alcohol consumption and NHMRC guidelines: has the message got out, are people conforming and are they aware that alcohol causes cancer?

    PubMed

    Bowden, Jacqueline A; Delfabbro, Paul; Room, Robin; Miller, Caroline L; Wilson, Carlene

    2014-02-01

    To examine self-reported alcohol consumption and relationships between consumption, awareness of the 2009 NHMRC guidelines of no more than two standard drinks per day, drinking in excess of the guideline threshold and perceptions of alcohol as a risk factor for cancer. Questions were included in annual, cross-sectional surveys of approximately 2,700 South Australians aged 18 years and over from 2004 to 2012. Consumption data for 2011 and 2012 were merged for the majority of analyses. In 2011 and 2012, 21.6% of adults drank in excess of the guideline threshold (33.0% males; 10.7% females). While 53.5% correctly identified the NHMRC consumption threshold for women, only 20.3% did so for men (39.0% nominated a higher amount). A large minority said they did not know the consumption threshold for women (39.2%) or men (40.4%). In 2012, only 36.6% saw alcohol as an important risk factor for cancer. Important predictors of excess consumption for men were: higher household income; and not perceiving alcohol as an important risk factor for cancer. Predictors for women were similar but the role of household income was even more prominent. Men were nearly three times as likely to drink in excess of the guidelines as women. The majority of the population did not see an important link between alcohol and cancer. Awareness of the latest NHMRC guidelines consumption threshold is still low, particularly for men. A strategy to raise awareness of the NHMRC guidelines and the link between alcohol and cancer is warranted. © 2014 The Authors. ANZJPH © 2014 Public Health Association of Australia.

  15. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas will have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The second step is computation of the antero-posterior density gradient caused by gravity and correction for it. Motion artefacts are corrected for in a third step by use of normalized averaging, thresholding and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved separation of the two groups considerably. Our algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
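
    The baseline density-mask measure that these corrections feed into can be written in a few lines. The -950 HU cutoff below is a commonly used value and an assumption here (the abstract only notes that emphysematous voxels approach -1000 HU), and none of the paper's artefact corrections are reproduced.

```python
import numpy as np

def emphysema_index(lung_hu, threshold_hu=-950):
    """Density mask: fraction of segmented-lung voxels at or below the HU threshold."""
    hu = np.asarray(lung_hu, dtype=float)
    return float((hu <= threshold_hu).mean())

# After the inverse filtering, gravity-gradient and motion corrections described above,
# the same index would simply be recomputed on the corrected volume.
```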

  16. Energy-loss- and thickness-dependent contrast in atomic-scale electron energy-loss spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Haiyan; Zhu, Ye; Dwyer, Christian

    2014-12-31

    Atomic-scale elemental maps of materials acquired by core-loss inelastic electron scattering often exhibit an undesirable sensitivity to the unavoidable elastic scattering, making the maps counter-intuitive to interpret. Here, we present a systematic study that scrutinizes the energy-loss and sample-thickness dependence of atomic-scale elemental maps acquired using 100 keV incident electrons in a scanning transmission electron microscope. For single-crystal silicon, the balance between elastic and inelastic scattering means that maps generated from the near-threshold Si-L signal (energy loss of 99 eV) show no discernible contrast for a thickness of 0.5λ (λ is the electron mean-free path, here approximately 110 nm). At greater thicknesses we observe a counter-intuitive “negative” contrast. Only at much higher energy losses is an intuitive “positive” contrast gradually restored. Our quantitative analysis shows that the energy-loss at which a positive contrast is restored depends linearly on the sample thickness. This behavior is in very good agreement with our double-channeling inelastic scattering calculations. We test a recently-proposed experimental method to correct the core-loss inelastic scattering and restore an intuitive “positive” chemical contrast. The method is demonstrated to be reliable over a large range of energy losses and sample thicknesses. The corrected contrast for near-threshold maps is demonstrated to be (desirably) inversely proportional to sample thickness. As a result, implications for the interpretation of atomic-scale elemental maps are discussed.

  17. Inner-shell photoionization of atomic chlorine near the 2p-1 edge: a Breit-Pauli R-matrix calculation

    NASA Astrophysics Data System (ADS)

    Felfli, Z.; Deb, N. C.; Manson, S. T.; Hibbert, A.; Msezane, A. Z.

    2009-05-01

    An R-matrix calculation which takes into account relativistic effects via the Breit-Pauli (BP) operator is performed for photoionization cross sections of atomic Cl near the 2p threshold. The wavefunctions are constructed with orbitals generated from a careful large scale configuration interaction (CI) calculation with relativistic corrections using the CIV3 code of Hibbert [1] and Glass and Hibbert [2]. The results are contrasted with the calculation of Martins [3], which uses a CI with relativistic corrections, and compared with the most recent measurements [4]. [1] A. Hibbert, Comput. Phys. Commun. 9, 141 (1975) [2] R. Glass and A. Hibbert, Comput. Phys. Commun. 16, 19 (1978) [3] M. Martins, J. Phys. B 34, 1321 (2001) [4] D. Lindle et al (private communication) Research supported by U.S. DOE, Division of Chemical Sciences, NSF and CAU CFNM, NSF-CREST Program. Computing facilities at Queen's University of Belfast, UK and of DOE Office of Science, NERSC are appreciated.

  18. An evaluation of the signature extension approach to large area crop inventories utilizing space image data. [Kansas and North Dakota

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Cicone, R. C.; Stinson, J. L.; Balon, R. J.

    1977-01-01

    The author has identified the following significant results. Two examples of haze correction algorithms were tested: CROP-A and XSTAR. The CROP-A was tested in a unitemporal mode on data collected in 1973-74 over ten sample segments in Kansas. Because of the uniformly low level of haze present in these segments, no conclusion could be reached about CROP-A's ability to compensate for haze. It was noted, however, that in some cases CROP-A made serious errors which actually degraded classification performance. The haze correction algorithm XSTAR was tested in a multitemporal mode on 1975-76 LACIE sample segment data over 23 blind sites in Kansas and 18 sample segments in North Dakota, providing wide range of haze levels and other conditions for algorithm evaluation. It was found that this algorithm substantially improved signature extension classification accuracy when a sum-of-likelihoods classifier was used with an alien rejection threshold.

  19. Extending Measurements to En=30 MeV and Beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duke, Dana Lynn

    The majority of energy release in the fission process is due to the kinetic energy of the fission fragments. Average total kinetic energy (⟨TKE⟩) measurements for the major actinides over a wide range of incident neutron energies were performed at LANSCE using a Frisch-gridded ionization chamber. The experiments and results of the 238U(n,f) and 235U(n,f) reactions will be presented, including ⟨TKE⟩(En), ⟨TKE⟩(A), and mass yield distributions as a function of neutron energy. A preliminary ⟨TKE⟩(En) for 239Pu(n,f) will also be shown. The ⟨TKE⟩(En) shows a clear structure at multichance fission thresholds for all the reactions that we studied. The fragment masses are determined using the iterative double energy (2E) method, with a resolution of ΔA = 4-5 amu. The correction for the prompt fission neutrons is the main source of uncertainty, especially at high incident neutron energies, since the behavior of ν̄(A,En) is largely unknown. Different correction methods will be discussed.
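
    The double-energy (2E) mass determination rests on momentum conservation between the two fragments; a stripped-down, single-pass version is sketched below. The compound-nucleus mass number and the omission of the iterative prompt-neutron correction are simplifications for illustration only.

```python
def two_energy_masses(e1_mev, e2_mev, a_compound=236):
    """Provisional fragment masses from the 2E method: A1*E1 = A2*E2 and A1 + A2 = A_CN.

    a_compound = 236 corresponds to n + 235U (an illustrative choice); the real analysis
    iterates, correcting the measured energies for prompt-neutron emission, nubar(A, En).
    """
    a1 = a_compound * e2_mev / (e1_mev + e2_mev)
    return a1, a_compound - a1

# print(two_energy_masses(101.0, 70.0))   # roughly a light/heavy fragment split
```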

  20. Secondary production of massive quarks in thrust

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoang, André H.; Mateu, Vicent

    2016-01-22

    We present a factorization framework that takes into account the production of heavy quarks through gluon splitting in the thrust distribution for e⁺e⁻ → hadrons. The explicit factorization theorems and some numerical results are displayed in the dijet region where the kinematic scales are widely separated, which can be extended systematically to the whole spectrum. We account for the necessary two-loop matrix elements, threshold corrections, and include resummation up to N³LL order. We include nonperturbative power corrections through a field theoretical shape function, and remove the O(Λ_QCD) renormalon in the partonic soft function by appropriate mass-dependent subtractions. Our results hold for any value of the quark mass, from an infinitesimally small (merging to the known massless result) to an infinitely large one (achieving the decoupling limit). This is the first example of an application of a variable flavor number scheme to final state jets.

  1. Aberration correction results in the IBM STEM instrument.

    PubMed

    Batson, P E

    2003-09-01

    Results from the installation of aberration correction in the IBM 120 kV STEM argue that a sub-angstrom probe size has been achieved. Results and the experimental methods used to obtain them are described here. Some post-experiment processing is necessary to demonstrate the probe size of about 0.078 nm. While the promise of aberration correction is demonstrated, we remain at the very threshold of practicality, given the very stringent stability requirements.

  2. Error threshold for color codes and random three-body Ising models.

    PubMed

    Katzgraber, Helmut G; Bombin, H; Martin-Delgado, M A

    2009-08-28

    We study the error threshold of color codes, a class of topological quantum codes that allow a direct implementation of quantum Clifford gates suitable for entanglement distillation, teleportation, and fault-tolerant quantum computation. We map the error-correction process onto a statistical mechanical random three-body Ising model and study its phase diagram via Monte Carlo simulations. The obtained error threshold of p(c) = 0.109(2) is very close to that of Kitaev's toric code, showing that enhanced computational capabilities do not necessarily imply lower resistance to noise.

  3. Higgs boson gluon-fusion production beyond threshold in N³LO QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko

    In this study, we compute the gluon fusion Higgs boson cross-section at N³LO through the second term in the threshold expansion. This calculation constitutes a major milestone towards the full N³LO cross section. Our result has the best formal accuracy in the threshold expansion currently available, and includes contributions from collinear regions besides subleading corrections from soft and hard regions, as well as certain logarithmically enhanced contributions for general kinematics. We use our results to perform a critical appraisal of the validity of the threshold approximation at N³LO in perturbative QCD.

  4. Search for Spatially Extended Fermi-LAT Sources Using Two Years of Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lande, Joshua; Ackermann, Markus; Allafort, Alice

    2012-07-13

    Spatial extension is an important characteristic for correctly associating γ-ray-emitting sources with their counterparts at other wavelengths and for obtaining an unbiased model of their spectra. We present a new method for quantifying the spatial extension of sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi). We perform a series of Monte Carlo simulations to validate this tool and calculate the LAT threshold for detecting the spatial extension of sources. We then test all sources in the second Fermi-LAT catalog (2FGL) for extension. We report the detection of seven new spatially extended sources.

  5. Repeat-aware modeling and correction of short read errors.

    PubMed

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id = redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
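
    The baseline counting-and-thresholding step that the paper improves on can be sketched as below; the k value and the fixed frequency threshold are illustrative, and the paper's actual contribution (inferring genomic k-mer frequencies from misread relationships and estimating the threshold from the data) is not reproduced.

```python
from collections import Counter

def count_kmers(reads, k=21):
    """Count every k-mer occurring in a collection of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def solid_kmers(counts, threshold=3):
    """Treat k-mers at or above the frequency threshold as trusted (likely error-free)."""
    return {kmer for kmer, c in counts.items() if c >= threshold}

# reads = ["ACGTACGTACGTACGTACGTAC", ...]; trusted = solid_kmers(count_kmers(reads))
```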

  6. Analysis of Prostate Patient Setup and Tracking Data: Potential Intervention Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su Zhong, E-mail: zsu@floridaproton.org; Zhang Lisha; Murphy, Martin

    Purpose: To evaluate the setup, interfraction, and intrafraction organ motion error distributions and simulate intrafraction intervention strategies for prostate radiotherapy. Methods and Materials: A total of 17 patients underwent treatment setup and were monitored using the Calypso system during radiotherapy. On average, the prostate tracking measurements were performed for 8 min/fraction for 28 fractions for each patient. For both patient couch shift data and intrafraction organ motion data, the systematic and random errors were obtained from the patient population. The planning target volume margins were calculated using the van Herk formula. Two intervention strategies were simulated using the tracking data: the deviation threshold and period. The related planning target volume margins, time costs, and prostate position 'fluctuation' were presented. Results: The required treatment margin for the left-right, superoinferior, and anteroposterior axes was 8.4, 10.8, and 14.7 mm for skin mark-only setup and 1.3, 2.3, and 2.8 mm using the on-line setup correction, respectively. Prostate motion significantly correlated among the superoinferior and anteroposterior directions. Of the 17 patients, 14 had prostate motion within 5 mm of the initial setup position for ≥91.6% of the total tracking time. The treatment margin decreased to 1.1, 1.8, and 2.3 mm with a 3-mm threshold correction and to 0.5, 1.0, and 1.5 mm with an every-2-min correction in the left-right, superoinferior, and anteroposterior directions, respectively. The periodic corrections significantly increased the treatment time and increased the number of instances when the setup correction was made during transient excursions. Conclusions: The residual systematic and random error due to intrafraction prostate motion is small after on-line setup correction. Threshold-based and time-based intervention strategies both reduced the planning target volume margins. The time-based strategies increased the treatment time and the in-fraction position fluctuation.
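
    The PTV margins quoted above follow the van Herk recipe, which combines the population systematic error Σ and random error σ per axis. A minimal sketch, with illustrative error values rather than those of the study, is shown below.

```python
def van_herk_margin_mm(sigma_systematic_mm, sigma_random_mm):
    """Classic van Herk PTV margin recipe per axis: M = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# e.g. residual errors after online correction (illustrative numbers only):
for axis, big_sigma, small_sigma in [("LR", 0.3, 0.8), ("SI", 0.6, 1.1), ("AP", 0.8, 1.2)]:
    print(axis, round(van_herk_margin_mm(big_sigma, small_sigma), 1), "mm")
```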

  7. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J., E-mail: edwardc@schoolph.uma

    This paper reveals that nearly 25 years after the BEIR I committee used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, which was a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  8. Dental age estimation: the role of probability estimates at the 10 year threshold.

    PubMed

    Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham

    2014-08-01

    The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple to use, widely available software has enabled the development of the probability threshold for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that a subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age compared. The dental age was compared with chronological age to determine whether the dental age correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over 10 year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to the technique of age estimation. Copyright © 2014. Published by Elsevier Ltd.
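
    The probability calculation described (Excel's NORMDIST applied per tooth, then averaged within a subject) corresponds to the normal-CDF sketch below; the per-tooth means and standard deviations are placeholders, not the study's reference data.

```python
from scipy.stats import norm

def prob_over_threshold(mean_age, sd_age, threshold=10.0):
    """P(age > threshold) for one developing tooth, assuming a normal age-at-stage model."""
    return norm.sf(threshold, loc=mean_age, scale=sd_age)

def subject_probability(per_tooth_params, threshold=10.0):
    """Average the per-tooth probabilities into a single subject-level probability."""
    probs = [prob_over_threshold(m, s, threshold) for m, s in per_tooth_params]
    return sum(probs) / len(probs)

# e.g. three teeth with placeholder stage statistics (mean age, SD in years):
# print(subject_probability([(9.4, 0.8), (10.6, 1.1), (9.9, 0.9)]))
```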

  9. The prestimulus default mode network state predicts cognitive task performance levels on a mental rotation task.

    PubMed

    Kamp, Tabea; Sorger, Bettina; Benjamins, Caroline; Hausfeld, Lars; Goebel, Rainer

    2018-06-22

    Linking individual task performance to preceding, regional brain activation is an ongoing goal of neuroscientific research. Recently, it could be shown that the activation and connectivity within large-scale brain networks prior to task onset influence performance levels. More specifically, prestimulus default mode network (DMN) effects have been linked to performance levels in sensory near-threshold tasks, as well as cognitive tasks. However, it still remains uncertain how the DMN state preceding cognitive tasks affects performance levels when the period between task trials is long and flexible, allowing participants to engage in different cognitive states. We here investigated whether the prestimulus activation and within-network connectivity of the DMN are predictive of the correctness and speed of task performance on a cognitive (match-to-sample) mental rotation task, employing a sparse event-related functional magnetic resonance imaging (fMRI) design. We found that prestimulus activation in the DMN predicted the speed of correct trials, with a higher amplitude preceding correct fast response trials compared to correct slow response trials. Moreover, we found higher connectivity within the DMN before incorrect trials compared to correct trials. These results indicate that pre-existing activation and connectivity states within the DMN influence task performance on cognitive tasks, affecting both the correctness and speed of task execution. The findings support existing theories and empirical work relating mind-wandering and cognitive task performance to the DMN and expand these by establishing a relationship between the prestimulus DMN state and the speed of cognitive task performance. © 2018 The Authors. Brain and Behavior published by Wiley Periodicals, Inc.

  10. Air temperature thresholds to evaluate snow melting at the surface of Alpine glaciers by T-index models: the case study of Forni Glacier (Italy)

    NASA Astrophysics Data System (ADS)

    Senese, A.; Maugeri, M.; Vuillermoz, E.; Smiraglia, C.; Diolaiuti, G.

    2014-03-01

    The glacier melt conditions (i.e., null surface temperature and positive energy budget) can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). If the latter is not present, assessing actual melting conditions and evaluating the melt amount is difficult, and simple methods based on T-index (or degree-day) models are generally applied. These models require the choice of a correct temperature threshold. In fact, melt does not necessarily occur at daily air temperatures higher than 273.15 K. In this paper, to detect the most indicative threshold witnessing melt conditions in the April-June period, we have analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS set up at 2631 m a.s.l. on the ablation tongue of the Forni Glacier (Italian Alps), and by a weather station located outside the studied glacier (at Bormio, a village at 1225 m a.s.l.). Moreover, we have evaluated the glacier energy budget and the Snow Water Equivalent (SWE) values during this time frame. The snow ablation amount was then estimated both from the surface energy balance (from supraglacial AWS data) and from the T-index method (from Bormio data, applying the mean tropospheric lapse rate and varying the air temperature threshold), and the results were compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of glacier air temperatures and that the major uncertainty in the computation of snow melt is driven by the choice of an appropriate temperature threshold. From our study, using a threshold value 5.0 K lower than the widely applied 273.15 K permits the most reliable reconstruction of glacier melt.
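
    A T-index (degree-day) melt estimate with an adjustable threshold, driven by an off-glacier station and a lapse-rate extrapolation, can be sketched as below. The degree-day factor and the lapse-rate value are illustrative assumptions, not the calibrated values of the study; the station and glacier elevations are taken from the abstract.

```python
def lapse_to_glacier(t_station_k, z_station_m=1225.0, z_glacier_m=2631.0,
                     lapse_k_per_m=0.0065):
    """Extrapolate station air temperature to the glacier elevation with a fixed lapse rate."""
    return t_station_k - lapse_k_per_m * (z_glacier_m - z_station_m)

def t_index_melt_mm(daily_t_station_k, threshold_k=268.15, ddf_mm_per_k_day=4.5):
    """Degree-day melt: sum of positive exceedances of the threshold times a melt factor.

    threshold_k = 268.15 K is the '5 K below freezing' threshold suggested in the abstract;
    the degree-day factor is a placeholder.
    """
    melt = 0.0
    for t in daily_t_station_k:
        t_glacier = lapse_to_glacier(t)
        melt += max(t_glacier - threshold_k, 0.0) * ddf_mm_per_k_day
    return melt

# print(t_index_melt_mm([271.0, 274.5, 276.0, 269.5]))
```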

  11. Second look at the spread of epidemics on networks

    NASA Astrophysics Data System (ADS)

    Kenah, Eben; Robins, James M.

    2007-09-01

    In an important paper, Newman [Phys. Rev. E66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.

  12. Automated scoring of regional lung perfusion in children from contrast enhanced 3D MRI

    NASA Astrophysics Data System (ADS)

    Heimann, Tobias; Eichinger, Monika; Bauman, Grzegorz; Bischoff, Arved; Puderbach, Michael; Meinzer, Hans-Peter

    2012-03-01

    MRI perfusion images give information about regional lung function and can be used to detect pulmonary pathologies in cystic fibrosis (CF) children. However, manual assessment of the percentage of pathologic tissue in defined lung subvolumes features large inter- and intra-observer variation, making it difficult to determine disease progression consistently. We present an automated method to calculate a regional score for this purpose. First, lungs are located based on thresholding and morphological operations. Second, statistical shape models of left and right children's lungs are initialized at the determined locations and used to precisely segment morphological images. Segmentation results are transferred to perfusion maps and employed as masks to calculate perfusion statistics. An automated threshold to determine pathologic tissue is calculated and used to determine accurate regional scores. We evaluated the method on 10 MRI images and achieved an average surface distance of less than 1.5 mm compared to manual reference segmentations. Pathologic tissue was detected correctly in 9 cases. The approach seems suitable for detecting early signs of CF and monitoring response to therapy.

  13. Towards a Clinical Decision Support System for External Beam Radiation Oncology Prostate Cancer Patients: Proton vs. Photon Radiotherapy? A Radiobiological Study of Robustness and Stability

    PubMed Central

    Walsh, Seán; Roelofs, Erik; Kuess, Peter; van Wijk, Yvonka; Lambin, Philippe; Jones, Bleddyn; Verhaegen, Frank

    2018-01-01

    We present a methodology which can be utilized to select proton or photon radiotherapy in prostate cancer patients. Four state-of-the-art competing treatment modalities were compared (by way of an in silico trial) for a cohort of 25 prostate cancer patients, with and without correction strategies for prostate displacements. Metrics measured from clinical image guidance systems were used. Three correction strategies were investigated; no-correction, extended-no-action-limit, and online-correction. Clinical efficacy was estimated via radiobiological models incorporating robustness (how probable a given treatment plan was delivered) and stability (the consistency between the probable best and worst delivered treatments at the 95% confidence limit). The results obtained at the cohort level enabled the determination of a threshold for likely clinical benefit at the individual level. Depending on the imaging system and correction strategy; 24%, 32% and 44% of patients were identified as suitable candidates for proton therapy. For the constraints of this study: Intensity-modulated proton therapy with online-correction was on average the most effective modality. Irrespective of the imaging system, each treatment modality is similar in terms of robustness, with and without the correction strategies. Conversely, there is substantial variation in stability between the treatment modalities, which is greatly reduced by correction strategies. This study provides a ‘proof-of-concept’ methodology to enable the prospective identification of individual patients that will most likely (above a certain threshold) benefit from proton therapy. PMID:29463018

  14. A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography

    PubMed Central

    Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.

    2007-01-01

    In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front-view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were then compared with a targeted adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front and side view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
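
    A schematic numpy sketch of the additive, circularly symmetric correction described above: estimate a radial background profile, then shift each voxel so that the background matches a target adipose value. For brevity, the adipose-only sampling and trend fitting of the paper are replaced by simple radial median binning, so this illustrates the idea rather than the published algorithm.

```python
import numpy as np

def radial_cupping_correction(slice_2d, center, target_value, n_bins=50):
    """Estimate a circularly symmetric background profile and return an
    additively corrected slice whose background matches target_value."""
    yy, xx = np.indices(slice_2d.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.digitize(r, bins) - 1
    # Median value per radial bin serves as the background estimate.
    profile = np.array([np.median(slice_2d[which == b]) if np.any(which == b)
                        else target_value for b in range(n_bins)])
    background = profile[np.clip(which, 0, n_bins - 1)]
    return slice_2d + (target_value - background)

# Synthetic example: a flat object with an artificial cupping (radial dip).
img = np.full((129, 129), 100.0)
yy, xx = np.indices(img.shape)
img -= 0.004 * ((yy - 64) ** 2 + (xx - 64) ** 2)   # cupping artifact
corrected = radial_cupping_correction(img, center=(64, 64), target_value=100.0)
print(img.min(), corrected.min())
```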

  15. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    DOE PAGES

    Kilcrease, D. P.; Brookes, S.

    2013-08-19

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross-sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross-section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert–Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross-section similar to that obtained from the more time-consuming Coulomb–Born approximation. We compare this new approximation with other, often employed correction procedures. We also look at some further modifications to our Born–Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb–Born approximation for singly charged ions, which more accurately approximates convergent close-coupling calculations.
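
    A hedged sketch of how such a near-threshold factor might be applied to a Born cross-section. It assumes the nonrelativistic Coulomb parameter eta = Z/k in atomic units and the standard Elwert factor f_E = (eta_f/eta_i)[1 - exp(-2*pi*eta_i)]/[1 - exp(-2*pi*eta_f)]; plane_wave_born_cs is a user-supplied placeholder, and this is not the authors' implementation.

```python
import math

def elwert_factor(Z, k_i, k_f):
    """Elwert-Sommerfeld factor multiplying a plane-wave Born cross-section,
    with Coulomb parameters eta = Z / k in atomic units (k: electron momentum)."""
    eta_i, eta_f = Z / k_i, Z / k_f
    return (eta_f / eta_i) * (1.0 - math.exp(-2.0 * math.pi * eta_i)) \
                           / (1.0 - math.exp(-2.0 * math.pi * eta_f))

def corrected_cross_section(Z, energy_in, threshold, plane_wave_born_cs):
    """Apply the Elwert factor to a Born cross-section near threshold.
    Energies in hartree; plane_wave_born_cs(energy) is a placeholder callable."""
    if energy_in <= threshold:
        return 0.0
    k_i = math.sqrt(2.0 * energy_in)
    k_f = math.sqrt(2.0 * (energy_in - threshold))
    return elwert_factor(Z, k_i, k_f) * plane_wave_born_cs(energy_in)

# Toy usage with a made-up Born cross-section shape:
print(corrected_cross_section(Z=2, energy_in=1.05, threshold=1.0,
                              plane_wave_born_cs=lambda e: 1.0 / e))
```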

  16. Detection of short-term changes in vegetation cover by use of LANDSAT imagery. [Arizona

    NASA Technical Reports Server (NTRS)

    Turner, R. M. (Principal Investigator); Wiseman, F. M.

    1975-01-01

    The author has identified the following significant results. By using a constant band 6 to band 5 radiance ratio of 1.25, the changing pattern of areas of relatively dense vegetation cover was detected for the semiarid region in the vicinity of Tucson, Arizona. Electronically produced binary thematic masks were used to map areas with dense vegetation. The foliar cover threshold represented by the ratio was not accurately determined, but field measurements show that the threshold lies in the range of 10 to 25 percent foliage cover. Montane evergreen forests with constant dense cover were correctly shown to exceed the threshold on all dates. The summer-active grassland exceeded the threshold in the summer unless rainfall was insufficient. Desert areas exceeded the threshold during the spring of 1973 following heavy rains; the same areas during the rainless spring of 1974 did not exceed the threshold. Irrigated fields, parks, golf courses, and riparian communities were among the habitats most frequently surpassing the threshold.
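
    An illustrative numpy sketch of the band-ratio masking described above; the band numbers and the 1.25 cut-off follow the abstract, while the array names and radiance values are hypothetical.

```python
import numpy as np

def vegetation_mask(band6, band5, ratio_threshold=1.25, eps=1e-6):
    """Binary thematic mask: pixels whose band-6/band-5 radiance ratio
    exceeds the threshold are flagged as relatively dense vegetation."""
    ratio = band6 / np.maximum(band5, eps)
    return ratio > ratio_threshold

# Toy example with random radiance values standing in for LANDSAT bands:
band5 = np.random.uniform(10, 60, size=(100, 100))
band6 = np.random.uniform(10, 60, size=(100, 100))
mask = vegetation_mask(band6, band5)
print(mask.mean())   # fraction of pixels above the vegetation threshold
```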

  17. Self-adjusting threshold mechanism for pixel detectors

    NASA Astrophysics Data System (ADS)

    Heim, Timon; Garcia-Sciveres, Maurice

    2017-09-01

    Readout chips of hybrid pixel detectors use a low-power amplifier and threshold discrimination to process charge deposited in semiconductor sensors. Due to transistor mismatch, each pixel circuit needs to be calibrated individually to achieve response uniformity. Traditionally this is addressed by programmable threshold trimming in each pixel, which must remain robust against radiation effects, temperature, and time. In this paper a self-adjusting threshold mechanism is presented, which corrects the threshold for both spatial inequality and time variation and maintains a constant response. It exploits the electrical noise as a relative measure for the threshold and automatically adjusts the threshold of each pixel to maintain a uniform frequency of noise hits. A digital implementation of the method in the form of an up/down counter and combinatorial logic filter is presented. The behavior of this circuit has been simulated to evaluate its performance and compare it to traditional calibration results. The simulation results show that this mechanism can perform equally well, but eliminates instability over time and is immune to single event upsets.
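
    A behavioral-level sketch (not the chip logic) of the up/down counter idea: each pixel's threshold is nudged up on a noise hit and down otherwise, so its noise-hit occupancy converges toward a common target. The step sizes and occupancy target are illustrative assumptions.

```python
import random

def self_adjust_threshold(noise_sigma, target_occupancy=0.01, steps=20000,
                          threshold=0.0, step_size=0.001, rng=random.Random(0)):
    """Per-pixel feedback loop: raise the threshold after a noise hit,
    lower it (more slowly) after a quiet sample, so the noise-hit rate
    settles near target_occupancy."""
    for _ in range(steps):
        sample = rng.gauss(0.0, noise_sigma)
        if sample > threshold:                        # noise hit -> count up
            threshold += step_size * (1.0 - target_occupancy)
        else:                                         # quiet -> count down
            threshold -= step_size * target_occupancy
    return threshold

# Two pixels with mismatched noise end up at different absolute thresholds
# but approximately the same noise occupancy:
for sigma in (1.0, 1.5):
    print(sigma, self_adjust_threshold(noise_sigma=sigma))
```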

  18. Threshold units: A correct metric for reaction time?

    PubMed Central

    Zele, Andrew J.; Cao, Dingcai; Pokorny, Joel

    2007-01-01

    Purpose To compare reaction time (RT) to rod incremental and decremental stimuli expressed in physical contrast units or psychophysical threshold units. Methods Rod contrast detection thresholds and suprathreshold RTs were measured for Rapid-On and Rapid-Off ramp stimuli. Results Threshold sensitivity to Rapid-Off stimuli was higher than to Rapid-On stimuli. Suprathreshold RTs specified in Weber contrast for Rapid-Off stimuli were shorter than for Rapid-On stimuli. Reaction time data expressed in multiples of threshold reversed the outcomes: Reaction times for Rapid-On stimuli were shorter than those for Rapid-Off stimuli. The use of alternative contrast metrics also failed to equate RTs. Conclusions A case is made that the interpretation of RT data may be confounded when expressed in threshold units. Stimulus energy or contrast is the only metric common to the response characteristics of the cells underlying speeded responses. The use of threshold metrics for RT can confuse the interpretation of an underlying physiological process. PMID:17240416

  19. The importance of reference materials in doping-control analysis.

    PubMed

    Mackay, Lindsey G; Kazlauskas, Rymantas

    2011-08-01

    Currently a large range of pure substance reference materials are available for calibration of doping-control methods. These materials enable traceability to the International System of Units (SI) for the results generated by World Anti-Doping Agency (WADA)-accredited laboratories. Only a small number of prohibited substances have threshold limits for which quantification is highly important. For these analytes only the highest quality reference materials that are available should be used. Many prohibited substances have no threshold limits and reference materials provide essential identity confirmation. For these reference materials the correct identity is critical and the methods used to assess identity in these cases should be critically evaluated. There is still a lack of certified matrix reference materials to support many aspects of doping analysis. However, in key areas a range of urine matrix materials have been produced for substances with threshold limits, for example 19-norandrosterone and testosterone/epitestosterone (T/E) ratio. These matrix-certified reference materials (CRMs) are an excellent independent means of checking method recovery and bias and will typically be used in method validation and then regularly as quality-control checks. They can be particularly important in the analysis of samples close to threshold limits, in which measurement accuracy becomes critical. Some reference materials for isotope ratio mass spectrometry (IRMS) analysis are available and a matrix material certified for steroid delta values is currently under production. In other new areas, for example the Athlete Biological Passport, peptide hormone testing, designer steroids, and gene doping, reference material needs still need to be thoroughly assessed and prioritised.

  20. Meta-Analysis of Single-Case Research Design Studies on Instructional Pacing.

    PubMed

    Tincani, Matt; De Mers, Marilyn

    2016-11-01

    More than four decades of research on instructional pacing has yielded varying and, in some cases, conflicting findings. The purpose of this meta-analysis was to synthesize single-case research design (SCRD) studies on instructional pacing to determine the relative benefits of brisker or slower pacing. Participants were children and youth with and without disabilities in educational settings, excluding higher education. Tau-U, a non-parametric statistic for analyzing data in SCRD studies, was used to determine effect size estimates. The article extraction yielded 13 instructional pacing studies meeting contemporary standards for high quality SCRD research. Eleven of the 13 studies reported small to large magnitude effects when two or more pacing parameters were compared, suggesting that instructional pacing is a robust instructional variable. Brisker instructional pacing with brief inter-trial interval (ITI) produced small increases in correct responding and medium to large reductions in challenging behavior compared with extended ITI. Slower instructional pacing with extended wait-time produced small increases in correct responding, but also produced small increases in challenging behavior compared with brief wait-time. Neither brief ITI nor extended wait-time meets recently established thresholds for evidence-based practice, highlighting the need for further instructional pacing research. © The Author(s) 2016.

  1. NNLO computational techniques: The cases H→γγ and H→gg

    NASA Astrophysics Data System (ADS)

    Actis, Stefano; Passarino, Giampiero; Sturm, Christian; Uccirati, Sandro

    2009-04-01

    A large set of techniques needed to compute decay rates at the two-loop level are derived and systematized. The main emphasis of the paper is on the two Standard Model decays H→γγ and H→gg. The techniques, however, have a much wider range of application: they give practical examples of general rules for two-loop renormalization; they introduce simple recipes for handling internal unstable particles in two-loop processes; they illustrate simple procedures for the extraction of collinear logarithms from the amplitude. The latter is particularly relevant to show cancellations, e.g. cancellation of collinear divergencies. Furthermore, the paper deals with the proper treatment of non-enhanced two-loop QCD and electroweak contributions to different physical (pseudo-)observables, showing how they can be transformed in a way that allows for a stable numerical integration. Numerical results for the two-loop percentage corrections to H→γγ,gg are presented and discussed. When applied to the process pp→gg+X→H+X, the results show that the electroweak scaling factor for the cross section is between -4% and +6% in the range 100 GeV

  2. Measurement-based quantum communication with resource states generated by entanglement purification

    NASA Astrophysics Data System (ADS)

    Wallnöfer, J.; Dür, W.

    2017-01-01

    We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.

  3. The development and testing of a brief ('gist-based') supplementary colorectal cancer screening information leaflet.

    PubMed

    Smith, Samuel G; Wolf, Michael S; Obichere, Austin; Raine, Rosalind; Wardle, Jane; von Wagner, Christian

    2013-12-01

    To design and user-test a 'gist-based' colorectal cancer screening information leaflet, which promotes comprehension of the screening offer. Twenty-eight individuals approaching screening age were recruited from organisations in deprived areas of England. Using a between-subjects design, we tested iterations of a newly-designed gist-based information leaflet. Participants read the leaflet and answered 8 'true' or 'false' comprehension statements. For the leaflet to be considered fit-for-purpose, all statements had to be answered correctly by at least 80% of participants in each round. Alterations were made if this threshold was not met and additional rounds of testing were undertaken. At round 1, answers to 2/8 statements did not meet the threshold. After changes, answers in round 2 did not reach the threshold for 1/8 statements. In round 3, all answers were adequate and the leaflet was deemed fit-for-purpose. Qualitative data offered solutions such as language and layout changes which led to improved comprehension of the leaflet. User-testing substantially improved the design and subsequent comprehensibility of a theory-driven gist-based colorectal cancer screening information leaflet. This leaflet will be evaluated as part of a large national randomised controlled trial designed to reduce socioeconomic inequalities in colorectal cancer screening participation. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  4. Estimating economic thresholds for site-specific weed control using manual weed counts and sensor technology: an example based on three winter wheat trials.

    PubMed

    Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland

    2014-02-01

    Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effect of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras; yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha⁻¹ plant⁻¹ m² in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m⁻², respectively. Spring drought reduced yield effects of weeds considerably in one trial, since water became yield limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha⁻¹. Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.

  5. Matte painting in stereoscopic synthetic imagery

    NASA Astrophysics Data System (ADS)

    Eisenmann, Jonathan; Parent, Rick

    2010-02-01

    While there have been numerous studies concerning human perception in stereoscopic environments, rules of thumb for cinematography in stereoscopy have not yet been well-established. To that aim, we present experiments and results of subject testing in a stereoscopic environment, similar to that of a theater (i.e. large flat screen without head-tracking). In particular we wish to empirically identify thresholds at which different types of backgrounds, referred to in the computer animation industry as matte paintings, can be used while still maintaining the illusion of seamless perspective and depth for a particular scene and camera shot. In monoscopic synthetic imagery, any type of matte painting that maintains proper perspective lines, depth cues, and coherent lighting and textures saves in production costs while still maintaining the illusion of an alternate cinematic reality. However, in stereoscopic synthetic imagery, a 2D matte painting that worked in monoscopy may fail to provide the intended illusion of depth because the viewer has added depth information provided by stereopsis. We intend to observe two stereoscopic perceptual thresholds in this study which will provide practical guidelines indicating when to use each of three types of matte paintings. We ran subject tests in two virtual testing environments, each with varying conditions. Data were collected showing how the choices of the users matched the correct response, and the resulting perceptual threshold patterns are discussed below.

  6. Higgs boson gluon–fusion production at threshold in N 3LO QCD

    DOE PAGES

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...

    2014-09-02

    We present the cross-section for the threshold production of the Higgs boson at hadron-colliders at next-to-next-to-next-to-leading order (N 3LO) in perturbative QCD. Furthermore, we present an analytic expression for the partonic cross-section at threshold and the impact of these corrections on the numerical estimates for the hadronic cross-section at the LHC. With this result we achieve a major milestone towards a complete evaluation of the cross-section at N 3LO which will reduce the theoretical uncertainty in the determination of the strengths of the Higgs boson interactions.

  7. Modeling of digital mammograms using bicubic spline functions and additive noise

    NASA Astrophysics Data System (ADS)

    Graffigne, Christine; Maintournam, Aboubakar; Strauss, Anne

    1998-09-01

    The purpose of our work is the detection of microcalcifications on digital mammograms. To do so, we model the grey levels of digital mammograms as the sum of a surface trend (a bicubic spline function) and additive noise or texture. We also introduce a robust estimation method to overcome the bias introduced by the microcalcifications. After the estimation, we treat the subtraction image values as noise. If the noise is uncorrelated, we fit its probability distribution with Pearson's system of densities, which allows us to threshold the subtraction images accurately and therefore to detect the microcalcifications. If the noise is correlated, a unilateral autoregressive process is used and its coefficients are again estimated by the least-squares method. We then consider non-overlapping windows on the residual image; in each window the texture residue is computed and compared with an a priori threshold. This provides correct localization of the microcalcification clusters. However, this technique is considerably more time consuming than the automatic threshold assuming uncorrelated noise and does not lead to significantly better results. In conclusion, even if the assumption of uncorrelated noise is not correct, the automatic thresholding based on Pearson's system performs quite well on most of our images.
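
    A simplified sketch of the trend-plus-residual idea using scipy's smoothing bicubic spline; the robust (bias-resistant) estimation and the Pearson-system density fit described above are replaced here by an ordinary smoothing spline and a plain k-sigma cut on the residuals.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def detect_bright_spots(image, smoothing=None, k=4.0):
    """Fit a bicubic spline surface trend to the image, subtract it, and
    threshold the residual 'noise' at mean + k * std to flag candidate
    microcalcification pixels."""
    ny, nx = image.shape
    y, x = np.arange(ny), np.arange(nx)
    s = smoothing if smoothing is not None else ny * nx   # heavy smoothing
    trend = RectBivariateSpline(y, x, image, kx=3, ky=3, s=s)(y, x)
    residual = image - trend
    return residual > residual.mean() + k * residual.std()

# Synthetic mammogram-like example: smooth background plus a few bright spots.
yy, xx = np.indices((128, 128))
img = 50 + 0.1 * yy + 0.05 * xx + np.random.normal(0, 1, (128, 128))
img[40, 40] += 15
img[90, 70] += 15
print(np.argwhere(detect_bright_spots(img))[:5])
```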

  8. Attenuation correction with region growing method used in the positron emission mammography imaging system

    NASA Astrophysics Data System (ADS)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated to breast imaging. With a better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on a three-dimensional seeded region growing image segmentation (3DSRG-AC) method has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity property of the segmentation result makes this new method free of activity variation of breast tissues. Choosing the threshold value is the key step of the segmentation method. The first valley in the grey-level histogram of the reconstructed image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
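
    A small numpy sketch of the threshold-selection rule quoted above (first valley of the grey-level histogram); the histogram smoothing window is an assumption, and the 3D seeded region growing itself is not reproduced.

```python
import numpy as np

def first_valley_threshold(values, bins=256, smooth=5):
    """Return the grey level at the first local minimum (valley) of the
    smoothed histogram, used as the lower threshold for segmentation."""
    hist, edges = np.histogram(values, bins=bins)
    kernel = np.ones(smooth) / smooth
    h = np.convolve(hist, kernel, mode="same")          # smooth the histogram
    for i in range(1, len(h) - 1):
        if h[i] < h[i - 1] and h[i] <= h[i + 1]:        # first valley
            return 0.5 * (edges[i] + edges[i + 1])
    return edges[len(edges) // 2]                       # fallback: mid grey level

# Bimodal toy data: dark background greys plus a brighter breast-like region.
greys = np.concatenate([np.random.normal(20, 5, 50000),
                        np.random.normal(120, 20, 30000)])
print(first_valley_threshold(greys))
```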

  9. SU-E-T-458: Determining Threshold-Of-Failure for Dead Pixel Rows in EPID-Based Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gersh, J; Wiant, D

    Purpose: A pixel correction map is applied to all EPID-based applications on the TrueBeam (Varian Medical Systems, Palo Alto, CA). When dead pixels are detected, an interpolative smoothing algorithm is applied using neighboring-pixel information to supplement missing-pixel information. The vendor suggests that when the number of dead pixels exceeds 70,000, the panel should be replaced. It is common for entire detector rows to be dead, as well as their neighboring rows. Approximately 70 rows can be dead before the panel reaches this threshold. This study determines the number of neighboring dead-pixel rows that would create a large enough deviation in measured fluence to cause failures in portal dosimetry (PD). Methods: Four clinical two-arc VMAT plans were generated using Eclipse's AXB algorithm and PD plans were created using the PDIP algorithm. These plans were chosen to represent those commonly encountered in the clinic: prostate, lung, abdomen, and neck treatments. During each iteration of this study, an increasing number of dead-pixel rows are artificially applied to the correction map and a fluence QA is performed using the EPID (corrected with this map). To provide a worst-case scenario, the dead-pixel rows are chosen so that they present artifacts in the high-fluence region of the field. Results: For all eight arc-fields deemed acceptable via a 3%/3mm gamma analysis (pass rate greater than 99%), VMAT QA yielded identical results with a 5-pixel-width dead zone. When 10 dead rows were present, half of the fields had pass rates below 99%. With increasing dead rows, the pass rates were reduced substantially. Conclusion: While the vendor recommends requesting service when 70,000 dead pixels are measured, the authors suggest that service should be requested when there are more than 5 consecutive dead rows.
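
    A simplified numpy illustration of the kind of neighboring-row interpolation a dead-pixel correction map performs; this is not the vendor's algorithm, just a linear interpolation across a block of dead rows.

```python
import numpy as np

def interpolate_dead_rows(panel, dead_rows):
    """Replace each dead row by linear interpolation between the nearest
    live rows above and below (edge rows fall back to the nearest live row)."""
    dead = sorted(set(dead_rows))
    live = np.array([r for r in range(panel.shape[0]) if r not in dead])
    out = panel.astype(float).copy()
    for col in range(panel.shape[1]):
        out[dead, col] = np.interp(dead, live, panel[live, col])
    return out

# A 20x10 'EPID' image with a 5-row dead zone filled back in by interpolation:
img = np.tile(np.linspace(0, 1, 20)[:, None], (1, 10))
dead_rows = list(range(8, 13))
img[dead_rows, :] = 0.0
print(np.abs(interpolate_dead_rows(img, dead_rows)[10] - 10 / 19).max())
```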

  10. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686
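
    A hedged sketch of a Wald-type interval that treats the AUC as a single proportion, with an optional continuity correction. The variance term and the use of the total sample size n are assumptions; the paper's modified interval may differ in detail.

```python
from math import sqrt
from scipy.stats import norm

def wald_auc_ci(auc, n, alpha=0.05, continuity=False):
    """Wald-type confidence interval for the AUC treated as a proportion,
    computed from the point estimate and the total sample size n."""
    z = norm.ppf(1 - alpha / 2)
    half = z * sqrt(auc * (1 - auc) / n)
    if continuity:
        half += 1.0 / (2 * n)            # continuity correction
    return max(0.0, auc - half), min(1.0, auc + half)

# Example: estimated AUC of 0.85 from 40 cases + 40 controls.
print(wald_auc_ci(0.85, n=80))
print(wald_auc_ci(0.85, n=80, continuity=True))
```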

  11. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.

  12. Orientational preferences of neighboring helices can drive ER insertion of a marginally hydrophobic transmembrane helix

    PubMed Central

    Öjemalm, Karin; Halling, Katrin K.; Nilsson, IngMarie; von Heijne, Gunnar

    2013-01-01

    α-helical integral membrane proteins critically depend on the correct insertion of their transmembrane α-helices into the lipid bilayer for proper folding, yet a surprisingly large fraction of the transmembrane α-helices in multispanning integral membrane proteins are not sufficiently hydrophobic to insert into the target membrane by themselves. How can such marginally hydrophobic segments nevertheless form transmembrane helices in the folded structure? Here, we show that a transmembrane helix with a strong orientational preference (Ncyt-Clum or Nlum-Ccyt) can both increase and decrease the hydrophobicity threshold for membrane insertion of a neighboring, marginally hydrophobic helix. This effect helps explain the ‘missing hydrophobicity’ in polytopic membrane proteins. PMID:22281052

  13. A simplified focusing and astigmatism correction method for a scanning electron microscope

    NASA Astrophysics Data System (ADS)

    Lu, Yihua; Zhang, Xianmin; Li, Hai

    2018-01-01

    Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is computed and subsequently thresholded to retain only its dominant components. In the second step, the thresholded FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism from a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
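
    A compact sketch of the two steps described above: threshold the log-magnitude FFT of the image, then characterize the retained region by an ellipse obtained from the second moments of its pixel coordinates. The elongation and orientation of that ellipse are what indicate astigmatism; the keep fraction is an assumption.

```python
import numpy as np

def fft_ellipse(image, keep_fraction=0.02):
    """Threshold the centered log-magnitude FFT and fit an ellipse via the
    covariance of the retained pixel coordinates. Returns (major, minor,
    angle_deg); a large major/minor ratio suggests astigmatism."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    thr = np.quantile(spec, 1.0 - keep_fraction)
    ys, xs = np.nonzero(spec >= thr)
    coords = np.stack([xs - xs.mean(), ys - ys.mean()])
    evals, evecs = np.linalg.eigh(np.cov(coords))
    major, minor = np.sqrt(evals[1]), np.sqrt(evals[0])
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return major, minor, angle

# Toy test: an image blurred along one axis has an elongated FFT footprint.
img = np.random.rand(256, 256)
img = np.apply_along_axis(lambda r: np.convolve(r, np.ones(9) / 9, "same"), 1, img)
print(fft_ellipse(img))
```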

  14. Rod-cone interaction in light adaptation

    PubMed Central

    Latch, M.; Lennie, P.

    1977-01-01

    1. The increment-threshold for a small test spot in the peripheral visual field was measured against backgrounds that were red or blue. 2. When the background was a large uniform field, threshold over most of the scotopic range depended exactly upon the background's effect upon rods. This confirms Flamant & Stiles (1948). But when the background was small, threshold was elevated more by a long wave-length than a short wave-length background equated for its effect on rods. 3. The influence of cones was explored in a further experiment. The scotopic increment-threshold was established for a short wave-length test spot on a large, short wave-length background. Then a steady red circular patch, conspicuous to cones, but below the increment-threshold for rod vision, was added to the background. When it was small, but not when it was large, this patch substantially raised the threshold for the test. 4. When a similar experiment was made using, instead of a red patch, a short wave-length one that was conspicuous in rod vision, threshold varied similarly with patch size. These results support the notion that the influence of small backgrounds arises in some size-selective mechanism that is indifferent to the receptor system in which visual signals originate. Two corollaries of this hypothesis were tested in further experiments. 5. A small patch was chosen so as to lift scotopic threshold substantially above its level on a uniform field. This threshold elevation persisted for minutes after extinction of the patch, but only when the patch was small. A large patch made bright enough to elevate threshold by as much as the small one gave rise to no corresponding after-effect. 6. Increment-thresholds for a small red test spot, detected through cones, followed the same course whether a large uniform background was long- or short wave-length. When the background was small, threshold upon the short wave-length one began to rise for much lower levels of background illumination, suggesting the influence of rods. This was confirmed by repeating the experiment after a strong bleach when the cones, but not rods, had fully recovered their sensitivity. Increment-thresholds upon small backgrounds of long or short wave-lengths then followed the same course. PMID:894602

  15. Criterion for correct recalls in associative-memory neural networks

    NASA Astrophysics Data System (ADS)

    Ji, Han-Bing

    1992-12-01

    A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple training models are instances of the WOPL model with certain sets of learning weights. A necessary condition on the learning weights for the convergence of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recall of the fundamental memories is proposed. In this paper, an important parameter called the signal-to-noise ratio gain (SNRG) is devised, and it is found empirically that SNRGs have their own threshold values, meaning that any fundamental memory can be correctly recalled when its corresponding SNRG is greater than or equal to its threshold value. Furthermore, a theorem is given and some theoretical results on the conditions on SNRGs and learning weights for good associative recall performance of the WOPL model are obtained accordingly. In principle, when all the chosen SNRGs or learning weights satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate, in a certain known stochastic sense for AMNNs, and thus the WOPL model can achieve correct recall of all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
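
    A small numpy sketch of weighted outer-product learning for a binary (+/-1) associative memory: the connection matrix is a weighted sum of outer products of the fundamental memories with zero diagonal, and recall iterates a sign rule. The weights are illustrative, and the SNRG criterion itself is not implemented.

```python
import numpy as np

def wopl_weights(patterns, learning_weights):
    """W = sum_k w_k * xi_k xi_k^T with zero diagonal (Hopfield: all w_k equal)."""
    W = sum(w * np.outer(p, p) for w, p in zip(learning_weights, patterns))
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=20):
    """Synchronous sign-rule recall from a noisy probe."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))             # 3 fundamental memories
W = wopl_weights(patterns, learning_weights=[1.0, 1.5, 2.0])
noisy = patterns[2].copy()
noisy[:10] *= -1                                          # flip 10 bits
print(np.array_equal(recall(W, noisy), patterns[2]))
```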

  16. Validation and evaluation of epistemic uncertainty in rainfall thresholds for regional scale landslide forecasting

    NASA Astrophysics Data System (ADS)

    Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto

    2015-04-01

    Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem, and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency table, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting appropriately the weights, and by searching for the optimal (maximum) value of the index. We discuss weakness in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may have not been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold measured by the skill scores. We show that the variations in the skill scores are different for different uncertainty of events above or below the threshold. This has consequences in the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that - presumably - have not resulted in landslides, in Sicily, in the period 2002-2012. First, we determined regional event duration-cumulated event (ED) rainfall thresholds for shallow landslide occurrence using 200 rainfall conditions that have resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that have triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably have not resulted in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
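
    A schematic sketch of the validation bookkeeping described above: build the 2x2 contingency table for a candidate ED threshold, compute a few standard skill scores, and combine them with user-chosen weights into a single index to maximize. The particular scores, weights, and the power-law threshold form are illustrative, not the ones adopted in the study.

```python
def contingency(rain_events, threshold, landslide_occurred):
    """rain_events: list of (duration, cumulated_rainfall); threshold: callable
    E(D); landslide_occurred: list of bools. Returns (TP, FN, FP, TN)."""
    tp = fn = fp = tn = 0
    for (d, e), hit in zip(rain_events, landslide_occurred):
        above = e >= threshold(d)
        tp += above and hit
        fn += (not above) and hit
        fp += above and (not hit)
        tn += (not above) and (not hit)
    return tp, fn, fp, tn

def weighted_index(tp, fn, fp, tn, weights=(0.5, 0.3, 0.2)):
    """Linear combination of probability of detection, 1 - probability of
    false detection, and 1 - false alarm ratio (weights are illustrative)."""
    pod = tp / (tp + fn) if tp + fn else 0.0
    pofd = fp / (fp + tn) if fp + tn else 0.0
    far = fp / (tp + fp) if tp + fp else 0.0
    w1, w2, w3 = weights
    return w1 * pod + w2 * (1 - pofd) + w3 * (1 - far)

# Example: power-law ED threshold E = a * D**b, scanning over the intercept a.
events = [(10, 30), (24, 80), (48, 60), (12, 15), (36, 120), (6, 5)]
landslides = [True, True, False, False, True, False]
for a in (2.0, 4.0, 8.0):
    table = contingency(events, lambda d, a=a: a * d ** 0.4, landslides)
    print(a, table, round(weighted_index(*table), 3))
```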

  17. Noise Levels and Data Correction Analysis for Seven General Aviation Propeller Aircraft.

    DTIC Science & Technology

    1980-09-01

    [OCR fragments of tabulated summary noise-level data as measured on June 19, 1978 at sites 31-1, 31-2, and 31-3, located 60 m, 2000 m, and 3485 m north of the runway 13 threshold; the recoverable column headings are EPNL, dBA, dBD, OASPL, PNL, PNLT, duration, and AEPNL. The underlying tables cannot be reconstructed from this extract.]

  18. Identifying a key physical factor sensitive to the performance of Madden-Julian oscillation simulation in climate models

    NASA Astrophysics Data System (ADS)

    Kim, Go-Un; Seo, Kyong-Hwan

    2018-01-01

    A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used as distinguishing between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. For an improving simulation of the MJO in climate models, we propose that this delay of circulation in response to convection needs to be corrected in the cumulus parameterization scheme.

  19. Echocardiography underestimates stroke volume and aortic valve area: implications for patients with small-area low-gradient aortic stenosis.

    PubMed

    Chin, Calvin W L; Khaw, Hwan J; Luo, Elton; Tan, Shuwei; White, Audrey C; Newby, David E; Dweck, Marc R

    2014-09-01

    Discordance between small aortic valve area (AVA; < 1.0 cm²) and low mean pressure gradient (MPG; < 40 mm Hg) affects a third of patients with moderate or severe aortic stenosis (AS). We hypothesized that this is largely due to inaccurate echocardiographic measurements of the left ventricular outflow tract area (LVOTarea) and stroke volume alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm²; 95% CI, -2.6 to 1.3), stroke volumes (-6.5 mL/m²; 95% CI, -28.9 to 16.0) and consequently, AVA (-0.23 cm²; 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm² corresponded to MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain > 40% of patients with discordant small-area low-gradient AS. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  20. Laser-induced retinal injury studies with wavefront correction

    NASA Astrophysics Data System (ADS)

    Lund, Brian J.; Lund, David J.; Edsall, Peter R.

    2007-02-01

    The ability of a laser beam to damage the retina of the eye depends on the accuracy to which the optics of the eye focuses the beam onto the retina. Data acquired through retinal injury threshold studies indicate that the focus achieved by the eye of an anesthetized non-human primate (NHP) is worse than theoretical predictions, and therefore the measured injury threshold will decrease with decreasing retinal irradiance area until the beam diameter at the retina is less than 10 μm. However, a number of investigations over a range of wavelengths and exposure durations show that the incident energy required to produce a retinal injury in a NHP eye does not decrease for retinal irradiance diameters smaller than ~100 μm, but reaches a minimum at that diameter and remains nearly constant for smaller diameters. A possible explanation is that uncompensated aberrations of the eye of the anesthetized NHP are larger than predicted. Focus is a dynamic process which is purposely defeated while performing measurements of retinal injury thresholds. Optical wavefront correction systems have become available which have the capability to compensate for ocular aberrations. This paper will report on an injury threshold experiment which incorporates an adaptive optics system to compensate for the aberrations of a NHP eye during exposure to a collimated laser beam, therefore producing a near diffraction limited beam spot on the retina.

  1. A complementary graphical method for reducing and analyzing large data sets. Case studies demonstrating thresholds setting and selection.

    PubMed

    Jing, X; Cimino, J J

    2014-01-01

    Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information by applying it to large single and paired data sets, taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The thresholds setting and selection, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated with details in case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for thresholds selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication of rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication of pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.

  2. Blueprint for a microwave trapped ion quantum computer.

    PubMed

    Lekitsch, Bjoern; Weidt, Sebastian; Fowler, Austin G; Mølmer, Klaus; Devitt, Simon J; Wunderlich, Christof; Hensinger, Winfried K

    2017-02-01

    The availability of a universal quantum computer may have a fundamental impact on a vast number of research fields and on society as a whole. An increasingly large scientific and industrial community is working toward the realization of such a device. An arbitrarily large quantum computer may best be constructed using a modular approach. We present a blueprint for a trapped ion-based scalable quantum computer module, making it possible to create a scalable quantum computer architecture based on long-wavelength radiation quantum gates. The modules control all operations as stand-alone units, are constructed using silicon microfabrication techniques, and are within reach of current technology. To perform the required quantum computations, the modules make use of long-wavelength radiation-based quantum gate technology. To scale this microwave quantum computer architecture to a large size, we present a fully scalable design that makes use of ion transport between different modules, thereby allowing arbitrarily many modules to be connected to construct a large-scale device. A high error-threshold surface error correction code can be implemented in the proposed architecture to execute fault-tolerant operations. With appropriate adjustments, the proposed modules are also suitable for alternative trapped ion quantum computer architectures, such as schemes using photonic interconnects.

  3. Evidence supporting radiation hormesis in atomic bomb survivor cancer mortality data.

    PubMed

    Doss, Mohan

    2012-12-01

    A recent update on the atomic bomb survivor cancer mortality data has concluded that excess relative risk (ERR) for solid cancers increases linearly with dose and that zero dose is the best estimate for the threshold, apparently validating the present use of the linear no threshold (LNT) model for estimating the cancer risk from low dose radiation. A major flaw in the standard ERR formalism for estimating cancer risk from radiation (and other carcinogens) is that it ignores the potential for a large systematic bias in the measured baseline cancer mortality rate, which can have a major effect on the ERR values. Cancer rates are highly variable from year to year and between adjacent regions and so the likelihood of such a bias is high. Calculations show that a correction for such a bias can lower the ERRs in the atomic bomb survivor data to negative values for intermediate doses. This is consistent with the phenomenon of radiation hormesis, providing a rational explanation for the decreased risk of cancer observed at intermediate doses for which there is no explanation based on the LNT model. The recent atomic bomb survivor data provides additional evidence for radiation hormesis in humans.

  4. Role of extrinsic noise in the sensitivity of the rod pathway: rapid dark adaptation of nocturnal vision in humans.

    PubMed

    Reeves, Adam; Grayhem, Rebecca

    2016-03-01

    Rod-mediated 500 nm test spots were flashed in Maxwellian view at 5 deg eccentricity, both on steady 10.4 deg fields of intensities (I) from 0.00001 to 1.0 scotopic troland (sc td) and from 0.2 s to 1 s after extinguishing the field. On dim fields, thresholds of tiny (5') tests were proportional to √I (Rose-DeVries law), while thresholds after extinction fell within 0.6 s to the fully dark-adapted absolute threshold. Thresholds of large (1.3 deg) tests were proportional to I (Weber law) and extinction thresholds, to √I. Rod thresholds are elevated by photon-driven noise from dim fields that disappears at field extinction; large spot thresholds are additionally elevated by neural light adaptation proportional to √I. At night, recovery from dimly lit fields is fast, not slow.

  5. Perceptibility curve test for digital radiographs before and after correction for attenuation and correction for attenuation and visual response.

    PubMed

    Li, G; Welander, U; Yoshiura, K; Shi, X-Q; McDavid, W D

    2003-11-01

    Two digital image processing methods, correction for X-ray attenuation and correction for attenuation and visual response, have been developed. The aim of the present study was to compare digital radiographs before and after correction for attenuation and correction for attenuation and visual response by means of a perceptibility curve test. Radiographs were exposed of an aluminium test object containing holes ranging from 0.03 mm to 0.30 mm with increments of 0.03 mm. Fourteen radiographs were exposed with the Dixi system (Planmeca Oy, Helsinki, Finland) and twelve radiographs were exposed with the F1 iOX system (Fimet Oy, Monninkylä, Finland) from low to high exposures covering the full exposure ranges of the systems. Radiographs obtained from the Dixi and F1 iOX systems were 12 bit and 8 bit images, respectively. Original radiographs were then processed for correction for attenuation and correction for attenuation and visual response. Thus, two series of radiographs were created. Ten viewers evaluated all the radiographs in the same random order under the same viewing conditions. The object detail having the lowest perceptible contrast was recorded for each observer. Perceptibility curves were plotted according to the mean of observer data. The perceptibility curves for processed radiographs obtained with the F1 iOX system are higher than those for originals in the exposure range up to the peak, where the curves are basically the same. For radiographs exposed with the Dixi system, perceptibility curves for processed radiographs are higher than those for originals for all exposures. Perceptibility curves show that for 8 bit radiographs obtained from the F1 iOX system, the contrast threshold was increased in processed radiographs up to the peak, while for 12 bit radiographs obtained with the Dixi system, the contrast threshold was increased in processed radiographs for all exposures. When comparisons were made between radiographs corrected for attenuation and corrected for attenuation and visual response, basically no differences were found. Radiographs processed for correction for attenuation and correction for attenuation and visual response may improve perception, especially for 12 bit originals.

  6. Production of heavy Higgs bosons and decay into top quarks at the LHC

    NASA Astrophysics Data System (ADS)

    Bernreuther, W.; Galler, P.; Mellein, C.; Si, Z.-G.; Uwer, P.

    2016-02-01

    We investigate the production of heavy, neutral Higgs boson resonances and their decays to top-quark top-antiquark (tt̄) pairs at the Large Hadron Collider (LHC) at next-to-leading order (NLO) in the strong coupling of quantum chromodynamics (QCD). The NLO corrections to heavy Higgs boson production and the Higgs-QCD interference are calculated in the large m_t limit with an effective K-factor rescaling. The nonresonant tt̄ background is taken into account at NLO QCD including weak-interaction corrections. In order to consistently determine the total decay widths of the heavy Higgs bosons, we consider for definiteness the type-II two-Higgs-doublet extension of the standard model and choose three parameter scenarios that entail two heavy neutral Higgs bosons with masses above the tt̄ threshold and unsuppressed Yukawa couplings to top quarks. For these three scenarios we compute, for the LHC operating at 13 TeV, the tt̄ cross section and the distributions of the tt̄ invariant mass, of the transverse top-quark momentum and rapidity, and of the cosine of the Collins-Soper angle with and without the two heavy Higgs resonances. For selected M(tt̄) bins we estimate the significances for detecting a heavy Higgs signal in the tt̄ dileptonic and lepton plus jets decay channels.

  7. Matching health information seekers' queries to medical terms

    PubMed Central

    2012-01-01

    Background The Internet is a major source of health information but most seekers are not familiar with medical vocabularies. Hence, their searches fail due to bad query formulation. Several methods have been proposed to improve information retrieval: query expansion, syntactic and semantic techniques or knowledge-based methods. However, it would be useful to clean those queries which are misspelled. In this paper, we propose a simple yet efficient method in order to correct misspellings of queries submitted by health information seekers to a medical online search tool. Methods In addition to query normalizations and exact phonetic term matching, we tested two approximate string comparators: the similarity score function of Stoilos and the normalized Levenshtein edit distance. We propose here to combine them to increase the number of matched medical terms in French. We first took a sample of query logs to determine the thresholds and processing times. In the second run, at a greater scale, we tested different combinations of query normalizations before or after misspelling correction with the thresholds retained in the first run. Results Over the total number of suggestions (around 163, the size of the first sample of queries), the normalized Levenshtein edit distance gave the highest F-Measure (88.15%) at a comparator score threshold of 0.3, and the Stoilos function gave the highest F-Measure (84.31%) at a comparator score threshold of 0.7. By combining Levenshtein and Stoilos, the highest F-Measure (80.28%) is obtained with thresholds of 0.2 and 0.7, respectively. However, queries are composed of several terms that may be combinations of medical terms, so a process of query normalization and segmentation is required. The highest F-Measure (64.18%) is obtained when this process is performed before spelling correction. Conclusions Despite the widely known high performance of the normalized Levenshtein edit distance, we show in this paper that its combination with the Stoilos algorithm improved the results for misspelling correction of user queries. Accuracy is improved by combining spelling, phoneme-based information and string normalizations and segmentations into medical terms. These encouraging results have enabled the integration of this method into two projects funded by the French National Research Agency-Technologies for Health Care. The first aims to facilitate the coding process of clinical free texts contained in Electronic Health Records and discharge summaries, whereas the second aims at improving information retrieval through Electronic Health Records. PMID:23095521
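
    A sketch of the normalized Levenshtein comparator and of how two comparators can be combined with separate thresholds, as in the evaluation above. The stoilos_similarity function is a hypothetical placeholder (the real Stoilos metric combines commonality, difference, and a Winkler-style term), and the 0.2/0.7 thresholds follow the abstract.

```python
def normalized_levenshtein_distance(a, b):
    """Edit distance divided by the length of the longer string (0 = identical)."""
    if not a and not b:
        return 0.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1] / max(len(a), len(b))

def stoilos_similarity(a, b):
    """Hypothetical stand-in for the Stoilos string metric; replace with a
    real implementation (commonality - difference + Winkler improvement)."""
    return 1.0 - normalized_levenshtein_distance(a, b)   # placeholder only

def suggest(query_term, dictionary, lev_thr=0.2, stoilos_thr=0.7):
    """Accept a candidate correction when both comparators agree, using the
    thresholds reported above (0.2 distance, 0.7 similarity)."""
    return [t for t in dictionary
            if normalized_levenshtein_distance(query_term, t) <= lev_thr
            and stoilos_similarity(query_term, t) >= stoilos_thr]

print(suggest("diabette", ["diabete", "diabetes", "dialyse"]))
```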

  8. Do You See What I See? Exploring the Consequences of Luminosity Limits in Black Hole-Galaxy Evolution Studies

    NASA Astrophysics Data System (ADS)

    Jones, Mackenzie L.; Hickox, Ryan C.; Mutch, Simon J.; Croton, Darren J.; Ptak, Andrew F.; DiPompeo, Michael A.

    2017-07-01

    In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yield samples with different AGN host galaxy properties.

  9. Maui-VIA: A User-Friendly Software for Visual Identification, Alignment, Correction, and Quantification of Gas Chromatography–Mass Spectrometry Data

    PubMed Central

    Kuich, P. Henning J. L.; Hoffmann, Nils; Kempa, Stefan

    2015-01-01

    A current bottleneck in GC–MS metabolomics is the processing of raw machine data into a final datamatrix that contains the quantities of identified metabolites in each sample. While there are many bioinformatics tools available to aid the initial steps of the process, their use requires both significant technical expertise and a subsequent manual validation of identifications and alignments if high data quality is desired. The manual validation is tedious and time consuming, becoming prohibitively so as sample numbers increase. We have, therefore, developed Maui-VIA, a solution based on a visual interface that allows experts and non-experts to simultaneously and quickly process, inspect, and correct large numbers of GC–MS samples. It allows for the visual inspection of identifications and alignments, facilitating a unique and, due to its visualization and keyboard shortcuts, very fast interaction with the data. Therefore, Maui-VIA fills an important niche by (1) providing functionality that optimizes the component of data processing that is currently most labor intensive to save time and (2) lowering the threshold of expertise required to process GC–MS data. Maui-VIA projects are initiated with baseline-corrected raw data, peaklists, and a database of metabolite spectra and retention indices used for identification. It provides functionality for retention index calculation, a targeted library search, a visual annotation, alignment, and correction interface, and metabolite quantification, as well as the export of the final datamatrix. The high quality of data produced by Maui-VIA is illustrated by its comparison to data attained manually by an expert using vendor software on a previously published dataset concerning the response of Chlamydomonas reinhardtii to salt stress. In conclusion, Maui-VIA provides the opportunity for fast, confident, and high-quality data processing and validation of large numbers of GC–MS samples by non-experts. PMID:25654076

  10. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]

  11. SNW 2000 Proceedings. Oxide Thickness Variation Induced Threshold Voltage Fluctuations in Decanano MOSFETs: a 3D Density Gradient Simulation Study

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Kaya, S.; Davies, J. H.; Saini, S.

    2000-01-01

    We use the density gradient (DG) simulation approach to study, in 3D, the effect of local oxide thickness fluctuations on the threshold voltage of decanano MOSFETs in a statistical manner. A description of the reconstruction procedure for the random 2D surfaces representing the 'atomistic' Si-SiO2 interface variations is presented. The procedure is based on power spectrum synthesis in the Fourier domain and can include either Gaussian or exponential spectra. The simulations show that threshold voltage variations induced by oxide thickness fluctuations become significant when the gate length of the devices becomes comparable to the correlation length of the fluctuations. The extent of quantum corrections in the simulations with respect to the classical case and the dependence of threshold variations on the oxide thickness are examined.
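
    The reconstruction procedure itself is not given in the record; the following is a hedged sketch of power-spectrum synthesis in the Fourier domain for a Gaussian autocorrelation, with arbitrary RMS amplitude and correlation length, assuming numpy is available.

      # Illustrative sketch (assumptions: Gaussian autocorrelation, arbitrary RMS
      # amplitude and correlation length) of power-spectrum synthesis of a random
      # 2D interface of the kind the reconstruction procedure refers to.
      import numpy as np

      def gaussian_rough_surface(n=256, dx=0.2, rms=0.3, corr_len=1.5, seed=0):
          """Return an n x n height map (nm) with Gaussian autocorrelation.

          dx       -- grid spacing (nm)
          rms      -- target RMS roughness (nm)
          corr_len -- correlation length (nm)
          """
          rng = np.random.default_rng(seed)
          kx = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
          ky = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
          k2 = kx[:, None] ** 2 + ky[None, :] ** 2
          # Power spectral density corresponding to a Gaussian autocorrelation function
          psd = np.exp(-k2 * corr_len ** 2 / 4.0)
          # Filter complex white noise by sqrt(PSD); the real part is the surface
          noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          surface = np.fft.ifft2(np.sqrt(psd) * noise).real
          surface *= rms / surface.std()        # rescale to the requested RMS amplitude
          return surface

      h = gaussian_rough_surface()
      print(h.shape, round(h.std(), 3))          # (256, 256) 0.3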

  12. Experimental determination of Ramsey numbers.

    PubMed

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.
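
    For context on what the computed Ramsey number asserts, here is a purely classical brute-force check that R(3,3) = 6; it is unrelated to the adiabatic quantum algorithm used in the experiment.

      # Classical brute-force check that R(3,3) = 6: every 2-coloring of K6 contains
      # a monochromatic triangle, while K5 admits a coloring that avoids one.
      from itertools import combinations, product

      def has_mono_triangle(n, coloring):
          """coloring maps each edge (i, j), i < j, to color 0 or 1."""
          for a, b, c in combinations(range(n), 3):
              if coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]:
                  return True
          return False

      def ramsey_33_holds(n):
          """True if every 2-coloring of K_n has a monochromatic triangle."""
          edges = list(combinations(range(n), 2))
          for bits in product((0, 1), repeat=len(edges)):
              if not has_mono_triangle(n, dict(zip(edges, bits))):
                  return False
          return True

      print(ramsey_33_holds(5))   # False: K5 can avoid monochromatic triangles
      print(ramsey_33_holds(6))   # True:  K6 cannot, hence R(3,3) = 6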

  13. Experimental Determination of Ramsey Numbers

    NASA Astrophysics Data System (ADS)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G.; Clark, Lane; Gaitan, Frank

    2013-09-01

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.

  14. Dual-balanced detection scheme with optical hard-limiters in an optical code division multiple access system

    NASA Astrophysics Data System (ADS)

    Liu, Maw-Yang; Hsu, Yi-Kai

    2017-03-01

    A three-arm dual-balanced detection scheme is studied in an optical code division multiple access system. As MAI and beat noise are the main sources of system performance degradation, we utilize optical hard-limiters to alleviate such channel impairments. In addition, once the channel condition is improved effectively, the proposed two-dimensional error correction code can remarkably enhance the system performance. In our proposed scheme, the optimal thresholds of the optical hard-limiters and decision circuitry are fixed, and they will not change with other system parameters. Our proposed scheme can accommodate a large number of users simultaneously and is suitable for burst traffic with asynchronous transmission. Therefore, it is highly recommended as a platform for broadband optical access networks.

  15. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
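
    A minimal sketch of the subgrid idea referenced above (not the cited model's code): the wet volume of a coarse cell is evaluated from high-resolution bed elevations inside it, so a cell can be wet, partially wet, or dry without any drying threshold; the bathymetry values are invented.

      # Minimal sketch of the subgrid idea: wet volume and wet area of a coarse cell
      # are evaluated from high-resolution bathymetry samples inside the cell.
      import numpy as np

      def cell_wet_volume(eta, subgrid_bed, cell_area):
          """Wet volume of one coarse cell for free-surface elevation eta.

          subgrid_bed -- bed elevations sampled inside the cell (m)
          cell_area   -- horizontal area of the coarse cell (m^2)
          """
          depth = np.maximum(eta - subgrid_bed, 0.0)        # depth at each subgrid point
          wet_fraction = np.count_nonzero(depth) / depth.size
          volume = depth.mean() * cell_area                  # mass-conserving average
          return volume, wet_fraction

      bed = np.array([-2.0, -1.5, -0.4, 0.3, 0.8, 1.2])      # assumed subgrid bathymetry (m)
      for eta in (-1.0, 0.0, 1.5):
          vol, frac = cell_wet_volume(eta, bed, cell_area=100.0)
          print(f"eta={eta:+.1f} m: volume={vol:7.1f} m^3, wet fraction={frac:.2f}")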

  16. Particle identification using the time-over-threshold measurements in straw tube detectors

    NASA Astrophysics Data System (ADS)

    Jowzaee, S.; Fioravanti, E.; Gianotti, P.; Idzik, M.; Korcyl, G.; Palka, M.; Przyborowski, D.; Pysz, K.; Ritman, J.; Salabura, P.; Savrie, M.; Smyrski, J.; Strzempek, P.; Wintz, P.

    2013-08-01

    The identification of charged particles based on energy losses in straw tube detectors has been simulated. The response of a new front-end chip developed for the PANDA straw tube tracker was implemented in the simulations and corrections for track distance to sense wire were included. Separation power for p - K, p - π and K - π pairs obtained using the time-over-threshold technique was compared with the one based on the measurement of collected charge.

  17. Photodetachment cross sections of negative ions - The range of validity of the Wigner threshold law

    NASA Technical Reports Server (NTRS)

    Farley, John W.

    1989-01-01

    The threshold behavior of the photodetachment cross section of negative ions as a function of photon frequency is usually described by the Wigner law. This paper reports the results of a model calculation using the zero-core-contribution (ZCC) approximation. Theoretical expressions for the leading correction to the Wigner law are developed, giving the range of validity of the Wigner law and the expected accuracy. The results are relevant to extraction of electron affinities from experimental photodetachment data.

  18. Improving ontology matching with propagation strategy and user feedback

    NASA Astrophysics Data System (ADS)

    Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu

    2015-07-01

    Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints acting as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influences between potential entity mappings across ontologies, which can help to identify the correct correspondences and produce missed correspondences. The estimation of an appropriate threshold is a difficult task. We propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Running experiments on a public dataset has demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment.

  19. Dimits shift in realistic gyrokinetic plasma-turbulence simulations.

    PubMed

    Mikkelsen, D R; Dorland, W

    2008-09-26

    In simulations of turbulent plasma transport due to long wavelength (k⊥ρi ≤ 1) electrostatic drift-type instabilities, we find a persistent nonlinear up-shift of the effective threshold. Next-generation tokamaks will likely benefit from the higher effective threshold for turbulent transport, and transport models should incorporate suitable corrections to linear thresholds. The gyrokinetic simulations reported here are more realistic than previous reports of a Dimits shift because they include nonadiabatic electron dynamics, strong collisional damping of zonal flows, and finite electron and ion collisionality together with realistic shaped magnetic geometry. Reversing previously reported results based on idealized adiabatic electrons, we find that increasing collisionality reduces the heat flux because collisionality reduces the nonadiabatic electron microinstability drive.

  20. XYZ-SU3 breakings from Laplace sum rules at higher orders

    NASA Astrophysics Data System (ADS)

    Albuquerque, R.; Narison, S.; Rabetiarivony, D.; Randriamanatrika, G.

    2018-06-01

    We present new compact integrated expressions of SU3 breaking corrections to QCD spectral functions of heavy-light molecules and four-quark XYZ-like states at lowest order (LO) of perturbative (PT) QCD and up to d = 8 condensates of the Operator Product Expansion (OPE). Including next-to-next-to-leading order (N2LO) PT corrections in the chiral limit and next-to-leading order (NLO) SU3 PT corrections, which we have estimated by assuming the factorization of the four-quark spectral functions, we improve previous LO results for the XYZ-like masses and decay constants from QCD spectral sum rules (QSSR). Systematic errors are estimated from a geometric growth of the higher order PT corrections and from some partially known d = 8 nonperturbative contributions. Our optimal results, based on stability criteria, are summarized in Tables 18-21, while the 0++ and 1++ channels are compared with some existing LO results in Table 22. One can note that, in most channels, the SU3 corrections on the meson masses are tiny: ≤ 10% (respectively ≤ 3%) for the c (respectively b)-quark channel, but they can be large for the couplings (≤ 20%). Within the lowest dimension currents, most of the 0++ and 1++ states are below the physical thresholds, while our predictions cannot discriminate a molecule from a four-quark state. A comparison with the masses of some experimental candidates indicates that the 0++ X(4500) might have a large D̄*_s0 D*_s0 molecule component, while an interpretation of the 0++ candidates as four-quark ground states is not supported by our findings. The 1++ X(4147) and X(4273) are compatible with the D̄*_s D_s and D̄*_s0 D_s1 molecules and/or with the axial-vector A_c four-quark ground state. Our results for the 0^-±, 1^-± and for different beauty states can be tested in future data. Finally, we revisit our previous estimates for the D̄*_0 D*_0 and D̄*_0 D_1 and present new results for the D̄_1 D_1.

  1. Smartphone-Based Hearing Screening in Noisy Environments

    PubMed Central

    Na, Youngmin; Joo, Hyo Sung; Yang, Hyejin; Kang, Soojin; Hong, Sung Hwa; Woo, Jihwan

    2014-01-01

    It is important and recommended to detect hearing loss as soon as possible. If it is found early, proper treatment may help improve hearing and reduce the negative consequences of hearing loss. In this study, we developed smartphone-based hearing screening methods that can ubiquitously test hearing. However, environmental noise generally results in the loss of ear sensitivity, which causes a hearing threshold shift (HTS). To overcome this limitation in the hearing screening location, we developed a correction algorithm to reduce the HTS effect. A built-in microphone and headphone were calibrated to provide the standard units of measure. The HTSs in the presence of either white or babble noise were systematically investigated to determine the mean HTS as a function of noise level. When the hearing screening application runs, the smartphone automatically measures the environmental noise and provides the HTS value to correct the hearing threshold. A comparison to pure tone audiometry shows that this hearing screening method in the presence of noise could closely estimate the hearing threshold. We expect that the proposed ubiquitous hearing test method could be used as a simple hearing screening tool and could alert the user if they suffer from hearing loss. PMID:24926692
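
    A hedged sketch of the correction step described above: the measured hearing threshold is adjusted by the mean hearing threshold shift (HTS) associated with the ambient noise level; the lookup values are placeholders, not the calibration reported in the study.

      # Hedged sketch of the noise correction: measure ambient noise, look up the mean
      # HTS for that level, and correct the measured threshold. The table is assumed
      # example data, not the study's calibration.
      import bisect

      # (noise level dB SPL, mean HTS in dB) -- assumed example values
      HTS_TABLE = [(30, 0.0), (40, 2.0), (50, 6.0), (60, 12.0), (70, 20.0)]

      def hts_for_noise(noise_db):
          """Linear interpolation of the HTS table, clamped at the ends."""
          levels = [l for l, _ in HTS_TABLE]
          shifts = [s for _, s in HTS_TABLE]
          if noise_db <= levels[0]:
              return shifts[0]
          if noise_db >= levels[-1]:
              return shifts[-1]
          i = bisect.bisect_left(levels, noise_db)
          x0, x1, y0, y1 = levels[i - 1], levels[i], shifts[i - 1], shifts[i]
          return y0 + (y1 - y0) * (noise_db - x0) / (x1 - x0)

      def corrected_threshold(measured_db_hl, noise_db):
          """Remove the noise-induced shift from the measured hearing threshold."""
          return measured_db_hl - hts_for_noise(noise_db)

      print(corrected_threshold(measured_db_hl=35.0, noise_db=55.0))   # 35 - 9 = 26 dB HL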

  2. Accuracy of Rhenium-188 SPECT/CT activity quantification for applications in radionuclide therapy using clinical reconstruction methods.

    PubMed

    Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna

    2017-07-20

    The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone-metastases. In order to optimize 188Re therapies, the accurate determination of radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in Air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW-scatter correction applied to 188Re, although practical, yields only approximate estimates of the true scatter.
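
    A hedged sketch of the quantification check described in this record: an object is segmented with a 40% fixed threshold and the recovered activity is compared with the known truth; the count-to-activity calibration factor and the phantom are invented.

      # Hedged sketch of activity quantification with a 40% fixed-threshold segmentation.
      # The calibration factor converting counts to MBq is an assumed placeholder.
      import numpy as np

      def recovered_activity(volume, threshold_fraction=0.40, cal_factor_mbq_per_count=1.0e-4):
          """Sum voxel counts above threshold_fraction * max and convert to activity."""
          mask = volume >= threshold_fraction * volume.max()
          return volume[mask].sum() * cal_factor_mbq_per_count

      def quantification_error(volume, true_activity_mbq, **kwargs):
          """Relative error of the recovered activity with respect to the truth."""
          rec = recovered_activity(volume, **kwargs)
          return (rec - true_activity_mbq) / true_activity_mbq

      rng = np.random.default_rng(1)
      phantom = rng.poisson(5.0, size=(64, 64, 64)).astype(float)   # toy 'reconstruction'
      phantom[20:30, 20:30, 20:30] += 200.0                          # hot object
      print(f"relative error: {quantification_error(phantom, true_activity_mbq=20.0):+.1%}")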

  3. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    NASA Astrophysics Data System (ADS)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤd Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  4. Precision control of eluted activity from a Sr/Rb generator for cardiac positron emission tomography.

    PubMed

    Klein, R; Adler, A; Beanlands, R S; deKemp, R A

    2004-01-01

    A rubidium-82 (82Rb) elution system is described for use with clinical positron emission tomography. The system is self-calibrating with 1.4% repeatability, independent of generator activity and elution flow rate. Saline flow is switched between a 82Sr/82Rb generator and a bypass line to achieve a constant activity elution of 82Rb. In the present study, pulse width modulation (PWM) of a solenoid valve is compared to simple threshold control as a means to simulate a proportional valve. A predictive-corrective control algorithm is developed which produces a constant activity elution within the constraints of long feedback delay and short elution time. Accurate constant-activity elutions of 10-70% of the total generator activity were demonstrated using the threshold comparison control. The adaptive-corrective control of the PWM valve provided a substantial improvement in precision of the steady-state output.

  5. [Changes in blood gases with temperature: implications for clinical practice].

    PubMed

    Tremey, B; Vigué, B

    2004-05-01

    To understand changes in blood gas results with core temperature. Analysis of two case reports. Hypothermia induces a decrease in PaCO2 with a related increase in pH, thus a physiologic alkalosis. The decrease in PaCO2 is due to an increase in gas solubility and a decrease in peripheral consumption, which can be estimated from a comparison between temperature-corrected and uncorrected blood gases. For O2, variations in temperature induce variations in solubility but also in haemoglobin affinity for O2. During hyperthermia, haemoglobin affinity for O2 is decreased, with a decreased SvO2 for the same PvO2. SvO2 ischemic or therapeutic thresholds are thus modified with core temperature. Blood gases cannot be understood without the patient's core temperature. Physiologic variations of PaCO2 and pH must probably be tolerated. The ischemic threshold should be estimated from PvO2, not only from SvO2.

  6. Methodological issues when comparing hearing thresholds of a group with population standards: the case of the ferry engineers.

    PubMed

    Dobie, Robert A

    2006-10-01

    To discuss appropriate and inappropriate methods for comparing distributions of hearing thresholds of a study group with distributions in population standards and to determine whether the thresholds of Washington State Ferries engineers are different from those of men in the general population, using both frequency-by-frequency comparisons and analysis of audiometric shape. The most recent hearing conservation program audiograms of 321 noise-exposed engineers, ages 35 to 64, were compared with the predictions of Annexes A, B, and C from ANSI S3.44. There was no screening by history or otoscopy; all audiograms were included. 95% confidence intervals (95% CIs) were calculated for the engineers' median thresholds for each ear, for the better ear (defined two ways), and for the binaural average. For Annex B, where 95% CIs are also available, it was possible to calculate z scores for the differences between Annex B and the engineers' better ears. Bulge depth, an audiometric shape statistic, measured curvature between 1 and 6 kHz. Engineers' better-ear median thresholds were worse than those in Annex A but (except at 1 kHz) were as good as or better than those in Annexes B and C, which are more appropriate for comparison to an unscreened noise-exposed group like the engineers. Average bulge depth for the engineers was similar to that of the Annex B standard (no added occupational noise) and was much less than that of audiograms created by using the standard with added occupational noise between 90 and 100 dBA. Audiograms from groups that have been selected for a particular exposure, but without regard to severity, can appropriately be compared with population standards if certain pitfalls are avoided. For unscreened study groups with large age-sex subgroups, a simple method to assess statistical significance, taking into consideration uncertainties in both the study group and the comparison standard, is the calculation of z scores for the proportion of better-ear thresholds above the Annex B median. A less powerful method combines small age-sex subgroups after age correction. Small threshold differences, even if statistically significant, may not be due to genuine differences in hearing sensitivity between study group and standard. Audiometric shape analysis offers an independent dimension of comparison between the study group and audiograms predicted from the ANSI S3.44 standard, with and without occupational noise exposure. Important pitfalls in comparison to population standards include nonrandom selection of study groups, inappropriate choice of population standard, use of the right and left ear thresholds instead of the better-ear threshold for comparison to Annex B, and comparing means with medians. The thresholds of the engineers in this study were similar to published standards for an unscreened population.
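
    A hedged sketch of the simple significance test recommended above: under the null hypothesis that the study group matches the standard, half of the better-ear thresholds should exceed the Annex B median, which a normal-approximation z score can test; the counts below are illustrative, not the ferry-engineer data.

      # Hedged sketch: z score for the proportion of better-ear thresholds above the
      # comparison-standard median (null proportion p0 = 0.5). Counts are illustrative.
      from math import sqrt

      def z_proportion_above_median(n_above, n_total):
          """Normal-approximation z score for an observed proportion against p0 = 0.5."""
          p_hat = n_above / n_total
          se = sqrt(0.25 / n_total)        # sqrt(p0 * (1 - p0) / n) with p0 = 0.5
          return (p_hat - 0.5) / se

      z = z_proportion_above_median(n_above=180, n_total=321)
      print(f"z = {z:.2f}")                 # |z| > 1.96 would flag a difference at the 5% level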

  7. Improvement in the measurement error of the specific binding ratio in dopamine transporter SPECT imaging due to exclusion of the cerebrospinal fluid fraction using the threshold of voxel RI count.

    PubMed

    Mizumura, Sunao; Nishikawa, Kazuhiro; Murata, Akihiro; Yoshimura, Kosei; Ishii, Nobutomo; Kokubo, Tadashi; Morooka, Miyako; Kajiyama, Akiko; Terahara, Atsuro

    2018-05-01

    In Japan, the Southampton method for dopamine transporter (DAT) SPECT is widely used to quantitatively evaluate striatal radioactivity. The specific binding ratio (SBR) is the ratio of specific to non-specific binding observed after placing pentagonal striatal voxels of interest (VOIs) as references. Although the method can reduce the partial volume effect, the SBR may fluctuate due to the presence of low-count areas of cerebrospinal fluid (CSF), caused by brain atrophy, in the striatal VOIs. We examined the effect of the exclusion of low-count VOIs on SBR measurement. We retrospectively reviewed DAT imaging of 36 patients with parkinsonian syndromes performed after injection of 123I-FP-CIT. SPECT data were reconstructed using three conditions. We defined the CSF area in each SPECT image after segmenting the brain tissues. A merged image of gray and white matter images was constructed from each patient's magnetic resonance imaging (MRI) to create an idealized brain image that excluded the CSF fraction (MRI-mask method). We calculated the SBR and asymmetric index (AI) in the MRI-mask method for each reconstruction condition. We then calculated the mean and standard deviation (SD) of voxel RI counts in the reference VOI without the striatal VOIs in each image, and determined the SBR by excluding the low-count pixels (threshold method) using five thresholds: mean-0.0SD, mean-0.5SD, mean-1.0SD, mean-1.5SD, and mean-2.0SD. We also calculated the AIs from the SBRs measured using the threshold method. We examined the correlation among the SBRs of the threshold method, between the uncorrected SBRs and the SBRs of the MRI-mask method, and between the uncorrected AIs and the AIs of the MRI-mask method. The intraclass correlation coefficient indicated an extremely high correlation among the SBRs and among the AIs of the MRI-mask and threshold methods at thresholds between mean-2.0SD and mean-1.0SD, regardless of the reconstruction correction. The differences among the SBRs and the AIs of the two methods were smallest at thresholds between mean-2.0SD and mean-1.0SD. The SBR calculated using the threshold method was highly correlated with the MRI-SBR. These results suggest that the CSF correction of the threshold method is effective for the calculation of idealized SBR and AI values.
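
    A hedged sketch of the threshold method: reference-VOI voxels below mean − k·SD are treated as CSF and excluded before the ratio is formed. The SBR here is a simplified specific/non-specific ratio rather than the full Southampton large-VOI calculation, and the arrays are toy data.

      # Hedged sketch: exclude low-count (CSF-like) voxels from the reference VOI
      # before computing a simplified specific binding ratio.
      import numpy as np

      def sbr_with_csf_exclusion(striatal_voxels, reference_voxels, k=1.5):
          """Specific binding ratio with voxels below mean - k*SD removed from the reference."""
          ref = np.asarray(reference_voxels, dtype=float)
          cutoff = ref.mean() - k * ref.std()
          ref_clean = ref[ref >= cutoff]                 # exclude CSF-dominated voxels
          c_ref = ref_clean.mean()
          c_str = np.asarray(striatal_voxels, dtype=float).mean()
          return (c_str - c_ref) / c_ref

      rng = np.random.default_rng(0)
      reference = np.concatenate([rng.normal(100, 10, 900),   # brain tissue counts
                                  rng.normal(30, 5, 100)])    # low-count CSF contamination
      striatum = rng.normal(400, 20, 200)
      print(f"SBR without exclusion: {(striatum.mean() - reference.mean()) / reference.mean():.2f}")
      print(f"SBR with exclusion:    {sbr_with_csf_exclusion(striatum, reference):.2f}")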

  8. Louisiana Wetland Monitoring Using TOPEX/POSEIDON Altimetry

    NASA Astrophysics Data System (ADS)

    Yi, Y.; Lee, H.; Ibaraki, M.; Shum, C.

    2006-12-01

    Conventional satellite radar altimetry is designed to observe ocean topography, and significant technological advance has enabled our capability to measure sea level change, ice sheet elevation and sea ice freeboard height changes, hydrologic changes for large inland lakes and rivers, and potentially land deformation. Wide-swath altimetry or interferometric altimetry onboard proposed and planned platforms are anticipated to significantly improve the spatial resolution of observations over ocean, land water, and ice surfaces. Coastal estuaries and wetlands play important roles in ecological environments. They not only provide habitat for thousands of aquatic/terrestrial plant and animal species but also control floods and storm surges by absorbing and reducing the velocity of storm water. Regional measurement of wetland water level changes from space is essential for hydrological studies. To our knowledge, there have been no reported successful attempts to use Ku-band altimetry for this purpose, especially over wetlands with seasonally varying vegetation. Here we demonstrate the use of the pulse-limited radar altimeter (TOPEX) for the potential monitoring of wetland water level changes. The specific study regions are over the vegetated wetland in Louisiana. In addition to retracking the Ku-band radar waveforms and generating a water level change time series over the Louisiana wetland, we study the effect of media corrections, including the ionosphere and wet troposphere delays, which are largely not applied for inland hydrological studies using altimetry. We find that most of the TOPEX waveform responses over the study region are specular or narrow-peaked, and we have tested various retrackers including the conventional OCOG, threshold, and modified threshold algorithms, which result in a decadal (1992-2002) height time series over several specific regions of the Louisiana wetland. It is found that the use of various corrections including wet troposphere delays computed from models (FMO/ECMWF) and DORIS ionosphere delays reduces the variance of the resulting wetland water level measurements. The result of the study is anticipated to have an impact on the use of wide-swath radar altimetry for studies of hydrologic processes in the world's wetlands.
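
    A hedged sketch of a threshold retracker of the kind mentioned above: the OCOG amplitude is estimated from the waveform, a threshold is set between the noise floor and that amplitude, and the retracking gate is found by interpolation at the first crossing; the waveform below is synthetic, not TOPEX data.

      # Hedged sketch of a simple threshold retracker for a radar altimeter waveform.
      import numpy as np

      def threshold_retrack(waveform, level=0.5, noise_gates=5):
          """Return the (fractional) retracking gate of a waveform."""
          p = np.asarray(waveform, dtype=float)
          noise = p[:noise_gates].mean()                    # noise floor from leading gates
          amp = np.sqrt((p ** 4).sum() / (p ** 2).sum())    # OCOG amplitude
          thresh = noise + level * (amp - noise)
          i = np.nonzero(p > thresh)[0][0]                  # first gate above threshold
          if i == 0:
              return 0.0
          # linear interpolation between gate i-1 and gate i
          return (i - 1) + (thresh - p[i - 1]) / (p[i] - p[i - 1])

      gates = np.arange(64)
      wf = 1.0 + 120.0 / (1.0 + np.exp(-(gates - 30) / 1.5))   # synthetic leading edge
      print(f"retracking gate: {threshold_retrack(wf):.2f}")    # near gate 30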

  9. 2D Quantum Transport Modeling in Nanoscale MOSFETs

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan

    2001-01-01

    With the onset of quantum confinement in the inversion layer in nanoscale MOSFETs, behavior of the resonant level inevitably determines all device characteristics. While most classical device simulators take quantization into account in some simplified manner, the important details of electrostatics are missing. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL, and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI). We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions, oxide tunneling and phase-breaking scattering are treated on equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Quantum simulations are focused on MIT 25, 50 and 90 nm "well-tempered" MOSFETs and compared to classical and quantum-corrected models. The important feature of the quantum model is a smaller slope of the Id-Vg curve and consequently a higher threshold voltage. These results are quantitatively consistent with 1D Schroedinger-Poisson calculations. The effect of gate length on gate-oxide leakage and sub-threshold current has been studied. The shorter gate length device has an order of magnitude smaller current at zero gate bias than the longer gate length device without a significant trade-off in on-current. This should be a device design consideration.

  10. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multi-variate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various size can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can be always corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.

  11. Vehicle lift-off modelling and a new rollover detection criterion

    NASA Astrophysics Data System (ADS)

    Mashadi, Behrooz; Mostaghimi, Hamid

    2017-05-01

    The modelling and development of a general criterion for the prediction of the rollover threshold is the main purpose of this work. Vehicle dynamics models after wheel lift-off, when the vehicle moves on two wheels, are derived, and the governing equations are used to develop the rollover threshold. These models include the properties of the suspension and steering systems. In order to study the stability of motion, the steady-state solutions of the equations of motion are obtained. Based on the stability analyses, a new relation is obtained for the rollover threshold in terms of measurable response parameters. The presented criterion predicts the best time for the prevention of vehicle rollover by applying a correcting moment. It is shown that the introduced threshold of vehicle rollover is a proper state of vehicle motion that is best for stabilising the vehicle with a low energy requirement.

  12. Fault tolerance with noisy and slow measurements and preparation.

    PubMed

    Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason

    2010-09-03

    It is not so well known that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in terms of the relaxation of accuracy, speed, and addressing requirements, they have usually been overlooked since they are expected to yield a very bad threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76×10^-5 (30% of the best known result for the same code using measurement) while admitting up to 1/3 error rates for measurements and allocating no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.

  13. Optimal threshold estimation for binary classifiers using game theory.

    PubMed

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
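
    A hedged sketch of the proposed rule: pick the score threshold at which the ROC curve meets the descending diagonal, i.e. where sensitivity equals specificity (TPR = 1 − FPR); scores and labels below are toy data, not from the paper.

      # Hedged sketch: find the threshold where the ROC curve crosses the descending
      # diagonal, i.e. where |TPR - (1 - FPR)| is smallest.
      import numpy as np

      def minimax_threshold(scores, labels):
          """Return the score threshold with sensitivity closest to specificity."""
          scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
          best_t, best_gap = None, np.inf
          for t in np.unique(scores):
              pred = scores >= t
              tpr = (pred & labels).sum() / labels.sum()        # sensitivity
              fpr = (pred & ~labels).sum() / (~labels).sum()    # 1 - specificity
              gap = abs(tpr - (1.0 - fpr))
              if gap < best_gap:
                  best_t, best_gap = t, gap
          return best_t

      rng = np.random.default_rng(2)
      scores = np.concatenate([rng.normal(1.0, 1.0, 500),    # positives
                               rng.normal(-1.0, 1.0, 500)])  # negatives
      labels = np.concatenate([np.ones(500), np.zeros(500)])
      print(f"minimax threshold ≈ {minimax_threshold(scores, labels):.2f}")  # near 0 here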

  14. Preclinical modeling highlights the therapeutic potential of hematopoietic stem cell gene editing for correction of SCID-X1.

    PubMed

    Schiroli, Giulia; Ferrari, Samuele; Conway, Anthony; Jacob, Aurelien; Capo, Valentina; Albano, Luisa; Plati, Tiziana; Castiello, Maria C; Sanvito, Francesca; Gennery, Andrew R; Bovolenta, Chiara; Palchaudhuri, Rahul; Scadden, David T; Holmes, Michael C; Villa, Anna; Sitia, Giovanni; Lombardo, Angelo; Genovese, Pietro; Naldini, Luigi

    2017-10-11

    Targeted genome editing in hematopoietic stem/progenitor cells (HSPCs) is an attractive strategy for treating immunohematological diseases. However, the limited efficiency of homology-directed editing in primitive HSPCs constrains the yield of corrected cells and might affect the feasibility and safety of clinical translation. These concerns need to be addressed in stringent preclinical models and overcome by developing more efficient editing methods. We generated a humanized X-linked severe combined immunodeficiency (SCID-X1) mouse model and evaluated the efficacy and safety of hematopoietic reconstitution from limited input of functional HSPCs, establishing thresholds for full correction upon different types of conditioning. Unexpectedly, conditioning before HSPC infusion was required to protect the mice from lymphoma developing when transplanting small numbers of progenitors. We then designed a one-size-fits-all IL2RG (interleukin-2 receptor common γ-chain) gene correction strategy and, using the same reagents suitable for correction of human HSPC, validated the edited human gene in the disease model in vivo, providing evidence of targeted gene editing in mouse HSPCs and demonstrating the functionality of the IL2RG -edited lymphoid progeny. Finally, we optimized editing reagents and protocol for human HSPCs and attained the threshold of IL2RG editing in long-term repopulating cells predicted to safely rescue the disease, using clinically relevant HSPC sources and highly specific zinc finger nucleases or CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 (CRISPR-associated protein 9). Overall, our work establishes the rationale and guiding principles for clinical translation of SCID-X1 gene editing and provides a framework for developing gene correction for other diseases. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  15. Modeling spatially-varying landscape change points in species occurrence thresholds

    USGS Publications Warehouse

    Wagner, Tyler; Midway, Stephen R.

    2014-01-01

    Predicting species distributions at scales of regions to continents is often necessary, as large-scale phenomena influence the distributions of spatially structured populations. Land use and land cover are important large-scale drivers of species distributions, and landscapes are known to create species occurrence thresholds, where small changes in a landscape characteristic result in abrupt changes in occurrence. The value of the landscape characteristic at which this change occurs is referred to as a change point. We present a hierarchical Bayesian threshold model (HBTM) that allows for estimating spatially varying parameters, including change points. Our model also allows for modeling estimated parameters in an effort to understand large-scale drivers of variability in land use and land cover on species occurrence thresholds. We use range-wide detection/nondetection data for the eastern brook trout (Salvelinus fontinalis), a stream-dwelling salmonid, to illustrate our HBTM for estimating and modeling spatially varying threshold parameters in species occurrence. We parameterized the model for investigating thresholds in landscape predictor variables that are measured as proportions, and which are therefore restricted to values between 0 and 1. Our HBTM estimated spatially varying thresholds in brook trout occurrence for both the proportion of agricultural and urban land uses. There was relatively little spatial variation in change point estimates, although there was spatial variability in the overall shape of the threshold response and associated uncertainty. In addition, regional mean stream water temperature was correlated with the change point parameters for the proportion of urban land use, with the change point value increasing with increasing mean stream water temperature. We present a framework for quantifying macrosystem variability in spatially varying threshold model parameters in relation to important large-scale drivers such as land use and land cover. Although the model presented is a logistic HBTM, it can easily be extended to accommodate other statistical distributions for modeling species richness or abundance.

  16. Creep and tensile properties of several oxide-dispersion-strengthened nickel-base alloys at 1365 K

    NASA Technical Reports Server (NTRS)

    Wittenberger, J. D.

    1977-01-01

    The tensile properties at room temperature and at 1365 K and the tensile creep properties at low strain rates at 1365 K were measured for several oxide-dispersion-strengthened (ODS) alloys. The alloys examined included ODS Ni, ODS Ni-20Cr, and ODS Ni-16Cr-Al. Metallography of creep tested, large grain size ODS alloys indicated that creep of these alloys is an inhomogeneous process. All alloys appear to possess a threshold stress for creep. This threshold stress is believed to be associated with diffusional creep in the large grain size ODS alloys and normal dislocation motion in perfect single crystal (without transverse low angle boundaries) ODS alloys. Threshold stresses for large grain size ODS Ni-20Cr and Ni-16Cr-Al type alloys are dependent on the grain aspect ratio. Because of the deleterious effect of prior creep on room temperature mechanical properties of large grain size ODS alloys, it is speculated that the threshold stress may be the design limiting creep strength property.

  17. Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.

    PubMed

    Song, Li; Florea, Liliana

    2015-01-01

    Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.
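
    A toy illustration of the local-threshold idea (not Rcorrector's actual algorithm): instead of one global count cutoff, the cutoff at each read position follows the local k-mer coverage, so errors in highly expressed transcripts are still caught; the k-mer counts below are invented.

      # Toy illustration of a local vs. global k-mer trust threshold. The counts and the
      # local-cutoff rule (a fraction of the local maximum coverage) are assumptions.
      def kmers(read, k):
          return [read[i:i + k] for i in range(len(read) - k + 1)]

      def trusted_flags(read, counts, k=5, global_cutoff=2, alpha=0.3):
          """Return (global, local) trusted flags for each k-mer in the read."""
          cs = [counts.get(km, 0) for km in kmers(read, k)]
          global_flags = [c >= global_cutoff for c in cs]
          local_cutoff = max(2, int(alpha * max(cs)))     # cutoff follows local coverage
          local_flags = [c >= local_cutoff for c in cs]
          return global_flags, local_flags

      read = "ACGTACGTGG"
      counts = {"ACGTA": 40, "CGTAC": 42, "GTACG": 39, "TACGT": 3, "ACGTG": 41, "CGTGG": 38}
      g, l = trusted_flags(read, counts)
      print("global:", g)   # the low-count k-mer (3) still passes the fixed cutoff of 2
      print("local: ", l)   # with coverage ~40, the local cutoff flags it as an error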

  18. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired Ripple Fixed-Pattern Noise (FPN) when observing sky scenes. The Ripple Fixed-Pattern Noise seriously affects the imaging quality of a thermal imager, especially for small target detection and tracking. It is hard to eliminate this FPN with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified space low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison to several previously published methods. This algorithm not only effectively corrects common FPN such as stripe noise, but also has a clear advantage over current methods in terms of detail protection and convergence speed, especially for Ripple FPN correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm on the FPGA has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
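
    A hedged sketch of a temporal high-pass correction with an adaptive update threshold (the parameters and the threshold rule are illustrative, not the paper's exact THP&GM algorithm): the low-frequency component of each pixel is tracked recursively and subtracted, and the update is frozen where the frame-to-frame change is large, which limits ghosting.

      # Hedged sketch of temporal high-pass nonuniformity correction with an adaptive
      # temporal threshold that freezes the update on pixels likely to contain motion.
      import numpy as np

      class TemporalHighpassNUC:
          def __init__(self, shape, time_constant=100.0, k_thresh=3.0):
              self.lowpass = np.zeros(shape)        # running estimate of the fixed pattern
              self.alpha = 1.0 / time_constant      # recursive filter coefficient
              self.k = k_thresh
              self.prev = None

          def correct(self, frame):
              frame = frame.astype(float)
              if self.prev is not None:
                  diff = np.abs(frame - self.prev)
                  moving = diff > self.k * (diff.mean() + 1e-9)   # adaptive temporal threshold
              else:
                  moving = np.zeros(frame.shape, dtype=bool)
              # update the low-frequency (FPN) estimate only where the scene is static
              update = self.alpha * (frame - self.lowpass)
              self.lowpass += np.where(moving, 0.0, update)
              self.prev = frame
              return frame - self.lowpass            # high-pass output = corrected frame

      nuc = TemporalHighpassNUC((4, 4))
      fpn = np.tile([[0.0, 5.0, 0.0, 5.0]], (4, 1))  # toy ripple-like column pattern
      for _ in range(300):
          out = nuc.correct(np.full((4, 4), 50.0) + fpn)
      print(np.round(out.std(), 2))                   # residual pattern shrinks toward 0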

  19. Calibration and validation of rainfall thresholds for shallow landslide forecasting in Sicily, southern Italy

    NASA Astrophysics Data System (ADS)

    Gariano, S. L.; Brunetti, M. T.; Iovine, G.; Melillo, M.; Peruccacci, S.; Terranova, O.; Vennari, C.; Guzzetti, F.

    2015-01-01

    Empirical rainfall thresholds are tools to forecast the possible occurrence of rainfall-induced shallow landslides. Accurate prediction of landslide occurrence requires reliable thresholds, which need to be properly validated before their use in operational warning systems. We exploited a catalogue of 200 rainfall conditions that have resulted in at least 223 shallow landslides in Sicily, southern Italy, in the 11-year period 2002-2011, to determine regional event duration-cumulated event rainfall (ED) thresholds for shallow landslide occurrence. We computed ED thresholds for different exceedance probability levels and determined the uncertainty associated with the thresholds using a consolidated bootstrap nonparametric technique. We further determined subregional thresholds, and we studied the role of lithology and seasonal periods in the initiation of shallow landslides in Sicily. Next, we validated the regional rainfall thresholds using 29 rainfall conditions that have resulted in 42 shallow landslides in Sicily in 2012. We based the validation on contingency tables, skill scores, and a receiver operating characteristic (ROC) analysis for thresholds at different exceedance probability levels, from 1% to 50%. Validation of rainfall thresholds is hampered by lack of information on landslide occurrence. Therefore, we considered the effects of variations in the contingencies and the skill scores caused by lack of information. Based on the results obtained, we propose a general methodology for the objective identification of a threshold that provides an optimal balance between maximization of correct predictions and minimization of incorrect predictions, including missed and false alarms. We expect that the methodology will increase the reliability of rainfall thresholds, fostering the use of validated rainfall thresholds in operational early warning systems for regional shallow landslide forecasting.
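
    A hedged sketch (not the authors' code) of fitting an ED threshold of the form E = α·D^γ: a line is fitted in log-log space and the intercept is lowered to the requested exceedance probability using the quantile of the residuals; the rainfall conditions are synthetic and the bootstrap uncertainty step is omitted.

      # Hedged sketch of a cumulated event rainfall-duration (ED) threshold fit.
      import numpy as np

      def ed_threshold(durations_h, cumulated_mm, exceedance=0.05):
          """Return (alpha, gamma) so that E = alpha * D**gamma leaves a fraction
          `exceedance` of the triggering points below the curve."""
          logD, logE = np.log10(durations_h), np.log10(cumulated_mm)
          gamma, intercept = np.polyfit(logD, logE, 1)          # central (50%) fit
          residuals = logE - (intercept + gamma * logD)
          shifted = intercept + np.quantile(residuals, exceedance)
          return 10.0 ** shifted, gamma

      rng = np.random.default_rng(3)
      D = 10.0 ** rng.uniform(0.5, 2.5, 200)                    # event durations, hours
      E = 8.0 * D ** 0.45 * 10.0 ** rng.normal(0.0, 0.15, 200)  # cumulated rainfall, mm
      alpha, gamma = ed_threshold(D, E, exceedance=0.05)
      print(f"T5 threshold: E = {alpha:.1f} * D^{gamma:.2f}")    # roughly E ≈ 4-5 * D^0.45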

  20. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Liangliang; Si, Yujuan; Jia, Zhenhong

    2018-03-01

    In this paper, a novel microscopy mineral image enhancement method based on an improved adaptive threshold in the nonsubsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and the improved adaptive threshold is adopted to suppress noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrate that the proposed approach has a better enhancement effect in terms of both objective and subjective metrics.

  1. All-optical associative memory using photorefractive crystals and a saturable absorber

    NASA Astrophysics Data System (ADS)

    Duelli, Markus; Cudney, Roger S.; Keller, Claude; Guenter, Peter

    1995-07-01

    We report on the investigation of a new configuration of an all-optical associative memory. The images to be recalled associatively are stored in a LiNbO3 crystal via angular multiplexing. Thresholding of the reconstructed reference beams during associative readout is achieved by using a saturable absorber with an intensity-tunable threshold. We demonstrate associative readout and error correction for 10 strongly overlapping black-and-white images. Associative recall and full reconstruction is performed when only 1/500 of the image stored is entered.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko

    We present the cross-section for the threshold production of the Higgs boson at hadron colliders at next-to-next-to-next-to-leading order (N3LO) in perturbative QCD. Furthermore, we present an analytic expression for the partonic cross-section at threshold and the impact of these corrections on the numerical estimates for the hadronic cross-section at the LHC. With this result we achieve a major milestone towards a complete evaluation of the cross-section at N3LO, which will reduce the theoretical uncertainty in the determination of the strengths of the Higgs boson interactions.

  3. Passive quantum error correction of linear optics networks through error averaging

    NASA Astrophysics Data System (ADS)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  4. A summary of recent results from the GRAPES-3 experiment

    NASA Astrophysics Data System (ADS)

    Gupta, S. K.

    2017-06-01

    The GRAPES-3 experiment is a combination of a high density extensive air shower (EAS) array of nearly 400 plastic scintillator detectors, and a large 560 m2 area tracking muon telescope with an energy threshold Eμ >1 GeV. GRAPES-3 has been operating continuously in Ooty, India since 2000. By accurately correcting for the effects of atmospheric pressure and temperature, the muon telescope provides a high precision directional survey of the galactic cosmic ray (GCR) intensity. This telescope has been used to observe the acceleration of muons during thunderstorm events. The recent discovery of a transient weakening of the Earth's magnetic shield through the detection of a GCR burst was the highlight of the GRAPES-3 results. We have an ongoing major expansion activity to further enhance the capability of the GRAPES-3 muon telescope by doubling its area.

  5. Do You See What I See? Exploring the Consequences of Luminosity Limits in Black Hole–Galaxy Evolution Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Mackenzie L.; Hickox, Ryan C.; DiPompeo, Michael A.

    In studies of the connection between active galactic nuclei (AGNs) and their host galaxies, there is widespread disagreement on some key aspects of the connection. These disagreements largely stem from a lack of understanding of the nature of the full underlying AGN population. Recent attempts to probe this connection utilize both observations and simulations to correct for a missed population, but presently are limited by intrinsic biases and complicated models. We take a simple simulation for galaxy evolution and add a new prescription for AGN activity to connect galaxy growth to dark matter halo properties and AGN activity to star formation. We explicitly model selection effects to produce an “observed” AGN population for comparison with observations and empirically motivated models of the local universe. This allows us to bypass the difficulties inherent in models that attempt to infer the AGN population by inverting selection effects. We investigate the impact of selecting AGNs based on thresholds in luminosity or Eddington ratio on the “observed” AGN population. By limiting our model AGN sample in luminosity, we are able to recreate the observed local AGN luminosity function and specific star formation-stellar mass distribution, and show that using an Eddington ratio threshold introduces less bias into the sample by selecting the full range of growing black holes, despite the challenge of selecting low-mass black holes. We find that selecting AGNs using these various thresholds yields samples with different AGN host galaxy properties.

  6. Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
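
    A hedged sketch of the objective selection described above: candidate intensity thresholds for one fixed duration are scanned against the debris-flow/no-debris-flow contingency table, and the value maximizing a skill score that penalizes both false and missed alarms is kept. The true skill statistic is used here as one reasonable choice, and the storms are synthetic.

      # Hedged sketch: objective intensity threshold chosen by maximizing the
      # true skill statistic (TSS = TPR - FPR) over candidate thresholds.
      import numpy as np

      def objective_intensity_threshold(peak_intensity, produced_debris_flow):
          """Return (threshold, TSS) maximizing the true skill statistic."""
          x = np.asarray(peak_intensity, float)
          y = np.asarray(produced_debris_flow, bool)
          best_t, best_tss = None, -np.inf
          for t in np.unique(x):
              predicted = x >= t
              tpr = (predicted & y).sum() / y.sum()
              fpr = (predicted & ~y).sum() / (~y).sum()
              tss = tpr - fpr
              if tss > best_tss:
                  best_t, best_tss = t, tss
          return best_t, best_tss

      rng = np.random.default_rng(4)
      no_flow = rng.gamma(2.0, 4.0, 150)     # peak intensities, storms without debris flows
      flow = rng.gamma(4.0, 6.0, 50)         # storms that triggered debris flows
      intensity = np.concatenate([no_flow, flow])
      triggered = np.concatenate([np.zeros(150, bool), np.ones(50, bool)])
      t, tss = objective_intensity_threshold(intensity, triggered)
      print(f"objective threshold ≈ {t:.1f} mm/h (TSS = {tss:.2f})")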

  7. Low-Threshold Active Teaching Methods for Mathematic Instruction

    ERIC Educational Resources Information Center

    Marotta, Sebastian M.; Hargis, Jace

    2011-01-01

    In this article, we present a large list of low-threshold active teaching methods categorized so the instructor can efficiently access and target the deployment of conceptually based lessons. The categories include teaching strategies for lecture on large and small class sizes; student action individually, in pairs, and groups; games; interaction…

  8. Laser-induced retinal damage thresholds for annular retinal beam profiles

    NASA Astrophysics Data System (ADS)

    Kennedy, Paul K.; Zuclich, Joseph A.; Lund, David J.; Edsall, Peter R.; Till, Stephen; Stuck, Bruce E.; Hollins, Richard C.

    2004-07-01

    The dependence of retinal damage thresholds on laser spot size, for annular retinal beam profiles, was measured in vivo for 3 μs, 590 nm pulses from a flashlamp-pumped dye laser. Minimum Visible Lesion (MVL) ED50 thresholds in rhesus were measured for annular retinal beam profiles covering 5, 10, and 20 mrad of visual field, which correspond to outer beam diameters of roughly 70, 160, and 300 μm, respectively, on the primate retina. Annular beam profiles at the retinal plane were achieved using a telescopic imaging system, with the focal properties of the eye represented as an equivalent thin lens, and all annular beam profiles had a 37% central obscuration. As a check on experimental data, theoretical MVL-ED50 thresholds for annular beam exposures were calculated using the Thompson-Gerstman granular model of laser-induced thermal damage to the retina. Threshold calculations were performed for the three experimental beam diameters and for an intermediate case with an outer beam diameter of 230 μm. Results indicate that the threshold vs. spot size trends, for annular beams, are similar to the trends for top hat beams determined in a previous study; i.e., the threshold dose varies with the retinal image area for larger image sizes. The model correctly predicts the threshold vs. spot size trends seen in the biological data, for both annular and top hat retinal beam profiles.

  9. Blueprint for a microwave trapped ion quantum computer

    PubMed Central

    Lekitsch, Bjoern; Weidt, Sebastian; Fowler, Austin G.; Mølmer, Klaus; Devitt, Simon J.; Wunderlich, Christof; Hensinger, Winfried K.

    2017-01-01

    The availability of a universal quantum computer may have a fundamental impact on a vast number of research fields and on society as a whole. An increasingly large scientific and industrial community is working toward the realization of such a device. An arbitrarily large quantum computer may best be constructed using a modular approach. We present a blueprint for a trapped ion–based scalable quantum computer module, making it possible to create a scalable quantum computer architecture based on long-wavelength radiation quantum gates. The modules control all operations as stand-alone units, are constructed using silicon microfabrication techniques, and are within reach of current technology. To perform the required quantum computations, the modules make use of long-wavelength radiation–based quantum gate technology. To scale this microwave quantum computer architecture to a large size, we present a fully scalable design that makes use of ion transport between different modules, thereby allowing arbitrarily many modules to be connected to construct a large-scale device. A high error–threshold surface error correction code can be implemented in the proposed architecture to execute fault-tolerant operations. With appropriate adjustments, the proposed modules are also suitable for alternative trapped ion quantum computer architectures, such as schemes using photonic interconnects. PMID:28164154

  10. Optical damage performance of conductive widegap semiconductors: spatial, temporal, and lifetime modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elhadj, Selim; Yoo, Jae-hyuck; Negres, Raluca A.

    The optical damage performance of electrically conductive gallium nitride (GaN) and indium tin oxide (ITO) films is addressed using large area, high power laser beam exposures at 1064 nm sub-bandgap wavelength. Analysis of the laser damage process assumes that onset of damage (threshold) is determined by the absorption and heating of a nanoscale region of a characteristic size reaching a critical temperature. We use this model to rationalize semi-quantitatively the pulse width scaling of the damage threshold from picosecond to nanosecond timescales, along with the pulse width dependence of the damage threshold probability derived by fitting large beam damage density data. Multi-shot exposures were used to address lifetime performance degradation described by an empirical expression based on the single exposure damage model. A damage threshold degradation of at least 50% was observed for both materials. Overall, the GaN films tested had 5-10 × higher optical damage thresholds than the ITO films tested for comparable transmission and electrical conductivity. This route to optically robust, large aperture transparent electrodes and power optoelectronics may thus involve use of next generation widegap semiconductors such as GaN.

  11. Chaotic inflation in Jordan frame supergravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyun Min, E-mail: hyun.min.lee@cern.ch

    2010-08-01

    We consider the inflationary scenario with non-minimal coupling in 4D Jordan frame supergravity. We find that there occurs a tachyonic instability along the direction of the accompanying non-inflaton field in generic Jordan frame supergravity models. We propose a higher order correction to the Jordan frame function for solving the tachyonic mass problem and show that the necessary correction can be naturally generated by the heavy thresholds without spoiling the slow-roll conditions. We discuss the implication of the result on the Higgs inflation in NMSSM.

  12. Recollection is a continuous process: Evidence from plurality memory receiver operating characteristics.

    PubMed

    Slotnick, Scott D; Jeye, Brittany M; Dodson, Chad S

    2016-01-01

    Is recollection a continuous/graded process or a threshold/all-or-none process? Receiver operating characteristic (ROC) analysis can answer this question as the continuous model and the threshold model predict curved and linear recollection ROCs, respectively. As memory for plurality, an item's previous singular or plural form, is assumed to rely on recollection, the nature of recollection can be investigated by evaluating plurality memory ROCs. The present study consisted of four experiments. During encoding, words (singular or plural) or objects (single/singular or duplicate/plural) were presented. During retrieval, old items with the same plurality or different plurality were presented. For each item, participants made a confidence rating ranging from "very sure old", which was correct for same plurality items, to "very sure new", which was correct for different plurality items. Each plurality memory ROC was the proportion of same versus different plurality items classified as "old" (i.e., hits versus false alarms). Chi-squared analysis revealed that all of the plurality memory ROCs were adequately fit by the continuous unequal variance model, whereas none of the ROCs were adequately fit by the two-high threshold model. These plurality memory ROC results indicate recollection is a continuous process, which complements previous source memory and associative memory ROC findings.

  13. Refractive error and presbyopia among adults in Fiji.

    PubMed

    Brian, Garry; Pearce, Matthew G; Ramke, Jacqueline

    2011-04-01

    To characterize refractive error, presbyopia and their correction among adults aged ≥ 40 years in Fiji, and contribute to a regional overview of these conditions. A population-based cross-sectional survey using multistage cluster random sampling. Presenting distance and near vision were measured and a dilated slitlamp examination performed. The survey achieved 73.0% participation (n=1381). Presenting binocular distance vision ≥ 6/18 was achieved by 1223 participants. Another 79 had vision impaired by refractive error. Three of these were blind. At threshold 6/18, 204 participants had refractive error. Among these, 125 had spectacle-corrected presenting vision ≥ 6/18 ("met refractive error need"); 79 presented wearing no (n=74) or under-correcting (n=5) distance spectacles ("unmet refractive error need"). Presenting binocular near vision ≥ N8 was achieved by 833 participants. At threshold N8, 811 participants had presbyopia. Among these, 336 attained N8 with presenting near spectacles ("met presbyopia need"); 475 presented with no (n=402) or under-correcting (n=73) near spectacles ("unmet presbyopia need"). Rural residence was predictive of unmet refractive error (p=0.040) and presbyopia (p=0.016) need. Gender and household income source were not. Ethnicity-gender-age-domicile-adjusted to the Fiji population aged ≥ 40 years, "met refractive error need" was 10.3% (95% confidence interval [CI] 8.7-11.9%), "unmet refractive error need" was 4.8% (95% CI 3.6-5.9%), "refractive error correction coverage" was 68.3% (95% CI 54.4-82.2%), "met presbyopia need" was 24.6% (95% CI 22.4-26.9%), "unmet presbyopia need" was 33.8% (95% CI 31.3-36.3%), and "presbyopia correction coverage" was 42.2% (95% CI 37.6-46.8%). Fiji refraction and dispensing services should encourage uptake by rural dwellers and promote presbyopia correction. Lack of comparable data from neighbouring countries prevents a regional overview.
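
    The reported correction coverage values follow directly from the met and unmet need estimates as coverage = met / (met + unmet); a quick check using the adjusted percentages quoted above (small differences are rounding):

        # Correction coverage = met need / (met need + unmet need), using the
        # population-adjusted percentages quoted in the abstract.
        def coverage(met, unmet):
            return 100.0 * met / (met + unmet)

        print(f"refractive error correction coverage ~ {coverage(10.3, 4.8):.1f}%")   # ~68%, cf. reported 68.3%
        print(f"presbyopia correction coverage       ~ {coverage(24.6, 33.8):.1f}%")  # ~42%, cf. reported 42.2%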

  14. Towards self-correcting quantum memories

    NASA Astrophysics Data System (ADS)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties. It has an increased memory lifetime for an increased system size up to a temperature-dependent maximum. One strategy for increasing the energy barrier is by mediating an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real implementations of quantum memories. Numerical evidence also suggests that the cellular automaton could function as a decoder with a soft threshold.

  15. How strange is pion electroproduction?

    DOE PAGES

    Gorchtein, Mikhail; Spiesberger, Hubert; Zhang, Xilin

    2015-11-18

    We consider pion production in parity-violating electron scattering (PVES) in the presence of nucleon strangeness in the framework of partial wave analysis with unitarity. Using the experimental bounds on the strange form factors obtained in elastic PVES, we study the sensitivity of the parity-violating asymmetry to strange nucleon form factors. For forward kinematics and electron energies above 1 GeV, we observe that this sensitivity may reach about 20% in the threshold region. With parity-violating asymmetries being as large as tens of p.p.m., this study suggests that threshold pion production in PVES can be used as a promising way to better constrain strangeness contributions. Using this model for the neutral current pion production, we update the estimate for the dispersive γZ-box correction to the weak charge of the proton. In the kinematics of the Qweak experiment, our new prediction reads Re □_γZ^V(E = 1.165 GeV) = (5.58 ± 1.41) × 10⁻³, an improvement over the previous uncertainty estimate of ±2.0 × 10⁻³. Our new prediction in the kinematics of the upcoming MESA/P2 experiment reads Re □_γZ^V(E = 0.155 GeV) = (1.1 ± 0.2) × 10⁻³.

  16. Graph theoretical analysis of EEG functional connectivity during music perception.

    PubMed

    Wu, Junjie; Zhang, Junsong; Liu, Chu; Liu, Dongwei; Ding, Xiaojun; Zhou, Changle

    2012-11-05

    The present study evaluated the effect of music on large-scale structure of functional brain networks using graph theoretical concepts. While most studies on music perception used Western music as an acoustic stimulus, Guqin music, representative of Eastern music, was selected for this experiment to increase our knowledge of music perception. Electroencephalography (EEG) was recorded from non-musician volunteers in three conditions: Guqin music, noise and silence backgrounds. Phase coherence was calculated in the alpha band and between all pairs of EEG channels to construct correlation matrices. Each resulting matrix was converted into a weighted graph using a threshold, and two network measures, the clustering coefficient and the characteristic path length, were calculated. Music perception was found to display a higher mean phase coherence. Over the whole range of thresholds, the clustering coefficient was larger while listening to music, whereas the path length was smaller. Networks in the music background still had a shorter characteristic path length even after the correction for differences in mean synchronization level among background conditions. This topological change indicated a more optimal structure under music perception. Thus, prominent small-world properties are confirmed in functional brain networks. Furthermore, music perception shows an increase of functional connectivity and an enhancement of small-world network organizations. Copyright © 2012 Elsevier B.V. All rights reserved.
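
    A rough sketch of this kind of pipeline on synthetic data (alpha-band phase coherence between all channel pairs, a single coherence threshold, then graph metrics); the study used weighted graphs over a range of thresholds, whereas a binary graph at one threshold is used here for brevity, and the channel count, sampling rate, and threshold are arbitrary.

        # Alpha-band phase-locking value (PLV) between all channel pairs, then a
        # thresholded graph and two network measures. Synthetic stand-in data.
        import numpy as np
        import networkx as nx
        from scipy.signal import butter, filtfilt, hilbert

        fs, n_ch, n_samp = 250, 8, 5000
        rng = np.random.default_rng(0)
        eeg = rng.standard_normal((n_ch, n_samp))           # stand-in for recorded EEG

        b, a = butter(4, [8.0, 12.0], btype="band", fs=fs)   # alpha band
        phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=1), axis=1))

        plv = np.zeros((n_ch, n_ch))                         # phase coherence matrix
        for i in range(n_ch):
            for j in range(n_ch):
                plv[i, j] = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))

        threshold = 0.3                                      # one point on a threshold range
        adj = (plv > threshold).astype(int)
        np.fill_diagonal(adj, 0)
        G = nx.from_numpy_array(adj)

        print("mean phase coherence      :", plv[np.triu_indices(n_ch, 1)].mean())
        print("clustering coefficient    :", nx.average_clustering(G))
        if G.number_of_edges() and nx.is_connected(G):
            print("characteristic path length:", nx.average_shortest_path_length(G))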

  17. Cascades in the Threshold Model for varying system sizes

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy

    2015-03-01

    A classical model in opinion dynamics is the Threshold Model (TM) aiming to model the spread of a new opinion based on the social drive of peer pressure. Under the TM a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators), and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators for which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies for synthetic graphs of various sizes. We observe that for ER graphs when large cascades occur, the spread contribution of the added initiator on the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
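
    A minimal simulation of the Threshold Model on an ER graph, sweeping the initiator fraction and reporting the final cascade size; the network size, mean degree, and uniform threshold below are illustrative choices, not the parameters of the study.

        # Threshold Model cascade on an ER graph: a node adopts when the fraction
        # of its adopted neighbors exceeds a uniform threshold phi.
        import random
        import networkx as nx

        def cascade_size(n=5000, k_avg=10, phi=0.30, p_init=0.10, seed=1):
            random.seed(seed)
            G = nx.fast_gnp_random_graph(n, k_avg / (n - 1), seed=seed)
            active = set(random.sample(range(n), int(p_init * n)))   # initiators
            changed = True
            while changed:
                newly = set()
                for node in G:
                    if node in active:
                        continue
                    nbrs = list(G.neighbors(node))
                    if nbrs and sum(v in active for v in nbrs) / len(nbrs) > phi:
                        newly.add(node)
                active |= newly
                changed = bool(newly)
            return len(active) / n

        for p in (0.02, 0.05, 0.10, 0.20):
            print(f"initiator fraction {p:.2f} -> cascade size {cascade_size(p_init=p):.2f}")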

  18. [Atmospheric correction of visible-infrared band FY-3A/MERSI data based on 6S model].

    PubMed

    Wu, Yong-Li; Luan, Qing; Tian, Guo-Zhen

    2011-06-01

    Based on observation data from the meteorological stations in Taiyuan City and its surrounding areas of Shanxi Province, the atmospheric parameters for the 6S model were supplied, and the atmospheric correction of visible-infrared band (250 m resolution) FY-3A/MERSI data was conducted. After atmospheric correction, the dynamic range of the visible-infrared band FY-3A/MERSI data was widened, reflectivity increased, the histogram peak was higher, and the distribution histogram was smoother. In the meantime, the threshold value of the NDVI data reflecting vegetation condition increased, and its peak was higher and closer to the real data. Moreover, the color composite image of the corrected data showed more abundant information: its brightness increased, contrast was enhanced, and the information it reflected was closer to reality.

  19. Radar-based quantitative precipitation estimation for the identification of debris flow occurrence over earthquake-affected regions in Sichuan, China

    NASA Astrophysics Data System (ADS)

    Shi, Zhao; Wei, Fangqiang; Chandrasekar, Venkatachalam

    2018-03-01

    Both the Ms 8.0 Wenchuan earthquake on 12 May 2008 and the Ms 7.0 Lushan earthquake on 20 April 2013 occurred in the province of Sichuan, China. In the earthquake-affected mountainous area, a large amount of loose material caused a high occurrence of debris flow during the rainy season. In order to evaluate the rainfall intensity-duration (I-D) threshold of the debris flow in the earthquake-affected area, and to fill up the observational gaps caused by the relatively scarce and low-altitude deployment of rain gauges in this area, raw data from two S-band China New Generation Doppler Weather Radars (CINRAD) were captured for six rainfall events that triggered 519 debris flows between 2012 and 2014. Due to the challenges of radar quantitative precipitation estimation (QPE) over mountainous areas, a series of improvement measures are considered: a hybrid scan mode, a vertical reflectivity profile (VPR) correction, a mosaic of reflectivity, a merged rainfall-reflectivity (R - Z) relationship for convective and stratiform rainfall, and rainfall bias adjustment with a Kalman filter (KF). For validating rainfall accumulation over complex terrains, the study areas are divided into two kinds of regions by the height threshold of 1.5 km from the ground. Three kinds of radar rainfall estimates are compared with rain gauge measurements. It is observed that the normalized mean bias (NMB) is decreased by 39 % and the fitted linear ratio between radar and rain gauge observation reaches 0.98. Furthermore, the radar-based I-D threshold derived by the frequentist method is I = 10.1D^(-0.52) and is underestimated by uncorrected raw radar data. In order to verify the impacts on observations due to spatial variation, I-D thresholds are identified from the nearest rain gauge observations and radar observations at the rain gauge locations. It is found that both kinds of observations have similar I-D thresholds and likewise underestimate I-D thresholds due to undershooting at the core of convective rainfall. It is indicated that improvement of the spatial resolution and measuring accuracy of radar observations will lead to improved identification of debris flow occurrence, especially for events triggered by the strong small-scale rainfall process in the study area.
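
    Applying the reported threshold curve is straightforward; the sketch below evaluates I = 10.1D^(-0.52) for a few hypothetical duration-intensity pairs (units of mm/h and hours are assumed here, as is conventional for such curves) and flags exceedance.

        # Evaluate the reported I-D threshold I = 10.1 * D**(-0.52) and compare it
        # with hypothetical observed rainfall (duration in hours, intensity in mm/h;
        # the units are an assumption, as the abstract does not state them).
        def threshold_intensity(duration_h, alpha=10.1, beta=-0.52):
            return alpha * duration_h ** beta

        observations = [(0.5, 18.0), (1.0, 9.0), (3.0, 8.0), (6.0, 3.5)]  # (D, I)
        for d, i in observations:
            t = threshold_intensity(d)
            flag = "exceeds" if i >= t else "below"
            print(f"D = {d:3.1f} h, I = {i:4.1f} mm/h -> threshold {t:4.1f} mm/h ({flag})")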

  20. Coupling a regional warning system to a semantic engine on online news for enhancing landslide prediction

    NASA Astrophysics Data System (ADS)

    Battistini, Alessandro; Rosi, Ascanio; Segoni, Samuele; Catani, Filippo; Casagli, Nicola

    2017-04-01

    Landslide inventories are basic data for large scale landslide modelling, e.g. they are needed to calibrate and validate rainfall thresholds, physically based models and early warning systems. The setting up of landslide inventories with traditional methods (e.g. remote sensing, field surveys and manual retrieval of data from technical reports and local newspapers) is time consuming. The objective of this work is to automatically set up a landslide inventory using a state-of-the-art semantic engine based on data mining on online news (Battistini et al., 2013) and to evaluate if the automatically generated inventory can be used to validate a regional scale landslide warning system based on rainfall thresholds. The semantic engine scanned internet news in real time over a 50-month test period. At the end of the process, an inventory of approximately 900 landslides was set up for the Tuscany region (23,000 km2, Italy). The inventory was compared with the outputs of the regional landslide early warning system based on rainfall thresholds, and a good correspondence was found: e.g. 84% of the events reported in the news are correctly identified by the model. In addition, the cases of non-correspondence were forwarded to the rainfall threshold developers, who used these inputs to update some of the thresholds. On the basis of the results obtained, we conclude that automatic validation of landslide models using geolocalized landslide event feedback is possible. The source of data for validation can be obtained directly from the internet channel using an appropriate semantic engine. We also automated the validation procedure, which is based on a comparison between forecasts and reported events. We verified that our approach can be automatically used for a near real time validation of the warning system and for a semi-automatic update of the rainfall thresholds, which could lead to an improvement of the forecasting effectiveness of the warning system. In the near future, the proposed procedure could operate in continuous time and could allow for a periodic update of landslide hazard models and landslide early warning systems with minimum human intervention. References: Battistini, A., Segoni, S., Manzo, G., Catani, F., Casagli, N. (2013). Web data mining for automatic inventory of geohazards at national scale. Applied Geography, 43, 147-158.

  1. A compact quantum correction model for symmetric double gate metal-oxide-semiconductor field-effect transistor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Edward Namkyu; Shin, Yong Hyeon; Yun, Ilgu, E-mail: iyun@yonsei.ac.kr

    2014-11-07

    A compact quantum correction model for a symmetric double gate (DG) metal-oxide-semiconductor field-effect transistor (MOSFET) is investigated. The compact quantum correction model is proposed from the concepts of the threshold voltage shift (ΔV_TH^QM) and the gate capacitance (C_g) degradation. First of all, ΔV_TH^QM induced by quantum mechanical (QM) effects is modeled. The C_g degradation is then modeled by introducing the inversion layer centroid. With ΔV_TH^QM and the C_g degradation, the QM effects are implemented in the previously reported classical model and a comparison between the proposed quantum correction model and numerical simulation results is presented. Based on the results, the proposed quantum correction model is applicable to the compact model of the DG MOSFET.
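
    For a rough feel of the ΔV_TH^QM ingredient only (this is not the paper's compact model), one can approximate the lowest-subband confinement energy of a thin silicon body by an infinite square well of width t_si and read the threshold-voltage shift as roughly E1/q; the effective-mass value below is an assumed constant for (100) confinement.

        # Back-of-the-envelope estimate of a quantum-confinement threshold-voltage
        # shift: ground-subband energy of an infinite well of width t_si, divided
        # by the elementary charge. Illustrative only; not the paper's model.
        import numpy as np

        HBAR = 1.054571817e-34      # J*s
        M0   = 9.1093837015e-31     # kg
        Q    = 1.602176634e-19      # C
        m_eff = 0.916 * M0          # assumed longitudinal effective mass, (100) Si

        def delta_vth_qm(t_si_nm):
            t = t_si_nm * 1e-9
            e1 = HBAR**2 * np.pi**2 / (2.0 * m_eff * t**2)   # ground subband energy (J)
            return e1 / Q                                     # volts

        for t_si in (3.0, 5.0, 10.0):
            print(f"t_si = {t_si:4.1f} nm -> DeltaV_TH(QM) ~ {1e3 * delta_vth_qm(t_si):5.1f} mV")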

  2. Spatial layout affects speed discrimination

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.

    1997-01-01

    We address a surprising result in a previous study of speed discrimination with multiple moving gratings: discrimination thresholds decreased when the number of stimuli was increased, but remained unchanged when the area of a single stimulus was increased [Verghese & Stone (1995). Vision Research, 35, 2811-2823]. In this study, we manipulated the spatial- and phase relationship between multiple grating patches to determine their effect on speed discrimination thresholds. In a fusion experiment, we merged multiple stimulus patches, in stages, into a single patch. Thresholds increased as the patches were brought closer and their phase relationship was adjusted to be consistent with a single patch. Thresholds increased further still as these patches were fused into a single patch. In a fission experiment, we divided a single large patch into multiple patches by superimposing a cross with luminance equal to that of the background. Thresholds decreased as the large patch was divided into quadrants and decreased further as the quadrants were maximally separated. However, when the cross luminance was darker than the background, it was perceived as an occluder and thresholds, on average, were unchanged from that for the single large patch. A control experiment shows that the observed trend in discrimination thresholds is not due to the differences in perceived speed of the stimuli. These results suggest that the parsing of the visual image into entities affects the combination of speed information across space, and that each discrete entity effectively provides a single independent estimate of speed.

  3. Scaling Laws for NanoFET Sensors

    NASA Astrophysics Data System (ADS)

    Wei, Qi-Huo; Zhou, Fu-Shan

    2008-03-01

    In this paper, we report our numerical studies of the scaling laws for nanoplate field-effect transistor (FET) sensors by simplifying the nanoplates as random resistor networks. Nanowire/nanotube FETs are included as the limiting cases in which the device width becomes small. Computer simulations show that the field effect strength exerted by the binding molecules has a significant impact on the scaling behaviors. When the field effect strength is small, nanoFETs have little size and shape dependence. In contrast, when the field-effect strength becomes stronger, there exists a lower detection threshold for charge accumulation FETs and an upper detection threshold for charge depletion FET sensors. At these thresholds, the nanoFET devices undergo a transition between low and high sensitivity. These thresholds may set the detection limits of nanoFET sensors. We propose to eliminate these detection thresholds by employing devices with very short source-drain distance and large width.

  4. The power metric: a new statistically robust enrichment-type metric for virtual screening applications with early recovery capability.

    PubMed

    Lopes, Julio Cesar Dias; Dos Santos, Fábio Mendes; Martins-José, Andrelly; Augustyns, Koen; De Winter, Hans

    2017-01-01

    A new metric for the evaluation of model performance in the field of virtual screening and quantitative structure-activity relationship applications is described. This metric has been termed the power metric and is defined as the fraction of the true positive rate divided by the sum of the true positive and false positive rates, for a given cutoff threshold. The performance of this metric is compared with alternative metrics such as the enrichment factor, the relative enrichment factor, the receiver operating curve enrichment factor, the correct classification rate, Matthews correlation coefficient and Cohen's kappa coefficient. The performance of this new metric is found to be quite robust with respect to variations in the applied cutoff threshold and ratio of the number of active compounds to the total number of compounds, and at the same time being sensitive to variations in model quality. It possesses the correct characteristics for its application in early-recognition virtual screening problems.
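
    The metric itself is simple to compute from a ranked list; a sketch on synthetic screening scores, with the classical enrichment factor shown alongside for comparison (compound counts and score distributions are arbitrary):

        # Power metric at a cutoff: TPR / (TPR + FPR), computed from a ranked list
        # of scores with known activity labels (synthetic data). The enrichment
        # factor at the same cutoff is shown for comparison.
        import numpy as np

        rng = np.random.default_rng(3)
        n_act, n_dec = 50, 950
        scores = np.concatenate([rng.normal(1.0, 1.0, n_act),    # actives score higher on average
                                 rng.normal(0.0, 1.0, n_dec)])
        labels = np.concatenate([np.ones(n_act), np.zeros(n_dec)])
        labels = labels[np.argsort(-scores)]                     # rank best-scoring first

        def metrics_at(fraction):
            n_sel = int(fraction * labels.size)
            tp = labels[:n_sel].sum()
            fp = n_sel - tp
            tpr, fpr = tp / n_act, fp / n_dec
            power = tpr / (tpr + fpr) if (tpr + fpr) else 0.0
            ef = (tp / n_sel) / (n_act / labels.size) if n_sel else 0.0
            return power, ef

        for frac in (0.01, 0.05, 0.10):
            pw, ef = metrics_at(frac)
            print(f"top {frac:.0%}: power metric = {pw:.2f}, enrichment factor = {ef:.1f}")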

  5. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    PubMed

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
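
    A rough numerical illustration of the idea (not the authors' analysis): simulate a DDM at several thresholds and locate the one that maximizes a simplified reward rate, RR = P(correct) / (mean RT + response-stimulus interval); all parameter values below are arbitrary.

        # Sweep the DDM decision threshold and report accuracy, mean RT, and a
        # simple reward rate. Drift, noise, and interval values are illustrative.
        import numpy as np

        def simulate_ddm(threshold, drift=1.0, noise=1.0, dt=0.002, t0=0.3,
                         n_trials=500, seed=7):
            rng = np.random.default_rng(seed)
            n_correct, total_rt = 0, 0.0
            for _ in range(n_trials):
                x, t = 0.0, 0.0
                while abs(x) < threshold:                 # accumulate to +/- threshold
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                n_correct += x >= threshold               # upper bound = correct response
                total_rt += t + t0                        # add non-decision time
            return n_correct / n_trials, total_rt / n_trials

        rsi = 1.0                                         # response-stimulus interval (s)
        for a in (0.25, 0.5, 0.75, 1.0, 1.5):
            acc, mean_rt = simulate_ddm(a)
            rr = acc / (mean_rt + rsi)
            print(f"threshold {a:.2f}: accuracy {acc:.2f}, mean RT {mean_rt:.2f} s, reward rate {rr:.2f}/s")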

  6. Nonuniformity correction for an infrared focal plane array based on diamond search block matching.

    PubMed

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.

  7. Automatic 3D registration of dynamic stress and rest (82)Rb and flurpiridaz F 18 myocardial perfusion PET data for patient motion detection and correction.

    PubMed

    Woo, Jonghye; Tamarappoo, Balaji; Dey, Damini; Nakazato, Ryo; Le Meunier, Ludovic; Ramesh, Amit; Lazewatsky, Joel; Germano, Guido; Berman, Daniel S; Slomka, Piotr J

    2011-11-01

    The authors aimed to develop an image-based registration scheme to detect and correct patient motion in stress and rest cardiac positron emission tomography (PET)/CT images. The patient motion correction was of primary interest and the effects of patient motion with the use of flurpiridaz F 18 and (82)Rb were demonstrated. The authors evaluated stress/rest PET myocardial perfusion imaging datasets in 30 patients (60 datasets in total, 21 male and 9 female) using a new perfusion agent (flurpiridaz F 18) (n = 16) and (82)Rb (n = 14), acquired on a Siemens Biograph-64 scanner in list mode. Stress and rest images were reconstructed into 4 ((82)Rb) or 10 (flurpiridaz F 18) dynamic frames (60 s each) using standard reconstruction (2D attenuation weighted ordered subsets expectation maximization). Patient motion correction was achieved by an image-based registration scheme optimizing a cost function using modified normalized cross-correlation that combined global and local features. For comparison, visual scoring of motion was performed on the scale of 0 to 2 (no motion, moderate motion, and large motion) by two experienced observers. The proposed registration technique had a 93% success rate in removing left ventricular motion, as visually assessed. The maximum detected motion extent for stress and rest were 5.2 mm and 4.9 mm for flurpiridaz F 18 perfusion and 3.0 mm and 4.3 mm for (82)Rb perfusion studies, respectively. Motion extent (maximum frame-to-frame displacement) obtained for stress and rest were (2.2 ± 1.1, 1.4 ± 0.7, 1.9 ± 1.3) mm and (2.0 ± 1.1, 1.2 ± 0.9, 1.9 ± 0.9) mm for flurpiridaz F 18 perfusion studies and (1.9 ± 0.7, 0.7 ± 0.6, 1.3 ± 0.6) mm and (2.0 ± 0.9, 0.6 ± 0.4, 1.2 ± 1.2) mm for (82)Rb perfusion studies, respectively. A visually detectable patient motion threshold was established to be ≥2.2 mm, corresponding to visual user scores of 1 and 2. After motion correction, the average increases in contrast-to-noise ratio (CNR) from all frames with motion larger than the motion threshold were 16.2% in stress flurpiridaz F 18 and 12.2% in rest flurpiridaz F 18 studies. The average increases in CNR were 4.6% in stress (82)Rb studies and 4.3% in rest (82)Rb studies. Fully automatic motion correction of dynamic PET frames can be performed accurately, potentially allowing improved image quantification of cardiac PET data.
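
    A toy 2D stand-in for the frame-to-frame displacement estimation step (the study uses a modified normalized cross-correlation combining global and local features in 3D): estimate an integer shift between two frames by maximizing plain normalized cross-correlation over a small search window; converting the shift to millimetres for comparison with a motion threshold such as 2.2 mm would additionally require the voxel size.

        # Estimate the integer shift between two frames by brute-force search over
        # a small window, scoring each candidate with normalized cross-correlation.
        import numpy as np

        def ncc(a, b):
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom else 0.0

        def estimate_shift(ref, mov, max_shift=5):
            best, best_dy, best_dx = -np.inf, 0, 0
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
                    score = ncc(ref, shifted)
                    if score > best:
                        best, best_dy, best_dx = score, dy, dx
            return best_dy, best_dx, best

        rng = np.random.default_rng(5)
        ref = rng.random((64, 64))
        mov = np.roll(ref, (2, -3), axis=(0, 1)) + 0.05 * rng.random((64, 64))

        dy, dx, score = estimate_shift(ref, mov)
        # the recovered shift (-2, 3) undoes the applied (2, -3) motion
        print(f"correcting shift: ({dy}, {dx}) pixels, NCC = {score:.3f}")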

  8. Improved Controller Design of Grid Friendly™ Appliances for Primary Frequency Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Sun, Yannan; Marinovici, Laurentiu D.

    2015-09-01

    The Grid Friendly™ Appliance (GFA) controller, developed at Pacific Northwest National Laboratory, can autonomously switch off appliances by detecting under-frequency events. In this paper, the impacts of the curtailing frequency threshold on the performance of frequency responsive GFAs are carefully analyzed first. The current method of selecting curtailing frequency thresholds for GFAs is found to be insufficient to guarantee the desired performance, especially when the frequency deviation is shallow. In addition, the power reduction of online GFAs could be so excessive that it can even impact the system response negatively. As a remedy to the deficiency of the current controller design, a different way of selecting curtailing frequency thresholds is proposed to ensure the effectiveness of GFAs in frequency protection. Moreover, it is also proposed to introduce a supervisor at each distribution feeder to monitor the curtailing frequency thresholds of online GFAs and take corrective actions if necessary.

  9. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  10. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones considered as subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.

  11. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection

    PubMed Central

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors. PMID:24688709
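
    A schematic version of the pre-filtering step (illustrative alignment and threshold; ChromatoGate itself additionally inspects the chromatogram peaks): scan the MSA columns for polymorphic sites and flag those whose minority character is rare enough to be a suspected mis-call.

        # Flag polymorphic MSA columns where the minority character falls below a
        # frequency threshold, i.e. the sites most worth inspecting for mis-calls.
        from collections import Counter

        msa = [
            "ACGTACGTAC",
            "ACGTACGTAC",
            "ACGTTCGTAC",   # possible mis-call at column 4
            "ACGTACGTAC",
            "ACGTACGAAC",   # possible mis-call at column 7
        ]
        threshold = 0.25    # flag minority characters rarer than this fraction

        for col in range(len(msa[0])):
            counts = Counter(seq[col] for seq in msa)
            if len(counts) > 1:                                  # polymorphic site
                minority_char, minority_n = counts.most_common()[-1]
                if minority_n / len(msa) <= threshold:
                    print(f"column {col}: {dict(counts)} -> inspect peaks for '{minority_char}'")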

  12. ChromatoGate: A Tool for Detecting Base Mis-Calls in Multiple Sequence Alignments by Semi-Automatic Chromatogram Inspection.

    PubMed

    Alachiotis, Nikolaos; Vogiatzi, Emmanouella; Pavlidis, Pavlos; Stamatakis, Alexandros

    2013-01-01

    Automated DNA sequencers generate chromatograms that contain raw sequencing data. They also generate data that translates the chromatograms into molecular sequences of A, C, G, T, or N (undetermined) characters. Since chromatogram translation programs frequently introduce errors, a manual inspection of the generated sequence data is required. As sequence numbers and lengths increase, visual inspection and manual correction of chromatograms and corresponding sequences on a per-peak and per-nucleotide basis becomes an error-prone, time-consuming, and tedious process. Here, we introduce ChromatoGate (CG), an open-source software that accelerates and partially automates the inspection of chromatograms and the detection of sequencing errors for bidirectional sequencing runs. To provide users full control over the error correction process, a fully automated error correction algorithm has not been implemented. Initially, the program scans a given multiple sequence alignment (MSA) for potential sequencing errors, assuming that each polymorphic site in the alignment may be attributed to a sequencing error with a certain probability. The guided MSA assembly procedure in ChromatoGate detects chromatogram peaks of all characters in an alignment that lead to polymorphic sites, given a user-defined threshold. The threshold value represents the sensitivity of the sequencing error detection mechanism. After this pre-filtering, the user only needs to inspect a small number of peaks in every chromatogram to correct sequencing errors. Finally, we show that correcting sequencing errors is important, because population genetic and phylogenetic inferences can be misled by MSAs with uncorrected mis-calls. Our experiments indicate that estimates of population mutation rates can be affected two- to three-fold by uncorrected errors.

  13. Improved multidimensional semiclassical tunneling theory.

    PubMed

    Wagner, Albert F

    2013-12-12

    We show that the analytic multidimensional semiclassical tunneling formula of Miller et al. [Miller, W. H.; Hernandez, R.; Handy, N. C.; Jayatilaka, D.; Willets, A. Chem. Phys. Lett. 1990, 172, 62] is qualitatively incorrect for deep tunneling at energies well below the top of the barrier. The origin of this deficiency is that the formula uses an effective barrier weakly related to the true energetics but correctly adjusted to reproduce the harmonic description and anharmonic corrections of the reaction path at the saddle point as determined by second order vibrational perturbation theory. We present an analytic improved semiclassical formula that correctly includes energetic information and allows a qualitatively correct representation of deep tunneling. This is done by constructing a three segment composite Eckart potential that is continuous everywhere in both value and derivative. This composite potential has an analytic barrier penetration integral from which the semiclassical action can be derived and then used to define the semiclassical tunneling probability. The middle segment of the composite potential by itself is superior to the original formula of Miller et al. because it incorporates the asymmetry of the reaction barrier produced by the known reaction exoergicity. Comparison of the semiclassical and exact quantum tunneling probability for the pure Eckart potential suggests a simple threshold multiplicative factor to the improved formula to account for quantum effects very near threshold not represented by semiclassical theory. The deep tunneling limitations of the original formula are echoed in semiclassical high-energy descriptions of bound vibrational states perpendicular to the reaction path at the saddle point. However, typically ab initio energetic information is not available to correct it. The Supporting Information contains a Fortran code, test input, and test output that implements the improved semiclassical tunneling formula.
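
    The quantity underlying such formulas is the barrier-penetration integral; a generic numerical sketch for a single symmetric Eckart-type barrier in reduced units (the paper's composite, asymmetric construction and its analytic action are not reproduced here):

        # Barrier-penetration integral theta(E) = (1/hbar) Int sqrt(2m(V - E)) dx
        # between the classical turning points, and the standard semiclassical
        # tunneling probability P(E) = 1 / (1 + exp(2*theta)). Reduced units.
        import numpy as np
        from scipy.integrate import quad

        V0, a = 1.0, 1.0                       # barrier height and width (arbitrary)
        V = lambda x: V0 / np.cosh(x / a) ** 2  # symmetric Eckart-type barrier

        def tunneling_probability(E, m=1.0, hbar=1.0):
            if E >= V0:
                return 1.0                     # crude: ignore above-barrier reflection
            xt = a * np.arccosh(np.sqrt(V0 / E))        # turning points at +/- xt
            integrand = lambda x: np.sqrt(max(2.0 * m * (V(x) - E), 0.0))
            theta, _ = quad(integrand, -xt, xt)
            return 1.0 / (1.0 + np.exp(2.0 * theta / hbar))

        for E in (0.1, 0.3, 0.5, 0.7, 0.9):
            print(f"E/V0 = {E/V0:.1f}: P_tunnel = {tunneling_probability(E):.3e}")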

  14. Comparison of ABR response amplitude, test time, and estimation of hearing threshold using frequency specific chirp and tone pip stimuli in newborns.

    PubMed

    Ferm, Inga; Lightfoot, Guy; Stevens, John

    2013-06-01

    To evaluate the auditory brainstem response (ABR) amplitudes evoked by tone pip and narrowband chirp (NB CE-Chirp) stimuli when testing post-screening newborns and to determine the difference in estimated hearing level correction values. Tests were performed with tone pips and NB CE-Chirps at 4 kHz or 1 kHz. The response amplitude, response quality (Fmp), and residual noise were compared for both stimuli. Thirty babies (42 ears) who passed our ABR discharge criterion at 4 kHz following referral from their newborn hearing screen. Overall, NB CE-Chirp responses were 64% larger than the tone pip responses, closer to those evoked by clicks. Fmp was significantly higher for NB CE-Chirps. It is anticipated that there could be significant reductions in test time for the same signal to noise ratio by using NB CE-Chirps when testing newborns. This effect may vary in practice and is likely to be most beneficial for babies with low amplitude ABR responses. We propose that the ABR nHL threshold to eHL correction for NB CE-Chirps should be approximately 5 dB less than the corrections for tone pips at 4 and 1 kHz.

  15. Diffusive Cosmic-Ray Acceleration at Shock Waves of Arbitrary Speed with Magnetostatic Turbulence. I. General Theory and Correct Nonrelativistic Speed Limit

    NASA Astrophysics Data System (ADS)

    Schlickeiser, R.; Oppotsch, J.

    2017-12-01

    The analytical theory of diffusive acceleration of cosmic rays at parallel stationary shock waves of arbitrary speed with magnetostatic turbulence is developed from first principles. The theory is based on the diffusion approximation to the gyrotropic cosmic-ray particle phase-space distribution functions in the respective rest frames of the up- and downstream medium. We derive the correct cosmic-ray jump conditions for the cosmic-ray current and density, and match the up- and downstream distribution functions at the position of the shock. It is essential to account for the different particle momentum coordinates in the up- and downstream media. Analytical expressions for the momentum spectra of shock-accelerated cosmic rays are calculated. These are valid for arbitrary shock speeds including relativistic shocks. The correctly taken limit for nonrelativistic shock speeds leads to a universal broken power-law momentum spectrum of accelerated particles with velocities well above the injection velocity threshold, where the universal power-law spectral index q ≃ 2 - γ_1 - 4 is independent of the flow compression ratio r. For nonrelativistic shock speeds, we calculate for the first time the injection velocity threshold, settling the long-standing injection problem for nonrelativistic shock acceleration.

  16. Structure of 10N

    NASA Astrophysics Data System (ADS)

    Hooker, Joshua; Rogachev, Grigory; Goldberg, Vladilen; Koshchiy, Evgeny; Roeder, Brian; Jayatissa, Heshani; Hunt, Curtis; Magana, Cordero; Upadhyayula, Sriteja; Uberseder, Ethan; Saastamoinen, Antti

    2017-09-01

    We report on the first observation of the ground and first excited states in 10N via 9C+p resonance scattering. The experiment was carried out at the Cyclotron Institute at Texas A&M University. Both states were determined to be l = 0. We can now reliably place the location of the 2s1/2 shell in 10N at 2.3 ± 0.2 MeV above the proton decay threshold. Using mirror symmetry and correcting for the Thomas-Ehrman shift, we argue that the ground state of 10Li is an l = 0 state that should be very close to the neutron threshold.

  17. Threshold concepts: implications for the management of natural resources

    USGS Publications Warehouse

    Guntenspergen, Glenn R.; Gross, John

    2014-01-01

    Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty revolving around threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers have a need for new tools and approaches that will help them assess the existence and detection of conditions that demand management actions. Beyond ecological thresholds, additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system). All of these concepts provide a framework for considering the use of threshold concepts in natural resource decision making.

  18. Optical damage performance of conductive widegap semiconductors: spatial, temporal, and lifetime modeling

    DOE PAGES

    Elhadj, Selim; Yoo, Jae-hyuck; Negres, Raluca A.; ...

    2016-12-19

    The optical damage performance of electrically conductive gallium nitride (GaN) and indium tin oxide (ITO) films is addressed using large area, high power laser beam exposures at 1064 nm sub-bandgap wavelength. Analysis of the laser damage process assumes that onset of damage (threshold) is determined by the absorption and heating of a nanoscale region of a characteristic size reaching a critical temperature. We use this model to rationalize semi-quantitatively the pulse width scaling of the damage threshold from picosecond to nanosecond timescales, along with the pulse width dependence of the damage threshold probability derived by fitting large beam damage density data. Multi-shot exposures were used to address lifetime performance degradation described by an empirical expression based on the single exposure damage model. A damage threshold degradation of at least 50% was observed for both materials. Overall, the GaN films tested had 5-10 × higher optical damage thresholds than the ITO films tested for comparable transmission and electrical conductivity. This route to optically robust, large aperture transparent electrodes and power optoelectronics may thus involve use of next generation widegap semiconductors such as GaN.

  19. Cross-modal cueing effects of visuospatial attention on conscious somatosensory perception.

    PubMed

    Doruk, Deniz; Chanes, Lorena; Malavera, Alejandra; Merabet, Lotfi B; Valero-Cabré, Antoni; Fregni, Felipe

    2018-04-01

    The impact of visuospatial attention on perception with supraliminal stimuli and stimuli at the threshold of conscious perception has been previously investigated. In this study, we assess the cross-modal effects of visuospatial attention on conscious perception for near-threshold somatosensory stimuli applied to the face. Fifteen healthy participants completed two sessions of a near-threshold cross-modality cue-target discrimination/conscious detection paradigm. Each trial began with an endogenous visuospatial cue that predicted the location of a weak near-threshold electrical pulse delivered to the right or left cheek with high probability (∼75%). Participants then completed two tasks: first, a forced-choice somatosensory discrimination task (felt once or twice?) and then, a somatosensory conscious detection task (did you feel the stimulus and, if yes, where (left/right)?). Somatosensory discrimination was evaluated with the response reaction times of correctly detected targets, whereas the somatosensory conscious detection was quantified using perceptual sensitivity (d') and response bias (beta). A 2 × 2 repeated measures ANOVA was used for statistical analysis. In the somatosensory discrimination task (1st task), participants were significantly faster in responding to correctly detected targets (p < 0.001). In the somatosensory conscious detection task (2nd task), a significant effect of visuospatial attention on response bias (p = 0.008) was observed, suggesting that participants had a less strict criterion for stimuli preceded by spatially valid than invalid visuospatial cues. We showed that spatial attention has the potential to modulate the discrimination and the conscious detection of near-threshold somatosensory stimuli as measured, respectively, by a reduction of reaction times and a shift in response bias toward less conservative responses when the cue predicted stimulus location. A shift in response bias indicates possible effects of spatial attention on internal decision processes. The lack of significant results in perceptual sensitivity (d') could be due to weaker effects of endogenous attention on perception.

  20. Differential Higgs production at N3LO beyond threshold

    NASA Astrophysics Data System (ADS)

    Dulat, Falko; Mistlberger, Bernhard; Pelloni, Andrea

    2018-01-01

    We present several key steps towards the computation of differential Higgs boson cross sections at N3LO in perturbative QCD. Specifically, we work in the framework of Higgs-differential cross sections that allows us to compute precise predictions for realistic LHC observables. We demonstrate how to perform an expansion of the analytic N3LO coefficient functions around the production threshold of the Higgs boson. Our framework allows us to compute to arbitrarily high order in the threshold expansion and we explicitly obtain the first two expansion coefficients in analytic form. Furthermore, we assess the phenomenological viability of threshold expansions for differential distributions. We find that while a few terms in the threshold expansion are sufficient to approximate the exact rapidity distribution well, transverse momentum distributions require a significantly higher number of terms in the expansion to be adequately described. We find that to improve state-of-the-art predictions for the rapidity distribution beyond NNLO even more sub-leading terms in the threshold expansion than presented in this article are required. In addition, we report on an interesting obstacle for the computation of N3LO corrections with LHAPDF parton distribution functions and our solution. We provide files containing the analytic expressions for the partonic cross sections as supplementary material attached to this paper.

  1. Differential Higgs production at N3LO beyond threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dulat, Falko; Mistlberger, Bernhard; Pelloni, Andrea

    We present several key steps towards the computation of differential Higgs boson cross sections at N3LO in perturbative QCD. Specifically, we work in the framework of Higgs-differential cross sections that allows us to compute precise predictions for realistic LHC observables. We demonstrate how to perform an expansion of the analytic N3LO coefficient functions around the production threshold of the Higgs boson. Our framework allows us to compute to arbitrarily high order in the threshold expansion and we explicitly obtain the first two expansion coefficients in analytic form. Furthermore, we assess the phenomenological viability of threshold expansions for differential distributions. We find that while a few terms in the threshold expansion are sufficient to approximate the exact rapidity distribution well, transverse momentum distributions require a significantly higher number of terms in the expansion to be adequately described. We find that to improve state-of-the-art predictions for the rapidity distribution beyond NNLO even more sub-leading terms in the threshold expansion than presented in this article are required. In addition, we report on an interesting obstacle for the computation of N3LO corrections with LHAPDF parton distribution functions and our solution. We provide files containing the analytic expressions for the partonic cross sections as supplementary material attached to this paper.

  2. Differential Higgs production at N 3LO beyond threshold

    DOE PAGES

    Dulat, Falko; Mistlberger, Bernhard; Pelloni, Andrea

    2018-01-29

    We present several key steps towards the computation of differential Higgs boson cross sections at N 3LO in perturbative QCD. Specifically, we work in the framework of Higgs-differential cross sections that allows to compute precise predictions for realistic LHC observables. We demonstrate how to perform an expansion of the analytic N 3LO coefficient functions around the production threshold of the Higgs boson. Our framework allows us to compute to arbitrarily high order in the threshold expansion and we explicitly obtain the first two expansion coefficients in analytic form. Furthermore, we assess the phenomenological viability of threshold expansions for differential distributions.more » We find that while a few terms in the threshold expansion are sufficient to approximate the exact rapidity distribution well, transverse momentum distributions require a signficantly higher number of terms in the expansion to be adequately described. We find that to improve state of the art predictions for the rapidity distribution beyond NNLO even more sub-leading terms in the threshold expansion than presented in this article are required. In addition, we report on an interesting obstacle for the computation of N 3LO corrections with LHAPDF parton distribution functions and our solution. We provide files containing the analytic expressions for the partonic cross sections as supplementary material attached to this paper.« less

  3. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    PubMed

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
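    The eigenvalue bias that a shrinkage estimator such as S-POET is designed to remove can be seen in a small simulation; the dimension, sample size and spike value below are arbitrary choices for illustration, not parameters from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, spike = 500, 100, 50.0                      # dimension, sample size, spiked eigenvalue
    u = np.zeros(p); u[0] = 1.0
    cov = np.eye(p) + (spike - 1.0) * np.outer(u, u)  # one spiked direction, unit noise
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    sample_top = np.linalg.eigvalsh(np.cov(X, rowvar=False)).max()
    print(f"population spike: {spike}, sample estimate: {sample_top:.1f}")
    # The sample estimate overshoots the population value by roughly p/n in this regime,
    # which is the kind of upward bias a shrinkage correction targets.
    ```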

  5. False-nearest-neighbors algorithm and noise-corrupted time series

    NASA Astrophysics Data System (ADS)

    Rhodes, Carl; Morari, Manfred

    1997-05-01

    The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the problem of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented.
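    For orientation, a simplified sketch of the false-nearest-neighbors criterion (in the spirit of the original algorithm, without the noise-aware modifications discussed above) is shown below; the tolerance rtol and the brute-force neighbor search are illustrative choices.

    ```python
    import numpy as np

    def fnn_fraction(x, dim, tau=1, rtol=15.0):
        """Fraction of false nearest neighbours when embedding in `dim` dimensions."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (dim + 1) * tau
        emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
        nxt = x[dim * tau: dim * tau + n]            # coordinate added by dimension dim+1
        false = 0
        for i in range(n):
            d = np.linalg.norm(emb - emb[i], axis=1)
            d[i] = np.inf
            j = int(np.argmin(d))                    # nearest neighbour in dim dimensions
            if d[j] > 0 and abs(nxt[i] - nxt[j]) / d[j] > rtol:  # separates when unfolded
                false += 1
        return false / n

    # Toy usage: a lightly noisy sine should need only a low embedding dimension.
    t = np.linspace(0, 50, 2000)
    x = np.sin(t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
    print([round(fnn_fraction(x, d, tau=10), 3) for d in (1, 2, 3)])
    ```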

  6. Digital chaos-masked optical encryption scheme enhanced by two-dimensional key space

    NASA Astrophysics Data System (ADS)

    Liu, Ling; Xiao, Shilin; Zhang, Lu; Bi, Meihua; Zhang, Yunhao; Fang, Jiafei; Hu, Weisheng

    2017-09-01

    A digital chaos-masked optical encryption scheme is proposed and demonstrated. The transmitted signal is completely masked by interference chaotic noise in both bandwidth and amplitude with an analog method via a dual-drive Mach-Zehnder modulator (DDMZM), making the encrypted signal analog, noise-like and unrecoverable by post-processing techniques. The decryption process requires precise matches of both the amplitude and phase between the cancellation and interference chaotic noises, which provide a large two-dimensional key space with the help of optical interference cancellation technology. For a 10-Gb/s 16-quadrature amplitude modulation (QAM) orthogonal frequency division multiplexing (OFDM) signal over the maximum transmission distance of 80 km without dispersion compensation or an inline amplifier, the tolerable mismatch ranges of amplitude and phase/delay at the forward error correction (FEC) threshold of 3.8×10^-3 are 0.44 dB and 0.08 ns, respectively.

  7. Method for measuring multiple scattering corrections between liquid scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.

    2016-04-11

    In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  8. Neurodevelopmental outcome at 2 years for preterm children born at 22 to 34 weeks’ gestation in France in 2011: EPIPAGE-2 cohort study

    PubMed Central

    Marchand-Martin, Laetitia; Arnaud, Catherine; Kaminski, Monique; Resche-Rigon, Matthieu; Lebeaux, Cécile; Bodeau-Livinec, Florence; Morgan, Andrei S; Goffinet, François; Marret, Stéphane; Ancel, Pierre-Yves

    2017-01-01

    Objectives To describe neurodevelopmental outcomes at 2 years corrected age for children born alive at 22-26, 27-31, and 32-34 weeks’ gestation in 2011, and to evaluate changes since 1997. Design Population based cohort studies, EPIPAGE and EPIPAGE-2. Setting France. Participants 5567 neonates born alive in 2011 at 22-34 completed weeks’ gestation, with 4199 survivors at 2 years corrected age included in follow-up. Comparison of outcomes reported for 3334 (1997) and 2418 (2011) neonates born alive in the nine regions participating in both studies. Main outcome measures Survival; cerebral palsy (2000 European consensus definition); scores below threshold on the neurodevelopmental Ages and Stages Questionnaire (ASQ; at least one of five domains below threshold) if completed between 22 and 26 months corrected age, in children without cerebral palsy, blindness, or deafness; and survival without severe or moderate neuromotor or sensory disabilities (cerebral palsy with Gross Motor Function Classification System levels 2-5, unilateral or bilateral blindness or deafness). Results are given as percentages of outcome measures with 95% confidence intervals. Results Among 5170 liveborn neonates with parental consent, survival at 2 years corrected age was 51.7% (95% confidence interval 48.6% to 54.7%) at 22-26 weeks’ gestation, 93.1% (92.1% to 94.0%) at 27-31 weeks’ gestation, and 98.6% (97.8% to 99.2%) at 32-34 weeks’ gestation. Only one infant born at 22-23 weeks survived. Data on cerebral palsy were available for 3599 infants (81.0% of the eligible population). The overall rate of cerebral palsy at 24-26, 27-31, and 32-34 weeks’ gestation was 6.9% (4.7% to 9.6%), 4.3% (3.5% to 5.2%), and 1.0% (0.5% to 1.9%), respectively. Responses to the ASQ were analysed for 2506 children (56.4% of the eligible population). The proportions of children with an ASQ result below threshold at 24-26, 27-31, and 32-34 weeks’ gestation were 50.2% (44.5% to 55.8%), 40.7% (38.3% to 43.2%), and 36.2% (32.4% to 40.1%), respectively. Survival without severe or moderate neuromotor or sensory disabilities among live births increased between 1997 and 2011, from 45.5% (39.2% to 51.8%) to 62.3% (57.1% to 67.5%) at 25-26 weeks’ gestation, but no change was observed at 22-24 weeks’ gestation. At 32-34 weeks’ gestation, there was a non-statistically significant increase in survival without severe or moderate neuromotor or sensory disabilities (P=0.61), but the proportion of survivors with cerebral palsy declined (P=0.01). Conclusions In this large cohort of preterm infants, rates of survival and survival without severe or moderate neuromotor or sensory disabilities have increased during the past two decades, but these children remain at high risk of developmental delay. PMID:28814566

  9. Error suppression via complementary gauge choices in Reed-Muller codes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  10. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Beverland, Michael E.; Brandão, Fernando; Preskill, John; Svore, Krysta M.

    2018-05-01

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the thresholds for 1D stringlike and 2D sheetlike logical operators to be p_3DCC^(1) ≈ 1.9% and p_3DCC^(2) ≈ 27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  11. Seven benzimidazole pesticides combined at sub-threshold levels induce micronuclei in vitro

    PubMed Central

    Ermler, Sibylle; Scholze, Martin; Kortenkamp, Andreas

    2013-01-01

    Benzimidazoles act by disrupting microtubule polymerisation and are capable of inducing the formation of micronuclei. Considering the similarities in their mechanisms of action (inhibition of microtubule assembly by binding to the colchicine-binding site on tubulin monomers), combination effects according to the principles of concentration addition might occur. If so, it is to be expected that several benzimidazoles contribute to micronucleus formation even when each single one is present at or below threshold levels. This would have profound implications for risk assessment, but the idea has never been tested rigorously. To fill this gap, we analysed micronucleus frequencies for seven benzimidazoles, including the fungicide benomyl, its metabolite carbendazim, the anthelmintics albendazole, albendazole oxide, flubendazole, mebendazole and oxibendazole. Thiabendazole was also tested but was inactive. We used the cytochalasin-blocked micronucleus assay with CHO-K1 cells according to OECD guidelines, and employed an automated micronucleus scoring system based on image analysis to establish quantitative concentration–response relationships for the seven active benzimidazoles. Based on this information, we predicted additive combination effects for a mixture of the seven benzimidazoles by using the concepts of concentration addition and independent action. The observed effects of the mixture agreed very well with those predicted by concentration addition. Independent action underestimated the observed combined effects by a large margin. With a mixture that combined all benzimidazoles at their estimated threshold concentrations for micronucleus induction, micronucleus frequencies of ~15.5% were observed, correctly anticipated by concentration addition. On the basis of independent action, this mixture was expected to produce no effects. Our data provide convincing evidence that concentration addition is applicable to combinations of benzimidazoles that form micronuclei by disrupting microtubule polymerisation. They present a rationale for grouping these chemicals together for the purpose of cumulative risk assessment. PMID:23547264
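    The concentration-addition prediction used above can be sketched numerically: for components with individual concentration-response curves, the predicted mixture effect x solves sum_i c_i / EC_x,i = 1. The Hill-curve parameters and mixture concentrations below are invented for illustration and are not the fitted benzimidazole values.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def ec(effect, ec50, hill):
        """Inverse Hill curve: concentration producing a given fractional effect."""
        return ec50 * (effect / (1.0 - effect)) ** (1.0 / hill)

    components = [  # (concentration in mixture, EC50, Hill slope) - hypothetical values
        (0.2, 1.0, 2.0),
        (0.1, 0.5, 1.5),
        (0.3, 2.0, 3.0),
    ]

    def ca_residual(effect):
        # Concentration addition: sum of toxic units equals 1 at the mixture effect level.
        return sum(c / ec(effect, ec50, h) for c, ec50, h in components) - 1.0

    mixture_effect = brentq(ca_residual, 1e-6, 1 - 1e-6)
    print(f"predicted mixture effect under concentration addition: {mixture_effect:.3f}")
    ```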

  12. SpArcFiRe: morphological selection effects due to reduced visibility of tightly winding arms in distant spiral galaxies

    NASA Astrophysics Data System (ADS)

    Peng, Tianrui Rae; Edward English, John; Silva, Pedro; Davis, Darren R.; Hayes, Wayne B.

    2018-03-01

    The Galaxy Zoo project has provided a plethora of valuable morphological data on a large number of galaxies from various surveys, and their team have identified and/or corrected for many biases. Here we study a new bias related to spiral arm pitch angles, which first requires selecting a sample of spiral galaxies that show observable structure. One obvious way is to select galaxies using a threshold in spirality, which we define as the fraction of Galaxy Zoo humans who have reported seeing spiral structure. Using such a threshold, we use the automated tool SpArcFiRe (SPiral ARC FInder and REporter) to measure spiral arm pitch angles. We observe that the mean pitch angle of spiral arms increases linearly with redshift for 0.05 < z < 0.085. We hypothesize that this is a selection effect due to tightly wound arms becoming less visible as image quality degrades, leading to fewer such galaxies being above the spirality threshold as redshift increases. We corroborate this hypothesis by first artificially degrading images of nearby galaxies, and then using a machine learning algorithm trained on Galaxy Zoo data to provide a spirality for each artificially degraded image. We find that SpArcFiRe's ability to accurately measure pitch angles decreases as the image degrades, but that spirality decreases more quickly in galaxies with tightly wound arms, leading to the selection effect. This new bias means one must be careful in selecting a sample on which to measure spiral structure. Finally, we also include a sensitivity analysis of SpArcFiRe's internal parameters.

  13. Efficient segmentation of 3D fluoroscopic datasets from mobile C-arm

    NASA Astrophysics Data System (ADS)

    Styner, Martin A.; Talib, Haydar; Singh, Digvijay; Nolte, Lutz-Peter

    2004-05-01

    The emerging mobile fluoroscopic 3D technology linked with a navigation system combines the advantages of CT-based and C-arm-based navigation. The intra-operative, automatic segmentation of 3D fluoroscopy datasets enables the combined visualization of surgical instruments and anatomical structures for enhanced planning, surgical eye-navigation and landmark digitization. We performed a thorough evaluation of several segmentation algorithms using a large set of data from different anatomical regions and man-made phantom objects. The analyzed segmentation methods include automatic thresholding, morphological operations, an adapted region growing method and an implicit 3D geodesic snake method. In regard to computational efficiency, all methods performed within acceptable limits on a standard desktop PC (30 sec-5 min). In general, the best results were obtained with datasets from long bones, followed by extremities. The segmentations of spine, pelvis and shoulder datasets were generally of poorer quality. As expected, the threshold-based methods produced the worst results. The combined thresholding and morphological operations methods were considered appropriate for a smaller set of clean images. The region growing method performed generally much better in regard to computational efficiency and segmentation correctness, especially for datasets of joints, and lumbar and cervical spine regions. The less efficient implicit snake method was able to additionally remove wrongly segmented skin tissue regions. This study presents a step towards efficient intra-operative segmentation of 3D fluoroscopy datasets, but there is room for improvement. Next, we plan to study model-based approaches for datasets from the knee and hip joint region, which would be thenceforth applied to all anatomical regions in our continuing development of an ideal segmentation procedure for 3D fluoroscopic images.

  14. Comparison of optotypes of Amsterdam Picture Chart with those of Tumbling-E, LEA symbols, ETDRS, and Landolt-C in non-amblyopic and amblyopic patients.

    PubMed

    Engin, O; Despriet, D D G; van der Meulen-Schot, H M; Romers, A; Slot, X; Sang, M Tjon Fo; Fronius, M; Kelderman, H; Simonsz, H J

    2014-12-01

    To compare optotypes of the Amsterdam Picture Chart (APK) with those of Landolt-C (LC), Tumbling-E (TE), ETDRS and LEA symbols (LEA), to assess their reliability in measuring visual acuity (VA). We recruited healthy controls with equal VA and amblyopes with ≥2 LogMAR lines interocular difference. New logarithmic charts were developed with LC, TE, ETDRS, LEA, and APK with identical size and spacing (four optotypes) between optotypes. Charts were randomly presented at 5 m under DIN EN ISO 8596 and 8597 conditions. VA was measured with LC (LC-VA), TE, ETDRS, LEA, and APK, using six out of ten optotypes answered correctly as threshold. In 100 controls aged 17-31, LC-VA was -0.207 ± SD 0.089 LogMAR. Visual acuity measured with TE differed from LC-VA by 0.021 (positive value meaning less recognizable), with ETDRS 0.012, with Lea 0.054, and with APK 0.117. In 46 amblyopic eyes with LC-VA <0.5 LogMAR, the difference was for TE 0.017, for ETDRS 0.017, for LEA 0.089, and for APK 0.213. In 13 amblyopic eyes with LC-VA ≥0.5 LogMAR, the difference was for TE 0.122, ETDRS 0.047, LEA 0.057, and APK 0.019. APK optotypes had a lower percentage of passed subjects at each LogMAR line compared to Landolt-C. The 11 APK optotypes had different thresholds. Small APK optotypes were recognized worse than all other optotypes, probably because of their thinner lines. Large APK optotypes were recognized relatively well, possibly reflecting recognition acuity. Differences between the thresholds of the 11 APK optotypes reduced its sensitivity further.

  15. Intelligent model-based OPC

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as how the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed by a genetic algorithm, an artificial-intelligence optimization method with a high probability of finding the global optimum. From preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
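    A minimal sketch of the idea of mapping segment characteristics to an initial edge shift with a radial basis function model is given below; the two-dimensional segment descriptors, the training targets and the kernel settings are all hypothetical stand-ins, not the features or network used in the paper.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(1)
    # Toy "segment descriptors" (e.g. local pattern density, segment length) and the
    # converged edge shifts they map to; both are synthetic for illustration.
    features = rng.uniform(0, 1, size=(200, 2))
    edge_shift = 5.0 * features[:, 0] - 2.0 * features[:, 1] + rng.normal(0, 0.1, 200)

    # Fit an RBF model and use it to propose initial edge-shift guesses for new segments,
    # which would then be refined by the usual OPC iterations.
    model = RBFInterpolator(features, edge_shift, kernel="thin_plate_spline", smoothing=1e-3)
    new_segments = rng.uniform(0, 1, size=(5, 2))
    print(model(new_segments))
    ```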

  16. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration to the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from the existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.

  17. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in the responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
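    A minimal sketch of a spike threshold that adapts toward the recent membrane potential on a short timescale is shown below; the time constant, baseline and coupling values are illustrative and are not the parameters fitted to the barn owl recordings.

    ```python
    import numpy as np

    def adaptive_threshold(v, dt=0.1, tau=5.0, theta0=-50.0, alpha=0.5):
        """Threshold relaxes toward theta0 + alpha*(v - theta0) with time constant tau (ms)."""
        theta = np.empty_like(v)
        theta[0] = theta0
        for i in range(1, len(v)):
            target = theta0 + alpha * (v[i - 1] - theta0)      # threshold tracks v
            theta[i] = theta[i - 1] + (dt / tau) * (target - theta[i - 1])
        return theta

    # Toy membrane potential: slow oscillation plus noise (values in mV, time in ms).
    t = np.arange(0.0, 200.0, 0.1)
    v = -60 + 15 * np.sin(2 * np.pi * t / 50) \
        + np.random.default_rng(2).normal(0, 1, t.size)
    theta = adaptive_threshold(v)
    crossings = np.where((v[1:] > theta[1:]) & (v[:-1] <= theta[:-1]))[0]
    print(f"{crossings.size} upward threshold crossings")
    ```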

  18. Large Electroweak Corrections to Vector-Boson Scattering at the Large Hadron Collider.

    PubMed

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-06-30

    For the first time, full next-to-leading-order electroweak corrections to off-shell vector-boson scattering are presented. The computation features the complete matrix elements, including all nonresonant and off-shell contributions, to the electroweak process pp → μ⁺ν_μ e⁺ν_e jj and is fully differential. We find surprisingly large corrections, reaching -16% for the fiducial cross section, as an intrinsic feature of the vector-boson-scattering processes. We elucidate the origin of these large electroweak corrections upon using the double-pole approximation and the effective vector-boson approximation along with leading-logarithmic corrections.

  19. [Atomic absorption in mercury determination by "Julia-2" analyzer and urine mercury level in children of Moscow suburbs].

    PubMed

    Pavlovskaia, N A; Vagina, E N; Stepanova, E V

    2000-01-01

    The authors report on an atomic absorption method for determining mercury in urine. The method is sensitive, with a lower determination threshold of 10 nmol/l and a correctness of 95.5%, and was tested on children living in two districts of the Moscow suburbs.

  20. The (in)famous GWAS P-value threshold revisited and updated for low-frequency variants.

    PubMed

    Fadista, João; Manning, Alisa K; Florez, Jose C; Groop, Leif

    2016-08-01

    Genome-wide association studies (GWAS) have long relied on proposed statistical significance thresholds to be able to differentiate true positives from false positives. Although the genome-wide significance P-value threshold of 5 × 10^-8 has become a standard for common-variant GWAS, it has not been updated to cope with the lower allele frequency spectrum used in many recent array-based GWAS studies and sequencing studies. Using a whole-genome- and -exome-sequencing data set of 2875 individuals of European ancestry from the Genetics of Type 2 Diabetes (GoT2D) project and a whole-exome-sequencing data set of 13 000 individuals from five ancestries from the GoT2D and T2D-GENES (Type 2 Diabetes Genetic Exploration by Next-generation sequencing in multi-Ethnic Samples) projects, we describe guidelines for genome- and exome-wide association P-value thresholds needed to correct for multiple testing, explaining the impact of linkage disequilibrium thresholds for distinguishing independent variants, minor allele frequency and ancestry characteristics. We emphasize the advantage of studying recent genetic isolate populations when performing rare and low-frequency genetic association analyses, as the multiple testing burden is diminished due to higher genetic homogeneity.
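    The logic behind such thresholds can be illustrated with a back-of-the-envelope Bonferroni calculation over an effective number of independent tests; the counts below are round illustrative figures, not the values derived in the paper.

    ```python
    # Genome-wide significance as a Bonferroni correction over effective independent tests.
    alpha = 0.05
    effective_tests_common = 1_000_000      # roughly the reasoning behind 5e-8 for common variants
    effective_tests_low_freq = 10_000_000   # hypothetical larger count once low-frequency variants are included
    print(f"common-variant threshold:  {alpha / effective_tests_common:.1e}")
    print(f"low-frequency threshold:   {alpha / effective_tests_low_freq:.1e}")
    ```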

  1. Bias correction method for climate change impact assessment at a basin scale

    NASA Astrophysics Data System (ADS)

    Nyunt, C.; Jaranilla-sanchez, P. A.; Yamamoto, A.; Nemoto, T.; Kitsuregawa, M.; Koike, T.

    2012-12-01

    Climate change impact studies are mainly based on general circulation models (GCMs), and such studies play an important role in defining suitable adaptation strategies for a resilient environment in basin-scale management. For this purpose, this study summarizes how to select appropriate GCMs so as to reduce the uncertainty in the analysis. The approach was applied to the Pampanga, Angat and Kaliwa rivers in Luzon Island, the main island of the Philippines; these three river basins play important roles in irrigation water supply and as municipal water sources for Metro Manila. Based on GCM scores for the seasonal evolution of the Asian summer monsoon and for the spatial correlation and root mean squared error of atmospheric variables over the region, six GCMs were finally chosen. Next, we develop a complete, efficient and comprehensive statistical bias correction scheme covering extreme events, normal rainfall and the frequency of dry periods. Owing to the coarse resolution and parameterization schemes of GCMs, known biases include underestimation of extreme rainfall, too many rain days with low intensity, and poor representation of local seasonality. Extreme rainfall has unusual characteristics and should be treated separately; estimated maximum extreme rainfall is crucial for the planning and design of infrastructure in a river basin. Developing countries have limited technical, financial and management resources for implementing adaptation measures, and they need detailed information on drought and flood for the near future. Traditionally, the analysis of extremes has been based on the annual maximum series (AMS) fitted to a Gumbel or lognormal distribution; the drawback is the loss of the second, third, etc. largest rainfalls. Another approach is the partial duration series (PDS), constructed from the values above a selected threshold and permitting more than one event per year. The generalized Pareto distribution (GPD) has been used to model the PDS, i.e. the series of excesses over a threshold. In this study, the lowest value of the observed AMS is selected as the threshold, and the same exceedance frequency is used to define extremes in the corresponding GCM gridded series. After fitting to the GP distribution, bias-corrected GCM extremes are obtained by applying the inverse distribution function of the observed extremes. The results show that this removes the bias effectively. For the projected climate, the same transfer function between historical observations and the GCM was applied. Moreover, a frequency analysis of maximum extreme intensity was carried out for validation and then applied to the near future using the same function as for the past. To fix the error in the number of no-rain days in the GCM, rank-order statistics are used to impose in the GCM the same frequency of wet days as observed at the station; below this rank, GCM output is set to zero, and the same threshold is identified for the future projection. Normal rainfall is classified as falling between the extreme threshold and no-rain days. We assume monthly normal rainfall follows a gamma distribution; the CDF of GCM normal rainfall is then mapped onto the station CDF in each month to obtain bias-corrected rainfall. In summary, the biases of the GCMs have been addressed efficiently and validated at the point scale against the seasonal climatology and at all stations to evaluate the performance of the downscaled rainfall. The results show that the bias-corrected and downscaled scheme is adequate for climate impact studies.
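    A compressed sketch of the two-part correction described above, on synthetic data, is given below: excesses over a threshold are quantile-mapped through fitted generalized Pareto distributions, while the remaining "normal" rainfall is quantile-mapped through fitted gamma distributions. The distributions, thresholds and sample values are synthetic stand-ins, not the Philippine station or GCM data.

    ```python
    import numpy as np
    from scipy.stats import genpareto, gamma

    rng = np.random.default_rng(3)
    obs = rng.gamma(shape=2.0, scale=5.0, size=5000)   # stand-in for station rainfall
    gcm = rng.gamma(shape=2.0, scale=3.5, size=5000)   # stand-in for biased GCM rainfall

    # Thresholds separating "extreme" from "normal" rainfall in each series.
    u_obs, u_gcm = np.quantile(obs, 0.95), np.quantile(gcm, 0.95)
    gp_obs = genpareto.fit(obs[obs > u_obs] - u_obs, floc=0)
    gp_gcm = genpareto.fit(gcm[gcm > u_gcm] - u_gcm, floc=0)
    ga_obs = gamma.fit(obs[obs <= u_obs], floc=0)
    ga_gcm = gamma.fit(gcm[gcm <= u_gcm], floc=0)

    def correct(x):
        if x > u_gcm:                                   # extreme part: GPD -> GPD mapping
            p = genpareto.cdf(x - u_gcm, *gp_gcm)
            return u_obs + genpareto.ppf(p, *gp_obs)
        p = gamma.cdf(x, *ga_gcm)                       # normal part: gamma -> gamma mapping
        return gamma.ppf(p, *ga_obs)

    print(correct(40.0), correct(8.0))
    ```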

  2. Crowd-Sourced Global Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Minson, S. E.; Brooks, B. A.; Glennie, C. L.; Murray, J. R.; Langbein, J. O.; Owen, S. E.; Iannucci, B. A.; Hauser, D. L.

    2014-12-01

    Although earthquake early warning (EEW) has shown great promise for reducing loss of life and property, it has only been implemented in a few regions due, in part, to the prohibitive cost of building the required dense seismic and geodetic networks. However, many cars and consumer smartphones, tablets, laptops, and similar devices contain low-cost versions of the same sensors used for earthquake monitoring. If a workable EEW system could be implemented based on either crowd-sourced observations from consumer devices or very inexpensive networks of instruments built from consumer-quality sensors, EEW coverage could potentially be expanded worldwide. Controlled tests of several accelerometers and global navigation satellite system (GNSS) receivers typically found in consumer devices show that, while they are significantly noisier than scientific-grade instruments, they are still accurate enough to capture displacements from moderate and large magnitude earthquakes. The accuracy of these sensors varies greatly depending on the type of data collected. Raw coarse acquisition (C/A) code GPS data are relatively noisy. These observations have a surface displacement detection threshold approaching ~1 m and would thus only be useful in large Mw 8+ earthquakes. However, incorporating either satellite-based differential corrections or using a Kalman filter to combine the raw GNSS data with low-cost acceleration data (such as from a smartphone) decreases the noise dramatically. These approaches allow detection thresholds as low as 5 cm, potentially enabling accurate warnings for earthquakes as small as Mw 6.5. Simulated performance tests show that, with data contributed from only a very small fraction of the population, a crowd-sourced EEW system would be capable of warning San Francisco and San Jose of a Mw 7 rupture on California's Hayward fault and could have accurately issued both earthquake and tsunami warnings for the 2011 Mw 9 Tohoku-oki, Japan earthquake.
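    The GNSS-accelerometer fusion mentioned above can be sketched as a one-dimensional Kalman filter with position and velocity as the state and acceleration as a control input; the time step and noise levels below are illustrative, not values measured for any particular consumer device.

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])            # state transition for [position, velocity]
    B = np.array([[0.5 * dt**2], [dt]])        # how an acceleration sample enters the state
    H = np.array([[1.0, 0.0]])                 # GNSS observes position only
    Q = 1e-4 * np.eye(2)                       # process noise (illustrative)
    R = np.array([[1.0]])                      # ~1 m GNSS noise, roughly raw C/A-code level

    def kalman_step(x, P, accel, gps_pos):
        x = F @ x + B * accel                  # predict with the accelerometer
        P = F @ P @ F.T + Q
        y = np.array([[gps_pos]]) - H @ x      # update with the GNSS position
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(2) - K @ H) @ P

    # One illustrative step: a 0.2 m/s^2 acceleration sample and a 1.3 m GNSS reading.
    x, P = np.zeros((2, 1)), np.eye(2)
    x, P = kalman_step(x, P, accel=0.2, gps_pos=1.3)
    print(x.ravel())
    ```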

  3. Large-scale exploratory genetic analysis of cognitive impairment in Parkinson's disease.

    PubMed

    Mata, Ignacio F; Johnson, Catherine O; Leverenz, James B; Weintraub, Daniel; Trojanowski, John Q; Van Deerlin, Vivianna M; Ritz, Beate; Rausch, Rebecca; Factor, Stewart A; Wood-Siverio, Cathy; Quinn, Joseph F; Chung, Kathryn A; Peterson-Hiller, Amie L; Espay, Alberto J; Revilla, Fredy J; Devoto, Johnna; Yearout, Dora; Hu, Shu-Ching; Cholerton, Brenna A; Montine, Thomas J; Edwards, Karen L; Zabetian, Cyrus P

    2017-08-01

    Cognitive impairment is a common and disabling problem in Parkinson's disease (PD). Identification of genetic variants that influence the presence or severity of cognitive deficits in PD might provide a clearer understanding of the pathophysiology underlying this important nonmotor feature. We genotyped 1105 PD patients from the PD Cognitive Genetics Consortium for 249,336 variants using the NeuroX array. Participants underwent assessments of learning and memory (Hopkins Verbal Learning Test-Revised [HVLT-R]), working memory/executive function (Letter-Number Sequencing and Trail Making Test [TMT] A and B), language processing (semantic and phonemic verbal fluency), visuospatial abilities (Benton Judgment of Line Orientation [JoLO]), and global cognitive function (Montreal Cognitive Assessment). For common variants, we used linear regression to test for association between genotype and cognitive performance with adjustment for important covariates. Rare variants were analyzed using the optimal unified sequence kernel association test. The significance threshold was defined as a false discovery rate-corrected p-value (P_FDR) of 0.05. Eighteen common variants in 13 genomic regions exceeded the significance threshold for one of the cognitive tests. These included GBA rs2230288 (E326K; P_FDR = 2.7 × 10^-4) for JoLO, PARP4 rs9318600 (P_FDR = 0.006), and rs9581094 (P_FDR = 0.006) for HVLT-R total recall, and MTCL1 rs34877994 (P_FDR = 0.01) for TMT B-A. Analysis of rare variants did not yield any significant gene regions. We have conducted the first large-scale PD cognitive genetics analysis and nominated several new putative susceptibility genes for cognitive impairment in PD. These results will require replication in independent PD cohorts. Published by Elsevier Inc.

  4. Modelling the impact of altered axonal morphometry on the response of regenerative nervous tissue to electrical stimulation through macro-sieve electrodes.

    PubMed

    Zellmer, Erik R; MacEwan, Matthew R; Moran, Daniel W

    2018-04-01

    Regenerated peripheral nervous tissue possesses different morphometric properties compared to undisrupted nerve. It is poorly understood how these morphometric differences alter the response of the regenerated nerve to electrical stimulation. In this work, we use computational modeling to explore the electrophysiological response of regenerated and undisrupted nerve axons to electrical stimulation delivered by macro-sieve electrodes (MSEs). A 3D finite element model of a peripheral nerve segment populated with mammalian myelinated axons and implanted with a macro-sieve electrode has been developed. Fiber diameters and morphometric characteristics representative of undisrupted or regenerated peripheral nervous tissue were assigned to core conductor models to simulate the two tissue types. Simulations were carried out to quantify differences in thresholds and chronaxie between undisrupted and regenerated fiber populations. The model was also used to determine the influence of axonal caliber on recruitment thresholds for the two tissue types. Model accuracy was assessed through comparisons with in vivo recruitment data from chronically implanted MSEs. Recruitment thresholds of individual regenerated fibers with diameters >2 µm were found to be lower compared to same-caliber undisrupted fibers at electrode-to-fiber distances of less than about 90-140 µm but roughly equal or higher for larger distances. Caliber redistributions observed in regenerated nerve resulted in an overall increase in average recruitment thresholds and chronaxie during whole nerve stimulation. Modeling results also suggest that large diameter undisrupted fibers located close to a longitudinally restricted current source such as the MSE have higher average recruitment thresholds compared to small diameter fibers. In contrast, large diameter regenerated nerve fibers located in close proximity to MSE sites have, on average, lower recruitment thresholds compared to small fibers. Utilizing regenerated fiber morphometry and caliber distributions resulted in accurate predictions of in vivo recruitment data. Our work uses computational modeling to show how morphometric differences between regenerated and undisrupted tissue result in recruitment threshold discrepancies, quantifies these differences, and illustrates how large undisrupted nerve fibers close to longitudinally restricted current sources have higher recruitment thresholds compared to adjacently positioned smaller fibers, while the opposite is true for large regenerated fibers.

  6. Characterisation of false-positive observations in botanical surveys

    PubMed Central

    2017-01-01

    Errors in botanical surveying are a common problem. The presence of a species is easily overlooked, leading to false absences, while misidentifications and other mistakes lead to false-positive observations. Although it is common knowledge that these errors occur, there are few data that can be used to quantify and describe them. Here we characterise false-positive errors for a controlled set of surveys conducted as part of a field identification test of botanical skill. Surveys were conducted at sites with a verified list of vascular plant species. The candidates were asked to list all the species they could identify in a defined botanically rich area. They were told beforehand that their final score would be the sum of the correct species they listed, but that false-positive errors would count against their overall grade. The number of errors varied considerably between people; some people created a high proportion of false-positive errors, and these people were scattered across all skill levels. Therefore, a person’s ability to correctly identify a large number of species is not a safeguard against the generation of false-positive errors. There was no phylogenetic pattern to falsely observed species; however, rare species were more likely to be falsely recorded, as were species from species-rich genera. Raising the threshold for the acceptance of an observation reduced false-positive observations dramatically, but at the expense of more false-negative errors. False-positive errors are more frequent in field surveying of plants than many people may appreciate. Greater stringency is required before accepting species as present at a site, particularly for rare species. Combining multiple surveys resolves the problem, but requires a considerable increase in effort to achieve the same sensitivity as a single survey. Therefore, other methods should be used to raise the threshold for the acceptance of a species. For example, digital data input systems that can verify, feedback and inform the user are likely to reduce false-positive errors significantly. PMID:28533972

  7. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    NASA Astrophysics Data System (ADS)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.

  8. Distributed Monitoring of the R^2 Statistic for Linear Regression

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.

    2011-01-01

    The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead, for monitoring the quality of a regression model in terms of its coefficient of determination (R^2 statistic). When the nodes collectively determine that R^2 has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and also provide theoretical guarantees on correctness.
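    The monitored quantity and the threshold-triggered recomputation can be sketched as follows; the distributed bookkeeping and convergecast of DReMo are omitted, and the threshold value is an arbitrary illustration.

    ```python
    import numpy as np

    def r_squared(X, y, coef):
        """Coefficient of determination of a linear model y ~ X @ coef."""
        residual = y - X @ coef
        return 1.0 - (residual @ residual) / ((y - y.mean()) @ (y - y.mean()))

    R2_THRESHOLD = 0.8   # hypothetical quality threshold

    def maybe_recompute(X, y, coef):
        # In the distributed setting this check is performed collectively; here it is local.
        if r_squared(X, y, coef) < R2_THRESHOLD:
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # stands in for the convergecast refit
        return coef
    ```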

  9. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area, both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of the area-time-integral (ATI) methods where fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of fractional area above a threshold. The second method is an extension of this idea where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by the light and heavy rain rates. This is useful in estimating the fraction of rainfall contributed by the rain rates that go undetected by the radar. The results at high rain rates provide a cross-check on the usual attenuation correction methods that are applied at the highest resolution of the instrument.
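    The ATI idea of recovering the mean rain rate from the fractional area above a threshold, given an assumed one-parameter conditional distribution, can be sketched as follows; the lognormal form, the coefficient of variation and the threshold are illustrative assumptions, not values from the TRMM analysis.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    cv = 1.5            # assumed std/mean of the conditional rain-rate distribution
    threshold = 5.0     # mm/h detection threshold
    frac_above = 0.2    # observed fractional area above the threshold

    sigma2 = np.log(1.0 + cv**2)                # lognormal shape from the coefficient of variation

    def frac(mean):
        """P(rain rate > threshold) for a lognormal with the given mean and fixed CV."""
        mu = np.log(mean) - 0.5 * sigma2
        return norm.sf((np.log(threshold) - mu) / np.sqrt(sigma2))

    mean_rate = brentq(lambda m: frac(m) - frac_above, 1e-3, 1e3)
    print(f"estimated conditional mean rain rate: {mean_rate:.2f} mm/h")
    ```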

  10. Searching for massive clusters in weak lensing surveys

    NASA Astrophysics Data System (ADS)

    Hamana, Takashi; Takada, Masahiro; Yoshida, Naoki

    2004-05-01

    We explore the ability of weak lensing surveys to locate massive clusters. We use both analytic models of dark matter haloes and mock weak lensing surveys generated from a large cosmological N-body simulation. The analytic models describe the average properties of weak lensing haloes and predict the number counts, enabling us to compute an effective survey selection function. We argue that the detectability of massive haloes depends not only on the halo mass but also strongly on the redshift where the halo is located. We test the model prediction for the peak number counts in weak lensing mass maps against mock numerical data, and find that the noise resulting from intrinsic galaxy ellipticities causes a systematic effect which increases the peak counts. We develop a correction scheme for the systematic effect in an empirical manner, and show that, after correction, the model prediction agrees well with the mock data. The mock data is also used to examine the completeness and efficiency of the weak lensing halo search by fully taking into account the noise and the projection effect by large-scale structures. We show that the detection threshold of S/N = 4-5 gives an optimal balance between completeness and efficiency. Our results suggest that, for a weak lensing survey with a galaxy number density of n_g = 30 arcmin^-2 with a mean redshift of z = 1, the mean number of haloes which are expected to cause lensing signals above S/N = 4 is N_halo(S/N > 4) = 37 per 10 deg^2, whereas 23 of the haloes are actually detected with S/N > 4, giving the effective completeness as good as 63 per cent. Alternatively, the mean number of peaks in the same area is N_peak = 62 for a detection threshold of S/N = 4. Among the 62 peaks, 23 are caused by haloes with the expected peak height S/N > 4, 13 result from haloes with 3 < S/N < 4 and the remaining 26 peaks are either the false peaks caused by the noise or haloes with a lower expected peak height. Therefore the contamination rate is 44 per cent (this could be an overestimation). Weak lensing surveys thus provide a reasonably efficient way to search for massive clusters.

  11. Validity of food consumption indicators in the Lao context: moving toward cross-cultural standardization.

    PubMed

    Baumann, Soo Mee; Webb, Patrick; Zeller, Manfred

    2013-03-01

    Cross-cultural validity of food security indicators is commonly presumed without questioning the suitability of generic indicators in different geographic settings. However, ethnic differences in the perception of and reporting on, food insecurity, as well as variations in consumption patterns, may limit the comparability of results. Although research on correction factors for standardization of food security indicators is in process, so far no universal indicator has been identified. The current paper considers the ability of the Food Consumption Score (FCS) developed by the World Food Programme in southern Africa in 1996 to meet the requirement of local cultural validity in a Laotian context. The analysis is based on research that seeks to identify options for correcting possible biases linked to cultural disparities. Based on the results of a household survey conducted in different agroecological zones of Laos in 2009, the FCS was validated against a benchmark of calorie consumption. Changing the thresholds and excluding small amounts of food items consumed were tested as options to correct for biases caused by cultural disparities. The FCS in its original form underestimates the food insecurity level in the surveyed villages. However, the closeness of fit of the FCS to the benchmark classification improves when small amounts of food items are excluded from the assessment. Further research in different cultural settings is required to generate more insight into the extent to which universal thresholds can be applied to dietary diversity indicators with or without locally determined correction factors such as the exclusion of small amounts of food items.

  12. Carbon dioxide laser polishing of fused silica surfaces for increased laser-damage resistance at 1064 nm.

    PubMed

    Temple, P A; Lowdermilk, W H; Milam, D

    1982-09-15

    Mechanically polished fused silica surfaces were heated with continuous-wave CO2 laser radiation. Laser-damage thresholds of the surfaces were measured with 1064-nm 9-nsec pulses focused to small spots and with large-spot, 1064-nm, 1-nsec irradiation. A sharp transition from laser-damage-prone to highly laser-damage-resistant took place over a small range in CO2 laser power. The transition to high damage resistance occurred at a silica surface temperature where material softening began to take place as evidenced by the onset of residual strain in the CO2 laser-processed part. The small-spot damage measurements show that some CO2 laser-treated surfaces have a local damage threshold as high as the bulk damage threshold of SiO2. On some CO2 laser-treated surfaces, large-spot damage thresholds were increased by a factor of 3-4 over thresholds of the original mechanically polished surface. These treated parts show no obvious change in surface appearance as seen in bright-field, Nomarski, or total internal reflection microscopy. They also show little change in transmissive figure. Further, antireflection films deposited on CO2 laser-treated surfaces have thresholds greater than the thresholds of antireflection films on mechanically polished surfaces.

  13. Threshold-based segmentation of fluorescent and chromogenic images of microglia, astrocytes and oligodendrocytes in FIJI.

    PubMed

    Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una

    2018-02-01

    Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail of the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
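
    Comparing automatic thresholding algorithms against a reference segmentation can be sketched with standard tools; the snippet below uses scikit-image as a stand-in for the FIJI/BioVoxxel workflow used in the paper, with a placeholder image and a Jaccard index as a simple stand-in for the toolbox's quality score.

        # Hedged sketch: scoring automatic global thresholds against a reference mask.
        # scikit-image stands in for the FIJI/BioVoxxel workflow described above.
        import numpy as np
        from skimage import data, filters

        image = data.coins()                                  # placeholder image
        reference = image > filters.threshold_otsu(image)     # stand-in "ground truth"

        algorithms = {
            "otsu": filters.threshold_otsu,
            "li": filters.threshold_li,
            "yen": filters.threshold_yen,
            "triangle": filters.threshold_triangle,
            "isodata": filters.threshold_isodata,
        }

        def jaccard(a, b):
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 1.0

        for name, fn in algorithms.items():
            t = fn(image)
            mask = image > t
            print(f"{name:>8s}: threshold={t:.1f}  jaccard={jaccard(mask, reference):.3f}")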

  14. Quantitative Characterizations of Ultrashort Echo (UTE) Images for Supporting Air-Bone Separation in the Head

    PubMed Central

    Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.

    2015-01-01

    Accurate separation of air and bone is critical for creating synthetic CT from MRI to support Radiation Oncology workflow. This study compares two different ultrashort echo-time sequences in the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired on 12 patients under an institution review board-approved prospective protocol. The two MRI sequences tested were ultra-short TE imaging using 3D radial acquisition (UTE), and using pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity-gradient at air-tissue boundaries, spatial dilations, from 0 to 4 mm, were applied to threshold-defined air regions from MR images. Receiver operating characteristic (ROC) analyses, by comparing predicted (defined by MR images) versus “true” regions of air and bone (defined by CT images), were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared to without corrections. When expanding the threshold-defined air volumes, as expected, sensitivity of air identification decreased with an increase in specificity of bone discrimination, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images. Post-processing strategies improved the discriminatory power of air from bone for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both postprocessed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205
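
    The ROC analysis described above (MR-derived air scores compared against CT-defined "truth") can be sketched with scikit-learn; the arrays below are synthetic placeholders, not the study's data, and the sign convention for the MR score is an assumption.

        # Hedged sketch of an ROC/AUC analysis: MR-based scores vs CT-defined air labels.
        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(0)
        is_air_ct = rng.integers(0, 2, size=5000)                 # 1 = air on CT ("truth")
        air_score = is_air_ct * 1.5 + rng.normal(0.0, 1.0, 5000)  # higher = more air-like (assumed)

        auc = roc_auc_score(is_air_ct, air_score)
        fpr, tpr, thresholds = roc_curve(is_air_ct, air_score)
        print(f"AUC = {auc:.3f}")
        # Dilating a threshold-defined air mask, as in the paper, trades sensitivity
        # for specificity, i.e. moves the operating point along the (fpr, tpr) curve.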

  15. Harm is all you need? Best interests and disputes about parental decision-making

    PubMed Central

    Birchley, Giles

    2016-01-01

    A growing number of bioethics papers endorse the harm threshold when judging whether to override parental decisions. Among other claims, these papers argue that the harm threshold is easily understood by lay and professional audiences and correctly conforms to societal expectations of parents in regard to their children. English law contains a harm threshold which mediates the use of the best interests test in cases where a child may be removed from her parents. Using Diekema's seminal paper as an example, this paper explores the proposed workings of the harm threshold. I use examples from the practical use of the harm threshold in English law to argue that the harm threshold is an inadequate answer to the indeterminacy of the best interests test. I detail two criticisms: First, the harm standard has evaluative overtones and judges are loath to employ it where parental behaviour is misguided but they wish to treat parents sympathetically. Thus, by focusing only on ‘substandard’ parenting, harm is problematic where the parental attempts to benefit their child are misguided or wrong, such as in disputes about withdrawal of medical treatment. Second, when harm is used in genuine dilemmas, court judgments offer different answers to similar cases. This level of indeterminacy suggests that, in practice, the operation of the harm threshold would be indistinguishable from best interests. Since indeterminacy appears to be the greatest problem in elucidating what is best, bioethicists should concentrate on discovering the values that inform best interests. PMID:26401048

  16. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
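
    Under signal detection theory, the sensitivity these methods target is d' = z(hit rate) - z(false-alarm rate); the sketch below shows that textbook calculation only, not the authors' adaptive qYN/qFC machinery.

        # Minimal SDT sketch: estimating d' from Yes-No hit and false-alarm counts.
        # This is the textbook formula, not the adaptive qYN/qFC procedure itself.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Example: 20 signal trials and 20 noise trials at one stimulus intensity.
        print(d_prime(hits=15, misses=5, false_alarms=6, correct_rejections=14))
        # An adaptive method would place the next trial so as to home in on the
        # intensity where the estimated d' equals 1 (the sensitivity threshold).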

  17. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which, for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10^-10 m/cycle).

  18. Near-threshold photoionization of hydrogenlike uranium studied in ion-atom collisions via the time-reversed process.

    PubMed

    Stöhlker, T; Ma, X; Ludziejewski, T; Beyer, H F; Bosch, F; Brinzanescu, O; Dunford, R W; Eichler, J; Hagmann, S; Ichihara, A; Kozhuharov, C; Krämer, A; Liesen, D; Mokler, P H; Stachura, Z; Swiat, P; Warczak, A

    2001-02-05

    Radiative electron capture, the time-reversed photoionization process occurring in ion-atom collisions, provides presently the only access to photoionization studies for very highly charged ions. By applying the deceleration mode of the ESR storage ring, we studied this process in low-energy collisions of bare uranium ions with low-Z target atoms. This technique allows us to extend the current information about photoionization to much lower energies than those accessible for neutral heavy elements in the direct reaction channel. The results prove that for high-Z systems, higher-order multipole contributions and magnetic corrections persist even at energies close to the threshold.

  19. Measurements methodology for evaluation of Digital TV operation in VHF high-band

    NASA Astrophysics Data System (ADS)

    Pudwell Chaves de Almeida, M.; Vladimir Gonzalez Castellanos, P.; Alfredo Cal Braz, J.; Pereira David, R.; Saboia Lima de Souza, R.; Pereira da Soledade, A.; Rodrigues Nascimento Junior, J.; Ferreira Lima, F.

    2016-07-01

    This paper describes the experimental setup of field measurements carried out for evaluating the operation of the ISDB-TB (Integrated Services Digital Broadcasting, Terrestrial, Brazilian version) standard digital TV in the VHF-highband. Measurements were performed in urban and suburban areas in a medium-sized Brazilian city. Besides the direct measurements of received power and environmental noise, a measurement procedure involving the injection of Gaussian additive noise was employed to achieve the signal to noise ratio threshold at each measurement site. The analysis includes results of static reception measurements for evaluating the received field strength and the signal to noise ratio thresholds for correct signal decoding.

  20. System and method for quench and over-current protection of superconductor

    DOEpatents

    Huang, Xianrui; Laskaris, Evangelos Trifon; Sivasubramaniam, Kiruba Haran; Bray, James William; Ryan, David Thomas; Fogarty, James Michael; Steinbach, Albert Eugene

    2005-05-31

    A system and method for protecting a superconductor. The system may comprise a current sensor operable to detect a current flowing through the superconductor. The system may comprise a coolant temperature sensor operable to detect the temperature of a cryogenic coolant used to cool the superconductor to a superconductive state. The system may also comprise a control circuit operable to estimate the superconductor temperature based on the current flow and the coolant temperature. The system may also be operable to compare the estimated superconductor temperature to at least one threshold temperature and to initiate a corrective action when the superconductor temperature exceeds the at least one threshold temperature.
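
    The protection logic described in this abstract can be sketched as follows: estimate the conductor temperature from the measured current and coolant temperature, compare it against a threshold, and trigger corrective action. The thermal model and numbers below are illustrative placeholders, not the patented implementation.

        # Hedged sketch of the quench/over-current protection logic described above.
        # The thermal model and all numeric values are illustrative placeholders.
        TEMP_THRESHOLD_K = 6.0   # assumed trip threshold, not taken from the patent

        def estimate_superconductor_temp(current_a, coolant_temp_k):
            # Placeholder model: coolant temperature plus a crude I^2-dependent rise.
            return coolant_temp_k + 1.0e-6 * current_a ** 2

        def protection_step(current_a, coolant_temp_k):
            est_temp = estimate_superconductor_temp(current_a, coolant_temp_k)
            if est_temp > TEMP_THRESHOLD_K:
                return "corrective action: ramp down current / dump stored energy"
            return "normal operation"

        print(protection_step(current_a=1000.0, coolant_temp_k=4.2))  # -> normal operation
        print(protection_step(current_a=1500.0, coolant_temp_k=4.5))  # -> corrective action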

  1. Hygiene-therapists could be used to screen for dental caries and periodontal disease.

    PubMed

    Richards, Derek

    2015-12-01

    A purposive sample of large NHS dental practices with a minimum of three surgeries employing at least one hygiene-therapist (H-T) was taken. Asymptomatic patients attending for routine checkups who consented to the study underwent a screen by an H-T for dental caries and periodontal disease (index test) followed by a screen by a general dental practitioner (reference test). Patients were recruited consecutively. H-Ts and dentists attended a compulsory training day, which covered recruitment, consenting, the screening process, calibration using stock photographs and patient record form completion. The diagnostic threshold for caries was any tooth in the patient's mouth that showed evidence of frank cavitation or shadowing and opacity that would indicate dental caries into the dentine. The diagnostic threshold for periodontal disease was any pocket in the patient's mouth where the black band of a basic periodontal examination (BPE) probe (3.5 to 5.5 mm) partially or totally disappeared (i.e. BPE code 3). The index test was compared with the reference test to determine true-positive, false-positive, false-negative and true-negative values. Sensitivity, specificity, positive predictive value, negative predictive value and diagnostic odds ratios are shown in Table 1. Eighteen hundred and ninety-nine patients consented to dental screening, with 996 patients randomly allocated to see the dentist first and 903 the H-T first. The time interval between the index and reference test never exceeded 21 minutes. With the exception of two practices failing to collect data on smoking and dentures, there were no missing results regarding the outcome of a positive or negative screening decision. No adverse events were reported. Mean screening time was five min 25 s for H-Ts and four min 26 s for dentists. Dentists identified 668 patients with caries (prevalence 0.35), while H-Ts classified 548 as positive and correctly identified 1,047 of the 1,231 patients with no caries. Dentists identified 1,074 patients with at least one pocket exceeding 3.5 mm in depth. Of these, 935 were correctly identified by the H-Ts. Of the 825 screened as negative by the dentist, H-Ts correctly identified 621. The results suggest that hygiene-therapists could be used to screen for dental caries and periodontal disease. This has important ramifications for service design in public-funded health systems.
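
    The accuracy statistics reported in Table 1 follow directly from the 2x2 comparison of index and reference tests; the sketch below applies the standard formulas to the periodontal counts quoted in the abstract (the caries table is not fully reproduced there).

        # Screening accuracy statistics from the periodontal counts quoted above
        # (dentist = reference test, hygiene-therapist = index test).
        tp = 935              # dentist-positive patients also screened positive by H-Ts
        fn = 1074 - 935       # dentist-positive patients missed by H-Ts
        tn = 621              # dentist-negative patients also screened negative by H-Ts
        fp = 825 - 621        # dentist-negative patients screened positive by H-Ts

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        diagnostic_odds_ratio = (tp * tn) / (fp * fn)
        print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
              f"PPV={ppv:.2f} NPV={npv:.2f} DOR={diagnostic_odds_ratio:.1f}")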

  2. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    PubMed

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of the two selected spatial frequency phase unwrapping methods is limited by a phase error bound beyond which errors will occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with the existing methods, we do not need to estimate the threshold associated with absolute phase values to determine the fringe order error, which makes the approach more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by the experimental results.

  3. Classical simulation of quantum error correction in a Fibonacci anyon code

    NASA Astrophysics Data System (ADS)

    Burton, Simon; Brell, Courtney G.; Flammia, Steven T.

    2017-02-01

    Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 ×128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.

  4. Sensitivity of collective action to uncertainty about climate tipping points

    NASA Astrophysics Data System (ADS)

    Barrett, Scott; Dannenberg, Astrid

    2014-01-01

    Despite more than two decades of diplomatic effort, concentrations of greenhouse gases continue to trend upwards, creating the risk that we may someday cross a threshold for `dangerous' climate change. Although climate thresholds are very uncertain, new research is trying to devise `early warning signals' of an approaching tipping point. This research offers a tantalizing promise: whereas collective action fails when threshold uncertainty is large, reductions in this uncertainty may bring about the behavioural change needed to avert a climate `catastrophe'. Here we present the results of an experiment, rooted in a game-theoretic model, showing that behaviour differs markedly either side of a dividing line for threshold uncertainty. On one side of the dividing line, where threshold uncertainty is relatively large, free riding proves irresistible and trust illusive, making it virtually inevitable that the tipping point will be crossed. On the other side, where threshold uncertainty is small, the incentive to coordinate is strong and trust more robust, often leading the players to avoid crossing the tipping point. Our results show that uncertainty must be reduced to this `good' side of the dividing line to stimulate the behavioural shift needed to avoid `dangerous' climate change.

  5. EVALUATING MACROINVERTEBRATE COMMUNITY ...

    EPA Pesticide Factsheets

    Since 2010, new construction in California is required to include stormwater detention and infiltration that is designed to capture rainfall from the 85th percentile of storm events in the region, preferably through green infrastructure. This study used recent macroinvertebrate community monitoring data to determine the ecological threshold for percent impervious cover prior to large scale adoption of green infrastructure using Threshold Indicator Taxa Analysis (TITAN). TITAN uses an environmental gradient and biological community data to determine individual taxa change points with respect to changes in taxa abundance and frequency across that gradient. Individual taxa change points are then aggregated to calculate the ecological threshold. This study used impervious cover data from National Land Cover Datasets and macroinvertebrate community data from California Environmental Data Exchange Network and Southern California Coastal Water Research Project. Preliminary TITAN runs for California’s Chaparral region indicated that both increasing and decreasing taxa had ecological thresholds of <1% watershed impervious cover. Next, TITAN will be used to determine shifts in the ecological threshold after the implementation of green infrastructure on a large scale. This presentation for the Society for Freshwater Scientists will discuss initial evaluation of community and taxa-specific thresholds of impairment for macroinvertebrates in California streams along

  6. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

    This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
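
    The image-domain correction described above can be sketched step by step: threshold the reconstructed image, forward-project the high-attenuation material, map the projections through a polynomial to estimate the beam-hardening error, back-project that error, and subtract a scaled error image. The Python sketch below follows those steps with scikit-image's Radon tools; the threshold, polynomial coefficient and scale factor are placeholders that would have to be calibrated on phantoms, as in the paper.

        # Hedged sketch of an image-domain beam-hardening correction of the kind
        # described above. Threshold, polynomial coefficient and scaling are
        # illustrative placeholders; in practice they are calibrated on phantoms.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon, rescale

        image = rescale(shepp_logan_phantom(), 0.5)      # stand-in reconstructed CT image
        theta = np.linspace(0.0, 180.0, 180, endpoint=False)

        # 1) Segment high-attenuating material (bone/contrast) with a simple threshold.
        high_atten = np.where(image > 0.5, image, 0.0)

        # 2) Forward-project the segmented image.
        sino_high = radon(high_atten, theta=theta)

        # 3) Estimate the BH error in each projection as a polynomial of the projection
        #    (a single second-order term with an assumed coefficient).
        error_sino = 1.0e-3 * sino_high ** 2

        # 4) Back-project the estimated errors (unfiltered back-projection).
        error_image = iradon(error_sino, theta=theta, filter_name=None)

        # 5) Subtract a scaled error image from the original image.
        corrected = image - 0.1 * error_image            # assumed scale factor
        print(corrected.shape)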

  7. Neutron Capture and the Antineutrino Yield from Nuclear Reactors.

    PubMed

    Huber, Patrick; Jaffke, Patrick

    2016-03-25

    We identify a new, flux-dependent correction to the antineutrino spectrum as produced in nuclear reactors. The abundance of certain nuclides, whose decay chains produce antineutrinos above the threshold for inverse beta decay, has a nonlinear dependence on the neutron flux, unlike the vast majority of antineutrino producing nuclides, whose decay rate is directly related to the fission rate. We have identified four of these so-called nonlinear nuclides and determined that they result in an antineutrino excess at low energies below 3.2 MeV, dependent on the reactor thermal neutron flux. We develop an analytic model for the size of the correction and compare it to the results of detailed reactor simulations for various real existing reactors, spanning 3 orders of magnitude in neutron flux. In a typical pressurized water reactor the resulting correction can reach ∼0.9% of the low energy flux which is comparable in size to other, known low-energy corrections from spent nuclear fuel and the nonequilibrium correction. For naval reactors the nonlinear correction may reach the 5% level by the end of cycle.

  8. Photoionization of atomic barium subshells in the 4d threshold region using the relativistic multiconfiguration Tamm-Dancoff approximation

    NASA Astrophysics Data System (ADS)

    Ganesan, Aarthi; Deshmukh, P. C.; Manson, S. T.

    2017-03-01

    Photoionization cross sections and photoelectron angular distribution asymmetry parameters are calculated for the 4d^10, 5s^2, 5p^6, and 6s^2 subshells of atomic barium as a test of the relativistic multiconfiguration Tamm-Dancoff (RMCTD) method. The shape resonance present in the near-threshold region of the 4d subshell is studied in detail in the 4d photoionization along with the 5s, 5p, and 6s subshells in the region of the 4d thresholds, as the 4d shape resonance strongly influences these subshells in its vicinity. The results are compared with available experiment and other many-body theoretical results in an effort to assess the capabilities of the RMCTD methodology. The electron correlations addressed in the RMCTD method give relatively good agreement with the experimental data, indicating that the important many-body correlations are included correctly.

  9. The effect of variably tinted spectacle lenses on visual performance in cataract subjects.

    PubMed

    Naidu, Srilata; Lee, Jason E; Holopigian, Karen; Seiple, William H; Greenstein, Vivienne C; Stenson, Susan M

    2003-01-01

    A body of clinical and laboratory evidence suggests that tinted spectacle lenses may have an effect on visual performance. The aim of this study was to quantify the effects of spectacle lens tint on the visual performance of 25 subjects with cataracts. Cataracts were scored based on best-corrected acuity and by comparison with the Lens Opacity Classification System (LOCS III) plates. Visual performance was assessed by measuring contrast sensitivity with and without glare (Morphonome software version 4.0). The effect of gray, brown, yellow, green and purple tinting was evaluated. All subjects demonstrated an increase in contrast thresholds under glare conditions regardless of lens tint. However, brown and yellow lens tints resulted in the least amount of contrast threshold increase. Gray lens tint resulted in the largest contrast threshold increase. Individuals with lenticular changes may benefit from brown or yellow spectacle lenses under glare conditions.

  10. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping.

    PubMed

    Kubica, Aleksander; Beverland, Michael E; Brandão, Fernando; Preskill, John; Svore, Krysta M

    2018-05-04

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the threshold for 1D stringlike and 2D sheetlike logical operators to be p_{3DCC}^{(1)}≃1.9% and p_{3DCC}^{(2)}≃27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  11. ASSESSMENT OF LOW-FREQUENCY HEARING WITH NARROW-BAND CHIRP EVOKED 40-HZ SINUSOIDAL AUDITORY STEADY STATE RESPONSE

    PubMed Central

    Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.

    2016-01-01

    Objective To determine the clinical utility of narrow-band chirp evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design Tone bursts and narrow-band chirps were used to respectively evoke auditory brainstem responses (tb-ABR) and 40-Hz s-ASSR thresholds with the Kalman-weighted filtering technique and were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated measures ANOVA, post-hoc t-tests, and simple regression analyses were performed for each of the three stimulus frequencies. Study Sample Thirty young adults aged 18–25 with normal hearing participated in this study. Results When 4000 equivalent response averages were used, the range of mean s-ASSR thresholds at 500, 2000, and 4000 Hz was 17–22 dB lower (better) than when 2000 averages were used. The range of mean tb-ABR thresholds was lower by 11–15 dB for 2000 and 4000 Hz when twice as many equivalent response averages were used, while mean tb-ABR thresholds for 500 Hz were indistinguishable regardless of additional response averaging. Conclusion Narrow-band chirp evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory thresholds in noisy participants when adequate response averaging is used. PMID:26795555

  12. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE PAGES

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...

    2016-03-03

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  14. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  15. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  16. Measurement Techniques for Transmit Source Clock Jitter for Weak Serial RF Links

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; Schlesinger, Adam M.

    2010-01-01

    Techniques for filtering clock jitter measurements are developed, in the context of controlling data modulation jitter on an RF carrier to accommodate low signal-to-noise ratio thresholds of high-performance error correction codes. Measurement artifacts from sampling are considered, and a tutorial on interpretation of direct readings is included.

  17. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  18. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  19. 26 CFR 301.6721-1 - Failure to file correct information returns.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... file timely includes a failure to file in the required manner, for example, on magnetic media or in... they fall below the 250-threshold requirement) or on magnetic media or other machine-readable form. Filers who are required to file information returns on magnetic media and who file such information...

  20. Rainfall threshold definition using an entropy decision approach and radar data

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.

    2011-07-01

    Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. If the threshold values are exceeded, it can produce a critical situation in river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.

  1. Solar Energetic Particle Spectra

    NASA Astrophysics Data System (ADS)

    Ryan, J. M.; Boezio, M.; Bravar, U.; Bruno, A.; Christian, E. R.; de Nolfo, G. A.; Martucci, M.; Mergè, M.; Munini, R.; Ricci, M.; Sparvoli, R.; Stochaj, S.

    2017-12-01

    We report updated event-integrated spectra from several SEP events measured with PAMELA. The measurements were made from 2006 to 2014 in the energy range starting at 80 MeV and extending well above the neutron monitor threshold. The PAMELA instrument is in a high-inclination, low Earth orbit and has access to SEPs when at high latitudes. Spectra have been assembled from these high-latitude measurements. The field of view of PAMELA is small, and during the high-latitude passes it scans a wide range of asymptotic directions as the spacecraft orbits. Correcting for data gaps and solid-angle effects, and applying improved background corrections, we have compiled event-integrated intensity spectra for twenty-eight SEP events. Where statistics permit, the spectra exhibit power-law shapes in energy with a high-energy exponential roll-over. The events analyzed include two genuine ground level enhancements (GLE). In those cases the roll-over energy lies above the neutron monitor threshold (~1 GV), while for the others it is lower. We see no qualitative difference between the spectra of GLE vs. non-GLE events, i.e., all roll over in an exponential fashion with rapidly decreasing intensity at high energies.

  2. Diagnosing pulmonary embolisms: the clinician's point of view.

    PubMed

    Carrillo Alcaraz, A; Martínez, A López; Solano, F J Sotos

    Pulmonary thromboembolism is common and potentially severe. To ensure the correct approach to the diagnostic workup of pulmonary thromboembolism, it is essential to know the basic concepts governing the use of the different tests available. The diagnostic approach to pulmonary thromboembolism is an example of the application of the conditional probabilities of Bayes' theorem in daily practice. To interpret the available diagnostic tests correctly, it is necessary to analyze different concepts that are fundamental for decision making. Thus, it is necessary to know what the likelihood ratios, 95% confidence intervals, and decision thresholds mean. Whether to determine the D-dimer concentration or to do CT angiography or other imaging tests depends on their capacity to modify the pretest probability of having the disease to a posttest probability that is higher or lower than the thresholds for action. This review aims to clarify the diagnostic sequence of thromboembolic pulmonary disease, analyzing the main diagnostic tools (clinical examination, laboratory tests, and imaging tests), placing special emphasis on the principles that govern evidence-based medicine. Copyright © 2016 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.
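
    The pretest-to-posttest update referred to above is Bayes' theorem in odds form: posttest odds = pretest odds x likelihood ratio. A minimal sketch follows; the pretest probability, likelihood ratio and decision thresholds are illustrative placeholders, not recommended clinical values.

        # Minimal sketch of the pretest -> posttest update (odds form of Bayes' theorem).
        # All numeric values are illustrative placeholders.
        def posttest_probability(pretest_prob, likelihood_ratio):
            pretest_odds = pretest_prob / (1.0 - pretest_prob)
            posttest_odds = pretest_odds * likelihood_ratio
            return posttest_odds / (1.0 + posttest_odds)

        pretest = 0.20                # assumed clinical pretest probability of PE
        lr_negative_test = 0.1        # assumed likelihood ratio of a negative test
        p = posttest_probability(pretest, lr_negative_test)
        print(f"posttest probability ~ {p:.1%}")  # compare against the action thresholds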

  3. Quantitative analysis of voids in percolating structures in two-dimensional N-body simulations

    NASA Technical Reports Server (NTRS)

    Harrington, Patrick M.; Melott, Adrian L.; Shandarin, Sergei F.

    1993-01-01

    We present in this paper a quantitative method for defining void size in large-scale structure based on percolation threshold density. Beginning with two-dimensional gravitational clustering simulations smoothed to the threshold of nonlinearity, we perform percolation analysis to determine the large scale structure. The resulting objective definition of voids has a natural scaling property, is topologically interesting, and can be applied immediately to redshift surveys.

  4. Threshold Theory Tested in an Organizational Setting: The Relation between Perceived Innovativeness and Intelligence in a Large Sample of Leaders

    ERIC Educational Resources Information Center

    Christensen, Bo T.; Hartmann, Peter V. W.; Rasmussen, Thomas Hedegaard

    2017-01-01

    A large sample of leaders (N = 4257) was used to test the link between leader innovativeness and intelligence. The threshold theory of the link between creativity and intelligence assumes that below a certain IQ level (approximately IQ 120), there is some correlation between IQ and creative potential, but above this cutoff point, there is no…

  5. Nano-Transistor Modeling: Two Dimensional Green's Function Method

    NASA Technical Reports Server (NTRS)

    Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan

    2001-01-01

    Two quantum mechanical effects that impact the operation of nanoscale transistors are inversion layer energy quantization and ballistic transport. While the qualitative effects of these features are reasonably understood, a comprehensive study of device physics in two dimensions is lacking. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL (Drain Induced Barrier Lowering), and threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI).

  6. Evaluation of the most suitable threshold value for modelling snow glacier melt through T- index approach: the case study of Forni Glacier (Italian Alps)

    NASA Astrophysics Data System (ADS)

    Senese, Antonella; Maugeri, Maurizio; Vuillermoz, Elisa; Smiraglia, Claudio; Diolaiuti, Guglielmina

    2014-05-01

    Glacier melt occurs whenever the surface temperature is at the melting point (273.15 K) and the net energy budget is positive. These conditions can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). When such a station is not present at the glacier surface, assessing actual melting conditions and evaluating the melt amount is difficult, and degree-day (also named T-index) models are applied. These approaches require the choice of a correct temperature threshold. In fact, melt does not necessarily occur at daily air temperatures higher than 273.15 K, since it is determined by the energy budget, which in turn is only indirectly affected by air temperature. This is the case in the late spring period, when ablation processes start at the glacier surface, progressively reducing snow thickness. In this study, to identify the air temperature threshold most indicative of melt conditions in the April-June period, we analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS (at 2631 m a.s.l.) on the ablation tongue of the Forni Glacier (Italy), and by a weather station located near the studied glacier (at Bormio, 1225 m a.s.l.). Moreover, we evaluated the glacier energy budget (which gives the actual melt, Senese et al., 2012) and the snow water equivalent values during this time-frame. The ablation amount was then estimated both from the surface energy balance (MEB, from supraglacial AWS data) and from the degree-day method (MT-INDEX), in the latter case applying the mean tropospheric lapse rate to the temperature data acquired at Bormio and varying the air temperature threshold; the results were then compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of daily glacier air temperature conditions, and that the major uncertainty in the computation of snow melt from degree-day models is driven by the choice of an appropriate air temperature threshold. To assess the most suitable threshold, we first analyzed hourly MEB values to detect whether ablation occurs and for how many hours per day. Most of the melting (97.7%) occurred on days featuring at least 6 melting hours, suggesting that the minimum average daily temperature of such days be taken as a suitable threshold (268.1 K). We then ran a simple T-index model applying different threshold values; the threshold that best reproduces snow melting is 268.1 K. In summary, a threshold about 5.0 K lower than the widely applied 273.15 K permits the best reconstruction of glacier melt, in agreement with the findings of van den Broeke et al. (2010) for the Greenland ice sheet. The choice of a 268 K threshold for computing degree-day amounts could thus probably be generalized and applied not only to Greenland glaciers but also to mid-latitude and Alpine ones. This work was carried out under the umbrella of the SHARE Stelvio Project, funded by the Lombardy Region and managed by FLA and the EvK2-CNR Committee.
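
    The T-index (degree-day) model evaluated above reduces to a melt factor multiplied by the sum of daily temperature excesses over the chosen threshold; the sketch below shows that calculation, with the degree-day factor and lapse rate as illustrative assumptions (only the 268.1 K threshold is taken from the abstract).

        # Hedged sketch of a T-index (degree-day) melt model with an air temperature
        # threshold. Degree-day factor and lapse rate are illustrative assumptions;
        # the 268.1 K threshold is the value identified in the abstract.
        LAPSE_RATE_K_PER_M = -0.0065   # mean tropospheric lapse rate (assumed here)
        DEGREE_DAY_FACTOR = 4.5        # mm w.e. per K per day, illustrative value

        def glacier_temperature(t_valley_k, z_valley_m, z_glacier_m):
            """Extrapolate a valley-station temperature to the glacier elevation."""
            return t_valley_k + LAPSE_RATE_K_PER_M * (z_glacier_m - z_valley_m)

        def t_index_melt(daily_temps_k, threshold_k=268.1):
            """Melt (mm w.e.): degree-day factor times positive degree-days above threshold."""
            return DEGREE_DAY_FACTOR * sum(max(t - threshold_k, 0.0) for t in daily_temps_k)

        # Example: five daily mean temperatures at Bormio (1225 m) scaled to the AWS (2631 m).
        bormio_temps_k = [281.0, 279.5, 283.0, 278.0, 282.5]
        glacier_temps_k = [glacier_temperature(t, 1225.0, 2631.0) for t in bormio_temps_k]
        print(f"melt over 5 days ~ {t_index_melt(glacier_temps_k):.1f} mm w.e.")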

  7. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    PubMed

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curves analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026, p=0.018 respectively), and SULmax≥10.15 (p=0.017, p=0.022 respectively). SULmax did not have a significant predictive value for OS whereas MTV had (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Monitoring of Tumor Growth with [(18)F]-FET PET in a Mouse Model of Glioblastoma: SUV Measurements and Volumetric Approaches.

    PubMed

    Holzgreve, Adrien; Brendel, Matthias; Gu, Song; Carlsen, Janette; Mille, Erik; Böning, Guido; Mastrella, Giorgia; Unterrainer, Marcus; Gildehaus, Franz J; Rominger, Axel; Bartenstein, Peter; Kälin, Roland E; Glass, Rainer; Albert, Nathalie L

    2016-01-01

    Noninvasive tumor growth monitoring is of particular interest for the evaluation of experimental glioma therapies. This study investigates the potential of positron emission tomography (PET) using O-(2-(18)F-fluoroethyl)-L-tyrosine ([(18)F]-FET) to determine tumor growth in a murine glioblastoma (GBM) model-including estimation of the biological tumor volume (BTV), which has hitherto not been investigated in the pre-clinical context. Fifteen GBM-bearing mice (GL261) and six control mice (shams) were investigated during 5 weeks by PET followed by autoradiographic and histological assessments. [(18)F]-FET PET was quantitated by calculation of maximum and mean standardized uptake values within a universal volume-of-interest (VOI) corrected for healthy background (SUVmax/BG, SUVmean/BG). A partial volume effect correction (PVEC) was applied in comparison to ex vivo autoradiography. BTVs obtained by predefined thresholds for VOI definition (SUV/BG: ≥1.4; ≥1.6; ≥1.8; ≥2.0) were compared to the histologically assessed tumor volume (n = 8). Finally, individual "optimal" thresholds for BTV definition best reflecting the histology were determined. In GBM mice SUVmax/BG and SUVmean/BG clearly increased with time, however at high inter-animal variability. No relevant [(18)F]-FET uptake was observed in shams. PVEC recovered signal loss of SUVmean/BG assessment in relation to autoradiography. BTV as estimated by predefined thresholds strongly differed from the histology volume. Strikingly, the individual "optimal" thresholds for BTV assessment correlated highly with SUVmax/BG (ρ = 0.97, p < 0.001), allowing SUVmax/BG-based calculation of individual thresholds. The method was verified by a subsequent validation study (n = 15, ρ = 0.88, p < 0.01) leading to extensively higher agreement of BTV estimations when compared to histology in contrast to predefined thresholds. [(18)F]-FET PET with standard SUV measurements is feasible for glioma imaging in the GBM mouse model. PVEC is beneficial to improve accuracy of [(18)F]-FET PET SUV quantification. Although SUVmax/BG and SUVmean/BG increase during the disease course, these parameters do not correlate with the respective tumor size. For the first time, we propose a histology-verified method allowing appropriate individual BTV estimation for volumetric in vivo monitoring of tumor growth with [(18)F]-FET PET and show that standardized thresholds from routine clinical practice seem to be inappropriate for BTV estimation in the GBM mouse model.

  9. Quantitative measurement of interocular suppression in anisometropic amblyopia: a case-control study.

    PubMed

    Li, Jinrong; Hess, Robert F; Chan, Lily Y L; Deng, Daming; Yang, Xiao; Chen, Xiang; Yu, Minbin; Thompson, Benjamin

    2013-08-01

    The aims of this study were to assess (1) the relationship between interocular suppression and visual function in patients with anisometropic amblyopia, (2) whether suppression can be simulated in matched controls using monocular defocus or neutral density filters, (3) the effects of spectacle or rigid gas-permeable contact lens correction on suppression in patients with anisometropic amblyopia, and (4) the relationship between interocular suppression and outcomes of occlusion therapy. Case-control study (aims 1-3) and cohort study (aim 4). Forty-five participants with anisometropic amblyopia and 45 matched controls (mean age, 8.8 years for both groups). Interocular suppression was assessed using Bagolini striated lenses, neutral density filters, and an objective psychophysical technique that measures the amount of contrast imbalance between the 2 eyes that is required to overcome suppression (dichoptic motion coherence thresholds). Visual acuity was assessed using a logarithm minimum angle of resolution tumbling E chart and stereopsis using the Randot preschool test. Interocular suppression assessed using dichoptic motion coherence thresholds. Patients exhibited significantly stronger suppression than controls, and stronger suppression was correlated significantly with poorer visual acuity in amblyopic eyes. Reducing monocular acuity in controls to match that of cases using neutral density filters (luminance reduction) resulted in levels of interocular suppression comparable with that in patients. This was not the case for monocular defocus (optical blur). Rigid gas-permeable contact lens correction resulted in less suppression than spectacle correction, and stronger suppression was associated with poorer outcomes after occlusion therapy. Interocular suppression plays a key role in the visual deficits associated with anisometropic amblyopia and can be simulated in controls by inducing a luminance difference between the eyes. Accurate quantification of suppression using the dichoptic motion coherence threshold technique may provide useful information for the management and treatment of anisometropic amblyopia. Proprietary or commercial disclosure may be found after the references. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  10. Improved laser damage threshold for chalcogenide glasses through surface microstructuring

    NASA Astrophysics Data System (ADS)

    Florea, Catalin; Sanghera, Jasbinder; Busse, Lynda; Shaw, Brandon; Aggarwal, Ishwar

    2011-03-01

    We demonstrate improved laser damage threshold of chalcogenide glasses with microstructured surfaces as compared to chalcogenide glasses provided with traditional antireflection coatings. The surface microstructuring is used to reduce Fresnel losses over large bandwidths in As2S3 glasses and fibers. The treated surfaces show almost a factor of two of improvement in the laser damage threshold when compared with untreated surfaces.

  11. Repeatability of Quantitative Whole-Body 18F-FDG PET/CT Uptake Measures as Function of Uptake Interval and Lesion Selection in Non-Small Cell Lung Cancer Patients.

    PubMed

    Kramer, Gerbrand Maria; Frings, Virginie; Hoetjes, Nikie; Hoekstra, Otto S; Smit, Egbert F; de Langen, Adrianus Johannes; Boellaard, Ronald

    2016-09-01

    Change in (18)F-FDG uptake may predict response to anticancer treatment. PERCIST suggests a threshold of 30% change in SUV to define partial response and progressive disease. Evidence underlying these thresholds consists of mixed stand-alone PET and PET/CT data with variable uptake intervals and no consensus on the number of lesions to be assessed. Additionally, there is increasing interest in alternative (18)F-FDG uptake measures such as metabolically active tumor volume and total lesion glycolysis (TLG). The aim of this study was to comprehensively investigate the repeatability of various quantitative whole-body (18)F-FDG metrics in non-small cell lung cancer (NSCLC) patients as a function of tracer uptake interval and lesion selection strategies. Eleven NSCLC patients, with at least 1 intrathoracic lesion 3 cm or greater, underwent double baseline whole-body (18)F-FDG PET/CT scans at 60 and 90 min after injection within 3 d. All (18)F-FDG-avid tumors were delineated with a 50% threshold of SUVpeak adapted for local background. SUVmax, SUVmean, SUVpeak, TLG, metabolically active tumor volume, and tumor-to-blood and -liver ratios were evaluated, as well as the influence of lesion selection and 2 methods for correction of uptake time differences. The best repeatability was found using the SUV metrics of the averaged PERCIST target lesions (repeatability coefficients < 10%). The correlation between test and retest scans was strong for all uptake measures at either uptake interval (intraclass correlation coefficient > 0.97 and R(2) > 0.98). There were no significant differences in repeatability between data obtained 60 and 90 min after injection. When only PERCIST-defined target lesions were included (n = 34), repeatability improved for all uptake values. Normalization to liver or blood uptake or glucose correction did not improve repeatability. However, after correction for uptake time, the correlation of SUV measures and TLG between the 60- and 90-min data significantly improved without affecting test-retest performance. This study suggests that a 15% change of SUVmean/SUVpeak at 60 min after injection can be used to assess response in advanced NSCLC patients if up to 5 PERCIST target lesions are assessed. Lower thresholds could be used in averaged PERCIST target lesions (<10%). © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  12. "A violation of the conditional independence assumption in the two-high-threshold Model of recognition memory": Correction to Chen, Starns, and Rotello (2015).

    PubMed

    2016-01-01

    Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Tsunami Generation from Asteroid Airburst and Ocean Impact and Van Dorn Effect

    NASA Technical Reports Server (NTRS)

    Robertson, Darrel

    2016-01-01

    Airburst: In the simulations explored, energy from the airburst couples very weakly with the water, making the tsunami dangerous over a shorter distance than the blast for asteroid sizes up to the maximum expected size that will still airburst (approx. 250 MT). Future areas of investigation: low-entry-angle airbursts create more cylindrical blasts and might couple more efficiently; bursts very close to the ground will increase coupling; inclusion of the thermosphere (>80 km altitude) may show some plume-collapse effects over a large area, although with much less pressure. Ocean impact: The asteroid creates a large cavity in the ocean; the cavity backfills, creating a central jet, and oscillation between the cavity and jet sends out a tsunami wave packet. For a deep-ocean impact the waves are deep-water waves (phase speed = 2x group speed). If the tsunami propagation and inundation calculations are correct for the small (<250 MT) asteroids in these simulations, where they impact deep ocean basins, the resulting tsunami is not a significant hazard unless particularly close to vulnerable communities. Future work: shallow ocean impact; effect of continental shelf and beach profiles; tsunami vs. blast damage radii for impacts close to populated areas; larger asteroids below the presumed threshold of global effects (Ø200-800 m).

  14. A threshold effect for spacecraft charging

    NASA Technical Reports Server (NTRS)

    Olsen, R. C.

    1983-01-01

    The borderline case between no charging and large (kV) negative potentials for eclipse charging events on geosynchronous satellites is investigated, and the dependence of this transition on a threshold energy in the ambient plasma is examined. Data from the Applied Technology Satellite 6 and P78-2 (SCATHA) show that plasma sheet fluxes must extend above 10 keV for these satellites to charge in eclipse. The threshold effect is a result of the shape of the normal secondary yield curve, in particular the high energy crossover, where the secondary yield drops below 1. It is found that a large portion of the ambient electron flux must exceed this energy for a negative current to exist.

  15. Pixel structures to compensate nonuniform threshold voltage and mobility of polycrystalline silicon thin-film transistors using subthreshold current for large-size active matrix organic light-emitting diode displays

    NASA Astrophysics Data System (ADS)

    Na, Jun-Seok; Kwon, Oh-Kyong

    2014-01-01

    We propose pixel structures for large-size and high-resolution active matrix organic light-emitting diode (AMOLED) displays using a polycrystalline silicon (poly-Si) thin-film transistor (TFT) backplane. The proposed pixel structures compensate for the variations of the threshold voltage and mobility of the driving TFT using the subthreshold current. Simulation results show that the emission current error of the proposed pixel structure B ranges from -2.25 to 2.02 least significant bits (LSB) when the variations of the threshold voltage and mobility of the driving TFT are ±0.5 V and ±10%, respectively.

  16. Threshold-Voltage-Shift Compensation and Suppression Method Using Hydrogenated Amorphous Silicon Thin-Film Transistors for Large Active Matrix Organic Light-Emitting Diode Displays

    NASA Astrophysics Data System (ADS)

    Oh, Kyonghwan; Kwon, Oh-Kyong

    2012-03-01

    A threshold-voltage-shift compensation and suppression method for active matrix organic light-emitting diode (AMOLED) displays fabricated using a hydrogenated amorphous silicon thin-film transistor (TFT) backplane is proposed. The proposed method compensates for the threshold voltage variation of TFTs due to different threshold voltage shifts during emission time and extends the lifetime of the AMOLED panel. Measurement results show that the error range of emission current is from -1.1 to +1.7% when the threshold voltage of TFTs varies from 1.2 to 3.0 V.

  17. Universal phase transition in community detectability under a stochastic block model.

    PubMed

    Chen, Pin-Yu; Hero, Alfred O

    2015-03-01

    We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √(p1 p2), where p_i (i = 1, 2) is the within-community edge connection probability. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.
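
    As a worked example of the threshold formula (the probability values below are invented for illustration only), the following Python sketch compares candidate intercommunity probabilities against p* = √(p1 p2):

      # Sketch: compare an intercommunity edge probability p with the asymptotic
      # detectability threshold p* = sqrt(p1 * p2) quoted above.
      import math

      p1, p2 = 0.20, 0.05            # within-community edge probabilities (example values)
      p_star = math.sqrt(p1 * p2)    # = 0.10 for these numbers
      for p in (0.02, 0.08, 0.15):   # candidate intercommunity probabilities
          regime = "subcritical (detectable)" if p < p_star else "supercritical (detection impossible)"
          print(f"p = {p:.2f}, p* = {p_star:.3f}: {regime}")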

  18. Synergy of adaptive thresholds and multiple transmitters in free-space optical communication.

    PubMed

    Louthain, James A; Schmidt, Jason D

    2010-04-26

    Laser propagation through extended turbulence causes severe beam spread and scintillation. Airborne laser communication systems require special considerations in size, complexity, power, and weight. Rather than using bulky, costly, adaptive optics systems, we reduce the variability of the received signal by integrating a two-transmitter system with an adaptive threshold receiver to average out the deleterious effects of turbulence. In contrast to adaptive optics approaches, systems employing multiple transmitters and adaptive thresholds exhibit performance improvements that are unaffected by turbulence strength. Simulations of this system with on-off keying (OOK) showed that reducing the scintillation variations with multiple transmitters improves the performance of low-frequency adaptive threshold estimators by 1-3 dB. The combination of multiple transmitters and adaptive thresholding provided at least a 10 dB gain over implementing only transmitter pointing and receiver tilt correction for all three high-Rytov-number scenarios. The scenario with a spherical-wave Rytov number R = 0.20 enjoyed a 13 dB reduction in the required SNR for BERs between 10^-5 and 10^-3, consistent with the code gain metric. All five scenarios between 0.06 and 0.20 Rytov number improved to within 3 dB of the SNR of the lowest Rytov-number scenario.
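
    The following self-contained Python sketch illustrates the general idea of a low-frequency adaptive threshold for OOK detection; the signal model, window length and noise levels are invented for illustration and do not reproduce the estimators or link budget of the study:

      # Toy adaptive-threshold OOK receiver: the decision level is a running mean
      # of the received intensity (edge effects of the averaging window are ignored).
      import numpy as np

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, 2000)                      # transmitted OOK symbols
      gain = np.exp(rng.normal(0.0, 0.3, bits.size))       # lognormal channel gain (illustrative)
      rx = bits * gain + rng.normal(0.0, 0.15, bits.size)  # received intensity plus receiver noise

      window = 200                                         # low-frequency averaging window
      threshold = np.convolve(rx, np.ones(window) / window, mode="same")
      decisions = (rx > threshold).astype(int)
      print(f"bit error rate with adaptive threshold: {np.mean(decisions != bits):.4f}")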

  19. Audiometric evaluation of an attempt to optimize the fixation of the transducer of a middle-ear implant to the ossicular chain with bone cement.

    PubMed

    Snik, A; Cremers, C

    2004-02-01

    Typically, an implantable hearing device consists of a transducer that is coupled to the ossicular chain and electronics. The coupling is of major importance. The Vibrant Soundbridge (VSB) is such an implantable device; normally, the VSB transducer is fixed to the ossicular chain by means of a special clip that is crimped around the long process of the incus. In addition to crimping, bone cement was used to optimize the fixation in six patients. Long-term results were compared to those of five controls with crimp fixation alone. To assess the effect of bone cement (SerenoCem, Corinthian Medical Ltd, Nottingham, UK) on hearing thresholds, long-term post-surgery thresholds were compared to pre-surgery thresholds. Bone cement did not have any negative effect. Next, to test the hypothesis that aided thresholds might be better with the use of bone cement, aided thresholds were studied. After correction for the severity of hearing loss, only a small difference was found between the two groups at one frequency, viz. 2 kHz. It was concluded that there was no negative effect of using bone cement; however, there is also no reason to use bone cement in VSB users on a regular basis.

  20. Transport temperatures observed during the commercial transportation of animals.

    PubMed

    Fiore, Gianluca; Hofherr, Johann; Natale, Fabrizio; Mainetti, Sergio; Ruotolo, Espedito

    2012-01-01

    Current temperature standards and those proposed by the European Food Safety Authority (EFSA) were compared with the actual practices of commercial transport in the European Union. Temperature and humidity data recorded over one year on 21 vehicles across 905 journeys were analysed. Differences in temperature and humidity recorded by sensors at four different positions in the vehicles exceeded 10°C between the highest and lowest temperatures in nearly 7% of cases. The number and position of temperature sensors are important to ensure the correct representation of temperature conditions in the different parts of a vehicle. For all journeys and all animal categories, a relatively high percentage of beyond-threshold temperatures can be observed in relation to the temperature limits of 30°C and 5°C. Most recorded temperature values lie within the accepted tolerance of ±5°C stipulated in European Community Regulation (EC) 1/2005. The temperature thresholds proposed by EFSA would result in a higher percentage of non-compliant conditions, which are more pronounced at the lower threshold, compared to the thresholds laid down in Regulation (EC) 1/2005. With respect to the different animal categories, non-compliant temperature occurrences were more frequent in pigs and sheep, in particular with regard to the thresholds proposed by EFSA.

  1. ISR corrections to associated HZ production at future Higgs factories

    NASA Astrophysics Data System (ADS)

    Greco, Mario; Montagna, Guido; Nicrosini, Oreste; Piccinini, Fulvio; Volpi, Gabriele

    2018-02-01

    We evaluate the QED corrections due to initial state radiation (ISR) to associated Higgs boson production in electron-positron (e+e-) annihilation at typical energies of interest for the measurement of the Higgs properties at future e+e- colliders, such as CEPC and FCC-ee. We apply the QED Structure Function approach to the four-fermion production process e+e- → μ+μ- b b̄, including both signal and background contributions. We emphasize the relevance of the ISR corrections particularly near threshold and show that finite third-order collinear contributions are mandatory to meet the expected experimental accuracy. We analyze in turn the rôle played by a full four-fermion calculation and beam energy spread in precision calculations for Higgs physics at future e+e- colliders.

  2. Humans and seasonal climate variability threaten large-bodied coral reef fish with small ranges.

    PubMed

    Mellin, C; Mouillot, D; Kulbicki, M; McClanahan, T R; Vigliola, L; Bradshaw, C J A; Brainard, R E; Chabanet, P; Edgar, G J; Fordham, D A; Friedlander, A M; Parravicini, V; Sequeira, A M M; Stuart-Smith, R D; Wantiez, L; Caley, M J

    2016-02-03

    Coral reefs are among the most species-rich and threatened ecosystems on Earth, yet the extent to which human stressors determine species occurrences, compared with biogeography or environmental conditions, remains largely unknown. With ever-increasing human-mediated disturbances on these ecosystems, an important question is not only how many species can inhabit local communities, but also which biological traits determine species that can persist (or not) above particular disturbance thresholds. Here we show that human pressure and seasonal climate variability are disproportionately and negatively associated with the occurrence of large-bodied and geographically small-ranging fishes within local coral reef communities. These species are 67% less likely to occur where human impact and temperature seasonality exceed critical thresholds, such as in the marine biodiversity hotspot: the Coral Triangle. Our results identify the most sensitive species and critical thresholds of human and climatic stressors, providing opportunity for targeted conservation intervention to prevent local extinctions.

  3. Effects of global financial crisis on network structure in a local stock market

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo

    2014-08-01

    This study considers the effects of the 2008 global financial crisis on threshold networks of a local Korean financial market around the time of the crisis. Prices of individual stocks belonging to KOSPI 200 (Korea Composite Stock Price Index 200) are considered for three time periods, namely before, during, and after the crisis. Threshold networks are constructed from fully connected cross-correlation networks, and thresholds of cross-correlation coefficients are assigned to obtain threshold networks. At the high threshold, only one large cluster consisting of firms in the financial sector, heavy industry, and construction is observed during the crisis. However, before and after the crisis, there are several fragmented clusters belonging to various sectors. The power law of the degree distribution in threshold networks is observed within the limited range of thresholds. Threshold networks are fatter during the crisis than before or after the crisis. The clustering coefficient of the threshold network follows the power law in the scaling range.
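
    A minimal sketch of the threshold-network construction described above, using synthetic return series (the ticker data, the threshold value and the networkx dependency are assumptions for illustration):

      # Build a threshold network from a cross-correlation matrix of returns.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(1)
      returns = rng.normal(size=(250, 20))          # 250 days x 20 synthetic stocks
      corr = np.corrcoef(returns, rowvar=False)     # cross-correlation coefficients

      theta = 0.1                                   # correlation threshold
      adjacency = (corr >= theta).astype(int)
      np.fill_diagonal(adjacency, 0)                # drop self-loops

      G = nx.from_numpy_array(adjacency)
      print(f"edges: {G.number_of_edges()}, average clustering: {nx.average_clustering(G):.3f}")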

  4. Cognitive Abilities, Monitoring Confidence, and Control Thresholds Explain Individual Differences in Heuristics and Biases

    PubMed Central

    Jackson, Simon A.; Kleitman, Sabina; Howie, Pauline; Stankov, Lazar

    2016-01-01

    In this paper, we investigate whether individual differences in performance on heuristic and biases tasks can be explained by cognitive abilities, monitoring confidence, and control thresholds. Current theories explain individual differences in these tasks by the ability to detect errors and override automatic but biased judgments, and deliberative cognitive abilities that help to construct the correct response. Here we retain cognitive abilities but disentangle error detection, proposing that lower monitoring confidence and higher control thresholds promote error checking. Participants (N = 250) completed tasks assessing their fluid reasoning abilities, stable monitoring confidence levels, and the control threshold they impose on their decisions. They also completed seven typical heuristic and biases tasks such as the cognitive reflection test and Resistance to Framing. Using structural equation modeling, we found that individuals with higher reasoning abilities, lower monitoring confidence, and higher control threshold performed significantly and, at times, substantially better on the heuristic and biases tasks. Individuals with higher control thresholds also showed lower preferences for risky alternatives in a gambling task. Furthermore, residual correlations among the heuristic and biases tasks were reduced to null, indicating that cognitive abilities, monitoring confidence, and control thresholds accounted for their shared variance. Implications include the proposal that the capacity to detect errors does not differ between individuals. Rather, individuals might adopt varied strategies that promote error checking to different degrees, regardless of whether they have made a mistake or not. The results support growing evidence that decision-making involves cognitive abilities that construct actions and monitoring and control processes that manage their initiation. PMID:27790170

  5. Cognitive Abilities, Monitoring Confidence, and Control Thresholds Explain Individual Differences in Heuristics and Biases.

    PubMed

    Jackson, Simon A; Kleitman, Sabina; Howie, Pauline; Stankov, Lazar

    2016-01-01

    In this paper, we investigate whether individual differences in performance on heuristic and biases tasks can be explained by cognitive abilities, monitoring confidence, and control thresholds. Current theories explain individual differences in these tasks by the ability to detect errors and override automatic but biased judgments, and deliberative cognitive abilities that help to construct the correct response. Here we retain cognitive abilities but disentangle error detection, proposing that lower monitoring confidence and higher control thresholds promote error checking. Participants ( N = 250) completed tasks assessing their fluid reasoning abilities, stable monitoring confidence levels, and the control threshold they impose on their decisions. They also completed seven typical heuristic and biases tasks such as the cognitive reflection test and Resistance to Framing. Using structural equation modeling, we found that individuals with higher reasoning abilities, lower monitoring confidence, and higher control threshold performed significantly and, at times, substantially better on the heuristic and biases tasks. Individuals with higher control thresholds also showed lower preferences for risky alternatives in a gambling task. Furthermore, residual correlations among the heuristic and biases tasks were reduced to null, indicating that cognitive abilities, monitoring confidence, and control thresholds accounted for their shared variance. Implications include the proposal that the capacity to detect errors does not differ between individuals. Rather, individuals might adopt varied strategies that promote error checking to different degrees, regardless of whether they have made a mistake or not. The results support growing evidence that decision-making involves cognitive abilities that construct actions and monitoring and control processes that manage their initiation.

  6. Using generalized additive modeling to empirically identify thresholds within the ITERS in relation to toddlers' cognitive development.

    PubMed

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-04-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints, in the child care quality measures used in these benchmarking efforts that differentiate between different levels of children's cognitive functioning. To date, research has provided little guidance to policymakers as to where these thresholds should be set. Using the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) data set, this study explores the use of generalized additive modeling (GAM) as a method of identifying thresholds on the Infant/Toddler Environment Rating Scale (ITERS) in relation to toddlers' performance on the Mental Development subscale of the Bayley Scales of Infant Development (the Bayley Mental Development Scale Short Form-Research Edition, or BMDSF-R). The present findings suggest that simple linear models do not always correctly depict the relationships between ITERS scores and BMDSF-R scores and that GAM-derived thresholds were more effective at differentiating among children's performance levels on the BMDSF-R. Additionally, the present findings suggest that there is a minimum threshold on the ITERS that must be exceeded before significant improvements in children's cognitive development can be expected. There may also be a ceiling threshold on the ITERS, such that beyond a certain level, only marginal increases in children's BMDSF-R scores are observed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
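
    As a loose illustration of the GAM-based idea (this is not the authors' procedure; the pyGAM package, the simulated scores and the slope heuristic are assumptions), one could fit a smooth of an outcome on a quality score and look for the range over which the fitted curve actually rises:

      # Fit a one-term GAM and locate the score range with the steepest improvement;
      # scores outside that range behave like floor/ceiling thresholds in this toy data.
      import numpy as np
      from pygam import LinearGAM, s

      rng = np.random.default_rng(2)
      quality = rng.uniform(1, 7, 500)                            # e.g. ITERS-like scores
      outcome = np.clip(quality, 3, 5) + rng.normal(0, 0.5, 500)  # flat below 3 and above 5

      gam = LinearGAM(s(0)).fit(quality.reshape(-1, 1), outcome)
      grid = gam.generate_X_grid(term=0)
      smooth = gam.partial_dependence(term=0, X=grid)

      slope = np.gradient(smooth, grid[:, 0])                     # derivative of the fitted smooth
      rising = grid[slope > 0.5 * slope.max(), 0]
      print(f"improvement concentrated between quality {rising.min():.1f} and {rising.max():.1f}")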

  7. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    PubMed

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce the diagnostic accuracy and hinder the physician's correct decision on patients. The dual-tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning on this method for noise removal from the ECG signal has not been investigated yet. In this work, we provide a comprehensive study of the impact of the choice of threshold algorithm, threshold value, and wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm response. The evaluation on synthetic ECG signals corrupted by various types of noise showed that the modified unified threshold and wavelet hyperbolic threshold de-noising method performs better for realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results show that the proposed method achieves higher performance than the ordinary dual-tree wavelet transform for all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust to all kinds of noise with varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
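
    For readers who want to experiment, the sketch below applies ordinary discrete-wavelet threshold de-noising with PyWavelets; it is only an illustration of how a threshold value, soft/hard mode and decomposition level interact, since pywt does not provide the dual-tree transform used in the paper and the toy signal is not a real ECG:

      # Wavelet threshold de-noising of a toy signal (universal threshold, soft mode).
      import numpy as np
      import pywt

      rng = np.random.default_rng(3)
      t = np.linspace(0, 1, 1024)
      clean = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)
      noisy = clean + rng.normal(0, 0.2, t.size)

      coeffs = pywt.wavedec(noisy, "db4", level=4)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest level
      thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
      shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      denoised = pywt.waverec(shrunk, "db4")[: t.size]

      print(f"residual RMSE after de-noising: {np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")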

  8. String-inspired supergravity model at one loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaillard, M.K.; Papadopoulos, A.; Pierce, D.M.

    1992-03-15

    We study a prototype supergravity model from superstrings, with three generations of matter fields in the untwisted sector, nonperturbatively induced supersymmetry breaking and including threshold corrections in conformity with modular invariance. The scale degeneracy of the vacuum is lifted at the one-loop level, allowing a determination of the fundamental parameters of the effective low-energy theory.

  9. Sensitivity to Lateral Information on a Perceptual Word Identification Task in French Third and Fifth Graders

    ERIC Educational Resources Information Center

    Khelifi, Rachid; Sparrow, Laurent; Casalis, Severine

    2012-01-01

    This study aimed at examining sensitivity to lateral linguistic and nonlinguistic information in third and fifth grade readers. A word identification task with a threshold was used, and targets were displayed foveally with or without distractors. Sensitivity to lateral information was inferred from the deterioration of the rate of correct word…

  10. ISASS Policy 2016 Update – Minimally Invasive Sacroiliac Joint Fusion

    PubMed Central

    Lorio, Morgan P.

    2016-01-01

    Rationale The index 2014 ISASS Policy Statement - Minimally Invasive Sacroiliac Joint Fusion was generated out of necessity to provide an ICD9-based background and emphasize tools to ensure correct diagnosis. A timely ICD10-based 2016 Update provides a granular threshold selection with improved level of evidence and a more robust, relevant database. PMID:27652197

  11. 77 FR 59139 - Prompt Corrective Action, Requirements for Insurance, and Promulgation of NCUA Rules and Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-26

    ... threshold is used to define a ``complex'' credit union for determining whether risk-based net worth... credit union (FICU) is subject to certain interest rate risk rule requirements. \\1\\ IRPS 03-2, 68 FR... multiple applications, while avoiding undue risk to the National Credit Union Share Insurance Fund (NCUSIF...

  12. The Burden of Social Proof: Shared Thresholds and Social Influence

    ERIC Educational Resources Information Center

    MacCoun, Robert J.

    2012-01-01

    [Correction Notice: An erratum for this article was reported in Vol 119(2) of Psychological Review (see record 2012-06153-001). In the article, incorrect versions of figures 3 and 6 were included. Also, Table 8 should have included the following information in the table footnote "P(A V) = probability of acquittal given unanimous verdict." All…

  13. Threshold-voltage modulated phase change heterojunction for application of high density memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Baihan; Tong, Hao, E-mail: tonghao@hust.edu.cn; Qian, Hang

    2015-09-28

    Phase change random access memory is one of the most important candidates for the next-generation non-volatile memory technology. However, the ability to reduce its memory size is compromised by the fundamental limitations inherent in the CMOS technology. While the 0T1R configuration without any additional access transistor shows great advantages in improving the storage density, the leakage current and small operation window limit its application in large-scale arrays. In this work, a phase change heterojunction based on GeTe and n-Si is fabricated to address those problems. The relationship between threshold voltage and doping concentration is investigated, and energy band diagrams and X-ray photoelectron spectroscopy measurements are provided to explain the results. The threshold voltage is modulated to provide a large operational window based on this relationship. The switching performance of the heterojunction is also tested, showing a good reverse characteristic, which could effectively decrease the leakage current. Furthermore, a reliable read-write-erase function is achieved during the tests. The phase change heterojunction is proposed for high-density memory, showing some notable advantages, such as modulated threshold voltage, large operational window, and low leakage current.

  14. Enhanced neural function in highly aberrated eyes following perceptual learning with adaptive optics.

    PubMed

    Sabesan, Ramkumar; Barbot, Antoine; Yoon, Geunyoung

    2017-03-01

    Highly aberrated keratoconic (KC) eyes do not elicit the expected visual advantage from customized optical corrections. This is attributed to the neural insensitivity arising from chronic visual experience with poor retinal image quality, dominated by low spatial frequencies. The goal of this study was to investigate whether targeted perceptual learning with adaptive optics (AO) can stimulate neural plasticity in these highly aberrated eyes. The worse eye of 2 KC subjects was trained in a contrast threshold test under AO correction. Prior to training, tumbling 'E' visual acuity and contrast sensitivity at 4, 8, 12, 16, 20, 24 and 28 c/deg were measured in both the trained and untrained eyes of each subject with their routine prescription and with AO correction for a 6 mm pupil. The high spatial frequency requiring 50% contrast for detection with AO correction was picked as the training frequency. Subjects were required to train on a contrast detection test with AO correction for 1 h on each of 5 consecutive days. During each training session, threshold contrast at the training frequency was measured with AO. Pre-training measures were repeated after the 5 training sessions in both eyes (i.e., post-training). After training, contrast sensitivity under AO correction improved on average across spatial frequency by a factor of 1.91 (range: 1.77-2.04) and 1.75 (1.22-2.34) for the two subjects. This improvement in contrast sensitivity transferred to visual acuity, with the two subjects improving by 1.5 and 1.3 lines, respectively, with AO following training. One of the two subjects showed an interocular transfer of training and an improvement in performance with their routine prescription post-training. This training-induced visual benefit demonstrates the potential of AO as a tool for neural rehabilitation in patients with abnormal corneas. Moreover, it reveals a sufficient degree of neural plasticity in normally developed adults who have a long history of abnormal visual experience due to optical imperfections. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Visual sensitivity to spatially sampled modulation in human observers

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Macleod, Donald I. A.

    1991-01-01

    Thresholds were measured for detecting spatial luminance modulation in regular lattices of visually discrete dots. Thresholds for modulation of a lattice are generally higher than the corresponding threshold for modulation of a continuous field, and the size of the threshold elevation, which depends on the spacing of the lattice elements, can be as large as one log unit. The largest threshold elevations are seen when the sample spacing is 12 min arc or greater. Theories based on response compression cannot explain the further observation that the threshold elevations due to spatial sampling are also dependent on modulation frequency: the greatest elevations occur with higher modulation frequencies. The idea that this is due to masking of the modulation frequency by the spatial frequencies in the sampling lattice is considered.

  16. Glancing-angle-deposited magnesium oxide films for high-fluence applications

    DOE PAGES

    Oliver, J. B.; Smith, C.; Spaulding, J.; ...

    2016-06-15

    Here, birefringent magnesium oxide thin films are formed by glancing-angle deposition to perform as quarter-wave plates at a wavelength of 351 nm. These films are being developed to fabricate a large-aperture distributed-polarization rotator for use in vacuum, with an ultimate laser-damage-threshold goal of up to 12 J/cm2 for a 5-ns flat-in-time pulse. The laser-damage threshold, ease of deposition, and optical film properties are evaluated. While the measured large-area laser-damage threshold is limited to ~4 J/cm2 in vacuum, initial results based on small-spot testing in air (>20 J/cm2) suggest MgO may be suitable with further process development.

  17. Impact of different cleaning processes on the laser damage threshold of antireflection coatings for Z-Backlighter optics at Sandia National Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Field, Ella; Bellum, John; Kletecka, Damon

    We have examined how different cleaning processes affect the laser-induced damage threshold of antireflection coatings for large dimension, Z-Backlighter laser optics at Sandia National Laboratories. Laser damage thresholds were measured after the coatings were created, and again 4 months later to determine which cleaning processes were most effective. There is a nearly twofold increase in laser-induced damage threshold between the antireflection coatings that were cleaned and those that were not cleaned. Aging of the coatings after 4 months resulted in even higher laser-induced damage thresholds. Also, the laser-induced damage threshold results revealed that every antireflection coating had a high defect density, despite the cleaning process used, which indicates that improvements to either the cleaning or deposition processes should provide even higher laser-induced damage thresholds.

  18. Ecological thresholds: The key to successful environmental management or an important concept with no practical application?

    USGS Publications Warehouse

    Groffman, P.M.; Baron, Jill S.; Blett, T.; Gold, A.J.; Goodman, I.; Gunderson, L.H.; Levinson, B.M.; Palmer, Margaret A.; Paerl, H.W.; Peterson, G.D.; Poff, N.L.; Rejeski, D.W.; Reynolds, J.F.; Turner, M.G.; Weathers, K.C.; Wiens, J.

    2006-01-01

    An ecological threshold is the point at which there is an abrupt change in an ecosystem quality, property or phenomenon, or where small changes in an environmental driver produce large responses in the ecosystem. Analysis of thresholds is complicated by nonlinear dynamics and by multiple factor controls that operate at diverse spatial and temporal scales. These complexities have challenged the use and utility of threshold concepts in environmental management despite great concern about preventing dramatic state changes in valued ecosystems, the need for determining critical pollutant loads and the ubiquity of other threshold-based environmental problems. In this paper we define the scope of the thresholds concept in ecological science and discuss methods for identifying and investigating thresholds using a variety of examples from terrestrial and aquatic environments, at ecosystem, landscape and regional scales. We end with a discussion of key research needs in this area.

  19. Impact of different cleaning processes on the laser damage threshold of antireflection coatings for Z-Backlighter optics at Sandia National Laboratories

    DOE PAGES

    Field, Ella; Bellum, John; Kletecka, Damon

    2014-11-06

    We have examined how different cleaning processes affect the laser-induced damage threshold of antireflection coatings for large dimension, Z-Backlighter laser optics at Sandia National Laboratories. Laser damage thresholds were measured after the coatings were created, and again 4 months later to determine which cleaning processes were most effective. There is a nearly twofold increase in laser-induced damage threshold between the antireflection coatings that were cleaned and those that were not cleaned. Aging of the coatings after 4 months resulted in even higher laser-induced damage thresholds. Also, the laser-induced damage threshold results revealed that every antireflection coating had a high defect density, despite the cleaning process used, which indicates that improvements to either the cleaning or deposition processes should provide even higher laser-induced damage thresholds.

  20. GEOMORPHIC THRESHOLDS AND CHANNEL MORPHOLOGY IN LARGE RIVERS

    EPA Science Inventory

    Systematic changes in channel morphology occur as channel gradient, streamflow, and sediment character change and interact. Geomorphic thresholds of various kinds are useful metrics to define these changes along the river network, as they are based on in-channel processes that d...

  1. A critique of the use of indicator-species scores for identifying thresholds in species responses

    USGS Publications Warehouse

    Cuffney, Thomas F.; Qian, Song S.

    2013-01-01

    Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses that, when coupled with the inability of TITAN to identify thresholds accurately and consistently, do not support the aggregation of individual species thresholds into a community threshold.

  2. Evaluation of ERA-Interim precipitation data in complex terrain

    NASA Astrophysics Data System (ADS)

    Gao, Lu; Bernhardt, Matthias; Schulz, Karsten

    2013-04-01

    Precipitation controls a large variety of environmental processes and is an essential input parameter for land surface models in, e.g., hydrology, ecology and climatology. However, rain gauge networks, which provide the necessary information, are commonly sparse in complex terrain, especially in high mountainous regions. Reanalysis products (e.g. ERA-40 and NCEP-NCAR) have therefore been increasingly applied as surrogate data in recent years. Although they continue to improve, previous studies showed that these products should be objectively evaluated because of their various uncertainties. In this study, we evaluated precipitation data from ERA-Interim, the latest reanalysis product developed by ECMWF. ERA-Interim daily total precipitation is compared with the high-resolution gridded observation dataset E-OBS on 0.25°×0.25° grids for the period 1979-2010 over the central Alps (45.5-48°N, 6.25-11.5°E). Wet and dry days are defined using different threshold values (0.5 mm, 1 mm, 5 mm, 10 mm and 20 mm). The correspondence ratio (CR), the ratio of days when precipitation occurs in both the ERA-Interim and E-OBS datasets, is applied for frequency comparison. The results show that ERA-Interim captures precipitation occurrence very well, with CR ranging from 0.80 to 0.97 for the 0.5 mm to 20 mm thresholds. However, the bias in intensity increases with rising thresholds. The mean absolute error (MAE) varies between 4.5 mm day-1 and 9.5 mm day-1 on wet days for the whole area. In terms of the mean annual cycle, ERA-Interim has almost the same standard deviation of the interannual variability of daily precipitation as E-OBS, 1.0 mm day-1. Significant wet biases occur in ERA-Interim throughout the warm season (May to August) and dry biases in the cold season (November to February). The spatial distribution of mean annual daily precipitation shows that ERA-Interim significantly underestimates precipitation intensity in the high mountains and on the northern flank of the Alpine chain from November to March, while pronounced overestimation occurs on the southern flank of the Alps. The poor representation of topography and flow-related characteristics in the ERA-Interim model is possibly responsible for the bias; in particular, the blocking effect of mountains on moisture is only weakly captured. The comparison demonstrates that ERA-Interim precipitation intensity needs bias correction for further alpine climate studies, although it reasonably captures precipitation frequency. This critical evaluation not only diagnoses the data quality of ERA-Interim but also provides evidence for downscaling and bias correction of reanalysis products in complex terrain.
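
    A minimal sketch of the correspondence-ratio comparison (the synthetic series and this particular CR definition are assumptions; the study's exact computation may differ):

      # Fraction of observed wet days (above a threshold) that are also wet in the
      # reanalysis, evaluated for several wet-day thresholds.
      import numpy as np

      rng = np.random.default_rng(4)
      obs = rng.gamma(0.6, 4.0, 365)                  # synthetic daily precipitation (mm/day)
      rea = obs * rng.lognormal(0.0, 0.4, 365)        # synthetic "reanalysis" with multiplicative error

      for thr in (0.5, 1, 5, 10, 20):
          obs_wet = obs >= thr
          cr = np.mean(rea[obs_wet] >= thr) if obs_wet.any() else float("nan")
          print(f"threshold {thr:>4} mm: CR = {cr:.2f}")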

  3. Female Choice or Male Sex Drive? The Advantages of Male Body Size during Mating in Drosophila Melanogaster.

    PubMed

    Jagadeeshan, Santosh; Shah, Ushma; Chakrabarti, Debarti; Singh, Rama S

    2015-01-01

    The mating success of larger male Drosophila melanogaster in the laboratory and the wild has traditionally been explained by female choice, even though the reasons are generally hard to reconcile. Female choice can explain this success by virtue of females taking less time to mate with preferred males, but so can the more aggressive or persistent courtship efforts of large males. Since mating is a negotiation between the two sexes, the behaviors of both are likely to interact and influence mating outcomes. Using a series of assays, we explored these negotiations by testing the relative influence of male behaviors on the female courtship arousal threshold, which is the time taken for females to accept copulation. Our results show that large males indeed have higher copulation success compared to smaller males. Competition between two males or an increasing number of males had no influence on the female sexual arousal threshold; females therefore may have a relatively fixed 'arousal threshold' that must be reached before they are ready to mate, and larger males appear to be able to manipulate this threshold sooner. On the other hand, the females' physiological and behavioral state drastically influences mating; once females have crossed the courtship arousal threshold they take less time to mate and mate indiscriminately with large and small males. Mating more quickly with larger males may be misconstrued as female choice; our results suggest that the mating advantage of larger males may be more a result of heightened male activity and relatively less of female choice. Body size per se may not be a trait under selection by female choice, but size likely amplifies male activity and signal outputs in courtship, allowing males to influence the female arousal threshold faster.

  4. Toward computer-aided emphysema quantification on ultralow-dose CT: reproducibility of ventrodorsal gravity effect measurement and correction

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Opfer, Roland; Bülow, Thomas; Rogalla, Patrik; Steinberg, Amnon; Dharaiya, Ekta; Subramanyan, Krishna

    2007-03-01

    Computer-aided quantification of emphysema in high-resolution CT data is based on identifying low-attenuation areas below clinically determined Hounsfield thresholds. However, emphysema quantification is prone to error, since a gravity effect can influence the mean attenuation of healthy lung parenchyma by up to ±50 HU between ventral and dorsal lung areas. Comparing ultra-low-dose (7 mAs) and standard-dose (70 mAs) CT scans of each patient, we show that measurement of the ventrodorsal gravity effect is patient specific but reproducible. It can be measured and corrected in an unsupervised way using robust fitting of a linear function.
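
    A self-contained toy example of the two steps involved (the -950 HU cutoff, the synthetic attenuation values and the plain least-squares fit are assumptions for illustration, not the paper's parameters):

      # Estimate a linear ventrodorsal attenuation trend, subtract it, and compare
      # the low-attenuation-area (LAA) fraction before and after the correction.
      import numpy as np

      rng = np.random.default_rng(5)
      depth = np.linspace(0.0, 1.0, 40)                      # normalized ventral (0) to dorsal (1)
      mean_hu = -860.0 + 50.0 * (depth - 0.5) + rng.normal(0.0, 5.0, depth.size)

      slope, intercept = np.polyfit(depth, mean_hu, 1)       # fitted gravity gradient
      trend = slope * (depth - depth.mean())

      threshold_hu = -950.0                                  # assumed emphysema cutoff
      voxels = rng.normal(mean_hu[:, None], 40.0, (depth.size, 2000))
      laa_raw = np.mean(voxels < threshold_hu)
      laa_corrected = np.mean(voxels - trend[:, None] < threshold_hu)
      print(f"gradient {slope:.1f} HU/depth, LAA raw {100*laa_raw:.2f}%, corrected {100*laa_corrected:.2f}%")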

  5. Syntactic error modeling and scoring normalization in speech recognition: Error modeling and scoring normalization in the speech recognition task for adult literacy training

    NASA Technical Reports Server (NTRS)

    Olorenshaw, Lex; Trawick, David

    1991-01-01

    The purpose was to develop a speech recognition system to be able to detect speech which is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was provided. In continuous speech, the system was tested to be able to provide above 80 pct. correct acceptance of words, while correctly rejecting over 80 pct. of incorrectly pronounced words.

  6. Operational early warning of shallow landslides in Norway: Evaluation of landslide forecasts and associated challenges

    NASA Astrophysics Data System (ADS)

    Dahl, Mads-Peter; Colleuille, Hervé; Boje, Søren; Sund, Monica; Krøgli, Ingeborg; Devoli, Graziella

    2015-04-01

    The Norwegian Water Resources and Energy Directorate (NVE) runs a national early warning system (EWS) for shallow landslides in Norway. Slope failures included in the EWS are debris slides, debris flows, debris avalanches and slush flows. The EWS has been operational on a national scale since 2013 and consists of (a) quantitative landslide thresholds and daily hydro-meteorological prognoses; (b) daily qualitative expert evaluation of the prognoses and additional data in the decision to determine warning levels; and (c) publication of warning levels through various custom-built internet platforms. The effectiveness of an EWS depends on both the quality of the forecasts being issued and the communication of the forecasts to the public. In this analysis, a preliminary evaluation of landslide forecasts from the Norwegian EWS within the period 2012-2014 is presented. Criteria for categorizing forecasts as correct, missed events or false alarms are discussed, and concrete examples of forecasts falling into the latter two categories are presented. The evaluation shows a rate of correct forecasts exceeding 90%. However, correct forecast categorization is sometimes difficult, particularly due to poorly documented landslide events. Several challenges have to be met in the process of further lowering the rates of missed events and false alarms in the EWS. Among others, these include better implementation of susceptibility maps in landslide forecasting, more detailed regionalization of hydro-meteorological landslide thresholds, improved prognoses for precipitation, snowmelt and soil water content, as well as the build-up of more experience among the people performing landslide forecasting.

  7. Puberty timing associated with diabetes, cardiovascular disease and also diverse health outcomes in men and women: the UK Biobank study.

    PubMed

    Day, Felix R; Elks, Cathy E; Murray, Anna; Ong, Ken K; Perry, John R B

    2015-06-18

    Early puberty timing is associated with higher risks for type 2 diabetes (T2D) and cardiovascular disease in women and therefore represents a potential target for early preventive interventions. We characterised the range of diseases and other adverse health outcomes associated with early or late puberty timing in men and women in the very large UK Biobank study. Recalled puberty timing and past/current diseases were self-reported by questionnaire. We limited analyses to individuals of White ethnicity (250,037 women; 197,714 men) and to disease outcomes with at least 500 cases (~0.2% prevalence), and we applied stringent correction for multiple testing (corrected threshold P < 7.48 × 10^-5). In models adjusted for socioeconomic position and adiposity/body composition variables, both in women and men separately, earlier puberty timing was associated with higher risks for angina, hypertension and T2D. Furthermore, compared to the median/average group, earlier or later puberty timing in women or men was associated with higher risks for 48 adverse outcomes, across a range of cancers, cardio-metabolic, gynaecological/obstetric, gastrointestinal, musculoskeletal, and neuro-cognitive categories. Notably, both early and late menarche were associated with higher risks for early natural menopause in women. Puberty timing in both men and women appears to have a profound impact on later health.
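
    A back-of-the-envelope check of the multiple-testing cutoff (assuming, for illustration only, a Bonferroni-style family-wise correction; the paper's exact procedure is not restated here):

      # The quoted threshold P < 7.48e-5 corresponds to alpha / m with
      # m ~ 0.05 / 7.48e-5 ~ 668 tests under a Bonferroni-style correction.
      alpha = 0.05
      quoted = 7.48e-5
      print(f"implied number of tests: {alpha / quoted:.0f}")
      print(f"Bonferroni threshold for 668 tests: {alpha / 668:.2e}")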

  8. Practical Weak-lensing Shear Measurement with Metacalibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheldon, Erin S.; Huff, Eric M.

    2017-05-20

    Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
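
    A schematic, self-contained sketch of the response idea at the heart of metacalibration (this is not the authors' pipeline; the toy biased estimator and all numbers are invented to show how the finite-difference response and the <e>/<R> correction interact):

      # Toy metacalibration-style correction: estimate the estimator response R by
      # finite differences of small artificial shears, then recover the ensemble
      # shear as <e> / <R>.
      import numpy as np

      rng = np.random.default_rng(6)
      true_shear, dg = 0.02, 0.01
      intrinsic_e = rng.normal(0.0, 0.2, 50_000)       # intrinsic ellipticities
      response_true = rng.normal(0.8, 0.05, 50_000)    # hidden per-object estimator response

      def measure(g, r, e0=intrinsic_e):
          """Toy estimator: responds to shear with gain r instead of 1."""
          return e0 + r * g

      e_obs = measure(true_shear, response_true)
      R = (measure(true_shear + dg, response_true) -
           measure(true_shear - dg, response_true)) / (2.0 * dg)

      print(f"true shear {true_shear}, naive <e> = {e_obs.mean():.4f}, "
            f"metacal-style <e>/<R> = {e_obs.mean() / R.mean():.4f}")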

  9. Searching Remote Homology with Spectral Clustering with Symmetry in Neighborhood Cluster Kernels

    PubMed Central

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation or dependency on hard threshold in similarity measure in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of “recent” paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally reducing inter-cluster walks. When combined with the corrections based on modified symmetry based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also show the superior performance scores provided by the proposed kernels. Similar performance improvement also is found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry based correction achieves superior performance for unsupervised remote homolog detection even in multi-domain and promiscuous domain proteins from Genolevures database families with better biological relevance. Source code available upon request. Contact: sarkar@labri.fr. PMID:23457439

  10. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    PubMed

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation or dependency on hard threshold in similarity measure in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally reducing inter-cluster walks. When combined with the corrections based on modified symmetry based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also show the superior performance scores provided by the proposed kernels. Similar performance improvement also is found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry based correction achieves superior performance for unsupervised remote homolog detection even in multi-domain and promiscuous domain proteins from Genolevures database families with better biological relevance. Source code available upon request. sarkar@labri.fr.

  11. Biology relevant to space radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fry, R J.M.

    There are only very limited data on the health effects to humans from the two major components of the radiations in space, namely protons and heavy ions. As a result, predictions of the accompanying effects must be based either on (1) data generated through studies of experimental systems exposed on earth at rates and fluences higher than those in space, or (2) extrapolations from studies of gamma and x rays. Better information is needed about the doses, dose rates, and the energy and LET spectra of the radiations at the organ level that are anticipated to be encountered during extended space missions. In particular, there is a need for better estimates of the relationship between radiation quality and biological effects. In the case of deterministic effects, it is the threshold that is important. The possibility of the occurrence of a large solar particle event (SPE) requires that such effects be considered during extended space missions. Analyses suggest, however, that it is feasible to provide sufficient shielding so as to reduce such effects to acceptable levels, particularly if the dose rates can be limited. If these analyses prove correct, the primary biological risks will be the stochastic effects (latent cancer induction). The contribution of one large SPE to the risk of stochastic effects, while undesirable, will not be large in comparison to the potential total dose on a mission of long duration.

  12. Lift-enhancement in the gliding paradise tree snake

    NASA Astrophysics Data System (ADS)

    Krishnan, Anush; Barba, Lorena A.

    2012-11-01

    The paradise tree snake is a good glider, despite having no wing-like appendages. This snake jumps from tree branches, flattens its body and adopts an S-shape, then glides while undulating laterally in the air. Previous experimental studies in wind and water tunnels showed that the lift of the snake cross-section can peak markedly at about 35° angle of attack, a surprising feature that hints at a lift-enhancing mechanism. Here, we report numerical simulations on the snake cross-section using an immersed boundary method, which also show the peak in lift above a certain Reynolds number threshold. Our visualizations reveal a change in the vortex shedding pattern at that angle of attack. We also study variants of the cross-section, removing the anatomical overhanging lips on the fore and aft, and observe that they have a large impact on the flow field. The best performance is in fact obtained with the anatomically correct shape of the snake.

  13. Maximal gene number maintainable by stochastic correction - The second error threshold.

    PubMed

    Hubai, András G; Kun, Ádám

    2016-09-21

    There is still no general solution to Eigen's Paradox, the chicken-or-egg problem of the origin of life: neither accurate copying, nor long genomes could have evolved without one another being established beforehand. But an array of small, individually replicating genes might offer a workaround, provided that multilevel selection assists the survival of the ensemble. There are two key difficulties that such a system has to overcome: the non-synchronous replication of genes, and their random assortment into daughter cells (the units of higher-level selection) upon fission. Here we find, using the Stochastic Corrector Model framework, that a large number (τ≥90) of genes can coexist. Furthermore, the system can tolerate about 10% replication rate asymmetry (competition) among the genes. On this basis, we put forward a plausible (and testable!) scenario for how novel genes could have been incorporated into early living systems: a route to complex metabolism. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Study design in high-dimensional classification analysis.

    PubMed

    Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen

    2016-10-01

    Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Clinical research in small genomically stratified patient populations.

    PubMed

    Martin-Liberal, J; Rodon, J

    2017-07-01

    The paradigm of early drug development in cancer is shifting from 'histology-oriented' to 'molecularly oriented' clinical trials. This change can be attributed to the vast amount of tumour biology knowledge generated by large international research initiatives such as The Cancer Genome Atlas (TCGA) and the use of next generation sequencing (NGS) techniques developed in recent years. However, targeting infrequent molecular alterations entails a series of special challenges. The optimal molecular profiling method, the lack of standardised biological thresholds, inter- and intra-tumour heterogeneity, availability of enough tumour material, correct clinical trial design, attrition rate, logistics and costs are only some of the issues that need to be taken into consideration in clinical research in small genomically stratified patient populations. This article examines the most relevant challenges inherent to clinical research in these populations. Moreover, perspectives from academia are reviewed, as well as initiatives to be taken in forthcoming years. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Communication: rate coefficients from quasiclassical trajectory calculations from the reverse reaction: The Mu + H2 reaction re-visited.

    PubMed

    Homayoon, Zahra; Jambrina, Pablo G; Aoiz, F Javier; Bowman, Joel M

    2012-07-14

    In a previous paper [P. G. Jambrina et al., J. Chem. Phys. 135, 034310 (2011)] various calculations of the rate coefficient for the Mu + H(2) → MuH + H reaction were presented and compared to experiment. The widely used standard quasiclassical trajectory (QCT) method was shown to overestimate the rate coefficients by several orders of magnitude over the temperature range 200-1000 K. This was attributed to a major failure of that method to describe the correct threshold for the reaction owing to the large difference in zero-point energies (ZPE) of the reactant H(2) and product MuH (∼0.32 eV). In this Communication we show that by performing standard QCT calculations for the reverse reaction and then applying detailed balance, the resulting rate coefficient is in very good agreement with the other computational results that respect the ZPE (as well as with the experiment), but which are more demanding computationally.

  17. Communication: Rate coefficients from quasiclassical trajectory calculations from the reverse reaction: The Mu + H2 reaction re-visited

    NASA Astrophysics Data System (ADS)

    Homayoon, Zahra; Jambrina, Pablo G.; Aoiz, F. Javier; Bowman, Joel M.

    2012-07-01

    In a previous paper [P. G. Jambrina et al., J. Chem. Phys. 135, 034310 (2011), 10.1063/1.3611400] various calculations of the rate coefficient for the Mu + H2 → MuH + H reaction were presented and compared to experiment. The widely used standard quasiclassical trajectory (QCT) method was shown to overestimate the rate coefficients by several orders of magnitude over the temperature range 200-1000 K. This was attributed to a major failure of that method to describe the correct threshold for the reaction owing to the large difference in zero-point energies (ZPE) of the reactant H2 and product MuH (˜0.32 eV). In this Communication we show that by performing standard QCT calculations for the reverse reaction and then applying detailed balance, the resulting rate coefficient is in very good agreement with the other computational results that respect the ZPE (as well as with the experiment), but which are more demanding computationally.
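
    As a schematic of the detailed-balance step used above: the forward rate coefficient follows from the reverse one via the equilibrium constant, k_f(T) = K_eq(T) k_r(T). The sketch below builds a crude K_eq from a partition-function ratio and an energy gap; all numerical values are placeholders, not the paper's data.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def forward_rate_from_reverse(k_reverse, delta_e, q_ratio, T):
    """Detailed balance: k_f(T) = K_eq(T) * k_r(T).

    K_eq is approximated here as q_ratio * exp(-delta_e / (k_B T)), where
    delta_e is the forward reaction's energy cost including zero-point
    energies and q_ratio is a (placeholder) ratio of partition functions.
    """
    K_eq = q_ratio * np.exp(-delta_e / (K_B * np.asarray(T, dtype=float)))
    return K_eq * np.asarray(k_reverse, dtype=float)

# Illustrative numbers only (not the MuH + H data from the paper):
T = np.array([200.0, 500.0, 1000.0])              # K
k_rev = np.array([1.0e-14, 5.0e-13, 4.0e-12])      # cm^3 molecule^-1 s^-1
print(forward_rate_from_reverse(k_rev, delta_e=0.32, q_ratio=0.8, T=T))
```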

  18. Body size phenology in a regional bee fauna: a temporal extension of Bergmann's rule.

    PubMed

    Osorio-Canadas, Sergio; Arnan, Xavier; Rodrigo, Anselm; Torné-Noguera, Anna; Molowny, Roberto; Bosch, Jordi

    2016-12-01

    Bergmann's rule originally described a positive relationship between body size and latitude in warm-blooded animals. Larger animals, with a smaller surface/volume ratio, are better able to conserve heat in cooler climates (thermoregulatory hypothesis). Studies on endothermic vertebrates have provided support for Bergmann's rule, whereas studies on ectotherms have yielded conflicting results. If the thermoregulatory hypothesis is correct, negative relationships between body size and temperature should occur along temporal as well as geographical gradients. To explore this possibility, we analysed seasonal activity patterns in a bee fauna comprising 245 species. In agreement with our hypothesis of a different relationship for large (endothermic) and small (ectothermic) species, we found that species larger than 27.81 mg (dry weight) followed Bergmann's rule, whereas species below this threshold did not. Our results represent a temporal extension of Bergmann's rule and indicate that body size and thermal physiology play an important role in structuring community phenology. © 2016 John Wiley & Sons Ltd/CNRS.

  19. Sensor Alerting Capability

    NASA Astrophysics Data System (ADS)

    Henriksson, Jakob; Bermudez, Luis; Satapathy, Goutam

    2013-04-01

    There is a large amount of sensor data generated today by various sensors, from in-situ buoys to mobile underwater gliders. Providing sensor data to users through standardized services, language and data model is the promise of OGC's Sensor Web Enablement (SWE) initiative. As the amount of data grows, it is becoming difficult for data providers, planners and managers to ensure the reliability of data and services and to monitor critical data changes. Intelligent Automation Inc. (IAI) is developing a net-centric alerting capability to address these issues. The capability is built on Sensor Observation Services (SOSs), which are used to collect and monitor sensor data. Alerts can be configured at the service level and at the sensor data level. For example, the system can alert on irregular data-delivery events or on a geo-temporal statistic of sensor data crossing a preset threshold. The capability provides multiple delivery mechanisms and protocols, including traditional techniques such as email and RSS. With this capability, decision makers can monitor their assets and data streams, correct failures, or be alerted about an approaching phenomenon.

  20. Recognizing millions of consistently unidentified spectra across hundreds of shotgun proteomics datasets

    PubMed Central

    Griss, Johannes; Perez-Riverol, Yasset; Lewis, Steve; Tabb, David L.; Dianes, José A.; del-Toro, Noemi; Rurik, Marc; Walzer, Mathias W.; Kohlbacher, Oliver; Hermjakob, Henning; Wang, Rui; Vizcaíno, Juan Antonio

    2016-01-01

    Mass spectrometry (MS) is the main technology used in proteomics approaches. However, on average 75% of spectra analysed in an MS experiment remain unidentified. We propose to use large-scale spectrum clustering to shed light on these unidentified spectra. PRoteomics IDEntifications database (PRIDE) Archive is one of the largest MS proteomics public data repositories worldwide. By clustering all tandem MS spectra publicly available in PRIDE Archive, coming from hundreds of datasets, we were able to consistently characterize three distinct groups of spectra: 1) incorrectly identified spectra, 2) spectra correctly identified but below the set scoring threshold, and 3) truly unidentified spectra. Using a multitude of complementary analysis approaches, we were able to identify fewer than 20% of the consistently unidentified spectra. The complete spectrum clustering results are available through the new version of the PRIDE Cluster resource (http://www.ebi.ac.uk/pride/cluster). This resource is intended, among other aims, to encourage and simplify further investigation into these unidentified spectra. PMID:27493588

  1. Recognizing millions of consistently unidentified spectra across hundreds of shotgun proteomics datasets.

    PubMed

    Griss, Johannes; Perez-Riverol, Yasset; Lewis, Steve; Tabb, David L; Dianes, José A; Del-Toro, Noemi; Rurik, Marc; Walzer, Mathias W; Kohlbacher, Oliver; Hermjakob, Henning; Wang, Rui; Vizcaíno, Juan Antonio

    2016-08-01

    Mass spectrometry (MS) is the main technology used in proteomics approaches. However, on average 75% of spectra analysed in an MS experiment remain unidentified. We propose to use large-scale spectrum clustering to shed light on these unidentified spectra. PRoteomics IDEntifications database (PRIDE) Archive is one of the largest MS proteomics public data repositories worldwide. By clustering all tandem MS spectra publicly available in PRIDE Archive, coming from hundreds of datasets, we were able to consistently characterize three distinct groups of spectra: 1) incorrectly identified spectra, 2) spectra correctly identified but below the set scoring threshold, and 3) truly unidentified spectra. Using a multitude of complementary analysis approaches, we were able to identify fewer than 20% of the consistently unidentified spectra. The complete spectrum clustering results are available through the new version of the PRIDE Cluster resource (http://www.ebi.ac.uk/pride/cluster). This resource is intended, among other aims, to encourage and simplify further investigation into these unidentified spectra.

  2. Random sequential adsorption of straight rigid rods on a simple cubic lattice

    NASA Astrophysics Data System (ADS)

    García, G. D.; Sanchez-Varretti, F. O.; Centres, P. M.; Ramirez-Pastor, A. J.

    2015-10-01

    Random sequential adsorption of straight rigid rods of length k (k-mers) on a simple cubic lattice has been studied by numerical simulations and finite-size scaling analysis. The k-mers were irreversibly and isotropically deposited into the lattice. The calculations were performed by using a new theoretical scheme, whose accuracy was verified by comparison with rigorous analytical data. The results, obtained for k ranging from 2 to 64, revealed that (i) the jamming coverage for dimers (k = 2) is θj = 0.918388(16); our result corrects the previously reported value of θj = 0.799(2) (Tarasevich and Cherkasova, 2007); (ii) θj decreases with increasing k-mer size, with θj(∞) = 0.4045(19) being the limiting coverage for large k; and (iii) the ratio between percolation threshold and jamming coverage shows a non-universal behavior, monotonically decreasing to zero with increasing k.
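
    A minimal Monte Carlo sketch of the dimer (k = 2) case can make the jamming-coverage idea concrete. This is a brute-force toy, not the authors' scheme; the lattice size, seed and stopping rule are illustrative choices.

```python
import numpy as np

def rsa_dimer_jamming(L=10, seed=1):
    """Random sequential adsorption of dimers (k = 2) on an L x L x L simple
    cubic lattice with periodic boundaries. Dimers are proposed uniformly
    (random site, random axis) and rejected if either site is occupied;
    the loop stops once no dimer fits anywhere. Returns the jamming coverage.
    """
    rng = np.random.default_rng(seed)
    occ = np.zeros((L, L, L), dtype=bool)

    def partner(site, axis):
        nb = list(site)
        nb[axis] = (nb[axis] + 1) % L
        return tuple(nb)

    def any_dimer_fits():
        xs, ys, zs = np.nonzero(~occ)
        return any(not occ[partner((x, y, z), a)]
                   for x, y, z in zip(xs, ys, zs) for a in range(3))

    fails = 0
    while True:
        site = tuple(rng.integers(0, L, size=3))
        nb = partner(site, int(rng.integers(3)))
        if not occ[site] and not occ[nb]:
            occ[site] = occ[nb] = True
            fails = 0
        else:
            fails += 1
            if fails > 20 * L**3:
                if any_dimer_fits():
                    fails = 0          # a free dimer slot remains; keep trying
                else:
                    break              # jammed: no dimer fits anywhere
    return occ.mean()

# One small-lattice run; averaging many runs on larger lattices approaches
# the reported theta_j = 0.918388(16).
print(rsa_dimer_jamming(L=10))
```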

  3. Higgs boson mass and complex sneutrino dark matter in the supersymmetric inverse seesaw models

    NASA Astrophysics Data System (ADS)

    Guo, Jun; Kang, Zhaofeng; Li, Tianjun; Liu, Yandong

    2014-02-01

    The discovery of a relatively heavy Standard Model (SM)-like Higgs boson challenges naturalness of the minimal supersymmetric standard model (MSSM) from both the Higgs and dark matter (DM) sectors. We study these two aspects in the MSSM extended by the low-scale inverse seesaw mechanism. Firstly, it admits a sizable radiative contribution to the Higgs boson mass m_h, up to ~4 GeV in the case of an IR fixed point of the coupling Y_ν L H_u ν^c and a large sneutrino mixing. Secondly, the lightest sneutrino, highly complex as expected, is a viable thermal DM candidate. Owing to the correct DM relic density and the XENON100 experimental constraints, two scenarios survive: a Higgs-portal complex DM with mass lying around the Higgs pole or above the W threshold, and a coannihilating DM with slim prospects of detection. Given an extra family of sneutrinos, both scenarios work naturally when we attempt to suppress the left-handed sneutrino component of the DM, which is in tension with enhancing m_h.

  4. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    NASA Astrophysics Data System (ADS)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.

  5. The Delineation of Coral Bleaching Thresholds and Future Reef Health, Little Cayman Cayman Islands

    NASA Astrophysics Data System (ADS)

    Manfrino, C.; Van Hooidonk, R. J.; Manzello, D.; Hendee, J.

    2011-12-01

    The global rise in sea temperature through anthropogenic climate change is affecting coral reef ecosystems through a phenomenon known as coral bleaching, a common reaction to thermally induced physiological stress in reef-building corals that often leads to coral mortality. We describe aspects of the most prevalent episode of coral bleaching ever recorded at Little Cayman, Cayman Islands, during the fall of 2009. The scleractinian coral species exhibiting the greatest susceptibility to thermal stress and bleaching in Little Cayman were, in order, Siderastrea siderea, Montastraea annularis, and Montastraea faveolata, while Diploria strigosa and Agaricia spp. were less susceptible, yet still showed considerable bleaching prevalence and severity. In contrast, the least susceptible were Porites porites, Porites astreoides, and Montastraea cavernosa. These observations and other reported observations of coral bleaching, together with 29 years (1982-2010) of satellite-derived sea surface temperatures, were used in a Degree Heating Weeks (DHW) and Peirce Skill Score (PSS) analysis to calculate a bleaching threshold above which bleaching was expected to occur. A threshold of 4.2 DHW had the highest skill, with a PSS of 0.70. This threshold and susceptibility ranking are used in combination with SST data from global, coupled ocean-atmosphere general circulation models (GCMs) from the fourth IPCC assessment to forecast future reef health on Little Cayman. While these GCMs possess skill in reproducing many aspects of climate, they vary in their ability to correctly capture such parameters as the tropical ocean seasonal cycle and El Niño Southern Oscillation (ENSO) variability. These model weaknesses likely reduce the skill of coral bleaching predictions. To overcome this, a multi-model ensemble of GCMs is corrected for the mean, annual cycle and ENSO variability prior to calculating future thermal stress. Preliminary results show that from 2045 onward, Little Cayman is likely to see more than two massive bleaching episodes per decade.
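
    To make the DHW bookkeeping concrete, here is a small sketch using the standard NOAA-style accumulation (HotSpots of at least 1 °C above the maximum monthly mean, summed over a rolling 12-week window), simplified to weekly SST values. The MMM of 29.3 °C and the toy SST series are assumptions; only the 4.2 °C-week cutoff comes from the study above.

```python
import numpy as np

def degree_heating_weeks(sst_weekly, mmm, window=12, hotspot_min=1.0):
    """Degree Heating Weeks from a weekly SST series (NOAA-style, simplified):
    accumulate HotSpots (SST minus the maximum monthly mean, MMM) of at least
    `hotspot_min` degC over a rolling `window`-week period. Returns degC-weeks.
    """
    hotspot = np.clip(np.asarray(sst_weekly, dtype=float) - mmm, 0.0, None)
    hotspot[hotspot < hotspot_min] = 0.0
    return np.array([hotspot[max(0, i - window + 1): i + 1].sum()
                     for i in range(hotspot.size)])

# Toy series: a warm spell above an assumed MMM of 29.3 degC.
sst = [29.0, 29.3, 29.8, 30.4, 30.8, 31.0, 30.9, 30.6, 30.2, 29.7, 29.3, 29.1]
dhw = degree_heating_weeks(sst, mmm=29.3)
print(dhw.round(1))
print("bleaching expected:", dhw >= 4.2)   # 4.2 degC-weeks: the best-skill threshold above
```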

  6. Examination of a Method to Determine the Reference Region for Calculating the Specific Binding Ratio in Dopamine Transporter Imaging.

    PubMed

    Watanabe, Ayumi; Inoue, Yusuke; Asano, Yuji; Kikuchi, Kei; Miyatake, Hiroki; Tokushige, Takanobu

    2017-01-01

    The specific binding ratio (SBR) was first reported by Tossici-Bolt et al. as a quantitative indicator for dopamine transporter (DAT) imaging. It is defined as the ratio of the specific binding concentration of the striatum to the non-specific binding concentration of the whole brain other than the striatum. The non-specific binding concentration is calculated from a region of interest (ROI) set 20 mm inside the outer contour, which is defined by a threshold technique. Tossici-Bolt et al. used a 50% threshold, but with a 50% threshold we sometimes could not define the ROI for the non-specific binding concentration (the reference region) or calculate the SBR appropriately. We therefore sought a new method for determining the reference region when calculating the SBR. Using data from 20 patients who had undergone DAT imaging in our hospital, we calculated the non-specific binding concentration by two methods: fixing the threshold that defines the reference region at specific values (the fixing method), and having an examiner visually optimize the reference region for each examination (the visual optimization method). First, we assessed the reference region of each method visually, and afterward we quantitatively compared the SBRs calculated with each method. In the visual assessment, the scores of the fixing method at 30% and of the visual optimization method were higher than the scores of the fixing method at other values, with or without scatter correction. In the quantitative assessment, the SBR obtained by visual optimization of the reference region, based on the consensus of three radiological technologists, was used as a baseline (the standard method). The values of SBR showed good agreement between the standard method and both the fixing method at 30% and the visual optimization method, with or without scatter correction. Therefore, the fixing method at 30% and the visual optimization method were equally suitable for determining the reference region.

  7. The pre-operative levels of haemoglobin in the blood can be used to predict the risk of allogenic blood transfusion after total knee arthroplasty.

    PubMed

    Maempel, J F; Wickramasinghe, N R; Clement, N D; Brenkel, I J; Walmsley, P J

    2016-04-01

    The pre-operative level of haemoglobin is the strongest predictor of the peri-operative requirement for blood transfusion after total knee arthroplasty (TKA). There are, however, no studies reporting a value that could be considered appropriate pre-operatively. This study aimed to identify threshold pre-operative levels of haemoglobin that would predict the requirement for blood transfusion in patients who undergo TKA. Analysis of receiver operating characteristic (ROC) curves of 2284 consecutive patients undergoing unilateral TKA was used to determine gender-specific thresholds predicting peri-operative transfusion with the highest combined sensitivity and specificity (area under ROC curve 0.79 for males; 0.78 for females). Threshold levels of 13.75 g/dl for males and 12.75 g/dl for females were identified. The rates of transfusion above these levels were 3.37% in males and 7.11% in females, while below these levels they were 16.13% and 28.17%, respectively. Pre-operative anaemia increased the rate of transfusion by 6.38 times in males and 6.27 times in females. Blood transfusion was associated with an increased incidence of early post-operative confusion (odds ratio (OR) = 3.44), cardiac arrhythmia (OR = 5.90), urinary catheterisation (OR = 1.60), deep infection (OR = 4.03) and mortality (OR = 2.35) one year post-operatively, and with an increased length of stay (eight days vs six days, p < 0.001). Uncorrected low pre-operative levels of haemoglobin put patients at potentially modifiable risk, and attempts should be made to correct this before TKA. Target thresholds for pre-operative haemoglobin levels in males and females are proposed. Low pre-operative haemoglobin levels put patients at unnecessary risk and should be corrected prior to surgery. ©2016 The British Editorial Society of Bone & Joint Surgery.
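
    The "highest combined sensitivity and specificity" criterion is equivalent to maximising Youden's J over candidate cutoffs on the ROC curve. A minimal sketch of that search is shown below on synthetic data; the logistic risk model and all numbers are illustrative, not the study's data.

```python
import numpy as np

def best_threshold_youden(hb_values, transfused):
    """Return the haemoglobin cutoff maximising sensitivity + specificity
    (Youden's J), treating low pre-operative haemoglobin as the positive
    predictor of transfusion. Candidate cutoffs are the observed values."""
    hb = np.asarray(hb_values, dtype=float)
    y = np.asarray(transfused, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(hb):
        at_risk = hb <= t                         # predicted "will need transfusion"
        sens = np.sum(at_risk & y) / max(1, np.sum(y))
        spec = np.sum(~at_risk & ~y) / max(1, np.sum(~y))
        if sens + spec - 1.0 > best_j:
            best_t, best_j = float(t), sens + spec - 1.0
    return best_t, best_j

# Synthetic g/dl values and outcomes (not the study's data).
rng = np.random.default_rng(2)
hb = np.round(rng.normal(14.0, 1.2, 500), 2)
risk = 1.0 / (1.0 + np.exp(2.5 * (hb - 13.0)))    # lower Hb -> higher transfusion risk
outcome = rng.uniform(size=500) < risk
print(best_threshold_youden(hb, outcome))
```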

  8. The 6-min push test is reliable and predicts low fitness in spinal cord injury.

    PubMed

    Cowan, Rachel E; Callahan, Morgan K; Nash, Mark S

    2012-10-01

    The objective of this study is to assess 6-min push test (6MPT) reliability, determine whether the 6MPT is sensitive to fitness differences, and assess whether 6MPT distance predicts fitness level in persons with spinal cord injury (SCI) or disease. Forty individuals with SCI who could self-propel a manual wheelchair completed an incremental arm-crank peak oxygen consumption assessment and two 6MPTs across 3 d (37% tetraplegia (TP), 63% paraplegia (PP), 85% men, 70% white, 63% Hispanic, mean age = 34 ± 10 yr, mean duration of injury = 13 ± 10 yr, and mean body mass index = 24 ± 5 kg/m2). Intraclass correlation and Bland-Altman plots assessed 6MPT distance (m) reliability. The Mann-Whitney U test compared the 6MPT distance (m) of high and low fitness groups for TP and PP. The fitness status prediction was developed using N = 30 and validated in N = 10 (the validation group, VG). A nonstatistical prediction approach, classifying below or above a threshold distance (TP = 445 m and PP = 604 m), was validated statistically by binomial logistic regression. Accuracy, sensitivity, and specificity were computed to evaluate the threshold approach. Intraclass correlation coefficients exceeded 0.90 for the whole sample and the TP/PP subsets. High fitness persons propelled farther than low fitness persons for both TP and PP (both P < 0.05). Binomial logistic regression (P < 0.008) predicted the same fitness levels in the VG as the threshold approach. In the VG, overall accuracy was 70%. Eighty-six percent of low fitness persons were correctly identified (sensitivity), and 33% of high fitness persons were correctly identified (specificity). The 6MPT may be a useful tool for SCI clinicians and researchers. 6MPT distance demonstrates excellent reliability and is sensitive to differences in fitness level. A 6MPT distance below the threshold distance may be an effective way to identify low fitness in persons with SCI.

  9. Method, apparatus and system to compensate for drift by physically unclonable function circuitry

    DOEpatents

    Hamlet, Jason

    2016-11-22

    Techniques and mechanisms to detect and compensate for drift by a physically uncloneable function (PUF) circuit. In an embodiment, first state information is registered as reference information to be made available for subsequent evaluation of whether drift by PUF circuitry has occurred. The first state information is associated with a first error correction strength. The first state information is generated based on a first PUF value output by the PUF circuitry. In another embodiment, second state information is determined based on a second PUF value that is output by the PUF circuitry. An evaluation of whether drift has occurred is performed based on the first state information and the second state information, the evaluation including determining whether a threshold error correction strength is exceeded concurrent with a magnitude of error being less than the first error correction strength.
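
    The patent language can be read, loosely, as: store an enrollment response together with the error-correction strength used for it, and later flag drift when the observed bit-error magnitude is still correctable but exceeds a weaker drift threshold. The sketch below is that loose reading only; the data layout, bit-error metric and threshold values are all assumptions, not the patented implementation.

```python
import numpy as np

def register_reference(puf_response, ecc_strength_bits):
    """Enrollment ('first state information'): store the raw response and the
    error-correction strength (a correctable-bit budget) associated with it."""
    return {"response": np.asarray(puf_response, dtype=np.uint8),
            "ecc_strength": int(ecc_strength_bits)}

def drift_detected(reference, new_response, drift_threshold_bits):
    """Flag drift when the bit-error magnitude exceeds a weaker drift threshold
    while still being below the enrollment error-correction strength."""
    errors = int(np.sum(reference["response"] !=
                        np.asarray(new_response, dtype=np.uint8)))
    correctable = errors <= reference["ecc_strength"]
    return (correctable and errors > drift_threshold_bits), errors

# Toy 128-bit PUF response with six bits flipped at re-evaluation.
rng = np.random.default_rng(3)
enrolled = rng.integers(0, 2, 128, dtype=np.uint8)
later = enrolled.copy()
later[rng.choice(128, size=6, replace=False)] ^= 1
ref = register_reference(enrolled, ecc_strength_bits=16)
print(drift_detected(ref, later, drift_threshold_bits=4))   # (True, 6)
```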

  10. Small step tracking - Implications for the oculomotor 'dead zone'. [eye response failure below threshold target displacements

    NASA Technical Reports Server (NTRS)

    Wyman, D.; Steinman, R. M.

    1973-01-01

    Recently Timberlake, Wyman, Skavenski, and Steinman (1972) concluded in a study of the oculomotor error signal in the fovea that 'the oculomotor dead zone is surely smaller than 10 min and may even be less than 5 min (smaller than the 0.25 to 0.5 deg dead zone reported by Rashbass (1961) with similar stimulus conditions).' The Timberlake et al. speculation is confirmed by demonstrating that the fixating eye consistently and accurately corrects target displacements as small as 3.4 min. The contact lens optical lever technique was used to study the manner in which the oculomotor system responds to small step displacements of the fixation target. Subjects did, without prior practice, use saccades to correct step displacements of the fixation target just as they correct small position errors during maintained fixation.

  11. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.

  12. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
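
    As a rough illustration of the segment-then-threshold idea (and explicitly not the authors' SFT algorithm, which fits trends between segment statistics), the sketch below tiles an image, uses tile medians and MADs to pick likely background tiles, and derives a global threshold from the background statistics; the tile size, quantile and MAD multiplier are assumptions.

```python
import numpy as np

def segment_threshold(image, seg=16, background_quantile=0.25, n_mads=5.0):
    """Tile the image into seg x seg segments, take the tiles with the lowest
    medians as background, and set a global threshold at the background median
    plus n_mads background MADs. Returns a boolean signal mask."""
    img = np.asarray(image, dtype=float)
    h, w = (img.shape[0] // seg) * seg, (img.shape[1] // seg) * seg
    tiles = img[:h, :w].reshape(h // seg, seg, w // seg, seg).swapaxes(1, 2)
    medians = np.median(tiles, axis=(2, 3))
    mads = np.median(np.abs(tiles - medians[..., None, None]), axis=(2, 3))
    bg = medians <= np.quantile(medians, background_quantile)
    threshold = np.median(medians[bg]) + n_mads * np.median(mads[bg])
    return img > threshold

# Toy image: noisy background plus one bright square "spot".
rng = np.random.default_rng(4)
im = rng.normal(100.0, 5.0, (256, 256))
im[100:140, 100:140] += 60.0
print(segment_threshold(im).sum(), "pixels flagged as signal")
```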

  13. Photo-Double Ionization: Threshold Law and Low-Energy Behavior

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.; Temkin, A.

    2007-01-01

    The threshold law for photoejection of two electrons from atoms (PDI) is derived from a modification of the Coulomb-dipole (C-D) theory. The C-D theory applies to two-electron ejection from negative ions (photo-double detachment: PDD). The modification consists of correctly accounting for the fact that in PDI the two escaping electrons see a Coulomb field asymptotically, no matter what their relative distances from the residual ion are. In the contralinear spherically symmetric model we find the analytic threshold law Q(E), i.e. the yield of residual ions, to be Q(E) ∝ E + C_W E^(γ_W) + C E^(5/4) sin[(1/2) ln E + φ]/ln E. The first and third terms are beyond the Wannier law. Our threshold law can only be rigorously justified for residual energies <= 10^-3 eV. Nevertheless, in the present experimental range (0.1-4 eV), the form, even without the second term, can be fitted to experimental results of PDI for He, Li, and Be, in contrast to the Wannier law, which shows a larger deviation from the data for Li and Be.

  14. Low threshold optical bistability in one-dimensional gratings based on graphene plasmonics.

    PubMed

    Guo, Jun; Jiang, Leyong; Jia, Yue; Dai, Xiaoyu; Xiang, Yuanjiang; Fan, Dianyuan

    2017-03-20

    Optical bistability of graphene surface plasmons is investigated numerically, using a grating coupling method at normal light incidence. The linear surface plasmon resonance is strongly dependent on the Fermi level of graphene, hence it can be tuned over a large wavelength range. Due to the field enhancement of the graphene surface plasmon resonance and the large third-order nonlinear response of graphene, a low-threshold optical hysteresis has been observed. A threshold value of 20 MW/cm2 and a response time of 1.7 ps have been verified. In particular, it is found that this optical bistability is insensitive to the incident angle near 15°. The threshold of optical bistability can be further lowered to 0.5 MW/cm2 by using graphene nanoribbons, and the response time is also shortened to 800 fs. We believe that our results will find potential applications in bistable devices and all-optical switching from the mid-IR to the THz range.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnese, R.; Anderson, A. J.; Aralis, T.

    The SuperCDMS experiment is designed to directly detect WIMPs (Weakly Interacting Massive Particles) that may constitute the dark matter in our galaxy. During its operation at the Soudan Underground Laboratory, germanium detectors were run in the CDMSlite (Cryogenic Dark Matter Search low ionization threshold experiment) mode to gather data sets with sensitivity specifically for WIMPs with masses < 10 GeV/c^2. In this mode, a large detector-bias voltage is applied to amplify the phonon signals produced by drifting charges. This paper presents studies of the experimental noise and its effect on the achievable energy threshold, which is demonstrated to be as low as 56 eV_ee (electron equivalent energy). The detector biasing configuration is described in detail, with analysis corrections for voltage variations to the level of a few percent. Detailed studies of the electric-field geometry, and the resulting successful development of a fiducial parameter, eliminate poorly measured events, yielding an energy resolution ranging from ~9 eV_ee at 0 keV to 101 eV_ee at ~10 keV_ee. New results are derived for astrophysical uncertainties relevant to the WIMP-search limits, specifically examining how they are affected by variations in the most probable WIMP velocity and the galactic escape velocity. These variations become more important for WIMP masses below 10 GeV/c^2. Finally, new limits on spin-dependent low-mass WIMP-nucleon interactions are derived, with new parameter space excluded for WIMP masses ≲ 3 GeV/c^2.

  16. Measurement-free implementations of small-scale surface codes for quantum-dot qubits

    NASA Astrophysics Data System (ADS)

    Ercan, H. Ekmel; Ghosh, Joydip; Crow, Daniel; Premakumar, Vickram N.; Joynt, Robert; Friesen, Mark; Coppersmith, S. N.

    2018-01-01

    The performance of quantum-error-correction schemes depends sensitively on the physical realizations of the qubits and the implementations of various operations. For example, in quantum-dot spin qubits, readout is typically much slower than gate operations, and conventional surface-code implementations that rely heavily on syndrome measurements could therefore be challenging. However, fast and accurate reset of quantum-dot qubits, without readout, can be achieved via tunneling to a reservoir. Here we propose small-scale surface-code implementations for which syndrome measurements are replaced by a combination of Toffoli gates and qubit reset. For quantum-dot qubits, this enables much faster error correction than measurement-based schemes, but requires additional ancilla qubits and non-nearest-neighbor interactions. We have performed numerical simulations of two different coding schemes, obtaining error thresholds on the order of 10^-2 for a one-dimensional architecture that only corrects bit-flip errors and 10^-4 for a two-dimensional architecture that corrects bit- and phase-flip errors.
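
    The notion of an error threshold can be illustrated with a toy Monte Carlo that is unrelated to the specific architectures above: a distance-3 bit-flip repetition code decoded by majority vote, whose logical error rate drops below the physical rate only when the physical rate is small enough. The code and parameters are purely illustrative.

```python
import numpy as np

def logical_error_rate(p, distance=3, trials=200_000, seed=0):
    """Monte Carlo estimate of the logical error rate of a distance-d bit-flip
    repetition code under independent physical flips with probability p,
    decoded by majority vote."""
    rng = np.random.default_rng(seed)
    flips = rng.uniform(size=(trials, distance)) < p
    return float(np.mean(flips.sum(axis=1) > distance // 2))

for p in (0.05, 0.2, 0.4, 0.6):
    pl = logical_error_rate(p)
    verdict = "suppresses errors" if pl < p else "makes them worse"
    print(f"p = {p:.2f}  p_logical = {pl:.4f}  ({verdict})")
```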

  17. Event-related potential measures of gap detection threshold during natural sleep.

    PubMed

    Muller-Gass, Alexandra; Campbell, Kenneth

    2014-08-01

    The minimum time interval between two stimuli that can be reliably detected is called the gap detection threshold. The present study examines whether an unconscious state, natural sleep, affects the gap detection threshold. Event-related potentials (ERPs) were recorded in 10 young adults while awake and during all-night sleep to provide an objective estimate of this threshold. The subjects were presented with 2, 4, 8 or 16 ms gaps occurring in white noise of 1.5 s duration. During wakefulness, a significant N1 was elicited for the 8 and 16 ms gaps. N1 was difficult to observe during stage N2 sleep, even for the longest gap. A large P2 was, however, elicited and was significant for the 8 and 16 ms gaps. Also, a later, very large N350 was elicited by the 16 ms gap. An N1 and a P2 were significant only for the 16 ms gap during REM sleep. ERPs to gaps occurring in noise segments can therefore be successfully elicited during natural sleep. The gap detection threshold is similar in the waking and sleeping states. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.

  18. 75 FR 16361 - Airworthiness Directives; CFM International, S.A. Models CFM56-3 and -3B Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-01

    ..., S.A. Models CFM56-3 and -3B Turbofan Engines AGENCY: Federal Aviation Administration (FAA...), for certain CFM International, S.A. models CFM56-3 and -3B turbofan engines. That proposed AD would... inspection compliance threshold, to correct the engine model designations affected, and to clarify some of...

  19. 78 FR 2249 - Magnuson-Stevens Act Provisions; Fisheries of the Northeastern United States; Northeast...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ...This action reopens the comment period for an Acadian redfish- related proposed rule that published on November 8, 2012. The original comment period closed on November 23, 2012. This action clarifies a bycatch threshold incorrectly explained in the proposed rule. The public comment period is being reopened to solicit additional public comment on this correction.

  20. An entropy decision approach in flash flood warning: rainfall thresholds definition

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Napolitano, F.; Ridolfi, E.

    2009-09-01

    Flash floods are floods characterised by a very rapid response of the basin to the storm, and they often involve loss of life and damage to public and private property. Due to the specific space-time scale of this kind of flood, generally only a short lead time is available for triggering civil protection measures. Threshold values specify the precipitation amount for a given duration that generates a critical discharge in a given cross-section. Exceeding these values could produce a critical situation at river sites exposed to flood risk, so the observed or forecast precipitation can be compared directly with the critical reference values, without running real-time forecasting systems online. This study is focused on the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the concept of informative entropy. The study concludes with a system performance analysis in terms of correctly issued warnings, false alarms and missed alarms.
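
    The performance terms at the end of the abstract (correctly issued warnings, false alarms, missed alarms) are just the cells of a contingency table between threshold exceedances and observed critical events. A minimal sketch, with an invented 40 mm threshold and toy data:

```python
def warning_skill(rainfall, threshold, critical_event):
    """Compare threshold exceedances with observed critical discharges and
    count correctly issued warnings (hits), false alarms and missed alarms."""
    hits = false_alarms = missed = 0
    for rain, critical in zip(rainfall, critical_event):
        warned = rain >= threshold
        if warned and critical:
            hits += 1
        elif warned and not critical:
            false_alarms += 1
        elif critical:
            missed += 1
    return {"hits": hits, "false_alarms": false_alarms, "missed_alarms": missed}

# Toy record of event rainfall depths (mm) and whether the section went critical.
rain = [12.0, 55.0, 38.0, 70.0, 20.0, 44.0]
critical = [False, True, True, True, False, False]
print(warning_skill(rain, threshold=40.0, critical_event=critical))
# -> {'hits': 2, 'false_alarms': 1, 'missed_alarms': 1}
```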

  1. Cost and threshold analysis of an HIV/STI/hepatitis prevention intervention for young men leaving prison: Project START.

    PubMed

    Johnson, A P; Macgowan, R J; Eldridge, G D; Morrow, K M; Sosman, J; Zack, B; Margolis, A

    2013-10-01

    The objectives of this study were to: (a) estimate the costs of providing a single-session HIV prevention intervention and a multi-session intervention, and (b) estimate the number of HIV transmissions that would need to be prevented for the intervention to be cost-saving or cost-effective (threshold analysis). Project START was evaluated with 522 young men aged 18-29 years released from eight prisons located in California, Mississippi, Rhode Island, and Wisconsin. Cost data were collected prospectively. Costs per participant were $689 for the single-session comparison intervention, and ranged from $1,823 to $1,836 for the Project START multi-session intervention. From the incremental threshold analysis, the multi-session intervention would be cost-effective if it prevented one HIV transmission for every 753 participants compared with the single-session intervention. These costs are comparable with those of other HIV prevention programs. Program managers can use these data to gauge the costs of initiating such HIV prevention programs in correctional facilities.

  2. Step Detection Robust against the Dynamics of Smartphones

    PubMed Central

    Lee, Hwan-hee; Choi, Suji; Lee, Myeong-jin

    2015-01-01

    A novel algorithm is proposed for robust step detection irrespective of step mode and device pose in smartphone usage environments. The dynamics of smartphones are decoupled into a peak-valley relationship with adaptive magnitude and temporal thresholds. For extracted peaks and valleys in the magnitude of acceleration, a step is defined as consisting of a peak and its adjacent valley. Adaptive magnitude thresholds consisting of step average and step deviation are applied to suppress pseudo peaks or valleys that mostly occur during the transition among step modes or device poses. Adaptive temporal thresholds are applied to time intervals between peaks or valleys to consider the time-varying pace of human walking or running for the correct selection of peaks or valleys. From the experimental results, it can be seen that the proposed step detection algorithm shows more than 98.6% average accuracy for any combination of step mode and device pose and outperforms state-of-the-art algorithms. PMID:26516857
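
    A toy version of the peak-valley idea, with adaptive magnitude and temporal thresholds, is sketched below. It is not the authors' algorithm: the sampling rate, the minimum swing, the scaling factor k and the synthetic signal are all assumptions.

```python
import numpy as np

def detect_steps(acc_magnitude, fs=50.0, min_interval=0.25, min_swing=0.5, k=0.7):
    """Count steps as acceleration peaks followed by a nearby valley, accepted
    only if the peak-valley swing exceeds an adaptive magnitude threshold
    (k times the running mean swing of accepted steps, floored at min_swing)
    and successive peaks are at least min_interval seconds apart."""
    a = np.asarray(acc_magnitude, dtype=float)
    steps, swings = [], []
    last_step_t = -np.inf
    for i in range(1, a.size - 1):
        if a[i] > a[i - 1] and a[i] > a[i + 1]:                  # candidate peak
            t = i / fs
            valley = a[i:i + int(fs * min_interval) + 1].min()    # adjacent valley
            swing = a[i] - valley
            adaptive = max(min_swing, k * np.mean(swings)) if swings else min_swing
            if swing > adaptive and (t - last_step_t) >= min_interval:
                steps.append(t)
                swings.append(swing)
                last_step_t = t
    return steps

# Synthetic walk: 2 Hz steps superimposed on gravity plus sensor noise.
fs, duration = 50.0, 5.0
t = np.arange(0.0, duration, 1.0 / fs)
acc = (9.81 + 2.0 * np.maximum(0.0, np.sin(2.0 * np.pi * 2.0 * t))
       + np.random.default_rng(5).normal(0.0, 0.1, t.size))
print(len(detect_steps(acc, fs)), "steps detected in", duration, "s")
```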

  3. Positive-negative corresponding normalized ghost imaging based on an adaptive threshold

    NASA Astrophysics Data System (ADS)

    Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.

    2016-11-01

    Ghost imaging (GI) has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it meets engineering application demands. We therefore propose a new scheme, called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT), to achieve good performance with a smaller amount of data. In this work, we exploit the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). The correctness and feasibility of the scheme are proved in theory, and an adaptive threshold selection method is designed in which the parameter of the object-signal selection condition is replaced by the normalized value. The simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), avoiding the calculation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.

  4. Analysis of Waveform Retracking Methods in Antarctic Ice Sheet Based on CRYOSAT-2 Data

    NASA Astrophysics Data System (ADS)

    Xiao, F.; Li, F.; Zhang, S.; Hao, W.; Yuan, L.; Zhu, T.; Zhang, Y.; Zhu, C.

    2017-09-01

    Satellite altimetry plays an important role in many geoscientific and environmental studies of the Antarctic ice sheet. The ranging accuracy is degraded near coasts or over non-ocean surfaces, due to waveform contamination. A post-processing technique, known as waveform retracking, can be used to retrack the corrupted waveforms and in turn improve the ranging accuracy. In 2010, the CryoSat-2 satellite was launched with the Synthetic aperture Interferometric Radar ALtimeter (SIRAL) onboard. Satellite altimetry waveform retracking methods are discussed in this paper. Six retracking methods are used to retrack CryoSat-2 waveforms over the transect from Zhongshan Station to Dome A: the OCOG method; the threshold method with 10%, 25% and 50% threshold levels; and the linear and exponential 5-β parametric methods. The results show that the threshold retracker performs best when both the waveform retracking success rate and the RMS of the retracking distance corrections are considered. The linear 5-β parametric retracker gives the best waveform retracking precision, but cannot make full use of the waveform data.
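
    For the threshold retracker evaluated above, a common formulation estimates the waveform amplitude with the OCOG definition and then locates the first range gate whose power crosses a chosen fraction of that amplitude, interpolating between gates. The sketch below follows that generic recipe (not necessarily the exact variant used in the study); the synthetic waveform is illustrative.

```python
import numpy as np

def threshold_retrack(waveform, level=0.25):
    """Threshold retracking sketch: estimate the waveform amplitude with the
    OCOG definition A = sqrt(sum(P^4) / sum(P^2)), then return the fractional
    gate at which the power first crosses level * A (linear interpolation)."""
    p = np.asarray(waveform, dtype=float)
    amplitude = np.sqrt(np.sum(p**4) / np.sum(p**2))
    target = level * amplitude
    above = np.nonzero(p >= target)[0]
    if above.size == 0:
        return None                      # waveform never reaches the target power
    i = int(above[0])
    if i == 0:
        return 0.0
    return (i - 1) + (target - p[i - 1]) / (p[i] - p[i - 1])

# Synthetic waveform: thermal noise floor, a leading edge, then a plateau.
wf = np.concatenate([np.full(30, 5.0), np.linspace(5.0, 200.0, 20), np.full(50, 200.0)])
for lvl in (0.10, 0.25, 0.50):
    print(f"level {lvl:.2f} -> retracking gate {threshold_retrack(wf, lvl):.2f}")
```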

  5. Randomized, Prospective, Three-Arm Study to Confirm the Auditory Safety and Efficacy of Artemether-Lumefantrine in Colombian Patients with Uncomplicated Plasmodium falciparum Malaria

    PubMed Central

    Carrasquilla, Gabriel; Barón, Clemencia; Monsell, Edwin M.; Cousin, Marc; Walter, Verena; Lefèvre, Gilbert; Sander, Oliver; Fisher, Laurel M.

    2012-01-01

    The safety of artemether-lumefantrine in patients with acute, uncomplicated Plasmodium falciparum malaria was investigated prospectively using the auditory brainstem response (ABR) and pure-tone thresholds. Secondary outcomes included polymerase chain reaction-corrected cure rates. Patients were randomly assigned in a 3:1:1 ratio to either artemether-lumefantrine (N = 159), atovaquone-proguanil (N = 53), or artesunate-mefloquine (N = 53). The null hypothesis (primary outcome), claiming that the percentage of patients with a baseline to Day-7 ABR Wave III latency increase of > 0.30 msec is ≥ 15% after administration of artemether-lumefantrine, was rejected; 2.6% of patients (95% confidence interval: 0.7–6.6) exceeded 0.30 msec, i.e., significantly below 15% (P < 0.0001). A model-based analysis found no apparent relationship between drug exposure and ABR change. In all three groups, average improvements (2–4 dB) in pure-tone thresholds were observed, and polymerase chain reaction-corrected cure rates were > 95% to Day 42. The results support the continued safe and efficacious use of artemether-lumefantrine in uncomplicated falciparum malaria. PMID:22232454

  6. Randomized, prospective, three-arm study to confirm the auditory safety and efficacy of artemether-lumefantrine in Colombian patients with uncomplicated Plasmodium falciparum malaria.

    PubMed

    Carrasquilla, Gabriel; Barón, Clemencia; Monsell, Edwin M; Cousin, Marc; Walter, Verena; Lefèvre, Gilbert; Sander, Oliver; Fisher, Laurel M

    2012-01-01

    The safety of artemether-lumefantrine in patients with acute, uncomplicated Plasmodium falciparum malaria was investigated prospectively using the auditory brainstem response (ABR) and pure-tone thresholds. Secondary outcomes included polymerase chain reaction-corrected cure rates. Patients were randomly assigned in a 3:1:1 ratio to either artemether-lumefantrine (N = 159), atovaquone-proguanil (N = 53), or artesunate-mefloquine (N = 53). The null hypothesis (primary outcome), claiming that the percentage of patients with a baseline to Day-7 ABR Wave III latency increase of > 0.30 msec is ≥ 15% after administration of artemether-lumefantrine, was rejected; 2.6% of patients (95% confidence interval: 0.7-6.6) exceeded 0.30 msec, i.e., significantly below 15% (P < 0.0001). A model-based analysis found no apparent relationship between drug exposure and ABR change. In all three groups, average improvements (2-4 dB) in pure-tone thresholds were observed, and polymerase chain reaction-corrected cure rates were > 95% to Day 42. The results support the continued safe and efficacious use of artemether-lumefantrine in uncomplicated falciparum malaria.

  7. Face recognition: database acquisition, hybrid algorithms, and human studies

    NASA Astrophysics Data System (ADS)

    Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry

    1997-02-01

    One of the most important technologies absent from traditional and emerging frontiers of computing is the management of visual information. Faces are accessible 'windows' into the mechanisms that govern our emotional and social lives. The corresponding face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID ('match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus, as provided by ensembles of networks, for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds, as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross-validation (CV), for surveillance on a database of 904 images; (ii) 97% accuracy for CBIR tasks on a database of 1084 images; and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.

  8. Estimating unbiased magnitudes for the announced DPRK nuclear tests, 2006-2016

    NASA Astrophysics Data System (ADS)

    Peacock, Sheila; Bowers, David

    2017-04-01

    The seismic disturbances generated by the five (2006-2016) announced nuclear test explosions of the Democratic People's Republic of Korea (DPRK) are of moderate magnitude (body-wave magnitude mb 4-5) by global earthquake standards. An upward bias of the network mean mb of low- to moderate-magnitude events is long established, and is caused by the censoring of readings from stations where the signal was below the noise level at the time of the predicted arrival. This sampling bias can be overcome by maximum-likelihood methods using station thresholds at detecting (and non-detecting) stations. Bias in the mean mb can also be introduced by differences in the network of stations recording each explosion; this bias can be reduced by using station corrections. We apply a joint maximum-likelihood (JML) inversion that jointly estimates station corrections and unbiased network mb for the five DPRK explosions recorded by the CTBTO International Monitoring System (IMS) seismic stations. The thresholds can either be measured directly from the noise preceding the observed signal, or determined by statistical analysis of bulletin amplitudes. The network mb of the first and smallest explosion is reduced significantly relative to the mean mb (to < 4.0 mb) by removal of the censoring bias.
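
    A sketch of the censored-likelihood idea behind such an inversion (with the station corrections and sigma taken as given rather than jointly estimated, and all numbers invented): detecting stations contribute Gaussian density terms, non-detecting stations contribute the probability that their corrected magnitude fell below the station threshold, and the network mb is the maximiser.

```python
import numpy as np
from scipy.stats import norm

def ml_network_mb(detected_mags, det_corrections,
                  nondetect_thresholds, non_corrections, sigma=0.35):
    """Censored maximum-likelihood network mb: detections contribute Gaussian
    density terms for (station mb - station correction); non-detecting
    stations contribute the probability of falling below their (corrected)
    detection threshold. The network mb is found by a simple grid search."""
    det = np.asarray(detected_mags, dtype=float) - np.asarray(det_corrections, dtype=float)
    non = np.asarray(nondetect_thresholds, dtype=float) - np.asarray(non_corrections, dtype=float)
    grid = np.linspace(3.0, 6.0, 601)
    loglik = [norm.logpdf(det, loc=mb, scale=sigma).sum()
              + norm.logcdf(non, loc=mb, scale=sigma).sum()
              for mb in grid]
    return float(grid[int(np.argmax(loglik))])

# Toy network: four detecting stations, three quiet stations with thresholds.
print(ml_network_mb(detected_mags=[4.3, 4.5, 4.2, 4.6],
                    det_corrections=[0.1, 0.0, -0.1, 0.2],
                    nondetect_thresholds=[4.4, 4.6, 4.5],
                    non_corrections=[0.0, 0.1, -0.1]))
```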

  9. Comparison of HapMap and 1000 Genomes Reference Panels in a Large-Scale Genome-Wide Association Study.

    PubMed

    de Vries, Paul S; Sabater-Lleal, Maria; Chasman, Daniel I; Trompet, Stella; Ahluwalia, Tarunveer S; Teumer, Alexander; Kleber, Marcus E; Chen, Ming-Huei; Wang, Jie Jin; Attia, John R; Marioni, Riccardo E; Steri, Maristella; Weng, Lu-Chen; Pool, Rene; Grossmann, Vera; Brody, Jennifer A; Venturini, Cristina; Tanaka, Toshiko; Rose, Lynda M; Oldmeadow, Christopher; Mazur, Johanna; Basu, Saonli; Frånberg, Mattias; Yang, Qiong; Ligthart, Symen; Hottenga, Jouke J; Rumley, Ann; Mulas, Antonella; de Craen, Anton J M; Grotevendt, Anne; Taylor, Kent D; Delgado, Graciela E; Kifley, Annette; Lopez, Lorna M; Berentzen, Tina L; Mangino, Massimo; Bandinelli, Stefania; Morrison, Alanna C; Hamsten, Anders; Tofler, Geoffrey; de Maat, Moniek P M; Draisma, Harmen H M; Lowe, Gordon D; Zoledziewska, Magdalena; Sattar, Naveed; Lackner, Karl J; Völker, Uwe; McKnight, Barbara; Huang, Jie; Holliday, Elizabeth G; McEvoy, Mark A; Starr, John M; Hysi, Pirro G; Hernandez, Dena G; Guan, Weihua; Rivadeneira, Fernando; McArdle, Wendy L; Slagboom, P Eline; Zeller, Tanja; Psaty, Bruce M; Uitterlinden, André G; de Geus, Eco J C; Stott, David J; Binder, Harald; Hofman, Albert; Franco, Oscar H; Rotter, Jerome I; Ferrucci, Luigi; Spector, Tim D; Deary, Ian J; März, Winfried; Greinacher, Andreas; Wild, Philipp S; Cucca, Francesco; Boomsma, Dorret I; Watkins, Hugh; Tang, Weihong; Ridker, Paul M; Jukema, Jan W; Scott, Rodney J; Mitchell, Paul; Hansen, Torben; O'Donnell, Christopher J; Smith, Nicholas L; Strachan, David P; Dehghan, Abbas

    2017-01-01

    An increasing number of genome-wide association (GWA) studies are now using the higher resolution 1000 Genomes Project reference panel (1000G) for imputation, with the expectation that 1000G imputation will lead to the discovery of additional associated loci when compared to HapMap imputation. In order to assess the improvement of 1000G over HapMap imputation in identifying associated loci, we compared the results of GWA studies of circulating fibrinogen based on the two reference panels. Using both HapMap and 1000G imputation we performed a meta-analysis of 22 studies comprising the same 91,953 individuals. We identified six additional signals using 1000G imputation, while 29 loci were associated using both HapMap and 1000G imputation. One locus identified using HapMap imputation was not significant using 1000G imputation. The genome-wide significance threshold of 5×10⁻⁸ is based on the number of independent statistical tests using HapMap imputation, and 1000G imputation may lead to further independent tests that should be corrected for. When using a stricter Bonferroni correction for the 1000G GWA study (P-value < 2.5×10⁻⁸), the number of loci significant only using HapMap imputation increased to 4 while the number of loci significant only using 1000G decreased to 5. In conclusion, 1000G imputation enabled the identification of 20% more loci than HapMap imputation, although the advantage of 1000G imputation became less clear when a stricter Bonferroni correction was used. More generally, our results provide insights that are applicable to the implementation of other dense reference panels that are under development.

  10. Comparison of HapMap and 1000 Genomes Reference Panels in a Large-Scale Genome-Wide Association Study

    PubMed Central

    de Vries, Paul S.; Sabater-Lleal, Maria; Chasman, Daniel I.; Trompet, Stella; Kleber, Marcus E.; Chen, Ming-Huei; Wang, Jie Jin; Attia, John R.; Marioni, Riccardo E.; Weng, Lu-Chen; Grossmann, Vera; Brody, Jennifer A.; Venturini, Cristina; Tanaka, Toshiko; Rose, Lynda M.; Oldmeadow, Christopher; Mazur, Johanna; Basu, Saonli; Yang, Qiong; Ligthart, Symen; Hottenga, Jouke J.; Rumley, Ann; Mulas, Antonella; de Craen, Anton J. M.; Grotevendt, Anne; Taylor, Kent D.; Delgado, Graciela E.; Kifley, Annette; Lopez, Lorna M.; Berentzen, Tina L.; Mangino, Massimo; Bandinelli, Stefania; Morrison, Alanna C.; Hamsten, Anders; Tofler, Geoffrey; de Maat, Moniek P. M.; Draisma, Harmen H. M.; Lowe, Gordon D.; Zoledziewska, Magdalena; Sattar, Naveed; Lackner, Karl J.; Völker, Uwe; McKnight, Barbara; Huang, Jie; Holliday, Elizabeth G.; McEvoy, Mark A.; Starr, John M.; Hysi, Pirro G.; Hernandez, Dena G.; Guan, Weihua; Rivadeneira, Fernando; McArdle, Wendy L.; Slagboom, P. Eline; Zeller, Tanja; Psaty, Bruce M.; Uitterlinden, André G.; de Geus, Eco J. C.; Stott, David J.; Binder, Harald; Hofman, Albert; Franco, Oscar H.; Rotter, Jerome I.; Ferrucci, Luigi; Spector, Tim D.; Deary, Ian J.; März, Winfried; Greinacher, Andreas; Wild, Philipp S.; Cucca, Francesco; Boomsma, Dorret I.; Watkins, Hugh; Tang, Weihong; Ridker, Paul M.; Jukema, Jan W.; Scott, Rodney J.; Mitchell, Paul; Hansen, Torben; O'Donnell, Christopher J.; Smith, Nicholas L.; Strachan, David P.

    2017-01-01

    An increasing number of genome-wide association (GWA) studies are now using the higher resolution 1000 Genomes Project reference panel (1000G) for imputation, with the expectation that 1000G imputation will lead to the discovery of additional associated loci when compared to HapMap imputation. In order to assess the improvement of 1000G over HapMap imputation in identifying associated loci, we compared the results of GWA studies of circulating fibrinogen based on the two reference panels. Using both HapMap and 1000G imputation we performed a meta-analysis of 22 studies comprising the same 91,953 individuals. We identified six additional signals using 1000G imputation, while 29 loci were associated using both HapMap and 1000G imputation. One locus identified using HapMap imputation was not significant using 1000G imputation. The genome-wide significance threshold of 5×10−8 is based on the number of independent statistical tests using HapMap imputation, and 1000G imputation may lead to further independent tests that should be corrected for. When using a stricter Bonferroni correction for the 1000G GWA study (P-value < 2.5×10−8), the number of loci significant only using HapMap imputation increased to 4 while the number of loci significant only using 1000G decreased to 5. In conclusion, 1000G imputation enabled the identification of 20% more loci than HapMap imputation, although the advantage of 1000G imputation became less clear when a stricter Bonferroni correction was used. More generally, our results provide insights that are applicable to the implementation of other dense reference panels that are under development. PMID:28107422
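
    The two significance cutoffs quoted above follow the usual Bonferroni logic of dividing a family-wise alpha of 0.05 by an effective number of independent tests; the factor-of-two effective test count used below for the 1000G case is an illustrative assumption chosen to reproduce the quoted 2.5×10−8, not a figure from the paper.

```python
def genome_wide_threshold(alpha=0.05, effective_tests=1_000_000):
    """Bonferroni-style genome-wide significance cutoff: alpha / (number of
    effectively independent tests)."""
    return alpha / effective_tests

print(genome_wide_threshold())                            # 5e-08
print(genome_wide_threshold(effective_tests=2_000_000))   # 2.5e-08
```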

  11. Implementation of a computer-aided detection tool for quantification of intracranial radiologic markers on brain CT images

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Ross, Stephen R.; Wang, Yunzhi; Wu, Dee H.; Cornwell, Benjamin O.; Ray, Bappaditya; Zheng, Bin

    2017-03-01

    Aneurysmal subarachnoid hemorrhage (aSAH) is a form of hemorrhagic stroke that affects middle-aged individuals and is associated with significant morbidity and/or mortality, especially in those presenting with higher clinical and radiologic grades at the time of admission. Previous studies suggested that the blood extravasated after aneurysmal rupture is a potential prognostic factor, but all such studies used qualitative scales to predict prognosis. The purpose of this study is to develop and test a new interactive computer-aided detection (CAD) tool to detect, segment and quantify brain hemorrhage and ventricular cerebrospinal fluid on non-contrasted brain CT images. First, the CAD tool segments the skull using a multilayer region-growing algorithm with adaptively adjusted thresholds. Second, it assigns pixels inside the segmented brain region to one of three classes, namely normal brain tissue, blood, and fluid. Third, to avoid a "black-box" approach and to increase accuracy in quantifying these two image markers on CT images with large noise variation between cases, a graphical user interface (GUI) was implemented that allows users to visually examine the segmentation results. If a user wishes to correct any errors (i.e., deleting clinically irrelevant blood or fluid regions, or filling in holes inside relevant blood or fluid regions), he or she can manually define the region and select a corresponding correction function; the CAD tool then automatically performs the correction and updates the computed data. The new CAD tool is now being used in clinical and research settings to estimate various quantitative radiological parameters/markers that determine the radiological severity of aSAH at presentation, to correlate these estimates with various homeostatic/metabolic derangements, and to predict clinical outcome.
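
    For the segmentation step, a plain region-growing pass with fixed intensity bounds conveys the basic mechanism (the actual tool uses a multilayer scheme with adaptively adjusted thresholds, which is not reproduced here); the seed point and HU bounds below are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, low, high):
    """4-connected region growing with fixed intensity bounds: starting from
    the seed pixel, include every connected pixel whose value lies in
    [low, high]. Returns a boolean mask of the grown region."""
    img = np.asarray(image, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if (0 <= y < img.shape[0] and 0 <= x < img.shape[1]
                and not mask[y, x] and low <= img[y, x] <= high):
            mask[y, x] = True
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

# Toy "CT slice": brain tissue around 35 HU, a hyperdense blood patch around
# 65 HU, and bright skull-like borders.
ct = np.full((128, 128), 35.0)
ct[50:70, 60:80] = 65.0
ct[:, :4] = 1000.0
ct[:, -4:] = 1000.0
blood_mask = region_grow(ct, seed=(60, 70), low=50.0, high=90.0)
print(blood_mask.sum(), "pixels labelled as blood")      # 400
```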

  12. Simulation of eye-tracker latency, spot size, and ablation pulse depth on the correction of higher order wavefront aberrations with scanning spot laser systems.

    PubMed

    Bueeler, Michael; Mrochen, Michael

    2005-01-01

    The aim of this theoretical work was to investigate the robustness of scanning spot laser treatments with different laser spot diameters and peak ablation depths in the case of incomplete compensation of eye movements due to eye-tracker latency. Scanning spot corrections of 3rd to 5th Zernike order wavefront errors were numerically simulated. Measured eye-movement data were used to calculate the positioning error of each laser shot, assuming eye-tracker latencies of 0, 5, 30, and 100 ms, and for the case of no eye tracking. The single spot ablation depth ranged from 0.25 to 1.0 microm and the spot diameter from 250 to 1000 microm. The quality of the ablation was rated by the postoperative surface variance and the Strehl intensity ratio, which was calculated after a low-pass filter was applied to simulate epithelial surface smoothing. Treatments performed with nearly ideal eye tracking (latency approximately 0) provide the best results with a small laser spot (0.25 mm) and a small ablation depth per pulse (0.25 microm). However, combinations of a large spot diameter (1000 microm) and a small ablation depth per pulse (0.25 microm) yield better results for latencies above a certain threshold, which must be determined case by case. Treatments performed with tracker latencies on the order of 100 ms yield results similar to treatments done entirely without eye-movement compensation. CONCLUSIONS: Reducing the spot diameter was shown to make the correction more susceptible to eye-movement-induced error. A smaller spot size is only beneficial when eye movement is neutralized with a tracking system with a latency of <5 ms.
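
    The latency effect simulated above can be pictured as aiming each shot with a stale eye-position sample; the residual error is then the eye displacement accumulated over the latency interval. The sketch below is a rough illustration under assumed values: the 1 kHz sampling rate, the random-walk eye trace, and the function residual_error are all hypothetical, not the measured eye-movement data used in the study.

    ```python
    import numpy as np

    # Rough sketch: spot-placement error when the scanner aims with eye-position
    # samples that are `latency_ms` old. The synthetic random-walk eye trace and
    # the 1 kHz sampling rate are assumptions for illustration only.
    def residual_error(eye_xy_mm, latency_ms, fs_hz=1000.0):
        lag = int(round(latency_ms * fs_hz / 1000.0))
        if lag == 0:
            return np.zeros(len(eye_xy_mm))
        aimed = eye_xy_mm[:-lag]    # stale position used to aim the shot
        actual = eye_xy_mm[lag:]    # true position when the shot lands
        return np.linalg.norm(actual - aimed, axis=1)

    rng = np.random.default_rng(0)
    trace = np.cumsum(rng.normal(0, 0.002, size=(5000, 2)), axis=0)   # drifting eye position (mm)
    for latency in (0, 5, 30, 100):                                   # latencies considered above
        rms = np.sqrt((residual_error(trace, latency) ** 2).mean())
        print("%3d ms latency -> RMS placement error %.3f mm" % (latency, rms))
    ```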

  13. F-wave of single firing motor units: correct or misleading criterion of motoneuron excitability in humans?

    PubMed

    Kudina, Lydia P; Andreeva, Regina E

    2017-03-01

    Motoneuron excitability is a critical property for information processing during motor control. The F-wave (a motoneuronal recurrent discharge evoked by a motor antidromic volley) is often used as a criterion of motoneuron pool excitability in healthy subjects and in neuromuscular diseases. However, such use of the F-wave is open to question. The present study was designed to explore the excitability of single low-threshold motoneurons during their natural firing in healthy humans and to ascertain whether the F-wave is a correct measure of motoneuronal excitability. Single motor units (MUs) were activated by gentle voluntary muscle contractions. MU peri-stimulus time histograms and motoneuron excitability changes within a target interspike interval were analysed during testing with motor antidromic and Ia-afferent volleys. F-waves could occasionally be recorded in some low-threshold MUs. However, when evoking the F-wave, in contrast with the H-reflex, peri-stimulus time histograms revealed no statistically significant increase in MU discharge probability. Moreover, surprisingly, motoneurons commonly appeared incapable of firing a recurrent discharge within the most excitable part of the target interval. Thus, the F-wave, unlike the H-reflex, is an incorrect criterion of motoneuron excitability and can lead to misleading conclusions; this does not, however, exclude the validity of the F-wave as a clinical tool for other purposes. To our knowledge, this is the first study to explore the F-wave in low-threshold MUs during their natural firing. The findings may be useful for interpreting changes in motoneuron pool excitability in neuromuscular diseases.

  14. Visual Cortical Function in Very Low Birth Weight Infants without Retinal or Cerebral Pathology

    PubMed Central

    Hou, Chuan; Norcia, Anthony M.; Madan, Ashima; Tith, Solina; Agarwal, Rashi

    2011-01-01

    Purpose. Preterm infants are at high risk of visual and neural developmental deficits. However, the development of visual cortical function in preterm infants with no retinal or neurologic morbidity has not been well defined. To determine whether premature birth itself alters visual cortical function, swept parameter visual evoked potential (sVEP) responses of healthy preterm infants were compared with those of term infants. Methods. Fifty-two term infants and 58 very low birth weight (VLBW) infants without significant retinopathy of prematurity or neurologic morbidities were enrolled. Recruited VLBW infants were between 26 and 33 weeks of gestational age, with birth weights of less than 1500 g. Spatial frequency, contrast, and vernier offset sweep VEP tuning functions were measured at 5 to 7 months' corrected age. Acuity and contrast thresholds were derived by extrapolating the tuning functions to 0 amplitude. These thresholds and suprathreshold response amplitudes were compared between groups. Results. Preterm infants showed increased thresholds (indicating decreased sensitivity to visual stimuli) and reductions in amplitudes for all three measures. These changes in cortical responsiveness were larger in the <30 weeks' gestational age subgroup than in the ≥30 weeks' gestational age subgroup. Conclusions. Preterm infants with VLBW had measurable and significant changes in cortical responsiveness that were correlated with gestational age. These results suggest that premature birth in the absence of identifiable retinal or neurologic abnormalities has a significant effect on visual cortical sensitivity at 5 to 7 months' corrected age and that gestational age is an important factor in visual development. PMID:22025567

  15. Development, implementation and evaluation of a dedicated metal artefact reduction method for interventional flat-detector CT.

    PubMed

    Prell, D; Kalender, W A; Kyriakou, Y

    2010-12-01

    The purpose of this study was to develop, implement and evaluate a dedicated metal artefact reduction (MAR) method for flat-detector CT (FDCT). The algorithm uses the multidimensional raw data space to calculate surrogate attenuation values for the original metal traces in the raw data domain. The metal traces are detected automatically by a three-dimensional, threshold-based segmentation algorithm in an initial reconstructed image volume, based on twofold histogram information for calculating appropriate metal thresholds. These thresholds are combined with constrained morphological operations in the projection domain. A subsequent reconstruction of the modified raw data yields an artefact-reduced image volume that is further processed by a combining procedure that reinserts the missing metal information. For image quality assessment, measurements on semi-anthropomorphic phantoms containing metallic inserts were evaluated in terms of CT value accuracy, image noise and spatial resolution before and after correction. Measurements of the same phantoms without prostheses were used as ground truth for comparison. Cadaver measurements were performed on complex and realistic cases to determine the influence of our correction method on the tissue surrounding the prostheses. The results showed a significant reduction of metal-induced streak artefacts (CT value differences were reduced to below 22 HU, and image noise was reduced by up to 200%). The cadaver measurements showed excellent results for imaging areas close to the implant and exceptional artefact suppression in these areas. Furthermore, measurements in the knee and spine regions confirmed the superiority of our method to standard one-dimensional, linear interpolation.

  16. The validity of activity monitors for measuring sleep in elite athletes.

    PubMed

    Sargent, Charli; Lastella, Michele; Halson, Shona L; Roach, Gregory D

    2016-10-01

    There is a growing interest in monitoring the sleep of elite athletes. Polysomnography is considered the gold standard for measuring sleep, however this technique is impractical if the aim is to collect data simultaneously with multiple athletes over consecutive nights. Activity monitors may be a suitable alternative for monitoring sleep, but these devices have not been validated against polysomnography in a population of elite athletes. Participants (n=16) were endurance-trained cyclists participating in a 6-week training camp. A total of 122 nights of sleep were recorded with polysomnography and activity monitors simultaneously. Agreement, sensitivity, and specificity were calculated from epoch-for-epoch comparisons of polysomnography and activity monitor data. Sleep variables derived from polysomnography and activity monitors were compared using paired t-tests. Activity monitor data were analysed using low, medium, and high sleep-wake thresholds. Epoch-for-epoch comparisons showed good agreement between activity monitors and polysomnography for each sleep-wake threshold (81-90%). Activity monitors were sensitive to sleep (81-92%), but specificity differed depending on the threshold applied (67-82%). Activity monitors underestimated sleep duration (18-90min) and overestimated wake duration (4-77min) depending on the threshold applied. Applying the correct sleep-wake threshold is important when using activity monitors to measure the sleep of elite athletes. For example, the default sleep-wake threshold (>40 activity counts=wake) underestimates sleep duration by ∼50min and overestimates wake duration by ∼40min. In contrast, sleep-wake thresholds that have a high sensitivity to sleep (>80 activity counts=wake) yield the best combination of agreement, sensitivity, and specificity. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
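
    The epoch-for-epoch comparison described above reduces to a 2x2 confusion table between activity-monitor and polysomnography scores. A minimal sketch, with sleep coded 1 and wake 0, is shown below; the synthetic data are invented, and the 40- and 80-count thresholds are simply the example values quoted in the abstract.

    ```python
    import numpy as np

    # Minimal sketch of an epoch-for-epoch comparison between polysomnography (PSG,
    # the reference) and an activity monitor. Sleep = 1, wake = 0.
    def score_epochs(activity_counts, wake_threshold):
        """Score an epoch as wake when its activity count exceeds the threshold."""
        return (activity_counts <= wake_threshold).astype(int)

    def agreement_sensitivity_specificity(psg, act):
        agreement = (psg == act).mean()
        sensitivity = act[psg == 1].mean()        # PSG sleep epochs scored as sleep
        specificity = 1 - act[psg == 0].mean()    # PSG wake epochs scored as wake
        return agreement, sensitivity, specificity

    rng = np.random.default_rng(1)
    psg = (rng.random(960) < 0.85).astype(int)                         # synthetic night, ~85% sleep
    counts = rng.poisson(10, 960) + (1 - psg) * rng.poisson(80, 960)   # wake epochs carry more movement
    for thr in (40, 80):                                               # thresholds quoted above
        print(thr, agreement_sensitivity_specificity(psg, score_epochs(counts, thr)))
    ```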

  17. Harm is all you need? Best interests and disputes about parental decision-making.

    PubMed

    Birchley, Giles

    2016-02-01

    A growing number of bioethics papers endorse the harm threshold when judging whether to override parental decisions. Among other claims, these papers argue that the harm threshold is easily understood by lay and professional audiences and correctly conforms to societal expectations of parents in regard to their children. English law contains a harm threshold which mediates the use of the best interests test in cases where a child may be removed from her parents. Using Diekema's seminal paper as an example, this paper explores the proposed workings of the harm threshold. I use examples from the practical use of the harm threshold in English law to argue that the harm threshold is an inadequate answer to the indeterminacy of the best interests test. I detail two criticisms: First, the harm standard has evaluative overtones and judges are loath to employ it where parental behaviour is misguided but they wish to treat parents sympathetically. Thus, by focusing only on 'substandard' parenting, harm is problematic where the parental attempts to benefit their child are misguided or wrong, such as in disputes about withdrawal of medical treatment. Second, when harm is used in genuine dilemmas, court judgments offer different answers to similar cases. This level of indeterminacy suggests that, in practice, the operation of the harm threshold would be indistinguishable from best interests. Since indeterminacy appears to be the greatest problem in elucidating what is best, bioethicists should concentrate on discovering the values that inform best interests. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  18. Foveal threshold and photoreceptor integrity for prediction of visual acuity after intravitreal aflibercept on age-related macular degeneration.

    PubMed

    Sakai, Tsutomu; Okude, Sachiyo; Tsuneoka, Hiroshi

    2018-01-01

    To determine whether baseline foveal threshold and photoreceptor integrity can predict best-corrected visual acuity (BCVA) at 12 months after intravitreal aflibercept (IVA) therapy in eyes with neovascular age-related macular degeneration (AMD). We evaluated 25 eyes of 25 patients with treatment-naïve neovascular AMD who received IVA once a month for 3 months, followed by once every 2 months for 8 months. BCVA, integrity of the external limiting membrane (ELM) or the ellipsoid zone (EZ) of the photoreceptors, and retinal sensitivity were determined before (baseline) and at 6 and 12 months after initial IVA. The average threshold foveal sensitivity and mean deviation within the central 10° were determined by Humphrey central 10-2 perimetry. Correlations between BCVA at 12 months and integrity of the ELM or EZ, foveal threshold, and mean deviation at each visit were determined. At 12 months, BCVA improved significantly from 0.20±0.23 to 0.10±0.22 logMAR (logarithm of the minimum angle of resolution) units, and foveal threshold and mean deviation improved significantly from 29.0±5.1 and -3.38±3.10 dB to 32.6±3.2 and -1.64±2.10 dB, respectively (P = 0.0009 and P = 0.0021). At baseline, both foveal threshold and integrity of the ELM were significantly correlated with BCVA at 12 months (P = 0.0428 and P = 0.0275). These results indicate that both integrity of the ELM and foveal threshold at baseline can predict BCVA after treatment for neovascular AMD. There is a possibility that these parameters can predict the efficacy of IVA in each case.

  19. W production at large transverse momentum at the CERN Large Hadron Collider.

    PubMed

    Gonsalves, Richard J; Kidonakis, Nikolaos; Sabio Vera, Agustín

    2005-11-25

    We study the production of W bosons at large transverse momentum in pp collisions at the CERN Large Hadron Collider. We calculate the complete next-to-leading order (NLO) corrections to the differential cross section. We find that the NLO corrections provide a large increase to the cross section but, surprisingly, do not reduce the scale dependence relative to leading order (LO). We also calculate next-to-next-to-leading-order (NNLO) soft-gluon corrections and find that, although they are small, they significantly reduce the scale dependence thus providing a more stable result.

  20. Environment and host as large-scale controls of ectomycorrhizal fungi.

    PubMed

    van der Linde, Sietse; Suz, Laura M; Orme, C David L; Cox, Filipa; Andreae, Henning; Asi, Endla; Atkinson, Bonnie; Benham, Sue; Carroll, Christopher; Cools, Nathalie; De Vos, Bruno; Dietrich, Hans-Peter; Eichhorn, Johannes; Gehrmann, Joachim; Grebenc, Tine; Gweon, Hyun S; Hansen, Karin; Jacob, Frank; Kristöfel, Ferdinand; Lech, Paweł; Manninger, Miklós; Martin, Jan; Meesenburg, Henning; Merilä, Päivi; Nicolas, Manuel; Pavlenda, Pavel; Rautio, Pasi; Schaub, Marcus; Schröck, Hans-Werner; Seidling, Walter; Šrámek, Vít; Thimonier, Anne; Thomsen, Iben Margrete; Titeux, Hugues; Vanguelova, Elena; Verstraeten, Arne; Vesterdal, Lars; Waldner, Peter; Wijk, Sture; Zhang, Yuxin; Žlindra, Daniel; Bidartondo, Martin I

    2018-06-06

    Explaining the large-scale diversity of soil organisms that drive biogeochemical processes, and their responses to environmental change, is critical. However, identifying consistent drivers of belowground diversity and abundance for some soil organisms at large spatial scales remains problematic. Here we investigate a major guild, the ectomycorrhizal fungi, across European forests at a spatial scale and resolution that is, to our knowledge, unprecedented, to explore key biotic and abiotic predictors of ectomycorrhizal diversity and to identify dominant responses and thresholds for change across complex environmental gradients. We show the effect of 38 host, environment, climate and geographical variables on ectomycorrhizal diversity, and define thresholds of community change for key variables. We quantify host specificity and reveal plasticity in functional traits involved in soil foraging across gradients. We conclude that environmental and host factors explain most of the variation in ectomycorrhizal diversity, that the environmental thresholds used as major ecosystem assessment tools need adjustment and that the importance of belowground specificity and plasticity has previously been underappreciated.

  1. Humans and seasonal climate variability threaten large-bodied coral reef fish with small ranges

    PubMed Central

    Mellin, C.; Mouillot, D.; Kulbicki, M.; McClanahan, T. R.; Vigliola, L.; Bradshaw, C. J. A.; Brainard, R. E.; Chabanet, P.; Edgar, G. J.; Fordham, D. A.; Friedlander, A. M.; Parravicini, V.; Sequeira, A. M. M.; Stuart-Smith, R. D.; Wantiez, L.; Caley, M. J.

    2016-01-01

    Coral reefs are among the most species-rich and threatened ecosystems on Earth, yet the extent to which human stressors determine species occurrences, compared with biogeography or environmental conditions, remains largely unknown. With ever-increasing human-mediated disturbances on these ecosystems, an important question is not only how many species can inhabit local communities, but also which biological traits determine species that can persist (or not) above particular disturbance thresholds. Here we show that human pressure and seasonal climate variability are disproportionately and negatively associated with the occurrence of large-bodied and geographically small-ranging fishes within local coral reef communities. These species are 67% less likely to occur where human impact and temperature seasonality exceed critical thresholds, such as in the marine biodiversity hotspot: the Coral Triangle. Our results identify the most sensitive species and critical thresholds of human and climatic stressors, providing opportunity for targeted conservation intervention to prevent local extinctions. PMID:26839155

  2. Clinical experience with the words-in-noise test on 3430 veterans: comparisons with pure-tone thresholds and word recognition in quiet.

    PubMed

    Wilson, Richard H

    2011-01-01

    Since the 1940s, measures of pure-tone sensitivity and speech recognition in quiet have been vital components of the audiologic evaluation. Although early investigators urged that speech recognition in noise also should be a component of the audiologic evaluation, only recently has this suggestion started to become a reality. This report focuses on the Words-in-Noise (WIN) Test, which evaluates word recognition in multitalker babble at seven signal-to-noise ratios and uses the 50% correct point (in dB SNR) calculated with the Spearman-Kärber equation as the primary metric. The WIN was developed and validated in a series of 12 laboratory studies. The current study examined the effectiveness of the WIN materials for measuring the word-recognition performance of patients in a typical clinical setting. To examine the relations among three audiometric measures including pure-tone thresholds, word-recognition performances in quiet, and word-recognition performances in multitalker babble for veterans seeking remediation for their hearing loss. Retrospective, descriptive. The participants were 3430 veterans who for the most part were evaluated consecutively in the Audiology Clinic at the VA Medical Center, Mountain Home, Tennessee. The mean age was 62.3 yr (SD = 12.8 yr). The data were collected in the course of a 60 min routine audiologic evaluation. A history, otoscopy, and aural-acoustic immittance measures also were included in the clinic protocol but were not evaluated in this report. Overall, the 1000-8000 Hz thresholds were significantly lower (better) in the right ear (RE) than in the left ear (LE). There was a direct relation between age and the pure-tone thresholds, with greater change across age in the high frequencies than in the low frequencies. Notched audiograms at 4000 Hz were observed in at least one ear in 41% of the participants with more unilateral than bilateral notches. Normal pure-tone thresholds (≤20 dB HL) were obtained from 6% of the participants. Maximum performance on the Northwestern University Auditory Test No. 6 (NU-6) in quiet was ≥90% correct by 50% of the participants, with an additional 20% performing at ≥80% correct; the RE performed 1-3% better than the LE. Of the 3291 who completed the WIN on both ears, only 7% exhibited normal performance (50% correct point of ≤6 dB SNR). Overall, WIN performance was significantly better in the RE (mean = 13.3 dB SNR) than in the LE (mean = 13.8 dB SNR). Recognition performance on both the NU-6 and the WIN decreased as a function of both pure-tone hearing loss and age. There was a stronger relation between the high-frequency pure-tone average (1000, 2000, and 4000 Hz) and the WIN than between the pure-tone average (500, 1000, and 2000 Hz) and the WIN. The results on the WIN from both the previous laboratory studies and the current clinical study indicate that the WIN is an appropriate clinic instrument to assess word-recognition performance in background noise. Recognition performance on a speech-in-quiet task does not predict performance on a speech-in-noise task, as the two tasks reflect different domains of auditory function. Experience with the WIN indicates that word-in-noise tasks should be considered the "stress test" for auditory function. American Academy of Audiology.
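
    The WIN's primary metric, the 50% correct point in dB SNR, follows from the Spearman-Kärber equation applied to a descending-level track. A minimal sketch is given below; the presentation parameters assumed here (a 24 dB SNR starting level, 4 dB decrements, seven levels, 10 words per level) are stated as assumptions about the protocol rather than taken from this report.

    ```python
    # Minimal sketch of the Spearman-Karber estimate of the 50% correct point (dB SNR)
    # for a descending-level words-in-noise track. The start level, step size, and
    # words per level are assumptions about the WIN protocol; substitute the actual
    # test parameters as needed.
    def spearman_karber_50(correct_per_level, start_snr_db=24.0, step_db=4.0, words_per_level=10):
        total_correct = sum(correct_per_level)
        return start_snr_db + step_db / 2.0 - step_db * total_correct / words_per_level

    # Example: 10, 9, 8, 6, 4, 2, 0 words correct at 24, 20, 16, 12, 8, 4, 0 dB SNR.
    print(spearman_karber_50([10, 9, 8, 6, 4, 2, 0]))   # 10.4 dB SNR
    ```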

  3. Integration of community structure data reveals observable effects below sediment guideline thresholds in a large estuary.

    PubMed

    Tremblay, Louis A; Clark, Dana; Sinner, Jim; Ellis, Joanne I

    2017-09-20

    The sustainable management of estuarine and coastal ecosystems requires robust frameworks due to the presence of multiple physical and chemical stressors. In this study, we assessed whether ecological health decline, based on community structure composition changes along a pollution gradient, occurred at levels below guideline threshold values for copper, zinc and lead. Canonical analysis of principal coordinates (CAP) was used to characterise benthic communities along a metal contamination gradient. The analysis revealed changes in benthic community distribution at levels below the individual guideline values for the three metals. These results suggest that field-based measures of ecological health analysed with multivariate tools can provide additional information to single metal guideline threshold values to monitor large systems exposed to multiple stressors.

  4. Damage threshold from large retinal spot size repetitive-pulse laser exposures.

    PubMed

    Lund, Brian J; Lund, David J; Edsall, Peter R

    2014-10-01

    The retinal damage thresholds for large spot size, multiple-pulse exposures to a Q-switched, frequency doubled Nd:YAG laser (532 nm wavelength, 7 ns pulses) have been measured for 100 μm and 500 μm retinal irradiance diameters. The ED50, expressed as energy per pulse, varies only weakly with the number of pulses, n, for these extended spot sizes. The previously reported threshold for a multiple-pulse exposure for a 900 μm retinal spot size also shows the same weak dependence on the number of pulses. The multiple-pulse ED50 for an extended spot-size exposure does not follow the n dependence exhibited by small spot size exposures produced by a collimated beam. Curves derived by using probability-summation models provide a better fit to the data.
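
    The probability-summation curves mentioned above treat the pulses as statistically independent, so the probability of damage after n pulses of per-pulse dose D is 1 - (1 - p1(D))^n, and the multiple-pulse ED50 is the dose at which this reaches 0.5. The sketch below works that out for an assumed log-normal single-pulse dose-response; the ED50 and slope values are illustrative, not the measured ones.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    # Minimal sketch of a probability-summation model for repetitive-pulse exposures.
    # Pulses are assumed independent, so P_n(D) = 1 - (1 - P_1(D))**n, and the
    # multiple-pulse ED50 is the per-pulse dose where P_n = 0.5. The single-pulse
    # ED50 and probit slope are illustrative assumptions, not measured values.
    def single_pulse_prob(dose, ed50=1.0, slope_sigma=0.2):
        """Log-normal (probit in log dose) single-pulse damage probability."""
        return norm.cdf(np.log(dose / ed50) / slope_sigma)

    def multi_pulse_ed50(n_pulses, ed50=1.0, slope_sigma=0.2):
        f = lambda d: 1.0 - (1.0 - single_pulse_prob(d, ed50, slope_sigma)) ** n_pulses - 0.5
        return brentq(f, ed50 * 1e-3, ed50 * 1e3)

    for n in (1, 10, 100, 1000):
        print("%4d pulses -> per-pulse ED50 = %.3f x single-pulse ED50" % (n, multi_pulse_ed50(n)))
    ```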

  5. Effect of subliminal stimuli on consumer behavior: negative evidence.

    PubMed

    George, S G; Jennings, L B

    1975-12-01

    The study corrected methodological weaknesses found in previous experiments designed to test the contentions of motivational research theorists that subliminal stimulation can affect buying behavior. The words "Hershey's Chocolate" were presented to a group of 18 experimental Ss below a forced-choice detection threshold. The 19 control Ss had a blank slide superimposed over the same background media. In a highly controlled buying situation neither experimental nor control Ss purchased Hershey's products, but on comparable chocolate products, the experimental Ss bought 5 and the control Ss, 3. A second study tested 15 experimental and 12 control Ss with the stimulus presented just below a recognition threshold. No experimental Ss bought Hershey's; two control Ss did. No support was found for the claims of motivational research theorists.

  6. Thresholds in the response of free-floating plant abundance to variation in hydraulic connectivity, nutrients, and macrophyte abundance in a large floodplain river

    USGS Publications Warehouse

    Giblin, Shawn M.; Houser, Jeffrey N.; Sullivan, John F.; Langrehr, H.A.; Rogala, James T.; Campbell, Benjamin D.

    2014-01-01

    Duckweed and other free-floating plants (FFP) can form dense surface mats that affect ecosystem condition and processes, and can impair public use of aquatic resources. FFP obtain their nutrients from the water column, and the formation of dense FFP mats can be a consequence and indicator of river eutrophication. We conducted two complementary surveys of diverse aquatic areas of the Upper Mississippi River as an in situ approach for estimating thresholds in the response of FFP abundance to nutrient concentration and physical conditions in a large, floodplain river. Local regression analysis was used to estimate thresholds in the relations between FFP abundance and phosphorus (P) concentration (0.167 mg l−1), nitrogen (N) concentration (0.808 mg l−1), water velocity (0.095 m s−1), and aquatic macrophyte abundance (65% cover). FFP tissue concentrations suggested P limitation was more likely in spring, N limitation was more likely in late summer, and N limitation was most likely in backwaters with minimal hydraulic connection to the channel. The thresholds estimated here, along with observed patterns in nutrient limitation, provide river scientists and managers with criteria to consider when attempting to modify FFP abundance in off-channel areas of large river systems.

  7. Can the human lumbar posterior columns be stimulated by transcutaneous spinal cord stimulation? A modeling study

    PubMed Central

    Danner, Simon M.; Hofstoetter, Ursula S.; Ladenbauer, Josef; Rattay, Frank; Minassian, Karen

    2014-01-01

    Stimulation of different spinal cord segments in humans is a widely developed clinical practice for modification of pain, altered sensation and movement. The human lumbar cord has become a target for modification of motor control by epidural and, more recently, by transcutaneous spinal cord stimulation. The posterior columns of the lumbar spinal cord represent a vertical system of axons which, when activated, can add inputs to the motor control of the spinal cord other than those provided by stimulated posterior roots. We used a detailed three-dimensional volume conductor model of the torso and the McIntyre-Richardson-Grill axon model to calculate the thresholds of axons within the posterior columns in response to transcutaneous lumbar spinal cord stimulation. Superficially located large diameter posterior column fibers with multiple collaterals have a threshold of 45.4 V, three times higher than posterior root fibers (14.1 V). With the stimulation strength needed to activate posterior column axons, posterior root fibers of large and small diameters as well as anterior root fibers are co-activated. The reported results quantify these threshold differences when stimulation is applied to the posterior structures of the lumbar cord at intensities above the threshold of large-diameter posterior root fibers. PMID:21401670

  8. Pressure and cold pain threshold reference values in a large, young adult, pain-free population.

    PubMed

    Waller, Robert; Smith, Anne Julia; O'Sullivan, Peter Bruce; Slater, Helen; Sterling, Michele; McVeigh, Joanne Alexandra; Straker, Leon Melville

    2016-10-01

    Currently there is a lack of large population studies that have investigated pain sensitivity distributions in healthy pain free people. The aims of this study were: (1) to provide sex-specific reference values of pressure and cold pain thresholds in young pain-free adults; (2) to examine the association of potential correlates of pain sensitivity with pain threshold values. This study investigated sex specific pressure and cold pain threshold estimates for young pain free adults aged 21-24 years. A cross-sectional design was utilised using participants (n=617) from the Western Australian Pregnancy Cohort (Raine) Study at the 22-year follow-up. The association of site, sex, height, weight, smoking, health related quality of life, psychological measures and activity with pain threshold values was examined. Pressure pain threshold (lumbar spine, tibialis anterior, neck and dorsal wrist) and cold pain threshold (dorsal wrist) were assessed using standardised quantitative sensory testing protocols. Reference values for pressure pain threshold (four body sites) stratified by sex and site, and cold pain threshold (dorsal wrist) stratified by sex are provided. Statistically significant, independent correlates of increased pressure pain sensitivity measures were site (neck, dorsal wrist), sex (female), higher waist-hip ratio and poorer mental health. Statistically significant, independent correlates of increased cold pain sensitivity measures were, sex (female), poorer mental health and smoking. These data provide the most comprehensive and robust sex specific reference values for pressure pain threshold specific to four body sites and cold pain threshold at the dorsal wrist for young adults aged 21-24 years. Establishing normative values in this young age group is important given that the transition from adolescence to adulthood is a critical temporal period during which trajectories for persistent pain can be established. These data will provide an important research resource to enable more accurate profiling and interpretation of pain sensitivity in clinical pain disorders in young adults. The robust and comprehensive data can assist interpretation of future clinical pain studies and provide further insight into the complex associations of pain sensitivity that can be used in future research. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  9. Incorporating adaptive responses into future projections of coral bleaching.

    PubMed

    Logan, Cheryl A; Dunne, John P; Eakin, C Mark; Donner, Simon D

    2014-01-01

    Climate warming threatens to increase mass coral bleaching events, and several studies have projected the demise of tropical coral reefs this century. However, recent evidence indicates corals may be able to respond to thermal stress through adaptive processes (e.g., genetic adaptation, acclimatization, and symbiont shuffling). How these mechanisms might influence warming-induced bleaching remains largely unknown. This study compared how different adaptive processes could affect coral bleaching projections. We used the latest bias-corrected global sea surface temperature (SST) output from the NOAA/GFDL Earth System Model 2 (ESM2M) for the preindustrial period through 2100 to project coral bleaching trajectories. Initial results showed that, in the absence of adaptive processes, application of a preindustrial climatology to the NOAA Coral Reef Watch bleaching prediction method overpredicts the present-day bleaching frequency. This suggests that corals may have already responded adaptively to some warming over the industrial period. We then modified the prediction method so that the bleaching threshold either permanently increased in response to thermal history (e.g., simulating directional genetic selection) or temporarily increased for 2-10 years in response to a bleaching event (e.g., simulating symbiont shuffling). A bleaching threshold that changes relative to the preceding 60 years of thermal history reduced the frequency of mass bleaching events by 20-80% compared with the 'no adaptive response' prediction model by 2100, depending on the emissions scenario. When both types of adaptive responses were applied, up to 14% more reef cells avoided high-frequency bleaching by 2100. However, temporary increases in bleaching thresholds alone only delayed the occurrence of high-frequency bleaching by ca. 10 years in all but the lowest emissions scenario. Future research should test the rate and limit of different adaptive responses for coral species across latitudes and ocean basins to determine if and how much corals can respond to increasing thermal stress.

  10. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    PubMed

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their impact on functional connectivity in the resting state. © 2013.
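
    Both indices have a direct element-wise form, sketched below for a (time x voxels) data matrix. The function name igafc_anci is hypothetical, and the impact threshold is taken here as a plain input; the paper derives it analytically rather than assuming it.

    ```python
    import numpy as np

    # Minimal sketch of the voxel-wise IGAFC index and the ANCI described above:
    #   IGAFC(v) = corr(GAS, voxel v time course) * corr(GAS, seed time course)
    #   ANCI(v)  = |IGAFC(v) - impact_threshold|
    def igafc_anci(data, seed_ts, impact_threshold):
        """data: (n_timepoints, n_voxels) array; seed_ts: (n_timepoints,) seed time course."""
        gas = data.mean(axis=1)                                      # global average signal
        z = lambda x: (x - x.mean(0)) / x.std(0)                     # z-scoring (population SD)
        corr_gas_voxels = (z(gas)[:, None] * z(data)).mean(axis=0)   # corr(GAS, each voxel)
        corr_gas_seed = float((z(gas) * z(seed_ts)).mean())          # corr(GAS, seed)
        igafc = corr_gas_voxels * corr_gas_seed
        anci = np.abs(igafc - impact_threshold)
        return igafc, anci

    rng = np.random.default_rng(0)
    bold = rng.standard_normal((200, 500))      # synthetic resting-state data
    seed = bold[:, 0]
    igafc, anci = igafc_anci(bold, seed, impact_threshold=0.1)
    print(igafc.shape, anci.shape)
    ```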

  11. Correcting highly aberrated eyes using large-stroke adaptive optics.

    PubMed

    Sabesan, Ramkumar; Ahmad, Kamran; Yoon, Geunyoung

    2007-11-01

    To investigate the optical performance of a large-stroke deformable mirror in correcting large aberrations in highly aberrated eyes. A large-stroke deformable mirror (Mirao 52D; Imagine Eyes) and a Shack-Hartmann wavefront sensor were used in an adaptive optics system. Closed-loop correction of the static aberrations of a phase plate designed for an advanced keratoconic eye was performed for a 6-mm pupil. The same adaptive optics system was also used to correct the aberrations in one eye each of two moderate keratoconic and three normal human eyes for a 6-mm pupil. With closed-loop correction of the phase plate, the total root-mean-square (RMS) over a 6-mm pupil was reduced from 3.54 to 0.04 microm in 30 to 40 iterations, corresponding to 3 to 4 seconds. Adaptive optics closed-loop correction reduced an average total RMS of 1.73+/-0.998 to 0.10+/-0.017 microm (higher order RMS of 0.39+/-0.124 to 0.06+/-0.004 microm) in the three normal eyes and 2.73+/-1.754 to 0.10+/-0.001 microm (higher order RMS of 1.82+/-1.058 to 0.05+/-0.017 microm) in the two keratoconic eyes. Aberrations in both normal and highly aberrated eyes were successfully corrected using the large-stroke deformable mirror to provide almost perfect optical quality. This mirror can be a powerful tool to assess the limit of visual performance achievable after correcting the aberrations, especially in eyes with abnormal corneal profiles.

  12. Dispersive estimates for massive Dirac operators in dimension two

    NASA Astrophysics Data System (ADS)

    Erdoğan, M. Burak; Green, William R.; Toprak, Ebru

    2018-05-01

    We study the massive two-dimensional Dirac operator with an electric potential. In particular, we show that the t^{-1} decay rate holds in the L^1 → L^∞ setting if the threshold energies are regular. We also show that these bounds hold in the presence of s-wave resonances at the threshold. We further show that, if the threshold energies are regular, then a faster decay rate of t^{-1}(log t)^{-2} is attained for large t, at the cost of logarithmic spatial weights. The free Dirac equation does not satisfy this bound due to the s-wave resonances at the threshold energies.

  13. Speed accuracy trade-off under response deadlines

    PubMed Central

    Karşılar, Hakan; Simen, Patrick; Papadakis, Samantha; Balcı, Fuat

    2014-01-01

    Perceptual decision making has been successfully modeled as a process of evidence accumulation up to a threshold. In order to maximize the rewards earned for correct responses in tasks with response deadlines, participants should collapse decision thresholds dynamically during each trial so that a decision is reached before the deadline. This strategy ensures on-time responding, though at the cost of reduced accuracy, since slower decisions are based on lower thresholds and less net evidence later in a trial (compared to a constant threshold). Frazier and Yu (2008) showed that the normative rate of threshold reduction depends on deadline delays and on participants' uncertainty about these delays. Participants should start collapsing decision thresholds earlier when making decisions under shorter deadlines (for a given level of timing uncertainty) or when timing uncertainty is higher (for a given deadline). We tested these predictions using human participants in a random dot motion discrimination task. Each participant was tested in free-response, short deadline (800 ms), and long deadline conditions (1000 ms). Contrary to optimal-performance predictions, the resulting empirical function relating accuracy to response time (RT) in deadline conditions did not decline to chance level near the deadline; nor did the slight decline we typically observed relate to measures of endogenous timing uncertainty. Further, although this function did decline slightly with increasing RT, the decline was explainable by the best-fitting parameterization of Ratcliff's diffusion model (Ratcliff, 1978), whose parameters are constant within trials. Our findings suggest that at the very least, typical decision durations are too short for participants to adapt decision parameters within trials. PMID:25177265
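
    The deadline strategy discussed above can be made concrete with a toy drift-diffusion simulation in which the decision bound collapses linearly to zero at the deadline; slower decisions then cross a lower bound and are less accurate. All parameter values below are illustrative assumptions, and the linear collapse is a convenient stand-in rather than the normative schedule of Frazier and Yu (2008) or a fit to the reported data.

    ```python
    import numpy as np

    # Toy sketch: drift-diffusion decision with a linearly collapsing bound.
    # Evidence accumulates with drift plus noise; a response occurs when it crosses
    # +/- b(t), where b(t) shrinks to zero at the deadline. Parameters are illustrative.
    def simulate_trial(drift=0.8, noise=1.0, b0=1.2, deadline_s=0.8, dt=0.001, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        x, t = 0.0, 0.0
        while t < deadline_s:
            bound = b0 * (1.0 - t / deadline_s)       # linear collapse to 0 at the deadline
            if x >= bound:
                return +1, t                          # correct choice (drift is positive)
            if x <= -bound:
                return -1, t                          # error
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return (1 if x >= 0 else -1), deadline_s      # forced guess at the deadline

    rng = np.random.default_rng(2)
    outcomes = [simulate_trial(rng=rng) for _ in range(2000)]
    accuracy = np.mean([choice == 1 for choice, _ in outcomes])
    mean_rt = np.mean([rt for _, rt in outcomes])
    print("accuracy %.3f, mean RT %.3f s" % (accuracy, mean_rt))
    ```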

  14. Dynamical origin of near- and below-threshold harmonic generation of Cs in an intense mid-infrared laser field.

    PubMed

    Li, Peng-Cheng; Sheu, Yae-Lin; Laughlin, Cecil; Chu, Shih-I

    2015-05-20

    Near- and below-threshold harmonic generation provides a potential approach to generating a vacuum-ultraviolet frequency comb. However, the dynamical origin of these lower harmonics is less well understood and largely unexplored. Here we perform an ab initio quantum study of the near- and below-threshold harmonic generation of caesium (Cs) atoms in an intense 3,600-nm mid-infrared laser field. Combining a synchrosqueezing transform of the quantum time-frequency spectrum with an extended semiclassical analysis, the roles of multiphoton and multiple rescattering trajectories in the near- and below-threshold harmonic generation processes are clarified. We find that the multiphoton-dominated trajectories only involve electrons scattered off the higher part of the combined atom-field potential followed by the absorption of many photons in the near- and below-threshold regime. Furthermore, only the near-resonant below-threshold harmonic exhibits phase-locked features. Our results shed light on the dynamical origin of near- and below-threshold harmonic generation.

  15. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation

    PubMed Central

    Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784

  16. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.

    PubMed

    Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely.
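
    The polynomial-interpolation machinery that both records above build on is Shamir's classic (t, n) construction: the dealer hides the secret in the constant term of a random degree t-1 polynomial over a prime field, and any t shares recover it by Lagrange interpolation at zero. The sketch below shows only that basic scheme; it does not implement the paper's dealer-free threshold changes or its two-variable one-way function, and the prime and secret are illustrative.

    ```python
    import random

    # Minimal sketch of Shamir-style (t, n) secret sharing over a prime field, the
    # polynomial-interpolation core that threshold-changeable schemes extend. It does
    # NOT implement the dealer-free threshold update or the two-variable one-way
    # function described in the abstract.
    PRIME = 2 ** 127 - 1  # a Mersenne prime large enough for small secrets

    def make_shares(secret, t, n, prime=PRIME):
        coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
        poly = lambda x: sum(c * pow(x, k, prime) for k, c in enumerate(coeffs)) % prime
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover_secret(shares, prime=PRIME):
        # Lagrange interpolation evaluated at x = 0.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * (-xj)) % prime
                    den = (den * (xi - xj)) % prime
            secret = (secret + yi * num * pow(den, -1, prime)) % prime
        return secret

    shares = make_shares(secret=123456789, t=3, n=6)
    print(recover_secret(shares[:3]))    # any 3 of the 6 shares recover 123456789
    print(recover_secret(shares[2:5]))   # a different subset of 3 works too
    ```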

  17. Avulsion threshold in a large Himalayan river: the case of the Kosi, India and Nepal

    NASA Astrophysics Data System (ADS)

    Sinha, R.; Kommula, S.

    2010-12-01

    Avulsion, the relatively rapid shift of a river to a new course on a lower part of a floodplain, is considered a major fluvial hazard in large population centers such as the north Bihar plains, eastern India and the adjoining areas of Nepal. This region witnessed one of the most recent avulsions of the Kosi River on 18 August 2008, when the river shifted by ~120 km eastward. This was perhaps one of the greatest avulsions of a large river in recent years; it was triggered by the breach of the eastern afflux bund at Kusaha in Nepal, 12 km upstream of the Kosi barrage, and affected more than 3 million people in Nepal and north Bihar. The trigger for an avulsion largely depends upon the regional channel-floodplain slope relationships and the lowest elevation available in the region. Most of the available assessments of avulsion threshold have therefore been based on the examination of channel slopes, both longitudinal and cross-sectional. However, planform dynamics in a sediment-charged river such as the Kosi also play an important role in pushing the river towards the threshold for avulsion. The present study has made use of SRTM DEM, temporal satellite images and maps to compute the avulsion threshold for a ~50 km long reach of the Kosi river after incorporating planform dynamics in a GIS environment. Flow accumulation paths generated from the SRTM data match closely with the zones of high avulsion threshold. Not only does the Kusaha site plot in a high avulsion threshold zone, we also identify several critical points where a breach (avulsion) could occur in the near future. This study assumes global significance in view of the recent flooding of the Indus River in Pakistan. Like the Kusaha breach on the Kosi in August 2008, the Indus flood started with the breach of the eastern marginal embankment upstream of the Taunsa barrage and was apparently triggered by a rise in bed level due to excessive sediment load. The mega avulsion of the Kosi on 18th August 2008 which occurred due to a breach in the eastern embankment at Kusaha, Nepal

  18. Rainfall thresholds as a landslide indicator for engineered slopes on the Irish Rail network

    NASA Astrophysics Data System (ADS)

    Martinović, Karlo; Gavin, Kenneth; Reale, Cormac; Mangan, Cathal

    2018-04-01

    Rainfall thresholds express the minimum levels of rainfall that need to be reached or exceeded in order for landslides to occur in a particular area. They are a common tool for expressing the temporal portion of landslide hazard analysis. Numerous rainfall thresholds have been developed for different areas worldwide; however, none of these are focused on landslides occurring on the engineered slopes of transport infrastructure networks. This paper uses an empirical method to develop rainfall thresholds for landslides on Irish Rail network earthworks. For comparison, rainfall thresholds are also developed for natural terrain in Ireland. The results show that thresholds involving relatively low rainfall intensities are applicable to Ireland, owing to its specific climate. Furthermore, the comparison shows that rainfall thresholds for engineered slopes are lower than those for landslides occurring on natural terrain. This has serious implications, as it indicates that there is significant risk in using generic weather alerts (developed largely for natural terrain) for infrastructure management, and it highlights the need to develop railway- and road-specific rainfall thresholds for landslides.
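
    Empirical rainfall thresholds of this kind are often expressed as a power law, I = alpha * D^beta, between mean rainfall intensity I and duration D, fitted as a lower envelope to rainfall events that triggered landslides. The sketch below fits such a curve to a low quantile in log-log space; the power-law form, the 5% exceedance level, and the synthetic data are all assumptions standing in for the study's empirical method, which is not specified here.

    ```python
    import numpy as np

    # Rough sketch of an empirical intensity-duration (ID) rainfall threshold,
    # I = alpha * D**beta, fitted as a lower envelope to triggering rainfall events.
    # The functional form, 5% exceedance level, and synthetic data are assumptions.
    def fit_id_threshold(duration_h, intensity_mm_h, exceed_frac=0.05):
        logD, logI = np.log10(duration_h), np.log10(intensity_mm_h)
        beta, intercept = np.polyfit(logD, logI, 1)      # central log-log trend
        shift = np.quantile(logI - (beta * logD + intercept), exceed_frac)
        alpha = 10 ** (intercept + shift)                # lower the line to the envelope
        return alpha, beta

    rng = np.random.default_rng(3)
    D = 10 ** rng.uniform(0, 2, 150)                     # event durations: 1-100 h
    I = 20 * D ** -0.6 * 10 ** rng.normal(0, 0.15, 150)  # noisy triggering intensities
    alpha, beta = fit_id_threshold(D, I)
    print("threshold: I = %.1f * D^%.2f (mm/h)" % (alpha, beta))
    ```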

  19. Assessing the Electrode-Neuron Interface with the Electrically Evoked Compound Action Potential, Electrode Position, and Behavioral Thresholds.

    PubMed

    DeVries, Lindsay; Scheperle, Rachel; Bierer, Julie Arenberg

    2016-06-01

    Variability in speech perception scores among cochlear implant listeners may largely reflect the variable efficacy of implant electrodes to convey stimulus information to the auditory nerve. In the present study, three metrics were applied to assess the quality of the electrode-neuron interface of individual cochlear implant channels: the electrically evoked compound action potential (ECAP), the estimation of electrode position using computerized tomography (CT), and behavioral thresholds using focused stimulation. The primary motivation of this approach is to evaluate the ECAP as a site-specific measure of the electrode-neuron interface in the context of two peripheral factors that likely contribute to degraded perception: large electrode-to-modiolus distance and reduced neural density. Ten unilaterally implanted adults with Advanced Bionics HiRes90k devices participated. ECAPs were elicited with monopolar stimulation within a forward-masking paradigm to construct channel interaction functions (CIF), behavioral thresholds were obtained with quadrupolar (sQP) stimulation, and data from imaging provided estimates of electrode-to-modiolus distance and scalar location (scala tympani (ST), intermediate, or scala vestibuli (SV)) for each electrode. The width of the ECAP CIF was positively correlated with electrode-to-modiolus distance; both of these measures were also influenced by scalar position. The ECAP peak amplitude was negatively correlated with behavioral thresholds. Moreover, subjects with low behavioral thresholds and large ECAP amplitudes, averaged across electrodes, tended to have higher speech perception scores. These results suggest a potential clinical role for the ECAP in the objective assessment of individual cochlear implant channels, with the potential to improve speech perception outcomes.

  20. Dead time corrections using the backward extrapolation method

    NASA Astrophysics Data System (ADS)

    Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.

    2017-05-01

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to introduce strong bias in physical experiments as the power grows beyond a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled count per second (CPS), based on extrapolating the losses created by increasingly long, artificially imposed dead times on the data backward to zero dead time. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
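
    The mechanics of the backward extrapolation can be illustrated on a simple Poisson event stream with a non-paralyzing dead time: impose a series of longer artificial dead times, record the surviving count rate for each, and extrapolate the trend back to zero dead time. The sketch below uses the fact that, for this simple model, the inverse rate is approximately linear in the dead time; the Poisson source, the non-paralyzing filter, and all numeric values are assumptions, and the published method additionally handles the correlated fission-chain statistics of reactor noise.

    ```python
    import numpy as np

    # Toy sketch of a backward-extrapolation dead time correction for a Poisson source
    # with a non-paralyzing dead time: impose increasingly long artificial dead times,
    # measure the surviving count rate, and extrapolate 1/rate linearly back to zero
    # dead time. All model choices and numbers are illustrative assumptions.
    def non_paralyzing_filter(times, tau):
        kept, last = [], -np.inf
        for t in times:
            if t - last > tau:
                kept.append(t)
                last = t
        return np.asarray(kept)

    def backward_extrapolated_cps(measured_times, duration_s, imposed_taus):
        rates = np.array([len(non_paralyzing_filter(measured_times, tau)) / duration_s
                          for tau in imposed_taus])
        slope, intercept = np.polyfit(imposed_taus, 1.0 / rates, 1)  # 1/rate ~ linear in tau
        return 1.0 / intercept                                        # extrapolated rate at tau = 0

    rng = np.random.default_rng(4)
    true_rate, duration, intrinsic_tau = 5000.0, 20.0, 2e-6
    raw = np.cumsum(rng.exponential(1.0 / true_rate, int(true_rate * duration * 1.1)))
    raw = raw[raw < duration]
    measured = non_paralyzing_filter(raw, intrinsic_tau)              # what the system records
    taus = np.linspace(20e-6, 100e-6, 9)
    print("measured CPS: %.0f" % (len(measured) / duration),
          "| corrected CPS: %.0f" % backward_extrapolated_cps(measured, duration, taus),
          "| true CPS: %.0f" % true_rate)
    ```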

  1. Strongly screening corrections to antineutrino energy loss by β --decay of nuclides 53Fe, 54Fe, 55Fe, and 56Fe in supernova

    NASA Astrophysics Data System (ADS)

    Liu, Jing-Jing; Liu, Dong-Mei

    2018-06-01

    Based on the p-f shell-model, we discuss and calculate β--decay half-lives of neutron-rich nuclei, with a consideration of shell and pair effects, the decay energy, and the nucleon numbers. According to the linear response theory model, we study the effect of electron screening on the electron energy, beta-decay threshold energy, and the antineutrino energy loss rate by β--decay of some iron isotopes. We find that the electron screening antineutrino energy loss rates increase by about two orders of magnitude due to the shell effects and the pairing effect. Beta-decay rates with Q-value corrections due to strong electron screening are higher than those without the Q-value corrections by more than two orders of magnitude. Our conclusions may be helpful for the research on numerical simulations of the cooling of stars.

  2. The solar cycle variation of the rates of CMEs and related activity

    NASA Technical Reports Server (NTRS)

    Webb, David F.

    1991-01-01

    Coronal mass ejections (CMEs) are an important aspect of the physics of the corona and heliosphere. This paper presents results of a study of occurrence frequencies of CMEs and related activity tracers over more than a complete solar activity cycle. To properly estimate occurrence rates, observed CME rates must be corrected for instrument duty cycles, detection efficiencies away from the skyplane, mass detection thresholds, and geometrical considerations. These corrections are evaluated using CME data from 1976-1989 obtained with the Skylab, SMM and SOLWIND coronagraphs and the Helios-2 photometers. The major results are: (1) the occurrence rate of CMEs tends to track the activity cycle in both amplitude and phase; (2) the corrected rates from different instruments are reasonably consistent; and (3) over the long term, no one class of solar activity tracer is better correlated with CME rate than any other (with the possible exception of type II bursts).

  3. Defining major trauma using the 2008 Abbreviated Injury Scale.

    PubMed

    Palmer, Cameron S; Gabbe, Belinda J; Cameron, Peter A

    2016-01-01

    The Injury Severity Score (ISS) is the most widely used summary score derived from Abbreviated Injury Scale (AIS) data. It is frequently used to classify patients as 'major trauma' using a threshold of ISS >15. However, it is not known whether this is still appropriate, given the changes that have been made to the AIS codeset since this threshold was first used. This study aimed to identify appropriate ISS and New Injury Severity Score (NISS) thresholds for use with the 2008 AIS (AIS08) that predict mortality and in-hospital resource use comparably to ISS >15 using AIS98. Data from 37,760 patients in a state trauma registry were retrieved and reviewed. AIS data coded using the 1998 AIS (AIS98) were mapped to AIS08. ISS and NISS were calculated, and their effects on patient classification compared. The ability of selected ISS and NISS thresholds to predict mortality or high-level in-hospital resource use (the need for ICU or urgent surgery) was assessed. An ISS >12 using AIS08 was similar to an ISS >15 using AIS98 in terms of both the number of patients classified as major trauma and overall major trauma mortality. A 10% mortality level was only seen for ISS 25 or greater. A NISS >15 performed similarly to both of these ISS thresholds. However, the AIS08-based ISS >12 threshold correctly classified significantly more patients than a NISS >15 threshold for all three severity measures assessed. When coding injuries using AIS08, an ISS >12 appears to function similarly to an ISS >15 in AIS98 for the purposes of identifying a population with an elevated risk of death after injury. Where mortality is a primary outcome of trauma monitoring, an ISS >12 threshold could be adopted to identify major trauma patients. Level II evidence--diagnostic tests and criteria. Copyright © 2015 Elsevier Ltd. All rights reserved.
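
    Both scores are simple functions of the AIS severity codes: ISS is the sum of squares of the highest severity in each of the three most severely injured body regions (conventionally set to 75 when any injury is AIS 6; the same convention is assumed for NISS here), and NISS is the sum of squares of the three highest severities regardless of region. A minimal sketch follows; the body-region strings are illustrative placeholders for the standard ISS region mapping.

    ```python
    # Minimal sketch of ISS and NISS computation from AIS-coded injuries, following the
    # standard scoring rules. Each injury is a (body_region, severity) pair; the region
    # labels here are illustrative placeholders for the standard ISS body regions.
    def iss(injuries):
        if any(sev == 6 for _, sev in injuries):
            return 75                                    # conventional maximal score
        worst_per_region = {}
        for region, sev in injuries:
            worst_per_region[region] = max(worst_per_region.get(region, 0), sev)
        top3 = sorted(worst_per_region.values(), reverse=True)[:3]
        return sum(s * s for s in top3)

    def niss(injuries):
        if any(sev == 6 for _, sev in injuries):
            return 75                                    # same AIS 6 convention assumed
        top3 = sorted((sev for _, sev in injuries), reverse=True)[:3]
        return sum(s * s for s in top3)

    # Example: head AIS 4 plus two chest injuries of AIS 3.
    case = [("head", 4), ("chest", 3), ("chest", 3)]
    print(iss(case), niss(case))   # ISS = 16 + 9 = 25, NISS = 16 + 9 + 9 = 34
    ```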

  4. MO-FG-202-09: Virtual IMRT QA Using Machine Learning: A Multi-Institutional Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdes, G; Scheuermann, R; Solberg, T

    Purpose: To validate a machine learning approach to Virtual IMRT QA for accurately predicting gamma passing rates using different QA devices at different institutions. Methods: A Virtual IMRT QA was constructed using a machine learning algorithm based on 416 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold. An independent set of 139 IMRT measurements from a different institution, with QA data based on portal dosimetry using the same gamma index and 10% threshold, was used to further test the algorithm. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input. Results: In addition to predicting passing rates with 3% accuracy for all composite plans using diode-array detectors, passing rates for portal dosimetry on a per-beam basis were predicted with an error <3.5% for 120 IMRT measurements. The remaining measurements (19) had large areas of low CU, where portal dosimetry has larger disagreement with the calculated dose and, as such, large errors were expected. These beams need to be further modeled to correct the under-response in low dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO) area, jaw position, fraction of MLC leaves with gaps smaller than 20 mm or 5 mm, fraction of area receiving less than 50% of the total CU, fraction of the area receiving dose from penumbra, weighted average irregularity factor, and duty cycle, among others. Conclusion: We have demonstrated that the Virtual IMRT QA can predict passing rates using different QA devices and across multiple institutions. Prediction of QA passing rates could have profound implications for the current IMRT process.

  5. Testing and Performance Analysis of the Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    Soni, Nitin J.

    1996-01-01

    This report provides the test results and performance analysis of the multichannel error correction code decoder (MED) system for a regenerative satellite with asynchronous, frequency-division multiple access (FDMA) uplink channels. It discusses the system performance relative to various critical parameters: the coding length, data pattern, unique word value, unique word threshold, and adjacent-channel interference. Testing was performed under laboratory conditions and used a computer control interface with specifically developed control software to vary these parameters. Needed technologies - the high-speed Bose Chaudhuri-Hocquenghem (BCH) codec from Harris Corporation and the TRW multichannel demultiplexer/demodulator (MCDD) - were fully integrated into the mesh very small aperture terminal (VSAT) onboard processing architecture and were demonstrated.

  6. Review of approaches to the recording of background lesions in toxicologic pathology studies in rats.

    PubMed

    McInnes, E F; Scudamore, C L

    2014-08-17

    Pathological evaluation of lesions caused directly by xenobiotic treatment must always take into account the recognition of background (incidental) findings. Background lesions can be congenital or hereditary, histological variations, changes related to trauma or normal aging and physiologic or hormonal changes. This review focuses on the importance and correct approach to recording of background changes and includes discussion on sources of variability in background changes, the correct use of terminology, the concept of thresholds, historical control data, diagnostic drift, blind reading of slides, scoring and artifacts. The review is illustrated with background lesions in Sprague Dawley and Wistar rats. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Accuracy of Range Restriction Correction with Multiple Imputation in Small and Moderate Samples: A Simulation Study

    ERIC Educational Resources Information Center

    Pfaffel, Andreas; Spiel, Christiane

    2016-01-01

    Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…

  8. Ethical Issues Related to the Promotion of a "100 mSv Threshold Assumption" in Japan after the Fukushima Nuclear Accident in 2011: Background and Consequences.

    PubMed

    Tsuda, Toshihide; Lindahl, Lena; Tokinobu, Akiko

    2017-06-01

    This article describes the debates in Japan regarding the 100 mSv threshold assumption and the ethical issues related to it, and explores the background to the distorted risk information and the absence of risk communication in Japan. We then consider what proper risk communication based on scientific evidence would require. On March 11, 2011 an accident occurred at the Fukushima Daiichi Nuclear Power Plant due to the Great East Japan Earthquake. Since then a number of misunderstandings have become common in Japan as a result of public statements by the Japanese and local governments that have no basis in medical science or contradict it. Consequently, not only the population of Fukushima Prefecture but also others have been subjected to unnecessary exposure to radiation, against the As Low As Reasonably Achievable (ALARA) principle. The number of cases of thyroid cancer has increased by one or two orders of magnitude since the accident in Fukushima. However, the population has hardly received any correct information from the central and local governments, medical societies, and the media. At the center of this problem is a statement on radiation-induced cancer (including thyroid cancer) made by the Japanese Government and Japanese medical academic societies indicating that "exposure of less than 100 mSv gives rise to no excess risk of cancer, and even if there is some resulting cancer it will be impossible to detect it" (referred to hereafter as "the 100 mSv threshold assumption"). They have been saying this since April 2011 and have made no effort to correct it. Many Japanese have begun to notice this, but correct information on radiation protection has reached only part of the population. Risk communication should be based on scientific evidence, and providing that evidence to the public is a key element; in Japan, governments and academic societies attempted to communicate with the public without doing so. The ethical problems after the accident in Fukushima can thus be understood as consequences of the mistakes in both risk information and risk communication in Japan after 2011.

  9. Cathodal transcranial direct-current stimulation over right posterior parietal cortex enhances human temporal discrimination ability.

    PubMed

    Oyama, Fuyuki; Ishibashi, Keita; Iwanaga, Koichi

    2017-12-04

    Time perception associated with durations from 1 s to several minutes involves activity in the right posterior parietal cortex (rPPC). It is unclear whether altering the activity of the rPPC affects an individual's timing performance. Here, we investigated human timing performance during the application of transcranial direct-current stimulation (tDCS) that altered the neural activity of the rPPC. We measured the participants' duration-discrimination threshold by administering a behavioral task during the tDCS application. The tDCS conditions consisted of anodal, cathodal, and sham conditions. The electrodes were placed over the P4 position (10-20 system) and on the left supraorbital forehead. On each task trial, the participant observed two visual stimuli and indicated which was longer. The amount of difference between the two stimulus durations was varied repeatedly throughout the trials according to the participant's responses. The correct answer rate of the trials was calculated for each amount of difference, and the minimum amount with the correct answer rate exceeding 75% was selected as the threshold. The data were analyzed by a linear mixed-effects model procedure. Nineteen volunteers participated in the experiment. We excluded three participants from the analysis: two who reported extreme sleepiness while performing the task and one who could recognize the sham condition correctly with confidence. Our analysis of the 16 participants' data showed that the average value of the thresholds observed under the cathodal condition was lower than that of the sham condition. This suggests that inhibition of the rPPC leads to an improvement in temporal discrimination performance, resulting in improved timing performance. In the present study, we found a new effect: cathodal tDCS over the rPPC enhances temporal discrimination performance. In terms of the existence of anodal/cathodal tDCS effects on human timing performance, the results were consistent with a previous study that investigated temporal reproduction performance during tDCS application. However, the results of the current study further indicated that cathodal tDCS over the rPPC increases the accuracy of observed time durations rather than inducing the overestimation reported in a previous study.

  10. Upsets in Erased Floating Gate Cells With High-Energy Protons

    DOE PAGES

    Gerardin, S.; Bagatin, M.; Paccagnella, A.; ...

    2017-01-01

    We discuss upsets in erased floating gate cells, due to large threshold voltage shifts, using statistical distributions collected on a large number of memory cells. The spread in the neutral threshold voltage appears to be too low to quantitatively explain the experimental observations in terms of simple charge loss, at least in SLC devices. The possibility that memories exposed to high energy protons and heavy ions exhibit negative charge transfer between programmed and erased cells is investigated, although the analysis does not provide conclusive support to this hypothesis.

  11. Low authority-threshold control for large flexible structures

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. C.; Inman, D. J.; Juang, J.-N.

    1988-01-01

    An improved active control strategy for the vibration control of large flexible structures is presented. A minimum force, low authority-threshold controller is developed to bring a system with or without known external disturbances back into an 'allowable' state manifold over a finite time interval. The concept of a constrained, or allowable feedback form of the controller is introduced that reflects practical hardware implementation concerns. The robustness properties of the control strategy are then assessed. Finally, examples are presented which highlight the key points made within the paper.

  12. A study of the threshold method utilizing raingage data

    NASA Technical Reports Server (NTRS)

    Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David

    1993-01-01

    The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rainrate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assumption of lognormal distributions with different scale parameters and same shape parameters.
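
    A small sketch of the threshold method itself, assuming lognormal rain rates with a common shape parameter but different scale parameters for two illustrative regimes; the numbers below are assumptions, so the sketch shows how the threshold coefficient S = (area-average rate)/(fractional area above the threshold) is obtained rather than reproducing the reported invariance at 10 mm/h:

        import numpy as np

        rng = np.random.default_rng(1)
        tau = 10.0            # rain-rate threshold (mm/h)
        sigma = 1.2           # common lognormal shape parameter
        for name, mu in [("regime A", 1.0), ("regime B", 1.6)]:
            rates = rng.lognormal(mean=mu, sigma=sigma, size=200_000)
            rates[rng.random(rates.size) < 0.7] = 0.0      # 70% of the area rain-free
            F = np.mean(rates > tau)                       # fractional area above tau
            R_bar = rates.mean()                           # area-average rain rate
            print(f"{name}: F(10 mm/h)={F:.3f}  R_bar={R_bar:.2f} mm/h  S=R_bar/F={R_bar/F:.1f}")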

  13. Fan-out Estimation in Spin-based Quantum Computer Scale-up.

    PubMed

    Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R

    2017-10-17

    Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb confined (e.g. donor based spin qubits) or electrostatically confined (e.g. quantum dot based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout we estimate 10^2-10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher-dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.

  14. In-silico Taxonomic Classification of 373 Genomes Reveals Species Misidentification and New Genospecies within the Genus Pseudomonas.

    PubMed

    Tran, Phuong N; Savka, Michael A; Gan, Han Ming

    2017-01-01

    The genus Pseudomonas has one of the largest species diversities within the Bacteria kingdom. To date, its taxonomy is still being revised and updated. Due to non-standardized procedures and ambiguous thresholds at the species level, largely based on the 16S rRNA gene or conventional biochemical assays, species identification of publicly available Pseudomonas genomes remains questionable. In this study, we performed a large-scale analysis of all Pseudomonas genomes with species designation (excluding the well-defined P. aeruginosa) and re-evaluated their taxonomic assignment via in silico genome-genome hybridization and/or genetic comparison with valid type species. Three hundred and seventy-three pseudomonad genomes were analyzed and subsequently clustered into 145 distinct genospecies. We detected 207 erroneous labels and corrected 43 to the proper species based on Average Nucleotide Identity and/or Multilocus Sequence Typing (MLST) sequence similarity to the type strain. Surprisingly, more than half of the genomes initially designated as Pseudomonas syringae and Pseudomonas fluorescens should be classified either to a previously described species or to a new genospecies. Notably, high pairwise average nucleotide identity (>95%) indicating species-level similarity was observed between P. synxantha and P. libanensis, P. psychrotolerans and P. oryzihabitans, and P. kilonensis and P. brassicacearum, which were previously differentiated based on conventional biochemical tests and/or genome-genome hybridization techniques.

  15. Semiclassical excited-state signatures of quantum phase transitions in spin chains with variable-range interactions

    NASA Astrophysics Data System (ADS)

    Gessner, Manuel; Bastidas, Victor Manuel; Brandes, Tobias; Buchleitner, Andreas

    2016-04-01

    We study the excitation spectrum of a family of transverse-field spin chain models with variable interaction range and arbitrary spin S, which in the case of S = 1/2 interpolates between the Lipkin-Meshkov-Glick and the Ising model. For any finite number N of spins, a semiclassical energy manifold is derived in the large-S limit employing bosonization methods, and its geometry is shown to determine not only the leading-order term but also the higher-order quantum fluctuations. Based on a multiconfigurational mean-field ansatz, we obtain the semiclassical backbone of the quantum spectrum through the extremal points of a series of one-dimensional energy landscapes, each one exhibiting a bifurcation when the external magnetic field drops below a threshold value. The obtained spectra become exact in the limit of vanishing or very strong external, transverse magnetic fields. Further analysis of the higher-order corrections in 1/√(2S) enables us to analytically study the dispersion relations of spin-wave excitations around the semiclassical energy levels. Within the same model, we are able to investigate quantum bifurcations, which occur in the semiclassical (S ≫ 1) limit, and quantum phase transitions, which are observed in the thermodynamic (N → ∞) limit.

  16. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    NASA Astrophysics Data System (ADS)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for 6Li in model spaces up to Nmax = 22 and to reveal the 4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
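
    The exponential form commonly used for such infrared extrapolations, E(L) = E_inf + a0 * exp(-2 * k_inf * L), can be fitted in a few lines; the synthetic energies and effective box sizes below are illustrative assumptions, not the paper's data:

        import numpy as np
        from scipy.optimize import curve_fit

        def ir_model(L, E_inf, a0, k_inf):
            return E_inf + a0 * np.exp(-2.0 * k_inf * L)

        L = np.array([14.0, 16.0, 18.0, 20.0, 22.0])        # effective box sizes (fm), hypothetical
        E = ir_model(L, -31.9, 40.0, 0.35) \
            + 0.02 * np.random.default_rng(2).normal(size=L.size)   # synthetic energies (MeV)

        (E_inf, a0, k_inf), _ = curve_fit(ir_model, L, E, p0=(-30.0, 10.0, 0.3))
        print(f"E_inf = {E_inf:.2f} MeV, k_inf = {k_inf:.3f} fm^-1 (sets the IR momentum scale)")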

  17. Essays on price dynamics, discovery, and dynamic threshold effects among energy spot markets in North America

    NASA Astrophysics Data System (ADS)

    Park, Haesun

    2005-12-01

    Given the role electricity and natural gas sectors play in the North American economy, an understanding of how markets for these commodities interact is important. This dissertation independently characterizes the price dynamics of major electricity and natural gas spot markets in North America by combining directed acyclic graphs with time series analyses. Furthermore, the dissertation explores a generalization of price difference bands associated with the law of one price. Interdependencies among 11 major electricity spot markets are examined in Chapter II using a vector autoregression model. Results suggest that the relationships between the markets vary by time. Western markets are separated from the eastern markets and the Electricity Reliability Council of Texas. At longer time horizons these separations disappear. Palo Verde is the important spot market in the west for price discovery. Southwest Power Pool is the dominant market in Eastern Interconnected System for price discovery. Interdependencies among eight major natural gas spot markets are investigated using a vector error correction model and the Greedy Equivalence Search Algorithm in Chapter III. Findings suggest that the eight price series are tied together through six long-run cointegration relationships, supporting the argument that the natural gas market has developed into a single integrated market in North America since deregulation. Results indicate that price discovery tends to occur in the excess consuming regions and move to the excess producing regions. Across North America, the U.S. Midwest region, represented by the Chicago spot market, is the most important for price discovery. The Ellisburg-Leidy Hub in Pennsylvania and Malin Hub in Oregon are important for eastern and western markets. In Chapter IV, a threshold vector error correction model is applied to the natural gas markets to examine nonlinearities in adjustments to the law of one price. Results show that there are nonlinear adjustments to the law of one price in seven pair-wise markets. Four alternative cases for the law of one price are presented as a theoretical background. A methodology is developed for finding a threshold cointegration model that accounts for seasonality in the threshold levels. Results indicate that dynamic threshold effects vary depending on geographical location and whether the markets are excess producing or excess consuming markets.

  18. Device and material characterization and analytic modeling of amorphous silicon thin film transistors

    NASA Astrophysics Data System (ADS)

    Slade, Holly Claudia

    Hydrogenated amorphous silicon thin film transistors (TFTs) are now well-established as switching elements for a variety of applications in the lucrative electronics market, such as active matrix liquid crystal displays, two-dimensional imagers, and position-sensitive radiation detectors. These applications necessitate the development of accurate characterization and simulation tools. The main goal of this work is the development of a semi-empirical, analytical model for the DC and AC operation of an amorphous silicon TFT for use in a manufacturing facility to improve yield and maintain process control. The model is physically-based, in order that the parameters scale with gate length and can be easily related back to the material and device properties. To accomplish this, extensive experimental data and 2D simulations are used to observe and quantify non-crystalline effects in the TFTs. In particular, due to the disorder in the amorphous network, localized energy states exist throughout the band gap and affect all regimes of TFT operation. These localized states trap most of the free charge, causing a gate-bias-dependent field effect mobility above threshold, a power-law dependence of the current on gate bias below threshold, very low leakage currents, and severe frequency dispersion of the TFT gate capacitance. Additional investigations of TFT instabilities reveal the importance of changes in the density of states and/or back channel conduction due to bias and thermal stress. In the above threshold regime, the model is similar to the crystalline MOSFET model, considering the drift component of free charge. This approach uses the field effect mobility to take into account the trap states and must utilize the correct definition of threshold voltage. In the below threshold regime, the density of deep states is taken into account. The leakage current is modeled empirically, and the parameters are temperature dependent to 150 °C. The capacitance of the TFT can be modeled using a transmission line model, which is implemented using a small signal circuit with access resistors in series with the source and drain capacitances. This correctly reproduces the frequency dispersion in the TFT. Automatic parameter extraction routines are provided and are used to test the robustness of the model on a variety of devices from different research laboratories. The results demonstrate excellent agreement, showing that the model is suitable for device design, scaling, and implementation in the manufacturing process.

  19. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    NASA Astrophysics Data System (ADS)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by indirect method can be generally transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, the gradient-related optimisation with large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient which is an equivalence to the SNR and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm based on two sets of physically realisable system input-output data details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when local minimum is present as the SNR is small. Then, the algorithm is tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold and a gas turbine engine real system for model identification, respectively. Finally, the graphical validation of threshold on a two-dimensional plot is discussed.

  20. Building rainfall thresholds for large-scales landslides by extracting occurrence time of landslides from seismic records

    NASA Astrophysics Data System (ADS)

    Yen, Hsin-Yi; Lin, Guan-Wei

    2017-04-01

    Understanding the rainfall conditions which trigger mass movement on hillslopes is the key to forecasting rainfall-induced slope hazards, and the exact time of landslide occurrence is one of the basic pieces of information for rainfall statistics. In this study, we focused on large-scale landslides (LSLs) with disturbed areas larger than 10 ha and conducted a series of analyses including the recognition of landslide-induced ground motions and the analysis of different forms of rainfall thresholds. More than 10 heavy typhoons during the period 2005-2014 in Taiwan induced hundreds of LSLs and provided the opportunity to characterize the rainfall conditions which trigger LSLs. A total of 101 landslide-induced seismic signals were identified from the records of the Taiwan seismic network. These signals provided the occurrence times of landslides for assessing rainfall conditions. Rainfall analyses showed that LSLs occurred when cumulative rainfall exceeded 500 mm. The results of the rainfall-threshold analyses revealed that it is difficult to distinguish LSLs from small-scale landslides (SSLs) by the I-D and R-D methods, but the I-R method can achieve the discrimination. In addition, an enhanced three-factor threshold considering deep water content was proposed as the rainfall threshold for LSLs.

  1. Generation Process of Large-Amplitude Upper-Band Chorus Emissions Observed by Van Allen Probes

    DOE PAGES

    Kubota, Yuko; Omura, Yoshiharu; Kletzing, Craig; ...

    2018-04-19

    In this paper, we analyze large-amplitude upper-band chorus emissions measured near the magnetic equator by the Electric and Magnetic Field Instrument Suite and Integrated Science instrument package on board the Van Allen Probes. In setting up the parameters of source electrons exciting the emissions based on theoretical analyses and observational results measured by the Helium Oxygen Proton Electron instrument, we calculate threshold and optimum amplitudes with the nonlinear wave growth theory. We find that the optimum amplitude is larger than the threshold amplitude obtained in the frequency range of the chorus emissions and that the wave amplitudes grow between the threshold and optimum amplitudes. Finally, in the frame of the wave growth process, the nonlinear growth rates are much greater than the linear growth rates.

  3. Scaling laws for nanoFET sensors

    NASA Astrophysics Data System (ADS)

    Zhou, Fu-Shan; Wei, Qi-Huo

    2008-01-01

    The sensitive conductance change of semiconductor nanowires and carbon nanotubes in response to the binding of charged molecules provides a novel sensing modality which is generally denoted as nanoFET sensors. In this paper, we study the scaling laws of nanoplate FET sensors by simplifying nanoplates as random resistor networks with molecular receptors sitting on lattice sites. Nanowire/tube FETs are included as the limiting cases where the device width goes small. Computer simulations show that the field effect strength exerted by the binding molecules has significant impact on the scaling behaviors. When the field effect strength is small, nanoFETs have little size and shape dependence. In contrast, when the field effect strength becomes stronger, there exists a lower detection threshold for charge accumulation FETs and an upper detection threshold for charge depletion FET sensors. At these thresholds, the nanoFET devices undergo a transition between low and large sensitivities. These thresholds may set the detection limits of nanoFET sensors, while they could be eliminated by designing devices with very short source-drain distance and large width.

  4. Large-particle calcium hydroxylapatite injection for correction of facial wrinkles and depressions.

    PubMed

    Alam, Murad; Havey, Jillian; Pace, Natalie; Pongprutthipan, Marisa; Yoo, Simon

    2011-07-01

    Small-particle calcium hydroxylapatite (Radiesse, Merz, Frankfurt, Germany) is safe and effective for facial wrinkle reduction, and has medium-term persistence for this indication. There is patient demand for similar fillers that may be longer lasting. We sought to assess the safety and persistence of effect in vivo associated with use of large-particle calcium hydroxylapatite (Coaptite, Merz) for facial augmentation and wrinkle reduction. This was a case series of 3 patients injected with large-particle calcium hydroxylapatite. Large-particle calcium hydroxylapatite appears to be effective and well tolerated for correction of facial depressions, including upper mid-cheek atrophy, nasolabial creases, and HIV-associated lipoatrophy. Adverse events included erythema and edema, and transient visibility of the injection sites. Treated patients, all of whom had received small-particle calcium hydroxylapatite correction before, noted improved persistence at 6 and 15 months with the large-particle injections as compared with prior small-particle injections. This is a small case series, and there was no direct control to compare the persistence of small-particle versus large-particle correction. For facial wrinkle correction, large-particle calcium hydroxylapatite has a safety profile comparable with that of small-particle calcium hydroxylapatite. The large-particle variant may have longer persistence that may be useful in selected clinical circumstances. Copyright © 2010 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  5. Corrective response times in a coordinated eye-head-arm countermanding task.

    PubMed

    Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar

    2018-06-01

    Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.

  6. The Impact of Quality Assurance Assessment on Diffusion Tensor Imaging Outcomes in a Large-Scale Population-Based Cohort

    PubMed Central

    Roalf, David R.; Quarmley, Megan; Elliott, Mark A.; Satterthwaite, Theodore D.; Vandekar, Simon N.; Ruparel, Kosha; Gennatas, Efstathios D.; Calkins, Monica E.; Moore, Tyler M.; Hopson, Ryan; Prabhakaran, Karthik; Jackson, Chad T.; Verma, Ragini; Hakonarson, Hakon; Gur, Ruben C.; Gur, Raquel E.

    2015-01-01

    Background Diffusion tensor imaging (DTI) is applied in the investigation of brain biomarkers for neurodevelopmental and neurodegenerative disorders. However, the quality of DTI measurements, like that of other neuroimaging techniques, is susceptible to several confounding factors (e.g. motion, eddy currents), which have only recently come under scrutiny. These confounds are especially relevant in adolescent samples where data quality may be compromised in ways that confound interpretation of maturation parameters. The current study aims to leverage DTI data from the Philadelphia Neurodevelopmental Cohort (PNC), a sample of 1,601 youths aged 8–21 who underwent neuroimaging, to: 1) establish quality assurance (QA) metrics for the automatic identification of poor DTI image quality; 2) examine the performance of these QA measures in an external validation sample; 3) document the influence of data quality on developmental patterns of typical DTI metrics. Methods All diffusion-weighted images were acquired on the same scanner. Visual QA was performed on all subjects completing DTI; images were manually categorized as Poor, Good, or Excellent. Four image quality metrics were automatically computed and used to predict manual QA status: Mean voxel intensity outlier count (MEANVOX), Maximum voxel intensity outlier count (MAXVOX), mean relative motion (MOTION) and temporal signal-to-noise ratio (TSNR). Classification accuracy for each metric was calculated as the area under the receiver-operating characteristic curve (AUC). A threshold was generated for each measure that best differentiated visual QA status and was applied in a validation sample. The effects of data quality on sensitivity to expected age effects in this developmental sample were then investigated using the traditional MRI diffusion metrics: fractional anisotropy (FA) and mean diffusivity (MD). Finally, our method of QA is compared to DTIPrep. Results TSNR (AUC=0.94) best differentiated Poor data from Good and Excellent data. MAXVOX (AUC=0.88) best differentiated Good from Excellent DTI data. At the optimal threshold, 88% of Poor data and 91% of Good/Excellent data were correctly identified. Use of these thresholds on a validation dataset (n=374) indicated high accuracy. In the validation sample 83% of Poor data and 94% of Excellent data were identified using thresholds derived from the training sample. Both FA and MD were affected by the inclusion of poor data in an analysis of age, sex and race in a matched comparison sample. In addition, we show that the inclusion of poor data results in significant attenuation of the correlation between diffusion metrics (FA and MD) and age during a critical neurodevelopmental period. We find higher correspondence between our QA method and DTIPrep for Poor data, but we find our method to be more robust for apparently high-quality images. Conclusion Automated QA of DTI can facilitate large-scale, high-throughput quality assurance by reliably identifying both scanner- and subject-induced imaging artifacts. The results present a practical example of the confounding effects of artifacts on DTI analysis in a large population-based sample, and suggest that estimates of data quality should not only be reported but also accounted for in data analysis, especially in studies of development. PMID:26520775
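
    One common way to turn a continuous QA metric such as TSNR into a Poor-versus-Good/Excellent cutoff is to maximize Youden's J along the ROC curve; whether the study used exactly this criterion is not stated, so the sketch below, with synthetic labels and metric values, is illustrative only:

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(3)
        tsnr_poor = rng.normal(4.0, 1.0, 150)       # lower temporal SNR for Poor scans
        tsnr_good = rng.normal(7.0, 1.2, 850)
        metric = np.concatenate([tsnr_poor, tsnr_good])
        is_good = np.concatenate([np.zeros(150), np.ones(850)])   # 1 = Good/Excellent

        auc = roc_auc_score(is_good, metric)
        fpr, tpr, thresholds = roc_curve(is_good, metric)
        best = np.argmax(tpr - fpr)                 # Youden's J = sensitivity + specificity - 1
        print(f"AUC = {auc:.2f}, chosen TSNR cutoff = {thresholds[best]:.2f}")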

  7. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    USGS Publications Warehouse

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
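
    A hedged Monte Carlo sketch of the threshold-dependent power idea: tissue concentrations are drawn from a gamma distribution whose mean sits a fixed increment above a management threshold, and power is the fraction of simulated n-fish samples that reject "mean <= threshold". The one-sample t-test and the fixed coefficient of variation used here are illustrative assumptions, not the study's bootstrap procedure or its empirical mean-variance relationship:

        import numpy as np
        from scipy import stats

        def power(n_fish, threshold, delta, alpha=0.05, cv=0.4, n_sim=5000, seed=4):
            rng = np.random.default_rng(seed)
            mean = threshold + delta
            shape = 1.0 / cv**2                    # gamma with a fixed coefficient of variation
            scale = mean / shape
            rejections = 0
            for _ in range(n_sim):
                sample = rng.gamma(shape, scale, size=n_fish)
                t, p_two = stats.ttest_1samp(sample, popmean=threshold)
                if t > 0 and p_two / 2 < alpha:    # one-sided test of mean > threshold
                    rejections += 1
            return rejections / n_sim

        for thr in (4, 8):
            print(f"threshold {thr} mg Se/kg, n=8, delta=1: power = {power(8, thr, 1.0):.2f}")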

  8. Using Statistical Techniques and Web Search to Correct ESL Errors

    ERIC Educational Resources Information Center

    Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre

    2009-01-01

    In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…

  9. Directional Limits on Motion Transparency Assessed Through Colour-Motion Binding.

    PubMed

    Maloney, Ryan T; Clifford, Colin W G; Mareschal, Isabelle

    2018-03-01

    Motion-defined transparency is the perception of two or more distinct moving surfaces at the same retinal location. We explored the limits of motion transparency using superimposed surfaces of randomly positioned dots defined by differences in motion direction and colour. In one experiment, dots were red or green and we varied the proportion of dots of a single colour that moved in a single direction ('colour-motion coherence') and measured the threshold direction difference for discriminating between two directions. When colour-motion coherences were high (e.g., 90% of red dots moving in one direction), a smaller direction difference was required to correctly bind colour with direction than at low coherences. In another experiment, we varied the direction difference between the surfaces and measured the threshold colour-motion coherence required to discriminate between them. Generally, colour-motion coherence thresholds decreased with increasing direction differences, stabilising at direction differences around 45°. Different stimulus durations were compared, and thresholds were higher at the shortest (150 ms) compared with the longest (1,000 ms) duration. These results highlight different yet interrelated aspects of the task and the fundamental limits of the mechanisms involved: the resolution of narrowly separated directions in motion processing and the local sampling of dot colours from each surface.

  10. Photo-Double Ionization: Threshold Law and Low-Energy Behavior

    NASA Technical Reports Server (NTRS)

    Bhatia, Anand

    2008-01-01

    The threshold law for photoejection of two electrons from atoms (PDI) is derived from a modification of the Coulomb-dipole (C-D) theory. The C-D theory applies to two-electron ejection from negative ions (photo-double detachment:PDD). The modification consists of correctly accounting for the fact that in PDI that the two escaping electrons see a Coulomb field, asymptotically no matter what their relative distances from the residual ion are. We find in the contralinear spherically symmetric model that the analytic threshold law Q(E),i. e. the yield of residual ions, to be Qf(E)approaches E + CwE(sup gamma(w)) + CE(sup 5/4)sin[1/2 ln(E + theta)]/ln(E). The first and third terms are beyond the Wannier law. Our threshold law can only be rigorously justified for residual energies less than or equal to 10(exp -3) eV. Nevertheless in the present experimental range (0.1 - 4 eV), the form, even without the second term, can be fitted to experimental results of PDI for He, Li, and Be, in contrast to the Wannier law which has a larger deviation from the data for Li and Be, for both of which the data show signs of modulation.

  11. Objective definition of rainfall intensity-duration thresholds for post-fire flash floods and debris flows in the area burned by the Waldo Canyon fire, Colorado, USA

    USGS Publications Warehouse

    Staley, Dennis M.; Gartner, Joseph E.; Kean, Jason W.

    2015-01-01

    We present an objectively defined rainfall intensity-duration (I-D) threshold for the initiation of flash floods and debris flows for basins recently burned in the 2012 Waldo Canyon fire near Colorado Springs, Colorado, USA. Our results are based on 453 rainfall records which include 8 instances of hazardous flooding and debris flow from 10 July 2012 to 14 August 2013. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow or flood occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. The equation I = 11.6 D^(-0.7) represents the I-D threshold (I, in mm/h) for durations (D, in hours) ranging from 0.083 h (5 min) to 1 h for basins burned by the 2012 Waldo Canyon fire. As periods of high-intensity rainfall over short durations (less than 1 h) produced all of the debris flow and flood events, real-time monitoring of rainfall conditions will result in very short lead times for early-warning. Our results highlight the need for improved forecasting of the rainfall rates during short-duration, high-intensity convective rainfall events.
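
    Applying the quoted threshold is straightforward; a minimal sketch, with hypothetical peak-intensity observations:

        def id_threshold(duration_h):
            """Threshold rainfall intensity (mm/h), I = 11.6 * D**(-0.7), for a duration D in hours."""
            if not 0.083 <= duration_h <= 1.0:
                raise ValueError("threshold defined for durations of 5 min to 1 h")
            return 11.6 * duration_h ** -0.7

        observations = [(0.25, 36.0), (0.5, 12.0), (1.0, 14.0)]   # (duration h, intensity mm/h), hypothetical
        for D, I in observations:
            exceeds = I > id_threshold(D)
            print(f"D={D:g} h, I={I:g} mm/h, threshold={id_threshold(D):.1f} mm/h, alert={exceeds}")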

  12. Should high-power posing be integrated in physical therapy?

    PubMed

    Ge, Weiqing; Bennett, Teale K; Oller, Jeremy C

    2017-04-01

    [Purpose] Postural assessment and correction is a common approach in patient management to decrease symptoms and improve function for patients. The purpose of this study was to determine the effects of high-power posing on muscle strength and pain threshold. [Subjects and Methods] Thirty-one subjects, 16 females and 15 males, mean age 28.9 (SD 10.8) years old, were recruited through a convenience sampling on the university campus. The research design was a randomized controlled trial. In the experimental group, the subjects were instructed to stand in a high-power posture. In the control group, the subjects were instructed to stand in a low-power posture. Grip strength and pain threshold measurements were conducted before and after the postural intervention. [Results] The grip strength changed by -3.4 (-3.7, 0.3) % and 1.7 (-3.6, 5.3) % for the experimental and control groups, respectively. The pain threshold changed by 0.6 (-9.9, 10.4) % and 15.1 (-9.3, 24.4) % for the experimental and control groups, respectively. However, both changes were not significant as all the 95% CIs included 0. [Conclusions] The data did not show significant benefits of high-power posing in increasing grip strength and pain threshold compared to low-power posing.

  14. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial distribution of rainfall. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the choice of percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving more complicated computational processes, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of the DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further indicates that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
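
    As an illustration of the non-parametric percentile method discussed above, the EPT at a station can be taken as, for example, the 95th percentile of wet-day precipitation; the synthetic record and the 1 mm wet-day cutoff below are assumptions for illustration only:

        import numpy as np

        rng = np.random.default_rng(5)
        daily_precip = rng.gamma(shape=0.6, scale=12.0, size=30 * 365)   # one synthetic station, 30 years
        daily_precip[rng.random(daily_precip.size) < 0.6] = 0.0          # dry days

        wet_days = daily_precip[daily_precip >= 1.0]
        for q in (90, 95, 99):
            print(f"{q}th-percentile EPT: {np.percentile(wet_days, q):.1f} mm/day")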

  15. Computed radiography utilizing laser-stimulated luminescence: detectability of simulated low-contrast radiographic objects.

    PubMed

    Higashida, Y; Moribe, N; Hirata, Y; Morita, K; Doudanuki, S; Sonoda, Y; Katsuda, N; Hiai, Y; Misumi, W; Matsumoto, M

    1988-01-01

    Threshold contrasts of low-contrast objects in computed radiography (CR) images were compared with those of blue- and green-emitting screen-film systems by employing the 18-alternative forced choice (18-AFC) procedure. The dependence of the threshold contrast on the incident X-ray exposure and also on the object size was studied. The results indicated that the threshold contrasts of the CR system were comparable to those of the blue and green screen-film systems, decreased with increasing object size, and increased with decreasing incident X-ray exposure. The increase in threshold contrasts was small when the relative incident exposure decreased from 1 to 1/4, and was large when the incident exposure was decreased further.

  16. Two-Point Orientation Discrimination Versus the Traditional Two-Point Test for Tactile Spatial Acuity Assessment

    PubMed Central

    Tong, Jonathan; Mao, Oliver; Goldreich, Daniel

    2013-01-01

    Two-point discrimination is widely used to measure tactile spatial acuity. The validity of the two-point threshold as a spatial acuity measure rests on the assumption that two points can be distinguished from one only when the two points are sufficiently separated to evoke spatially distinguishable foci of neural activity. However, some previous research has challenged this view, suggesting instead that two-point task performance benefits from an unintended non-spatial cue, allowing spuriously good performance at small tip separations. We compared the traditional two-point task to an equally convenient alternative task in which participants attempt to discern the orientation (vertical or horizontal) of two points of contact. We used precision digital readout calipers to administer two-interval forced-choice versions of both tasks to 24 neurologically healthy adults, on the fingertip, finger base, palm, and forearm. We used Bayesian adaptive testing to estimate the participants’ psychometric functions on the two tasks. Traditional two-point performance remained significantly above chance levels even at zero point separation. In contrast, two-point orientation discrimination approached chance as point separation approached zero, as expected for a valid measure of tactile spatial acuity. Traditional two-point performance was so inflated at small point separations that 75%-correct thresholds could be determined on all tested sites for fewer than half of participants. The 95%-correct thresholds on the two tasks were similar, and correlated with receptive field spacing. In keeping with previous critiques, we conclude that the traditional two-point task provides an unintended non-spatial cue, resulting in spuriously good performance at small spatial separations. Unlike two-point discrimination, two-point orientation discrimination rigorously measures tactile spatial acuity. We recommend the use of two-point orientation discrimination for neurological assessment. PMID:24062677
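
    A minimal sketch of reading 75%- and 95%-correct thresholds off a fitted two-interval forced-choice psychometric function (lower asymptote 0.5); the logistic form and the data are illustrative assumptions, not the study's Bayesian adaptive procedure:

        import numpy as np
        from scipy.optimize import curve_fit, brentq

        def pf(x, alpha, beta):
            """2AFC psychometric function: 0.5 guess rate, logistic core."""
            return 0.5 + 0.5 / (1.0 + np.exp(-(x - alpha) / beta))

        sep = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])        # point separations (mm), hypothetical
        p_correct = np.array([0.52, 0.58, 0.71, 0.83, 0.92, 0.98])

        (alpha, beta), _ = curve_fit(pf, sep, p_correct, p0=(2.0, 1.0))
        for target in (0.75, 0.95):
            thr = brentq(lambda x: pf(x, alpha, beta) - target, 0.01, 20.0)
            print(f"{int(round(target * 100))}%-correct threshold ≈ {thr:.2f} mm")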

  17. The role of genetic variation of human metabolism for BMI, mental traits and mental disorders.

    PubMed

    Hebebrand, Johannes; Peters, Triinu; Schijven, Dick; Hebebrand, Moritz; Grasemann, Corinna; Winkler, Thomas W; Heid, Iris M; Antel, Jochen; Föcker, Manuel; Tegeler, Lisa; Brauner, Lena; Adan, Roger A H; Luykx, Jurjen J; Correll, Christoph U; König, Inke R; Hinney, Anke; Libuda, Lars

    2018-06-01

    The aim was to assess whether loci associated with metabolic traits also have a significant role in BMI and mental traits/disorders. METHODS: We first assessed the number of single nucleotide polymorphisms (SNPs) with genome-wide significance for human metabolism (NHGRI-EBI Catalog). These 516 SNPs (216 independent loci) were looked up in genome-wide association studies for association with body mass index (BMI) and the mental traits/disorders educational attainment, neuroticism, schizophrenia, well-being, anxiety, depressive symptoms, major depressive disorder, autism-spectrum disorder, attention-deficit/hyperactivity disorder, Alzheimer's disease, bipolar disorder, aggressive behavior, and internalizing problems. A strict significance threshold of p < 6.92 × 10^-6 was based on the correction for 516 SNPs and all 14 phenotypes; a second, less conservative threshold (p < 9.69 × 10^-5) on the correction for the 516 SNPs only. 19 SNPs located in nine independent loci revealed p-values < 6.92 × 10^-6; the less strict criterion was met by 41 SNPs in 24 independent loci. BMI and schizophrenia showed the most pronounced genetic overlap with human metabolism, with three loci each meeting the strict significance threshold. Overall, genetic variation associated with estimated glomerular filtration rate showed up frequently; single metabolite SNPs were associated with more than one phenotype. Replications in independent samples were obtained for BMI and educational attainment. Approximately 5-10% of the regions involved in the regulation of blood/urine metabolite levels seem to also play a role in BMI and mental traits/disorders and related phenotypes. If validated in metabolomic studies of the respective phenotypes, the associated blood/urine metabolites may enable novel preventive and therapeutic strategies. Copyright © 2018 The Authors. Published by Elsevier GmbH. All rights reserved.
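
    The two significance thresholds quoted above follow from a Bonferroni correction of alpha = 0.05 for 516 SNPs and 14 phenotypes, which is quick to verify:

        alpha = 0.05
        n_snps, n_phenotypes = 516, 14

        strict = alpha / (n_snps * n_phenotypes)   # correction for all SNP-phenotype tests
        lenient = alpha / n_snps                   # correction for the SNPs only
        print(f"strict threshold:  {strict:.3g}")  # ~6.92e-06
        print(f"lenient threshold: {lenient:.3g}") # ~9.69e-05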

  18. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ensslin, Torsten A.; Frommert, Mona

    2011-05-15

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum using five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
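
    All of the approaches above can be described or approximated as Wiener-filter operations with differing assumed spectra; a minimal per-mode sketch of that operation, with signal and noise drawn directly in harmonic space so that no transform normalization issues arise (the spectra and mode count are illustrative):

        import numpy as np

        rng = np.random.default_rng(6)
        k = np.arange(1, 257)
        S_k = 100.0 / k**2                       # assumed signal power spectrum
        N_k = np.full_like(S_k, 0.5)             # white noise power per mode

        def draw(var):                           # complex Gaussian mode with given variance
            return np.sqrt(var / 2) * (rng.normal(size=var.size) + 1j * rng.normal(size=var.size))

        s = draw(S_k)
        d = s + draw(N_k)                        # data = signal + noise
        s_hat = S_k / (S_k + N_k) * d            # Wiener filter, mode by mode

        mse_filtered = np.mean(np.abs(s_hat - s) ** 2)
        mse_raw = np.mean(np.abs(d - s) ** 2)
        print(f"mean squared error: raw data {mse_raw:.3f}, Wiener-filtered {mse_filtered:.3f}")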

  19. EAP recordings in ineraid patients--correlations with psychophysical measures and possible implications for patient fitting.

    PubMed

    Zimmerling, Martin J; Hochmair, Erwin S

    2002-04-01

    Objective measurements can be helpful for cochlear implant fitting of difficult populations, such as very young children. One method, the recording of the electrically evoked compound action potential (EAP), measures the nerve recruitment in the cochlea in response to stimulation through the implant. For coding strategies implemented at a moderate stimulation rate of 250 pps per channel, useful correlations between EAP data and psychophysical data have already been found. With new systems running at higher rates, it is important to check these correlations again. This study investigates the correlations between psychophysical data and EAP measures calculated from EAP amplitude growth functions. EAP data were recorded in 12 Ineraid subjects. Additionally, behavioral thresholds (THR) and maximum acceptable loudness levels (MAL) were determined for stimulation rates of 80 pps and 2,020 pps for each electrode. Useful correlations between EAP data and psychophysical data were found at the low stimulation rate (80 pps). However, at the higher stimulation rate (2,020 pps) correlations were not significant. They were improved substantially, however, by introducing a factor that corrected for disparities due to temporal integration. Incorporation of this factor, which controls for the influence of the stimulation rate on the threshold, improved the correlations between EAP measures recorded at 80 pps and psychophysical MALs measured at 2,020 pps to better than r = 0.70. EAP data as such can only be used to predict behavioral THRs or MCLs at low stimulation rates. To cope with temporal integration effects at higher stimulation rates, EAP data must be rate-corrected. The introduction of a threshold-rate factor is a promising way to achieve that goal. Further investigations need to be performed.

  20. Enumerative and binomial sequential sampling plans for the multicolored Asian lady beetle (Coleoptera: Coccinellidae) in wine grapes.

    PubMed

    Galvan, T L; Burkness, E C; Hutchison, W D

    2007-06-01

    To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
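
    The decision logic of such binomial sequential plans can be sketched as a Wald-type sequential probability ratio test built around an action threshold: clusters are inspected one at a time, counted as infested if they hold at least one adult, and sampling stops as soon as a treat or no-treat boundary is crossed. The parameter values and the Monte Carlo check below are illustrative assumptions, not the published plans.

      import numpy as np

      def sprt_decision(true_p, p0=0.07, p1=0.17, alpha=0.1, beta=0.1, rng=None, max_n=200):
          """Sample clusters one by one until Wald's boundaries call a decision."""
          rng = rng or np.random.default_rng()
          a = np.log((1 - beta) / alpha)         # upper (treat) boundary
          b = np.log(beta / (1 - alpha))         # lower (no-treat) boundary
          llr, n = 0.0, 0
          while b < llr < a and n < max_n:
              infested = rng.random() < true_p   # cluster infested with >= 1 adult?
              if infested:
                  llr += np.log(p1 / p0)
              else:
                  llr += np.log((1 - p1) / (1 - p0))
              n += 1
          return ("treat" if llr >= a else "no treat"), n

      rng = np.random.default_rng(6)
      results = [sprt_decision(true_p=0.25, rng=rng) for _ in range(1000)]
      asn = np.mean([n for _, n in results])
      correct = np.mean([d == "treat" for d, _ in results])
      print(f"ASN = {asn:.1f} clusters, P(correct decision) = {correct:.2f}")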

  1. Transdiaphragmatic pressure and neural respiratory drive measured during inspiratory muscle training in stable patients with chronic obstructive pulmonary disease.

    PubMed

    Wu, Weiliang; Zhang, Xianming; Lin, Lin; Ou, Yonger; Li, Xiaoying; Guan, Lili; Guo, Bingpeng; Zhou, Luqian; Chen, Rongchang

    2017-01-01

    Inspiratory muscle training (IMT) is a rehabilitation therapy for stable patients with COPD. However, its therapeutic effect remains undefined due to the unclear nature of diaphragmatic mobilization during IMT. Diaphragmatic mobilization, represented by transdiaphragmatic pressure (Pdi), and neural respiratory drive, expressed as the corrected root mean square (RMS) of the diaphragmatic electromyogram (EMGdi), both provide vital information for selecting the proper IMT device and loads in COPD, and therefore contribute to the curative effect of IMT. Pdi and RMS of EMGdi (RMSdi%) were measured and compared during inspiratory resistive training and threshold load training in stable patients with COPD. Pdi and neural respiratory drive were measured continuously during inspiratory resistive training and threshold load training in 12 stable patients with COPD (forced expiratory volume in 1 s, mean ± SD, 26.1% ± 10.2% predicted). Pdi was significantly higher during high-intensity threshold load training (91.46 ± 17.24 cmH2O) than during inspiratory resistive training (27.24 ± 6.13 cmH2O) in stable patients with COPD (P < 0.01). A significant difference was also found in RMSdi% between high-intensity threshold load training and inspiratory resistive training (69.98% ± 16.78% vs 17.26% ± 14.65%, P < 0.01). We conclude that threshold load training produces greater mobilization of Pdi and neural respiratory drive than inspiratory resistive training in stable patients with COPD.
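
    A paired comparison is the natural analysis for contrasts like the Pdi difference reported above. The sketch below is a hedged illustration with simulated per-patient values (the abstract's group means and SDs are used only to set plausible scales, and the within-patient correlation of a real paired design is ignored); it is not the study's data or analysis.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      n_patients = 12
      pdi_resistive = rng.normal(27.2, 6.1, n_patients)      # cmH2O, toy values
      pdi_threshold = rng.normal(91.5, 17.2, n_patients)     # cmH2O, toy values

      t_stat, p_value = stats.ttest_rel(pdi_threshold, pdi_resistive)
      print(f"mean difference = {np.mean(pdi_threshold - pdi_resistive):.1f} cmH2O, "
            f"t = {t_stat:.2f}, p = {p_value:.3g}")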

  2. Dynamic permeability in fault damage zones induced by repeated coseismic fracturing events

    NASA Astrophysics Data System (ADS)

    Aben, F. M.; Doan, M. L.; Mitchell, T. M.

    2017-12-01

    Off-fault fracture damage in upper crustal fault zones changes the fault zone properties and affects various co- and interseismic processes. One of these properties is the permeability of the fault damage zone rocks, which is generally higher than that of the surrounding host rock. This allows large-scale fluid flow through the fault zone that affects fault healing and promotes mineral transformation processes. Moreover, it might play an important role in thermal fluid pressurization during an earthquake rupture. The damage zone permeability is dynamic due to coseismic damaging. It is crucial for earthquake mechanics and for longer-term processes to understand what the dynamic permeability structure of a fault looks like and how it evolves with repeated earthquakes. To better detail coseismically induced permeability, we have performed uniaxial split Hopkinson pressure bar experiments on quartz-monzonite rock samples. Two sample sets were created and analyzed: single-loaded samples subjected to varying loading intensities - with damage varying from apparently intact to pulverized - and samples loaded at a constant intensity but with a varying number of repeated loadings. The first set resembles a dynamic permeability structure created by a single large earthquake. The second set resembles a permeability structure created by several earthquakes. Afterwards, the permeability and acoustic velocities were measured as a function of confining pressure. The permeability in both datasets shows a large and non-linear increase over several orders of magnitude (from 10^-20 up to 10^-14 m^2) with an increasing amount of fracture damage. This, combined with microstructural analyses of the varying degrees of damage, suggests a percolation threshold. The percolation threshold does not coincide with the pulverization threshold. With increasing confining pressure, the permeability might drop up to two orders of magnitude, which supports the possibility of large coseismic fluid pulses over relatively large distances along a fault. Also, a relatively small threshold could potentially increase permeability in a large volume of rock, given that previous earthquakes already damaged these rocks.

  3. Threshold quantum secret sharing based on single qubit

    NASA Astrophysics Data System (ADS)

    Lu, Changbin; Miao, Fuyou; Meng, Keju; Yu, Yue

    2018-03-01

    Based on a unitary phase-shift operation on a single qubit in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, the entangle-and-measure attack, and participant attacks such as the entanglement-swapping attack. Moreover, it is easier to realize physically and more practical in applications than related schemes. By the method in our scheme, new (t, n)-QSS schemes can easily be constructed using other classical (t, n) secret sharing schemes.
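
    The classical ingredient that the scheme reuses, Shamir's (t, n) secret sharing, fits in a few lines: the dealer hides the secret in the constant term of a random degree-(t-1) polynomial over a prime field, and any t shares recover it by Lagrange interpolation at x = 0. The sketch below shows only this classical part (the quantum phase-shift operations are not modeled), and the prime modulus is an arbitrary choice for the demo.

      import random

      P = 2**127 - 1                     # a large prime modulus (arbitrary choice for the demo)

      def make_shares(secret, t, n):
          # Random degree-(t-1) polynomial with the secret as constant term.
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          def f(x):
              return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
          return [(x, f(x)) for x in range(1, n + 1)]

      def reconstruct(shares):
          # Lagrange interpolation at x = 0 recovers the constant term.
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num, den = 1, 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * (-xj) % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, P - 2, P)) % P
          return secret

      shares = make_shares(secret=123456789, t=3, n=5)
      print(reconstruct(shares[:3]))     # any 3 of the 5 shares recover the secret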

  4. Extremal Optimization for estimation of the error threshold in topological subsystem codes at T = 0

    NASA Astrophysics Data System (ADS)

    Millán-Otoya, Jorge E.; Boettcher, Stefan

    2014-03-01

    Quantum decoherence is a problem that arises in implementations of quantum computing proposals. Topological subsystem codes (TSC) have been suggested as a way to overcome decoherence. These offer a higher optimal error tolerance when compared to typical error-correcting algorithms. A TSC has been translated into a planar Ising spin-glass with constrained bimodal three-spin couplings. This spin-glass has been considered at finite temperature to determine the phase boundary between the unstable phase and the stable phase, where error recovery is possible.[1] We approach the study of the error threshold problem by exploring ground states of this spin-glass with the Extremal Optimization algorithm (EO).[2] EO has proven to be an effective heuristic for exploring ground-state configurations of glassy spin systems.[3]
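
    The flavor of the EO heuristic can be conveyed with a minimal tau-EO sketch on a 1-D ±J Ising ring (an illustrative toy; the study's model has constrained three-spin couplings on a planar lattice): rank the spins by how poorly they satisfy their local bonds, pick a rank from a power-law distribution, flip that spin unconditionally, and keep track of the best energy seen.

      import numpy as np

      rng = np.random.default_rng(1)
      n, tau, steps = 64, 1.4, 20000
      J = rng.choice([-1, 1], size=n)           # J[i] couples spins i and (i+1) % n
      s = rng.choice([-1, 1], size=n)

      def energy(spins):
          return -int(np.sum(J * spins * np.roll(spins, -1)))

      ranks = np.arange(1, n + 1)
      p = ranks ** -tau                          # power-law rank-selection probabilities
      p /= p.sum()

      best_e = energy(s)
      for _ in range(steps):
          # Per-spin "fitness": how well each spin satisfies its two bonds.
          lam = s * (J * np.roll(s, -1) + np.roll(J, 1) * np.roll(s, 1))
          worst_first = np.argsort(lam)          # least-fit spins come first
          k = rng.choice(n, p=p)                 # pick a rank, biased toward the worst
          s[worst_first[k]] *= -1                # unconditionally flip the chosen spin
          best_e = min(best_e, energy(s))

      print("best energy found:", best_e)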

  5. Thinking individuation forward.

    PubMed

    Tresan, David

    2007-02-01

    This paper extends Jung's theory of individuation as faithfully elaborated by Joseph L. Henderson in his authoritative book Thresholds of Initiation. It addresses analyses that continue over very many years with analysands said to be individuated and proposes psychodynamics that explain and support such work that is otherwise beyond formal theoretical justification. In so doing, the paper addresses both Jung's and Henderson's refusal to explore the relevance of metaphysics for psychology and offers both a theoretical corrective for this shortcoming and a clinical illustration in support of an expanded point of view. In the course of the paper, personal material from interviews with Dr. Henderson, aged 101, serves as a substrate, both to support the above considerations and to shed new light on the development of his thought that led to Thresholds of Initiation.

  6. Permanent fine tuning of silicon microring devices by femtosecond laser surface amorphization and ablation.

    PubMed

    Bachman, Daniel; Chen, Zhijiang; Fedosejevs, Robert; Tsui, Ying Y; Van, Vien

    2013-05-06

    We demonstrate the fine tuning capability of femtosecond laser surface modification as a permanent trimming mechanism for silicon photonic components. Silicon microring resonators with a 15 µm radius were irradiated with single 400 nm wavelength laser pulses at varying fluences. Below the laser ablation threshold, surface amorphization of the crystalline silicon waveguides yielded a tuning rate of 20 ± 2 nm/(J cm^-2) with a minimum resonance wavelength shift of 0.10 nm. Above that threshold, ablation yielded a minimum resonance shift of -1.7 nm. There was some increase in waveguide loss for both trimming mechanisms. We also demonstrated the application of the method by using it to permanently correct the resonance mismatch of a second-order microring filter.

  7. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calibrate the camera correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points that have high curvature and lie at the junction of image regions of different brightness. Corner detection is therefore already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to set the gray-difference threshold adaptively. That makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results show the method to be feasible.
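
    A minimal SUSAN-style corner response with an adaptively chosen gray-difference threshold might look like the sketch below. Tying the threshold to a fraction of the image's gray-level standard deviation is an assumption made for illustration (the paper does not state its exact rule), and the wrap-around border handling via np.roll is a simplification.

      import numpy as np

      def susan_corners(img, radius=3, frac=0.1):
          """Return a corner-response map; `frac` sets the adaptive gray-difference
          threshold as a fraction of the image's gray-level standard deviation."""
          img = img.astype(float)
          t = max(frac * img.std(), 1e-6)       # adaptive gray-difference threshold
          yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          mask = (yy**2 + xx**2) <= radius**2   # circular USAN mask
          offsets = np.argwhere(mask) - radius
          usan = np.zeros_like(img)
          for dy, dx in offsets:
              shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
              usan += np.exp(-((shifted - img) / t) ** 6)   # soft similarity count
          g = 0.5 * mask.sum()                  # geometric threshold (half the mask area)
          return np.where(usan < g, g - usan, 0.0)          # corners have a small USAN area

      corner_map = susan_corners(np.random.rand(64, 64))    # toy usage on a random image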

  8. Legislating thresholds for drug trafficking: a policy development case study from New South Wales, Australia.

    PubMed

    Hughes, Caitlin Elizabeth; Ritter, Alison; Cowdery, Nicholas

    2014-09-01

    Legal thresholds are used in many parts of the world to define the quantity of illicit drugs over which possession is deemed "trafficking" as opposed to "possession for personal use". There is limited knowledge about why or how such laws were developed. In this study we analyse the policy processes underpinning the introduction and expansion of the drug trafficking legal threshold system in New South Wales (NSW), Australia. A critical legal and historical analysis was undertaken sourcing data from legislation, Parliamentary Hansard debates, government inquiries, police reports and research. A timeline of policy developments was constructed from 1970 until 2013 outlining key steps including threshold introduction (1970), expansion (1985), and wholesale revision (1988). We then critically analysed the drivers of each step and the roles played by formal policy actors, public opinion, research/data and the drug trafficking problem. We find evidence that while justified as a necessary tool for effective law enforcement of drug trafficking, their introduction largely preceded overt police calls for reform or actual increases in drug trafficking. Moreover, while the expansion from one to four thresholds had the intent of differentiating small from large scale traffickers, the quantities employed were based on government assumptions which led to "manifest problems" and the revision in 1988 of over 100 different quantities. Despite the revisions, there has remained no further formal review and new quantities for "legal highs" continue to be added based on assumption and an uncertain evidence-base. The development of legal thresholds for drug trafficking in NSW has been arbitrary and messy. That the arbitrariness persists from 1970 until the present day makes it hard to conclude the thresholds have been well designed. Our narrative provides a platform for future policy reform. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. State Correctional Education Programs. State Policy Update.

    ERIC Educational Resources Information Center

    Tolbert, Michelle

    Secure state correctional facilities currently house more than 1.8 million adults, and nearly 4.4 million adults fall under state-administered community corrections. A state's approach to corrections and the communication between the state correctional components can have a large impact on the state's correctional education program. Decentralized…

  10. Simulations of the modified gap experiment

    NASA Astrophysics Data System (ADS)

    Sutherland, Gerrit T.; Benjamin, Richard; Kooker, Douglas

    2017-01-01

    Modified gap experiment (test) hydrocode simulations predict the trends seen in experimental excess free surface velocity versus input pressure curves for explosives with both large and modest failure diameters. Simulations were conducted for explosive "A", an explosive with a large failure diameter, and for cast TNT, which has a modest failure diameter. Using the best available reactive rate models, the simulations predicted sustained ignition thresholds similar to experiment. This is a threshold where detonation is likely given a long enough run distance. For input pressures greater than the sustained ignition threshold pressure, the simulations predicted too little velocity for explosive "A" and too much velocity for TNT. It was found that a better comparison of experiment and simulation requires additional experimental data for both explosives. It was observed that the choice of reactive rate model for cast TNT can lead to large differences in the predicted modified gap experiment result. The cause of the difference is that the same data was not used to parameterize both models; one set of data was more shock reactive than the other.

  11. Multiple testing corrections in quantitative proteomics: A useful but blunt tool.

    PubMed

    Pascovici, Dana; Handler, David C L; Wu, Jemma X; Haynes, Paul A

    2016-09-01

    Multiple testing corrections are a useful tool for controlling the false discovery rate (FDR), but can be blunt in the context of low power, as we demonstrate by a series of simple simulations. Unfortunately, in proteomics experiments low power can be common, driven by proteomics-specific issues like small effects due to ratio compression, and few replicates due to high reagent cost, limited instrument time availability and other issues; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low-power, medium-scale situation, other methods such as effect-size considerations or peptide-level calculations may be a more effective option, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to the standard multiple testing correction methods, which should be employed as a useful tool but not be regarded as a required rubber stamp. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
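
    A small simulation in the spirit of the article's point: with three replicates and modest effect sizes, Benjamini-Hochberg at a conventional 5% FDR frequently rejects nothing even though hundreds of proteins truly change. The scenario parameters below are assumptions chosen for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_proteins, n_true, n_rep, effect = 2000, 200, 3, 0.5   # assumed scenario
      pvals = []
      for i in range(n_proteins):
          shift = effect if i < n_true else 0.0               # first 200 proteins truly change
          a = rng.normal(0.0, 1.0, n_rep)
          b = rng.normal(shift, 1.0, n_rep)
          pvals.append(stats.ttest_ind(a, b).pvalue)
      pvals = np.array(pvals)

      # Benjamini-Hochberg step-up procedure at FDR = 0.05.
      order = np.argsort(pvals)
      thresh = 0.05 * np.arange(1, n_proteins + 1) / n_proteins
      passed = pvals[order] <= thresh
      n_rejected = 0 if not passed.any() else passed.nonzero()[0].max() + 1
      print("true positives present:", n_true, "| BH rejections:", n_rejected)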

  12. Sma3s: a three-step modular annotator for large sequence datasets.

    PubMed

    Muñoz-Mérida, Antonio; Viguera, Enrique; Claros, M Gonzalo; Trelles, Oswaldo; Pérez-Pulido, Antonio J

    2014-08-01

    Automatic sequence annotation is an essential component of modern 'omics' studies, which aim to extract information from large collections of sequence data. Most existing tools use sequence homology to establish evolutionary relationships and assign putative functions to sequences. However, it can be difficult to define a similarity threshold that achieves sufficient coverage without sacrificing annotation quality. Defining the correct configuration is critical and can be challenging for non-specialist users. Thus, the development of robust automatic annotation techniques that generate high-quality annotations without needing expert knowledge would be very valuable for the research community. We present Sma3s, a tool for automatically annotating very large collections of biological sequences from any kind of gene library or genome. Sma3s is composed of three modules that progressively annotate query sequences using either: (i) very similar homologues, (ii) orthologous sequences or (iii) terms enriched in groups of homologous sequences. We trained the system using several random sets of known sequences, demonstrating average sensitivity and specificity values of ~85%. In conclusion, Sma3s is a versatile tool for high-throughput annotation of a wide variety of sequence datasets that outperforms the accuracy of other well-established annotation algorithms, and it can enrich existing database annotations and uncover previously hidden features. Importantly, Sma3s has already been used in the functional annotation of two published transcriptomes. © The Author 2014. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  13. Collectivization of anti-analog strength above charged particle thresholds

    NASA Astrophysics Data System (ADS)

    Okołowicz, J.; Płoszajczak, M.; Charity, R. J.; Sobotka, L. G.

    2018-04-01

    Ten years ago, highly excited states were found in 9Li and 10Be a few hundred keV above the proton decay threshold. These physical states are too low in energy to be the isospin-stretched configuration of the decay channel (the isobaric analog, or T>). However, these states can be understood by a continuum-cognizant shell model as strongly mixed states of lower isospin (T<), where the mixing is largely mediated by the open neutron channels but ushered in energy to be just above the proton threshold.

  14. Practical Weak-lensing Shear Measurement with Metacalibration

    DOE PAGES

    Sheldon, Erin S.; Huff, Eric M.

    2017-05-19

    We report that metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Finally, using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
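
    The response idea at the heart of metacalibration can be shown with a deliberately simplified toy: perturb a noisy ellipticity "measurement" by plus and minus a small known shear, form the finite-difference response R = (e+ - e-) / (2 dg), and divide the mean measured ellipticity by R. The linear estimator and the fixed multiplicative bias below are assumptions; the real method operates on images, deconvolving and re-convolving the point-spread function.

      import numpy as np

      def measure_ellipticity(e_true, gamma, noise):
          # Toy "measurement": true ellipticity sheared linearly, plus noise and a
          # multiplicative bias that the response calibration should remove.
          m_bias = -0.2
          return (1.0 + m_bias) * (e_true + gamma) + noise

      rng = np.random.default_rng(2)
      n, dg, true_shear = 100000, 0.01, 0.02
      e_true = rng.normal(0.0, 0.2, n)
      noise = rng.normal(0.0, 0.05, n)

      e_obs = measure_ellipticity(e_true, true_shear, noise)
      e_plus = measure_ellipticity(e_true, true_shear + dg, noise)
      e_minus = measure_ellipticity(e_true, true_shear - dg, noise)

      R = np.mean(e_plus - e_minus) / (2.0 * dg)   # mean shear response
      shear_est = np.mean(e_obs) / R               # response-corrected shear estimate
      print(f"uncorrected: {np.mean(e_obs):.4f}  corrected: {shear_est:.4f}  true: {true_shear}")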

  15. A technique for sequential segmental neuromuscular stimulation with closed loop feedback control.

    PubMed

    Zonnevijlle, Erik D H; Abadia, Gustavo Perez; Somia, Naveen N; Kon, Moshe; Barker, John H; Koenig, Steven; Ewert, D L; Stremel, Richard W

    2002-01-01

    In dynamic myoplasty, dysfunctional muscle is assisted or replaced with skeletal muscle from a donor site. Electrical stimulation is commonly used to train and animate the skeletal muscle to perform its new task. Due to simultaneous tetanic contractions of the entire myoplasty, muscles are deprived of perfusion and fatigue rapidly, causing long-term problems such as excessive scarring and muscle ischemia. Sequential stimulation contracts part of the muscle while other parts rest, thus significantly improving blood perfusion. However, the muscle still fatigues. In this article, we report a test of the feasibility of using closed-loop control to economize the contractions of the sequentially stimulated myoplasty. A simple stimulation algorithm was developed and tested on a sequentially stimulated neo-sphincter designed from a canine gracilis muscle. Pressure generated in the lumen of the myoplasty neo-sphincter was used as feedback to regulate the stimulation signal via three control parameters, thereby optimizing the performance of the myoplasty. Additionally, we investigated and compared the efficiency of amplitude and frequency modulation techniques. Closed-loop feedback enabled us to maintain target pressures within 10% deviation using amplitude modulation and optimized control parameters (correction frequency = 4 Hz, correction threshold = 4%, and transition time = 0.3 s). The large-scale stimulation/feedback setup was unfit for chronic experimentation, but can be used as a blueprint for a small-scale version to unveil the theoretical benefits of closed-loop control in chronic experimentation.
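
    The reported control parameters suggest a simple dead-band loop: sample the lumen pressure at the correction frequency and adjust the stimulation amplitude only when the relative error exceeds the correction threshold. The sketch below is a hedged toy with a first-order "muscle" response and made-up gains; it is not the authors' controller, and the transition-time parameter is not modeled.

      def run_controller(target_pressure=60.0, corr_freq=4.0, corr_thresh=0.04,
                         gain=0.05, duration=10.0):
          dt = 1.0 / corr_freq                 # correction interval (4 Hz -> 0.25 s)
          amplitude, pressure, t = 0.5, 0.0, 0.0
          while t < duration:
              # Toy first-order muscle response: pressure relaxes toward 100 * amplitude.
              pressure += 0.6 * (100.0 * amplitude - pressure) * dt
              error = (target_pressure - pressure) / target_pressure
              if abs(error) > corr_thresh:     # only correct outside the 4% dead band
                  amplitude = min(1.0, max(0.0, amplitude + gain * error))
              t += dt
          return pressure, amplitude

      print(run_controller())                  # final pressure settles near the target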

  16. Catatonia in inpatients with psychiatric disorders: A comparison of schizophrenia and mood disorders.

    PubMed

    Grover, Sandeep; Chakrabarti, Subho; Ghormode, Deepak; Agarwal, Munish; Sharma, Akhilesh; Avasthi, Ajit

    2015-10-30

    This study aimed to evaluate the symptom threshold for making the diagnosis of catatonia. The further objectives were (1) to study the factor solution of the Bush Francis Catatonia Rating Scale (BFCRS) and (2) to compare the prevalence and symptom profile of catatonia between patients with psychotic and mood disorders admitted to the psychiatry inpatient unit of a general hospital. 201 patients were screened for the presence of catatonia using the BFCRS. Using cluster analysis, discriminant analysis, ROC curves, and sensitivity and specificity analysis, the data suggested that a threshold of 3 symptoms was able to correctly categorize 89.4% of patients with catatonia and 100% of patients without catatonia. The prevalence of catatonia was 9.45%. There was no difference in the prevalence rate or symptom profile of catatonia between those with schizophrenia and those with mood disorders (i.e., unipolar depression and bipolar affective disorder). Factor analysis of the data yielded a two-factor solution, i.e., retarded and excited catatonia. To conclude, this study suggests that requiring the presence of 3 symptoms for the diagnosis of catatonia can correctly distinguish patients with and without catatonia, which is compatible with the recommendations of DSM-5. The prevalence of catatonia is almost equal in patients with schizophrenia and mood disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
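
    One common way to pick a symptom-count cutoff from labelled screening data is to maximize Youden's J (sensitivity + specificity - 1) over candidate thresholds, which is close in spirit to the ROC-based analysis described. The sketch below uses a tiny made-up data set and is not the study's procedure or data.

      import numpy as np

      def best_symptom_cutoff(symptom_counts, has_catatonia):
          counts = np.asarray(symptom_counts)
          labels = np.asarray(has_catatonia, dtype=bool)
          best = (None, -1.0)
          for cutoff in range(1, int(counts.max()) + 1):
              pred = counts >= cutoff
              sens = np.mean(pred[labels]) if labels.any() else 0.0
              spec = np.mean(~pred[~labels]) if (~labels).any() else 0.0
              j = sens + spec - 1.0
              if j > best[1]:
                  best = (cutoff, j)
          return best

      # Tiny made-up example: a cutoff of >= 3 symptoms separates the two groups here.
      counts = [0, 1, 1, 2, 2, 3, 4, 5, 6, 7]
      labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
      print(best_symptom_cutoff(counts, labels))   # -> (3, 1.0)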

  17. EDGE2D-EIRENE modelling of near SOL E_r: possible impact on the H-mode power threshold

    NASA Astrophysics Data System (ADS)

    Chankin, A. V.; Delabie, E.; Corrigan, G.; Harting, D.; Maggi, C. F.; Meyer, H.; Contributors, JET

    2017-04-01

    Recent EDGE2D-EIRENE simulations of JET plasmas showed a significant difference between radial electric field (E_r) profiles across the separatrix in two divertor configurations, with the outer strike point on the horizontal target (HT) and vertical target (VT) (Chankin et al 2016 Nucl. Mater. Energy, doi: 10.1016/j.nme.2016.10.004). Under conditions (input power, plasma density) where the HT plasma went into the H-mode, a large positive E_r spike in the near scrape-off layer (SOL) was seen in the code output, leading to a very large E × B shear across the separatrix over a narrow region of a fraction of a cm width. No such E_r feature was obtained in the code solution for the VT configuration, where the H-mode power threshold was found to be twice as high as in the HT configuration. It was hypothesised that the large E × B shear across the separatrix in the HT configuration could be responsible for the turbulence suppression leading to an earlier (at lower input power) L-H transition compared to the VT configuration. In the present work these ideas are extended to cover some other experimental observations on the H-mode power threshold variation with parameters which typically are not included in the multi-machine H-mode power threshold scalings, namely: ion mass dependence (isotope H-D-T exchange), dependence on the ion ∇B drift direction, and dependence on the wall material composition (ITER-like wall versus carbon wall in JET). In all these cases EDGE2D-EIRENE modelling shows larger positive E_r spikes in the near SOL under conditions where the H-mode power threshold is lower, at least in the HT configuration.

  18. The Uncertainty of Long-term Linear Trend in Global SST Due to Internal Variation

    NASA Astrophysics Data System (ADS)

    Lian, Tao

    2016-04-01

    In most parts of the global ocean, the magnitude of the long-term linear trend in sea surface temperature (SST) is much smaller than the amplitude of the local multi-scale internal variation. One can thus use the record of a specified period to arbitrarily determine the value and the sign of the long-term linear trend in regional SST, leading to controversial conclusions on how global SST has responded to global warming in recent history. Analyzing the linear trend coefficient estimated by the ordinary least-squares method indicates that the linear trend consists of two parts: one related to the long-term change, and the other related to the multi-scale internal variation. The sign of the long-term change can be correctly reproduced only when the magnitude of the linear trend coefficient is greater than a theoretical threshold that scales the influence of the multi-scale internal variation. Otherwise, the sign of the linear trend coefficient will depend on the phase of the internal variation, or in other words, on the period being used. An improved least-squares method is then proposed to reduce the theoretical threshold. When applying the new method to a global SST reconstruction from 1881 to 2013, we find that in a large part of the Pacific, the southern Indian Ocean, and the North Atlantic, the influence of the multi-scale internal variation on the sign of the linear trend coefficient cannot be excluded. Therefore, the resulting warming and/or cooling linear trends in these regions cannot be fully attributed to global warming.
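
    A simplified version of the sign-trustworthiness question can be posed with ordinary least squares: compare the fitted trend against a threshold built from the residual variability. Here the threshold is taken as twice the slope's standard error, which ignores autocorrelation in the internal variation and is only a rough stand-in for the paper's theoretical threshold; the series parameters are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      years = np.arange(1881, 2014)
      t = years - years.mean()
      true_trend = 0.005                           # degC per year, assumed for the toy
      internal = 0.4 * np.sin(2 * np.pi * years / 60.0) + rng.normal(0, 0.15, years.size)
      sst = true_trend * t + internal

      slope, intercept = np.polyfit(t, sst, 1)
      resid = sst - (slope * t + intercept)
      # Standard error of the OLS slope, used here as a simple threshold for the sign.
      se_slope = np.sqrt(np.sum(resid**2) / (t.size - 2) / np.sum(t**2))
      print(f"trend = {slope:.4f} degC/yr, threshold (2*SE) = {2*se_slope:.4f}")
      print("sign trustworthy:", abs(slope) > 2 * se_slope)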

  19. The effect of saccade metrics on the corollary discharge contribution to perceived eye location

    PubMed Central

    Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.

    2015-01-01

    Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and an estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
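
    Perceptual thresholds of this kind are commonly extracted by fitting a psychometric function to the proportion of "shifted forward" reports. The sketch below fits a cumulative Gaussian to made-up data and reads off the 84% point; it illustrates the general technique, not the study's analysis.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.stats import norm

      shifts = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])     # target shift in degrees
      p_forward = np.array([0.05, 0.12, 0.30, 0.50, 0.72, 0.90, 0.97])

      def psychometric(x, mu, sigma):
          return norm.cdf(x, loc=mu, scale=sigma)

      (mu, sigma), _ = curve_fit(psychometric, shifts, p_forward, p0=(0.0, 1.0))
      # Define the detection threshold as the shift giving 84% "forward" reports,
      # i.e. one sigma above the point of subjective equality.
      print(f"PSE = {mu:.2f} deg, threshold (84% point) = {mu + sigma:.2f} deg")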

  20. Observations on saliva osmolality during progressive dehydration and partial rehydration.

    PubMed

    Taylor, Nigel A S; van den Heuvel, Anne M J; Kerry, Pete; McGhee, Sheena; Peoples, Gregory E; Brown, Marc A; Patterson, Mark J

    2012-09-01

    A need exists to identify dehydrated individuals under stressful settings beyond the laboratory. A predictive index based on changes in saliva osmolality has been proposed, and its efficacy and sensitivity were appraised across mass (water) losses from 1 to 7%. Twelve euhydrated males [serum osmolality: 286.1 mOsm kg^-1 H2O (SD 4.3)] completed three exercise- and heat-induced dehydration trials (35.6°C, 56% relative humidity): 7% dehydration (6.15 h), 3% dehydration (with 60% fluid replacement: 2.37 h), and a repeat 7% dehydration (5.27 h). Expectorated saliva osmolality, measured at baseline and at each 1% mass change, was used to predict instantaneous hydration state relative to mass losses of 3 and 6%. Saliva osmolality increased linearly with dehydration, although its basal value and its rate of change varied among and within subjects across trials. Receiver operating characteristic curves indicated good predictive power for saliva osmolality when used with two single-threshold cutoffs to differentiate between hydrated and dehydrated individuals (area under the curve: 3% cutoff = 0.868, 6% cutoff = 0.831). However, when analysed using a double-threshold detection technique (3 and 6%), as might be used in a field-based monitor, <50% of the osmolality data could correctly identify individuals who exceeded 3% dehydration. Indeed, within the 3-6% dehydration range, its sensitivity was 64%, while beyond 6% dehydration, this fell to 42%. Therefore, while expectorated saliva osmolality tracked mass losses within individuals, its large intra- and inter-individual variability limited its predictive power and sensitivity, rendering its utility questionable within a universal dehydration monitor.
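
    The single-threshold ROC analysis can be illustrated with a rank-based AUC on synthetic osmolality values (the distributions below are assumptions, not the study's data); the AUC equals the probability that a dehydrated sample scores higher than a hydrated one.

      import numpy as np

      rng = np.random.default_rng(4)
      # Assumed toy distributions of saliva osmolality (mOsm/kg) for the two states.
      hydrated = rng.normal(65, 15, 300)
      dehydrated = rng.normal(100, 30, 300)

      scores = np.concatenate([hydrated, dehydrated])
      labels = np.concatenate([np.zeros(300), np.ones(300)])

      # Rank-based AUC (Mann-Whitney): probability a dehydrated sample outranks a hydrated one.
      order = np.argsort(scores)
      ranks = np.empty_like(scores)
      ranks[order] = np.arange(1, scores.size + 1)
      n_pos, n_neg = 300, 300
      auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
      print(f"AUC = {auc:.3f}")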
