Influence of surgical gloves on haptic perception thresholds.
Hatzfeld, Christian; Dorsch, Sarah; Neupert, Carsten; Kupnik, Mario
2018-02-01
Impairment of haptic perception by surgical gloves could reduce requirements on haptic systems for surgery. While grip forces and manipulation capabilities were not impaired in previous studies, no data is available for perception thresholds. Absolute and differential thresholds (20 dB above threshold) of 24 subjects were measured for frequencies of 25 and 250 Hz with a Ψ-method. Effects of wearing a surgical glove, moisture on the contact surface and subject's experience with gloves were incorporated in a full-factorial experimental design. Absolute thresholds of 12.8 dB and -29.6 dB (means for 25 and 250 Hz, respectively) and differential thresholds of -12.6 dB and -9.5 dB agree with previous studies. A relevant effect of the frequency on absolute thresholds was found. Comparisons of glove- and no-glove-conditions did not reveal a significant mean difference. Wearing a single surgical glove does not affect absolute and differential haptic perception thresholds. Copyright © 2017 John Wiley & Sons, Ltd.
A Purkinje shift in the spectral sensitivity of grey squirrels
Silver, Priscilla H.
1966-01-01
1. The light-adapted spectral sensitivity of the grey squirrel has been determined by an automated training method at a level about 6 log units above the squirrel's absolute threshold. 2. The maximum sensitivity is near 555 nm, under light-adapted conditions, compared with the dark-adapted maximum near 500 nm found by a similar method. 3. Neither the light-adapted nor the dark-adapted behavioural threshold agrees with electrophysiological findings using single flash techniques, but there is agreement with e.r.g. results obtained with sinusoidal stimuli. PMID:5972118
Threshold network of a financial market using the P-value of correlation coefficients
NASA Astrophysics Data System (ADS)
Ha, Gyeong-Gyun; Lee, Jae Woo; Nobi, Ashadun
2015-06-01
Threshold methods in financial networks are important tools for extracting information about the financial state of a market. Previously, absolute thresholds of correlation coefficients have been used; however, they bear no relation to the length of the time window. We assign a threshold value depending on the size of the time window by using the P-value concept of statistics. We construct a threshold network (TN) at the same threshold value for two different time window sizes in the Korean Composite Stock Price Index (KOSPI). We measure network properties, such as the edge density, clustering coefficient, assortativity coefficient, and modularity. We determine that a significant difference exists between the network properties of the two time windows at the same threshold, especially during crises. This implies that the market information depends on the length of the time window when constructing the TN. We apply the same technique to Standard and Poor's 500 (S&P500) and observe similar results.
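Illustrative sketch (not the authors' code): one way to turn the P-value idea above into a window-dependent correlation cutoff is to invert the t-test for a Pearson correlation, so that the same significance level yields a higher cutoff for a shorter window. The variable names, significance level, and toy data below are assumptions.

import numpy as np
from scipy import stats

def critical_correlation(T, alpha=0.01):
    # smallest |r| significant at level alpha for T observations (t-test, T-2 dof)
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=T - 2)
    return t_crit / np.sqrt(T - 2 + t_crit ** 2)

def threshold_network(returns, alpha=0.01):
    # returns: (T, N) array of log-returns for N stocks over a window of T days
    T, N = returns.shape
    r_c = critical_correlation(T, alpha)        # cutoff rises as the window shrinks
    C = np.corrcoef(returns, rowvar=False)
    A = (C >= r_c) & ~np.eye(N, dtype=bool)     # adjacency matrix of the threshold network
    edge_density = A.sum() / (N * (N - 1))
    return A, r_c, edge_density

rng = np.random.default_rng(0)
_, r_short, _ = threshold_network(rng.normal(size=(60, 50)))    # short window
_, r_long, _ = threshold_network(rng.normal(size=(250, 50)))    # long window
print(round(r_short, 3), round(r_long, 3))

With the significance level held fixed, the 60-day window demands a larger correlation than the 250-day window before an edge is drawn, which is the sense in which the threshold here depends on the length of the time window.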
Psychophysical Criteria for Visual Simulation Systems.
1980-05-01
definitive data were found to establish detection thresholds; therefore, this is one area where a psychophysical study was recommended. Differential size...The specific functional relationships needing quantification were the following: 1. The effect of Horizontal Aniseikonia on Target Detection and...Transition Technique 6. The Effects of Scene Complexity and Separation on the Detection of Scene Misalignment 7. Absolute Brightness Levels in
Absolute single-photoionization cross sections of Se2+: Experiment and theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macaluso, D. A.; Aguilar, A.; Kilcoyne, A. L. D.
2015-12-28
Absolute single-photoionization cross-section measurements for Se2+ ions were performed at the Advanced Light Source at Lawrence Berkeley National Laboratory using the merged-beams photo-ion technique. Measurements were made at a photon energy resolution of 24 ± 3 meV in the photon energy range 23.5-42.5 eV, spanning the ground state and low-lying metastable state ionization thresholds. To clearly resolve the resonant structure near the ground-state threshold, high-resolution measurements were made from 30.0 to 31.9 eV at a photon energy resolution of 6.7 ± 0.7 meV. Numerous resonance features observed in the experimental spectra are assigned and their energies and quantum defects tabulated. The high-resolution cross-section measurements are compared with large-scale, state-of-the-art theoretical cross-section calculations obtained from the Dirac Coulomb R-matrix method. Suitable agreement is obtained over the entire photon energy range investigated. In conclusion, these results are an experimental determination of the absolute photoionization cross section of doubly ionized selenium and include a detailed analysis of the photoionization resonance spectrum of this ion.
NASA Technical Reports Server (NTRS)
Storm, Mark E. (Inventor)
1994-01-01
A technique was developed which carefully retro-reflects precisely controlled amounts of light back into a laser system, thereby intentionally forcing the laser system components to oscillate in a new resonator called the parasitic oscillator. The parasitic oscillator uses the laser system to provide the gain and an external mirror is used to provide the output coupling of the new resonator. Any change of gain or loss inside the new resonator will directly change the lasing threshold of the parasitic oscillator. This change in threshold can be experimentally measured as a change in the absolute value of reflectivity, provided by the external mirror, necessary to achieve lasing in the parasitic oscillator. Discrepancies between experimental data and a parasitic oscillator model are direct evidence of optical misalignment or component performance problems. Any changes in the optical system can instantly be measured as a change in threshold for the parasitic oscillator. This technique also enables aligning the system for maximum parasitic suppression with the system fully operational.
Bikel, Shirley; Jacobo-Albavera, Leonor; Sánchez-Muñoz, Fausto; Cornejo-Granados, Fernanda; Canizales-Quinteros, Samuel; Soberón, Xavier; Sotelo-Mundo, Rogerio R.; del Río-Navarro, Blanca E.; Mendoza-Vargas, Alfredo; Sánchez, Filiberto
2017-01-01
Background In spite of the emergence of RNA sequencing (RNA-seq), microarrays remain in widespread use for gene expression analysis in the clinic. There are over 767,000 RNA microarrays from human samples in public repositories, which are an invaluable resource for biomedical research and personalized medicine. The absolute gene expression analysis allows the transcriptome profiling of all expressed genes under a specific biological condition without the need of a reference sample. However, the background fluorescence represents a challenge to determine the absolute gene expression in microarrays. Given that the Y chromosome is absent in female subjects, we used it as a new approach for absolute gene expression analysis in which the fluorescence of the Y chromosome genes of female subjects was used as the background fluorescence for all the probes in the microarray. This fluorescence was used to establish an absolute gene expression threshold, allowing the differentiation between expressed and non-expressed genes in microarrays. Methods We extracted the RNA from leukocyte samples of 16 children (nine males and seven females, ages 6–10 years). An Affymetrix Gene Chip Human Gene 1.0 ST Array was carried out for each sample and the fluorescence of 124 genes of the Y chromosome was used to calculate the absolute gene expression threshold. After that, several expressed and non-expressed genes according to our absolute gene expression threshold were compared against the expression obtained using real-time quantitative polymerase chain reaction (RT-qPCR). Results From the 124 genes of the Y chromosome, three genes (DDX3Y, TXLNG2P and EIF1AY) that displayed significant differences between sexes were used to calculate the absolute gene expression threshold. Using this threshold, we selected 13 expressed and non-expressed genes and confirmed their expression level by RT-qPCR. Then, we selected the top 5% most expressed genes and found that several KEGG pathways were significantly enriched. Interestingly, these pathways were related to the typical functions of leukocytes, such as antigen processing and presentation and natural killer cell mediated cytotoxicity. We also applied this method to obtain the absolute gene expression threshold in already published microarray data of liver cells, where the top 5% expressed genes showed an enrichment of typical KEGG pathways for liver cells. Our results suggest that the three selected genes of the Y chromosome can be used to calculate an absolute gene expression threshold, allowing a transcriptome profiling of microarray data without the need of an additional reference experiment. Discussion Our approach based on the establishment of a threshold for absolute gene expression analysis will allow a new way to analyze thousands of microarrays from public databases. This allows the study of different human diseases without the need of having additional samples for relative expression experiments. PMID:29230367
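Illustrative sketch of the thresholding idea only: the abstract does not give the exact statistic used to turn the female Y-chromosome fluorescence into a cutoff, so the mean plus two standard deviations rule, the array dimensions, and the index choices below are assumptions.

import numpy as np

def absolute_expression_threshold(expr, y_probe_idx, female_idx, k=2.0):
    # expr: (probes, samples) log2 fluorescence matrix.
    # y_probe_idx: indices of Y-chromosome probes used as background.
    # female_idx: indices of female samples (no Y chromosome present).
    background = expr[np.ix_(y_probe_idx, female_idx)]
    return background.mean() + k * background.std(ddof=1)

def call_expressed(expr, threshold):
    # Boolean matrix: True where a probe exceeds the background-derived threshold.
    return expr > threshold

# toy usage with simulated data: 1000 probes, 16 samples, females in columns 9-15
rng = np.random.default_rng(1)
expr = rng.normal(6.0, 1.5, size=(1000, 16))
thr = absolute_expression_threshold(expr, y_probe_idx=np.arange(0, 124),
                                    female_idx=np.arange(9, 16))
print(round(thr, 2), call_expressed(expr, thr).mean())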
Goldman, D; Kohn, P M; Hunt, R W
1983-08-01
The following measures were obtained from 42 student volunteers: the General and the Disinhibition subscales of the Sensation Seeking Scale (Form IV), the Reducer-Augmenter Scale, and the Absolute Auditory Threshold. General sensation seeking correlated significantly with the Reducer-Augmenter Scale, r(40) = .59, p < .001, and the Absolute Auditory Threshold, r(40) = .45, p < .005. Both results proved general across sex. These findings, that high-sensation seekers tend to be reducers and to lack sensitivity to weak stimulation, were interpreted as supporting strength-of-the-nervous-system theory more than the formulation of Zuckerman and his associates.
Peripheral absolute threshold spectral sensitivity in retinitis pigmentosa.
Massof, R W; Johnson, M A; Finkelstein, D
1981-01-01
Dark-adapted spectral sensitivities were measured in the peripheral retinas of 38 patients diagnosed as having typical retinitis pigmentosa (RP) and in 3 normal volunteers. The patients included those having autosomal dominant and autosomal recessive inheritance patterns. Results were analysed by comparisons with the CIE standard scotopic spectral visibility function and with Judd's modification of the photopic spectral visibility function, with consideration of contributions from changes in spectral transmission of preretinal media. The data show 3 general patterns. One group of patients had absolute threshold spectral sensitivities that were fit by Judd's photopic visibility curve. Absolute threshold spectral sensitivities for a second group of patients were fit by a normal scotopic spectral visibility curve. The third group of patients had absolute threshold spectral sensitivities that were fit by a combination of scotopic and photopic spectral visibility curves. The autosomal dominant and autosomal recessive modes of inheritance were represented in each group of patients. These data indicate that RP patients have normal rod and/or cone spectral sensitivities, and support the subclassification of patients described previously by Massof and Finkelstein. PMID:7459312
Reardon, Cillian; Tobin, Daniel P.; Delahunt, Eamonn
2015-01-01
A number of studies have used GPS technology to categorise rugby union locomotive demands. However, the utility of the results of these studies is confounded by small sample sizes, sub-elite player status and the global application of absolute speed thresholds to all player positions. Furthermore, many of these studies have used GPS units with low sampling frequencies. The aim of the present study was to compare and contrast the high speed running (HSR) demands of professional rugby union when utilizing micro-technology units sampling at 10 Hz and applying relative or individualised speed zones. The results of this study indicate that application of individualised speed zones results in a significant shift in the interpretation of the HSR demands of both forwards and backs and positional sub-categories therein. When considering the use of an absolute in comparison to an individualised HSR threshold, there was a significant underestimation for forwards of HSR distance (HSRD) (absolute = 269 ± 172.02, individualised = 354.72 ± 99.22, p < 0.001), HSR% (absolute = 5.15 ± 3.18, individualised = 7.06 ± 2.48, p < 0.001) and HSR efforts (HSRE) (absolute = 18.81 ± 12.25; individualised = 24.78 ± 8.30, p < 0.001). In contrast, there was a significant overestimation of the same HSR metrics for backs with the use of an absolute threshold (HSRD absolute = 697.79 ± 198.11, individualised = 570.02 ± 171.14, p < 0.001; HSR% absolute = 10.85 ± 2.82, individualised = 8.95 ± 2.76, p < 0.001; HSRE absolute = 41.55 ± 11.25; individualised = 34.54 ± 9.24, p < 0.001). This under- or overestimation associated with an absolute speed zone applies to varying degrees across the ten positional sub-categories analyzed and also to individuals within the same positional sub-category. The results of the present study indicated that although use of an individualised HSR threshold improves the interpretation of the HSR demands on a positional basis, inter-individual variability in maximum velocity within positional sub-categories means that players need to be considered on an individual basis to accurately gauge the HSR demands of rugby union. PMID:26208315
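Illustrative sketch of the two thresholding schemes being compared; the 5 m/s absolute cutoff, the 60%-of-maximal-velocity individualised cutoff, and the simulated speed trace are assumptions, not values taken from the study.

import numpy as np

def hsr_metrics(speed, dt, threshold):
    # speed: 10 Hz GPS speed trace (m/s); dt: sample interval (s).
    # Returns HSR distance (m), HSR percentage of total distance, and HSR efforts
    # (contiguous runs above the threshold).
    above = speed >= threshold
    hsr_dist = float(np.sum(speed[above]) * dt)
    total_dist = float(np.sum(speed) * dt)
    efforts = int(np.sum(np.diff(above.astype(int)) == 1) + (1 if above[0] else 0))
    return hsr_dist, 100.0 * hsr_dist / total_dist, efforts

rng = np.random.default_rng(2)
speed = np.clip(rng.gamma(2.0, 1.2, size=10 * 60 * 80), 0, 9.5)  # ~80 min at 10 Hz
v_max = speed.max()
print("absolute:", hsr_metrics(speed, 0.1, 5.0))
print("individualised:", hsr_metrics(speed, 0.1, 0.6 * v_max))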
Ye, Ying; Griffin, Michael J
2016-04-01
This study investigated whether the reductions in finger blood flow induced by 125-Hz vibration applied to different locations on the hand depend on thresholds for perceiving vibration at these locations. Subjects attended three sessions during which vibration was applied to the right index finger, the right thenar eminence, or the left thenar eminence. Absolute thresholds for perceiving vibration at these locations were determined. Finger blood flow in the middle finger of both hands was then measured at 30-s intervals during five successive 5-min periods: (i) pre-exposure, (ii) pre-exposure with 2-N force, (iii) 2-N force with vibration, (iv) post-exposure with 2-N force, (v) recovery. During period (iii), vibration was applied at 15 dB above the absolute threshold for perceiving vibration at the right thenar eminence. Vibration at all three locations reduced finger blood flow on the exposed and unexposed hand, with greater reductions when vibrating the finger. Vibration-induced vasoconstriction was greatest for individuals with low thresholds and locations of excitation with low thresholds. Differences in vasoconstriction between subjects and between locations are consistent with the Pacinian channel mediating both absolute thresholds and vibration-induced vasoconstriction.
NASA Astrophysics Data System (ADS)
Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don
2003-10-01
While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, the older subjects have more difficulty than the younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than an absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics used to measure speech intelligibility, including the Speech Intelligibility Index (SII), consider only an absolute threshold when accounting for age-related hearing loss. Therefore these metrics tend to overestimate the performance for elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of the automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age-related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]
Loziuk, Philip L.; Sederoff, Ronald R.; Chiang, Vincent L.; Muddiman, David C.
2014-01-01
Quantitative mass spectrometry has become central to the field of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation has been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3 year period, suggesting that these values should be assessed as close as possible to the time at which data is collected for quantification. PMID:25154770
Absolute versus convective helical magnetorotational instability in a Taylor-Couette flow.
Priede, Jānis; Gerbeth, Gunter
2009-04-01
We analyze numerically the magnetorotational instability of a Taylor-Couette flow in a helical magnetic field [helical magnetorotational instability (HMRI)] using the inductionless approximation defined by a zero magnetic Prandtl number (Pr_{m}=0) . The Chebyshev collocation method is used to calculate the eigenvalue spectrum for small-amplitude perturbations. First, we carry out a detailed conventional linear stability analysis with respect to perturbations in the form of Fourier modes that corresponds to the convective instability which is not in general self-sustained. The helical magnetic field is found to extend the instability to a relatively narrow range beyond its purely hydrodynamic limit defined by the Rayleigh line. There is not only a lower critical threshold at which HMRI appears but also an upper one at which it disappears again. The latter distinguishes the HMRI from a magnetically modified Taylor vortex flow. Second, we find an absolute instability threshold as well. In the hydrodynamically unstable regime before the Rayleigh line, the threshold of absolute instability is just slightly above the convective one although the critical wavelength of the former is noticeably shorter than that of the latter. Beyond the Rayleigh line the lower threshold of absolute instability rises significantly above the corresponding convective one while the upper one descends significantly below its convective counterpart. As a result, the extension of the absolute HMRI beyond the Rayleigh line is considerably shorter than that of the convective instability. The absolute HMRI is supposed to be self-sustained and, thus, experimentally observable without any external excitation in a system of sufficiently large axial extension.
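For reference, the convective/absolute distinction used above can be stated with the standard Briggs-Bers pinch-point criterion (given here in generic form, not taken from this paper's equations): the flow is unstable if some real wavenumber k has Im ω(k) > 0, and the instability is absolute rather than convective if the growth rate remains positive at the saddle point k_0 of the complex dispersion relation,

\left.\frac{\partial \omega}{\partial k}\right|_{k_0} = 0, \qquad \operatorname{Im}\,\omega(k_0) > 0,

otherwise a growing wave packet is advected away from its source and the instability is only convective.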
NASA Astrophysics Data System (ADS)
Shields, C. A.; Ullrich, P. A.; Rutz, J. J.; Wehner, M. F.; Ralph, M.; Ruby, L.
2017-12-01
Atmospheric rivers (ARs) are long, narrow filamentary structures that transport large amounts of moisture in the lower layers of the atmosphere, typically from subtropical regions to mid-latitudes. ARs play an important role in regional hydroclimate by supplying significant amounts of precipitation that can alleviate drought, or in extreme cases, produce dangerous floods. Accurately detecting, or tracking, ARs is important not only for weather forecasting, but is also necessary to understand how these events may change under global warming. Detection algorithms are applied on both regional and global scales, most accurately with high-resolution datasets or model output. Different detection algorithms can produce different answers. Detection algorithms found in the current literature fall broadly into two categories: "time-stitching", where the AR is tracked with a Lagrangian approach through time and space; and "counting", where ARs are identified for a single point in time for a single location. Counting routines can be further subdivided into algorithms that use absolute thresholds with specific geometry, algorithms that use relative thresholds, algorithms based on statistics, and pattern recognition and machine learning techniques. With such a large diversity in detection code, differences in AR tracking and "counts" can vary widely from technique to technique. Uncertainty increases for future climate scenarios, where the difference between relative and absolute thresholding produces vastly different counts, simply due to the moister background state in a warmer world. In an effort to quantify the uncertainty associated with tracking algorithms, the AR detection community has come together to participate in ARTMIP, the Atmospheric River Tracking Method Intercomparison Project. Each participant will provide AR metrics to the greater group by applying their code to a common reanalysis dataset. MERRA2 data was chosen for both temporal and spatial resolution. After completion of this first phase, Tier 1, ARTMIP participants may choose to contribute to Tier 2, which will range from reanalysis uncertainty, to analysis of future climate scenarios from high resolution model output. ARTMIP's experimental design, techniques, and preliminary metrics will be presented.
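Toy sketch of the thresholding choice discussed above, applied to an integrated vapour transport (IVT) field; the 250 kg m⁻¹ s⁻¹ absolute cutoff and the 85th-percentile relative cutoff are commonly quoted illustrative values, not ARTMIP prescriptions, and the random field stands in for reanalysis data.

import numpy as np

def ar_mask_absolute(ivt, threshold=250.0):
    # flag grid cells whose instantaneous IVT exceeds a fixed magnitude
    return ivt >= threshold

def ar_mask_relative(ivt, ivt_climatology, percentile=85.0):
    # flag cells exceeding a grid-point percentile of the climatological IVT, so the
    # cutoff rises automatically in a moister (e.g., warmer) background state
    local_threshold = np.percentile(ivt_climatology, percentile, axis=0)
    return ivt >= local_threshold

rng = np.random.default_rng(3)
climatology = rng.gamma(3.0, 60.0, size=(365, 45, 90))   # daily IVT on a coarse grid
today = climatology[-1]
print(ar_mask_absolute(today).mean(), ar_mask_relative(today, climatology).mean())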
Werner-Wasik, Maria; Nelson, Arden D; Choi, Walter; Arai, Yoshio; Faulhaber, Peter F; Kang, Patrick; Almeida, Fabio D; Xiao, Ying; Ohri, Nitin; Brockway, Kristin D; Piper, Jonathan W; Nelson, Aaron S
2012-03-01
To evaluate the accuracy and consistency of a gradient-based positron emission tomography (PET) segmentation method, GRADIENT, compared with manual (MANUAL) and constant threshold (THRESHOLD) methods. Contouring accuracy was evaluated with sphere phantoms and clinically realistic Monte Carlo PET phantoms of the thorax. The sphere phantoms were 10-37 mm in diameter and were acquired at five institutions emulating clinical conditions. One institution also acquired a sphere phantom with multiple source-to-background ratios of 2:1, 5:1, 10:1, 20:1, and 70:1. One observer segmented (contoured) each sphere with GRADIENT and THRESHOLD from 25% to 50% at 5% increments. Subsequently, seven physicians segmented 31 lesions (7-264 mL) from 25 digital thorax phantoms using GRADIENT, THRESHOLD, and MANUAL. For spheres <20 mm in diameter, GRADIENT was the most accurate with a mean absolute % error in diameter of 8.15% (10.2% SD) compared with 49.2% (51.1% SD) for 45% THRESHOLD (p < 0.005). For larger spheres, the methods were statistically equivalent. For varying source-to-background ratios, GRADIENT was the most accurate for spheres >20 mm (p < 0.065) and <20 mm (p < 0.015). For digital thorax phantoms, GRADIENT was the most accurate (p < 0.01), with a mean absolute % error in volume of 10.99% (11.9% SD), followed by 25% THRESHOLD at 17.5% (29.4% SD), and MANUAL at 19.5% (17.2% SD). GRADIENT had the least systematic bias, with a mean % error in volume of -0.05% (16.2% SD) compared with 25% THRESHOLD at -2.1% (34.2% SD) and MANUAL at -16.3% (20.2% SD; p value <0.01). Interobserver variability was reduced using GRADIENT compared with both 25% THRESHOLD and MANUAL (p value <0.01, Levene's test). GRADIENT was the most accurate and consistent technique for target volume contouring. GRADIENT was also the most robust for varying imaging conditions. GRADIENT has the potential to play an important role for tumor delineation in radiation therapy planning and response assessment. Copyright © 2012. Published by Elsevier Inc.
Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O
2017-06-01
In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute threshold between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio: 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patient with a high risk of recurrence and who may benefit from treatment intensification.
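Illustrative sketch of how an absolute or relative SUV cutoff yields the metabolic tumor volume (MTV) and total lesion glycolysis (TLG = SUVmean × MTV) compared in the study; the voxel volume, cutoff values, and simulated lesion are assumptions.

import numpy as np

def mtv_tlg(suv, voxel_ml, absolute=None, relative=None):
    # suv: 3-D SUV array restricted to the lesion region.
    # absolute: SUV cutoff in g/mL (e.g. 5.5-7), or
    # relative: fraction of SUVmax (e.g. 0.40-0.68). Exactly one must be given.
    if (absolute is None) == (relative is None):
        raise ValueError("give exactly one of absolute/relative")
    cutoff = absolute if absolute is not None else relative * suv.max()
    mask = suv >= cutoff
    mtv = mask.sum() * voxel_ml                             # mL
    tlg = suv[mask].mean() * mtv if mask.any() else 0.0     # SUVmean * MTV
    return mtv, tlg

rng = np.random.default_rng(4)
lesion = rng.gamma(2.0, 2.5, size=(40, 40, 40))
print(mtv_tlg(lesion, voxel_ml=0.064, absolute=6.0))
print(mtv_tlg(lesion, voxel_ml=0.064, relative=0.51))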
A standardized model for predicting flap failure using indocyanine green dye
NASA Astrophysics Data System (ADS)
Zimmermann, Terence M.; Moore, Lindsay S.; Warram, Jason M.; Greene, Benjamin J.; Nakhmani, Arie; Korb, Melissa L.; Rosenthal, Eben L.
2016-03-01
Techniques that provide a non-invasive method for evaluation of intraoperative skin flap perfusion are currently available but underutilized. We hypothesize that intraoperative vascular imaging can be used to reliably assess skin flap perfusion and elucidate areas of future necrosis by means of a standardized critical perfusion threshold. Five animal groups (negative controls, n=4; positive controls, n=5; chemotherapy group, n=5; radiation group, n=5; chemoradiation group, n=5) underwent pre-flap treatments two weeks prior to undergoing random pattern dorsal fasciocutaneous flaps with a length to width ratio of 2:1 (3 × 1.5 cm). Flap perfusion was assessed via laser-assisted indocyanine green dye angiography and compared to standard clinical assessment for predictive accuracy of flap necrosis. For estimating flap failure, clinical prediction achieved a sensitivity of 79.3% and a specificity of 90.5%. When average flap perfusion was more than three standard deviations below the average flap perfusion for the negative control group at the time of the flap procedure (144.3 ± 17.05 absolute perfusion units), laser-assisted indocyanine green dye angiography achieved a sensitivity of 81.1% and a specificity of 97.3%. When absolute perfusion units were seven standard deviations below the average flap perfusion for the negative control group, specificity of necrosis prediction was 100%. Quantitative absolute perfusion units can improve specificity for intraoperative prediction of viable tissue. Using this strategy, a positive predictive threshold of flap failure can be standardized for clinical use.
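Minimal sketch of the standardized threshold described above: a flap is flagged when its mean perfusion falls more than k standard deviations below the negative-control mean (k = 3 or k = 7 in the abstract); the control values and function names are illustrative.

import numpy as np

def perfusion_threshold(control_means, k):
    # control_means: per-animal mean absolute perfusion units of negative controls
    return float(np.mean(control_means) - k * np.std(control_means, ddof=1))

def predict_failure(flap_perfusion, control_means, k=3):
    return flap_perfusion < perfusion_threshold(control_means, k)

controls = np.array([125.0, 138.0, 152.0, 162.0])   # toy values near 144.3 ± 17.05
print(round(perfusion_threshold(controls, 3), 1), predict_failure(100.0, controls))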
Autoshaping as a psychophysical paradigm: Absolute visual sensitivity in the pigeon
Passe, Dennis H.
1981-01-01
A classical conditioning procedure (autoshaping) was used to determine absolute visual threshold in the pigeon. This method provides the basis for a standardized visual psychophysical paradigm. PMID:16812228
Hemispheric Lateralization of Motor Thresholds in Relation to Stuttering
Alm, Per A.; Karlsson, Ragnhild; Sundberg, Madeleine; Axelson, Hans W.
2013-01-01
Stuttering is a complex speech disorder. Previous studies indicate a tendency towards elevated motor threshold for the left hemisphere, as measured using transcranial magnetic stimulation (TMS). This may reflect a monohemispheric motor system impairment. The purpose of the study was to investigate the relative side-to-side difference (asymmetry) and the absolute levels of motor threshold for the hand area, using TMS in adults who stutter (n = 15) and in controls (n = 15). In accordance with the hypothesis, the groups differed significantly regarding the relative side-to-side difference of finger motor threshold (p = 0.0026), with the stuttering group showing higher motor threshold of the left hemisphere in relation to the right. Also the absolute level of the finger motor threshold for the left hemisphere differed between the groups (p = 0.049). The obtained results, together with previous investigations, provide support for the hypothesis that stuttering tends to be related to left hemisphere motor impairment, and possibly to a dysfunctional state of bilateral speech motor control. PMID:24146930
The absolute threshold of cone vision
Koenig, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
The absolute disparity anomaly and the mechanism of relative disparities.
Chopin, Adrien; Levi, Dennis; Knill, David; Bavelier, Daphne
2016-06-01
There has been a long-standing debate about the mechanisms underlying the perception of stereoscopic depth and the computation of the relative disparities that it relies on. Relative disparities between visual objects could be computed in two ways: (a) using the difference in the object's absolute disparities (Hypothesis 1) or (b) using relative disparities based on the differences in the monocular separations between objects (Hypothesis 2). To differentiate between these hypotheses, we measured stereoscopic discrimination thresholds for lines with different absolute and relative disparities. Participants were asked to judge the depth of two lines presented at the same distance from the fixation plane (absolute disparity) or the depth between two lines presented at different distances (relative disparity). We used a single stimulus method involving a unique memory component for both conditions, and no extraneous references were available. We also measured vergence noise using Nonius lines. Stereo thresholds were substantially worse for absolute disparities than for relative disparities, and the difference could not be explained by vergence noise. We attribute this difference to an absence of conscious readout of absolute disparities, termed the absolute disparity anomaly. We further show that the pattern of correlations between vergence noise and absolute and relative disparity acuities can be explained jointly by the existence of the absolute disparity anomaly and by the assumption that relative disparity information is computed from absolute disparities (Hypothesis 1).
Human sensitivity to vertical self-motion.
Nesti, Alessandro; Barnett-Cowan, Michael; Macneilage, Paul R; Bülthoff, Heinrich H
2014-01-01
Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done in investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity for 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s². Overall vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws' exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber's Law in that thresholds increase less than expected at high stimulus intensity. We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling.
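Illustrative sketch of fitting the reported power-law dependence of the differential threshold on stimulus intensity, Δ = a·I^b, where b = 1 would correspond to Weber's law and the reported exponents of 0.60 and 0.42 indicate sub-Weber growth; the data points below are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def power_law(I, a, b):
    return a * np.power(I, b)

intensity = np.array([0.5, 1.0, 1.5, 2.0])          # peak acceleration, m/s^2
diff_threshold = np.array([0.20, 0.31, 0.38, 0.45]) # made-up differential thresholds
(a, b), _ = curve_fit(power_law, intensity, diff_threshold, p0=(0.3, 0.5))
print(f"a = {a:.2f}, exponent b = {b:.2f}")          # b < 1 -> thresholds grow sub-Weber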
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macaluso, D. A.; Bogolub, K.; Johnson, A.
Absolute single photoionization cross-section measurements of Rb2+ ions were performed at the Advanced Light Source at Lawrence Berkeley National Laboratory using synchrotron radiation and the photo-ion, merged-beams technique. Measurements were made at a photon energy resolution of 13.5 ± 2.5 meV from 37.31 to 44.08 eV spanning the ²P ground state and ²P metastable state ionization thresholds. Multiple autoionizing resonance series arising from each initial state are identified using quantum defect theory. The measurements are compared to Breit-Pauli R-matrix calculations with excellent agreement between theory and experiment.
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
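Generic sketch of minimizing a regularized quadratic criterion for stimulus recovery; this is a simplified Tikhonov/ridge stand-in for the paper's RKHS formulation, with the kernel, the measurement matrix, and all parameters chosen purely for illustration.

import numpy as np

def gaussian_kernel(t, s, width=0.05):
    return np.exp(-0.5 * ((t[:, None] - s[None, :]) / width) ** 2)

def reconstruct(q, A, G, lam=1e-3):
    # solve (A^T A + lam * G) c = A^T q for the kernel-expansion coefficients c
    return np.linalg.solve(A.T @ A + lam * G, A.T @ q)

# toy problem: recover a smooth stimulus from noisy linear measurements
rng = np.random.default_rng(5)
t_grid = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.0, 1.0, 30)
K = gaussian_kernel(t_grid, centers)           # stimulus model u(t) = K @ c_true
c_true = rng.normal(size=30)
A = rng.normal(size=(60, 200)) @ K             # 60 noisy linear measurements
q = A @ c_true + 0.05 * rng.normal(size=60)
G = gaussian_kernel(centers, centers)          # Gram matrix used as the regularizer
c_hat = reconstruct(q, A, G)
print(np.linalg.norm(K @ (c_hat - c_true)) / np.linalg.norm(K @ c_true))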
Development and Current Status of the “Cambridge” Loudness Models
2014-01-01
This article reviews the evolution of a series of models of loudness developed in Cambridge, UK. The first model, applicable to stationary sounds, was based on modifications of the model developed by Zwicker, including the introduction of a filter to allow for the effects of transfer of sound through the outer and middle ear prior to the calculation of an excitation pattern, and changes in the way that the excitation pattern was calculated. Later, modifications were introduced to the assumed middle-ear transfer function and to the way that specific loudness was calculated from excitation level. These modifications led to a finite calculated loudness at absolute threshold, which made it possible to predict accurately the absolute thresholds of broadband and narrowband sounds, based on the assumption that the absolute threshold corresponds to a fixed small loudness. The model was also modified to give predictions of partial loudness—the loudness of one sound in the presence of another. This allowed predictions of masked thresholds based on the assumption that the masked threshold corresponds to a fixed small partial loudness. Versions of the model for time-varying sounds were developed, which allowed prediction of the masked threshold of any sound in a background of any other sound. More recent extensions incorporate binaural processing to account for the summation of loudness across ears. In parallel, versions of the model for predicting loudness for hearing-impaired ears have been developed and have been applied to the development of methods for fitting multichannel compression hearing aids. PMID:25315375
Pullman, Rebecca E; Roepke, Stephanie E; Duffy, Jeanne F
2012-06-01
To determine whether an accurate circadian phase assessment could be obtained from saliva samples collected by patients in their home. Twenty-four individuals with a complaint of sleep initiation or sleep maintenance difficulty were studied for two evenings. Each participant received instructions for collecting eight hourly saliva samples in dim light at home. On the following evening they spent 9 h in a laboratory room with controlled dim (<20 lux) light where hourly saliva samples were collected. Circadian phase of dim light melatonin onset (DLMO) was determined using both an absolute threshold (3 pg ml⁻¹) and a relative threshold (two standard deviations above the mean of three baseline values). Neither threshold method worked well for one participant who was a "low-secretor". In four cases the participants' in-lab melatonin levels rose much earlier or were much higher than their at-home levels, and one participant appeared to take the at-home samples out of order. Overall, the at-home and in-lab DLMO values were significantly correlated using both methods, and differed on average by 37 (±19) min using the absolute threshold and by 54 (±36) min using the relative threshold. The at-home assessment procedure was able to determine an accurate DLMO using an absolute threshold in 62.5% of the participants. Thus, an at-home procedure for assessing circadian phase could be practical for evaluating patients for circadian rhythm sleep disorders. Copyright © 2012 Elsevier B.V. All rights reserved.
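Sketch of the two DLMO definitions described above: an absolute 3 pg/ml threshold and a relative threshold of two standard deviations above the mean of three baseline samples; the linear interpolation to the crossing time and the toy melatonin profile are assumptions.

import numpy as np

def dlmo(times_h, melatonin_pg_ml, mode="absolute"):
    m = np.asarray(melatonin_pg_ml, dtype=float)
    if mode == "absolute":
        thr = 3.0
    else:  # relative: 2 SD above the mean of the first three (baseline) samples
        baseline = m[:3]
        thr = baseline.mean() + 2.0 * baseline.std(ddof=1)
    above = np.nonzero(m >= thr)[0]
    if above.size == 0 or above[0] == 0:
        return None                      # never crossed, or already above at start
    i = above[0]
    # linear interpolation between the last sample below and first sample above
    frac = (thr - m[i - 1]) / (m[i] - m[i - 1])
    return times_h[i - 1] + frac * (times_h[i] - times_h[i - 1])

times = np.arange(18, 26)                                   # hourly samples, 18:00-01:00
saliva = [0.8, 1.1, 1.4, 2.2, 4.5, 9.0, 14.0, 18.0]         # toy pg/ml values
print(dlmo(times, saliva, "absolute"), dlmo(times, saliva, "relative"))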
Photoionization of Se+ and Se2+ Ions: Experiment and Theory
NASA Astrophysics Data System (ADS)
Esteves, D. A.; Sterling, N. C.; Alna'Washi, Ghassan; Aguilar, A.; Kilcoyne, A. L. D.; Balance, C. P.; Norrington, P. H.; McLaughlin, B. M.
2007-06-01
The determination of elemental abundances in astrophysical nebulae is highly dependent on the accuracy of the available atomic data. Numerical simulations show that derived Se abundances in ionized nebulae can be uncertain by factors of two or more from atomic data uncertainties alone. Of these uncertainties, photoionization cross section data are the most important, particularly in the near threshold region of the valence shell. Absolute photoionization cross sections for Se^+ and Se^2+ ions near their thresholds have been measured at the Advanced Light Source in Berkeley, using the merged beams photo-ion technique. Theoretical photoionization cross-section calculations were performed for both of these Se ions using the state-of-the-art fully relativistic Dirac R-matrix code (DARC). The calculations show encouraging agreement with the experimental measurements. A more comprehensive set of results will be presented at the meeting.
Gans, Bérenger; Garcia, Gustavo A; Boyé-Péronne, Séverine; Loison, Jean-Christophe; Douin, Stéphane; Gaie-Levrel, François; Gauyacq, Dolores
2011-06-02
The absolute photoionization cross section of C₂H₅ has been measured at 10.54 eV using vacuum ultraviolet (VUV) laser photoionization. The C₂H₅ radical was produced in situ using the rapid C₂H₆ + F → C₂H₅ + HF reaction. Its absolute photoionization cross section has been determined in two different ways: first using the C₂H₅ + NO₂ → C₂H₅O + NO reaction in a fast flow reactor, and the known absolute photoionization cross section of NO. In a second experiment, it has been measured relative to the known absolute photoionization cross section of CH₃ as a reference by using the CH₄ + F → CH₃ + HF and C₂H₆ + F → C₂H₅ + HF reactions successively. Both methods gave similar results, the second one being more precise and yielding the value: σ_ion(C₂H₅) = (5.6 ± 1.4) Mb at 10.54 eV. This value is used to calibrate on an absolute scale the photoionization curve of C₂H₅ produced in a pyrolytic source from the C₂H₅NO₂ precursor, and ionized by the VUV beam of the DESIRS beamline at the SOLEIL synchrotron facility. In this latter experiment, a recently developed ion imaging technique is used to discriminate the direct photoionization process from dissociative ionization contributions to the C₂H₅⁺ signal. The imaging technique applied on the photoelectron signal also allows a slow photoelectron spectrum with a 40 meV resolution to be extracted, indicating that photoionization around the adiabatic ionization threshold involves a complex vibrational overlap between the neutral and cationic ground states, as was previously observed in the literature. Comparison with earlier photoionization studies, in particular with the photoionization yield recorded by Ruscic et al., is also discussed. © 2011 American Chemical Society
Measurements of Absolute Hadronic Branching Fractions of the Λ_{c}^{+} Baryon.
Ablikim, M; Achasov, M N; Ai, X C; Albayrak, O; Albrecht, M; Ambrose, D J; Amoroso, A; An, F F; An, Q; Bai, J Z; Baldini Ferroli, R; Ban, Y; Bennett, D W; Bennett, J V; Bertani, M; Bettoni, D; Bian, J M; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, H Y; Chen, J C; Chen, M L; Chen, S J; Chen, X; Chen, X R; Chen, Y B; Cheng, H P; Chu, X K; Cibinetto, G; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Dou, Z L; Du, S X; Duan, P F; Eren, E E; Fan, J Z; Fang, J; Fang, S S; Fang, X; Fang, Y; Farinelli, R; Fava, L; Fedorov, O; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X L; Gao, X Y; Gao, Y; Gao, Z; Garzia, I; Goetzen, K; Gong, L; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guan, Y H; Guo, A Q; Guo, L B; Guo, Y; Guo, Y P; Haddadi, Z; Hafner, A; Han, S; Hao, X Q; Harris, F A; He, K L; Held, T; Heng, Y K; Hou, Z L; Hu, C; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G S; Huang, J S; Huang, X T; Huang, Y; Hussain, T; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, L W; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X L; Kang, X S; Kavatsyuk, M; Ke, B C; Kiese, P; Kliemt, R; Kloss, B; Kolcu, O B; Kopf, B; Kornicer, M; Kuehn, W; Kupsc, A; Lange, J S; Lara, M; Larin, P; Leng, C; Li, C; Li, Cheng; Li, D M; Li, F; Li, F Y; Li, G; Li, H B; Li, J C; Li, Jin; Li, K; Li, K; Li, Lei; Li, P R; Li, Q Y; Li, T; Li, W D; Li, W G; Li, X L; Li, X M; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Lin, D X; Liu, B J; Liu, C X; Liu, D; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H H; Liu, H H; Liu, H M; Liu, J; Liu, J B; Liu, J P; Liu, J Y; Liu, K; Liu, K Y; Liu, L D; Liu, P L; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqing; Loehner, H; Lou, X C; Lu, H J; Lu, J G; Lu, Y; Lu, Y P; Luo, C L; Luo, M X; Luo, T; Luo, X L; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Ma, Y M; Maas, F E; Maggiora, M; Mao, Y J; Mao, Z P; Marcello, S; Messchendorp, J G; Min, J; Mitchell, R E; Mo, X H; Mo, Y J; Morales Morales, C; Muchnoi, N Yu; Muramatsu, H; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Pan, Y; Patteri, P; Pelizaeus, M; Peng, H P; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Poling, R; Prasad, V; Qi, H R; Qi, M; Qian, S; Qiao, C F; Qin, L Q; Qin, N; Qin, X S; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Ripka, M; Rong, G; Rosner, Ch; Ruan, X D; Santoro, V; Sarantsev, A; Savrié, M; Schoenning, K; Schumann, S; Shan, W; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Song, W M; Song, X Y; Sosio, S; Spataro, S; Sun, G X; Sun, J F; Sun, S S; Sun, Y J; Sun, Y Z; Sun, Z J; Sun, Z T; Tang, C J; Tang, X; Tapan, I; Thorndike, E H; Tiemens, M; Ullrich, M; Uman, I; Varner, G S; Wang, B; Wang, B L; Wang, D; Wang, D Y; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, P; Wang, P L; Wang, S G; Wang, W; Wang, W P; Wang, X F; Wang, Y D; Wang, Y F; Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z H; Wang, Z Y; Weber, T; Wei, D H; Wei, J B; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, Z; Xia, L; Xia, L G; Xia, Y; Xiao, D; Xiao, H; Xiao, Z J; Xie, Y G; Xiu, Q L; Xu, G F; Xu, L; Xu, Q J; Xu, Q N; Xu, X P; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y X; Ye, 
M; Ye, M H; Yin, J H; Yu, B X; Yu, C X; Yu, J S; Yuan, C Z; Yuan, W L; Yuan, Y; Yuncu, A; Zafar, A A; Zallo, A; Zeng, Y; Zeng, Z; Zhang, B X; Zhang, B Y; Zhang, C; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, X Y; Zhang, Y; Zhang, Y H; Zhang, Y N; Zhang, Y T; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, Q W; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, W J; Zheng, Y H; Zhong, B; Zhou, L; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, K; Zhu, K J; Zhu, S; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zotti, L; Zou, B S; Zou, J H
2016-02-05
We report the first measurement of absolute hadronic branching fractions of the Λ_{c}^{+} baryon at the Λ_{c}^{+}Λ̄_{c}^{-} production threshold, in the 30 years since the Λ_{c}^{+} discovery. In total, 12 Cabibbo-favored Λ_{c}^{+} hadronic decay modes are analyzed with a double-tag technique, based on a sample of 567 pb^{-1} of e^{+}e^{-} collisions at √s=4.599 GeV recorded with the BESIII detector. A global least-squares fitter is utilized to improve the measured precision. Among the measurements for twelve Λ_{c}^{+} decay modes, the branching fraction for Λ_{c}^{+}→pK^{-}π^{+} is determined to be (5.84±0.27±0.23)%, where the first uncertainty is statistical and the second is systematic. In addition, the measurements of the branching fractions of the other 11 Cabibbo-favored hadronic decay modes are significantly improved.
NASA Astrophysics Data System (ADS)
Coventry, M. D.; Krites, A. M.
Measurements to determine the absolute D-D and D-7Li neutron production rates with a neutron generator running at 100-200 kV acceleration potential were performed using the threshold activation foil technique. This technique provides a clear measure of fast neutron flux and, with a suitable model, the neutron output. This approach requires little specialized equipment and is used to calibrate real-time neutron detectors and to verify neutron output. We discuss the activation foil measurement technique and describe its use in determining the relative contributions of D-D and D-7Li reactions to the total neutron yield and real-time detector response, and compare to model predictions. The D-7Li reaction produces neutrons with a continuum of energies and a sharp peak around 13.5 MeV, enabling measurement techniques beyond what D-D generators alone can perform. The ability to perform measurements with D-D neutrons alone, then add D-7Li neutrons for inelastic gamma production, presents additional measurement modalities with the same neutron source without the use of tritium. Typically, D-T generators are employed for inelastic scattering applications but have a high regulatory burden from a radiological aspect (tritium inventory, liability concerns) and are export-controlled. D-D and D-7Li generators avoid these issues completely.
Total and dissociative photoionization cross sections of N2 from threshold to 107 eV
NASA Technical Reports Server (NTRS)
Samson, James A. R.; Masuoka, T.; Pareek, P. N.; Angel, G. C.
1987-01-01
The absolute cross sections for the production of N(+) and N2(+) have been measured from the dissociative ionization threshold to 115 Å. In addition, the absolute photoabsorption and photoionization cross sections are tabulated between 114 and 796 Å. The ionization efficiencies are also given at several discrete wavelengths between 660 and 790 Å. The production of N(+) fragment ions is discussed in terms of the doubly excited N2(+) states with binding energies in the range 24 to 44 eV.
Gandjour, Afschin
2015-01-01
In Germany, the Institute for Quality and Efficiency in Health Care (IQWiG) makes recommendations for reimbursement prices of drugs on the basis of a proportional relationship between costs and health benefits. This paper analyzed the potential of IQWiG's decision rule to control health expenditures and used a cost-per-quality-adjusted life year (QALY) rule as a comparison. A literature search was conducted, and a theoretical model of health expenditure growth was built. The literature search shows that the median incremental cost-effectiveness ratio of German cost-effectiveness analyses was €7650 per QALY gained, thus yielding a much lower threshold cost-effectiveness ratio for IQWiG's rule than an absolute rule at €30 000 per QALY. The theoretical model shows that IQWiG's rule is able to contain the long-term growth of health expenditures under the conservative assumption that future health increases at a constant absolute rate and that the threshold incremental cost-effectiveness ratio increases at a smaller rate than health expenditures. In contrast, an absolute rule offers the potential for manufacturers to raise drug prices in response to the threshold, thus resulting in an initial spike in expenditures. Results suggest that IQWiG's proportional rule will lead to lower drug prices and a slower growth of health expenditures than an absolute cost-effectiveness threshold at €30 000 per QALY. This finding is surprising, as IQWiG's rule (in contrast to a cost-per-QALY rule) does not start from a fixed budget. Copyright © 2014 John Wiley & Sons, Ltd.
Kastelein, Ronald A; Hoek, Lean; Wensveen, Paul J; Terhune, John M; de Jong, Christ A F
2010-02-01
The underwater hearing sensitivities of two 2-year-old female harbor seals were quantified in a pool built for acoustic research by using a behavioral psycho-acoustic technique. The animals were trained to respond only when they detected an acoustic signal ("go/no-go" response). Detection thresholds were obtained for pure tone signals (frequencies: 0.2-40 kHz; durations: 0.5-5000 ms, depending on the frequency; 59 frequency-duration combinations). Detection thresholds were quantified by varying the signal amplitude with the 1-up, 1-down staircase method, and were defined as the stimulus levels resulting in a 50% detection rate. The hearing thresholds of the two seals were similar for all frequencies except for 40 kHz, for which the thresholds differed by, on average, 3.7 dB. There was an inverse relationship between the time constant (tau), derived from an exponential model of temporal integration, and the frequency [log(tau) = 2.86 - 0.94 log(f); tau in ms and f in kHz]. Similarly, the thresholds increased when the pulse was shorter than approximately 780 cycles (independent of the frequency). For pulses shorter than the integration time, the thresholds increased by 9-16 dB per decade reduction in the duration or number of cycles in the pulse. The results of this study suggest that most published hearing thresholds
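The fitted relation between the temporal-integration time constant and frequency quoted above can be evaluated directly; the short sketch below simply plugs frequencies into that published fit (it is an illustration of the formula, not the authors' analysis code).

```python
import math

def tau_ms(f_khz):
    """Time constant from the reported fit: log10(tau) = 2.86 - 0.94*log10(f),
    with tau in ms and f in kHz."""
    return 10 ** (2.86 - 0.94 * math.log10(f_khz))

for f in (0.2, 1.0, 4.0, 40.0):
    print(f"{f:5.1f} kHz -> tau = {tau_ms(f):7.1f} ms")
```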
16O resonances near the 4α threshold through the 12C(6Li,d) reaction
NASA Astrophysics Data System (ADS)
Rodrigues, M. R. D.; Borello-Lewin, T.; Miyake, H.; Duarte, J. L. M.; Rodrigues, C. L.; Souza, M. A.; Horodynski-Matsushigue, L. B.; Ukita, G. M.; Cappuzzello, F.; Cunsolo, A.; Cavallaro, M.; Agodi, C.; Foti, A.
2014-02-01
Background: Resonances around xα thresholds in light nuclei are recognized to be important in basic aspects of nuclear structure. However, there is scarce experimental information associated with them. Purpose: We study the α-clustering phenomenon in resonant states around the 4α threshold (14.44 MeV) in the 16O nucleus. Method: The 12C(6Li,d)16O reaction was investigated with an unprecedented resolution at a bombarding energy of 25.5 MeV by employing the São Paulo Pelletron-Enge-Spectrograph facility and the nuclear emulsion technique. Results: Several narrow resonances were populated and the energy resolution of 15 keV allows for the separation of doublet states that were not resolved previously. The upper limits for the resonance widths in this region were extracted. The angular distributions of the absolute differential cross section associated with four natural parity quasibound states are presented and compared to distorted wave Born approximation predictions. Conclusions: Narrow resonances not previously reported in the literature were observed. This indicates that the α-cluster structure information in this region should be revised.
NASA Technical Reports Server (NTRS)
Smith, Steven J.; Man, K.-F.; Chutjian, A.; Mawhorter, R. J.; Williams, I. D.
1991-01-01
Absolute cascade-free excitation cross-sections in an ion have been measured for the resonance 2S to 2P transition in Zn(+) using electron-energy-loss and merged electron-ion beam methods. Measurements were carried out at electron energies from below threshold to 6 times threshold. Comparisons are made with 2-, 5-, and 15-state close-coupling and distorted-wave theories. There is good agreement between experiment and the 15-state close-coupling cross-sections over the energy range of the calculations.
NASA Astrophysics Data System (ADS)
Schumacher, David; Sharma, Ravi; Grager, Jan-Carl; Schrapp, Michael
2018-07-01
Photon counting detectors (PCD) offer new possibilities for x-ray micro computed tomography (CT) in the field of non-destructive testing. For large and/or dense objects with high atomic numbers, the problem of scattered radiation and beam hardening severely influences the image quality. This work shows that using an energy-discriminating PCD based on CdTe makes it possible to address these problems by intrinsically reducing both the influence of scattering and beam hardening. Based on 2D radiographic measurements, it is shown that by energy thresholding the influence of scattered radiation can be reduced by up to in the case of a PCD compared to a conventional energy-integrating detector (EID). To demonstrate the capabilities of a PCD in reducing beam hardening, cupping artefacts are analyzed quantitatively. The PCD results show that the higher the energy threshold is set, the weaker the cupping effect becomes. Since numerous beam hardening correction algorithms exist, the PCD results are also compared to EID results corrected by common techniques; even so, the highest energy thresholds yield lower cupping artefacts than any of the applied correction algorithms. As an example of a potential industrial CT application, a turbine blade is investigated by CT. The inner structure of the turbine blade allows the image quality of PCD and EID to be compared in terms of absolute contrast, as well as normalized signal-to-noise and contrast-to-noise ratios. While the absolute contrast can be improved by raising the energy thresholds of the PCD, it is found that, due to lower statistics, the normalized contrast-to-noise ratio could not be improved compared to the EID. These results might change to the contrary when discarding pre-filtering of the x-ray spectra and thus allowing more low-energy photons to reach the detectors. Despite still being at an early stage of technological development, PCDs already improve CT image quality compared to conventional detectors in terms of scatter and beam hardening reduction.
Mathematics of quantitative kinetic PCR and the application of standard curves.
Rutledge, R G; Côté, C
2003-08-15
Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of +/-6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
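A standard curve of the kind described relates the threshold cycle (Ct) linearly to the logarithm of the starting template amount, with the slope fixing the amplification efficiency. The sketch below fits such a curve and quantifies an unknown sample; the calibration numbers are made up for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical calibration data: known template amounts (pg) and observed Ct.
known_amounts = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
observed_ct = np.array([28.1, 24.8, 21.4, 18.1, 14.7])

# Standard curve: Ct = slope * log10(amount) + intercept
slope, intercept = np.polyfit(np.log10(known_amounts), observed_ct, 1)

# Amplification efficiency implied by the slope (E = 10 ** (-1 / slope)).
efficiency = 10 ** (-1.0 / slope)

def quantify(ct):
    """Interpolate an unknown sample's starting amount (pg) from its Ct."""
    return 10 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.2f}")
print(f"unknown with Ct = 20.0 -> {quantify(20.0):.0f} pg")
```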
Towards a unifying basis of auditory thresholds: binaural summation.
Heil, Peter
2014-04-01
Absolute auditory threshold decreases with increasing sound duration, a phenomenon explainable by the assumptions that the sound evokes neural events whose probabilities of occurrence are proportional to the sound's amplitude raised to an exponent of about 3 and that a constant number of events are required for threshold (Heil and Neubauer, Proc Natl Acad Sci USA 100:6151-6156, 2003). Based on this probabilistic model and on the assumption of perfect binaural summation, an equation is derived here that provides an explicit expression of the binaural threshold as a function of the two monaural thresholds, irrespective of whether they are equal or unequal, and of the exponent in the model. For exponents >0, the predicted binaural advantage is largest when the two monaural thresholds are equal and decreases towards zero as the monaural threshold difference increases. This equation is tested and the exponent derived by comparing binaural thresholds with those predicted on the basis of the two monaural thresholds for different values of the exponent. The thresholds, measured in a large sample of human subjects with equal and unequal monaural thresholds and for stimuli with different temporal envelopes, are compatible only with an exponent close to 3. An exponent of 3 predicts a binaural advantage of 2 dB when the two ears are equally sensitive. Thus, listening with two (equally sensitive) ears rather than one has the same effect on absolute threshold as doubling duration. The data suggest that perfect binaural summation occurs at threshold and that peripheral neural signals are governed by an exponent close to 3. They might also shed new light on mechanisms underlying binaural summation of loudness.
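One way to turn the verbal model into numbers: if the sound-evoked event rate grows as pressure amplitude raised to an exponent k and the events from the two ears simply add, the binaural threshold follows from the two monaural thresholds as in the sketch below. This is our reading of the model described in the abstract, written for illustration; it is not the authors' published equation or code.

```python
import math

def binaural_threshold_db(left_db, right_db, exponent=3.0):
    """Predicted binaural threshold (dB) under perfect summation of events
    whose rate grows as (pressure amplitude) ** exponent at each ear."""
    k = exponent
    summed = 10 ** (-k * left_db / 20.0) + 10 ** (-k * right_db / 20.0)
    return -(20.0 / k) * math.log10(summed)

# Equal monaural thresholds: ~2 dB binaural advantage for an exponent of 3.
print(binaural_threshold_db(10.0, 10.0))   # ~8.0 dB
# Very unequal ears: the advantage shrinks towards zero.
print(binaural_threshold_db(10.0, 30.0))   # ~10.0 dB
```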
Swanepoel, De Wet; Matthysen, Cornelia; Eikelboom, Robert H; Clark, Jackie L; Hall, James W
2015-01-01
Accessibility of audiometry is hindered by the cost of sound booths and a shortage of hearing health personnel. This study investigated the validity of an automated mobile diagnostic audiometer with increased attenuation and real-time noise monitoring for clinical testing outside a sound booth. Attenuation characteristics and reference ambient noise levels for the computer-based audiometer (KUDUwave) were evaluated alongside the validity of environmental noise monitoring. Clinical validity was determined by comparing air- and bone-conduction thresholds obtained inside and outside the sound booth in 23 normal-hearing subjects (age range, 20-75 years; average age 35.5), with a subgroup of 11 subjects used to establish test-retest reliability. Improved passive attenuation and valid environmental noise monitoring were demonstrated. Clinically, air-conduction thresholds inside and outside the sound booth corresponded within 5 dB or less in more than 90% of instances (mean absolute difference 3.3 ± 3.2 SD). Bone-conduction thresholds corresponded within 5 dB or less in 80% of comparisons between test environments, with a mean absolute difference of 4.6 dB (3.7 SD). Threshold differences were not statistically significant. Mean absolute test-retest differences outside the sound booth were similar to those in the booth. Diagnostic pure-tone audiometry outside a sound booth, using automated testing, improved passive attenuation, and real-time environmental noise monitoring, demonstrated reliable hearing assessments.
Rider, Lisa G.; Aggarwal, Rohit; Pistorio, Angela; Bayat, Nastaran; Erman, Brian; Feldman, Brian M.; Huber, Adam M.; Cimaz, Rolando; Cuttica, Rubén J.; de Oliveira, Sheila Knupp; Lindsley, Carol B.; Pilkington, Clarissa A.; Punaro, Marilyn; Ravelli, Angelo; Reed, Ann M.; Rouster-Stevens, Kelly; van Royen, Annet; Dressler, Frank; Magalhaes, Claudia Saad; Constantin, Tamás; Davidson, Joyce E.; Magnusson, Bo; Russo, Ricardo; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A.; Miller, Frederick W.; Vencovsky, Jiri; Ruperto, Nicolino
2017-01-01
Objective Develop response criteria for juvenile dermatomyositis (JDM). Methods We analyzed the performance of 312 definitions that used core set measures (CSM) from either the International Myositis Assessment and Clinical Studies Group (IMACS) or the Pediatric Rheumatology International Trials Organization (PRINTO) and were derived from natural history data and a conjoint-analysis survey. They were further validated in the PRINTO trial of prednisone alone compared to prednisone with methotrexate or cyclosporine and the Rituximab in Myositis trial. Experts considered 14 top-performing candidate criteria based on their performance characteristics and clinical face validity using nominal group technique at a consensus conference. Results Consensus was reached for a conjoint analysis–based continuous model with a Total Improvement Score of 0-100, using absolute percent change in CSM with thresholds for minimal (≥30 points), moderate (≥45), and major improvement (≥70). The same criteria were chosen for adult dermatomyositis/polymyositis with differing thresholds for improvement. The sensitivity and specificity were 89% and 91-98% for minimal, 92-94% and 94-99% for moderate, and 91-98% and 85-85% for major improvement, respectively, in JDM patient cohorts using the IMACS and PRINTO CSM. These criteria were validated in the PRINTO trial for differentiating between treatment arms for minimal and moderate improvement (P=0.009–0.057) and in the Rituximab trial for significantly differentiating the physician rating of improvement (P<0.006). Conclusion The response criteria for JDM was a conjoint analysis–based model using a continuous improvement score based on absolute percent change in CSM, with thresholds for minimal, moderate, and major improvement. PMID:28382787
Baker, Simon; Priest, Patricia; Jackson, Rod
2000-01-01
Objective To estimate the impact of using thresholds based on absolute risk of cardiovascular disease to target drug treatment to lower blood pressure in the community. Design Modelling of three thresholds of treatment for hypertension based on the absolute risk of cardiovascular disease. 5 year risk of disease was estimated for each participant using an equation to predict risk. Net predicted impact of the thresholds on the number of people treated and the number of disease events averted over 5 years was calculated assuming a relative treatment benefit of one quarter. Setting Auckland, New Zealand. Participants 2158 men and women aged 35-79 years randomly sampled from the general electoral rolls. Main outcome measures Predicted 5 year risk of cardiovascular disease event, estimated number of people for whom treatment would be recommended, and disease events averted over 5 years at different treatment thresholds. Results 46 374 (12%) Auckland residents aged 35-79 receive drug treatment to lower their blood pressure, averting an estimated 1689 disease events over 5 years. Restricting treatment to individuals with blood pressure ⩾170/100 mm Hg and those with blood pressure between 150/90-169/99 mm Hg who have a predicted 5 year risk of disease ⩾10% would increase the net number for whom treatment would be recommended by 19 401. This 42% relative increase is predicted to avert 1139/1689 (68%) additional disease events overall over 5 years compared with current treatment. If the threshold for 5 year risk of disease is set at 15% the number recommended for treatment increases by <10% but about 620/1689 (37%) additional events can be averted. A 20% threshold decreases the net number of patients recommended for treatment by about 10% but averts 204/1689 (12%) more disease events than current treatment. Conclusions Implementing treatment guidelines that use treatment thresholds based on absolute risk could significantly improve the efficiency of drug treatment to lower blood pressure in primary care. PMID:10710577
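The arithmetic behind the "events averted" figures is a multiplication of each treated person's predicted 5-year risk by the assumed relative benefit of one quarter. The sketch below illustrates how the treatment threshold trades off the number treated against events averted, using a simulated risk distribution rather than the Auckland data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated predicted 5-year CVD risks for people with raised blood pressure
# (an illustrative distribution only, not the Auckland sample).
risk_5yr = rng.beta(1.5, 12.0, size=10_000)

RELATIVE_BENEFIT = 0.25  # treatment assumed to avert one quarter of events

for threshold in (0.05, 0.10, 0.15, 0.20):
    treated = risk_5yr >= threshold
    events_averted = RELATIVE_BENEFIT * risk_5yr[treated].sum()
    print(f"5-year risk threshold {threshold:.2f}: "
          f"{treated.sum():5d} treated, "
          f"{events_averted:6.1f} events averted over 5 years")
```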
Herrmann, H W; Kim, Y H; Young, C S; Fatherley, V E; Lopez, F E; Oertel, J A; Malone, R M; Rubery, M S; Horsfield, C J; Stoeffl, W; Zylstra, A B; Shmayda, W T; Batha, S H
2014-11-01
A new Gas Cherenkov Detector (GCD) with low-energy threshold and high sensitivity, currently known as Super GCD (or GCD-3 at OMEGA), is being developed for use at the OMEGA Laser Facility and the National Ignition Facility (NIF). Super GCD is designed to be pressurized to ≤400 psi (absolute) and uses all-metal seals to allow the use of fluorinated gases inside the target chamber. This will allow the gamma energy threshold to be run as low as 1.8 MeV with 400 psi (absolute) of C2F6, opening up a new portion of the gamma ray spectrum. Super GCD operating at 20 cm from TCC will be ∼400 × more efficient at detecting DT fusion gammas at 16.7 MeV than the Gamma Reaction History diagnostic at NIF (GRH-6m) when operated at their minimum thresholds.
Absolute auditory threshold: testing the absolute.
Heil, Peter; Matysiak, Artur
2017-11-02
The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
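The probabilistic model sketched above can be stated compactly: sensory events form a Poisson process whose rate is a spontaneous component plus a driven component proportional to the bandpass-filtered amplitude envelope raised to an exponent, and detection is reported when the event count reaches a criterion. The code below is an illustrative implementation of that idea with made-up parameter values; it is not the fitted model from the paper.

```python
import numpy as np
from scipy.stats import poisson

def detection_probability(envelope, dt, level_db, exponent=3.0,
                          spont_rate=5.0, gain=1e-2, criterion=10):
    """P(detect) for a tone with the given normalised amplitude envelope.

    Events ~ Poisson with rate = spont_rate + gain * a(t) ** exponent, where
    a(t) is the envelope scaled to the sound level; detection occurs when at
    least `criterion` events are observed. All parameter values are hypothetical.
    """
    amplitude = 10 ** (level_db / 20.0) * np.asarray(envelope, dtype=float)
    mean_events = (spont_rate * len(envelope) * dt
                   + gain * dt * np.sum(amplitude ** exponent))
    return poisson.sf(criterion - 1, mean_events)

# Longer envelopes (longer tones) reach the criterion at lower levels,
# reproducing the decrease of threshold with duration.
dt = 1e-3
for dur_ms in (10, 100, 1000):
    env = np.ones(dur_ms)                      # rectangular envelope, 1 ms steps
    levels = np.arange(0.0, 60.0, 0.5)
    p = np.array([detection_probability(env, dt, L) for L in levels])
    thr = levels[np.argmax(p >= 0.5)]
    print(f"{dur_ms:4d} ms tone -> threshold ~ {thr:.1f} dB (toy units)")
```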
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image and allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms, depending on the type of texture, are proposed.
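A minimal sketch of the entropy-guided level selection idea is given below, using PyWavelets on a synthetic striped texture. The wavelet, the number of levels, the selection criterion shown (lowest mean detail entropy) and the test image are all placeholder assumptions; the actual EADL rule is the one defined in the paper.

```python
import numpy as np
import pywt

def shannon_entropy(values, bins=64):
    """Shannon entropy (bits) of the histogram of wavelet detail coefficients."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def select_level(image, wavelet="db2", max_level=4):
    """Return the decomposition level whose detail sub-images have the lowest
    mean entropy, together with the per-level entropies."""
    coeffs = pywt.wavedec2(image, wavelet, level=max_level)
    # coeffs[1] holds the coarsest details (level = max_level),
    # coeffs[-1] the finest details (level = 1).
    entropies = {}
    for i, details in enumerate(coeffs[1:]):
        level = max_level - i
        entropies[level] = np.mean([shannon_entropy(d.ravel()) for d in details])
    best = min(entropies, key=entropies.get)
    return best, entropies

# Synthetic striped texture with a small bright defect.
x = np.indices((128, 128))[1]
texture = np.sin(2 * np.pi * x / 8.0)
texture[60:68, 60:68] += 3.0
level, entropies = select_level(texture)
print({k: round(v, 3) for k, v in entropies.items()}, "-> selected level:", level)
```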
NASA Astrophysics Data System (ADS)
Lükens, G.; Yacoub, H.; Kalisch, H.; Vescan, A.
2016-05-01
The interface charge density between the gate dielectric and an AlGaN/GaN heterostructure has a significant impact on the absolute value and stability of the threshold voltage Vth of metal-insulator-semiconductor (MIS) heterostructure field-effect transistors. It is shown that a dry-etching step (as typically necessary for normally off devices engineered by gate-recessing) before the Al2O3 gate dielectric deposition introduces a high positive interface charge density. Its origin is most likely donor-type trap states shifting Vth to large negative values, which is detrimental for normally off devices. We investigate the influence of oxygen plasma annealing of the dry-etched AlGaN/GaN surface by capacitance-voltage measurements and demonstrate that the positive interface charge density can be effectively compensated. Furthermore, only a low Vth hysteresis is observable, making this approach suitable for threshold voltage engineering. Analysis of the electrostatics in the investigated MIS structures reveals that the maximum Vth shift to positive voltages achievable is fundamentally limited by the onset of accumulation of holes at the dielectric/barrier interface. In the case of the Al2O3/Al0.26Ga0.74N/GaN material system, this maximum threshold voltage shift is limited to 2.3 V.
Compound summer temperature and precipitation extremes over central Europe
NASA Astrophysics Data System (ADS)
Sedlmeier, Katrin; Feldmann, H.; Schädler, G.
2018-02-01
Reliable knowledge of the near-future climate change signal of extremes is important for adaptation and mitigation strategies. Especially compound extremes, like heat and drought occurring simultaneously, may have a greater impact on society than their univariate counterparts and have recently become an active field of study. In this paper, we use a 12-member ensemble of high-resolution (7 km) regional climate simulations with the regional climate model COSMO-CLM over central Europe to analyze the climate change signal and its uncertainty for compound heat and drought extremes in summer by two different measures: one describing absolute (i.e., number of exceedances of absolute thresholds like hot days), the other relative (i.e., number of exceedances of time series intrinsic thresholds) compound extreme events. Changes are assessed between a reference period (1971-2000) and a projection period (2021-2050). Our findings show an increase in the number of absolute compound events for the whole investigation area. The change signal of relative extremes is more region-dependent, but there is a strong signal change in the southern and eastern parts of Germany and the neighboring countries. Especially the Czech Republic shows strong change in absolute and relative extreme events.
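To illustrate the two measures, the short sketch below counts compound hot-and-dry summer days for a single grid cell, once with fixed absolute thresholds and once with percentile-based (series-intrinsic) thresholds. The threshold values and the synthetic data are placeholders, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily summer series for one grid cell (temperature in deg C,
# precipitation in mm/day); purely illustrative.
tmax = rng.normal(24.0, 4.0, size=92 * 30)
precip = rng.gamma(0.7, 2.0, size=92 * 30)

# Absolute compound extreme: hot day AND dry day by fixed thresholds.
hot_abs = tmax > 30.0
dry_abs = precip < 1.0
n_absolute = np.sum(hot_abs & dry_abs)

# Relative compound extreme: exceedances of series-intrinsic percentiles.
hot_rel = tmax > np.percentile(tmax, 90)
dry_rel = precip < np.percentile(precip, 10)
n_relative = np.sum(hot_rel & dry_rel)

print(f"absolute compound days: {n_absolute}, relative compound days: {n_relative}")
```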
Ku-band signal design study. [for space shuttle orbiter communication links
NASA Technical Reports Server (NTRS)
Lindsey, W. L.; Woo, K. T.
1977-01-01
The acquisition/tracking performance of a practical squaring loop in which the times two multiplier is mechanized as a limiter/multiplier combination is evaluated. This squaring approach serves to produce the absolute value of the arriving signal as opposed to the perfect square law action which is required in order to render acquisition and tracking performance equivalent to that of a Costas loop. The Ku-Band orbiter signal design for the forward link is assessed. Acquisition time results and acquisition and tracking thresholds are summarized. A tradeoff study which pertains to bit synchronization techniques for the high rate Ku-Band channel is included and an optimum selection is made based upon the appropriate design constraints.
Alpha Cluster Structure in 16O
NASA Astrophysics Data System (ADS)
Dias Rodrigues, Márcia Regina; Borello-Lewin, Thereza; Miyake, Hideaki; Cappuzzello, Francesco; Cavallaro, Manuela; Duarte, José Luciano Miranda; Lima Rodrigues, Cleber; de Souza, Marco Antonio; Horodynski-Matsushigue, Brighitta; Cunsolo, Angelo; Foti, Antonio; Mitsuo Ukita, Gilberto; Neto de Faria, Pedro; Agodi, Clementina; De Napoli, Marzio; Nicolosi, Dario; Bondì, Dario; Carbone, Diana; Tropea, Stefania
2014-03-01
The main purpose of the present work is the investigation of the α-cluster phenomenon in 16O. The 12C(6Li,d)16O reaction was measured at a bombarding energy of 25.5 MeV employing the São Paulo Pelletron-Enge-Spectrograph facility and the nuclear emulsion detection technique. Resonant states around the 4α threshold were measured, and an energy resolution of 15 keV allowed previously unresolved states to be defined. The angular distributions of the absolute cross sections were determined over a range of 4-40 degrees in the center-of-mass system. Upper limits for the resonance widths were obtained, indicating that the α-cluster structure information in this region should be revised.
Optimum projection pattern generation for grey-level coded structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-04-01
Structured light illumination (SLI) systems are well-established optical inspection techniques for noncontact 3D surface measurements. A common technique is multi-frequency sinusoidal SLI that obtains the phase map at various fringe periods in order to estimate the absolute phase, and hence, the 3D surface information. Nevertheless, multi-frequency SLI systems employ multiple measurement planes (e.g. four phase shifted frames) to obtain the phase at a given fringe period. It is therefore an age-old challenge to obtain the absolute surface information using fewer measurement frames. Grey level (GL) coding techniques have been developed as an attempt to reduce the number of planes needed, because a spatio-temporal GL sequence employing p discrete grey levels and m frames has the potential to unwrap up to p^m fringes. Nevertheless, one major disadvantage of GL-based SLI techniques is that there are often errors near the border of each stripe, because an ideal stepwise intensity change cannot be measured. If the step-change in intensity is a single discrete grey-level unit, this problem can usually be overcome by applying an appropriate threshold. However, severe errors occur if the intensity change at the border of the stripe exceeds several discrete grey-level units. In this work, an optimum GL-based technique is presented that generates a series of projection patterns with a minimal gradient in the intensity. It is shown that when using this technique, the errors near the border of the stripes can be significantly reduced. This improvement is achieved through the choice of generated patterns, and does not involve additional hardware or special post-processing techniques. The performance of the method is validated using both simulations and experiments. The reported technique is generic, works with an arbitrary number of frames, and can employ an arbitrary number of grey levels.
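A p-ary reflected Gray code is one standard way of obtaining codewords in which neighbouring stripes differ by a single grey-level unit in a single frame, which is the property identified above as the key to suppressing errors at stripe borders. The sketch below generates such a sequence; it is a generic construction offered for illustration, not necessarily the optimum pattern set proposed in the paper.

```python
def reflected_gray_codes(p, m):
    """All p**m codewords of length m over grey levels 0..p-1, ordered so
    that consecutive codewords differ in exactly one digit by +/-1."""
    codes = [[g] for g in range(p)]
    for _ in range(m - 1):
        extended = []
        for i, prefix in enumerate(codes):
            # Alternate sweep direction so neighbouring prefixes join smoothly.
            levels = range(p) if i % 2 == 0 else range(p - 1, -1, -1)
            extended.extend(prefix + [g] for g in levels)
        codes = extended
    return codes

# Example: p = 3 grey levels and m = 2 frames can label 3**2 = 9 fringes.
for word in reflected_gray_codes(3, 2):
    print(word)
```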
The article deals first with the theoretical foundations of underwater hearing, and the effects of the acoustical characteristics of water on hearing ... lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability is probably vanishing in the middle range of frequencies. (Author)
The dual rod system of amphibians supports colour discrimination at the absolute visual threshold
Yovanovich, Carola A. M.; Koskela, Sanna M.; Nevala, Noora; Kondrashev, Sergei L.
2017-01-01
The presence of two spectrally different kinds of rod photoreceptors in amphibians has been hypothesized to enable purely rod-based colour vision at very low light levels. The hypothesis has never been properly tested, so we performed three behavioural experiments at different light intensities with toads (Bufo) and frogs (Rana) to determine the thresholds for colour discrimination. The thresholds of toads were different in mate choice and prey-catching tasks, suggesting that the differential sensitivities of different spectral cone types as well as task-specific factors set limits for the use of colour in these behavioural contexts. In neither task was there any indication of rod-based colour discrimination. By contrast, frogs performing phototactic jumping were able to distinguish blue from green light down to the absolute visual threshold, where vision relies only on rod signals. The remarkable sensitivity of this mechanism comparing signals from the two spectrally different rod types approaches theoretical limits set by photon fluctuations and intrinsic noise. Together, the results indicate that different pathways are involved in processing colour cues depending on the ecological relevance of this information for each task. This article is part of the themed issue ‘Vision in dim light’. PMID:28193811
Anaerobic threshold determination through ventilatory and electromyographic parameters.
Gassi, E R; Bankoff, A D P
2010-01-01
The aim of the present study was to compare alterations in the electromyographic (EMG) signal with the Ventilatory Threshold (VT). Eight men, amateur cyclists and triathletes (25.25 +/- 6.96 years), took part in the study; they exercised on a mechanically braked cycle ergometer at a cadence of 80 RPM, with the intensity increased by 25 W/min until exhaustion. The VT was determined by a non-linear increase in VE/VO2 without any increase in VE/VCO2 and was compared with the intensity corresponding to the break point in the amplitude of the EMG signal during incremental exercise. The EMG Fatigue Threshold (FT) and Ventilatory Threshold (VT) parameters used were power, time, absolute and relative VO2, ventilation (VE), heart rate (HR) and the subjective perception of effort. The results showed no difference in any of the selected variables at the intensities corresponding to the VT and the EMG-FT of the vastus lateralis and rectus femoris muscles. The parameters used in the comparison between the electromyographic and ventilatory indicators were load, time, absolute VO2 and VO2 relative to body mass, ventilation (VE), heart rate (HR) and the Subjective Perception of Effort (SPE).
Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-07-01
To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitude of visual field defects apparent in total and pattern deviation probability maps was compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001). Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute percent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (P < 0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute percent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. © 2017, American College of Rheumatology.
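The structure of the continuous Total Improvement Score can be illustrated as follows: each core set measure's absolute percent change is converted into a weighted sub-score, the sub-scores are summed to a 0-100 total, and the total is classified against the 20/40/60-point cut-offs. The per-measure weights and the mapping from percent change to points used below are hypothetical placeholders; the actual scoring bands are defined in the published criteria.

```python
def improvement_points(abs_pct_change, max_points):
    """Hypothetical mapping: linear credit up to 100% improvement,
    capped at the measure's maximum contribution."""
    return max(0.0, min(abs_pct_change, 100.0)) / 100.0 * max_points

# Hypothetical weighting of the six core set measures (sums to 100).
CORE_SET_WEIGHTS = {
    "physician_global": 17, "patient_global": 17, "extramuscular_global": 16,
    "muscle_strength": 17, "haq": 17, "muscle_enzymes": 16,
}

def total_improvement_score(abs_pct_changes):
    return sum(improvement_points(abs_pct_changes.get(m, 0.0), w)
               for m, w in CORE_SET_WEIGHTS.items())

def classify(score):
    if score >= 60:
        return "major improvement"
    if score >= 40:
        return "moderate improvement"
    if score >= 20:
        return "minimal improvement"
    return "no qualifying improvement"

changes = {"physician_global": 50, "patient_global": 40, "muscle_strength": 30,
           "extramuscular_global": 20, "haq": 10, "muscle_enzymes": 25}
score = total_improvement_score(changes)
print(f"score = {score:.0f} -> {classify(score)}")
```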
NASA Technical Reports Server (NTRS)
Cosgrove, D. J.
1987-01-01
This study was carried out to develop improved methods for measuring in-vivo stress relaxation of growing tissues and to compare relaxation in the stems of four different species. When water uptake by growing tissue is prevented, in-vivo stress relaxation occurs because continued wall loosening reduces wall stress and cell turgor pressure. With this procedure one may measure the yield threshold for growth (Y), the turgor pressure in excess of the yield threshold (P-Y), and the physiological wall extensibility (phi). Three relaxation techniques proved useful: "turgor-relaxation", "balance-pressure" and "pressure-block". In the turgor-relaxation method, water is withheld from growing tissue and the reduction in turgor is measured directly with the pressure probe. This technique gives absolute values for P and Y, but requires tissue excision. In the balance-pressure technique, the excised growing region is sealed in a pressure chamber, and the subsequent reduction in water potential is measured as the applied pressure needed to return xylem sap to the cut surface. This method is simple, but only measures (P-Y), not the individual values of P and Y. In the pressure-block technique, the growing tissue is sealed into a pressure chamber, growth is monitored continuously, and just sufficient pressure is applied to the chamber to block growth. The method gives high-resolution kinetics of relaxation and does not require tissue excision, but only measures (P-Y). The three methods gave similar results when applied to the growing stems of pea (Pisum sativum L.), cucumber (Cucumis sativus L.), soybean (Glycine max (L.) Merr.) and zucchini (Cucurbita pepo L.) seedlings. Values for (P-Y) averaged between 1.4 and 2.7 bar, depending on species. Yield thresholds averaged between 1.3 and 3.0 bar. Compared with the other methods, relaxation by pressure-block was faster and exhibited dynamic changes in wall-yielding properties. The two pressure-chamber methods were also used to measure the internal water-potential gradient (between the xylem and the epidermis) which drives water uptake for growth. For the four species it was small, between 0.3 and 0.6 bar, and so did not limit growth substantially.
Sex Differences in Antiretroviral Therapy Initiation in Pediatric HIV Infection
Swordy, Alice; Mori, Luisa; Laker, Leana; Muenchhoff, Maximilian; Matthews, Philippa C.; Tudor-Williams, Gareth; Lavandier, Nora; van Zyl, Anriette; Hurst, Jacob; Walker, Bruce D.; Ndung’u, Thumbi; Prendergast, Andrew; Goulder, Philip; Jooste, Pieter
2015-01-01
The incidence and severity of infections in childhood is typically greater in males. The basis for these observed sex differences is not well understood, and potentially may facilitate novel approaches to reducing disease from a range of conditions. We here investigated sex differences in HIV-infected children in relation to antiretroviral therapy (ART) initiation and post-treatment outcome. In a South African cohort of 2,101 HIV-infected children, we observed that absolute CD4+ count and CD4% were significantly higher in ART-naïve female, compared to age-matched male, HIV-infected children. Absolute CD4 count and CD4% were also significantly higher in HIV-uninfected female versus male neonates. We next showed that significantly more male than female children were initiated on ART (47% female); and children not meeting criteria to start ART by >5yrs were more frequently female (59%; p<0.001). Among ART-treated children, immune reconstitution of CD4 T-cells was more rapid and more complete in female children, even after adjustment for pre-ART absolute CD4 count or CD4% (p=0.011, p=0.030, respectively). However, while ART was initiated as a result of meeting CD4 criteria less often in females (45%), ART initiation as a result of clinical disease in children whose CD4 counts were above treatment thresholds occurred more often in females (57%, p<0.001). The main sex difference in morbidity observed in children initiating ART above CD4 thresholds, above that of TB disease, was as a result of wasting and stunting observed in females with above-threshold CD4 counts (p=0.002). These findings suggest the possibility that optimal treatment of HIV-infected children might incorporate differential CD4 treatment thresholds for ART initiation according to sex. PMID:26151555
Absolute photoionization cross sections of atomic oxygen
NASA Technical Reports Server (NTRS)
Samson, J. A. R.; Pareek, P. N.
1982-01-01
The absolute values of photoionization cross sections of atomic oxygen were measured from the ionization threshold to 120 A. An auto-ionizing resonance belonging to the 2S2P4(4P)3P(3Do, 3So) transition was observed at 479.43 A and another line at 389.97 A. The experimental data is in excellent agreement with rigorous close-coupling calculations that include electron correlations in both the initial and final states.
Reeves, Adam; Grayhem, Rebecca
2016-03-01
Rod-mediated 500 nm test spots were flashed in Maxwellian view at 5 deg eccentricity, both on steady 10.4 deg fields of intensities (I) from 0.00001 to 1.0 scotopic troland (sc td) and from 0.2 s to 1 s after extinguishing the field. On dim fields, thresholds of tiny (5') tests were proportional to √I (Rose-DeVries law), while thresholds after extinction fell within 0.6 s to the fully dark-adapted absolute threshold. Thresholds of large (1.3 deg) tests were proportional to I (Weber law) and extinction thresholds, to √I. Rod thresholds are elevated by photon-driven noise from dim fields that disappears at field extinction; large spot thresholds are additionally elevated by neural light adaptation proportional to √I. At night, recovery from dimly lit fields is fast, not slow.
Distributions of Magnetic Field Variations, Differences and Residuals
1999-02-01
differences and residuals between two neighbouring sites (1997 data, Monte-cristo area). Each panel displays the results from a specific vector ... This means, in effect, counting the number of times the absolute value increased past one of a series of regularly spaced thresholds, and tally the ... results. Crossings of the zero level were not counted. Fig. 7 illustrates the binning procedure for a fictitious data set and four bin thresholds on
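The counting procedure described can be reproduced in a few lines: for each of a set of regularly spaced thresholds, count how often the absolute value of the signal rises past that level, ignoring the zero level. The bin levels and the test signal below are arbitrary stand-ins for the magnetic-field data.

```python
import numpy as np

def upward_crossings(signal, thresholds):
    """For each threshold, count how often |signal| increases past it."""
    a = np.abs(np.asarray(signal, dtype=float))
    counts = {}
    for thr in thresholds:
        if thr == 0:          # crossings of the zero level are not counted
            continue
        above = a > thr
        counts[thr] = int(np.sum(~above[:-1] & above[1:]))
    return counts

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=2000))      # fictitious field-variation series
bins = np.arange(5.0, 45.0, 10.0)         # four regularly spaced thresholds
print(upward_crossings(x, bins))
```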
Bound-Electron Nonlinearity Beyond the Ionization Threshold.
Wahlstrand, J K; Zahedpour, S; Bahl, A; Kolesik, M; Milchberg, H M
2018-05-04
We present absolute space- and time-resolved measurements of the ultrafast laser-driven nonlinear polarizability in argon, krypton, xenon, nitrogen, and oxygen up to ionization fractions of a few percent. These measurements enable determination of the strongly nonperturbative bound-electron nonlinear polarizability well beyond the ionization threshold, where it is found to remain approximately quadratic in the laser field, a result normally expected at much lower intensities where perturbation theory applies.
Measurement of visual contrast sensitivity
NASA Astrophysics Data System (ADS)
Vongierke, H. E.; Marko, A. R.
1985-04-01
This invention involves measurement of the visual contrast sensitivity (modulation transfer) function of a human subject by means of a linear or circular spatial frequency pattern on a cathode ray tube whose contrast automatically decreases or increases depending on whether the subject presses or releases a hand-switch button. The threshold for detecting the pattern modulation is found by the subject adjusting the contrast to values that vary about the threshold, thereby determining the threshold and, through the magnitude of the contrast fluctuations between reversals, also providing an estimate of the variability of the subject's absolute threshold. The invention also involves slow automatic sweeping of the spatial frequency of the pattern, either after preset time intervals or after the threshold has been defined at each frequency by a selected number of subject-determined threshold crossings (i.e., contrast reversals).
NASA Astrophysics Data System (ADS)
van der Veen, Rob L. P.; Berendschot, Tos T. J. M.; Makridaki, Maria; Hendrikse, Fred; Carden, David; Murray, Ian J.
2009-11-01
A comparison of macular pigment optical density (MPOD) spatial profiles determined by an optical and a psychophysical technique is presented. We measured the right eyes of 19 healthy individuals, using fundus reflectometry at 0, 1, 2, 4, 6, and 8 deg eccentricity; and heterochromatic flicker photometry (HFP) at 0, 0.5, 1, 2, 3, 4, 5, 6, and 7 deg, and a reference point at 8 deg eccentricity. We found a strong correlation between the two techniques. However, the absolute estimates obtained by fundus reflectometry data were higher than by HFP. These differences could partly be explained by the fact that at 8 deg eccentricity the MPOD is not zero, as assumed in HFP. Furthermore, when performing HFP for eccentricities of <1 deg, we had to assume that subjects set flicker thresholds at 0.4 deg horizontal translation when using a 1-deg stimulus. MPOD profiles are very similar for both techniques if, on average, 0.05 DU is added to the HFP data at all eccentricities. An additional correction factor, dependent on the steepness of the MPOD spatial distribution, is required for 0 deg.
NASA Astrophysics Data System (ADS)
Laib, Mohamed; Telesca, Luciano; Kanevski, Mikhail
2018-03-01
This paper studies the daily connectivity time series of a wind speed-monitoring network using multifractal detrended fluctuation analysis. It investigates the long-range fluctuation and multifractality in the residuals of the connectivity time series. Our findings reveal that the daily connectivity of the correlation-based network is persistent for any correlation threshold. Further, the multifractality degree is higher for larger absolute values of the correlation threshold.
Gierach, Gretchen L.; Geller, Berta M.; Shepherd, John A.; Patel, Deesha A.; Vacek, Pamela M.; Weaver, Donald L.; Chicoine, Rachael E.; Pfeiffer, Ruth M.; Fan, Bo; Mahmoudzadeh, Amir Pasha; Wang, Jeff; Johnson, Jason M.; Herschorn, Sally D.; Brinton, Louise A.; Sherman, Mark E.
2014-01-01
Background Mammographic density (MD), the area of non-fatty appearing tissue divided by total breast area, is a strong breast cancer risk factor. Most MD analyses have employed visual categorizations or computer-assisted quantification, which ignore breast thickness. We explored MD volume and area, using a volumetric approach previously validated as predictive of breast cancer risk, in relation to risk factors among women undergoing breast biopsy. Methods Among 413 primarily white women, ages 40–65, undergoing diagnostic breast biopsies between 2007–2010 at an academic facility in Vermont, MD volume (cm3) was quantified in cranio-caudal views of the breast contralateral to the biopsy target using a density phantom, while MD area (cm2) was measured on the same digital mammograms using thresholding software. Risk factor associations with continuous MD measurements were evaluated using linear regression. Results Percent MD volume and area were correlated (r=0.81) and strongly and inversely associated with age, body mass index (BMI), and menopause. Both measures were inversely associated with smoking and positively associated with breast biopsy history. Absolute MD measures were correlated (r=0.46) and inversely related to age and menopause. Whereas absolute dense area was inversely associated with BMI, absolute dense volume was positively associated. Conclusions Volume and area MD measures exhibit some overlap in risk factor associations, but divergence as well, particularly for BMI. Impact Findings suggest that volume and area density measures differ in subsets of women; notably, among obese women, absolute density was higher with volumetric methods, suggesting that breast cancer risk assessments may vary for these techniques. PMID:25139935
{{\\rm{\\Lambda }}}_{c}^{+} physics at BESIII
NASA Astrophysics Data System (ADS)
Wang, Weiping; BESIII collaboration
2018-05-01
Based on the data sets collected by the BESIII detector near the $\Lambda_c^+\bar{\Lambda}_c^-$ production threshold, i.e. at $\sqrt{s} = 4574.5$, 4580.0, 4590.0, and 4599.5 MeV, we report the preliminary study of the production behaviour of the $e^+e^- \to \Lambda_c^+\bar{\Lambda}_c^-$ process, including the Born cross section and electromagnetic form factor ratios. Using the large-statistics data at $\sqrt{s} = 4599.5$ MeV, we measured the absolute branching fractions of Cabibbo-favored hadronic decays of the $\Lambda_c^+$ baryon with a double-tag technique. The branching fractions for 12 hadronic decay modes are significantly improved. We also report the model-independent measurement of the branching fractions of the $\Lambda_c^+ \to \Lambda e^+\nu_e$ and $\Lambda_c^+ \to \Lambda\mu^+\nu_\mu$ semi-leptonic decays.
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Wang, Wenjian; Hodelin, Juan; Forrester, Thomas; Romanov, Volodymyr; Kostrzewski, Andrew
2016-05-01
In this paper, Bayesian Binary Sensing (BBS) is discussed as an effective tool for Bayesian Inference (BI) evaluation in interdisciplinary areas such as ISR (and C3I), Homeland Security, QC, medicine, defense, and many others. In particular, the Hilbertian Sine (HS) is introduced as an absolute measure of BI, avoiding the relativity of decision-threshold identification that affects traditional measures of BI based on false positives and false negatives.
Stoecker, William V.; Gupta, Kapil; Stanley, R. Joe; Moss, Randy H.; Shrestha, Bijaya
2011-01-01
Background Dermoscopy, also known as dermatoscopy or epiluminescence microscopy (ELM), is a non-invasive, in vivo technique, which permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. One prominent feature useful for melanoma detection in dermoscopy images is the asymmetric blotch (asymmetric structureless area). Method Using both relative and absolute colors, blotches are detected in this research automatically by using thresholds in the red and green color planes. Several blotch indices are computed, including the scaled distance between the largest blotch centroid and the lesion centroid, ratio of total blotch areas to lesion area, ratio of largest blotch area to lesion area, total number of blotches, size of largest blotch, and irregularity of largest blotch. Results The effectiveness of the absolute and relative color blotch features was examined for melanoma/benign lesion discrimination over a dermoscopy image set containing 165 melanomas (151 invasive melanomas and 14 melanomas in situ) and 347 benign lesions (124 nevocellular nevi without dysplasia and 223 dysplastic nevi) using a leave-one-out neural network approach. Receiver operating characteristic curve results are shown, highlighting the sensitivity and specificity of melanoma detection. Statistical analysis of the blotch features are also presented. Conclusion Neural network and statistical analysis showed that the blotch detection method was somewhat more effective using relative color than using absolute color. The relative-color blotch detection method gave a diagnostic accuracy of about 77%. PMID:15998328
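As a rough illustration of the blotch indices listed above, the sketch below thresholds the red and green planes relative to the mean colour of the surrounding skin, labels the connected blotches, and computes a few of the indices. The relative-colour drops, the synthetic image and the feature names are assumptions for illustration, not the thresholds or training data of the study.

```python
import numpy as np
from scipy import ndimage

def blotch_features(rgb, lesion_mask, rel_drop=(60, 60)):
    """Detect dark blotches inside a lesion by relative-colour thresholds on
    the red and green planes and return a few blotch indices.
    `rel_drop` (red, green) are hypothetical drops below the mean
    surrounding-skin colour, not values from the paper."""
    skin_mean = rgb[~lesion_mask].mean(axis=0)           # mean R,G,B of skin
    blotch_mask = (lesion_mask
                   & (rgb[..., 0] < skin_mean[0] - rel_drop[0])
                   & (rgb[..., 1] < skin_mean[1] - rel_drop[1]))
    labels, n_blotches = ndimage.label(blotch_mask)
    if n_blotches == 0:
        return {"n_blotches": 0}
    sizes = ndimage.sum(blotch_mask, labels, index=range(1, n_blotches + 1))
    largest = int(np.argmax(sizes)) + 1
    lesion_area = lesion_mask.sum()
    lesion_c = np.array(ndimage.center_of_mass(lesion_mask))
    blotch_c = np.array(ndimage.center_of_mass(labels == largest))
    return {
        "n_blotches": int(n_blotches),
        "total_blotch_to_lesion_area": float(sizes.sum() / lesion_area),
        "largest_blotch_to_lesion_area": float(sizes.max() / lesion_area),
        "centroid_distance_px": float(np.linalg.norm(blotch_c - lesion_c)),
    }

# Tiny synthetic example: bright skin, darker lesion, even darker blotch.
img = np.full((64, 64, 3), 200.0)
yy, xx = np.ogrid[:64, :64]
lesion = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
img[lesion] = 140.0
img[26:34, 36:46] = 60.0
print(blotch_features(img, lesion))
```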
Tomlin, Kerry L; Neitenbach, Anna-Maria; Borg, Ulf
2017-01-13
Regional oximetry is increasingly used to monitor post-extraction oxygen status of the brain during surgical procedures where hemodynamic fluctuations are expected. Particularly in cardiac surgery, clinicians employ an interventional algorithm to restore baseline regional oxygen saturation (rSO2) when a patient reaches a critical desaturation threshold. Evidence suggests that monitoring cardiac surgery patients and intervening to maintain rSO2 can improve postoperative outcomes; however, evidence generated with one manufacturer's device may not be applicable to others. We hypothesized that regional oximeters from different manufacturers respond uniquely to changes in oxygen saturation in healthy volunteers. Three devices were tested: INVOS™ 5100C (Medtronic), EQUANOX™ 7600 (Nonin), and FORE-SIGHT™ (CASMED) monitors. We divided ten healthy subjects into two cohorts wearing a single sensor each from INVOS and EQUANOX (n = 6), or INVOS and FORE-SIGHT (n = 4). We induced and reversed hypoxia by adjusting the fraction of inspired oxygen. We calculated the magnitude of absolute rSO2 change and rate of rSO2 change during desaturation and resaturation, and determined if and when each device reached a critical interventional rSO2 threshold during hypoxia. All devices responded to changes in oxygen directionally as expected. The median absolute rSO2 change and the rate of rSO2 change were significantly greater during desaturation and resaturation for INVOS compared with EQUANOX (P = 0.04). A similar but nonsignificant trend was observed for INVOS compared with FORE-SIGHT; our study was underpowered to definitively conclude there was no difference. A 10% relative decrease in rSO2 during desaturation was detected by all three devices across the ten subjects. INVOS met a 20% relative decrease threshold in all subjects of both cohorts, compared to 1 with EQUANOX and 2 with FORE-SIGHT. Neither EQUANOX nor FORE-SIGHT reached a 50% absolute rSO2 threshold compared with 4 and 3 subjects in each cohort with INVOS, respectively. Significant differences exist between the devices in how they respond to changes in oxygen saturation in healthy volunteers. We suggest caution when applying evidence generated with one manufacturer's device to all devices.
Uribe, Juan S; Isaacs, Robert E; Youssef, Jim A; Khajavi, Kaveh; Balzer, Jeffrey R; Kanter, Adam S; Küelling, Fabrice A; Peterson, Mark D
2015-04-01
This multicenter study aims to evaluate the utility of triggered electromyography (t-EMG) recorded throughout psoas retraction during lateral transpsoas interbody fusion to predict postoperative changes in motor function. Three hundred and twenty-three patients undergoing L4-5 minimally invasive lateral interbody fusion from 21 sites were enrolled. Intraoperative data collection included initial t-EMG thresholds in response to posterior retractor blade stimulation and subsequent t-EMG threshold values collected every 5 min throughout retraction. Additional data collection included dimensions/duration of retraction as well as pre- and postoperative lower extremity neurologic exams. Prior to expanding the retractor, the lowest t-EMG threshold was identified posterior to the retractor in 94 % of cases. Postoperatively, 13 (4.5 %) patients had a new motor weakness that was consistent with symptomatic neuropraxia (SN) of lumbar plexus nerves on the approach side. There were no significant differences between patients with or without a corresponding postoperative SN with respect to initial posterior blade reading (p = 0.600), or retraction dimensions (p > 0.05). Retraction time was significantly longer in those patients with SN vs. those without (p = 0.031). Stepwise logistic regression showed a significant positive relationship between the presence of new postoperative SN and total retraction time (p < 0.001), as well as change in t-EMG thresholds over time (p < 0.001), although false positive rates (increased threshold in patients with no new SN) remained high regardless of the absolute increase in threshold used to define an alarm criterion. Prolonged retraction time and coincident increases in t-EMG thresholds are predictors of declining nerve integrity. Increasing t-EMG thresholds, while predictive of injury, were also observed in a large number of patients without iatrogenic injury, with a greater predictive value in cases with extended duration. In addition to a careful approach with minimal muscle retraction and consistent lumbar plexus directional retraction, the incidence of postoperative motor neuropraxia may be reduced by limiting retraction time and utilizing t-EMG throughout retraction, while understanding that the specificity of this monitoring technique is low during initial retraction and increases with longer retraction duration.
Prognostic value of metabolic metrics extracted from baseline PET images in NSCLC
Carvalho, Sara; Leijenaar, Ralph T.H.; Velazquez, Emmanuel Rios; Oberije, Cary; Parmar, Chintan; van Elmpt, Wouter; Reymen, Bart; Troost, Esther G.C.; Oellers, Michel; Dekker, Andre; Gillies, Robert; Aerts, Hugo J.W.L.; Lambin, Philippe
2015-01-01
Background Maximum, mean and peak SUV of the primary tumor at baseline FDG-PET scans have often been found predictive for overall survival in non-small cell lung cancer (NSCLC) patients. In this study we further investigated the prognostic power of advanced metabolic metrics derived from Intensity-Volume Histograms (IVH) extracted from PET imaging. Methods A cohort of 220 NSCLC patients (mean age, 66.6 years; 149 men, 71 women), stages I-IIIB, treated with radiotherapy with curative intent were included (NCT00522639). Each patient underwent standardized pre-treatment CT-PET imaging. The primary GTV was delineated by an experienced radiation oncologist on CT-PET images. Common PET descriptors such as maximum, mean and peak SUV, and metabolic tumor volume (MTV) were quantified. Advanced descriptors of metabolic activity were quantified by IVH. These comprised 5 groups of features: Absolute and Relative Volume above Relative Intensity threshold (AVRI and RVRI), Absolute and Relative Volume above Absolute Intensity threshold (AVAI and RVAI), and Absolute Intensity above Relative Volume threshold (AIRV). MTV was derived from the IVH curves for volumes with SUV above 2.5, 3 and 4, and above 40% and 50% of maximum SUV. Univariable analysis using Cox Proportional Hazards Regression was performed for overall survival assessment. Results Relative volume above a higher SUV threshold (80%) was an independent predictor of OS (p = 0.05). None of the possible surrogates for MTV based on volumes above SUV of 3 and 4, or above 40% and 50% of maximum SUV, showed significant associations with OS (p(AVAI3) = 0.10, p(AVAI4) = 0.22, p(AVRI40%) = 0.15, p(AVRI50%) = 0.17). Maximum and peak SUV (r = 0.99) revealed no prognostic value for OS (p(maximum SUV) = 0.20, p(peak SUV) = 0.22). Conclusions New methods using more advanced imaging features extracted from PET were analyzed. The best prognostic value for OS of NSCLC patients was found for relative portions of the tumor above higher uptakes (80% of maximum SUV). PMID:24047338
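The intensity-volume-histogram descriptors named above can be illustrated with a short sketch. The definitions below are simplified assumptions on my part (voxel volume, thresholds and the toy SUV data are arbitrary), not the study's exact implementation:

```python
import numpy as np

def ivh_features(suv, abs_thr=2.5, rel_thr=0.80, voxel_vol_ml=0.02):
    """Toy intensity-volume-histogram descriptors for a tumour SUV array."""
    suv = np.asarray(suv, dtype=float)
    total_vol = suv.size * voxel_vol_ml
    avai = np.count_nonzero(suv > abs_thr) * voxel_vol_ml              # AVAI: absolute volume above an absolute SUV
    rvai = avai / total_vol                                            # RVAI: the same volume as a fraction of the GTV
    avri = np.count_nonzero(suv > rel_thr * suv.max()) * voxel_vol_ml  # AVRI: absolute volume above 80% of max SUV
    rvri = avri / total_vol                                            # RVRI: relative version of AVRI
    return {"AVAI": avai, "RVAI": rvai, "AVRI": avri, "RVRI": rvri}

print(ivh_features(np.random.default_rng(0).gamma(2.0, 2.0, size=5000)))
```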
A study of FM threshold extension techniques
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Loch, F. J.
1972-01-01
The characteristics of three postdetection threshold extension techniques are evaluated with respect to the ability of such techniques to improve the performance of a phase lock loop demodulator. These techniques include impulse-noise elimination, signal correlation for the detection of impulse noise, and delta modulation signal processing. Experimental results from signal to noise ratio data and bit error rate data indicate that a 2- to 3-decibel threshold extension is readily achievable by using the various techniques. This threshold improvement is in addition to the threshold extension that is usually achieved through the use of a phase lock loop demodulator.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. Using the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
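A minimal sketch of the soft-thresholding operation referred to above; the parameter value is arbitrary and this is not the paper's controlled parameter-selection rule:

```python
import numpy as np

def soft_threshold(coeffs, mu):
    """Shrink wavelet coefficients towards zero by mu (soft thresholding)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - mu, 0.0)

w = np.array([-3.2, -0.4, 0.05, 0.7, 2.9])
print(soft_threshold(w, mu=0.5))   # coefficients smaller than mu are set to zero
```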
Computational Modeling of Semiconductor Dynamics at Femtosecond Time Scales
NASA Technical Reports Server (NTRS)
Agrawal, Govind P.; Goorjian, Peter M.
1998-01-01
The main objective of the Joint-Research Interchange NCC2-5149 was to develop computer codes for accurate simulation of femtosecond pulse propagation in semiconductor lasers and semiconductor amplifiers [1]. The code should take into account all relevant processes such as the interband and intraband carrier relaxation mechanisms and the many-body effects arising from the Coulomb interaction among charge carriers [2]. This objective was fully accomplished. We made use of an algorithm previously developed at NASA Ames [3]-[5]. The new algorithm was tested on several problems of practical importance. One such problem was related to the amplification of femtosecond optical pulses in semiconductors. These results were presented at several international conferences over a period of three years. With the help of a postdoctoral fellow, we also investigated the origin of instabilities that can lead to the formation of femtosecond pulses in different kinds of lasers. We analyzed the occurrence of absolute instabilities in lasers that contain a dispersive host material with third-order nonlinearities. Starting from the Maxwell-Bloch equations, we derived general multimode equations to distinguish between convective and absolute instabilities. We found that both self-phase modulation and intensity-dependent absorption can dramatically affect the absolute stability of such lasers. In particular, the self-pulsing threshold (the so-called second laser threshold) can occur at a few times the first laser threshold even in good-cavity lasers for which no self-pulsing occurs in the absence of intensity-dependent absorption. These results were presented at an international conference and published in the form of two papers.
Measurement of the lowest dosage of phenobarbital that can produce drug discrimination in rats
Overton, Donald A.; Stanwood, Gregg D.; Patel, Bhavesh N.; Pragada, Sreenivasa R.; Gordon, M. Kathleen
2009-01-01
Rationale Accurate measurement of the threshold dosage of phenobarbital that can produce drug discrimination (DD) may improve our understanding of the mechanisms and properties of such discrimination. Objectives Compare three methods for determining the threshold dosage for phenobarbital (D) versus no drug (N) DD. Methods Rats learned a D versus N DD in 2-lever operant training chambers. A titration scheme was employed to increase or decrease dosage at the end of each 18-day block of sessions depending on whether the rat had achieved criterion accuracy during the sessions just completed. Three criterion rules were employed, all based on average percent drug lever responses during initial links of the last 6 D and 6 N sessions of a block. The criteria were: D%>66 and N%<33; D%>50 and N%<50; (D%-N%)>33. Two squads of rats were trained, one immediately after the other. Results All rats discriminated drug versus no drug. In most rats, dosage decreased to low levels and then oscillated near the minimum level required to maintain criterion performance. The lowest discriminated dosage significantly differed under the three criterion rules. The squad that was trained 2nd may have benefited by partially duplicating the lever choices of the previous squad. Conclusions The lowest discriminated dosage is influenced by the criterion of discriminative control that is employed, and is higher than the absolute threshold at which discrimination entirely disappears. Threshold estimations closer to absolute threshold can be obtained when criteria are employed that are permissive, and that allow rats to maintain lever preferences. PMID:19082992
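The three criterion rules quoted above can be written out directly. A sketch, where the inputs are the average percent drug-lever responses described in the abstract (function and rule names are my own):

```python
def meets_criterion(d_pct, n_pct, rule):
    """Check the three discrimination criteria from the abstract."""
    if rule == "strict":        # D% > 66 and N% < 33
        return d_pct > 66 and n_pct < 33
    if rule == "majority":      # D% > 50 and N% < 50
        return d_pct > 50 and n_pct < 50
    if rule == "difference":    # (D% - N%) > 33
        return (d_pct - n_pct) > 33
    raise ValueError("unknown rule")

print([meets_criterion(60, 35, r) for r in ("strict", "majority", "difference")])
```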
Changing Conceptions of Activation Energy.
ERIC Educational Resources Information Center
Pacey, Philip D.
1981-01-01
Provides background material which relates to the concept of activation energy, fundamental in the study of chemical kinetics. Compares the related concepts of the Arrhenius activation energy, the activation energy at absolute zero, the enthalpy of activation, and the threshold energy. (CS)
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute per cent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (p<0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute per cent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
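Given a total improvement score computed as described (range 0-100), the adult DM/PM improvement categories follow directly from the stated thresholds; a small sketch:

```python
def improvement_category(total_score):
    """Map a 0-100 total improvement score to the adult DM/PM categories."""
    if total_score >= 60:
        return "major improvement"
    if total_score >= 40:
        return "moderate improvement"
    if total_score >= 20:
        return "minimal improvement"
    return "no improvement"

print([improvement_category(s) for s in (15, 25, 45, 70)])
```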
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarolli, Jay G.; Naes, Benjamin E.; Butler, Lamar
A fully convolutional neural network (FCN) was developed to supersede automatic or manual thresholding algorithms used for tabulating SIMS particle search data. The FCN was designed to perform a binary classification of pixels in each image belonging to a particle or not, thereby effectively removing background signal without manually or automatically determining an intensity threshold. Using 8,000 images from 28 different particle screening analyses, the FCN was trained to accurately predict pixels belonging to a particle with near 99% accuracy. Background-eliminated images were then segmented using a watershed technique in order to determine isotopic ratios of particles. A comparison of the isotopic distributions of an independent data set segmented using the neural network, compared to a commercially available automated particle measurement (APM) program developed by CAMECA, highlighted the necessity for effective background removal to ensure that resulting particle identification is not only accurate, but preserves valuable signal that could be lost due to improper segmentation. The FCN approach improves the robustness of current state-of-the-art particle searching algorithms by reducing user input biases, resulting in an improved absolute signal per particle and decreased uncertainty of the determined isotope ratios.
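A rough sketch of the tabulation step that follows background removal: given a background-free binary mask (here a hand-made placeholder rather than FCN output, with connected-component labelling standing in for the watershed step), particles are labelled and per-particle isotope ratios computed. All names and the toy data are assumptions:

```python
import numpy as np
from scipy import ndimage

def particle_ratios(mask, img_a, img_b):
    """Label particles in a background-free mask and compute per-particle A/B count ratios."""
    labels, n = ndimage.label(mask)               # connected-component labelling
    ratios = []
    for i in range(1, n + 1):
        a = img_a[labels == i].sum()
        b = img_b[labels == i].sum()
        ratios.append(a / b if b > 0 else np.nan)
    return ratios

mask = np.zeros((8, 8), dtype=bool); mask[1:3, 1:3] = True; mask[5:7, 5:7] = True
a = np.random.default_rng(1).poisson(50, mask.shape) * mask
b = np.random.default_rng(2).poisson(10, mask.shape) * mask
print(particle_ratios(mask, a, b))
```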
Probability of an Abnormal Screening PSA Result Based on Age, Race, and PSA Threshold
Espaldon, Roxanne; Kirby, Katharine A.; Fung, Kathy Z.; Hoffman, Richard M.; Powell, Adam A.; Freedland, Stephen J.; Walter, Louise C.
2014-01-01
Objective To determine the distribution of screening PSA values in older men and how different PSA thresholds affect the proportion of white, black, and Latino men who would have an abnormal screening result across advancing age groups. Methods We used linked national VA and Medicare data to determine the value of the first screening PSA test (ng/mL) of 327,284 men age 65+ who underwent PSA screening in the VA healthcare system in 2003. We calculated the proportion of men with an abnormal PSA result based on age, race, and common PSA thresholds. Results Among men age 65+, 8.4% had a PSA >4.0ng/mL. The percentage of men with a PSA >4.0ng/mL increased with age and was highest in black men (13.8%) versus white (8.0%) or Latino men (10.0%) (P<0.001). Combining age and race, the probability of having a PSA >4.0ng/mL ranged from 5.1% of Latino men age 65–69 to 27.4% of black men age 85+. Raising the PSA threshold from >4.0ng/mL to >10.0ng/mL, reclassified the greatest percentage of black men age 85+ (18.3% absolute change) and the lowest percentage of Latino men age 65–69 (4.8% absolute change) as being under the biopsy threshold (P<0.001). Conclusions Age, race, and PSA threshold together affect the pre-test probability of an abnormal screening PSA result. Based on screening PSA distributions, stopping screening among men whose PSA < 3ng/ml means over 80% of white and Latino men age 70+ would stop further screening, and increasing the biopsy threshold to >10ng/ml has the greatest effect on reducing the number of older black men who will face biopsy decisions after screening. PMID:24439009
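The core tabulation above amounts to counting, within each age/race stratum, the fraction of PSA values exceeding a threshold. A minimal pandas sketch with made-up rows (illustrative only, not the VA/Medicare data):

```python
import pandas as pd

df = pd.DataFrame({
    "race": ["white", "black", "latino", "black", "white", "latino"],
    "age_group": ["65-69", "85+", "65-69", "70-74", "85+", "75-79"],
    "psa": [2.1, 11.3, 3.9, 6.7, 4.5, 1.2],   # ng/mL
})

for thr in (4.0, 10.0):
    frac = df.groupby(["race", "age_group"])["psa"].apply(lambda s: (s > thr).mean())
    print(f"Proportion with PSA > {thr} ng/mL:\n{frac}\n")
```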
NASA Astrophysics Data System (ADS)
Taki, Majid; San Miguel, Maxi; Santagiustina, Marco
2000-02-01
Degenerate optical parametric oscillators can exhibit both uniformly translating fronts and nonuniformly translating envelope fronts under the walk-off effect. The nonlinear dynamics near threshold is shown to be described by a real convective Swift-Hohenberg equation, which provides the main characteristics of the walk-off effect on pattern selection. The predictions of the selected wave vector and the absolute instability threshold are in very good quantitative agreement with numerical solutions found from the equations describing the optical parametric oscillator.
Mandelbaum, Tal; Lee, Joon; Scott, Daniel J; Mark, Roger G; Malhotra, Atul; Howell, Michael D; Talmor, Daniel
2013-03-01
The observation periods and thresholds of serum creatinine and urine output defined in the Acute Kidney Injury Network (AKIN) criteria were not empirically derived. By continuously varying creatinine/urine output thresholds as well as the observation period, we sought to investigate the empirical relationships among creatinine, oliguria, in-hospital mortality, and receipt of renal replacement therapy (RRT). Using a high-resolution database (Multiparameter Intelligent Monitoring in Intensive Care II), we extracted data from 17,227 critically ill patients with an in-hospital mortality rate of 10.9%. Of these, 14,526 patients had urine output measurements. Various combinations of creatinine/urine output thresholds and observation periods were investigated by building multivariate logistic regression models for in-hospital mortality and RRT predictions. For creatinine, both absolute and percentage increases were analyzed. To visualize the dependence of adjusted mortality and RRT rate on creatinine, the urine output, and the observation period, we generated contour plots. Mortality risk was high when the absolute creatinine increase was high regardless of the observation period, when the percentage creatinine increase was high and the observation period was long, and when oliguria was sustained for a long period of time. Similar contour patterns emerged for RRT. The variability in predictive accuracy was small across different combinations of thresholds and observation periods. The contour plots presented in this article complement the AKIN definition. A multi-center study should confirm the universal validity of the results presented in this article.
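Varying the creatinine threshold and the observation window can be illustrated with a simple flag over a hypothetical creatinine time series. This is only a sketch of the threshold/window idea; the study itself fitted multivariate logistic regression models rather than a rule like this:

```python
import numpy as np

def aki_flag(times_h, creat, window_h, abs_increase):
    """True if creatinine rises by >= abs_increase (mg/dL) within any window_h-hour window."""
    times_h, creat = np.asarray(times_h, float), np.asarray(creat, float)
    for i, t0 in enumerate(times_h):
        in_win = (times_h >= t0) & (times_h <= t0 + window_h)
        if creat[in_win].max() - creat[i] >= abs_increase:
            return True
    return False

t = [0, 6, 12, 24, 36, 48]          # hours
cr = [0.9, 1.0, 1.1, 1.4, 1.3, 1.2]  # mg/dL (made-up values)
print([aki_flag(t, cr, w, 0.3) for w in (24, 48)])
```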
NASA Technical Reports Server (NTRS)
James, G. K.; Slevin, J. A.; Shemansky, D. E.; McConkey, J. W.; Bray, I.; Dziczek, D.; Kanik, I.; Ajello, J. M.
1997-01-01
The optical excitation function of prompt Lyman-Alpha radiation, produced by electron impact on atomic hydrogen, has been measured over the extended energy range from threshold to 1.8 keV. Measurements were obtained in a crossed-beams experiment using both magnetically confined and electrostatically focused electrons in collision with atomic hydrogen produced by an intense discharge source. A vacuum-ultraviolet monochromator system was used to measure the emitted Lyman-Alpha radiation. The absolute H(1s-2p) electron impact excitation cross section was obtained from the experimental optical excitation function by normalizing to the accepted optical oscillator strength, with corrections for polarization and cascade. Statistical and known systematic uncertainties in our data range from ±4% near threshold to ±2% at 1.8 keV. Multistate coupling affecting the shape of the excitation function up to 1 keV impact energy is apparent in both the present experimental data and present theoretical results obtained with convergent close-coupling (CCC) theory. This shape function effect leads to an uncertainty in absolute cross sections at the 10% level in the analysis of the experimental data. The derived optimized absolute cross sections are within 7% of the CCC calculations over the 14 eV-1.8 keV range. The present CCC calculations converge on the Bethe-Fano profile for H(1s-2p) excitation at high energy. For this reason agreement with the CCC values to within 3% is achieved in a nonoptimal normalization of the experimental data to the Bethe-Fano profile. The fundamental H(1s-2p) electron impact cross section is thereby determined to an unprecedented accuracy over the 14 eV - 1.8 keV energy range.
Near-threshold J/ψ-meson photoproduction on nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paryev, E. Ya.; Kiselev, Yu. T., E-mail: yurikis@itep.ru
On the basis of the first-collision model that relies on the nuclear spectral function and which includes incoherent processes involving charmonium production in proton–nucleon collisions, the photoproduction of J/ψ mesons on nuclei is considered at energies close to the threshold for their production on a nucleon. The absorption of final J/ψ mesons, their formation length, and the binding and Fermi motion of target nucleons are taken into account in this model along with the effect of the nuclear potential on these processes. The A dependences of the absolute and relative charmonium yields are calculated together with absolute and relative excitation functions under various assumptions on the magnitude of the cross section for J/ψN absorption, the J/ψ-meson formation length, and their in-medium modification. It is shown that, at energies above the threshold, these features are virtually independent of the formation length and the change in the J/ψ-meson mass in nuclear matter but are rather highly sensitive to the cross section for J/ψN interaction. The calculations performed in the present study can be used to determine the unknown cross section for J/ψ-meson absorption in nuclei from a comparison of their results with data expected from experiments in Hall C of the CEBAF (USA) facility upgraded to the energy of 12 GeV. It is also shown that the absolute and relative excitation functions for J/ψ mesons in photon–nucleus reactions at subthreshold energies are sensitive to the change in the meson mass and, hence, carry information about the properties of charmonium in nuclear matter.
Müller, Alfred; Bernhardt, Dietrich; Borovik, Alexander; ...
2017-02-17
Single, double, and triple photoionization of Ne+ ions by single photons have been investigated at the synchrotron radiation source PETRA III in Hamburg, Germany. Absolute cross-sections were measured by employing the photon-ion merged-beams technique. Photon energies were between about 840 and 930 eV, covering the range from the lowest-energy resonances associated with the excitation of one single K-shell electron up to double excitations involving one K- and one L-shell electron, well beyond the K-shell ionization threshold. Also, photoionization of neutral Ne was investigated just below the K edge. The chosen photon energy bandwidths were between 32 and 500 meV, facilitating the determination of natural line widths. The uncertainty of the energy scale is estimated to be 0.2 eV. For comparison with existing theoretical calculations, astrophysically relevant photoabsorption cross-sections were inferred by summing the measured partial ionization channels. Discussion of the observed resonances in the different final ionization channels reveals the presence of complex Auger-decay mechanisms. The ejection of three electrons from the lowest K-shell-excited Ne+ (1s2s 2p6 2S1/2) level, for example, requires cooperative interaction of at least four electrons.
Determination of quality factors by microdosimetry
NASA Astrophysics Data System (ADS)
Al-Affan, I. A. M.; Watt, D. E.
1987-03-01
The application of microdose parameters for the specification of a revised scale of quality factors which would be applicable at low doses and dose rates is examined in terms of an original proposal by Rossi. Two important modifications are suggested to enable an absolute scale of quality factors to be constructed. Allowance should be made for the dependence of the saturation threshold of lineal energy on the type of heavy charged particle. Also, an artificial saturation threshold should be introduced for electron tracks as a means of modifying the measurements made in the microdosimeter to the more realistic site sizes of nanometer dimensions. The proposed absolute scale of quality factors nicely encompasses the high RBEs of around 3 observed at low doses for tritium β rays and is consistent with the recent recommendation of the ICRP that the quality factor for fast neutrons be increased by a factor of two, assuming that there is no biological repair for the reference radiation.
Determination of the electric field strength of filamentary DBDs by CARS-based four-wave mixing
NASA Astrophysics Data System (ADS)
Böhm, P.; Kettlitz, M.; Brandenburg, R.; Höft, H.; Czarnetzki, U.
2016-10-01
It is demonstrated that a four-wave mixing technique based on coherent anti-Stokes Raman spectroscopy (CARS) can determine the electric field strength of a pulsed-driven filamentary dielectric barrier discharge (DBD) of 1 mm gap, using hydrogen as a tracer medium in nitrogen at atmospheric pressure. The measurements are presented for a hydrogen admixture of 10%, but even 5% H2 admixture delivers sufficient infrared signals. The lasers do not affect the discharge by photoionization or by other radiation-induced processes. The absolute values of the electric field strength can be determined by the calibration of the CARS setup with high voltage amplitudes below the ignition threshold of the arrangement. This procedure also enables the determination of the applied breakdown voltage. The alteration of the electric field is observed during the internal polarity reversal and the breakdown process. One advantage of the CARS technique over emission-based methods is that it can be used independently of emission, e.g. in the pre-phase and in between two consecutive discharges, where no emission occurs at all.
Reaction rates of the 113In(γ,n)112mIn and 115In(γ,n)114mIn
NASA Astrophysics Data System (ADS)
Skakun, Ye; Semisalov, I.; Kasilov, V.; Popov, V.; Kochetov, S.; Maslyuk, V.; Mazur, V.; Parlag, O.; Gajnish, I.
2016-01-01
The integral yields of the 113In(γ,n)112mIn (Jπ=9/2+→Jπ=4+) and 115In(γ,n)114mIn (Jπ=9/2+→Jπ=5+) photonuclear reactions were measured in the bremsstrahlung end-point energy range from the respective thresholds up to 14 MeV by a conventional activation/decay technique using the 197Au(γ,n)196Au reaction cross sections as the standard for the absolute photon intensity determination. Metallic indium samples of natural and enriched compositions were irradiated by the bremsstrahlung beams from thin tantalum converters of the electron linear accelerator of NSC KIPT (Kharkiv) and the microtron of IEP (Uzhhorod). The integral reaction yields were determined from the activities of the product nuclei measured by the high-resolution γ-ray spectrometry technique with Ge(Li) and HPGe detectors. The reaction rates for the Planck spectrum of a thermal photon bath were derived for the ground-state target nuclei and compared to the predictions of the statistical model of nuclear reactions.
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
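A toy version of the "reverse approach" described above: assume a known threshold fluence and a damage-probability curve, then generate artificial damaged/non-damaged site data on which a data-reduction algorithm could be run. The logistic shape of the probability curve and all parameter values here are my own assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

def damage_probability(fluence, f_threshold, width):
    """Assumed smooth damage-probability curve rising from ~0 below threshold to ~1 above."""
    return 1.0 / (1.0 + np.exp(-(fluence - f_threshold) / width))

# Simulate S-on-1 sites irradiated at various peak fluences (J/cm^2).
fluences = rng.uniform(1.0, 6.0, size=200)
damaged = rng.random(200) < damage_probability(fluences, f_threshold=3.0, width=0.4)
print(f"{damaged.sum()} of {damaged.size} simulated sites damaged")
```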
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dance, M; Chera, B; Falchook, A
2015-06-15
Purpose: Validate the consistency of a gradient-based segmentation tool to facilitate accurate delineation of PET/CT-based GTVs in head and neck cancers by comparing against hybrid PET/MR-derived GTV contours. Materials and Methods: A total of 18 head and neck target volumes (10 primary and 8 nodal) were retrospectively contoured using a gradient-based segmentation tool by two observers. Each observer independently contoured each target five times. Inter-observer variability was evaluated via absolute percent differences. Intra-observer variability was examined by percentage uncertainty. All target volumes were also contoured using the SUV percent threshold method. The thresholds were explored case by case so its derived volume matched the gradient-based volume. Dice similarity coefficients (DSC) were calculated to determine overlap of PET/CT GTVs and PET/MR GTVs. Results: Levene's test showed there was no statistically significant difference between the variances of the observers' gradient-derived contours. However, the absolute difference between the observers' volumes was 10.83%, with a range from 0.39% up to 42.89%. PET-avid regions with qualitatively non-uniform shapes and intensity levels had a higher absolute percent difference, near 25%, while regions with uniform shapes and intensity levels had an absolute percent difference of 2% between observers. The average percentage uncertainty between observers was 4.83% and 7%. As the volume of the gradient-derived contours increased, the SUV threshold percent needed to match the volume decreased. Dice coefficients showed good agreement of the PET/CT and PET/MR GTVs, with an average DSC value across all volumes of 0.69. Conclusion: Gradient-based segmentation of PET volumes showed good consistency in general but can vary considerably for non-uniform target shapes and intensity levels. PET/CT-derived GTV contours stemming from the gradient-based tool show good agreement with the anatomically and metabolically more accurate PET/MR-derived GTV contours, but tumor delineation accuracy can be further improved with the use of PET/MR.
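The Dice similarity coefficient used above has a simple definition for binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
print(round(dice(a, b), 3))
```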
TES-Based Light Detectors for the CRESST Direct Dark Matter Search
NASA Astrophysics Data System (ADS)
Rothe, J.; Angloher, G.; Bauer, P.; Bento, A.; Bucci, C.; Canonica, L.; D'Addabbo, A.; Defay, X.; Erb, A.; Feilitzsch, F. v.; Ferreiro Iachellini, N.; Gorla, P.; Gütlein, A.; Hauff, D.; Jochum, J.; Kiefer, M.; Kluck, H.; Kraus, H.; Lanfranchi, J.-C.; Langenkämper, A.; Loebell, J.; Mancuso, M.; Mondragon, E.; Münster, A.; Pagliarone, C.; Petricca, F.; Potzel, W.; Pröbst, F.; Puig, R.; Reindl, F.; Schäffner, K.; Schieck, J.; Schipperges, V.; Schönert, S.; Seidel, W.; Stahlberg, M.; Stodolsky, L.; Strandhagen, C.; Strauss, R.; Tanzke, A.; Trinh Thi, H. H.; Türkoğlu, C.; Ulrich, A.; Usherov, I.; Wawoczny, S.; Willers, M.; Wüstrich, M.
2018-05-01
The CRESST experiment uses cryogenic detectors based on transition-edge sensors to search for dark matter interactions. Each detector module consists of a scintillating CaWO_4 crystal and a silicon-on-sapphire (SOS) light detector which operate in coincidence (phonon-light technique). The 40-mm-diameter SOS disks (2 g mass) used in the data taking campaign of CRESST-II Phase 2 (2014-2016) reached absolute baseline resolutions of σ = 4-7 eV. This is the best performance reported for cryogenic light detectors of this size. Newly developed silicon beaker light detectors (4 cm height, 4 cm diameter, 6 g mass), which cover a large fraction of the target crystal surface, have achieved a baseline resolution of σ = 5.8 eV. First results of further improved light detectors developed for the ongoing low-threshold CRESST-III experiment are presented.
[A peak recognition algorithm designed for chromatographic peaks of transformer oil].
Ou, Linjun; Cao, Jian
2014-09-01
In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving-average iterative method and normalization techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of the signal curves and square-wave curves to determine the optimal value of the normalized peak identification parameters, combined with the absolute peak retention times and the peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width or peak shape changes. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
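A minimal sketch of the moving-average smoothing plus peak picking that the improved algorithm builds on; the iteration and normalization details of the published method are not reproduced here, and the synthetic signal is an assumption:

```python
import numpy as np

def moving_average(x, window=5):
    """Simple centred moving average used to smooth a chromatographic signal."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def find_peaks_simple(y):
    """Indices where the smoothed signal is a local maximum."""
    return [i for i in range(1, len(y) - 1) if y[i - 1] < y[i] >= y[i + 1]]

t = np.linspace(0, 10, 500)                                   # retention time axis
signal = np.exp(-((t - 3) ** 2) / 0.05) + 0.6 * np.exp(-((t - 7) ** 2) / 0.05)
noisy = signal + np.random.default_rng(0).normal(0, 0.02, t.size)
smooth = moving_average(noisy)
print([round(t[i], 2) for i in find_peaks_simple(smooth)][:5])
```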
Inner-shell photodetachment of transition metal negative ions
NASA Astrophysics Data System (ADS)
Dumitriu, Ileana
This thesis focuses on the study of inner-shell photodetachment of transition metal negative ions, specifically Fe- and Ru-. Experimental investigations have been performed with the aim of gaining new insights into the physics of negative atomic ions and providing valuable absolute cross section data for astrophysics. The experiments were performed using the X-ray radiation from the Advanced Light Source, Lawrence Berkeley National Laboratory, and the merged-beam technique for photoion spectroscopy. Negative ions are a special class of atomic systems very different from neutral atoms and positive ions. The fundamental physics of the interaction of transition metal negative ions with photons is interesting but difficult to analyze in detail because the angular momentum coupling generates a large number of possible terms resulting from the open d shell. Our work reports on the first inner-shell photodetachment studies and absolute cross section measurements for Fe- and Ru-. In the case of Fe-, an important astrophysically abundant element, the inner-shell photodetachment cross section was obtained by measuring the Fe+ and Fe2+ ion production over the photon energy range of 48-72 eV. The absolute cross sections for the production of Fe+ and Fe2+ were measured at four photon energies. Strong shape resonances due to the 3p→3d photoexcitation were measured above the 3p detachment threshold. The production of Ru+, Ru2+, and Ru3+ from Ru- was measured over the 30-90 eV photon energy range. The absolute photodetachment cross sections of Ru- ([Kr]4d^7 5s^2) leading to Ru+, Ru2+, and Ru3+ ion production were measured at three photon energies. Resonance effects were observed due to interference between transitions of the 4p electrons to the quasi-bound 4p^5 4d^8 5s^2 states and the 4d→εf continuum. The role of many-particle effects, intershell interaction, and polarization seems much more significant in Ru- than in Fe- photodetachment.
Le Prell, Colleen G; Brungart, Douglas S
2016-09-01
In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.
Audibility threshold spectrum for prominent discrete tone analysis
NASA Astrophysics Data System (ADS)
Kimizuka, Ikuo
2005-09-01
To evaluate the annoyance of tonal components in noise emissions, ANSI S1.13 (for general purposes) and ISO 7779/ECMA-74 (dedicated to IT equipment) define two similar metrics: the tone-to-noise ratio (TNR) and the prominence ratio (PR). Using one or both of these parameters, a noise with a sharp spectral peak is analyzed by high-resolution FFT and classified as prominent when it exceeds a criterion curve. According to the present procedures, however, this designation depends only on the spectral shape. To resolve this problem, the author proposes a threshold spectrum of human ear audibility. The spectrum is based on the reference threshold of hearing defined in ISO 389-7 and/or ISO 226. With this spectrum, one can objectively decide whether the noise peak in question is audible or not, by a simple comparison of the peak amplitude of the noise emission with the corresponding value of the threshold. Applying the threshold, one can avoid overkill or unnecessary action on a peak whose absolute amplitude is too low to be audible.
VUV photoionization cross sections of HO2, H2O2, and H2CO.
Dodson, Leah G; Shen, Linhan; Savee, John D; Eddingsaas, Nathan C; Welz, Oliver; Taatjes, Craig A; Osborn, David L; Sander, Stanley P; Okumura, Mitchio
2015-02-26
The absolute vacuum ultraviolet (VUV) photoionization spectra of the hydroperoxyl radical (HO2), hydrogen peroxide (H2O2), and formaldehyde (H2CO) have been measured from their first ionization thresholds to 12.008 eV. HO2, H2O2, and H2CO were generated from the oxidation of methanol initiated by pulsed-laser-photolysis of Cl2 in a low-pressure slow flow reactor. Reactants, intermediates, and products were detected by time-resolved multiplexed synchrotron photoionization mass spectrometry. Absolute concentrations were obtained from the time-dependent photoion signals by modeling the kinetics of the methanol oxidation chemistry. Photoionization cross sections were determined at several photon energies relative to the cross section of methanol, which was in turn determined relative to that of propene. These measurements were used to place relative photoionization spectra of HO2, H2O2, and H2CO on an absolute scale, resulting in absolute photoionization spectra.
LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI
Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A
2016-01-01
Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
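The idea of replacing the thresholding stage with a known-support (location) constraint can be sketched with a simplified iterative scheme. This is an illustrative projected-gradient/IST-style loop under assumed names and toy data, not the full LCAMP message-passing recursion:

```python
import numpy as np

def location_constrained_ist(A, y, support, n_iter=100, step=None):
    """Recover x from y = A x assuming its nonzero locations (support) are known."""
    m, n = A.shape
    x = np.zeros(n)
    step = step or 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size from the spectral norm
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)             # gradient step on the data-fidelity term
        mask = np.zeros(n); mask[support] = 1.0
        x = x * mask                                 # location constraint instead of soft thresholding
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[5, 17, 60]] = [1.0, -2.0, 0.5]
x_hat = location_constrained_ist(A, A @ x_true, support=[5, 17, 60])
print(np.round(x_hat[[5, 17, 60]], 2))
```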
Fundus-controlled two-color dark adaptometry with the Microperimeter MP1.
Bowl, Wadim; Stieger, Knut; Lorenz, Birgit
2015-06-01
The aim of this study was to provide fundus-controlled two-color adaptometry with an existing device. A quick and easy approach extends the application possibilities of a commercial fundus-controlled perimeter. An external filter holder was placed in front of the objective lens of the MP1 (Nidek, Italy) and fitted with filters to modify the background, stimulus intensity, and color. Prior to dark adaptometry, the subject's visual sensitivity profile was measured for red and blue stimuli to determine whether rods or cones or both mediated the absolute threshold. After light adaptation, 20 healthy subjects were investigated with a pattern covering six spots at the posterior pole of the retina for up to 45 min of dark adaptation. Thresholds were determined using a 200 ms red Goldmann IV and a blue Goldmann II stimulus. The pre-test sensitivity showed a typical distribution of values along the meridian, with high peripheral light increment sensitivity (LIS) and low central LIS for rods and the reverse for cones. After bleach, threshold recovery had a classic biphasic shape. The absolute threshold was reached after approximately 10 min for the red and 15 min for the blue stimulus. Two-color fundus-controlled adaptometry with a commercial MP1 without internal changes to the device provides a quick and easy examination of rod and cone function during dark adaptation at defined retinal loci of the posterior pole. This innovative method will be helpful to measure rod vs. cone function at known loci of the posterior pole in early stages of retinal degenerations.
Murray, Louise; Mason, Joshua; Henry, Ann M; Hoskin, Peter; Siebert, Frank-Andre; Venselaar, Jack; Bownes, Peter
2016-08-01
To estimate the risks of radiation-induced rectal and bladder cancers following low dose rate (LDR) and high dose rate (HDR) brachytherapy as monotherapy for localised prostate cancer and compare them to external beam radiotherapy techniques. LDR and HDR brachytherapy monotherapy plans were generated for three prostate CT datasets. Second cancer risks were assessed using Schneider's concept of organ equivalent dose. LDR risks were assessed according to a mechanistic model and a bell-shaped model. HDR risks were assessed according to a bell-shaped model. Relative risks and excess absolute risks were estimated and compared to external beam techniques. Excess absolute risks of second rectal or bladder cancer were low for both LDR (irrespective of the model used for calculation) and HDR techniques. Average excess absolute risks of second rectal and bladder cancer for LDR brachytherapy were 0.71 and 0.84 per 10,000 person-years (PY), respectively, according to the mechanistic model, and 0.47 and 0.78 per 10,000 PY, respectively, according to the bell-shaped model. For HDR, the average excess absolute risks for second rectal and bladder cancers were 0.74 and 1.62 per 10,000 PY, respectively. The absolute differences between techniques were very low and clinically irrelevant. Compared to external beam prostate radiotherapy techniques, LDR and HDR brachytherapy resulted in the lowest risks of second rectal and bladder cancer. This study shows that both LDR and HDR brachytherapy monotherapy result in low estimated risks of radiation-induced rectal and bladder cancer. LDR resulted in lower bladder cancer risks than HDR, and lower or similar risks of rectal cancer. In absolute terms these differences between techniques were very small. Compared to external beam techniques, second rectal and bladder cancer risks were lowest for brachytherapy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Absolute empirical rate coefficient for the excitation of the 117.6 nm line in C III
NASA Astrophysics Data System (ADS)
Gardner, L. D.; Daw, A. N.; Janzen, P. H.; Atkins, N.; Kohl, J. L.
2005-05-01
We have measured the absolute cross sections for electron impact excitation (EIE) of C2+ (2s2p 3P° - 2p2 3P) for energies from below threshold to 17 eV above and derived EIE rate coefficients required for astrophysical applications. The uncertainty in the rate coefficient at a typical solar temperature of formation of C2+ is less than ± 6 %. Ions are produced in a 5 GHz Electron Cyclotron Resonance (ECR) ion source, extracted, formed into a beam, and transported to a collision chamber where they collide with electrons from an electron beam inclined at 45 degrees. The beams are modulated and the radiation from the decay of the excited ions at λ 117.6 nm is detected synchronously using an absolutely calibrated optical system that subtends slightly over π steradians. The fractional population of the C2+ metastable state in the incident ion beam has been determined experimentally to be 0.42 ± 0.03 (1.65 σ). At the reported ± 15 % total experimental uncertainty level (1.65 σ), the measured structure and absolute scale of the cross section are in fairly good agreement with 6-term close-coupling R-matrix calculations and 90-term R-matrix with pseudo-states calculations, although some minor differences are seen just above threshold. As density-sensitive line intensity ratios vary by only about a factor of 5 as the density changes by nearly a factor of 100, even a 30 % uncertainty in the excitation rate can lead to a factor of 3 error in density. This work is supported by NASA Supporting Research and Technology grants NAG5- 9516 and NAG5-12863 in Solar and Heliospheric Physics and by the Smithsonian Astrophysical Observatory.
Yang, Shuai; Liu, Ying
2018-08-01
Liquid crystal nematic elastomers are smart anisotropic, viscoelastic solids that simultaneously combine the properties of rubber and liquid crystals and are thermally sensitive. In this paper, the wave dispersion in a liquid crystal nematic elastomer porous phononic crystal subjected to an external thermal stimulus is theoretically investigated. First, an energy function is proposed to determine thermo-induced deformation in nematic elastomer periodic structures. Based on this function, thermo-induced band variation in liquid crystal nematic elastomer porous phononic crystals is investigated in detail. The results show that when the liquid crystal elastomer changes from the nematic state to the isotropic state due to a variation of the temperature, the absolute band gaps at different bands are opened or closed. There exists a threshold temperature above which the absolute band gaps are opened or closed. Larger porosity favors the opening of the absolute band gaps. The deviation of the director from the structural symmetry axis is advantageous for absolute band gap opening in the nematic state, while it constrains absolute band gap opening in the isotropic state. The combined effect of temperature and director orientation provides an added degree of freedom in the intelligent tuning of the absolute band gaps in phononic crystals. Copyright © 2018 Elsevier B.V. All rights reserved.
Survey of management of acute, traumatic compartment syndrome of the leg in Australia.
Wall, Christopher J; Richardson, Martin D; Lowe, Adrian J; Brand, Caroline; Lynch, Joan; de Steiger, Richard N
2007-09-01
Acute compartment syndrome is a serious and not uncommon complication of limb trauma. The condition is a surgical emergency and is associated with significant morbidity if not diagnosed promptly and treated effectively. Despite the urgency of effective management to minimize the risk of adverse outcomes, there is currently little consensus in the published reports as to what constitutes best practice in the management of acute limb compartment syndrome. A structured survey was sent to all currently practising orthopaedic surgeons and accredited orthopaedic registrars in Australia to assess their current practice in the management of acute, traumatic compartment syndrome of the leg. Questions were related to key decision nodes in the management process, as identified in a literature review. These included identification of patients at high risk, diagnosis of the condition in alert and unconscious patients, optimal timeframe and technique for carrying out a fasciotomy and management of fasciotomy wounds. A total of 264 valid responses were received, a response rate of 29% of all eligible respondents. The results indicated considerable variation in management of acute compartment syndrome of the leg, in particular in the utilization of compartment pressure measurement and the appropriate pressure threshold for fasciotomy. Of the 78% of respondents who regularly measured compartment pressure, 33% used an absolute pressure threshold, 28% used a differential pressure threshold and 39% took both into consideration. There is variation in the management of acute, traumatic compartment syndrome of the leg in Australia. The development of evidence-based clinical practice guidelines may be beneficial.
NASA Astrophysics Data System (ADS)
Bell, L. R.; Dowling, J. A.; Pogson, E. M.; Metcalfe, P.; Holloway, L.
2017-01-01
Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect, but fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each category in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes.
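The merge-and-threshold step for building a consensus CTV from multiple observers can be sketched as voxel-wise voting. The atlas registration itself is not shown, and the voting fraction and toy masks below are assumptions rather than the study's settings:

```python
import numpy as np

def consensus_mask(observer_masks, vote_fraction=0.5):
    """Voxel-wise consensus: keep voxels delineated by at least vote_fraction of observers."""
    stack = np.stack([np.asarray(m, bool) for m in observer_masks])
    return stack.mean(axis=0) >= vote_fraction

rng = np.random.default_rng(3)
observers = [rng.random((20, 20)) > 0.4 for _ in range(8)]   # eight toy observer masks
print(consensus_mask(observers).sum(), "voxels in the consensus volume")
```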
A Special Application of Absolute Value Techniques in Authentic Problem Solving
ERIC Educational Resources Information Center
Stupel, Moshe
2013-01-01
There are at least five different equivalent definitions of the absolute value concept. In instances where the task is an equation or inequality with only one or two absolute value expressions, it is a worthy educational experience for learners to solve the task using each one of the definitions. On the other hand, if more than two absolute value…
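For reference, a few of the standard equivalent formulations of the absolute value alluded to above (general mathematical facts, not a list taken from the cited article):

```latex
|x| \;=\;
\begin{cases} x, & x \ge 0 \\ -x, & x < 0 \end{cases}
\;=\; \max(x,\,-x)
\;=\; \sqrt{x^{2}}
\;=\; x\,\operatorname{sgn}(x).
```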
NASA Astrophysics Data System (ADS)
Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.
2008-03-01
Segmentation of lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold-based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low-dose CT scans. The purpose of this work is to perform segmentation of the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and exploits the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. First, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is then selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selection of the best-fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF), and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.
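The iterative threshold search described above can be caricatured as follows: sweep candidate HU thresholds, segment, and keep the threshold whose lung boundary deviates least from a reference curve. The crude per-column boundary and the mean-deviation metric below are stand-ins for the published rib-curvature polynomial fit and Naive Bayes model, and the toy CT slice is random data:

```python
import numpy as np

def pick_threshold(ct_slice, rib_poly, candidate_hus=range(-600, -299, 25)):
    """Choose the HU threshold whose segmented lung boundary best matches the rib curve."""
    best_hu, best_dev = None, np.inf
    for hu in candidate_hus:
        lung_mask = ct_slice < hu                    # simple thresholding step
        boundary_y = lung_mask.argmax(axis=0)        # crude 'top of lung' row index per column
        x = np.arange(ct_slice.shape[1])
        deviation = np.abs(boundary_y - np.polyval(rib_poly, x)).mean()
        if deviation < best_dev:
            best_hu, best_dev = hu, deviation
    return best_hu, best_dev

ct = np.random.default_rng(0).normal(-300, 200, size=(64, 64))
print(pick_threshold(ct, rib_poly=[0.0, 0.0, 5.0]))
```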
NASA Astrophysics Data System (ADS)
Plane, John M. C.; Saltzman, Eric S.
1987-10-01
A kinetic study is presented of the reaction between lithium atoms and hydrogen chloride over the temperature range 700-1000 K. Li atoms are produced in an excess of HCl and He bath gas by pulsed photolysis of LiCl vapor. The concentration of the metal atoms is then monitored in real time by the technique of laser-induced fluorescence of Li atoms at λ=670.7 nm using a pulsed nitrogen-pumped dye laser and box-car integration of the fluorescence signal. Absolute second-order rate constants for this reaction have been measured at T=700, 750, 800, and 900 K. At T=1000 K the reverse reaction is sufficiently fast that equilibrium is rapidly established on the time scale of the experiment. A fit of the data between 700 and 900 K to the Arrhenius form, with 2σ errors calculated from the absolute errors in the rate constants, yields k(T)=(3.8±1.1)×10-10 exp[-(883±218)/T] cm3 molecule-1 s-1. This result is interpreted through a modified form of collision theory which is constrained to take account of the conservation of total angular momentum during the reaction. Thereby we obtain an estimate for the reaction energy threshold, E0=8.2±1.4 kJ mol-1 (where the error arises from uncertainty in the exothermicity of the reaction), in very good agreement with a crossed molecular beam study of the title reaction, and substantially lower than estimates of E0 from both semiempirical and ab initio calculations of the potential energy surface.
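The Arrhenius fit quoted above can be evaluated directly; a quick numerical check at the experimental temperatures, using the central values of the fit only (uncertainties ignored):

```python
import numpy as np

def k_arrhenius(T):
    """k(T) = 3.8e-10 * exp(-883/T) cm^3 molecule^-1 s^-1 (central values of the reported fit)."""
    return 3.8e-10 * np.exp(-883.0 / T)

for T in (700, 750, 800, 900):
    print(T, "K:", f"{k_arrhenius(T):.2e}", "cm^3 molecule^-1 s^-1")
```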
Real-time edge-enhanced optical correlator
NASA Astrophysics Data System (ADS)
Shihabi, Mazen M.; Hinedi, Sami M.; Shah, Biren N.
1992-08-01
The performance of five symbol lock detectors is compared. They are the square-law detector with overlapping (SQOD) and non-overlapping (SQNOD) integrators, the absolute value detectors with overlapping and non-overlapping (AVNOD) integrators, and the signal power estimator detector (SPED). The analysis considers various scenarios in which the observation interval is much larger than or equal to the symbol synchronizer loop bandwidth, which has not been considered in previous analyses. Also, the case of threshold setting in the absence of signal is considered. It is shown that the SQOD outperforms all others when the threshold is set in the presence of signal, independent of the relationship between loop bandwidth and observation period. On the other hand, the SPED outperforms all others when the threshold is set in the presence of noise only.
What to Do about Zero Frequency Cells when Estimating Polychoric Correlations
ERIC Educational Resources Information Center
Savalei, Victoria
2011-01-01
Categorical structural equation modeling (SEM) methods that fit the model to estimated polychoric correlations have become popular in the social sciences. When population thresholds are high in absolute value, contingency tables in small samples are likely to contain zero frequency cells. Such cells make the estimation of the polychoric…
Kinetic Energy Distribution of D(2p) Atoms From Analysis of the D Lyman-α Line Profile
NASA Technical Reports Server (NTRS)
Ciocca, Marco; Ajello, Joseph M.; Liu, Xianming; Maki, Justin
1997-01-01
The absolute cross sections of the line center (slow atoms) and wings (fast atoms) and the total emission line profile were measured from threshold to 400 eV. Analytical model coefficients are given for the energy dependence of the measured slow-atom cross section.
2012-05-01
noise (AGN) [1] and [11]. We focus on threshold communication systems due to the underwater environment, noncoherent communication techniques are...the threshold level. In the context of the underwater communications, where noncoherent communication techniques are affected both by noise and
Verdecchia, Kyle; Diop, Mamadou; Lee, Ting-Yim; St Lawrence, Keith
2013-02-01
Preterm infants are highly susceptible to ischemic brain injury; consequently, continuous bedside monitoring to detect ischemia before irreversible damage occurs would improve patient outcome. In addition to monitoring cerebral blood flow (CBF), assessing the cerebral metabolic rate of oxygen (CMRO2) would be beneficial considering that metabolic thresholds can be used to evaluate tissue viability. The purpose of this study was to demonstrate that changes in absolute CMRO2 could be measured by combining diffuse correlation spectroscopy (DCS) with time-resolved near-infrared spectroscopy (TR-NIRS). Absolute CBF was determined using bolus-tracking TR-NIRS to calibrate the DCS measurements. Cerebral venous blood oxygenation (SvO2) was determined by multiwavelength TR-NIRS measurements, the accuracy of which was assessed by directly measuring the oxygenation of sagittal sinus blood. In eight newborn piglets, CMRO2 was manipulated by varying the anesthetics and by injecting sodium cyanide. No significant differences were found between the two sets of SvO2 measurements obtained by TR-NIRS or sagittal sinus blood samples and the corresponding CMRO2 measurements. Bland-Altman analysis showed a mean CMRO2 difference of 0.0268 ± 0.8340 mL O2/100 g/min between the two techniques over a range from 0.3 to 4 mL O2/100 g/min.
NASA Astrophysics Data System (ADS)
Verdecchia, Kyle; Diop, Mamadou; Lee, Ting-Yim; St. Lawrence, Keith
2013-02-01
Preterm infants are highly susceptible to ischemic brain injury; consequently, continuous bedside monitoring to detect ischemia before irreversible damage occurs would improve patient outcome. In addition to monitoring cerebral blood flow (CBF), assessing the cerebral metabolic rate of oxygen (CMRO2) would be beneficial considering that metabolic thresholds can be used to evaluate tissue viability. The purpose of this study was to demonstrate that changes in absolute CMRO2 could be measured by combining diffuse correlation spectroscopy (DCS) with time-resolved near-infrared spectroscopy (TR-NIRS). Absolute CBF was determined using bolus-tracking TR-NIRS to calibrate the DCS measurements. Cerebral venous blood oxygenation (SvO2) was determined by multiwavelength TR-NIRS measurements, the accuracy of which was assessed by directly measuring the oxygenation of sagittal sinus blood. In eight newborn piglets, CMRO2 was manipulated by varying the anesthetics and by injecting sodium cyanide. No significant differences were found between the two sets of SvO2 measurements obtained by TR-NIRS or sagittal sinus blood samples and the corresponding CMRO2 measurements. Bland-Altman analysis showed a mean CMRO2 difference of 0.0268±0.8340 mL O2/100 g/min between the two techniques over a range from 0.3 to 4 mL O2/100 g/min.
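For readers unfamiliar with the agreement statistic quoted above, a minimal sketch of a Bland-Altman comparison between two sets of paired CMRO2 measurements follows; the function and the example arrays are illustrative assumptions, not the study's data.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement between two measurement methods.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 SD of the paired differences).
    """
    a, b = np.asarray(method_a), np.asarray(method_b)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# illustrative CMRO2 values (mL O2/100 g/min) from two hypothetical techniques
tr_nirs_dcs = [0.5, 1.1, 1.8, 2.4, 3.0, 3.6]
blood_sample = [0.6, 1.0, 1.9, 2.3, 3.1, 3.5]
print(bland_altman(tr_nirs_dcs, blood_sample))
```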
Altenburg, T M; de Haan, A; Verdijk, P W L; van Mechelen, W; de Ruiter, C J
2009-07-01
Single motor unit electromyographic (EMG) activity of the knee extensors was investigated at different knee angles, with subjects (n = 10) exerting the same absolute submaximal isometric torque at each angle. Measurements were made over a 20° range around the optimum angle for torque production (AngleTmax) and, where feasible, over a wider range (50°). Forty-six vastus lateralis (VL) motor units were recorded at 20.7 ± 17.9% of maximum voluntary contraction (%MVC) together with the rectified surface EMG (rsEMG) of the superficial VL muscle. Due to the lower maximal torque capacity at positions more flexed and extended than AngleTmax, single motor unit recruitment thresholds were expected to decrease and discharge rates were expected to increase at angles above and below AngleTmax. Unexpectedly, the recruitment threshold was higher (P < 0.05) at knee angles 10° more extended (43.7 ± 22.2 N·m) and not different (P > 0.05) at knee angles 10° more flexed (35.2 ± 17.9 N·m) compared with the recruitment threshold at AngleTmax (41.8 ± 21.4 N·m). Also unexpectedly, the discharge rates were similar (P > 0.05) at the three angles: 11.6 ± 2.2, 11.6 ± 2.1, and 12.3 ± 2.1 Hz. Similar angle-independent discharge rates were also found for 12 units (n = 5; 7.4 ± 5.4 %MVC) studied over the wider (50°) range, while the recruitment threshold decreased only at more flexed angles. In conclusion, the similar recruitment threshold and discharge behavior of VL motor units during submaximal isometric torque production suggests that net motor unit activation did not change very much along the ascending limb of the knee angle-torque relationship. Several factors that may contribute to this unexpected aspect of motor control, such as length-dependent twitch potentiation, are discussed.
Zhu, Liling; Su, Fengxi; Jia, Weijuan; Deng, Xiaogeng
2014-01-01
Background: Predictive models for febrile neutropenia (FN) would be informative for physicians in clinical decision making. This study aims to validate a predictive model (Jenkin's model) that comprises pretreatment hematological parameters in early-stage breast cancer patients. Patients and Methods: A total of 428 breast cancer patients who received neoadjuvant/adjuvant chemotherapy without any prophylactic use of colony-stimulating factor were included. Pretreatment absolute neutrophil counts (ANC) and absolute lymphocyte counts (ALC) were used by Jenkin's model to assess the risk of FN. In addition, we modified the thresholds of Jenkin's model and generated Model-A and Model-B. We also developed Model-C by incorporating the absolute monocyte count (AMC) as a predictor into Model-A. The rates of FN in the first chemotherapy cycle were calculated. A valid model should be able to identify a high-risk subgroup of patients with an FN rate >20%. Results: Jenkin's model (predicted as high risk when ANC ≤ 3.1×10^9/L and ALC ≤ 1.5×10^9/L) did not identify any subgroup with a significantly high risk (>20%) of FN in our population, even when different thresholds were used in Model-A (ANC ≤ 4.4×10^9/L; ALC ≤ 2.1×10^9/L) or Model-B (ANC ≤ 3.8×10^9/L; ALC ≤ 1.8×10^9/L). However, with AMC added as an additional predictor, Model-C (ANC ≤ 4.4×10^9/L; ALC ≤ 2.1×10^9/L; AMC ≤ 0.28×10^9/L) identified a subgroup of patients with a significantly high risk of FN (23.1%). Conclusions: In our population, Jenkin's model cannot accurately identify patients with a significant risk of FN. The thresholds should be changed and the AMC incorporated as a predictor to achieve excellent predictive ability. PMID:24945817
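The Model-C decision rule can be written compactly; the sketch below is illustrative only. The abstract does not state how the three cut-offs are combined, so the assumption here is that all three must be met, and the function name and example values are invented (counts in units of 10^9 cells/L).

```python
def high_risk_fn_model_c(anc, alc, amc):
    """Return True if a patient falls in the high-FN-risk subgroup of Model-C.

    anc, alc, amc: pretreatment absolute neutrophil, lymphocyte and monocyte
    counts in 10^9 cells/L. A patient is flagged as high risk only when all
    three counts lie at or below the quoted thresholds (an assumption about
    how the cut-offs combine).
    """
    return anc <= 4.4 and alc <= 2.1 and amc <= 0.28

# example: ANC 3.9, ALC 1.7, AMC 0.25 (x10^9/L) would be flagged as high risk
print(high_risk_fn_model_c(3.9, 1.7, 0.25))  # True
```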
Dark Light, Rod Saturation, and the Absolute and Incremental Sensitivity of Mouse Cone Vision
Naarendorp, Frank; Esdaille, Tricia M.; Banden, Serenity M.; Andrews-Labenski, John; Gross, Owen P.; Pugh, Edward N.
2012-01-01
Visual thresholds of mice for the detection of small, brief targets were measured with a novel behavioral methodology in the dark and in the presence of adapting lights spanning ∼8 log10 units of intensity. To help dissect the contributions of rod and cone pathways, both wild-type mice and mice lacking rod (Gnat1−/−) or cone (Gnat2cpfl3) function were studied. Overall, the visual sensitivity of mice was found to be remarkably similar to that of the human peripheral retina. Rod absolute threshold corresponded to 12-15 isomerized pigment molecules (R*) in image fields of 800 to 3000 rods. Rod "dark light" (intrinsic retinal noise in darkness) corresponded to that estimated previously from single-cell recordings, 0.012 R* s−1 rod−1, indicating that spontaneous thermal isomerizations are responsible. Psychophysical rod saturation was measured for the first time in a nonhuman species and found to be very similar to that of the human rod monochromat. Cone threshold corresponded to ∼5 R* cone−1 in an image field of 280 cones. Cone dark light was equivalent to ∼5000 R* s−1 cone−1, consistent with primate single-cell data but 100-fold higher than predicted by recent measurements of the rate of thermal isomerization of mouse cone opsins, indicating that nonopsin sources of noise determine cone threshold. The new, fully automated behavioral method is based on the ability of mice to learn to interrupt spontaneous wheel running on the presentation of a visual cue and provides an efficient and highly reliable means of examining visual function in naturally behaving normal and mutant mice. PMID:20844144
Neil, Sarah E; Klika, Riggs J; Garland, S Jayne; McKenzie, Donald C; Campbell, Kristin L
2013-03-01
Fatigue is one of the most commonly reported side effects during treatment for breast cancer and can persist following treatment completion. Cancer-related fatigue after treatment is multifactorial in nature, and one hypothesized mechanism is cardiorespiratory and neuromuscular deconditioning. The purpose of this study was to compare cardiorespiratory and neuromuscular function in breast cancer survivors who had completed treatment and met the specified criteria for cancer-related fatigue and a control group of breast cancer survivors without fatigue. Participants in the fatigue (n = 16) and control group (n = 11) performed a maximal exercise test on a cycle ergometer for determination of peak power, power at lactate threshold, and VO2 peak. Neuromuscular fatigue was induced with a sustained submaximal contraction of the right quadriceps. Central fatigue (failure of voluntary activation) was evaluated using twitch interpolation, and peripheral fatigue was measured with an electrically evoked twitch. Power at lactate threshold was lower in the fatigue group (p = 0.05). There were no differences between groups for power at lactate threshold as a percentage of peak power (p = 0.10) or absolute or relative VO2 peak (p = 0.08 and 0.33, respectively). When adjusted for age, the fatigue group had a lower power at lactate threshold (p = 0.02) and absolute VO2 peak (p = 0.03). There were no differences between groups in change in any neuromuscular parameters after the muscle-fatiguing protocol. Findings support the hypothesis that cardiorespiratory deconditioning may play a role in the development and persistence of cancer-related fatigue following treatment. Future research into the use of exercise training to reduce cardiorespiratory deconditioning as a treatment for cancer-related fatigue is warranted to confirm these preliminary findings.
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of phase unwrapping methods based on two selected spatial frequencies is limited by a phase error bound beyond which errors occur in the fringe order, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods, and a strategy to detect and correct the wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold on the absolute phase values to determine fringe order errors, which makes the approach more reliable and avoids the search procedure otherwise needed to detect and correct successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
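As background, the fringe order that such methods compute can be sketched with the generic two-frequency temporal phase unwrapping relation below; this is not the authors' correction scheme, and the variable names are assumptions.

```python
import numpy as np

def unwrap_two_frequency(phi_high, phi_low, f_high, f_low):
    """Generic two-frequency temporal phase unwrapping.

    phi_high : wrapped phase map measured with the high spatial frequency (rad)
    phi_low  : wrapped phase map measured with the low spatial frequency (rad)
    f_high, f_low : the two selected spatial frequencies (fringe counts)

    The low-frequency phase, scaled by the frequency ratio, predicts the
    absolute high-frequency phase; rounding the difference to the nearest
    multiple of 2*pi gives the fringe order k. A fringe order error appears
    as a jump of one or more units in k, which is what correction strategies
    like the one in the paper aim to detect and repair.
    """
    ratio = f_high / f_low
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))   # fringe order
    return phi_high + 2 * np.pi * k                            # absolute phase
```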
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
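To make the two-step procedure concrete, here is a minimal sketch of gain thresholding followed by a cluster mass threshold; the threshold values, array size, and connectivity choice are assumptions rather than the settings used in the paper.

```python
import numpy as np
from scipy import ndimage

def cluster_mass_threshold(sta, gain_thresh, mass_thresh):
    """Two-step thresholding of a spike-triggered average (time x frequency).

    Step 1: keep only pixels whose absolute gain exceeds gain_thresh.
    Step 2: group surviving pixels into contiguous clusters and keep only
            clusters whose summed absolute gain (cluster mass) exceeds
            mass_thresh; everything else is set to zero.
    """
    surviving = np.abs(sta) > gain_thresh
    labels, n_clusters = ndimage.label(surviving)     # 4-connected clusters
    cleaned = np.zeros_like(sta)
    for c in range(1, n_clusters + 1):
        mask = labels == c
        if np.abs(sta[mask]).sum() > mass_thresh:      # cluster mass test
            cleaned[mask] = sta[mask]
    return cleaned

# toy usage with an assumed 40x60 STA and arbitrary thresholds
rng = np.random.default_rng(1)
sta = rng.standard_normal((40, 60))
strf_estimate = cluster_mass_threshold(sta, gain_thresh=2.0, mass_thresh=10.0)
```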
Balaguier, Romain; Madeleine, Pascal; Vuillerme, Nicolas
2016-01-01
The assessment of pressure pain threshold (PPT) provides a quantitative value related to the mechanical sensitivity to pain of deep structures. Although excellent reliability of PPT has been reported for numerous anatomical locations, its absolute and relative reliability in the lower back region remains to be determined. Because of the high prevalence of low back pain in the general population, and because low back pain is one of the leading causes of disability in industrialized countries, assessing pressure pain thresholds over the low back is of particular interest. The purpose of this study was (1) to evaluate the intra- and inter-session absolute and relative reliability of PPT at 14 locations covering the low back region of asymptomatic individuals, and (2) to determine the number of trials required to ensure reliable PPT measurements. Fifteen asymptomatic subjects were included in this study. PPTs were assessed at 14 anatomical locations in the low back region over two sessions separated by a one-hour interval. In each session, three PPT assessments were performed at each location. Reliability was assessed by computing intraclass correlation coefficients (ICC), standard error of measurement (SEM) and minimum detectable change (MDC) for all possible combinations of trials and sessions. Bland-Altman plots were also generated to assess potential bias in the dataset. Relative reliability for both intra- and inter-session comparisons was almost perfect, with ICCs ranging from 0.85 to 0.99. For the intra-session analysis, no statistical difference was found in ICCs or SEM regardless of the comparison between trials. Conversely, for the inter-session analysis, ICC and SEM values were significantly larger when two consecutive PPT measurements were used for data analysis. No significant difference was observed between using two consecutive measurements and using three. Excellent relative and absolute reliability was found for both intra- and inter-session comparisons. Reliable measurements can be achieved equally well using the mean of two or three consecutive PPT measurements, as usually proposed in the literature, or using only the first one. Although reliability was almost perfect regardless of the comparison between PPT assessments, our results suggest using two consecutive measurements to obtain higher short-term absolute reliability.
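For reference, the absolute-reliability indices quoted in these PPT studies are typically derived from the ICC and the between-subject standard deviation; a minimal sketch follows (the formulas are the standard ones, but the variable names and example numbers are assumptions).

```python
import math

def sem_and_mdc(sd_between_subjects, icc, confidence_z=1.96):
    """Standard error of measurement and minimal detectable change.

    SEM = SD * sqrt(1 - ICC)
    MDC = z * sqrt(2) * SEM   (sqrt(2) because two measurements, e.g. test
                               and retest, are being compared)
    """
    sem = sd_between_subjects * math.sqrt(1.0 - icc)
    mdc = confidence_z * math.sqrt(2.0) * sem
    return sem, mdc

# example: ICC = 0.90 and a between-subject SD of 120 kPa (assumed numbers)
sem, mdc = sem_and_mdc(120.0, 0.90)
print(round(sem, 1), round(mdc, 1))  # ~37.9 kPa and ~105.2 kPa
```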
Srimurugan Pratheep, Neeraja; Madeleine, Pascal; Arendt-Nielsen, Lars
2018-04-25
Pressure pain threshold (PPT) and PPT maps are commonly used to quantify and visualize mechanical pain sensitivity. Although PPTs have frequently been reported from patients with knee osteoarthritis (KOA), the absolute and relative reliability of PPT assessments remains to be determined. Thus, the purpose of this study was to evaluate the test-retest relative and absolute reliability of PPT in KOA. For that purpose, the intraclass correlation coefficient (ICC), the standard error of measurement (SEM) and the minimal detectable change (MDC) were measured for eight anatomical locations covering the most painful knee of KOA patients. Twenty KOA patients participated in two sessions 2 weeks ± 3 days apart. PPTs were assessed over eight anatomical locations covering the knee and two remote locations over the tibialis anterior and brachioradialis. The patients rated their maximum pain intensity during the past 24 h and prior to the recordings on a visual analog scale (VAS), and completed the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and PainDetect surveys. The ICC, SEM and MDC between the sessions were assessed, and individual variability was expressed as the coefficient of variation (CV). Bland-Altman plots were used to assess potential bias in the dataset. The ICC ranged from 0.85 to 0.96 for all anatomical locations, which is considered "almost perfect". CV was lowest in session 1 and ranged from 44.2 to 57.6%. SEM for comparison ranged between 34 and 71 kPa, and MDC ranged between 93 and 197 kPa, with mean PPTs ranging from 273.5 to 367.7 kPa in session 1 and from 268.1 to 331.3 kPa in session 2. The Bland-Altman analysis showed no systematic bias. PPT maps showed that the patients had lower thresholds in session 2, but no significant difference was observed between the sessions for PPT or VAS. No correlations were seen between PainDetect and PPT or between PainDetect and WOMAC. Almost perfect relative and absolute reliability was found for the assessment of PPTs in KOA patients. The present investigation indicates that PPT assessment is reliable for evaluating pain sensitivity and sensitization in KOA patients.
Electric field strength determination in filamentary DBDs by CARS-based four-wave mixing
NASA Astrophysics Data System (ADS)
Boehm, Patrick; Kettlitz, Manfred; Brandenburg, Ronny; Hoeft, Hans; Czarnetzki, Uwe
2016-09-01
The electric field strength is a basic parameter of non-thermal plasmas; therefore, a profound knowledge of the electric field distribution is crucial. In this contribution, a four-wave mixing technique based on Coherent Anti-Stokes Raman spectroscopy (CARS) is used to measure electric field strengths in filamentary dielectric barrier discharges (DBDs). The discharges are operated with a pulsed voltage in nitrogen at atmospheric pressure. Small amounts of hydrogen (10 vol%) are admixed as a tracer gas to evaluate the electric field strength in the 1 mm discharge gap. Absolute values of the electric field strength are determined by calibrating the CARS setup with high-voltage amplitudes below the ignition threshold of the arrangement. Alteration of the electric field strength has been observed during the internal polarity reversal and the breakdown process. A major advantage over emission-based methods is that this technique can be used independently of emission, e.g. in the pre-phase and in between two consecutive, opposite discharge pulses where no emission occurs at all. This work was supported by the Deutsche Forschungsgemeinschaft, Forschergruppe FOR 1123 and Sonderforschungsbereich TRR 24 "Fundamentals of complex plasmas".
Neutron activation analysis of certified samples by the absolute method
NASA Astrophysics Data System (ADS)
Kadem, F.; Belouadah, N.; Idiri, Z.
2015-07-01
Nuclear reaction analysis techniques are mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for cross sections evaluated from systematic studies, we used the neutron activation analysis technique (NAA) to determine the concentrations of the various constituents of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation relates several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to the use of a standard sample. Called the absolute method, it allows measurements as accurate as the relative method. The results obtained by the absolute method show that its values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
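For context, the fundamental activation equation referred to above has the standard textbook form shown below; the notation is illustrative and may differ from the authors' own.

```latex
% Activity of an activation product after irradiation time t_i and decay time t_d
% N: number of target nuclei, \sigma: activation cross section, \phi: neutron flux,
% \lambda: decay constant of the product nuclide
A(t_d) = N \, \sigma \, \phi \left(1 - e^{-\lambda t_i}\right) e^{-\lambda t_d}
```

Because every quantity on the right-hand side except N is known or measured, the number of target nuclei, and hence the element concentration, follows directly from the measured activity, which is why no standard sample is needed in the absolute method.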
NASA Astrophysics Data System (ADS)
Nishiyama, N.
2001-12-01
The absolute return strategy provided by fund-of-funds (FOFs) investment schemes is a focus of the Japanese financial community. FOFs investment consists mainly of hedge fund investment and has two major characteristics: low correlation against a benchmark index and little impact from various external changes in the environment, given the goal of maximizing return. According to the historical track records of surviving hedge funds, they maintain a stable high return and low risk. However, one must keep in mind that low risk is not equal to risk free. The failure of Long-Term Capital Management (LTCM) in the summer of 1998 was a symbolic event. The summer of 1998 exhibited a certain limitation of traditional value at risk (VaR) and the possibility that traditional VaR can be ineffective against nonlinear fluctuations in the market. In this paper, I try to bring self-organized criticality (SOC) into portfolio risk control. SOC is well known as a model of decay in the natural world. I analyze nonlinear market fluctuations as SOC and apply SOC to capture complicated market movements, using the SOC threshold point and risk adjustments by scenario correlation as implicit signals. The threshold becomes the control parameter of risk exposure, used to set a downside floor and to forecast extreme nonlinear fluctuations at a given probability. Simulation results show a synergy effect in portfolio risk control between SOC and the absolute return strategy.
A Universal Threshold for the Assessment of Load and Output Residuals of Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new universal residual threshold for the detection of load and gage output residual outliers in wind tunnel strain-gage balance data was developed. The threshold works with both the Iterative and Non-Iterative Methods that are used in the aerospace testing community to analyze and process balance data. It also supports all known load and gage output formats that are traditionally used to describe balance data. The threshold's definition is based on an empirical electrical constant. First, the constant is used to construct a threshold for the assessment of gage output residuals. Then, the related threshold for the assessment of load residuals is obtained by multiplying the empirical electrical constant with the sum of the absolute values of all first partial derivatives of a given load component. The empirical constant equals 2.5 microV/V for the assessment of balance calibration or check load data residuals. A value of 0.5 microV/V is recommended for the evaluation of repeat point residuals because, by design, the calculation of these residuals removes errors that are associated with the regression analysis of the data itself. Data from a calibration of a six-component force balance is used to illustrate the application of the new threshold definitions to real-world balance calibration data.
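A minimal sketch of the load residual threshold described above follows; the constant is the one quoted in the abstract, while the partial derivative values in the example are made up for illustration.

```python
def load_residual_threshold(partials, output_limit_uV_per_V=2.5):
    """Load residual threshold for one load component.

    partials: first partial derivatives of the load component with respect
              to each gage output, in load units per (microV/V)
    output_limit_uV_per_V: empirical electrical constant (2.5 microV/V for
              calibration/check loads, 0.5 microV/V for repeat points)
    The load threshold is the output limit multiplied by the sum of the
    absolute values of the partial derivatives.
    """
    return output_limit_uV_per_V * sum(abs(p) for p in partials)

# example with made-up sensitivities for a single force component (N per microV/V)
print(load_residual_threshold([1.2, 0.05, 0.3, 0.02, 0.15, 0.01]))  # ~4.3 N
```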
A comparative analysis of frequency modulation threshold extension techniques
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Loch, F. J.
1970-01-01
FM threshold extension for system performance improvement is examined by comparing impulse noise elimination, correlation detection, and delta modulation signal processing techniques implemented at the demodulator output.
Sinclair, R C F; Danjoux, G R; Goodridge, V; Batterham, A M
2009-11-01
The variability between observers in the interpretation of cardiopulmonary exercise tests may impact upon clinical decision making and affect the risk stratification and peri-operative management of a patient. The purpose of this study was to quantify the inter-reader variability in the determination of the anaerobic threshold (V-slope method). A series of 21 cardiopulmonary exercise tests from patients attending a surgical pre-operative assessment clinic were read independently by nine experienced clinicians regularly involved in clinical decision making. The grand mean for the anaerobic threshold was 10.5 ml O2/kg body mass/min. The technical error of measurement was 8.1% (circa 0.9 ml/kg/min; 90% confidence interval, 7.4-8.9%). The mean absolute difference between readers was 4.5% with a typical random error of 6.5% (6.0-7.2%). We conclude that the inter-observer variability for experienced clinicians determining the anaerobic threshold from cardiopulmonary exercise tests is acceptable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamberto, M; Chen, H; Huang, K
2015-06-15
Purpose: To characterize the Cyberknife (CK) robotic system's dosimetric accuracy in the delivery of MultiPlan's Monte Carlo dose calculations, using EBT3 radiochromic film inserted in a thorax phantom. Methods: The CIRS XSight Lung Tracking (XLT) Phantom (model 10823) was used in this study with custom-cut EBT3 film inserted in the horizontal (coronal) plane inside the lung-tissue-equivalent phantom. CK MultiPlan v3.5.3 with the Monte Carlo dose calculation algorithm (1.5 mm grid size, 2% statistical uncertainty) was used to calculate a clinical plan for a 25-mm lung tumor lesion, as contoured by the physician, which was then imported onto the XLT phantom CT. Using the same film batch, the net OD to dose calibration curve was obtained using CK with the 60 mm fixed cone by delivering 0-800 cGy. The test films (n=3) were irradiated with 325 cGy to the prescription point. Films were scanned 48 hours after irradiation using an Epson v700 scanner (48-bit color scan, extracted red channel only, 96 dpi). Percent absolute dose and relative isodose distribution differences relative to the planned dose were quantified using an in-house QA software program. The MultiPlan Monte Carlo dose calculation was validated using radiochromic film (EBT3) dosimetry and gamma index criteria of 3%/3mm and 2%/2mm for absolute dose and relative isodose distribution comparisons. Results: EBT3 film measurements of the patient plans calculated with Monte Carlo in MultiPlan resulted in an absolute dose passing rate of 99.6±0.4% for the gamma index criterion of 3%/3mm with a 10% dose threshold, and 95.6±4.4% for the 2%/2mm, 10% threshold criterion. The measured central axis absolute dose was within 1.2% (329.0±2.5 cGy) of the Monte Carlo planned dose (325.0±6.5 cGy) for that same point. Conclusion: MultiPlan's Monte Carlo dose calculation was validated using EBT3 film absolute dosimetry for delivery in a heterogeneous thorax phantom.
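As background on the pass-rate criterion quoted above, a brute-force sketch of a global 2D gamma analysis is shown below; this is not the in-house QA program used in the study, and the grid spacing, criteria, and search-window factor are assumptions.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, pixel_mm, dose_crit=0.03,
                    dist_crit_mm=3.0, threshold=0.10):
    """Brute-force global 2D gamma analysis (e.g. 3%/3 mm, 10% dose threshold).

    dose_ref, dose_eval : 2D dose arrays defined on the same grid
    pixel_mm            : pixel spacing in mm
    dose_crit           : dose-difference criterion, fraction of the max reference dose
    dist_crit_mm        : distance-to-agreement criterion in mm
    threshold           : reference points below this fraction of the max dose are ignored
    """
    dmax = dose_ref.max()
    dd_norm = dose_crit * dmax
    ny, nx = dose_ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    search = int(np.ceil(3 * dist_crit_mm / pixel_mm))   # limit the spatial search window
    evaluated, passed = 0, 0
    for iy in range(ny):
        for ix in range(nx):
            if dose_ref[iy, ix] < threshold * dmax:
                continue
            y0, y1 = max(0, iy - search), min(ny, iy + search + 1)
            x0, x1 = max(0, ix - search), min(nx, ix + search + 1)
            dd = (dose_eval[y0:y1, x0:x1] - dose_ref[iy, ix]) / dd_norm
            dist = np.hypot(yy[y0:y1, x0:x1] - iy,
                            xx[y0:y1, x0:x1] - ix) * pixel_mm / dist_crit_mm
            gamma = np.sqrt(dd ** 2 + dist ** 2).min()    # best agreement in the window
            evaluated += 1
            passed += gamma <= 1.0
    return passed / evaluated
```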
Auditory Sensitivity and Masking Profiles for the Sea Otter (Enhydra lutris).
Ghoul, Asila; Reichmuth, Colleen
2016-01-01
Sea otters are threatened marine mammals that may be negatively impacted by human-generated coastal noise, yet information about sound reception in this species is surprisingly scarce. We investigated amphibious hearing in sea otters by obtaining the first measurements of absolute sensitivity and critical masking ratios. Auditory thresholds were measured in air and underwater from 0.125 to 40 kHz. Critical ratios derived from aerial masked thresholds from 0.25 to 22.6 kHz were also obtained. These data indicate that although sea otters can detect underwater sounds, their hearing appears to be primarily air adapted and not specialized for detecting signals in background noise.
The massive soft anomalous dimension matrix at two loops
NASA Astrophysics Data System (ADS)
Mitov, Alexander; Sterman, George; Sung, Ilmo
2009-05-01
We study two-loop anomalous dimension matrices in QCD and related gauge theories for products of Wilson lines coupled at a point. We verify by an analysis in Euclidean space that the contributions to these matrices from diagrams that link three massive Wilson lines do not vanish in general. We show, however, that for two-to-two processes the two-loop anomalous dimension matrix is diagonal in the same color-exchange basis as the one-loop matrix for arbitrary masses at absolute threshold and for scattering at 90 degrees in the center of mass. This result is important for applications of threshold resummation in heavy quark production.
NASA Astrophysics Data System (ADS)
Murray, Louise J.; Thompson, Christopher M.; Lilley, John; Cosgrove, Vivian; Franks, Kevin; Sebag-Montefiore, David; Henry, Ann M.
2015-02-01
Risks of radiation-induced second primary cancer following prostate radiotherapy using 3D-conformal radiotherapy (3D-CRT), intensity-modulated radiotherapy (IMRT), volumetric modulated arc therapy (VMAT), flattening filter free (FFF) and stereotactic ablative radiotherapy (SABR) were evaluated. Prostate plans were created using 10 MV 3D-CRT (78 Gy in 39 fractions) and 6 MV 5-field IMRT (78 Gy in 39 fractions), VMAT (78 Gy in 39 fractions, with standard flattened and energy-matched FFF beams) and SABR (42.7 Gy in 7 fractions with standard flattened and energy-matched FFF beams). Dose-volume histograms from pelvic planning CT scans of three prostate patients, each planned using all 6 techniques, were used to calculate organ equivalent doses (OED) and excess absolute risks (EAR) of second rectal and bladder cancers, and pelvic bone and soft tissue sarcomas, using mechanistic, bell-shaped and plateau models. For organs distant to the treatment field, chamber measurements recorded in an anthropomorphic phantom were used to calculate OEDs and EARs using a linear model. Ratios of OED give relative radiation-induced second cancer risks. SABR resulted in lower second cancer risks at all sites relative to 3D-CRT. FFF resulted in lower second cancer risks in out-of-field tissues relative to equivalent flattened techniques, with increasing impact in organs at greater distances from the field. For example, FFF reduced second cancer risk by up to 20% in the stomach and up to 56% in the brain, relative to the equivalent flattened technique. Relative to 10 MV 3D-CRT, 6 MV IMRT or VMAT with flattening filter increased second cancer risks in several out-of-field organs, by up to 26% and 55%, respectively. For all techniques, EARs were consistently low. The large relative differences observed between techniques therefore corresponded to very small differences in absolute terms, highlighting the importance of considering absolute risks alongside the corresponding relative risks, since when absolute risks are very low, large relative risks become less meaningful. A relative radiation-induced second cancer risk benefit from SABR and FFF techniques was theoretically predicted, although absolute radiation-induced second cancer risks were low for all techniques, and absolute differences between techniques were small.
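As an illustration of the OED concept used above, a sketch of the simplest (linear) risk model is given below; the organ, bin doses and volumes are made up, and the mechanistic, bell-shaped and plateau variants used in the study are not reproduced here.

```python
def oed_linear(dvh_bins):
    """Organ equivalent dose under the linear risk model.

    dvh_bins: list of (dose_Gy, fractional_volume) pairs from a differential
    DVH; the fractional volumes should sum to 1. For a linear dose-response,
    the OED is simply the mean organ dose, and ratios of OED between two
    treatment techniques give their relative second-cancer risk.
    """
    return sum(dose * frac for dose, frac in dvh_bins)

# made-up differential DVH for an out-of-field organ
dvh = [(0.2, 0.50), (0.5, 0.30), (1.0, 0.15), (2.0, 0.05)]
print(oed_linear(dvh))  # 0.5 Gy mean dose = OED under the linear model
```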
Frahm Olsen, Mette; Bjerre, Eik; Hansen, Maria Damkjær; Tendal, Britta; Hilden, Jørgen; Hróbjartsson, Asbjørn
2018-05-21
The minimum clinically important difference (MCID) is used to interpret the relevance of treatment effects, e.g., when developing clinical guidelines, evaluating trial results or planning sample sizes. There is currently no agreement on an appropriate MCID in chronic pain, and little is known about which contextual factors cause variation. This is a systematic review. We searched PubMed, EMBASE, and the Cochrane Library. Eligible studies determined the MCID for chronic pain based on a one-dimensional pain scale, a patient-reported transition scale of perceived improvement, and either a mean change analysis (mean difference in pain among minimally improved patients) or a threshold analysis (pain reduction associated with the best sensitivity and specificity for identifying minimally improved patients). Main results were summarized descriptively because of considerable heterogeneity, which was quantified using meta-analyses and explored using subgroup analyses and meta-regression. We included 66 studies (31,254 patients). The median absolute MCID was 23 mm on a 0-100 mm scale (interquartile range [IQR] 12-39) and the median relative MCID was 34% (IQR 22-45) among studies using the mean change approach. In both cases, heterogeneity was very high: absolute MCID I² = 99% and relative MCID I² = 96%. High variation was also seen among studies using the threshold approach: the median absolute MCID was 20 mm (IQR 15-30) and the relative MCID was 32% (IQR 15-41). Absolute MCID was strongly associated with baseline pain, explaining approximately two-thirds of the variation, and to a lesser degree with the operational definition of minimum pain relief and clinical condition. A total of 15 clinical and methodological factors were assessed as possible causes of variation in MCID. MCIDs for chronic pain relief vary considerably. Baseline pain is strongly associated with absolute, but not relative, measures. To a much lesser degree, MCID is also influenced by the operational definition of relevant pain relief and possibly by clinical condition. Explicit and conscientious reflection on the choice of an MCID is required when classifying effect sizes as clinically important or trivial. Copyright © 2018 Elsevier Inc. All rights reserved.
Thresher: an improved algorithm for peak height thresholding of microbial community profiles.
Starke, Verena; Steele, Andrew
2014-11-15
This article presents Thresher, an improved technique for finding peak height thresholds for automated rRNA intergenic spacer analysis (ARISA) profiles. We argue that thresholds must be sample dependent, taking community richness into account. In most previous fragment analyses, a common threshold is applied to all samples simultaneously, ignoring richness variations among samples and thereby compromising cross-sample comparison. Our technique solves this problem, and at the same time provides a robust method for outlier rejection, selecting for removal any replicate pairs that are not valid replicates. Thresholds are calculated individually for each replicate in a pair, and separately for each sample. The thresholds are selected to be the ones that minimize the dissimilarity between the replicates after thresholding. If a choice of threshold results in the two replicates in a pair failing a quantitative test of similarity, either that threshold or that sample must be rejected. We compare thresholded ARISA results with sequencing results, and demonstrate that the Thresher algorithm outperforms conventional thresholding techniques. The software is implemented in R, and the code is available at http://verenastarke.wordpress.com or by contacting the author (vstarke@ciw.edu). Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
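A minimal Python sketch of the core idea described above follows (it is not the R implementation of Thresher): candidate thresholds are applied to each replicate and the pair of thresholds that minimizes the residual dissimilarity is kept, with pairs that remain too dissimilar rejected. The Bray-Curtis measure, candidate grid, and rejection cut-off are assumptions.

```python
import numpy as np

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two relative-abundance profiles."""
    return np.abs(a - b).sum() / (a + b).sum()

def select_thresholds(rep1, rep2, candidates, max_dissimilarity=0.2):
    """Pick per-replicate peak-height thresholds that make two replicates most similar.

    rep1, rep2 : peak-height profiles of the two replicates (same fragment bins)
    candidates : candidate thresholds to try (relative peak heights)
    Returns (threshold_1, threshold_2, dissimilarity), or None if even the best
    combination leaves the replicates too dissimilar (pair rejected as outlier).
    """
    best = None
    for t1 in candidates:
        for t2 in candidates:
            a = np.where(rep1 >= t1, rep1, 0.0)
            b = np.where(rep2 >= t2, rep2, 0.0)
            if a.sum() == 0 or b.sum() == 0:
                continue
            d = bray_curtis(a / a.sum(), b / b.sum())   # renormalize after thresholding
            if best is None or d < best[2]:
                best = (t1, t2, d)
    if best is None or best[2] > max_dissimilarity:
        return None          # replicates fail the similarity test: reject the pair
    return best
```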
Absolute cross-section measurements of inner-shell ionization
NASA Astrophysics Data System (ADS)
Schneider, Hans; Tobehn, Ingo; Ebel, Frank; Hippler, Rainer
1994-12-01
Cross section ratios for K- and L-shell ionization of thin silver and gold targets by positron and electron impact have been determined at projectile energies of 30-70 keV. The experimental results are confirmed by calculations in the plane wave Born approximation (PWBA) which include an electron exchange term and account for the deceleration or acceleration of the incident projectile in the nuclear field of the target atom. We report first absolute cross sections for K- and L-shell ionization of silver and gold targets by lepton impact in the threshold region. We have measured the corresponding cross sections for electron (e-) impact with an electron gun and the same experimental set-up.
Coltharp, Carla; Kessler, Rene P.; Xiao, Jie
2012-01-01
Localization-based superresolution microscopy techniques such as Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction Microscopy (STORM) have allowed investigations of cellular structures with unprecedented optical resolutions. One major obstacle to interpreting superresolution images, however, is the overcounting of molecule numbers caused by fluorophore photoblinking. Using both experimental and simulated images, we determined the effects of photoblinking on the accurate reconstruction of superresolution images and on quantitative measurements of structural dimension and molecule density made from those images. We found that structural dimension and relative density measurements can be made reliably from images that contain photoblinking-related overcounting, but accurate absolute density measurements, and consequently faithful representations of molecule counts and positions in cellular structures, require the application of a clustering algorithm to group localizations that originate from the same molecule. We analyzed how applying a simple algorithm with different clustering thresholds (tThresh and dThresh) affects the accuracy of reconstructed images, and developed an easy method to select optimal thresholds. We also identified an empirical criterion to evaluate whether an imaging condition is appropriate for accurate superresolution image reconstruction with the clustering algorithm. Both the threshold selection method and imaging condition criterion are easy to implement within existing PALM clustering algorithms and experimental conditions. The main advantage of our method is that it generates a superresolution image and molecule position list that faithfully represents molecule counts and positions within a cellular structure, rather than only summarizing structural properties into ensemble parameters. This feature makes it particularly useful for cellular structures of heterogeneous densities and irregular geometries, and allows a variety of quantitative measurements tailored to specific needs of different biological systems. PMID:23251611
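A minimal sketch of the kind of dark-time/distance clustering described above is given below; it is a generic greedy grouping, not the authors' algorithm, and the dThresh and tThresh values are assumptions.

```python
import numpy as np

def group_localizations(locs, d_thresh=30.0, t_thresh=5):
    """Greedy grouping of single-molecule localizations into molecules.

    locs: array of (frame, x_nm, y_nm) rows sorted by frame. Two localizations
    are assigned to the same molecule if they lie within d_thresh nanometres
    and are separated by no more than t_thresh frames (allowing for blinking
    gaps). Returns one group label per localization.
    """
    labels = -np.ones(len(locs), dtype=int)
    next_label = 0
    for i, (f, x, y) in enumerate(locs):
        for j in range(i - 1, -1, -1):            # look back at earlier localizations
            fj, xj, yj = locs[j]
            if f - fj > t_thresh:
                break                              # too far back in time
            if labels[j] >= 0 and np.hypot(x - xj, y - yj) <= d_thresh:
                labels[i] = labels[j]
                break
        if labels[i] < 0:
            labels[i] = next_label                 # start a new molecule
            next_label += 1
    return labels

# the molecule count is then the number of distinct labels
```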
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers.
Dell' Anna, Francesco; Dong, Tao; Li, Ping; Wen, Yumei; Azadmehr, Mehdi; Casu, Mario; Berg, Yngvar
2018-04-17
This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low-power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through optimum control of the trade-off between transistor conductivity and leakage losses, aims to maximize the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The simulations confirm the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field-effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
Heil, Peter; Matysiak, Artur; Neubauer, Heinrich
2017-09-01
Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC task is assumed to choose the interval in which the greatest number of events occurred or randomly chooses among intervals which are tied for the greatest number of events. The subject is further assumed to count events over the duration of an evaluation interval that has the same timing and duration as the expected stimulus. The increase in the rate of the events caused by stimulation is proportional to the time-varying amplitude envelope of the bandpass-filtered signal raised to an exponent. We find the exponent to be about 3, consistent with our previous studies. This challenges models that are based on the assumption of the integration of a neural response that is directly proportional to the stimulus amplitude or proportional to its square (i.e., proportional to the stimulus intensity or power). Copyright © 2017 Elsevier B.V. All rights reserved.
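A minimal simulation sketch of the detection model proposed above follows; the rate constants and the envelope are illustrative assumptions, while the exponent of 3 is the value reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_3i3afc(envelope, dt, base_rate=20.0, gain=5.0, exponent=3.0, n_trials=2000):
    """Proportion correct in a simulated 3I-3AFC detection task.

    The evidence in each observation interval is the count of events from a
    Poisson process. In the two noise-only intervals the rate is base_rate;
    in the signal interval it is base_rate plus gain * envelope**exponent.
    The simulated subject picks the interval with the most events, breaking
    ties at random, as assumed by the model described above.
    """
    lam_noise = base_rate * len(envelope) * dt
    lam_signal = lam_noise + gain * np.sum(envelope ** exponent) * dt
    correct = 0
    for _ in range(n_trials):
        counts = np.array([rng.poisson(lam_signal),
                           rng.poisson(lam_noise),
                           rng.poisson(lam_noise)])
        winners = np.flatnonzero(counts == counts.max())
        correct += rng.choice(winners) == 0        # index 0 is the signal interval
    return correct / n_trials

# example: a 100-ms plateau-burst envelope sampled at 1 kHz
env = np.ones(100)
print(trial_3i3afc(env, dt=0.001))
```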
Merchant, Nathan D; Witt, Matthew J; Blondel, Philippe; Godley, Brendan J; Smith, George H
2012-07-01
Underwater noise from shipping is a growing presence throughout the world's oceans, and may be subjecting marine fauna to chronic noise exposure with potentially severe long-term consequences. The coincidence of dense shipping activity and sensitive marine ecosystems in coastal environments is of particular concern, and noise assessment methodologies which describe the high temporal variability of sound exposure in these areas are needed. We present a method of characterising sound exposure from shipping using continuous passive acoustic monitoring combined with Automatic Identification System (AIS) shipping data. The method is applied to data recorded in Falmouth Bay, UK. Absolute and relative levels of intermittent ship noise contributions to the 24-h sound exposure level are determined using an adaptive threshold, and the spatial distribution of potential ship sources is then analysed using AIS data. This technique can be used to prioritize shipping noise mitigation strategies in coastal marine environments. Copyright © 2012 Elsevier Ltd. All rights reserved.
Python Waveform Cross-Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Templeton, Dennise
PyWCC is a tool to compute seismic waveform cross-correlation coefficients on single-component or multiple-component seismic data across a network of seismic sensors. PyWCC compares waveform data templates with continuous seismic data, associates the resulting detections, identifies the template with the highest cross-correlation coefficient, and outputs a catalog of detections above a user-defined absolute cross-correlation threshold value.
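An illustrative Python sketch of the template-matching step described above is given below; this is not PyWCC's actual interface, and the function names and the detection threshold are assumptions.

```python
import numpy as np

def normalized_cross_correlation(template, data):
    """Sliding normalized cross-correlation of a template against continuous data."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n]
        sd = w.std()
        cc[i] = 0.0 if sd == 0 else np.sum(t * (w - w.mean())) / sd
    return cc

def detect(template, data, cc_threshold=0.7):
    """Return sample indices where |CC| exceeds a user-defined absolute threshold."""
    cc = normalized_cross_correlation(template, data)
    return np.flatnonzero(np.abs(cc) >= cc_threshold), cc
```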
Background Studies for Acoustic Neutrino Detection at the South Pole
NASA Technical Reports Server (NTRS)
Abbasi, R.; Abdou, Y.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.;
2011-01-01
The detection of acoustic signals from ultra-high energy neutrino interactions is a promising method to measure the flux of cosmogenic neutrinos expected on Earth. The energy threshold for this process depends strongly on the absolute noise level in the target material. The South Pole Acoustic Test Setup (SPATS), deployed in the upper part of four boreholes of the IceCube Neutrino Observatory, has monitored the noise in Antarctic ice at the geographic South Pole for more than two years down to 500 m depth. The noise is very stable and Gaussian distributed. Lacking an in-situ calibration up to now, laboratory measurements have been used to estimate the absolute noise level in the 10 to 50 kHz frequency range to be smaller than 20 mPa. Using a threshold trigger, sensors of the South Pole Acoustic Test Setup registered acoustic events in the IceCube detector volume and its vicinity. Acoustic signals from refreezing IceCube holes and from anthropogenic sources have been used to test the localization of acoustic events. An upper limit on the neutrino flux at energies E > 10^11 GeV is derived from acoustic data taken over eight months.
Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray
2016-01-01
To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm affected the impaired hearing profile of the model to approximate a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.
Concentration Independent Calibration of β-γ Coincidence Detector Using 131mXe and 133Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntyre, Justin I.; Cooper, Matthew W.; Carman, April J.
Absolute efficiency calibration of radiometric detectors is frequently difficult and requires careful detector modeling and accurate knowledge of the radioactive source used. In the past we have calibrated the β-γ coincidence detector of the Automated Radioxenon Sampler/Analyzer (ARSA) using a variety of sources and techniques which have proven to be less than desirable [1]. A superior technique has been developed that uses the conversion-electron (CE) and x-ray coincidence of 131mXe to provide a more accurate absolute gamma efficiency of the detector. The 131mXe is injected directly into the beta cell of the coincident counting system and no knowledge of absolute source strength is required. In addition, 133Xe is used to provide a second independent means to obtain the absolute efficiency calibration. These two data points provide the necessary information for calculating the detector efficiency and can be used in conjunction with other noble gas isotopes to completely characterize and calibrate the ARSA nuclear detector. In this paper we discuss the techniques and results that we have obtained.
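For illustration, the source-strength-independent efficiency determination that coincidence counting makes possible can be sketched with the textbook relations below; this is not the ARSA calibration procedure itself, and the count-rate values are made up.

```python
def coincidence_efficiencies(n_beta, n_gamma, n_coinc):
    """Absolute detector efficiencies from singles and coincidence rates.

    For a source of (unknown) activity A emitting a beta/CE and a photon in
    coincidence:  n_beta = A*eps_beta, n_gamma = A*eps_gamma,
                  n_coinc = A*eps_beta*eps_gamma,
    so the efficiencies and the activity follow without knowing A in advance.
    Rates are assumed to be background- and dead-time-corrected.
    """
    eps_gamma = n_coinc / n_beta
    eps_beta = n_coinc / n_gamma
    activity = n_beta * n_gamma / n_coinc
    return eps_beta, eps_gamma, activity

# made-up example rates (counts per second)
print(coincidence_efficiencies(n_beta=480.0, n_gamma=120.0, n_coinc=96.0))
# -> eps_beta 0.8, eps_gamma 0.2, activity 600 cps
```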
Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.
2004-05-01
Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phase is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
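As an example of the evaluation criteria listed above, the misclassification error can be computed as follows; this is the standard definition, with array names chosen purely for illustration.

```python
import numpy as np

def misclassification_error(truth, result):
    """Misclassification error between a ground-truth and a test segmentation.

    truth, result: boolean arrays where True marks foreground (polymer phase).
    ME = 1 - (matching background pixels + matching foreground pixels) / total
    pixels; 0 means a perfect match, 1 means the phases are completely swapped.
    """
    fg = np.logical_and(truth, result).sum()
    bg = np.logical_and(~truth, ~result).sum()
    return 1.0 - (fg + bg) / truth.size
```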
Li, Wen; Arasu, Vignesh; Newitt, David C.; Jones, Ella F.; Wilmes, Lisa; Gibbs, Jessica; Kornak, John; Joe, Bonnie N.; Esserman, Laura J.; Hylton, Nola M.
2016-01-01
Functional tumor volume (FTV) measurements by dynamic contrast-enhanced magnetic resonance imaging can predict treatment outcomes for women receiving neoadjuvant chemotherapy for breast cancer. Here, we explore whether the contrast thresholds used to define FTV could be adjusted by breast cancer subtype to improve predictive performance. Absolute FTV and percent change in FTV (ΔFTV) at sequential time points during treatment were calculated and investigated as predictors of pathologic complete response at surgery. The early percent enhancement threshold (PEt) and the signal enhancement ratio threshold (SERt) were varied. The predictive performance of the resulting FTV predictors was evaluated using the area under the receiver operating characteristic curve. A total of 116 patients were studied, both as a full cohort and in the following groups defined by hormone receptor (HR) and HER2 receptor subtype: 45 HR+/HER2−, 39 HER2+, and 30 triple negatives. High AUCs were found at different ranges of PEt and SERt levels in different subtypes. Findings from this study suggest that the predictive performance of MRI for treatment response varies with the contrast thresholds, and that pathologic complete response prediction may be improved through subtype-specific contrast enhancement thresholds. A validation study is underway with a larger patient population. PMID:28066808
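A minimal sketch of how an FTV-style measurement can be computed from enhancement maps using the two thresholds named above (PEt and SERt) follows; the voxel size, threshold values and array names are assumptions, not the study's settings.

```python
import numpy as np

def functional_tumor_volume(pe_map, ser_map, voxel_ml, pe_t=70.0, ser_t=1.0):
    """Functional tumor volume from DCE-MRI enhancement maps.

    pe_map  : percent enhancement at the early post-contrast time point (%)
    ser_map : signal enhancement ratio (early vs. late enhancement)
    voxel_ml: volume of a single voxel in millilitres
    A voxel contributes to the FTV when its early percent enhancement is at
    least PEt and its SER is at least SERt; FTV is the count of such voxels
    times the voxel volume. Tuning PEt/SERt by subtype is what the study explores.
    """
    mask = (pe_map >= pe_t) & (ser_map >= ser_t)
    return mask.sum() * voxel_ml
```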
A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2016-01-01
A new definition of a threshold for the detection of load residual outliers of wind tunnel strain-gage balance data was developed. The new threshold is defined as the product of the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain gage. The empirical limit of the outputs is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand results of the application of the new threshold definition.
Results of clinical olfactometric studies.
Kittel, G
1976-09-01
A modification of a flow olfactometer with a new application apparatus, in which "quasi-free" nasal respiration allows the elimination of adaptation without a special testing room, is reported, together with results obtained with this device on olfactory thresholds before and after septum operations and with reference to threshold increases in 57 post-operative cases of cheilognathopalatoschisis. An esthesio-neuroblastoma, as well as the deformity syndrome with cheilognathopalatoschisis and encephalodystrophy, are used as examples of combined olfactory transmission and perception disorders. Studies of 55 smokers with primary neurosensory disorders demonstrated a threefold increase in the olfactory threshold and a decrease of up to 50% in "fatigue time". A mean acetone deviation factor of 1.93 was seen in 100 students from 20-27 years of age before and after eating. Correspondingly, after a substantial breakfast and lunch, the olfactory threshold attained its maximum daily value within 90 minutes, much more pronounced than after intake of 80 grams of glucose solution. In contrast to the literature, the olfactory threshold was seen to increase continuously with age. Studies of the perception and recognition thresholds in 100 normal individuals and 28 patients with hyposmia exhibited a significant difference (3 sigma). In patients with hyposmia, the absolute values of the two threshold types vary greatly, but not their deviation factors. More importance should be attached to the sense of smell, as the so-called lesser senses give us the greatest pleasures.
Katiyar, Amit; Sarkar, Kausik
2012-11-01
A recent study [Katiyar and Sarkar (2011). J. Acoust. Soc. Am. 130, 3137-3147] showed that, in contrast to the analytical result for free bubbles, the minimum threshold for subharmonic generation from contrast microbubbles does not necessarily occur at twice the resonance frequency. Here, increased damping, whether due to the small radius or to the encapsulation, is shown to shift the minimum threshold away from twice the resonance frequency. Free bubbles as well as four models of the contrast agent encapsulation are investigated while varying the surface dilatational viscosity. Encapsulation properties are determined using measured attenuation data for a commercial contrast agent. For sufficiently small damping, the models predict two minima for the threshold curve, the one at twice the resonance frequency being lower than the one at the resonance frequency, in accord with the classical analytical result. However, increased damping damps the bubble response more at twice the resonance frequency than at the resonance frequency, leading to a flattening of the threshold curve and a gradual shift of the absolute minimum from twice the resonance frequency toward the resonance frequency. The deviation from the classical result stems from the fact that the perturbation analysis employed to obtain it assumes small damping, which is not always applicable for contrast microbubbles.
Measuring Input Thresholds on an Existing Board
NASA Technical Reports Server (NTRS)
Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.
2011-01-01
A critical PECL (positive emitter-coupled logic) to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in the worst case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way to measure the exact input threshold of this device for 64 inputs on a flight board was needed. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog to digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pull-down current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs, including the two that are probed directly. The data were combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and temperatures to show that the interface had voltage margin under all worst-case conditions. Gate input thresholds are normally measured at the manufacturer when the device is on a chip tester. A key function of this machine was duplicated on an existing flight board with no modifications to the nets to be tested, with the exception of changes in the FPGA program.
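The pulse-width argument above can be turned into a simple estimator. Assuming the slow falling edge is a constant-current discharge of the line capacitance (so the extra pulse width is Δt = C·(V_high − V_th)/I), the per-line threshold follows directly. This is only an illustrative sketch of that relation, not the flight procedure, and all names are assumptions:

```python
import numpy as np

def input_thresholds(pulse_width_changes_s, capacitances_F, pulldown_current_A, v_high):
    """Per-line input thresholds from measured pulse-width changes (sketch).

    Assumes a constant-current discharge of each line's capacitance, so that
    delta_t = C * (v_high - v_th) / I  and therefore  v_th = v_high - I * delta_t / C.
    In the flight measurement, two directly probed lines calibrated the absolute
    scale and a solver fit the remaining 62 lines; that step is not reproduced here.
    """
    dt = np.asarray(pulse_width_changes_s, dtype=float)
    c = np.asarray(capacitances_F, dtype=float)
    return v_high - pulldown_current_A * dt / c
```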
Chantler, C T; Islam, M T; Rae, N A; Tran, C Q; Glover, J L; Barnea, Z
2012-03-01
An extension of the X-ray extended-range technique is described for measuring X-ray mass attenuation coefficients by introducing absolute measurement of a number of foils - the multiple independent foil technique. Illustrating the technique with the results of measurements for gold in the 38-50 keV energy range, it is shown that its use enables selection of the most uniform and well defined of available foils, leading to more accurate measurements; it allows one to test the consistency of independently measured absolute values of the mass attenuation coefficient with those obtained by the thickness transfer method; and it tests the linearity of the response of the counter and counting chain throughout the range of X-ray intensities encountered in a given experiment. In light of the results for gold, the strategy to be ideally employed in measuring absolute X-ray mass attenuation coefficients, X-ray absorption fine structure and related quantities is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherpak, Amanda
Purpose: The Octavius 1000 SRS detector was commissioned in December 2014 and is used routinely for verification of all SRS and SBRT plans. Results of verifications were analyzed to assess trends and limitations of the device and planning methods. Methods: Plans were delivered using a True Beam STx and results were evaluated using gamma analysis (95%, 3%/3mm) and absolute dose difference (5%). Verification results were analyzed based on several plan parameters including tumour volume, degree of modulation and prescribed dose. Results: During a 12 month period, a total of 124 patient plans were verified using the Octavius detector. Thirteen plans failed the gamma criteria, while 7 plans failed based on the absolute dose difference. When binned according to degree of modulation, a significant correlation was found between MU/cGy and both mean dose difference (r=0.78, p<0.05) and gamma (r=−0.60, p<0.05). When data was binned according to tumour volume, the standard deviation of average gamma dropped from 2.2% – 3.7% for volumes less than 30 cm³ to below 1% for volumes greater than 30 cm³. Conclusions: The majority of plans and verification failures involved tumour volumes smaller than 30 cm³. This was expected due to the nature of disease treated with SBRT and SRS techniques and did not increase the rate of failure. Correlations found with MU/cGy indicate that as modulation increased, results deteriorated, but not beyond the previously set thresholds.
Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A
2010-10-11
We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers that arises through the effect of the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fibers of different lengths as the Brillouin gain media. For a 5-km long single-mode fiber, the calculated threshold power for SBS is about 16 mW for the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The decrement of the SBS threshold is due to the longer interaction lengths between the Brillouin pump and the Stokes wave.
A behavioral audiogram of the red fox (Vulpes vulpes).
Malkemper, E Pascal; Topinka, Václav; Burda, Hynek
2015-02-01
We determined the absolute hearing sensitivity of the red fox (Vulpes vulpes) using an adapted standard psychoacoustic procedure. The animals were tested in a reward-based go/no-go procedure in a semi-anechoic chamber. At 60 dB sound pressure level (SPL) (re 20 μPa) red foxes perceive pure tones between 51 Hz and 48 kHz, spanning 9.84 octaves with a single peak sensitivity of -15 dB at 4 kHz. The red foxes' high-frequency cutoff is comparable to that of the domestic dog while the low-frequency cutoff is comparable to that of the domestic cat and the absolute sensitivity is between both species. The maximal absolute sensitivity of the red fox is among the best found to date in any mammal. The procedure used here allows for assessment of animal auditory thresholds using positive reinforcement outside the laboratory. Copyright © 2014 Elsevier B.V. All rights reserved.
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
Ensemble Kalman filter (EnKF), as a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of a large ensemble size, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: the hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performances of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases whose levels of heterogeneity/nonlinearity are different. It should be noted that, besides the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding of the forecast covariance yields more reliable performance than thresholding of the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be performed wisely during the early assimilation cycles. The proposed scheme of adaptive thresholding outperforms the other methods for subsurface characterization of the underlying benchmarks.
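The hard, soft and SCAD rules mentioned above have standard closed forms; the sketch below applies them elementwise to an ensemble-estimated covariance (numpy assumed). The adaptive/universal selection of the threshold level itself, which is the core of the paper, is not reproduced here:

```python
import numpy as np

def hard_threshold(c, t):
    """Keep entries with magnitude above t, zero out the rest."""
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    """Shrink magnitudes toward zero by t (lasso-type shrinkage)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def scad_threshold(c, t, a=3.7):
    """Smoothly Clipped Absolute Deviation rule in its standard form (a > 2)."""
    absc = np.abs(c)
    soft = np.sign(c) * np.maximum(absc - t, 0.0)
    middle = ((a - 1) * c - np.sign(c) * a * t) / (a - 2)
    return np.where(absc <= 2 * t, soft, np.where(absc <= a * t, middle, c))

def regularize_forecast_covariance(cov, t, rule=soft_threshold):
    """Elementwise thresholding of the forecast covariance; the diagonal
    (ensemble variances) is left untouched."""
    reg = rule(cov, t)
    np.fill_diagonal(reg, np.diag(cov))
    return reg
```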
Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil
2017-08-01
To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of the APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (estimated 95th percentile APE). SVP and AP were included in the final model: estimated 95th percentile APE = 37.82·SVP + 48.60·AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: The estimated 95th percentile APE, used as an individualized threshold of nodule growth, showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • The estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
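The final model quoted above is a plain linear expression, so the individualized growth threshold can be computed directly. A minimal sketch, assuming SVP and AP are proportions between 0 and 1 and the result is a percentage (function and variable names are illustrative):

```python
def estimated_95th_percentile_ape(svp, ap):
    """Estimated 95th-percentile absolute percentage error (%) from the final model:
    37.82*SVP + 48.60*AP - 10.87."""
    return 37.82 * svp + 48.60 * ap - 10.87

def nodule_grew(volume_change_percent, svp, ap):
    """Flag growth when the observed volume change exceeds the nodule-specific
    variability range instead of a uniform 25% threshold (illustrative wrapper)."""
    return volume_change_percent > estimated_95th_percentile_ape(svp, ap)
```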
NASA Astrophysics Data System (ADS)
Rich, D. R.; Bowman, J. D.; Crawford, B. E.; Delheij, P. P. J.; Espy, M. A.; Haseyama, T.; Jones, G.; Keith, C. D.; Knudson, J.; Leuschner, M. B.; Masaike, A.; Masuda, Y.; Matsuda, Y.; Penttilä, S. I.; Pomeroy, V. R.; Smith, D. A.; Snow, W. M.; Szymanski, J. J.; Stephenson, S. L.; Thompson, A. K.; Yuan, V.
2002-04-01
The capability of performing accurate absolute measurements of neutron beam polarization opens a number of exciting opportunities in fundamental neutron physics and in neutron scattering. At the LANSCE pulsed neutron source we have measured the neutron beam polarization with an absolute accuracy of 0.3% in the neutron energy range from 40 meV to 10 eV using an optically pumped polarized 3He spin filter and a relative transmission measurement technique. 3He was polarized using the Rb spin-exchange method. We describe the measurement technique, present our results, and discuss some of the systematic effects associated with the method.
NASA Astrophysics Data System (ADS)
Talamonti, James J.; Kay, Richard B.; Krebs, Danny J.
1996-05-01
A numerical model was developed to emulate the capabilities of systems performing noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer. By processing computer-simulated data through our model, we project the ultimate precision for ideal data and for data containing AM-FM noise. The precision is shown to be limited by nonlinearities in the laser scan. Keywords: absolute distance, interferometer.
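Spectral peak isolation of the kind described relies on windowing the sampled signal before the FFT and then interpolating the dominant peak. A minimal, generic sketch of that step (the model in the paper is considerably more detailed; the window choice, the Gaussian width and the parabolic interpolation shown here are assumptions):

```python
import numpy as np

def windowed_peak_frequency(signal, sample_rate, window="hanning"):
    """Frequency of the dominant spectral peak after windowing (sketch)."""
    n = signal.size
    if window == "hanning":
        w = np.hanning(n)
    elif window == "blackman":
        w = np.blackman(n)
    else:  # Gaussian window; the width is an arbitrary choice for this sketch
        w = np.exp(-0.5 * ((np.arange(n) - (n - 1) / 2) / (0.2 * n)) ** 2)
    spectrum = np.abs(np.fft.rfft(signal * w)) + 1e-30
    k = int(np.argmax(spectrum[1:-1])) + 1
    # Parabolic interpolation of the log spectrum for sub-bin peak location
    alpha, beta, gamma = np.log(spectrum[k - 1]), np.log(spectrum[k]), np.log(spectrum[k + 1])
    delta = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma)
    return (k + delta) * sample_rate / n
```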
An experiment to measure the one-way velocity of propagation of electromagnetic radiation
NASA Technical Reports Server (NTRS)
Kolen, P.; Torr, D. G.
1982-01-01
An experiment involving commercially available instrumentation to measure the velocity of the earth with respect to absolute space is described. The experiment involves the measurement of the one-way propagation velocity of electromagnetic radiation down a high-quality coaxial cable. It is demonstrated that the experiment is both physically meaningful and exceedingly simple in concept and in implementation. It is shown that with currently available commercial equipment one might expect to detect a threshold value for the component of velocity of the earth's motion with respect to absolute space in the equatorial plane of approximately 10 km/s, which greatly exceeds the velocity resolution required to detect the motion of the solar system with respect to the center of the galaxy.
SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M
Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m − f·σ ≤ I_PET ≤ m + f·σ, so that a neighboring voxel is appended to the region based on its similarity to the current region. When increasing the relaxing factor f (f ≥ 0), the resulting volumes monotonically increased, with a sharp increase when the region just grew into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), Otsu (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46) and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identified the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638; the dataset is provided by AAPM TG211.
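The selection rule for the relaxing factor can be prototyped from an f-volume curve alone: fit an error-function-shaped curve and take the first local maximum of its curvature. The sketch below assumes scipy is available and that a simple four-parameter erf model is an adequate stand-in for the fit described in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def optimal_relaxing_factor(f_values, volumes):
    """First local curvature maximum of an error-function fit to the f-volume curve."""
    f = np.asarray(f_values, float)
    v = np.asarray(volumes, float)

    def model(x, a, b, mu, s):  # erf-shaped volume growth curve
        return a + b * (1.0 + erf((x - mu) / s))

    p0 = [v.min(), (v.max() - v.min()) / 2.0, f[np.argmax(np.gradient(v, f))], 0.5]
    popt, _ = curve_fit(model, f, v, p0=p0, maxfev=10000)

    fine = np.linspace(f.min(), f.max(), 1000)
    vf = model(fine, *popt)
    d1 = np.gradient(vf, fine)
    d2 = np.gradient(d1, fine)
    curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    peaks = np.where((curvature[1:-1] > curvature[:-2]) & (curvature[1:-1] > curvature[2:]))[0]
    return fine[peaks[0] + 1] if peaks.size else fine[np.argmax(curvature)]
```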
Measurement of absolute lung volumes by imaging techniques.
Clausen, J
1997-10-01
In this paper, the techniques available for estimating total lung capacities from standard chest radiographs in children and infants as well as adults are reviewed. These techniques include manual measurements using ellipsoid and planimetry techniques as well as computerized systems. Techniques are also available for making radiographic lung volume measurements from portable chest radiographs. There are inadequate data in the literature to support recommending one specific technique over another. Though measurements of lung volumes by radiographic, plethysmographic, gas dilution or washout techniques result in remarkably similar mean results when groups of normal subjects are tested, in patients with disease, the results of these different basic measurement techniques can differ significantly. Computed tomographic and magnetic resonance techniques can also be used to measure absolute lung volumes and offer the theoretical advantages that the results in individual subjects are less affected by variances of thoracic shape than are measurements made using conventional chest radiographs.
NASA Technical Reports Server (NTRS)
Brown, A. H.; Chapman, D. K.; Johnsson, A.; Heathcote, D.
1995-01-01
We conducted a series of gravitropic experiments on Avena coleoptiles in the weightlessness environment of Spacelab. The purpose was to test the threshold stimulus, the reciprocity rule, and autotropic reactions to a range of g-force stimulations of different intensities and durations. The tests avoided the potentially complicating effects of earth's gravity and the interference from clinostat ambiguities. Using slow-speed centrifuges, coleoptiles received transversal accelerations in the hypogravity range between 0.1 and 1.0 g over periods that ranged from 2 to 130 min. All responses that occurred in weightlessness were compared to clinostat experiments on earth using the same apparatus. Characteristic gravitropic response patterns of Avena were not substantially different from those observed in ground-based experiments. Gravitropic presentation times were extrapolated. The threshold at 1.0 g was less than 1 min (shortest stimulation time 2 min), in agreement with values obtained on the ground. The least stimulus tested, 0.1 g for 130 min, produced a significant response. Therefore the absolute threshold for a gravitropic response is less than 0.1 g.
Quinn, Mitchell S; Andrews, Duncan U; Nauta, Klaas; Jordan, Meredith J T; Kable, Scott H
2017-07-07
The dynamics of CO production from photolysis of H2CO have been explored over an 8000 cm-1 energy range (345 nm-266 nm). Two-dimensional ion imaging, which simultaneously measures the speed and angular momentum distribution of a photofragment, was used to characterise the distribution of rotational and translational energy and to quantify the branching fraction of the roaming, transition state (TS), and triple fragmentation (3F) pathways. The rotational distribution for the TS channel broadens significantly with increasing energy, while the distribution is relatively constant for the roaming channel. The branching fraction from roaming is also relatively constant at 20% of the observed CO. Above the 3F threshold, roaming decreases in favour of triple fragmentation. Combining the present data with our previous study on the H-atom branching fractions and published quantum yields for the radical and molecular channels, absolute quantum yields were determined for all five dissociation channels for the entire S1←S0 absorption band, covering almost 8000 cm-1 of excitation energy. The S0 radical and TS molecular channels are the most important over this energy range. The absolute quantum yield of roaming is fairly constant at ∼5% at all energies. The T1 radical channel is important (20%-40%) between 1500 and 4000 cm-1 above the H + HCO threshold, but becomes unimportant at higher energy. Triple fragmentation increases rapidly above its threshold, reaching a maximum of 5% of the total product yield at the highest energy.
Towards a clinically informed, data-driven definition of elderly onset epilepsy.
Josephson, Colin B; Engbers, Jordan D T; Sajobi, Tolulope T; Jette, Nathalie; Agha-Khani, Yahya; Federico, Paolo; Murphy, William; Pillay, Neelan; Wiebe, Samuel
2016-02-01
Elderly onset epilepsy represents a distinct subpopulation that has received considerable attention due to the unique features of the disease in this age group. Research into this particular patient group has been limited by a lack of a standardized definition and understanding of the attributes associated with elderly onset epilepsy. We used a prospective cohort database to examine differences in patients stratified according to age of onset. Linear support vector machine learning incorporating all significant variables was used to predict age of onset according to prespecified thresholds. Sensitivity and specificity were calculated and plotted in receiver-operating characteristic (ROC) space. Feature coefficients achieving an absolute value of 0.25 or greater were graphed by age of onset to define how they vary with time. We identified 2,449 patients, of whom 149 (6%) had an age of seizure onset of 65 or older. Fourteen clinical variables had an absolute predictive value of at least 0.25 at some point over the age of epilepsy-onset spectrum. Area under the curve in ROC space was maximized between ages of onset of 65 and 70. Features identified through machine learning were frequently threshold specific and were similar, but not identical, to those revealed through simple univariable and multivariable comparisons. This study provides an empirical, clinically informed definition of "elderly onset epilepsy." If validated, an age threshold of 65-70 years can be used for future studies of elderly onset epilepsy and permits targeted interventions according to the patient's age of onset. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
Liberal or restrictive transfusion in high-risk patients after hip surgery.
Carson, Jeffrey L; Terrin, Michael L; Noveck, Helaine; Sanders, David W; Chaitman, Bernard R; Rhoads, George G; Nemo, George; Dragert, Karen; Beaupre, Lauren; Hildebrand, Kevin; Macaulay, William; Lewis, Courtland; Cook, Donald Richard; Dobbin, Gwendolyn; Zakriya, Khwaja J; Apple, Fred S; Horney, Rebecca A; Magaziner, Jay
2011-12-29
The hemoglobin threshold at which postoperative red-cell transfusion is warranted is controversial. We conducted a randomized trial to determine whether a higher threshold for blood transfusion would improve recovery in patients who had undergone surgery for hip fracture. We enrolled 2016 patients who were 50 years of age or older, who had either a history of or risk factors for cardiovascular disease, and whose hemoglobin level was below 10 g per deciliter after hip-fracture surgery. We randomly assigned patients to a liberal transfusion strategy (a hemoglobin threshold of 10 g per deciliter) or a restrictive transfusion strategy (symptoms of anemia or at physician discretion for a hemoglobin level of <8 g per deciliter). The primary outcome was death or an inability to walk across a room without human assistance on 60-day follow-up. A median of 2 units of red cells were transfused in the liberal-strategy group and none in the restrictive-strategy group. The rates of the primary outcome were 35.2% in the liberal-strategy group and 34.7% in the restrictive-strategy group (odds ratio in the liberal-strategy group, 1.01; 95% confidence interval [CI], 0.84 to 1.22), for an absolute risk difference of 0.5 percentage points (95% CI, -3.7 to 4.7). The rates of in-hospital acute coronary syndrome or death were 4.3% and 5.2%, respectively (absolute risk difference, -0.9%; 99% CI, -3.3 to 1.6), and rates of death on 60-day follow-up were 7.6% and 6.6%, respectively (absolute risk difference, 1.0%; 99% CI, -1.9 to 4.0). The rates of other complications were similar in the two groups. A liberal transfusion strategy, as compared with a restrictive strategy, did not reduce rates of death or inability to walk independently on 60-day follow-up or reduce in-hospital morbidity in elderly patients at high cardiovascular risk. (Funded by the National Heart, Lung, and Blood Institute; FOCUS ClinicalTrials.gov number, NCT00071032.).
Liang, Shanshan; Yuan, Fusong; Luo, Xu; Yu, Zhuoren; Tang, Zhihui
2018-04-05
Marginal discrepancy is key to evaluating the accuracy of fixed dental prostheses. An improved method of evaluating marginal discrepancy is needed. The purpose of this in vitro study was to evaluate the absolute marginal discrepancy of ceramic crowns fabricated using conventional and digital methods, using a digital method for the quantitative evaluation of absolute marginal discrepancy. The novel method was based on 3-dimensional scanning, iterative closest point registration techniques, and reverse engineering theory. Six standard tooth preparations for the right maxillary central incisor, right maxillary second premolar, right maxillary second molar, left mandibular lateral incisor, left mandibular first premolar, and left mandibular first molar were selected. Ten conventional ceramic crowns and 10 CEREC crowns were fabricated for each tooth preparation. A dental cast scanner was used to obtain 3-dimensional data of the preparations and ceramic crowns, and the data were compared with the "virtual seating" iterative closest point technique. Reverse engineering software used edge sharpening and other functional modules to extract the margins of the preparations and crowns. Finally, quantitative evaluation of the absolute marginal discrepancy of the ceramic crowns was obtained from the 2-dimensional cross-sectional straight-line distance between points on the margin of the ceramic crowns and the standard preparations, based on the circumferential function module along the long axis. The absolute marginal discrepancy of the ceramic crowns fabricated using conventional methods was 115 ±15.2 μm, and that of the crowns fabricated using the digital technique was 110 ±14.3 μm. ANOVA showed no statistical difference between the 2 methods or among ceramic crowns for different teeth (P>.05). A digital quantitative evaluation method for the absolute marginal discrepancy of ceramic crowns was established. The evaluations determined that the absolute marginal discrepancies were within a clinically acceptable range. This method is acceptable for the digital evaluation of the accuracy of complete crowns. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.
Donoho, David; Jin, Jiashun
2008-09-30
In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p - π_(i)) / sqrt{(i/p)(1 - i/p)}. We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
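The HC threshold defined above can be computed in a few lines once the feature Z-scores are available. A minimal sketch (numpy and scipy assumed; the handling of the edge case i = p is a practical detail not specified in the abstract):

```python
import numpy as np
from scipy.stats import norm

def hc_threshold(z_scores):
    """Higher-criticism threshold: |Z| of the feature whose P-value maximizes
    (i/p - pi_(i)) / sqrt((i/p) * (1 - i/p))."""
    z = np.asarray(z_scores, dtype=float)
    p = z.size
    pvals = 2.0 * norm.sf(np.abs(z))          # two-sided P-values
    order = np.argsort(pvals)
    frac = np.arange(1, p + 1) / p
    hc = (frac - pvals[order]) / np.sqrt(frac * (1.0 - frac) + 1e-12)
    i_star = int(np.argmax(hc[:-1]))          # exclude i = p, where the denominator vanishes
    return float(np.abs(z[order[i_star]]))

# Features with |Z| at or above the threshold are kept for the linear classifier:
# z = np.random.randn(5000); keep = np.abs(z) >= hc_threshold(z)
```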
Shanir, P P Muhammed; Khan, Kashif Ahmad; Khan, Yusuf Uzzaman; Farooq, Omar; Adeli, Hojjat
2017-12-01
Epileptic neurological disorder of the brain is widely diagnosed using the electroencephalography (EEG) technique. EEG signals are nonstationary in nature and show abnormal neural activity during the ictal period. Seizures can be identified by analyzing the EEG signal and obtaining features that can detect these abnormal activities. The present work proposes a novel morphological feature extraction technique based on the local binary pattern (LBP) operator. LBP assigns a unique decimal value to a sample point by weighting the binary outcomes obtained after thresholding the neighboring samples against the present sample point. These LBP values assist in capturing the rising and falling edges of the EEG signal, thus providing a morphologically featured discriminating pattern for epilepsy detection. In the present work, the variability in the LBP values is measured by calculating the sum of absolute differences of consecutive LBP values. The interquartile range is calculated over the preprocessed EEG signal to provide a dispersion measure of the signal. For classification, a K-nearest neighbor classifier is used, and the performance is evaluated on 896.9 hours of data from the CHB-MIT continuous EEG database. A mean accuracy of 99.7% and a mean specificity of 99.8% are obtained, with an average false detection rate of 0.47/h and a sensitivity of 99.2% for 136 seizures.
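A one-dimensional LBP of the kind described can be computed with simple comparisons and bit shifts; the exact neighborhood used in the paper is not specified here, so the six-neighbor forward window below is an assumption, as are the feature definitions:

```python
import numpy as np

def lbp_codes(signal, n_neighbors=6):
    """1-D local binary pattern codes: each sample is compared with the next
    n_neighbors samples and the binary outcomes are packed into a decimal code."""
    x = np.asarray(signal, float)
    n = x.size - n_neighbors
    codes = np.zeros(n, dtype=int)
    for k in range(n_neighbors):
        codes |= (x[k + 1 : k + 1 + n] >= x[:n]).astype(int) << k
    return codes

def epoch_features(signal):
    """Two features per EEG epoch: variability of consecutive LBP codes and the
    interquartile range of the (preprocessed) signal."""
    codes = lbp_codes(signal)
    lbp_variability = int(np.sum(np.abs(np.diff(codes))))
    q75, q25 = np.percentile(signal, [75, 25])
    return lbp_variability, q75 - q25
```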
Gaskill, S E; Walker, A J; Serfass, R A; Bouchard, C; Gagnon, J; Rao, D C; Skinner, J S; Wilmore, J H; Leon, A S
2001-11-01
The purpose of this study was to evaluate the effect of exercise training intensity relative to the ventilatory threshold (VT) on changes in work (watts) and VO2 at the ventilatory threshold and at maximal exercise in previously sedentary participants in the HERITAGE Family Study. We hypothesized that those who exercised below their VT would improve less in VO2 at the ventilatory threshold (VO2vt) and VO2max than those who trained at an intensity greater than their VT. Supervised cycle ergometer training was performed at the 4 participating clinical centers, 3 times a week for 20 weeks. Exercise training progressed from the HR corresponding to 55% VO2max for 30 minutes to the HR associated with 75% VO2max for 50 minutes for the final 6 weeks. VT was determined at baseline and after exercise training using standardized methods. 432 sedentary white and black men (n = 224) and women (n = 208), aged 17 to 65 years, were retrospectively divided into groups based on whether exercise training was initiated below, at, or above VT. 1) Training intensity (relative to VT) accounted for about 26% of the improvement in VO2vt (R2 = 0.26, p < 0.0001). 2) The absolute intensity of training in watts (W) accounted for approximately 56% of the training effect at VT (R2 = 0.56, p < 0.0001), with post-training watts at VT (VT(watts)) being not significantly different from W during training (p > 0.70). 3) Training intensity (relative to VT) had no effect on ΔVO2max. These data clearly show that as a result of aerobic training both the VO2 and W associated with VT respond and become similar to the absolute intensity of sustained (3 times per week for 50 min) aerobic exercise training. Higher intensities of exercise, relative to VT, result in larger gains in VO2vt but not in VO2max.
Split-step eigenvector-following technique for exploring enthalpy landscapes at absolute zero.
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra
2006-03-16
The mapping of enthalpy landscapes is complicated by the coupling of particle position and volume coordinates. To address this issue, we have developed a new split-step eigenvector-following technique for locating minima and transition points in an enthalpy landscape at absolute zero. Each iteration is split into two steps in order to independently vary system volume and relative atomic coordinates. A separate Lagrange multiplier is used for each eigendirection in order to provide maximum flexibility in determining step sizes. This technique will be useful for mapping the enthalpy landscapes of bulk systems such as supercooled liquids and glasses.
Absolute gravimetry as an operational tool for geodynamics research
NASA Astrophysics Data System (ADS)
Torge, W.
Relative gravimetric techniques have been used for nearly 30 years for measuring non-tidal gravity variations with time, and thus have contributed to geodynamics research by monitoring vertical crustal movements and internal mass shifts. With today's accuracy of about ±0.05 µm s-2 (or 5 µGal), significant results have been obtained in numerous control nets of local extension, especially in connection with seismic and volcanic events. Nevertheless, the main drawbacks of relative gravimetry, namely deficiencies in absolute datum and calibration, set a limit to its application, especially with respect to large-scale networks and long-term investigations. These problems can now be successfully attacked by absolute gravimetry, with transportable gravimeters having been available for about 20 years. While the absolute technique during the first two centuries of gravimetry's history was based on the pendulum method, the free-fall method can now be employed, taking advantage of laser interferometry, electronic timing, vacuum and shock-absorbing techniques, and on-line computer control. The accuracy inherent in advanced instruments is about ±0.05 µm s-2. In field work, an accuracy of ±0.1 µm s-2 may generally be expected, depending strongly on local environmental conditions.
Finneran, James J; Houser, Dorian S
2006-05-01
Traditional behavioral techniques for hearing assessment in marine mammals are limited by the time and access required to train subjects. Electrophysiological methods, where passive electrodes are used to measure auditory evoked potentials (AEPs), are attractive alternatives to behavioral techniques; however, there have been few attempts to compare AEP and behavioral results for the same subject. In this study, behavioral and AEP hearing thresholds were compared in four bottlenose dolphins. AEP thresholds were measured in-air using a piezoelectric sound projector embedded in a suction cup to deliver amplitude modulated tones to the dolphin through the lower jaw. Evoked potentials were recorded noninvasively using surface electrodes. Adaptive procedures allowed AEP hearing thresholds to be estimated from 10 to 150 kHz in a single ear in about 45 min. Behavioral thresholds were measured in a quiet pool and in San Diego Bay. AEP and behavioral threshold estimates agreed closely as to the upper cutoff frequency beyond which thresholds increased sharply. AEP thresholds were strongly correlated with pool behavioral thresholds across the range of hearing; differences between AEP and pool behavioral thresholds increased with threshold magnitude and ranged from 0 to + 18 dB.
Development of Relative Disparity Sensitivity in Human Visual Cortex.
Norcia, Anthony M; Gerhard, Holly E; Meredith, Wesley J
2017-06-07
Stereopsis is the primary cue underlying our ability to make fine depth judgments. In adults, depth discriminations are supported largely by relative rather than absolute binocular disparity, and depth is perceived primarily for horizontal rather than vertical disparities. Although human infants begin to exhibit disparity-specific responses between 3 and 5 months of age, it is not known how relative disparity mechanisms develop. Here we show that the specialization for relative disparity is highly immature in 4- to 6-month-old infants but is adult-like in 4- to 7-year-old children. Disparity-tuning functions for horizontal and vertical disparities were measured using the visual evoked potential. Infant relative disparity thresholds, unlike those of adults, were equal for vertical and horizontal disparities. Their horizontal disparity thresholds were a factor of ∼10 higher than adults, but their vertical disparity thresholds differed by a factor of only ∼4. Horizontal relative disparity thresholds for 4- to 7-year-old children were comparable with those of adults at ∼0.5 arcmin. To test whether infant immaturity was due to spatial limitations or insensitivity to interocular correlation, highly suprathreshold horizontal and vertical disparities were presented in alternate regions of the display, and the interocular correlation of the interdigitated regions was varied from 0% to 100%. This manipulation regulated the availability of coarse-scale relative disparity cues. Adult and infant responses both increased with increasing interocular correlation by similar magnitudes, but adult responses increased much more for horizontal disparities, further evidence for qualitatively immature stereopsis based on relative disparity at 4-6 months of age. SIGNIFICANCE STATEMENT Stereopsis, our ability to sense depth from horizontal image disparity, is among the finest spatial discriminations made by the primate visual system. Fine stereoscopic depth discriminations depend critically on comparisons of disparity relationships in the image that are supported by relative disparity cues rather than the estimation of single, absolute disparities. Very young human and macaque infants are sensitive to absolute disparity, but no previous study has specifically studied the development of relative disparity sensitivity, a hallmark feature of adult stereopsis. Here, using high-density EEG recordings, we show that 4- to 6-month-old infants display both quantitative and qualitative response immaturities for relative disparity information. Relative disparity responses are adult-like no later than 4-7 years of age. Copyright © 2017 the authors 0270-6474/17/375608-12$15.00/0.
Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements
Anthony, Robert E.; Ringler, Adam; Wilson, David
2018-01-01
The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.
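The first step described, calibrating an accelerometer by inverting it in a known gravity field, reduces to a two-point measurement: flipping the sensitive axis spans 2g, which removes any output offset. A minimal sketch of that relation (names are illustrative; the subsequent relative calibration of the broadband sensors is not shown):

```python
def accelerometer_sensitivity(v_axis_up, v_axis_down, local_g=9.7796):
    """Absolute sensitivity in V per (m/s^2) from a +g / -g flip.

    v_axis_up, v_axis_down : mean outputs (volts) with the sensitive axis pointing
                             up and then down in the local gravity field
    local_g                : locally measured absolute gravity (m/s^2); the default
                             here is only a placeholder value
    """
    return (v_axis_up - v_axis_down) / (2.0 * local_g)
```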
A fuzzy optimal threshold technique for medical images
NASA Astrophysics Data System (ADS)
Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.
2012-01-01
A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to perform better than the existing algorithms.
Improved detection and relocation of micro-earthquakes applied to the Sea of Marmara
NASA Astrophysics Data System (ADS)
Tary, J. B.; Evangelia, B.; Géli, L.; Lomax, A.
2016-12-01
The Sea of Marmara is located at the western end of the North Anatolian Fault (NAF). This part of the NAF is considered a seismic gap, lying between the Izmit and Duzce earthquakes to the east and the Ganos earthquake to the west. Improved detection and location of seismicity in the Sea of Marmara is important for defining the seismic hazard in this area. On July 25, 2011, a Mw 5 earthquake occurred below the Western High in the western part of the Sea of Marmara. This earthquake as well as its aftershock sequence were recorded by a network of 10 ocean bottom seismometers (Ifremer) as well as seafloor observatories (KOERI). The OBSs were deployed from mid-April 2011 to the end of July 2011. The aftershock sequence is characterized by deep seismicity (about 10-15 km) around the main shock and by shallow seismicity. Some of the shallow seismicity could be located at a similar depth as gas-prone sediment layers below the Western High. The exact causes of these shallow aftershocks are still unclear. To better define this aftershock sequence, we use the matched-filter technique with a selection of aftershocks as templates to extract smaller child events from the continuous data streams. The templates are cross-correlated with the continuous data for stations with absolute time picks. The cross-correlation coefficients are then summed over all stations and components, and we compute the median absolute deviation (MAD) of the summed trace. Signals are detected when the summed cross-correlation time series exceeds a given number of times the MAD. Using a conservative detection threshold, we obtain a 10-fold increase in the number of events. The newly detected events are then relocated using the double-difference technique. With these newly detected events, we investigate the nucleation phase of the main shock and the aftershock sequence, as well as the possible triggering of the shallow aftershocks by the deeper seismicity.
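The detection step described (template cross-correlation summed over the network and thresholded at a multiple of its median absolute deviation) can be sketched for a single channel as follows; the per-station stacking, absolute timing and relocation steps are omitted, and the factor of 8 MADs is only an example value:

```python
import numpy as np

def matched_filter_detect(template, data, n_mad=8.0):
    """Indices where the normalized cross-correlation exceeds n_mad * MAD (sketch)."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = t.size
    cc = np.empty(data.size - n + 1)
    for i in range(cc.size):                      # sliding normalized cross-correlation
        w = data[i : i + n]
        cc[i] = np.dot(t, w - w.mean()) / (n * (w.std() + 1e-12))
    mad = np.median(np.abs(cc - np.median(cc)))
    detections = np.where(cc > n_mad * mad)[0]
    return detections, cc
```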
Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas
2017-03-01
Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV) infection. Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust methods for computerized image analysis for CPA assessment has proven to be a major limitation. The current work introduces a three-stage, fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation, as well as for fibrosis detection in liver tissue regions, in the first and the third stages of the methodology, respectively. Due to the existence of several types of tissue regions in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from the CPA computation. For the evaluation of the methodology, 79 liver biopsy images have been employed, obtaining a 1.31% mean absolute CPA error, with a 0.923 concordance correlation coefficient. The proposed methodology is designed to (i) avoid manual threshold-based and region selection processes, widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Absolute and Mass-Dependent Titanium Isotope Compositions of Solar System Materials
NASA Astrophysics Data System (ADS)
Williams, N. H.; Fehr, M. A.; Akram, W. M.; Parkinson, I. J.; Schönbächler, M.
2013-09-01
Mass-dependent Ti isotope data for various solar system materials will be presented. These data have been obtained via a double-spike technique using ^47Ti and ^49Ti as spikes. Absolute nucleosynthetic anomaly data for Ti will also be presented.
Measuring the Accuracy of Simple Evolving Connectionist System with Varying Distance Formulas
NASA Astrophysics Data System (ADS)
Al-Khowarizmi; Sitompul, O. S.; Suherman; Nababan, E. B.
2017-12-01
Simple Evolving Connectionist System (SECoS) is a minimal implementation of Evolving Connectionist Systems (ECoS) in artificial neural networks. The three-layer network architecture of the SECoS can be built based on the given input. In this study, the activation value for the SECoS learning process, which is commonly calculated using the normalized Hamming distance, is also calculated using the normalized Manhattan distance and the normalized Euclidean distance in order to compare the smallest error value and the best learning rate obtained. The accuracy of the measurements resulting from the three distance formulas is calculated using the mean absolute percentage error. In the training phase, with several parameters such as sensitivity threshold, error threshold, first learning rate, and second learning rate, it was found that the normalized Euclidean distance is more accurate than both the normalized Hamming distance and the normalized Manhattan distance. In the case of beta fibrinogen gene -455 G/A polymorphism patients used as training data, the highest mean absolute percentage error value is obtained with the normalized Manhattan distance compared to the normalized Euclidean distance and the normalized Hamming distance. However, the differences are so small that it can be concluded that the three distance formulas used in SECoS do not have a significant effect on the accuracy of the training results.
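For reference, one common set of definitions for the three distances and the error measure is sketched below; the exact normalizations used by the SECoS implementation in the paper may differ, so these formulas should be read as assumptions:

```python
import numpy as np

def normalized_hamming(a, b):
    """ECoS-style normalized Hamming distance: sum|a-b| / sum(a+b), for inputs
    normalized to [0, 1] (definition assumed)."""
    return np.sum(np.abs(a - b)) / (np.sum(a + b) + 1e-12)

def normalized_manhattan(a, b):
    """City-block distance divided by the number of components."""
    return np.sum(np.abs(a - b)) / a.size

def normalized_euclidean(a, b):
    """Euclidean distance divided by the square root of the number of components."""
    return np.sqrt(np.sum((a - b) ** 2) / a.size)

def mape(actual, predicted):
    """Mean absolute percentage error used to score the training results."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))
```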
High-resolution vacuum-ultraviolet photoabsorption spectra of 1-butyne and 2-butyne
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacovella, U.; Holland, D. M. P.; Boyé-Péronne, S.
2015-07-21
The absolute photoabsorption cross sections of 1- and 2-butyne have been recorded at high resolution by using the vacuum-ultraviolet Fourier-Transform spectrometer at the SOLEIL Synchrotron. Both spectra show more resolved structure than previously observed, especially in the case of 2-butyne. In this work, we assess the potential importance of Rydberg states with higher values of orbital angular momentum, l, than are typically observed in photoabsorption experiments from ground state molecules. We show how the character of the highest occupied molecular orbitals in 1- and 2-butyne suggests the potential importance of transitions to such high-l (l = 3 and 4) Rydberg states. Furthermore, we use theoretical calculations of the partial wave composition of the absorption cross section just above the ionization threshold and the principle of continuity of oscillator strength through an ionization threshold to support this conclusion. The new absolute photoabsorption cross sections are discussed in light of these arguments, and the results are consistent with the expectations. This type of argument should be valuable for assessing the potential importance of different Rydberg series when sufficiently accurate direct quantum chemical calculations are difficult, for example, in the n ≥ 5 manifolds of excited states of larger molecules.
NASA Astrophysics Data System (ADS)
Gingell, M.; Mason, N. J.; Walker, I. C.; Marston, G.; Zhao, H.; Siggel, M. R. F.
1999-06-01
Absolute optical (VUV) absorption cross sections for cyclopropane have been measured from 5.0 to 11.2 and 20-40 eV using synchrotron radiation. Also, electron energy-loss (EEL) spectra have been obtained using incident electrons of (a) 150 eV energy scattered through small angles (energy loss 5.0-15 eV) and (b) near-threshold energies scattered through large angles (energy loss 0-10.5 eV). Taken together these confirm that the low-lying excited electronic states of cyclopropane are of Rydberg type and, although spectral bands are diffuse, a known Rydberg series has been extended. Recent computations (Galasso V 1996 Chem. Phys. 206 289) appear to give a good account of the experimental spectrum from threshold to about 11 eV, but these must be extended if valence-excited states are to be characterized. Particular attention has been directed at the evaluation of absolute optical cross sections. These are now believed to be established over the energy ranges 5-15 and 20-40 eV. In the gap region (15-20 eV) second-order radiation may affect the optical measurements. From consideration of second-order effects, and comparison of the present studies with earlier measurements, we propose a best-estimate cross section in this energy region also.
Competitive epidemic spreading over arbitrary multilayer networks.
Darabi Sahneh, Faryad; Scoglio, Caterina
2014-06-01
This study extends the Susceptible-Infected-Susceptible (SIS) epidemic model for single-virus propagation over an arbitrary graph to a Susceptible-Infected by virus 1-Susceptible-Infected by virus 2-Susceptible (SI1SI2S) epidemic model of two exclusive, competitive viruses over a two-layer network with generic structure, where the network layers represent the distinct transmission routes of the viruses. We find analytical expressions determining extinction, coexistence, and absolute dominance of the viruses after introducing the concepts of survival threshold and absolute-dominance threshold. The main outcome of our analysis is the discovery and proof of a region for long-term coexistence of competitive viruses in nontrivial multilayer networks. We show coexistence is impossible if the network layers are identical yet possible if the network layers are distinct. Not only do we rigorously prove a region of coexistence, but we can also quantify it via the interrelation of central nodes across the network layers. Little to no overlap of the layers' central nodes is the key determinant of coexistence. For example, we show both analytically and numerically that positive correlation of network layers makes it difficult for a virus to survive, while in a network with negatively correlated layers, survival is easier, but total removal of the other virus is more difficult.
Bernstein, Leslie R; Trahiotis, Constantine
2016-11-01
This study assessed whether audiometrically-defined "slight" or "hidden" hearing losses might be associated with degradations in binaural processing as measured in binaural detection experiments employing interaurally delayed signals and maskers. Thirty-one listeners participated, all having no greater than slight hearing losses (i.e., no thresholds greater than 25 dB HL). Across the 31 listeners and consistent with the findings of Bernstein and Trahiotis [(2015). J. Acoust. Soc. Am. 138, EL474-EL479] binaural detection thresholds at 500 Hz and 4 kHz increased with increasing magnitude of interaural delay, suggesting a loss of precision of coding with magnitude of interaural delay. Binaural detection thresholds were consistently found to be elevated for listeners whose absolute thresholds at 4 kHz exceeded 7.5 dB HL. No such elevations were observed in conditions having no binaural cues available to aid detection (i.e., "monaural" conditions). Partitioning and analyses of the data revealed that those elevated thresholds (1) were more attributable to hearing level than to age and (2) result from increased levels of internal noise. The data suggest that listeners whose high-frequency monaural hearing status would be classified audiometrically as being normal or "slight loss" may exhibit substantial and perceptually meaningful losses of binaural processing.
Behrens, Dieter; Forsgren, Eva; Fries, Ingemar; Moritz, Robin F A
2010-10-01
We compared the mortality of honeybee (Apis mellifera) drone and worker larvae from a single queen under controlled in vitro conditions following infection with Paenibacillus larvae, a bacterium causing the brood disease American Foulbrood (AFB). We also determined absolute P. larvae cell numbers and lethal titres in deceased individuals of both sexes up to 8 days post infection using quantitative real-time PCR (qPCR). Our results show that in drones the onset of infection induced mortality is delayed by 1 day, the cumulative mortality is reduced by 10% and P. larvae cell numbers are higher than in worker larvae. Since differences in bacterial cell titres between sexes can be explained by differences in body size, larval size appears to be a key parameter for a lethal threshold in AFB tolerance. Both means and variances for lethal thresholds are similar for drone and worker larvae suggesting that drone resistance phenotypes resemble those of related workers. © 2010 Society for Applied Microbiology and Blackwell Publishing Ltd.
Comparison of four software packages for CT lung volumetry in healthy individuals.
Nemec, Stefan F; Molinari, Francesco; Dufresne, Valerie; Gosset, Natacha; Silva, Mario; Bankier, Alexander A
2015-06-01
To compare CT lung volumetry (CTLV) measurements provided by different software packages, and to provide normative data for lung densitometric measurements in healthy individuals. This retrospective study included 51 chest CTs of 17 volunteers (eight men and nine women; mean age, 30 ± 6 years), who underwent spirometrically monitored CT at total lung capacity (TLC), functional residual capacity (FRC), and mean inspiratory capacity (MIC). Volumetric differences assessed by four commercial software packages were compared with analysis of variance (ANOVA) for repeated measurements and benchmarked against the threshold for acceptable variability between spirometric measurements. Mean lung density (MLD) and parenchymal heterogeneity (MLD-SD) were also compared with ANOVA. Volumetric differences ranged from 12 to 213 ml (0.20 % to 6.45 %). Although 16/18 comparisons (among four software packages at TLC, MIC, and FRC) were statistically significant (P < 0.001 to P = 0.004), only 3/18 comparisons, one at MIC and two at FRC, exceeded the spirometry variability threshold. MLD and MLD-SD significantly increased with decreasing volumes, and were significantly larger in lower compared to upper lobes (P < 0.001). Lung volumetric differences provided by different software packages are small. These differences should not be interpreted based on statistical significance alone, but together with absolute volumetric differences. • Volumetric differences, assessed by different CTLV software, are small but statistically significant. • Volumetric differences are smaller at TLC than at MIC and FRC. • Volumetric differences rarely exceed spirometric repeatability thresholds at MIC and FRC. • Differences between CTLV measurements should be interpreted based on comparison of absolute differences. • MLD increases with decreasing volumes, and is larger in lower compared to upper lobes.
NASA Astrophysics Data System (ADS)
Seiller, G.; Roy, R.; Anctil, F.
2017-04-01
Uncertainties associated with the evaluation of the impacts of climate change on water resources are broad, come from multiple sources, and lead to diagnoses that are sometimes difficult to interpret. Quantification of these uncertainties is a key element to yield confidence in the analyses and to provide water managers with valuable information. This work specifically evaluates the influence of hydrological modeling calibration metrics on future water resources projections, on thirty-seven watersheds in the Province of Québec, Canada. Twelve lumped hydrologic models, representing a wide range of operational options, are calibrated with three common objective functions derived from the Nash-Sutcliffe efficiency. The hydrologic models are forced with climate simulations corresponding to two RCPs, twenty-nine GCMs from CMIP5 (Coupled Model Intercomparison Project phase 5), and two post-processing techniques, leading to future projections over the 2041-2070 period. Results show that the diagnosis of the impacts of climate change on water resources is strongly affected by the choice of hydrologic model and calibration metric. Indeed, for the four selected hydrological indicators, dedicated to water management, parameters from the three objective functions can provide different interpretations in terms of absolute and relative changes, as well as the direction of projected changes and climatic ensemble consensus. The GR4J model and a multimodel approach offer the best modeling options, based on calibration performance and robustness. Overall, these results illustrate the need to provide water managers with detailed information on relative changes, but also absolute change values, especially for hydrological indicators acting as security policy thresholds.
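For readers unfamiliar with the calibration metric named above, a minimal sketch of the Nash-Sutcliffe efficiency and two commonly used transformed variants follows; the study's exact three objective functions are not reproduced here, so the variants shown are assumptions for illustration.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of the
    observations, and negative values are worse than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Common transformed variants (illustrative; not necessarily the study's metrics)
def nse_sqrt(obs, sim):
    return nse(np.sqrt(obs), np.sqrt(sim))

def nse_log(obs, sim, eps=1e-6):
    return nse(np.log(np.asarray(obs, dtype=float) + eps),
               np.log(np.asarray(sim, dtype=float) + eps))
```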
Cryar, Adam; Pritchard, Caroline; Burkitt, William; Walker, Michael; O'Connor, Gavin; Burns, Duncan Thorburn; Quaglia, Milena
2013-01-01
Current routine food allergen quantification methods, which are based on immunochemistry, offer high sensitivity but can suffer from issues of specificity and significant variability of results. MS approaches have been developed, but currently lack metrological traceability. A feasibility study on the application of metrologically traceable MS-based reference procedures was undertaken. A proof of concept involving proteolytic digestion and isotope dilution MS for quantification of protein allergens in a food matrix was undertaken using lysozyme in wine as a model system. A concentration of lysozyme in wine of 0.95 +/- 0.03 microg/g was calculated based on the concentrations of two peptides, confirming that this type of analysis is viable at allergenically meaningful concentrations. The challenges associated with this promising method were explored; these included peptide stability, chemical modification, enzymatic digestion, and sample cleanup. The method is suitable for the production of allergen in food certified reference materials, which together with the achieved understanding of the effects of sample preparation and of the matrix on the final results, will assist in addressing the bias of the techniques routinely used and improve measurement confidence. Confirmation of the feasibility of MS methods for absolute quantification of an allergenic protein in a food matrix with results traceable to the International System of Units is a step towards meaningful comparison of results for allergen proteins among laboratories. This approach will also underpin risk assessment and risk management of allergens in the food industry, and regulatory compliance of the use of thresholds or action levels when adopted.
Absolute colorimetric characterization of a DSLR camera
NASA Astrophysics Data System (ADS)
Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo
2014-03-01
A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, devoted respectively to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimate of the XYZ data in cd/m². The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.
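A minimal sketch of the second module, colorimetric characterization matrix estimation by least squares, is given below; the patch data, the exposure-based luminance scaling, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def fit_characterization_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M such that xyz is approximately rgb @ M.T.
    rgb, xyz: (N, 3) arrays of linear camera responses and measured tristimulus
    values for N chart patches."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T

def to_absolute_xyz(rgb, M, exposure_scale):
    """Apply the matrix and a single radiometric scale factor standing in for the
    f-number / exposure-time correction described in the abstract (assumption)."""
    return exposure_scale * (rgb @ M.T)

# Toy usage with random patch data
rng = np.random.default_rng(1)
rgb_patches = rng.random((24, 3))
xyz_patches = rgb_patches @ np.array([[0.4, 0.3, 0.2], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]]).T
M = fit_characterization_matrix(rgb_patches, xyz_patches)
print(to_absolute_xyz(rgb_patches[:2], M, exposure_scale=100.0))
```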
Activity Detection and Retrieval for Image and Video Data with Limited Training
2015-06-10
applications. Here we propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the ... For our second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity ...
Absolute or relative? A comparative analysis of the relationship between poverty and mortality.
Fritzell, Johan; Rehnberg, Johan; Bacchus Hertzman, Jennie; Blomgren, Jenni
2015-01-01
We aimed to examine the cross-national and cross-temporal association between poverty and mortality, in particular differentiating the impact of absolute and relative poverty. We employed pooled cross-sectional time series analysis. Our measure of relative poverty was based upon the standard 60% of median income. The measure of absolute, or fixed, poverty was based upon the US poverty threshold. Our analyses were conducted on data for 30 countries between 1978 and 2010, a total of 149 data points. We separately studied infant, child, and adult mortality. Our findings highlight the importance of relative poverty for mortality. Especially for infant and child mortality, we found that our estimates for fixed poverty are close to zero, whether in the crude models or when adjusting for gross domestic product. Conversely, the relative poverty estimates increased when adjusting for confounders. Our results seemed robust to a number of sensitivity tests. If we agree that risk of death is important, the public policy implication of our findings is that relative poverty, which has close associations to overall inequality, should be a major concern also among rich countries.
Simulated cosmic microwave background maps at 0.5 deg resolution: Unresolved features
NASA Technical Reports Server (NTRS)
Kogut, A.; Hinshaw, G.; Bennett, C. L.
1995-01-01
High-contrast peaks in the cosmic microwave background (CMB) anisotropy can appear as unresolved sources to observers. We fit simulated CMB maps generated with a cold dark matter model to a set of unresolved features at instrumental resolution 0.5 deg-1.5 deg to derive the integral number density per steradian n(>|T|) of features brighter than threshold temperature |T|, and compare the results to recent experiments. A typical medium-scale experiment observing 0.001 sr at 0.5 deg resolution would expect to observe one feature brighter than 85 micro-K after convolution with the beam profile, with less than 5% probability of observing a source brighter than 150 micro-K. Increasing the power-law index of primordial density perturbations n from 1 to 1.5 raises these temperature limits |T| by a factor of 2. The MSAM features are in agreement with standard cold dark matter models and are not necessarily evidence for processes beyond the standard model.
Heinrichs-Graham, Elizabeth; Wilson, Tony W
2016-07-01
Previous research has connected a specific pattern of beta oscillatory activity to proper motor execution, but no study to date has directly examined how resting beta levels affect motor-related beta oscillatory activity in the motor cortex. Understanding this relationship is imperative to determining the basic mechanisms of motor control, as well as the impact of pathological beta oscillations on movement execution. In the current study, we used magnetoencephalography (MEG) and a complex movement paradigm to quantify resting beta activity and movement-related beta oscillations in the context of healthy aging. We chose healthy aging as a model because preliminary evidence suggests that beta activity is elevated in older adults, and thus by examining older and younger adults we were able to naturally vary resting beta levels. To this end, healthy younger and older participants were recorded during motor performance and at rest. Using beamforming, we imaged the peri-movement beta event-related desynchronization (ERD) and extracted virtual sensors from the peak voxels, which enabled absolute and relative beta power to be assessed. Interestingly, absolute beta power during the pre-movement baseline was much stronger in older relative to younger adults, and older adults also exhibited proportionally large beta desynchronization (ERD) responses during motor planning and execution compared to younger adults. Crucially, we found a significant relationship between spontaneous (resting) beta power and beta ERD magnitude in both primary motor cortices, above and beyond the effects of age. A similar link was found between beta ERD magnitude and movement duration. These findings suggest a direct linkage between beta reduction during movement and spontaneous activity in the motor cortex, such that as spontaneous beta power increases, a greater reduction in beta activity is required to execute movement. We propose that, on an individual level, the primary motor cortices have an absolute threshold of beta power that must be reached in order to move, and that an inability to suppress beta power to this threshold results in an increase in movement duration. Copyright © 2016 Elsevier Inc. All rights reserved.
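The relative beta measure referred to above is conventionally expressed as a percent change from baseline; the short sketch below illustrates that computation on a generic band-power time series (the function name and indexing scheme are assumptions, not the authors' pipeline).

```python
import numpy as np

def beta_erd_percent(power_timeseries, baseline_idx, move_idx):
    """Event-related desynchronization expressed relative to baseline:
    ERD% = (P_move - P_baseline) / P_baseline * 100 (negative = desynchronization).
    power_timeseries: beta-band power per time sample, e.g. from a beamformer
    virtual sensor. Returns the relative ERD and the absolute baseline power."""
    p_base = np.mean(power_timeseries[baseline_idx])
    p_move = np.mean(power_timeseries[move_idx])
    return (p_move - p_base) / p_base * 100.0, p_base

# Toy usage on synthetic power values
power = np.concatenate([np.full(100, 2.0), np.full(100, 1.2)])
erd, baseline_power = beta_erd_percent(power, slice(0, 100), slice(100, 200))
print(erd, baseline_power)
```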
Activation cross section and isomeric cross section ratio for the 76Ge(n,2n)75m,gGe process
NASA Astrophysics Data System (ADS)
Luo, Junhua; Jiang, Li; Wang, Xinxing
2018-04-01
We measured neutron-induced reaction cross sections for the 76Ge(n,2n)75m,gGe reactions and their isomeric cross section ratios σm/σg at three neutron energies between 13 and 15 MeV by an activation and off-line γ-ray spectrometric technique using the K-400 Neutron Generator at the Chinese Academy of Engineering Physics (CAEP). Ge samples and Nb monitor foils were activated together to determine the reaction cross section and the incident neutron flux. The monoenergetic neutron beams were formed via the 3H(d,n)4He reaction. The pure cross section of the ground state was derived from the absolute cross section of the metastable state and the residual nuclear decay analysis. The cross sections were also calculated using the nuclear model code TALYS-1.8 with different level density options at neutron energies varying from the reaction threshold to 20 MeV. Results are discussed and compared with the corresponding literature data.
Noninvasive Sensor for Measuring Muscle Metabolism During Exercise
NASA Technical Reports Server (NTRS)
Soller, B. R.; Yang, Y.; Lee, S. M. C.; Soyemi, O. O.; Wilson, C.; Hagan, R. D.
2007-01-01
Measurements of oxygen uptake (VO2) and lactate threshold (LT) are utilized to assess changes in aerobic capacity and the efficacy of exercise countermeasures in astronauts. During extravehicular activity (EVA), real-time knowledge of VO2 and relative work intensity can be used to monitor crew activity levels and organize tasks to reduce the cumulative effects of fatigue. Currently VO2 and LT are determined with complicated measurement techniques that require sampling of expired ventilatory gases, which may not be accurate in enclosed, oxygen-rich environments such as the EVA suit. The UMMS team has developed a novel near infrared spectroscopic (NIRS) system which noninvasively, simultaneously and continuously measures muscle oxygen tension, oxygen saturation, pH (pHm), and hematocrit from a small sensor placed on the leg. This system is unique in that it allows accurate, absolute measurement of these parameters in the thigh muscle by correcting spectra for the interference from skin pigment and fat. These parameters can be used to estimate VO2 and LT. A preliminary evaluation of the system's capabilities was performed in the NASA JSC Exercise Physiology Lab.
Ruan, Chunhai; Huang, Hai; Rodgers, M T
2008-02-01
Threshold collision-induced dissociation techniques are employed to determine the bond dissociation energies (BDEs) of complexes of alkali metal cations to trimethyl phosphate, TMP. Endothermic loss of the intact TMP ligand is the only dissociation pathway observed for all complexes. Theoretical calculations at the B3LYP/6-31G* level of theory are used to determine the structures, vibrational frequencies, and rotational constants of neutral TMP and the M+(TMP) complexes. Theoretical BDEs are determined from single point energy calculations at the B3LYP/6-311+G(2d,2p) level using the B3LYP/6-31G* optimized geometries. The agreement between theory and experiment is reasonably good for all complexes except Li+(TMP). The absolute M+-(TMP) BDEs are found to decrease monotonically as the size of the alkali metal cation increases. No activated dissociation was observed for alkali metal cation binding to TMP. The binding of alkali metal cations to TMP is compared with that to acetone and methanol.
Behavior of motor units in human biceps brachii during a submaximal fatiguing contraction.
Garland, S J; Enoka, R M; Serrano, L P; Robinson, G A
1994-06-01
The activity of 50 single motor units was recorded in the biceps brachii muscle of human subjects while they performed submaximal isometric elbow flexion contractions that were sustained to induce fatigue. The purposes of this study were to examine the influence of fatigue on motor unit threshold force and to determine the relationship between the threshold force of recruitment and the initial interimpulse interval on the discharge rates of single motor units during a fatiguing contraction. The discharge rate of most motor units that were active from the beginning of the contraction declined during the fatiguing contraction, whereas the discharge rates of most newly recruited units were either constant or increased slightly. The absolute threshold forces of recruitment and derecruitment decreased, and the variability of interimpulse intervals increased after the fatigue task. The change in motor unit discharge rate during the fatigue task was related to the initial rate, but the direction of the change in discharge rate could not be predicted from the threshold force of recruitment or the variability in the interimpulse intervals. The discharge rate of most motor units declined despite an increase in the excitatory drive to the motoneuron pool during the fatigue task.
The simple ears of noctuoid moths are tuned to the calls of their sympatric bat community.
ter Hofstede, Hannah M; Goerlitz, Holger R; Ratcliffe, John M; Holderied, Marc W; Surlykke, Annemarie
2013-11-01
Insects with bat-detecting ears are ideal animals for investigating sensory system adaptations to predator cues. Noctuid moths have two auditory receptors (A1 and A2) sensitive to the ultrasonic echolocation calls of insectivorous bats. Larger moths are detected at greater distances by bats than smaller moths. Larger moths also have lower A1 best thresholds, allowing them to detect bats at greater distances and possibly compensating for their increased conspicuousness. Interestingly, the sound frequency at the lowest threshold is lower in larger than in smaller moths, suggesting that the relationship between threshold and size might vary across frequencies used by different bat species. Here, we demonstrate that the relationships between threshold and size in moths were only significant at some frequencies, and these frequencies differed between three locations (UK, Canada and Denmark). The relationships were more likely to be significant at call frequencies used by proportionately more bat species in the moths' specific bat community, suggesting an association between the tuning of moth ears and the cues provided by sympatric predators. Additionally, we found that the best threshold and best frequency of the less sensitive A2 receptor are also related to size, and that these relationships hold when controlling for evolutionary relationships. The slopes of best threshold versus size differ, however, such that the difference in threshold between A1 and A2 is greater for larger than for smaller moths. The shorter time from A1 to A2 excitation in smaller than in larger moths could potentially compensate for shorter absolute detection distances in smaller moths.
Nylen, Kirk; Likhodii, Sergei; Abdelmalik, Peter A; Clarke, Jasper; Burnham, W McIntyre
2005-08-01
The pentylenetetrazol (PTZ) infusion test was used to compare seizure thresholds in adult and young rats fed either a 4:1 ketogenic diet (KD) or a 6.3:1 KD. We hypothesized that both KDs would significantly elevate seizure thresholds and that the 4:1 KD would serve as a better model of the KD used clinically. Ninety adult rats and 75 young rats were placed on one of five experimental diets: (a) a 4:1 KD, (b) a control diet balanced to the 4:1 KD, (c) a 6.3:1 KD, (d) a standard control diet, or (e) an ad libitum standard control diet. All subjects were seizure tested by using the PTZ infusion test. Blood glucose and beta-hydroxybutyrate (beta-OHB) levels were measured. Neither KD elevated absolute "latencies to seizure" in young or adult rats. Similarly, neither KD elevated "threshold doses" in adult rats. In young rats, the 6.3:1 KD, but not the 4:1 KD, significantly elevated threshold doses. The 6.3:1 KD group showed poorer weight gain than the 4:1 KD group when compared with respective controls. The most dramatic discrepancies were seen in young rats. "Threshold doses" and "latency to seizure" data provided conflicting measures of seizure threshold. This was likely due to the inflation of threshold doses calculated by using the much smaller body weights found in the 6.3:1 KD group. Ultimately, the PTZ infusion test in rats may not be a good preparation to model the anticonvulsant effects of the KD seen clinically, especially when dietary treatments lead to significantly mismatched body weights between the groups.
Dantrolene Reduces the Threshold and Gain for Shivering
Lin, Chun-Ming; Neeru, Sharma; Doufas, Anthony G.; Liem, Edwin; Shah, Yunus Muneer; Wadhwa, Anupama; Lenhardt, Rainer; Bjorksten, Andrew; Kurz, Andrea
2005-01-01
Dantrolene is used for treatment of life-threatening hyperthermia, yet its thermoregulatory effects are unknown. We tested the hypothesis that dantrolene reduces the threshold (triggering core temperature) and gain (incremental increase) of shivering. With IRB approval and informed consent, healthy volunteers were evaluated on two random days: control and dantrolene (≈2.5 mg/kg plus a continuous infusion). In study 1, 9 men were warmed until sweating was provoked and then cooled until arterio-venous shunt constriction and shivering occurred. Sweating was quantified on the chest using a ventilated capsule. Absolute right middle fingertip blood flow was quantified using venous-occlusion volume plethysmography. A sustained increase in oxygen consumption identified the shivering threshold. In study 2, 9 men were given cold Ringer's solution IV to reduce core temperature ≈2°C/h. Cooling was stopped when shivering intensity no longer increased with further core cooling. The gain of shivering was the slope of oxygen consumption vs. core temperature regression. In Study 1, sweating and vasoconstriction thresholds were similar on both days. In contrast, shivering threshold decreased 0.3±0.3°C, P=0.004, on the dantrolene day. In Study 2, dantrolene decreased the shivering threshold from 36.7±0.2 to 36.3±0.3°C, P=0.01 and systemic gain from 353±144 to 211±93 ml·min−1·°C−1, P=0.02. Thus, dantrolene substantially decreased the gain of shivering, but produced little central thermoregulatory inhibition. PMID:15105208
The recalibration of the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Imhoff, Catherine L.; Oliversen, Nancy A.; Nichols-Bohlin, Joy; Casatella, Angelo; Lloyd, Christopher
1988-01-01
The IUE instrument was recalibrated because of long time-scale changes in the scientific instrument, a better understanding of the performance of the instrument, improved sets of calibration data, and improved analysis techniques. Calibrations completed or planned include intensity transfer functions (ITF), low-dispersion absolute calibrations, high-dispersion ripple corrections and absolute calibrations, improved geometric mapping of the ITFs to spectral images, studies to improve the signal-to-noise, enhanced absolute calibrations employing corrections for time, temperature, and aperture dependence, and photometric and geometric calibrations for the FES.
NASA Technical Reports Server (NTRS)
Haugen, H. K.; Weitz, E.; Leone, S. R.
1985-01-01
Various techniques have been used to study photodissociation dynamics of the halogens and interhalogens. The quantum yields obtained by these techniques differ widely. The present investigation is concerned with a qualitatively new approach for obtaining highly accurate quantum yields for electronically excited states. This approach makes it possible to obtain an accuracy of 1 percent to 3 percent. It is shown that measurement of the initial transient gain/absorption vs the final absorption in a single time-resolved signal is a very accurate technique in the study of absolute branching fractions in photodissociation. The new technique is found to be insensitive to pulse and probe laser characteristics, molecular absorption cross sections, and absolute precursor density.
Power ramp induced iodine and cesium redistribution in LWR fuel rods
NASA Astrophysics Data System (ADS)
Sontheimer, F.; Vogl, W.; Ruyter, I.; Markgraf, J.
1980-01-01
Volatile fission product migration in LWR fuel rods that are power ramped above a certain threshold, beyond the envelope of their previous power history, plays an important role in stress corrosion cracking of Zircaloy. This may cause fuel rods to fail at stresses below the yield strength. In the HFR, Petten, many power ramp experiments have been performed with subsequent examination of the ramped rods for fission product distribution. This study describes the measurement of iodine and cesium distribution using γ-spectroscopy of I-131 and Cs-137. An evaluation method is presented which makes the determination of absolute amounts of I/Cs feasible. It is shown that a threshold for I/Cs redistribution exists, beyond which the redistribution depends strongly on local fuel rod power and fuel type.
Comparison of hearing and voicing ranges in singing
NASA Astrophysics Data System (ADS)
Hunter, Eric J.; Titze, Ingo R.
2003-04-01
The spectral and dynamic ranges of the human voice of professional and nonprofessional vocalists were compared to the auditory hearing and feeling thresholds at a distance of one meter. In order to compare these, an analysis was done in true dB SPL, not just relative dB as is usually done in speech analysis. The methodology of converting the recorded acoustic signal to absolute pressure units was described. The human voice range of a professional vocalist appeared to match the dynamic range of the auditory system at some frequencies. In particular, it was demonstrated that professional vocalists were able to make use of the most sensitive part of the hearing thresholds (around 4 kHz) through the use of a learned vocal ring or singer's formant. [Work sponsored by NIDCD.]
NASA Astrophysics Data System (ADS)
Gusakov, E. Z.; Popov, A. Yu.; Saveliev, A. N.
2018-06-01
We analyze the saturation of the low-threshold absolute parametric decay instability of an extraordinary pump wave leading to the excitation of two upper hybrid (UH) waves, only one of which is trapped in the vicinity of a local maximum of the plasma density profile. The pump depletion and the secondary decay of the localized daughter UH wave are treated as the most likely moderators of a primary two-plasmon decay instability. The reduced equations describing the nonlinear saturation phenomena are derived. The general analytical consideration is accompanied by the numerical analysis performed under the experimental conditions typical of the off-axis X2-mode ECRH experiments at TEXTOR. The possibility of substantial (up to 20%) anomalous absorption of the pump wave is predicted.
Kaur, Taranjit; Saini, Barjinder Singh; Gupta, Savita
2018-03-01
In the present paper, a hybrid multilevel thresholding technique that combines intuitionistic fuzzy sets and Tsallis entropy has been proposed for the automatic delineation of the tumor from magnetic resonance images having vague boundaries and poor contrast. This novel technique takes into account both the image histogram and the uncertainty information for the computation of multiple thresholds. The benefit of the methodology is that it provides fast and improved segmentation for complex tumorous images with imprecise gray levels. To further boost the computational speed, mutation-based particle swarm optimization is used to select the most optimal threshold combination. The accuracy of the proposed segmentation approach has been validated on simulated and real low-grade glioma tumor volumes taken from the MICCAI brain tumor segmentation (BRATS) challenge 2012 dataset and on clinical tumor images, so as to corroborate its generality and novelty. The designed technique achieves an average Dice overlap equal to 0.82010, 0.78610 and 0.94170 for the three datasets. Further, a comparative analysis has also been made with eight existing multilevel thresholding implementations so as to show the superiority of the designed technique. In comparison, the results indicate a mean improvement in Dice by an amount equal to 4.00% (p < 0.005), 9.60% (p < 0.005) and 3.58% (p < 0.005), respectively, in contrast to the fuzzy Tsallis approach.
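As background for the entropy criterion named above, the sketch below shows a plain bi-level Tsallis-entropy threshold search; the paper's hybrid additionally uses intuitionistic fuzzy sets and mutation-based particle swarm optimization for the multilevel case, which are not reproduced here.

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Bi-level Tsallis-entropy threshold: exhaustively search the gray-level
    histogram for the threshold t maximizing S_A + S_B + (1 - q) * S_A * S_B,
    where S_A and S_B are the Tsallis entropies of the two classes."""
    p = hist.astype(float) / hist.sum()
    best_t, best_score = 1, -np.inf
    for t in range(1, len(p) - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa <= 0 or pb <= 0:
            continue
        sa = (1.0 - np.sum((p[:t] / pa) ** q)) / (q - 1.0)
        sb = (1.0 - np.sum((p[t:] / pb) ** q)) / (q - 1.0)
        score = sa + sb + (1.0 - q) * sa * sb
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Toy usage with a synthetic bimodal histogram
hist = np.concatenate([np.full(128, 50.0), np.full(128, 5.0)])
print(tsallis_threshold(hist))
```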
Absolute calibration of Doppler coherence imaging velocity images
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.
2017-08-01
A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm; a two order of magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line-shapes.
A Survey of Architectural Techniques for Near-Threshold Computing
Mittal, Sparsh
2015-12-28
Energy efficiency has now become the primary obstacle in scaling the performance of all classes of computing systems. Low-voltage computing and, specifically, near-threshold voltage computing (NTC), which involves operating the transistor very close to and yet above its threshold voltage, holds the promise of providing a many-fold improvement in energy efficiency. However, use of NTC also presents several challenges, such as increased parametric variation, failure rate, and performance loss. Our paper surveys several recent techniques which aim to offset these challenges for fully leveraging the potential of NTC. By classifying these techniques along several dimensions, we also highlight their similarities and differences. Ultimately, we hope that this paper will provide insights into state-of-the-art NTC techniques to researchers and system designers and inspire further research in this field.
Liu, Zhihua; Yang, Jian; He, Hong S.
2013-01-01
The relative importance of fuel, topography, and weather on fire spread varies at different spatial scales, but how the relative importance of these controls respond to changing spatial scales is poorly understood. We designed a “moving window” resampling technique that allowed us to quantify the relative importance of controls on fire spread at continuous spatial scales using boosted regression trees methods. This quantification allowed us to identify the threshold value for fire size at which the dominant control switches from fuel at small sizes to weather at large sizes. Topography had a fluctuating effect on fire spread across the spatial scales, explaining 20–30% of relative importance. With increasing fire size, the dominant control switched from bottom-up controls (fuel and topography) to top-down controls (weather). Our analysis suggested that there is a threshold for fire size, above which fires are driven primarily by weather and more likely lead to larger fire size. We suggest that this threshold, which may be ecosystem-specific, can be identified using our “moving window” resampling technique. Although the threshold derived from this analytical method may rely heavily on the sampling technique, our study introduced an easily implemented approach to identify scale thresholds in wildfire regimes. PMID:23383247
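A minimal sketch of how a "moving window" relative-importance analysis with boosted regression trees might be organized is given below; the predictor set, window size, and model settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def moving_window_importance(X, y, fire_size, window=200, step=50):
    """Relative importance of predictors across continuous fire-size scales:
    sort observations by fire size, slide a window over them, refit a boosted
    regression tree in each window, and record the feature importances.
    The meaning of the columns of X (fuel, topography, weather) is an assumption."""
    order = np.argsort(fire_size)
    X, y, fire_size = X[order], y[order], fire_size[order]
    results = []
    for start in range(0, len(y) - window + 1, step):
        sl = slice(start, start + window)
        model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
        model.fit(X[sl], y[sl])
        results.append((fire_size[sl].mean(), model.feature_importances_))
    return results
```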
CORRELATIONS IN LIGHT FROM A LASER AT THRESHOLD
Temporal correlations in the electromagnetic field radiated by a laser in the threshold region of oscillation (from one tenth of threshold intensity ... to ten times threshold) were measured by photoelectron counting techniques. The experimental results were compared with theoretical predictions based ... shows that the intensity fluctuations at about one tenth threshold are nearly those of a Gaussian field and continuously approach those of a constant amplitude field as the intensity is increased. (Author)
Modelling the regulatory system for diabetes mellitus with a threshold window
NASA Astrophysics Data System (ADS)
Yang, Jin; Tang, Sanyi; Cheke, Robert A.
2015-05-01
Piecewise (or non-smooth) glucose-insulin models with threshold windows for type 1 and type 2 diabetes mellitus are proposed and analyzed with a view to improving understanding of the glucose-insulin regulatory system. For glucose-insulin models with a single threshold, the existence and stability of regular, virtual, pseudo-equilibria and tangent points are addressed. Then the relations between regular equilibria and a pseudo-equilibrium are studied. Furthermore, the sufficient and necessary conditions for the global stability of regular equilibria and the pseudo-equilibrium are provided by using qualitative analysis techniques of non-smooth Filippov dynamic systems. Sliding bifurcations related to boundary node bifurcations were investigated with theoretical and numerical techniques, and insulin clinical therapies are discussed. For glucose-insulin models with a threshold window, the effects of glucose thresholds or the widths of threshold windows on the durations of insulin therapy and glucose infusion were addressed. The duration of the effects of an insulin injection is sensitive to the variation of thresholds. Our results indicate that blood glucose level can be maintained within a normal range using piecewise glucose-insulin models with a single threshold or a threshold window. Moreover, our findings suggest that it is critical to individualise insulin therapy for each patient separately, based on initial blood glucose levels.
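To illustrate the kind of piecewise (Filippov-type) switching described above, the toy sketch below applies an insulin input only when glucose exceeds a threshold; parameter values and the functional form are assumptions for illustration, not the paper's model.

```python
from scipy.integrate import solve_ivp

def glucose_insulin(t, y, G_thresh=7.0):
    """Toy piecewise glucose-insulin system: an extra insulin input is applied
    only when glucose G exceeds a threshold, producing a non-smooth right-hand
    side of the kind analyzed with Filippov techniques."""
    G, I = y
    therapy = 0.5 if G > G_thresh else 0.0          # switching (threshold) term
    dG = 1.0 - 0.1 * G - 0.05 * G * I               # hepatic release, insulin-dependent uptake
    dI = therapy + 0.02 * G - 0.3 * I               # injected plus endogenous insulin, clearance
    return [dG, dI]

sol = solve_ivp(glucose_insulin, (0.0, 100.0), [10.0, 0.2], max_step=0.1)
print("final glucose:", sol.y[0, -1])
```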
Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)
2002-01-01
A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions depend on the threshold value and reflect the distribution of pixel values in two classes; thus, the technique minimizes the classification error. This new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses that of the Huang-Wang and Otsu methods when the image consists of a textured background and poor printing quality. All three methods perform well but yield different binarization results if the background and foreground of the image have well-separated gray-level ranges.
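For context, the sketch below shows a simplified Huang-Wang-style fuzzy threshold search that minimizes a fuzzy entropy over candidate thresholds; the paper's own membership functions and entropy measure may differ in detail.

```python
import numpy as np

def fuzzy_entropy_threshold(hist):
    """Simplified fuzzy-entropy threshold: for each candidate t, a gray level's
    membership in its class decreases with distance from the class mean, and the
    threshold minimizing the total fuzzy (Shannon) entropy is chosen."""
    levels = np.arange(len(hist), dtype=float)
    h = hist.astype(float)
    C = len(hist) - 1.0
    best_t, best_E = 1, np.inf
    for t in range(1, len(hist) - 1):
        w0, w1 = h[:t].sum(), h[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (h[:t] * levels[:t]).sum() / w0
        mu1 = (h[t:] * levels[t:]).sum() / w1
        mu = np.where(levels < t, mu0, mu1)
        u = 1.0 / (1.0 + np.abs(levels - mu) / C)        # membership in [0.5, 1]
        u = np.clip(u, 1e-9, 1 - 1e-9)
        S = -u * np.log(u) - (1 - u) * np.log(1 - u)     # Shannon fuzziness per level
        E = (h * S).sum()
        if E < best_E:
            best_t, best_E = t, E
    return best_t
```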
Computation of the soft anomalous dimension matrix in coordinate space
NASA Astrophysics Data System (ADS)
Mitov, Alexander; Sterman, George; Sung, Ilmo
2010-08-01
We complete the coordinate space calculation of the three-parton correlation in the two-loop massive soft anomalous dimension matrix. The full answer agrees with the result found previously by a different approach. The coordinate space treatment of renormalized two-loop gluon exchange diagrams exhibits their color symmetries in a transparent fashion. We compare coordinate space calculations of the soft anomalous dimension matrix with massive and massless eikonal lines and examine its nonuniform limit at absolute threshold.
NASA Astrophysics Data System (ADS)
Lauer, S.; Liebel, H.; Vollweiler, F.; Schmoranzer, H.; Reichardt, G.; Wilhelmi, O.; Mentzel, G.; Schartner, K.-H.; Sukhorukov, V. L.; Lagutin, B. M.; Petrov, I. D.; Demekhin, Ph. V.
1998-10-01
The absolute Ar 3s-electron photoionization cross section was measured in the exciting-photon energy range from 30.65 to 31.75 eV by photon-induced fluorescence spectroscopy (PIFS). The bandwidth of the exciting synchrotron radiation was 4.8 meV. The profiles of the resonances observed in the Ar 3s-electron photoionization were compared with the profiles of the resonances in the total photoabsorption.
Vibrotactile perception assessment for a haptic interface on an antigravity suit.
Ko, Sang Min; Lee, Kwangil; Kim, Daeho; Ji, Yong Gu
2017-01-01
Haptic technology is used in various fields to transmit information to the user with or without visual and auditory cues. This study aimed to provide preliminary data for use in developing a haptic interface for an antigravity (anti-G) suit. With the structural characteristics of the anti-G suit in mind, we determined five areas on the body (lower back, outer thighs, inner thighs, outer calves, and inner calves) on which to install ten bar-type eccentric rotating mass (ERM) motors as vibration actuators. To determine the design factors of the haptic anti-G suit, we conducted three experiments to find the absolute threshold, moderate intensity, and subjective assessments of vibrotactile stimuli. Twenty-six fighter pilots participated in the experiments, which were conducted in a fixed-based flight simulator. From the results of our study, we recommend 1) absolute thresholds of ∼11.98-15.84 Hz and 102.01-104.06 dB, 2) moderate intensities of 74.36 Hz and 126.98 dB for the lower back and 58.65 Hz and 122.37 dB for either side of the thighs and calves, and 3) subjective assessments of vibrotactile stimuli (displeasure, easy to perceive, and level of comfort). The results of this study will be useful for the design of a haptic anti-G suit. Copyright © 2016 Elsevier Ltd. All rights reserved.
Absolute Calibration of Si iRMs used for Measurements of Si Paleo-nutrient proxies
NASA Astrophysics Data System (ADS)
Vocke, R. D., Jr.; Rabb, S. A.
2016-12-01
Silicon isotope variations (reported as δ30Si and δ29Si, relative to NBS28) in silicic acid dissolved in ocean waters, in biogenic silica and in diatoms are extremely informative paleo-nutrient proxies. The resolution and comparability of such measurements depend on the quality of the isotopic Reference Materials (iRMs) defining the delta scale. We report new absolute Si isotopic measurements on the iRMs NBS28 (RM 8546 - Silica Sand), Diatomite, and Big Batch using the Avogadro measurement approach and compare them with prior assessments of these iRMs. The Avogadro Si measurement technique was developed by the German Physikalisch-Technische Bundesanstalt (PTB) to provide a precise and highly accurate method to measure absolute isotopic ratios in highly enriched 28Si (99.996%) material. These measurements are part of an international effort to redefine the kg and mole based on the Planck constant h and the Avogadro constant NA, respectively (Vocke et al., 2014 Metrologia 51, 361; Azuma et al., 2015 Metrologia 52, 360). This approach produces absolute Si isotope ratio data with lower levels of uncertainty than the traditional "Atomic Weights" method of absolute isotope ratio measurement calibration. This is illustrated in Fig. 1, where absolute Si isotopic measurements on SRM 990, separated by 40+ years of advances in instrumentation, are compared. The availability of this new technique does not imply that absolute Si isotopic ratios are, or ever will be, preferable for routine Si isotopic measurements that seek isotopic variations in nature, because they are not. However, by determining the absolute isotopic ratios of all the Si iRM scale artifacts, such iRMs become traceable to the metric system (SI), thereby automatically conferring on all artifact-based δ30Si and δ29Si measurements traceability to the base SI unit, the mole. Such traceability should help reduce the potential for bias between different iRMs and facilitate the replacement of delta-scale artefacts when they run out. Fig. 1: Comparison of absolute isotopic measurements of SRM 990 using two radically different approaches to absolute calibration and mass bias corrections.
Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R
To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding and region growing, respectively. Thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, the cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Glass, John O.; Reddick, Wilburn E.; Reeves, Cara; Pui, Ching-Hon
2004-05-01
Reliably quantifying therapy-induced leukoencephalopathy in children treated for cancer is a challenging task due to its varying MR properties and similarity to normal tissues and imaging artifacts. T1, T2, PD, and FLAIR images were analyzed for a subset of 15 children from an institutional protocol for the treatment of acute lymphoblastic leukemia. Three different analysis techniques were compared to examine improvements in the segmentation accuracy of leukoencephalopathy versus manual tracings by two expert observers. The first technique utilized no a priori information and a white matter mask based on the segmentation of the first serial examination of each patient. MR images were then segmented with a Kohonen Self-Organizing Map. The other two techniques combine a priori maps from the ICBM atlas spatially normalized to each patient and resliced using SPM99 software. The a priori maps were included as input, and a gradient magnitude threshold calculated on the FLAIR images was also utilized. The second technique used a 2-dimensional threshold, while the third algorithm utilized a 3-dimensional threshold. Kappa values relative to each observer were compared for the three techniques, and improvements were seen with each addition to the original algorithm (Observer 1: 0.651, 0.653, 0.744; Observer 2: 0.603, 0.615, 0.699).
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
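A generic sketch of the Monte Carlo parameter-variation idea described above follows; the unfold routine, channel count, and error magnitudes are placeholders, not the facility's actual code or calibration data.

```python
import numpy as np

def monte_carlo_flux_uncertainty(voltages, sigma, unfold, n_trials=1000, seed=0):
    """Perturb each channel's voltage by its one-sigma Gaussian error (calibration
    and unfold uncertainty combined), re-run the unfold algorithm for each trial,
    and take statistics of the resulting fluxes. `unfold` stands in for the
    facility's actual unfold code (assumption)."""
    rng = np.random.default_rng(seed)
    fluxes = np.array([unfold(voltages + rng.normal(0.0, sigma))
                       for _ in range(n_trials)])
    return fluxes.mean(axis=0), fluxes.std(axis=0)

# Toy usage with a placeholder unfold that just sums the channels
mean_flux, flux_err = monte_carlo_flux_uncertainty(
    voltages=np.ones(18), sigma=0.05 * np.ones(18),
    unfold=lambda v: np.array([v.sum()]))
print(mean_flux, flux_err)
```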
Continuous Seismic Threshold Monitoring
1992-05-31
Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides ... area. Two approaches are presented. Site-specific monitoring: by focusing a seismic network on a specific target site, continuous threshold monitoring ... recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic ...
Radiometrically accurate scene-based nonuniformity correction for array sensors.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2003-10-01
A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.
d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K
2015-12-01
Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver operating characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds differed by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.
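The ROC-based threshold derivation mentioned above is often implemented by maximizing Youden's J; the sketch below shows that common variant (the study's exact optimality criterion is not restated here, and for parameters where lower values indicate infarction, such as cerebral blood flow, the values would be negated).

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(infarcted, parameter_values):
    """Pick the parameter cutoff maximizing Youden's J = sensitivity + specificity - 1.
    infarcted: 1 where the voxel lies in the follow-up infarct, 0 otherwise.
    parameter_values: e.g. Tmax per voxel; negate values for parameters where
    lower values indicate infarction (e.g. cerebral blood flow)."""
    fpr, tpr, thresholds = roc_curve(infarcted, parameter_values)
    j = tpr - fpr
    return thresholds[np.argmax(j)]
```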
Grosso, Matthew J; Frangiamore, Salvatore J; Ricchetti, Eric T; Bauer, Thomas W; Iannotti, Joseph P
2014-03-19
Propionibacterium acnes is a clinically relevant pathogen in total shoulder arthroplasty. The purpose of this study was to determine the sensitivity of frozen section histology in identifying patients with Propionibacterium acnes infection during revision total shoulder arthroplasty and to investigate various diagnostic thresholds of acute inflammation that may improve frozen section performance. We reviewed the results of forty-five patients who underwent revision total shoulder arthroplasty. Patients were divided into the non-infection group (n = 15), the Propionibacterium acnes infection group (n = 18), and the other infection group (n = 12). Routine preoperative testing was performed, and intraoperative tissue culture and frozen section histology were collected for each patient. The histologic diagnosis was determined by one pathologist for each of the four different thresholds. The absolute maximum polymorphonuclear leukocyte concentration was used to construct a receiver operating characteristic curve to determine a new potential optimal threshold. Using the current thresholds for grading frozen section histology, the sensitivity was lower for the Propionibacterium acnes infection group (50%) compared with the other infection group (67%). The specificity of frozen section was 100%. Using the receiver operating characteristic curve, an optimized threshold was found at a total of ten polymorphonuclear leukocytes in five high-power fields (400×). Using this threshold, the sensitivity of frozen section for Propionibacterium acnes was increased to 72%, and the specificity remained at 100%. Using current histopathology grading systems, frozen sections were specific but showed low sensitivity with respect to Propionibacterium acnes infection. A new threshold value of a total of ten or more polymorphonuclear leukocytes in five high-power fields may increase the sensitivity of frozen section, with minimal impact on specificity.
ERIC Educational Resources Information Center
Ericson, T. J.
1988-01-01
Describes an apparatus capable of measuring absolute temperatures of a tungsten filament bulb up to normal running temperature and measuring Boltzmann's constant to an accuracy of a few percent. Shows that electrical noise techniques are convenient for demonstrating how the concept of temperature is related to the micro- and macroscopic world. (CW)
Mosher Amides: Determining the Absolute Stereochemistry of Optically-Active Amines
ERIC Educational Resources Information Center
Allen, Damian A.; Tomaso, Anthony E., Jr.; Priest, Owen P.; Hindson, David F.; Hurlburt, Jamie L.
2008-01-01
The use of chiral reagents for the derivatization of optically-active amines and alcohols for the purpose of determining their enantiomeric purity or absolute configuration is a tool used by many chemists. Mosher's amide and Mosher's ester analyses are among the most reliable and most often used of these techniques. Despite this,…
Assessing Suturing Skills in a Self-Guided Learning Setting: Absolute Symmetry Error
ERIC Educational Resources Information Center
Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam
2009-01-01
Directed self-guidance, whereby trainees independently practice a skill-set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine if absolute symmetry error, a clinically important measure that can be…
NASA Astrophysics Data System (ADS)
Bancelin, Stéphane; Aimé, Carole; Gusachenko, Ivan; Kowalczuk, Laura; Latour, Gaël; Coradin, Thibaud; Schanne-Klein, Marie-Claire
2014-09-01
The quantification of collagen fibril size is a major issue for the investigation of pathological disorders associated with structural defects of the extracellular matrix. Second-harmonic generation microscopy is a powerful technique to characterize the macromolecular organization of collagen in unstained biological tissues. Nevertheless, due to the complex coherent building of this nonlinear optical signal, it has never been used to measure fibril diameter so far. Here we report absolute measurements of second-harmonic signals from isolated fibrils down to 30 nm diameter, via implementation of correlative second-harmonic-electron microscopy. Moreover, using analytical and numerical calculations, we demonstrate that the high sensitivity of this technique originates from the parallel alignment of collagen triple helices within fibrils and the subsequent constructive interferences of second-harmonic radiations. Finally, we use these absolute measurements as a calibration for ex vivo quantification of fibril diameter in the Descemet’s membrane of a diabetic rat cornea.
Moral absolutism and ectopic pregnancy.
Kaczor, C
2001-02-01
If one accepts a version of absolutism that excludes the intentional killing of any innocent human person from conception to natural death, ectopic pregnancy poses vexing difficulties. Given that the embryonic life almost certainly will die anyway, how can one retain one's moral principle and yet adequately respond to a situation that gravely threatens the life of the mother and her future fertility? The four options of treatment most often discussed in the literature are non-intervention, salpingectomy (removal of tube with embryo), salpingostomy (removal of embryo alone), and use of methotrexate (MXT). In this essay, I review these four options and introduce a fifth (the milking technique). In order to assess these options in terms of the absolutism mentioned, it will also be necessary to discuss various accounts of the intention/foresight distinction. I conclude that salpingectomy, salpingostomy, and the milking technique are compatible with absolutist presuppositions, but not the use of methotrexate.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not well suited to the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which exploits the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
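For illustration only, the sketch below shows the general "scan all gray levels and maximize a criterion" structure that a method like MWOC follows. The example criterion is a placeholder assumption (a simple image/two-level-approximation correlation weighted by the thresholded-pixel proportion), not the published MWOC formula, and it assumes an 8-bit grayscale image in which defects appear darker than the rail surface.

```python
import numpy as np

def criterion_maximizing_threshold(image, criterion):
    """Scan candidate gray levels (8-bit image assumed) and return the level
    that maximizes criterion(image, t)."""
    best_t, best_score = None, -np.inf
    for t in range(1, 255):
        score = criterion(image, t)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

def example_criterion(image, t):
    # Placeholder criterion (an assumption, not the MWOC definition):
    # correlation between the image and its two-level approximation,
    # multiplied by a weight based on the proportion of thresholded pixels.
    mask = image < t                      # defects assumed darker than background
    p = mask.mean()
    if p == 0.0 or p == 1.0:
        return -np.inf
    approx = np.where(mask, image[mask].mean(), image[~mask].mean())
    corr = np.corrcoef(image.ravel(), approx.ravel())[0, 1]
    return corr * p

# Usage (hypothetical): t = criterion_maximizing_threshold(gray.astype(float), example_criterion)
```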
Yu, Tzu-Ying; Jacobs, Robert J.; Anstice, Nicola S.; Paudel, Nabin; Harding, Jane E.; Thompson, Benjamin
2013-01-01
Purpose. We developed and validated a technique for measuring global motion perception in 2-year-old children, and assessed the relationship between global motion perception and other measures of visual function. Methods. Random dot kinematogram (RDK) stimuli were used to measure motion coherence thresholds in 366 children at risk of neurodevelopmental problems at 24 ± 1 months of age. RDKs of variable coherence were presented and eye movements were analyzed offline to grade the direction of the optokinetic reflex (OKR) for each trial. Motion coherence thresholds were calculated by fitting psychometric functions to the resulting datasets. Test–retest reliability was assessed in 15 children, and motion coherence thresholds were measured in a group of 10 adults using OKR and behavioral responses. Standard age-appropriate optometric tests also were performed. Results. Motion coherence thresholds were measured successfully in 336 (91.8%) children using the OKR technique, but only 31 (8.5%) using behavioral responses. The mean threshold was 41.7 ± 13.5% for 2-year-old children and 3.3 ± 1.2% for adults. Within-assessor reliability and test–retest reliability were high in children. Children's motion coherence thresholds were significantly correlated with stereoacuity (LANG I & II test, ρ = 0.29, P < 0.001; Frisby, ρ = 0.17, P = 0.022), but not with binocular visual acuity (ρ = 0.11, P = 0.07). In adults OKR and behavioral motion coherence thresholds were highly correlated (intraclass correlation = 0.81, P = 0.001). Conclusions. Global motion perception can be measured in 2-year-old children using the OKR. This technique is reliable and data from adults suggest that motion coherence thresholds based on the OKR are related to motion perception. Global motion perception was related to stereoacuity in children. PMID:24282224
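As a rough illustration of the final fitting step described above, the sketch below fits a psychometric function to proportion-correct data versus motion coherence and reads off a threshold parameter. The cumulative-Gaussian form, the fixed guess and lapse rates, and the simulated data are assumptions for the example; they do not reproduce the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(coherence, mu, sigma, lapse=0.02, guess=0.5):
    # Cumulative-Gaussian psychometric function with fixed guess/lapse rates
    return guess + (1.0 - guess - lapse) * norm.cdf(coherence, mu, sigma)

# Hypothetical coherence levels (%) and proportion of trials graded "correct direction"
coh  = np.array([5, 10, 20, 40, 60, 80], dtype=float)
pcor = np.array([0.50, 0.55, 0.65, 0.80, 0.92, 0.97])

params, _ = curve_fit(psychometric, coh, pcor, p0=[40.0, 15.0],
                      bounds=([0, 1], [100, 100]))
mu, sigma = params
print(f"motion coherence threshold (mu of fitted function): {mu:.1f}%")
```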
Investigation of advanced phase-shifting projected fringe profilometry techniques
NASA Astrophysics Data System (ADS)
Liu, Hongyu
1999-11-01
The phase-shifting projected fringe profilometry (PSPFP) technique is a powerful tool for the profile measurement of rough engineering surfaces. Compared with other competing techniques, this technique is notable for its full-field measurement capacity, system simplicity, high measurement speed, and low environmental vulnerability. The main purpose of this dissertation is to tackle three important problems, which severely limit the capability and the accuracy of the PSPFP technique, with some new approaches. Chapter 1 briefly introduces background information on the PSPFP technique, including the measurement principles, basic features, and related techniques. The objectives and organization of the thesis are also outlined. Chapter 2 gives a theoretical treatment of the absolute PSPFP measurement. The mathematical formulations and basic requirements of the absolute PSPFP measurement and its supporting techniques are discussed in detail. Chapter 3 introduces the experimental verification of the proposed absolute PSPFP technique. Some design details of a prototype system are discussed as supplements to the previous theoretical analysis. Various fundamental experiments performed for concept verification and accuracy evaluation are introduced together with some brief comments. Chapter 4 presents the theoretical study of speckle-induced phase measurement errors. In this analysis, the expression for speckle-induced phase errors is first derived based on the multiplicative noise model of image-plane speckles. The statistics and the system dependence of speckle-induced phase errors are then thoroughly studied through numerical simulations and analytical derivations. Based on the analysis, some suggestions on the system design are given to improve measurement accuracy. Chapter 5 discusses a new technique combating surface reflectivity variations. The formula used for error compensation is first derived based on a simplified model of the detection process. The techniques coping with two major effects of surface reflectivity variations are then introduced. Some fundamental problems in the proposed technique are studied through simulations. Chapter 6 briefly summarizes the major contributions of the current work and provides some suggestions for future research.
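The abstract does not give formulas, so for background the sketch below shows the generic N-step phase-extraction relation that phase-shifting fringe techniques share (a standard textbook step, not taken from this dissertation), assuming N fringe images captured with equal phase shifts of 2*pi/N.

```python
import numpy as np

def wrapped_phase(images):
    """Generic N-step phase-shifting: 'images' is an array of N fringe images
    acquired with phase shifts delta_n = 2*pi*n/N (n = 0..N-1), assuming the
    fringe model I_n = A + B*cos(phi + delta_n). Returns the wrapped phase."""
    images = np.asarray(images, dtype=float)
    n = np.arange(images.shape[0])
    shifts = 2.0 * np.pi * n / images.shape[0]
    num = np.tensordot(np.sin(shifts), images, axes=1)   # sum_n I_n * sin(delta_n)
    den = np.tensordot(np.cos(shifts), images, axes=1)   # sum_n I_n * cos(delta_n)
    return np.arctan2(-num, den)                          # wrapped phase in (-pi, pi]
```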
Eleven Colors That Are Almost Never Confused
NASA Astrophysics Data System (ADS)
Boynton, Robert M.
1989-08-01
1.1. Three functions of color vision. Setting aside the complex psychological effects of color, related to esthetics, fashion, and mood, three relatively basic functions of color vision, which can be examined scientifically, are discernable. (1) With the eye in a given state of adaptation, color vision allows the perception of signals that otherwise would be below threshold, and therefore lost to perception. Evidence for this comes from a variety of two-color threshold experiments. (2) Visible contours can be maintained by color differences alone, regardless of the relative radiances of the two parts of the field whose junction defines the border. For achromatic vision, contour disappears at the isoluminant point. (3) Color specifies what seems to be an absolute property of a surface, one that enhances its recognizability and allows a clearer separation and classification of non-contiguous elements in the visual field.
NASA Astrophysics Data System (ADS)
Salvador-Castiñeira, Paula; Hambsch, Franz-Josef; Göök, Alf; Vidali, Marzio; Hawkes, Nigel P.; Roberts, Neil J.; Taylor, Graeme C.; Thomas, David J.
2017-09-01
Cross-section measurements in the fast energy region are in demand as one of the key ingredients for modelling Generation-IV nuclear power plants. However, in facilities without time-of-flight capabilities, or where it is not convenient to use them, using the 235U(n,f) cross section as a benchmark would require careful knowledge of the room scatter in the experimental area. In this paper we present measurements of two threshold reactions, 238U(n,f) and 237Np(n,f), that could become a standard between their fission threshold and 2.5 MeV, if the discrepancies shown in the evaluations and in some experimental data can be resolved. The preliminary results are in agreement with the present ENDF/B-VII.1 evaluation.
Transverse mode coupling instability threshold with space charge and different wakefields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balbekov, V.
Transverse mode coupling instability of a bunch with space charge and wake field is considered in the framework of the boxcar model. Eigenfunctions of the bunch without wake are used as the basis for solution of the equations with the wake field included. The dispersion equation for the bunch eigentunes is obtained in the form of an infinite continued fraction. It is shown that the influence of space charge on the instability essentially depends on the wake sign. In particular, the threshold of the negative wake increases in absolute value while the space charge tune shift is rather small, and goes to zero at higher space charge. The explanation of this behavior is developed by analysis of the bunch spectrum. Finally, a comparison of the results with published articles is presented.
Amplitude distributions of the spider heartpulse in response to gravitational stimuli
NASA Technical Reports Server (NTRS)
Finck, A.
1984-01-01
The arachnid Nuctenea sclopetaria (Clerck), which possesses a neurogenic heart whose beat is under efferent control through a dorsal nerve arising from a brain center, is discussed. It was shown that the heart rate of this spider is also modulated by an afferent input associated with small increments of gravity. A compressive force on the order of 40 micron is sufficient to elicit a threshold change in heart rate for a typical (100 mg) spider. This obtains in a hyper-Gz field of less than 1.001. The functional relationship between gravity and heart rate is logarithmic between the absolute threshold and at least 1.5 Gz. A model was proposed in which equilibrium and movement are maintained by changes in blood pressure. It is concluded that the arachnid equilibrium system is like a weight detector which employs a hydraulic compensatory mechanism.
NASA Astrophysics Data System (ADS)
Hayama, K.; Ohyama, H.; Simoen, E.; Rafí, J. M.; Mercha, A.; Claeys, C.
2004-04-01
The degradation of the electrical properties of deep submicron metal-oxide-semiconductor field-effect transistors (MOSFETs) by 2 MeV electron irradiation at high temperatures was studied. The irradiation temperatures were 30, 100, 150 and 200 °C, and the fluence was fixed at 10^15 e/cm^2. For most experimental conditions, the threshold voltage (VT) is observed to reduce in absolute value both for n- and p-MOSFETs. This reduction is most pronounced at 100 °C, as at this irradiation temperature, the radiation-induced density of interface traps is highest. It is proposed that hydrogen neutralization of the dopants in the substrate plays a key role, whereby the hydrogen is released from the gate by the 2 MeV electrons.
Human factors studies of control configurations for advanced transport aircraft
NASA Technical Reports Server (NTRS)
Snyder, Harry L.; Monty, Robert W.; Old, Joe
1985-01-01
This research investigated the threshold levels of display luminance contrast which were required to interpret static, achromatic, integrated displays of primary flight information. A four-factor within-subjects design was used to investigate the influences of type of flight variable information, the level of ambient illumination, the type of control input, and the size of the display symbology on the setting of these interpretability thresholds. A three-alternative forced choice paradigm was used in conjunction with the method of adjustments to obtain a measure of the upper limen of display luminance contrast needed to interpret a complex display of primary flight information. The pattern of results and the absolute magnitudes of the luminance contrast settings were found to be in good agreement with previously reported data from psychophysical investigations of display luminance contrast requirements.
Collective stimulated Brillouin backscatter
NASA Astrophysics Data System (ADS)
Lushnikov, Pavel; Rose, Harvey
2007-11-01
We develop the statistical theory of linear collective stimulated Brillouin backscatter (CBSBS) in spatially and temporally incoherent laser beam. Instability is collective because it does not depend on the dynamics of isolated hot spots (speckles) of laser intensity, but rather depends on averaged laser beam intensity, optic f/#, and laser coherence time, Tc. CBSBS has a much larger threshold than a classical coherent beam's in long-scale-length high temperature plasma. It is a novel regime in which Tc is too large for applicability of well-known statistical theories (RPA) but Tc must be small enough to suppress single speckle processes such as self-focusing. Even if laser Tc is too large for a priori applicability of our theory, collective forward SBS^1, perhaps enhanced by high Z dopant, and its resultant self-induced Tc reduction, may regain the CBSBS regime. We identified convective and absolute CBSBS regimes. The threshold of convective instability is inside the typical parameter region of NIF designs. Well above incoherent threshold, the coherent instability growth rate is recovered. ^1 P.M. Lushnikov and H.A. Rose, Plasma Physics and Controlled Fusion, 48, 1501 (2006).
Hydrogenated pyrene: Statistical single-carbon loss below the knockout threshold
NASA Astrophysics Data System (ADS)
Wolf, Michael; Giacomozzi, Linda; Gatchell, Michael; de Ruette, Nathalie; Stockett, Mark H.; Schmidt, Henning T.; Cederquist, Henrik; Zettergren, Henning
2016-04-01
An ongoing discussion revolves around the question of what effect hydrogenation has on carbon backbone fragmentation in polycyclic aromatic hydrocarbons (PAHs). In order to shed more light on this issue, we have measured absolute single carbon loss cross sections in collisions between native or hydrogenated pyrene cations (C16H(10+m)+, m = 0, 6, 16) and He as functions of center-of-mass energies down to 20 eV. Classical molecular dynamics (MD) simulations give further insight into energy transfer processes and also yield m-dependent threshold energies for prompt (femtoseconds) carbon knockout. Such fast, non-statistical fragmentation processes dominate CHx-loss for native pyrene (m = 0), while much slower statistical fragmentation processes contribute significantly to single-carbon loss for the hydrogenated molecules (m = 6 and m = 16). The latter is shown by measurements of large CHx-loss cross sections far below the MD knockout thresholds for C16H16+ and C16H26+. Contribution to the "Atomic Cluster Collisions (7th International Symposium)", edited by Gerardo Delgado Barrio, Andrey Solov'Yov, Pablo Villarreal, Rita Prosmiti.
Thresholds and noise limitations of colour vision in dim light
Yovanovich, Carola
2017-01-01
Colour discrimination is based on opponent photoreceptor interactions, and limited by receptor noise. In dim light, photon shot noise impairs colour vision, and in vertebrates, the absolute threshold of colour vision is set by dark noise in cones. Nocturnal insects (e.g. moths and nocturnal bees) and vertebrates lacking rods (geckos) have adaptations to reduce receptor noise and use chromatic vision even in very dim light. In contrast, vertebrates with duplex retinae use colour-blind rod vision when noisy cone signals become unreliable, and their transition from cone- to rod-based vision is marked by the Purkinje shift. Rod–cone interactions have not been shown to improve colour vision in dim light, but may contribute to colour vision in mesopic light intensities. Frogs and toads that have two types of rods use opponent signals from these rods to control phototaxis even at their visual threshold. However, for tasks such as prey or mate choice, their colour discrimination abilities fail at brighter light intensities, similar to other vertebrates, probably limited by the dark noise in cones. This article is part of the themed issue 'Vision in dim light’. PMID:28193810
Drop splashing induced by target roughness and porosity: The size plays no role.
Roisman, Ilia V; Lembach, Andreas; Tropea, Cameron
2015-08-01
Drop splash as a result of an impact onto a dry substrate is governed by the impact parameters, gas properties and the substrate properties. The splash thresholds determine the boundaries between various splash modes. Various existing models for the splash threshold are reviewed in this paper. It is shown that our understanding of splash is not yet complete. The most popular, widely used models for the splash threshold do not describe the available experimental data well. The scientific part of this paper is focused on the description of prompt drop splash on rough and porous substrates. It is found that the absolute length scales of the substrate roughness, like Ra or Rz, do not have any significant effect on the splash threshold. It is discovered that on rough substrates the main influencing splash parameters are the impact Weber number and the characteristic slope of the roughness of the substrate. The drop deposition without splash on porous substrates is enhanced by the liquid modified Reynolds number. Surprisingly, it is not influenced by the pore size, at least for the impact parameters used in the experiments. Finally, an empirical correlation for the prompt splash on rough and porous substrates is proposed, based on a rather large amount of experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
Trunk muscle activation during golf swing: Baseline and threshold.
Silva, Luís; Marta, Sérgio; Vaz, João; Fernandes, Orlando; Castro, Maria António; Pezarat-Correia, Pedro
2013-10-01
There is a lack of studies regarding EMG temporal analysis during dynamic and complex motor tasks, such as the golf swing. The aim of this study is to analyze the EMG onset during the golf swing by comparing two different threshold methods. The Method A threshold was determined using the baseline activity recorded between two maximum voluntary contractions (MVCs). The Method B threshold was calculated using the mean EMG activity for 1000 ms before the 500 ms prior to the start of the backswing. Two different clubs were also studied. Three-way repeated measures ANOVA was used to compare methods, muscles and clubs. A two-way mixed Intraclass Correlation Coefficient (ICC) with absolute agreement was used to determine the reliability of the methods. Club type showed no influence on onset detection. Rectus abdominis (RA) showed the highest agreement between methods. Erector spinae (ES), on the other hand, showed a very low agreement, which might be related to postural activity before the swing. External oblique (EO) was the first muscle to be activated, at 1295 ms prior to impact. Activation times were similar between the right and left muscle sides, although the right EO showed better agreement between methods than the left side. Therefore, algorithm usage is task- and muscle-dependent. Copyright © 2013 Elsevier Ltd. All rights reserved.
Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers
NASA Astrophysics Data System (ADS)
Jiang, Chufan; Li, Beiwen; Zhang, Song
2017-04-01
This paper presents a method that can recover absolute phase pixel by pixel without embedding markers on three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware component(s). The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough priori knowledge of surface geometry; (2) artificially create phase maps at different z planes using geometric constraints of structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create absolute phase map pixel by pixel even for large depth range objects. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
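A heavily simplified illustration of step (3) follows, under the assumption that an artificial reference phase map (created from the system's geometric constraints, as described above) is already available: each pixel's wrapped phase is promoted to an absolute phase by adding the integer number of 2*pi periods that brings it closest to the reference. This is a generic reference-based unwrapping step, not the authors' full pipeline.

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, phi_ref):
    """Pixel-by-pixel unwrapping: choose the fringe order k so that
    phi_wrapped + 2*pi*k lies as close as possible to the reference phase."""
    k = np.round((phi_ref - phi_wrapped) / (2.0 * np.pi))
    return phi_wrapped + 2.0 * np.pi * k
```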
Measurement of the absolute reflectance of polytetrafluoroethylene (PTFE) immersed in liquid xenon
NASA Astrophysics Data System (ADS)
Neves, F.; Lindote, A.; Morozov, A.; Solovov, V.; Silva, C.; Bras, P.; Rodrigues, J. P.; Lopes, M. I.
2017-01-01
The performance of a detector using liquid xenon (LXe) as a scintillator is strongly dependent on the collection efficiency for xenon scintillation light, which in turn is critically dependent on the reflectance of the surfaces that surround the active volume. To improve the light collection in such detectors the active volume is usually surrounded by polytetrafluoroethylene (PTFE) reflector panels, used due to its very high reflectance—even at the short wavelength of scintillation light of LXe (peaked at 178 nm). In this work, which contributed to the overall R&D effort towards the LUX-ZEPLIN (LZ) experiment, we present experimental results for the absolute reflectance measurements of three different PTFE samples (including the material used in the LUX detector) immersed in LXe for its scintillation light. The obtained results show that very high bi-hemispherical reflectance values (>= 97%) can be achieved, enabling very low energy thresholds in liquid xenon scintillator-based detectors.
Vogel, Stefanie; Rackwitz, Jenny; Schürman, Robin; Prinz, Julia; Milosavljević, Aleksandar R; Réfrégiers, Matthieu; Giuliani, Alexandre; Bald, Ilko
2015-11-19
We have characterized ultraviolet (UV) photon-induced DNA strand break processes by determination of absolute cross sections for photoabsorption and for sequence-specific DNA single strand breakage induced by photons in an energy range from 6.50 to 8.94 eV. These represent the lowest-energy photons able to induce DNA strand breaks. Oligonucleotide targets are immobilized on a UV transparent substrate in controlled quantities through attachment to DNA origami templates. Photon-induced dissociation of single DNA strands is visualized and quantified using atomic force microscopy. The obtained quantum yields for strand breakage vary between 0.06 and 0.5, indicating highly efficient DNA strand breakage by UV photons, which is clearly dependent on the photon energy. Above the ionization threshold strand breakage becomes clearly the dominant form of DNA radiation damage, which is then also dependent on the nucleotide sequence.
Measurement of the Absolute Branching Fraction for Λc+ → Λ e+ νe.
Ablikim, M; Achasov, M N; Ai, X C; Albayrak, O; Albrecht, M; Ambrose, D J; Amoroso, A; An, F F; An, Q; Bai, J Z; Baldini Ferroli, R; Ban, Y; Bennett, D W; Bennett, J V; Bertani, M; Bettoni, D; Bian, J M; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, H Y; Chen, J C; Chen, M L; Chen, S J; Chen, X; Chen, X R; Chen, Y B; Cheng, H P; Chu, X K; Cibinetto, G; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Dou, Z L; Du, S X; Duan, P F; Fan, J Z; Fang, J; Fang, S S; Fang, X; Fang, Y; Fava, L; Fedorov, O; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X L; Gao, X Y; Gao, Y; Gao, Z; Garzia, I; Goetzen, K; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guan, Y H; Guo, A Q; Guo, L B; Guo, Y; Guo, Y P; Haddadi, Z; Hafner, A; Han, S; Hao, X Q; Harris, F A; He, K L; Held, T; Heng, Y K; Hou, Z L; Hu, C; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G M; Huang, G S; Huang, J S; Huang, X T; Huang, Y; Hussain, T; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, L W; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X L; Kang, X S; Kavatsyuk, M; Ke, B C; Kiese, P; Kliemt, R; Kloss, B; Kolcu, O B; Kopf, B; Kornicer, M; Kuehn, W; Kupsc, A; Lange, J S; Lara, M; Larin, P; Leng, C; Li, C; Li, Cheng; Li, D M; Li, F; Li, F Y; Li, G; Li, H B; Li, J C; Li, Jin; Li, K; Li, K; Li, Lei; Li, P R; Li, T; Li, W D; Li, W G; Li, X L; Li, X M; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Lin, D X; Liu, B J; Liu, C X; Liu, D; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H H; Liu, H H; Liu, H M; Liu, J; Liu, J B; Liu, J P; Liu, J Y; Liu, K; Liu, K Y; Liu, L D; Liu, P L; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqing; Lou, X C; Lu, H J; Lu, J G; Lu, Y; Lu, Y P; Luo, C L; Luo, M X; Luo, T; Luo, X L; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Maas, F E; Maggiora, M; Mao, Y J; Mao, Z P; Marcello, S; Messchendorp, J G; Min, J; Mitchell, R E; Mo, X H; Mo, Y J; Morales Morales, C; Muchnoi, N Yu; Muramatsu, H; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Pan, Y; Patteri, P; Pelizaeus, M; Peng, H P; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Poling, R; Prasad, V; Qi, H R; Qi, M; Qian, S; Qiao, C F; Qin, L Q; Qin, N; Qin, X S; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Ripka, M; Rong, G; Rosner, Ch; Ruan, X D; Santoro, V; Sarantsev, A; Savrié, M; Schoenning, K; Schumann, S; Shan, W; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Song, W M; Song, X Y; Sosio, S; Spataro, S; Sun, G X; Sun, J F; Sun, S S; Sun, Y J; Sun, Y Z; Sun, Z J; Sun, Z T; Tang, C J; Tang, X; Tapan, I; Thorndike, E H; Tiemens, M; Ullrich, M; Uman, I; Varner, G S; Wang, B; Wang, B L; Wang, D; Wang, D Y; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, P; Wang, P L; Wang, S G; Wang, W; Wang, W P; Wang, X F; Wang, Y D; Wang, Y F; Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z H; Wang, Z Y; Weber, T; Wei, D H; Wei, J B; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, Z; Xia, L; Xia, L G; Xia, Y; Xiao, D; Xiao, H; Xiao, Z J; Xie, Y G; Xiu, Q L; Xu, G F; Xu, L; Xu, Q J; Xu, Q N; Xu, X P; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y; Yang, Y X; Ye, M; Ye, M H; Yin, J H; Yu, B X; Yu, C X; 
Yu, J S; Yuan, C Z; Yuan, W L; Yuan, Y; Yuncu, A; Zafar, A A; Zallo, A; Zeng, Y; Zeng, Z; Zhang, B X; Zhang, B Y; Zhang, C; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, X Y; Zhang, Y; Zhang, Y H; Zhang, Y N; Zhang, Y T; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, Q W; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, W J; Zheng, Y H; Zhong, B; Zhou, L; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, K; Zhu, K J; Zhu, S; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zotti, L; Zou, B S; Zou, J H
2015-11-27
We report the first measurement of the absolute branching fraction for Λc+ → Λ e+ νe. This measurement is based on 567 pb⁻¹ of e+e− annihilation data produced at √s = 4.599 GeV, which is just above the Λc+Λ̄c− threshold. The data were collected with the BESIII detector at the BEPCII storage rings. The branching fraction is determined to be B(Λc+ → Λ e+ νe) = [3.63 ± 0.38(stat) ± 0.20(syst)]%, representing a significant improvement in precision over the current indirect determination. As the branching fraction for Λc+ → Λ e+ νe is the benchmark for those of other Λc+ semileptonic channels, our result provides a unique test of different theoretical models, which is the most stringent to date.
2008-05-01
0.05 significance threshold. Following ANOVA, Fisher's least significant difference (LSD) pair-wise comparison was implemented post-hoc. Briefly, the LSD…average absolute difference between any two groups was greater than the LSD critical value, then the pair-wise comparison for those two groups were…Xu, Y., Hinshaw, J.C., Zimmerman, G.A., Hama, K., Aoki, J., Arai, H., Prestwich, G.D., 2003. Identification of an intracellular receptor for
Mitochondrial DNA copy number threshold in mtDNA depletion myopathy.
Durham, S E; Bonilla, E; Samuels, D C; DiMauro, S; Chinnery, P F
2005-08-09
The authors measured the absolute amount of mitochondrial DNA (mtDNA) within single muscle fibers from two patients with thymidine kinase 2 (TK2) deficiency and two healthy controls. TK2 deficient fibers containing more than 0.01 mtDNA/μm³ had residual cytochrome c oxidase (COX) activity. This defines the minimum amount of wild-type mtDNA molecules required to maintain COX activity in skeletal muscle and provides an explanation for the mosaic histochemical pattern seen in patients with mtDNA depletion syndrome.
SU-E-J-85: Leave-One-Out Perturbation (LOOP) Fitting Algorithm for Absolute Dose Film Calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, A; Ahmad, M; Chen, Z
2014-06-01
Purpose: To introduce an outlier-recognition fitting routine for film dosimetry. It is not only flexible with any linear or non-linear regression but can also provide information on the minimal number of sampling points, critical sampling distributions and the evaluation of analytical functions for absolute film-dose calibration. Methods: The technique, leave-one-out (LOO) cross validation, is often used for statistical analyses of model performance. We used LOO analyses with perturbed bootstrap fitting, called leave-one-out perturbation (LOOP), for film-dose calibration. Given a threshold, the LOO process detects unfit points ("outliers") compared to other cohorts, and a bootstrap fitting process follows to seek any possibilities of using perturbations for further improvement. After that, outliers were reconfirmed by traditional t-test statistics and eliminated, and another LOOP feedback produced the final fit. An over-sampled film-dose-calibration dataset was collected as a reference (dose range: 0-800 cGy), and various simulated conditions for outliers and sampling distributions were derived from the reference. Comparisons over the various conditions were made, and the performance of the fitting functions, polynomial and rational functions, was evaluated. Results: (1) LOOP demonstrates sensitive outlier recognition through the statistical correlation between leaving an outlier out and an exceptionally better goodness-of-fit. (2) With sufficient statistical information, LOOP can correct outliers under some low-sampling conditions that other "robust fits", e.g. Least Absolute Residuals, cannot. (3) Complete cross-validated analyses of LOOP indicate that the rational-type function demonstrates a much superior performance compared to the polynomial. Even with 5 data points including one outlier, using LOOP with a rational function can restore more than 95% of the value back to its reference values, while the polynomial fitting completely failed under the same conditions. Conclusion: LOOP can cooperate with any fitting routine functioning as a "robust fit". In addition, it can be set as a benchmark for film-dose calibration fitting performance.
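The sketch below illustrates the leave-one-out screening idea only: refit the calibration with each point held out, compare the held-out residual to the spread of all held-out residuals, and flag suspected outliers. The rational model form, initial guesses, the z-score cut-off and the usage variables are assumptions for the example; the bootstrap perturbation and t-test stages of LOOP are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def rational_model(x, a, b, c):
    # Simple rational calibration form (an assumed example, not the paper's exact function)
    return (a + b * x) / (1.0 + c * x)

def loo_outlier_flags(x, y, model, p0, z_thresh=3.0):
    """Leave-one-out screening: refit with each point held out and compare the
    held-out residual to the spread of all held-out residuals."""
    resid = np.empty_like(y, dtype=float)
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        popt, _ = curve_fit(model, x[keep], y[keep], p0=p0, maxfev=10000)
        resid[i] = y[i] - model(x[i], *popt)
    z = (resid - resid.mean()) / resid.std(ddof=1)
    return np.abs(z) > z_thresh   # True marks a suspected outlier

# Usage (hypothetical film calibration arrays: net optical density vs dose in cGy):
# flags = loo_outlier_flags(net_od, dose_cgy, rational_model, p0=[0.0, 800.0, 1.0])
```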
Tatsi, Christina; Boden, Rebecca; Sinaii, Ninet; Keil, Meg; Lyssikatos, Charalampos; Belyavskaya, Elena; Rosenzweig, Sergio D; Stratakis, Constantine A; Lodish, Maya B
2018-02-01
Background: Hypercortisolemia results in changes of the immune system and elevated infection risk, but data on WBC changes in pediatric Cushing syndrome (CS) are not known. We describe the changes of the WBC lineages in pediatric endogenous hypercortisolemia, their associations with markers of disease severity, and the presence of infections. Methods: We identified 197 children with endogenous CS. Clinical and biochemical data were recorded. Sixty-six children of similar age and gender with normocortisolemia served as controls. Results: The absolute lymphocyte count of CS patients was significantly lower than that of controls, while the total WBC and absolute neutrophil counts were significantly higher. These changes correlated with several markers of CS severity and improved after resolution of hypercortisolemia. Infections were identified in 35 patients (17.8%), and their presence correlated with elevated serum morning cortisol, midnight cortisol, and urinary free cortisol levels, as well as with the decrease in absolute lymphocyte count. Conclusions: Children with endogenous CS have abnormal WBC counts, which correlate with the severity of CS and normalize after cure. Infections are common in this population; clinicians should be aware of this complication of CS and have a low threshold for diagnosing and treating infections in CS.
Aided Electrophysiology Using Direct Audio Input: Effects of Amplification and Absolute Signal Level
Billings, Curtis J.; Miller, Christi W.; Tremblay, Kelly L.
2016-01-01
Purpose: This study investigated (a) the effect of amplification on cortical auditory evoked potentials (CAEPs) at different signal levels when signal-to-noise ratios (SNRs) were equated between unaided and aided conditions, and (b) the effect of absolute signal level on aided CAEPs when SNR was held constant. Method: CAEPs were recorded from 13 young adults with normal hearing. A 1000-Hz pure tone was presented in unaided and aided conditions with a linear analog hearing aid. Direct audio input was used, allowing the recorded hearing aid noise floor to be added to unaided conditions to equate SNRs between conditions. An additional stimulus was created by scaling the noise floor to study the effect of signal level. Results: Amplification resulted in delayed N1 and P2 peak latencies relative to the unaided condition. An effect of absolute signal level (when SNR was constant) was present for aided CAEP area measures, such that larger area measures were found at higher levels. Conclusion: Results of this study further demonstrate that factors in addition to SNR must also be considered before CAEPs can be used clinically to measure aided thresholds. PMID:26953543
Wang, Shu-lian; Liao, Zhongxing; Vaporciyan, Ara A; Tucker, Susan L; Liu, Helen; Wei, Xiong; Swisher, Stephen; Ajani, Jaffer A; Cox, James D; Komaki, Ritsuko
2006-03-01
To assess the association of clinical and especially dosimetric factors with the incidence of postoperative pulmonary complications among esophageal cancer patients treated with concurrent chemoradiation therapy followed by surgery. Data from 110 esophageal cancer patients treated between January 1998 and December 2003 were analyzed retrospectively. All patients received concurrent chemoradiotherapy followed by surgery; 72 patients also received irinotecan-based induction chemotherapy. Concurrent chemotherapy was 5-fluorouracil-based and in 97 cases included taxanes. Radiotherapy was delivered to a total dose of 41.4-50.4 Gy at 1.8-2.0 Gy per fraction with a three-dimensional conformal technique. Surgery (three-field, Ivor-Lewis, or transhiatal esophagectomy) was performed 27-123 days (median, 45 days) after completion of radiotherapy. The following dosimetric parameters were generated from the dose-volume histogram (DVH) for total lung: lung volume, mean dose to lung, relative and absolute volumes of lung receiving more than a threshold dose (relative V(dose) and absolute V(dose)), and absolute volume of lung receiving less than a threshold dose (volume spared, or VS(dose)). Occurrence of postoperative pulmonary complications, defined as pneumonia or acute respiratory distress syndrome (ARDS) within 30 days after surgery, was the endpoint for all analyses. Fisher's exact test was used to investigate the relationship between categorical factors and incidence of postoperative pulmonary complications. Logistic analysis was used to analyze the relationship between continuous factors (e.g., V(dose) or VS(dose)) and complication rate. Logistic regression with forward stepwise inclusion of factors was used to perform multivariate analysis of those factors having univariate significance (p < 0.05). The Mann-Whitney test was used to compare length of hospital stay in patients with and without lung complications and to compare lung volumes, VS5 values, and absolute and relative V5 values in male vs. female patients. Pearson correlation analysis was used to determine correlations between dosimetric factors. Eighteen (16.4%) of the 110 patients developed postoperative pulmonary complications. Two of these died of progressive pneumonia. Hospitalizations were significantly longer for patients with postoperative pulmonary complications than for those without (median, 15 days vs. 11 days, p = 0.003). On univariate analysis, female gender (p = 0.017), higher mean lung dose (p = 0.036), higher relative volume of lung receiving > or = 5 Gy (V5) (p = 0.023), and smaller volumes of lung spared from doses > or = 5-35 Gy (VS5-VS35) (p < 0.05) were all significantly associated with an increased incidence of postoperative pulmonary complications. No other clinical factors were significantly associated with the incidence of postoperative pulmonary complications in this cohort. On multivariate analysis, the volume of lung spared from doses > or = 5 Gy (VS5) was the only significant independent factor associated with postoperative pulmonary complications (p = 0.005). Dosimetric factors but not clinical factors were found to be strongly associated with the incidence of postoperative pulmonary complications in this cohort of esophageal cancer patients treated with concurrent chemoradiation plus surgery. The volume of the lung spared from doses of > or = 5 Gy was the only independent dosimetric factor in multivariate analysis. 
This suggests that ensuring an adequate volume of lung unexposed to radiation might reduce the incidence of postoperative pulmonary complications.
Wang, Yajun; Laughner, Jacob I.; Efimov, Igor R.; Zhang, Song
2013-01-01
This paper presents a two-frequency binary phase-shifting technique to measure three-dimensional (3D) absolute shape of beating rabbit hearts. Due to the low contrast of the cardiac surface, the projector and the camera must remain focused, which poses challenges for any existing binary method where the measurement accuracy is low. To conquer this challenge, this paper proposes to utilize the optimal pulse width modulation (OPWM) technique to generate high-frequency fringe patterns, and the error-diffusion dithering technique to produce low-frequency fringe patterns. Furthermore, this paper will show that fringe patterns produced with blue light provide the best quality measurements compared to fringe patterns generated with red or green light; and the minimum data acquisition speed for high quality measurements is around 800 Hz for a rabbit heart beating at 180 beats per minute. PMID:23482151
Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W
2012-09-07
A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
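As a minimal sketch of the general idea of an SVD-based background correction, the code below reconstructs a low-rank "background" from the leading singular components of a 2D chromatogram and subtracts it. Treating only the first component(s) as background is an assumption for this illustration; the published SVD-BC procedure may select and handle components differently.

```python
import numpy as np

def svd_background_correct(chrom2d, n_components=1):
    """Subtract a low-rank estimate of the background from a 2D chromatogram
    (rows = first-dimension fractions, columns = second-dimension time)."""
    U, s, Vt = np.linalg.svd(chrom2d, full_matrices=False)
    background = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components, :]
    return chrom2d - background
```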
NASA Astrophysics Data System (ADS)
Qian, Jie; Cheng, Wei; Cao, Zhaoyuan; Chen, Xinjian; Mo, Jianhua
2017-02-01
Phase-resolved Doppler optical coherence tomography (PR-D-OCT) is a functional OCT imaging technique that can provide high-speed, high-resolution, depth-resolved measurement of flow in biological materials. However, a common problem with conventional PR-D-OCT is that this technique measures only the flow motion projected onto the OCT beam path. In other words, it needs the projection angle to extract the absolute velocity from the PR-D-OCT measurement. In this paper, we propose a novel dual-beam PR-D-OCT method to measure absolute flow velocity without a separate measurement of the projection angle. Two parallel light beams are created in the sample arm and focused into the sample at two different incident angles. The images produced by these two beams are encoded to different depths in a single B-scan. The Doppler signals picked up by the two beams, together with the incident angle difference, can then be used to calculate the absolute velocity. We validated our approach in vitro on an artificial flow phantom with our home-built 1060 nm swept-source OCT system. Experimental results demonstrated that our method can provide an accurate measurement of absolute flow velocity independently of the projection angle.
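A sketch of the underlying geometry, under the assumption that the two measured Doppler (axial) velocities are v1 = v*cos(theta) and v2 = v*cos(theta + delta_theta), where only the angular separation delta_theta of the two beams is known: the two equations can be solved for both the unknown projection angle theta and the absolute speed v. The paper's exact formulation may differ.

```python
import numpy as np

def absolute_velocity(v1, v2, delta_theta):
    """Recover absolute flow speed (and projection angle) from two projected
    Doppler velocities measured by beams separated by a known angle."""
    theta = np.arctan((np.cos(delta_theta) - v2 / v1) / np.sin(delta_theta))
    return v1 / np.cos(theta), theta

# Hypothetical usage: v, theta = absolute_velocity(1.2e-3, 0.9e-3, np.deg2rad(10.0))
```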
van Hooff, Miranda L; Mannion, Anne F; Staub, Lukas P; Ostelo, Raymond W J G; Fairbank, Jeremy C T
2016-10-01
The achievement of a given change score on a valid outcome instrument is commonly used to indicate whether a clinically relevant change has occurred after spine surgery. However, the achievement of such a change score can be dependent on baseline values and does not necessarily indicate whether the patient is satisfied with the current state. The achievement of an absolute score equivalent to a patient acceptable symptom state (PASS) may be a more stringent measure to indicate treatment success. This study aimed to estimate the score on the Oswestry Disability Index (ODI, version 2.1a; 0-100) corresponding to a PASS in patients who had undergone surgery for degenerative disorders of the lumbar spine. This is a cross-sectional study of diagnostic accuracy using follow-up data from an international spine surgery registry. The sample includes 1,288 patients with degenerative lumbar spine disorders who had undergone elective spine surgery, registered in the EUROSPINE Spine Tango Spine Surgery Registry. The main outcome measure was the ODI (version 2.1a). Surgical data and data from the ODI and Core Outcome Measures Index (COMI) were included to determine the ODI threshold equivalent to PASS at 1 year (±1.5 months; n=780) and 2 years (±2 months; n=508) postoperatively. The symptom-specific well-being item of the COMI was used as the external criterion in the receiver operating characteristic (ROC) analysis to determine the ODI threshold equivalent to PASS. Separate sensitivity analyses were performed based on the different definitions of an "acceptable state" and for subgroups of patients. JF is a copyright holder of the ODI. The ODI threshold for PASS was 22, irrespective of the time of follow-up (area under the curve [AUC]: 0.89 [sensitivity {Se}: 78.3%, specificity {Sp}: 82.1%] and AUC: 0.91 [Se: 80.7%, Sp: 85.6] for the 1- and 2-year follow-ups, respectively). Sensitivity analyses showed that the absolute ODI-22 threshold for the two follow-up time-points were robust. A stricter definition of PASS resulted in lower ODI thresholds, varying from 16 (AUC=0.89; Se: 80.2%, Sp: 82.0%) to 18 (AUC=0.90; Se: 82.4%, Sp: 80.4%) depending on the time of follow-up. An ODI score ≤22 indicates the achievement of an acceptable symptom state and can hence be used as a criterion of treatment success alongside the commonly used change score measures. At the individual level, the threshold could be used to indicate whether or not a patient with a lumbar spine disorder is a "responder" after elective surgery. Copyright © 2016 Elsevier Inc. All rights reserved.
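For illustration of the threshold-selection step, the sketch below scans candidate ODI cut-offs against a binary external criterion and keeps the cut-off maximizing Youden's J (sensitivity + specificity - 1). The use of Youden's index and the boolean encoding of the criterion are assumptions for this example; the study's ROC analysis used the COMI symptom-specific well-being item as the external criterion and may have selected the cut-point differently.

```python
import numpy as np

def roc_threshold(scores, acceptable):
    """Scan candidate ODI cut-offs; 'acceptable' is a boolean array marking
    patients judged to be in an acceptable symptom state (external criterion)."""
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred_acceptable = scores <= t                  # lower ODI = less disability
        sens = np.mean(pred_acceptable[acceptable])    # sensitivity for the acceptable state
        spec = np.mean(~pred_acceptable[~acceptable])  # specificity
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```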
Wakabayashi, Tokumitsu; Sakata, Kazumi; Togashi, Takuya; Itoi, Hiroaki; Shinohe, Sayaka; Watanabe, Miwa; Shingai, Ryuzo
2015-11-19
Under experimental conditions, virtually all behaviors of Caenorhabditis elegans are achieved by combinations of simple locomotion, including forward and reversal movement, turning by deep body bending, and gradual shallow turning. To study how worms regulate these locomotion patterns in response to sensory information, acidic pH avoidance behavior was analyzed using a worm-tracking system. In the acidic pH avoidance, we characterized two types of behavioral maneuvers that have behavioral sequences similar to those in chemotaxis and thermotaxis. A stereotypic reversal-turn-forward sequence of reversal avoidance caused an abrupt random reorientation, and a shallow gradual turn in curve avoidance caused non-random reorientation in a less acidic direction to avoid the acidic pH. Our results suggest that these two maneuvers were each triggered by a distinct threshold pH. A simulation study using the two-distinct-threshold model reproduced the avoidance behavior of the real worm, supporting the presence of the thresholds. The threshold pH for both reversal and curve avoidance was altered in mutants with reduced or enhanced glutamatergic signaling from acid-sensing neurons. C. elegans employs two behavioral maneuvers, reversal (klinokinesis) and curve (klinotaxis), to avoid acidic pH. Unlike chemotaxis in C. elegans, reversal and curve avoidance in this behavior were triggered by absolute pH rather than by the temporal derivative of stimulus concentration. The pH threshold differs between reversal and curve avoidance. Mutant studies suggested that the difference results from a differential amount of glutamate released from the ASH and ASK chemosensory neurons.
Tahir, Muhammad; Jan, Bismillah; Hayat, Maqsood; Shah, Shakir Ullah; Amin, Muhammad
2018-04-01
Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images so that drug development could be more effective. The objective of this paper is to propose a novel modification in the Threshold Adjacency Statistics technique and enhance its discriminative power. In this work, we utilized Threshold Adjacency Statistics from a novel perspective to enhance its discrimination power and efficiency. In this connection, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through the majority voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using 5-fold cross-validation technique. We observed that our proposed novel utilization of TAS technique has improved the discriminative power of the classifier. The ETAS-SubLoc system has achieved 99.2% accuracy, 99.3% sensitivity and 99.1% specificity for Endogenous dataset outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity and 91.6% specificity values are achieved for Transfected dataset. Simulation results validated the effectiveness of ETAS-SubLoc that provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to pharmaceutical industry as well as research community towards better drug designing and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at: https://drive.google.com/file/d/0B7IyGPObWbSqRTRMcXI2bG5CZWs/view?usp=sharing. Copyright © 2018 Elsevier B.V. All rights reserved.
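The following is a minimal skeleton of the ensemble idea described above: one SVM is trained per threshold-range feature space and predictions are combined by majority voting. The feature extractor, the threshold ranges and the SVM settings are placeholders (the modified TAS computation itself is not implemented here), so this is a structural sketch rather than the ETAS-SubLoc implementation.

```python
import numpy as np
from sklearn.svm import SVC

def train_ensemble(images, labels, threshold_ranges, extract_tas):
    """Train one SVM per threshold range; extract_tas(image, t_range) must
    return the TAS-style feature vector for that range (user-supplied)."""
    models = []
    for t_range in threshold_ranges:
        X = np.array([extract_tas(img, t_range) for img in images])
        models.append(SVC(kernel='rbf').fit(X, labels))
    return models

def predict_majority(models, image, threshold_ranges, extract_tas):
    # Each model votes on its own feature space; the most frequent label wins.
    votes = [m.predict(extract_tas(image, t).reshape(1, -1))[0]
             for m, t in zip(models, threshold_ranges)]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```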
A human visual based binarization technique for histological images
NASA Astrophysics Data System (ADS)
Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos
2017-05-01
In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems like high variability in terms of the human anatomy and variation in modalities. Recent advances made in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
Absolute photon-flux measurements in the vacuum ultraviolet
NASA Technical Reports Server (NTRS)
Samson, J. A. R.; Haddad, G. N.
1974-01-01
Absolute photon-flux measurements in the vacuum ultraviolet have been extended to short wavelengths by the use of rare-gas ionization chambers. The technique involves the measurement of the ion current as a function of the gas pressure in the ion chamber. The true value of the ion current, and hence the absolute photon flux, is obtained by extrapolating the ion current to zero gas pressure. Examples are given at 162 and 266 Å. The short-wavelength limit is determined only by the sensitivity of the current-measuring apparatus and by present knowledge of the photoionization processes that occur in the rare gases.
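A minimal numerical sketch of the extrapolation step: fit the measured ion current versus chamber pressure with a straight line and take the intercept as the zero-pressure current; the flux then follows from the elementary charge, assuming (for this example) a photoionization yield of unity. All the numbers below are hypothetical.

```python
import numpy as np

# Hypothetical ion currents (A) measured at several ion-chamber pressures (Torr)
pressure = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
current  = np.array([3.10, 3.02, 2.95, 2.88, 2.80]) * 1e-9

slope, i0 = np.polyfit(pressure, current, 1)   # linear fit; i0 = current extrapolated to zero pressure
q = 1.602176634e-19                            # elementary charge (C)
photon_flux = i0 / q                           # photons/s, assuming unit photoionization yield
print(f"extrapolated ion current: {i0:.2e} A -> flux ~ {photon_flux:.2e} photons/s")
```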
Zhao, Jing-Jing; Liu, Liang-Yun
2013-02-01
The flux tower method can effectively monitor seasonal and phenological variation processes in vegetation. At present, the differences among various phenology extraction methods in detection and quantitative evaluation have not been well validated and quantified. Based on the gross primary productivity (GPP) and net ecosystem productivity (NEP) data of temperate forests from 9 forest FLUXNET sites in North America, and by using the start dates (SOS) and end dates (EOS) of the temperate forest growing seasons extracted by different phenology threshold extraction methods, in combination with the forest ecosystem carbon source/sink functions, this paper analyzed the effects of different threshold standards on the extracted vegetation phenology. The results showed that the effects of different threshold standards on the stability of the extracted phenology were smaller for deciduous broadleaved forest (DBF) than for evergreen needleleaved forest (ENF). Among the absolute and relative GPP thresholds, the DBF threshold of daily GPP = 2 g C m^-2 d^-1 agreed best with daily GPP = 20% of maximum GPP (GPPmax); the phenological metrics at a threshold of daily GPP = 4 g C m^-2 d^-1 were close to those between daily GPP = 20% GPPmax and daily GPP = 50% GPPmax; and the start date of the ecosystem carbon sink function was close to the SOS metrics between daily GPP = 4 g C m^-2 d^-1 and daily GPP = 20% GPPmax. For ENF, the phenological metrics at thresholds of daily GPP = 2 g C m^-2 d^-1 and daily GPP = 4 g C m^-2 d^-1 agreed best with those at daily GPP = 20% GPPmax and daily GPP = 50% GPPmax, respectively, and the start date of the ecosystem carbon sink function was close to the SOS metrics between daily GPP = 2 g C m^-2 d^-1 and daily GPP = 10% GPPmax.
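For illustration, a simple threshold-based extraction of season start and end from a daily GPP series might look like the sketch below, which supports either an absolute threshold (e.g. 2 g C m^-2 d^-1) or a relative one (a fraction of the seasonal maximum). This is a generic sketch, not the specific procedure used in the study.

```python
import numpy as np

def season_start_end(gpp_daily, threshold=2.0, relative=False):
    """Return (SOS, EOS) as day-of-year values where daily GPP first and last
    exceeds the threshold. If relative=True, threshold is a fraction of the
    seasonal maximum GPP (e.g. 0.2 for 20% of GPPmax)."""
    thr = threshold * np.nanmax(gpp_daily) if relative else threshold
    above = np.where(gpp_daily >= thr)[0]
    if above.size == 0:
        return None, None
    return int(above[0]) + 1, int(above[-1]) + 1   # +1 converts index to day-of-year
```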
NASA Astrophysics Data System (ADS)
Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo
2018-01-01
Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise. The denoising performance was then compared to the denoising capabilities of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and wavelet transform. Applying EMD-STRP to the measured Lidar signal efficiently suppressed the noise, improving the signal-to-noise ratio to 22.25 dB and extending the detection range to 11 km.
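A rough sketch of such a scheme is given below: IMFs weakly correlated with the original signal are soft-thresholded, while strongly correlated IMFs are smoothed with a generic second-difference roughness penalty. The correlation cut-off, the universal threshold, the penalty weight and the smoother itself are assumptions for this example and are not the values or formulations used in the paper; the PyEMD package (EMD-signal on PyPI) is assumed for the decomposition.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def roughness_penalty_smooth(y, lam=10.0):
    # Generic roughness-penalty smoother: minimize ||z - y||^2 + lam * ||D2 z||^2
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)              # second-difference operator
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)

def emd_strp_like_denoise(signal, corr_cut=0.2, lam=10.0):
    """Sketch of an EMD + soft thresholding + roughness penalty scheme."""
    imfs = EMD().emd(signal)
    out = np.zeros_like(signal, dtype=float)
    for imf in imfs:
        r = np.corrcoef(imf, signal)[0, 1]
        if abs(r) < corr_cut:                       # irrelevant mode -> soft thresholding
            sigma = np.median(np.abs(imf)) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(imf)))
            out += np.sign(imf) * np.maximum(np.abs(imf) - thr, 0.0)
        else:                                       # relevant mode -> roughness penalty smoothing
            out += roughness_penalty_smooth(imf, lam)
    return out
```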
NASA Technical Reports Server (NTRS)
Judge, D. L.; Wu, C. Y. R.
1990-01-01
Absorption of a high energy photon (greater than 6 eV) by an isolated molecule results in the formation of highly excited quasi-discrete or continuum states which evolve through a wide range of direct and indirect photochemical processes. These are: photoionization and autoionization, photodissociation and predissociation, and fluorescence. The ultimate goal is to understand the dynamics of the excitation and decay processes and to quantitatively measure the absolute partial cross sections for all processes which occur in photoabsorption. Typical experimental techniques and the status of observational results of particular interest to solar system observations are presented.
Absolute versus relative intensity of physical activity in a dose-response context.
Shephard, R J
2001-06-01
To examine the importance of relative versus absolute intensities of physical activity in the context of population health. A standard computer-search of the literature was supplemented by review of extensive personal files. Consensus reports (Category D Evidence) have commonly recommended moderate rather than hard physical activity in the context of population health. Much of the available literature provides Category C Evidence. It has often confounded issues of relative intensity with absolute intensity or total weekly dose of exercise. In terms of cardiovascular health, there is some evidence for a threshold intensity of effort, perhaps as high as 6 METs, in addition to a minimum volume of physical activity. Decreases in blood pressure and prevention of stroke seem best achieved by moderate rather than high relative intensities of physical activity. Many aspects of metabolic health depend on the total volume of activity; moderate relative intensities of effort are more effective in mobilizing body fat, but harder relative intensities may help to increase energy expenditures postexercise. Hard relative intensities seem needed to augment bone density, but this may reflect an associated increase in volume of activity. Hard relative intensities of exercise induce a transient immunosuppression. The optimal intensity of effort, relative or absolute, for protection against various types of cancer remains unresolved. Acute effects of exercise on mood state also require further study; long-term benefits seem associated with a moderate rather than a hard relative intensity of effort. The importance of relative versus absolute intensity of effort depends on the desired health outcome, and many issues remain to be resolved. Progress will depend on more precise epidemiological methods of assessing energy expenditures and studies that equate total energy expenditures between differing relative intensities. There is a need to focus on gains in quality-adjusted life expectancy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, Alfred; Bernhardt, Dietrich; Borovik, Alexander
Single, double, and triple photoionization of Ne+ ions by single photons have been investigated at the synchrotron radiation source PETRA III in Hamburg, Germany. Absolute cross-sections were measured by employing the photon-ion merged-beams technique. Photon energies were between about 840 and 930 eV, covering the range from the lowest-energy resonances associated with the excitation of one single K-shell electron up to double excitations involving one K- and one L-shell electron, well beyond the K-shell ionization threshold. Also, photoionization of neutral Ne was investigated just below the K edge. The chosen photon energy bandwidths were between 32 and 500 meV, facilitating the determination of natural line widths. The uncertainty of the energy scale is estimated to be 0.2 eV. For comparison with existing theoretical calculations, astrophysically relevant photoabsorption cross-sections were inferred by summing the measured partial ionization channels. Discussion of the observed resonances in the different final ionization channels reveals the presence of complex Auger-decay mechanisms. The ejection of three electrons from the lowest K-shell-excited Ne+ (1s2s²2p⁶ ²S₁/₂) level, for example, requires cooperative interaction of at least four electrons.
Santerre, Cyrille; Vallet, Nadine; Touboul, David
2018-06-02
Supercritical fluid chromatography hyphenated with high resolution mass spectrometry (SFC-HRMS) was developed for fingerprint analysis of different flower absolutes commonly used in the cosmetics field, especially in perfumes. Supercritical fluid chromatography-atmospheric pressure photoionization-high resolution mass spectrometry (SFC-APPI-HRMS) was employed to identify the components of the fingerprint. The samples were separated with a porous graphitic carbon (PGC) Hypercarb™ column (100 mm × 2.1 mm, 3 μm) by gradient elution using supercritical CO2 and ethanol (0.0-20.0 min (2-30% B), 20.0-25.0 min (30% B), 25.0-26.0 min (30-2% B) and 26.0-30.0 min (2% B)) as the mobile phase at a flow rate of 1.5 mL/min. In order to compare the SFC fingerprints between five different flower absolutes, Jasminum grandiflorum absolutes, Jasminum sambac absolutes, Narcissus jonquilla absolutes, Narcissus poeticus absolutes, and Lavandula angustifolia absolutes from different suppliers and batches, a chemometric procedure including principal component analysis (PCA) was applied to classify the samples according to their genus and their species. Consistent results were obtained, showing that samples could be successfully discriminated. Copyright © 2018 Elsevier B.V. All rights reserved.
Diagnostic Application of Absolute Neutron Activation Analysis in Hematology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamboni, C.B.; Oliveira, L.C.; Dalaqua, L. Jr.
2004-10-03
The Absolute Neutron Activation Analysis (ANAA) technique was used to determine the concentrations of Cl and Na in the blood of a healthy group (male and female blood donors), selected from blood banks in Sao Paulo city, to provide information which can help in the diagnosis of patients. This study also permitted a discussion of the advantages and limitations of using this nuclear methodology in hematological examinations.
NASA Technical Reports Server (NTRS)
Maximenko, Nikolai A.
2003-01-01
The mean absolute sea level reflects the deviation of the ocean surface from the geoid due to ocean currents and is an important characteristic of the dynamical state of the ocean. Its spatial variations (order of 1 m) are generally much smaller than deviations of the geoid shape from the ellipsoid (order of 100 m), which makes deriving the absolute mean sea level a difficult task for gravity and satellite altimetry observations. The technique used by Niiler et al. for computing the absolute mean sea level in the Kuroshio Extension was later developed into a more general method and applied by Niiler et al. (2003b) to the global ocean. The method is based on consideration of the balance of horizontal momentum.
Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina
2017-07-01
A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
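As a rough illustration of the kind of adaptive thresholding used for pulse picking (not the authors' rule set), the sketch below tracks a decaying amplitude threshold that is re-armed at each accepted peak; the decay rate, re-arm fraction, and refractory period are illustrative assumptions.

```python
import numpy as np

def detect_pulse_peaks(ppg, fs, decay=0.97, refractory_s=0.3):
    """Toy adaptive-threshold peak detector for a pulsatile signal.

    The threshold is re-armed at half of each accepted peak amplitude and then
    decays sample by sample, so it follows slow amplitude changes in noisy PPG.
    """
    ppg = np.asarray(ppg, dtype=float)
    threshold = 0.5 * np.max(ppg[: int(2 * fs)])   # initialise from the first 2 s
    refractory = int(refractory_s * fs)
    peaks, last_peak = [], -refractory
    for i in range(1, len(ppg) - 1):
        is_local_max = ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]
        if is_local_max and ppg[i] > threshold and i - last_peak > refractory:
            peaks.append(i)
            last_peak = i
            threshold = 0.5 * ppg[i]
        else:
            threshold *= decay                     # slow decay so weak pulses are kept
    return np.array(peaks)

def pulse_rate_bpm(peaks, fs):
    """Pulse rate from the median inter-peak interval."""
    return 60.0 * fs / np.median(np.diff(peaks)) if len(peaks) > 1 else float("nan")
```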
Central and rear-edge populations can be equally vulnerable to warming
NASA Astrophysics Data System (ADS)
Bennett, Scott; Wernberg, Thomas; Arackal Joy, Bijo; de Bettignies, Thibaut; Campbell, Alexandra H.
2015-12-01
Rear (warm) edge populations are often considered more susceptible to warming than central (cool) populations because of the warmer ambient temperatures they experience, but this overlooks the potential for local variation in thermal tolerances. Here we provide conceptual models illustrating how sensitivity to warming is affected throughout a species' geographical range for locally adapted and non-adapted populations. We test these models for a range-contracting seaweed using observations from a marine heatwave and a 12-month experiment, translocating seaweeds among central, present and historic range edge locations. Growth, reproductive development and survivorship display different temperature thresholds among central and rear-edge populations, but share a 2.5 °C anomaly threshold. Range contraction, therefore, reflects variation in local anomalies rather than differences in absolute temperatures. This demonstrates that warming sensitivity can be similar throughout a species' geographical range and highlights the importance of incorporating local adaptation and acclimatization into climate change vulnerability assessments.
Matthews, Luke J; DeWan, Peter; Rula, Elizabeth Y
2013-01-01
Studies of social networks, mapped using self-reported contacts, have demonstrated the strong influence of social connections on the propensity for individuals to adopt or maintain healthy behaviors and on their likelihood to adopt health risks such as obesity. Social network analysis may prove useful for businesses and organizations that wish to improve the health of their populations by identifying key network positions. Health traits have been shown to correlate across friendship ties, but evaluating network effects in large coworker populations presents the challenge of obtaining sufficiently comprehensive network data. The purpose of this study was to evaluate methods for using online communication data to generate comprehensive network maps that reproduce the health-associated properties of an offline social network. In this study, we examined three techniques for inferring social relationships from email traffic data in an employee population using thresholds based on: (1) the absolute number of emails exchanged, (2) logistic regression probability of an offline relationship, and (3) the highest ranked email exchange partners. As a model of the offline social network in the same population, a network map was created using social ties reported in a survey instrument. The email networks were evaluated based on the proportion of survey ties captured, comparisons of common network metrics, and autocorrelation of body mass index (BMI) across social ties. Results demonstrated that logistic regression predicted the greatest proportion of offline social ties, thresholding on number of emails exchanged produced the best match to offline network metrics, and ranked email partners demonstrated the strongest autocorrelation of BMI. Since each method had unique strengths, researchers should choose a method based on the aspects of offline behavior of interest. Ranked email partners may be particularly useful for purposes related to health traits in a social network.
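Method (1), thresholding on the absolute number of emails exchanged, is straightforward to express with networkx; the data layout and the cutoff of 10 emails below are assumptions for illustration only.

```python
import networkx as nx

def email_threshold_network(email_counts, min_emails=10):
    """Build an undirected tie network from pairwise email counts.

    email_counts : dict mapping (employee_a, employee_b) -> emails exchanged
                   over the study window (this data layout is assumed)
    min_emails   : absolute threshold above which a tie is inferred
    """
    g = nx.Graph()
    for (a, b), n in email_counts.items():
        if a != b and n >= min_emails:
            g.add_edge(a, b, weight=n)
    return g

counts = {("ann", "bob"): 25, ("ann", "cal"): 3, ("bob", "cal"): 14}
g = email_threshold_network(counts, min_emails=10)
print(g.number_of_nodes(), g.number_of_edges())   # 3 nodes, 2 inferred ties
```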
Maiditsch, Isabelle Pia; Ladich, Friedrich
2014-01-01
Background In ectothermal animals such as fish, temperature affects physiological and metabolic processes. This includes sensory organs such as the auditory system. The reported effects of temperature on hearing in eurythermal otophysines are contradictory. We therefore investigated the effect of temperature on the auditory system in species representing two different orders. Methodology/Principal Findings Hearing sensitivity was determined using the auditory evoked potentials (AEP) recording technique. Auditory sensitivity and latency in response to clicks were measured in the common carp Cyprinus carpio (order Cypriniformes) and the Wels catfish Silurus glanis (order Siluriformes) after acclimating fish for at least three weeks to two different water temperatures (15°C, 25°C and again 15°C). Hearing sensitivity increased with temperature in both species. Best hearing was detected between 0.3 and 1 kHz at both temperatures. The maximum increase occurred at 0.8 kHz (7.8 dB) in C. carpio and at 0.5 kHz (10.3 dB) in S. glanis. The improvement differed between species and was, in particular, more pronounced in the catfish at 4 kHz. The latency in response to single clicks was measured from the onset of the sound stimulus to the most constant positive peak of the AEP. The latency decreased at the higher temperature in both species by 0.37 ms on average. Conclusions/Significance The current study shows that higher temperature improves hearing (lower thresholds, shorter latencies) in eurythermal species from different orders of otophysines. Differences in threshold shifts between eurythermal species seem to reflect differences in absolute sensitivity at higher frequencies and they furthermore indicate differences to stenothermal (tropical) species. PMID:25255456
Facilitation and refractoriness of the electrically evoked compound action potential.
Hey, Matthias; Müller-Deile, Joachim; Hessel, Horst; Killian, Matthijs
2017-11-01
In this study we aim to resolve the contributions of facilitation and refractoriness at very short pulse intervals. Measurements of the refractory properties of the electrically evoked compound action potential (ECAP) of the auditory nerve in cochlear implant (CI) users at inter pulse intervals below 300 μs are influenced by facilitation and recovery effects. ECAPs were recorded using masker pulses with a wide range of current levels relative to the probe pulse levels, for three suprathreshold probe levels and pulse intervals from 13 to 200 μs. Evoked potentials were measured for 21 CI patients by using the masked response extraction artifact cancellation procedure. During analysis of the measurements the stimulation current was not used as an absolute value, but in relation to the patient's individual ECAP threshold. This enabled a more general approach to describe facilitation as a probe level independent effect. Maximum facilitation was found for all tested inter pulse intervals at masker levels near the patient's individual ECAP threshold, independent from probe level. For short inter pulse intervals an increased N1P1 amplitude was measured for subthreshold masker levels down to 120 CL below the patient's individual ECAP threshold in contrast to the recreated state. ECAPs recorded with inter pulse intervals up to 200 μs are influenced by facilitation and recovery. Facilitation effects are most pronounced for masker levels at or below ECAP threshold, while recovery effects increase with higher masker levels above ECAP threshold. The local maximum of the ECAP amplitude for masker levels around ECAP threshold can be explained by the mutual influence of maximum facilitation and minimal refractoriness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Absolute/convective secondary instabilities and the role of confinement in free shear layers
NASA Astrophysics Data System (ADS)
Arratia, Cristóbal; Mowlavi, Saviz; Gallaire, François
2018-05-01
We study the linear spatiotemporal stability of an infinite row of equal point vortices under symmetric confinement between parallel walls. These rows of vortices serve to model the secondary instability leading to the merging of consecutive (Kelvin-Helmholtz) vortices in free shear layers, allowing us to study how confinement limits the growth of shear layers through vortex pairings. Using a geometric construction akin to a Legendre transform on the dispersion relation, we compute the growth rate of the instability in different reference frames as a function of the frame velocity with respect to the vortices. This approach is verified and complemented with numerical computations of the linear impulse response, fully characterizing the absolute/convective nature of the instability. Similar to results by Healey on the primary instability of parallel tanh profiles [J. Fluid Mech. 623, 241 (2009), 10.1017/S0022112008005284], we observe a range of confinement in which absolute instability is promoted. For a parallel shear layer with prescribed confinement and mixing length, the threshold for absolute/convective instability of the secondary pairing instability depends on the separation distance between consecutive vortices, which is physically determined by the wavelength selected by the previous (primary or pairing) instability. In the presence of counterflow and moderate to weak confinement, small (large) wavelength of the vortex row leads to absolute (convective) instability. While absolute secondary instabilities in spatially developing flows have been previously related to an abrupt transition to a complex behavior, this secondary pairing instability regenerates the flow with an increased wavelength, eventually leading to a convectively unstable row of vortices. We argue that since the primary instability remains active for large wavelengths, a spatially developing shear layer can directly saturate on the wavelength of such a convectively unstable row, by-passing the smaller wavelengths of absolute secondary instability. This provides a wavelength selection mechanism, according to which the distance between consecutive vortices should be sufficiently large in comparison with the channel width in order for the row of vortices to persist. We argue that the proposed wavelength selection criteria can serve as a guideline for experimentally obtaining plane shear layers with counterflow, which has remained an experimental challenge.
Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process.
Li, Beiwen; Zhang, Song
2017-10-30
Scientific research of a stabilized flapping flight process (e.g. hovering) has been of great interest to a variety of fields including biology, aerodynamics, and bio-inspired robotics. Different from the current passive photogrammetry based methods, the digital fringe projection (DFP) technique has the capability of performing dense superfast (e.g. kHz) 3D topological reconstructions with the projection of defocused binary patterns, yet it is still a challenge to measure a flapping flight process with the presence of rapid flapping wings. This paper presents a novel absolute 3D reconstruction method for a stabilized flapping flight process. Essentially, the slow motion parts (e.g. body) and the fast-motion parts (e.g. wings) are segmented and separately reconstructed with phase shifting techniques and the Fourier transform, respectively. The topological relations between the wings and the body are utilized to ensure absolute 3D reconstruction. Experiments demonstrate the success of our computational framework by testing a flapping wing robot at different flapping speeds.
Hagiwara, Akifumi; Warntjes, Marcel; Hori, Masaaki; Andica, Christina; Nakazawa, Misaki; Kumamaru, Kanako Kunishima; Abe, Osamu; Aoki, Shigeki
2017-01-01
Conventional magnetic resonance images are usually evaluated using the image signal contrast between tissues and not based on their absolute signal intensities. Quantification of tissue parameters, such as relaxation rates and proton density, would provide an absolute scale; however, these methods have mainly been performed in a research setting. The development of rapid quantification, with scan times in the order of 6 minutes for full head coverage, has provided the prerequisites for clinical use. The aim of this review article was to introduce a specific quantification method and synthesis of contrast-weighted images based on the acquired absolute values, and to present automatic segmentation of brain tissues and measurement of myelin based on the quantitative values, along with application of these techniques to various brain diseases. The entire technique is referred to as “SyMRI” in this review. SyMRI has shown promising results in previous studies when used for multiple sclerosis, brain metastases, Sturge-Weber syndrome, idiopathic normal pressure hydrocephalus, meningitis, and postmortem imaging. PMID:28257339
Elizabeth A. Freeman; Gretchen G. Moisen
2008-01-01
Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence - absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
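One common, non-subjective way to pick such a presence-absence threshold is to maximise sensitivity + specificity (Youden's J) on validation data; the grid search below is a minimal sketch of that idea, not the specific criteria reviewed by the authors.

```python
import numpy as np

def youden_threshold(prob, present, n_grid=101):
    """Pick the presence-absence cutoff maximising sensitivity + specificity - 1.

    prob    : predicted probability of presence at each validation site
    present : 1 if the variable of interest was actually present, else 0
    """
    prob = np.asarray(prob, dtype=float)
    present = np.asarray(present).astype(bool)
    best_t, best_j = 0.5, -np.inf
    for t in np.linspace(0.0, 1.0, n_grid):
        pred = prob >= t
        sens = pred[present].mean() if present.any() else 0.0
        spec = (~pred[~present]).mean() if (~present).any() else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t

prob = np.array([0.1, 0.3, 0.4, 0.6, 0.8, 0.9])
obs = np.array([0, 0, 1, 0, 1, 1])
print(youden_threshold(prob, obs))   # prints the selected cutoff
```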
A new metric for assessing IMRT modulation complexity and plan deliverability.
McNiven, Andrea L; Sharpe, Michael B; Purdie, Thomas G
2010-02-01
To evaluate the utility of a new complexity metric, the modulation complexity score (MCS), in the treatment planning and quality assurance processes and to evaluate the relationship of the metric with deliverability. A multisite (breast, rectum, prostate, prostate bed, lung, and head and neck) and site-specific (lung) dosimetric evaluation has been completed. The MCS was calculated for each beam and the overall treatment plan. A 2D diode array (MapCHECK, Sun Nuclear, Melbourne, FL) was used to acquire measurements for each beam. The measured and planned dose (PINNACLE3, Phillips, Madison, WI) was evaluated using different percent differences and distance to agreement (DTA) criteria (3%/3 mm and 2%/1 mm) and the relationship between the dosimetric results and complexity (as measured by the MCS or simple beam parameters) assessed. For the multisite analysis (243 plans total), the mean MCS scores for each treatment site were breast (0.92), rectum (0.858), prostate (0.837), prostate bed (0.652), lung (0.631), and head and neck (0.356). The MCS allowed for compilation of treatment site-specific statistics, which is useful for comparing different techniques, as well as for comparison of individual treatment plans with the typical complexity levels. For the six plans selected for dosimetry, the average diode percent pass rate was 98.7% (minimum of 96%) for 3%/3 mm evaluation criteria. The average difference in absolute dose measurement between the planned and measured dose was 1.7 cGy. The detailed lung analysis also showed excellent agreement between the measured and planned dose, as all beams had a diode percentage pass rate for 3%/3 mm criteria of greater than 95.9%, with an average pass rate of 99.0%. The average absolute maximum dose difference for the lung plans was 0.7 cGy. There was no direct correlation between the MCS and simple beam parameters which could be used as a surrogate for complexity level (i.e., number of segments or MU). An evaluation criterion of 2%/1 mm reliably allowed for the identification of beams that are dosimetrically robust. In this study we defined a robust beam or plan as one that maintained a diode percentage pass rate greater than 90% at 2%/1 mm, indicating delivery that was deemed accurate when compared to the planned dose, even under the stricter evaluation criterion. MCS and MU threshold criteria were determined by defining a required specificity of 1.0. An MCS threshold of 0.8 allowed for identification of robust deliverability with a sensitivity of 0.36. In contrast, MU had a lower sensitivity of 0.23 for a threshold of 50 MU. The MCS allows for a quantitative assessment of plan complexity, on a fixed scale, that can be applied to all treatment sites and can provide more information related to dose delivery than simple beam parameters. This could prove useful throughout the entire treatment planning and QA process.
Performance tests and quality control of cathode ray tube displays.
Roehrig, H; Blume, H; Ji, T L; Browne, M
1990-08-01
Spatial resolution, noise, characteristic curve, and absolute luminance are the essential parameters that describe physical image quality of a display. This paper presents simple procedures for assessing the performance of a cathode ray tube (CRT) in terms of these parameters as well as easy set up techniques. The procedures can be used in the environment where the CRT is used. The procedures are based on a digital representation of the Society of Motion Pictures and Television Engineers pattern plus a few simple other digital patterns. Additionally, measurement techniques are discussed for estimating brightness uniformity, veiling glare, and distortion. Apart from the absolute luminance, all performance features can be assessed with an uncalibrated photodetector and the eyes of a human observer. The measurement techniques especially enable the user to perform comparisons of different display systems.
Threshold-adaptive canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before the edge is segregated from the background. Usually, two static values are chosen as the thresholds based on the experience of developers[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
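For comparison, a widely used automatic alternative to fixed Canny thresholds derives both values from the image median; the snippet below illustrates that heuristic with OpenCV and is not the cross-zero interpolation method proposed in the abstract (the sigma value is an assumption).

```python
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    """Median-based automatic Canny thresholds (a common heuristic, not the
    cross-zero interpolation method of the abstract)."""
    m = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * m))
    upper = int(min(255, (1.0 + sigma) * m))
    return cv2.Canny(gray, lower, upper)

# usage: edges = auto_canny(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE))
```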
Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane
2016-08-01
This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau's recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families' expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM's 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a post-tax/post-transfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time.
Kim, Eun-Ha; Lee, Du-Hyeong; Kwon, Sung-Min; Kwon, Tae-Yub
2017-03-01
Although new digital manufacturing techniques are attracting interest in dentistry, few studies have comprehensively investigated the marginal fit of fixed dental prostheses fabricated with such techniques. The purpose of this in vitro microcomputed tomography (μCT) study was to evaluate the marginal fit of cobalt-chromium (Co-Cr) alloy copings fabricated by casting and 3 different computer-aided design and computer-aided manufacturing (CAD-CAM)-based processing techniques and alloy systems. Single Co-Cr metal crowns were fabricated using 4 different manufacturing techniques: casting (control), milling, selective laser melting, and milling/sintering. Two different commercial alloy systems were used for each fabrication technique (a total of 8 groups; n=10 for each group). The marginal discrepancy and absolute marginal discrepancy of the crowns were determined with μCT. For each specimen, the values were determined from 4 different regions (sagittal buccal, sagittal lingual, coronal mesial, and coronal distal) by using imaging software and recorded as the average of the 4 readings. For each parameter, the results were statistically compared with 2-way analysis of variance and appropriate post hoc analysis (using Tukey or Student t test) (α=.05). The milling and selective laser melting groups showed significantly larger marginal discrepancies than the control groups (70.4 ±12.0 and 65.3 ±10.1 μm, respectively; P<.001), whereas the milling/sintering groups exhibited significantly smaller values than the controls (P=.004). The milling groups showed significantly larger absolute marginal discrepancy than the control groups (137.4 ±29.0 and 139.2 ±18.9 μm, respectively; P<.05). In the selective laser melting and milling/sintering groups, the absolute marginal discrepancy values were material-specific (P<.05). Nonetheless, the milling/sintering groups yielded statistically comparable (P=.935) or smaller (P<.001) absolute marginal discrepancies to the control groups. The findings of this in vitro μCT study showed that the marginal fit values of the Co-Cr alloy greatly depended on the fabrication methods and, occasionally, the alloy systems. Fixed dental prostheses produced by using the milling/sintering technique can be considered clinically acceptable in terms of marginal fit. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Esteves, David; Sterling, Nicholas; Aguilar, Alex; Kilcoyne, A. L. David; Phaneuf, Ronald; Bilodeau, Rene; Red, Eddie; McLaughlin, Brendan; Norrington, Patrick; Balance, Connor
2009-05-01
Numerical simulations show that derived elemental abundances in astrophysical nebulae can be uncertain by factors of two or more due to atomic data uncertainties alone, and of these uncertainties, absolute photoionization cross sections are the most important. Absolute single photoionization cross sections for Se^3+ ions have been measured from 42 eV to 56 eV at the ALS using the merged beams photo-ion technique. Theoretical photoionization cross section calculations were also performed for these ions using the state-of-the-art fully relativistic Dirac R-matrix code (DARC). The calculations show encouraging agreement with the experimental measurements.
Dai, Peng; Jiang, Nan; Tan, Ren-Xiang
2016-01-01
Elucidation of absolute configuration of chiral molecules including structurally complex natural products remains a challenging problem in organic chemistry. A reliable method for assigning the absolute stereostructure is to combine the experimental circular dichroism (CD) techniques such as electronic and vibrational CD (ECD and VCD), with quantum mechanics (QM) ECD and VCD calculations. The traditional QM methods as well as their continuing developments make them more applicable with accuracy. Taking some chiral natural products with diverse conformations as examples, this review describes the basic concepts and new developments of QM approaches for ECD and VCD calculations in solution and solid states.
1980-09-01
this system be given no further consideration. MAGNETOMETER TECHNIQUES Four types of magnetometers are commonly in use today: fluxgate, proton...that are cumbersome to operate and less accurate than fluxgate and proton magnetometers. The proton magnetometer is also gradually replacing the... fluxgate magnetometer because of its greater sensitivity (1 gamma or better), absolute accuracy, nonmoving parts, and its ability to measure absolute
An integrative perspective of the anaerobic threshold.
Sales, Marcelo Magalhães; Sousa, Caio Victor; da Silva Aguiar, Samuel; Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Alves, Polissandro Mortoza; Simões, Herbert Gustavo
2017-12-14
The concept of anaerobic threshold (AT) was introduced during the nineteen sixties. Since then, several methods to identify the anaerobic threshold (AT) have been studied and suggested as novel 'thresholds' based upon the variable used for its detection (i.e. lactate threshold, ventilatory threshold, glucose threshold). These different techniques have brought some confusion about how we should name this parameter, for instance, anaerobic threshold or the physiological measure used (i.e. lactate, ventilation). On the other hand, the modernization of scientific methods and apparatus to detect AT, as well as the body of literature formed in the past decades, could provide a more cohesive understanding over the AT and the multiple physiological systems involved. Thus, the purpose of this review was to provide an integrative perspective of the methods to determine AT. Copyright © 2017 Elsevier Inc. All rights reserved.
2006-12-01
hypothesis testing (ANOVA) using Microsoft Excel v10.0 at α = 0.05 significance threshold. Following ANOVA, Fisher's least significant difference, LSD, pair...critical value (α = 0.05) found in the t distribution. If the average absolute difference between any two groups was greater than the LSD critical value...McIntyre, T.M., Pontsler, A.V., Silva, A.R., St Hilaire, A., Xu, Y., Hinshaw, J.C., Zimmerman, G.A., Hama, K., Aoki, J., Arai, H., Prestwich, G.D
NASA Technical Reports Server (NTRS)
Samson, James A. R.; Haddad, G. N.; Masuoka, T.; Pareek, P. N.; Kilcoyne, D. A. L.
1989-01-01
Absolute absorption and photoionization cross sections of methane have been measured with an accuracy of about 2 or 3 percent over most of the wavelength range from 950 to 110 A. Also, dissociative photoionization cross sections were measured for the production of CH4(+), CH3(+), CH2(+), CH(+), and C(+) from their respective thresholds to 159 A, and for H(+) and H2(+) measurements were made down to 240 A. Fragmentation was observed at all excited ionic states of CH4.
Robust satellite techniques for oil spill detection and monitoring
NASA Astrophysics Data System (ADS)
Casciello, D.; Pergola, N.; Tramutoli, V.
Discharge of oil into the sea is one of the most dangerous technological hazards for the maritime environment. In recent years maritime transport and exploitation of marine resources have continued to increase; as a result, tanker accidents are nowadays increasingly frequent, continuously menacing maritime security and safety. Satellite remote sensing could contribute in multiple ways, in particular for early warning and real-time (or near real-time) monitoring. Several satellite techniques exist, mainly based on the use of SAR (Synthetic Aperture Radar) technology, which are able to recognise, with sufficient accuracy, oil spills discharged into the sea. Unfortunately, such methods cannot be profitably used for real-time detection because of the low observational frequency assured by present satellite platforms carrying SAR sensors (the mean repetition rate is something like 30 days). On the other hand, the potential of optical sensors aboard meteorological satellites has not yet been fully exploited, and no reliable techniques have been developed until now for this purpose. The main limit of the proposed techniques lies in the "fixed threshold" approach, which makes such techniques difficult to implement without operator supervision and, generally, without independent information on the oil spill presence that could drive the choice of the best threshold. A different methodological approach (RAT, Robust AVHRR Techniques) was proposed by Tramutoli (1998) and has already been successfully applied to several natural and environmental emergencies related to volcanic eruptions, forest fires and seismic activity. In this paper its extension to near real-time detection and monitoring of oil spills by means of NOAA-AVHRR (Advanced Very High Resolution Radiometer) records is described. Briefly, the RAT approach is an automatic change-detection scheme that considers a satellite image as a space-time process, described at each place (x,y) and time t by the value of the satellite-derived measurement V(x,y,t). An Absolute Local Index of Change of the Environment (ALICE) is computed; this index permits the identification of signal anomalies, in the space-time domain, as deviations from a normal state preliminarily defined for each image pixel (e.g. in terms of time average and standard deviation) on the basis only of satellite observations collected over several years in the past in similar observational conditions (same time of the day, same month of the year). In this way local (i.e. specific to the place and time of observation) rather than fixed thresholds are automatically set by RAT, which permits discrimination of signal anomalies from variations due to natural or observational condition variability. Using AVHRR observations in the Thermal (TIR) and Middle (MIR) Infrared regions, this approach has been applied to the extended oil spill event that occurred at the end of January 1991 in the Persian Gulf. Preliminary results are presented which confirm that the suggested technique is able to detect and monitor oil spills even in the most difficult observational conditions. Automatic implementation, intrinsic exportability to any geographic zone and/or satellite package, high sensitivity also to low-intensity signals (i.e. small or thin spills), and no need for ancillary information (other than the satellite data at hand) seem the most promising merits of the proposed technique.
Although these results should be confirmed by further analyses of different events and extended also to other AVHRR spectral bands (VIS, NIR), this work surely encourages continued research in this field. Moreover, the complete independence of the RAT approach from the specific sensor and/or satellite system will ensure its full exportability to the new generation of Earth observation satellite sensors (e.g. SEVIRI, the Spinning Enhanced Visible and Infrared Imager onboard the Meteosat Second Generation satellite, with a repetition rate of 15 minutes and 12 spectral bands) which, thanks to their improved capabilities, could actually guarantee timely, reliable and accurate information.
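The core of the ALICE index described above is a pixel-wise standardised anomaly computed against a multi-year reference built from scenes acquired in similar observational conditions; a minimal numpy sketch follows, with the anomaly cutoff k purely illustrative.

```python
import numpy as np

def alice_index(current, history):
    """Pixel-wise ALICE-style change index for co-registered satellite scenes.

    current : 2-D array, the image under test (e.g. AVHRR TIR brightness temperature)
    history : 3-D array (n_scenes, ny, nx) of past scenes acquired in similar
              observational conditions (same month, same time of day)
    """
    mu = history.mean(axis=0)
    sigma = history.std(axis=0)
    return (current - mu) / np.where(sigma > 0, sigma, np.nan)

def flag_anomalies(alice, k=2.0):
    """Flag pixels whose signal deviates more than k local standard deviations."""
    return np.abs(alice) > k
```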
Measurement of absolute gamma emission probabilities
NASA Astrophysics Data System (ADS)
Sumithrarachchi, Chandana S.; Rengan, Krish; Griffin, Henry C.
2003-06-01
The energies and emission probabilities (intensities) of gamma-rays emitted in radioactive decays of particular nuclides are the most important characteristics by which to quantify mixtures of radionuclides. Often, quantification is limited by uncertainties in measured intensities. A technique was developed to reduce these uncertainties. The method involves obtaining a pure sample of a nuclide using radiochemical techniques, and using appropriate fractions for beta and gamma measurements. The beta emission rates were measured using a liquid scintillation counter, and the gamma emission rates were measured with a high-purity germanium detector. Results were combined to obtain absolute gamma emission probabilities. All sources of uncertainties greater than 0.1% were examined. The method was tested with 38Cl and 88Rb.
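The combination of the two measurements reduces to a short calculation: the efficiency-corrected gamma emission rate divided by the beta-derived disintegration rate gives the absolute emission probability. The sketch below uses made-up numbers and an assumed full-energy-peak efficiency.

```python
def gamma_emission_probability(gamma_counts, live_time_s, peak_efficiency,
                               beta_activity_bq):
    """Absolute gamma emission probability P_gamma (sketch with assumed inputs).

    gamma_counts    : net counts in the full-energy peak of the HPGe spectrum
    live_time_s     : live counting time in seconds
    peak_efficiency : full-energy-peak detection efficiency at that energy
    beta_activity_bq: disintegration rate of the same source fraction, taken
                      here from the liquid scintillation measurement
    """
    gamma_emission_rate = gamma_counts / (live_time_s * peak_efficiency)
    return gamma_emission_rate / beta_activity_bq

# illustrative numbers only: ~0.98 gammas per decay
print(gamma_emission_probability(1.2e5, 3600, 0.034, 1.0e3))
```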
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, M. J.; Solodov, A. A.; Myatt, J. F.
Planar laser-plasma interaction (LPI) experiments at the National Ignition Facility (NIF) have allowed access for the first time to regimes of electron density scale length (~500 to 700 μm), electron temperature (~3 to 5 keV), and laser intensity (6 to 16 x 10^14 W/cm^2) that are relevant to direct-drive inertial confinement fusion ignition. Unlike in shorter-scale-length plasmas on OMEGA, scattered-light data on the NIF show that the near-quarter-critical LPI physics is dominated by stimulated Raman scattering (SRS) rather than by two-plasmon decay (TPD). This difference in regime is explained based on absolute SRS and TPD threshold considerations. SRS sidescatter tangential to density contours and other SRS mechanisms are observed. The fraction of laser energy converted to hot electrons is ~0.7% to 2.9%, consistent with observed levels of SRS. The intensity threshold for hot-electron production is assessed, and the use of a Si ablator slightly increases this threshold from ~4 x 10^14 to ~6 x 10^14 W/cm^2. These results have significant implications for mitigation of LPI hot-electron preheat in direct-drive ignition designs.
Thresholds and noise limitations of colour vision in dim light.
Kelber, Almut; Yovanovich, Carola; Olsson, Peter
2017-04-05
Colour discrimination is based on opponent photoreceptor interactions, and limited by receptor noise. In dim light, photon shot noise impairs colour vision, and in vertebrates, the absolute threshold of colour vision is set by dark noise in cones. Nocturnal insects (e.g. moths and nocturnal bees) and vertebrates lacking rods (geckos) have adaptations to reduce receptor noise and use chromatic vision even in very dim light. In contrast, vertebrates with duplex retinae use colour-blind rod vision when noisy cone signals become unreliable, and their transition from cone- to rod-based vision is marked by the Purkinje shift. Rod-cone interactions have not been shown to improve colour vision in dim light, but may contribute to colour vision in mesopic light intensities. Frogs and toads that have two types of rods use opponent signals from these rods to control phototaxis even at their visual threshold. However, for tasks such as prey or mate choice, their colour discrimination abilities fail at brighter light intensities, similar to other vertebrates, probably limited by the dark noise in cones. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Rosenberg, M. J.; Solodov, A. A.; Myatt, J. F.; Seka, W.; Michel, P.; Hohenberger, M.; Short, R. W.; Epstein, R.; Regan, S. P.; Campbell, E. M.; Chapman, T.; Goyon, C.; Ralph, J. E.; Barrios, M. A.; Moody, J. D.; Bates, J. W.
2018-01-01
Planar laser-plasma interaction (LPI) experiments at the National Ignition Facility (NIF) have allowed access for the first time to regimes of electron density scale length (~500 to 700 μm), electron temperature (~3 to 5 keV), and laser intensity (6 to 16 × 10^14 W/cm^2) that are relevant to direct-drive inertial confinement fusion ignition. Unlike in shorter-scale-length plasmas on OMEGA, scattered-light data on the NIF show that the near-quarter-critical LPI physics is dominated by stimulated Raman scattering (SRS) rather than by two-plasmon decay (TPD). This difference in regime is explained based on absolute SRS and TPD threshold considerations. SRS sidescatter tangential to density contours and other SRS mechanisms are observed. The fraction of laser energy converted to hot electrons is ~0.7% to 2.9%, consistent with observed levels of SRS. The intensity threshold for hot-electron production is assessed, and the use of a Si ablator slightly increases this threshold from ~4 × 10^14 to ~6 × 10^14 W/cm^2. These results have significant implications for mitigation of LPI hot-electron preheat in direct-drive ignition designs.
Reaction πN → ππN near threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frlez, Emil
1993-11-01
The LAMPF E1179 experiment used the π0 spectrometer and an array of charged particle range counters to detect and record π+π0, π0p, and π+π0p coincidences following the reaction π+p → π0π+p near threshold. The total cross sections for single pion production were measured at the incident pion kinetic energies 190, 200, 220, 240, and 260 MeV. Absolute normalizations were fixed by measuring π+p elastic scattering at 260 MeV. A detailed analysis of the π0 detection efficiency was performed using cosmic ray calibrations and pion single charge exchange measurements with a 30 MeV π- beam. All published data on πN → ππN, including our results, are simultaneously fitted to yield a common chiral symmetry breaking parameter ξ = -0.25 ± 0.10. The threshold matrix element |α0(π0π+p)| determined by linear extrapolation yields the value of the s-wave isospin-2 ππ scattering length α_0^2(ππ) = -0.041 ± 0.003 m_π^-1, within the framework of soft-pion theory.
The three-dimensional structure of cumulus clouds over the ocean. 1: Structural analysis
NASA Technical Reports Server (NTRS)
Kuo, Kwo-Sen; Welch, Ronald M.; Weger, Ronald C.; Engelstad, Mark A.; Sengupta, S. K.
1993-01-01
Thermal channel (channel 6, 10.4-12.5 micrometers) images of five Landsat thematic mapper cumulus scenes over the ocean are examined. These images are thresholded using the standard International Satellite Cloud Climatology Project (ISCCP) thermal threshold algorithm. The individual clouds in the cloud fields are segmented to obtain their structural statistics which include size distribution, orientation angle, horizontal aspect ratio, and perimeter-to-area (PtA) relationship. The cloud size distributions exhibit a double power law with the smaller clouds having a smaller absolute exponent. The cloud orientation angles, horizontal aspect ratios, and PtA exponents are found to be in good agreement with earlier studies. A technique also is developed to recognize individual cells within a cloud so that statistics of cloud cellular structure can be obtained. Cell structural statistics are computed for each cloud. Unicellular clouds are generally smaller (less than or equal to 1 km) and have smaller PtA exponents, while multicellular clouds are larger (greater than or equal to 1 km) and have larger PtA exponents. Cell structural statistics are similar to those of the smaller clouds. When each cell is approximated as a quadric surface using a linear least squares fit, most cells have the shape of a hyperboloid of one sheet, but about 15% of the cells are best modeled by a hyperboloid of two sheets. Less than 1% of the clouds are ellipsoidal. The number of cells in a cloud increases slightly faster than linearly with increasing cloud size. The mean nearest neighbor distance between cells in a cloud, however, appears to increase linearly with increasing cloud size and to reach a maximum when the cloud effective diameter is about 10 km; then it decreases with increasing cloud size. Sensitivity studies of threshold and lapse rate show that neither has a significant impact upon the results. A goodness-of-fit ratio is used to provide a quantitative measure of the individual cloud results. Significantly improved results are obtained after applying a smoothing operator, suggesting that eliminating subresolution-scale variations with higher spatial resolution may yield even better shape analyses.
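The first processing steps described above, thermal thresholding followed by segmentation of individual clouds and computation of their size statistics, can be sketched with scipy; the threshold value and pixel size below are illustrative assumptions, not the ISCCP values used in the paper.

```python
import numpy as np
from scipy import ndimage

def cloud_effective_diameters(bt, threshold_k=287.0, pixel_km=0.12):
    """Segment a thermal-channel image into individual clouds and return their
    effective diameters in km.

    bt          : 2-D array of brightness temperatures (K)
    threshold_k : pixels colder than this are classified as cloudy (illustrative)
    pixel_km    : pixel size in km (roughly the Landsat TM thermal resolution)
    """
    cloudy = bt < threshold_k
    labels, n_clouds = ndimage.label(cloudy)            # 4-connected cloud segmentation
    areas_px = ndimage.sum(cloudy, labels, index=np.arange(1, n_clouds + 1))
    areas_km2 = np.asarray(areas_px) * pixel_km ** 2
    return 2.0 * np.sqrt(areas_km2 / np.pi)             # input to the size distribution
```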
Wilson, Nick; Selak, Vanessa; Blakely, Tony; Leung, William; Clarke, Philip; Jackson, Rod; Knight, Josh; Nghiem, Nhung
2016-03-11
Based on new systematic reviews of the evidence, the US Preventive Services Task Force has drafted updated guidelines on the use of low-dose aspirin for the primary prevention of both cardiovascular disease (CVD) and cancer. The Task Force generally recommends consideration of aspirin in adults aged 50-69 years with 10-year CVD risk of at least 10%, in whom absolute health gain (reduction of CVD and cancer) is estimated to exceed absolute health loss (increase in bleeds). With the ongoing decline in CVD, current risk calculators for New Zealand are probably outdated, so it is difficult to be precise about what proportion of the population is in this risk category (roughly equivalent to 5-year CVD risk ≥5%). Nevertheless, we suspect that most smokers aged 50-69 years, and some non-smokers, would probably meet the new threshold for taking low-dose aspirin. The country therefore needs updated guidelines and risk calculators that are ideally informed by estimates of absolute net health gain (in quality-adjusted life-years (QALYs) per person) and cost-effectiveness. Other improvements to risk calculators include: epidemiological rigour (eg, by addressing competing mortality); providing enhanced graphical display of risk to enhance risk communication; and possibly capturing the issues of medication disutility and comparison with lifestyle changes.
Intracortical myelination in musicians with absolute pitch: Quantitative morphometry using 7‐T MRI
Knösche, Thomas R.
2016-01-01
Absolute pitch (AP) is known as the ability to recognize and label the pitch chroma of a given tone without external reference. Known brain structures and functions related to AP are mainly macroscopic. To shed light on the underlying neural mechanism of AP, we investigated the intracortical myeloarchitecture in musicians with and without AP using the quantitative mapping of the longitudinal relaxation rates with ultra‐high‐field magnetic resonance imaging at 7 T. We found greater intracortical myelination for AP musicians in the anterior region of the supratemporal plane, particularly the medial region of the right planum polare (PP). In the same region of the right PP, we also found a positive correlation with a behavioral index of AP performance. In addition, we found a positive correlation with a frequency discrimination threshold in the anterolateral Heschl's gyrus in the right hemisphere, demonstrating distinctive neural processes of absolute recognition and relative discrimination of pitch. Regarding possible effects of local myelination in the cortex and the known importance of the anterior superior temporal gyrus/sulcus for the identification of auditory objects, we argue that pitch chroma may be processed as an identifiable object property in AP musicians. Hum Brain Mapp 37:3486–3501, 2016. © 2016 Wiley Periodicals, Inc. PMID:27160707
Diagnostic pure-tone audiometry in schools: mobile testing without a sound-treated environment.
Swanepoel, De Wet; Maclennan-Smith, Felicity; Hall, James W
2013-01-01
To validate diagnostic pure-tone audiometry in schools without a sound-treated environment using an audiometer that incorporates insert earphones covered by circumaural earcups and real-time environmental noise monitoring. A within-subject repeated measures design was employed to compare air (250 to 8000 Hz) and bone (250 to 4000 Hz) conduction pure-tone thresholds measured in natural school environments with thresholds measured in a sound-treated booth. 149 children (54% female) with an average age of 6.9 yr (SD = 0.6; range = 5-8). Average difference between the booth and natural environment thresholds was 0.0 dB (SD = 3.6) for air conduction and 0.1 dB (SD = 3.1) for bone conduction. Average absolute difference between the booth and natural environment was 2.1 dB (SD = 2.9) for air conduction and 1.6 dB (SD = 2.7) for bone conduction. Almost all air- (96%) and bone-conduction (97%) threshold comparisons between the natural and booth test environments were within 0 to 5 dB. No statistically significant differences between thresholds recorded in the natural and booth environments for air- and bone-conduction audiometry were found (p > 0.01). Diagnostic air- and bone-conduction audiometry in schools, without a sound-treated room, is possible with sufficient earphone attenuation and real-time monitoring of environmental noise. Audiological diagnosis on-site for school screening may address concerns of false-positive referrals and poor follow-up compliance and allow for direct referral to audiological and/or medical intervention. American Academy of Audiology.
NASA Astrophysics Data System (ADS)
Gómez-Ocampo, E.; Gaxiola-Castro, G.; Durazo, Reginaldo
2017-06-01
Threshold is defined as the point where small changes in an environmental driver produce large responses in the ecosystem. Generalized additive models (GAMs) were used to estimate the thresholds and contribution of key dynamic physical variables in terms of phytoplankton production and variations in biomass in the tropical-subtropical Pacific Ocean off Mexico. The statistical approach used here showed that thresholds were shallower for primary production than for phytoplankton biomass (pycnocline < 68 m and mixed layer < 30 m versus pycnocline < 45 m and mixed layer < 80 m) but were similar for absolute dynamic topography and Ekman pumping (ADT < 59 cm and EkP > 0 cm d-1 versus ADT < 60 cm and EkP > 4 cm d-1). The relatively high productivity on seasonal (spring) and interannual (La Niña 2008) scales was linked to low ADT (45-60 cm) and shallow pycnocline depth (9-68 m) and mixed layer (8-40 m). Statistical estimations from satellite data indicated that the contributions of ocean circulation to phytoplankton variability were 18% (for phytoplankton biomass) and 46% (for phytoplankton production). Although the statistical contribution of models constructed with in situ integrated chlorophyll a and primary production data was lower than the one obtained with satellite data (11%), the fits were better for the former, based on the residual distribution. The results reported here suggest that estimated thresholds may reliably explain the spatial-temporal variations of phytoplankton in the tropical-subtropical Pacific Ocean off the coast of Mexico.
A Comparison of Meditation with Other Relaxation Techniques.
ERIC Educational Resources Information Center
Fling, Sheila
This paper critiques a negative 1984 review, "Meditation and Somatic Arousal Reduction" (Holmes), on the absolute effectiveness of meditation in reducing somatic arousal and reviews research on the relative effectiveness of meditation compared to techniques such as biofeedback, hypnosis, progressive muscle relaxation, and autogenics in…
Uncertainty Estimates of Psychoacoustic Thresholds Obtained from Group Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Adaptive psychoacoustic test methods, in which the next signal level depends on the response to the previous signal, are the most efficient for determining psychoacoustic thresholds of individual subjects. In many tests conducted in the NASA psychoacoustic labs, the goal is to determine thresholds representative of the general population. To do this economically, non-adaptive testing methods are used in which three or four subjects are tested at the same time with predetermined signal levels. This approach requires us to identify techniques for assessing the uncertainty in resulting group-average psychoacoustic thresholds. In this presentation we examine four techniques: the Delta Method of frequentist statistics, the Generalized Linear Model (GLM), the Nonparametric Bootstrap (a frequentist method), and Markov Chain Monte Carlo Posterior Estimation (a Bayesian approach). Each technique is exercised on a manufactured, theoretical dataset and then on datasets from two psychoacoustics facilities at NASA. The Delta Method is the simplest to implement and accurate for the cases studied. The GLM is found to be the least robust, and the Bootstrap takes the longest to calculate. The Bayesian Posterior Estimate is the most versatile technique examined because it allows the inclusion of prior information.
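As a rough illustration of how a bootstrap can attach uncertainty to a group-average threshold, the sketch below (assumptions: pooled yes/no responses at predetermined levels, a logistic psychometric function, and made-up data) fits the 50%-point with SciPy and resamples the trials to obtain a confidence interval. It is not the NASA analysis itself.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative pooled group data: predetermined signal levels (dB) and
# the number of "detected" responses out of n trials at each level.
levels = np.array([20., 25., 30., 35., 40., 45.])
n_trials = np.array([24, 24, 24, 24, 24, 24])
n_yes = np.array([2, 5, 11, 18, 22, 24])

def psychometric(x, mu, sigma):
    """Logistic psychometric function; mu is the 50%-point (threshold)."""
    return 1.0 / (1.0 + np.exp(-(x - mu) / sigma))

def fit_threshold(levels, n_yes, n_trials):
    p = n_yes / n_trials
    popt, _ = curve_fit(psychometric, levels, p, p0=[np.median(levels), 5.0])
    return popt[0]  # threshold estimate (mu)

thr_hat = fit_threshold(levels, n_yes, n_trials)   # point estimate

# Nonparametric bootstrap: resample the trials at each level and refit.
rng = np.random.default_rng(0)
boot = []
for _ in range(2000):
    yes_b = rng.binomial(n_trials, n_yes / n_trials)   # resampled successes per level
    try:
        boot.append(fit_threshold(levels, yes_b, n_trials))
    except RuntimeError:
        continue  # skip rare non-converging resamples
ci = np.percentile(boot, [2.5, 97.5])
print(f"threshold = {thr_hat:.1f} dB, 95% bootstrap CI = [{ci[0]:.1f}, {ci[1]:.1f}] dB")
```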
Absolute reactivity calibration of accelerator-driven systems after RACE-T experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jammes, C. C.; Imel, G. R.; Geslot, B.
2006-07-01
The RACE-T experiments that were held in November 2005 in the ENEA-Casaccia research center near Rome allowed us to improve our knowledge of the experimental techniques for absolute reactivity calibration at either startup or shutdown phases of accelerator-driven systems. Various experimental techniques for assessing a subcritical level were inter-compared through three different subcritical configurations, SC0, SC2 and SC3, at about -0.5, -3 and -6 dollars, respectively. The area-ratio method, based on the use of a pulsed neutron source, appears to be the best performing. When the reactivity estimate is expressed in dollar units, the uncertainties obtained with the area-ratio method were less than 1% for any subcritical configuration. The sensitivity to measurement location was slightly more than 1% and always less than 4%. Finally, it is noteworthy that the source jerk technique using a transient caused by the pulsed neutron source shutdown provides results in good agreement with those obtained from the area-ratio technique. (authors)
Exploring three faint source detections methods for aperture synthesis radio images
NASA Astrophysics Data System (ADS)
Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.
2015-04-01
Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity/noise ratio, these objects can be easily missed by automated detection methods, which have been classically based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state of the art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources of radio interferometric images.
Study of CMOS-SOI Integrated Temperature Sensing Circuits for On-Chip Temperature Monitoring.
Malits, Maria; Brouk, Igor; Nemirovsky, Yael
2018-05-19
This paper investigates the concepts, performance and limitations of temperature sensing circuits realized in complementary metal-oxide-semiconductor (CMOS) silicon on insulator (SOI) technology. It is shown that the MOSFET threshold voltage (Vt) can be used to accurately measure the chip local temperature by using a Vt extractor circuit. Furthermore, the circuit's performance is compared to standard circuits used to generate an accurate output current or voltage proportional to the absolute temperature, i.e., proportional-to-absolute temperature (PTAT), in terms of linearity, sensitivity, power consumption, speed, accuracy and calibration needs. It is shown that the Vt extractor circuit is a better solution to determine the temperature of low power, analog and mixed-signal designs due to its accuracy, low power consumption and no need for calibration. The circuit has been designed using 1 µm partially depleted (PD) CMOS-SOI technology, and demonstrates a measurement inaccuracy of ±1.5 K across the 300 K to 500 K temperature range while consuming only 30 µW during operation.
Hühn, M
1995-05-01
Some approaches to molecular marker-assisted linkage detection for a dominant disease-resistance trait based on a segregating F2 population are discussed. Analysis of two-point linkage is carried out by the traditional measure of maximum lod score. It depends on (1) the maximum-likelihood estimate of the recombination fraction between the marker and the disease-resistance gene locus, (2) the observed absolute frequencies, and (3) the unknown number of tested individuals. If one replaces the absolute frequencies by expressions depending on the unknown sample size and the maximum-likelihood estimate of recombination value, the conventional rule for significant linkage (maximum lod score exceeds a given linkage threshold) can be resolved for the sample size. For each sub-population used for linkage analysis [susceptible (= recessive) individuals, resistant (= dominant) individuals, complete F2] this approach gives a lower bound for the necessary number of individuals required for the detection of significant two-point linkage by the lod-score method.
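To make the sample-size argument concrete, here is a hedged sketch of the expected-lod-score calculation for one sub-population (susceptible F2 individuals scored with a codominant marker in coupling). The marker-class probabilities and the 1:2:1 null are standard, but the expressions are a generic illustration rather than the paper's exact derivation.

```python
import numpy as np

def expected_lod_per_individual(r):
    """Expected LOD contribution of one susceptible (aa) F2 individual,
    for a codominant marker linked in coupling with recombination fraction r.
    Marker-class probabilities under linkage are compared with the 1:2:1 null."""
    p = np.array([(1 - r) ** 2, 2 * r * (1 - r), r ** 2])  # mm, Mm, MM among aa plants
    q = np.array([0.25, 0.5, 0.25])                        # no-linkage expectation
    return np.sum(p * np.log10(p / q))

def min_sample_size(r, lod_threshold=3.0):
    """Smallest number of susceptible F2 individuals whose *expected*
    maximum lod score reaches the significance threshold."""
    return int(np.ceil(lod_threshold / expected_lod_per_individual(r)))

for r in (0.05, 0.10, 0.20, 0.30):
    print(f"r = {r:.2f}: at least {min_sample_size(r)} susceptible F2 individuals")
```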
Age-associated loss of selectivity in human olfactory sensory neurons
Rawson, Nancy E.; Gomez, George; Cowart, Beverly J.; Kriete, Andres; Pribitkin, Edmund; Restrepo, Diego
2011-01-01
We report a cross-sectional study of olfactory impairment with age based on both odorant-stimulated responses of human olfactory sensory neurons (OSNs) and tests of olfactory threshold sensitivity. A total of 621 OSNs from 440 subjects in two age groups of younger (≤45 years) and older (≥60 years) subjects were investigated using fluorescence intensity ratio fura-2 imaging. OSNs were tested for responses to two odorant mixtures, as well as to subsets of and individual odors in those mixtures. Whereas cells from younger donors were highly selective in the odorants to which they responded, cells from older donors were more likely to respond to multiple odor stimuli, despite a loss in these subjects’ absolute olfactory sensitivity, suggesting a loss of specificity. This degradation in peripheral cellular specificity may impact odor discrimination and olfactory adaptation in the elderly. It is also possible that chronic adaptation as a result of reduced specificity contributes to observed declines in absolute sensitivity. PMID:22074806
Murray, L; Sethugavalar, B; Robertshaw, H; Bayman, E; Thomas, E; Gilson, D; Prestwich, R J D
2015-07-01
Recent radiotherapy guidelines for lymphoma have included involved site radiotherapy (ISRT), involved node radiotherapy (INRT) and irradiation of residual volume after full-course chemotherapy. In the absence of late toxicity data, we aim to compare organ at risk (OAR) dose-metrics and calculated second malignancy risks. Fifteen consecutive patients who had received mediastinal radiotherapy were included. Four radiotherapy plans were generated for each patient using a parallel pair photon technique: (i) involved field radiotherapy (IFRT), (ii) ISRT, (iii) INRT, (iv) residual post-chemotherapy volume. The radiotherapy dose was 30 Gy in 15 fractions. The OARs evaluated were: breasts, lungs, thyroid, heart, oesophagus. Relative and absolute second malignancy rates were estimated using the concept of organ equivalent dose. Significance was defined as P < 0.005. Compared with ISRT, IFRT significantly increased doses to lung, thyroid, heart and oesophagus, whereas INRT and residual volume techniques significantly reduced doses to all OARs. The relative risks of second cancers were significantly higher with IFRT compared with ISRT for lung, breast and thyroid; INRT and residual volume resulted in significantly lower relative risks compared with ISRT for lung, breast and thyroid. The median excess absolute risks of second cancers were consistently lowest for the residual technique and highest for IFRT in terms of thyroid, lung and breast cancers. The risk of oesophageal cancer was similar for all four techniques. Overall, the absolute risk of second cancers was very similar for ISRT and INRT. Decreasing treatment volumes from IFRT to ISRT, INRT or residual volume reduces radiation exposure to OARs. Second malignancy modelling suggests that this reduction in treatment volumes will lead to a reduction in absolute excess second malignancy. Little difference was observed in second malignancy risks between ISRT and INRT, supporting the use of ISRT in the absence of a pre-chemotherapy positron emission tomography scan in the radiotherapy treatment position. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
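The organ equivalent dose (OED) concept referred to above averages a risk-equivalent dose over the organ's dose distribution. The sketch below assumes the linear-exponential dose-response form often used in this literature; the alpha value and the differential DVHs are purely illustrative.

```python
import numpy as np

def oed_linear_exponential(dose_bins_gy, rel_volume, alpha=0.044):
    """Organ equivalent dose from a differential DVH.
    dose_bins_gy : dose received by each volume bin (Gy)
    rel_volume   : fraction of the organ volume in each bin (sums to 1)
    alpha        : cell-sterilisation parameter of the linear-exponential model
    """
    red = dose_bins_gy * np.exp(-alpha * dose_bins_gy)   # risk-equivalent dose per bin
    return np.sum(rel_volume * red)

# Illustrative lung DVHs for a larger-field and a smaller-field plan.
dose = np.linspace(0, 30, 61)
dvh_large = np.exp(-dose / 12.0); dvh_large /= dvh_large.sum()
dvh_small = np.exp(-dose / 7.0);  dvh_small /= dvh_small.sum()

oed_a = oed_linear_exponential(dose, dvh_large)
oed_b = oed_linear_exponential(dose, dvh_small)
print(f"OED larger-field plan: {oed_a:.2f} Gy, smaller-field plan: {oed_b:.2f} Gy")
print(f"relative second-cancer risk (larger/smaller): {oed_a / oed_b:.2f}")
```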
Forecasting in foodservice: model development, testing, and evaluation.
Miller, J L; Thompson, P A; Orabella, M M
1991-05-01
This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
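A minimal sketch of the forecasting and evaluation steps described above, with invented customer counts and seasonal indices: deseasonalized simple exponential smoothing of the count, multiplication by a preference statistic for a menu item, and the three error measures used in the study.

```python
import numpy as np

def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead forecasts: F[t] = alpha*y[t-1] + (1-alpha)*F[t-1]."""
    forecasts = np.empty_like(series, dtype=float)
    forecasts[0] = series[0]
    for t in range(1, len(series)):
        forecasts[t] = alpha * series[t - 1] + (1 - alpha) * forecasts[t - 1]
    return forecasts

def forecast_errors(actual, forecast):
    err = actual - forecast
    return {
        "MSE": np.mean(err ** 2),                      # mean squared error
        "MAD": np.mean(np.abs(err)),                   # mean absolute deviation
        "MAPE": np.mean(np.abs(err / actual)) * 100,   # mean absolute percentage error
    }

# Illustrative daily customer counts with a weekly pattern and matching indices.
counts = np.array([520, 480, 455, 470, 610, 530, 485, 460, 475, 625], dtype=float)
seasonal_index = np.array([1.05, 0.97, 0.92, 0.95, 1.23, 1.05, 0.97, 0.92, 0.95, 1.23])

deseasonalized = counts / seasonal_index
count_forecast = simple_exponential_smoothing(deseasonalized) * seasonal_index
menu_item_forecast = count_forecast * 0.35   # count forecast times a predicted preference statistic
print(forecast_errors(counts, count_forecast))
```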
Measurement of absolute regional lung air volumes from near-field x-ray speckles.
Leong, Andrew F T; Paganin, David M; Hooper, Stuart B; Siew, Melissa L; Kitchen, Marcus J
2013-11-18
Propagation-based phase contrast x-ray (PBX) imaging yields high contrast images of the lung where airways that overlap in projection coherently scatter the x-rays, giving rise to a speckled intensity due to interference effects. Our previous works have shown that total and regional changes in lung air volumes can be accurately measured from two-dimensional (2D) absorption or phase contrast images when the subject is immersed in a water-filled container. In this paper we demonstrate how the phase contrast speckle patterns can be used to directly measure absolute regional lung air volumes from 2D PBX images without the need for a water-filled container. We justify this technique analytically and via simulation using the transport-of-intensity equation and calibrate the technique using our existing methods for measuring lung air volume. Finally, we show the full capabilities of this technique for measuring regional differences in lung aeration.
Sampling techniques for thrips (Thysanoptera: Thripidae) in preflowering tomato.
Joost, P Houston; Riley, David G
2004-08-01
Sampling techniques for thrips (Thysanoptera: Thripidae) were compared in preflowering tomato plants at the Coastal Plain Experiment Station in Tifton, GA, in 2000 and 2003, to determine the most effective method of determining abundance of thrips on tomato foliage early in the growing season. Three relative sampling techniques, including a standard insect aspirator, a 946-ml beat cup, and an insect vacuum device, were compared for accuracy to an absolute method and to themselves for precision and efficiency of sampling thrips. Thrips counts of all relative sampling methods were highly correlated (R > 0.92) to the absolute method. The aspirator method was the most accurate compared with the absolute sample according to regression analysis in 2000. In 2003, all sampling methods were considered accurate according to Dunnett's test, but thrips numbers were lower and sample variation was greater than in 2000. In 2000, the beat cup method had the lowest relative variation (RV) or best precision, at 1 and 8 d after transplant (DAT). Only the beat cup method had RV values <25 for all sampling dates. In 2003, the beat cup method had the lowest RV value at 15 and 21 DAT. The beat cup method also was the most efficient method for all sample dates in both years. Frankliniella fusca (Pergande) was the most abundant thrips species on the foliage of preflowering tomato in both years of study at this location. Overall, the best thrips sampling technique tested was the beat cup method in terms of precision and sampling efficiency.
On-Wafer Characterization of Millimeter-Wave Antennas for Wireless Applications
NASA Technical Reports Server (NTRS)
Simons, Rainee N.; Lee, Richard Q.
1998-01-01
The paper demonstrates a de-embedding technique and a direct on-substrate measurement technique for fast and inexpensive characterization of miniature antennas for wireless applications at millimeter-wave frequencies. The technique is demonstrated by measurements on a tapered slot antenna (TSA). The measured results at Ka-Band frequencies include input impedance, mutual coupling between two TSAs and absolute gain of TSA.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations as they provide good forecasting performance.
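The hybrid scheme described above can be sketched as follows, assuming a discrete wavelet transform for the decomposition, a small scikit-learn MLP in place of the PSO-initialized network, and a simulated rate-variation series; in a real evaluation the decomposition would have to be recomputed on past data only to avoid look-ahead.

```python
import numpy as np
import pywt
from scipy.signal import lfilter
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Illustrative AR(1)-like daily interest rate variation series.
rate_variation = lfilter([1.0], [1.0, -0.6], rng.normal(0, 0.02, 512))

# 1) Multiresolution decomposition (a discrete wavelet transform here;
#    EMD or VMD would slot into the same place).
coeffs = pywt.wavedec(rate_variation, "db4", level=3)
components = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[: len(rate_variation)])

# 2) Forecast each component one step ahead from its own lagged values with a
#    small neural network, then sum the component forecasts.
def one_step_forecast(x, lags=5):
    X = np.array([x[t - lags:t] for t in range(lags, len(x))])
    y = x[lags:]
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X[:-1], y[:-1])          # train on all but the final point
    return model.predict(X[-1:])[0]    # forecast the final point

pred = sum(one_step_forecast(c) for c in components)
actual = rate_variation[-1]
print(f"predicted next-day variation: {pred:+.4f}, actual: {actual:+.4f}")
print(f"absolute error: {abs(pred - actual):.4f}")
```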
Sikazwe, Izukanji; Wa Mwanza, Mwanza; Savory, Theodora; Sikombe, Kombatende; Somwe, Paul; Roy, Monika; Padian, Nancy
2018-01-01
Background Although randomized trials have established the clinical efficacy of treating all persons living with HIV (PLWHs), expanding treatment eligibility in the real world may have additional behavioral effects (e.g., changes in retention) or lead to unintended consequences (e.g., crowding out sicker patients owing to increased patient volume). Using a regression discontinuity design, we sought to assess the effects of a previous change to Zambia’s HIV treatment guidelines increasing the threshold for treatment eligibility from 350 to 500 cells/μL to anticipate effects of current global efforts to treat all PLWHs. Methods and findings We analyzed antiretroviral therapy (ART)-naïve adults who newly enrolled in HIV care in a network of 64 clinics operated by the Zambian Ministry of Health and supported by the Centre for Infectious Disease Research in Zambia (CIDRZ). Patients were restricted to those enrolling in a narrow window around the April 1, 2014 change to Zambian HIV treatment guidelines that raised the CD4 threshold for treatment from 350 to 500 cells/μL (i.e., August 1, 2013, to November 1, 2014). Clinical and sociodemographic data were obtained from an electronic medical record system used in routine care. We used a regression discontinuity design to estimate the effects of this change in treatment eligibility on ART initiation within 3 months of enrollment, retention in care at 6 months (defined as clinic attendance between 3 and 9 months after enrollment), and a composite of both ART initiation by 3 months and retention in care at 6 months in all new enrollees. We also performed an instrumental variable (IV) analysis to quantify the effect of actually initiating ART because of this guideline change on retention. Overall, 34,857 ART-naïve patients (39.1% male, median age 34 years [IQR 28–41], median CD4 268 cells/μL [IQR 134–430]) newly enrolled in HIV care during this period; 23,036 were analyzed after excluding patients around the threshold to allow for clinic-to-clinic variations in actual guideline uptake. In all newly enrolling patients, expanding the CD4 threshold for treatment from 350 to 500 cells/μL was associated with a 13.6% absolute increase in ART initiation within 3 months of enrollment (95% CI, 11.1%–16.2%), a 4.1% absolute increase in retention at 6 months (95% CI, 1.6%–6.7%), and a 10.8% absolute increase in the percentage of patients who initiated ART by 3 months and were retained at six months (95% CI, 8.1%–13.5%). These effects were greatest in patients who would have become newly eligible for ART with the change in guidelines: a 43.7% increase in ART initiation by 3 months (95% CI, 37.5%–49.9%), 13.6% increase in retention at six months (95% CI, 7.3%–20.0%), and a 35.5% increase in the percentage of patients on ART at 3 months and still in care at 6 months (95% CI, 29.2%–41.9%). We did not observe decreases in ART initiation or retention in patients not directly targeted by the guideline change. An IV analysis found that initiating ART in response to the guideline change led to a 37.9% (95% CI, 28.8%–46.9%) absolute increase in retention in care. Limitations of this study include uncertain generalizability under newer models of care, lack of laboratory data (e.g., viral load), inability to account for earlier stages in the HIV care cascade (e.g., HIV testing and linkage), and potential for misclassification of eligibility status or outcome.
Conclusions In this study, guidelines raising the CD4 threshold for treatment from 350 to 500 cells/μL were associated with a rapid rise in ART initiation as well as enhanced retention among newly treatment-eligible patients, without negatively impacting patients with lower CD4 levels. These data suggest that health systems in Zambia and other high-prevalence settings could substantially enhance engagement even among those with high CD4 levels (i.e., above 500 cells/μL) by expanding treatment without undermining existing care standards. PMID:29870531
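For readers unfamiliar with the design, a minimal sketch of the sharp regression-discontinuity logic follows, using simulated enrollment dates and a simulated 13% jump; the study's actual analysis (donut exclusion around the threshold, clinic-level variation, the IV step) is considerably richer.

```python
import numpy as np

def rd_estimate(running, outcome, cutoff=0.0, bandwidth=90.0):
    """Sharp regression-discontinuity estimate via local linear regression:
    outcome ~ 1 + above + centered + above*centered, within +/- bandwidth
    of the cutoff.  Returns the estimated jump at the cutoff."""
    x = running - cutoff
    keep = np.abs(x) <= bandwidth
    x, y = x[keep], outcome[keep]
    above = (x >= 0).astype(float)
    X = np.column_stack([np.ones_like(x), above, x, above * x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]   # coefficient on the "above cutoff" indicator

# Illustrative data: enrollment day relative to the guideline change, and
# whether ART was initiated within 3 months (simulated with a 13% jump).
rng = np.random.default_rng(7)
days = rng.uniform(-240, 240, 20000)
p = 0.45 + 0.0002 * days + 0.13 * (days >= 0)
art_3mo = rng.binomial(1, np.clip(p, 0, 1))

print(f"estimated jump in 3-month ART initiation: {rd_estimate(days, art_3mo):+.3f}")
```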
Wavelet-based adaptive thresholding method for image segmentation
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl
2001-05-01
A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
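A minimal sketch of the idea, using PyWavelets: detail coefficients are attenuated before 2-D reconstruction, the result serves as a spatially varying threshold surface, and pixels exceeding it by a fixed offset are kept. The wavelet, attenuation factor and offset are illustrative, not the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_threshold_surface(image, wavelet="db2", level=3, attenuation=0.1):
    """Build a slowly varying threshold surface by attenuating the detail
    coefficients in a 2-D wavelet synthesis (coarse-to-fine reconstruction
    of a high-frequency-reduced version of the image)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    damped = [coeffs[0]] + [
        tuple(attenuation * d for d in details) for details in coeffs[1:]
    ]
    surface = pywt.waverec2(damped, wavelet)
    return surface[: image.shape[0], : image.shape[1]]

def segment(image, offset=5.0):
    """Local thresholding: a pixel is foreground if it exceeds the
    local threshold surface by more than `offset` grey levels."""
    threshold = wavelet_threshold_surface(image)
    return image > threshold + offset

# Illustrative image: uneven (ramped) background plus two small bright inclusions.
yy, xx = np.mgrid[0:128, 0:128]
img = 40 + 0.3 * xx + 0.2 * yy            # nonuniform background
img[30:34, 40:44] += 25                    # inclusion 1
img[90:96, 100:104] += 18                  # inclusion 2
mask = segment(img)
print("foreground pixels found:", int(mask.sum()))
```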
NASA Astrophysics Data System (ADS)
Carpenter, John E.; McNary, Christopher P.; Furin, April; Sweeney, Andrew F.; Armentrout, P. B.
2017-09-01
The first absolute experimental bond dissociation energies (BDEs) for the main heterolytic bond cleavages of four benzylpyridinium "thermometer" ions are measured using threshold collision-induced dissociation in a guided ion beam tandem mass spectrometer. In this experiment, substituted benzylpyridinium ions are introduced into the apparatus using an electrospray ionization source, thermalized, and collided with Xe at varied kinetic energies to determine absolute cross-sections for these reactions. Various effects are accounted for, including kinetic shifts, multiple collisions, and internal and kinetic energy distributions. These experimentally measured 0 K BDEs are compared with computationally predicted values at the B3LYP-GD3BJ, M06-GD3, and MP2(full) levels of theory with a 6-311+G(2d,2p) basis set using vibrational frequencies and geometries determined at the B3LYP/6-311+G(d,p) level. Additional dissociation pathways are observed for nitrobenzylpyridinium experimentally and investigated using these same levels of theory. Experimental BDEs are also compared against values in the literature at the AM1, HF, B3LYP, B3P86, and CCSD(T) levels of theory. Of the calculated values obtained in this work, the MP2(full) level of theory with counterpoise corrections best reproduces the experimental results, as do the similar literature CCSD(T) values. Lastly, the survival yield method is used to determine the characteristic temperature (Tchar) of the electrospray source prior to the thermalization region and to confirm efficient thermalization.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding-based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
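A compact NumPy sketch of an ISTA-style iteration with a generalized p-thresholding operator (one common form of p-shrinkage that reduces to soft thresholding at p = 1), applied to a toy image that is sparse in the image domain and 30% random k-space sampling; the paper's transform domain, data and parameters differ.

```python
import numpy as np

def p_threshold(z, lam, p=0.7):
    """Generalized p-shrinkage (reduces to soft thresholding at p = 1):
    magnitudes are shrunk by lam**(2-p) * |z|**(p-1), phases are kept."""
    mag = np.abs(z)
    shrunk = np.maximum(mag - lam ** (2 - p) * np.maximum(mag, 1e-12) ** (p - 1), 0)
    return shrunk * np.exp(1j * np.angle(z))

def ista_cs_mri(kspace, mask, lam=0.02, p=0.7, n_iter=100):
    """Recover an image assumed sparse in the image domain from undersampled
    k-space, using an ISTA-style iteration with p-thresholding."""
    x = np.zeros(kspace.shape, dtype=complex)
    for _ in range(n_iter):
        residual = mask * (np.fft.fft2(x, norm="ortho") - kspace)
        x = x - np.fft.ifft2(residual, norm="ortho")   # gradient step (step size 1)
        x = p_threshold(x, lam, p)                     # promote sparsity
    return x

# Illustrative sparse phantom and 30% random k-space sampling.
rng = np.random.default_rng(3)
phantom = np.zeros((64, 64)); phantom[20:24, 30:34] = 1.0; phantom[45, 10:20] = 0.8
mask = rng.random((64, 64)) < 0.3
kspace = mask * np.fft.fft2(phantom, norm="ortho")

recon = np.abs(ista_cs_mri(kspace, mask))
err = np.linalg.norm(recon - phantom) / np.linalg.norm(phantom)
print(f"relative reconstruction error: {err:.3f}")
```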
Schmitz, Patric; Hildebrandt, Julian; Valdez, Andre Calero; Kobbelt, Leif; Ziefle, Martina
2018-04-01
In virtual environments, the space that can be explored by real walking is limited by the size of the tracked area. To enable unimpeded walking through large virtual spaces in small real-world surroundings, redirection techniques are used. These unnoticeably manipulate the user's virtual walking trajectory. It is important to know how strongly such techniques can be applied without the user noticing the manipulation or getting cybersick. Previously, this was estimated by measuring a detection threshold (DT) in highly-controlled psychophysical studies, which experimentally isolate the effect but do not aim for perceived immersion in the context of VR applications. While these studies suggest that only relatively low degrees of manipulation are tolerable, we claim that, besides establishing detection thresholds, it is important to know when the user's immersion breaks. We hypothesize that the degree of unnoticed manipulation is significantly different from the detection threshold when the user is immersed in a task. We conducted three studies: a) to devise an experimental paradigm to measure the threshold of limited immersion (TLI), b) to measure the TLI for slowly decreasing and increasing rotation gains, and c) to establish a baseline of cybersickness for our experimental setup. For rotation gains greater than 1.0, we found that immersion breaks quite late after the gain is detectable. However, for gains less than 1.0, some users reported a break of immersion even before established detection thresholds were reached. Apparently, the developed metric measures an additional quality of user experience. This article contributes to the development of effective spatial compression methods by utilizing the break of immersion as a benchmark for redirection techniques.
A high-throughput method to measure NaCl and acid taste thresholds in mice.
Ishiwatari, Yutaka; Bachmanov, Alexander A
2009-05-01
To develop a technique suitable for measuring NaCl taste thresholds in genetic studies, we conducted a series of experiments with outbred CD-1 mice using conditioned taste aversion (CTA) and two-bottle preference tests. In Experiment 1, we compared conditioning procedures involving either oral self-administration of LiCl or pairing NaCl intake with LiCl injections and found that thresholds were the lowest after LiCl self-administration. In Experiment 2, we compared different procedures (30-min and 48-h tests) for testing conditioned mice and found that the 48-h test is more sensitive. In Experiment 3, we examined the effects of varying strength of conditioned (NaCl or LiCl taste intensity) and unconditioned (LiCl toxicity) stimuli and concluded that 75-150 mM LiCl or its mixtures with NaCl are the optimal stimuli for conditioning by oral self-administration. In Experiment 4, we examined whether this technique is applicable for measuring taste thresholds for other taste stimuli. Results of these experiments show that conditioning by oral self-administration of LiCl solutions or its mixtures with other taste stimuli followed by 48-h two-bottle tests of concentration series of a conditioned stimulus is an efficient and sensitive method to measure taste thresholds. Thresholds measured with this technique were 2 mM for NaCl and 1 mM for citric acid. This approach is suitable for simultaneous testing of large numbers of animals, which is required for genetic studies. These data demonstrate that mice, like several other species, generalize CTA from LiCl to NaCl, suggesting that they perceive taste of NaCl and LiCl as qualitatively similar, and they also can generalize CTA of a binary mixture of taste stimuli to mixture components.
Covariate selection with iterative principal component analysis for predicting physical
USDA-ARS?s Scientific Manuscript database
Local and regional soil data can be improved by coupling new digital soil mapping techniques with high resolution remote sensing products to quantify both spatial and absolute variation of soil properties. The objective of this research was to advance data-driven digital soil mapping techniques for ...
Power in the Classroom VII: Linking Behavior Alteration Techniques to Cognitive Learning.
ERIC Educational Resources Information Center
Richmond, Virginia P.; And Others
1987-01-01
Argues that Behavior Alteration Techniques (BATs) improve students' on-task compliance which, in turn, is consistently associated with achievement. Indicates a substantial relationship between BAT use and cognitive learning on both absolute and relative measures of achievement. Shows that the teachers perceived by students as "good"…
Auditory steady-state response in cochlear implant patients.
Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo
2018-03-19
Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study is to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation under free-field conditions, and to verify its biological origin. 11 subjects comprised the sample. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. The auditory steady-state response was also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled the electrophysiological thresholds to be obtained for each subject of the explored sample. There were no auditory steady-state responses in either the 0 dB HL or the non-specific stimulus recordings. It was possible to obtain the masking thresholds. A difference was identified between behavioral and electrophysiological thresholds of -6±16, -2±13, 0±22 and -8±18 dB at frequencies of 500, 1000, 2000 and 4000 Hz, respectively. The auditory steady-state response seems to be a suitable technique to evaluate the hearing threshold in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Published by Elsevier España, S.L.U. All rights reserved.
Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis
1996-01-01
The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM+) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM+ instrument characteristics gleaned from analysis of archival Thematic Mapper in-flight data and from ETM+ prelaunch tests allows the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM+ image and calibration data.
Külahci, Fatih; Sen, Zekâi
2009-09-15
The classical solid/liquid distribution coefficient, K(d), for radionuclides in water-sediment systems is dependent on many parameters such as flow, geology, pH, acidity, alkalinity, total hardness, radioactivity concentration, etc. in a region. Consideration of all these effects requires a regional analysis with an effective methodology, which in this paper is based on the cumulative semivariogram concept. Although classical K(d) calculations are point-based and cannot represent a regional pattern, a regional calculation methodology is suggested here through the use of the Absolute Point Cumulative SemiVariogram (APCSV) technique. The application of the methodology is presented for (137)Cs and (90)Sr measurements at a set of points in Keban Dam reservoir, Turkey.
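The point cumulative semivariogram underlying the APCSV approach can be sketched as follows: for a reference site, half squared differences with all other sites are accumulated in order of increasing distance. The coordinates and K(d) values below are invented, and the regional interpretation step is not shown.

```python
import numpy as np

def point_cumulative_semivariogram(coords, values, i):
    """Absolute point cumulative semivariogram for reference site i:
    half squared differences with every other site, accumulated in
    order of increasing distance from site i."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    gamma = 0.5 * (values - values[i]) ** 2
    order = np.argsort(d)[1:]                 # skip the reference site itself
    return d[order], np.cumsum(gamma[order])

# Illustrative measurement sites and (137)Cs distribution coefficients K(d).
rng = np.random.default_rng(5)
coords = rng.uniform(0, 10, size=(15, 2))               # site positions (km)
kd = 200 + 30 * coords[:, 0] + rng.normal(0, 10, 15)    # regional trend plus noise

dist, apcsv = point_cumulative_semivariogram(coords, kd, i=0)
for r, g in zip(dist[:5], apcsv[:5]):
    print(f"distance {r:5.2f} km  cumulative semivariogram {g:8.1f}")
```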
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Jennifer L., E-mail: peterson.jennifer2@mayo.edu; Buskirk, Steven J.; Heckman, Michael G.
2014-04-01
Rectal adverse events (AEs) are a major concern with definitive radiotherapy (RT) treatment for prostate cancer. The anterior rectal wall is at the greatest risk of injury as it lies closest to the target volume and receives the highest dose of RT. This study evaluated the absolute volume of anterior rectal wall receiving a high dose to identify potential ideal dose constraints that can minimize rectal AEs. A total of 111 consecutive patients with Stage T1c to T3a N0 M0 prostate cancer who underwent image-guided intensity-modulated RT at our institution were included. AEs were graded according to the Common Terminology Criteria for Adverse Events, version 4.0. The volume of anterior rectal wall receiving 5 to 80 Gy in 2.5-Gy increments was determined. Multivariable Cox regression models were used to identify cut points in these volumes that led to an increased risk of early and late rectal AEs. Early AEs occurred in most patients (88%); however, relatively few of them (13%) were grade ≥2. At 5 years, the cumulative incidence of late rectal AEs was 37%, with only 5% being grade ≥2. For almost all RT doses, we identified a threshold of irradiated absolute volume of anterior rectal wall above which there was at least a trend toward a significantly higher rate of AEs. Most strikingly, patients with more than 1.29, 0.73, or 0.45 cm³ of anterior rectal wall exposed to radiation doses of 67.5, 70, or 72.5 Gy, respectively, had a significantly increased risk of late AEs (relative risks [RR]: 2.18 to 2.72; p ≤ 0.041) and of grade ≥ 2 early AEs (RR: 6.36 to 6.48; p = 0.004). Our study provides evidence that definitive image-guided intensity-modulated radiotherapy (IG-IMRT) for prostate cancer is well tolerated and also identifies dose thresholds for the absolute volume of anterior rectal wall above which patients are at greater risk of early and late complications.
NASA Astrophysics Data System (ADS)
Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande
2017-11-01
This paper focuses on identifying suitable threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The aim is to determine which values give accurate results in identifying a particular leukemic cell. Evaluating the suitability of the edge detectors is also essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). First, an image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen because diagnostic support for this subtype has been found lacking. Next, noise filters are applied to enhance image quality, so that results with no filter, a median filter and an average filter can be compared. Threshold values of 0, 0.25 and 0.5 are tested for each detector. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
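A hedged sketch of the comparison, using SciPy filters and scikit-image edge detectors as stand-ins for the original implementation; the 0/0.25/0.5 values are applied to normalized gradient magnitudes and to the Canny hysteresis thresholds, and the "cell" image is synthetic.

```python
import numpy as np
from scipy import ndimage
from skimage import feature, filters

def edge_maps(image, thresholds=(0.0, 0.25, 0.5)):
    """Sobel and Canny edge maps of a [0, 1] grey-level image for several
    threshold settings, with no filter, a median filter and an average filter."""
    results = {}
    preprocessed = {
        "none": image,
        "median": ndimage.median_filter(image, size=3),
        "average": ndimage.uniform_filter(image, size=3),
    }
    for name, img in preprocessed.items():
        grad = filters.sobel(img)                     # gradient magnitude
        for t in thresholds:
            results[(name, "sobel", t)] = grad > t
            results[(name, "canny", t)] = feature.canny(
                img, sigma=1.0, low_threshold=0.5 * t, high_threshold=t)
    return results

# Illustrative stand-in for a blood-smear image: a bright cell on a noisy background.
rng = np.random.default_rng(2)
img = rng.normal(0.2, 0.05, (128, 128))
yy, xx = np.mgrid[0:128, 0:128]
img[(yy - 64) ** 2 + (xx - 64) ** 2 < 400] = 0.8      # "cell" of radius 20

maps = edge_maps(np.clip(img, 0, 1))
for key, m in sorted(maps.items()):
    print(key, int(m.sum()), "edge pixels")
```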
NASA Astrophysics Data System (ADS)
Medjoubi, K.; Dawiec, A.
2017-12-01
A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting pixel (HPC) detectors. This approach is based on the Photon Transfer Curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors in flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, quantified by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to identify the settings that give the best image quality from a commercial or an R&D detector.
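A minimal sketch of the flat-field analysis behind a photon transfer curve: spatial mean and standard deviation across flux levels, with the Poisson term removed to expose the fixed-pattern (PRNU) contribution. The simulated gain spread stands in for residual threshold dispersion and flat-fielding errors.

```python
import numpy as np

def photon_transfer_curve(flats):
    """Mean signal and spatial standard deviation for a series of flat-field
    frames (counts per pixel), i.e. the data behind a photon transfer curve."""
    means = np.array([f.mean() for f in flats])
    stds = np.array([f.std() for f in flats])
    return means, stds

def prnu_from_ptc(means, stds):
    """Photon response non-uniformity: the fixed-pattern-noise term left after
    removing the Poisson (photon shot noise) contribution, relative to signal."""
    fpn_var = np.clip(stds ** 2 - means, 0, None)   # sigma^2 = mean + (PRNU*mean)^2
    return np.sqrt(fpn_var[-1]) / means[-1]         # estimate at the highest flux

# Illustrative flat fields: Poisson counts modulated by a 1.5% pixel-to-pixel
# response spread (standing in for threshold dispersion / flat-field residuals).
rng = np.random.default_rng(4)
gain_map = 1.0 + rng.normal(0, 0.015, (256, 256))
flats = [rng.poisson(flux * gain_map).astype(float) for flux in (100, 500, 2000, 10000)]

means, stds = photon_transfer_curve(flats)
print("mean signal:", means.round(1))
print("spatial std:", stds.round(1))
print(f"estimated PRNU: {prnu_from_ptc(means, stds) * 100:.2f} %")
```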
Results of FM-TV threshold reduction investigation for the ATS F trust experiment
NASA Technical Reports Server (NTRS)
Brown, J. P.
1972-01-01
An investigation of threshold effects in FM TV was initiated to determine if any simple, low-cost techniques were available that could reduce the subjective video threshold, applicable to low-cost community TV reception via satellite. Two methods of eliminating these effects were examined: the use of standard video pre-emphasis, and the use of an additional circuit to blank the picture tube during the retrace period.
Threshold of transverse mode coupling instability with arbitrary space charge
Balbekov, V.
2017-11-30
The threshold of the transverse mode coupling instability is calculated in framework of the square well model at arbitrary value of space charge tune shift. A new method of calculation is developed beyond the traditional expansion technique. The square, resistive, and exponential wakes are investigated. It is shown that the instability threshold goes up indefinitely when the tune shift increases. Finally, a comparison with conventional case of the parabolic potential well is performed.
Mielck, F; Bräuer, A; Radke, O; Hanekop, G; Loesch, S; Friedrich, M; Hilgers, R; Sonntag, H
2004-04-01
The transcerebral double-indicator dilution technique is a recently developed method to measure global cerebral blood flow at bedside. It is based on bolus injection of ice-cold indocyanine green dye and simultaneous recording of resulting thermo- and dye-dilution curves in the aorta and the jugular bulb. However, with this method 40 mL of ice-cold solution is administered as a bolus. Therefore, this prospective clinical study was performed to elucidate the effects of repeated administration of indicator on absolute blood temperature and on cerebral blood flow and metabolism. The investigation was performed in nine male patients scheduled for elective coronary artery bypass grafting. Absolute blood temperature was measured in the jugular bulb and in the aorta before and after repeated measurements using the transcerebral double-indicator dilution technique. During the investigated time course, the blood temperature in the jugular bulb, compared to the aorta, was significantly higher with a mean difference of 0.21 degrees C. The administration of an ice-cold bolus reduced the mean blood temperature by 0.06 degrees C in the jugular bulb as well as in the aorta. After the transcerebral double-indicator dilution measurements a temperature recovery to baseline conditions was not observed during the investigated time period. Cerebral blood flow and cerebral metabolism did not change during the investigated time period. Repeated measurements with the transcerebral double-indicator dilution technique do not affect absolute jugular bulb blood temperatures negatively. Global cerebral blood flow and metabolism measurements remain unaltered. However, accuracy and resolution of this technique is not high enough to detect the effect of minor changes of physiological variables.
Interaction thresholds in Er:YAG laser ablation of organic tissue
NASA Astrophysics Data System (ADS)
Lukac, Matjaz; Marincek, Marko; Poberaj, Gorazd; Grad, Ladislav; Mozina, Janez I.; Sustercic, Dusan; Funduk, Nenad; Skaleric, Uros
1996-01-01
Because of their unique properties with regard to the absorption in organic tissue, pulsed Er:YAG lasers are of interest for various applications in medicine, such as dentistry, dermatology, and cosmetic surgery. The relatively low thermal side effects, and surgical precision of erbium medical lasers have been attributed to the micro-explosive nature of their interaction with organic tissue. In this paper, we report on preliminary results of our study of the thresholds for tissue ablation, using an opto-acoustic technique. Two laser energy thresholds for the interaction are observed. The lower energy threshold is attributed to surface water vaporization, and the higher energy threshold to explosive ablation of thin tissue layers.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
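The objective being maximized can be written down directly; in the sketch below Otsu's between-class variance is evaluated on a synthetic three-population histogram, with a plain random search standing in for the flower pollination algorithm (both simply propose candidate threshold vectors and keep the best).

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance for the classes induced by the given thresholds
    on a 256-bin grey-level histogram (the quantity Otsu's method maximizes)."""
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = np.sum(levels * p)
    edges = [0, *sorted(thresholds), 256]
    var_between = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = np.sum(levels[lo:hi] * p[lo:hi]) / w
            var_between += w * (mu - mu_total) ** 2
    return var_between

# Illustrative grey-level histogram of a medical image (three populations).
rng = np.random.default_rng(6)
pixels = np.concatenate([rng.normal(60, 12, 30000),
                         rng.normal(130, 15, 40000),
                         rng.normal(200, 10, 20000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))

# Random search stands in here for the flower pollination algorithm.
best, best_t = -1.0, None
for _ in range(5000):
    cand = sorted(rng.integers(1, 255, size=2))
    val = otsu_objective(hist, cand)
    if val > best:
        best, best_t = val, cand
print("best thresholds:", best_t, "between-class variance:", round(best, 1))
```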
Some Radiation Techniques Used in the GU-3 Gamma Irradiator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dodbiba, Andon; Ylli, Ariana; Stamo, Iliriana
2007-04-23
Different radiation techniques and the measurement of dose and its distribution throughout the irradiated materials are the main problems treated in this paper. The oscillometry method, combined with the ionization chamber as an absolute dosimeter, is used for calibration of routine ECB dosimeters. The dose uniformity for the radiation techniques used in our GU-3 Gamma Irradiator with Cs-137 ranges from 93% to 99%.
Estimator banks: a new tool for direction-of-arrival estimation
NASA Astrophysics Data System (ADS)
Gershman, Alex B.; Boehme, Johann F.
1997-10-01
A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using the so-called estimator bank containing multiple 'parallel' underlying DOA estimators which are based on pseudorandom resampling of the MUSIC spatial spectrum for given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminary estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique which enables the application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, has the threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
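The estimator bank resamples the MUSIC spatial spectrum, so the underlying MUSIC pseudospectrum is the basic building block. The sketch below computes it for a half-wavelength uniform linear array with simulated data; the resampling, beamspace-root and cumulant extensions are not shown.

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, angles_deg):
    """MUSIC spatial pseudospectrum for a uniform linear array with
    half-wavelength spacing.  X is (sensors x snapshots)."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]                 # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)

# Illustrative scenario: 8-element ULA, two sources at -10 and +20 degrees.
rng = np.random.default_rng(8)
m, n_snap = 8, 200
doas = np.deg2rad([-10.0, 20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(doas)))
S = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap)))

grid = np.arange(-90, 90.5, 0.5)
P = music_spectrum(X, n_sources=2, angles_deg=grid)
peaks, _ = find_peaks(P)
top = peaks[np.argsort(P[peaks])[-2:]]
print("estimated DOAs (deg):", sorted(grid[top]))
```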
Spectrophotometric Method for Differentiation of Human Skin Melanoma. II. Diagnostic Characteristics
NASA Astrophysics Data System (ADS)
Petruk, V. G.; Ivanov, A. P.; Kvaternyuk, S. M.; Barunb, V. V.
2016-05-01
Experimental data on the spectral dependences of the optical diffuse reflection coefficient for skin from different people with melanoma or nevus are presented in the form of the probability density of the diffuse reflection coefficient for the corresponding pigmented lesions. We propose a noninvasive technique for differentiating between malignant and benign tumors, based on measuring the diffuse reflection coefficient for a specific patient and comparing the value obtained with a pre-set threshold. If the experimental result is below the threshold, then it is concluded that the person has melanoma; otherwise, no melanoma is present. As an example, we consider the wavelength 870 nm. We determine the risk of malignant transformation of a nevus (its transition to melanoma) for different measured diffuse reflection coefficients. We have studied the errors in the method, its operating characteristics and probability characteristics as the threshold diffuse reflection coefficient is varied. We find that the diagnostic confidence, sensitivity, specificity, and effectiveness (accuracy) parameters are maximum (>0.82) for a threshold of 0.45-0.47. The operating characteristics for the proposed technique exceed the corresponding parameters for other familiar optical approaches to melanoma diagnosis. Its distinguishing feature is operation at only one wavelength, and consequently implementation of the experimental technique is simplified and made less expensive.
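A small sketch of how the diagnostic characteristics respond to the threshold choice, with invented reflectance distributions for the two groups; the decision rule (melanoma if the measured coefficient falls below the threshold) follows the description above.

```python
import numpy as np

def diagnostic_characteristics(r_melanoma, r_nevus, threshold):
    """Classify 'melanoma' when the measured diffuse reflection coefficient
    falls below the threshold; return sensitivity, specificity and accuracy."""
    tp = np.sum(r_melanoma < threshold)     # melanomas correctly flagged
    fn = np.sum(r_melanoma >= threshold)
    tn = np.sum(r_nevus >= threshold)       # nevi correctly cleared
    fp = np.sum(r_nevus < threshold)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Illustrative diffuse reflection coefficients at 870 nm for the two groups.
rng = np.random.default_rng(9)
melanoma = rng.normal(0.40, 0.04, 60)
nevus = rng.normal(0.55, 0.05, 90)

for thr in (0.43, 0.45, 0.47, 0.50):
    s, p, a = diagnostic_characteristics(melanoma, nevus, thr)
    print(f"threshold {thr:.2f}: sensitivity {s:.2f}, specificity {p:.2f}, accuracy {a:.2f}")
```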
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, J., E-mail: radiant@ferrodevices.com; Chapman, S., E-mail: radiant@ferrodevices.com
Piezoresponse Force Microscopy (PFM) is a popular tool for the study of ferroelectric and piezoelectric materials at the nanometer level. Progress in the development of piezoelectric MEMS fabrication is highlighting the need to characterize absolute displacement at the nanometer and Ångstrom scales, something Atomic Force Microscopy (AFM) might do but PFM cannot. Absolute displacement is measured by executing a polarization measurement of the ferroelectric or piezoelectric capacitor in question while monitoring the absolute vertical position of the sample surface with a stationary AFM cantilever. Two issues dominate the execution and precision of such a measurement: (1) the small amplitude of the electrical signal from the AFM at the Ångstrom level and (2) calibration of the AFM. The authors have developed a calibration routine and test technique for mitigating the two issues, making it possible to use an atomic force microscope to measure both the movement of a capacitor surface as well as the motion of a micro-machine structure actuated by that capacitor. The theory, procedures, pitfalls, and results of using an AFM for absolute piezoelectric measurement are provided.
Absolute flux density calibrations of radio sources: 2.3 GHz
NASA Technical Reports Server (NTRS)
Freiley, A. J.; Batelaan, P. D.; Bathker, D. A.
1977-01-01
A detailed description of a NASA/JPL Deep Space Network program to improve S-band gain calibrations of large aperture antennas is reported. The program is considered unique in at least three ways. First, absolute gain calibrations of high-quality suppressed-sidelobe dual-mode horns provide a high-accuracy foundation for the program. Second, a very careful transfer calibration technique using an artificial far-field coherent-wave source was used to accurately obtain the gain of one large (26 m) aperture. Third, using the calibrated large aperture directly, the absolute flux density of five selected galactic and extragalactic natural radio sources was determined with an absolute accuracy better than 2 percent, now quoted at the familiar 1 sigma confidence level. The follow-on considerations to apply these results to an operational network of ground antennas are discussed. It is concluded that absolute gain accuracies within ±0.30 to 0.40 dB are possible, depending primarily on the repeatability (scatter) in the field data from Deep Space Network user stations.
NASA Astrophysics Data System (ADS)
Min'ko, L. Ya; Chumakou, A. N.; Chivel', Yu A.
1988-08-01
Nanosecond kinetic spectroscopy techniques were used to identify the erosion origin of pulsed low-threshold surface optical breakdown of air as a result of interaction of microsecond neodymium and CO2 laser pulses with some metals (indium, lead).
MacNeilage, Paul R.; Turner, Amanda H.
2010-01-01
Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure nasooccipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate, translation detection experiments, subjects were also asked to detect either the presence or the absence of nasooccipital linear motion in blocks, in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s2). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute nasooccipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s2). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation. PMID:20554843
Magnesium Sulfate Only Slightly Reduces the Shivering Threshold in Humans
Wadhwa, Anupama; Sengupta, Papiya; Durrani, Jaleel; Akça, Ozan; Lenhardt, Rainer; Sessler, Daniel I.
2005-01-01
Background: Hypothermia may be an effective treatment for stroke or acute myocardial infarction; however, it provokes vigorous shivering, which causes potentially dangerous hemodynamic responses and prevents further hypothermia. Magnesium is an attractive antishivering agent because it is used for treatment of postoperative shivering and provides protection against ischemic injury in animal models. We tested the hypothesis that magnesium reduces the threshold (triggering core temperature) and gain of shivering without substantial sedation or muscle weakness. Methods: We studied nine healthy male volunteers (18-40 yr) on two randomly assigned treatment days: 1) Control and 2) Magnesium (80 mg·kg⁻¹ followed by infusion at 2 g·h⁻¹). Lactated Ringer's solution (4°C) was infused via a central venous catheter over a period of approximately 2 hours to decrease tympanic membrane temperature ≈1.5°C·h⁻¹. A significant and persistent increase in oxygen consumption identified the threshold. The gain of shivering was determined by the slope of oxygen consumption vs. core temperature regression. Sedation was evaluated using verbal rating score (VRS, 0-10) and bispectral index of the EEG (BIS). Peripheral muscle strength was evaluated using dynamometry and spirometry. Data were analyzed using repeated-measures ANOVA; P<0.05 was statistically significant. Results: Magnesium reduced the shivering threshold (36.3±0.4 [mean±SD] vs. 36.6±0.3°C, P=0.040). It did not affect the gain of shivering (Control: 437±289, Magnesium: 573±370 ml·min⁻¹·°C⁻¹, P=0.344). The magnesium bolus did not produce significant sedation or appreciably reduce muscle strength. Conclusions: Magnesium significantly reduced the shivering threshold; however, due to the modest absolute reduction, this finding is considered to be clinically unimportant for induction of therapeutic hypothermia. PMID:15749735
Dipole strength distributions from HIGS Experiments
NASA Astrophysics Data System (ADS)
Werner, V.; Cooper, N.; Goddard, P. M.; Humby, P.; Ilieva, R. S.; Rusev, G.; Beller, J.; Bernards, C.; Crider, B. P.; Isaak, J.; Kelley, J. H.; Kwan, E.; Löher, B.; Peters, E. E.; Pietralla, N.; Romig, C.; Savran, D.; Scheck, M.; Tonchev, A. P.; Tornow, W.; Yates, S. W.; Zweidinger, M.
2015-05-01
A series of photon scattering experiments has been performed on the double-beta decay partners 76Ge and 76Se, in order to investigate their dipole response up to the neutron separation threshold. Gamma-ray beams from bremsstrahlung at the S-DALINAC and from Compton backscattering at HIGS have been used to measure absolute cross sections and parities of dipole-excited states, respectively. The HIGS data allow for an indirect measurement of averaged branching ratios, which leads to significant corrections in the observed excitation cross sections. Results are compared to statistical calculations to test photon strength functions and the Axel-Brink hypothesis.
NASA Technical Reports Server (NTRS)
Mumma, M. J.; Borst, W. L.; Zipf, E. C.
1972-01-01
Vacuum ultraviolet multiplets of C I, C II, and O I were produced by electron impact on CO2. Absolute emission cross sections for these multiplets were measured from threshold to 350 eV. The electrostatically focussed electron gun used in this series of experiments is described in detail. The atomic multiplets produced by dissociative excitation of CO2 and their cross sections at 100 eV are given. The dependence of the excitation functions on electron energy shows that these multiplets are produced by electric-dipole-allowed transitions in CO2.
Individualizing osteoporosis medications.
Silverman, Stuart
2014-03-01
Mrs. JD is a 58-year-old postmenopausal woman with symptoms of hot flashes and night sweats. She has a dual-energy x-ray absorptiometry T-score of -2.4 in the femoral neck, consistent with low bone mass or osteopenia. She has a parental history of hip fracture. FRAX gives a 10-year absolute risk of major osteoporotic fracture of 18% and a 10-year risk of hip fracture of 3.2%, above the 3% National Osteoporosis Foundation treatment threshold. Mrs. JD is taking 1,200 mg of calcium daily, between supplement and diet, and 1,000 IU of vitamin D3 daily. How should she be treated?
Design and Fabrication of Cherenkov Counters for the Detection of SNM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Anna S.; Lanza, Richard; Galaitsis, Anthony
2011-12-13
The need for large-size detectors for long-range active interrogation (AI) detection of SNM has generated interest in water-based detector technologies. Water Cherenkov Detectors (WCD) were selected for this research because of their transportability, scalability, and an inherent energy threshold. The detector design and analysis were completed using the Geant4 toolkit. It was demonstrated both computationally and experimentally that it is possible to use WCD to detect and characterize gamma rays. Absolute efficiency of the detector (with no energy cuts applied) was determined to be around 30% for a ⁶⁰Co source.
Total photoionization cross sections of atomic oxygen from threshold to 44.3 Å
NASA Technical Reports Server (NTRS)
Angel, G. C.; Samson, James A. R.
1987-01-01
The relative cross section of atomic oxygen for the production of singly charged ions has been remeasured in more detail and extended to cover the wavelength range 44.3 to 910.5 Å by the use of synchrotron radiation. In addition, the contribution of multiple ionization to the cross sections has been measured, allowing total photoionization cross sections to be obtained below 250 Å. The results have been made absolute by normalization to previously measured data. The use of synchrotron radiation has enabled measurements of the continuum cross section to be made between the numerous autoionizing resonances that occur near the ionization thresholds. This in turn has allowed a more critical comparison of the various theoretical estimates of the cross section to be made. The series of autoionizing resonances leading to the ⁴P state of the oxygen ion has been observed for the first time in an ionization-type experiment, and their positions compared with both theory and previous photographic recordings.
Absolute instability of polaron mode in semiconductor magnetoplasma
NASA Astrophysics Data System (ADS)
Paliwal, Ayushi; Dubey, Swati; Ghosh, S.
2018-01-01
Using coupled mode theory in the hydrodynamic regime, a compact dispersion relation is derived for the polaron mode in a semiconductor magnetoplasma. The propagation and amplification characteristics of the wave are explored in detail. The analysis deals with the behaviour of the anomalous threshold and the amplification derived from the dispersion relation, as functions of external parameters such as doping concentration and applied magnetic field. The results of this investigation are expected to be useful in understanding the electron-longitudinal optical phonon interplay in polar n-type semiconductor plasmas under the influence of coupled collective cyclotron excitations. The best results, in terms of a smaller threshold and higher gain of the polaron mode, are achieved by choosing a moderate doping concentration in the medium at a higher magnetic field. For numerical appreciation of the results, relevant data for the III-V compound semiconductor n-GaAs at 77 K are used. The present study provides a qualitative picture of the polaron mode in a magnetized n-type polar semiconductor medium illuminated by a CO2 laser.
Transverse Pupil Shifts for Adaptive Optics Non-Common Path Calibration
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.
2011-01-01
A simple new way of obtaining absolute wavefront measurements with a laboratory Fizeau interferometer was recently devised. In that case, the observed wavefront map is the difference of two cavity surfaces, those of the mirror under test and of an unknown reference surface on the Fizeau's transmission flat. The absolute surface of each can be determined by applying standard wavefront reconstruction techniques to two grids of absolute surface height differences of the mirror under test, obtained from pairs of measurements made with slight transverse shifts in X and Y. Adaptive optics systems typically provide an actuated periscope between the wavefront sensor (WFS) and the common-mode optics, used for lateral registration of the deformable mirror (DM) to the WFS. This periscope permits independent adjustment of either the pupil or the focal spot incident on the WFS. It would be used to give the required lateral pupil motion between common and non-common segments, analogous to the lateral shifts of the two phase contributions in the lab Fizeau. The technique is based on a completely new approach to calibration of phase. It offers unusual flexibility with regard to the transverse spatial frequency scales probed, and will give results quite quickly, making use of no auxiliary equipment other than that built into the adaptive optics system. The new technique may be applied to provide novel calibration information about other optical systems in which the beam may be shifted transversely in a controlled way.
Sellers, Michael S; Lísal, Martin; Brennan, John K
2016-03-21
We present an extension of various free-energy methodologies to determine the chemical potential of the solid and liquid phases of a fully-flexible molecule using classical simulation. The methods are applied to the Smith-Bharadwaj atomistic potential representation of cyclotrimethylene trinitramine (RDX), a well-studied energetic material, to accurately determine the solid and liquid phase Gibbs free energies, and the melting point (Tm). We outline an efficient technique to find the absolute chemical potential and melting point of a fully-flexible molecule using one set of simulations to compute the solid absolute chemical potential and one set of simulations to compute the solid-liquid free energy difference. With this combination, only a handful of simulations are needed, whereby the absolute quantities of the chemical potentials are obtained, for use in other property calculations, such as the characterization of crystal polymorphs or the determination of the entropy. Using the LAMMPS molecular simulator, the Frenkel and Ladd and pseudo-supercritical path techniques are adapted to generate 3rd order fits of the solid and liquid chemical potentials. Results yield the thermodynamic melting point Tm = 488.75 K at 1.0 atm. We also validate these calculations and compare this melting point to one obtained from a typical superheated simulation technique.
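In this approach the melting point is the temperature at which the fitted solid and liquid chemical potentials cross. The following is a minimal sketch of that final step, assuming third-order polynomial fits as described above; the temperature and chemical-potential values are illustrative, not the paper's data.

```python
import numpy as np

# Illustrative (made-up) chemical-potential samples, kJ/mol, vs temperature (K).
T = np.array([440.0, 460.0, 480.0, 500.0, 520.0])
mu_solid = np.array([-105.2, -106.9, -108.7, -110.6, -112.6])
mu_liquid = np.array([-103.0, -105.1, -107.4, -109.9, -113.1])

# Third-order polynomial fits of each branch, as in the abstract.
p_s = np.polyfit(T, mu_solid, 3)
p_l = np.polyfit(T, mu_liquid, 3)

# The melting point is where the two chemical potentials intersect: find the
# real root of (mu_solid - mu_liquid)(T) inside the sampled temperature range.
roots = np.roots(p_s - p_l)
Tm = [r.real for r in roots if abs(r.imag) < 1e-8 and T.min() <= r.real <= T.max()]
print("estimated Tm (K):", Tm)
```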
Bashir, Adil; Gropler, Robert; Ackerman, Joseph
2015-01-01
Purpose Absolute concentrations of high-energy phosphorus (31P) metabolites in liver provide more important insight into physiologic status of liver disease compared to resonance integral ratios. A simple method for measuring absolute concentrations of 31P metabolites in human liver is described. The approach uses surface spoiling inhomogeneous magnetic field gradient to select signal from liver tissue. The technique avoids issues caused by respiratory motion, chemical shift dispersion associated with linear magnetic field gradients, and increased tissue heat deposition due to radiofrequency absorption, especially at high field strength. Methods A method to localize signal from liver was demonstrated using superficial and highly non-uniform magnetic field gradients, which eliminate signal(s) from surface tissue(s) located between the liver and RF coil. A double standard method was implemented to determine absolute 31P metabolite concentrations in vivo. 8 healthy individuals were examined in a 3 T MR scanner. Results Concentrations of metabolites measured in eight healthy individuals are: γ-adenosine triphosphate (ATP) = 2.44 ± 0.21 (mean ± sd) mmol/l of wet tissue volume, α-ATP = 3.2 ± 0.63 mmol/l, β-ATP = 2.98 ± 0.45 mmol/l, inorganic phosphates (Pi) = 1.87 ± 0.25 mmol/l, phosphodiesters (PDE) = 10.62 ± 2.20 mmol/l and phosphomonoesters (PME) = 2.12 ± 0.51 mmol/l. All are in good agreement with literature values. Conclusions The technique offers robust and fast means to localize signal from liver tissue, allows absolute metabolite concentration determination, and avoids problems associated with constant field gradient (linear field variation) localization methods. PMID:26633549
Arraycount, an algorithm for automatic cell counting in microwell arrays.
Kachouie, Nezamoddin; Kang, Lifeng; Khademhosseini, Ali
2009-09-01
Microscale technologies have emerged as a powerful tool for studying and manipulating biological systems and miniaturizing experiments. However, the lack of software complementing these techniques has made it difficult to apply them to many high-throughput experiments. This work establishes Arraycount, an approach to automatically count cells in microwell arrays. The procedure consists of fluorescence microscope imaging of cells seeded in the microwells of a microarray system, followed by computer analysis of the images to recognize the array and count the cells inside each microwell. To start counting, green and red fluorescent images (representing live and dead cells, respectively) are extracted from the original image and processed separately. A template-matching algorithm is proposed in which pre-defined well and cell templates are matched against the red and green images to locate microwells and cells. Subsequently, local maxima in the correlation maps are determined and the local maxima maps are thresholded. Finally, the software records the cell counts for each detected microwell on the original image in a high-throughput manner. The automated counting was shown to be accurate compared with manual counting, with a difference of approximately 1-2 cells per microwell: based on cell concentration, the absolute difference between manual and automatic counting measurements was 2.5-13%.
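The counting step described above (template matching, thresholding the correlation map, and locating local maxima) can be sketched as follows. This is a hedged illustration, not the Arraycount implementation; the function name, the correlation threshold, and the use of scipy are assumptions.

```python
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import maximum_filter

def count_cells(channel, template, corr_threshold=0.5):
    """Count bright cells in one fluorescence channel by template matching.

    channel        -- 2-D float array (e.g. the green image for live cells)
    template       -- small 2-D float array approximating one cell's appearance
    corr_threshold -- fraction of the peak correlation kept as a detection
    """
    # Zero-mean both signals so the correlation behaves like a matched filter.
    t = template - template.mean()
    c = channel - channel.mean()
    corr = correlate2d(c, t, mode="same")

    # A pixel is a detection if it is a local maximum of the correlation map
    # and exceeds the chosen threshold.
    local_max = (corr == maximum_filter(corr, size=template.shape))
    detections = local_max & (corr > corr_threshold * corr.max())
    return int(detections.sum())
```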
A technique for correcting ERTS data for solar and atmospheric effects
NASA Technical Reports Server (NTRS)
Rogers, R. H.; Peacock, K.
1973-01-01
A technique is described by which an ERTS investigator can obtain absolute target reflectances by correcting spacecraft radiance measurements for variable target irradiance, atmospheric attenuation, and atmospheric backscatter. A simple measuring instrument and the necessary atmospheric measurements are discussed, and examples demonstrate the nature and magnitude of the atmospheric corrections.
A review of the different techniques for solid surface acid-base characterization.
Sun, Chenhang; Berg, John C
2003-09-18
In this work, various techniques for solid surface acid-base (AB) characterization are reviewed. Different techniques employ different scales to rank acid-base properties. Based on results from the literature and the authors' own investigations of mineral oxides, these scales are compared. The comparison shows that the isoelectric point (IEP), the most commonly used AB scale, is not a description of the absolute basicity or acidity of a surface, but a description of their relative strength. That is, a high-IEP surface shows more basic functionality compared with its acidic functionality, whereas a low-IEP surface shows less basic functionality compared with its acidic functionality. The choice of technique and scale for AB characterization depends on the specific application. For cases in which the overall AB property is of interest, the IEP (by electrokinetic titration) and H(0,max) (by indicator dye adsorption) are appropriate. For cases in which the absolute AB property is of interest, such as in the study of adhesion, it is more pertinent to use the chemical shift (by XPS) and the heat of adsorption of probe gases (by calorimetry or IGC).
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results than the multi-layer perceptron and the Holt-Winters exponential smoothing method.
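A minimal sketch of the evaluation pipeline follows: the four error criteria named above and an SVR fit with scikit-learn. The synthetic predictors stand in for the lagged oil-price and exchange-rate series; none of the data, hyperparameters, or the RBF kernel choice are taken from the paper.

```python
import numpy as np
from sklearn.svm import SVR

def forecast_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (%), and directional accuracy (%)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    # Directional accuracy: how often the predicted month-on-month change
    # has the same sign as the observed change.
    da = 100.0 * np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))
    return rmse, mae, mape, da

# Hypothetical predictors standing in for lagged vegetable-oil prices,
# the crude-oil price, and the exchange rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 100.0 + X @ rng.normal(size=6) + rng.normal(scale=0.1, size=200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:150], y[:150])
print(forecast_metrics(y[150:], model.predict(X[150:])))
```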
Denoising in digital speckle pattern interferometry using wave atoms.
Federico, Alejandro; Kaufmann, Guillermo H
2007-05-15
We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.
OCT angiography by absolute intensity difference applied to normal and diseased human retinas
Ruminski, Daniel; Sikorski, Bartosz L.; Bukowska, Danuta; Szkulmowski, Maciej; Krawiec, Krzysztof; Malukiewicz, Grazyna; Bieganowski, Lech; Wojtkowski, Maciej
2015-01-01
We compare four optical coherence tomography techniques for noninvasive visualization of the microcapillary network in the human retina and murine cortex. We perform phantom studies to investigate the contrast-to-noise ratio of angiographic images obtained with each of the algorithms. We show that the computationally simplest absolute intensity difference angiographic OCT algorithm, which is based only on two cross-sectional intensity images, may be successfully used in clinical studies of healthy eyes and eyes with diabetic maculopathy and branch retinal vein occlusion. PMID:26309740
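The absolute intensity difference algorithm reduces to subtracting two repeated cross-sectional intensity images, so a sketch is very short. The array shapes, the toy data, and the function name below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def angiogram_abs_diff(bscan_1, bscan_2):
    """Angiographic contrast from two repeated cross-sectional intensity images.

    Static tissue gives nearly identical intensities in both repeats, so the
    absolute difference is close to zero; flowing blood decorrelates the
    speckle and leaves a large residual that highlights the vessels.
    """
    return np.abs(np.asarray(bscan_1, float) - np.asarray(bscan_2, float))

# Toy example: a static background with one "vessel" row that fluctuates.
rng = np.random.default_rng(1)
base = rng.random((64, 256))
scan1, scan2 = base.copy(), base.copy()
scan2[32, :] = rng.random(256)          # decorrelated speckle along the vessel
angio = angiogram_abs_diff(scan1, scan2)
print("vessel row mean:", angio[32].mean(), "background mean:", angio[:31].mean())
```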
NASA Astrophysics Data System (ADS)
Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar
2013-07-01
The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
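The abstract does not spell out exactly which quantity the MAD is applied to, so the sketch below shows one plausible reading: the mean absolute deviation of the S-box outputs and of the input-to-output displacement, computed here for a small 4-bit example permutation rather than the 8-bit boxes analyzed in the paper.

```python
import numpy as np

def mean_absolute_deviation(values):
    """MAD of a sample: mean absolute deviation from the sample mean."""
    v = np.asarray(values, dtype=float)
    return np.mean(np.abs(v - v.mean()))

# Toy 4-bit substitution box (the PRESENT cipher S-box is used here purely as
# an example permutation; any 16-entry bijection would do for the illustration).
sbox = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

inputs = np.arange(16)
outputs = np.array([sbox[x] for x in inputs])

# One plausible MAD-style score: how far the substitution moves each input.
print("MAD of outputs        :", mean_absolute_deviation(outputs))
print("MAD of (output-input) :", mean_absolute_deviation(outputs - inputs))
```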
Lin, Kuang-Wei; Hall, Timothy L; Xu, Zhen; Cain, Charles A
2015-08-01
When histotripsy pulses shorter than 2 cycles are applied, the formation of a dense bubble cloud relies only on the applied peak negative pressure (p-) exceeding the "intrinsic threshold" of the medium (absolute value of 26-30 MPa in most soft tissues). It has been found that a sub-threshold high-frequency probe pulse (3 MHz) can be enabled by a sub-threshold low-frequency pump pulse (500 kHz) where the sum exceeds the intrinsic threshold, thus generating lesion-producing dense bubble clouds ("dual-beam histotripsy"). Here, the feasibility of using an imaging transducer to provide the high-frequency probe pulse in the dual-beam histotripsy approach is investigated. More specifically, an ATL L7-4 imaging transducer (Philips Healthcare, Andover, MA, USA), pulsed by a V-1 Data Acquisition System (Verasonics, Redmond, WA, USA), was used to generate the high-frequency probe pulses. The low-frequency pump pulses were generated by a 20-element 345-kHz array transducer, driven by a custom high-voltage pulser. These dual-beam histotripsy pulses were applied to red blood cell tissue-mimicking phantoms at a pulse repetition frequency of 1 Hz, and optical imaging was used to visualize bubble clouds and lesions generated in the red blood cell phantoms. The results indicated that dense bubble clouds (and resulting lesions) were generated when the p- of the sub-threshold pump and probe pulses combined constructively to exceed the intrinsic threshold. The average size of the smallest reproducible lesions using the imaging probe pulse enabled by the sub-threshold pump pulse was 0.7 × 1.7 mm, whereas that using the supra-threshold pump pulse alone was 1.4 × 3.7 mm. When the imaging transducer was steered laterally, bubble clouds and lesions were steered correspondingly until the combined p- no longer exceeded the intrinsic threshold. These results were also validated with ex vivo porcine liver experiments. Using an imaging transducer for dual-beam histotripsy can have two advantages: (i) lesion steering can be achieved using the steering of the imaging transducer (implemented with the beamformer of the accompanying programmable ultrasound system), and (ii) treatment can be simultaneously monitored when the imaging transducer is used in conjunction with an ultrasound imaging system. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Holzmeier, Fabian; Fischer, Ingo; Kiendl, Benjamin; Krueger, Anke; Bodi, Andras; Hemberger, Patrick
2016-04-07
We report the determination of the absolute photoionization cross section of cyclopropenylidene, c-C3H2, and the heat of formation of the C3H radical and ion derived by the dissociative ionization of the carbene. Vacuum ultraviolet (VUV) synchrotron radiation as provided by the Swiss Light Source and imaging photoelectron photoion coincidence (iPEPICO) were employed. Cyclopropenylidene was generated by pyrolysis of a quadricyclane precursor in a 1 : 1 ratio with benzene, which enabled us to derive the carbene's near threshold absolute photoionization cross section from the photoionization yield of the two pyrolysis products and the known cross section of benzene. The cross section at 9.5 eV, for example, was determined to be 4.5 ± 1.4 Mb. Upon dissociative ionization the carbene decomposes by hydrogen atom loss to the linear isomer of C3H(+). The appearance energy for this process was determined to be AE(0K)(c-C3H2; l-C3H(+)) = 13.67 ± 0.10 eV. The heat of formation of neutral and cationic C3H was derived from this value via a thermochemical cycle as Δ(f)H(0K)(C3H) = 725 ± 25 kJ mol(-1) and Δ(f)H(0K)(C3H(+)) = 1604 ± 19 kJ mol(-1), using a previously reported ionization energy of C3H.
Intracortical myelination in musicians with absolute pitch: Quantitative morphometry using 7-T MRI.
Kim, Seung-Goo; Knösche, Thomas R
2016-10-01
Absolute pitch (AP) is known as the ability to recognize and label the pitch chroma of a given tone without external reference. Known brain structures and functions related to AP are mainly of macroscopic aspects. To shed light on the underlying neural mechanism of AP, we investigated the intracortical myeloarchitecture in musicians with and without AP using the quantitative mapping of the longitudinal relaxation rates with ultra-high-field magnetic resonance imaging at 7 T. We found greater intracortical myelination for AP musicians in the anterior region of the supratemporal plane, particularly the medial region of the right planum polare (PP). In the same region of the right PP, we also found a positive correlation with a behavioral index of AP performance. In addition, we found a positive correlation with a frequency discrimination threshold in the anterolateral Heschl's gyrus in the right hemisphere, demonstrating distinctive neural processes of absolute recognition and relative discrimination of pitch. Regarding possible effects of local myelination in the cortex and the known importance of the anterior superior temporal gyrus/sulcus for the identification of auditory objects, we argue that pitch chroma may be processed as an identifiable object property in AP musicians. Hum Brain Mapp 37:3486-3501, 2016. © 2016 Wiley Periodicals, Inc. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Remote ultrasound palpation for robotic interventions using absolute elastography.
Schneider, Caitlin; Baghani, Ali; Rohling, Robert; Salcudean, Septimiu
2012-01-01
Although robotic surgery has addressed many of the challenges presented by minimally invasive surgery, haptic feedback and the lack of knowledge of tissue stiffness is an unsolved problem. This paper presents a system for finding the absolute elastic properties of tissue using a freehand ultrasound scanning technique, which utilizes the da Vinci Surgical robot and a custom 2D ultrasound transducer for intraoperative use. An external exciter creates shear waves in the tissue, and a local frequency estimation method computes the shear modulus. Results are reported for both phantom and in vivo models. This system can be extended to any 6 degree-of-freedom tracking method and any 2D transducer to provide real-time absolute elastic properties of tissue.
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied in the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs and was then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
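The threshold logic described above can be sketched as follows; the entropy estimator, the numeric thresholds, and the choice to adjust the inertia weight are hypothetical stand-ins, since the abstract does not specify the concrete convergence and divergence strategies.

```python
import numpy as np

def population_entropy(positions, bins=10):
    """Shannon entropy of the swarm's positions, pooled over all dimensions.

    A tight (converged) swarm concentrates in few bins and has low entropy;
    a widely spread swarm has high entropy.
    """
    hist, _ = np.histogram(positions.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def adjust_inertia(w, entropy, h_max=2.0, h_min=0.5):
    """Threshold logic from the abstract, with hypothetical numeric choices:
    high entropy -> convergence strategy (shrink inertia),
    low entropy  -> divergence strategy (grow inertia),
    otherwise    -> keep the self-adaptive value unchanged."""
    if entropy > h_max:
        return 0.9 * w      # encourage convergence
    if entropy < h_min:
        return 1.1 * w      # encourage divergence / exploration
    return w

# Minimal usage inside one PSO iteration (positions: n_particles x n_dims).
rng = np.random.default_rng(2)
positions = rng.normal(size=(30, 5))
w = adjust_inertia(0.7, population_entropy(positions))
print("adjusted inertia weight:", w)
```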
Rosenberg, M J; Solodov, A A; Myatt, J F; Seka, W; Michel, P; Hohenberger, M; Short, R W; Epstein, R; Regan, S P; Campbell, E M; Chapman, T; Goyon, C; Ralph, J E; Barrios, M A; Moody, J D; Bates, J W
2018-02-02
Planar laser-plasma interaction (LPI) experiments at the National Ignition Facility (NIF) have allowed access for the first time to regimes of electron density scale length (∼500 to 700 μm), electron temperature (∼3 to 5 keV), and laser intensity (6 to 16×10^{14} W/cm^{2}) that are relevant to direct-drive inertial confinement fusion ignition. Unlike in shorter-scale-length plasmas on OMEGA, scattered-light data on the NIF show that the near-quarter-critical LPI physics is dominated by stimulated Raman scattering (SRS) rather than by two-plasmon decay (TPD). This difference in regime is explained based on absolute SRS and TPD threshold considerations. SRS sidescatter tangential to density contours and other SRS mechanisms are observed. The fraction of laser energy converted to hot electrons is ∼0.7% to 2.9%, consistent with observed levels of SRS. The intensity threshold for hot-electron production is assessed, and the use of a Si ablator slightly increases this threshold from ∼4×10^{14} to ∼6×10^{14} W/cm^{2}. These results have significant implications for mitigation of LPI hot-electron preheat in direct-drive ignition designs.
Dafni, Urania; Karlis, Dimitris; Pedeli, Xanthi; Bogaerts, Jan; Pentheroudakis, George; Tabernero, Josep; Zielinski, Christoph C; Piccart, Martine J; de Vries, Elisabeth G E; Latino, Nicola Jane; Douillard, Jean-Yves; Cherny, Nathan I
2017-01-01
Background The European Society for Medical Oncology (ESMO) has developed the ESMO Magnitude of Clinical Benefit Scale (ESMO-MCBS), a tool to assess the magnitude of clinical benefit from new cancer therapies. Grading is guided by a dual rule comparing the relative benefit (RB) and the absolute benefit (AB) achieved by the therapy to prespecified threshold values. The ESMO-MCBS v1.0 dual rule evaluates the RB of an experimental treatment based on the lower limit of the 95%CI (LL95%CI) for the hazard ratio (HR) along with an AB threshold. This dual rule addresses two goals: inclusiveness: not unfairly penalising experimental treatments from trials designed with adequate power targeting clinically meaningful relative benefit; and discernment: penalising trials designed to detect a small inconsequential benefit. Methods Based on 50 000 simulations of plausible trial scenarios, the sensitivity and specificity of the LL95%CI rule and the ESMO-MCBS dual rule, the robustness of their characteristics for reasonable power and range of targeted and true HRs, are examined. The per cent acceptance of maximal preliminary grade is compared with other dual rules based on point estimate (PE) thresholds for RB. Results For particularly small or particularly large studies, the observed benefit needs to be relatively big for the ESMO-MCBS dual rule to be satisfied and the maximal grade awarded. Compared with approaches that evaluate RB using the PE thresholds, simulations demonstrate that the MCBS approach better exhibits the desired behaviour achieving the goals of both inclusiveness and discernment. Conclusions RB assessment using the LL95%CI for HR rather than a PE threshold has two advantages: it diminishes the probability of excluding big benefit positive studies from achieving due credit and, when combined with the AB assessment, it increases the probability of downgrading a trial with a statistically significant but clinically insignificant observed benefit. PMID:29067214
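A schematic rendering of the dual rule is shown below: a relative-benefit test on the lower limit of the 95% CI of the hazard ratio combined with an absolute-benefit test. The numeric thresholds are placeholders for illustration only; the published ESMO-MCBS v1.0 thresholds depend on the clinical setting and endpoint.

```python
def dual_rule_pass(hr_ll95, abs_gain_months,
                   rb_threshold=0.65, ab_threshold=3.0):
    """Illustrative dual rule: both the relative and the absolute benefit tests
    must pass for the maximal preliminary grade to be considered.

    hr_ll95         -- lower limit of the 95% CI of the hazard ratio
    abs_gain_months -- observed absolute gain (e.g. in median survival, months)
    rb_threshold    -- relative-benefit threshold applied to hr_ll95
    ab_threshold    -- absolute-benefit threshold (same unit as abs_gain_months)

    The numeric thresholds here are placeholders, not the published
    ESMO-MCBS v1.0 values, which depend on the clinical setting.
    """
    relative_ok = hr_ll95 <= rb_threshold
    absolute_ok = abs_gain_months >= ab_threshold
    return relative_ok and absolute_ok

print(dual_rule_pass(hr_ll95=0.60, abs_gain_months=4.1))   # passes both tests
print(dual_rule_pass(hr_ll95=0.72, abs_gain_months=4.1))   # fails the RB test
```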
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small, local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods for determining the extreme precipitation threshold (EPT), and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the choice of percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, providing a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further indicates that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
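As an illustration of the simplest of the methods compared above, the sketch below computes a non-parametric percentile threshold from wet-day amounts at a single station; the percentile, wet-day cutoff, and synthetic record are assumptions, and the sensitivity to record length noted in the abstract applies directly to this choice.

```python
import numpy as np

def percentile_threshold(daily_precip, percentile=95.0, wet_day_min=1.0):
    """Non-parametric extreme-precipitation threshold for one station.

    Only wet days (>= wet_day_min mm) enter the percentile, so the threshold
    reflects the local rainfall distribution rather than the number of dry days.
    """
    p = np.asarray(daily_precip, dtype=float)
    wet = p[p >= wet_day_min]
    return float(np.percentile(wet, percentile))

# Toy 30-year record: gamma-distributed wet-day amounts plus many dry days.
rng = np.random.default_rng(3)
record = np.where(rng.random(10950) < 0.4, rng.gamma(0.8, 12.0, 10950), 0.0)
print("95th-percentile EPT (mm):", round(percentile_threshold(record), 1))
```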
Cross-validation analysis for genetic evaluation models for ranking in endurance horses.
García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I
2018-01-01
The ranking trait was used as a selection criterion for competition horses to estimate racing performance. In the literature, the most common approaches to estimating breeding values are linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach was able to fit the race effect (the competitive level of the horses that participate in the same race), suggesting better prediction accuracy of breeding values for the ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for the genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database used contained 4065 ranking records from 966 horses, and the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for the threshold, 0.58 for the linear and 0.60 for the Thurstonian approach. Although no significant differences were found between models within approaches, the best genetic model included the rider and rider-horse random effects for the threshold approach, only the rider and environmental permanent effects for the linear approach, and all random effects for the Thurstonian approach. The absolute correlations of predicted breeding values among models were highest between the threshold and Thurstonian approaches: 0.90, 0.91 and 0.88 for all animals, the top 20% and the top 5% of animals, respectively. For rank correlations these figures were 0.85, 0.84 and 0.86. The lowest values were those between the linear and threshold approaches (0.65, 0.62 and 0.51). In conclusion, the Thurstonian approach is recommended for routine genetic evaluations of ranking in endurance horses.
NASA Astrophysics Data System (ADS)
Yang, L.; Wang, G.; Liu, H.
2017-12-01
Rising sea level has important direct impacts on coastal and island regions such as the Caribbean, where the influence of sea-level rise is becoming more apparent. The Caribbean Sea is a semi-enclosed sea adjacent to the landmasses of South and Central America to the south and west, and the Greater Antilles and the Lesser Antilles separate it from the Atlantic Ocean to the north and east. This work focuses on the relative and absolute sea-level changes by integrating tide gauge, GPS, and satellite altimetry datasets (1955-2016) within the Caribbean Sea. Further, the two main components of absolute sea-level change, ocean mass and steric sea-level changes, are studied using GRACE, temperature, and salinity datasets (1955-2016), respectively. According to the analysis conducted, the sea-level change rates have considerable temporal and spatial variations, and estimates may be subject to the techniques used and the observation periods. The average absolute sea-level rise rate is 1.8±0.3 mm/year for the period from 1955 to 2015 according to the integrated tide gauge and GPS observations; the average absolute sea-level rise rate is 3.5±0.6 mm/year for the period from 1993 to 2016 according to the satellite altimetry observations. This study shows that the absolute sea-level change budget in the Caribbean Sea is closed in the period from 1955 to 2016, in which ocean mass change dominates the absolute sea-level rise. The absolute sea-level change budget is also closed in the period from 2004 to 2016, in which steric sea-level rise dominates the absolute sea-level rise.
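The bookkeeping behind these statements is simple arithmetic: the absolute (geocentric) rate is the tide-gauge (relative) rate corrected for GPS-measured vertical land motion, and the budget is closed when the mass plus steric contributions match it. A small sketch with hypothetical station numbers:

```python
def absolute_from_tide_gauge(relative_rate_mm_yr, gps_uplift_mm_yr):
    """Absolute sea-level rate = relative (tide-gauge) rate + vertical land motion.

    A subsiding site (negative uplift) inflates the relative rate, so adding the
    (negative) GPS vertical rate recovers the geocentric (absolute) rate.
    """
    return relative_rate_mm_yr + gps_uplift_mm_yr

def budget_residual(absolute_rate, mass_rate, steric_rate):
    """Residual of the sea-level budget; near zero means the budget is closed."""
    return absolute_rate - (mass_rate + steric_rate)

# Hypothetical station: 2.6 mm/yr relative rise on land subsiding at 0.8 mm/yr.
abs_rate = absolute_from_tide_gauge(2.6, -0.8)
print("absolute rate:", abs_rate, "mm/yr")
print("budget residual:", budget_residual(abs_rate, mass_rate=1.1, steric_rate=0.6), "mm/yr")
```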
Thermal properties measurements in biodiesel oils using photothermal techniques
NASA Astrophysics Data System (ADS)
Castro, M. P. P.; Andrade, A. A.; Franco, R. W. A.; Miranda, P. C. M. L.; Sthel, M.; Vargas, H.; Constantino, R.; Baesso, M. L.
2005-08-01
In this Letter, thermal lens and open cell photoacoustic techniques are used to measure the thermal properties of biodiesel oils. The absolute values of the thermal effusivity, thermal diffusivity, thermal conductivity and the temperature coefficient of the refractive index were determined for samples obtained from soy, castor bean, sunflower and turnip. The results suggest that the employed techniques may be useful as complementary methods for biodiesel certification.
Creskey, Marybeth C; Li, Changgui; Wang, Junzhi; Girard, Michel; Lorbetskie, Barry; Gravel, Caroline; Farnsworth, Aaron; Li, Xuguang; Smith, Daryl G S; Cyr, Terry D
2012-07-06
Current methods for quality control of inactivated influenza vaccines prior to regulatory approval include determining the hemagglutinin (HA) content by single radial immunodiffusion (SRID), verifying neuraminidase (NA) enzymatic activity, and demonstrating that the levels of the contaminant protein ovalbumin are below a set threshold of 1 μg/dose. The SRID assays require the availability of strain-specific reference HA antigens and antibodies, the production of which is a potential rate-limiting step in vaccine development and release, particularly during a pandemic. Immune responses induced by neuraminidase also contribute to protection from infection; however, the amounts of NA antigen in influenza vaccines are currently not quantified or standardized. Here, we report a method for vaccine analysis that yields simultaneous quantification of HA and NA levels much more rapidly than conventional HA quantification techniques, while providing additional valuable information on the total protein content. Enzymatically digested vaccine proteins were analyzed by LC-MS(E), a mass spectrometric technology that allows absolute quantification of analytes, including the HA and NA antigens, other structural influenza proteins and chicken egg proteins associated with the manufacturing process. This method has potential application for increasing the accuracy of reference antigen standards and for validating label claims for HA content in formulated vaccines. It can also be used to monitor NA and chicken egg protein content in order to monitor manufacturing consistency. While this is a useful methodology with potential for broad application, we also discuss herein some of the inherent limitations of this approach and the care and caution that must be taken in its use as a tool for absolute protein quantification. The variations in HA, NA and chicken egg protein concentrations in the vaccines analyzed in this study are indicative of the challenges associated with the current manufacturing and quality control testing procedures. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
Two-Finger Tightness: What Is It? Measuring Torque and Reproducibility in a Simulated Model.
Acker, William B; Tai, Bruce L; Belmont, Barry; Shih, Albert J; Irwin, Todd A; Holmes, James R
2016-05-01
Residents in training are often directed to insert screws using "two-finger tightness" to impart adequate torque but minimize the chance of a screw stripping in bone. This study seeks to quantify and describe two-finger tightness and to assess the variability of its application by residents in training. Cortical bone was simulated using a polyurethane foam block (30-pcf density) that was prepared with predrilled holes for tightening 3.5 × 14-mm long cortical screws and mounted to a custom-built apparatus on a load cell to capture torque data. Thirty-three residents in training, ranging from the first through fifth years of residency, along with eight staff members, were directed to tighten six screws to two-finger tightness in the test block, and peak torque values were recorded. The participants were blinded to their torque values. Stripping torque (2.73 ± 0.56 N·m) was determined from 36 trials and served as a threshold for failed screw placement. The average torques varied substantially with regard to absolute torque values, thus poorly defining two-finger tightness. Junior residents reproduced torque less consistently than the other groups (0.29 and 0.32, respectively). These data quantify absolute values of two-finger tightness but demonstrate considerable variability in absolute torque values, percentage of stripping torque, and the ability to consistently reproduce given torque levels. Increased years in training are weakly correlated with reproducibility, but experience does not seem to affect absolute torque levels. These results question the usefulness of two-finger tightness as a teaching tool and highlight the need for improvement in resident motor skill training and development within a teaching curriculum. Torque-measuring devices may be useful simulation tools for this purpose.
Magnetic resonance imaging determination of left ventricular mass: junior Olympic weightlifters.
Fleck, S J; Pattany, P M; Stone, M H; Kraemer, W J; Thrush, J; Wong, K
1993-04-01
The relationship between left ventricular mass (LVM) and peak VO2 in junior elite Olympic-style weightlifters and sedentary subjects was investigated. Ten male weightlifters (mean ± SE, age = 17.5 ± 0.4 yr, wt = 72.9 ± 3.3 kg) and 15 sedentary males (age = 18.8 ± 0.4 yr, wt = 69.6 ± 2.0 kg) served as subjects. Peak VO2 was measured using a continuous, incrementally loaded bicycle ergometry protocol. LVM was measured using magnetic resonance imaging techniques. Absolute peak VO2 was not significantly different (P ≥ 0.05) between the weightlifters and the control subjects (3.5 ± 0.1 vs 3.3 ± 0.1 l·min⁻¹). Absolute LVM (g) was significantly (P ≤ 0.05) correlated with absolute peak VO2 (l·min⁻¹) in the weightlifters (r = 0.723), but not in the control subjects. No other correlations between LVM, in absolute terms or normalized by body weight, body surface area, or fat-free mass, and peak VO2, in absolute terms or normalized by body weight, were significant. The weightlifters' absolute LVM was significantly greater (P ≤ 0.05) than that of the controls (208.1 ± 10.0 vs 179.7 ± 8.4 g). LVM normalized by body weight and body surface area, but not by fat-free mass, was significantly greater (P ≤ 0.05) in the weightlifters than in the control subjects. These data indicate that LVM in junior elite weightlifters is greater than that of control subjects and that absolute LVM is related to absolute peak VO2 in weightlifters but not in control subjects.
Imaging performance of LabPET APD-based digital PET scanners for pre-clinical research
NASA Astrophysics Data System (ADS)
Bergeron, Mélanie; Cadorette, Jules; Tétrault, Marc-André; Beaudoin, Jean-François; Leroux, Jean-Daniel; Fontaine, Réjean; Lecomte, Roger
2014-02-01
The LabPET is an avalanche photodiode (APD) based digital PET scanner with quasi-individual detector read-out and highly parallel electronic architecture for high-performance in vivo molecular imaging of small animals. The scanner is based on LYSO and LGSO scintillation crystals (2×2×12/14 mm3), assembled side-by-side in phoswich pairs read out by an APD. High spatial resolution is achieved through the individual and independent read-out of an individual APD detector for recording impinging annihilation photons. The LabPET exists in three versions, LabPET4 (3.75 cm axial length), LabPET8 (7.5 cm axial length) and LabPET12 (11.4 cm axial length). This paper focuses on the systematic characterization of the three LabPET versions using two different energy window settings to implement a high-efficiency mode (250-650 keV) and a high-resolution mode (350-650 keV) in the most suitable operating conditions. Prior to measurements, a global timing alignment of the scanners and optimization of the APD operating bias have been carried out. Characteristics such as spatial resolution, absolute sensitivity, count rate performance and image quality have been thoroughly investigated following the NEMA NU 4-2008 protocol. Phantom and small animal images were acquired to assess the scanners' suitability for the most demanding imaging tasks in preclinical biomedical research. The three systems achieve the same radial FBP spatial resolution at 5 mm from the field-of-view center: 1.65/3.40 mm (FWHM/FWTM) for an energy threshold of 250 keV and 1.51/2.97 mm for an energy threshold of 350 keV. The absolute sensitivity for an energy window of 250-650 keV is 1.4%/2.6%/4.3% for LabPET4/8/12, respectively. The best count rate performance peaking at 362 kcps is achieved by the LabPET12 with an energy window of 250-650 keV and a mouse phantom (2.5 cm diameter) at an activity of 2.4 MBq ml-1. With the same phantom, the scatter fraction for all scanners is about 17% for an energy threshold of 250 keV and 10% for an energy threshold of 350 keV. The results obtained with two energy window settings confirm the relevance of high-efficiency and high-resolution operating modes to take full advantage of the imaging capabilities of the LabPET scanners for molecular imaging applications.
Fuzzy Behavior Modulation with Threshold Activation for Autonomous Vehicle Navigation
NASA Technical Reports Server (NTRS)
Tunstel, Edward
2000-01-01
This paper describes fuzzy logic techniques used in a hierarchical behavior-based architecture for robot navigation. An architectural feature for threshold activation of fuzzy-behaviors is emphasized, which is potentially useful for tuning navigation performance in real world applications. The target application is autonomous local navigation of a small planetary rover. Threshold activation of low-level navigation behaviors is the primary focus. A preliminary assessment of its impact on local navigation performance is provided based on computer simulations.
Beauty and charm production in fixed target experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kidonakis, Nikolaos; Vogt, Ramona
We present calculations of NNLO threshold corrections for beauty and charm production in π⁻p and pp interactions at fixed-target experiments. Recent calculations for heavy quark hadroproduction have included next-to-next-to-leading-order (NNLO) soft-gluon corrections [1] to the double differential cross section from threshold resummation techniques [2]. These corrections are important for near-threshold beauty and charm production at fixed-target experiments, including HERA-B and some of the current and future heavy ion experiments.
Determining the neutrino mass with cyclotron radiation emission spectroscopy—Project 8
NASA Astrophysics Data System (ADS)
Ashtari Esfahani, Ali; Asner, David M.; Böser, Sebastian; Cervantes, Raphael; Claessens, Christine; de Viveiros, Luiz; Doe, Peter J.; Doeleman, Shepard; Fernandes, Justin L.; Fertl, Martin; Finn, Erin C.; Formaggio, Joseph A.; Furse, Daniel; Guigue, Mathieu; Heeger, Karsten M.; Jones, A. Mark; Kazkaz, Kareem; Kofron, Jared A.; Lamb, Callum; LaRoque, Benjamin H.; Machado, Eric; McBride, Elizabeth L.; Miller, Michael L.; Monreal, Benjamin; Mohanmurthy, Prajwal; Nikkel, James A.; Oblath, Noah S.; Pettus, Walter C.; Hamish Robertson, R. G.; Rosenberg, Leslie J.; Rybka, Gray; Rysewyk, Devyn; Saldaña, Luis; Slocum, Penny L.; Sternberg, Matthew G.; Tedeschi, Jonathan R.; Thümmler, Thomas; VanDevender, Brent A.; E Vertatschitsch, Laura; Wachtendonk, Megan; Weintroub, Jonathan; Woods, Natasha L.; Young, André; Zayas, Evan M.
2017-05-01
The most sensitive direct method to establish the absolute neutrino mass is observation of the endpoint of the tritium beta-decay spectrum. Cyclotron radiation emission spectroscopy (CRES) is a precision spectrographic technique that can probe much of the unexplored neutrino mass range with O(eV) resolution. A lower bound of m(νe) ≳ 9(0.1) meV is set by observations of neutrino oscillations, while the KATRIN experiment, the current-generation tritium beta-decay experiment based on magnetic adiabatic collimation with an electrostatic (MAC-E) filter, will achieve a sensitivity of m(νe) ≲ 0.2 eV. The CRES technique aims to avoid the difficulties in scaling up a MAC-E filter-based experiment to achieve a lower mass sensitivity. In this paper we review the current status of the CRES technique and describe Project 8, a phased absolute neutrino mass experiment that has the potential to reach sensitivities down to m(νe) ≲ 40 meV using an atomic tritium source.
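For context, CRES rests on the standard relativistic cyclotron relation (not quoted from the paper), which maps a measured radiation frequency in a known magnetic field B to the electron's kinetic energy:

```latex
f_c \;=\; \frac{1}{2\pi}\,\frac{eB}{\gamma m_e}
     \;=\; \frac{1}{2\pi}\,\frac{e B c^{2}}{m_e c^{2} + E_{\mathrm{kin}}}
```

Near the 18.6 keV tritium endpoint, a frequency shift of roughly 2 parts per million therefore corresponds to a 1 eV change in electron energy, which is the origin of the eV-scale resolution quoted above.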
Patrizio, Angela; Specht, Christian G.
2016-01-01
The ability to count molecules is essential to elucidating cellular mechanisms, as these often depend on the absolute numbers and concentrations of molecules within specific compartments. Such is the case at chemical synapses, where the transmission of information from presynaptic to postsynaptic terminals requires complex interactions between small sets of molecules. Be it the subunit stoichiometry specifying neurotransmitter receptor properties, the copy numbers of scaffold proteins setting the limit of receptor accumulation at synapses, or protein packing densities shaping the molecular organization and plasticity of the postsynaptic density, all of these depend on exact quantities of components. A variety of proteomic, electrophysiological, and quantitative imaging techniques have yielded insights into the molecular composition of synaptic complexes. In this review, we compare the different quantitative approaches and consider the potential of single molecule imaging techniques for the quantification of synaptic components. We also discuss specific neurobiological data to contextualize the obtained numbers and to explain how they aid our understanding of synaptic structure and function. PMID:27335891
Using absolute gravimeter data to determine vertical gravity gradients
Robertson, D.S.
2001-01-01
The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. The system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of the system response function in the same solution. The resulting non-linear equations must be solved iteratively, and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
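A sketch of the core fitting idea follows: the drop trajectory is expanded to include a linear gradient term and fitted by nonlinear least squares. The series form of the model, the synthetic data, and the noise level are assumptions for illustration; as the abstract implies, a single drop barely constrains the gradient, which is why all drops from a site occupation are combined (together with the system-response terms) in practice.

```python
import numpy as np
from scipy.optimize import least_squares

def trajectory(params, t):
    """Free-fall position with a linear vertical gravity gradient gamma.

    Series solution of x'' = g0 + gamma*x, valid because gamma*t^2 << 1:
    x(t) ~ x0 + v0*t + g0*t^2/2 + gamma*(x0*t^2/2 + v0*t^3/6 + g0*t^4/24).
    """
    x0, v0, g0, gamma = params
    return (x0 + v0 * t + 0.5 * g0 * t**2
            + gamma * (0.5 * x0 * t**2 + v0 * t**3 / 6.0 + g0 * t**4 / 24.0))

# Synthetic single drop; note the gradient term contributes only a few
# nanometres over 0.2 s, so its estimate from one drop is very uncertain.
t = np.linspace(0.0, 0.2, 200)
true = np.array([0.0, 0.05, 9.8123456, 3.086e-6])
x_obs = trajectory(true, t) + np.random.default_rng(4).normal(0, 1e-9, t.size)

fit = least_squares(lambda p: trajectory(p, t) - x_obs,
                    x0=np.array([0.0, 0.0, 9.8, 0.0]))
print("estimated g0 (m/s^2):", fit.x[2], " estimated gradient (1/s^2):", fit.x[3])
```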
Borckardt, Jeffrey J; Bikson, Marom; Frohman, Heather; Reeves, Scott T; Datta, Abhishek; Bansal, Varun; Madan, Alok; Barth, Kelly; George, Mark S
2012-02-01
Several brain stimulation technologies are beginning to show promise as pain treatments. However, traditional versions of one specific technique, transcranial direct current stimulation (tDCS), stimulate broad regions of cortex with poor spatial precision. A new tDCS design, called high definition tDCS (HD-tDCS), allows for focal delivery of the charge to discrete regions of the cortex. We sought to preliminarily test the safety and tolerability of the HD-tDCS technique as well as to evaluate whether HD-tDCS over the motor cortex would decrease pain and sensory experience. Twenty-four healthy adult volunteers underwent quantitative sensory testing before and after 20 minutes of real (n = 13) or sham (n = 11) 2 mA HD-tDCS over the motor cortex. No adverse events occurred and no side effects were reported. Real HD-tDCS was associated with significantly decreased heat and cold sensory thresholds, decreased thermal wind-up pain, and a marginal analgesic effect for cold pain thresholds. No significant effects were observed for mechanical pain thresholds or heat pain thresholds. HD-tDCS appears well tolerated, and produced changes in the underlying cortex that are associated with changes in pain perception. Future studies are warranted to investigate HD-tDCS in other applications, and to examine further its potential to affect pain perception. This article presents preliminary tolerability and efficacy data for a new focal brain stimulation technique called high definition transcranial direct current stimulation. This technique may have applications in the management of pain. Copyright © 2012. Published by Elsevier Inc.
Olfactory threshold increase in trigeminal neuralgia after balloon compression.
Siqueira, S R D T; Nóbrega, J C M; Teixeira, M J; Siqueira, J T T
2006-12-01
Idiopathic trigeminal neuralgia (ITN) is a well-known disease often treated with neurosurgical procedures, which may produce sensory abnormalities, such as numbness, dysesthesia and taste complaints. We studied 12 patients who underwent balloon compression, in order to assess abnormalities in pain, gustatory and olfactory thresholds, with a follow-up of 120 days. We compared the patients with a matched control group of 12 patients. We found a significant difference in the olfactory threshold in the immediate post-operative period (p=0.048). We conclude that injured trigeminal fibers are probably associated with the increase in the olfactory threshold after the surgery, supporting the sensory interaction theory.
Bashir, Mustafa R; Weber, Paul W; Husarik, Daniela B; Howle, Laurens E; Nelson, Rendon C
2012-08-01
To assess whether a scan triggering technique based on the slope of the time-attenuation curve combined with table speed optimization may improve arterial enhancement in aortic CT angiography compared to conventional threshold-based triggering techniques. Measurements of arterial enhancement were performed in a physiologic flow phantom over a range of simulated cardiac outputs (2.2-8.1 L/min) using contrast media boluses of 80 and 150 mL injected at 4 mL/s. These measurements were used to construct computer models of aortic attenuation in CT angiography, using cardiac output, aortic diameter, and CT table speed as input parameters. In-plane enhancement was calculated for normal and aneurysmal aortic diameters. Calculated arterial enhancement was poor (<150 HU) along most of the scan length using the threshold-based triggering technique for low cardiac outputs and the aneurysmal aorta model. Implementation of the slope-based triggering technique with table speed optimization improved enhancement in all scenarios and yielded good- (>200 HU; 13/16 scenarios) to excellent-quality (>300 HU; 3/16 scenarios) enhancement in all cases. Slope-based triggering with table speed optimization may improve the technical quality of aortic CT angiography over conventional threshold-based techniques, and may reduce technical failures related to low cardiac output and slow flow through an aneurysmal aorta.
Hypoglycemia prediction with subject-specific recursive time-series models.
Eren-Oruklu, Meriyan; Cinar, Ali; Quinn, Lauretta
2010-01-01
Avoiding hypoglycemia while keeping glucose within the narrow normoglycemic range (70-120 mg/dl) is a major challenge for patients with type 1 diabetes. Continuous glucose monitors can provide hypoglycemic alarms when the measured glucose decreases below a threshold. However, a better approach is to provide an early alarm that predicts a hypoglycemic episode before it occurs, allowing enough time for the patient to take the necessary precaution to avoid hypoglycemia. We have previously proposed subject-specific recursive models for the prediction of future glucose concentrations and evaluated their prediction performance. In this work, our objective was to evaluate this algorithm further to predict hypoglycemia and provide early hypoglycemic alarms. Three different methods were proposed for alarm decision, where (A) absolute predicted glucose values, (B) cumulative-sum (CUSUM) control chart, and (C) exponentially weighted moving-average (EWMA) control chart were used. Each method was validated using data from the Diabetes Research in Children Network, which consist of measurements from a continuous glucose sensor during an insulin-induced hypoglycemia. Reference serum glucose measurements were used to determine the sensitivity to predict hypoglycemia and the false alarm rate. With the hypoglycemic threshold set to 60 mg/dl, sensitivity of 89, 87.5, and 89% and specificity of 67, 74, and 78% were reported for methods A, B, and C, respectively. Mean values for time to detection were 30 +/- 5.51 (A), 25.8 +/- 6.46 (B), and 27.7 +/- 5.32 (C) minutes. Compared to the absolute value method, both CUSUM and EWMA methods behaved more conservatively before raising an alarm (reduced time to detection), which significantly decreased the false alarm rate and increased the specificity. 2010 Diabetes Technology Society.
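For illustration, the sketch below implements the three alarm-decision rules in the spirit described above (absolute predicted value, lower-sided CUSUM, and EWMA control chart) on a hypothetical sequence of predicted glucose values; the chart parameters are placeholders rather than the study's tuned settings.

```python
"""Sketch of the three alarm-decision rules applied to a hypothetical stream
of predicted glucose values (mg/dl). Thresholds and chart parameters are
illustrative placeholders, not the study's settings."""
import numpy as np

HYPO = 60.0          # hypoglycemic threshold (mg/dl)

def alarm_absolute(pred, threshold=HYPO):
    """Method A: alarm as soon as a predicted value falls below the threshold."""
    return pred < threshold

def alarm_cusum(pred, target=90.0, k=5.0, h=20.0):
    """Method B: one-sided (lower) CUSUM on the predicted values."""
    s, alarms = 0.0, np.zeros(len(pred), dtype=bool)
    for i, x in enumerate(pred):
        s = min(0.0, s + (x - target) + k)   # accumulate downward deviations
        alarms[i] = s < -h
    return alarms

def alarm_ewma(pred, target=90.0, lam=0.3, L=2.5, sigma=10.0):
    """Method C: EWMA control chart with a lower control limit."""
    z, alarms = target, np.zeros(len(pred), dtype=bool)
    lcl = target - L * sigma * np.sqrt(lam / (2.0 - lam))
    for i, x in enumerate(pred):
        z = lam * x + (1.0 - lam) * z
        alarms[i] = z < lcl
    return alarms

pred = np.array([110, 104, 97, 90, 84, 78, 73, 68, 64, 59], dtype=float)
for name, fn in [("A", alarm_absolute), ("B", alarm_cusum), ("C", alarm_ewma)]:
    flags = fn(pred)
    first = int(np.argmax(flags)) if flags.any() else None
    print("method", name, "first alarm at sample", first)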
Financial burden of raising CSHCN: association with state policy choices.
Parish, Susan L; Shattuck, Paul T; Rose, Roderick A
2009-12-01
We examined the association between state Medicaid and State Children's Health Insurance Program (SCHIP) income eligibility and the financial burden reported by low-income families raising children with special health care needs (CSHCN). Data on low-income CSHCN and their families were from the National Survey of Children With Special Health Care Needs (N = 17039), with a representative sample from each state. State Medicaid and SCHIP income-eligibility thresholds were from publicly available sources. The 3 outcomes included whether families had any out-of-pocket health care expenditures during the previous 12 months for their CSHCN, the amount of expenditure, and expenditures as a percentage of family income. We used multilevel logistic regression to model the association between Medicaid and SCHIP characteristics and families' financial burden, controlling for state median income and child- and family-level characteristics. Overall, 61% of low-income families reported expenditures of >$0. Among these families, 30% had expenses between $250 and $500, and 34% had expenses of more than $500. Twenty-seven percent of the families reporting any expenses had expenditures that exceeded 3% of their total household income. The percentage of low-income families with out-of-pocket expenses that exceeded 3% of their income varied considerably according to state and ranged from 5.6% to 25.8%. Families living in states with higher Medicaid and SCHIP income-eligibility guidelines were less likely to have high absolute burden and high relative burden. Beyond child and family characteristics, there is considerable state-level variability in low-income families' out-of-pocket expenditures for their CSHCN. A portion of this variability is associated with states' Medicaid and SCHIP income-eligibility thresholds. Families living in states with more generous programs report less absolute and relative financial burden than families living in states with less generous benefits.
Microstructural Effects on Initiation Behavior in HMX
NASA Astrophysics Data System (ADS)
Molek, Christopher; Welle, Eric; Hardin, Barrett; Vitarelli, Jim; Wixom, Ryan; Samuels, Philip
Understanding the role microstructure plays on ignition and growth behavior has been the subject of a significant body of research within the detonation physics community. The pursuit of this understanding is important because safety and performance characteristics have been shown to strongly correlate to particle morphology. Historical studies have often correlated bulk powder characteristics to the performance or safety characteristics of pressed materials. We believe that a clearer and more relevant correlation is made between the pressed microstructure and the observed detonation behavior. This type of assessment is possible, as techniques now exist for the quantification of the pressed microstructures. Our talk will report on experimental efforts that correlate directly measured microstructural characteristics to initiation threshold behavior of HMX based materials. The internal microstructures were revealed using an argon ion cross-sectioning technique. This technique enabled the quantification of density and interface area of the pores within the pressed bed using methods of stereology. These bed characteristics are compared to the initiation threshold behavior of three HMX based materials using an electric gun based test method. Finally, a comparison of experimental threshold data to supporting theoretical efforts will be made.
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
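The period-jump correction can be illustrated with a short sketch: the low-frequency absolute phase, scaled by the (odd) period ratio, predicts the high-frequency phase, and the integer number of 2π periods by which the measured value deviates is rounded off and removed. Variable names and the ratio are illustrative, not the paper's exact formulation.

```python
"""Sketch of the period-jump correction idea: the coarse (low-frequency)
absolute phase fixes the integer fringe order of the high-frequency absolute
phase. Schematic only; not the paper's exact implementation."""
import numpy as np

def correct_period_jumps(phi_high, phi_low, ratio):
    """phi_high : high-frequency absolute phase (rad), may contain +/- 2*pi jumps
    phi_low  : low-frequency absolute phase (rad), fringe period = ratio * high period
    ratio    : odd integer ratio of the two fringe periods"""
    predicted = ratio * phi_low                              # coarse prediction of the fine phase
    jumps = np.round((predicted - phi_high) / (2.0 * np.pi))  # integer number of missed periods
    return phi_high + 2.0 * np.pi * jumps

# toy example: a simulated +1 period jump at one pixel, coarse measurement with small noise
true_phase = np.linspace(0.0, 40.0 * np.pi, 11)
measured = true_phase.copy()
measured[5] += 2.0 * np.pi                                   # period jump error
coarse = true_phase / 3.0 + np.random.normal(0.0, 0.02, 11)  # ratio = 3
print(np.allclose(correct_period_jumps(measured, coarse, 3), true_phase))
```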
NASA Astrophysics Data System (ADS)
Li, Qimeng; Li, Shichun; Hu, Xianglong; Zhao, Jing; Xin, Wenhui; Song, Yuehui; Hua, Dengxin
2018-01-01
Absolute measurement of atmospheric temperature avoids the calibration process and improves measurement accuracy. To realize a rotational Raman temperature lidar based on absolute measurement, a two-stage parallel multi-channel spectroscopic filter combining a first-order blazed grating with a fiber Bragg grating is designed and its performance is tested. The parameters and the optical path structure of the core cascaded device (a micron-level fiber array) are optimized, and the optical path of the primary spectroscope is simulated; the maximum centrifugal distortion of the rotational Raman spectrum is approximately 0.0031 nm, with a centrifugal ratio of 0.69%. The experimental results show that the channel coefficients of the primary spectroscope are 0.67, 0.91, 0.67, 0.75, 0.82, 0.63, 0.87, 0.97, 0.89, 0.87 and 1, using the twelfth channel as a reference, and the average FWHM is about 0.44 nm. The maximum deviation between the experimental wavelength and the theoretical value is approximately 0.0398 nm, a deviation of 8.86%. The effective suppression of the elastic scattering signal is 30.6, 35.2, 37.1, 38.4, 36.8, 38.2, 41.0, 44.3, 44.0 and 46.7 dB; combined with the second spectroscope, the total suppression is at least 65 dB. Single rotational Raman lines can therefore be finely extracted to achieve absolute temperature measurement.
Time Poverty Thresholds and Rates for the US Population
ERIC Educational Resources Information Center
Kalenkoski, Charlene M.; Hamrick, Karen S.; Andrews, Margaret
2011-01-01
Time constraints, like money constraints, affect Americans' well-being. This paper defines what it means to be time poor based on the concepts of necessary and committed time and presents time poverty thresholds and rates for the US population and certain subgroups. Multivariate regression techniques are used to identify the key variables…
Liu, Yang; Alocilja, Evangelyn; Chakrabartty, Shantanu
2009-01-01
Silver-enhanced labeling is a technique used in immunochromatographic assays for improving the sensitivity of pathogen detection. In this paper, we employ the silver enhancement approach for constructing a biomolecular transistor that uses a high-density interdigitated electrode to detect rabbit IgG. We show that the response of the biomolecular transistor comprises: (a) a sub-threshold region where the conductance change is an exponential function of the enhancement time; and (b) an above-threshold region where the conductance change is a linear function of the enhancement time. By exploiting both these regions of operation, it is shown that the silver enhancing time is a reliable indicator of the IgG concentration. The method provides a relatively straightforward alternative to biomolecular signal amplification techniques. The measured results using a biochip prototype fabricated in silicon show that 240 pg/mL rabbit IgG can be detected at the silver enhancing time of 42 min. Also, the biomolecular transistor is compatible with silicon-based processing, making it ideal for designing integrated CMOS biosensors.
Quantification of absolute blood velocity using LDA
NASA Astrophysics Data System (ADS)
Borozdova, M. A.; Fedosov, I. V.; Tuchin, V. V.
2018-04-01
We developed a novel Laser Doppler anemometer design in which the measuring volume is comparable to the red blood cell (RBC) size and a small interference-fringe period improves device resolution. The technique was used to estimate the Doppler frequency shift in flow velocity measurements. It has been shown that the technique is applicable to measurements in whole blood.
NASA Astrophysics Data System (ADS)
Nicolas, J.; Nocquet, J.; van Camp, M.; Coulot, D.
2003-12-01
Time-dependent displacements of stations usually have magnitudes close to the accuracy of each individual technique, and it remains difficult to separate true geophysical motion from possible artifacts inherent to each space geodetic technique. The Observatoire de la Côte d'Azur (OCA), located at Grasse, France, benefits from the collocation of several geodetic instruments and techniques (three laser ranging stations and a permanent GPS), which allows a direct comparison of the time series. Moreover, absolute gravimetry measurement campaigns have been performed regularly since 1997, first by the Ecole et Observatoire des Sciences de la Terre (EOST) of Strasbourg, France, and more recently by the Royal Observatory of Belgium. This study presents a comparison between the vertical-component positioning time series derived from the SLR and GPS analyses and the gravimetric results from 1997 to 2003. The laser station coordinates are based on a combined LAGEOS-1 and -2 solution using reference 10-day arc orbits, the ITRF2000 reference frame, and the IERS96 conventions. Several IGS weekly global GPS solutions are combined and compared to the SLR results. The absolute gravimetry measurements are converted into vertical displacements with a classical gradient. The laser time series indicate a strong annual signal with a peak-to-peak amplitude of about 3-4 cm on the vertical component. Absolute gravimetry data agree with the SLR results. GPS positioning solutions also indicate a significant annual term, but with a magnitude of only 50% of the one shown by the SLR solution and by the gravimetry measurements. Similar annual terms are also observed at other SLR sites we processed, but usually with lower and varying amplitudes. These annual signals are also compared to vertical positioning variations corresponding to an atmospheric loading model. We present the level of agreement between the different techniques and discuss possible explanations for the discrepancy noted between the signals. Finally, we propose explanations for the large annual term at Grasse: these annual variations could be partly due to a hydrological loading effect on the karstic massif on which the observatory is located.
NASA Astrophysics Data System (ADS)
Chérigier, L.; Czarnetzki, U.; Luggenhölscher, D.; Schulz-von der Gathen, V.; Döbele, H. F.
1999-01-01
Absolute atomic hydrogen densities were measured in the Gaseous Electronics Conference reference cell parallel plate reactor by Doppler-free two-photon absorption laser induced fluorescence spectroscopy (TALIF) at λ=205 nm. The capacitively coupled radio frequency discharge was operated at 13.56 MHz in pure hydrogen under various input power and pressure conditions. The Doppler-free excitation technique with an unfocused laser beam, together with imaging the fluorescence radiation by an intensified charge coupled device camera, allows instantaneous spatial resolution along the radial direction. Absolute density calibration is obtained with the aid of a flow tube reactor and titration with NO2. The influence of spatial intensity inhomogeneities along the laser beam and subsequent fluorescence is corrected by TALIF in xenon. A full mapping of the absolute density distribution between the electrodes was obtained. The detection limit for atomic hydrogen amounts to about 2×10¹⁸ m⁻³. The dissociation degree is of the order of a few percent.
NASA Technical Reports Server (NTRS)
Jackson, F. C.; Walton, W. T.; Baker, P. L.
1982-01-01
A microwave radar technique for remotely measuring the vector wave number spectrum of the ocean surface is described. The technique, which employs short-pulse, noncoherent radars in a conical scan mode near vertical incidence, is shown to be suitable for both aircraft and satellite application. The technique was validated at 10 km aircraft altitude, where excellent agreement was found between buoy- and radar-inferred absolute wave height spectra.
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
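The two advocated statistics are straightforward to compute from a sample of signed errors, as in the hedged sketch below: the empirical probability that the absolute error falls below a chosen threshold, and the error amplitude not exceeded at a chosen confidence level, each with a bootstrap standard error reflecting the finite size of the reference set. The error sample here is synthetic.

```python
"""Sketch of the two statistics advocated above, computed from a synthetic
sample of signed model errors: (1) P(|error| < threshold) from the empirical
CDF and (2) the error amplitude not exceeded at a chosen confidence level,
with bootstrap standard errors."""
import numpy as np

rng = np.random.default_rng(0)
errors = rng.normal(0.5, 2.0, 200)          # hypothetical signed errors (e.g., kcal/mol)
abs_err = np.abs(errors)

def below_threshold_prob(u, eta):
    return np.mean(u < eta)                  # empirical CDF of |error| at eta

def amplitude_at_confidence(u, p):
    return np.quantile(u, p)                 # e.g., p = 0.95

def bootstrap_se(u, stat, n_boot=2000):
    vals = [stat(rng.choice(u, size=u.size, replace=True)) for _ in range(n_boot)]
    return np.std(vals, ddof=1)

eta, p = 1.0, 0.95
print("P(|err| < %.1f) = %.2f +/- %.2f" % (
    eta, below_threshold_prob(abs_err, eta),
    bootstrap_se(abs_err, lambda u: below_threshold_prob(u, eta))))
print("Q%.0f(|err|)     = %.2f +/- %.2f" % (
    100 * p, amplitude_at_confidence(abs_err, p),
    bootstrap_se(abs_err, lambda u: amplitude_at_confidence(u, p))))
```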
Auditory processing in absolute pitch possessors
NASA Astrophysics Data System (ADS)
McKetton, Larissa; Schneider, Keith A.
2018-05-01
Absolute pitch (AP) is a rare ability to classify a musical pitch without a reference standard. It has been of great interest to researchers studying auditory processing and music cognition since it is seldom expressed and sheds light on influences pertaining to neurodevelopmental biological predispositions and the onset of musical training. We investigated the smallest frequency difference that could be detected, or just noticeable difference (JND), between two pitches. Here, we report significant differences in JND thresholds in AP musicians and non-AP musicians compared to non-musician control groups at both 1000 Hz and 987.76 Hz testing frequencies. Although the AP musicians did better than non-AP musicians, the difference was not significant. In addition, we looked at neuro-anatomical correlates of musicianship and AP using structural MRI. We report increased cortical thickness of the left Heschl's Gyrus (HG) and decreased cortical thickness of the inferior frontal opercular gyrus (IFO) and circular insular sulcus volume (CIS) in AP compared to non-AP musicians and controls. These structures may therefore be optimally enhanced and reduced to form the most efficient network for AP to emerge.
Casey, D T; Frenje, J A; Gatu Johnson, M; Séguin, F H; Li, C K; Petrasso, R D; Glebov, V Yu; Katz, J; Knauer, J P; Meyerhofer, D D; Sangster, T C; Bionta, R M; Bleuel, D L; Döppner, T; Glenzer, S; Hartouni, E; Hatchett, S P; Le Pape, S; Ma, T; MacKinnon, A; McKernan, M A; Moran, M; Moses, E; Park, H-S; Ralph, J; Remington, B A; Smalyuk, V; Yeamans, C B; Kline, J; Kyrala, G; Chandler, G A; Leeper, R J; Ruiz, C L; Cooper, G W; Nelson, A J; Fletcher, K; Kilkenny, J; Farrell, M; Jasion, D; Paguio, R
2012-10-01
A magnetic recoil spectrometer (MRS) has been installed and extensively used on OMEGA and the National Ignition Facility (NIF) for measurements of the absolute neutron spectrum from inertial confinement fusion implosions. From the neutron spectrum measured with the MRS, many critical implosion parameters are determined including the primary DT neutron yield, the ion temperature, and the down-scattered neutron yield. As the MRS detection efficiency is determined from first principles, the absolute DT neutron yield is obtained without cross-calibration to other techniques. The MRS primary DT neutron measurements at OMEGA and the NIF are shown to be in excellent agreement with previously established yield diagnostics on OMEGA, and with the newly commissioned nuclear activation diagnostics on the NIF.
Absolute near-infrared refractometry with a calibrated tilted fiber Bragg grating.
Zhou, Wenjun; Mandia, David J; Barry, Seán T; Albert, Jacques
2015-04-15
The absolute refractive indices (RIs) of water and other liquids are determined with an uncertainty of ±0.001 at near-infrared wavelengths by using the tilted fiber Bragg grating (TFBG) cladding mode resonances of a standard single-mode fiber to measure the critical angle for total internal reflection at the interface between the fiber and its surroundings. The necessary condition to obtain absolute RIs (instead of measuring RI changes) is a thorough characterization of the dispersion of the core mode effective index of the TFBG across the full range of its cladding mode resonance spectrum. This technique is shown to be competitive with the best available measurements of the RIs of water and NaCl solutions at wavelengths in the vicinity of 1550 nm.
NASA Technical Reports Server (NTRS)
Hoge, F. E.; Swift, R. N.
1983-01-01
Airborne lidar oil spill experiments carried out to determine the practicability of the AOFSCE (absolute oil fluorescence spectral conversion efficiency) computational model are described. The results reveal that the model is suitable over a considerable range of oil film thicknesses provided the fluorescence efficiency of the oil does not approach the minimum detection sensitivity limitations of the lidar system. Separate airborne lidar experiments to demonstrate measurement of the water column Raman conversion efficiency are also conducted to ascertain the ultimate feasibility of converting such relative oil fluorescence to absolute values. Whereas the AOFSCE model is seen as highly promising, further airborne water column Raman conversion efficiency experiments with improved temporal or depth-resolved waveform calibration and software deconvolution techniques are thought necessary for a final determination of suitability.
A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals
NASA Technical Reports Server (NTRS)
Skelton, R. T.; Mahoney, W. A.
1993-01-01
We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
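A schematic reading of the automatic-selection idea is sketched below: for each candidate threshold, a generalized Pareto distribution is fitted to the exceedances and the Anderson-Darling statistic of the fit is computed, and the candidate with the smallest statistic is retained; bootstrapping the series in the same way would propagate the threshold uncertainty to high return period quantiles. This is a simplified assumption-laden sketch, not the authors' exact test or bootstrap procedure.

```python
"""Sketch of automatic POT threshold selection: scan candidate thresholds,
fit a GPD to the exceedances at each, score the fit with the Anderson-Darling
statistic, and keep the candidate with the smallest statistic."""
import numpy as np
from scipy import stats

def anderson_darling(cdf_values):
    z = np.clip(np.sort(cdf_values), 1e-12, 1.0 - 1e-12)
    n = z.size
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(z) + np.log(1.0 - z[::-1])))

def select_threshold(x, candidates, min_exceedances=30):
    best_u, best_a2 = None, np.inf
    for u in candidates:
        exc = x[x > u] - u
        if exc.size < min_exceedances:        # too few exceedances to fit reliably
            continue
        c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
        a2 = anderson_darling(stats.genpareto.cdf(exc, c, loc=0.0, scale=scale))
        if a2 < best_a2:
            best_u, best_a2 = u, a2
    return best_u, best_a2

rng = np.random.default_rng(1)
data = rng.gumbel(10.0, 3.0, 5000)            # synthetic daily series
cands = np.quantile(data, np.linspace(0.80, 0.98, 10))
print(select_threshold(data, cands))
```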
Nebuya, S; Noshiro, M; Yonemoto, A; Tateno, S; Brown, B H; Smallwood, R H; Milnes, P
2006-05-01
Inter-subject variability has caused the majority of previous electrical impedance tomography (EIT) techniques to focus on the derivation of relative or difference measures of in vivo tissue resistivity. Implicit in these techniques is the requirement for a reference or previously defined data set. This study assesses the accuracy and optimum electrode placement strategy for a recently developed method which estimates an absolute value of organ resistivity without recourse to a reference data set. Since this measurement of tissue resistivity is absolute, in Ohm metres, it should be possible to use EIT measurements for the objective diagnosis of lung diseases such as pulmonary oedema and emphysema. However, the stability and reproducibility of the method have not yet been investigated fully. To investigate these problems, this study used a Sheffield Mk3.5 system which was configured to operate with eight measurement electrodes. As a result of this study, the absolute resistivity measurement was found to be insensitive to the electrode level between 4 and 5 cm above the xiphoid process. The level of the electrode plane was varied between 2 cm and 7 cm above the xiphoid process. Absolute lung resistivity in 18 normal subjects (age 22.6 +/- 4.9, height 169.1 +/- 5.7 cm, weight 60.6 +/- 4.5 kg, body mass index 21.2 +/- 1.6: mean +/- standard deviation) was measured during both normal and deep breathing for 1 min. Three sets of measurements were made over a period of several days on each of nine of the normal male subjects. No significant differences in absolute lung resistivity were found, either during normal tidal breathing between the electrode levels of 4 and 5 cm (9.3 +/- 2.4 Omega m, 9.6 +/- 1.9 Omega m at 4 and 5 cm, respectively: mean +/- standard deviation) or during deep breathing between the electrode levels of 4 and 5 cm (10.9 +/- 2.9 Omega m and 11.1 +/- 2.3 Omega m, respectively: mean +/- standard deviation). However, the differences in absolute lung resistivity between normal and deep tidal breathing at the same electrode level are significant. No significant difference was found in the coefficient of variation between the electrode levels of 4 and 5 cm (9.5 +/- 3.6%, 8.5 +/- 3.2% at 4 and 5 cm, respectively: mean +/- standard deviation in individual subjects). Therefore, the electrode levels of 4 and 5 cm above the xiphoid process showed reasonable reliability in the measurement of absolute lung resistivity both among individuals and over time.
The Design of Optical Sensor for the Pinhole/Occulter Facility
NASA Technical Reports Server (NTRS)
Greene, Michael E.
1990-01-01
Three optical sight sensor systems were designed, built and tested. Two of the optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the sun. The system is for use with the Pinhole/Occulter Facility (P/OF), a solar hard x ray experiment to be flown from Space Shuttle or Space Station. The sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track and hold circuitry for data reduction, an analog to digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. The system consists of a white light source, a mirror and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image and hence the vibration of the structure is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until threshold is surpassed.
Rail-highway crossing hazard prediction : research results
DOT National Transportation Integrated Search
1979-12-01
This document presents techniques for constructing and evaluating railroad grade crossing hazard indexes. Hazard indexes are objective formulas for comparing or ranking crossings according to relative hazard or for calculating absolute hazard (co...
Erosive Burning Study Utilizing Ultrasonic Measurement Techniques
NASA Technical Reports Server (NTRS)
Furfaro, James A.
2003-01-01
A 6-segment subscale motor was developed to generate a range of internal environments from which multiple propellants could be characterized for erosive burning. The motor test bed was designed to provide a high Mach number, high mass flux environment. Propellant regression rates were monitored for each segment utilizing ultrasonic measurement techniques. These data were obtained for three propellants (RSRM, ETM-03, and Castor IVA), which span two propellant types, PBAN (polybutadiene acrylonitrile) and HTPB (hydroxyl terminated polybutadiene). The characterization of these propellants indicates a remarkably similar erosive burning response to the induced flow environment. Propellant burn rates for each type had a conventional response with respect to pressure up to a bulk flow velocity threshold. Each propellant, however, had a unique threshold at which it would experience an increase in observed propellant burn rate. Above the observed threshold each propellant again demonstrated a similar enhanced burn rate response corresponding to the local flow environment.
Restrictive or Liberal Red-Cell Transfusion for Cardiac Surgery.
Mazer, C David; Whitlock, Richard P; Fergusson, Dean A; Hall, Judith; Belley-Cote, Emilie; Connolly, Katherine; Khanykin, Boris; Gregory, Alexander J; de Médicis, Étienne; McGuinness, Shay; Royse, Alistair; Carrier, François M; Young, Paul J; Villar, Juan C; Grocott, Hilary P; Seeberger, Manfred D; Fremes, Stephen; Lellouche, François; Syed, Summer; Byrne, Kelly; Bagshaw, Sean M; Hwang, Nian C; Mehta, Chirag; Painter, Thomas W; Royse, Colin; Verma, Subodh; Hare, Gregory M T; Cohen, Ashley; Thorpe, Kevin E; Jüni, Peter; Shehata, Nadine
2017-11-30
The effect of a restrictive versus liberal red-cell transfusion strategy on clinical outcomes in patients undergoing cardiac surgery remains unclear. In this multicenter, open-label, noninferiority trial, we randomly assigned 5243 adults undergoing cardiac surgery who had a European System for Cardiac Operative Risk Evaluation (EuroSCORE) I of 6 or more (on a scale from 0 to 47, with higher scores indicating a higher risk of death after cardiac surgery) to a restrictive red-cell transfusion threshold (transfuse if hemoglobin level was <7.5 g per deciliter, starting from induction of anesthesia) or a liberal red-cell transfusion threshold (transfuse if hemoglobin level was <9.5 g per deciliter in the operating room or intensive care unit [ICU] or was <8.5 g per deciliter in the non-ICU ward). The primary composite outcome was death from any cause, myocardial infarction, stroke, or new-onset renal failure with dialysis by hospital discharge or by day 28, whichever came first. Secondary outcomes included red-cell transfusion and other clinical outcomes. The primary outcome occurred in 11.4% of the patients in the restrictive-threshold group, as compared with 12.5% of those in the liberal-threshold group (absolute risk difference, -1.11 percentage points; 95% confidence interval [CI], -2.93 to 0.72; odds ratio, 0.90; 95% CI, 0.76 to 1.07; P<0.001 for noninferiority). Mortality was 3.0% in the restrictive-threshold group and 3.6% in the liberal-threshold group (odds ratio, 0.85; 95% CI, 0.62 to 1.16). Red-cell transfusion occurred in 52.3% of the patients in the restrictive-threshold group, as compared with 72.6% of those in the liberal-threshold group (odds ratio, 0.41; 95% CI, 0.37 to 0.47). There were no significant between-group differences with regard to the other secondary outcomes. In patients undergoing cardiac surgery who were at moderate-to-high risk for death, a restrictive strategy regarding red-cell transfusion was noninferior to a liberal strategy with respect to the composite outcome of death from any cause, myocardial infarction, stroke, or new-onset renal failure with dialysis, with less blood transfused. (Funded by the Canadian Institutes of Health Research and others; TRICS III ClinicalTrials.gov number, NCT02042898 .).
Lin, Kuang-Wei; Hall, Timothy L.; Xu, Zhen; Cain, Charles A.
2015-01-01
When applying histotripsy pulses shorter than 2 cycles, the formation of a dense bubble cloud only relies on the applied peak negative pressure (p-) exceeding the “intrinsic threshold” of the medium (absolute value of 26 – 30 MPa in most soft tissue). A previous study conducted by our research group showed that a sub-threshold high-frequency probe pulse (3 MHz) can be enabled by a sub-threshold low-frequency pump pulse (500 kHz) where the sum exceeds the intrinsic threshold, thus generating lesion-producing dense bubble clouds (“dual-beam histotripsy”). This paper investigates the feasibility of using an imaging transducer to provide the high-frequency probe pulse in the dual-beam histotripsy approach. More specifically, an ATL L7–4 imaging transducer, pulsed by a Verasonics V-1 Data Acquisition System, was used to generate the high-frequency probe pulses. The low-frequency pump pulses were generated by a 20-element 345 kHz array transducer, driven by a custom high voltage pulser. These dual-beam histotripsy pulses were applied to red-blood-cell (RBC) tissue-mimicking phantoms at a pulse repetition frequency of 1 Hz, and optical imaging was used to visualize bubble clouds and lesions generated in the RBC phantoms. The results showed that dense bubble clouds (and resulting lesions) were generated when the p- of the sub-threshold pump and probe pulses combined constructively to exceed the intrinsic threshold. The average size of the smallest reproducible lesions using the imaging probe pulse enabled by the sub-threshold pump pulse was 0.7 × 1.7 mm while that using the supra-threshold pump pulse alone was 1.4 × 3.7 mm. When the imaging transducer was steered laterally, bubble clouds and lesions were steered correspondingly until the combined p- no longer exceeded the intrinsic threshold. These results were also validated with ex vivo porcine liver experiments. Using an imaging transducer for dual-beam histotripsy can have two advantages, 1) lesion steering can be achieved using the steering of the imaging transducer (implemented with the beamformer of the accompanying programmable ultrasound system) and 2) treatment can be simultaneously monitored when the imaging transducer is used in conjunction with an ultrasound imaging system. PMID:25929995
Continued Development of in Situ Geochronology for Planetary Missions
NASA Technical Reports Server (NTRS)
Devismes, D.; Cohen, B. A.
2015-01-01
The instrument 'Potassium (K) Argon Laser Experiment' (KArLE) is designed and developed for in situ absolute dating of rocks on planetary surfaces. It is based on the K-Ar dating method and uses the Laser Induced Breakdown Spectroscopy - Laser Ablation - Quadrupole Mass Spectrometry (LIBS-LA-QMS) technique. We use a dedicated interface to combine two instruments similar to SAM of Mars Science Laboratory (for the QMS) and ChemCam (for the LA and LIBS). The prototype has demonstrated that KArLE is a suitable and promising instrument for in situ absolute dating.
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
Absolute irradiance of the Moon for on-orbit calibration
Stone, T.C.; Kieffer, H.H.; ,
2002-01-01
The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an absolute radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ≈500 pixels) and 9 SWIR (≈250 pixels) passbands at phase angles within ±90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to absolute radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit time-dependent component abundances to nightly observations of standard stars. The absolute radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the absolute solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the absolute scale for lunar irradiance.
Prewhitening of Colored Noise Fields for Detection of Threshold Sources
1993-11-07
determines the noise covariance matrix, prewhitening techniques allow detection of threshold sources. The multiple signal classification (MUSIC) ...
Adams, Elizabeth J.; Jordan, Thomas J.; Clark, Catharine H.; Nisbet, Andrew
2013-01-01
Quality assurance (QA) for intensity‐ and volumetric‐modulated radiotherapy (IMRT and VMAT) has evolved substantially. In recent years, various commercial 2D and 3D ionization chamber or diode detector arrays have become available, allowing for absolute verification with near real time results, allowing for streamlined QA. However, detector arrays are limited by their resolution, giving rise to concerns about their sensitivity to errors. Understanding the limitations of these devices is therefore critical. In this study, the sensitivity and resolution of the PTW 2D‐ARRAY seven29 and OCTAVIUS II phantom combination was comprehensively characterized for use in dynamic sliding window IMRT and RapidArc verification. Measurement comparisons were made between single acquisition and a multiple merged acquisition techniques to improve the effective resolution of the 2D‐ARRAY, as well as comparisons against GAFCHROMIC EBT2 film and electronic portal imaging dosimetry (EPID). The sensitivity and resolution of the 2D‐ARRAY was tested using two gantry angle 0° modulated test fields. Deliberate multileaf collimator (MLC) errors of 1, 2, and 5 mm and collimator rotation errors were inserted into IMRT and RapidArc plans for pelvis and head & neck sites, to test sensitivity to errors. The radiobiological impact of these errors was assessed to determine the gamma index passing criteria to be used with the 2D‐ARRAY to detect clinically relevant errors. For gamma index distributions, it was found that the 2D‐ARRAY in single acquisition mode was comparable to multiple acquisition modes, as well as film and EPID. It was found that the commonly used gamma index criteria of 3% dose difference or 3 mm distance to agreement may potentially mask clinically relevant errors. Gamma index criteria of 3%/2 mm with a passing threshold of 98%, or 2%/2 mm with a passing threshold of 95%, were found to be more sensitive. We suggest that the gamma index passing thresholds may be used for guidance, but also should be combined with a visual inspection of the gamma index distribution and calculation of the dose difference to assess whether there may be a clinical impact in failed regions. PACS numbers: 87.55.Qr, 87.56.Fc PMID:24257288
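To make the gamma-index criteria concrete, the sketch below computes a minimal 1D global gamma index and the corresponding pass rates for the 3%/3 mm, 3%/2 mm and 2%/2 mm criteria discussed above. The dose profiles are synthetic, and real array or film QA operates on 2D/3D dose grids with interpolation.

```python
"""Minimal 1D global gamma-index sketch showing how %/mm criteria and a
passing threshold are applied. Profiles are synthetic."""
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dose_pct=3.0, dta_mm=3.0):
    """Global gamma: dose criterion is a percentage of the reference maximum."""
    dd = dose_pct / 100.0 * dose_ref.max()
    gammas = np.empty(dose_ref.size)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2          # distance-to-agreement term
        dose2 = ((dose_eval - di) / dd) ** 2      # dose-difference term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

x = np.arange(0.0, 100.0, 1.0)                    # positions in mm
ref = 100.0 * np.exp(-((x - 50.0) / 15.0) ** 2)   # reference dose profile
meas = np.roll(ref, 1) * 1.01                     # measured: 1 mm shift plus 1% scaling
for pct, mm in [(3.0, 3.0), (3.0, 2.0), (2.0, 2.0)]:
    g = gamma_1d(x, ref, meas, pct, mm)
    print("%.0f%%/%.0f mm pass rate: %.1f%%" % (pct, mm, 100.0 * np.mean(g <= 1.0)))
```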
Analysis of laser jamming to satellite-based detector
NASA Astrophysics Data System (ADS)
Wang, Si-wen; Guo, Li-hong; Guo, Ru-hai
2009-07-01
Reconnaissance, communication and navigation satellites play an increasingly important role in modern warfare and have become a significant support and aid system for military operations. With the development of such satellites, anti-satellite laser weapons have emerged. Experiments and analyses of laser jamming of CCDs (charge coupled detectors) near the ground have been reported by many research groups, but those results are not directly applicable to laser jamming of satellite-based detectors. Because the distance between a satellite-based detector and the ground is very large, damaging the detector directly is difficult; however, the optical receiving system of a satellite detector has a large optical gain, so laser jamming of a satellite detector is possible. To determine its feasibility, theoretical analyses and an experimental study are carried out in this paper. First, the factors influencing laser jamming of a satellite detector are analyzed in detail, including the laser power density at the detector surface after long-distance propagation and the laser power density threshold for jamming. These factors depend not only on the satellite orbit but also on the ground laser average power, the laser beam quality, the tracking and pointing precision, and atmospheric transmission. A calculation model accounting for all of these factors is developed, from which the power density entering the detector can be calculated. Second, a jamming experiment is performed using an 808 nm laser diode (LD) to jam a CCD 5 km away; the jamming threshold is found to be 3.55×10⁻⁴ mW/cm², which agrees with results from other researchers. Finally, using the theoretical model, the power density on the photosensitive surface of the MSTI-3 satellite detector is estimated to be about 100 mW/cm², which greatly exceeds the jamming threshold and therefore confirms the feasibility of jamming the satellite-based detector with this kind of laser. A comparable laser power density would fully meet the requirements for jamming a satellite-based detector, and if the peak power of a pulsed laser is considered, the detector could be damaged even at a lower average power. These results provide a reliable basis for evaluating the effect of laser jamming on satellite-based detectors.
Domanin, Maurizio; Buora, Adelaide; Scardulla, Francesco; Guerciotti, Bruno; Forzenigo, Laura; Biondetti, Pietro; Vergara, Christian
2017-10-01
The choice of closure technique after carotid endarterectomy (CEA) remains a matter of debate. Routine use of a patch graft (PG) has been advocated to reduce restenosis, stroke, and death, but its protective effect, particularly against late restenosis, is less evident and recent studies call this thesis into question. This study aims to compare PG and direct suture (DS) by means of computational fluid dynamics (CFD). To identify carotid regions with flow recirculation more prone to restenosis development, we analyzed the time-averaged oscillatory shear index (OSI) and relative residence time (RRT), which are well-known indices correlated with plaque formation. CFD was performed in 12 patients (13 carotids) who underwent surgery for stenosis >70%, 9 with PG and 4 with DS. Flow conditions were modeled using patient-specific boundary conditions derived from Doppler ultrasound and geometries from magnetic resonance angiography. The mean spatially averaged OSI was 0.07 for the PG group and 0.03 for the DS group; the percentage of area with OSI above a threshold of 0.2 was 10.1% and 3.7%, respectively. The mean spatially averaged RRT was 4.4 1/Pa for the PG group and 1.6 1/Pa for the DS group; the percentage of area with RRT above a threshold of 4 1/Pa was 22.5% and 6.5%, respectively. Both OSI and RRT values were higher when PG was preferred to DS, and the areas with disturbed flow were wider. The highest absolute values computed by means of CFD were observed when PG was used indiscriminately, regardless of carotid diameter. DS does not seem to create negative hemodynamic conditions with potential adverse effects on long-term outcomes, in particular when CEA is performed at the common carotid artery and/or the bulb or when the ICA diameter is greater than 5.0 mm. Copyright © 2017 Elsevier Inc. All rights reserved.
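For reference, the sketch below evaluates the standard definitions of OSI and RRT at a single wall point from a wall-shear-stress time history; the waveform is synthetic, and in the study these indices are computed over the CFD-resolved endarterectomy region.

```python
"""Sketch of the usual OSI and RRT definitions evaluated at one wall point
from a wall-shear-stress (WSS) vector time history. The WSS samples are
synthetic; in the study they come from the CFD solution."""
import numpy as np

def osi_rrt(t, wss):
    """t: (N,) times over one cycle; wss: (N, 3) WSS vectors in Pa."""
    period = t[-1] - t[0]
    mean_vec = np.trapz(wss, t, axis=0) / period                 # time-averaged WSS vector
    tawss = np.trapz(np.linalg.norm(wss, axis=1), t) / period    # time-averaged WSS magnitude
    osi = 0.5 * (1.0 - np.linalg.norm(mean_vec) / tawss)         # 0 = unidirectional, 0.5 = fully oscillatory
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)                      # relative residence time, 1/Pa
    return osi, rrt

t = np.linspace(0.0, 1.0, 200)                                    # one cardiac cycle (s)
wss = np.stack([1.5 + 1.0 * np.sin(2 * np.pi * t),                # pulsatile axial component
                0.4 * np.sin(4 * np.pi * t),                      # oscillating secondary component
                np.zeros_like(t)], axis=1)
print("OSI = %.3f, RRT = %.2f 1/Pa" % osi_rrt(t, wss))
```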
Changing mothers' perception of infant emotion: a pilot study.
Carnegie, Rebecca; Shepherd, C; Pearson, R M; Button, K S; Munafò, M R; Evans, J; Penton-Voak, I S
2016-02-01
Cognitive bias modification (CBM) techniques, which experimentally retrain abnormal processing of affective stimuli, are becoming established for various psychiatric disorders. Such techniques have not yet been applied to maternal processing of infant emotion, which is affected by various psychiatric disorders. In a pilot study, mothers of children under 3 years old (n = 2) were recruited and randomly allocated to one of three training exercises, aiming either to increase or decrease their threshold of perceiving distress in a morphed continuum of 15 infant facial images. Differences between pre- and post-training threshold were analysed between and within subjects. Compared to baseline thresholds, the threshold for perceiving infant distress decreased in the lowered threshold group (mean difference -1.7 frames, 95 % confidence intervals (CI) -3.1 to -0.3, p = 0.02), increased in the raised threshold group (1.3 frames, 95 % CI 0.6 to 2.1, p < 0.01) and was unchanged in the control group (0.1 frames, 95 % CI -0.8 to 1.1, p = 0.80). Between-group differences were similarly robust in regression models and were not attenuated by potential confounders. The findings suggest that it is possible to change the threshold at which mothers perceive ambiguous infant faces as distressed, either to increase or decrease sensitivity to distress. This small study was intended to provide proof of concept (i.e. that it is possible to alter a mother's perception of infant distress). Questions remain as to whether the effects persist beyond the immediate experimental session, have an impact on maternal behaviour and could be used in clinical samples to improve maternal sensitivity and child outcomes.
Clinical-outcome-based demand management in health services.
Brogan, C; Lawrence, D; Mayhew, L
2008-01-01
THE PROBLEM OF MANAGING DEMAND: Most healthcare systems have 'third-party payers' who face the problem of keeping within budgets despite pressures to increase resources due to the ageing population, new technologies and patient demands to lower thresholds for care. This paper uses the UK National Health Service as a case study to suggest techniques for system-based demand management, which aims to control demand and costs whilst maintaining the cost-effectiveness of the system. The technique for managing demand in primary, elective and urgent care consists of managing treatment thresholds for appropriate care, using a whole-systems approach and costing the care elements in the system. It is important to analyse activity in relation to capacity and demand. Examples of using these techniques in practice are given. The practical effects of using such techniques need evaluation. If these techniques are not used, managing demand and limiting healthcare expenditure will be at the expense of clinical outcomes and unmet need, which will perpetuate financial crises.
Tanabe, T; Noda, K; Saito, M; Starikov, E B; Tateno, M
2004-07-23
Electron-DNA anion collisions were studied using an electrostatic storage ring with a merging electron-beam technique. The rate of neutral particles emitted in collisions started to increase from definite threshold energies, which increased regularly with ion charges in steps of about 10 eV. These threshold energies were almost independent of the length and sequence of DNA, but depended strongly on the ion charges. Neutral particles came from breaks of DNAs, rather than electron detachment. The step of the threshold energy increase approximately agreed with the plasmon excitation energy. It is deduced that plasmon excitation is closely related to the reaction mechanism. Copyright 2004 The American Physical Society
Fault Isolation Filter for Networked Control System with Event-Triggered Sampling Scheme
Li, Shanbin; Sauter, Dominique; Xu, Bugong
2011-01-01
In this paper, the sensor data is transmitted only when the absolute value of difference between the current sensor value and the previously transmitted one is greater than the given threshold value. Based on this send-on-delta scheme which is one of the event-triggered sampling strategies, a modified fault isolation filter for a discrete-time networked control system with multiple faults is then implemented by a particular form of the Kalman filter. The proposed fault isolation filter improves the resource utilization with graceful fault estimation performance degradation. An illustrative example is given to show the efficiency of the proposed method. PMID:22346590
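The send-on-delta rule itself is simple to state in code; the sketch below transmits a sample only when it differs from the last transmitted value by more than a threshold, with an illustrative signal and threshold value.

```python
"""Sketch of the send-on-delta rule described above: a sample is transmitted
only when it differs from the last transmitted value by more than a given
threshold. The threshold and the signal are illustrative."""
import numpy as np

def send_on_delta(samples, delta):
    last_sent = None
    transmitted = []                      # (index, value) pairs actually sent
    for k, y in enumerate(samples):
        if last_sent is None or abs(y - last_sent) > delta:
            transmitted.append((k, y))
            last_sent = y
    return transmitted

t = np.linspace(0.0, 10.0, 101)
y = np.sin(t) + 0.01 * np.random.randn(t.size)
sent = send_on_delta(y, delta=0.1)
print("transmitted %d of %d samples" % (len(sent), y.size))
```

On the estimator side, the fault isolation filter then processes only the irregularly received samples, which is how the event-triggered scheme saves network resources at the cost of some graceful degradation in estimation performance.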
Single- and double-photoionization cross sections of atomic nitrogen from threshold to 31 Å
NASA Technical Reports Server (NTRS)
Samson, James A. R.; Angel, G. C.
1990-01-01
The relative photoionization cross section of atomic nitrogen for the production of singly and doubly charged ions has been measured from 44.3 to 275 Å and from 520 to 852 Å. The results have been made absolute by normalization to one-half of the molecular nitrogen cross section at short wavelengths. The smoothed atomic nitrogen cross sections σ can be accurately represented, at short wavelengths, by the equation σ(Mb) = 36,700 × E^(-2.3) as a function of the photon energy E (eV), thereby allowing the cross sections to be extrapolated to the nitrogen K edge at 31 Å.
Finite element simulation of crack formation in a parabolic flume above the fixed service life
NASA Astrophysics Data System (ADS)
Bandurin, M. A.; Volosukhin, V. A.; Mikheev, A. V.; Volosukhin, Y. V.; Bandurina, I. P.
2018-03-01
In this article, numerical simulation data on the influence of different defect characteristics on crack formation in a parabolic flume are presented. The finite element method is based on general hypotheses of the theory of elasticity. The studies showed that the absolute displacement values satisfy the design standards. Results of the numerical simulation of stresses and strains governing crack formation in concrete parabolic flumes after long-term service beyond the fixed service life are described. The stress-strain state of reinforced concrete bearing elements under different load combinations is considered. A threshold for the risk of longitudinal crack formation in reinforced concrete elements is determined.
Nonsequential two-photon absorption from the K shell in solid zirconium
Ghimire, Shambhu; Fuchs, Matthias; Hastings, Jerry; ...
2016-10-21
Here, we report the observation of nonsequential two-photon absorption from the K shell of solid Zr (atomic number Z=40) using intense x-ray pulses from the SPring-8 Angstrom Compact Free-Electron Laser (SACLA). We determine the generalized nonlinear two-photon absorption cross section at the two-photon threshold in the range of 3.9–57 × 10⁻⁶⁰ cm⁴ s, bounded by the estimated uncertainty in the absolute intensity. The lower limit is consistent with the prediction of 3.1 × 10⁻⁶⁰ cm⁴ s from the nonresonant Z⁻⁶ scaling for hydrogenic ions in the nonrelativistic, dipole limit.
Task-dependent color discrimination
NASA Technical Reports Server (NTRS)
Poirson, Allen B.; Wandell, Brian A.
1990-01-01
When color video displays are used in time-critical applications (e.g., head-up displays, video control panels), the observer must discriminate among briefly presented targets seen within a complex spatial scene. Color-discrimination thresholds are compared by using two tasks. In one task the observer makes color matches between two halves of a continuously displayed bipartite field. In a second task the observer detects a color target in a set of briefly presented objects. The data from both tasks are well summarized by ellipsoidal isosensitivity contours. The fitted ellipsoids differ both in their size, which indicates an absolute sensitivity difference, and orientation, which indicates a relative sensitivity difference.
The temperature of large dust grains in molecular clouds
NASA Technical Reports Server (NTRS)
Clark, F. O.; Laureijs, R. J.; Prusti, T.
1991-01-01
The temperature of the large dust grains is calculated for three molecular clouds ranging in visual extinction from 2.5 to 8 mag, by comparing I(100) with maps of either extinction derived from star counts or gas column density derived from molecular observations. Both techniques show the dust temperature declining into the clouds. The two techniques do not agree in absolute scale.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent of error evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique which show the output formats and typical plots comparing computer results to each set of input data.
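As a minimal illustration of the two cases, the sketch below fits a second-degree model in two variables by ordinary least squares, once without and once with the linear cross-product term, and reports the maximum and mean absolute percent errors mentioned above; the data are synthetic and the original programs are not reproduced here.

```python
"""Sketch of the two regression cases for a 2nd-degree model in two variables:
design matrices without and with the linear cross product x1*x2, both solved
by ordinary least squares on synthetic data."""
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)
y = 5.0 + 1.5 * x1 - 0.7 * x2 + 0.8 * x1**2 + 0.3 * x1 * x2 + rng.normal(0, 0.05, 100)

def fit_and_report(label, X):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pct_err = 100.0 * np.abs((X @ coef - y) / y)
    print("%-20s max |%%err| = %5.2f   mean |%%err| = %5.2f"
          % (label, pct_err.max(), pct_err.mean()))

ones = np.ones_like(x1)
fit_and_report("no cross products", np.column_stack([ones, x1, x2, x1**2, x2**2]))
fit_and_report("with cross products", np.column_stack([ones, x1, x2, x1**2, x2**2, x1 * x2]))
```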
NMR high-resolution magic angle spinning rotor design for quantification of metabolic concentrations
NASA Astrophysics Data System (ADS)
Holly, R.; Damyanovich, A.; Peemoeller, H.
2006-05-01
A new high-resolution magic angle spinning nuclear magnetic resonance technique is presented to obtain absolute metabolite concentrations of solutions. The magnetic resonance spectrum of the sample under investigation and an internal reference are acquired simultaneously, ensuring both spectra are obtained under the same experimental conditions. The robustness of the technique is demonstrated using a solution of creatine, and it is shown that the technique can obtain solution concentrations to within 7% or better.
NASA Astrophysics Data System (ADS)
Lin, Yu-Ta; Ker, Ming-Dou; Wang, Tzu-Ming
2011-03-01
A new on-panel readout circuit with threshold voltage compensation for capacitive sensors in a low temperature polycrystalline silicon (poly-Si) thin-film transistor (LTPS-TFT) process has been proposed. In order to compensate for the threshold voltage variation arising from LTPS process variation, the proposed readout circuit applies a novel compensation approach with a switched-capacitor technique. In addition, a 4-bit analog-to-digital converter (ADC) is added to identify different sensed capacitor values and further enhance the overall resolution of the touch panel.
Using pyramids to define local thresholds for blob detection.
Shneier, M
1983-03-01
A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
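A minimal sketch of the pyramid idea, assuming a 2x2-averaging pyramid and a crude spot detector at the coarsest level; the local threshold rule (midway between the spot value and its low-resolution neighbourhood mean) is an illustrative choice, not necessarily the rule used in the paper.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (assumes even dimensions)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def blob_masks_from_pyramid(img, levels=3):
    """Build a pyramid, flag bright 'spots' at the coarsest level, and
    threshold the corresponding block of the original image with a local
    threshold computed from the low-resolution neighbourhood."""
    pyr = [img.astype(float)]
    for _ in range(levels):
        pyr.append(downsample(pyr[-1]))
    coarse = pyr[-1]
    scale = 2 ** levels
    masks = np.zeros_like(img, dtype=bool)
    spot = coarse > coarse.mean() + 2 * coarse.std()   # simple spot detector
    for r, c in zip(*np.nonzero(spot)):
        r0, r1 = max(r - 1, 0), min(r + 2, coarse.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, coarse.shape[1])
        thr = 0.5 * (coarse[r, c] + coarse[r0:r1, c0:c1].mean())
        block = img[r * scale:(r + 1) * scale, c * scale:(c + 1) * scale]
        masks[r * scale:(r + 1) * scale, c * scale:(c + 1) * scale] = block > thr
    return masks

# toy example: a bright blob on a noisy background
rng = np.random.default_rng(1)
img = rng.normal(100, 5, (64, 64))
img[20:28, 30:38] += 60
print(blob_masks_from_pyramid(img).sum(), "pixels flagged as blob")
```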
A method for combining passive microwave and infrared rainfall observations
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1995-01-01
Because passive microwave instruments are confined to polar-orbiting satellites, rainfall estimates must interpolate across long time periods, during which no measurements are available. In this paper the authors discuss a technique that allows one to partially overcome the sampling limitations by using frequent infrared observations from geosynchronous platforms. To accomplish this, the technique compares all coincident microwave and infrared observations. From each coincident pair, the infrared temperature threshold is selected that corresponds to an area equal to the raining area observed in the microwave image. The mean conditional rainfall rate as determined from the microwave image is then assigned to pixels in the infrared image that are colder than the selected threshold. The calibration is also applied to a fixed threshold of 235 K for comparison with established infrared techniques. Once a calibration is determined, it is applied to all infrared images. Monthly accumulations for both methods are then obtained by summing rainfall from all available infrared images. Two examples are used to evaluate the performance of the technique. The first consists of a one-month period (February 1988) over Darwin, Australia, where good validation data are available from radar and rain gauges. For this case it was found that the technique approximately doubled the rain inferred by the microwave method alone and produced exceptional agreement with the validation data. The second example involved comparisons with atoll rain gauges in the western Pacific for June 1989. Results here are overshadowed by the fact that the hourly infrared estimates from established techniques, by themselves, produced very good correlations with the rain gauges. The calibration technique was not able to improve upon these results.
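The calibration step can be sketched as follows, under the assumption that matching the raining-area fraction amounts to taking the infrared brightness temperature at the corresponding cold-area percentile; the array names and toy fields are illustrative.

```python
import numpy as np

def calibrate_ir_threshold(rain_rate_mw, tb_ir):
    """From one coincident microwave/IR pair, pick the IR brightness-temperature
    threshold whose colder-than area equals the raining area seen by the
    microwave sensor, and return the mean conditional rain rate to assign."""
    raining = rain_rate_mw > 0
    rain_fraction = raining.mean()
    t_thresh = np.percentile(tb_ir, 100.0 * rain_fraction)   # matching cold-area percentile
    mean_cond_rate = rain_rate_mw[raining].mean() if raining.any() else 0.0
    return t_thresh, mean_cond_rate

def estimate_rain_from_ir(tb_ir, t_thresh, mean_cond_rate):
    """Apply the calibration to any (possibly non-coincident) IR image."""
    return np.where(tb_ir < t_thresh, mean_cond_rate, 0.0)

# toy coincident pair
rng = np.random.default_rng(2)
tb = rng.uniform(200, 290, (100, 100))                        # IR brightness temperatures (K)
rr = np.where(tb < 230, rng.gamma(2.0, 2.0, tb.shape), 0.0)   # rain rate (mm/h) from microwave
t_thr, rate = calibrate_ir_threshold(rr, tb)
print(f"calibrated threshold {t_thr:.1f} K, conditional rate {rate:.2f} mm/h")
rain_map = estimate_rain_from_ir(tb, t_thr, rate)
```

Monthly accumulations would then follow by summing such rain maps over all available infrared images, as described above.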
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
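One of the clustering-based algorithms named above, Ridler's (isodata) method, is simple enough to sketch; the code below contrasts it with the 42%-of-maximum reference threshold on synthetic uptake values (the toy data and tolerance are assumptions).

```python
import numpy as np

def ridler_isodata_threshold(values, tol=1e-3, max_iter=100):
    """Ridler & Calvard (isodata) clustering threshold: iterate the threshold
    to the midpoint of the means of the two classes it defines."""
    t = values.mean()
    for _ in range(max_iter):
        lo, hi = values[values <= t], values[values > t]
        if lo.size == 0 or hi.size == 0:
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

# toy PET-like uptake values: warm background plus a hot lesion
rng = np.random.default_rng(3)
uptake = np.concatenate([rng.normal(1.0, 0.2, 9000),   # background
                         rng.normal(8.0, 1.0, 1000)])  # lesion
t_auto = ridler_isodata_threshold(uptake)
t_42 = 0.42 * uptake.max()
print(f"isodata threshold {t_auto:.2f} vs 42%-of-max threshold {t_42:.2f}")
```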
Technique for calibrating angular measurement devices when calibration standards are unavailable
NASA Technical Reports Server (NTRS)
Finley, Tom D.
1991-01-01
A calibration technique is proposed that allows the calibration of certain angular measurement devices without requiring the use of an absolute standard. The technique assumes that the device to be calibrated has deterministic bias errors. A comparison device that meets the same requirement must be available. The two devices are compared; one device is then rotated with respect to the other, and a second comparison is performed. If the data are reduced using the described technique, the individual errors of the two devices can be determined.
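One common way to realize such a rotate-and-compare reduction, assuming the deterministic bias errors are periodic functions of angle sampled at N evenly spaced positions, is to separate the two error functions mode by mode in the Fourier domain; the sketch below is an interpretation of the described data reduction, not the author's code.

```python
import numpy as np

def separate_errors(d1, d2, shift):
    """Recover individual periodic bias errors e1, e2 of two angular devices
    from two comparisons: d1[i] = e1[i] - e2[i] (devices aligned) and
    d2[i] = e1[i] - e2[(i + shift) % N] (device 2 rotated by `shift` steps).
    Modes unaffected by the rotation (including the mean) stay indeterminate
    and are set to zero."""
    N = len(d1)
    m = np.arange(N)
    rot = np.exp(2j * np.pi * m * shift / N)      # effect of the rotation on E2
    D1, D2 = np.fft.fft(d1), np.fft.fft(d2)
    E2 = np.zeros(N, dtype=complex)
    resolvable = np.abs(1 - rot) > 1e-9
    E2[resolvable] = (D2 - D1)[resolvable] / (1 - rot)[resolvable]
    E1 = D1 + E2
    E1[~resolvable] = 0.0
    return np.fft.ifft(E1).real, np.fft.ifft(E2).real

# toy example: two devices with known periodic errors, 36 positions, 50° offset
N, shift = 36, 5
theta = 2 * np.pi * np.arange(N) / N
e1_true = 0.02 * np.sin(theta) + 0.01 * np.cos(3 * theta)
e2_true = -0.015 * np.sin(2 * theta) + 0.005 * np.cos(theta)
d1 = e1_true - e2_true
d2 = e1_true - np.roll(e2_true, -shift)           # e2 evaluated at (i + shift)
e1_est, e2_est = separate_errors(d1, d2, shift)
print(np.max(np.abs(e1_est - e1_true)), np.max(np.abs(e2_est - e2_true)))
```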
Potential Audiological and MRI Markers of Tinnitus.
Gopal, Kamakshi V; Thomas, Binu P; Nandy, Rajesh; Mao, Deng; Lu, Hanzhang
2017-09-01
Subjective tinnitus, or ringing sensation in the ear, is a common disorder with no accepted objective diagnostic markers. The purpose of this study was to identify possible objective markers of tinnitus by combining audiological and imaging-based techniques. Case-control studies. Twenty adults drawn from our audiology clinic served as participants. The tinnitus group consisted of ten participants with chronic bilateral constant tinnitus, and the control group consisted of ten participants with no history of tinnitus. Each participant with tinnitus was closely matched with a control participant on the basis of age, gender, and hearing thresholds. Data acquisition focused on systematic administration and evaluation of various audiological tests, including auditory-evoked potentials (AEP) and otoacoustic emissions, and magnetic resonance imaging (MRI) tests. A total of 14 objective test measures (predictors) obtained from audiological and MRI tests were subjected to statistical analyses to identify the best predictors of tinnitus group membership. The least absolute shrinkage and selection operator (LASSO) technique for feature extraction, supplemented by the leave-one-out cross-validation technique, was used to extract the best predictors. This approach provided a conservative model that was highly regularized with its error within 1 standard error of the minimum. The model selected increased frontal cortex (FC) functional MRI activity to pure tones matching their respective tinnitus pitch, and augmented AEP wave N₁ amplitude growth in the tinnitus group as the top two predictors of tinnitus group membership. These findings suggest that the amplified responses to acoustic signals and hyperactivity in attention regions of the brain may be a result of overattention among individuals who experience chronic tinnitus. These results suggest that increased functional MRI activity in the FC to sounds and augmented N₁ amplitude growth may potentially be the objective diagnostic indicators of tinnitus. However, due to the small sample size and lack of subgroups within the tinnitus population in this study, more research is needed before generalizing these findings. American Academy of Audiology
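A minimal sketch of the LASSO-with-leave-one-out selection step, assuming scikit-learn and a synthetic stand-in for the 14 predictors; the 1-standard-error rule mentioned above is not applied here, only the cross-validated regularization path.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import StandardScaler

# toy stand-in for the 14 audiological/MRI predictors and the binary group
# label (treated here as a 0/1 regression target purely for illustration)
rng = np.random.default_rng(4)
X = rng.normal(size=(20, 14))                 # 20 participants, 14 predictors
y = (X[:, 0] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=20) > 0).astype(float)

X_std = StandardScaler().fit_transform(X)     # LASSO is scale-sensitive
lasso = LassoCV(cv=LeaveOneOut(), max_iter=50000).fit(X_std, y)
selected = np.nonzero(lasso.coef_)[0]         # predictors with nonzero weight
print("regularization strength:", round(lasso.alpha_, 4))
print("predictors retained:", selected)
```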
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
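The fuzzy c-means segmentation can be sketched on a one-dimensional intensity sample as below; the two-cluster model, fuzziness exponent m = 2, and synthetic voxel values are assumptions for illustration.

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Plain fuzzy c-means on a 1-D array of voxel intensities.
    Returns cluster centres and the membership matrix (n_voxels x c)."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)             # random initial memberships
    centres = np.zeros(c)
    for _ in range(n_iter):
        um = u ** m
        centres = um.T @ x / um.sum(axis=0)                # weighted cluster centres
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12  # distances to centres
        u_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centres, u

# toy breast CT intensities: adipose (low) and fibroglandular (high) voxels
rng = np.random.default_rng(5)
voxels = np.concatenate([rng.normal(-100, 20, 8000),   # adipose-like
                         rng.normal(40, 25, 2000)])    # fibroglandular-like
centres, u = fuzzy_cmeans_1d(voxels)
fg = np.argmax(centres)                                 # higher-intensity cluster
pct_fgv = 100.0 * (u[:, fg] > 0.5).mean()
print(f"estimated %FGV = {pct_fgv:.1f}%")
```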
Whalley, H C; Kestelman, J N; Rimmington, J E; Kelso, A; Abukmeil, S S; Best, J J; Johnstone, E C; Lawrie, S M
1999-07-30
The Edinburgh High Risk Project is a longitudinal study of brain structure (and function) in subjects at high risk of developing schizophrenia in the next 5-10 years for genetic reasons. In this article we describe the methods of volumetric analysis of structural magnetic resonance images used in the study. We also consider potential sources of error in these methods: the validity of our image analysis techniques; inter- and intra-rater reliability; possible positional variation; and thresholding criteria used in separating brain from cerebro-spinal fluid (CSF). Investigation with a phantom test object (of similar imaging characteristics to the brain) provided evidence for the validity of our image acquisition and analysis techniques. Both inter- and intra-rater reliability were found to be good in whole brain measures but less so for smaller regions. There were no statistically significant differences in positioning across the three study groups (patients with schizophrenia, high risk subjects and normal volunteers). A new technique for thresholding MRI scans longitudinally is described (the 'rescale' method) and compared with our established method (thresholding by eye). Few differences between the two techniques were seen at 3- and 6-month follow-up. These findings demonstrate the validity and reliability of the structural MRI analysis techniques used in the Edinburgh High Risk Project, and highlight methodological issues of general concern in cross-sectional and longitudinal studies of brain structure in healthy control subjects and neuropsychiatric populations.
Twist number and order properties of periodic orbits
NASA Astrophysics Data System (ADS)
Petrisor, Emilia
2013-11-01
A less studied numerical characteristic of periodic orbits of area preserving twist maps of the annulus is the twist or torsion number, initially called the amount of rotation by Mather (1984) [2]. It measures the average rotation of tangent vectors under the action of the derivative of the map along that orbit, and characterizes the degree of complexity of the dynamics. The aim of this paper is to give new insights into the definition and properties of the twist number and to relate its range to the order properties of periodic orbits. We derive an algorithm to deduce the exact value, or a demi-unit interval containing the exact value, of the twist number. We prove that at a period-doubling bifurcation threshold of a mini-maximizing periodic orbit, the newborn doubly periodic orbit has an absolute twist number larger than the absolute twist of the original orbit after bifurcation. We give examples of periodic orbits having a large absolute twist number that are badly ordered, and illustrate how characterization of these orbits only by their residue can lead to incorrect results. In connection with the study of the twist number of periodic orbits of standard-like maps we introduce a new tool, called the 1-cone function. We prove that the location of minima of this function with respect to the vertical symmetry lines of a standard-like map encodes valuable information on the symmetric periodic orbits and their twist number.
Clavel, Marie-Annick; Pibarot, Philippe; Messika-Zeitoun, David; Capoulade, Romain; Malouf, Joseph; Aggarval, Shivani; Araoz, Phillip A.; Michelena, Hector I.; Cueff, Caroline; Larose, Eric; Miller, Jordan D.; Vahanian, Alec; Enriquez-Sarano, Maurice
2014-01-01
BACKGROUND Aortic valve calcification (AVC) load measures lesion severity in aortic stenosis (AS) and is useful for diagnostic purposes. Whether AVC predicts survival after diagnosis, independent of clinical and Doppler echocardiographic AS characteristics, has not been studied. OBJECTIVES This study evaluated the impact of AVC load, absolute and relative to aortic annulus size (AVCdensity), on overall mortality in patients with AS under conservative treatment and without regard to treatment. METHODS In 3 academic centers, we enrolled 794 patients (mean age, 73 ± 12 years; 274 women) diagnosed with AS by Doppler echocardiography who underwent multidetector computed tomography (MDCT) within the same episode of care. Absolute AVC load and AVCdensity (ratio of absolute AVC to cross-sectional area of aortic annulus) were measured, and severe AVC was separately defined in men and women. RESULTS During follow-up, there were 440 aortic valve implantations (AVIs) and 194 deaths (115 under medical treatment). Univariate analysis showed strong association of absolute AVC and AVCdensity with survival (both, p < 0.0001) with a spline curve analysis pattern of threshold and plateau of risk. After adjustment for age, sex, coronary artery disease, diabetes, symptoms, AS severity on hemodynamic assessment, and LV ejection fraction, severe absolute AVC (adjusted hazard ratio [HR]: 1.75; 95% confidence interval [CI]: 1.04 to 2.92; p = 0.03) or severe AVCdensity (adjusted HR: 2.44; 95% CI: 1.37 to 4.37; p = 0.002) independently predicted mortality under medical treatment, with additive model predictive value (all, p ≤ 0.04) and a net reclassification index of 12.5% (p = 0.04). Severe absolute AVC (adjusted HR: 1.71; 95% CI: 1.12 to 2.62; p = 0.01) and severe AVCdensity (adjusted HR: 2.22; 95% CI: 1.40 to 3.52; p = 0.001) also independently predicted overall mortality, even with adjustment for time-dependent AVI. CONCLUSIONS This large-scale, multicenter outcomes study of quantitative Doppler echocardiographic and MDCT assessment of AS shows that measuring AVC load provides incremental prognostic value for survival beyond clinical and Doppler echocardiographic assessment. Severe AVC independently predicts excess mortality after AS diagnosis, which is greatly alleviated by AVI. Thus, measurement of AVC by MDCT should be considered for not only diagnostic but also risk-stratification purposes in patients with AS. PMID:25236511
Hornsby, Benjamin W. Y.; Johnson, Earl E.; Picou, Erin
2011-01-01
Objectives The purpose of this study was to examine the effects of degree and configuration of hearing loss on the use of, and benefit from, information in amplified high- and low-frequency speech presented in background noise. Design Sixty-two adults with a wide range of high- and low-frequency sensorineural hearing loss (5–115+ dB HL) participated. To examine the contribution of speech information in different frequency regions, speech understanding in noise was assessed in multiple low- and high-pass filter conditions, as well as a band-pass (713–3534 Hz) and wideband (143–8976 Hz) condition. To increase audibility over a wide frequency range, speech and noise were amplified based on each individual’s hearing loss. A stepwise multiple linear regression approach was used to examine the contribution of several factors to 1) absolute performance in each filter condition and 2) the change in performance with the addition of amplified high- and low-frequency speech components. Results Results from the regression analysis showed that degree of hearing loss was the strongest predictor of absolute performance for low- and high-pass filtered speech materials. In addition, configuration of hearing loss affected both absolute performance for severely low-pass filtered speech and benefit from extending high-frequency (3534–8976 Hz) bandwidth. Specifically, individuals with steeply sloping high-frequency losses made better use of low-pass filtered speech information than individuals with similar low-frequency thresholds but less high-frequency loss. In contrast, given similar high-frequency thresholds, individuals with flat hearing losses received more benefit from extending high-frequency bandwidth than individuals with more sloping losses. Conclusions Consistent with previous work, benefit from speech information in a given frequency region generally decreases as degree of hearing loss in that frequency region increases. However, given a similar degree of loss, the configuration of hearing loss also affects the ability to use speech information in different frequency regions. Except for individuals with steeply sloping high-frequency losses, providing high-frequency amplification (3534–8976 Hz) had either a beneficial effect on, or did not significantly degrade, speech understanding. These findings highlight the importance of extended high-frequency amplification for listeners with a wide range of high-frequency hearing losses, when seeking to maximize intelligibility. PMID:21336138
Stoller, Oliver; de Bruin, Eling D; Schindelholz, Matthias; Schuster-Amft, Corina; de Bie, Rob A; Hunt, Kenneth J
2014-10-11
Exercise capacity is seriously reduced after stroke. While cardiopulmonary assessment and intervention strategies have been validated for the mildly and moderately impaired populations post-stroke, there is a lack of effective concepts for stroke survivors suffering from severe motor limitations. This study investigated the test-retest reliability and repeatability of cardiopulmonary exercise testing (CPET) using feedback-controlled robotics-assisted treadmill exercise (FC-RATE) in severely motor impaired individuals early after stroke. 20 subjects (age 44-84 years, <6 month post-stroke) with severe motor limitations (Functional Ambulatory Classification 0-2) were selected for consecutive constant load testing (CLT) and incremental exercise testing (IET) within a powered exoskeleton, synchronised with a treadmill and a body weight support system. A manual human-in-the-loop feedback system was used to guide individual work rate levels. Outcome variables focussed on standard cardiopulmonary performance parameters. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), standard error of the measurement (SEM), and minimal detectable change (MDC). Mean difference, limits of agreement, and coefficient of variation (CoV) were estimated to assess repeatability. Peak performance parameters during IET yielded good to excellent relative reliability: absolute peak oxygen uptake (ICC =0.82), relative peak oxygen uptake (ICC =0.72), peak work rate (ICC =0.91), peak heart rate (ICC =0.80), absolute gas exchange threshold (ICC =0.91), relative gas exchange threshold (ICC =0.88), oxygen cost of work (ICC =0.87), oxygen pulse at peak oxygen uptake (ICC =0.92), ventilation rate versus carbon dioxide output slope (ICC =0.78). For these variables, SEM was 4-13%, MDC 12-36%, and CoV 0.10-0.36. CLT revealed high mean differences and insufficient test-retest reliability for all variables studied. This study presents first evidence on reliability and repeatability for CPET in severely motor impaired individuals early after stroke using a feedback-controlled robotics-assisted treadmill. The results demonstrate good to excellent test-retest reliability and appropriate repeatability for the most important peak cardiopulmonary performance parameters. These findings have important implications for the design and implementation of cardiovascular exercise interventions in severely impaired populations. Future research needs to develop advanced control strategies to enable the true limit of functional exercise capacity to be reached and to further assess test-retest reliability and repeatability in larger samples.
Liu, Haisong; Li, Jun; Pappas, Evangelos; Andrews, David; Evans, James; Werner-Wasik, Maria; Yu, Yan; Dicker, Adam; Shi, Wenyin
2016-09-08
An automatic brain-metastases planning (ABMP) software package has been installed at our institution. It is dedicated to treating multiple brain metastases with radiosurgery on linear accelerators (linacs) using a single-setup isocenter with noncoplanar dynamic conformal arcs. The purpose of this study was to validate the calculated absolute dose and dose distribution of ABMP. Three types of measurements were performed to validate the planning software: (1) dual micro ion chambers were used with an acrylic phantom to measure the absolute dose; (2) a 3D cylindrical phantom with a dual diode array was used to evaluate 2D dose distribution and point dose for smaller targets; and (3) a 3D pseudo-in vivo patient-specific phantom filled with polymer gels was used to evaluate the accuracy of 3D dose distribution and radiation delivery. Micro chamber measurement of two targets (volumes of 1.2 cc and 0.9 cc, respectively) showed that the percentage differences of the absolute dose at both targets were less than 1%. The averaged GI passing rate of five different plans measured with the diode array phantom was above 98%, using criteria of 3% dose difference, 1 mm distance to agreement (DTA), and 10% low-dose threshold. 3D gel phantom measurement results demonstrated a 3D displacement of nine targets of 0.7 ± 0.4 mm (range 0.2 ~ 1.1 mm). The averaged two-dimensional (2D) GI passing rate for several regions of interest (ROIs) on axial slices, each encompassing one of the nine targets, was above 98% (5% dose difference, 2 mm DTA, and 10% low-dose threshold). Measured D95, the minimum dose that covers 95% of the target volume, of the nine targets was 0.7% less than the calculated D95. Three different types of dosimetric verification methods were used and proved that the dose calculation of the new automatic brain-metastases planning (ABMP) software was clinically acceptable. The 3D pseudo-in vivo patient-specific gel phantom test also served as an end-to-end test for validating not only the dose calculation, but the treatment delivery accuracy as well. © 2016 The Authors.
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car’s length are presented in this paper, as well as the results achieved by using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors, which were placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold based method, a threshold method based on moving average and standard deviation, a two-extreme-peak detection method and a method based on the amplitude and time normalization using linear extrapolation (or interpolation). The results were achieved by analyzing changes in the magnitude and in the absolute z-component of the magnetic field as well. The tests, which were performed in four different Earth directions, show differences in the values of estimated lengths. The magnitude-based results in the case when cars drove from the South to the North direction were even up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in lengths were observed when the distances were measured between two extreme peaks in the car magnetic signatures. The results were summarized in tables and the errors of estimated lengths were presented. The maximal errors, related to real lengths, were up to 22%. PMID:28771171
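A simplified variant of the threshold detector based on the background statistics can be sketched as follows, assuming the start of the record is vehicle-free and that the speed needed for the length estimate comes from the two-sensor time delay; the sample period, speed, and threshold factor k are illustrative values.

```python
import numpy as np

def detect_vehicle_interval(signal, calib=200, k=5.0):
    """Simplified threshold detector in the spirit of the moving-average /
    standard-deviation method: baseline and noise level are taken from a
    leading, vehicle-free calibration segment, and samples whose absolute
    deviation exceeds k standard deviations are flagged as occupancy."""
    baseline = signal[:calib].mean()
    sigma = signal[:calib].std()
    active = np.abs(signal - baseline) > k * sigma
    idx = np.nonzero(active)[0]
    return (idx[0], idx[-1]) if idx.size else (None, None)

# toy AMR magnitude trace: steady Earth field plus a passing-car signature
rng = np.random.default_rng(6)
t = np.arange(2000)
sig = 480.0 + rng.normal(0, 0.5, t.size)                       # background, arb. units
sig[800:1100] += 25 * np.exp(-((t[800:1100] - 950) / 80.0) ** 2)
start, end = detect_vehicle_interval(sig)
dt, speed = 1e-3, 13.9              # assumed sample period (s) and speed (m/s)
print(f"occupancy samples {start}-{end}; length ≈ {speed * (end - start) * dt:.2f} m")
```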
Processing of color signals in female carriers of color vision deficiency.
Konstantakopoulou, Evgenia; Rodriguez-Carmona, Marisa; Barbur, John L
2012-02-14
The aim of this study was to assess the chromatic sensitivity of carriers of color deficiency, specifically in relation to dependence on retinal illuminance, and to reference these findings to the corresponding red-green (RG) thresholds measured in normal trichromatic males. Thirty-six carriers of congenital RG color deficiency and 26 normal trichromatic males participated in the study. The retinal illuminance was estimated by measuring the pupil diameter and the optical density of the lens and the macular pigment. Each subject's color vision was examined using the Color Assessment and Diagnosis (CAD) test, the Ishihara and American Optical pseudoisochromatic plates, and the Nagel anomaloscope. Carriers of deuteranopia (D) and deuteranomaly (DA) had higher RG thresholds than male trichromats (p < 0.05). When referenced to male trichromats, carriers of protanomaly (PA) needed 28% less color signal strength; carriers of D required ∼60% higher thresholds at mesopic light levels. Variation in the L:M ratio and hence the absolute M-cone density may be the principal factor underlying the poorer chromatic sensitivity of D carriers in the low photopic range. The increased sensitivity of PA carriers at lower light levels is consistent with the pooling of signals from the hybrid M' and the M cones and the subsequent stronger inhibition of the rods. The findings suggest that signals from hybrid photopigments may pool preferentially with the spectrally closest "normal" pigments.
Laumen, Geneviève; Tollin, Daniel J.; Beutelmann, Rainer; Klump, Georg M.
2016-01-01
The effect of interaural time difference (ITD) and interaural level difference (ILD) on wave 4 of the binaural and summed monaural auditory brainstem responses (ABRs) as well as on the DN1 component of the binaural interaction component (BIC) of the ABR in young and old Mongolian gerbils (Meriones unguiculatus) was investigated. Measurements were made at a fixed sound pressure level (SPL) and a fixed level above visually detected ABR threshold to compensate for individual hearing threshold differences. In both stimulation modes (fixed SPL and fixed level above visually detected ABR threshold) an effect of ITD on the latency and the amplitude of wave 4 as well as of the BIC was observed. With increasing absolute ITD values BIC latencies were increased and amplitudes were decreased. ILD had a much smaller effect on these measures. Old animals showed a reduced amplitude of the DN1 component. This difference was due to a smaller wave 4 in the summed monaural ABRs of old animals compared to young animals whereas wave 4 in the binaural-evoked ABR showed no age-related difference. In old animals the small amplitude of the DN1 component was correlated with small binaural-evoked wave 1 and wave 3 amplitudes. This suggests that the reduced peripheral input affects central binaural processing which is reflected in the BIC. PMID:27173973
Takeshita, Daisuke; Smeds, Lina; Ala-Laurila, Petri
2017-04-05
Visually guided behaviour at its sensitivity limit relies on single-photon responses originating in a small number of rod photoreceptors. For decades, researchers have debated the neural mechanisms and noise sources that underlie this striking sensitivity. To address this question, we need to understand the constraints arising from the retinal output signals provided by distinct retinal ganglion cell types. It has recently been shown in the primate retina that On and Off parasol ganglion cells, the cell types likely to underlie light detection at the absolute visual threshold, differ fundamentally not only in response polarity, but also in the way they handle single-photon responses originating in rods. The On pathway provides the brain with a thresholded, low-noise readout and the Off pathway with a noisy, linear readout. We outline the mechanistic basis of these different coding strategies and analyse their implications for detecting the weakest light signals. We show that high-fidelity, nonlinear signal processing in the On pathway comes with costs: more single-photon responses are lost and their propagation is delayed compared with the Off pathway. On the other hand, the responses of On ganglion cells allow better intensity discrimination compared with the Off ganglion cell responses near visual threshold.This article is part of the themed issue 'Vision in dim light'. © 2017 The Authors.
Electron-impact ionization of silicon tetrachloride (SiCl4).
Basner, R; Gutkin, M; Mahoney, J; Tarnovsky, V; Deutsch, H; Becker, K
2005-08-01
We measured absolute partial cross sections for the formation of various singly charged and doubly charged positive ions produced by electron impact on silicon tetrachloride (SiCl4) using two different experimental techniques, a time-of-flight mass spectrometer (TOF-MS) and a fast-neutral-beam apparatus. The energy range covered was from the threshold to 900 eV in the TOF-MS and to 200 eV in the fast-neutral-beam apparatus. The results obtained by the two different experimental techniques were found to agree very well (better than their combined margins of error). The SiCl3⁺ fragment ion has the largest partial ionization cross section with a maximum value of slightly above 6 × 10⁻²⁰ m² at about 100 eV. The cross sections for the formation of SiCl4⁺, SiCl⁺, and Cl⁺ have maximum values around 4 × 10⁻²⁰ m². Some of the cross-section curves exhibit an unusual energy dependence with a pronounced low-energy maximum at an energy around 30 eV followed by a broad second maximum at around 100 eV. This is similar to what has been observed by us earlier for another Cl-containing molecule, TiCl4 [R. Basner, M. Schmidt, V. Tarnovsky, H. Deutsch, and K. Becker, Thin Solid Films 374, 291 (2000)]. The maximum cross-section values for the formation of the doubly charged ions, with the exception of SiCl3²⁺, are 0.05 × 10⁻²⁰ m² or less. The experimentally determined total single ionization cross section of SiCl4 is compared with the results of semiempirical calculations.
NASA Astrophysics Data System (ADS)
Beigi, Parmida; Salcudean, Tim; Rohling, Robert; Lessoway, Victoria A.; Ng, Gary C.
2015-03-01
This paper presents a new needle detection technique for ultrasound guided interventions based on the spectral properties of small displacements arising from hand tremour or intentional motion. In a block-based approach, the displacement map is computed for each block of interest versus a reference frame, using an optical flow technique. To compute the flow parameters, the Lucas-Kanade approach is used in a multiresolution and regularized form. A least-squares fit is used to estimate the flow parameters from the overdetermined system of spatial and temporal gradients. Lateral and axial components of the displacement are obtained for each block of interest at consecutive frames. Magnitude-squared spectral coherency is derived between the median displacements of the reference block and each block of interest, to determine the spectral correlation. In vivo images were obtained from the tissue near the abdominal aorta to capture the extreme intrinsic body motion, and insertion images were captured from a tissue-mimicking agar phantom. According to the analysis, both the involuntary and intentional movement of the needle produces coherent displacement with respect to a reference window near the insertion site. Intrinsic body motion also produces coherent displacement with respect to a reference window in the tissue; however, the coherency spectra of intrinsic and needle motion are distinguishable spectrally. Blocks with high spectral coherency at high frequencies are selected, estimating a channel for the needle trajectory. The needle trajectory is detected from the locally thresholded absolute displacement map within the initial estimate. Experimental results show an RMS localization accuracy of 1.0 mm, 0.7 mm, and 0.5 mm for hand tremour, vibrational and rotational needle movements, respectively.
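The coherency computation at the heart of the method can be sketched with SciPy's magnitude-squared coherence estimator, assuming a synthetic 8 Hz tremour component shared between a reference block and a block on the needle; the frame rate, block signals, and frequency band are illustrative.

```python
import numpy as np
from scipy.signal import coherence

# toy displacement traces: a reference block near the insertion site and two
# candidate blocks, one carrying the same ~8 Hz hand-tremour component
fs = 100.0                                     # frame rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(7)
tremour = np.sin(2 * np.pi * 8 * t)
ref = tremour + 0.3 * rng.normal(size=t.size)            # reference displacement
cand = 0.6 * tremour + 0.3 * rng.normal(size=t.size)     # block on the needle
other = 0.3 * rng.normal(size=t.size)                    # block in plain tissue

f, c_needle = coherence(ref, cand, fs=fs, nperseg=256)
_, c_tissue = coherence(ref, other, fs=fs, nperseg=256)
band = (f > 6) & (f < 10)                      # high-frequency band of interest
print("coherency near 8 Hz, needle block :", c_needle[band].max().round(2))
print("coherency near 8 Hz, tissue block :", c_tissue[band].max().round(2))
```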
NASA Astrophysics Data System (ADS)
Musgrave, M. M.; Baeßler, S.; Balascuta, S.; Barrón-Palos, L.; Blyth, D.; Bowman, J. D.; Chupp, T. E.; Cianciolo, V.; Crawford, C.; Craycraft, K.; Fomin, N.; Fry, J.; Gericke, M.; Gillis, R. C.; Grammer, K.; Greene, G. L.; Hamblen, J.; Hayes, C.; Huffman, P.; Jiang, C.; Kucuker, S.; McCrea, M.; Mueller, P. E.; Penttilä, S. I.; Snow, W. M.; Tang, E.; Tang, Z.; Tong, X.; Wilburn, W. S.
2018-07-01
Accurately measuring the neutron beam polarization of a high flux, large area neutron beam is necessary for many neutron physics experiments. The Fundamental Neutron Physics Beamline (FnPB) at the Spallation Neutron Source (SNS) is a pulsed neutron beam that was polarized with a supermirror polarizer for the NPDGamma experiment. The polarized neutron beam had a flux of ∼10⁹ neutrons per second per cm² and a cross sectional area of 10 × 12 cm². The polarization of this neutron beam and the efficiency of an RF neutron spin rotator installed downstream on this beam were measured by neutron transmission through a polarized ³He neutron spin-filter. The pulsed nature of the SNS enabled us to employ an absolute measurement technique for both quantities which does not depend on accurate knowledge of the phase space of the neutron beam or the ³He polarization in the spin filter and is therefore of interest for any experiments on slow neutron beams from pulsed neutron sources which require knowledge of the absolute value of the neutron polarization. The polarization and spin-reversal efficiency measurements in this work were performed for the NPDGamma experiment, which measures the parity violating γ-ray angular distribution asymmetry with respect to the neutron spin direction in the capture of polarized neutrons on protons. The experimental technique, results, systematic effects, and applications to neutron capture targets are discussed.
NASA Astrophysics Data System (ADS)
Billaud, Pierre; Marhaba, Salem; Grillet, Nadia; Cottancin, Emmanuel; Bonnet, Christophe; Lermé, Jean; Vialle, Jean-Louis; Broyer, Michel; Pellarin, Michel
2010-04-01
This article describes a high sensitivity spectrophotometer designed to detect the overall extinction of light by a single nanoparticle (NP) in the 10⁻⁴–10⁻⁵ relative range, using a transmission measurement configuration. We focus here on the simple and low cost scheme where a white lamp is used as a light source, permitting easy and broadband extinction measurements (300-900 nm). Using a microscope, in a confocal geometry, an increased sensitivity is reached thanks to a modulation of the NP position under the light spot combined with lock-in detection. Moreover, it is shown that this technique gives access to the absolute extinction cross-sections of the single NP provided that the incident electromagnetic field distribution experienced by the NP is accurately characterized. In this respect, an experimental procedure to characterize the light spot profile in the focal plane, using a reference NP as a probe, is also laid out. The validity of this approach is discussed and confirmed by comparing experimental intensity distributions to theoretical calculations taking into account the vector character of the tightly focused beam. The calibration procedure that yields the absolute extinction cross-section of the probed NP is then fully described. Finally, the strength of the present technique is illustrated through selected examples concerning spherical and slightly elongated gold and silver NPs. Absolute extinction measurements are found to be consistent with the NP size and shape independently obtained from transmission electron microscopy, showing that spatial modulation spectroscopy is a powerful tool to get an optical fingerprint of the NP.
NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
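The abstract does not give the exact formulation, but the idea of imposing threshold-of-acceptability constraints while optimizing a single objective can be sketched in an epsilon-constraint style; the objectives, starting point, and threshold schedule below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# two competing objectives on a simple design variable x in R^2
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2

def pareto_point(threshold):
    """Minimize f1 subject to a threshold-of-acceptability constraint on f2.
    Sweeping the threshold traces out Pareto-optimal solutions one at a time."""
    cons = {"type": "ineq", "fun": lambda x: threshold - f2(x)}   # f2(x) <= threshold
    res = minimize(f1, x0=np.array([0.5, 0.5]), constraints=cons)
    return res.x, f1(res.x), f2(res.x)

for thr in [1.5, 1.0, 0.5, 0.25]:          # progressively tighter thresholds
    x, v1, v2 = pareto_point(thr)
    print(f"f2 <= {thr:4.2f}: x = {np.round(x, 3)}, f1 = {v1:.3f}, f2 = {v2:.3f}")
```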
Phonation Threshold Pressure Measurement with a Semi-Occluded Vocal Tract
ERIC Educational Resources Information Center
Titze, Ingo R.
2009-01-01
Purpose: The purpose of this article was to determine if a semi-occluded vocal tract could be used to measure phonation threshold pressure. This is in contrast to the shutter technique, where an alternation between a fully occluded tract and an unoccluded tract is used. Method: Five male and 5 female volunteers phonated through a thin straw held…
Nicola-Richmond, Kelli M; Pépin, Geneviève; Larkin, Helen
2016-04-01
Understanding and facilitating the transformation from occupational therapy student to practitioner is central to the development of competent and work-ready graduates. However, the pivotal concepts and capabilities that need to be taught and learnt in occupational therapy are not necessarily explicit. The threshold concepts theory of teaching and learning proposes that every discipline has a set of transformational concepts that students must acquire in order to progress. As students acquire the threshold concepts, they develop a transformed way of understanding content related to their course of study which contributes to their developing expertise. The aim of this study was to identify the threshold concepts of occupational therapy. The Delphi technique, a data collection method that aims to demonstrate consensus in relation to important questions, was used with three groups comprising final year occupational therapy students (n = 11), occupational therapy clinicians (n = 21) and academics teaching occupational therapy (n = 10) in Victoria, Australia. Participants reached consensus regarding 10 threshold concepts for the occupational therapy discipline. These are: understanding and applying the models and theories of occupational therapy; occupation; evidence-based practice; clinical reasoning; discipline specific skills and knowledge; practising in context; a client-centred approach; the occupational therapist role; reflective practice and; a holistic approach. The threshold concepts identified provide valuable information for the discipline. They can potentially inform the development of competencies for occupational therapy and provide guidance for teaching and learning activities to facilitate the transformation to competent practitioner. © 2015 Occupational Therapy Australia.
Behavioral and auditory evoked potential audiograms of a false killer whale (Pseudorca crassidens)
NASA Astrophysics Data System (ADS)
Yuen, Michelle M. L.; Nachtigall, Paul E.; Breese, Marlee; Supin, Alexander Ya.
2005-10-01
Behavioral and auditory evoked potential (AEP) audiograms of a false killer whale were measured using the same subject and experimental conditions. The objective was to compare and assess the correspondence of auditory thresholds collected by behavioral and electrophysiological techniques. Behavioral audiograms used 3-s pure-tone stimuli from 4 to 45 kHz, and were conducted with a go/no-go modified staircase procedure. AEP audiograms used 20-ms sinusoidally amplitude-modulated tone bursts from 4 to 45 kHz, and the electrophysiological responses were received through gold disc electrodes in rubber suction cups. The behavioral data were reliable and repeatable, with the region of best sensitivity between 16 and 24 kHz and peak sensitivity at 20 kHz. The AEP audiograms produced thresholds that were also consistent over time, with range of best sensitivity from 16 to 22.5 kHz and peak sensitivity at 22.5 kHz. Behavioral thresholds were always lower than AEP thresholds. However, AEP audiograms were completed in a shorter amount of time with minimum participation from the animal. These data indicated that behavioral and AEP techniques can be used successfully and interchangeably to measure cetacean hearing sensitivity.
Combining multiple thresholding binarization values to improve OCR output
NASA Astrophysics Data System (ADS)
Lund, William B.; Kennard, Douglas J.; Ringger, Eric K.
2013-01-01
For noisy, historical documents, a high optical character recognition (OCR) word error rate (WER) can render the OCR text unusable. Since image binarization is often the method used to identify foreground pixels, a body of research seeks to improve image-wide binarization directly. Instead of relying on any one imperfect binarization technique, our method incorporates information from multiple simple thresholding binarizations of the same image to improve text output. Using a new corpus of 19th century newspaper grayscale images for which the text transcription is known, we observe WERs of 13.8% and higher using current binarization techniques and a state-of-the-art OCR engine. Our novel approach combines the OCR outputs from multiple thresholded images by aligning the text output and producing a lattice of word alternatives from which a lattice word error rate (LWER) is calculated. Our results show an LWER of 7.6% when aligning two threshold images and an LWER of 6.8% when aligning five. From the word lattice we commit to one hypothesis by applying the methods of Lund et al. (2011), achieving an improvement over the original OCR output and an 8.41% WER result on this data set.
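A toy stand-in for the lattice-combination step, assuming the OCR outputs are already word-aligned (the real method aligns them first) and that simple voting replaces the decision procedure of Lund et al. (2011):

```python
from collections import Counter

# toy OCR outputs of the same line binarized at three different thresholds;
# real outputs require sequence alignment, but these are already aligned
ocr_outputs = [
    "the quick brovvn fox jumps".split(),
    "the qu1ck brown fox jumps".split(),
    "tne quick brown f0x jumps".split(),
]

# build a word lattice: one slot per position, holding all hypothesised words
lattice = [Counter(words) for words in zip(*ocr_outputs)]

# commit to one hypothesis per slot by simple voting (stand-in for the
# language-model-based decision described in the abstract)
best = [slot.most_common(1)[0][0] for slot in lattice]
print(" ".join(best))          # -> "the quick brown fox jumps"
```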
Digital audio watermarking using moment-preserving thresholding
NASA Astrophysics Data System (ADS)
Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong
2007-09-01
The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength is that the binary values that MPT produces, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
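The moment-preserving step itself has a closed-form solution: the two representative values are the roots of a quadratic whose coefficients follow from the first three sample moments, and the threshold sits at the corresponding quantile. A sketch, with a synthetic audio block and the RSS of the representative values as used for embedding:

```python
import numpy as np

def moment_preserving_threshold(x):
    """Tsai-style moment-preserving thresholding of a 1-D sample block:
    find two representative values z0 < z1 and a fraction p0 such that the
    first three sample moments are preserved, then place the threshold at
    the p0-quantile of the data."""
    m1, m2, m3 = (np.mean(x ** k) for k in (1, 2, 3))
    s = (m3 - m1 * m2) / (m2 - m1 ** 2)        # z0 + z1
    q = s * m1 - m2                            # z0 * z1
    disc = np.sqrt(max(s ** 2 - 4 * q, 0.0))
    z0, z1 = (s - disc) / 2, (s + disc) / 2    # representative values
    p0 = (z1 - m1) / (z1 - z0)                 # fraction assigned to z0
    threshold = np.quantile(x, p0)
    return z0, z1, threshold

# toy audio block: mixture of a quiet and a loud component
rng = np.random.default_rng(8)
block = np.concatenate([rng.normal(0.1, 0.02, 700), rng.normal(0.8, 0.05, 300)])
z0, z1, thr = moment_preserving_threshold(block)
rss = np.hypot(z0, z1)          # root-sum-square that would carry the watermark bit
print(f"z0={z0:.3f}, z1={z1:.3f}, threshold={thr:.3f}, RSS={rss:.3f}")
```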
Aqueous stress-corrosion cracking of high-toughness D6AC steel
NASA Technical Reports Server (NTRS)
Gilbreath, W. P.; Adamson, M. J.
1976-01-01
The crack growth behavior of D6AC steel as a function of stress intensity, stress and corrosion history, and test technique, under sustained load in filtered natural seawater, 3.3 per cent sodium chloride solution, and distilled water, was investigated. Reported investigations of D6AC were considered in terms of the present study with emphasis on thermal treatment, specimen configuration, fracture toughness, crack-growth rates, initiation period, and threshold. Both threshold and growth kinetics were found to be relatively insensitive to these test parameters. The apparent incubation period was dependent on technique, both detection sensitivity and precracking stress intensity level.
Low energy analysis techniques for CUORE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alduino, C.; Alfonso, K.; Artusa, D. R.
CUORE is a tonne-scale cryogenic detector operating at the Laboratori Nazionali del Gran Sasso (LNGS) that uses tellurium dioxide bolometers to search for neutrinoless double-beta decay of ¹³⁰Te. CUORE is also suitable for searching for low energy rare events such as solar axions or WIMP scattering, thanks to its ultra-low background and large target mass. However, conducting such sensitive searches requires improving the energy threshold to 10 keV. In this article, we describe the analysis techniques developed for the low energy analysis of CUORE-like detectors, using the data acquired from November 2013 to March 2015 by CUORE-0, a single-tower prototype designed to validate the assembly procedure and new cleaning techniques of CUORE. We explain the energy threshold optimization, continuous monitoring of the trigger efficiency, data and event selection, and energy calibration at low energies in detail. We also present the low energy background spectrum of CUORE-0 below 60 keV. Finally, we report the sensitivity of CUORE to WIMP annual modulation using the CUORE-0 energy threshold and background, as well as an estimate of the uncertainty on the nuclear quenching factor from nuclear recoils in CUORE-0.
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
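Otsu's global method, the baseline the stroke model is contrasted with, is compact enough to sketch; the histogram bin count and the synthetic document patch are illustrative.

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's global threshold: pick the grey level that maximises the
    between-class variance of the two classes it separates."""
    hist, edges = np.histogram(gray, bins=bins)
    p = hist.astype(float) / hist.sum()
    centres = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                        # class-0 probability up to each level
    mu = np.cumsum(p * centres)              # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centres[np.argmax(sigma_b)]

# toy document patch: a dark stroke (character) on a bright, noisy background
rng = np.random.default_rng(9)
patch = rng.normal(200, 10, (64, 64))
patch[20:44, 30:34] = rng.normal(60, 8, (24, 4))    # a vertical stroke
t = otsu_threshold(patch)
chars = patch < t                                    # character (foreground) pixels
print(f"Otsu threshold {t:.1f}, {chars.sum()} foreground pixels")
```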
Liquid impact and fracture of free-standing CVD diamond
NASA Astrophysics Data System (ADS)
Kennedy, Claire F.; Telling, Robert H.; Field, John E.
1999-07-01
The Cavendish Laboratory has developed extensive facilities for studies of liquid and solid particle erosion. This paper describes the high-speed liquid impact erosion of thin CVD diamond discs and the variation with grain size of the absolute damage threshold velocity (ADTV), viz., the threshold below which the specimen shows no damage. All specimens fail by rear surface cracking, and there is shown to be a shallow dependence of the rear surface ADTV on grain size. Fracture propagation in CVD diamond has also been monitored using a specially designed double-torsion apparatus, and data for K_IC are presented. Tentatively, the results suggest that finer-grained CVD diamond exhibits a higher fracture toughness, although the differences are slight even over a fourfold variation in the mean grain size. No preference for intergranular fracture was observed, and one may conclude from this that the grain boundaries themselves do not seriously weaken the material. The large pre-existing flaws, both within and between grains, whose size varies with the grain size, are believed to be the dominant source of weakness.
NASA Astrophysics Data System (ADS)
Casperson, R. J.; Asner, D. M.; Baker, J.; Baker, R. G.; Barrett, J. S.; Bowden, N. S.; Brune, C.; Bundgaard, J.; Burgett, E.; Cebra, D. A.; Classen, T.; Cunningham, M.; Deaven, J.; Duke, D. L.; Ferguson, I.; Gearhart, J.; Geppert-Kleinrath, V.; Greife, U.; Grimes, S.; Guardincerri, E.; Hager, U.; Hagmann, C.; Heffner, M.; Hensle, D.; Hertel, N.; Higgins, D.; Hill, T.; Isenhower, L. D.; King, J.; Klay, J. L.; Kornilov, N.; Kudo, R.; Laptev, A. B.; Loveland, W.; Lynch, M.; Lynn, W. S.; Magee, J. A.; Manning, B.; Massey, T. N.; McGrath, C.; Meharchand, R.; Mendenhall, M. P.; Montoya, L.; Pickle, N. T.; Qu, H.; Ruz, J.; Sangiorgio, S.; Schmitt, K. T.; Seilhan, B.; Sharma, S.; Snyder, L.; Stave, S.; Tate, A. C.; Tatishvili, G.; Thornton, R. T.; Tovesson, F.; Towell, D. E.; Towell, R. S.; Walsh, N.; Watson, S.; Wendt, B.; Wood, L.; Yao, L.; Younes, W.; Niffte Collaboration
2018-03-01
The normalized 238U(n,f)/235U(n,f) cross section ratio has been measured using the NIFFTE fission Time Projection Chamber (fissionTPC) from the reaction threshold to 30 MeV. The fissionTPC is a two-volume MICROMEGAS time projection chamber that allows for full three-dimensional reconstruction of fission-fragment ionization profiles from neutron-induced fission. The measurement was performed at the Los Alamos Neutron Science Center, where the neutron energy is determined from neutron time-of-flight. The 238U(n,f)/235U(n,f) ratio reported here is the first cross section measurement made with the fissionTPC, and it will provide new experimental data for evaluation of the 238U(n,f) cross section, an important standard used in neutron-flux measurements. Use of a development target in this work prevented the determination of an absolute normalization, to be addressed in future measurements. Instead, the measured cross section ratio has been normalized to ENDF/B-VIII.β5 at 14.5 MeV.
[A Generator of Mono-energetic Electrons for Response Test of Charged Particle Detectors.].
Matsubayashi, Fumiyasu; Yoshida, Katsuhide; Maruyama, Koichi
2005-01-01
We designed and fabricated a generator of mono-energetic electrons for the response test of charged particle detectors, which is used to measure fragmented particles of the carbon beam for cancer therapy. Mono-energetic electrons are extracted from ⁹⁰Sr by analyzing the energy of beta rays in the generator with a magnetic field. We evaluated performance parameters of the generator such as the absolute energy, the energy resolution and the counting rate of extracted electrons. The generator supplies mono-energetic electrons from 0.5 MeV to 1.7 MeV with an energy resolution of 20% in FWHM at energies above 1.0 MeV. The counting rate of electrons is 400 cpm at the maximum when the activity of ⁹⁰Sr is 298 kBq. The generator was used to measure responses of fragmented-particle detectors and to determine the threshold energy of the detectors. We evaluated the dependence of pulse height variation on the detector position and the threshold energy by using the generator. We concluded this generator is useful for the response test of general charged particle detectors.
Behavioural and physiological limits to vision in mammals
Field, Greg D.
2017-01-01
Human vision is exquisitely sensitive—a dark-adapted observer is capable of reliably detecting the absorption of a few quanta of light. Such sensitivity requires that the sensory receptors of the retina, rod photoreceptors, generate a reliable signal when single photons are absorbed. In addition, the retina must be able to extract this information and relay it to higher visual centres under conditions where very few rods signal single-photon responses while the majority generate only noise. Critical to signal transmission are mechanistic optimizations within rods and their dedicated retinal circuits that enhance the discriminability of single-photon responses by mitigating photoreceptor and synaptic noise. We describe behavioural experiments over the past century that have led to the appreciation of high sensitivity near absolute visual threshold. We further consider mechanisms within rod photoreceptors and dedicated rod circuits that act to extract single-photon responses from cellular noise. We highlight how these studies have shaped our understanding of brain function and point out several unresolved questions in the processing of light near the visual threshold. This article is part of the themed issue ‘Vision in dim light’. PMID:28193817
Return volatility interval analysis of stock indexes during a financial crash
NASA Astrophysics Data System (ADS)
Li, Wei-Shen; Liaw, Sy-Sang
2015-09-01
We investigate the intervals between return volatilities above a certain threshold q for the data sets of 10 countries during the 2008/2009 global financial crisis, and divide these data into several stages according to stock price tendencies: a plunging stage (stage 1), a fluctuating or rebounding stage (stage 2) and a soaring stage (stage 3). For different thresholds q, the cumulative distribution function always exhibits a power-law tail. We find that the absolute value of the power-law exponent is lowest in stage 1 for various types of markets, and increases monotonically from stage 1 to stage 3 in emerging markets. The fractal dimension properties of the return volatility interval series provide some surprising results. We find that developed markets have strong persistence and transform to weaker correlation in the plunging and soaring stages. In contrast, emerging markets fail to exhibit such a transformation, but rather show constant-correlation behavior with the recurrence of extreme return volatility in the corresponding stages during a crash. We believe this long-memory property found in recurrence-interval series, especially for developed markets, plays an important role in volatility clustering.
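As a rough illustration of this interval analysis, the Python sketch below computes recurrence intervals between volatility exceedances and estimates the power-law exponent of their survival function by a log-log fit. The synthetic returns, the choice of expressing q in standard deviations of volatility, and the tail fraction used for the fit are all assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def recurrence_intervals(returns, q):
    """Waiting times (in samples) between volatility values exceeding
    q standard deviations; an illustrative reading of the interval analysis."""
    vol = np.abs(returns)
    vol = (vol - vol.mean()) / vol.std()          # normalised volatility
    idx = np.flatnonzero(vol > q)                 # exceedance positions
    return np.diff(idx)                           # intervals between them

def tail_exponent(intervals, tail_fraction=0.5):
    """Estimate alpha in P(tau > t) ~ t^(-alpha) from a log-log fit on the tail
    of the empirical survival function."""
    t = np.sort(intervals)
    p = 1.0 - np.arange(1, len(t) + 1) / len(t)   # empirical survival function
    mask = (t >= np.quantile(t, 1 - tail_fraction)) & (p > 0)
    slope, _ = np.polyfit(np.log(t[mask]), np.log(p[mask]), 1)
    return -slope

# usage with synthetic heavy-tailed returns standing in for one stage of an index
rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=5000) * 0.01
taus = recurrence_intervals(returns, q=2.0)
print("estimated tail exponent:", round(tail_exponent(taus), 2))
```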
Re, Rebecca; Muthalib, Makii; Contini, Davide; Zucchelli, Lucia; Torricelli, Alessandro; Spinelli, Lorenzo; Caffini, Matteo; Ferrari, Marco; Quaresima, Valentina; Perrey, Stephane; Kerr, Graham
2013-01-01
The application of different EMS current thresholds on muscle activates not only the muscle but also peripheral sensory axons that send proprioceptive and pain signals to the cerebral cortex. A 32-channel time-domain fNIRS instrument was employed to map regional cortical activities under varied EMS current intensities applied on the right wrist extensor muscle. Eight healthy volunteers underwent four EMS at different current thresholds based on their individual maximal tolerated intensity (MTI), i.e., 10 % < 50 % < 100 % < over 100 % MTI. Time courses of the absolute oxygenated and deoxygenated hemoglobin concentrations primarily over the bilateral sensorimotor cortical (SMC) regions were extrapolated, and cortical activation maps were determined by general linear model using the NIRS-SPM software. The stimulation-induced wrist extension paradigm significantly increased activation of the contralateral SMC region according to the EMS intensities, while the ipsilateral SMC region showed no significant changes. This could be due in part to a nociceptive response to the higher EMS current intensities and result also from increased sensorimotor integration in these cortical regions.
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
New developments in supra-threshold perimetry.
Henson, David B; Artes, Paul H
2002-09-01
To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.
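To make the pass/fail criterion concrete, the sketch below computes the probability that a test location is classed as seen under a "3 seen before 3 missed, at most 5 presentations" reading of the '3 stimuli seen of 3-5 presentations' rule, assuming independent responses with a fixed per-stimulus detection probability. This is only one plausible interpretation of the criterion, not the authors' simulation code.

```python
from math import comb

def p_pass(p_seen):
    """Probability of a 3rd 'seen' response before a 3rd 'missed' response
    (at most 5 presentations), assuming independent responses."""
    # Pass if at most 2 misses precede the 3rd 'seen' (negative binomial form).
    return sum(comb(2 + k, k) * (1 - p_seen) ** k * p_seen ** 3 for k in range(3))

# healthy locations (high p_seen) should almost always pass,
# deeply defective locations (low p_seen) should almost always fail
for p in (0.95, 0.7, 0.3, 0.05):
    print(f"per-stimulus seen prob {p:.2f} -> pass prob {p_pass(p):.3f}")
```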
Quantitative Phase Fraction Detection in Organic Photovoltaic Materials through EELS Imaging
Dyck, Ondrej; Hu, Sheng; Das, Sanjib; ...
2015-11-24
Organic photovoltaic materials have recently seen intense interest from the research community. Improvements in device performance are occurring at an impressive rate; however, visualization of the active layer phase separation still remains a challenge. Our paper outlines the application of two electron energy-loss spectroscopic (EELS) imaging techniques that can complement and enhance current phase detection techniques. Specifically, the bulk plasmon peak position, often used to produce contrast between phases in energy filtered transmission electron microscopy (EFTEM), is quantitatively mapped across a sample cross section. One complementary spectrum image capturing the carbon and sulfur core loss edges is compared with the plasmon peak map and found to agree quite well, indicating that carbon and sulfur density differences between the two phases also allow phase discrimination. Additionally, an analytical technique for determining absolute atomic areal density is used to produce an absolute carbon and sulfur areal density map. We also show how these maps may be re-interpreted as a phase ratio map, giving quantitative information about the purity of the phases within the junction.
ITER-relevant calibration technique for soft x-ray spectrometer.
Rzadkiewicz, J; Książek, I; Zastrow, K-D; Coffey, I H; Jakubowska, K; Lawson, K D
2010-10-01
The ITER-oriented JET research program brings new requirements for low-Z impurity monitoring, in particular for Be, the future main wall component of JET and ITER. Monitoring based on Bragg spectroscopy requires an absolute sensitivity calibration, which is challenging for large tokamaks. This paper describes both the “component-by-component” and “continua” calibration methods used for the Be IV channel (75.9 Å) of the Bragg rotor spectrometer deployed on JET. The calibration techniques presented here rely on multiorder reflectivity calculations and measurements of continuum radiation emitted from helium plasmas. These plasmas offer excellent conditions for absolute photon flux calibration due to their low level of impurities. It was found that the component-by-component method gives results that are four times higher than those obtained by means of the continua method. A better understanding of this discrepancy requires further investigation.
Relative Attitude Determination of Earth Orbiting Formations Using GPS Receivers
NASA Technical Reports Server (NTRS)
Lightsey, E. Glenn
2004-01-01
Satellite formation missions require the precise determination of both the position and attitude of multiple vehicles to achieve the desired objectives. In order to support the mission requirements for these applications, it is necessary to develop techniques for representing and controlling the attitude of formations of vehicles. A generalized method for representing the attitude of a formation of vehicles has been developed. The representation may be applied to both absolute and relative formation attitude control problems. The technique is able to accommodate formations with an arbitrarily large number of vehicles. To demonstrate the formation attitude problem, the method is applied to the attitude determination of a simple leader-follower along-track orbit formation. A multiplicative extended Kalman filter is employed to estimate vehicle attitude. In a simulation study using GPS receivers as the attitude sensors, the relative attitude between vehicles in the formation is determined 3 times more accurately than the absolute attitude.
Photocurrent mapping of near-field optical antenna resonances
NASA Astrophysics Data System (ADS)
Barnard, Edward S.; Pala, Ragip A.; Brongersma, Mark L.
2011-09-01
An increasing number of photonics applications make use of nanoscale optical antennas that exhibit a strong, resonant interaction with photons of a specific frequency. The resonant properties of such antennas are conventionally characterized by far-field light-scattering techniques. However, many applications require quantitative knowledge of the near-field behaviour, and existing local field measurement techniques provide only relative, rather than absolute, data. Here, we demonstrate a photodetector platform that uses a silicon-on-insulator substrate to spectrally and spatially map the absolute values of enhanced fields near any type of optical antenna by transducing local electric fields into photocurrent. We are able to quantify the resonant optical and materials properties of nanoscale (~50 nm) and wavelength-scale (~1 µm) metallic antennas as well as high-refractive-index semiconductor antennas. The data agree well with light-scattering measurements, full-field simulations and intuitive resonator models.
Calibration of High Heat Flux Sensors at NIST
Murthy, A. V.; Tsai, B. K.; Gibson, C. E.
1997-01-01
An ongoing program at the National Institute of Standards and Technology (NIST) is aimed at improving and standardizing heat-flux sensor calibration methods. The current calibration needs of U.S. science and industry exceed the current NIST capability of 40 kW/m2 irradiance. In achieving this goal, as well as meeting lower-level non-radiative heat flux calibration needs of science and industry, three different types of calibration facilities currently are under development at NIST: convection, conduction, and radiation. This paper describes the research activities associated with the NIST Radiation Calibration Facility. Two different techniques, transfer and absolute, are presented. The transfer calibration technique employs a transfer standard calibrated with reference to a radiometric standard for calibrating the sensors using a graphite tube blackbody. Plans for an absolute calibration facility include the use of a spherical blackbody and a cooled aperture and sensor-housing assembly to calibrate the sensors in a low convective environment. PMID:27805156
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
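A minimal sketch of the lasso-based prediction step is given below, assuming spectra arranged as a matrix with one row per glass standard and one column per energy channel, and reference %Fe3+ values from an independent method. The data are synthetic placeholders, and the specific pipeline (scikit-learn's LassoCV with standardisation) is our choice for illustration, not necessarily the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins: X holds full XAS spectra, y the reference %Fe3+ values.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))            # 60 standards x 500 energy channels
y = rng.uniform(0, 100, size=60)          # reference %Fe3+ (placeholder)

# Lasso with cross-validated regularisation strength over the full spectral region.
model = make_pipeline(StandardScaler(), LassoCV(cv=5, max_iter=50000))
model.fit(X, y)

pred = model.predict(X)
print("mean absolute error (%Fe3+):", np.mean(np.abs(pred - y)).round(2))
```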
Xiao, Hong; Lin, Xiao-ling; Dai, Xiang-yu; Gao, Li-dong; Chen, Bi-yun; Zhang, Xi-xing; Zhu, Pei-juan; Tian, Huai-yu
2012-05-01
To analyze the periodicity of the 2009 influenza A (H1N1) pandemic in Changsha and its correlation with sensitive climatic factors. Information on 5439 cases of influenza A (H1N1) and synchronous meteorological data for the period from May 22nd to December 31st, 2009 (223 days in total) in Changsha city were collected. Classification and regression trees (CART) were employed to screen for the climatic factors to which influenza A (H1N1) was sensitive; cross wavelet transform and wavelet coherence analysis were applied to assess and compare the periodicity of the epidemic and its association with the time-lag phase features of the sensitive climatic factors. The CART results indicated that daily minimum temperature and daily absolute humidity were the sensitive climatic factors for the spread of influenza A (H1N1) in Changsha. The peak incidence of influenza A (H1N1) occurred between October and December (median (M) = 44.00 cases per day), when the daily minimum temperature (M = 13°C) and daily absolute humidity (M = 6.69 g/m(3)) were relatively low. The wavelet analysis demonstrated a period of 16 days in the epidemic in Changsha, with daily minimum temperature and daily absolute humidity as the relatively sensitive climatic factors. The number of daily reported patients was statistically related to the daily minimum temperature and daily absolute humidity, with the frequency domain mostly in the period of (16 ± 2) days. In the initial stage of the epidemic (from August 9th to September 8th), a 6-day lag was found between the incidence and the daily minimum temperature. In the peak period, the daily minimum temperature and daily absolute humidity were negatively related to the incidence. During the pandemic period, the incidence of influenza A (H1N1) showed periodic features, and the sensitive climatic factors had a "driving effect" on the incidence of influenza A (H1N1).
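A hedged sketch of the CART screening step: a regression tree relates daily case counts to candidate climatic factors, and the tree's feature importances are used to flag "sensitive" factors. All variable names, coefficients, and data below are synthetic illustrations, not the study's records or its exact procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
n_days = 223
t_min = rng.uniform(5, 30, n_days)        # daily minimum temperature (deg C)
abs_hum = rng.uniform(3, 20, n_days)      # daily absolute humidity (g/m^3)
rainfall = rng.uniform(0, 40, n_days)     # an extra candidate factor
# synthetic case counts that depend mostly on temperature and humidity
cases = np.maximum(0, 60 - 2.0 * t_min - 1.5 * abs_hum + rng.normal(0, 5, n_days))

X = np.column_stack([t_min, abs_hum, rainfall])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, cases)

# factors with high importance would be carried into the wavelet analysis
for name, imp in zip(["t_min", "abs_humidity", "rainfall"], tree.feature_importances_):
    print(f"{name:>13s}: importance {imp:.2f}")
```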
Some photometric techniques for atmosphereless solar system bodies.
Lumme, K; Peltoniemi, J; Irvine, W M
1990-01-01
We discuss various photometric techniques and their absolute scales in relation to the information that can be derived from the relevant data. We also outline a new scattering model for atmosphereless bodies in the solar system and show how it fits Mariner 10 surface photometry of the planet Mercury. It is shown how important the correct scattering law is while deriving the topography by photoclinometry.
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The nonlinear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage, whose value is usually approximated and is thus not optimal. This approximation deteriorates the performance in real-time systems where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves the detection accuracy in both high SNR and low SNR signals. Boxplots are presented that provide a statistical analysis of improvements in accuracy; for instance, the 75th percentile was at 98.7% and 93.5% for the optimized NEO threshold and the traditional NEO threshold, respectively.
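For reference, a minimal NEO detector with the conventional scaled-mean threshold is sketched below. The scaling constant c is an assumed heuristic value, and the paper's own contribution, the empirical-gradient threshold optimisation, is not reproduced here.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, c=8.0):
    """Classic NEO detection with threshold thr = c * mean(psi); the constant c
    is the approximation the paper seeks to replace with an optimised value."""
    psi = neo(x)
    thr = c * psi.mean()
    return np.flatnonzero(psi > thr), thr

# usage with a noisy trace containing two artificial 'spikes'
rng = np.random.default_rng(3)
trace = rng.normal(0, 1, 2000)
trace[500:505] += 8.0
trace[1400:1405] -= 8.0
idx, thr = detect_spikes(trace)
print(f"threshold {thr:.1f}, {len(idx)} supra-threshold samples")
```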
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed threshold methods. It was found that the 1D location accuracy of the new methods was within the range of <1-7.1% of the monitored region, compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional fixed threshold method at different threshold levels.
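The AIC picker used here as a baseline can be sketched as follows, assuming the commonly cited Maeda formulation of the criterion. It is included only to clarify that established baseline; the three new threshold-free methods proposed in the paper are not reproduced.

```python
import numpy as np

def aic_onset(x):
    """Onset picking via AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
    the onset is taken as the sample index minimising AIC."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(1, n - 1)
    aic = np.array([i * np.log(np.var(x[:i]) + 1e-12)
                    + (n - i - 1) * np.log(np.var(x[i:]) + 1e-12) for i in k])
    return k[np.argmin(aic)]

# usage: background noise followed by a decaying burst starting at sample 300
rng = np.random.default_rng(4)
burst = np.sin(2 * np.pi * 0.1 * np.arange(500)) * np.exp(-np.arange(500) / 150)
sig = np.concatenate([rng.normal(0, 0.05, 300), burst])
print("estimated onset sample:", aic_onset(sig))
```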
NASA Astrophysics Data System (ADS)
Saini, A.; Christenson, C. W.; Khattab, T. A.; Wang, R.; Twieg, R. J.; Singer, K. D.
2017-01-01
In order to achieve a high capacity 3D optical data storage medium, a nonlinear or threshold writing process is necessary to localize data in the axial dimension. To this end, commercial multilayer discs use thermal ablation of metal films or phase change materials to realize such a threshold process. This paper addresses a threshold writing mechanism relevant to recently reported fluorescence-based data storage in dye-doped co-extruded multilayer films. To gain understanding of the essential physics, single layer spun coat films were used so that the data is easily accessible by analytical techniques. Data were written by attenuating the fluorescence using nanosecond-range exposure times from a 488 nm continuous wave laser overlapping with the single photon absorption spectrum. The threshold writing process was studied over a range of exposure times and intensities, and with different fluorescent dyes. It was found that all of the dyes have a common temperature threshold where fluorescence begins to attenuate, and the physical nature of the thermal process was investigated.
Underwater temporary threshold shift in pinnipeds: effects of noise level and duration.
Kastak, David; Southall, Brandon L; Schusterman, Ronald J; Kastak, Colleen Reichmuth
2005-11-01
Behavioral psychophysical techniques were used to evaluate the residual effects of underwater noise on the hearing sensitivity of three pinnipeds: a California sea lion (Zalophus californianus), a harbor seal (Phoca vitulina), and a northern elephant seal (Mirounga angustirostris). Temporary threshold shift (TTS), defined as the difference between auditory thresholds obtained before and after noise exposure, was assessed. The subjects were exposed to octave-band noise centered at 2500 Hz at two sound pressure levels: 80 and 95 dB SL (re: auditory threshold at 2500 Hz). Noise exposure durations were 22, 25, and 50 min. Threshold shifts were assessed at 2500 and 3530 Hz. Mean threshold shifts ranged from 2.9 to 12.2 dB. Full recovery of auditory sensitivity occurred within 24 h of noise exposure. Control sequences, comprising sham noise exposures, did not result in significant mean threshold shifts for any subject. Threshold shift magnitudes increased with increasing noise sound exposure level (SEL) for two of the three subjects. The results underscore the importance of including sound exposure metrics (incorporating sound pressure level and exposure duration) in order to fully assess the effects of noise on marine mammal hearing.
Molloi, Sabee; Ding, Huanjun; Feig, Stephen
2015-01-01
Purpose: The purpose of this study was to compare the precision of mammographic breast density measurement using radiologist reader assessment, histogram threshold segmentation, fuzzy C-means segmentation and spectral material decomposition. Materials and Methods: Spectral mammography images from a total of 92 consecutive asymptomatic women (50–69 years old) who presented for annual screening mammography were retrospectively analyzed for this study. Breast density was estimated using assessments from 10 radiologist readers, standard histogram thresholding, a fuzzy C-means algorithm and spectral material decomposition. The breast density correlation between left and right breasts was used to assess the precision of these techniques in measuring breast composition relative to dual-energy material decomposition. Results: In comparison to the other techniques, breast density measurements using dual-energy material decomposition showed the highest correlation. The relative standard error of estimate for breast density measurements from left and right breasts using radiologist reader assessment, standard histogram thresholding, the fuzzy C-means algorithm and dual-energy material decomposition was calculated to be 1.95, 2.87, 2.07 and 1.00, respectively. Conclusion: The results indicate that the precision of dual-energy material decomposition was approximately a factor of two higher than that of the other techniques, as reflected in the better correlation of breast density measurements from the right and left breasts. PMID:26031229
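A toy version of the area-based histogram-threshold measure is sketched below, assuming a binary breast mask and a user-chosen intensity threshold (both hypothetical inputs). It illustrates only this one comparator; the fuzzy C-means, reader-assessment and spectral material decomposition approaches compared in the study are not shown.

```python
import numpy as np

def percent_density(image, breast_mask, threshold):
    """Area-based percent density: the fraction of breast pixels whose
    intensity exceeds the chosen threshold (purely illustrative)."""
    breast = image[breast_mask]
    dense = breast > threshold
    return 100.0 * dense.sum() / breast.size

# usage with a synthetic 'mammogram' and a whole-image mask
rng = np.random.default_rng(5)
img = rng.normal(100, 20, size=(256, 256))
mask = np.ones_like(img, dtype=bool)
print(f"percent density: {percent_density(img, mask, threshold=120):.1f}%")
```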
Denaï, Mouloud A; Mahfouf, Mahdi; Mohamad-Samuri, Suzani; Panoutsos, George; Brown, Brian H; Mills, Gary H
2010-05-01
Thoracic electrical impedance tomography (EIT) is a noninvasive, radiation-free monitoring technique whose aim is to reconstruct a cross-sectional image of the internal spatial distribution of conductivity from electrical measurements made by injecting small alternating currents via an electrode array placed on the surface of the thorax. The purpose of this paper is to discuss the fundamentals of EIT and demonstrate the principles of mechanical ventilation, lung recruitment, and EIT imaging on a comprehensive physiological model, which combines a model of respiratory mechanics, a model of the human lung absolute resistivity as a function of air content, and a 2-D finite-element mesh of the thorax to simulate EIT image reconstruction during mechanical ventilation. The overall model gives a good understanding of respiratory physiology and EIT monitoring techniques in mechanically ventilated patients. The model proposed here was able to reproduce consistent images of ventilation distribution in simulated acutely injured and collapsed lung conditions. A new advisory system architecture integrating a previously developed data-driven physiological model for continuous and noninvasive predictions of blood gas parameters with the regional lung function data/information generated from absolute EIT (aEIT) is proposed for monitoring and ventilator therapy management of critical care patients.
Counting the Photons: Determining the Absolute Storage Capacity of Persistent Phosphors
Rodríguez Burbano, Diana C.; Capobianco, John A.
2017-01-01
The performance of a persistent phosphor is often determined by comparing luminance decay curves, expressed in cd/m2. However, these photometric units do not enable a straightforward, objective comparison between different phosphors in terms of the total number of emitted photons, as these units are dependent on the emission spectrum of the phosphor. This may lead to incorrect conclusions regarding the storage capacity of the phosphor. An alternative and convenient technique of characterizing the performance of a phosphor was developed on the basis of the absolute storage capacity of phosphors. In this technique, the phosphor is incorporated in a transparent polymer and the measured afterglow is converted into an absolute number of emitted photons, effectively quantifying the amount of energy that can be stored in the material. This method was applied to the benchmark phosphor SrAl2O4:Eu,Dy and to the nano-sized phosphor CaS:Eu. The results indicated that only a fraction of the Eu ions (around 1.6% in the case of SrAl2O4:Eu,Dy) participated in the energy storage process, which is in line with earlier reports based on X-ray absorption spectroscopy. These findings imply that there is still a significant margin for improving the storage capacity of persistent phosphors. PMID:28773228
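The core argument, that luminance-based units weight the emission by its spectral overlap with the luminosity function while storage capacity is a photon count, can be summarised with the standard photometric and radiometric relations. This is a generic statement of those relations, not the paper's exact calibration procedure.

```latex
% Luminous flux weights the spectral radiant flux \Phi_e(\lambda) by the
% luminosity function V(\lambda); the emitted photon rate does not.
\[
  \Phi_v \;=\; K_m \int V(\lambda)\,\Phi_e(\lambda)\,\mathrm{d}\lambda ,
  \qquad K_m = 683\ \mathrm{lm\,W^{-1}},
\]
\[
  \dot N_{\mathrm{ph}} \;=\; \int \frac{\lambda}{h c}\,\Phi_e(\lambda)\,\mathrm{d}\lambda .
\]
% Two phosphors emitting the same number of photons per unit time can therefore
% show very different luminance (cd/m^2), which is why an absolute photon count
% is the more objective measure of storage capacity.
```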
First metatarsal length change after basilar closing wedge osteotomy for hallux valgus.
Day, Thomas; Charlton, Timothy P; Thordarson, David B
2011-05-01
Hallux valgus deformities with large intermetatarsal angles require a more proximal metatarsal procedure to adequately correct the deformity. Due to the relative ease of a closing wedge osteotomy, this technique was adopted, but with concern over first metatarsal shortening. In this study, we primarily evaluated angular correction and first metatarsal shortening. We evaluated 70 feet in 57 patients (average age, 54 years; 52 female, 5 male). The average followup was 14 (range, 6 to 45) months. The charts were reviewed for the presence of metatarsalgia. Digital radiographic measurements were made for pre- and postoperative hallux valgus and intermetatarsal angles, the dorsiflexion angle of the first metatarsal, and absolute and relative shortening of the first metatarsal. The average hallux valgus angle improved from 31 to 11 degrees (p < 0.0001) and the intermetatarsal angle from 13.2 to 4.4 degrees (p < 0.0001). The absolute shortening of the first metatarsal was 2.2 mm and the relative shortening was 0.6 mm. There was 1.3 degrees of dorsiflexion on average. Excellent correction of the deformity, with minimal dorsiflexion or new complaints of metatarsalgia, was found with this technique. The relative shortening assessed with the new method was found to be less than the absolute shortening, and we feel it more accurately reflects the functional length of the first metatarsal.
SART-Type Half-Threshold Filtering Approach for CT Reconstruction
YU, HENGYONG; WANG, GE
2014-01-01
The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928
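As a sketch of the filtering step only, the half-thresholding operator in the form usually attributed to Xu et al. is shown below. The exact expression is our assumption from the commonly cited formulation, and it is applied component-wise to a generic coefficient vector, not to the pseudoinverse DGT construction developed in the paper.

```python
import numpy as np

def half_threshold(x, lam):
    """Half-thresholding operator for the l_{1/2} penalty (assumed form):
    coefficients below the threshold are set to zero, larger ones shrunk."""
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)   # threshold value
    out = np.zeros_like(x, dtype=float)
    big = np.abs(x) > t
    phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * x[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

# usage: shrink a small coefficient vector
print(half_threshold(np.array([-2.0, -0.2, 0.1, 1.5, 3.0]), lam=1.0))
```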
Electroconvulsive therapy stimulus titration: Not all it seems.
Rosenman, Stephen J
2018-05-01
To examine the provenance and implications of seizure threshold titration in electroconvulsive therapy. Titration of seizure threshold has become a virtual standard for electroconvulsive therapy. It is justified as individualisation and optimisation of the balance between efficacy and unwanted effects. Present day threshold estimation is significantly different from the 1960 studies of Cronholm and Ottosson that are its usual justification. The present form of threshold estimation is unstable and too uncertain for valid optimisation or individualisation of dose. Threshold stimulation (lowest dose that produces a seizure) has proven therapeutically ineffective, and the multiples applied to threshold to attain efficacy have never been properly investigated or standardised. The therapeutic outcomes of threshold estimation (or its multiples) have not been separated from simple dose effects. Threshold estimation does not optimise dose due to its own uncertainties and the different short-term and long-term cognitive and memory effects. Potential harms of titration have not been examined. Seizure threshold titration in electroconvulsive therapy is not a proven technique of dose optimisation. It is widely held and practiced; its benefit and harmlessness assumed but unproven. It is a prematurely settled answer to an unsettled question that discourages further enquiry. It is an example of how practices, assumed scientific, enter medicine by obscure paths.
SART-Type Half-Threshold Filtering Approach for CT Reconstruction.
Yu, Hengyong; Wang, Ge
2014-01-01
The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.
Ertl, Peter; Kruse, Annika; Tilp, Markus
2016-10-01
The aim of the current paper was to systematically review the relevant existing electromyographic threshold concepts within the literature. The electronic databases MEDLINE and SCOPUS were screened for papers published between January 1980 and April 2015 including the keywords: neuromuscular fatigue threshold, anaerobic threshold, electromyographic threshold, muscular fatigue, aerobic-anaerobic transition, ventilatory threshold, exercise testing, and cycle-ergometer. 32 articles were assessed with regard to their electromyographic methodologies, description of results, statistical analysis and test protocols. Only one article was of very good quality, 21 were of good quality, and two articles were of very low quality. The review process revealed that: (i) there is consistent evidence of one or two non-linear increases of EMG that might reflect the additional recruitment of motor units (MU) or different fiber types during fatiguing cycle-ergometer exercise, (ii) most studies reported no statistically significant difference between electromyographic and metabolic thresholds, (iii) one-minute protocols with increments between 10 and 25 W appear most appropriate to detect the muscular threshold, (iv) threshold detection from the vastus medialis, vastus lateralis, and rectus femoris is recommended, and (v) there is great variety in study protocols, measurement techniques, and data processing. Therefore, we recommend further research and standardization in the detection of EMGTs. Copyright © 2016 Elsevier Ltd. All rights reserved.
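Many of the reviewed studies locate the EMG threshold as the breakpoint of a two-segment linear fit of EMG amplitude against workload during an incremental test. A generic sketch of that idea (synthetic ramp data, breakpoint found by exhaustive search) is given below; it should not be read as any specific study's algorithm.

```python
import numpy as np

def emg_threshold(power, rms):
    """Locate a single EMG threshold as the breakpoint of a two-segment
    linear fit of RMS amplitude against workload (illustrative only)."""
    best_bp, best_sse = None, np.inf
    for i in range(2, len(power) - 2):                 # candidate breakpoints
        sse = 0.0
        for seg in (slice(0, i + 1), slice(i, None)):  # fit each segment
            coeffs = np.polyfit(power[seg], rms[seg], 1)
            sse += np.sum((np.polyval(coeffs, power[seg]) - rms[seg]) ** 2)
        if sse < best_sse:
            best_bp, best_sse = power[i], sse
    return best_bp

# usage: synthetic ramp test with an inflection near 180 W
watts = np.arange(50, 301, 10, dtype=float)
rms = np.where(watts < 180, 0.02 * watts, 0.02 * 180 + 0.08 * (watts - 180))
rms = rms + np.random.default_rng(6).normal(0, 0.2, len(watts))
print("estimated EMG threshold (W):", emg_threshold(watts, rms))
```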
Sound and vibration sensitivity of VIIIth nerve fibers in the grassfrog, Rana temporaria.
Christensen-Dalsgaard, J; Jørgensen, M B
1996-10-01
We have studied the sound and vibration sensitivity of 164 amphibian papilla fibers in the VIIIth nerve of the grassfrog, Rana temporaria. The VIIIth nerve was exposed using a dorsal approach. The frogs were placed in a natural sitting posture and stimulated by free-field sound. Furthermore, the animals were stimulated with dorso-ventral vibrations, and the sound-induced vertical vibrations in the setup could be canceled by emitting vibrations in antiphase from the vibration exciter. All low-frequency fibers responded to both sound and vibration with sound thresholds from 23 dB SPL and vibration thresholds from 0.02 cm/s2. The sound and vibration sensitivity was compared for each fiber using the offset between the rate-level curves for sound and vibration stimulation as a measure of relative vibration sensitivity. When measured in this way relative vibration sensitivity decreases with frequency from 42 dB at 100 Hz to 25 dB at 400 Hz. Since sound thresholds decrease from 72 dB SPL at 100 Hz to 50 dB SPL at 400 Hz the decrease in relative vibration sensitivity reflects an increase in sound sensitivity with frequency, probably due to enhanced tympanic sensitivity at higher frequencies. In contrast, absolute vibration sensitivity is constant in most of the frequency range studied. Only small effects result from the cancellation of sound-induced vibrations. The reason for this probably is that the maximal induced vibrations in the present setup are 6-10 dB below the fibers' vibration threshold at the threshold for sound. However, these results are only valid for the present physical configuration of the setup and the high vibration-sensitivities of the fibers warrant caution whenever the auditory fibers are stimulated with free-field sound. Thus, the experiments suggest that the low-frequency sound sensitivity is not caused by sound-induced vertical vibrations. Instead, the low-frequency sound sensitivity is either tympanic or mediated through bone conduction or sound-induced pulsations of the lungs.