Non-Linear Concentration-Response Relationships between Ambient Ozone and Daily Mortality.
Bae, Sanghyuk; Lim, Youn-Hee; Kashima, Saori; Yorifuji, Takashi; Honda, Yasushi; Kim, Ho; Hong, Yun-Chul
2015-01-01
Ambient ozone (O3) concentration has been reported to be significantly associated with mortality. However, the linearity of this relationship and the presence of a threshold remain controversial. The aim of the present study was to examine the concentration-response relationship, and any threshold, of the association between ambient O3 concentration and non-accidental mortality in 13 Japanese and Korean cities from 2000 to 2009. We selected Japanese and Korean cities with populations of over 1 million. We constructed Poisson regression models adjusting for daily mean temperature, daily mean PM10, humidity, time trend, season, year, day of the week, holidays and yearly population. The association between O3 concentration and mortality was examined using linear, spline and linear-threshold models. Thresholds were estimated for each city by constructing linear-threshold models. We also examined the city-combined association using a generalized additive mixed model. The mean O3 concentrations in Korea and Japan did not differ greatly: 26.2 ppb and 24.2 ppb, respectively. Seven of the 13 cities showed better fits for the spline model than for the linear model, supporting a non-linear relationship between O3 concentration and mortality. All 7 cities showed J- or U-shaped associations, suggesting the existence of thresholds. City-specific thresholds ranged from 11 to 34 ppb. The city-combined analysis also showed a non-linear association, with a threshold around 30-40 ppb. We observed non-linear concentration-response relationships, with thresholds, between daily mean ambient O3 concentration and the daily number of non-accidental deaths in Japanese and Korean cities.
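The linear-threshold ("hockey-stick") model described above can be sketched as a profile search over candidate thresholds. The snippet below is a simplified illustration with simulated data, using an ordinary least-squares fit in place of the study's Poisson regression; all values, the 30 ppb threshold, and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily series: ozone (ppb) and log mortality rate with a
# true threshold at 30 ppb (illustrative values, not the study's data).
ozone = rng.uniform(5.0, 60.0, 2000)
log_rate = 4.0 + 0.01 * np.maximum(ozone - 30.0, 0.0)
y = log_rate + rng.normal(0.0, 0.05, ozone.size)

def hinge_rss(x, y, t):
    """Residual sum of squares for the model y ~ 1 + max(x - t, 0)."""
    X = np.column_stack([np.ones_like(x), np.maximum(x - t, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# Profile the threshold over a grid and keep the best-fitting value.
grid = np.arange(10.0, 50.0, 0.5)
best_t = min(grid, key=lambda t: hinge_rss(ozone, y, t))
print(f"estimated threshold: {best_t:.1f} ppb")
```

The same profiling idea carries over to a Poisson likelihood by replacing the RSS criterion with the model deviance.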
Dong, Fulong; Tian, Yiqun; Yu, Shujuan; Wang, Shang; Yang, Shiping; Chen, Yanjun
2015-07-13
We investigate the polarization properties of below-threshold harmonics from aligned molecules in linearly polarized laser fields numerically and analytically. We focus on lower-order harmonics (LOHs). Our simulations show that the ellipticity of below-threshold LOHs depends strongly on the orientation angle and differs significantly for different harmonic orders. Our analysis reveals that this LOH ellipticity is closely associated with resonance effects and the axis symmetry of the molecule. These results shed light on the complex generation mechanism of below-threshold harmonics from aligned molecules.
[The analysis of threshold effect using Empower Stats software].
Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan
2013-11-01
In many biomedical studies of a factor's influence on an outcome variable, the factor has no influence, or a positive effect, within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes. This is called a threshold effect. Whether there is a threshold effect of a factor (x) on the outcome variable (y) can be assessed by fitting a smooth curve and observing whether the relationship is piecewise linear. A segmented regression model, a likelihood ratio test (LRT), and bootstrap resampling can then be used to analyze the threshold effect. Empower Stats software, developed by X & Y Solutions Inc (USA), has a threshold-effect analysis module. The user may input a threshold value, and the software fits the data segmented at that threshold; alternatively, the threshold may be left unspecified, in which case the software determines the optimal threshold automatically and calculates a confidence interval for it.
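The segmented-regression step described above can be sketched in Python with simulated data (the breakpoint location, slopes, and noise level below are hypothetical; Empower Stats additionally wraps this in an LRT and bootstrap resampling):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical exposure-outcome data whose slope changes at x = 5.
x = np.linspace(0.0, 10.0, 300)
y = 1.0 + 0.2 * x + 0.6 * np.maximum(x - 5.0, 0.0) + rng.normal(0.0, 0.3, x.size)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# One-line model (no threshold) versus segmented model with a profiled
# breakpoint: the design gains one hinge column max(x - t, 0).
rss0 = rss(np.column_stack([np.ones_like(x), x]), y)
grid = np.linspace(1.0, 9.0, 81)
rss1 = {t: rss(np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)]), y)
        for t in grid}
best_t = min(rss1, key=rss1.get)

# Likelihood-ratio-type statistic for "threshold vs. no threshold"; its
# null distribution is nonstandard, hence the bootstrap used in practice.
lrt = x.size * np.log(rss0 / rss1[best_t])
print(f"breakpoint ~ {best_t:.1f}, LRT-type statistic = {lrt:.1f}")
```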
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
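The core idea above, identifying the exponential phase by iterative linear regression on log fluorescence and reading Ct off the fitted line, can be sketched as follows. The amplification curve, efficiency, and fluorescence threshold are hypothetical, and the actual Q-Anal algorithm additionally selects the threshold by minimizing regression error across both amplicons and propagates that error into the expression ratio.

```python
import numpy as np

# Hypothetical amplification curve: per-cycle efficiency E = 1.9,
# exponential growth that saturates at a plateau (arbitrary units).
cycles = np.arange(1, 41)
fluor = np.minimum(1e-6 * 1.9 ** cycles, 1.0)
log_f = np.log(fluor)

# Identify the exponential phase by iterative linear regression on
# log(fluorescence): extend the window while the next point stays on
# the fitted line; a large residual signals the plateau.
start = int(np.argmax(fluor > 1e-4))      # skip baseline-level signal
end = start + 3
while end + 1 < len(cycles):
    slope, icept = np.polyfit(cycles[start:end + 1], log_f[start:end + 1], 1)
    predicted_next = slope * cycles[end + 1] + icept
    if abs(predicted_next - log_f[end + 1]) > 0.05:
        break                             # next point falls off the line
    end += 1

efficiency = float(np.exp(slope))         # amplification factor per cycle
threshold = 1e-3                          # fluorescence threshold chosen
ct = (np.log(threshold) - icept) / slope  # inside the exponential phase
print(f"efficiency = {efficiency:.2f}, Ct = {ct:.2f}")
```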
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
Grantz, Erin; Haggard, Brian; Scott, J Thad
2018-06-12
We calculated four median datasets (chlorophyll a, Chl a; total phosphorus, TP; and transparency) using multiple approaches to handling censored observations, including substituting fractions of the quantification limit (QL; dataset 1 = 1QL, dataset 2 = 0.5QL) and statistical methods for censored datasets (datasets 3-4), for approximately 100 Texas, USA reservoirs. Trend analyses of differences between dataset 1 and 3 medians indicated that percent difference increased linearly above thresholds in percent censored data (%Cen). This relationship was extrapolated to estimate medians for site-parameter combinations with %Cen > 80%, which were combined with dataset 3 as dataset 4. Changepoint analysis of Chl a- and transparency-TP relationships indicated threshold differences of up to 50% between datasets. Recursive analysis identified secondary thresholds in dataset 4. Threshold differences show that information introduced via substitution, or missing due to limitations of statistical methods, biased values, underestimated error, and inflated the strength of TP thresholds identified in datasets 1-3. Analysis of covariance identified differences in linear regression models relating transparency to TP between datasets 1, 2, and the more statistically robust datasets 3-4. Study findings identify high-risk scenarios for biased analytical outcomes when using substitution. These include a high probability of median overestimation when %Cen > 50-60% for a single QL, or when %Cen is as low as 16% for multiple QLs. Changepoint analysis was uniquely vulnerable to substitution effects when using medians from sites with %Cen > 50%. Linear regression analysis was less sensitive to substitution and missing-data effects, but differences in model parameters for transparency cannot be discounted and could be magnified by log-transformation of the variables.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise linear model for an adaptive threshold detection system is presented. The bit error probabilities for non-optimum threshold detection systems were also investigated.
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
Effect of postprandial thermogenesis on the cutaneous vasodilatory response during exercise.
Hayashi, Keiji; Ito, Nozomi; Ichikawa, Yoko; Suzuki, Yuichi
2014-08-01
To examine the effect of postprandial thermogenesis on the cutaneous vasodilatory response, 10 healthy male subjects exercised for 30 min on a cycle ergometer at 50% of peak oxygen uptake, with and without food intake. Mean skin temperature, mean body temperature (Tb), heart rate, oxygen uptake, carbon dioxide elimination, and respiratory quotient were all significantly higher at baseline in the session with food intake than in the session without food intake. To evaluate the cutaneous vasodilatory response, relative laser Doppler flowmetry values were plotted against esophageal temperature (Tes) and Tb. Regression analysis revealed that the [Formula: see text] threshold for cutaneous vasodilation tended to be higher with food intake than without it, but there were no significant differences in sensitivity. To clarify the effect of postprandial thermogenesis on the threshold for cutaneous vasodilation, the between-session differences in the Tes threshold and the Tb threshold were plotted against the between-session differences in baseline Tes and baseline Tb, respectively. Linear regression analysis of the resultant plots showed significant positive linear relationships (Tes: r = 0.85, P < 0.01; Tb: r = 0.67, P < 0.05). These results suggest that postprandial thermogenesis increases baseline body temperature, which raises the body temperature threshold for cutaneous vasodilation during exercise.
Threshold law for positron-atom impact ionisation
NASA Technical Reports Server (NTRS)
Temkin, A.
1982-01-01
The threshold law for ionisation of atoms by positron impact is adduced in analogy with our approach to electron-atom ionisation. It is concluded that the Coulomb-dipole region of the potential gives the essential part of the interaction in both cases and leads to the same kind of result: a modulated linear law. An additional process that enters positron ionisation is positronium formation in the continuum, but this will not dominate the threshold yield. The result is in sharp contrast to the positron threshold law recently derived by Klar on the basis of a Wannier-type analysis.
Lemaire, Edward D; Samadi, Reza; Goudreau, Louis; Kofman, Jonathan
2013-01-01
A linear piston hydraulic angular-velocity-based control knee joint was designed for people with knee-extensor weakness to engage knee-flexion resistance when knee-flexion angular velocity reaches a preset threshold, such as during a stumble, but to otherwise allow free knee motion. During mechanical testing at the lowest angular-velocity threshold, the device engaged within 2 degrees knee flexion and resisted moment loads of over 150 Nm. The device completed 400,000 loading cycles without mechanical failure or wear that would affect function. Gait patterns of nondisabled participants were similar to normal at walking speeds that produced below-threshold knee angular velocities. Fast walking speeds, employed purposely to attain the angular-velocity threshold and cause knee-flexion resistance, reduced maximum knee flexion by approximately 25 degrees but did not lead to unsafe gait patterns in foot ground clearance during swing. In knee collapse tests, the device successfully engaged knee-flexion resistance and stopped knee flexion with peak knee moments of up to 235.6 Nm. The outcomes from this study support the potential for the linear piston hydraulic knee joint in knee and knee-ankle-foot orthoses for people with lower-limb weakness.
Cross-validation analysis for genetic evaluation models for ranking in endurance horses.
García-Ballesteros, S; Varona, L; Valera, M; Gutiérrez, J P; Cervantes, I
2018-01-01
Ranking trait was used as a selection criterion for competition horses to estimate racing performance. In the literature the most common approaches to estimate breeding values are the linear or threshold statistical models. However, recent studies have shown that a Thurstonian approach was able to fix the race effect (competitive level of the horses that participate in the same race), thus suggesting a better prediction accuracy of breeding values for ranking trait. The aim of this study was to compare the predictability of linear, threshold and Thurstonian approaches for genetic evaluation of ranking in endurance horses. For this purpose, eight genetic models were used for each approach with different combinations of random effects: rider, rider-horse interaction and environmental permanent effect. All genetic models included gender, age and race as systematic effects. The database that was used contained 4065 ranking records from 966 horses and that for the pedigree contained 8733 animals (47% Arabian horses), with an estimated heritability around 0.10 for the ranking trait. The prediction ability of the models for racing performance was evaluated using a cross-validation approach. The average correlation between real and predicted performances across genetic models was around 0.25 for threshold, 0.58 for linear and 0.60 for Thurstonian approaches. Although no significant differences were found between models within approaches, the best genetic model included: the rider and rider-horse random effects for threshold, only rider and environmental permanent effects for linear approach and all random effects for Thurstonian approach. The absolute correlations of predicted breeding values among models were higher between threshold and Thurstonian: 0.90, 0.91 and 0.88 for all animals, top 20% and top 5% best animals. For rank correlations these figures were 0.85, 0.84 and 0.86. The lower values were those between linear and threshold approaches (0.65, 0.62 and 0.51). 
In conclusion, the Thurstonian approach is recommended for the routine genetic evaluations for ranking in endurance horses.
Wall, Michael; Zamba, Gideon K D; Artes, Paul H
2018-01-01
It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
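The censoring procedure described above, flooring threshold estimates at a criterion value before fitting a regression of sensitivity on time, can be sketched as follows (the visit schedule, progression rate, and variability values are hypothetical, chosen only to mimic the higher retest variability of damaged field locations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical series of threshold estimates (dB) from 8 semi-annual
# visits of a progressing field; retest variability grows below 20 dB.
visits = np.arange(8) * 0.5                       # years since baseline
true_db = 28.0 - 4.0 * visits                     # steadily worsening
sd = np.where(true_db < 20.0, 6.0, 1.5)           # noisier when damaged
series = true_db + rng.normal(0.0, 1.0, 8) * sd

def slope(values):
    """Linear-regression slope of sensitivity against time (dB/year)."""
    return float(np.polyfit(visits, values, 1)[0])

censored = np.maximum(series, 20.0)               # floor estimates at 20 dB
print(f"raw slope {slope(series):.2f}, censored slope {slope(censored):.2f}")
```

Comparing the two slopes across many such simulated series is one way to quantify how little the sub-20 dB estimates contribute to the fitted rate of progression.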
Ye, Xin; Beck, Travis W; DeFreitas, Jason M; Wages, Nathan P
2015-04-01
The aim of this study was to compare the acute effects of concentric versus eccentric exercise on motor control strategies. Fifteen men performed six sets of 10 repetitions of maximal concentric or eccentric isokinetic exercise with their dominant elbow flexors on separate experimental visits. Before and after the exercise, maximal strength testing and submaximal trapezoid isometric contractions (40% of the maximal force) were performed. Both exercise conditions caused significant strength loss in the elbow flexors, but the loss was greater following the eccentric exercise (t=2.401, P=.031). The surface electromyographic signals obtained from the submaximal trapezoid isometric contractions were decomposed into individual motor unit action potential trains. For each submaximal trapezoid isometric contraction, the relationship between the average motor unit firing rate and the recruitment threshold was examined using linear regression analysis. In contrast to the concentric exercise, which did not cause significant changes in the mean linear slope coefficient and y-intercept of the linear regression line, the eccentric exercise resulted in a lower mean linear slope and an increased mean y-intercept, thereby indicating that increasing the firing rates of low-threshold motor units may be more important than recruiting high-threshold motor units to compensate for eccentric exercise-induced strength loss.
Topsakal, Vedat; Fransen, Erik; Schmerber, Sébastien; Declau, Frank; Yung, Matthew; Gordts, Frans; Van Camp, Guy; Van de Heyning, Paul
2006-09-01
To report the preoperative audiometric profile of surgically confirmed otosclerosis. Retrospective, multicenter study. Four tertiary referral centers. One thousand sixty-four surgically confirmed patients with otosclerosis. Therapeutic ear surgery for hearing improvement. Preoperative audiometric air conduction (AC) and bone conduction (BC) hearing thresholds were obtained retrospectively for 1064 patients with otosclerosis. A cross-sectional multiple linear regression analysis was performed on audiometric data of affected ears. Influences of age and sex were analyzed, and age-related typical audiograms were created. Bone conduction thresholds were corrected for the Carhart effect and presbyacusis; in addition, we tested whether a separate cochlear otosclerosis component existed. Corrected thresholds were then analyzed separately for progression of cochlear otosclerosis. The study population consisted of 35% men and 65% women (mean age, 44 yr). The mean pure-tone average at 0.5, 1, and 2 kHz was 57 dB hearing level. Multiple linear regression analysis showed significant progression for all measured AC and BC thresholds. The average annual threshold deterioration for AC was 0.45 dB/yr, and for BC it was 0.37 dB/yr. The average annual gap expansion was 0.08 dB/yr. The BC thresholds corrected for the Carhart effect and presbyacusis remained significantly different from zero, but showed progression only at 2 kHz. The preoperative audiological profile of otosclerosis is described. There is a significant sensorineural component in patients with otosclerosis planned for stapedotomy, which is worse than age-related hearing loss by itself. Deterioration rates of AC and BC thresholds have been reported, which can be helpful in clinical practice and might also guide the characterization of allegedly different phenotypes for familial and sporadic otosclerosis.
Bogen, Kenneth T
2016-03-01
To improve U.S. Environmental Protection Agency (EPA) dose-response (DR) assessments for noncarcinogens and for nonlinear mode of action (MOA) carcinogens, the 2009 NRC Science and Decisions Panel recommended that the adjustment-factor approach traditionally applied to these endpoints be replaced by a new default assumption that both endpoints have linear-no-threshold (LNT) population-wide DR relationships. The panel claimed this new approach is warranted because population DR is LNT when any new dose adds to a background dose that explains background levels of risk, and/or when there is substantial interindividual heterogeneity in susceptibility in the exposed human population. Mathematically, however, the first claim is either false or effectively meaningless, and the second claim is false. Any dose- and population-response relationship that is statistically consistent with an LNT relationship may instead be an additive mixture of just two quasi-threshold DR relationships, which jointly exhibit low-dose S-shaped, quasi-threshold nonlinearity just below the lower end of the observed "linear" dose range. In this case, LNT extrapolation would necessarily overestimate increased risk by increasingly large relative magnitudes at diminishing values of above-background dose. The fact that chemically induced apoptotic cell death occurs by unambiguously nonlinear, quasi-threshold DR mechanisms is apparent from recent data concerning this quintessential toxicity endpoint. The 2009 NRC Science and Decisions Panel claims and recommendations that default LNT assumptions be applied to DR assessment for noncarcinogens and nonlinear MOA carcinogens are therefore not justified either mathematically or biologically.
Optical ranked-order filtering using threshold decomposition
Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.
1990-01-01
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
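Threshold decomposition can be sketched in software: each binary layer is passed through the same linear moving-average filter, re-binarized by a point-to-point comparison against 0.5 (a majority vote over the window), and the layers are summed, which reproduces a median filter. A minimal 1-D sketch (the window width and test signal below are arbitrary):

```python
import numpy as np

def ranked_order_by_threshold_decomposition(signal, width=3):
    """Median filter via threshold decomposition: binary layers are each
    filtered with the same linear moving-average kernel, compared
    point-to-point against 0.5, and summed back together."""
    half = width // 2
    out = np.zeros_like(signal)
    for t in range(1, int(signal.max()) + 1):
        layer = (signal >= t).astype(float)
        # Linear, space-invariant step: moving average over the window.
        padded = np.pad(layer, half, mode="edge")
        avg = np.convolve(padded, np.ones(width) / width, mode="valid")
        # Point-to-point threshold comparison step.
        out += (avg > 0.5).astype(int)
    return out

x = np.array([3, 1, 4, 1, 5, 9, 2, 6])
print(ranked_order_by_threshold_decomposition(x))
```

Raising the comparison level toward 1 requires all window samples to be set, giving a minimum filter; lowering it toward 0 gives a maximum filter, which is the rank-selection flexibility the abstract describes.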
Optical ranked-order filtering using threshold decomposition
Allebach, J.P.; Ochoa, E.; Sweeney, D.W.
1987-10-09
A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
Engdahl, Bo; Tambs, Kristian; Borchgrevink, Hans M; Hoffman, Howard J
2005-01-01
This study aims to describe the association between otoacoustic emissions (OAEs) and pure-tone hearing thresholds (PTTs) in an unscreened adult population (N = 6415), to determine the efficiency by which TEOAEs and DPOAEs can identify ears with elevated PTTs, and to investigate whether a combination of DPOAE and TEOAE responses improves this performance. Associations were examined by linear regression analysis and ANOVA. Test performance was assessed by receiver operating characteristic (ROC) curves. The relation between OAEs and PTTs appeared curvilinear, with a moderate degree of non-linearity. Combining DPOAEs and TEOAEs improved performance. Test performance depended on the cut-off thresholds defining elevated PTTs, with optimal values between 25 and 45 dB HL depending on frequency and type of OAE measure. The unique constitution of the present large sample, which reflects the general adult population, makes these results applicable to population-based studies and screening programs.
White, J O; Vasilyev, A; Cahill, J P; Satyan, N; Okusaga, O; Rakuljic, G; Mungan, C E; Yariv, A
2012-07-02
The output of high power fiber amplifiers is typically limited by stimulated Brillouin scattering (SBS). An analysis of SBS with a chirped pump laser indicates that a chirp of 2.5 × 10^15 Hz/s could raise, by an order of magnitude, the SBS threshold of a 20-m fiber. A diode laser with a constant output power and a linear chirp of 5 × 10^15 Hz/s has been previously demonstrated. In a low-power proof-of-concept experiment, the threshold for SBS in a 6-km fiber is increased by a factor of 100 with a chirp of 5 × 10^14 Hz/s. A linear chirp will enable straightforward coherent combination of multiple fiber amplifiers, with electronic compensation of path length differences on the order of 0.2 m.
Ahmadpanah, J; Ghavi Hossein-Zadeh, N; Shadparvar, A A; Pakdel, A
2017-02-01
1. The objectives of the current study were to investigate the effect of the incidence rate (5%, 10%, 20%, 30% and 50%) of ascites syndrome (AS) on the expression of genetic characteristics for body weight at 5 weeks of age (BW5) and AS, and to compare different methods of genetic parameter estimation for these traits. 2. Based on stochastic simulation, a population with discrete generations was created in which random mating was used for 10 generations. Two methods, restricted maximum likelihood and a Bayesian approach via Gibbs sampling, were used for the estimation of genetic parameters. A bivariate model including maternal effects was used. The root mean square error (RMSE) for direct heritabilities was also calculated. 3. The results showed that when incidence rates of ascites increased from 5% to 30%, the heritability of AS increased from 0.013 and 0.005 to 0.110 and 0.162 for linear and threshold models, respectively. 4. Maternal effects were significant for both BW5 and AS. Genetic correlations decreased with increasing incidence rates of ascites in the population, from 0.678 and 0.587 at the 5% level of ascites to 0.393 and -0.260 at 50% occurrence for linear and threshold models, respectively. 5. The RMSE of direct heritability from true values for BW5 was greater based on a linear-threshold model compared with the linear model of analysis (0.0092 vs. 0.0015). The RMSE of direct heritability from true values for AS was greater based on a linear-linear model (1.21 vs. 1.14). 6. In order to rank birds for ascites incidence, it is recommended to use a threshold model, because it resulted in higher heritability estimates compared with the linear model, and BW5 could be one of the main components of selection goals.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends to not exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly in block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
Exploration of the psychophysics of a motion displacement hyperacuity stimulus.
Verdon-Roe, Gay Mary; Westcott, Mark C; Viswanathan, Ananth C; Fitzke, Frederick W; Garway-Heath, David F
2006-11-01
To explore the summation properties of a motion-displacement hyperacuity stimulus with respect to stimulus area and luminance, with the goal of applying the results to the development of a motion-displacement test (MDT) for the detection of early glaucoma. A computer-generated line stimulus was presented with displacements randomized between 0 and 40 minutes of arc (min arc). Displacement thresholds (50% seen) were compared for stimuli of equal area but different edge length (orthogonal to the direction of motion) at four retinal locations. Also, MDT thresholds were recorded at five values of Michelson contrast (25%-84%) for each of five line lengths (11-128 min arc) at a single nasal location (-27,3). Frequency-of-seeing (FOS) curves were generated and displacement thresholds and interquartile ranges (IQR, 25%-75% seen) determined by probit analysis. Equivalent displacement thresholds were found for stimuli of equal area but half the edge length. Elevations of thresholds and IQR were demonstrated as line length and contrast were reduced. Equivalent displacement thresholds were also found for stimuli of equivalent energy (stimulus area x [stimulus luminance - background luminance]), in accordance with Ricco's law. There was a linear relationship (slope -0.5) between log MDT threshold and log stimulus energy. Stimulus area, rather than edge length, determined displacement thresholds within the experimental conditions tested. MDT thresholds are linearly related to the square root of the total energy of the stimulus. A new law, the threshold energy-displacement (TED) law, is proposed to apply to MDT summation properties, giving the relationship T = K log E, where T is the MDT threshold, K is a constant, and E is the stimulus energy.
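The probit analysis of frequency-of-seeing data used above, yielding the 50%-seen threshold and the 25%-75% IQR, can be sketched by fitting a cumulative-Gaussian psychometric function. The displacement/proportion-seen values below are hypothetical, not data from the study.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Hypothetical frequency-of-seeing data: displacement (min arc) vs proportion seen.
displacement = np.array([2, 4, 6, 8, 10, 12, 14], dtype=float)
prop_seen    = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.99])

def fos(x, mu, sigma):
    """Cumulative-Gaussian psychometric function (probit model)."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(fos, displacement, prop_seen, p0=[8.0, 2.0])

threshold_50 = mu                                            # 50%-seen threshold
iqr = norm.ppf(0.75, mu, sigma) - norm.ppf(0.25, mu, sigma)  # 25%-75% spread
print(f"threshold = {threshold_50:.2f} min arc, IQR = {iqr:.2f} min arc")
```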
How much crosstalk can be allowed in a stereoscopic system at various grey levels?
NASA Astrophysics Data System (ADS)
Shestak, Sergey; Kim, Daesik; Kim, Yongie
2012-03-01
We have calculated a perceptual threshold of stereoscopic crosstalk on the basis of a mathematical model of human visual sensitivity. Instead of the linear model of just noticeable difference (JND) known as Weber's law, we applied Barten's nonlinear model. The predicted crosstalk threshold varies with the background luminance. The calculated threshold values are in reasonable agreement with known experimental data. We calculated the perceptual threshold of crosstalk for various combinations of applied grey levels. This result can be applied to the assessment of grey-to-grey crosstalk compensation. Further computational analysis of the applied model predicts an increase of the displayable image contrast with reduction of the maximum displayable luminance.
Cloherty, Shaun L; Hietanen, Markus A; Suaning, Gregg J; Ibbotson, Michael R
2010-01-01
We performed optical intrinsic signal imaging of cat primary visual cortex (Areas 17 and 18) while delivering bipolar electrical stimulation to the retina by way of a supra-choroidal electrode array. Using a general linear model (GLM) analysis we identified statistically significant (p < 0.01) activation in a localized region of cortex following supra-threshold electrical stimulation at a single retinal locus. These results (1) demonstrate that intrinsic signal imaging combined with linear model analysis provides a powerful tool for assessing cortical responses to prosthetic stimulation, and (2) confirm that supra-choroidal electrical stimulation can achieve localized activation of the cortex consistent with focal activation of the retina.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawton, L.J.; Mihalich, J.P.
1995-12-31
The chlorinated alkenes 1,1-dichloroethene (1,1-DCE), tetrachloroethene (PCE), and trichloroethene (TCE) are common environmental contaminants found in soil and groundwater at hazardous waste sites. Recent assessment of data from epidemiology and mechanistic studies indicates that although exposure to 1,1-DCE, PCE, and TCE causes tumor formation in rodents, it is unlikely that these chemicals are carcinogenic to humans. Nevertheless, many state and federal agencies continue to regulate these compounds as carcinogens through the use of the linearized multistage model and resulting cancer slope factor (CSF). The available data indicate that 1,1-DCE, PCE, and TCE should be assessed using a threshold (i.e., reference dose [RfD]) approach rather than a CSF. This paper summarizes the available metabolic, toxicologic, and epidemiologic data that question the use of the linearized multistage model (and CSF) for extrapolation from rodents to humans. A comparative analysis of potential risk-based cleanup goals (RBGs) for these three compounds in soil is presented for a hazardous waste site. Goals were calculated using the USEPA CSFs and using a threshold (i.e., RfD) approach. Costs associated with remediation activities required to meet each set of these cleanup goals are presented and compared.
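The practical difference between the two regulatory approaches can be sketched by computing soil cleanup goals both ways for a generic residential soil-ingestion scenario. All exposure parameters and the example CSF and RfD values below are illustrative assumptions, not the values used in the paper.

```python
# Generic residential soil-ingestion exposure assumptions (illustrative only).
TR   = 1e-6         # target excess cancer risk
THQ  = 1.0          # target hazard quotient
BW   = 70.0         # body weight, kg
AT_c = 70 * 365.0   # averaging time for carcinogens, days (lifetime)
AT_n = 30 * 365.0   # averaging time for noncarcinogens, days
EF   = 350.0        # exposure frequency, days/yr
ED   = 30.0         # exposure duration, yr
IR   = 100e-6       # soil ingestion rate, kg/day (100 mg/day)

def rbg_cancer(csf):
    """Soil cleanup goal (mg/kg) from a cancer slope factor, (mg/kg-day)^-1."""
    return TR * BW * AT_c / (csf * IR * EF * ED)

def rbg_threshold(rfd):
    """Soil cleanup goal (mg/kg) from a reference dose (mg/kg-day)."""
    return THQ * rfd * BW * AT_n / (IR * EF * ED)

# Illustrative CSF and RfD inputs (hypothetical, order-of-magnitude only):
print(f"CSF-based goal: {rbg_cancer(0.011):.0f} mg/kg")
print(f"RfD-based goal: {rbg_threshold(0.0005):.0f} mg/kg")
```

The two formulas can differ by orders of magnitude depending on the toxicity values chosen, which is what drives the remediation-cost comparison in the paper.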
AN EVALUATION OF HEURISTICS FOR THRESHOLD-FUNCTION TEST-SYNTHESIS,
Linear programming offers the most attractive procedure for testing and obtaining optimal threshold gate realizations for functions generated in... The design of the experiments may be of general interest to students of automatic problem solving; the results should be of interest in threshold logic and linear programming. (Author)
Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region
NASA Astrophysics Data System (ADS)
Khan, Muhammad Yousaf; Mittnik, Stefan
2018-01-01
In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically consider threshold model specifications with an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with an external threshold variable specification produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
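The core of a SETAR fit is a grid search over candidate thresholds, with each regime estimated by least squares. A minimal sketch on simulated data (not the Hindu Kush catalogue, and using an internal threshold variable for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-regime SETAR(1) series with true threshold r = 0.
a_low, a_high, r_true, n = 0.8, -0.5, 0.0, 2000
y = np.zeros(n)
for t in range(1, n):
    a = a_low if y[t - 1] <= r_true else a_high
    y[t] = a * y[t - 1] + rng.normal(scale=0.5)

def fit_setar(y, candidates):
    """Grid-search the threshold; fit each regime by least squares."""
    best = None
    x, z = y[:-1], y[1:]
    for r in candidates:
        low = x <= r
        if low.sum() < 20 or (~low).sum() < 20:
            continue  # skip thresholds leaving a near-empty regime
        sse, coefs = 0.0, []
        for mask in (low, ~low):
            a = (x[mask] @ z[mask]) / (x[mask] @ x[mask])  # no-intercept OLS
            sse += ((z[mask] - a * x[mask]) ** 2).sum()
            coefs.append(a)
        if best is None or sse < best[0]:
            best = (sse, r, coefs)
    return best[1], best[2]

r_hat, (a1_hat, a2_hat) = fit_setar(y, np.quantile(y, np.linspace(0.1, 0.9, 81)))
print(f"threshold ~ {r_hat:.2f}, regime slopes ~ {a1_hat:.2f}, {a2_hat:.2f}")
```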
Analysis of Waveform Retracking Methods in Antarctic Ice Sheet Based on CRYOSAT-2 Data
NASA Astrophysics Data System (ADS)
Xiao, F.; Li, F.; Zhang, S.; Hao, W.; Yuan, L.; Zhu, T.; Zhang, Y.; Zhu, C.
2017-09-01
Satellite altimetry plays an important role in many geoscientific and environmental studies of the Antarctic ice sheet. The ranging accuracy degrades near coasts and over non-ocean surfaces due to waveform contamination. A post-processing technique, known as waveform retracking, can be used to retrack the corrupted waveform and in turn improve the ranging accuracy. In 2010, the CryoSat-2 satellite was launched with the Synthetic aperture Interferometric Radar ALtimeter (SIRAL) onboard. Satellite altimetry waveform retracking methods are discussed in the paper. Six retracking methods, including the OCOG method, the threshold method with 10%, 25% and 50% threshold levels, and the linear and exponential 5-β parametric methods, are used to retrack CryoSat-2 waveforms over the transect from Zhongshan Station to Dome A. The results show that the threshold retracker performs best when both the waveform retracking success rate and the RMS of the retracking distance corrections are considered. The linear 5-β parametric retracker gives the best retracking precision, but cannot make full use of the waveform data.
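A threshold retracker of the kind compared here can be sketched in a few lines: estimate the noise floor and amplitude, then locate the gate where the leading edge first crosses the chosen threshold level, with sub-gate linear interpolation. The Brown-like waveform below is synthetic; using the waveform maximum for the amplitude is a simplification (OCOG amplitude is a common alternative).

```python
import numpy as np

def threshold_retrack(waveform, level=0.5):
    """Threshold retracker: fractional gate where the leading edge first
    crosses `level` x amplitude above the noise floor."""
    noise = waveform[:5].mean()            # thermal noise from the first gates
    amp = waveform.max() - noise           # simplified amplitude estimate
    target = noise + level * amp
    i = np.nonzero(waveform >= target)[0][0]  # first gate at/above target power
    if i == 0:
        return 0.0
    # linear interpolation between gate i-1 and gate i
    return (i - 1) + (target - waveform[i - 1]) / (waveform[i] - waveform[i - 1])

# Synthetic Brown-like waveform: flat noise floor, then a leading-edge ramp
# whose mid-power point sits at gate 60.
gates = np.arange(128)
wf = 0.05 + 0.95 / (1 + np.exp(-(gates - 60) / 3.0))
gate = threshold_retrack(wf, level=0.5)
print(f"retracked gate = {gate:.3f}")
```

The retracking correction is then the offset between this gate and the nominal tracking gate, converted to range.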
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Thresholds for the perception of whole-body linear sinusoidal motion in the horizontal plane
NASA Technical Reports Server (NTRS)
Mah, Robert W.; Young, Laurence R.; Steele, Charles R.; Schubert, Earl D.
1989-01-01
An improved linear sled has been developed to provide precise motion stimuli without generating perceptible extraneous motion cues (a noiseless environment). A modified adaptive forced-choice method was employed to determine perceptual thresholds to whole-body linear sinusoidal motion in 25 subjects. Thresholds for the detection of movement in the horizontal plane were found to be lower than those reported previously. At frequencies of 0.2 to 0.5 Hz, thresholds were shown to be independent of frequency, while at frequencies of 1.0 to 3.0 Hz, sensitivity decreased with increasing frequency, indicating that the perceptual process is not sensitive to the rate of change of acceleration of the motion stimulus. The results suggest that the perception of motion behaves as an integrating accelerometer with a bandwidth of at least 3 Hz.
The risk equivalent of an exposure to-, versus a dose of radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, V.P.
The long-term potential carcinogenic effects of low-level exposure (LLE) are addressed. The principal point discussed is the linear, no-threshold dose-response curve. That the linear no-threshold, or proportional, relationship is widely used is seen in the way values for cancer risk coefficients are expressed: in terms of new cases, per million persons exposed, per year, per unit exposure or dose. This implies that the underlying relationship is proportional, i.e., "linear, without threshold". 12 refs., 9 figs., 1 tab.
Temporal discrimination threshold with healthy aging.
Ramos, Vesper Fe Marie Llaneza; Esquenazi, Alina; Villegas, Monica Anne Faye; Wu, Tianxia; Hallett, Mark
2016-07-01
The temporal discrimination threshold (TDT) is the shortest interstimulus interval at which a subject can perceive successive stimuli as separate. To investigate the effects of aging on TDT, we studied tactile TDT using the method of limits with 120% of sensory threshold in each hand for each of 100 healthy volunteers, equally divided among men and women, across 10 age groups, from 18 to 79 years. Linear regression analysis showed that age was significantly related to left-hand mean, right-hand mean, and mean of 2 hands with R-square equal to 0.08, 0.164, and 0.132, respectively. Reliability analysis indicated that the 3 measures had fair-to-good reliability (intraclass correlation coefficient: 0.4-0.8). We conclude that TDT is affected by age and has fair-to-good reproducibility using our technique. Published by Elsevier Inc.
Ocean rogue waves and their phase space dynamics in the limit of a linear interference model.
Birkholz, Simon; Brée, Carsten; Veselić, Ivan; Demircan, Ayhan; Steinmeyer, Günter
2016-10-12
We reanalyse the probability for formation of extreme waves using the simple model of linear interference of a finite number of elementary waves with fixed amplitude and random phase fluctuations. Under these model assumptions no rogue waves appear when less than 10 elementary waves interfere with each other. Above this threshold rogue wave formation becomes increasingly likely, with appearance frequencies that may even exceed long-term observations by an order of magnitude. For estimation of the effective number of interfering waves, we suggest the Grassberger-Procaccia dimensional analysis of individual time series. For the ocean system, it is further shown that the resulting phase space dimension may vary, such that the threshold for rogue wave formation is not always reached. Time series analysis as well as the appearance of particular focusing wind conditions may enable an effective forecast of such rogue-wave prone situations. In particular, extracting the dimension from ocean time series allows much more specific estimation of the rogue wave probability.
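The ~10-wave threshold follows from a back-of-the-envelope bound: with n unit-amplitude elementary waves, the largest possible crest-to-trough height (all waves momentarily in phase) is 2n, while the significant wave height is Hs = 4σ with variance σ² = n/2, so the maximum abnormality index H/Hs is √(n/2) and only exceeds the usual rogue criterion of 2 once n > 8. A quick check of that bound:

```python
import math

def max_abnormality_index(n_waves: int) -> float:
    """Largest possible H/Hs for linear interference of n unit-amplitude,
    random-phase elementary waves; simplifies to sqrt(n/2)."""
    h_max = 2.0 * n_waves                     # all waves in phase
    h_s = 4.0 * math.sqrt(n_waves / 2.0)      # Hs = 4*sigma, sigma^2 = n/2
    return h_max / h_s

for n in (5, 8, 10, 50):
    print(n, round(max_abnormality_index(n), 2))
```

With the rogue criterion H/Hs > 2, the bound first exceeds 2 between n = 8 and n = 10, consistent with the ~10-wave threshold reported in the abstract.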
Automatic threshold optimization in nonlinear energy operator based spike detection.
Malik, Muhammad H; Saeed, Maryam; Kamboh, Awais M
2016-08-01
In neural spike sorting systems, the performance of the spike detector has to be maximized because it affects the performance of all subsequent blocks. The non-linear energy operator (NEO) is a popular spike detector due to its detection accuracy and its hardware-friendly architecture. However, it involves a thresholding stage whose value is usually approximated and is thus not optimal. This approximation deteriorates performance in real-time systems, where signal-to-noise ratio (SNR) estimation is a challenge, especially at lower SNRs. In this paper, we propose an automatic and robust threshold calculation method using an empirical gradient technique. The method is tested on two different datasets. The results show that our optimized threshold improves detection accuracy for both high-SNR and low-SNR signals. Boxplots are presented that provide a statistical analysis of the improvements in accuracy; for instance, the 75th percentile was at 98.7% for the optimized NEO threshold versus 93.5% for the traditional NEO threshold.
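A minimal NEO detector with the conventional scaled-mean threshold (the baseline that the paper's empirical-gradient method improves on) can be sketched as follows; the synthetic trace, the Bartlett smoothing window and the scaling constant C = 8 are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trace: low-amplitude Gaussian noise plus five biphasic spikes.
n = 5000
x = rng.normal(scale=0.01, size=n)
spike_times = [500, 1500, 2500, 3500, 4500]
spike = np.concatenate([np.linspace(0, 1, 5), np.linspace(1, -0.4, 6)[1:]])
for t in spike_times:
    x[t:t + spike.size] += spike

# Non-linear energy operator: psi[k] = x[k]^2 - x[k-1]*x[k+1]
psi = np.zeros_like(x)
psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]

# Bartlett-window smoothing, as is conventional for NEO detection.
w = np.bartlett(7)
psi = np.convolve(psi, w / w.sum(), mode="same")

threshold = 8.0 * psi.mean()        # scaled-mean NEO threshold, C = 8
above = psi > threshold
n_detected = int(np.sum(above[1:] & ~above[:-1]))  # rising edges = spikes
print(f"detected {n_detected} spikes")
```

The weakness addressed by the paper is visible here: the constant C is fixed a priori, so the threshold is only approximately right once the SNR changes.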
Padmavathi, Chintalapati; Katti, Gururaj; Sailaja, V.; Padmakumari, A.P.; Jhansilakshmi, V.; Prabhakar, M.; Prasad, Y.G.
2013-01-01
The rice leaf folder, Cnaphalocrocis medinalis Guenée (Lepidoptera: Pyralidae), is a predominant foliage feeder in all rice ecosystems. The objective of this study was to examine the development of the leaf folder at 7 constant temperatures (18, 20, 25, 30, 32, 34, 35 °C) and to estimate the temperature thresholds and thermal constants needed for forecasting models based on heat accumulation units. The developmental periods of the different stages of the rice leaf folder decreased as temperature increased from 18 to 34 °C. Lower threshold temperatures of 11.0, 10.4, 12.8, and 11.1 °C, and thermal constants of 69, 270, 106, and 455 degree-days, were estimated by linear regression analysis for egg, larva, pupa, and total development, respectively. Based on the thermodynamic non-linear SSI model, intrinsic optimum temperatures for the development of egg, larva, and pupa were estimated at 28.9, 25.1 and 23.7 °C, respectively. The upper and lower threshold temperatures were estimated as 36.4 °C and 11.2 °C for total development, the temperatures at which the rate-controlling enzyme is half active and half inactive. These estimated thermal thresholds and degree-days can be used to predict leaf folder activity in the field for effective management. PMID:24205891
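The linear part of the estimation, the lower threshold T0 = -a/b and the thermal constant K = 1/b from a regression of development rate on temperature, can be sketched as follows. The rate data are hypothetical values chosen to be roughly consistent with the total-development estimates reported (T0 ~ 11 °C, K ~ 455 DD), not the study's measurements.

```python
import numpy as np

# Development rate (1/days) of a hypothetical insect stage measured at
# constant temperatures within the linear range.
temp = np.array([18.0, 20.0, 25.0, 30.0, 32.0])
rate = np.array([0.0154, 0.0211, 0.0309, 0.0418, 0.0462])

# Linear degree-day model: rate = b*T + a, i.e. rate = (T - T0) / K
b, a = np.polyfit(temp, rate, 1)
t0 = -a / b        # lower developmental threshold (deg C)
k = 1.0 / b        # thermal constant (degree-days)
print(f"T0 = {t0:.1f} C, K = {k:.0f} DD")
```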
NASA Astrophysics Data System (ADS)
Varghese, Babu; Bonito, Valentina; Turco, Simona; Verhagen, Rieko
2016-03-01
Laser induced optical breakdown (LIOB) is a non-linear absorption process leading to plasma formation at locations where the threshold irradiance for breakdown is surpassed. In this paper we experimentally demonstrate the influence of polarization and absorption on the laser-induced breakdown threshold in transparent, absorbing and scattering phantoms made from water suspensions of polystyrene microspheres. We demonstrate that radially polarized light yields a lower irradiance threshold for creating optical breakdown than linearly polarized light. We also demonstrate that the thermal initiation pathway used for generating seed electrons results in a lower irradiance threshold than the multiphoton initiation pathway.
Agronomic threshold of soil available phosphorus in grey desert soils in Xinjiang, China
NASA Astrophysics Data System (ADS)
Wang, B.; Liu, H.; Hao, X. Y.; Wang, X. H.; Sun, J. S.; Li, J. M.; Ma, Y. B.
2016-08-01
Based on 23 years of data, yields of maize, wheat and cotton were modelled under different fertilizer management practices and at different levels of available phosphorus (Olsen-P) in soil. Three types of threshold models were used, namely linear-linear (LL), linear-plateau (LP), and Mitscherlich-type exponential (Exp). The agronomic thresholds of available phosphorus were 25.4 mg kg-1 for cotton, 14.8 mg kg-1 for wheat, 13.1 mg kg-1 for maize and 25.4 mg kg-1 for the grey desert soil regions of Xinjiang in China as a whole.
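A linear-plateau (LP) fit of yield against Olsen-P can be sketched with a hinge function: yield rises linearly up to the agronomic threshold and is flat beyond it. The data and the true threshold of 15 mg/kg below are simulated for illustration, not the Xinjiang observations.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_plateau(x, a, b, x0):
    """Yield rises linearly with Olsen-P up to the threshold x0, then plateaus."""
    return a + b * np.minimum(x, x0)

rng = np.random.default_rng(2)
olsen_p = np.linspace(2, 40, 60)                      # soil Olsen-P, mg/kg
true = linear_plateau(olsen_p, 2.0, 0.4, 15.0)        # threshold at 15 mg/kg
yield_obs = true + rng.normal(scale=0.2, size=olsen_p.size)

p, _ = curve_fit(linear_plateau, olsen_p, yield_obs, p0=[1.0, 0.5, 20.0])
print(f"agronomic threshold ~ {p[2]:.1f} mg/kg")
```

Fitting the LL and exponential variants differs only in the model function passed to `curve_fit`.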
Blain, G; Meste, O; Bouchard, T; Bermon, S
2005-07-01
To test whether ventilatory thresholds, measured during an exercise test, could be assessed using time varying analysis of respiratory sinus arrhythmia frequency (fRSA). Fourteen sedentary subjects and 12 endurance athletes performed a graded and maximal exercise test on a cycle ergometer: initial load 75 W (sedentary subjects) and 150 W (athletes), increments 37.5 W/2 min. fRSA was extracted from heart period series using an evolutive model. First (TV1) and second (TV2) ventilatory thresholds were determined from the time course curves of ventilation and ventilatory equivalents for O2 and CO2. fRSA was accurately extracted from all recordings and positively correlated with respiratory frequency (r = 0.96 (0.03), p<0.01). In 21 of the 26 subjects, two successive non-linear increases were determined in fRSA, defining the first (TRSA1) and second (TRSA2) fRSA thresholds. When expressed as a function of power, TRSA1 and TRSA2 were not significantly different from and closely linked to TV1 (r = 0.99, p<0.001) and TV2 (r = 0.99, p<0.001), respectively. In the five remaining subjects, only one non-linear increase was observed, close to TV2. Significant differences (p<0.04) were found between the athlete and sedentary groups when TRSA1 and TRSA2 were expressed in terms of absolute and relative power and percentage of maximal aerobic power. In the sedentary group, TRSA1 and TRSA2 were 150.3 (18.7) W and 198.3 (28.8) W, respectively, whereas in the athlete group TRSA1 and TRSA2 were 247.3 (32.8) W and 316.0 (28.8) W, respectively. Dynamic analysis of fRSA provides a useful tool for identifying ventilatory thresholds during a graded and maximal exercise test in sedentary subjects and athletes.
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia
2018-04-01
This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes a rainfall return period analysis used to identify the critical rainfall combination (cumulated rainfall-duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and power-law regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events, established in the zone between the lower-limit threshold and the upper-limit threshold, are much more informative, as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
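The ROC part of the calibration can be sketched as follows: for a power-law threshold E = a·D^b, an alarm is issued when the cumulated event rainfall exceeds the threshold, and each candidate threshold yields one (FPR, TPR) point. The rainfall events and the triggering rule below are simulated assumptions, not the Lisbon data.

```python
import numpy as np

def roc_point(duration, rainfall, landslide, a, b):
    """TPR/FPR of a power-law rainfall threshold E = a * D**b."""
    alarm = rainfall > a * duration ** b
    tpr = np.sum(alarm & landslide) / max(landslide.sum(), 1)
    fpr = np.sum(alarm & ~landslide) / max((~landslide).sum(), 1)
    return tpr, fpr

# Hypothetical rainfall events: duration (days) and cumulated rainfall (mm).
rng = np.random.default_rng(3)
dur = rng.integers(1, 30, size=400)
rain = rng.gamma(2.0, 12.0, size=400) * dur ** 0.4
# assume events well above E = 30*D^0.4 tend to trigger landslides
landslide = rain > 30.0 * dur ** 0.4 * rng.uniform(0.8, 1.6, size=400)

results = {a: roc_point(dur, rain, landslide, a, 0.4) for a in (20.0, 30.0, 40.0)}
for a, (tpr, fpr) in results.items():
    print(f"a={a:.0f}: TPR={tpr:.2f}, FPR={fpr:.2f}")
```

Lowering the threshold intercept trades false alarms for detections; the calibrated threshold is chosen from the resulting ROC curve.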
Analysis of the instability underlying electrostatic suppression of the Leidenfrost state
NASA Astrophysics Data System (ADS)
Shahriari, Arjang; Das, Soumik; Bahadur, Vaibhav; Bonnecaze, Roger T.
2017-03-01
A liquid droplet on a hot solid can generate enough vapor to prevent its contact with the surface and reduce the rate of heat transfer, the so-called Leidenfrost effect. We show theoretically and experimentally that for a sufficiently high electrostatic potential on the droplet, the formation of the vapor layer is suppressed. The interplay of the destabilizing electrostatic force with the stabilizing capillary force and evaporation determines the minimum, or threshold, voltage required to suppress the Leidenfrost effect. Linear stability theory accurately predicts threshold voltages for droplets of different sizes and at varying temperatures.
Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong
2018-07-01
This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.
Poleti, Marcelo Lupion; Fernandes, Thais Maria Freire; Pagin, Otávio; Moretti, Marcela Rodrigues; Rubira-Bullen, Izabel Regina Fischer
2016-01-01
The aim of this in vitro study was to evaluate the reliability and accuracy of linear measurements on three-dimensional (3D) surface models obtained by standard pre-set thresholds in two segmentation software programs. Ten mandibles with 17 silica markers were scanned at 0.3-mm voxels in the i-CAT Classic (Imaging Sciences International, Hatfield, PA, USA). Twenty linear measurements were carried out twice by two observers on the 3D surface models: in Dolphin Imaging 11.5 (Dolphin Imaging & Management Solutions, Chatsworth, CA, USA), using two filters (Translucent and Solid-1), and in InVesalius 3.0.0 (Centre for Information Technology Renato Archer, Campinas, SP, Brazil). The physical measurements were made twice by another observer using a digital caliper on the dry mandibles. Excellent intra- and inter-observer reliability for the markers, physical measurements, and 3D surface models was found (intra-class correlation coefficient (ICC) and Pearson's r ≥ 0.91). The linear measurements on 3D surface models in the Dolphin and InVesalius software programs were accurate (Dolphin Solid-1 > InVesalius > Dolphin Translucent). The highest absolute and percentage errors were obtained for the variables R1-R1 (1.37 mm) and MF-AC (2.53%) in the Dolphin Translucent and InVesalius software, respectively. Linear measurements on 3D surface models obtained by standard pre-set thresholds in the Dolphin and InVesalius software programs are reliable and accurate compared with physical measurements. Studies that evaluate the reliability and accuracy of 3D models are necessary to ensure error predictability and to establish diagnosis, treatment plan, and prognosis in a more realistic way.
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in the presence of small phase errors. Such an approximation is reasonable in most propagation channels. However, in the presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play they experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded in a commercial multiconstellation GNSS receiver were analyzed in the presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations, although cycle slips only occurred during signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1); L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in the presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, the lock detector thresholds, possible cycle slips in the tracking PLL and the accuracy of the observables (i.e. the error propagation onto the observables stage).
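The root of the problem is the small-phase-error linearization itself: for a sinusoidal phase-detector characteristic, sin(φ) is replaced by φ, and the approximation error grows quickly once scintillation drives the phase error large. A quick numeric check of that breakdown:

```python
import math

# Relative error of the linearization sin(phi) ~ phi at increasing phase errors.
errs = {}
for phi_deg in (5, 15, 30, 60, 90):
    phi = math.radians(phi_deg)
    errs[phi_deg] = abs(phi - math.sin(phi)) / math.sin(phi)
    print(f"phase error {phi_deg:2d} deg: linearization error {errs[phi_deg]:6.1%}")
```

Below a few degrees the linear loop model is essentially exact; near 90 degrees, where cycle slips become possible, it is off by more than half.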
A critique of the use of indicator-species scores for identifying thresholds in species responses
Cuffney, Thomas F.; Qian, Song S.
2013-01-01
Identification of ecological thresholds is important for both theoretical and applied ecology. Recently, Baker and King (2010; King and Baker 2010) proposed a method, threshold indicator taxa analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar to one another than the actual thresholds were. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses that, when coupled with the inability of TITAN to identify thresholds accurately and consistently, do not support the aggregation of individual species thresholds into a community threshold.
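The split-point problem mentioned above can be illustrated with a toy grid search that places a threshold where the difference in group means is largest. This is a simplified stand-in for the change-point step, not TITAN's indicator-score and permutation machinery.

```python
def best_split(x, y):
    """Grid-search the split point that maximizes the squared difference in
    group means -- a toy stand-in for the split-point problem that
    change-point methods must solve."""
    pairs = sorted(zip(x, y))
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    best, best_stat = None, -1.0
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        stat = (sum(left) / len(left) - sum(right) / len(right)) ** 2
        if stat > best_stat:
            best_stat, best = stat, (xs[i - 1] + xs[i]) / 2
    return best

# A step response jumping between x = 4 and x = 5 is split at 4.5
print(best_split(list(range(10)), [0] * 5 + [10] * 5))  # → 4.5
```

As the abstract notes, such a search recovers step-function thresholds reliably; responses that change slope rather than level are exactly where this style of statistic becomes unreliable.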
Ding, Changfeng; Li, Xiaogang; Zhang, Taolin; Ma, Yibing; Wang, Xingxiang
2014-10-01
Soil environmental quality standards for heavy metals in farmland should be established considering both their effects on crop yield and their accumulation in the edible part. A greenhouse experiment was conducted to investigate the effects of chromium (Cr) on biomass production and Cr accumulation in carrot plants grown in a wide range of soils. The results revealed that carrot yield decreased significantly in 18 of the 20 soils when Cr was added at the level of the soil environmental quality standard of China. The Cr content of carrots grown in the five soils with pH > 8.0 exceeded the maximum allowable level (0.5 mg kg(-1)) according to the Chinese General Standard for Contaminants in Foods. The relationship between carrot Cr concentration and soil pH was well fitted (R(2) = 0.70, P < 0.0001) by a linear-linear segmented regression model. The addition of Cr to soil thus affected carrot yield before food quality. The major soil factors controlling Cr phytotoxicity and the corresponding prediction models were further identified and developed using path analysis and stepwise multiple linear regression analysis. Soil Cr thresholds that protect against phytotoxicity while ensuring food safety were then derived on the basis of a 10 percent yield reduction. Copyright © 2014 Elsevier Inc. All rights reserved.
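A linear-linear segmented (hinge) regression like the one fitted for carrot Cr versus soil pH can be sketched by grid-searching the breakpoint and fitting ordinary least squares at each candidate. The data below are synthetic, with an assumed break at pH 8; this is an illustration of the model class, not the paper's fit.

```python
import numpy as np

def fit_segmented(x, y, breakpoints):
    """Fit y = b0 + b1*x + b2*max(x - c, 0) for each candidate breakpoint c,
    keeping the c with the smallest residual sum of squares."""
    best = None
    for c in breakpoints:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, float(c), beta)
    return best[1], best[2]

# Synthetic noiseless data with an assumed break at pH 8
x = np.linspace(4.0, 9.0, 60)
y = 1.0 + 0.2 * x + 1.5 * np.maximum(x - 8.0, 0.0)
c, beta = fit_segmented(x, y, np.linspace(5.0, 8.8, 39))
print(round(c, 2))  # → 8.0
```

The hinge parameterization keeps the two line segments continuous at the breakpoint, which is the defining feature of a linear-linear segmented model.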
Relative Velocity as a Metric for Probability of Collision Calculations
NASA Technical Reports Server (NTRS)
Frigm, Ryan Clayton; Rohrbaugh, Dave
2008-01-01
Collision risk assessment metrics, such as the probability of collision calculation, are based largely on assumptions about the interaction of two objects during their close approach. Specifically, the approach to probabilistic risk assessment can be performed more easily if the relative trajectories of the two close approach objects are assumed to be linear during the encounter. It is shown in this analysis that one factor in determining linearity is the relative velocity of the two encountering bodies, in that the assumption of linearity breaks down at low relative approach velocities. The first part of this analysis is the determination of the relative velocity threshold below which the assumption of linearity becomes invalid. The second part is a statistical study of conjunction interactions between representative asset spacecraft and the associated debris field environment to determine the likelihood of encountering a low relative velocity close approach. This analysis is performed for both the LEO and GEO orbit regimes. Both parts comment on the resulting effects to collision risk assessment operations.
Ertl, Peter; Kruse, Annika; Tilp, Markus
2016-10-01
The aim of the current paper was to systematically review the relevant existing electromyographic threshold concepts in the literature. The electronic databases MEDLINE and SCOPUS were screened for papers published between January 1980 and April 2015 including the keywords: neuromuscular fatigue threshold, anaerobic threshold, electromyographic threshold, muscular fatigue, aerobic-anaerobic transition, ventilatory threshold, exercise testing, and cycle-ergometer. 32 articles were assessed with regard to their electromyographic methodologies, description of results, statistical analysis and test protocols. Only one article was of very good quality, 21 were of good quality, and two articles were of very low quality. The review process revealed that: (i) there is consistent evidence of one or two non-linear increases of EMG that might reflect the additional recruitment of motor units (MU) or different fiber types during fatiguing cycle-ergometer exercise, (ii) most studies reported no statistically significant difference between electromyographic and metabolic thresholds, (iii) one-minute protocols with increments between 10 and 25 W appear most appropriate to detect muscular thresholds, (iv) threshold detection from the vastus medialis, vastus lateralis, and rectus femoris is recommended, and (v) there is great variety in study protocols, measurement techniques, and data processing. Therefore, we recommend further research and standardization in the detection of EMG thresholds (EMGTs). Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)
2004-01-01
A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data and linear spot segments are identified from the threshold pixel data selected. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value is multiplied by that pixel's x-location).
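The run-length encoding of thresholded pixels described in the patent abstract can be sketched for a single image row as follows. The tuple layout (first, last, sum, weighted sum) is an illustrative choice mirroring the quantities named in the abstract, not the patented format.

```python
def linear_segments(row, threshold):
    """Run-length encode one image row: for each run of pixels at or above
    the threshold, record (first, last, sum, weighted sum), where the
    weighted sum multiplies each pixel value by its x-location."""
    segments, start = [], None
    row = list(row) + [threshold - 1]          # sentinel closes a trailing run
    for x, v in enumerate(row):
        if v >= threshold and start is None:
            start = x
        elif v < threshold and start is not None:
            seg = row[start:x]
            segments.append((start, x - 1, sum(seg),
                             sum(i * p for i, p in enumerate(seg, start))))
            start = None
    return segments

row = [0, 9, 9, 0, 0, 8, 0]
print(linear_segments(row, threshold=5))  # → [(1, 2, 18, 27), (5, 5, 8, 40)]
```

Storing only these per-segment summaries, rather than every pixel, is what makes frame-to-frame comparison of segment endpoints cheap for tracking.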
NASA Technical Reports Server (NTRS)
Pain, Bedabrata; Yang, Guang; Ortiz, Monico; Wrigley, Christopher; Hancock, Bruce; Cunningham, Thomas
2000-01-01
Noise in photodiode-type CMOS active pixel sensors (APS) is primarily due to the reset (kTC) noise at the sense node, since it is difficult to implement in-pixel correlated double sampling for a 2-D array. The signal integrated on the photodiode sense node (SENSE) is calculated by measuring the difference between the voltages on the column bus (COL) before and after the reset (RST) is pulsed. Lower than kTC noise can be achieved with photodiode-type pixels by employing the "soft-reset" technique. Soft-reset refers to resetting with both the drain and gate of the n-channel reset transistor kept at the same potential, causing the sense node to be reset using sub-threshold MOSFET current. However, this lowering of noise is achieved only at the expense of higher image lag and low-light-level non-linearity. In this paper, we present an analysis to explain the noise behavior, show evidence of degraded performance under low light levels, and describe new pixels that eliminate non-linearity and lag without compromising noise.
Yang, Xujun; Li, Chuandong; Song, Qiankun; Chen, Jiyang; Huang, Junjian
2018-05-04
This paper addresses the stability and synchronization problems of fractional-order quaternion-valued neural networks (FQVNNs) with linear threshold neurons. On account of the non-commutativity of quaternion multiplication resulting from the Hamilton rules, the FQVNN models are separated into four real-valued neural network (RVNN) models. Consequently, the dynamic analysis of FQVNNs can be realized by investigating the real-valued ones. Based on the method of M-matrices, the existence and uniqueness of the equilibrium point of the FQVNNs are obtained without detailed proof. Afterwards, several sufficient criteria ensuring the global Mittag-Leffler stability of the unique equilibrium point of the FQVNNs are derived by applying the Lyapunov direct method, the theory of fractional differential equations, the theory of matrix eigenvalues, and some inequality techniques. Meanwhile, global Mittag-Leffler synchronization for the drive-response models of the addressed FQVNNs is investigated explicitly. Finally, simulation examples are designed to verify the feasibility and availability of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Zhang, Zhen; Xie, Xu; Chen, Xiliang; Li, Yuan; Lu, Yan; Mei, Shujiang; Liao, Yuxue; Lin, Hualiang
2016-01-01
Various meteorological factors have been associated with hand, foot and mouth disease (HFMD) among children; however, few studies have examined the non-linearity of, and interaction among, the meteorological factors. A generalized additive model with a log link allowing Poisson auto-regression and over-dispersion was applied to investigate the short-term effects of daily meteorological factors on childhood HFMD, with adjustment for potential confounding factors. We found positive effects of mean temperature and wind speed: the excess relative risk (ERR) was 2.75% (95% CI: 1.98%, 3.53%) for a one-degree increase in daily mean temperature on lag day 6, and 3.93% (95% CI: 2.16% to 5.73%) for a 1 m/s increase in wind speed on lag day 3. We found a non-linear effect of relative humidity, with a low threshold at 45% and a high threshold at 85%, between which there was a positive effect; the ERR was 1.06% (95% CI: 0.85% to 1.27%) for a 1 percent increase in relative humidity on lag day 5. No significant effect was observed for rainfall or sunshine duration. For the interactive effects, we found a weak additive interaction between mean temperature and relative humidity, and slightly antagonistic interactions between mean temperature and wind speed, and between relative humidity and wind speed, in the additive models, but the interactions were not statistically significant. This study suggests that mean temperature, relative humidity and wind speed might be risk factors for childhood HFMD in Shenzhen, and the interaction analysis indicates that these meteorological factors might have played their roles individually. Copyright © 2015 Elsevier B.V. All rights reserved.
Dynamics of chromatic visual system processing differ in complexity between children and adults.
Boon, Mei Ying; Suttle, Catherine M; Henry, Bruce I; Dain, Stephen J
2009-06-30
Measures of chromatic contrast sensitivity in children are lower than those of adults. This may be related to immaturities in signal processing at or near threshold. We have found that children's VEPs in response to low contrast supra-threshold chromatic stimuli are more intra-individually variable than those recorded from adults. Here, we report on linear and nonlinear analyses of chromatic VEPs recorded from children and adults. Two measures of signal-to-noise ratio are similar between the adults and children, suggesting that relatively high noise is unlikely to account for the poor clarity of negative and positive peak components in the children's VEPs. Nonlinear analysis indicates higher complexity of adults' than children's chromatic VEPs, at levels of chromatic contrast around and well above threshold.
Hsu, Ruey-Fen; Ho, Chi-Kung; Lu, Sheng-Nan; Chen, Shun-Sheng
2010-10-01
An objective investigation is needed to verify the existence and severity of hearing impairments resulting from work-related, noise-induced hearing loss in the arbitration of medicolegal cases. We investigated the accuracy of multiple-frequency auditory steady-state responses (Mf-ASSRs) in subjects with sensorineural hearing loss (SNHL) with and without occupational noise exposure. Cross-sectional study. Tertiary referral medical centre. Pure-tone audiometry and Mf-ASSRs were recorded in 88 subjects (34 patients had occupational noise-induced hearing loss [NIHL], 36 patients had SNHL without noise exposure, and 18 volunteers were normal controls). Inter- and intragroup comparisons were made. A predictive equation was derived using multiple linear regression analysis. ASSRs and pure-tone thresholds (PTTs) showed a strong correlation for all subjects (r = .77 to .94); the relationship is demonstrated by the derived equation. The differences between the ASSR and PTT were significantly higher for the NIHL group than for the subjects with non-noise-induced SNHL (p < .001). Mf-ASSR is a promising tool for objectively evaluating hearing thresholds. Its predictive value may be lower in subjects with occupational hearing loss. Regardless of carrier frequency, the severity of hearing loss affects the steady-state response. Moreover, the ASSR may assist in detecting noise-induced injury of the auditory pathway. A multiple linear regression equation that takes all effect factors into consideration was shown to predict thresholds accurately.
Vanderick, S; Troch, T; Gillon, A; Glorieux, G; Gengler, N
2014-12-01
Calving ease scores from Holstein dairy cattle in the Walloon Region of Belgium were analysed using univariate linear and threshold animal models. Variance components and derived genetic parameters were estimated from a data set including 33,155 calving records. Included in the models were season, herd and sex of calf × age of dam classes × group of calvings interaction as fixed effects, and herd × year of calving, maternal permanent environment, and animal direct and maternal additive genetic effects as random effects. Models were fitted with the genetic correlation between direct and maternal additive genetic effects either estimated or constrained to zero. Direct heritability for calving ease was approximately 8% with linear models and approximately 12% with threshold models. Maternal heritabilities were approximately 2 and 4%, respectively. The genetic correlation between direct and maternal additive effects was found to be not significantly different from zero. Models were compared in terms of goodness of fit and predictive ability. Criteria of comparison such as mean squared error, correlation between observed and predicted calving ease scores, and correlation between estimated breeding values were estimated from 85,118 calving records. The results revealed few differences between linear and threshold models, even though correlations between estimated breeding values from subsets of data for sires with progeny were 17 and 23% greater for direct and maternal genetic effects, respectively, with the linear model than with the threshold model. For the purpose of genetic evaluation for calving ease in Walloon Holstein dairy cattle, the linear animal model without covariance between direct and maternal additive effects was found to be the best choice. © 2014 Blackwell Verlag GmbH.
Population response to climate change: linear vs. non-linear modeling approaches.
Ellis, Alicia M; Post, Eric
2004-03-31
Research on the ecological consequences of global climate change has elicited a growing interest in the use of time series analysis to investigate population dynamics in a changing climate. Here, we compare linear and non-linear models describing the contribution of climate to the density fluctuations of the population of wolves on Isle Royale, Michigan from 1959 to 1999. The non-linear self-exciting threshold autoregressive (SETAR) model revealed that, due to differences in the strength and nature of density dependence, relatively small and large populations may be differentially affected by future changes in climate. Both linear and non-linear models predict a decrease in the population of wolves with predicted changes in climate. Because specific predictions differed between linear and non-linear models, our study highlights the importance of using non-linear methods that allow the detection of non-linearity in the strength and nature of density dependence. Failure to adopt a non-linear approach to modelling population response to climate change, either exclusively or in addition to linear approaches, may compromise efforts to quantify ecological consequences of future warming.
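A two-regime SETAR(1) fit of the kind referred to above can be sketched as a grid search over the threshold, with separate AR(1) least-squares fits in each regime. The simulated series, threshold grid, and parameter values below are illustrative, not the Isle Royale wolf model.

```python
import numpy as np

def fit_setar1(series, thresholds):
    """Two-regime SETAR(1): x[t] = a_r + b_r * x[t-1] + e, with the regime r
    chosen by whether x[t-1] exceeds the threshold c. A grid search keeps
    the c with the lowest pooled residual sum of squares."""
    x = np.asarray(series, float)
    lag, cur = x[:-1], x[1:]
    best = None
    for c in thresholds:
        rss = 0.0
        for mask in (lag <= c, lag > c):
            if mask.sum() < 2:          # regime too small to fit
                rss = np.inf
                break
            A = np.column_stack([np.ones(mask.sum()), lag[mask]])
            beta, *_ = np.linalg.lstsq(A, cur[mask], rcond=None)
            rss += float(np.sum((cur[mask] - A @ beta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, float(c))
    return best[1]

# Simulate a series whose AR(1) slope flips sign at the (true) threshold 0
rng = np.random.default_rng(0)
series = [0.0]
for _ in range(800):
    b = 0.9 if series[-1] <= 0.0 else -0.9
    series.append(b * series[-1] + rng.normal(scale=0.3))
print(fit_setar1(series, np.linspace(-1.0, 1.0, 21)))
```

The regime-dependent slopes are exactly the "differences in the strength and nature of density dependence" the abstract describes: below the threshold the process behaves like one AR model, above it like another.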
NASA Astrophysics Data System (ADS)
Chefranov, Sergey; Chefranov, Alexander
2016-04-01
Linear hydrodynamic stability theory for the Hagen-Poiseuille (HP) flow yields the conclusion of an infinitely large threshold Reynolds number, Re. This contradiction with the observation data is usually bypassed by assuming that the HP flow instability is of hard type and possible only for sufficiently high-amplitude disturbances, so that HP flow disturbance evolution must be considered by non-linear hydrodynamic stability theory. The situation is similar for the plane Couette (PC) flow. For the plane Poiseuille (PP) flow, linear theory disagrees with experiment quantitatively, defining the threshold Reynolds number as Re = 5772 (S. A. Orszag, 1971), more than five-fold the observed value of Re = 1080 (S. J. Davies, C. M. White, 1928). In the present work, we show that the linear stability theory conclusions for the HP and PC flows (stability for any Reynolds number) and the evidently too high threshold Reynolds number estimate for the PP flow are related to the traditional use of a disturbance representation that assumes the possibility of separating the longitudinal (along the flow direction) variable from the other spatial variables. We show that if this traditional form is abandoned, conclusions of linear instability for the HP and PC flows may be obtained for finite Reynolds numbers (for the HP flow, Re > 704, and for the PC flow, Re > 139). We also fit the linear stability theory conclusion for the PP flow to the experimental data, obtaining an estimate of the minimal threshold Reynolds number of Re = 1040, and obtain agreement of the minimal threshold Reynolds number estimate for the PC flow with the experimental data of S. Bottin et al., 1997, where the laminar PC flow stability threshold is Re = 150. A rogue-wave excitation mechanism in oppositely directed currents due to the PC flow linear instability is discussed. Results of the new linear hydrodynamic stability theory for the HP, PP, and PC flows are published in the following papers: 1. S.G. Chefranov, A.G. Chefranov, JETP, v. 119, No. 2, 331, 2014; 2. S.G. Chefranov, A.G. Chefranov, Doklady Physics, vol. 60, No. 7, 327-332, 2015; 3. S.G. Chefranov, A.G. Chefranov, arXiv:1509.08910v1 [physics.flu-dyn], 29 Sep 2015 (accepted to JETP).
Morris, Katrina A; Parry, Allyson; Pretorius, Pieter M
2016-09-01
To compare the sensitivity of linear and volumetric measurements on MRI in detecting schwannoma progression in patients with neurofibromatosis type 2 on bevacizumab treatment, and the extent to which this depends on the size of the tumour. We retrospectively compared changes in linear tumour dimensions at a range of thresholds with volumetric tumour measurements performed using Brainlab iPlan(®) software (Feldkirchen, Germany), classified for tumour progression according to the Response Evaluation in Neurofibromatosis and Schwannomatosis (REiNS) criteria. Assessment of 61 schwannomas in 46 patients with a median follow-up of 20 months (range 3-43 months) was performed, with a mean of 7 time points per tumour (range 2-12 time points). Using the volumetric REiNS criteria as the gold standard, a sensitivity of 86% was achieved for linear measurement using a 2-mm threshold to define progression. We propose that a change in linear measurement of 2 mm (particularly in tumours with starting diameters of 20-30 mm, the majority of this cohort) could be used as a filter to identify cases of possible progression requiring volumetric analysis. This pragmatic approach can be used if stabilization of a previously growing schwannoma is sufficient for a patient to continue treatment. We demonstrate the real-world limitations of linear vs volumetric measurement in tumour response assessment and identify limited circumstances where linear measurements can be used to determine which patients require the more resource-intensive volumetric measurements.
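The proposed 2-mm linear filter reduces to a simple screening rule. This sketch assumes diameters in millimetres; the function name and workflow are invented for illustration, not taken from the paper.

```python
def needs_volumetric(baseline_mm, current_mm, threshold_mm=2.0):
    """Flag a tumour for volumetric analysis when its linear dimension has
    grown by at least the threshold (2 mm in the study). Name and workflow
    are illustrative."""
    return (current_mm - baseline_mm) >= threshold_mm

print(needs_volumetric(24.0, 26.5))  # → True
print(needs_volumetric(24.0, 25.0))  # → False
```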
NETWORK SYNTHESIS OF CASCADED THRESHOLD ELEMENTS.
A threshold function is a switching function which can be simulated by a single, simplified, idealized neuron, or threshold element. In this report, threshold functions are examined in the context of abstract set theory and linear algebra for the purpose of obtaining practical synthesis procedures for networks of threshold elements. A procedure is described by which, for any given switching function, a cascade network of these elements can be …
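A threshold element of the kind discussed in the report computes a weighted sum of binary inputs and compares it with a threshold. The sketch below shows two classic threshold functions (AND, OR); the weights and thresholds are textbook choices, not taken from the report.

```python
def threshold_element(weights, threshold):
    """A single threshold element: outputs 1 when the weighted input sum
    meets or exceeds the threshold -- the idealized-neuron model."""
    def element(*inputs):
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)
    return element

# AND and OR are threshold functions; XOR is the classic non-threshold
# example, which is why cascades of elements are needed in general.
AND = threshold_element([1, 1], 2)
OR = threshold_element([1, 1], 1)
print([AND(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 0, 0, 1]
print([OR(a, b) for a in (0, 1) for b in (0, 1)])   # → [0, 1, 1, 1]
```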
Li, Shi; Batterman, Stuart; Wasilevich, Elizabeth; Wahl, Robert; Wirth, Julie; Su, Feng-Chiao; Mukherjee, Bhramar
2011-11-01
Asthma morbidity has been associated with ambient air pollutants in time-series and case-crossover studies. In such study designs, threshold effects of air pollutants on asthma outcomes have been relatively unexplored, although they are of potential interest for exploring concentration-response relationships. This study analyzes daily data on the asthma morbidity experienced by the pediatric Medicaid population (ages 2-18 years) of Detroit, Michigan and concentrations of the pollutants fine particles (PM2.5), CO, NO2 and SO2 for the 2004-2006 period, using both time-series and case-crossover designs. We use a simple, testable and readily implementable profile likelihood-based approach to estimate threshold parameters in both designs. Evidence of significant increases in daily acute asthma events was found for SO2 and PM2.5, and a significant threshold effect was estimated for PM2.5 at 13 and 11 μg m(-3) using generalized additive models and conditional logistic regression models, respectively. Stronger effect sizes above the threshold were typically noted compared with the standard linear relationship; e.g., in the time-series analysis, an interquartile range increase (9.2 μg m(-3)) in PM2.5 (5-day moving average) had a risk ratio of 1.030 (95% CI: 1.001, 1.061) in the generalized additive model, and 1.066 (95% CI: 1.031, 1.102) in the threshold generalized additive model. The corresponding estimates for the case-crossover design were 1.039 (95% CI: 1.013, 1.066) in the conditional logistic regression, and 1.054 (95% CI: 1.023, 1.086) in the threshold conditional logistic regression. This study indicates that the associations of SO2 and PM2.5 concentrations with asthma emergency department visits and hospitalizations, as well as the estimated PM2.5 threshold, were fairly consistent across time-series and case-crossover analyses, and suggests that effect estimates based on linear models (without thresholds) may underestimate the true risk. Copyright © 2011 Elsevier Inc. All rights reserved.
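The profile likelihood-based threshold estimation can be sketched for a single-pollutant Poisson model with a hinge term, profiling the log-likelihood over a grid of candidate thresholds. The IRLS fitter and the simulated exposure-response data are illustrative simplifications of the adjusted models used in the study.

```python
import numpy as np

def poisson_irls(X, y, iters=30):
    """Newton (IRLS) fit of a Poisson GLM with log link."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu           # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

def profile_threshold(x, y, candidates):
    """Profile the Poisson log-likelihood of log E[y] = b0 + b1*max(x - c, 0)
    over candidate thresholds c and return the maximizer."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), np.maximum(x - c, 0.0)])
        beta = poisson_irls(X, y)
        mu = np.exp(X @ beta)
        ll = float(np.sum(y * np.log(mu) - mu))
        if best is None or ll > best[0]:
            best = (ll, float(c))
    return best[1]

# Synthetic daily counts with no pollutant effect below the true threshold 13
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 30.0, 400)
y = rng.poisson(np.exp(0.5 + 0.1 * np.maximum(x - 13.0, 0.0))).astype(float)
c_hat = profile_threshold(x, y, np.arange(5.0, 26.0, 1.0))
print(c_hat)
```

The hinge term encodes the study's threshold model: below c the pollutant contributes nothing to the expected count, above c the effect is log-linear.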
Threshold Fatigue Crack Growth in Ti-6Al-2Sn-4Zr-6Mo.
1987-12-01
…threshold region. All experiments were conducted under fully automated computer control using a laser interferometric displacement gage (IDG) to …reduction in the local driving force. This non-linear crack appears to grow slower than a linear crack and therefore results in lower than actual computed…
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be 'threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
Malchiodi, F; Koeck, A; Mason, S; Christen, A M; Kelton, D F; Schenkel, F S; Miglior, F
2017-04-01
A national genetic evaluation program for hoof health could be achieved by using hoof lesion data collected directly by hoof trimmers. However, not all cows in the herds during the trimming period are always presented to the hoof trimmer. This preselection process may not be completely random, leading to erroneous estimations of the prevalence of hoof lesions in the herd and inaccuracies in the genetic evaluation. The main objective of this study was to estimate genetic parameters for individual hoof lesions in Canadian Holsteins by using an alternative cohort to consider all cows in the herd during the period of the hoof trimming sessions, including those that were not examined by the trimmer over the entire lactation. A second objective was to compare the estimated heritabilities and breeding values for resistance to hoof lesions obtained with threshold and linear models. Data were recorded by 23 hoof trimmers serving 521 herds located in Alberta, British Columbia, and Ontario. A total of 73,559 hoof-trimming records from 53,654 cows were collected between 2009 and 2012. Hoof lesions included in the analysis were digital dermatitis, interdigital dermatitis, interdigital hyperplasia, sole hemorrhage, sole ulcer, toe ulcer, and white line disease. All variables were analyzed as binary traits, as the presence or the absence of the lesions, using a threshold and a linear animal model. Two different cohorts were created: Cohort 1, which included only cows presented to hoof trimmers, and Cohort 2, which included all cows present in the herd at the time of hoof trimmer visit. Using a threshold model, heritabilities on the observed scale ranged from 0.01 to 0.08 for Cohort 1 and from 0.01 to 0.06 for Cohort 2. Heritabilities estimated with the linear model ranged from 0.01 to 0.07 for Cohort 1 and from 0.01 to 0.05 for Cohort 2. Despite a low heritability, the distribution of the sire breeding values showed large and exploitable variation among sires. 
Higher breeding values for hoof lesion resistance corresponded to sires with a higher prevalence of healthy daughters. The rank correlations between estimated breeding values ranged from 0.96 to 0.99 when predicted using either one of the 2 cohorts and from 0.94 to 0.99 when predicted using either a threshold or a linear model. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Shinomori, Keizo; Panorgias, Athanasios; Werner, John S.
2017-01-01
Age-related changes in chromatic discrimination along dichromatic confusion lines were measured with the Cambridge Colour Test (CCT). One hundred and sixty-two individuals (16 to 88 years old) with normal Rayleigh matches were the major focus of this paper. An additional 32 anomalous trichromats classified by their Rayleigh matches were also tested. All subjects were screened to rule out abnormalities of the anterior and posterior segments. Thresholds on all three chromatic vectors measured with the CCT showed age-related increases. Protan and deutan vector thresholds increased linearly with age while the tritan vector threshold was described with a bilinear model. Analysis and modeling demonstrated that the nominal vectors of the CCT are shifted by senescent changes in ocular media density, and a method for correcting the CCT vectors is demonstrated. A correction for these shifts indicates that classification among individuals of different ages is unaffected. New vector thresholds for elderly observers and for all age groups are suggested based on calculated tolerance limits. PMID:26974943
Atomic physics effects on tokamak edge drift-tearing modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahm, T.S.
1993-03-01
The effects of ionization and charge exchange on the linear stability of drift-tearing modes are analytically investigated. In particular, the linear instability threshold Δ^Th, produced by ion sound wave coupling, is modified. In the strongly collisional regime, the ionization breaks up the near cancellation of the perturbed electric field and the pressure gradient along the magnetic field, and increases the threshold. In the semi-collisional regime, both ionization and charge exchange act as drag on the ion parallel velocity, and consequently decrease the threshold by reducing the effectiveness of ion sound wave propagation.
Marcus, Carol S
2015-07-01
On February 9, 2015, I submitted a petition to the U.S. Nuclear Regulatory Commission (NRC) to reject the linear-no threshold (LNT) hypothesis and ALARA as the bases for radiation safety regulation in the United States, using instead threshold and hormesis evidence. In this article, I will briefly review the history of LNT and its use by regulators, the lack of evidence supporting LNT, and the large body of evidence supporting thresholds and hormesis. Physician acceptance of cancer risk from low dose radiation based upon federal regulatory claims is unfortunate and needs to be reevaluated. This is dangerous to patients and impedes good medical care. A link to my petition is available: http://radiationeffects.org/wp-content/uploads/2015/03/Hormesis-Petition-to-NRC-02-09-15.pdf, and support by individual physicians once the public comment period begins would be extremely important.
THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.
The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors that realize a given switching function in multithreshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple …
NASA Astrophysics Data System (ADS)
Sahoo, N. K.; Thakur, S.; Senthilkumar, M.; Das, N. C.
2005-02-01
Thickness-dependent index non-linearity in thin films has been a thought-provoking as well as intriguing topic in the field of optical coatings. The characterization and analysis of such inhomogeneous index profiles pose several degrees of challenges to thin-film researchers depending upon the availability of relevant experimental and process-monitoring-related information. In the present work, a variety of novel experimental non-linear index profiles have been observed in thin films of MgO-Al2O3-ZrO2 ternary composites in solid solution under various electron-beam deposition parameters. Analysis and derivation of these non-linear spectral index profiles have been carried out by an inverse-synthesis approach using a real-time optical monitoring signal and post-deposition transmittance and reflection spectra. Most of the non-linear index functions are observed to fit polynomial equations of order seven or eight very well. In this paper, the application of such a non-linear index function has also been demonstrated in designing electric-field-optimized high-damage-threshold multilayer coatings such as normal- and oblique-incidence edge filters and a broadband beam splitter for p-polarized light. Such designs can also advantageously maintain the microstructural stability of the multilayer structure due to the low stress factor of the non-linear ternary composite layers.
Effects of Frequency Drift on the Quantification of Gamma-Aminobutyric Acid Using MEGA-PRESS
NASA Astrophysics Data System (ADS)
Tsai, Shang-Yueh; Fang, Chun-Hao; Wu, Thai-Yu; Lin, Yi-Ru
2016-04-01
The MEGA-PRESS method is the most common method used to measure γ-aminobutyric acid (GABA) in the brain at 3T. It has been shown that the underestimation of the GABA signal due to B0 drift up to 1.22 Hz/min can be reduced by post-frequency alignment. In this study, we show that the underestimation of GABA can still occur even with post frequency alignment when the B0 drift is up to 3.93 Hz/min. The underestimation can be reduced by applying a frequency shift threshold. A total of 23 subjects were scanned twice to assess the short-term reproducibility, and 14 of them were scanned again after 2-8 weeks to evaluate the long-term reproducibility. A linear regression analysis of the quantified GABA versus the frequency shift showed a negative correlation (P < 0.01). Underestimation of the GABA signal was found. When a frequency shift threshold of 0.125 ppm (15.5 Hz or 1.79 Hz/min) was applied, the linear regression showed no statistically significant difference (P > 0.05). Therefore, a frequency shift threshold at 0.125 ppm (15.5 Hz) can be used to reduce underestimation during GABA quantification. For data with a B0 drift up to 3.93 Hz/min, the coefficients of variance of short-term and long-term reproducibility for the GABA quantification were less than 10% when the frequency threshold was applied.
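The frequency-shift rejection step described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline; the 0.125 ppm cutoff is taken from the abstract, and a proton frequency of 124 MHz is assumed so that 0.125 ppm corresponds to the stated ~15.5 Hz.

```python
# Sketch: screen out spectral averages whose frequency drift exceeds a
# rejection threshold before GABA quantification. Illustrative only; the
# 0.125 ppm cutoff comes from the abstract, and 124 MHz is an assumed
# spectrometer frequency (so 0.125 ppm ~ 15.5 Hz, as stated).

F0_MHZ = 124.0       # assumed proton frequency; Hz per ppm equals the MHz value
THRESH_PPM = 0.125   # rejection threshold from the abstract

def within_threshold(shifts_hz, thresh_ppm=THRESH_PPM, f0_mhz=F0_MHZ):
    """Return indices of averages whose |frequency shift| is within threshold."""
    thresh_hz = thresh_ppm * f0_mhz   # ppm * MHz -> Hz (here 15.5 Hz)
    return [i for i, s in enumerate(shifts_hz) if abs(s) <= thresh_hz]

drifts_hz = [0.5, 3.2, -16.0, 14.9, 20.1, -2.2]   # hypothetical per-average drifts
kept = within_threshold(drifts_hz)                # indices retained for averaging
```

Averages exceeding the cutoff (here indices 2 and 4) would be excluded before the final spectral average is formed.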
MacNeilage, Paul R.; Turner, Amanda H.
2010-01-01
Gravitational signals arising from the otolith organs and vertical plane rotational signals arising from the semicircular canals interact extensively for accurate estimation of tilt and inertial acceleration. Here we used a classical signal detection paradigm to examine perceptual interactions between otolith and horizontal semicircular canal signals during simultaneous rotation and translation on a curved path. In a rotation detection experiment, blindfolded subjects were asked to detect the presence of angular motion in blocks where half of the trials were pure nasooccipital translation and half were simultaneous translation and yaw rotation (curved-path motion). In separate, translation detection experiments, subjects were also asked to detect either the presence or the absence of nasooccipital linear motion in blocks, in which half of the trials were pure yaw rotation and half were curved path. Rotation thresholds increased slightly, but not significantly, with concurrent linear velocity magnitude. Yaw rotation detection threshold, averaged across all conditions, was 1.45 ± 0.81°/s (3.49 ± 1.95°/s2). Translation thresholds, on the other hand, increased significantly with increasing magnitude of concurrent angular velocity. Absolute nasooccipital translation detection threshold, averaged across all conditions, was 2.93 ± 2.10 cm/s (7.07 ± 5.05 cm/s2). These findings suggest that conscious perception might not have independent access to separate estimates of linear and angular movement parameters during curved-path motion. Estimates of linear (and perhaps angular) components might instead rely on integrated information from canals and otoliths. Such interaction may underlie previously reported perceptual errors during curved-path motion and may originate from mechanisms that are specialized for tilt-translation processing during vertical plane rotation. PMID:20554843
King, D; Hume, P; Gissane, C; Brughelli, M; Clark, T
2016-02-01
Head impacts and resulting head accelerations cause concussive injuries. There is no standard for reporting head impact data in sports to enable comparison between studies. The aim was to outline methods for reporting head impact acceleration data in sport and the effect of the acceleration thresholds on the number of impacts reported. A systematic review of accelerometer systems utilised to report head impact data in sport was conducted. The effect of using different thresholds on a set of impact data from 38 amateur senior rugby players in New Zealand over a competition season was calculated. Of the 52 studies identified, 42% reported impacts using a >10-g threshold, where g is the acceleration of gravity. Studies reported descriptive statistics as mean ± standard deviation, median, 25th to 75th interquartile range, and 95th percentile. Application of the varied impact thresholds to the New Zealand data set resulted in 20,687 impacts of >10 g, 11,459 (45% less) impacts of >15 g, and 4024 (81% less) impacts of >30 g. Linear and angular raw data were most frequently reported. Metrics combining raw data may be more useful; however, validity of the metrics has not been adequately addressed for sport. Differing data collection methods and descriptive statistics for reporting head impacts in sports limit inter-study comparisons. Consensus on data analysis methods for sports impact assessment is needed, including thresholds. Based on the available data, the 10-g threshold is the most commonly reported impact threshold and should be reported as the median with 25th and 75th interquartile ranges as the data are non-normally distributed. Validation studies are required to determine the best threshold and metrics for impact acceleration data collection in sport. Until in-field validation studies are completed, it is recommended that head impact data should be reported as median and interquartile ranges using the 10-g impact threshold.
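The effect of the acceleration threshold on the number of reported impacts, and the recommended median-with-interquartile-range summary, can be sketched as follows. The impact data here are synthetic; only the 10, 15, and 30 g thresholds come from the abstract.

```python
# Sketch: apply alternative g-force thresholds to a set of head-impact peak
# accelerations and summarize the retained impacts as median with 25th/75th
# percentiles, as the review recommends for non-normally distributed data.
# The impacts are synthetic (lognormal draws), not the New Zealand data set.
import random
import statistics

random.seed(1)
impacts_g = [random.lognormvariate(2.8, 0.5) for _ in range(5000)]  # peak linear accel., g

def summarize(impacts, threshold_g):
    """Count impacts above threshold and return (n, q25, median, q75)."""
    kept = [a for a in impacts if a > threshold_g]
    q25, median, q75 = statistics.quantiles(kept, n=4)
    return len(kept), q25, median, q75

counts = {thr: summarize(impacts_g, thr)[0] for thr in (10, 15, 30)}
```

Raising the threshold sharply reduces the number of reported impacts, which is why inter-study comparison requires a common threshold.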
Competitive inhibition can linearize dose-response and generate a linear rectifier
Savir, Yonatan; Tu, Benjamin P.; Springer, Michael
2015-01-01
Many biological responses require a dynamic range that is larger than standard bi-molecular interactions allow, yet also the ability to remain off at low input. Here we mathematically show that an enzyme reaction system involving a combination of competitive inhibition, conservation of the total level of substrate and inhibitor, and positive feedback can behave like a linear rectifier—that is, a network motif with an input-output relationship that is linearly sensitive to substrate above a threshold but unresponsive below the threshold. We propose that the evolutionarily conserved yeast SAGA histone acetylation complex may possess the proper physiological response characteristics and molecular interactions needed to perform as a linear rectifier, and we suggest potential experiments to test this hypothesis. One implication of this work is that linear responses and linear rectifiers might be easier to evolve or synthetically construct than is currently appreciated. PMID:26495436
Competitive inhibition can linearize dose-response and generate a linear rectifier.
Savir, Yonatan; Tu, Benjamin P; Springer, Michael
2015-09-23
Many biological responses require a dynamic range that is larger than standard bi-molecular interactions allow, yet also the ability to remain off at low input. Here we mathematically show that an enzyme reaction system involving a combination of competitive inhibition, conservation of the total level of substrate and inhibitor, and positive feedback can behave like a linear rectifier-that is, a network motif with an input-output relationship that is linearly sensitive to substrate above a threshold but unresponsive below the threshold. We propose that the evolutionarily conserved yeast SAGA histone acetylation complex may possess the proper physiological response characteristics and molecular interactions needed to perform as a linear rectifier, and we suggest potential experiments to test this hypothesis. One implication of this work is that linear responses and linear rectifiers might be easier to evolve or synthetically construct than is currently appreciated.
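The linear-rectifier input-output motif described above can be written down directly: zero output below a threshold, linear sensitivity above it. The threshold and gain values below are arbitrary illustration parameters, not quantities from the paper.

```python
# Sketch of a linear rectifier: unresponsive below a threshold, linearly
# sensitive to input above it. Threshold and gain are illustrative values.

def linear_rectifier(substrate, threshold, gain=1.0):
    """Zero output below threshold; linear in (substrate - threshold) above it."""
    return gain * max(0.0, substrate - threshold)

# Response over a range of substrate levels, with threshold = 1.0:
response = [linear_rectifier(s, threshold=1.0) for s in (0.0, 0.5, 1.0, 2.0, 3.0)]
```

This is the same functional form as the rectified-linear unit used in neural network models; the paper's contribution is showing how a biochemical network (competitive inhibition plus conservation and positive feedback) can realize it.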
Comparison of algorithms of testing for use in automated evaluation of sensation.
Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M
1990-10-01
Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing among algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just-noticeable-difference (JND) units per second and with null stimuli interspersed (Békésy tracking with null stimuli), provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.
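The estimate favored above, the mean of appearance and disappearance levels from a Békésy-style tracking run, can be sketched as follows. The reversal levels here are hypothetical stimulus amplitudes, not data from the study.

```python
# Sketch: threshold taken as the mean of "appearance" levels (stimulus first
# detected on an ascending run) and "disappearance" levels (stimulus first
# missed on a descending run). Levels below are hypothetical, in microns.
import statistics

appearance_levels = [3.1, 2.9, 3.3, 3.0]      # ascending-run reversals (microns)
disappearance_levels = [2.5, 2.3, 2.6, 2.4]   # descending-run reversals (microns)

def tracking_threshold(appearance, disappearance):
    """Mean of the average appearance and average disappearance levels."""
    return (statistics.mean(appearance) + statistics.mean(disappearance)) / 2

threshold_um = tracking_threshold(appearance_levels, disappearance_levels)
```

Because appearance thresholds alone overestimate (the subject responds late on a rising ramp), averaging with disappearance levels cancels much of the reaction-time bias.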
Nguyen, N H; Whatmore, P; Miller, A; Knibb, W
2016-02-01
The main aim of this study was to estimate the heritability of four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities included lower jaw, nasal erosion, deformed operculum and skinny fish on 480 individuals from 22 families at Clean Seas Tuna Ltd. They were typically recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating simple Pearson correlations of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56) and from the latter (linear animal and sire) models on the original (observed) scale, 0.01-0.23 (SE 0.03-0.16). When the estimates on the underlying liability scale were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero). The genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet carcass weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The estimated genetic correlations of body and carcass traits with the other deformity measures were not significant due to their relatively high standard errors.
Our results showed that there are prospects for genetic selection to reduce deformity in yellowtail kingfish and that measures of deformity should be included in the recording scheme, breeding objectives and selection index in practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
Hopfer, Helene; Jodari, Farman; Negre-Zakharov, Florence; Wylie, Phillip L; Ebeler, Susan E
2016-05-25
Demand for aromatic rice varieties (e.g., Basmati) is increasing in the US. Aromatic varieties typically have elevated levels of the aroma compound 2-acetyl-1-pyrroline (2AP). Due to its very low aroma threshold, analysis of 2AP provides a useful screening tool for rice breeders. Methods for 2AP analysis in rice should quantitate 2AP at or below sensory threshold level, avoid artifactual 2AP generation, and be able to analyze single rice kernels in cases where only small sample quantities are available (e.g., breeding trials). We combined headspace solid phase microextraction with gas chromatography tandem mass spectrometry (HS-SPME-GC-MS/MS) for analysis of 2AP, using an extraction temperature of 40 °C and a stable isotopologue as internal standard. 2AP calibrations were linear between the concentrations of 53 and 5380 pg/g, with detection limits below the sensory threshold of 2AP. Forty-eight aromatic and nonaromatic, milled rice samples from three harvest years were screened with the method for their 2AP content, and overall reproducibility, observed for all samples, ranged from 5% for experimental aromatic lines to 33% for nonaromatic lines.
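An internal-standard calibration of the kind described, fitting analyte-to-internal-standard response ratios against standard concentrations by least squares, can be sketched as follows. All numbers are hypothetical except the linear-range endpoints (53 and 5380 pg/g) taken from the abstract.

```python
# Sketch: stable-isotopologue internal-standard calibration. Calibration
# standards span the linear range reported in the abstract; the slope and
# intercept of the simulated response are arbitrary illustration values.
import numpy as np

conc_pg_g = np.array([53.0, 200.0, 800.0, 2000.0, 5380.0])  # calibration standards
area_ratio = 0.0012 * conc_pg_g + 0.01                      # simulated 2AP/IS area ratios

slope, intercept = np.polyfit(conc_pg_g, area_ratio, 1)     # linear calibration fit

def quantify(sample_ratio):
    """Convert a measured 2AP/internal-standard area ratio to pg/g."""
    return (sample_ratio - intercept) / slope
```

Ratioing against a co-extracted stable isotopologue corrects for extraction and ionization variability, which is what makes single-kernel screening at these low levels feasible.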
Threshold Hypothesis: Fact or Artifact?
ERIC Educational Resources Information Center
Karwowski, Maciej; Gralewski, Jacek
2013-01-01
The threshold hypothesis (TH) assumes the existence of complex relations between creative abilities and intelligence: linear associations below 120 points of IQ and weaker or lack of associations above the threshold. However, diverse results have been obtained over the last six decades--some confirmed the hypothesis and some rejected it. In this…
Permitted and forbidden sets in symmetric threshold-linear networks.
Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques
2003-03-01
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
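The dynamics discussed above can be illustrated with a minimal simulation of a symmetric threshold-linear network, dx/dt = -x + [Wx + b]_+, integrated with forward Euler. The two-neuron mutual-inhibition example below is an assumption chosen so that the dynamics converge to a unique attractive fixed point; it is not taken from the paper.

```python
# Minimal simulation of a symmetric threshold-linear network,
#   dx/dt = -x + [W x + b]_+ ,
# with forward Euler. The 2-neuron mutual-inhibition matrix is illustrative;
# for this W the linearized dynamics are stable, so x converges to the
# fixed point x1 = x2 = 2/3 (solve x = W x + b with both units active).
import numpy as np

def simulate(W, b, x0, dt=0.01, steps=5000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(0.0, W @ x + b))  # [.]_+ is the rectification
    return x

W = np.array([[0.0, -0.5],
              [-0.5, 0.0]])   # symmetric mutual inhibition
b = np.array([1.0, 1.0])      # equal external drive
x_star = simulate(W, b, x0=[0.2, 0.8])
```

Here both units remain active at the steady state, so {1, 2} is a permitted set in the paper's terminology; stronger inhibition would instead force winner-take-all behavior, making the pair forbidden.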
Liu, Yang; Hoppe, Brenda O; Convertino, Matteo
2018-04-10
Emergency risk communication (ERC) programs that activate when the ambient temperature is expected to cross certain extreme thresholds are widely used to manage relevant public health risks. In practice, however, the effectiveness of these thresholds has rarely been examined. The goal of this study is to test if the activation criteria based on extreme temperature thresholds, both cold and heat, capture elevated health risks for all-cause and cause-specific mortality and morbidity in the Minneapolis-St. Paul Metropolitan Area. A distributed lag nonlinear model (DLNM) combined with a quasi-Poisson generalized linear model is used to derive the exposure-response functions between daily maximum heat index and mortality (1998-2014) and morbidity (emergency department visits; 2007-2014). Specific causes considered include cardiovascular, respiratory, renal diseases, and diabetes. Six extreme temperature thresholds, corresponding to 1st-3rd and 97th-99th percentiles of local exposure history, are examined. All six extreme temperature thresholds capture significantly increased relative risks for all-cause mortality and morbidity. However, the cause-specific analyses reveal heterogeneity. Extreme cold thresholds capture increased mortality and morbidity risks for cardiovascular and respiratory diseases and extreme heat thresholds for renal disease. Percentile-based extreme temperature thresholds are appropriate for initiating ERC targeting the general population. Tailoring ERC by specific causes may protect some but not all individuals with health conditions exacerbated by hazardous ambient temperature exposure. © 2018 Society for Risk Analysis.
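Deriving percentile-based activation thresholds of the kind examined above is straightforward. The sketch below uses a synthetic daily heat-index series; only the percentile choices (1st-3rd and 97th-99th of the local exposure history) come from the abstract.

```python
# Sketch: percentile-based extreme-temperature thresholds for ERC activation,
# computed from a daily maximum heat index series. The series is synthetic
# (normal draws); a real analysis would use the local exposure history.
import numpy as np

rng = np.random.default_rng(0)
daily_max_heat_index = rng.normal(loc=60.0, scale=20.0, size=17 * 365)  # synthetic, deg F

def percentile_thresholds(series, pcts=(1, 2, 3, 97, 98, 99)):
    """Map each percentile to the corresponding threshold value."""
    return {p: float(np.percentile(series, p)) for p in pcts}

thresholds = percentile_thresholds(daily_max_heat_index)
```

The cold thresholds are the 1st-3rd percentile values and the heat thresholds the 97th-99th; an ERC program activates when the forecast heat index crosses one of them.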
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding thresholds for the proposed rate 1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and thereby achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve low iterative decoding thresholds and linear minimum distance. We start with a rate 1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest rate code. Each combined constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus having node degree at least 3 at rate 1/2 guarantees that the linear minimum distance property is preserved at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
NASA Astrophysics Data System (ADS)
Maes, C.; Asbóth, J. K.; Ritsch, H.
2007-05-01
We study the dynamics of a fast gaseous beam in a high-Q ring cavity counter-propagating a strong pump laser with large detuning from any particle optical resonance. As spontaneous emission is strongly suppressed, the particles can be treated as polarizable point masses forming a dynamic moving mirror. Above a threshold intensity the particles exhibit spatially periodic ordering, enhancing collective coherent backscattering, which decelerates the beam. Based on a linear stability analysis in their accelerated rest frame, we derive analytic bounds for the intensity threshold of this self-organization as a function of particle number, average velocity, kinetic temperature, pump detuning and resonator linewidth. The analytical results agree well with time-dependent simulations of the N-particle motion including field damping and spontaneous emission noise. Our results give conditions which may be easily evaluated for stopping and cooling a fast molecular beam.
Maes, C; Asbóth, J K; Ritsch, H
2007-05-14
We study the dynamics of a fast gaseous beam in a high-Q ring cavity counter-propagating a strong pump laser with large detuning from any particle optical resonance. As spontaneous emission is strongly suppressed, the particles can be treated as polarizable point masses forming a dynamic moving mirror. Above a threshold intensity the particles exhibit spatially periodic ordering, enhancing collective coherent backscattering, which decelerates the beam. Based on a linear stability analysis in their accelerated rest frame, we derive analytic bounds for the intensity threshold of this self-organization as a function of particle number, average velocity, kinetic temperature, pump detuning and resonator linewidth. The analytical results agree well with time-dependent simulations of the N-particle motion including field damping and spontaneous emission noise. Our results give conditions which may be easily evaluated for stopping and cooling a fast molecular beam.
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
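Of the three algorithms analyzed above, the leaf-removal heuristic is simple enough to sketch directly: repeatedly pick a degree-1 vertex (a leaf), put its neighbor in the cover, and delete both. If the procedure consumes the whole graph, the cover found is optimal; a leftover leaf-free "core" signals the regime where the heuristic fails. This is an illustrative implementation with arbitrary tie-breaking, not the authors' analysis code.

```python
# Sketch of leaf removal for minimum vertex cover: while a degree-1 vertex
# exists, add its unique neighbor to the cover and remove both vertices.
# Returns the cover built so far and any leaf-free core that remains.

def leaf_removal_cover(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    cover = set()
    while True:
        leaf = next((v for v, nb in adj.items() if len(nb) == 1), None)
        if leaf is None:
            break
        (hub,) = adj[leaf]               # the leaf's unique neighbor
        cover.add(hub)                   # hub must be in any cover using this edge
        for w in list(adj[hub]):         # detach the hub from all its neighbors
            adj[w].discard(hub)
        del adj[hub]
        for v in [v for v, nb in adj.items() if not nb]:
            del adj[v]                   # drop isolated vertices (incl. the leaf)
    core = set(adj)                      # nonempty only if a leaf-free core remains
    return cover, core
```

On a path a-b-c-d-e the heuristic finds the optimal cover {b, d} with an empty core, whereas on a triangle no leaf exists and the entire graph is returned as core.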
NASA Astrophysics Data System (ADS)
Yamamoto, Seiichi; Koyama, Shuji; Yabe, Takuya; Komori, Masataka; Tada, Junki; Ito, Shiori; Toshito, Toshiyuki; Hirata, Yuho; Watanabe, Kenichi
2018-03-01
Luminescence of water during irradiation with proton beams, or with X-ray photons of energy lower than the Cerenkov-light threshold, is promising for range estimation and for measuring beam distributions. However, it is not yet clear whether the intensities and distributions are stable under different water conditions, such as temperature or the addition of soluble materials. It also remains unclear whether the luminescence of water increases linearly with the irradiated proton or X-ray energy. Consequently, we measured the luminescence of water during irradiation with proton beams or with X-ray photons of energy lower than the Cerenkov-light threshold, under different water conditions and energies, to evaluate the stability and linearity of the luminescence. A water phantom was set up with a proton therapy or X-ray system, and luminescence images of water under different conditions and energies were acquired with a high-sensitivity cooled charge-coupled device (CCD) camera during proton or X-ray irradiation of the phantom. In the stability measurements, imaging was performed at different water temperatures and with inorganic and organic materials added to the water. In the linearity measurements for protons, we irradiated at four different energies below the Cerenkov-light threshold; for X-rays, we irradiated at different supplied voltages. We evaluated the depth profiles of the luminescence images as well as the light intensities and distributions. The results showed that the luminescence of water was quite stable under the various water conditions: there were no significant changes in intensity or distribution at different temperatures. The linearity experiments showed that the luminescence of water increased linearly with energy. We confirmed that the luminescence of water is stable with respect to water conditions and increases linearly with the irradiated energy.
Threshold responses of Amazonian stream fishes to timing and extent of deforestation.
Brejão, Gabriel L; Hoeinghaus, David J; Pérez-Mayorga, María Angélica; Ferraz, Silvio F B; Casatti, Lilian
2017-12-06
Deforestation is a primary driver of biodiversity change through habitat loss and fragmentation. Stream biodiversity may not respond to deforestation in a simple linear relationship. Rather, threshold responses to extent and timing of deforestation may occur. Identification of critical deforestation thresholds is needed for effective conservation and management. We tested for threshold responses of fish species and functional groups to degree of watershed and riparian zone deforestation and time since impact in 75 streams in the western Brazilian Amazon. We used remote sensing to assess deforestation from 1984 to 2011. Fish assemblages were sampled with seines and dip nets in a standardized manner. Fish species (n = 84) were classified into 20 functional groups based on ecomorphological traits associated with habitat use, feeding, and locomotion. Threshold responses were quantified using threshold indicator taxa analysis. Negative threshold responses to deforestation were common and consistently occurred at very low levels of deforestation (<20%) and soon after impact (<10 years). Sensitive species were functionally unique and associated with complex habitats and structures of allochthonous origin found in forested watersheds. Positive threshold responses of species were less common and generally occurred at >70% deforestation and >10 years after impact. Findings were similar at the community level for both taxonomic and functional analyses. Because most negative threshold responses occurred at low levels of deforestation and soon after impact, even minimal change is expected to negatively affect biodiversity. Delayed positive threshold responses to extreme deforestation by a few species do not offset the loss of sensitive taxa and likely contribute to biotic homogenization. © 2017 Society for Conservation Biology.
Single speckle SRS threshold as determined by electron trapping, collisions and speckle duration
NASA Astrophysics Data System (ADS)
Rose, Harvey; Daughton, William; Yin, Lin; Langdon, Bruce
2008-11-01
Speckle SRS intensity threshold has been shown to increase with spatial dimension, D, because both diffraction and the trapped-electron escape rate increase with D, though the net effect is to substantially decrease the threshold compared with 1D linear gain calculations. On the other hand, the apparent threshold appears to decrease with integration time in PIC simulations. We present an optimum nonlinearly resonant calculation of the SRS threshold, taking into account large fluctuations of the SRS seed reflectivity, R0. Such fluctuations, absent in 1D, are caused by a gap in the linear reflectivity gain spectrum, which leads to an exponential probability distribution for R0. While the SRS threshold intensity is of course finite, these fluctuations lead to a decrease of apparent threshold with increasing speckle lifetime. References: L. Yin et al., Phys. Plasmas 15, 013109 (2008); D. S. Montgomery et al., 9, 2311 (2002); B. Langdon et al., 38th Anomalous Absorption Conference (2008); H. A. Rose, Phys. Plasmas 10, 1468 (2003); H. A. Rose and L. Yin, Phys. Plasmas 15, 042311 (2008); H. A. Rose and D. A. Russell, Phys. Plasmas 8, 4784 (2001).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Wei; Li, Hong-Yi; Leung, L. Ruby
Anthropogenic activities, e.g., reservoir operation, may alter the characteristics of the Flood Frequency Curve (FFC) and challenge the basic assumption of stationarity used in flood frequency analysis. This paper presents a combined data-modeling analysis of the nonlinear filtering effects of reservoirs on the FFCs over the contiguous United States. A dimensionless Reservoir Impact Index (RII), defined as the total upstream reservoir storage capacity normalized by the annual streamflow volume, is used to quantify reservoir regulation effects. Analyses are performed for 388 river stations with an average record length of 50 years. The first two moments of the FFC, mean annual maximum flood (MAF) and coefficient of variation (CV), are calculated for the pre- and post-dam periods and compared to elucidate the reservoir regulation effects as a function of RII. It is found that MAF generally decreases with increasing RII but stabilizes when RII exceeds a threshold value, and CV increases with RII until a threshold value beyond which CV decreases with RII. The processes underlying the nonlinear threshold behavior of MAF and CV are investigated using three reservoir models with different levels of complexity. All models capture the non-linear relationships of MAF and CV with RII, suggesting that the basic flood control function of reservoirs is key to the non-linear relationships. The relative roles of reservoir storage capacity, operation objectives, available storage prior to a flood event, and reservoir inflow pattern are systematically investigated. Our findings may help improve flood-risk assessment and mitigation in regulated river systems at the regional scale.
Music, Mark; Finderle, Zarko; Cankar, Ksenija
2011-05-01
The aim of the present study was to investigate the effect of quantitatively measured cold perception (CP) thresholds on microcirculatory response to local cooling as measured by direct and indirect response of laser-Doppler (LD) flux during local cooling at different temperatures. The CP thresholds were measured in 18 healthy males using the Marstock method (thermode placed on the thenar). The direct (at the cooling site) and indirect (on contralateral hand) LD flux responses were recorded during immersion of the hand in a water bath at 20°C, 15°C, and 10°C. The cold perception threshold correlated (linear regression analysis, Pearson correlation) with the indirect LD flux response at cooling temperatures 20°C (r=0.782, p<0.01) and 15°C (r=0.605, p<0.01). In contrast, there was no correlation between the CP threshold and the indirect LD flux response during cooling in water at 10°C. The results demonstrate that during local cooling, depending on the cooling temperature used, cold perception threshold influences indirect LD flux response. Copyright © 2011 Elsevier Inc. All rights reserved.
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T2 relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
Effects of Frequency Drift on the Quantification of Gamma-Aminobutyric Acid Using MEGA-PRESS
Tsai, Shang-Yueh; Fang, Chun-Hao; Wu, Thai-Yu; Lin, Yi-Ru
2016-01-01
The MEGA-PRESS method is the most common method used to measure γ-aminobutyric acid (GABA) in the brain at 3T. It has been shown that the underestimation of the GABA signal due to B0 drift up to 1.22 Hz/min can be reduced by post-frequency alignment. In this study, we show that the underestimation of GABA can still occur even with post-frequency alignment when the B0 drift is up to 3.93 Hz/min. The underestimation can be reduced by applying a frequency shift threshold. A total of 23 subjects were scanned twice to assess the short-term reproducibility, and 14 of them were scanned again after 2–8 weeks to evaluate the long-term reproducibility. A linear regression analysis of the quantified GABA versus the frequency shift showed a negative correlation (P < 0.01). Underestimation of the GABA signal was found. When a frequency shift threshold of 0.125 ppm (15.5 Hz or 1.79 Hz/min) was applied, the linear regression showed no statistically significant difference (P > 0.05). Therefore, a frequency shift threshold at 0.125 ppm (15.5 Hz) can be used to reduce underestimation during GABA quantification. For data with a B0 drift up to 3.93 Hz/min, the coefficients of variance of short-term and long-term reproducibility for the GABA quantification were less than 10% when the frequency threshold was applied. PMID:27079873
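The rejection step implied by such a threshold can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the scanner frequency (124 MHz, chosen so that 0.125 ppm equals the quoted 15.5 Hz) and the simulated drift trajectory are assumptions.

```python
import numpy as np

PPM_THRESHOLD = 0.125   # frequency-shift threshold from the study
HZ_PER_PPM = 124.0      # assumed scanner frequency (MHz): 0.125 ppm = 15.5 Hz

# Hypothetical per-average frequency offsets (Hz) accumulating as B0 drifts.
rng = np.random.default_rng(1)
offsets_hz = np.cumsum(rng.normal(0.2, 0.5, size=160))

# Discard averages whose estimated shift exceeds the threshold before summing.
keep = np.abs(offsets_hz) <= PPM_THRESHOLD * HZ_PER_PPM
print(f"kept {keep.sum()} of {keep.size} averages")
```

Only the retained averages would then enter the edited-spectrum sum, trading a little SNR for protection against drift-induced underestimation.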
Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia
The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum, autism, and intellectual deficits, and in adults and the elderly with dementia. These populations (or individuals) are unable to undergo a behavioral assessment, and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. The aim was to determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group); and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. 
The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold for the group with hearing loss and, on average, 14.5 dB higher for the group without hearing loss, across all studied frequencies. The cortical electrophysiological thresholds obtained with the use of an automated response detection system were highly correlated with behavioral thresholds in the group of individuals with hearing loss. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof of principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.
Estimation of ultrashort laser irradiation effect over thin transparent biopolymer films morphology
NASA Astrophysics Data System (ADS)
Daskalova, A.; Nathala, C.; Bliznakova, I.; Slavov, D.; Husinsky, W.
2015-01-01
Collagen-elastin biopolymer thin films treated by a CPA Ti:Sapphire laser (Femtopower Compact Pro) at 800 nm central wavelength, 30 fs pulse duration and 1 kHz repetition rate are investigated. A process of surface modification and microporous scaffold creation after ultrashort laser irradiation has been observed. The single-shot (N=1) and multi-shot (N>1) ablation threshold values were estimated by studying the linear relationship between the square of the crater diameter D^2 and the logarithm of the laser fluence F, for determination of the threshold fluences at N=1, 2, 5, 10, 15 and 30 laser pulses. An incubation analysis was also performed by calculating the incubation coefficient ξ from the multi-shot fluence thresholds using the power-law relationship F_th(N) = F_th(1) N^(ξ-1). In this paper, we also present an alternative calculation of the multi-shot ablation threshold, based on the logarithmic dependence of the ablation rate d on the laser fluence. The morphological surface changes of the modified regions were characterized by scanning electron microscopy to assess the variations generated by the laser treatment.
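The two estimation procedures described here, the D^2 versus ln F fit and the incubation power law, can be sketched on synthetic data. The beam radius, threshold fluence and incubation coefficient below are assumed values for illustration, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(2)
w0 = 10.0     # assumed beam radius (um)
f_th = 0.5    # assumed single-shot threshold fluence (J/cm^2)

# D^2 = 2*w0^2*ln(F/F_th): a straight-line fit of D^2 against ln F
# recovers F_th from the x-intercept.
fluence = np.linspace(0.6, 3.0, 12)
d2 = 2 * w0**2 * np.log(fluence / f_th) + rng.normal(0, 2.0, fluence.size)
slope, intercept = np.polyfit(np.log(fluence), d2, 1)
f_th_est = np.exp(-intercept / slope)

# Incubation: F_th(N) = F_th(1) * N**(xi - 1); xi from a log-log fit.
n = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 30.0])
xi_true = 0.85  # assumed incubation coefficient
f_th_n = f_th * n**(xi_true - 1)
xi_est = 1 + np.polyfit(np.log(n), np.log(f_th_n), 1)[0]
print(f_th_est, xi_est)
```

Both fits are ordinary linear regressions after the appropriate logarithmic transformation, which is why the method is robust even with modest crater-diameter scatter.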
Critical Gradient Behavior of Alfvén Eigenmode Induced Fast-Ion Transport in Phase Space
NASA Astrophysics Data System (ADS)
Collins, C. S.; Pace, D. C.; van Zeeland, M. A.; Heidbrink, W. W.; Stagner, L.; Zhu, Y. B.; Kramer, G. J.; Podesta, M.; White, R. B.
2016-10-01
Experiments on DIII-D have shown that energetic particle (EP) transport suddenly increases when multiple Alfvén eigenmodes (AEs) cause particle orbits to become stochastic. Several key features have been observed: (1) the transport threshold is phase-space dependent and occurs above the AE linear stability threshold, (2) EP losses become intermittent above threshold and appear to depend on the types of AEs present, and (3) stiff transport causes the EP density profile to remain unchanged even if the source increases. Theoretical analysis using the NOVA and ORBIT codes shows that the threshold corresponds to when particle orbits become stochastic due to wave-particle resonances with AEs in the region of phase space measured by the diagnostics. The kick model in NUBEAM (TRANSP) is used to evolve the EP distribution function to study which modes cause the most transport and further characterize intermittent bursts of EP losses, which are associated with large-scale redistribution through the domino effect. Work supported by the US DOE under DE-FC02-04ER54698.
Interlaminar shear fracture toughness and fatigue thresholds for composite materials
NASA Technical Reports Server (NTRS)
Obrien, T. Kevin; Murri, Gretchen B.; Salpekar, Satish A.
1987-01-01
Static and cyclic end notched flexure tests were conducted on a graphite epoxy, a glass epoxy, and a graphite thermoplastic to determine their interlaminar shear fracture toughness and fatigue thresholds for delamination in terms of limiting values of the mode II strain energy release rate, G-II, for delamination growth. The influence of precracking and data reduction schemes are discussed. Finite element analysis indicated that the beam theory calculation for G-II with the transverse shear contribution included was reasonably accurate over the entire range of crack lengths. Cyclic loading significantly reduced the critical G-II for delamination. A threshold value of the maximum cyclic G-II below which no delamination occurred after one million cycles was identified for each material. Also, residual static toughness tests were conducted on glass epoxy specimens that had undergone one million cycles without delamination. A linear mixed-mode delamination criterion was used to characterize the static toughness of several composite materials; however, a total G threshold criterion appears to characterize the fatigue delamination durability of composite materials with a wide range of static toughness.
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection a dominance of vision which supports the idea of an essential although not independent role of vision in self motion perception.
Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance
NASA Astrophysics Data System (ADS)
Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola
2013-04-01
Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to compare dominating processes between reality and model and to better understand when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominances of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. 
These finger prints were clustered into four reoccurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods with poor model performance with the dominant model components during these phases, the groundwater module was detected as the model part with the highest potential for model improvements. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts how to improve the SWAT model structure for the application in German lowland catchment are derived.
NASA Astrophysics Data System (ADS)
Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto
2015-04-01
Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem, and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency tables, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting the weights appropriately, and by searching for the optimal (maximum) value of the index. We discuss weaknesses in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence typical of large study areas. When working at the regional scale, landslides may have occurred and may have not been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold measured by the skill scores. We show that the variations in the skill scores differ with the uncertainty of events above or below the threshold. 
This has consequences in the ROC analysis. We applied the proposed procedure to a catalogue of rainfall conditions that have resulted in landslides, and to a set of rainfall events that - presumably - have not resulted in landslides, in Sicily, in the period 2002-2012. First, we determined regional event duration-cumulated event (ED) rainfall thresholds for shallow landslide occurrence using 200 rainfall conditions that have resulted in 223 shallow landslides in Sicily in the period 2002-2011. Next, we validated the thresholds using 29 rainfall conditions that have triggered 42 shallow landslides in Sicily in 2012, and 1250 rainfall events that presumably have not resulted in landslides in the same year. We performed a back analysis simulating the use of the thresholds in a hypothetical landslide warning system operating in 2012.
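The skill-score part of such a validation can be sketched with hypothetical contingency-table counts. The particular scores and the weights below are illustrative choices; as the abstract notes, the weight selection is left to the warning-system operator.

```python
def skill_scores(tp, fn, fp, tn):
    """Skill scores from a landslide/no-landslide contingency table."""
    pod = tp / (tp + fn)    # probability of detection
    pofd = fp / (fp + tn)   # probability of false detection
    far = fp / (tp + fp)    # false alarm ratio
    hk = pod - pofd         # Hanssen-Kuipers skill score
    return pod, far, hk

# Hypothetical counts for one candidate rainfall threshold.
pod, far, hk = skill_scores(tp=35, fn=7, fp=120, tn=1130)

# Optimal-threshold index as a weighted linear combination of skill scores;
# the weights are illustrative and would be set by the system operator.
w = (0.5, 0.3, 0.2)
index = w[0] * pod + w[1] * (1 - far) + w[2] * hk
print(round(index, 3))
```

Sweeping the candidate threshold and picking the maximum of this index is the selection step; the sensitivity analysis described above then asks how the counts, and hence the index, shift when some landslides go unreported.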
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theory of perturbations of singular values and statistical significance testing. Threshold bounds for perturbations due to finite precision and to i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. Results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrate the usefulness of these bounds, with comparisons to other previously known approaches.
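The idea of thresholding singular values against a noise-dependent bound can be sketched as follows. The matrix sizes, the noise level, and the 1.5 safety factor are assumptions, and the bound used is a generic perturbation-style estimate standing in for the paper's confidence regions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, true_rank = 60, 40, 3
sigma = 0.05  # assumed noise standard deviation

# Low-rank matrix observed through i.i.d. Gaussian noise.
a = rng.normal(size=(m, true_rank)) @ rng.normal(size=(true_rank, n))
noisy = a + rng.normal(0.0, sigma, size=(m, n))

s = np.linalg.svd(noisy, compute_uv=False)
# Singular values of a pure-noise matrix concentrate below roughly
# sigma*(sqrt(m)+sqrt(n)); values under a small multiple of that
# are treated as effectively zero.
threshold = 1.5 * sigma * (np.sqrt(m) + np.sqrt(n))
eff_rank = int((s > threshold).sum())
print(eff_rank)
```

The same recipe applied to a sample autocorrelation matrix gives the effective autoregressive model order discussed in the abstract.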
NASA Astrophysics Data System (ADS)
Shirazi, M. R.; Mohamed Taib, J.; De La Rue, R. M.; Harun, S. W.; Ahmad, H.
2015-03-01
The dynamic characteristics of a multi-wavelength Brillouin-Raman fiber laser (MBRFL) assisted by four-wave mixing have been investigated through the development of Stokes and anti-Stokes lines under different combinations of Brillouin and Raman pump power levels and different Raman pumping schemes in a ring cavity. For a Stokes line of order higher than three, the threshold power was less than the saturation power of the preceding Stokes line. By increasing the Brillouin pump power, the nth-order anti-Stokes and the (n+4)th-order Stokes power levels unexpectedly increased almost identically below the Stokes line threshold power. It was also found that the SBS threshold reduction (SBSTR) depended linearly on the gain factor for the 1st and 2nd Stokes lines (the first set). For the 3rd and 4th Stokes lines (the second set), this relation was almost linear with the same slope below an SBSTR of -6 dB, and then approached the linear relation of the first set as the gain factor was increased to 50 dB. Therefore, the threshold power levels of Stokes lines for a given Raman gain can be readily estimated simply from the threshold power levels measured without Raman amplification.
NASA Astrophysics Data System (ADS)
García Fernández, P.; Colet, P.; Toral, R.; San Miguel, M.; Bermejo, F. J.
1991-05-01
The squeezing properties of a model of a degenerate parametric amplifier with absorption losses and an added fourth-order nonlinearity have been analyzed. The approach consists of obtaining the Langevin equation for the optical field from the Heisenberg equation, provided that a linearization procedure is valid. The steady states of the deterministic equations have been obtained and their local stability analyzed. The stationary covariance matrix has been calculated below and above threshold. Below threshold, a squeezed vacuum state is obtained, and the nonlinear effects in the fluctuations have been taken into account by a Gaussian decoupling. Above threshold, a phase-squeezed coherent state is obtained, and numerical simulations allowed computation of the time interval, depending on the loss parameter, over which the system jumps from one stable state to the other. Finally, the numerically determined variances have been compared with those obtained from the linearized theory, and the limits of validity of the linear theory have been analyzed. It has become clear that the nonlinear contribution may be profitably used for the construction of above-threshold squeezing devices.
Effects of fatigue on motor unit firing rate versus recruitment threshold relationships.
Stock, Matt S; Beck, Travis W; Defreitas, Jason M
2012-01-01
The purpose of this study was to examine the influence of fatigue on the average firing rate versus recruitment threshold relationships for the vastus lateralis (VL) and vastus medialis. Nineteen subjects performed ten maximum voluntary contractions of the dominant leg extensors. Before and after this fatiguing protocol, the subjects performed a trapezoid isometric muscle action of the leg extensors, and bipolar surface electromyographic signals were detected from both muscles. These signals were then decomposed into individual motor unit action potential trains. For each subject and muscle, the relationship between average firing rate and recruitment threshold was examined using linear regression analyses. For the VL, the linear slope coefficients and y-intercepts for these relationships increased and decreased, respectively, after fatigue. For both muscles, many of the motor units decreased their firing rates. With fatigue, recruitment of higher-threshold motor units resulted in an increase in slope for the VL. Copyright © 2011 Wiley Periodicals, Inc.
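The firing rate versus recruitment threshold regression can be sketched with hypothetical motor-unit data. The numbers below are invented to reproduce the reported direction of change for the VL (slope increases, intercept decreases), not measured values.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical motor-unit data: recruitment threshold (% MVC) versus
# average firing rate (pulses/s), before and after fatigue.
thresholds = np.array([5.0, 10.0, 18.0, 25.0, 33.0, 41.0, 50.0])
rate_pre = np.array([22.0, 20.0, 17.0, 15.0, 13.0, 11.0, 9.0])
rate_post = np.array([17.0, 16.0, 15.0, 14.0, 13.0, 12.0, 11.0])

pre = linregress(thresholds, rate_pre)
post = linregress(thresholds, rate_post)
# Reported VL pattern: after fatigue the (negative) slope increases,
# i.e. the relationship flattens, while the y-intercept decreases.
print(pre.slope, post.slope)
print(pre.intercept, post.intercept)
```

One regression per subject and muscle, with the slope and intercept as the comparison variables, is all the analysis requires once the motor unit trains have been decomposed.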
Thresholds and the Evolution of Bedrock Channels on the Hawaiian Islands
NASA Astrophysics Data System (ADS)
Raming, L. W.; Whipple, K. X.
2017-12-01
Erosional thresholds are a key component of the non-linear dynamics of bedrock channel incision and long-term landscape evolution. Erosion thresholds, however, have remained difficult to quantify and uniquely identify in landscape evolution. Here we present an analysis of the morphology of canyons on the Hawaiian Islands and put forth the hypothesis that they are threshold-dominated landforms. Geologic (USGS), topographic (USGS 10m DEM), runoff (USGS) and meteorological data (Rainfall Atlas of Hawai`i) were used in an analysis of catchments on the islands of Hawai`i, Kaua`i, Lāna`i, Maui, and Moloka`i. Channel incision was estimated by differencing the present topography from reconstructed pre-incision volcanic surfaces. Four key results were obtained from our analysis: (1) Mean total incision ranged from 11 to 684 m and exhibited no correlation with incision duration. (2) In major canyons on the islands of Hawai`i and Kaua`i, rejuvenated-stage basalt flow outcrops at river level show that incision effectively ceased after a period no longer than 100 ka and 1.4 Ma, respectively. (3) Mean canyon wall gradient below knickpoints decreases with volcano age, with a median value of 1 measured on Hawai`i and of 0.7 on Kaua`i. (4) Downstream of major knickpoints, which demarcate the upper limits of deep canyons, channel profiles have near uniform channel steepness, with most values ranging between 60 and 100. The presence of uniform channel steepness (KSN) implies uniform bed shear stress and typically is interpreted as a steady-state balance between uplift and incision in tectonically active landscapes. However, this is untenable for Hawaiian canyons, and subsequently we posit that uniform KSN represents a condition where flood shear stress has been reduced to threshold values and incision reduced to near zero. Uniform KSN values decrease with rainfall, consistent with wetter regions generating threshold shear stress at lower KSN. 
This suggests that rapid incision occurred during brief intervals when thresholds were exceeded through a combination of initial slope, over-steepening due to cliff formation, and available runoff as a function of climate. From this analysis, we find significant evidence of the role of thresholds in landscape evolution and an alternative framework for viewing the evolution of the Hawaiian Islands.
Introducing Linear Functions: An Alternative Statistical Approach
ERIC Educational Resources Information Center
Nolan, Caroline; Herbert, Sandra
2015-01-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…
Zuo, Yue Yue J; Hébraud, Pascal; Hemar, Yacine; Ashokkumar, Muthupandian
2012-05-01
A simple light microscopic technique was developed in order to quantify the damage inflicted by high-power low-frequency ultrasound (0-160 W, 20 kHz) treatment on potato starch granules in aqueous dispersions. The surface properties of the starch granules were modified using ethanol and SDS washing methods, which are known to displace proteins and lipids from the surface of the starch granules. The study showed that in the case of normal and ethanol-washed potato starch dispersions, two linear regions were observed. The number of defects first increased linearly with an increase in ultrasound power up to a threshold level. This was then followed by another linear dependence of the number of defects on the ultrasound power. The power threshold where the change-over occurred was higher for the ethanol-washed potato dispersions compared to non-washed potato dispersions. In the case of SDS-washed potato starch, although the increase in defects was linear with the ultrasound power, the power threshold for a second linear region was not observed. These results are discussed in terms of the different possible mechanisms of cavitation induced-damage (hydrodynamic shear stresses and micro-jetting) and by taking into account the hydrophobicity of the starch granule surface. Copyright © 2011 Elsevier B.V. All rights reserved.
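Locating the power threshold between the two linear regions amounts to a two-segment (broken-stick) regression. Here is a minimal grid-search sketch on synthetic defect counts; the 80 W breakpoint and the slopes are assumed values for illustration.

```python
import numpy as np

def two_segment_fit(x, y):
    """Grid-search the breakpoint of a two-segment linear fit;
    returns (breakpoint, total residual sum of squares)."""
    best_bp, best_rss = None, np.inf
    for i in range(2, len(x) - 2):
        r1 = np.polyfit(x[:i], y[:i], 1, full=True)[1]
        r2 = np.polyfit(x[i:], y[i:], 1, full=True)[1]
        rss = float(r1[0]) if r1.size else 0.0
        rss += float(r2[0]) if r2.size else 0.0
        if rss < best_rss:
            best_bp, best_rss = x[i], rss
    return best_bp, best_rss

# Synthetic defect counts versus ultrasound power, slope change near 80 W.
power = np.linspace(0.0, 160.0, 17)
defects = np.where(power <= 80.0, 0.5 * power, 40.0 + 2.0 * (power - 80.0))
bp, rss = two_segment_fit(power, defects)
print(bp)
```

The split that minimizes the combined residual sum of squares marks the change-over power; with real, noisy counts the minimum is broader, which is why the abstract reports the threshold shifting with surface treatment rather than a single sharp value.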
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyea, Jan, E-mail: jbeyea@cipi.com
There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. - Highlights: • Edward J Calabrese has made a contentious challenge to mainstream radiobiological science. • Such challenges should not be neglected, lest they enter the political arena without review. 
• Key genetic studies from the 1940s, challenged by Calabrese, were found consistent and unbiased. • A 1956 genetics report did not hide estimates and does not need investigation for misconduct. • The scientific record was strong for a no-threshold, linear genetic response to radiation.
Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.
Donoho, David; Jin, Jiashun
2008-09-30
In important application fields today, genomics and proteomics are examples, selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p - π_(i))/sqrt(i/p (1 - i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
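A compact sketch of HC thresholding on simulated rare/weak data follows. The signal fraction, the signal strength, and the restriction of the search to the lower half of the order statistics are conventional illustrative choices, not prescriptions from the abstract.

```python
import numpy as np
from scipy.stats import norm

def hc_threshold(z):
    """Higher-criticism threshold: the |z| whose P-value maximizes
    (i/p - pi_(i)) / sqrt(i/p * (1 - i/p))."""
    p = len(z)
    abs_z = np.sort(np.abs(z))[::-1]       # descending |z| ...
    pvals = 2 * norm.sf(abs_z)             # ... gives ascending P-values
    k = p // 2                             # customary lower-half search
    frac = np.arange(1, k + 1) / p
    hc = (frac - pvals[:k]) / np.sqrt(frac * (1 - frac))
    imax = int(np.argmax(hc))
    return abs_z[imax]

# Rare/weak setting: 2% of features carry a weak mean shift.
rng = np.random.default_rng(4)
z = rng.normal(0.0, 1.0, 1000)
z[:20] += 2.5
t = hc_threshold(z)
selected = np.abs(z) >= t
print(t, int(selected.sum()))
```

Features with |Z| above the HC threshold are kept for the classifier; note how the data-driven threshold lands well below the Bonferroni-style cutoffs that would miss most of the weak signals.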
Non-linear effects of soda taxes on consumption and weight outcomes.
Fletcher, Jason M; Frisvold, David E; Tefft, Nathan
2015-05-01
The potential health impacts of imposing large taxes on soda to improve population health have been of interest for over a decade. As estimates of the effects of existing soda taxes with low rates suggest little health improvements, recent proposals suggest that large taxes may be effective in reducing weight because of non-linear consumption responses or threshold effects. This paper tests this hypothesis in two ways. First, we estimate non-linear effects of taxes using the range of current rates. Second, we leverage the sudden, relatively large soda tax increase in two states during the early 1990s combined with new synthetic control methods useful for comparative case studies. Our findings suggest virtually no evidence of non-linear or threshold effects. Copyright © 2014 John Wiley & Sons, Ltd.
Regulatory implications of a linear non-threshold (LNT) dose-based risks.
Aleta, C R
2009-01-01
Current radiation protection regulatory limits are based on the linear non-threshold (LNT) theory using health data from atomic bombing survivors. Studies in recent years have sparked debate on the validity of the theory, especially at low doses. The present LNT overestimates radiation risks since the dosimetry included only acute gammas and neutrons; the roles of other bomb-caused factors, e.g. fallout, induced radioactivity, thermal radiation (UVR), electromagnetic pulse (EMP), and blast, were excluded. Studies are proposed to improve the dose-response relationship.
Estimation of neural energy in microelectrode signals
NASA Astrophysics Data System (ADS)
Gaumond, R. P.; Clement, R.; Silva, R.; Sander, D.
2004-09-01
We considered the problem of determining the neural contribution to the signal recorded by an intracortical electrode. We developed a linear least-squares approach to determine the energy fraction of a signal attributable to an arbitrary number of autocorrelation-defined signals buried in noise. Application of the method requires estimation of the autocorrelation functions R_AP(τ) characterizing the action potential (AP) waveforms and R_n(τ) characterizing background noise. This method was applied to the analysis of chronically implanted microelectrode signals from motor cortex of rat. We found that neural (AP) energy consisted of a large-signal component which grows linearly with the number of threshold-detected neural events and a small-signal component unrelated to the count of threshold-detected AP signals. The addition of pseudorandom noise to electrode signals demonstrated the algorithm's effectiveness for a wide range of noise-to-signal energy ratios (0.08 to 39). We suggest, therefore, that the method could be of use in providing a measure of neural response in situations where clearly identified spike waveforms cannot be isolated, or in providing an additional 'background' measure of microelectrode neural activity to supplement the traditional AP spike count.
NASA Technical Reports Server (NTRS)
Bassom, Andrew P.; Seddougui, Sharon O.
1991-01-01
There exist two types of stationary instability of the flow over a rotating disc, corresponding to the upper branch, inviscid mode and the lower branch mode, which has a triple deck structure, of the neutral stability curve. A theoretical study of the linear problem and an account of the weakly nonlinear properties of the lower branch modes have been undertaken by Hall and MacKerrell respectively. Motivated by recent reports of experimental sightings of the lower branch mode, and by an examination of the role of suction in the linear stability properties of the flow, the effects of suction on the nonlinear disturbance described by MacKerrell are studied here. The additional analysis required in order to incorporate suction is relatively straightforward and enables the derivation of an amplitude equation which describes the evolution of the mode. For each value of the suction, a threshold value of the disturbance amplitude is obtained; modes of size greater than this threshold grow without limit as they develop away from the point of neutral stability.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α−1) where the replica symmetry is broken.
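The LP-vs-IP gap for minimum vertex cover can be seen on a tiny example. This sketch is not the paper's statistical-mechanics method; it exploits the known fact that the vertex-cover LP relaxation (min Σx_i subject to x_u + x_v ≥ 1, 0 ≤ x_i ≤ 1) always has a half-integral optimum (Nemhauser-Trotter), so for a small graph we may brute-force x_i over {0, 1/2, 1}, while the IP restricts x_i to {0, 1}.

```python
from itertools import product

def min_vc_lp_and_ip(n, edges):
    """Brute-force the LP relaxation (half-integral levels) and the IP
    (integral levels) of minimum vertex cover on a graph with n vertices."""
    def best(levels):
        opt = None
        for x in product(levels, repeat=n):
            # a feasible (fractional) cover must satisfy every edge constraint
            if all(x[u] + x[v] >= 1 for u, v in edges):
                s = sum(x)
                if opt is None or s < opt:
                    opt = s
        return opt
    return best((0, 0.5, 1)), best((0, 1))

# a triangle: the LP puts 1/2 on every vertex (value 3/2), the IP needs 2
lp, ip = min_vc_lp_and_ip(3, [(0, 1), (1, 2), (0, 2)])
```

The triangle already shows the relaxation failing to reach the IP value, the small-scale analogue of the typical-case gap the paper studies above c=e.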
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
The occurrence of lenticular opacities among atomic bomb survivors in Hiroshima and Nagasaki detected in 1963-1964 has been examined in reference to their gamma and neutron doses. A lenticular opacity in this context implies an ophthalmoscopic and slit lamp biomicroscopic defect in the axial posterior aspect of the lens which may or may not interfere measurably with visual acuity. Several different dose-response models were fitted to the data after the effects of age at time of bombing (ATB) were examined. Some postulate the existence of a threshold(s), others do not. All models assume a "background" exists, that is, that some number of posterior lenticular opacities are ascribable to events other than radiation exposure. Among these alternatives we can show that a simple linear gamma-neutron relationship which assumes no threshold does not fit the data adequately under the T65 dosimetry, but does fit the recent Oak Ridge and Lawrence Livermore estimates. Other models which envisage quadratic terms in gamma and which may or may not assume a threshold are compatible with the data. The "best" fit, that is, the one with the smallest χ² and largest tail probability, is with a "linear gamma:linear neutron" model which postulates a gamma threshold but no threshold for neutrons. It should be noted that the greatest difference in the dose-response models associated with the three different sets of doses involves the neutron component, as is, of course, to be expected. No effect of neutrons on the occurrence of lenticular opacities is demonstrable with either the Lawrence Livermore or Oak Ridge estimates.
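The model comparison at the heart of this entry (no-threshold linear fit versus a fit that postulates a threshold) can be illustrated generically. This is only a sketch of the idea on synthetic continuous data with least squares; the study itself fitted dose-response models to binary opacity data, and the true threshold and noise level below are invented.

```python
import numpy as np

def fit_linear(dose, resp):
    """No-threshold model: resp ~ a + b*dose, by least squares."""
    b, a = np.polyfit(dose, resp, 1)
    resid = resp - (a + b * dose)
    return (a, b), float((resid ** 2).sum())

def fit_threshold(dose, resp):
    """Hockey-stick model: flat background below a threshold T, linear
    above it. T is found by grid search over observed doses; T = 0
    reproduces the pure linear model, so the threshold fit never loses."""
    best = None
    for T in np.unique(dose)[:-1]:
        x = np.maximum(dose - T, 0.0)
        b, a = np.polyfit(x, resp, 1)
        sse = float(((resp - (a + b * x)) ** 2).sum())
        if best is None or sse < best[1]:
            best = ((T, a, b), sse)
    return best

# synthetic data with a true threshold at dose 1.0 (arbitrary units)
rng = np.random.default_rng(3)
dose = np.linspace(0, 4, 80)
resp = 0.05 + 0.2 * np.maximum(dose - 1.0, 0) + rng.normal(0, 0.02, 80)
(_, sse_lin) = fit_linear(dose, resp)
((T, _, _), sse_thr) = fit_threshold(dose, resp)
```

Comparing the two residual sums of squares (here via the fit, in the study via χ² and tail probabilities) is what distinguishes threshold from no-threshold dose-response models.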
NASA Astrophysics Data System (ADS)
Pal, Anirban; Picu, Catalin; Lupulescu, Marian V.
We study the mechanical behavior of two-dimensional, stochastically microcracked continua in the range of crack densities close to, and above, the transport percolation threshold. We show that these materials retain stiffness up to crack densities much larger than the transport percolation threshold, due to topological interlocking of sample sub-domains. Even with a linear constitutive law for the continuum, the mechanical behavior becomes non-linear in the range of crack densities bounded by the transport and stiffness percolation thresholds. The effect is due to the fractal nature of the fragmentation process and is not linked to the roughness of individual cracks. We associate this behavior with that of itacolumite, a sandstone that exhibits unusual flexibility.
Rexhepaj, Elton; Brennan, Donal J; Holloway, Peter; Kay, Elaine W; McCann, Amanda H; Landberg, Goran; Duffy, Michael J; Jirstrom, Karin; Gallagher, William M
2008-01-01
Manual interpretation of immunohistochemistry (IHC) is a subjective, time-consuming and variable process, with an inherent intra-observer and inter-observer variability. Automated image analysis approaches offer the possibility of developing rapid, uniform indicators of IHC staining. In the present article we describe the development of a novel approach for automatically quantifying oestrogen receptor (ER) and progesterone receptor (PR) protein expression assessed by IHC in primary breast cancer. Two cohorts of breast cancer patients (n = 743) were used in the study. Digital images of breast cancer tissue microarrays were captured using the Aperio ScanScope XT slide scanner (Aperio Technologies, Vista, CA, USA). Image analysis algorithms were developed using MatLab 7 (MathWorks, Apple Hill Drive, MA, USA). A fully automated nuclear algorithm was developed to discriminate tumour from normal tissue and to quantify ER and PR expression in both cohorts. Random forest clustering was employed to identify optimum thresholds for survival analysis. The accuracy of the nuclear algorithm was initially confirmed by a histopathologist, who validated the output in 18 representative images. In these 18 samples, an excellent correlation was evident between the results obtained by manual and automated analysis (Spearman's rho = 0.9, P < 0.001). Optimum thresholds for survival analysis were identified using random forest clustering. This revealed 7% positive tumour cells as the optimum threshold for the ER and 5% positive tumour cells for the PR. Moreover, a 7% cutoff level for the ER predicted a better response to tamoxifen than the currently used 10% threshold. Finally, linear regression was employed to demonstrate a more homogeneous pattern of expression for the ER (R = 0.860) than for the PR (R = 0.681). In summary, we present data on the automated quantification of the ER and the PR in 743 primary breast tumours using a novel unsupervised image analysis algorithm. 
This novel approach provides a useful tool for the quantification of biomarkers on tissue specimens, as well as for objective identification of appropriate cutoff thresholds for biomarker positivity. It also offers the potential to identify proteins with a homogeneous pattern of expression.
Okudan, N; Gökbel, H
2006-03-01
The aim of the present study was to investigate the relationships between critical power (CP), maximal aerobic power and the anaerobic threshold, and whether exercise time to exhaustion and work at the CP can be used as an index in the determination of endurance. An incremental maximal cycle exercise test was performed on 30 untrained males aged 18-22 years. Lactate analysis was carried out on capillary blood samples every 2 minutes. From gas exchange parameters and heart rate and lactate values, ventilatory anaerobic thresholds, the heart rate deflection point and the onset of blood lactate accumulation were calculated. CP was determined with the linear work-time method using 3 loads. The subjects exercised until they could no longer maintain a cadence above 24 rpm at their CP, and exercise time to exhaustion was determined. CP was lower than the power output corresponding to VO2max and higher than the power outputs corresponding to the anaerobic thresholds. CP was correlated with VO2max and the anaerobic threshold. Exercise time to exhaustion and work at CP were not correlated with VO2max or the anaerobic threshold. Because CP was correlated with VO2max and the anaerobic threshold while exercise time to exhaustion and work at the CP were not, we conclude that exercise time to exhaustion and work at the CP cannot be used as an index in the determination of endurance.
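The linear work-time method mentioned above can be shown in a few lines: at each constant power output P the time to exhaustion t is measured, total work W = P·t is regressed on t, and the slope estimates CP while the intercept estimates the finite anaerobic work capacity (often written AWC or W′). The three loads and exhaustion times below are invented for illustration, not the study's measurements.

```python
import numpy as np

# three constant-power trials (illustrative values, not study data)
P = np.array([300.0, 250.0, 225.0])   # power outputs in watts
t = np.array([180.0, 420.0, 720.0])   # times to exhaustion in seconds
W = P * t                             # total work in joules

# W = CP*t + AWC: slope of the least-squares line is critical power,
# intercept is the anaerobic work capacity
cp, awc = np.polyfit(t, W, 1)
```

With these numbers the fit gives a CP near 200 W and a positive AWC, consistent with CP sitting between the anaerobic-threshold and VO2max power outputs.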
Moring, J. Bruce
2009-01-01
In 2001, the U.S. Geological Survey National Water Quality Assessment Program began a series of studies in the contiguous United States to examine the effects of urbanization on the chemical, physical, and biological characteristics of streams. Small streams in the Texas Blackland Prairie level III ecoregion in and near the Dallas-Fort Worth metropolitan area were the focus of one of the studies. The principal objectives of the study, based on data collected in 2003-04 from 28 subbasins of the Trinity River Basin, were to (1) define a gradient of urbanization for small Blackland Prairie streams in the Trinity River Basin on the basis of a range of urban intensity indexes (UIIs) calculated using land-use/land-cover, infrastructure, and socioeconomic characteristics; (2) assess the relation between this gradient of urbanization and the chemical, physical, and biological characteristics of these streams; and (3) evaluate the type of relation (that is, linear or nonlinear, and whether there was a threshold response) of the chemical, physical, and biological characteristics of these streams to the gradient of urbanization. Of 94 water-chemistry variables and one measure of potential toxicity from a bioassay, the concentrations of two pesticides (diazinon and simazine) and one measure of potential toxicity (P450RGS assay) from compounds sequestered in semipermeable membrane devices were significantly positively correlated with the UII. No threshold responses to the UII for diazinon and simazine concentrations were observed over the entire range of the UII scores. The linear correlation for diazinon with the UII was significant, but the linear correlation for simazine with the UII was not. No statistically significant relations between the UII and concentrations of suspended sediment, total nitrogen, total phosphorus, or any major ions were indicated. Eleven of 59 physical variables from streamflow were significantly correlated with the UII.
Temperature was not significantly correlated with the UII, and none of the physical habitat measurements were significantly correlated with the UII. Seven physical variables categorized as streamflow flashiness metrics were significantly positively correlated with the UII, two of which showed a linear but not a threshold response to the UII. Four flow-duration metrics were significantly negatively correlated with the UII, of which two showed a linear response to the UII, one showed a threshold response, and one showed neither. None of the fish metrics were significantly correlated with the UII in the Blackland Prairie streams. Two qualitative multi-habitat benthic macroinvertebrate metrics, predator richness and percentage filterer-collector richness, were significantly correlated with the UII; predator richness was negatively correlated with the UII, and percentage filterer-collector richness was positively correlated with the UII. No threshold response to the UII was observed for either metric, but both showed a significant linear response to the UII. Three richest targeted habitat (RTH) benthic macroinvertebrate metrics, Margalef's richness, predator richness, and omnivore richness were significantly negatively correlated with the UII. Margalef's richness was the only RTH metric that indicated a threshold response to the UII. The majority of unique taxa collected in the periphytic algae samples were diatoms. Six RTH periphytic algae metrics were correlated with the UII and five of the six showed no notable threshold response to the UII; but all five showed significant linear responses to the UII. Only the metric OT_VL_DP, which indicates the presence of algae that are tolerant of low dissolved oxygen conditions, showed a threshold response to the UII. 
Six depositional target habitat periphytic algae metrics were correlated with the UII, five of which showed no threshold response to the UII; three of the five showed significant linear responses to the UII, one showed a borderline significant
Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.
2012-01-01
We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
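The event metrics described above (frequency, duration, area and magnitude of runs exceeding a temperature threshold) can be sketched directly. The series and the 20°C threshold below are illustrative, not the Dog River data, and the metric definitions are plausible readings of the abstract rather than the authors' exact formulas.

```python
import numpy as np

def event_metrics(temps, threshold):
    """Threshold-exceedance event metrics for a temperature series.

    An event is a maximal run of values exceeding `threshold`. Returns the
    number of events (frequency), their total length in samples (duration),
    the area above the threshold summed over events, and the peak
    exceedance (magnitude)."""
    temps = np.asarray(temps, dtype=float)
    above = temps > threshold
    # locate starts and ends of maximal runs of `above`
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(temps)]
    exceed = np.where(above, temps - threshold, 0.0)
    return {
        "frequency": len(starts),
        "duration": int(above.sum()),
        "area": float(exceed.sum()),
        "magnitude": float(exceed.max()) if above.any() else 0.0,
    }

# hourly temperatures (degrees C) with two exceedance events over 20 C
series = [18, 19, 21, 22, 19, 18, 20.5, 23, 21, 19]
m = event_metrics(series, 20.0)
```

Unlike a mean or a 7-day maximum, these metrics preserve information about how often and how severely the threshold was crossed, which is the point the abstract makes.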
Gawthrop, Peter J.; Lakie, Martin; Loram, Ian D.
2017-01-01
Key points: A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of irresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision making pathways up to several times per second. Abstract: The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial, serial, refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking?
Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models: continuous-linear-control (CC), continuous-linear-control with a calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77-87% vs. 8-48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open loop intervals consistent with, respectively, the instructions and previously measured, model-independent values; whereas CCN required changes in the noise spectrum deviating from broadband, signal-dependent noise. We conclude that manual tracking uses open loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks, we suggest that these structures are intimately involved in visuo-manual tracking. PMID:28833126
NASA Astrophysics Data System (ADS)
Perino, E. J.; Matoz-Fernandez, D. A.; Pasinetti, P. M.; Ramirez-Pastor, A. J.
2017-07-01
Monte Carlo simulations and finite-size scaling analysis have been performed to study the jamming and percolation behavior of linear k-mers (also known as rods or needles) on a two-dimensional triangular lattice of linear dimension L, considering an isotropic random sequential adsorption (RSA) process and periodic boundary conditions. Extensive numerical work has been done to extend previous studies to larger system sizes and longer k-mers, which enables the confirmation of a nonmonotonic size dependence of the percolation threshold and the estimation of a maximum value of k from which percolation would no longer occur. Finally, a complete analysis of critical exponents and universality has been done, showing that the percolation phase transition involved in the system is not affected, having the same universality class as ordinary random percolation.
Optimal estimation of recurrence structures from time series
NASA Astrophysics Data System (ADS)
beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel
2016-05-01
Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems, and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for that threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
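The object whose threshold is being optimized here is the recurrence plot. A minimal sketch of its construction follows; the paper's Markov-model criterion for choosing the distance threshold is not reproduced, so `eps` is simply passed in, and the sine series is an arbitrary illustration.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Recurrence plot of a time series: R[i, j] = 1 when states i and j
    are closer than the distance threshold eps (Euclidean norm)."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    if x.shape[0] == 1:
        x = x.T                                  # shape (n_samples, dim)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent pairs, the simplest RQA statistic."""
    return R.mean()

# the recurrence rate grows with the threshold; the cited work replaces
# this ad hoc sweep with an analytically derived optimum
x = np.sin(np.linspace(0, 8 * np.pi, 200))
rates = [recurrence_rate(recurrence_matrix(x, e)) for e in (0.05, 0.2, 0.5)]
```

The monotone dependence of the recurrence structure on `eps` is exactly why a principled threshold-selection criterion, as proposed above, is needed.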
Longobardo, G S; Evangelisti, C J; Cherniack, N S
2009-12-01
We examined the effect of arousals (shifts from sleep to wakefulness) on breathing during sleep using a mathematical model. The model consisted of a description of the fluid dynamics and mechanical properties of the upper airways and lungs, as well as a controller sensitive to arterial and brain changes in CO2, changes in arterial oxygen, and a neural input, alertness. The body was divided into multiple gas store compartments connected by the circulation. Cardiac output was constant, and cerebral blood flows were sensitive to changes in O2 and CO2 levels. Arousal was considered to occur instantaneously when afferent respiratory chemical and neural stimulation reached a threshold value, while sleep occurred when stimulation fell below that value. In the case of rigid and nearly incompressible upper airways, lowering arousal threshold decreased the stability of breathing and led to the occurrence of repeated apnoeas. In more compressible upper airways, to maintain stability, increasing arousal thresholds and decreasing elasticity were linked approximately linearly, until at low elastances arousal thresholds had no effect on stability. Increased controller gain promoted instability. The architecture of apnoeas during unstable sleep changed with the arousal threshold and decreases in elasticity. With rigid airways, apnoeas were central. With lower elastances, apnoeas were mixed even with higher arousal thresholds. With very low elastances and still higher arousal thresholds, sleep consisted totally of obstructed apnoeas. Cycle lengths shortened as the sleep architecture changed from mixed apnoeas to total obstruction. Deeper sleep also tended to promote instability by increasing plant gain. These instabilities could be countered by arousal threshold increases which were tied to deeper sleep or accumulated aroused time, or by decreased controller gains.
Detector noise statistics in the non-linear regime
NASA Technical Reports Server (NTRS)
Shopbell, P. L.; Bland-Hawthorn, J.
1992-01-01
The statistical behavior of an idealized linear detector in the presence of threshold and saturation levels is examined. It is assumed that the noise is governed by the statistical fluctuations in the number of photons emitted by the source during an exposure. Since physical detectors cannot have infinite dynamic range, our model illustrates that all devices have non-linear regimes, particularly at high count rates. The primary effect is a decrease in the statistical variance about the mean signal due to a portion of the expected noise distribution being removed via clipping. Higher order statistical moments are also examined, in particular, skewness and kurtosis. In principle, the expected distortion in the detector noise characteristics can be calibrated using flatfield observations with count rates matched to the observations. For this purpose, some basic statistical methods that utilize Fourier analysis techniques are described.
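The central effect described here, that clipping removes part of the expected noise distribution and shrinks the variance below the photon-counting value, is easy to demonstrate by simulation. This is a sketch of the idea only; the count rate and saturation level below are arbitrary choices, not values from the paper.

```python
import numpy as np

# Poisson photon counts passed through a detector with a saturation level:
# the upper tail of the noise distribution is removed, so the clipped
# variance falls below the Poisson variance (var == mean for Poisson).
rng = np.random.default_rng(1)
rate = 100.0                    # mean photon count per exposure (illustrative)
saturation = 110                # saturation level about one sigma above the mean
counts = rng.poisson(rate, size=200_000)
clipped = np.minimum(counts, saturation)
var_raw, var_clip = counts.var(), clipped.var()
```

In the same spirit, a threshold level would clip the lower tail; either way the measured noise no longer matches the naive photon-statistics prediction, which is why the text recommends calibrating with flatfields at matched count rates.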
Functional properties of models for direction selectivity in the retina.
Grzywacz, N M; Koch, C
1987-01-01
Poggio and Reichardt (Kybernetik, 13:223-227, 1973) showed that if the average response of a visual system to a moving stimulus is directionally selective, then this sensitivity must be mediated by a nonlinear operation. In particular, it has been proposed that at the behavioral level, motion-sensitive biological systems are implemented by quadratic nonlinearities (Hassenstein and Reichardt: Z. Naturforsch., 11b:513-524, 1956; van Santen and Sperling: J. Opt. Soc. Am. [A] 1:451-473, 1984; Adelson and Bergen: J. Opt. Soc. Am. [A], 2:284-299, 1985). This paper analyzes theoretically two nonlinear neural mechanisms that possibly underlie retinal direction selectivity and explores the conditions under which they behave as a quadratic nonlinearity. The first mechanism is shunting inhibition (Torre and Poggio: Proc. R. Soc. Lond. [Biol.], 202:409-416, 1978), and the second consists of the linear combination of the outputs of a depolarizing and a hyperpolarizing synapse, followed by a threshold operation. It was found that although sometimes possible, it is in practice hard to approximate the Shunting Inhibition and the Threshold models for direction selectivity by quadratic systems. For instance, the level of the threshold on the Threshold model must be close to the steady-state level of the cell's combined synaptic input. Furthermore, for both the Shunting and the Threshold models, the approximation by a quadratic system is only possible for a small range of low contrast stimuli and for situations where the rectifications due to the ON-OFF mechanisms, and to the ganglion cells' action potentials, can be linearized. The main question that this paper leaves open is, how do we account for the apparent quadratic properties of motion perception given that the same properties seem so fragile at the single cell level? Finally, as a result of this study, some system analysis experiments were proposed that can distinguish between different instances of the models.
Phase-space dependent critical gradient behavior of fast-ion transport due to Alfvén eigenmodes
Collins, C. S.; Heidbrink, W. W.; Podestà, M.; ...
2017-06-09
Experiments in the DIII-D tokamak show that many overlapping small-amplitude Alfvén eigenmodes (AEs) cause fast-ion transport to sharply increase above a critical threshold, leading to fast-ion density profile resilience and reduced fusion performance. The threshold is above the AE linear stability limit and varies between diagnostics that are sensitive to different parts of fast-ion phase-space. A comparison with theoretical analysis using the NOVA and ORBIT codes shows that, for the neutral particle diagnostic, the threshold corresponds to the onset of stochastic particle orbits due to wave-particle resonances with AEs in the measured region of phase space. We manipulated the bulk fast-ion distribution and instability behavior through variations in beam deposition geometry, and no significant differences in the onset threshold outside of measurement uncertainties were found, in agreement with the theoretical stochastic threshold analysis. Simulations using the 'kick model' produce beam ion density gradients consistent with the empirically measured radial critical gradient and highlight the importance of including the energy and pitch dependence of the fast-ion distribution function in critical gradient models. The addition of electron cyclotron heating changes the types of AEs present in the experiment, comparatively increasing the measured fast-ion density and radial gradient. Our studies provide the basis for understanding how to avoid AE transport that can undesirably redistribute current and cause fast-ion losses, and the measurements are being used to validate AE-induced transport models that use the critical gradient paradigm, giving greater confidence when applied to ITER.
Impact of Sampling Density on the Extent of HIV Clustering
Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor
2014-01-01
Identifying and monitoring HIV clusters could be useful in tracking the leading edge of HIV transmission in epidemics. Currently, greater specificity in the definition of HIV clusters is needed to reduce confusion in the interpretation of HIV clustering results. We address sampling density as one of the key aspects of HIV cluster analysis. The proportion of viral sequences in clusters was estimated at sampling densities from 1.0% to 70%. A set of 1,248 HIV-1C env gp120 V1C5 sequences from a single community in Botswana was utilized in simulation studies. Matching numbers of HIV-1C V1C5 sequences from the LANL HIV Database were used as comparators. HIV clusters were identified by phylogenetic inference under bootstrapped maximum likelihood and pairwise distance cut-offs. Sampling density below 10% was associated with stochastic HIV clustering with broad confidence intervals. HIV clustering increased linearly at sampling density >10%, and was accompanied by narrowing confidence intervals. Patterns of HIV clustering were similar at bootstrap thresholds 0.7 to 1.0, but the extent of HIV clustering decreased with higher bootstrap thresholds. The origin of sampling (local concentrated vs. scattered global) had a substantial impact on HIV clustering at sampling densities ≥10%. A pairwise distance of 10% was estimated as a threshold for cluster analysis of HIV-1 V1C5 sequences. The node bootstrap support distribution provided additional evidence for 10% sampling density as the threshold for HIV cluster analysis. The detectability of HIV clusters is substantially affected by sampling density. A minimal genotyping density of 10% and sampling density of 50-70% are suggested for HIV-1 V1C5 cluster analysis. PMID:25275430
Chen, Hung-Yuan; Chiu, Yen-Ling; Hsu, Shih-Ping; Pai, Mei-Fen; Yang, Ju-Yeh; Lai, Chun-Fu; Lu, Hui-Min; Huang, Shu-Chen; Yang, Shao-Yu; Wen, Su-Yin; Chiu, Hsien-Ching; Hu, Fu-Chang; Peng, Yu-Sen; Jee, Shiou-Hwa
2013-01-01
Background Uremic pruritus is a common and intractable symptom in patients on chronic hemodialysis, but factors associated with the severity of pruritus remain unclear. This study aimed to explore the associations of metabolic factors and dialysis adequacy with the aggravation of pruritus. Methods We conducted a 5-year prospective cohort study on patients with maintenance hemodialysis. A visual analogue scale (VAS) was used to assess the intensity of pruritus. Patient demographic and clinical characteristics, laboratory parameters, dialysis adequacy (assessed by Kt/V), and pruritus intensity were recorded at baseline and follow-up. Change score analysis of the difference score of VAS between baseline and follow-up was performed using multiple linear regression models. The optimal threshold of Kt/V, which is associated with the aggravation of uremic pruritus, was determined by generalized additive models and receiver operating characteristic analysis. Results A total of 111 patients completed the study. Linear regression analysis showed that lower Kt/V and use of low-flux dialyzer were significantly associated with the aggravation of pruritus after adjusting for the baseline pruritus intensity and a variety of confounding factors. The optimal threshold value of Kt/V for pruritus was 1.5 suggested by both generalized additive models and receiver operating characteristic analysis. Conclusions Hemodialysis with the target of Kt/V ≥1.5 and use of high-flux dialyzer may reduce the intensity of pruritus in patients on chronic hemodialysis. Further clinical trials are required to determine the optimal dialysis dose and regimen for uremic pruritus. PMID:23940749
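Finding an "optimal threshold" of a continuous predictor via receiver operating characteristic analysis, as done for Kt/V above, is commonly implemented with Youden's J statistic. The sketch below uses synthetic data built so that risk rises steeply below Kt/V 1.5; the data, the steepness, and the direction of the prediction are assumptions for illustration, not the study's data.

```python
import numpy as np

def youden_threshold(values, outcome):
    """Cutoff maximising sensitivity + specificity - 1 (Youden's J),
    treating low values as the 'positive' prediction."""
    best_j, best_c = -1.0, None
    for c in np.unique(values):
        pred = values < c
        sens = pred[outcome].mean()
        spec = (~pred[~outcome]).mean()
        if sens + spec - 1 > best_j:
            best_j, best_c = sens + spec - 1, c
    return best_c, best_j

rng = np.random.default_rng(0)
ktv = rng.normal(1.5, 0.2, 400)              # synthetic dialysis adequacy values
p_aggr = 1 / (1 + np.exp(20 * (ktv - 1.5)))  # risk rises steeply below 1.5
aggravated = rng.random(400) < p_aggr
cut, j = youden_threshold(ktv, aggravated)
```

With a genuinely steep change in risk, the recovered cutoff sits close to the true transition point; the GAM approach in the abstract addresses the same question without pre-specifying a functional form.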
Forutan, M; Ansari Mahyari, S; Sargolzaei, M
2015-02-01
Calf and heifer survival are important traits in dairy cattle affecting profitability. This study was carried out to estimate genetic parameters of survival traits in female calves at different age periods, until nearly the first calving. Records of 49,583 female calves born between 1998 and 2009 were considered in five age periods as days 1-30, 31-180, 181-365, 366-760 and the full period (days 1-760). Genetic components were estimated based on linear and threshold sire models and linear animal models. The models included both fixed effects (month of birth, dam's parity number, calving ease and twin/single) and random effects (herd-year, genetic effect of sire or animal, and residual). Rates of death were 2.21, 3.37, 1.97, 4.14 and 12.4% for the above periods, respectively. Heritability estimates were very low, ranging from 0.48 to 3.04, 0.62 to 3.51 and 0.50 to 4.24% for the linear sire model, animal model and threshold sire model, respectively. Rank correlations between random effects of sires obtained with linear and threshold sire models and with linear animal and sire models were 0.82-0.95 and 0.61-0.83, respectively. The estimated genetic correlations between the five periods were moderate and only significant for 31-180 and 181-365 (r(g) = 0.59), 31-180 and 366-760 (r(g) = 0.52), and 181-365 and 366-760 (r(g) = 0.42). The low genetic correlations in the current study suggest that survival at different periods may be affected by the same genes with different expression or by different genes. Even though the additive genetic variation of survival traits was small, it might be possible to improve these traits by traditional or genomic selection. © 2014 Blackwell Verlag GmbH.
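A sire-model heritability of the kind estimated above can be sketched with a paternal half-sib ANOVA, where h² = 4·Vs/(Vs + Ve). This is a simplified balanced-design illustration on simulated data (no fixed effects, no threshold link), not the study's REML/animal-model analysis; the sire and residual variances are assumptions chosen to give a true h² of 0.04, in the low range reported.

```python
import numpy as np

def sire_heritability(sire_ids, y):
    """Paternal half-sib ANOVA estimate: h2 = 4*Vs / (Vs + Ve)."""
    sires = np.unique(sire_ids)
    groups = [y[sire_ids == s] for s in sires]
    n = np.array([len(g) for g in groups], float)
    grand = y.mean()
    ms_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    ms_w = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(y) - len(groups))
    k = (len(y) - (n ** 2).sum() / len(y)) / (len(groups) - 1)  # effective group size
    v_sire = max((ms_b - ms_w) / k, 0.0)        # between-sire variance component
    return 4.0 * v_sire / (v_sire + ms_w)

rng = np.random.default_rng(1)
n_sires, n_daughters = 200, 50
sire_effect = rng.normal(0.0, np.sqrt(0.01), n_sires)    # sire variance 0.01
ids = np.repeat(np.arange(n_sires), n_daughters)
y = sire_effect[ids] + rng.normal(0.0, np.sqrt(0.99), ids.size)  # residual 0.99
h2 = sire_heritability(ids, y)                           # true h2 = 0.04
```

The factor 4 arises because paternal half-sibs share one quarter of their additive genetic variance.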
Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi
2013-01-01
Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae), were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to the adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation, based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
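The linear degree-day model used in studies like this one fits development rate (1/days) against temperature; the lower developmental threshold Tmin is the x-intercept and the thermal constant is the reciprocal of the slope. The sketch below uses hypothetical rate data over the linear mid-range of temperature, not the published P. solenopsis values.

```python
import numpy as np

# hypothetical development rates (1/days) over the linear mid-range of temperature
temps = np.array([15.0, 20.0, 25.0, 27.0, 30.0])         # °C
rates = np.array([0.012, 0.030, 0.048, 0.055, 0.066])    # illustrative values

slope, intercept = np.polyfit(temps, rates, 1)  # rate = intercept + slope * T
t_min = -intercept / slope            # lower developmental threshold, Tmin (°C)
thermal_constant = 1.0 / slope        # degree-days to complete development
```

Points near the upper threshold must be excluded before fitting, since the rate curve flattens and turns over there; that is what the nonlinear models in the abstract capture.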
Yu, Dahai; Armstrong, Ben G.; Pattenden, Sam; Wilkinson, Paul; Doherty, Ruth M.; Heal, Mathew R.; Anderson, H. Ross
2012-01-01
Background: Short-term exposure to ozone has been associated with increased daily mortality. The shape of the concentration–response relationship—and, in particular, if there is a threshold—is critical for estimating public health impacts. Objective: We investigated the concentration–response relationship between daily ozone and mortality in five urban and five rural areas in the United Kingdom from 1993 to 2006. Methods: We used Poisson regression, controlling for seasonality, temperature, and influenza, to investigate associations between daily maximum 8-hr ozone and daily all-cause mortality, assuming linear, linear-threshold, and spline models for all-year and season-specific periods. We examined sensitivity to adjustment for particles (urban areas only) and alternative temperature metrics. Results: In all-year analyses, we found clear evidence for a threshold in the concentration–response relationship between ozone and all-cause mortality in London at 65 µg/m3 [95% confidence interval (CI): 58, 83] but little evidence of a threshold in other urban or rural areas. Combined linear effect estimates for all-cause mortality were comparable for urban and rural areas: 0.48% (95% CI: 0.35, 0.60) and 0.58% (95% CI: 0.36, 0.81) per 10-µg/m3 increase in ozone concentrations, respectively. Seasonal analyses suggested thresholds in both urban and rural areas for effects of ozone during summer months. Conclusions: Our results suggest that health impacts should be estimated across the whole ambient range of ozone using both threshold and nonthreshold models, and models stratified by season. Evidence of a threshold effect in London but not in other study areas requires further investigation. The public health impacts of exposure to ozone in rural areas should not be overlooked. PMID:22814173
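A linear-threshold ("hockey-stick") model of the kind compared above can be profiled over candidate thresholds with a Poisson likelihood. The sketch below is a minimal NumPy illustration on synthetic daily data, with a hand-rolled IRLS fitter and no adjustment for seasonality, temperature, or influenza; the baseline death count, the effect size, and the true 65-unit threshold are assumptions echoing the London estimate, not the study's data.

```python
import numpy as np

def poisson_fit(X, y, n_iter=30):
    """Poisson GLM (log link) fitted by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())                  # stable starting point
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu                 # working response
        XtW = (X * mu[:, None]).T               # IRLS weights are mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    mu = np.exp(X @ beta)
    return beta, np.sum(y * np.log(mu) - mu)    # log-likelihood up to a constant

def best_threshold(o3, deaths, taus):
    """Profile a hockey-stick Poisson model over candidate thresholds."""
    lls = []
    for tau in taus:
        X = np.column_stack([np.ones_like(o3), np.maximum(o3 - tau, 0.0)])
        lls.append(poisson_fit(X, deaths)[1])
    return taus[int(np.argmax(lls))]

rng = np.random.default_rng(2)
o3 = rng.uniform(0, 100, 2000)                        # daily ozone, arbitrary units
mu_true = 30 * np.exp(0.01 * np.maximum(o3 - 65, 0))  # flat risk below 65 units
deaths = rng.poisson(mu_true)
best_tau = best_threshold(o3, deaths, np.arange(30.0, 91.0, 5.0))
```

Comparing the profiled likelihood against a no-threshold linear fit is how threshold and non-threshold models are formally contrasted.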
Temperature dependence of spontaneous emission in GaAs-AlGaAs quantum well lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blood, P.; Kucharska, A.I.; Foxon, C.T.
1989-09-18
Using quantum well laser devices with a window in the p-type contact, we have measured the relative change of spontaneous emission intensity at threshold with temperature for 58-Å-wide GaAs wells. Over the range 250–340 K the data are in good agreement with the linear relation obtained from a gain-current calculation which includes transition broadening. This linear behavior contrasts with the stronger temperature dependence of the total measured threshold current of the same devices, which includes nonradiative barrier recombination processes.
Gollee, Henrik; Gawthrop, Peter J; Lakie, Martin; Loram, Ian D
2017-11-01
A human controlling an external system is described most easily and conventionally as linearly and continuously translating sensory input to motor output, with the inevitable output remnant, non-linearly related to the input, attributed to sensorimotor noise. Recent experiments show sustained manual tracking involves repeated refractoriness (insensitivity to sensory information for a certain duration), with the temporary 200-500 ms periods of unresponsiveness to sensory input making the control process intrinsically non-linear. This evidence calls for re-examination of the extent to which random sensorimotor noise is required to explain the non-linear remnant. This investigation of manual tracking shows how the full motor output (linear component and remnant) can be explained mechanistically by aperiodic sampling triggered by prediction error thresholds. Whereas broadband physiological noise is general to all processes, aperiodic sampling is associated with sensorimotor decision making within specific frontal, striatal and parietal networks; we conclude that manual tracking utilises such slow serial decision-making pathways up to several times per second. The human operator is described adequately by linear translation of sensory input to motor output. Motor output also always includes a non-linear remnant resulting from random sensorimotor noise from multiple sources, and non-linear input transformations, for example thresholds or refractory periods. Recent evidence showed that manual tracking incurs substantial serial refractoriness (insensitivity to sensory information of 350 and 550 ms for 1st and 2nd order systems, respectively). Our two questions are: (i) What are the comparative merits of explaining the non-linear remnant using noise or non-linear transformations? (ii) Can non-linear transformations represent serial motor decision making within the sensorimotor feedback loop intrinsic to tracking?
Twelve participants (instructed to act in three prescribed ways) manually controlled two systems (1st and 2nd order) subject to a periodic multi-sine disturbance. Joystick power was analysed using three models: continuous linear control (CC), continuous linear control with a calculated noise spectrum (CCN), and intermittent control with aperiodic sampling triggered by prediction error thresholds (IC). Unlike the linear mechanism, the intermittent control mechanism explained the majority of total power (linear and remnant) (77-87% vs. 8-48%, IC vs. CC). Between conditions, IC used thresholds and distributions of open-loop intervals consistent with, respectively, the instructions and previously measured, model-independent values; whereas CCN required changes in the noise spectrum deviating from broadband, signal-dependent noise. We conclude that manual tracking uses open-loop predictive control with aperiodic sampling. Because aperiodic sampling is inherent to serial decision making within previously identified, specific frontal, striatal and parietal networks, we suggest that these structures are intimately involved in visuo-manual tracking. © 2017 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
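The core idea of error-threshold-triggered aperiodic sampling can be sketched with a toy tracking loop: the command is held open-loop and refreshed only when the tracking error exceeds a trigger threshold. This is a deliberately minimal caricature of the IC model above (first-order plant, sine target, no prediction), with all parameters assumed for illustration.

```python
import numpy as np

def intermittent_track(ref, threshold, dt=0.01, tau=0.02):
    """Hold the command open-loop; refresh it only when the tracking error
    exceeds the trigger threshold (event-driven, aperiodic sampling)."""
    x, u = 0.0, 0.0              # plant state and currently held command
    events, err_sq = 0, 0.0
    for r in ref:
        err = r - x
        if abs(err) > threshold:     # trigger: take a new 'sample' of the target
            u = r
            events += 1
        x += (dt / tau) * (u - x)    # simple first-order plant response
        err_sq += err * err
    return events, np.sqrt(err_sq / len(ref))

t = np.arange(0.0, 10.0, 0.01)
ref = np.sin(2 * np.pi * 0.5 * t)            # 0.5 Hz sinusoidal target
ev_lo, rms_lo = intermittent_track(ref, threshold=0.02)
ev_hi, rms_hi = intermittent_track(ref, threshold=0.2)
```

Raising the threshold trades sampling events for tracking error, which is the mechanism by which the IC model reproduces both the linear component and the remnant.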
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsenin, V. V., E-mail: arsenin-vv@nrcki.ru; Skovoroda, A. A., E-mail: skovoroda-aa@nrcki.ru
2015-12-15
Using a cylindrical model, a relatively simple description is presented of how a magnetic field perturbation, stimulated by a low external helical current or a small helical distortion of the boundary and generating magnetic islands, penetrates into a plasma column with a magnetic surface q=m/n to which the tearing instability is attached. Linear analysis of the classical instability with an aperiodic growth of the perturbation in time shows that the perturbation amplitude in the plasma increases in a resonant manner as the discharge parameters approach the threshold of tearing instability. In the stationary case, under the assumption of a helical character of the equilibrium, which can be found from the two-dimensional nonlinear equation for the helical flux, there is no requirement that the islands be small. Examples of calculations in which magnetic islands are large near the threshold of tearing instability are presented. The bifurcation of equilibrium near the threshold of tearing instability in a plasma with a cylindrical boundary, i.e., the existence of helical equilibrium (along with cylindrical equilibrium) with large islands, is described. Moreover, helical equilibrium can also exist in the absence of instability.
Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.
2014-01-01
Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. 
Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
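The paper's second finding, that a "most significant" threshold always emerges and lands near the sample mean even when risk increases smoothly, is easy to reproduce in simulation. The sketch below uses a two-proportion z statistic scanned over quantile cutoffs on synthetic pVO2-like data with purely linear risk; all distributions and slopes are assumptions for illustration.

```python
import numpy as np

def most_significant_threshold(x, event):
    """Scan candidate cutoffs; return the one giving the largest
    two-proportion z statistic for events below vs above the cutoff."""
    best_z, best_c = 0.0, None
    for c in np.quantile(x, np.linspace(0.1, 0.9, 41)):
        lo, hi = event[x < c], event[x >= c]
        p_pool = event.mean()
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / len(lo) + 1 / len(hi)))
        z = abs(lo.mean() - hi.mean()) / se
        if z > best_z:
            best_z, best_c = z, c
    return best_c, best_z

rng = np.random.default_rng(3)
pvo2 = rng.normal(16.0, 3.0, 4000)                 # simulated peak VO2, ml/kg/min
risk = np.clip(0.9 - 0.04 * pvo2, 0.02, 0.98)      # linear risk, no step change
died = (rng.random(4000) < risk).astype(float)
cut, z = most_significant_threshold(pvo2, died)    # lands near the sample mean
```

The scan is guaranteed to return some "optimal" cutoff with a large z, illustrating why an apparently significant threshold is not evidence of a step change in risk.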
Cascaded systems analysis of photon counting detectors
Xu, J.; Zbijewski, W.; Gang, G.; Stayman, J. W.; Taguchi, K.; Lundqvist, M.; Fredenberg, E.; Carrino, J. A.; Siewerdsen, J. H.
2014-01-01
Purpose: Photon counting detectors (PCDs) are an emerging technology with applications in spectral and low-dose radiographic and tomographic imaging. This paper develops an analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). Methods: A cascaded systems analysis model describing the propagation of quanta through the imaging chain was developed. The model was validated in comparison to the physical performance of a silicon-strip PCD implemented on an experimental imaging bench. The signal response, MTF, and NPS were measured and compared to theory as a function of exposure conditions (70 kVp, 1–7 mA), detector threshold, and readout mode (i.e., the option for coincidence detection). The model sheds new light on the dependence of spatial resolution, charge sharing, and additive noise effects on threshold selection and was used to investigate the factors governing PCD performance, including the fundamental advantages and limitations of PCDs in comparison to energy-integrating detectors (EIDs) in the linear regime for which pulse pileup can be ignored. Results: The detector exhibited highly linear mean signal response across the system operating range and agreed well with theoretical prediction, as did the system MTF and NPS. The DQE analyzed as a function of kilovolt (peak), exposure, detector threshold, and readout mode revealed important considerations for system optimization. 
The model also demonstrated the important implications of false counts from both additive electronic noise and charge sharing and highlighted the system design and operational parameters that most affect detector performance in the presence of such factors: for example, increasing the detector threshold from 0 to 100 (arbitrary units of pulse height threshold roughly equivalent to 0.5 and 6 keV energy threshold, respectively), increased the f50 (spatial-frequency at which the MTF falls to a value of 0.50) by ∼30% with corresponding improvement in DQE. The range in exposure and additive noise for which PCDs yield intrinsically higher DQE was quantified, showing performance advantages under conditions of very low-dose, high additive noise, and high fidelity rejection of coincident photons. Conclusions: The model for PCD signal and noise performance agreed with measurements of detector signal, MTF, and NPS and provided a useful basis for understanding complex dependencies in PCD imaging performance and the potential advantages (and disadvantages) in comparison to EIDs as well as an important guide to task-based optimization in developing new PCD imaging systems. PMID:25281959
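The f50 metric quoted above (the spatial frequency where the MTF falls to 0.5) is a simple interpolation on a measured MTF curve. The sketch below uses a sinc-shaped stand-in for a measured MTF; the curve is an assumption for illustration, not the silicon-strip detector's data.

```python
import numpy as np

def f50(freqs, mtf):
    """Spatial frequency where the MTF first falls to 0.5 (linear interpolation)."""
    i = int(np.argmax(mtf < 0.5))            # index of first sample below 0.5
    return freqs[i - 1] + (0.5 - mtf[i - 1]) * (freqs[i] - freqs[i - 1]) \
        / (mtf[i] - mtf[i - 1])

freqs = np.linspace(0.0, 5.0, 101)           # cycles/mm
mtf = np.sinc(freqs / 3.0)                   # illustrative aperture-like MTF
f_half = f50(freqs, mtf)
```

A higher detector threshold suppresses shared charge at pixel borders, narrowing the effective aperture and pushing f50 upward, which is the ∼30% improvement described above.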
Direction detection thresholds of passive self-motion in artistic gymnasts.
Hartmann, Matthias; Haller, Katia; Moser, Ivan; Hossner, Ernst-Joachim; Mast, Fred W
2014-04-01
In this study, we compared direction detection thresholds of passive self-motion in the dark between artistic gymnasts and controls. Twenty-four professional female artistic gymnasts (ranging from 7 to 20 years) and age-matched controls were seated on a motion platform and asked to discriminate the direction of angular (yaw, pitch, roll) and linear (leftward-rightward) motion. Gymnasts showed lower thresholds for the linear leftward-rightward motion. Interestingly, there was no difference for the angular motions. These results show that the outstanding self-motion abilities in artistic gymnasts are not related to an overall higher sensitivity in self-motion perception. With respect to vestibular processing, our results suggest that gymnastic expertise is exclusively linked to superior interpretation of otolith signals when no change in canal signals is present. In addition, thresholds were overall lower for the older (14-20 years) than for the younger (7-13 years) participants, indicating the maturation of vestibular sensitivity from childhood to adolescence.
Compendium of Operations Research and Economic Analysis Studies
1992-10-01
were to: (1) review and document current policies and procedures, (2) identify relevant economic and non-economic decision variables, (3) design a...minimize the total sample size while ensuring that the proportion of samples closely resembled the actual population proportions. Both linear and non ...would cost about $290.00. DLA-92-P1010. Impact of Increasing the Non-Competitive Threshold from Index No. 92-26 $2,500 to $5,000 (October 1991) In
Bradley, Ann E; Shoenfelt, Joanna L; Durda, Judi L
2016-04-01
Alpha-hexachlorocyclohexane (alpha-HCH) is one of eight structural isomers that have been used worldwide as insecticides. Although no longer produced or used agriculturally in the United States, exposure to HCH isomers is of continuing concern due to legacy usage and persistence in the environment. The U.S. Environmental Protection Agency (EPA) classifies alpha-HCH as a probable human carcinogen and provides a slope factor of 6.3 (mg/kg-day)⁻¹ for the compound, based on hepatic nodules and hepatocellular carcinomas observed in male mice and derived using a default linear approach for modeling carcinogens. EPA's evaluation, last updated in 1993, does not consider more recently available guidance that allows for the incorporation of mode of action (MOA) for determining a compound's dose-response. Contrary to the linear approach assumed by EPA, the available data indicate that alpha-HCH exhibits carcinogenicity via an MOA that yields a nonlinear, threshold dose-response. In our analysis, we conducted an MOA evaluation and dose-response analysis for alpha-HCH-induced liver carcinogenesis. We concluded that alpha-HCH causes liver tumors in rats and mice through an MOA involving increased promotion of cell growth, or mitogenesis. Based on these findings, we developed a threshold, cancer-based, reference dose (RfD) for alpha-HCH. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Multiwavelength ultralow-threshold lasing in quantum dot photonic crystal microcavities.
Chakravarty, S; Bhattacharya, P; Chakrabarti, S; Mi, Z
2007-05-15
We demonstrate multiwavelength lasing of resonant modes in linear (L3) microcavities in a triangular-lattice 2D photonic crystal (PC) slab. The broad spontaneous emission spectrum from coupled quantum dots, modified by the PC microcavity, is studied as a function of the intensity of incident optical excitation. We observe lasing with an ultralow threshold power of approximately 600 nW and an output efficiency of approximately 3% at threshold. Two other resonant modes exhibit weaker turn-on characteristics and thresholds of approximately 2.5 and 200 µW, respectively.
The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation
ERIC Educational Resources Information Center
Brown, Scott D.; Heathcote, Andrew
2008-01-01
We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…
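The LBA's simplicity means a single trial can be simulated in a few lines: each accumulator starts at a uniformly drawn point, rises linearly at a normally drawn drift rate, and the first to reach the common threshold wins. The sketch below is a minimal illustration with assumed parameter values (threshold b, start-point range A, drift variability s, non-decision time t0), not the authors' fitted parameters.

```python
import numpy as np

def lba_trial(rng, drifts, b=1.0, A=0.5, s=0.25, t0=0.2):
    """One LBA trial: accumulators rise linearly from random start points to
    threshold b; the first to arrive determines choice and response time."""
    k = rng.uniform(0.0, A, len(drifts))      # random start points
    d = rng.normal(drifts, s)                 # between-trial drift variability
    t = np.where(d > 0, (b - k) / d, np.nan)  # negative drifts never reach b
    return int(np.nanargmin(t)), t0 + np.nanmin(t)

rng = np.random.default_rng(4)
trials = [lba_trial(rng, [1.0, 0.6]) for _ in range(2000)]
acc = np.mean([c == 0 for c, _ in trials])        # proportion choosing option 0
mean_rt = np.mean([rt for _, rt in trials])       # mean response time, seconds
```

Because the within-trial rise is deterministic, response-time distributions and choice probabilities follow from the start-point and drift distributions alone, which is what makes the model analytically tractable.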
High voltage threshold for stable operation in a dc electron gun
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Masahiro, E-mail: masahiro@post.kek.jp; Nishimori, Nobuyuki, E-mail: n-nishim@tagen.tohoku.ac.jp
We report clear observation of a high voltage (HV) threshold for stable operation in a dc electron gun. The HV hold-off time without any discharge is longer than many hours for operation below the threshold, while it is roughly 10 min above the threshold. The HV threshold corresponds to the minimum voltage where discharge ceases. The threshold increases with the number of discharges during HV conditioning of the gun. Above the threshold, the amount of gas desorption per discharge increases linearly with the voltage difference from the threshold. The present experimental observations can be explained by an avalanche discharge model based on the interplay between electron stimulated desorption (ESD) from the anode surface and subsequent secondary electron emission from the cathode by the impact of ionic components of the ESD molecules or atoms.
Nonmonotonic Dose-Response Curves and Endocrine-Disrupting Chemicals: Fact or Falderal?
The shape of the dose-response curve in the low-dose region has been debated since the 1940s, originally focusing on linear no-threshold (LNT) versus threshold responses for cancer and noncanc...
Wood, Scott T; Dean, Brian C; Dean, Delphine
2013-04-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
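The segmentation step described above ("standard thresholding methods") is commonly implemented with Otsu's method, which picks the intensity cutoff maximising between-class variance. The sketch below applies it to a synthetic 3D stack with a bright rectangular "cell" region; the stack dimensions and intensities are assumptions for illustration, not the confocal data.

```python
import numpy as np

def otsu_threshold(stack):
    """Otsu's method: choose the histogram cutoff maximising the
    between-class variance of the two resulting intensity classes."""
    hist, edges = np.histogram(stack.ravel(), bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, 256):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2
        if var_b > best_var:
            best_var, best_t = var_b, centers[i]
    return best_t

# synthetic confocal-like stack: dim background plus a bright 'cell' region
rng = np.random.default_rng(5)
stack = rng.normal(20.0, 5.0, (16, 64, 64))
stack[4:12, 16:48, 16:48] += 80.0            # fluorescently 'stained' volume
t = otsu_threshold(stack)
mask = stack > t                             # binary cell/nucleus boundary mask
```

The resulting binary mask is the kind of geometry that would then be meshed for finite element import.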
Linear dynamic range enhancement in a CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor)
2008-01-01
A CMOS imager with increased linear dynamic range but without degradation in noise, responsivity, linearity, fixed-pattern noise, or photometric calibration comprises a linear calibrated dual gain pixel in which the gain is reduced after a pre-defined threshold level by switching in an additional capacitance. The pixel may include a novel on-pixel latch circuit that is used to switch in the additional capacitance.
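The dual-gain transfer function described above is piecewise linear: full gain up to the knee where the extra capacitance switches in, reduced gain beyond it, continuous at the switch point. The sketch below is an idealized signal-domain model with assumed gain values and knee position, not the patented circuit.

```python
import numpy as np

def dual_gain_response(signal, knee=0.6, g_hi=1.0, g_lo=0.25):
    """Piecewise-linear dual-gain pixel: full gain below the knee, reduced
    gain above it, continuous at the switch point."""
    return np.where(signal <= knee,
                    g_hi * signal,
                    g_hi * knee + g_lo * (signal - knee))

out = dual_gain_response(np.array([0.3, 0.6, 2.0]))   # range extended ~4x above knee
```

Because both segments are linear and the knee is calibrated, the original signal can be recovered exactly by inverting the two-segment map, preserving photometric calibration.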
Hunsicker, Mary E; Kappel, Carrie V; Selkoe, Kimberly A; Halpern, Benjamin S; Scarborough, Courtney; Mease, Lindley; Amrhein, Alisan
2016-04-01
Scientists and resource managers often use methods and tools that assume ecosystem components respond linearly to environmental drivers and human stressors. However, a growing body of literature demonstrates that many relationships are non-linear, where small changes in a driver prompt a disproportionately large ecological response. We aim to provide a comprehensive assessment of the relationships between drivers and ecosystem components to identify where and when non-linearities are likely to occur. We focused our analyses on one of the best-studied marine systems, pelagic ecosystems, which allowed us to apply robust statistical techniques to a large pool of previously published studies. In this synthesis, we (1) conduct a wide literature review on single driver-response relationships in pelagic systems, (2) use statistical models to identify the degree of non-linearity in these relationships, and (3) assess whether general patterns exist in the strengths and shapes of non-linear relationships across drivers. Overall, we found that non-linearities are common in pelagic ecosystems, comprising at least 52% of all driver-response relationships. This is likely an underestimate, as papers with higher quality data and analytical approaches reported non-linear relationships at a higher frequency (on average 11% more). Consequently, in the absence of evidence for a linear relationship, it is safer to assume a relationship is non-linear. Strong non-linearities can lead to greater ecological and socioeconomic consequences if they are unknown (and/or unanticipated), but if known they may provide clear thresholds to inform management targets. In pelagic systems, strongly non-linear relationships are often driven by climate and trophodynamic variables but are also associated with local stressors, such as overfishing and pollution, that can be more easily controlled by managers.
Even when marine resource managers cannot influence ecosystem change, they can use information about threshold responses to guide how other stressors are managed and to adapt to new ocean conditions. As methods to detect and reduce uncertainty around threshold values improve, managers will be able to better understand and account for ubiquitous non-linear relationships.
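Deciding whether a driver-response relationship is linear or threshold-like is often done by comparing model fits with an information criterion. The sketch below contrasts a linear fit with a hinge (threshold) fit by AIC on synthetic data built with a true breakpoint; the data, the known knot at 6, and the hand-rolled Gaussian AIC are all illustrative assumptions (the synthesis above used more flexible GAM-type comparisons).

```python
import numpy as np

def fit_aic(X, y):
    """AIC of an ordinary least-squares fit (Gaussian log-likelihood),
    counting the residual variance as one extra parameter."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2.0 * (X.shape[1] + 1) - 2.0 * loglik

rng = np.random.default_rng(6)
driver = rng.uniform(0, 10, 300)             # hypothetical stressor intensity
resp = np.where(driver < 6, 5.0, 5.0 - 2.0 * (driver - 6)) \
    + rng.normal(0, 0.5, 300)                # flat, then steep decline past 6

X_lin = np.column_stack([np.ones_like(driver), driver])
X_hinge = np.column_stack([np.ones_like(driver), np.maximum(driver - 6.0, 0.0)])
aic_linear, aic_threshold = fit_aic(X_lin, resp), fit_aic(X_hinge, resp)
```

In practice the knot location is unknown and is itself profiled or estimated, but the model-comparison logic is the same.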
An adaptive design for updating the threshold value of a continuous biomarker
Spencer, Amy V.; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2017-01-01
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker ‘positive’ and ‘negative’ is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that ‘no population subset exists in which the novel treatment has a desirable response rate’ to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publically available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. PMID:27417407
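The generalised-linear-modelling step of such a design can be sketched as fitting a logistic model of response against the biomarker, then solving for the biomarker value at which the modelled response rate crosses the target rate. This is a bare-bones illustration on simulated data with assumed coefficients and a gradient-ascent fitter, not the authors' Bayesian adaptive procedure.

```python
import numpy as np

def logistic_fit(x, y, n_iter=2000, lr=1.0):
    """Logistic regression of response on biomarker via gradient ascent."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        a += lr * np.mean(y - p)
        b += lr * np.mean((y - p) * x)
    return a, b

def threshold_for_rate(a, b, target):
    """Biomarker cutoff where the modelled response rate equals `target`."""
    return (np.log(target / (1.0 - target)) - a) / b

rng = np.random.default_rng(7)
biomarker = rng.normal(0.0, 1.0, 1000)       # standardised biomarker values
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * biomarker)))
response = (rng.random(1000) < p_true).astype(float)
a, b = logistic_fit(biomarker, response)
thr = threshold_for_rate(a, b, 0.5)   # cutoff for a 50% modelled response rate
```

In the adaptive design, this estimate would be updated as each cohort's responses accrue, shifting enrolment toward the biomarker-positive subset.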
Ourso, R.T.; Frenzel, S.A.
2003-01-01
We examined biotic and physiochemical responses in urbanized Anchorage, Alaska, to the percent of impervious area within stream basins, as determined by high-resolution IKONOS satellite imagery and aerial photography. Eighteen of the 86 variables examined, including riparian and instream habitat, macroinvertebrate communities, and water/sediment chemistry, were significantly correlated with percent impervious area. Variables related to channel condition, instream substrate, water chemistry, and residential and transportation right-of-way land uses were identified by principal components analysis as significant factors separating site groups. Detrended canonical correspondence analysis indicated that the macroinvertebrate communities responded to an urbanization gradient closely paralleling the percent of impervious area within the subbasin. A sliding regression analysis of variables significantly correlated with percent impervious area revealed 8 variables exhibiting threshold responses that correspond to a mean of 4.4-5.8% impervious area, much lower than mean values reported in other, similar investigations. As contributing factors to a subbasin's impervious area, storm drains and roads appeared to be important elements influencing the degradation of water quality with respect to the biota.
Influence of prolonged static stretching on motor unit firing properties.
Ye, Xin; Beck, Travis W; Wages, Nathan P
2016-05-01
The purpose of this study was to examine the influence of a stretching intervention on motor control strategy of the biceps brachii muscle. Ten men performed twelve 100-s passive static stretches of the biceps brachii. Before and after the intervention, isometric strength was tested during maximal voluntary contractions (MVCs) of the elbow flexors. Subjects also performed trapezoid isometric contractions at 30% and 70% of MVC. Surface electromyographic signals from the submaximal contractions were decomposed into individual motor unit action potential trains. Linear regression analysis was used to examine the relationship between motor unit mean firing rate and recruitment threshold. The stretching intervention caused significant decreases in y-intercepts of the linear regression lines. In addition, linear slopes at both intensities remained unchanged. Despite reduced motor unit firing rates following the stretches, the motor control scheme remained unchanged. © 2016 Wiley Periodicals, Inc.
Cabral, Ana Caroline; Stark, Jonathan S; Kolm, Hedda E; Martins, César C
2018-04-01
Sewage input and the relationship between chemical markers (linear alkylbenzenes and coprostanol) and fecal indicator bacteria (FIB, Escherichia coli and enterococci) were evaluated in order to establish threshold values for chemical markers in suspended particulate matter (SPM) as indicators of sewage contamination in two subtropical estuaries in South Atlantic Brazil. Neither chemical marker showed a linear relationship with FIB, owing to high spatial microbiological variability; however, microbiological water quality was related to coprostanol values when analyzed by logistic regression, indicating that linear models may not be the best representation of the relationship between the two classes of indicators. Logistic regression was performed with all data and separately for two sampling seasons, using 800 and 100 MPN 100 mL-1 of E. coli and enterococci, respectively, as the microbiological limits of sewage contamination. Threshold values of coprostanol varied depending on the FIB and season, ranging between 1.00 and 2.23 μg g-1 SPM. The range of threshold values of coprostanol for SPM is higher and more variable than those suggested in the literature for sediments (0.10-0.50 μg g-1), probably due to higher concentrations of coprostanol in SPM than in sediment. Temperature may affect the relationship between microbiological indicators and coprostanol, since the threshold value of coprostanol found here was similar to that in tropical areas, but lower than those found during winter in temperate areas, reinforcing the idea that threshold values should be calibrated for different climatic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
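The logistic-regression route to a marker threshold can be sketched as follows: once coefficients are fitted, the coprostanol value corresponding to a chosen exceedance probability follows by inverting the logit. The coefficients below are hypothetical, for illustration only, not the study's fitted values.

```python
import math

def logistic_threshold(b0, b1, p=0.5):
    """Invert logit(p) = b0 + b1 * x to find the marker value x at which
    the modelled probability of exceeding the microbiological limit is p."""
    return (math.log(p / (1.0 - p)) - b0) / b1

# Hypothetical coefficients: log-odds of contamination = -2.0 + 1.5 * coprostanol
x50 = logistic_threshold(-2.0, 1.5)        # threshold at p = 0.5
x90 = logistic_threshold(-2.0, 1.5, 0.9)   # a stricter cutoff sits higher
```

A stricter probability cutoff always maps to a higher marker threshold when the slope is positive, which is one reason reported thresholds vary with the chosen microbiological limit.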
Dose Response Data for Hormonally Active Chemicals: Estrogens, Antiandrogens and Androgens
The shape of the dose response curve in the low dose region has been debated since the late 1940s. The debate originally focused on linear no threshold (LNT) vs threshold responses in the low dose range for cancer and noncancer related effects. For noncancer effects the defaul...
Linear No-Threshold Model VS. Radiation Hormesis
Doss, Mohan
2013-01-01
The atomic bomb survivor cancer mortality data have been used in the past to justify the use of the linear no-threshold (LNT) model for estimating the carcinogenic effects of low dose radiation. An analysis of the recently updated atomic bomb survivor cancer mortality dose-response data shows that the data no longer support the LNT model but are consistent with a radiation hormesis model when a correction is applied for a likely bias in the baseline cancer mortality rate. If the validity of the phenomenon of radiation hormesis is confirmed in prospective human pilot studies, and is applied to the wider population, it could result in a considerable reduction in cancers. The idea of using radiation hormesis to prevent cancers was proposed more than three decades ago, but was never investigated in humans to determine its validity because of the dominance of the LNT model and the consequent carcinogenic concerns regarding low dose radiation. Since cancer continues to be a major health problem and the age-adjusted cancer mortality rates have declined by only ∼10% in the past 45 years, it may be prudent to investigate radiation hormesis as an alternative approach to reduce cancers. Prompt action is urged. PMID:24298226
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space, which significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct MR images from under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of a p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
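The shrinkage step at the core of ISTA, and a generalised p-shrinkage of the kind such algorithms build on, can be sketched in a few lines. The p-shrinkage formula used here is one common form from the literature and is an assumption, not necessarily the paper's exact operator; note that p = 1 recovers plain soft thresholding.

```python
def soft_threshold(x, t):
    """Classical ISTA shrinkage: sign(x) * max(|x| - t, 0)."""
    if abs(x) <= t:
        return 0.0
    return x - t if x > 0 else x + t

def p_threshold(x, t, p):
    """Generalised p-shrinkage (an assumed form, common in the literature):
    sign(x) * max(|x| - t**(2 - p) * |x|**(p - 1), 0)."""
    if x == 0.0:
        return 0.0
    mag = max(abs(x) - t ** (2.0 - p) * abs(x) ** (p - 1.0), 0.0)
    return mag if x > 0 else -mag

# One ISTA-style iteration for the trivial identity operator:
# a gradient step toward the data y, followed by shrinkage.
y, x, t = 3.0, 0.0, 1.0
x = soft_threshold(x + (y - x), t)
```

With p < 1 the operator shrinks large coefficients less than soft thresholding does, which is why p-shrinkage is associated with non-convex sparsity penalties.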
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling based influence maximization method on the linear threshold (LT) model is presented. The method samples routes in the possible worlds of the social network and uses the Chernoff bound to estimate the number of samples needed so that the error is constrained within a given bound. The activation probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Our experimental results show that the method effectively selects seed node sets that spread larger influence than those chosen by similar methods.
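The deterministic diffusion step of the linear threshold model that such sampling methods estimate can be sketched as follows; the toy graph, edge weights, and node thresholds are illustrative.

```python
def lt_spread(weights, thresholds, seeds):
    """Deterministic linear-threshold diffusion: a node activates once the
    summed weights of its active in-neighbours reach its threshold.
    weights[(u, v)] is the influence of u on v."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in thresholds:
            if v in active:
                continue
            pressure = sum(wt for (u, tgt), wt in weights.items()
                           if tgt == v and u in active)
            if pressure >= thresholds[v]:
                active.add(v)
                changed = True
    return active

# Toy 4-node network (illustrative values): seeding "a" cascades to all nodes,
# because b's activation pushes c over its threshold, which then activates d.
w = {("a", "b"): 0.6, ("a", "c"): 0.3, ("b", "c"): 0.3, ("c", "d"): 0.7}
th = {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.5}
spread = lt_spread(w, th, {"a"})
```

In the randomized LT model each node's threshold is drawn uniformly from [0, 1]; influence spread is then the expectation of `len(lt_spread(...))` over those draws, which is what route sampling approximates.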
Reassessment of data used in setting exposure limits for hot particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baum, J.W.; Kaurin, D.G.
1991-05-01
A critical review and reassessment were made of data reviewed in NCRP Report 106 on the effects of "hot particles" on the skin of pigs, monkeys, and humans. Our analysis of the data of Forbes and Mikhail on effects from activated UC2 particles, ranging in diameter from 144 μm to 328 μm, led to the formulation of a new model for prediction of both the threshold for acute ulceration and the ulcer diameter. In this model, a dose of 27 Gy at a depth of 1.33 mm in tissue results in an acute ulcer with a diameter determined by the radius over which this dose (at 1.33-mm depth) extends. Application of the model to the Forbes-Mikhail data yielded a "threshold" (5% probability) of 6 × 10^9 beta particles from a point source on skin of mixed fission-product beta particles, or about 10^10 beta particles from Sr-Y-90, since few of the Sr-90 beta particles reach this depth. The data of Hopewell et al. for their 1 mm Sr-Y-90 exposures were also analyzed with the above model and yielded a predicted threshold of 2 × 10^10 Sr-Y-90 beta particles for a point source on skin. The dosimetry values employed in this latter analysis are 3.3 times higher than previously reported for this source. An alternate interpretation of the Forbes and Mikhail data, derived from linear plots of the data, is that the threshold depends strongly on particle size, with smaller particles yielding a much lower threshold and a smaller minimum-size ulcer. Additional animal exposures are planned to distinguish between these explanations. 17 refs., 3 figs., 3 tabs.
Chameleon's behavior of modulable nonlinear electrical transmission line
NASA Astrophysics Data System (ADS)
Togueu Motcheyo, A. B.; Tchinang Tchameu, J. D.; Fewo, S. I.; Tchawoua, C.; Kofane, T. C.
2017-12-01
We show that a modulable discrete nonlinear transmission line can adopt a chameleon's behavior in the sense that, without changing its apparent structure, it can become alternately a purely right-handed or a purely left-handed line, which is different from the composite case. Using a quasidiscrete approximation, we derive a nonlinear Schrödinger equation that accurately predicts the carrier frequency threshold from the linear analysis. It appears that increasing the linear capacitance in parallel in the series branch induces the selectivity of the filter in the right-handed region, while it widens the band-pass filter in the left-handed region. Numerical simulations of the nonlinear model confirm the forward wave in the right-handed line and the backward wave in the left-handed one.
Tori and chaos in a simple C1-system
NASA Astrophysics Data System (ADS)
Roessler, O. E.; Kahlert, C.; Uehleke, B.
A piecewise-linear autonomous 3-variable ordinary differential equation is presented which permits analytical modeling of chaotic attractors. A once-differentiable system of equations is defined which consists of two linear half-systems that meet along a threshold plane. The trajectory described by each half-system is thereby continuous across the divide, forming a one-parameter family of invariant tori. The addition of a damping term produces a system of equations for various chaotic attractors. Extension of the system by means of a 4-variable generalization yields hypertori and hyperchaos. It is noted that the hierarchy established is amenable to analysis by the use of Poincare half-maps. Applications of the systems of ordinary differential equations to modeling turbulent flows are discussed.
Slotnick, Scott D; Jeye, Brittany M; Dodson, Chad S
2016-01-01
Is recollection a continuous/graded process or a threshold/all-or-none process? Receiver operating characteristic (ROC) analysis can answer this question as the continuous model and the threshold model predict curved and linear recollection ROCs, respectively. As memory for plurality, an item's previous singular or plural form, is assumed to rely on recollection, the nature of recollection can be investigated by evaluating plurality memory ROCs. The present study consisted of four experiments. During encoding, words (singular or plural) or objects (single/singular or duplicate/plural) were presented. During retrieval, old items with the same plurality or different plurality were presented. For each item, participants made a confidence rating ranging from "very sure old", which was correct for same plurality items, to "very sure new", which was correct for different plurality items. Each plurality memory ROC was the proportion of same versus different plurality items classified as "old" (i.e., hits versus false alarms). Chi-squared analysis revealed that all of the plurality memory ROCs were adequately fit by the continuous unequal variance model, whereas none of the ROCs were adequately fit by the two-high threshold model. These plurality memory ROC results indicate recollection is a continuous process, which complements previous source memory and associative memory ROC findings.
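The two competing predictions can be made concrete with a small numerical sketch: the continuous unequal-variance signal-detection model yields a curved ROC, while the two-high-threshold model predicts hit rates that are linear in the false-alarm rate. The parameter values (mu, sigma, ro) below are illustrative, not estimates from the study.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def uvsd_roc_point(c, mu=1.0, sigma=1.25):
    """Continuous unequal-variance model: (false alarms, hits) at criterion c,
    with lure strengths ~ N(0, 1) and target strengths ~ N(mu, sigma**2)."""
    return 1.0 - phi(c), 1.0 - phi((c - mu) / sigma)

def tht_roc_point(fa, ro=0.5):
    """Two-high-threshold model: predicted hits are linear in false alarms,
    hits = ro + (1 - ro) * fa, where ro is the recollection probability."""
    return ro + (1.0 - ro) * fa

# Sweeping the confidence criterion traces out the curved unequal-variance ROC.
points = [uvsd_roc_point(c / 4.0) for c in range(-8, 9)]
```

Fitting both forms to observed (false alarm, hit) pairs and comparing fit (as with the chi-squared analysis above) is what distinguishes the continuous from the threshold account.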
NASA Astrophysics Data System (ADS)
Lau, K. Y.; Ng, E. K.; Abu Bakar, M. H.; Abas, A. F.; Alresheedi, M. T.; Yusoff, Z.; Mahdi, M. A.
2018-06-01
In this work, we demonstrate a linear cavity mode-locked erbium-doped fiber laser in C-band wavelength region. The passive mode-locking is achieved using a microfiber-based carbon nanotube saturable absorber. The carbon nanotube saturable absorber has low saturation fluence of 0.98 μJ/cm2. Together with the linear cavity architecture, the fiber laser starts to produce soliton pulses at low pump power of 22.6 mW. The proposed fiber laser generates fundamental soliton pulses with a center wavelength, pulse width, and repetition rate of 1557.1 nm, 820 fs, and 5.41 MHz, respectively. This mode-locked laser scheme presents a viable option in the development of low threshold ultrashort pulse system for deployment as a seed laser.
1990-05-01
Task listing (partially garbled in extraction): check alarm lamps; check TWT power supply voltage and current; adjust power alarm threshold and transmitter output; check helix monitor; interpret AN/FRC... power supply; check traveling wave tube (TWT) power supply helix current and beam current; check TWT RF power output; check transmitter power; adjust transmitter linearity; calibrate transmit deviation and adjust modulation amplifier; adjust TWT performance monitor; adjust TWT output.
Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons
Krishnan, Jeyashree; Porta Mana, PierGianLuca; Helias, Moritz; Diesmann, Markus; Di Napoli, Edoardo
2018-01-01
Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work offers an alternative geometric point of view on neuronal dynamics, and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (less than a fraction 1/10^8 of missed spikes) in biologically relevant settings. PMID:29379430
A study on the temperature dependence of the threshold switching characteristics of Ge2Sb2Te5
NASA Astrophysics Data System (ADS)
Lee, Suyoun; Jeong, Doo Seok; Jeong, Jeung-hyun; Zhe, Wu; Park, Young-Wook; Ahn, Hyung-Woo; Cheong, Byung-ki
2010-01-01
We investigated the temperature dependence of the threshold switching characteristics of a memory-type chalcogenide material, Ge2Sb2Te5. We found that the threshold voltage (Vth) decreased linearly with temperature, implying the existence of a critical conductivity of Ge2Sb2Te5 for its threshold switching. In addition, we investigated the effect of bias voltage and temperature on the delay time (tdel) of the threshold switching of Ge2Sb2Te5 and described the measured relationship by an analytic expression which we derived based on a physical model where thermally activated hopping is a dominant transport mechanism in the material.
Electron elevator: Excitations across the band gap via a dynamical gap state
Lim, Anthony; Foulkes, W. M. C.; Horsfield, A. P.; ...
2016-01-27
We use time-dependent density functional theory to study self-irradiated Si. We calculate the electronic stopping power of Si in Si by evaluating the energy transferred to the electrons per unit path length by an ion of kinetic energy from 1 eV to 100 keV moving through the host. Electronic stopping is found to be significant below the threshold velocity normally identified with transitions across the band gap. A structured crossover at low velocity exists in place of a hard threshold. Lastly, an analysis of the time dependence of the transition rates using coupled linear rate equations enables one of the excitation mechanisms to be clearly identified: a defect state induced in the gap by the moving ion acts like an elevator and carries electrons across the band gap.
Electron Elevator: Excitations across the Band Gap via a Dynamical Gap State.
Lim, A; Foulkes, W M C; Horsfield, A P; Mason, D R; Schleife, A; Draeger, E W; Correa, A A
2016-01-29
We use time-dependent density functional theory to study self-irradiated Si. We calculate the electronic stopping power of Si in Si by evaluating the energy transferred to the electrons per unit path length by an ion of kinetic energy from 1 eV to 100 keV moving through the host. Electronic stopping is found to be significant below the threshold velocity normally identified with transitions across the band gap. A structured crossover at low velocity exists in place of a hard threshold. An analysis of the time dependence of the transition rates using coupled linear rate equations enables one of the excitation mechanisms to be clearly identified: a defect state induced in the gap by the moving ion acts like an elevator and carries electrons across the band gap.
NASA Astrophysics Data System (ADS)
Roesch, M.; Garimella, S.; Roesch, C.; Zawadowicz, M. A.; Katich, J. M.; Froyd, K. D.; Cziczo, D. J.
2016-12-01
In this study, a parallel-plate ice chamber, the SPectrometer for Ice Nuclei (SPIN, DMT Inc.), was combined with a pumped counterflow virtual impactor (PCVI, BMI Inc.) to separate ice crystals from interstitial aerosol particles by their aerodynamic size. These measurements were part of the FIN-3 workshop, which took place in fall 2015 at Storm Peak Laboratory (SPL), a high-altitude mountaintop facility (3220 m a.s.l.) in the Rocky Mountains. The investigated particles were sampled from ambient air and exposed to cirrus-like conditions inside SPIN (-40°C, 130% RHice). Previous SPIN experiments under these conditions showed that ice crystals were in the super-micron range. Connected to the outlet of the ice chamber, the PCVI was adjusted to pass all particulates aerodynamically larger than 3.5 μm into the sample flow, while smaller ones were rejected and removed by a pump flow. This technique reduces the number of interstitial aerosol particles, which could bias subsequent ice nucleating particle (INP) analysis. Downstream of the PCVI, the separated ice crystals were evaporated and the flow with the remaining INPs was split between a particle analysis by laser mass spectrometry (PALMS) instrument, a laser aerosol spectrometer (LAS, TSI Inc.), and a single particle soot photometer (SP2, DMT Inc.). Based on the sample flow and the resolution of the measured particle data, the lowest concentration threshold was 294 INP L-1 for the SP2 instrument and 60 INP L-1 for the LAS instrument. Applying these thresholds as filters to the measured PALMS time series, 944 valid INP spectra were identified using the SP2 threshold and 445 using the LAS threshold. A sensitivity study determining the number of valid INP spectra as a function of the filter threshold concentration showed a two-phase linear growth with increasing threshold concentration, with a breakpoint around 100 INP L-1.
The adequate stimulus for avian short latency vestibular responses to linear translation
NASA Technical Reports Server (NTRS)
Jones, T. A.; Jones, S. M.; Colbert, S.
1998-01-01
Transient linear acceleration stimuli have been shown to elicit eighth nerve vestibular compound action potentials in birds and mammals. The present study was undertaken to better define the nature of the adequate stimulus for neurons generating the response in the chicken (Gallus domesticus). In particular, the study evaluated the question of whether the neurons studied are most sensitive to the maximum level of linear acceleration achieved or to the rate of change in acceleration (da/dt, or jerk). To do this, vestibular response thresholds were measured as a function of stimulus onset slope. Traditional computer signal averaging was used to record responses to pulsed linear acceleration stimuli. Stimulus onset slope was systematically varied. Acceleration thresholds decreased with increasing stimulus onset slope (decreasing stimulus rise time). When stimuli were expressed in units of jerk (g/ms), thresholds were virtually constant for all stimulus rise times. Moreover, stimuli having identical jerk magnitudes but widely varying peak acceleration levels produced virtually identical responses. Vestibular response thresholds, latencies and amplitudes appear to be determined strictly by stimulus jerk magnitudes. Stimulus attributes such as peak acceleration or rise time alone do not provide sufficient information to predict response parameter quantities. Indeed, the major response parameters were shown to be virtually independent of peak acceleration levels or rise time when these stimulus features were isolated and considered separately. It is concluded that the neurons generating short latency vestibular evoked potentials do so as "jerk encoders" in the chicken. Primary afferents classified as "irregular", and which traditionally fall into the broad category of "dynamic" or "phasic" neurons, would seem to be the most likely candidates for the neural generators of short latency vestibular compound action potentials.
ABSTRACT BODY: The shape of the dose response curve in the low dose region has been debated since the 1940s, originally focusing on linear no threshold (LNT) versus threshold responses for cancer and noncancer effects. Recently, it has been claimed that endocrine disrupters (EDCs...
Del Prete, Valeria; Treves, Alessandro
2002-04-01
In a previous paper we have evaluated analytically the mutual information between the firing rates of N independent units and a set of multidimensional continuous and discrete stimuli, for a finite population size and in the limit of large noise. Here, we extend the analysis to the case of two interconnected populations, where input units activate output ones via Gaussian weights and a threshold linear transfer function. We evaluate the information carried by a population of M output units, again about continuous and discrete correlates. The mutual information is evaluated solving saddle-point equations under the assumption of replica symmetry, a method that, by taking into account only the term linear in N of the input information, is equivalent to assuming the noise to be large. Within this limitation, we analyze the dependence of the information on the ratio M/N, on the selectivity of the input units and on the level of the output noise. We show analytically, and confirm numerically, that in the limit of a linear transfer function and of a small ratio between output and input noise, the output information approaches asymptotically the information carried in input. Finally, we show that the information loss in output does not depend much on the structure of the stimulus, whether purely continuous, purely discrete or mixed, but only on the position of the threshold nonlinearity, and on the ratio between input and output noise.
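The threshold-linear transfer function assumed for the output units can be sketched as follows; the gain and threshold values are illustrative, not taken from the paper.

```python
def threshold_linear(h, theta=0.0, gain=1.0):
    """Threshold-linear transfer: output is zero below the threshold theta
    and grows linearly in the suprathreshold input above it."""
    return gain * (h - theta) if h > theta else 0.0

# Output rate of one unit for three Gaussian-weighted input sums
# (subthreshold, at threshold, and suprathreshold):
rates = [threshold_linear(h, theta=0.2, gain=2.0) for h in (-0.5, 0.2, 1.0)]
```

The threshold nonlinearity is the only departure from a purely linear channel here, which is why the paper can isolate its position as the main determinant of the output information loss.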
Scaling properties of ballistic nano-transistors
2011-01-01
Recently, we have suggested a scale-invariant model for a nano-transistor. In agreement with experiments, a close-to-linear threshold trace was found in the calculated ID-VD traces, separating the regimes of classically allowed transport and tunneling transport. In this conference contribution, the relevant physical quantities in our model and its range of applicability are discussed in more detail. Extending the temperature range of our studies, it is shown that a close-to-linear threshold trace results at room temperature as well. In qualitative agreement with the experiments, the ID-VG traces for small drain voltages show thermally activated transport below the threshold gate voltage. In contrast, at large drain voltages the gate-voltage dependence is weaker. As can be expected in our relatively simple model, the theoretical drain current is larger than the experimental one by a little less than a decade. PMID:21711899
Energy Switching Threshold for Climatic Benefits
NASA Astrophysics Data System (ADS)
Zhang, X.; Cao, L.; Caldeira, K.
2013-12-01
Climate change is one of the great challenges currently facing humanity. Its most severe impacts may still be avoided if efforts are made to transform current energy systems (1). A transition from the global system of high greenhouse gas (GHG) emission electricity generation to low-GHG-emission energy technologies is required to mitigate climate change (2). Natural gas is increasingly seen as a choice for transitions to renewable sources. However, recent research in energy and climate has raised questions about the climate implications of relying more heavily on natural gas. On the one hand, a shift to natural gas is promoted as climate mitigation because it has lower carbon content per unit energy than coal (3). On the other hand, the effect of switching to natural gas on the development of nuclear power and other renewable energies may offset the benefits of fuel switching (4). Cheap natural gas is causing both coal plants and nuclear plants to close in the US. The objective of this study is to measure and evaluate the threshold of energy switching for climatic benefits. We hypothesized that the threshold ratio of energy switching for climatic benefits is related to the GHG emission factors of energy technologies, but that the relation is not linear. A model was developed to study the fuel-switching threshold for greenhouse gas emission reduction, and the transition from coal and nuclear electricity generation to natural gas electricity generation was analyzed as a case study. The results showed that: (i) the threshold ratio of multi-energy switching for climatic benefits changes with the GHG emission factors of energy technologies; (ii) the mathematical relation between the threshold ratio of energy switching and the GHG emission factors of energies is a curved-surface function; and (iii) the analysis of the energy-switching threshold for climatic benefits can be used for energy and climate policy decision support.
Beyea, Jan
2017-04-01
There are both statistically valid and invalid reasons why scientists with differing default hypotheses can disagree in high-profile situations. Examples can be found in recent correspondence in this journal, which may offer lessons for resolving challenges to mainstream science, particularly when adherents of a minority view attempt to elevate the status of outlier studies and/or claim that self-interest explains the acceptance of the dominant theory. Edward J. Calabrese and I have been debating the historical origins of the linear no-threshold theory (LNT) of carcinogenesis and its use in the regulation of ionizing radiation. Professor Calabrese, a supporter of hormesis, has charged a committee of scientists with misconduct in their preparation of a 1956 report on the genetic effects of atomic radiation. Specifically he argues that the report mischaracterized the LNT research record and suppressed calculations of some committee members. After reviewing the available scientific literature, I found that the contemporaneous evidence overwhelmingly favored a (genetics) LNT and that no calculations were suppressed. Calabrese's claims about the scientific record do not hold up primarily because of lack of attention to statistical analysis. Ironically, outlier studies were more likely to favor supra-linearity, not sub-linearity. Finally, the claim of investigator bias, which underlies Calabrese's accusations about key studies, is based on misreading of text. Attention to ethics charges, early on, may help seed a counter narrative explaining the community's adoption of a default hypothesis and may help focus attention on valid evidence and any real weaknesses in the dominant paradigm. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Feldthusen, Caroline; Grimby-Ekman, Anna; Forsblad-d'Elia, Helena; Jacobsson, Lennart; Mannerkorpi, Kaisa
2016-04-28
To investigate the impact of disease-related aspects on long-term variations in fatigue in persons with rheumatoid arthritis. Observational longitudinal study. Sixty-five persons with rheumatoid arthritis, age range 20-65 years, were invited to a clinical examination at 4 time-points during the 4 seasons. Outcome measures were: general fatigue rated on a visual analogue scale (0-100) and aspects of fatigue assessed by the Bristol Rheumatoid Arthritis Fatigue Multidimensional Questionnaire. Disease-related variables were: disease activity (erythrocyte sedimentation rate), pain threshold (pressure algometer), physical capacity (six-minute walk test), pain (visual analogue scale (0-100)), depressive mood (Hospital Anxiety and Depression scale, depression subscale), personal factors (age, sex, body mass index) and season. Multivariable regression analyses with linear mixed-effects models were applied. The strongest explanatory factors for all fatigue outcomes, when recorded at the same time-point as fatigue, were pain threshold and depressive mood. Self-reported pain was an explanatory factor for physical aspects of fatigue, and body mass index contributed to explaining the consequences of fatigue on everyday living. For predicting later fatigue, pain threshold and depressive mood were the strongest predictors. Pain threshold and depressive mood were thus the most important factors for fatigue in persons with rheumatoid arthritis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sysoeva, E. V., E-mail: tinlit@yandex.ru; Gusakov, E. Z.; Simonchik, L. V.
2016-07-15
The possibility of the low-threshold decay of an ordinary wave into an upper hybrid wave localized in a plasma column (or in an axisymmetric plasma filament) and a low-frequency wave is analyzed. It is shown that the threshold for such a decay, accompanied by the excitation of an ion-acoustic wave, can easily be overcome for plasma parameters typical of model experiments on the Granit linear plasma facility.
Li, Z J; Zhang, X J; Hou, X X; Xu, S; Zhang, J S; Song, H B; Lin, H L
2015-12-01
Previous studies examining the weather-bacillary dysentery association were of a large time scale (monthly or weekly) and examined the linear relationship without checking the linearity assumption. We examined this association in Beijing at a daily scale based on the exposure-response curves using generalized additive models. Our analyses suggested that there were thresholds for effects of temperature and relative humidity, with an approximately linear effect for temperature >12·5 °C [excess risk (ER) for 1 °C increase: 1·06%, 95% confidence interval (CI) 0·63-1·49 on lag day 3] and for relative humidity >40% (ER for 1% increase: 0·18%, 95% CI 0·12-0·24 at lag day 4); and there were linear effects of rainfall (ER for 1-mm increase: 0·22%, 95% CI 0·12-0·32), negative effects for wind speed (ER: -2·91%, 95% CI -4·28 to -1·52 at lag day 3) and sunshine duration (ER: -0·25% 95% CI -0·43 to -0·07 at lag day 4). This study suggests that there are thresholds for the effects of temperature and relative humidity on bacillary dysentery, and these findings should be considered in its prevention and control programmes.
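The excess risks quoted above come from log-linear (Poisson-type) models, where the percent excess risk per unit exposure is (exp(β) − 1) × 100. A minimal sketch of that conversion; the coefficient here is back-calculated from the reported 1.06% figure for illustration, not taken from the paper:

```python
import math

def excess_risk_pct(beta: float) -> float:
    """Percent excess risk per one-unit increase in exposure for a
    log-linear (Poisson) rate model: ER = (exp(beta) - 1) * 100."""
    return (math.exp(beta) - 1.0) * 100.0

# Back-calculate the coefficient implied by the reported 1.06% ER per 1 degree C
beta = math.log(1 + 1.06 / 100)
print(round(excess_risk_pct(beta), 2))  # 1.06
```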
Hyperglycaemia and risk of adverse perinatal outcomes: systematic review and meta-analysis.
Farrar, Diane; Simmonds, Mark; Bryant, Maria; Sheldon, Trevor A; Tuffnell, Derek; Golder, Su; Dunne, Fidelma; Lawlor, Debbie A
2016-09-13
To assess the association between maternal glucose concentrations and adverse perinatal outcomes in women without gestational or existing diabetes and to determine whether clear thresholds for identifying women at risk of perinatal outcomes can be identified. Systematic review and meta-analysis of prospective cohort studies and control arms of randomised trials. Databases including Medline and Embase were searched up to October 2014 and combined with individual participant data from two additional birth cohorts. Studies including pregnant women with oral glucose tolerance (OGTT) or challenge (OGCT) test results, with data on at least one adverse perinatal outcome. Glucose test results were extracted for OGCT (50 g) and OGTT (75 g and 100 g) at fasting and one and two hour post-load timings. Data were extracted on induction of labour; caesarean and instrumental delivery; pregnancy induced hypertension; pre-eclampsia; macrosomia; large for gestational age; preterm birth; birth injury; and neonatal hypoglycaemia. Risk of bias was assessed with a modified version of the critical appraisal skills programme and quality in prognostic studies tools. 25 reports from 23 published studies and two individual participant data cohorts were included, with up to 207 172 women (numbers varied by the test and outcome analysed in the meta-analyses). Overall most studies were judged as having a low risk of bias. There were positive linear associations with caesarean section, induction of labour, large for gestational age, macrosomia, and shoulder dystocia for all glucose exposures across the distribution of glucose concentrations. There was no clear evidence of a threshold effect. In general, associations were stronger for fasting concentration than for post-load concentration. 
For example, the odds ratios for large for gestational age per 1 mmol/L increase of fasting and two hour post-load glucose concentrations (after a 75 g OGTT) were 2.15 (95% confidence interval 1.60 to 2.91) and 1.20 (1.13 to 1.28), respectively. Heterogeneity was low between studies in all analyses. This review and meta-analysis identified a large number of studies in various countries. There was a graded linear association between fasting and post-load glucose concentration across the whole glucose distribution and most adverse perinatal outcomes in women without pre-existing or gestational diabetes. The lack of a clear threshold at which risk increases means that decisions regarding thresholds for diagnosing gestational diabetes are somewhat arbitrary. Research should now investigate the clinical and cost-effectiveness of applying different glucose thresholds for diagnosis of gestational diabetes on perinatal and longer term outcomes. PROSPERO CRD42013004608. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
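Because the reported associations are log-linear in glucose concentration, the per-unit odds ratios scale multiplicatively for other increments. A small sketch of that arithmetic; the 0.5 mmol/L increment is an illustrative choice, not a value from the review:

```python
def odds_ratio_for_increase(or_per_unit: float, delta: float) -> float:
    """Under a log-linear odds model, the OR for a `delta`-unit increase
    is the per-unit OR raised to the power delta."""
    return or_per_unit ** delta

# The reported fasting-glucose OR of 2.15 per 1 mmol/L implies, for a 0.5 mmol/L rise:
print(round(odds_ratio_for_increase(2.15, 0.5), 2))  # 1.47
```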
An adaptive design for updating the threshold value of a continuous biomarker.
Spencer, Amy V; Harbron, Chris; Mander, Adrian; Wason, James; Peers, Ian
2016-11-30
Potential predictive biomarkers are often measured on a continuous scale, but in practice, a threshold value to divide the patient population into biomarker 'positive' and 'negative' is desirable. Early phase clinical trials are increasingly using biomarkers for patient selection, but at this stage, it is likely that little will be known about the relationship between the biomarker and the treatment outcome. We describe a single-arm trial design with adaptive enrichment, which can increase power to demonstrate efficacy within a patient subpopulation, the parameters of which are also estimated. Our design enables us to learn about the biomarker and optimally adjust the threshold during the study, using a combination of generalised linear modelling and Bayesian prediction. At the final analysis, a binomial exact test is carried out, allowing the hypothesis that 'no population subset exists in which the novel treatment has a desirable response rate' to be tested. Through extensive simulations, we are able to show increased power over fixed threshold methods in many situations without increasing the type-I error rate. We also show that estimates of the threshold, which defines the population subset, are unbiased and often more precise than those from fixed threshold studies. We provide an example of the method applied (retrospectively) to publicly available data from a study of the use of tamoxifen after mastectomy by the German Breast Study Group, where progesterone receptor is the biomarker of interest. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
On the multifractal effects generated by monofractal signals
NASA Astrophysics Data System (ADS)
Grech, Dariusz; Pamuła, Grzegorz
2013-12-01
We study quantitatively the level of false multifractal signal one may encounter while analyzing multifractal phenomena in time series within multifractal detrended fluctuation analysis (MF-DFA). The investigated effect arises from the finite length of the analyzed data series and is additionally amplified by any long-term memory the data may contain. We provide a detailed quantitative description of this apparent multifractal background signal as a threshold in the spread of generalized Hurst exponent values Δh, or a threshold in the width of the multifractal spectrum Δα, below which multifractal properties of the system are only apparent, i.e. do not exist, despite Δα≠0 or Δh≠0. We find this effect quite important for shorter or persistent series, and we argue it is linear with respect to the autocorrelation exponent γ. Its strength decays according to a power law with respect to the length of the time series. The influence of basic linear and nonlinear transformations applied to initial data in finite time series with various levels of long memory is also investigated. This provides an additional set of semi-analytical results. The obtained formulas are significant in any interdisciplinary application of multifractality, including physics, financial data analysis or physiology, because they allow one to separate ‘true’ multifractal phenomena from apparent (artificial) multifractal effects. They should be a first-choice tool for deciding whether a given signal has genuine multiscaling properties or not.
Visual recovery in cortical blindness is limited by high internal noise
Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.
2015-01-01
Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
NASA Astrophysics Data System (ADS)
Mikishev, Alexander B.; Nepomnyashchy, Alexander A.
2018-05-01
The paper presents an analysis of the impact of vertical periodic vibrations on the long-wavelength Marangoni instability in a liquid layer with poorly conducting boundaries in the presence of an insoluble surfactant on the deformable gas-liquid interface. The layer is subject to a uniform transverse temperature gradient. Linear stability analysis is performed in order to find the critical values of the Marangoni number for both monotonic and oscillatory instability modes. Long-wave asymptotic expansions are used. At the leading order, the critical values are independent of the vibration parameters; at the next order of approximation, we find that vibration raises the stability thresholds.
Mirror instability near the threshold: Hybrid simulations
NASA Astrophysics Data System (ADS)
Hellinger, P.; Trávníček, P.; Passot, T.; Sulem, P.; Kuznetsov, E. A.; Califano, F.
2007-12-01
Nonlinear behavior of the mirror instability near the threshold is investigated using 1-D hybrid simulations. The simulations demonstrate the presence of an early phase where quasi-linear effects dominate [ Shapiro and Shevchenko, 1964]. The quasi-linear diffusion is however not the main saturation mechanism. A second phase is observed where the mirror mode is linearly stable (the stability is evaluated using the instantaneous ion distribution function) but where the instability nevertheless continues to develop, leading to nonlinear coherent structures in the form of magnetic humps. This regime is well modeled by a nonlinear equation for the magnetic field evolution, derived from a reductive perturbative expansion of the Vlasov-Maxwell equations [ Kuznetsov et al., 2007] with a phenomenological term which represents local variations of the ion Larmor radius. In contrast with previous models where saturation is due to the cooling of a population of trapped particles, the resulting equation correctly reproduces the development of magnetic humps from an initial noise. References Kuznetsov, E., T. Passot and P. L. Sulem (2007), Dynamical model for nonlinear mirror modes near threshold, Phys. Rev. Lett., 98, 235003. Shapiro, V. D., and V. I. Shevchenko (1964), Sov. JETP, 18, 1109.
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
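The fuzzy c-means step can be illustrated with a minimal 1-D implementation on toy intensities. This is a generic FCM sketch; the study's actual preprocessing, initialisation and cluster count are not specified in the abstract:

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, n_iter=50):
    """Minimal fuzzy c-means for 1-D voxel intensities.
    Returns cluster centers and the (c x n) membership matrix."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))  # deterministic init
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)                  # memberships sum to 1 per voxel
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted centers
    return centers, u

# Toy bimodal intensities: adipose-like (0.2) vs fibroglandular-like (0.8) voxels
x = np.concatenate([np.full(60, 0.2), np.full(40, 0.8)])
centers, u = fcm_1d(x)
dense = np.argmax(centers)
pct_fgv = 100.0 * np.mean(np.argmax(u, axis=0) == dense)  # hard-assigned %FGV
print(np.sort(centers), pct_fgv)
```

On this toy data the centers converge near 0.2 and 0.8 and the hard-assigned fibroglandular fraction is 40%.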
Morignat, Eric; Gay, Emilie; Vinard, Jean-Luc; Calavas, Didier; Hénaux, Viviane
2015-07-01
In the context of climate change, the frequency and severity of extreme weather events are expected to increase in temperate regions, and potentially have a severe impact on farmed cattle through production losses or deaths. In this study, we used distributed lag non-linear models to describe and quantify the relationship between a temperature-humidity index (THI) and cattle mortality in 12 areas in France. THI incorporates the effects of both temperature and relative humidity and has previously been used to quantify the degree of heat stress on dairy cattle, because it reflects the physical stress caused by extreme conditions better than air temperature alone. Relationships between daily THI and mortality were modeled separately for dairy and beef cattle during the 2003-2006 period. Our general approach was to first determine the shape of the THI-mortality relationship in each area by modeling THI with natural cubic splines. We then modeled each relationship assuming a three-piecewise linear function, to estimate the critical cold and heat THI thresholds for each area, delimiting the thermoneutral zone (i.e. where the risk of death is at its minimum), and the cold and heat effects below and above these thresholds, respectively. Area-specific estimates of the cold or heat effects were then combined in a hierarchical Bayesian model to compute the pooled effects of THI increase or decrease on dairy and beef cattle mortality. A U-shaped relationship, indicating a mortality increase below the cold threshold and above the heat threshold, was found in most of the study areas for dairy and beef cattle. The pooled estimate of the mortality risk associated with a 1°C decrease in THI below the cold threshold was 5.0% for dairy cattle [95% posterior interval: 4.4, 5.5] and 4.4% for beef cattle [2.0, 6.5]. The pooled mortality risk associated with a 1°C increase above the hot threshold was estimated to be 5.6% [5.0, 6.2] for dairy and 4.6% [0.9, 8.7] for beef cattle. 
Knowing the thermoneutral zone and temperature effects outside this zone is of primary interest for farmers because it can help determine when to implement appropriate preventive and mitigation measures. Copyright © 2015 Elsevier Inc. All rights reserved.
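The three-piecewise linear log-risk structure described above can be sketched as follows. The threshold values and the conversion of the pooled percent risks into log-risk slopes are illustrative assumptions, not the paper's area-specific estimates:

```python
import math

def thi_log_risk(thi, cold_thr, heat_thr, cold_slope, heat_slope):
    """Three-piece linear predictor on the log-risk scale: zero inside the
    thermoneutral zone, rising linearly below cold_thr and above heat_thr."""
    if thi < cold_thr:
        return cold_slope * (cold_thr - thi)
    if thi > heat_thr:
        return heat_slope * (thi - heat_thr)
    return 0.0

# Hypothetical dairy-cattle thresholds; slopes from the pooled 5.0% / 5.6% estimates
cold_thr, heat_thr = 30.0, 70.0
cold_slope, heat_slope = math.log(1.050), math.log(1.056)
rr = math.exp(thi_log_risk(27.0, cold_thr, heat_thr, cold_slope, heat_slope))
print(round(rr, 3))  # relative risk 3 degrees below the cold threshold: 1.158
```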
Stefani, Luciana Cadore; Muller, Suzana; Torres, Iraci L. S.; Razzolini, Bruna; Rozisky, Joanna R.; Fregni, Felipe; Markus, Regina; Caumo, Wolnei
2013-01-01
Background Previous studies have suggested that melatonin may produce antinociception through peripheral and central mechanisms. Based on the preliminary encouraging results of studies of the effects of melatonin on pain modulation, the important question has been raised of whether melatonin's effect on pain modulation in humans is dose dependent. Objective The objective was to evaluate the analgesic dose response of the effects of melatonin on pressure and heat pain threshold and tolerance and the sedative effects. Methods Sixty-one healthy subjects aged 19 to 47 y were randomized into one of four groups: placebo, 0.05 mg/kg sublingual melatonin, 0.15 mg/kg sublingual melatonin or 0.25 mg/kg sublingual melatonin. We determined the pressure pain threshold (PPT) and the pressure pain tolerance (PPTo). Quantitative sensory testing (QST) was used to measure the heat pain threshold (HPT) and the heat pain tolerance (HPTo). Sedation was assessed with a visual analogue scale and bispectral analysis. Results Serum melatonin levels were directly proportional to the melatonin doses given to each subject. We observed a significant effect associated with dose group. Post hoc analysis indicated significant differences between the placebo vs. the intermediate (0.15 mg/kg) and the highest (0.25 mg/kg) melatonin doses for all pain threshold and sedation level tests. A linear regression model indicated a significant association between the serum melatonin concentrations and changes in pain threshold and pain tolerance (R2 = 0.492 for HPT, R2 = 0.538 for PPT, R2 = 0.558 for HPTo and R2 = 0.584 for PPTo). Conclusions The present data indicate that sublingual melatonin exerts well-defined dose-dependent antinociceptive activity. There is a correlation between the plasma melatonin drug concentration and acute changes in the pain threshold. These results provide additional support for the investigation of melatonin as an analgesic agent. 
Brazilian Clinical Trials Registry (ReBec): (U1111-1123-5109). IRB: Research Ethics Committee at the Hospital de Clínicas de Porto Alegre. PMID:25947930
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
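The first of the two partitioning methods, thresholding a maximal weighted spanning forest, can be sketched with Kruskal's algorithm. This is a generic illustration with made-up weights, not the authors' notation:

```python
from collections import defaultdict

def threshold_clusters(n, edges, threshold):
    """Partition nodes 0..n-1 by building a maximal weighted spanning
    forest (Kruskal, heaviest edges first) and keeping only forest
    edges with weight >= threshold."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for w, a, b in sorted(edges, reverse=True):
        if w < threshold:
            break                          # remaining forest edges are cut
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    groups = defaultdict(list)
    for v in range(n):
        groups[find(v)].append(v)
    return sorted(groups.values())

edges = [(0.9, 0, 1), (0.8, 1, 2), (0.3, 2, 3), (0.7, 3, 4)]
print(threshold_clusters(5, edges, 0.5))  # [[0, 1, 2], [3, 4]]
```

Cutting the maximal spanning forest at a threshold yields the same partition as single-linkage clustering at that similarity level, which is why sweeping the threshold traces out a hierarchy of partitions of V.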
The shape of the dose response curve in the low dose region has been debated since the late 1940s. The debate originally focused on linear no threshold (LNT) vs threshold responses in the low dose range for cancer and noncancer related effects. Recently, claims have arisen tha...
NASA Astrophysics Data System (ADS)
Picu, R. C.; Pal, A.; Lupulescu, M. V.
2016-04-01
We study the mechanical behavior of two-dimensional, stochastically microcracked continua in the range of crack densities close to, and above, the transport percolation threshold. We show that these materials retain stiffness up to crack densities much larger than the transport percolation threshold due to topological interlocking of sample subdomains. Even with a linear constitutive law for the continuum, the mechanical behavior becomes nonlinear in the range of crack densities bounded by the transport and stiffness percolation thresholds. The effect is due to the fractal nature of the fragmentation process and is not linked to the roughness of individual cracks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korotkevich, Alexander O.; Lushnikov, Pavel M., E-mail: plushnik@math.unm.edu; Landau Institute for Theoretical Physics, 2 Kosygin Str., Moscow 119334
2015-01-15
We developed a linear theory of backward stimulated Brillouin scatter (BSBS) of a spatially and temporally random laser beam relevant for laser fusion. Our analysis reveals a new collective regime of BSBS (CBSBS). Its intensity threshold is controlled by diffraction, once cTc exceeds a laser speckle length, with Tc the laser coherence time. The BSBS spatial gain rate is approximately the sum of that due to CBSBS, and a part which is independent of diffraction and varies linearly with Tc. The CBSBS spatial gain rate may be reduced significantly by the temporal bandwidth of KrF-based laser systems compared to the bandwidth currently available to temporally smoothed glass-based laser systems.
Measurement of visual contrast sensitivity
NASA Astrophysics Data System (ADS)
Vongierke, H. E.; Marko, A. R.
1985-04-01
This invention measures the visual contrast sensitivity (modulation transfer) function of a human subject by means of a linear or circular spatial-frequency pattern on a cathode ray tube whose contrast automatically decreases or increases depending on whether the subject presses or releases a hand-switch button. The subject finds the threshold of detection of the pattern modulation by adjusting the contrast to values which vary about the subject's threshold, thereby determining the threshold; the magnitude of the contrast fluctuations between reversals also provides an estimate of the variability of the subject's absolute threshold. The invention also involves slowly and automatically sweeping the spatial frequency of the pattern after preset time intervals, or after the threshold has been defined at each frequency by a selected number of subject-determined threshold crossings, i.e., contrast reversals.
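The tracking procedure amounts to an up-down adjustment whose reversal points bracket the threshold. A simulation sketch with an idealized noise-free observer; contrast is in arbitrary percent units and all parameters are illustrative:

```python
def track_threshold(true_thr, start=100, step=5, n_reversals=8):
    """Contrast falls while the observer reports detection (button pressed)
    and rises when the pattern is invisible; the threshold estimate is the
    mean contrast over the recorded reversal points."""
    c, going_down, reversals = start, True, []
    while len(reversals) < n_reversals:
        detected = c > true_thr            # idealized, noise-free response
        if detected != going_down:         # direction change => reversal
            going_down = detected
            reversals.append(c)
        c += -step if detected else step
    return sum(reversals) / len(reversals)

print(track_threshold(30))  # reversals alternate 30/35, so the estimate is 32.5
```

With observer noise the reversal contrasts would scatter around the true threshold, and their spread would give the variability estimate mentioned in the abstract.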
Analysis of ecological thresholds in a temperate forest undergoing dieback
Newton, Adrian C.; Cantarello, Elena; Evans, Paul M.
2017-01-01
Positive feedbacks in drivers of degradation can cause threshold responses in natural ecosystems. Though threshold responses have received much attention in studies of aquatic ecosystems, they have been neglected in terrestrial systems, such as forests, where the long time-scales required for monitoring have impeded research. In this study we explored the role of positive feedbacks in a temperate forest that has been monitored for 50 years and is undergoing dieback, largely as a result of death of the canopy dominant species (Fagus sylvatica, beech). Statistical analyses showed strong non-linear losses in basal area for some plots, while others showed relatively gradual change. Beech seedling density was positively related to canopy openness, but a similar relationship was not observed for saplings, suggesting a feedback whereby mortality in areas with high canopy openness was elevated. We combined this observation with empirical data on size- and growth-mediated mortality of trees to produce an individual-based model of forest dynamics. We used this model to simulate changes in the structure of the forest over 100 years under scenarios with different juvenile and mature mortality probabilities, as well as a positive feedback between seedling and mature tree mortality. This model produced declines in forest basal area when critical juvenile and mature mortality probabilities were exceeded. Feedbacks in juvenile mortality caused a greater reduction in basal area relative to scenarios with no feedback. Non-linear, concave declines of basal area occurred only when mature tree mortality was 3–5 times higher than rates observed in the field. Our results indicate that the longevity of trees may help to buffer forests against environmental change and that the maintenance of old, large trees may aid the resilience of forest stands. 
In addition, our work suggests that dieback of forests may be avoidable providing pressures on mature and juvenile trees do not pass critical thresholds. PMID:29240842
Ecosystem resilience and threshold response in the Galápagos coastal zone.
Seddon, Alistair W R; Froyd, Cynthia A; Leng, Melanie J; Milne, Glenn A; Willis, Katherine J
2011-01-01
The Intergovernmental Panel on Climate Change (IPCC) provides a conservative estimate on rates of sea-level rise of 3.8 mm yr−1 at the end of the 21st century, which may have a detrimental effect on ecologically important mangrove ecosystems. Understanding factors influencing the long-term resilience of these communities is critical but poorly understood. We investigate ecological resilience in a coastal mangrove community from the Galápagos Islands over the last 2700 years using three research questions: What are the 'fast and slow' processes operating in the coastal zone? Is there evidence for a threshold response? How can the past inform us about the resilience of the modern system? Palaeoecological methods (AMS radiocarbon dating, stable carbon isotopes (δ13C)) were used to reconstruct sedimentation rates and ecological change over the past 2,700 years at Diablas lagoon, Isabela, Galápagos. Bulk geochemical analysis was also used to determine local environmental changes, and salinity was reconstructed using a diatom transfer function. Changes in relative sea level (RSL) were estimated using a glacio-isostatic adjustment model. Non-linear behaviour was observed in the Diablas mangrove ecosystem as it responded to increased salinities following exposure to tidal inundations. A negative feedback was observed which enabled the mangrove canopy to accrete vertically, but disturbances may have opened up the canopy and contributed to an erosion of resilience over time. A combination of drier climatic conditions and a slight fall in RSL then resulted in a threshold response, from a mangrove community to a microbial mat. Palaeoecological records can provide important information on the nature of non-linear behaviour by identifying thresholds within ecological systems, and in outlining responses to 'fast' and 'slow' environmental change between alternative stable states. 
This study highlights the need to incorporate a long-term ecological perspective when designing strategies for maximizing coastal resilience.
Surface ablation of aluminum and silicon by ultrashort laser pulses of variable width
NASA Astrophysics Data System (ADS)
Zayarny, D. A.; Ionin, A. A.; Kudryashov, S. I.; Makarov, S. V.; Kuchmizhak, A. A.; Vitrik, O. B.; Kulchin, Yu. N.
2016-06-01
Single-shot thresholds of surface ablation of aluminum and silicon via spallative ablation by infrared (IR) and visible ultrashort laser pulses of variable width τlas (0.2-12 ps) have been measured by optical microscopy. For increasing laser pulse width τlas < 3 ps, a drastic (threefold) drop of the ablation threshold of aluminum has been observed for visible pulses compared to an almost negligible threshold variation for IR pulses. In contrast, the ablation threshold in silicon increases threefold with increasing τlas for IR pulses, while the corresponding thresholds for visible pulses remained almost constant. In aluminum, such a width-dependent decrease in ablation thresholds has been related to strongly diminished temperature gradients for pulse widths exceeding the characteristic electron-phonon thermalization time. In silicon, the observed increase in ablation thresholds has been ascribed to two-photon IR excitation, while in the visible range linear absorption of the material results in almost constant thresholds.
Dynamical processes and epidemic threshold on nonlinear coupled multiplex networks
NASA Astrophysics Data System (ADS)
Gao, Chao; Tang, Shaoting; Li, Weihua; Yang, Yaqian; Zheng, Zhiming
2018-04-01
Recently, the interplay between epidemic spreading and awareness diffusion has aroused the interest of many researchers, who have studied models mainly based on linear coupling relations between information and epidemic layers. However, in real-world networks the relation between two layers may be closely correlated with the property of individual nodes and exhibits nonlinear dynamical features. Here we propose a nonlinear coupled information-epidemic model (I-E model) and present a comprehensive analysis in a more generalized scenario where the upload rate differs from node to node, deletion rate varies between susceptible and infected states, and infection rate changes between unaware and aware states. In particular, we develop a theoretical framework of the intra- and inter-layer dynamical processes with a microscopic Markov chain approach (MMCA), and derive an analytic epidemic threshold. Our results suggest that the change of upload and deletion rate has little effect on the diffusion dynamics in the epidemic layer.
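For context, the single-layer special case of the MMCA threshold is the classical spectral result β_c = μ/Λ_max(A), where Λ_max is the largest eigenvalue of the contact matrix; the paper's coupled two-layer threshold generalizes this by reweighting the matrix with awareness-dependent rates. A sketch of the single-layer computation:

```python
import numpy as np

def sis_threshold(adj, mu):
    """Classical discrete-time SIS (MMCA) epidemic threshold on a single
    layer: beta_c = mu / Lambda_max(A)."""
    lam_max = max(abs(np.linalg.eigvals(adj)))
    return mu / lam_max

# Complete graph on 4 nodes has Lambda_max = 3, so beta_c = mu / 3
A = np.ones((4, 4)) - np.eye(4)
print(round(sis_threshold(A, 0.6), 2))  # 0.2
```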
NASA Astrophysics Data System (ADS)
Wu, W.; Chen, G. Y.; Kang, R.; Xia, J. C.; Huang, Y. P.; Chen, K. J.
2017-07-01
During slaughtering and further processing, chicken carcasses are inevitably contaminated by microbial pathogen contaminants. Due to food safety concerns, many countries implement a zero-tolerance policy that forbids the placement of visibly contaminated carcasses in ice-water chiller tanks during processing. Manual detection of contaminants is labor-intensive and imprecise. Here, a successive projections algorithm (SPA)-multivariable linear regression (MLR) classifier based on an optimal performance threshold was developed for automatic detection of contaminants on chicken carcasses. Hyperspectral images were obtained using a hyperspectral imaging system. A regression model of the classifier was established by MLR based on twelve characteristic wavelengths (505, 537, 561, 562, 564, 575, 604, 627, 656, 665, 670, and 689 nm) selected by SPA, and the optimal threshold T = 1 was obtained from the receiver operating characteristic (ROC) analysis. The SPA-MLR classifier provided the best detection results when compared with the SPA-partial least squares (PLS) regression classifier and the SPA-least squares support vector machine (LS-SVM) classifier. The true positive rate (TPR) of 100% and the false positive rate (FPR) of 0.392% indicate that the SPA-MLR classifier can utilize spatial and spectral information to effectively detect contaminants on chicken carcasses.
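Choosing a classifier cutoff from the ROC curve can be sketched as below, using Youden's J = TPR − FPR as the selection criterion. That criterion is a common convention, not necessarily the one the study used, and the scores here are made up:

```python
def roc_best_threshold(scores, labels):
    """Return the score cutoff that maximizes Youden's J = TPR - FPR,
    scanning every observed score as a candidate threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_j, best_t = None, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg
        if best_j is None or j > best_j:
            best_j, best_t = j, t
    return best_t

# Toy regression outputs: clean pixels score low, contaminated pixels score high
scores = [0.2, 0.4, 0.6, 0.8, 1.2, 1.4]
labels = [0, 0, 0, 1, 1, 1]
print(roc_best_threshold(scores, labels))  # 0.8
```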
NASA Astrophysics Data System (ADS)
Chen, Yong; Yan, Zhenya; Li, Xin
2018-02-01
The influence of spatially-periodic momentum modulation on beam dynamics in parity-time (PT) symmetric optical lattice is systematically investigated in the one- and two-dimensional nonlinear Schrödinger equations. In the linear regime, we demonstrate that the momentum modulation can alter the first and second PT thresholds of the classical lattice, periodically or regularly change the shapes of the band structure, rotate and split the diffraction patterns of beams leading to multiple refraction and emissions. In the Kerr-nonlinear regime for one-dimension (1D) case, a large family of fundamental solitons within the semi-infinite gap can be found to be stable, even beyond the second PT threshold; it is shown that the momentum modulation can shrink the existing range of fundamental solitons and not change their stability. For two-dimension (2D) case, most solitons with higher intensities are relatively unstable in their existing regions which are narrower than those in 1D case, but we also find stable fundamental solitons corroborated by linear stability analysis and direct beam propagation. More importantly, the momentum modulation can also utterly change the direction of the transverse power flow and control the energy exchange among gain or loss regions.
NASA Astrophysics Data System (ADS)
Fleury, Manon; Charron, Dominique F.; Holt, John D.; Allen, O. Brian; Maarouf, Abdel R.
2006-07-01
The incidence of enteric infections in the Canadian population varies seasonally, and may be expected to change in response to global climate change. To better understand the potential impact of warmer temperatures on enteric infections in Canada, we investigated the relationship between ambient temperature and weekly reports of confirmed cases of three pathogens, Salmonella, pathogenic Escherichia coli and Campylobacter, between 1992 and 2000 in two Canadian provinces. We used generalized linear models (GLMs) and generalized additive models (GAMs) to estimate the effect of seasonal adjustments on the estimated models. We found a strong non-linear association between ambient temperature and the occurrence of all three enteric pathogens in Alberta, Canada, and of Campylobacter in Newfoundland-Labrador. Threshold models were used to quantify the relationship between disease and temperature, with thresholds chosen from 0 to -10°C depending on the pathogen modeled. For Alberta, the log relative risk of Salmonella weekly case counts increased by 1.2%, Campylobacter weekly case counts by 2.2%, and E. coli weekly case counts by 6.0% for every degree increase in weekly mean temperature. For Newfoundland-Labrador, the log relative risk for Campylobacter increased by 4.5% for every degree increase in weekly mean temperature.
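The reported effects take a hockey-stick, log-linear form: log relative risk rises by a fixed percentage per degree above a pathogen-specific temperature threshold. A minimal sketch of that functional form follows; the 1.2%-per-degree slope is the abstract's Salmonella estimate for Alberta, but the function itself is an illustrative reconstruction, not the authors' model code:

```python
import math

def relative_risk(temp_c, threshold_c, pct_per_degree):
    """Hockey-stick log-linear model: log relative risk rises by
    pct_per_degree (a percentage) per degree Celsius above the
    threshold; RR = 1 at or below the threshold."""
    excess = max(0.0, temp_c - threshold_c)
    return math.exp(pct_per_degree / 100.0 * excess)

# Illustrative: Salmonella in Alberta, 1.2% per degree, 0 degC threshold
rr = relative_risk(10.0, 0.0, 1.2)  # a week averaging 10 degC
```

Below the threshold the risk is flat, which is exactly what distinguishes this threshold model from the plain linear alternative the GLM/GAM comparison probes.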
THRESHOLD LOGIC IN ARTIFICIAL INTELLIGENCE
COMPUTER LOGIC, ARTIFICIAL INTELLIGENCE, BIONICS, GEOMETRY, INPUT OUTPUT DEVICES, LINEAR PROGRAMMING, MATHEMATICAL LOGIC, MATHEMATICAL PREDICTION, NETWORKS, PATTERN RECOGNITION, PROBABILITY, SWITCHING CIRCUITS, SYNTHESIS
Robust detection, isolation and accommodation for sensor failures
NASA Technical Reports Server (NTRS)
Emami-Naeini, A.; Akhter, M. M.; Rock, S. M.
1986-01-01
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and process inputs and outputs, are used to generate these innovations. Thresholds used for failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping into the Linear Quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method over previous techniques.
Biomechanical properties of concussions in high school football.
Broglio, Steven P; Schnebel, Brock; Sosnoff, Jacob J; Shin, Sunghoon; Fend, Xingdong; He, Xuming; Zimmerman, Jerrad
2010-11-01
Sport concussion represents the majority of brain injuries occurring in the United States with 1.6–3.8 million cases annually. Understanding the biomechanical properties of this injury will support the development of better diagnostics and preventative techniques. We monitored all football related head impacts in 78 high school athletes (mean age = 16.7 yr) from 2005 to 2008 to better understand the biomechanical characteristics of concussive impacts. Using the Head Impact Telemetry System, a total of 54,247 impacts were recorded, and 13 concussive episodes were captured for analysis. A classification and regression tree analysis of impacts indicated that rotational acceleration (5582.3 rad·s⁻²), linear acceleration (96.1g), and impact location (front, top, and back) yielded the highest predictive value of concussion. These threshold values are nearly identical with those reported at the collegiate and professional level. If the Head Impact Telemetry System were implemented for medical use, sideline personnel can expect to diagnose one of every five athletes with a concussion when the impact exceeds these tolerance levels. Why all athletes did not sustain a concussion when the impacts generated variables in excess of our threshold criteria is not entirely clear, although individual differences between participants may play a role. A similar threshold to concussion in adolescent athletes compared with their collegiate and professional counterparts suggests an equal concussion risk at all levels of play.
Numerical Analysis of the Effects of Normalized Plasma Pressure on RMP ELM Suppression in DIII-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orlov, D. M.; Moyer, R.A.; Evans, T. E.
2010-01-01
The effect of normalized plasma pressure as characterized by the normalized pressure parameter (beta(N)) on the suppression of edge localized modes (ELMs) using resonant magnetic perturbations (RMPs) is studied in low-collisionality (nu* <= 0.2) H-mode plasmas with low triangularity (δ = 0.25) and ITER-similar shapes (δ = 0.51). Experimental results have suggested that ELM suppression by RMPs requires a minimum threshold in plasma pressure as characterized by beta(N). The variations in the vacuum field topology with beta(N) due to safety factor profile and island overlap changes caused by variation of the Shafranov shift and pedestal bootstrap current are examined numerically with the field line integration code TRIP3D. The results show very small differences in the vacuum field structure in terms of the Chirikov (magnetic island overlap) parameter, Poincare sections, and field line loss fractions. These differences do not appear to explain the observed threshold in beta(N) for ELM suppression. Linear peeling-ballooning stability analysis with the ELITE code suggests that the ELMs which persist during the RMPs when beta(N) is below the observed threshold are not type I ELMs, because the pedestal conditions are deep within the stable regime for peeling-ballooning modes. These ELMs have similarities to type III ELMs or low density ELMs.
NASA Astrophysics Data System (ADS)
Uenomachi, M.; Orita, T.; Shimazoe, K.; Takahashi, H.; Ikeda, H.; Tsujita, K.; Sekiba, D.
2018-01-01
High-resolution Elastic Recoil Detection Analysis (HERDA), which consists of a 90° sector magnetic spectrometer and a position-sensitive detector (PSD), is a method of quantitative hydrogen analysis. In order to increase sensitivity, a HERDA system using a multi-channel silicon-based ion detector has been developed. Here, as a parallel and fast readout circuit for a multi-channel silicon-based ion detector, a slew-rate-limited time-over-threshold (ToT) application-specific integrated circuit (ASIC) was designed, and a new slew-rate-limited ToT method is proposed. The designed ASIC has 48 channels, and each channel consists of a preamplifier, a slew-rate-limited shaping amplifier, which makes the ToT response linear, and a comparator. The measured equivalent noise charges (ENCs) of the preamplifier, the shaper, and the ToT with no detector capacitance were 253±21, 343±46, and 560±56 electrons RMS, respectively. The spectra from a 241Am source measured using the slew-rate-limited ToT ASIC are also reported.
Risk equivalent of exposure versus dose of radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, V.P.
This report describes a risk analysis study of low-dose irradiation and the resulting biological effects on a cell. The author describes fundamental differences between the effects of high-level exposure (HLE) and low-level exposure (LLE). He stresses that the concept of absorbed dose to an organ is not a dose but a level of effect produced by a particular number of particles. He discusses the confusion between a linear-proportional representation of dose limits and a threshold-curvilinear representation, suggesting that a LLE is a composite of both systems. (TEM)
NASA Astrophysics Data System (ADS)
Tankam, Israel; Tchinda Mouofo, Plaire; Mendy, Abdoulaye; Lam, Mountaga; Tewa, Jean Jules; Bowong, Samuel
2015-06-01
We investigate the effects of time delay and piecewise-linear threshold policy harvesting in a delayed predator-prey model. To our knowledge, this is the first time that a Holling type III response function and this threshold policy harvesting have been combined with time delay. The trajectories of our delayed system are bounded; the stability of each equilibrium is analyzed with and without delay; local bifurcations such as saddle-node and Hopf bifurcations occur; optimal harvesting is also investigated. Numerical simulations are provided to illustrate each result.
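A piecewise-linear threshold harvesting policy typically means no harvest below a lower threshold, a linear ramp between two thresholds, and a constant maximum harvest above the upper one. A minimal sketch of such a policy function, with hypothetical thresholds and maximum rate rather than values from the paper:

```python
def threshold_harvest(x, t1, t2, h_max):
    """Piecewise-linear threshold policy: no harvest below population t1,
    a linear ramp between t1 and t2, constant harvest h_max above t2."""
    if x <= t1:
        return 0.0
    if x >= t2:
        return h_max
    return h_max * (x - t1) / (t2 - t1)

# Hypothetical policy: harvesting ramps up between populations 10 and 20
h = threshold_harvest(15.0, 10.0, 20.0, 4.0)  # midway up the ramp
```

The continuity of the ramp (as opposed to an on/off switch at a single threshold) is what keeps the harvested predator-prey dynamics piecewise-smooth rather than discontinuous.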
Threshold current for fireball generation
NASA Astrophysics Data System (ADS)
Dijkhuis, Geert C.
1982-05-01
Fireball generation from a high-intensity circuit breaker arc is interpreted here as a quantum-mechanical phenomenon caused by severe cooling of electrode material evaporating from contact surfaces. According to the proposed mechanism, quantum effects appear in the arc plasma when the radius of one magnetic flux quantum inside solid electrode material has shrunk to one London penetration length. A formula derived for the threshold discharge current preceding fireball generation is found compatible with data reported by Silberg. This formula predicts linear scaling of the threshold current with the circuit breaker's electrode radius and concentration of conduction electrons.
Have the temperature time series a structural change after 1998?
NASA Astrophysics Data System (ADS)
Werner, Rolf; Valev, Dimitare; Danov, Dimitar
2012-07-01
The global and hemispheric temperature GISS and HadCRUT3 time series were analysed for structural changes. We postulate that the piecewise temperature function is continuous in time. Slopes are calculated for a sequence of segments delimited by time thresholds, using a standard method: restricted linear regression with dummy variables. We performed the calculations and tests for different numbers of thresholds, with the thresholds searched continuously within specified time intervals. The F-statistic is used to determine the time points of the structural changes.
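The segmented-regression idea can be sketched as a grid search over candidate breakpoints: for each candidate, fit a continuous two-segment line by least squares and keep the breakpoint with the smallest residual sum of squares. This is a minimal stand-in for the dummy-variable method on synthetic data, not the GISS/HadCRUT3 series, and it omits the F-test step:

```python
# Sketch: fit y = a + b*t + c*max(0, t - tau) for each candidate breakpoint
# tau and keep the tau minimizing the residual sum of squares (RSS).

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_breakpoint(t, y, candidates):
    """Return (tau, rss) for the best-fitting breakpoint among candidates."""
    best = None
    for tau in candidates:
        X = [[1.0, ti, max(0.0, ti - tau)] for ti in t]
        # Normal equations X'X beta = X'y
        XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)]
               for r in range(3)]
        Xty = [sum(X[i][r] * y[i] for i in range(len(t))) for r in range(3)]
        beta = solve3(XtX, Xty)
        rss = sum((y[i] - sum(X[i][c] * beta[c] for c in range(3))) ** 2
                  for i in range(len(t)))
        if best is None or rss < best[1]:
            best = (tau, rss)
    return best

t = list(range(20))
y = [1.0 + 0.5 * ti + 2.0 * max(0.0, ti - 10.0) for ti in t]  # break at t = 10
tau, rss = fit_breakpoint(t, y, [5.0, 8.0, 10.0, 12.0, 15.0])
```

The hinge term max(0, t - tau) is the continuous analogue of a slope-change dummy variable: it forces the two segments to meet at the breakpoint, matching the continuity postulated in the abstract.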
NASA Astrophysics Data System (ADS)
Lisauskas, Alvydas; Ikamas, Kestutis; Massabeau, Sylvain; Bauer, Maris; Čibiraitė, Dovilė; Matukas, Jonas; Mangeney, Juliette; Mittendorff, Martin; Winnerl, Stephan; Krozer, Viktor; Roskos, Hartmut G.
2018-05-01
We propose to exploit rectification in field-effect transistors as an electrically controllable higher-order nonlinear phenomenon for the convenient monitoring of the temporal characteristics of THz pulses, for example, by autocorrelation measurements. This option arises because of the existence of a gate-bias-controlled super-linear response at sub-threshold operation conditions when the devices are subjected to THz radiation. We present measurements for different antenna-coupled transistor-based THz detectors (TeraFETs) employing (i) AlGaN/GaN high-electron-mobility and (ii) silicon CMOS field-effect transistors and show that the super-linear behavior in the sub-threshold bias regime is a universal phenomenon to be expected if the amplitude of the high-frequency voltage oscillations exceeds the thermal voltage. The effect is also employed as a tool for the direct determination of the speed of the intrinsic TeraFET response which allows us to avoid limitations set by the read-out circuitry. In particular, we show that the build-up time of the intrinsic rectification signal of a patch-antenna-coupled CMOS detector changes from 20 ps in the deep sub-threshold voltage regime to below 12 ps in the vicinity of the threshold voltage.
Nonlinear ionic transport through microstructured solid electrolytes: homogenization estimates
NASA Astrophysics Data System (ADS)
Curto Sillamoni, Ignacio J.; Idiart, Martín I.
2016-10-01
We consider the transport of multiple ionic species by diffusion and migration through microstructured solid electrolytes in the presence of strong electric fields. The assumed constitutive relations for the constituent phases follow from convex energy and dissipation potentials which guarantee thermodynamic consistency. The effective response is heuristically deduced from a multi-scale convergence analysis of the relevant field equations. The resulting homogenized response involves an effective dissipation potential per species. Each potential is mathematically akin to that of a standard nonlinear heterogeneous conductor. A ‘linear-comparison’ homogenization technique is then used to generate estimates for these nonlinear potentials in terms of available estimates for corresponding linear conductors. By way of example, use is made of the Maxwell-Garnett and effective-medium linear approximations to generate estimates for two-phase systems with power-law dissipation. Explicit formulas are given for some limiting cases. In the case of threshold-type behavior, the estimates exhibit non-analytical dilute limits and seem to be consistent with fields localized in low energy paths.
Molecular orbital imaging via above-threshold ionization with circularly polarized pulses.
Zhu, Xiaosong; Zhang, Qingbin; Hong, Weiyi; Lu, Peixiang; Xu, Zhizhan
2011-07-18
Above-threshold ionization (ATI) of aligned or oriented linear molecules by circularly polarized laser pulses is investigated. It is found that the all-round structural information of the molecular orbital can be extracted in a single shot by the circularly polarized probe pulse, rather than requiring the multi-shot detection of the linearly polarized case. The obtained photoelectron momentum spectrum directly depicts the symmetry and electron distribution of the occupied molecular orbital, which results from the strong sensitivity of the ionization probability to these structural features. Our investigation indicates that the circularly polarized probe scheme presents a simple method to study angle-dependent ionization and to image the occupied electronic orbital.
Method for extracting long-equivalent wavelength interferometric information
NASA Technical Reports Server (NTRS)
Hochberg, Eric B. (Inventor)
1991-01-01
A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.
Gonçalves, Hernâni; Pinto, Paula; Silva, Manuela; Ayres-de-Campos, Diogo; Bernardes, João
2016-04-01
Fetal heart rate (FHR) monitoring is used routinely in labor, but conventional methods have a limited capacity to detect fetal hypoxia/acidosis. An exploratory study was performed on the simultaneous assessment of maternal heart rate (MHR) and FHR variability, to evaluate their evolution during labor and their capacity to detect newborn acidemia. MHR and FHR were simultaneously recorded in 51 singleton term pregnancies during the last two hours of labor and compared with newborn umbilical artery blood (UAB) pH. Linear/nonlinear indices were computed separately for MHR and FHR. Interaction between MHR and FHR was quantified through the same indices on FHR-MHR and through their correlation and cross-entropy. Univariate and bivariate statistical analysis included nonparametric confidence intervals and statistical tests, receiver operating characteristic curves and linear discriminant analysis. Progression of labor was associated with a significant increase in most MHR and FHR linear indices, whereas entropy indices decreased. FHR alone and in combination with MHR as FHR-MHR evidenced the highest auROC values for prediction of fetal acidemia, with 0.76 and 0.88 for the UAB pH thresholds 7.20 and 7.15, respectively. The inclusion of MHR on bivariate analysis achieved sensitivity and specificity values of nearly 100 and 89.1%, respectively. These results suggest that simultaneous analysis of MHR and FHR may improve the identification of fetal acidemia compared with FHR alone, namely during the last hour of labor.
NASA Astrophysics Data System (ADS)
Liu, Xuejin; Chen, Han; Bornefalk, Hans; Danielsson, Mats; Karlsson, Staffan; Persson, Mats; Xu, Cheng; Huber, Ben
2015-02-01
The variation among energy thresholds in a multibin detector for photon-counting spectral CT can lead to ring artefacts in the reconstructed images. Calibration of the energy thresholds can be used to achieve homogeneous threshold settings or to develop compensation methods to reduce the artefacts. We have developed an energy-calibration method for the different comparator thresholds employed in a photon-counting silicon-strip detector. In our case, this corresponds to specifying the linear relation between the threshold positions in units of mV and the actual deposited photon energies in units of keV. This relation is determined by gain and offset values that differ for different detector channels due to variations in the manufacturing process. Typically, the calibration is accomplished by correlating the peak positions of obtained pulse-height spectra to known photon energies, e.g. with the aid of mono-energetic x rays from synchrotron radiation, radioactive isotopes or fluorescence materials. Instead of mono-energetic x rays, the calibration method presented in this paper makes use of a broad x-ray spectrum provided by commercial x-ray tubes. Gain and offset as the calibration parameters are obtained by a regression analysis that adjusts a simulated spectrum of deposited energies to a measured pulse-height spectrum. Besides the basic photon interactions such as Rayleigh scattering, Compton scattering and photo-electric absorption, the simulation takes into account the effect of pulse pileup, charge sharing and the electronic noise of the detector channels. We verify the method for different detector channels with the aid of a table-top setup, where we find the uncertainty of the keV-value of a calibrated threshold to be between 0.1 and 0.2 keV.
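The calibration target here is the linear map between comparator threshold position (mV) and deposited photon energy (keV). The paper fits gain and offset by regression against a simulated full spectrum; as a much simpler illustration of the same linear model, one can solve gain and offset from two known peak positions. The energies and mV values below are hypothetical, not measured detector data:

```python
def two_point_calibration(e1_kev, mv1, e2_kev, mv2):
    """Solve threshold_mV = gain * E_keV + offset from two known points."""
    gain = (mv2 - mv1) / (e2_kev - e1_kev)
    offset = mv1 - gain * e1_kev
    return gain, offset

def mv_to_kev(mv, gain, offset):
    """Invert the calibration: convert a threshold in mV to keV."""
    return (mv - offset) / gain

# Hypothetical peaks: 20 keV seen at 110 mV, 60 keV seen at 310 mV
gain, offset = two_point_calibration(20.0, 110.0, 60.0, 310.0)
e = mv_to_kev(210.0, gain, offset)
```

The regression approach in the paper generalizes this two-point solve to a whole-spectrum fit, which is what lets it absorb pileup, charge sharing, and electronic noise into the estimate.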
Precluding nonlinear ISI in direct detection long-haul fiber optic systems
NASA Technical Reports Server (NTRS)
Swenson, Norman L.; Shoop, Barry L.; Cioffi, John M.
1991-01-01
Long-distance, high-rate fiber optic systems employing directly modulated 1.55-micron single-mode lasers and conventional single-mode fiber suffer severe intersymbol interference (ISI) with a large nonlinear component. A method of reducing the nonlinearity of the ISI, thereby making linear equalization more viable, is investigated. It is shown that the degree of nonlinearity is highly dependent on the choice of laser bias current, and that in some cases the ISI nonlinearity can be significantly reduced by biasing the laser substantially above threshold. Simulation results predict that an increase in signal-to-nonlinear-distortion ratio as high as 25 dB can be achieved for synchronously spaced samples at an optimal sampling phase by increasing the bias current from 1.2 times threshold to 3.5 times threshold. The high SDR indicates that a linear tapped delay line equalizer could be used to mitigate ISI. Furthermore, the shape of the pulse response suggests that partial response precoding and digital feedback equalization would be particularly effective for this channel.
The critical density for star formation in HII galaxies
NASA Technical Reports Server (NTRS)
Taylor, Christopher L.; Brinks, Elias; Skillman, Evan D.
1993-01-01
The star formation rate (SFR) in galaxies is believed to obey a power-law relation with local gas density, first proposed by Schmidt (1959). Kennicutt (1989) has shown that there is a threshold density above which star formation occurs, and for densities at or near the threshold density, the SFR is highly non-linear, leading to bursts of star formation. Skillman (1987) empirically determined this threshold for dwarf galaxies to be approximately 1 × 10²¹ cm⁻², at a linear resolution of 500 pc. During the course of our survey for HI companion clouds to HII galaxies, we obtained high-resolution HI observations of five nearby HII galaxies. HII galaxies are low in surface brightness, rich in HI, and contain one or a few high surface brightness knots whose optical spectra resemble those of HII regions. These knots are currently experiencing a burst of star formation. Following Kennicutt (1989), we determine the critical density for star formation in the galaxies and compare the predictions with radio and optical data.
A Near-Threshold Shape Resonance in the Valence-Shell Photoabsorption of Linear Alkynes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacovella, U.; Holland, D. M. P.; Boyé-Péronne, S.
2015-12-17
The room-temperature photoabsorption spectra of a number of linear alkynes with internal triple bonds (e.g., 2-butyne, 2-pentyne, and 2- and 3-hexyne) show similar resonances just above the lowest ionization threshold of the neutral molecules. These features result in a substantial enhancement of the photoabsorption cross sections relative to the cross sections of alkynes with terminal triple bonds (e.g., propyne, 1-butyne, 1-pentyne,...). Based on earlier work on 2-butyne [Xu et al., J. Chem. Phys. 2012, 136, 154303], these features are assigned to excitation from the neutral highest occupied molecular orbital (HOMO) to a shape resonance with g (l = 4) character and approximate pi symmetry. This generic behavior results from the similarity of the HOMOs in all internal alkynes, as well as the similarity of the corresponding g pi virtual orbital in the continuum. Theoretical calculations of the absorption spectrum above the ionization threshold for the 2- and 3-alkynes show the presence of a shape resonance when the coupling between the two degenerate or nearly degenerate pi channels is included, with a dominant contribution from l = 4. These calculations thus confirm the qualitative arguments for the importance of the l = 4 continuum near threshold for internal alkynes, which should also apply to other linear internal alkynes and alkynyl radicals. The 1-alkynes do not have such high partial waves present in the shape resonance. The lower l partial waves in these systems are consistent with the broader features observed in the corresponding spectra.
The Design of Optical Sensor for the Pinhole/Occulter Facility
NASA Technical Reports Server (NTRS)
Greene, Michael E.
1990-01-01
Three optical sight sensor systems were designed, built, and tested. Two optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the Sun. The system is for use with the Pinhole/Occulter Facility (P/OF), a solar hard x-ray experiment to be flown from the Space Shuttle or Space Station. The first sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track-and-hold circuitry for data reduction, an analog-to-digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. The second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and counting pixels until the threshold is surpassed. The third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. It consists of a white light source, a mirror, and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image, and hence the vibration of the structure, is calculated in the same way, by locating the pixel at which the threshold is surpassed.
Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks
NASA Astrophysics Data System (ADS)
Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto
2016-07-01
The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.
Visuomotor sensitivity to visual information about surface orientation.
Knill, David C; Kersten, Daniel
2004-03-01
We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues, and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.
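In the equal-variance Gaussian model, the d' index used here is the separation of two response distributions' means divided by their pooled standard deviation. A minimal sketch with made-up kinematic responses, not the study's data:

```python
import math

def d_prime(sample_a, sample_b):
    """d' between two response distributions: mean difference over the
    pooled standard deviation (equal-variance Gaussian assumption)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled = math.sqrt((var(sample_a) + var(sample_b)) / 2.0)
    return (mean(sample_b) - mean(sample_a)) / pooled

a = [10.0, 11.0, 9.0, 10.0]   # hypothetical responses to slant s
b = [13.0, 14.0, 12.0, 13.0]  # hypothetical responses to slant s + 5 deg
dp = d_prime(a, b)
```

A d' near 1 corresponds roughly to threshold-level discriminability, which is how a 5-degree slant difference translates into the 2-3 degree "thresholds" the abstract reports.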
Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.
O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E
2018-04-26
Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as the gold standard, errors in gap measurement were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Errors in ablation contact force map gap measurements were significantly smaller than those for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gaps in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.
Disturbances of rod threshold forced by briefly exposed luminous lines, edges, disks and annuli
Hallett, P. E.
1971-01-01
1. When the dark-adapted eye is exposed to a brief duration (2 msec) luminous line the resulting threshold disturbance is much sharper (decay constant of ca. 10 min arc) than would be expected in a system which is known to integrate the effects of light quanta over a distance of 1 deg or so. 2. When the forcing input is a pair of brief duration parallel luminous lines the threshold disturbance falls off sharply at the outsides of the pattern but on the inside a considerable spread of threshold-raising effects may occur unless the lines are sufficiently far apart. 3. The threshold disturbance due to a briefly exposed edge shows an overshoot reminiscent of 'lateral inhibition'. 4. If the threshold is measured at the centre of a black disk presented in a briefly lit surround then (a) the dependence of threshold on time interval between test and surround suggests that the threshold elevation is due to a non-optical effect which is not 'metacontrast'; (b) the dependence of threshold on black disk diameter is consistent with the notion that the spatial threshold disturbance is progressively sharpened as the separation of luminous edges increases. 5. If the threshold is measured at the centre of briefly exposed luminous disks of various diameters one obtains the same evidence for an 'antagonistic centre-surround' system as that produced by other workers (e.g. Westheimer, 1965) for the steadily light-adapted eye. 6. The previous paper (Hallett, 1971) showed that brief illumination of the otherwise dark-adapted eye can rapidly and substantially change the extent of spatial integration. The present paper shows that brief illumination leads to substantial 'inhibitory' effects. 7. Earlier approaches are reviewed: (a) the linear system signal/noise theory of the time course of threshold disturbances (Hallett, 1969b) is illustrated by the case of a small subtense flash superimposed on a large oscillatory background; (b) the spatial weighting functions of some other authors are given. 8. A possible non-linear model is briefly described: the line weighting function for the receptive field centre is taken to be a single Gaussian, as is customary, but the line weighting function for the inhibitory surround is bimodal. PMID:5145728
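Point 8's model can be sketched numerically. The snippet below builds a line weighting function with a single-Gaussian centre and a bimodal surround, here modeled as two displaced Gaussians; all parameter values are illustrative, not Hallett's fitted ones:

```python
import numpy as np

def line_weighting(x, sigma_c=2.0, sigma_s=4.0, d=6.0, k=0.4):
    """Centre: a single Gaussian. Surround: bimodal, modeled here as two
    Gaussians displaced +/- d (min arc). All parameters are illustrative."""
    centre = np.exp(-0.5 * (x / sigma_c) ** 2)
    surround = (np.exp(-0.5 * ((x - d) / sigma_s) ** 2)
                + np.exp(-0.5 * ((x + d) / sigma_s) ** 2))
    return centre - k * surround
```

The function is excitatory at the centre, symmetric, and turns inhibitory near the surround peaks, reproducing the 'antagonistic centre-surround' behaviour described in the abstract.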
The perception of FM sweeps by Chinese and English listeners.
Luo, Huan; Boemio, Anthony; Gordon, Michael; Poeppel, David
2007-02-01
Frequency-modulated (FM) signals are an integral acoustic component of ecologically natural sounds and are analyzed effectively in the auditory systems of humans and animals. Linearly frequency-modulated tone sweeps were used here to evaluate two questions. First, how rapid a sweep can listeners accurately perceive? Second, is there an effect of native language insofar as the language (phonology) is differentially associated with processing of FM signals? Speakers of English and Mandarin Chinese were tested to evaluate whether being a speaker of a tone language altered the perceptual identification of non-speech tone sweeps. In two psychophysical studies, we demonstrate that Chinese subjects perform better than English subjects in FM direction identification, but not in an FM discrimination task, in which English and Chinese speakers show similar detection thresholds of approximately 20 ms duration. We suggest that the better FM direction identification in Chinese subjects is related to their experience with FM direction analysis in the tone-language environment, even though supra-segmental tonal variation occurs over a longer time scale. Furthermore, the observed common discrimination temporal threshold across two language groups supports the conjecture that processing auditory signals at durations of approximately 20 ms constitutes a fundamental auditory perceptual threshold.
Reaction πN → ππN near threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frlez, Emil
1993-11-01
The LAMPF E1179 experiment used the π⁰ spectrometer and an array of charged particle range counters to detect and record π⁺π⁰, π⁰p, and π⁺π⁰p coincidences following the reaction π⁺p → π⁰π⁺p near threshold. The total cross sections for single pion production were measured at incident pion kinetic energies of 190, 200, 220, 240, and 260 MeV. Absolute normalizations were fixed by measuring π⁺p elastic scattering at 260 MeV. A detailed analysis of the π⁰ detection efficiency was performed using cosmic ray calibrations and pion single charge exchange measurements with a 30 MeV π⁻ beam. All published data on πN → ππN, including our results, are simultaneously fitted to yield a common chiral symmetry breaking parameter ξ = -0.25 ± 0.10. The threshold matrix element |α₀(π⁰π⁺p)| determined by linear extrapolation yields the value of the s-wave isospin-2 ππ scattering length α₀²(ππ) = -0.041 ± 0.003 m_π⁻¹, within the framework of soft-pion theory.
Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models
Rice, John D.; Taylor, Jeremy M. G.
2016-01-01
One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
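A minimal sketch of the idea, assuming a Gaussian kernel centred on the threshold and a single one-step reweighting; the paper's estimator solves the weighted score equations directly and selects the bandwidth by cross-validation, which is omitted here:

```python
import numpy as np

def weighted_logistic_fit(X1, y, w, n_iter=25):
    """Newton-Raphson maximization of a weighted logistic log-likelihood."""
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X1 @ beta)))
        grad = X1.T @ (w * (y - p))                      # weighted score
        H = (X1 * (w * p * (1.0 - p))[:, None]).T @ X1   # weighted information
        beta = beta + np.linalg.solve(H, grad)
    return beta

def threshold_weighted_fit(X1, y, threshold=0.5, bandwidth=0.2):
    """One-step locally weighted fit: observations are reweighted by how close
    their initially fitted probability lies to the classification threshold."""
    beta0 = weighted_logistic_fit(X1, y, np.ones(len(y)))
    p_hat = 1.0 / (1.0 + np.exp(-(X1 @ beta0)))
    w = np.exp(-0.5 * ((p_hat - threshold) / bandwidth) ** 2)  # Gaussian kernel
    return weighted_logistic_fit(X1, y, w)

# Synthetic logistic data with true intercept -0.5 and slope 2.0
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
X1 = np.column_stack([np.ones(2000), x])
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 2.0 * x)))
y = (rng.random(2000) < p_true).astype(float)
beta_hat = threshold_weighted_fit(X1, y, threshold=0.5, bandwidth=0.2)
```

The reweighting concentrates the fit on observations near the decision boundary, which is the intuition behind the paper's method.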
Ecosystem services response to urbanization in metropolitan areas: Thresholds identification.
Peng, Jian; Tian, Lu; Liu, Yanxu; Zhao, Mingyue; Hu, Yi'na; Wu, Jiansheng
2017-12-31
Ecosystem service is the key comprehensive indicator for measuring the ecological effects of urbanization. Although various studies have found a causal relationship between urbanization and ecosystem services degradation, the linear or non-linear characteristics are still unclear, especially identifying the impact thresholds in this relationship. This study quantified four ecosystem services (i.e. soil conservation, carbon sequestration and oxygen production, water yield, and food production) and total ecosystem services (TES), and then identified multiple advantageous areas of ecosystem services in the peri-urban area of Beijing City. Using piecewise linear regression, the response of TES to urbanization (i.e., population density, GDP density, and construction land proportion) and its thresholds were detected. The results showed that the TES was high in the north and west and low in the southeast, and there were seven multiple advantageous areas (distributed in the new urban development zone and ecological conservation zone), one single advantageous area (distributed in the ecological conservation zone), and six disadvantageous areas (mainly distributed in the urban function extended zone). TES response to population and economic urbanization each had a threshold (229 persons km⁻² and 107.15 million yuan km⁻², respectively), above which TES decreased rapidly with intensifying urbanization. However, there was a negative linear relationship between land urbanization and TES, which indicated that the impact of land urbanization on ecosystem services was more direct and effective than that of population and economic urbanization. It was also found that the negative impact of urbanization on TES was highest in the urban function extended zone, followed in descending order by that in the new urban development zone and ecological conservation zone.
According to the detected relationships between urbanization and TES, economic and population urbanization should be promoted while land urbanization is slowed or even reduced, so as to achieve urban ecological sustainability with less ecosystem services degradation. Copyright © 2017 Elsevier B.V. All rights reserved.
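The threshold-detection step can be sketched with a simple broken-stick regression: fit y = a + b·x + c·max(x − t, 0) over a grid of candidate breakpoints t and keep the best fit. This is a generic piecewise linear regression on synthetic data, not the authors' exact specification:

```python
import numpy as np

def piecewise_threshold(x, y, n_grid=100):
    """Broken-stick fit y = a + b*x + c*max(x - t, 0): grid-search the
    breakpoint t minimizing the residual sum of squares."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    best = (np.inf, None)
    for t in np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), n_grid):
        A = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((A @ coef - y) ** 2)
        if sse < best[0]:
            best = (sse, t)
    return best[1]

# Synthetic "urbanization vs. TES": flat below a threshold of 229, then declining
rng = np.random.default_rng(2)
dens = rng.uniform(0, 500, 400)
tes = 10.0 - 0.05 * np.maximum(dens - 229.0, 0.0) + rng.normal(0.0, 0.3, 400)
t_hat = piecewise_threshold(dens, tes)
```

The recovered breakpoint plays the role of the 229 persons km⁻² population-urbanization threshold reported above.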
NASA Astrophysics Data System (ADS)
Sulistyo, Bambang
2016-11-01
The research was aimed at studying the effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion model of the USLE using remote sensing data and GIS techniques. All factors affecting erosion were analysed in raster form: the R, K, LS, C, and P factors. The monthly R factor was evaluated using the formula developed by Abdurachman. The K factor was determined using a modified formula of the Ministry of Forestry, based on soil samples taken in the field. The LS factor was derived from a Digital Elevation Model. The three C factors used were all derived from NDVI, as developed by Suriyaprasit (non-linear) and by Sulistyo (linear and non-linear). The P factor was derived from the combination of slope data and land-cover classification interpreted from Landsat 7 ETM+. A map of bulk density was also created to convert erosion units. Model validation was performed by statistical analysis comparing Emodel with Eactual, with a threshold value of ≥ 0.80 (≥ 80%) chosen as the criterion. The results showed that Emodel had a correlation coefficient > 0.8 for all three C factor formulae. Analysis of variance showed significant differences between Emodel and Eactual when using the C factor formulae developed by Suriyaprasit and by Sulistyo (non-linear). Among the three formulae, only Emodel using the C factor formula developed by Sulistyo (linear) reached an accuracy of 81.13%; the others reached only 56.02% (Sulistyo, non-linear) and 4.70% (Suriyaprasit).
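A minimal raster sketch of the USLE step described above, with small NumPy arrays standing in for the raster layers. The linear NDVI-to-C coefficients are placeholders, not Sulistyo's published values:

```python
import numpy as np

def c_factor_linear(ndvi, a=0.805, b=-0.813):
    """Hypothetical linear NDVI-to-C mapping (placeholder coefficients, not
    Sulistyo's published ones), clipped to the physically valid [0, 1] range."""
    return np.clip(a + b * ndvi, 0.0, 1.0)

def usle_erosion(R, K, LS, C, P):
    """Cell-by-cell USLE: A = R * K * LS * C * P."""
    return R * K * LS * C * P

# Tiny 2x2 stand-ins for the raster layers
ndvi = np.array([[0.2, 0.5], [0.7, 0.9]])
C = c_factor_linear(ndvi)
R = np.full((2, 2), 1200.0)              # rainfall erosivity
K = np.full((2, 2), 0.2)                 # soil erodibility
LS = np.array([[0.5, 1.2], [2.0, 3.5]])  # slope length-steepness
P = np.full((2, 2), 0.8)                 # conservation practice
A = usle_erosion(R, K, LS, C, P)
```

Because every factor is a co-registered raster, the whole model reduces to an elementwise product, which is what "fully raster-based" means here.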
The Management Standards Indicator Tool and evaluation of burnout.
Ravalier, J M; McVicar, A; Munn-Giddings, C
2013-03-01
Psychosocial hazards in the workplace can impact upon employee health. The UK Health and Safety Executive's (HSE) Management Standards Indicator Tool (MSIT) appears to have utility in relation to health impacts but we were unable to find studies relating it to burnout. To explore the utility of the MSIT in evaluating risk of burnout assessed by the Maslach Burnout Inventory-General Survey (MBI-GS). This was a cross-sectional survey of 128 borough council employees. MSIT data were analysed according to MSIT and MBI-GS threshold scores and by using multivariate linear regression with MBI-GS factors as dependent variables. MSIT factor scores were gradated according to categories of risk of burnout according to published MBI-GS thresholds, and identified priority workplace concerns as demands, relationships, role and change. These factors also featured as significant independent variables, with control, in outcomes of the regression analysis. Exhaustion was associated with demands and control (adjusted R² = 0.331); cynicism was associated with change, role and demands (adjusted R² = 0.429); and professional efficacy was associated with managerial support, role, control and demands (adjusted R² = 0.413). MSIT analysis generally has congruence with MBI-GS assessment of burnout. The identification of control within regression models but not as a priority concern in the MSIT analysis could suggest an issue of the setting of the MSIT thresholds for this factor, but verification requires a much larger study. Incorporation of relationship, role and change into the MSIT, missing from other conventional tools, appeared to add to its validity.
Santos, Frédéric; Guyomarc'h, Pierre; Bruzek, Jaroslav
2014-12-01
The accuracy of identification tools in forensic anthropology relies primarily upon the variation inherent in the data upon which they are built. Sex determination methods based on craniometrics are widely used and known to be sensitive to several factors (e.g. sample distribution, population, age, secular trends, measurement technique, etc.). The goal of this study is to discuss the potential variations linked to the statistical treatment of the data. Traditional craniometrics of four samples extracted from documented osteological collections (from Portugal, France, the U.S.A., and Thailand) were used to test three different classification methods: linear discriminant analysis (LDA), logistic regression (LR), and support vector machines (SVM). The Portuguese sample was used as a training set to which the other samples were applied in order to assess the validity and reliability of the different models. The tests were performed using different parameters: some included the selection of the best predictors; some included a strict decision threshold (sex assessed only if the related posterior probability was high, including the notion of indeterminate result); and some used an unbalanced sex-ratio. Results indicated that LR tends to perform slightly better than the other techniques and offers a better selection of predictors. Also, the use of a decision threshold (i.e. p>0.95) is essential to ensure an acceptable reliability of sex determination methods based on craniometrics. Although the Portuguese, French, and American samples share a similar sexual dimorphism, application of Western models on the Thai sample (which displayed a lower degree of dimorphism) was unsuccessful. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Effects of urbanization on benthic macroinvertebrate communities in streams, Anchorage, Alaska
Ourso, Robert T.
2001-01-01
The effect of urbanization on stream macroinvertebrate communities was examined by using data gathered during a 1999 reconnaissance of 14 sites in the Municipality of Anchorage, Alaska. Data collected included macroinvertebrate abundance, water chemistry, and trace elements in bed sediments. Macroinvertebrate relative-abundance data were edited and used in metric and index calculations. Population density was used as a surrogate for urbanization. Cluster analysis (unweighted pair-group method with arithmetic means) of macroinvertebrate presence-absence data showed a well-defined separation between urbanized and nonurbanized sites, and also identified sites that did not fall cleanly into either category. Water quality in Anchorage generally declined with increasing urbanization (population density). Of 59 variables examined, 31 correlated with urbanization. Local regression analysis extracted 11 variables that showed a significant impairment threshold response and 6 that showed a significant linear response. Significant biological variables for determining the impairment threshold in this study were the Margalef diversity index, Ephemeroptera-Plecoptera-Trichoptera taxa richness, and total taxa richness. Significant thresholds were observed in the water-chemistry variables conductivity, dissolved organic carbon, potassium, and total dissolved solids. Significant thresholds in trace elements in bed sediments included arsenic, iron, manganese, and lead. Results suggest that sites in Anchorage that have ratios of population density to road density greater than 70, storm-drain densities greater than 0.45 miles per square mile, road densities greater than 4 miles per square mile, or population densities greater than 125-150 persons per square mile may require further monitoring to determine if the stream has become impaired. This population density is far less than the 1,000 persons per square mile used by the U.S. Census Bureau to define an urban area.
The influence of thresholds on the risk assessment of carcinogens in food.
Pratt, Iona; Barlow, Susan; Kleiner, Juliane; Larsen, John Christian
2009-08-01
The risks from exposure to chemical contaminants in food must be scientifically assessed, in order to safeguard the health of consumers. Risk assessment of chemical contaminants that are both genotoxic and carcinogenic presents particular difficulties, since the effects of such substances are normally regarded as being without a threshold. No safe level can therefore be defined, and this has implications for both risk management and risk communication. Risk management of these substances in food has traditionally involved application of the ALARA (As Low As Reasonably Achievable) principle; however, ALARA does not enable risk managers to assess the urgency and extent of the risk reduction measures needed. A more refined approach is needed, and several such approaches have been developed. Low-dose linear extrapolation from animal carcinogenicity studies or epidemiological studies to estimate risks for humans at low exposure levels has been applied by a number of regulatory bodies, while more recently the Margin of Exposure (MOE) approach has been applied by both the European Food Safety Authority and the Joint FAO/WHO Expert Committee on Food Additives. A further approach is the Threshold of Toxicological Concern (TTC), which establishes exposure thresholds for chemicals present in food, dependent on structure. Recent experimental evidence that genotoxic responses may be thresholded has significant implications for the risk assessment of chemicals that are both genotoxic and carcinogenic. In relation to existing approaches such as linear extrapolation, MOE and TTC, the existence of a threshold reduces the uncertainties inherent in such methodology and improves confidence in the risk assessment.
However, for the foreseeable future, regulatory decisions based on the concept of thresholds for genotoxic carcinogens are likely to be taken case-by-case, based on convincing data on the Mode of Action indicating that the rate limiting variable for the development of cancer lies on a critical pathway that is thresholded.
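The Margin of Exposure calculation mentioned above is itself a one-line ratio: a reference point from dose-response data (e.g. a BMDL10) divided by the estimated human exposure. EFSA has indicated that an MOE of 10,000 or more, based on a BMDL10 from an animal study, would be of low concern. The numbers below are purely illustrative:

```python
def margin_of_exposure(reference_point, exposure):
    """MOE = toxicological reference point (e.g. BMDL10, mg/kg bw/day)
    divided by estimated human exposure (same units)."""
    return reference_point / exposure

# e.g. an illustrative BMDL10 of 0.1 mg/kg bw/day against an estimated
# exposure of 0.00001 mg/kg bw/day; EFSA treats MOE >= 10,000 as low concern
moe = margin_of_exposure(0.1, 0.00001)
```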
The Uncertainty of Long-term Linear Trend in Global SST Due to Internal Variation
NASA Astrophysics Data System (ADS)
Lian, Tao
2016-04-01
In most parts of the global ocean, the magnitude of the long-term linear trend in sea surface temperature (SST) is much smaller than the amplitude of local multi-scale internal variation. One can thus use the record of a specified period to arbitrarily determine the value and the sign of the long-term linear trend in regional SST, leading to controversial conclusions on how global SST has responded to global warming in recent history. Analysis of the linear trend coefficient estimated by the ordinary least-squares method indicates that the linear trend consists of two parts: one related to the long-term change, and the other related to the multi-scale internal variation. The sign of the long-term change can be correctly reproduced only when the magnitude of the linear trend coefficient is greater than a theoretical threshold which scales the influence from the multi-scale internal variation. Otherwise, the sign of the linear trend coefficient will depend on the phase of the internal variation, or in other words, on the period being used. An improved least-squares method is then proposed to reduce the theoretical threshold. When applying the new method to a global SST reconstruction from 1881 to 2013, we find that in a large part of the Pacific, the southern Indian Ocean, and the North Atlantic, the influence of the multi-scale internal variation on the sign of the linear trend coefficient cannot be excluded. Therefore, the resulting warming and/or cooling linear trends in these regions cannot be fully attributed to global warming.
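The core idea, that an estimated trend is trustworthy only when it exceeds a threshold set by internal variability, can be sketched as follows. Here the multi-scale internal variation is crudely stood in for by white noise and the threshold by k standard errors of the slope; the paper's theoretical threshold is derived differently:

```python
import numpy as np

def trend_and_threshold(y, k=2.0):
    """OLS linear trend of series y, plus a rough magnitude threshold on the
    slope: k times the standard error induced by the detrended residual
    (a crude stand-in for multi-scale internal variation)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    tc = t - t.mean()
    slope = np.sum(tc * (y - y.mean())) / np.sum(tc ** 2)
    resid = y - y.mean() - slope * tc
    se_slope = resid.std(ddof=2) / np.sqrt(np.sum(tc ** 2))
    return slope, k * se_slope

# 133 "years" (cf. 1881-2013) with a 0.01/yr trend plus white internal noise
rng = np.random.default_rng(3)
yrs = np.arange(133, dtype=float)
sst = 0.01 * yrs + rng.normal(0.0, 0.1, 133)
slope, thresh = trend_and_threshold(sst)
```

When |slope| falls below the threshold, the sign of the fitted trend says more about the phase of the internal variation than about any long-term change.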
Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation
NASA Astrophysics Data System (ADS)
Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong
2018-04-01
Smeared Spectrum (SMSP) jamming is effective in countering linear frequency modulation (LFM) radar. Exploiting the difference in time-frequency distribution between the jamming and the echo, a jamming suppression method based on the Generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Then the time-frequency image and the related gray-scale image are obtained with the GST. Finally, the Tsallis cross entropy is used to compute the optimal segmentation threshold, from which the jamming suppression filter is constructed. The simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP jamming.
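A sketch of an entropic threshold-segmentation step in the spirit of the method: it maximizes the pseudo-additive Tsallis entropy sum of the two classes on a synthetic bimodal gray-scale histogram. The paper's exact cross-entropy criterion and GST front end are not reproduced here:

```python
import numpy as np

def tsallis_threshold(gray, q=0.8, levels=256):
    """Return the gray level maximizing the pseudo-additive Tsallis entropy
    sum S_A + S_B + (1-q)*S_A*S_B of the two classes split at that level."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_s = 0, -np.inf
    for t in range(1, levels - 1):
        pa, pb = cdf[t], 1.0 - cdf[t]
        if pa <= 0.0 or pb <= 0.0:
            continue
        sa = (1.0 - np.sum((p[: t + 1] / pa) ** q)) / (q - 1.0)
        sb = (1.0 - np.sum((p[t + 1 :] / pb) ** q)) / (q - 1.0)
        s = sa + sb + (1.0 - q) * sa * sb
        if s > best_s:
            best_s, best_t = s, t
    return best_t

# Bimodal "time-frequency" gray-scale values: echo near 50, jamming near 200
rng = np.random.default_rng(4)
gray = np.clip(np.concatenate([rng.normal(50, 10, 5000),
                               rng.normal(200, 10, 5000)]), 0, 255)
t_opt = tsallis_threshold(gray)
```

For well-separated modes the optimum lands in the valley between them, which is what lets the segmentation isolate the jamming component.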
Temperature dependence of threshold current in GaAs/AlGaAs quantum well lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blood, P.; Colak, S.; Kucharska, A.I.
1988-02-22
We have calculated the threshold current and its temperature (T) dependence in the range 200-400 K for AlGaAs quantum well lasers with 25-Å-wide GaAs wells using a model which includes lifetime broadening of the transitions and broadening of the density of states function by fluctuations in the well width. The threshold current varies approximately linearly with T, and the principal effect of broadening is to increase the threshold current, causing a reduction in the fractional change of current with temperature. The apparent value of the parameter T₀ is increased to ≈400 K, compared with ≈320 K without broadening. The calculations are compared with experimental data.
Świąder, Mariusz J; Paruszewski, Ryszard; Łuszczki, Jarogniew J
2016-04-01
The aim of this study was to assess the anticonvulsant potency of 6 benzylamide derivatives [i.e., nicotinic acid benzylamide (Nic-BZA), picolinic acid 2-fluoro-benzylamide (2F-Pic-BZA), picolinic acid benzylamide (Pic-BZA), (RS)-methyl-alanine-benzylamide (Me-Ala-BZA), isonicotinic acid benzylamide (Iso-Nic-BZA), and (R)-N-methyl-proline-benzylamide (Me-Pro-BZA)] in the threshold for maximal electroshock (MEST)-induced seizures in mice. Electroconvulsions (seizure activity) were produced in mice by means of a current (sine-wave, 50 Hz, 500 V, strength from 4 to 18 mA, ear-clip electrodes, 0.2-s stimulus duration, tonic hindlimb extension taken as the endpoint). Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA administered systemically (ip) increased the threshold for maximal electroconvulsions in mice in a dose-dependent manner. Linear regression analysis of the doses and their corresponding threshold increases allowed determination of the doses increasing the threshold by 20% (TID20 values) over that in control animals. The experimentally derived TID20 values in the MEST test for Nic-BZA, 2F-Pic-BZA, Pic-BZA, Me-Ala-BZA, Iso-Nic-BZA, and Me-Pro-BZA were 7.45 mg/kg, 7.72 mg/kg, 8.74 mg/kg, 15.11 mg/kg, 21.95 mg/kg, and 28.06 mg/kg, respectively. The studied benzylamide derivatives can thus be arranged with respect to their anticonvulsant potency in the MEST test as follows: Nic-BZA > 2F-Pic-BZA > Pic-BZA > Me-Ala-BZA > Iso-Nic-BZA > Me-Pro-BZA. Copyright © 2015 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
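The TID20 calculation is a straightforward inversion of the fitted dose-response line. A sketch with hypothetical, exactly linear dose-response data (not the study's measurements):

```python
import numpy as np

def tid20(doses, pct_increase):
    """Fit percent threshold increase as a linear function of dose and invert
    the line to get the dose producing a 20% increase (TID20)."""
    slope, intercept = np.polyfit(doses, pct_increase, 1)
    return (20.0 - intercept) / slope

# Hypothetical, exactly linear dose-response data (mg/kg vs. % increase)
doses = np.array([2.5, 5.0, 10.0, 15.0])
increase = 1.0 + 2.5 * doses   # so TID20 = (20 - 1)/2.5 = 7.6 mg/kg
tid = tid20(doses, increase)
```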
Fracture mechanics concepts in reliability analysis of monolithic ceramics
NASA Technical Reports Server (NTRS)
Manderscheid, Jane M.; Gyekenyesi, John P.
1987-01-01
Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation requires that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine statistical parameters which describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
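The reliability model described above reduces to evaluating a Weibull failure probability; with the threshold strength (the third parameter) set to zero it collapses to the two-parameter form. A sketch with illustrative parameter values, not material data:

```python
import numpy as np

def weibull_failure_prob(stress, sigma0, m, sigma_u=0.0):
    """Weibull failure probability Pf = 1 - exp(-((s - su)/s0)^m) for s > su,
    else 0. sigma_u is the threshold strength (third Weibull parameter);
    sigma_u = 0 recovers the two-parameter form often used for simplicity."""
    z = np.clip(np.asarray(stress, dtype=float) - sigma_u, 0.0, None) / sigma0
    return 1.0 - np.exp(-(z ** m))
```

Below the threshold strength the failure probability is exactly zero, which is why treating sigma_u as zero is the conservative simplification the abstract mentions.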
The three-dimensional structure of cumulus clouds over the ocean. 1: Structural analysis
NASA Technical Reports Server (NTRS)
Kuo, Kwo-Sen; Welch, Ronald M.; Weger, Ronald C.; Engelstad, Mark A.; Sengupta, S. K.
1993-01-01
Thermal channel (channel 6, 10.4-12.5 micrometers) images of five Landsat thematic mapper cumulus scenes over the ocean are examined. These images are thresholded using the standard International Satellite Cloud Climatology Project (ISCCP) thermal threshold algorithm. The individual clouds in the cloud fields are segmented to obtain their structural statistics which include size distribution, orientation angle, horizontal aspect ratio, and perimeter-to-area (PtA) relationship. The cloud size distributions exhibit a double power law with the smaller clouds having a smaller absolute exponent. The cloud orientation angles, horizontal aspect ratios, and PtA exponents are found in good agreement with earlier studies. A technique also is developed to recognize individual cells within a cloud so that statistics of cloud cellular structure can be obtained. Cell structural statistics are computed for each cloud. Unicellular clouds are generally smaller (less than or equal to 1 km) and have smaller PtA exponents, while multicellular clouds are larger (greater than or equal to 1 km) and have larger PtA exponents. Cell structural statistics are similar to those of the smaller clouds. When each cell is approximated as a quadric surface using a linear least squares fit, most cells have the shape of a hyperboloid of one sheet, but about 15% of the cells are best modeled by a hyperboloid of two sheets. Less than 1% of the clouds are ellipsoidal. The number of cells in a cloud increases slightly faster than linearly with increasing cloud size. The mean nearest neighbor distance between cells in a cloud, however, appears to increase linearly with increasing cloud size and to reach a maximum when the cloud effective diameter is about 10 km; then it decreases with increasing cloud size. Sensitivity studies of threshold and lapse rate show that neither has a significant impact upon the results. 
A goodness-of-fit ratio is used to provide a quantitative measure of the individual cloud results. Significantly improved results are obtained after applying a smoothing operator, suggesting that eliminating subresolution-scale variations with higher spatial resolution may yield even better shape analyses.
Sex differences in the fetal heart rate variability indices of twins.
Tendais, Iva; Figueiredo, Bárbara; Gonçalves, Hernâni; Bernardes, João; Ayres-de-Campos, Diogo; Montenegro, Nuno
2015-03-01
To evaluate the differences in linear and complex heart rate dynamics in twin pairs according to fetal sex combination [male-female (MF), male-male (MM), and female-female (FF)]. Fourteen twin pairs (6 MF, 3 MM, and 5 FF) were monitored between 31 and 36.4 weeks of gestation. Twenty-six fetal heart rate (FHR) recordings of both twins were simultaneously acquired and analyzed with a system for computerized analysis of cardiotocograms. Linear and nonlinear FHR indices were calculated. Overall, MM twins presented higher intrapair average in linear indices than the other pairs, whereas FF twins showed higher sympathetic-vagal balance. MF twins exhibited higher intrapair average in entropy indices and MM twins presented lower entropy values than FF twins considering the (automatically selected) threshold rLu. MM twin pairs showed higher intrapair differences in linear heart rate indices than MF and FF twins, whereas FF twins exhibited lower intrapair differences in entropy indices. The results of this exploratory study suggest that twins have sex-specific differences in linear and nonlinear indices of FHR. MM twins expressed signs of a more active autonomic nervous system and MF twins showed the most active complexity control system. These results suggest that fetal sex combination should be taken into consideration when performing detailed evaluation of the FHR in twins.
Alcohol outlet density and assault: a spatial analysis.
Livingston, Michael
2008-04-01
A large number of studies have found links between alcohol outlet densities and assault rates in local areas. This study tests a variety of specifications of this link, focusing in particular on the possibility of a non-linear relationship. Cross-sectional data on police-recorded assaults during high alcohol hours, liquor outlets and socio-demographic characteristics were obtained for 223 postcodes in Melbourne, Australia. These data were used to construct a series of models testing the nature of the relationship between alcohol outlet density and assault, while controlling for socio-demographic factors and spatial auto-correlation. Four types of relationship were examined: a normal linear relationship between outlet density and assault, a non-linear relationship with potential threshold or saturation densities, a relationship mediated by the socio-economic status of the neighbourhood and a relationship which takes into account the effect of outlets in surrounding neighbourhoods. The model positing non-linear relationships between outlet density and assaults was found to fit the data most effectively. An increasing accelerating effect for the density of hotel (pub) licences was found, suggesting a plausible upper limit for these licences in Melbourne postcodes. The study finds positive relationships between outlet density and assault rates and provides evidence that this relationship is non-linear and thus has critical values at which licensing policy-makers can impose density limits.
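Model comparison of the kind described, linear versus non-linear (here a quadratic term stands in for the accelerating effect), can be sketched with a simple AIC contrast under a Gaussian error approximation. The study's actual spatial models control for demographics and spatial autocorrelation, which this omits:

```python
import numpy as np

def compare_linear_quadratic(x, y):
    """AIC (Gaussian approximation) for linear vs. quadratic least-squares
    fits of y on x; lower AIC indicates the better-supported model."""
    n = len(x)
    aic = {}
    for deg in (1, 2):
        coef = np.polyfit(x, y, deg)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        k = deg + 2  # polynomial coefficients plus the error variance
        aic[deg] = n * np.log(rss / n) + 2 * k
    return aic

# Synthetic "outlet density vs. assaults" with an accelerating (quadratic) effect
rng = np.random.default_rng(5)
density = rng.uniform(0, 10, 300)
assaults = 2.0 + 0.3 * density ** 2 + rng.normal(0.0, 1.0, 300)
aic = compare_linear_quadratic(density, assaults)
```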
Threshold and Beyond: Modeling The Intensity Dependence of Auditory Responses
2007-01-01
In many studies of auditory-evoked responses to low-intensity sounds, the response amplitude appears to increase roughly linearly with the sound level in decibels (dB), corresponding to a logarithmic intensity dependence. But the auditory system is assumed to be linear in the low-intensity limit. The goal of this study was to resolve the seeming contradiction. Based on assumptions about the rate-intensity functions of single auditory-nerve fibers and the pattern of cochlear excitation caused by a tone, a model for the gross response of the population of auditory nerve fibers was developed. In accordance with signal detection theory, the model denies the existence of a threshold. This implies that regarding the detection of a significant stimulus-related effect, a reduction in sound intensity can always be compensated for by increasing the measurement time, at least in theory. The model suggests that the gross response is proportional to intensity when the latter is low (range I), and a linear function of sound level at higher intensities (range III). For intensities in between, it is concluded that noisy experimental data may provide seemingly irrefutable evidence of a linear dependence on sound pressure (range II). In view of the small response amplitudes that are to be expected for intensity range I, direct observation of the predicted proportionality with intensity will generally be a challenging task for an experimenter. Although the model was developed for the auditory nerve, the basic conclusions are probably valid for higher levels of the auditory system, too, and might help to improve models for loudness at threshold. PMID:18008105
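The two limiting behaviours described, response proportional to intensity at low levels (range I) and linear in sound level in dB at high levels (range III), are both captured by a saturating-log toy function. This is an illustration of the two regimes only, not the paper's auditory-nerve population model:

```python
import numpy as np

def gross_response(intensity, i0=1.0, a=1.0):
    """Toy gross-response function R = a*log(1 + I/I0): proportional to I for
    I << I0 (range I) and linear in level (dB) for I >> I0 (range III)."""
    return a * np.log1p(intensity / i0)
```

At low intensity, halving I halves R; at high intensity, equal dB steps produce equal increments in R, matching the two asymptotic regimes in the abstract.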
A Linearized Model for Flicker and Contrast Thresholds at Various Retinal Illuminances
NASA Technical Reports Server (NTRS)
Ahumada, Albert; Watson, Andrew
2015-01-01
We previously proposed a flicker visibility metric for bright displays, based on psychophysical data collected at a high mean luminance. Here we extend the metric to other mean luminances. This extension relies on a linear relation between log sensitivity and critical fusion frequency, and a linear relation between critical fusion frequency and log retinal illuminance. Consistent with our previous metric, the extended flicker visibility metric is measured in just-noticeable differences (JNDs).
Contrast effects on speed perception for linear and radial motion.
Champion, Rebecca A; Warren, Paul A
2017-11-01
Speed perception is vital for safe activity in the environment. However, considerable evidence suggests that perceived speed changes as a function of stimulus contrast, with some investigators suggesting that this might have meaningful real-world consequences (e.g. driving in fog). In the present study we investigate whether the neural effects of contrast on speed perception occur at the level of local or global motion processing. To do this we examine both speed discrimination thresholds and contrast-dependent speed perception for two global motion configurations that have matched local spatio-temporal structure. Specifically we compare linear and radial configurations, the latter of which arises very commonly due to self-movement. In experiment 1 the stimuli comprised circular grating patches. In experiment 2, to match stimuli even more closely, motion was presented in multiple local Gabor patches equidistant from central fixation. Each patch contained identical linear motion but the global configuration was either consistent with linear or radial motion. In both experiments 1 and 2, discrimination thresholds and contrast-induced speed biases were similar in linear and radial conditions. These results suggest that contrast-based speed effects occur only at the level of local motion processing, irrespective of global structure. This result is interpreted in the context of previous models of speed perception and evidence suggesting differences in perceived speed of locally matched linear and radial stimuli. Copyright © 2017 Elsevier Ltd. All rights reserved.
Threshold law for electron-atom impact ionization
NASA Technical Reports Server (NTRS)
Temkin, A.
1982-01-01
A derivation of the explicit form of the threshold law for electron impact ionization of atoms is presented, based on the Coulomb-dipole theory. The important generalization is made of using a dipole function whose moment is the dipole moment formed by an inner electron and the nucleus. The result is a modulated quasi-linear law for the yield of positive ions which applies to positron-atom impact ionization.
Thresholding of auditory cortical representation by background noise
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
2014-01-01
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
Costantini, Raffaele; Affaitati, Giannapia; Massimini, Francesca; Tana, Claudio; Innocenti, Paolo; Giamberardino, Maria Adele
2016-01-01
Fibromyalgia, a chronic syndrome of diffuse musculoskeletal pain and somatic hyperalgesia from central sensitization, is very often comorbid with visceral pain conditions. In fibromyalgia patients with gallbladder calculosis, this study assessed the short- and long-term impact of laparoscopic cholecystectomy on fibromyalgia pain symptoms. Fibromyalgia pain (VAS scale) and pain thresholds in tender points and control areas (skin, subcutis and muscle) were evaluated 1 week before (basis) and 1 week, 1, 3, 6 and 12 months after laparoscopic cholecystectomy in fibromyalgia patients with symptomatic calculosis (n = 31) vs calculosis patients without fibromyalgia (n = 26), and at comparable time points in fibromyalgia patients not undergoing cholecystectomy, with symptomatic (n = 27) and asymptomatic (n = 28) calculosis, and no calculosis (n = 30). At basis, fibromyalgia + symptomatic calculosis patients presented a significant linear correlation between the number of previously experienced biliary colics and fibromyalgia pain (direct) and muscle thresholds (inverse) (p < 0.0001). After cholecystectomy, fibromyalgia pain significantly increased and all thresholds significantly decreased at 1 week and 1 month (1-way ANOVA, p < 0.01-p < 0.001), the decrease in muscle thresholds correlating linearly with the peak postoperative pain at the surgery site (p < 0.003-p < 0.0001). Fibromyalgia pain and thresholds returned to preoperative values at 3 months, then pain significantly decreased and thresholds significantly increased at 6 and 12 months (p < 0.05-p < 0.0001). Over the same 12-month period: in non-fibromyalgia patients undergoing cholecystectomy thresholds did not change; in all other fibromyalgia groups not undergoing cholecystectomy fibromyalgia pain and thresholds remained stable, except in fibromyalgia + symptomatic calculosis at 12 months, when pain significantly increased and muscle thresholds significantly decreased (p < 0.05-p < 0.0001).
The results of the study show that biliary colics from gallbladder calculosis represent an exacerbating factor for fibromyalgia symptoms and that laparoscopic cholecystectomy produces only a transitory worsening of these symptoms, largely compensated by the long-term improvement/desensitization due to gallbladder removal. This study provides new insights into the role of visceral pain comorbidities and the effects of their treatment on fibromyalgia pain/hypersensitivity. PMID:27081848
A Theoretical and Experimental Analysis of the Outside World Perception Process
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1978-01-01
The outside scene is often an important source of information for manual control tasks. Important examples of these are car driving and aircraft control. This paper deals with modelling this visual scene perception process on the basis of linear perspective geometry and the relative motion cues. Model predictions utilizing psychophysical threshold data from base-line experiments and literature of a variety of visual approach tasks are compared with experimental data. Both the performance and workload results illustrate that the model provides a meaningful description of the outside world perception process, with a useful predictive capability.
Passive decoy-state quantum key distribution with practical light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curty, Marcos; Ma, Xiongfeng; Qi, Bing
2010-02-15
Decoy states have been proven to be a very useful method for significantly enhancing the performance of quantum key distribution systems with practical light sources. Although active modulation of the intensity of the laser pulses is an effective way of preparing decoy states in principle, in practice passive preparation might be desirable in some scenarios. Typical passive schemes involve parametric down-conversion. More recently, it has been shown that phase-randomized weak coherent pulses (WCP) can also be used for the same purpose [M. Curty et al., Opt. Lett. 34, 3238 (2009)]. This proposal requires only linear optics together with a simple threshold photon detector, which shows the practical feasibility of the method. Most importantly, the resulting secret key rate is comparable to the one delivered by an active decoy-state setup with an infinite number of decoy settings. In this article we extend these results, now showing specifically the analysis for other practical scenarios with different light sources and photodetectors. In particular, we consider sources emitting thermal states, phase-randomized WCP, and strong coherent light in combination with several types of photodetectors, like, for instance, threshold photon detectors, photon number resolving detectors, and classical photodetectors. Our analysis includes as well the effect that detection inefficiencies and noise in the form of dark counts shown by current threshold detectors might have on the final secret key rate. Moreover, we provide estimations on the effects that statistical fluctuations due to a finite data size can have in practical implementations.
Kim, Jimyung; Delfyett, Peter J
2009-12-07
The spectral dependence of the linewidth enhancement factor above threshold is experimentally observed from a quantum dot Fabry-Pérot semiconductor laser. The linewidth enhancement factor is found to be reduced when the quantum dot laser operates approximately 10 nm offset to either side of the gain peak. It becomes significantly reduced on the anti-Stokes side as compared to the Stokes side. It is also found that the temporal duration of the optical pulses generated from quantum dot mode-locked lasers is shorter when the laser operates away from the gain peak. In addition, less linear chirp is impressed on the pulse train generated from the anti-Stokes side whereas the pulses generated from the gain peak and Stokes side possess a large linear chirp. These experimental results imply that enhanced performance characteristics of quantum dot lasers can be achieved by operating on the anti-Stokes side, approximately 10 nm away from the gain peak.
Skin cancer incidence among atomic bomb survivors from 1958 to 1996.
Sugiyama, Hiromi; Misumi, Munechika; Kishikawa, Masao; Iseki, Masachika; Yonehara, Shuji; Hayashi, Tomayoshi; Soda, Midori; Tokuoka, Shoji; Shimizu, Yukiko; Sakata, Ritsu; Grant, Eric J; Kasagi, Fumiyoshi; Mabuchi, Kiyohiko; Suyama, Akihiko; Ozasa, Kotaro
2014-05-01
We evaluated the radiation risk of skin cancer by histological type in the atomic bomb survivors. We examined 80,158 of the 120,321 cohort members who had their radiation dose estimated by the latest dosimetry system (DS02). Potential skin tumors diagnosed from 1958 to 1996 were reviewed by a panel of pathologists, and the radiation risk of the first primary skin cancer was analyzed by histological type using a Poisson regression model. A significant excess relative risk (ERR) of basal cell carcinoma (BCC) (n = 123) was estimated at 1 Gy (0.74, 95% confidence interval (CI): 0.26, 1.6) for those age 30 at exposure and age 70 at observation, based on a linear-threshold model with a threshold dose of 0.63 Gy (95% CI: 0.32, 0.89) and a slope of 2.0 (95% CI: 0.69, 4.3). The estimated risks were 15, 5.7, 1.3 and 0.9 for ages at exposure of 0-9, 10-19, 20-39 and over 40 years, respectively, and the risk increased 11% with each one-year decrease in age at exposure. The ERR for squamous cell carcinoma (SCC) in situ (n = 64) using a linear model was estimated as 0.71 (95% CI: 0.063, 1.9). However, there were no significant dose responses for malignant melanoma (n = 10), SCC (n = 114), Paget disease (n = 10) or other skin cancers (n = 15). The significant linear radiation risk for BCC with a threshold at 0.63 Gy suggested that the basal cells of the epidermis had a threshold sensitivity to ionizing radiation, especially for persons who were young at the time of exposure.
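The linear-threshold dose-response used for BCC can be written down directly; the slope and threshold below are the central estimates quoted in the abstract, and the functional form is the generic linear-threshold model:

```python
import numpy as np

def err_linear_threshold(dose, slope, threshold):
    """Excess relative risk: zero below the threshold dose, linear above it."""
    return slope * np.maximum(0.0, dose - threshold)

# With the abstract's BCC estimates (threshold 0.63 Gy, slope 2.0 per Gy),
# the ERR at 1 Gy is 2.0 * (1.0 - 0.63) = 0.74, the reported central value.
err_at_1gy = err_linear_threshold(1.0, 2.0, 0.63)
```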
Photon beam asymmetry Σ for η and η‧ photoproduction from the proton
NASA Astrophysics Data System (ADS)
Collins, P.; Ritchie, B. G.; Dugger, M.; Anisovich, A. V.; Döring, M.; Klempt, E.; Nikonov, V. A.; Rönchen, D.; Sadasivan, D.; Sarantsev, A.; Adhikari, K. P.; Akbar, Z.; Amaryan, M. J.; Anefalos Pereira, S.; Avakian, H.; Ball, J.; Balossino, I.; Bashkanov, M.; Battaglieri, M.; Bedlinskiy, I.; Biselli, A. S.; Briscoe, W. J.; Brooks, W. K.; Burkert, V. D.; Cao, Frank Thanh; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Chetry, T.; Ciullo, G.; Clark, L.; Colaneri, L.; Cole, P. L.; Compton, N.; Contalbrigo, M.; Cortes, O.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Dupre, R.; Egiyan, H.; El Alaoui, A.; El Fassi, L.; Elouadrhiri, L.; Eugenio, P.; Fanchini, E.; Fedotov, G.; Filippi, A.; Fleming, J. A.; Ghandilyan, Y.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Glazier, D. I.; Gleason, C.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Heddle, D.; Hicks, K.; Holtrop, M.; Hughes, S. M.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jenkins, D.; Jo, H. S.; Joosten, S.; Keller, D.; Khachatryan, G.; Khachatryan, M.; Khandaker, M.; Kim, A.; Kim, W.; Klein, A.; Klein, F. J.; Kubarovsky, V.; Lanza, L.; Lenisa, P.; Livingston, K.; MacGregor, I. J. D.; Markov, N.; McKinnon, B.; Meyer, C. A.; Mirazita, M.; Mokeev, V.; Montgomery, R. A.; Movsisyan, A.; Munoz Camacho, C.; Murdoch, G.; Nadel-Turonski, P.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Paolone, M.; Paremuzyan, R.; Park, K.; Pasyuk, E.; Phelps, W.; Pisano, S.; Pogorelko, O.; Price, J. W.; Prok, Y.; Protopopescu, D.; Raue, B. A.; Ripani, M.; Rizzo, A.; Rosner, G.; Roy, P.; Sabatié, F.; Salgado, C.; Schumacher, R. A.; Sharabian, Y. G.; Skorodumina, Iu.; Smith, G. D.; Sokhan, D.; Sparveris, N.; Stepanyan, S.; Strakovsky, I. I.; Strauch, S.; Taiuti, M.; Tian, Ye; Torayev, B.; Ungaro, M.; Voskanyan, H.; Voutier, E.; Walford, N. 
K.; Wei, X.; Zachariou, N.; Zhang, J.
2017-08-01
Measurements of the linearly-polarized photon beam asymmetry Σ for photoproduction from the proton of η and η′ mesons are reported. A linearly-polarized tagged photon beam produced by coherent bremsstrahlung was incident on a cryogenic hydrogen target within the CEBAF Large Acceptance Spectrometer. Results are presented for the γp → ηp reaction for incident photon energies from 1.070 to 1.876 GeV, and from 1.516 to 1.836 GeV for the γp → η′p reaction. For γp → ηp, the data reported here considerably extend the range of measurements to higher energies, and are consistent with the few previously published measurements for this observable near threshold. For γp → η′p, the results obtained are consistent with the few previously published measurements for this observable near threshold, but also greatly expand the incident photon energy coverage for that reaction. Initial analysis of the data reported here with the Bonn-Gatchina model strengthens the evidence for four nucleon resonances, the N(1895)1/2-, N(1900)3/2+, N(2100)1/2+ and N(2120)3/2- resonances, which presently lack "four-star" status in the current Particle Data Group compilation, providing examples of how these new measurements help refine models of the photoproduction process.
NASA Astrophysics Data System (ADS)
Litvinov, I. I.
2015-11-01
A critical analysis is given of the well-known expression for the electron-impact ionization rate constant α_i of neutral atoms and ions, derived by linearization of the ionization cross section σ_i(ε) as a function of the electron energy near the threshold I and containing the characteristic factor (I + 2kT). Using the classical Thomson expression for the ionization cross section, it is shown that in addition to the linear slope of σ_i(ε), it is also necessary to take into account the large negative curvature of this function near the threshold. In this case, the second term in parentheses changes its sign, which means that the commonly used expression for α_i (~4kT/I) loses its validity already at moderate values of the temperature (kT/I ~ 0.1). The source of this error lies in a mathematical mistake in the original approach and is related to the incorrect choice of the sequential orders of terms small in the parameter kT/I. On the basis of a large amount of experimental data and considerations similar to the Gryzinski theory, a universal two-parameter modification of the Thomson formula (as well as the Bethe-Born formula) is proposed, and a new simple expression for the ionization rate constant for arbitrary values of kT/I is derived.
The sensitivity of the human thirst response to changes in plasma osmolality: a systematic review.
Hughes, Fintan; Mythen, Monty; Montgomery, Hugh
2018-01-01
Dehydration is highly prevalent and is associated with adverse cardiovascular and renal events. Clinical assessment of dehydration lacks sensitivity. A patient's own thirst might provide a more accurate guide to fluid therapy. This systematic review examines the sensitivity of thirst in responding to changes in plasma osmolality in participants of any age with no condition directly affecting their sense of thirst. Medline and EMBASE were searched up to June 2017. Inclusion criteria were all studies reporting the plasma osmolality threshold for the sensation of thirst. A total of 12 trials were included that assessed thirst intensity on a visual analogue scale as a function of plasma osmolality (pOsm) and employed linear regression to define the thirst threshold. This included 167 participants, both healthy controls and those with a range of pathologies, with a mean age of 41 (20-78) years. The mean ± 95% CI pOsm threshold for thirst sensation was found to be 285.23 ± 1.29 mOsm/kg. Above this threshold, thirst intensity as a function of pOsm had a mean ± SEM slope of 0.54 ± 0.07 cm/mOsm/kg. The mean ± 95% CI vasopressin release threshold was very similar to that of thirst, being 284.3 ± 0.71 mOsm/kg. Heterogeneity across studies can be accounted for by subtle variation in experimental protocol and data handling. The thresholds for thirst activation and vasopressin release lie in the middle of the normal range of plasma osmolality. Thirst increases linearly as pOsm rises. Thus, osmotically balanced fluid administered as per a patient's sensation of thirst should result in a plasma osmolality within the normal range. This work received no funding.
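The regression approach described (thirst flat below a threshold osmolality, rising linearly above it) can be sketched as a broken-stick fit with the threshold chosen by grid search over candidate values; the data, grid and noise level below are invented for illustration, with the true threshold and slope set to the pooled estimates in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_threshold(x, y, candidates):
    """Broken-stick least squares: for each candidate threshold t, regress
    y on max(0, x - t) and keep the t giving the smallest residual error."""
    best = None
    for t in candidates:
        A = np.column_stack([np.ones_like(x), np.maximum(0.0, x - t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((y - A @ coef) ** 2)
        if best is None or sse < best[0]:
            best = (sse, t, coef[1])
    return best[1], best[2]                    # threshold, slope above it

# Invented ratings: flat below ~285 mOsm/kg, then rising at 0.54 cm per
# mOsm/kg, plus Gaussian rating noise.
posm = rng.uniform(275.0, 300.0, 200)
thirst = 0.54 * np.maximum(0.0, posm - 285.0) + rng.normal(0.0, 0.3, 200)
t_hat, slope_hat = fit_threshold(posm, thirst, np.arange(280.0, 290.0, 0.25))
```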
A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Goldberg, Hirsh; Nasrabadi, Nasser M.
2007-04-01
In this paper we implement various linear and nonlinear subspace-based anomaly detectors for hyperspectral imagery. First, a dual window technique is used to separate the local area around each pixel into two regions - an inner-window region (IWR) and an outer-window region (OWR). Pixel spectra from each region are projected onto a subspace which is defined by projection bases that can be generated in several ways. Here we use three common pattern classification techniques (Principal Component Analysis (PCA), Fisher Linear Discriminant (FLD) Analysis, and the Eigenspace Separation Transform (EST)) to generate projection vectors. In addition to these three algorithms, the well-known Reed-Xiaoli (RX) anomaly detector is also implemented. Each of the four linear methods is then implicitly defined in a high- (possibly infinite-) dimensional feature space by using a nonlinear mapping associated with a kernel function. Using a common machine-learning technique known as the kernel trick all dot products in the feature space are replaced with a Mercer kernel function defined in terms of the original input data space. To determine how anomalous a given pixel is, we then project the current test pixel spectra and the spectral mean vector of the OWR onto the linear and nonlinear projection vectors in order to exploit the statistical differences between the IWR and OWR pixels. Anomalies are detected if the separation of the projection of the current test pixel spectra and the OWR mean spectra are greater than a certain threshold. Comparisons are made using receiver operating characteristics (ROC) curves.
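As background, the non-kernelized, global form of the RX detector the paper builds on scores each pixel by its Mahalanobis distance from the scene statistics; the dual-window and kernel variants replace the global mean and covariance with local or feature-space versions. A minimal sketch with simulated spectra (all data invented):

```python
import numpy as np

def rx_scores(pixels):
    """Global RX detector: squared Mahalanobis distance of each pixel
    spectrum from the scene mean, using the scene covariance."""
    mu = pixels.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(pixels, rowvar=False))
    d = pixels - mu
    return np.einsum('ij,jk,ik->i', d, inv_cov, d)

rng = np.random.default_rng(2)
background = rng.normal(0.0, 1.0, (500, 5))   # 500 pixels, 5 spectral bands
anomaly = np.full((1, 5), 4.0)                # one 4-sigma outlier pixel
scores = rx_scores(np.vstack([background, anomaly]))
# Declaring pixels whose score exceeds a threshold anomalous flags the outlier.
```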
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
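The core LET idea, expressing the denoiser as a linear combination of elementary thresholding functions and solving for the combination weights by minimizing a quadratic risk, can be sketched as follows. For brevity the weights here minimize the oracle MSE against the clean signal rather than the PURE estimate the paper derives, and the data are simulated:

```python
import numpy as np

def soft_threshold(x, t):
    """Elementary soft-thresholding function used as a LET basis element."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(3)
clean = np.zeros(1000)
clean[::50] = 5.0                              # sparse spike signal
noisy = clean + rng.normal(0.0, 1.0, 1000)     # additive Gaussian noise

# LET: denoiser = a1*F1(y) + a2*F2(y); the weights a are the solution of a
# linear least-squares problem (here against the clean signal, not PURE).
F = np.column_stack([soft_threshold(noisy, 1.0), soft_threshold(noisy, 3.0)])
a, *_ = np.linalg.lstsq(F, clean, rcond=None)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_let = np.mean((F @ a - clean) ** 2)
```

The point of the linear parameterization is exactly this reduction: once the basis thresholding functions are fixed, optimizing the denoiser is a small linear system rather than a nonlinear search.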
Motor Unit Interpulse Intervals During High Force Contractions.
Stock, Matt S; Thompson, Brennan J
2016-01-01
We examined the means, medians, and variability for motor-unit interpulse intervals (IPIs) during voluntary, high force contractions. Eight men (mean age = 22 years) attempted to perform isometric contractions at 90% of their maximal voluntary contraction force while bipolar surface electromyographic (EMG) signals were detected from the vastus lateralis and vastus medialis muscles. Surface EMG signal decomposition was used to determine the recruitment thresholds and IPIs of motor units that demonstrated accuracy levels ≥ 96.0%. Motor units with high recruitment thresholds demonstrated longer mean IPIs, but the coefficients of variation were similar across all recruitment thresholds. Polynomial regression analyses indicated that for both muscles, the relationship between the means and standard deviations of the IPIs was linear. The majority of IPI histograms were positively skewed. Although low-threshold motor units were associated with shorter IPIs, the variability among motor units with differing recruitment thresholds was comparable.
Human sensitivity to vertical self-motion.
Nesti, Alessandro; Barnett-Cowan, Michael; Macneilage, Paul R; Bülthoff, Heinrich H
2014-01-01
Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done in investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity for 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s^2. Overall, vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws' exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber's law in that thresholds increase less than expected at high stimulus intensity. We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling.
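The two power laws can be stated compactly as ΔI = k·I^e, where e = 1 recovers Weber's law and the fitted exponents (0.60 upward, 0.42 downward) imply a Weber fraction ΔI/I that falls with intensity. A small sketch (the constant k below is arbitrary and illustrative):

```python
def differential_threshold(intensity, k, exponent):
    """Power-law differential threshold dI = k * I**exponent; exponent 1
    is Weber's law, exponents < 1 mean sub-Weber growth at high intensity."""
    return k * intensity ** exponent

# With the reported upward-motion exponent of 0.60, the Weber fraction
# dI/I decreases as stimulus intensity grows (relative sensitivity improves).
weber_low = differential_threshold(0.5, 0.1, 0.60) / 0.5
weber_high = differential_threshold(2.0, 0.1, 0.60) / 2.0
```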
NASA Astrophysics Data System (ADS)
Sakthy Priya, S.; Alexandar, A.; Surendran, P.; Lakshmanan, A.; Rameshkumar, P.; Sagayaraj, P.
2017-04-01
An efficient organic nonlinear optical single crystal of L-arginine maleate dihydrate (LAMD) has been grown by the slow evaporation solution technique (SEST) and the slow cooling technique (SCT). The crystalline perfection of the crystal was examined using high-resolution X-ray diffractometry (HRXRD) analysis. Photoluminescence study confirmed the optical properties and defect levels in the crystal lattice. Electromechanical behaviour was observed using piezoelectric coefficient (d33) analysis. The photoconductivity analysis confirmed the negative photoconducting nature of the material. The dielectric constant and loss were measured as a function of frequency with varying temperature and vice versa. The laser damage threshold (LDT) measurement was carried out using an Nd:YAG laser with a wavelength of 1064 nm (focal length 35 cm), and the results showed that the LDT value of the crystal is high compared with that of KDP. The high laser damage threshold of the grown crystal makes it a potential candidate for second- and higher-order nonlinear optical device applications. The third-order nonlinear optical parameters of the LAMD crystal were determined by open-aperture and closed-aperture studies using the Z-scan technique. Parameters such as the nonlinear refractive index (n2), the two-photon absorption coefficient (β), and the real (Reχ3) and imaginary (Imχ3) parts of the third-order nonlinear optical susceptibility were calculated.
Wu, Xiaocheng; Lang, Lingling; Ma, Wenjun; Song, Tie; Kang, Min; He, Jianfeng; Zhang, Yonghui; Lu, Liang; Lin, Hualiang; Ling, Li
2018-07-01
Dengue fever is an important infectious disease in Guangzhou, China; previous studies on the effects of weather factors on the incidence of dengue fever did not consider the linearity of the associations. This study evaluated the effects of daily mean temperature, relative humidity and rainfall on the incidence of dengue fever. A generalized additive model with a spline smoothing function was used to examine the effects of daily mean, minimum and maximum temperatures, relative humidity and rainfall on the incidence of dengue fever during 2006-2014. Our analysis detected a non-linear effect of mean, minimum and maximum temperatures and relative humidity on dengue fever, with thresholds at 28°C, 23°C and 32°C for daily mean, minimum and maximum temperatures, respectively, and 76% for relative humidity. Below the thresholds, there was a significant positive effect: the excess risk of dengue fever was 10.21% (95% CI: 6.62% to 13.92%) for each 1°C increase in the mean temperature at lag 7-14 days, 7.10% (95% CI: 4.99% to 9.26%) for each 1°C increase in the daily minimum temperature at lag 11 days, and 2.27% (95% CI: 0.84% to 3.72%) for each 1°C increase in the daily maximum temperature at lag 10 days; each 1% increase in relative humidity at lag 7-14 days was associated with a 1.95% (95% CI: 1.21% to 2.69%) increase in the risk of dengue fever. Future prevention and control measures and epidemiological studies on dengue fever should consider these weather factors based on their exposure-response relationships. Copyright © 2018. Published by Elsevier B.V.
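A linear-threshold ("hockey-stick") Poisson fit of the kind used to locate such thresholds can be sketched as follows. The data are simulated with an assumed 28 °C knot, and the short IRLS loop is a minimal stand-in for a full GAM fit; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily series: log incidence rises with temperature up to an
# assumed threshold of 28 degC, then flattens (all values illustrative).
temp = rng.uniform(18, 34, 2000)
cases = rng.poisson(np.exp(0.5 + 0.10 * np.minimum(temp, 28.0)))

def loglik(tau):
    """Profile Poisson log-likelihood of a linear-threshold model at knot tau."""
    X = np.column_stack([np.ones_like(temp), np.minimum(temp, tau)])
    beta = np.zeros(2)
    for _ in range(30):  # IRLS iterations for the Poisson GLM, log link
        mu = np.exp(np.clip(X @ beta, -20, 20))
        z = X @ beta + (cases - mu) / mu
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    mu = np.exp(np.clip(X @ beta, -20, 20))
    return np.sum(cases * np.log(mu) - mu)

# Grid search over candidate knots; the maximizer estimates the threshold.
taus = np.arange(24.0, 32.5, 0.5)
best_tau = taus[np.argmax([loglik(t) for t in taus])]
print("estimated threshold (degC):", best_tau)
```

With a clear kink and 2000 days of data, the profile-likelihood maximum lands near the true 28 °C knot.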
Schöllnberger, Helmut; Eidemüller, Markus; Cullings, Harry M; Simonetto, Cristoforo; Neff, Frauke; Kaiser, Jan Christian
2018-03-01
The scientific community faces important discussions on the validity of the linear no-threshold (LNT) model for radiation-associated cardiovascular diseases at low and moderate doses. In the present study, mortalities from cerebrovascular diseases (CeVD) and heart diseases from the latest data on atomic bomb survivors were analyzed. The analysis was performed with several radio-biologically motivated linear and nonlinear dose-response models. For each detrimental health outcome one set of models was identified that all fitted the data about equally well. This set was used for multi-model inference (MMI), a statistical method of superposing different models to allow risk estimates to be based on several plausible dose-response models rather than just relying on a single model of choice. MMI provides a more accurate determination of the dose response and a more comprehensive characterization of uncertainties. It was found that for CeVD, the dose-response curve from MMI is located below the linear no-threshold model at low and medium doses (0-1.4 Gy). At higher doses MMI predicts a higher risk compared to the LNT model. A sublinear dose-response was also found for heart diseases (0-3 Gy). The analyses provide no conclusive answer to the question whether there is a radiation risk below 0.75 Gy for CeVD and 2.6 Gy for heart diseases. MMI suggests that the dose-response curves for CeVD and heart diseases in the Lifespan Study are sublinear at low and moderate doses. This has relevance for radiotherapy treatment planning and for international radiation protection practices in general.
Kmeans-ICA based automatic method for ocular artifacts removal in a motor imagery classification.
Bou Assi, Elie; Rihana, Sandy; Sawan, Mohamad
2014-01-01
Electroencephalogram (EEG) recordings are used as inputs of a motor imagery based BCI system. Eye blinks contaminate the spectral frequency of the EEG signals. Independent Component Analysis (ICA) has already been proven useful for removing these artifacts, whose frequency band overlaps with the EEG of interest. However, existing ICA-based methods use a reference lead such as the ElectroOculoGram (EOG) to identify the ocular artifact components. In this study, artifactual components were identified using an adaptive threshold obtained by means of Kmeans clustering. The denoised EEG signals were fed into a feature extraction algorithm extracting the band power, the coherence and the phase locking value, and then into a linear discriminant analysis classifier for motor imagery classification.
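The reference-free identification step can be sketched with a 1-D k-means on component kurtosis. The components below are simulated, and kurtosis is one plausible choice of feature (the abstract does not specify the exact features used):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical independent components: six noise-like (near-Gaussian) sources
# and two spiky, blink-like artifact sources with high kurtosis.
neural = rng.standard_normal((6, 5000))
spikes = 50.0 * rng.standard_normal((2, 5000)) * (rng.random((2, 5000)) < 0.01)
components = np.vstack([neural, spikes + rng.standard_normal((2, 5000))])

def excess_kurtosis(x):
    z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    return (z**4).mean(axis=1) - 3.0

def two_means_high_cluster(values, iters=50):
    """Minimal 1-D 2-means; returns a mask for the higher-mean cluster."""
    c = np.array([values.min(), values.max()])
    for _ in range(iters):
        assign = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        c = np.array([values[assign == k].mean() for k in (0, 1)])
    return assign == np.argmax(c)

# Cluster on a compressed (log) scale so the adaptive threshold separates the
# near-zero neural kurtoses from the large, variable artifact kurtoses.
feature = np.log1p(np.clip(excess_kurtosis(components), 0.0, None))
artifacts = two_means_high_cluster(feature)
print("components flagged as ocular artifacts:", np.flatnonzero(artifacts))
```

The flagged components would then be zeroed before inverting the ICA mixing to reconstruct denoised EEG.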
Housing flexibility effects on rotor stability
NASA Technical Reports Server (NTRS)
Davis, L. B.; Wolfe, E. A.; Beatty, R. F.
1985-01-01
Preliminary rotordynamic evaluations are performed with a housing stiffness assumption that is typically determined only after the hardware is built. In addressing rotor stability, a rigid housing assumption was shown to predict an instability at a lower spin speed than a comparable flexible housing analysis. This rigid housing assumption therefore provides a conservative estimate of the stability threshold speed. A flexible housing appears to act as an energy absorber, dissipating some of the destabilizing force. The fact that a flexible housing is usually asymmetric and considerably heavier than the rotor was related to this apparent increase in rotor stability. Rigid housing analysis is proposed as a valuable screening criterion and may save time and money in the construction of elaborate housing finite element models for linear stability analyses.
Hoyer, Annika; Kuss, Oliver
2018-05-01
Meta-analysis of diagnostic studies is still a rapidly developing area of biostatistical research. Especially, there is an increasing interest in methods to compare different diagnostic tests to a common gold standard. Restricting to the case of two diagnostic tests, in these meta-analyses the parameters of interest are the differences of sensitivities and specificities (with their corresponding confidence intervals) between the two diagnostic tests while accounting for the various associations across single studies and between the two tests. We propose statistical models with a quadrivariate response (where sensitivity of test 1, specificity of test 1, sensitivity of test 2, and specificity of test 2 are the four responses) as a sensible approach to this task. Using a quadrivariate generalized linear mixed model naturally generalizes the common standard bivariate model of meta-analysis for a single diagnostic test. If information on several thresholds of the tests is available, the quadrivariate model can be further generalized to yield a comparison of full receiver operating characteristic (ROC) curves. We illustrate our model by an example where two screening methods for the diagnosis of type 2 diabetes are compared.
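A deliberately simplified two-stage version of such a comparison (per-study logit differences pooled by inverse variance, ignoring the within-study correlations that the quadrivariate GLMM is designed to capture) can be sketched as follows; all counts are invented:

```python
import numpy as np

# Hypothetical per-study counts (true positives among n diseased) for two
# tests evaluated against a common gold standard in the same studies.
tp1 = np.array([45, 30, 60])
tp2 = np.array([40, 26, 52])
n = np.array([50, 40, 70])

def logit(p):
    return np.log(p / (1 - p))

# Continuity-corrected sensitivities and their logit-scale variances.
p1 = (tp1 + 0.5) / (n + 1.0)
p2 = (tp2 + 0.5) / (n + 1.0)
v1 = 1 / (tp1 + 0.5) + 1 / (n - tp1 + 0.5)
v2 = 1 / (tp2 + 0.5) + 1 / (n - tp2 + 0.5)

# Per-study difference of logit-sensitivities; variances add under the
# (simplifying) assumption of independence between the two tests.
d = logit(p1) - logit(p2)
w = 1.0 / (v1 + v2)
pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled logit-difference = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```

The quadrivariate GLMM replaces this independence assumption with an explicit joint random-effects distribution over all four responses, which is precisely what makes it preferable.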
NASA Astrophysics Data System (ADS)
Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku
2014-03-01
In this paper, we propose an automated biliary tract extraction method from abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for the automated extraction of the biliary tract from common contrast CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures, and intensities of the IHBD are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter based on a local intensity structure analysis using the eigenvalues of the Hessian matrix for the IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.
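The EHBD-style step of thresholding followed by connected component analysis can be sketched in 2-D (toy intensities, 4-connectivity; real CT data are 3-D and the threshold would be chosen from the data):

```python
import numpy as np
from collections import deque

# Toy 2-D "CT slice": low intensities mark duct-like voxels (values assumed).
img = np.array([
    [100, 100,  30,  30, 100],
    [100, 100,  30, 100, 100],
    [100, 100,  30, 100,  25],
    [100, 100, 100, 100,  25],
])

def largest_low_intensity_component(img, thresh=50):
    """Threshold, then 4-connected component labelling; return the mask of
    the largest component (the EHBD-candidate step, simplified to 2-D)."""
    mask = img < thresh
    labels = np.zeros(img.shape, dtype=int)
    cur = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        cur += 1
        labels[start] = cur
        q = deque([start])
        while q:  # breadth-first flood fill of one component
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = cur
                    q.append((rr, cc))
    sizes = np.bincount(labels.ravel())[1:]
    return labels == (1 + np.argmax(sizes))

duct = largest_low_intensity_component(img)
print(int(duct.sum()), "voxels in the largest candidate region")
```

The IHBD step would instead apply a Hessian-eigenvalue line filter before this kind of region selection.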
CARA Risk Assessment Thresholds
NASA Technical Reports Server (NTRS)
Hejduk, M. D.
2016-01-01
Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
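A minimal sketch of mapping a conjunction's collision probability (Pc) to these action levels follows. The numerical cut-offs are hypothetical, since the abstract defines the roles of the thresholds but not their values:

```python
# Illustrative values only -- the abstract defines the threshold roles but
# not their numerical levels, so these Pc cut-offs are hypothetical.
RED_PC = 1e-4      # warning / remediation threshold
YELLOW_PC = 1e-7   # analysis threshold

def conjunction_action(pc: float) -> str:
    """Map a collision probability (Pc) to a CARA-style action level."""
    if pc >= RED_PC:
        return "red: issue warning, consider active remediation"
    if pc >= YELLOW_PC:
        return "yellow: analyze event, seek additional information"
    return "green: routine monitoring"

print(conjunction_action(5e-4))
print(conjunction_action(3e-8))
```

Maneuver screening and post-remediation sizing would use separate, more demanding cut-offs, as described above.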
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzou, J. C.; Kevrekidis, P. G.; Kolokolnikov, T.
2016-05-10
For a dissipative variant of the two-dimensional Gross-Pitaevskii equation with a parabolic trap under rotation, we study a symmetry breaking process that leads to the formation of vortices. The first symmetry breaking leads to the formation of many small vortices distributed uniformly near the Thomas-Fermi radius. The instability occurs as a result of a linear instability of a vortex-free steady state as the rotation is increased above a critical threshold. We focus on the second subsequent symmetry breaking, which occurs in the weakly nonlinear regime. At slightly above threshold, we derive a one-dimensional amplitude equation that describes the slow evolution of the envelope of the initial instability. Here, we show that the mechanism responsible for initiating vortex formation is a modulational instability of the amplitude equation. We also illustrate the role of dissipation in the symmetry breaking process. All analyses are confirmed by detailed numerical computations.
NASA Astrophysics Data System (ADS)
Jiang, C.; Rumyantsev, S. L.; Samnakay, R.; Shur, M. S.; Balandin, A. A.
2015-02-01
We report on the fabrication of MoS2 thin-film transistors (TFTs) and experimental investigations of their high-temperature current-voltage characteristics. The measurements show that MoS2 devices remain functional at temperatures at least as high as 500 K. The temperature increase results in decreased threshold voltage and mobility. The comparison of the direct current (DC) and pulse measurements shows that the DC sub-linear and super-linear output characteristics of MoS2 thin-film devices result from Joule heating and the interplay of the threshold voltage and mobility temperature dependences. At temperatures above 450 K, a kink in the drain current occurs at zero gate voltage irrespective of the threshold voltage value. This intriguing phenomenon, referred to as a "memory step," was attributed to slow relaxation processes in thin films similar to those in graphene and electron glasses. The fabricated MoS2 thin-film transistors demonstrated stable operation after two months of aging. The obtained results suggest new applications for MoS2 thin-film transistors in extreme-temperature electronics and sensors.
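Tracking the threshold voltage across temperatures, as above, requires extracting it at each temperature; a common approach is linear extrapolation of the transfer curve to zero drain current. A sketch with an invented Id(Vg) characteristic:

```python
import numpy as np

# Hypothetical linear-regime transfer curve Id = k * (Vg - Vth) above
# threshold; both k and Vth here are invented for illustration.
vth_true, k = 1.2, 2e-6
vg = np.linspace(0.0, 5.0, 51)
i_d = np.clip(k * (vg - vth_true), 0.0, None)

# Linear-extrapolation method: fit the upper linear region, solve Id = 0.
region = i_d > 0.2 * i_d.max()
slope, intercept = np.polyfit(vg[region], i_d[region], 1)
vth = -intercept / slope
print(f"extracted threshold voltage: {vth:.3f} V")
```

Repeating the extraction on curves measured at successive temperatures would yield the Vth(T) trend the abstract describes.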
Percolation Thresholds in Angular Grain media: Drude Directed Infiltration
NASA Astrophysics Data System (ADS)
Priour, Donald
Pores in many realistic systems are not well delineated channels, but are void spaces among grains impermeable to charge or fluid flow which comprise the medium. Sparse grain concentrations lead to permeable systems, while concentrations in excess of a critical density block bulk fluid flow. We calculate percolation thresholds in porous materials made up of randomly placed (and oriented) disks, tetrahedrons, and cubes. To determine if randomly generated finite system samples are permeable, we deploy virtual tracer particles which are scattered (e.g. specularly) by collisions with impenetrable angular grains. We hasten the rate of exploration (which would otherwise scale as n_coll^(1/2), where n_coll is the number of collisions with grains, if the tracers followed linear trajectories) by considering the tracer particles to be charged in conjunction with a randomly directed uniform electric field. As in the Drude treatment, where a succession of many scattering events leads to a constant drift velocity, tracer displacements on average grow linearly in n_coll. By averaging over many disorder realizations for a variety of system sizes, we calculate the percolation threshold and critical exponent which characterize the phase transition.
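The contrast between diffusive and Drude-directed exploration can be illustrated with a biased random walk: isotropic steps stand in for scattering off grains, and an assumed field-aligned drift per collision plays the role of the electric field.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each collision randomizes the tracer's step (isotropic scattering); a small
# field-aligned drift per collision is the Drude-like bias (values assumed).
n_tracers, n_coll = 200, 10000
steps = rng.standard_normal((n_tracers, n_coll, 2))
drift = np.array([0.05, 0.0])

unbiased = np.linalg.norm(steps.sum(axis=1), axis=1).mean()
biased = np.linalg.norm((steps + drift).sum(axis=1), axis=1).mean()

# Diffusive exploration grows ~ n_coll**0.5; the drifted walk grows ~ n_coll,
# which is why the directed infiltration probes pore connectivity much faster.
print(f"mean displacement: unbiased {unbiased:.1f}, biased {biased:.1f}")
```

Even a weak drift dominates at large n_coll, so the biased tracers traverse a finite sample in far fewer collisions.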
Cyclone–anticyclone vortex asymmetry mechanism and linear Ekman friction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chefranov, S. G., E-mail: schefranov@mail.ru
2016-04-15
Allowance for the linear Ekman friction has been found to ensure a threshold (in rotation frequency) realization of the linear dissipative-centrifugal instability and the related chiral symmetry breaking in the dynamics of Lagrangian particles, which leads to the cyclone-anticyclone vortex asymmetry. An excess of the fluid rotation rate ω₀ over some threshold value determined by the fluid eigenfrequency ω (i.e., ω₀ > ω) is shown to be a condition for the realization of such an instability. A new generalization of the solution of the Karman problem to determine the steady-state velocity field in a viscous incompressible fluid above a rotating solid disk of large radius, in which the linear Ekman friction was additionally taken into account, has been obtained. A correspondence of this solution and the conditions for the realization of the dissipative-centrifugal instability of a chiral-symmetric vortex state and the corresponding cyclone-anticyclone vortex asymmetry has been shown. A generalization of the well-known spiral velocity distribution in an "Ekman layer" near a solid surface has been established for the case where the fluid rotation frequency far from the disk ω differs from the disk rotation frequency ω₀.
Stimulus and recording variables and their effects on mammalian vestibular evoked potentials
NASA Technical Reports Server (NTRS)
Jones, Sherri M.; Subramanian, Geetha; Avniel, Wilma; Guo, Yuqing; Burkard, Robert F.; Jones, Timothy A.
2002-01-01
Linear vestibular evoked potentials (VsEPs) measure the collective neural activity of the gravity receptor organs in the inner ear that respond to linear acceleration transients. The present study examined the effects of electrode placement, analog filtering, stimulus polarity and stimulus rate on linear VsEP thresholds, latencies and amplitudes recorded from mice. Two electrode-recording montages were evaluated, rostral (forebrain) to 'mastoid' and caudal (cerebellum) to 'mastoid'. VsEP thresholds and peak latencies were identical between the two recording sites; however, peak amplitudes were larger for the caudal recording montage. VsEPs were also affected by filtering. Results suggest optimum high pass filter cutoff at 100-300 Hz, and low pass filter cutoff at 10,000 Hz. To evaluate stimulus rate, linear jerk pulses were presented at 9.2, 16, 25, 40 and 80 Hz. At 80 Hz, mean latencies were longer (0.350-0.450 ms) and mean amplitudes reduced (0.8-1.8 microV) for all response peaks. In 50% of animals, late peaks (P3, N3) disappeared at 80 Hz. The results offer options for VsEP recording protocols. Copyright 2002 Elsevier Science B.V.
NASA Astrophysics Data System (ADS)
Belfiore, Laurence A.; Volpato, Fabio Z.; Paulino, Alexandre T.; Belfiore, Carol J.
2011-12-01
The primary objective of this investigation is to establish guidelines for generating significant mammalian cell density in suspension bioreactors when stress-sensitive kinetics enhance the rate of nutrient consumption. Ultra-low-frequency dynamic modulations of the impeller (i.e., 35104 Hz) introduce time-dependent oscillatory shear into this transient analysis of cell proliferation under semi-continuous creeping flow conditions. Greater nutrient consumption is predicted when the amplitude
Synchronization of low- and high-threshold motor units.
Defreitas, Jason M; Beck, Travis W; Ye, Xin; Stock, Matt S
2014-04-01
We examined the degree of synchronization for both low- and high-threshold motor unit (MU) pairs at high force levels. MU spike trains were recorded from the quadriceps during high-force isometric leg extensions. Short-term synchronization (between -6 and 6 ms) was calculated for every unique MU pair for each contraction. At high force levels, earlier recruited motor unit pairs (low-threshold) demonstrated relatively low levels of short-term synchronization (approximately 7.3% extra firings than would have been expected by chance). However, the magnitude of synchronization increased significantly and linearly with mean recruitment threshold (reaching 22.1% extra firings for motor unit pairs recruited above 70% MVC). Three potential mechanisms that could explain the observed differences in synchronization across motor unit types are proposed and discussed. Copyright © 2013 Wiley Periodicals, Inc.
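A short-term synchronization index of the "extra firings" type can be sketched for two simulated spike trains; all parameters below (rates, locking fraction, jitter) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical spike trains (times in s over a 60 s contraction): 30% of the
# reference discharges are echoed by the second unit within ~2 ms jitter.
ref = np.sort(rng.uniform(0.0, 60.0, 600))
locked = ref[rng.random(ref.size) < 0.3]
locked = locked + rng.normal(0.0, 0.002, locked.size)
other = np.sort(np.concatenate([locked, rng.uniform(0.0, 60.0, 400)]))

def extra_firings_pct(ref, other, window=0.006, duration=60.0):
    """Percent of `other` firings beyond chance that fall within +/-6 ms of
    a reference discharge -- a simple short-term synchronization index."""
    near = np.any(np.abs(other[:, None] - ref[None, :]) <= window, axis=1)
    # Chance level: fraction of the recording covered by the +/-6 ms windows
    # (window overlap neglected for simplicity).
    p_chance = min(1.0, ref.size * 2.0 * window / duration)
    return 100.0 * (near.sum() - other.size * p_chance) / other.size

print(f"extra firings: {extra_firings_pct(ref, other):.1f}%")
```

Computing this index for every unique pair and regressing it on mean recruitment threshold would reproduce the analysis design described above.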
NASA Astrophysics Data System (ADS)
Gong, He; Fan, Yubo; Zhang, Ming
2008-04-01
The objective of this paper is to identify the effects of mechanical disuse and basic multi-cellular unit (BMU) activation threshold on the form of trabecular bone during menopause. A bone adaptation model with mechanical-biological factors at the BMU level was integrated with finite element analysis to simulate the changes of trabecular bone structure during menopause. Mechanical disuse and changes in the BMU activation threshold were applied to the model for the period from 4 years before to 4 years after menopause. The changes in bone volume fraction, trabecular thickness and fractal dimension of the trabecular structures were used to quantify the changes of trabecular bone in three different cases associated with mechanical disuse and BMU activation threshold. It was found that the changes in the simulated bone volume fraction were highly correlated and consistent with clinical data; that the trabecular thickness reduced significantly during menopause and was highly linearly correlated with the bone volume fraction; and that the trend in the fractal dimension of the simulated trabecular structure corresponded with clinical observations. The numerical simulation in this paper may help to better understand the relationship between bone morphology and the mechanical as well as biological environment, and can provide a quantitative computational model and methodology for the numerical simulation of bone structural morphological changes caused by the mechanical and/or biological environment.
Mirror Instability: Quasi-linear Effects
NASA Astrophysics Data System (ADS)
Hellinger, P.; Travnicek, P. M.; Passot, T.; Sulem, P.; Kuznetsov, E. A.
2008-12-01
Nonlinear properties of the mirror instability are investigated by direct integration of the quasi-linear diffusion equation [Shapiro and Shevchenko, 1964] near threshold. The simulation results are compared to the results of standard hybrid simulations [Califano et al., 2008] and discussed in the context of the nonlinear dynamical model by Kuznetsov et al. [2007]. References: Califano, F., P. Hellinger, E. Kuznetsov, T. Passot, P. L. Sulem, and P. M. Travnicek (2008), Nonlinear mirror mode dynamics: Simulations and modeling, J. Geophys. Res., 113, A08219, doi:10.1029/2007JA012898. Kuznetsov, E., T. Passot and P. L. Sulem (2007), Dynamical model for nonlinear mirror modes near threshold, Phys. Rev. Lett., 98, 235003 . Shapiro, V. D., and V. I. Shevchenko (1964), Quasilinear theory of instability of a plasma with an anisotropic ion velocity distribution, Sov. JETP, 18, 1109.
Energy conserving, linear scaling Born-Oppenheimer molecular dynamics.
Cawkwell, M J; Niklasson, Anders M N
2012-10-07
Born-Oppenheimer molecular dynamics simulations with long-term conservation of the total energy and a computational cost that scales linearly with system size have been obtained simultaneously. Linear scaling with a low pre-factor is achieved using density matrix purification with sparse matrix algebra and a numerical threshold on matrix elements. The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] yields microcanonical trajectories with the approximate forces obtained from the linear scaling method that exhibit no systematic drift over hundreds of picoseconds and which are indistinguishable from trajectories computed using exact forces.
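The purification-with-threshold ingredient can be sketched with dense NumPy matrices. A real linear-scaling code would use sparse algebra and estimated spectral bounds; here exact diagonalization places the chemical potential purely for convenience, and the Hamiltonian is a toy:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy symmetric Hamiltonian (arbitrary units) with 15 occupied states.
n, n_occ = 40, 15
H = np.diag(np.sort(rng.standard_normal(n)))
H += 0.05 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
H = 0.5 * (H + H.T)

def mcweeny_purify(H, n_occ, drop=1e-9, tol=1e-7, iters=100):
    """Density matrix via McWeeny purification P <- 3P^2 - 2P^3, zeroing
    tiny elements each step (the numerical threshold used for sparsity)."""
    e = np.linalg.eigvalsh(H)                 # cheap here; estimated in practice
    mu = 0.5 * (e[n_occ - 1] + e[n_occ])      # mid-gap chemical potential
    beta = 0.5 / max(e[-1] - mu, mu - e[0])   # maps spectrum into [0, 1]
    P = 0.5 * np.eye(len(H)) + beta * (mu * np.eye(len(H)) - H)
    for _ in range(iters):
        P2 = P @ P
        Pn = 3.0 * P2 - 2.0 * P2 @ P
        Pn[np.abs(Pn) < drop] = 0.0           # enforce numerical sparsity
        if np.linalg.norm(Pn - P) < tol:
            return Pn
        P = Pn
    return P

P = mcweeny_purify(H, n_occ)
print("trace:", np.trace(P), " idempotency error:", np.linalg.norm(P @ P - P))
```

At convergence the trace equals the occupation number and P is (numerically) idempotent; the extended Lagrangian scheme then supplies stable trajectories despite the approximate, thresholded forces.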
Spectral singularities, threshold gain, and output intensity for a slab laser with mirrors
NASA Astrophysics Data System (ADS)
Doğan, Keremcan; Mostafazadeh, Ali; Sarısaman, Mustafa
2018-05-01
We explore the consequences of the emergence of linear and nonlinear spectral singularities in TE modes of a homogeneous slab of active optical material that is placed between two mirrors. We use the results together with two basic postulates regarding the behavior of laser light emission to derive explicit expressions for the laser threshold condition and output intensity for these modes of the slab and discuss their physical implications. In particular, we reveal the details of the dependence of the threshold gain and output intensity on the position and properties of the mirrors and on the real part of the refractive index of the gain material.
Neural Activity Patterns in the Human Brain Reflect Tactile Stickiness Perception.
Kim, Junsuk; Yeon, Jiwon; Ryu, Jaekyun; Park, Jang-Yeon; Chung, Soon-Cheol; Kim, Sung-Phil
2017-01-01
Our previous human fMRI study found brain activations correlated with tactile stickiness perception using the uni-variate general linear model (GLM) (Yeon et al., 2017). Here, we conducted an in-depth investigation on neural correlates of sticky sensations by employing a multivoxel pattern analysis (MVPA) on the same dataset. In particular, we statistically compared multi-variate neural activities in response to the three groups of sticky stimuli: A supra-threshold group including a set of sticky stimuli that evoked vivid sticky perception; an infra-threshold group including another set of sticky stimuli that barely evoked sticky perception; and a sham group including acrylic stimuli with no physically sticky property. Searchlight MVPAs were performed to search for local activity patterns carrying neural information of stickiness perception. Similar to the uni-variate GLM results, significant multi-variate neural activity patterns were identified in postcentral gyrus, subcortical (basal ganglia and thalamus), and insula areas (insula and adjacent areas). Moreover, MVPAs revealed that activity patterns in posterior parietal cortex discriminated the perceptual intensities of stickiness, which was not present in the uni-variate analysis. Next, we applied a principal component analysis (PCA) to the voxel response patterns within identified clusters so as to find low-dimensional neural representations of stickiness intensities. Follow-up clustering analyses clearly showed separate neural grouping configurations between the Supra- and Infra-threshold groups. Interestingly, this neural categorization was in line with the perceptual grouping pattern obtained from the psychophysical data. Our findings thus suggest that different stickiness intensities would elicit distinct neural activity patterns in the human brain and may provide a neural basis for the perception and categorization of tactile stickiness.
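The PCA-plus-clustering step on voxel response patterns can be sketched with simulated data, two pattern groups standing in for the supra- and infra-threshold conditions (dimensions and effect sizes invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical voxel response patterns: 10 trials per stimulus group in a
# 50-voxel cluster; each group shares its own mean pattern plus noise.
supra = 2.0 * rng.standard_normal(50)
infra = 2.0 * rng.standard_normal(50)
X = np.vstack([supra + rng.standard_normal((10, 50)),
               infra + rng.standard_normal((10, 50))])

# PCA by SVD of the centered data; keep two components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Simple 2-means clustering in the low-dimensional score space.
c = scores[[0, -1]].copy()
for _ in range(25):
    assign = np.linalg.norm(scores[:, None] - c[None], axis=2).argmin(axis=1)
    c = np.array([scores[assign == k].mean(axis=0) for k in (0, 1)])

print("cluster labels:", assign)
```

With well-separated mean patterns, the recovered clusters align with the stimulus groups, mirroring the neural grouping configurations reported above.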
Lasing eigenvalue problems: the electromagnetic modelling of microlasers
NASA Astrophysics Data System (ADS)
Benson, Trevor; Nosich, Alexander; Smotrova, Elena; Balaban, Mikhail; Sewell, Phillip
2007-02-01
Comprehensive microcavity laser models should account for several physical mechanisms, e.g. carrier transport, heating and optical confinement, coupled by non-linear effects. Nevertheless, considerable useful information can still be obtained if all non-electromagnetic effects are neglected, often within an additional effective-index reduction to an equivalent 2D problem, and the optical modes viewed as solutions of Maxwell's equations. Integral equation (IE) formulations have many advantages over numerical techniques such as FDTD for the study of such microcavity laser problems. The most notable advantages of an IE approach are computational efficiency, the correct description of cavity boundaries without stair-step errors, and the direct solution of an eigenvalue problem rather than the spectral analysis of a transient signal. Boundary IE (BIE) formulations are more economical than volume IE (VIE) ones, because of their lower dimensionality, but they are only applicable to the constant cavity refractive index case. The Muller BIE, being free of 'defect' frequencies and having smooth or integrable kernels, provides a reliable tool for the modal analysis of microcavities. Whilst such an approach can readily identify complex-valued natural frequencies and Q-factors, the lasing condition is not addressed directly. We have thus suggested using a Muller BIE approach to solve a lasing eigenvalue problem (LEP), i.e. a linear eigenvalue solution in the form of two real-valued numbers (lasing wavelength and threshold information) when macroscopic gain is introduced into the cavity material within an active region. Such an approach yields clear insight into the lasing thresholds of individual cavities with uniform and non-uniform gain, cavities coupled as photonic molecules and cavities equipped with one or more quantum dots.
Thermal sensation and climate: a comparison of UTCI and PET thresholds in different climates
NASA Astrophysics Data System (ADS)
Pantavou, Katerina; Lykoudis, Spyridon; Nikolopoulou, Marialena; Tsiros, Ioannis X.
2018-06-01
The influence of physiological acclimatization and psychological adaptation on thermal perception is well documented and has revealed the importance of thermal experience and expectation in the evaluation of environmental stimuli. Seasonal patterns of thermal perception have been studied, and calibrated thermal indices' scales have been proposed to obtain meaningful interpretations of thermal sensation indices in different climate regions. The current work attempts to quantify the contribution of climate to the long-term thermal adaptation by examining the relationship between climate normal annual air temperature (1971-2000) and such climate-calibrated thermal indices' assessment scales. The thermal sensation ranges of two thermal indices, the Universal Thermal Climate Index (UTCI) and the Physiological Equivalent Temperature Index (PET), were calibrated for three warm temperate climate contexts (Cfa, Cfb, Csa), against the subjective evaluation of the thermal environment indicated by interviewees during field surveys conducted at seven European cities: Athens (GR), Thessaloniki (GR), Milan (IT), Fribourg (CH), Kassel (DE), Cambridge (UK), and Sheffield (UK), under the same research protocol. Then, calibrated scales for other climate contexts were added from the literature, and the relationship between the respective scales' thresholds and climate normal annual air temperature was examined. To maintain the maximum possible comparability, three methods were applied for the calibration, namely linear, ordinal, and probit regression. The results indicated that the calibrated UTCI and PET thresholds increase with the climate normal annual air temperature of the survey city. To investigate further climates, we also included in the analysis results of previous studies presenting only thresholds for neutral thermal sensation. 
The average increase of the respective thresholds in the case of neutral thermal sensation was about 0.6 °C for each 1 °C increase of the normal annual air temperature for both indices, statistically significant only for PET though.
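The reported trend can be reproduced in miniature with a simple linear regression; the city values below are invented to follow the ~0.6 °C-per-°C relationship and do not come from the surveys:

```python
import numpy as np

# Hypothetical city data: climate normal annual air temperature (degC) and a
# calibrated neutral-sensation PET threshold (degC) for each survey city.
annual_t = np.array([9.0, 10.5, 12.0, 13.5, 16.5, 17.5, 18.5])
neutral_pet = 14.0 + 0.6 * annual_t  # built to follow the ~0.6 degC/degC trend

slope, intercept = np.polyfit(annual_t, neutral_pet, 1)
print(f"neutral threshold rises {slope:.2f} degC per 1 degC of annual temperature")
```

On real survey data the slope would be estimated with uncertainty, and the calibration itself done by linear, ordinal, or probit regression as described above.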
Sparse principal component analysis in medical shape modeling
NASA Astrophysics Data System (ADS)
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
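Sparse PCA by simple thresholding of small loadings, with modes re-ordered by projected variance (the ordering question raised above), can be sketched as follows on toy data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy shape data: 30 samples of 8 correlated variables.
X = rng.standard_normal((30, 8)) @ rng.standard_normal((8, 8))
Xc = X - X.mean(axis=0)

# Ordinary PCA loadings via SVD; columns of `loadings` are unit directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T

# "Sparse PCA by simple thresholding": zero small loadings, renormalize.
sparse = loadings.copy()
sparse[np.abs(sparse) < 0.2] = 0.0
sparse /= np.linalg.norm(sparse, axis=0, keepdims=True)

# Re-order modes by the variance of the data projected onto each sparse
# mode, since thresholding can change the variance ordering.
var = (Xc @ sparse).var(axis=0)
order = np.argsort(var)[::-1]
sparse = sparse[:, order]

print("explained variance per sparse mode:", np.round(var[order], 2))
```

Thresholding is the crudest route to sparsity; the dedicated SPCA algorithm discussed in the abstract optimizes the loadings directly, but the re-ordering-by-variance step applies to both.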
An extensive investigation of work function modulated trapezoidal recessed channel MOSFET
NASA Astrophysics Data System (ADS)
Lenka, Annada Shankar; Mishra, Sikha; Mishra, Satyaranjan; Bhanja, Urmila; Mishra, Guru Prasad
2017-11-01
The concepts of silicon on insulator (SOI) and a grooved gate help to lessen short channel effects (SCEs). Work function modulation along the metal gate further improves the drain current by producing a more uniform electric field along the channel. These concepts are therefore combined in the proposed MOSFET structure for improved performance. In this work, the trapezoidal recessed channel silicon on insulator (TRC-SOI) MOSFET and the work function modulated trapezoidal recessed channel silicon on insulator (WFM-TRC-SOI) MOSFET are compared in terms of DC and RF parameters, and the linearity of both devices is then tested. An analytical model is formulated using the 2-D Poisson equation, yielding a compact expression for the threshold voltage based on the minimum surface potential. We analyze the effect of negative junction depth and corner angle on various device parameters such as the minimum surface potential, sub-threshold slope (SS), drain-induced barrier lowering (DIBL) and threshold voltage. The analysis shows that the switching performance of the WFM-TRC-SOI MOSFET surpasses that of the TRC-SOI MOSFET in terms of a high Ion/Ioff ratio, and that the proposed structure can minimize SCEs in RF applications. The validity of the proposed model has been verified against simulation results obtained with the Sentaurus TCAD device simulator.
Regulation of ventral surface chemoreceptors by the central respiratory pattern generator.
Guyenet, Patrice G; Mulkey, Daniel K; Stornetta, Ruth L; Bayliss, Douglas A
2005-09-28
The rat retrotrapezoid nucleus (RTN) contains neurons described as central chemoreceptors in the adult and respiratory rhythm-generating pacemakers in neonates [parafacial respiratory group (pfRG)]. Here we test the hypothesis that both RTN and pfRG neurons are intrinsically chemosensitive and tonically firing neurons whose respiratory rhythmicity is caused by a synaptic feedback from the central respiratory pattern generator (CPG). In halothane-anesthetized adults, RTN neurons were silent below 4.5% end-expiratory (e-exp) CO2. Their activity increased linearly (3.2 Hz/1% CO2) up to 6.5% (CPG threshold) and then more slowly to peak approximately 10 Hz at 10% CO2. Respiratory modulation of RTN neurons was absent below CPG threshold, gradually stronger beyond, and, like pfRG neurons, typically (42%) characterized by twin periods of reduced activity near phrenic inspiration. After CPG inactivation with kynurenate (KYN), RTN neurons discharged linearly as a function of e-exp CO2 (slope, +1.7 Hz/1% CO2) and arterial pH (threshold, 7.48; slope, 39 Hz/pH unit). In coronal brain slices (postnatal days 7-12), RTN chemosensitive neurons were silent at pH 7.55. Their activity increased linearly with acidification up to pH 7.2 (17 Hz/pH unit at 35 degrees C) and was always tonic. In conclusion, consistent with their postulated central chemoreceptor role, RTN/pfRG neurons encode pH linearly and discharge tonically when disconnected from the rest of the respiratory centers in vivo (KYN treatment) and in vitro. In vivo, RTN neurons receive respiratory synchronous inhibitory inputs that may serve as feedback and impart these neurons with their characteristic respiratory modulation.
NASA Astrophysics Data System (ADS)
Attal, M.; Hobley, D.; Cowie, P. A.; Whittaker, A. C.; Tucker, G. E.; Roberts, G. P.
2008-12-01
Prominent convexities in channel long profiles, or knickzones, are an expected feature of bedrock rivers responding to a change in the rate of base level fall driven by tectonic processes. In response to a change in relative uplift rate, the simple stream power model, which is characterized by a slope exponent equal to unity, predicts that knickzone retreat velocity is independent of uplift rate and that channel slope and uplift rate are linearly related along the reaches which have re-equilibrated with respect to the new uplift condition (i.e., downstream of the profile convexity). However, a threshold for erosion has been shown to introduce non-linearity between slope and uplift rate when associated with stochastic rainfall variability. We present field data regarding the height and retreat rates of knickzones in rivers upstream of active normal faults in the central Apennines, Italy, where excellent constraints exist on the temporal and spatial history of fault movement. The knickzones developed in response to an independently-constrained increase in fault throw rate at 0.75 Ma. Channel characteristics and Shields stress values suggest that these rivers lie close to the detachment-limited end-member, but the knickzone retreat velocity (calculated from the time since fault acceleration) has been found to scale systematically with the known fault throw rates, even after accounting for differences in drainage area. In addition, the relationship between measured channel slope and relative uplift rate is non-linear, suggesting that a threshold for erosion might be effective in this setting. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to quantify the effect of such a threshold on river long profile development and knickzone retreat in response to tectonic perturbation.
In particular, we investigate the evolution of three Italian catchments of different sizes characterized by contrasting degrees of tectonic perturbation, using physically realistic threshold values based on sediment grain-size measurements along the studied rivers. We show that the threshold alone cannot account for field observations of the size, position and retreat rate of profile convexities, and that other factors neglected by the simple stream power law (e.g., the role of sediment) have to be invoked to explain the discrepancy between field observations and modeled topographies.
Standard and inverse bond percolation of straight rigid rods on square lattices
NASA Astrophysics Data System (ADS)
Ramirez, L. S.; Centres, P. M.; Ramirez-Pastor, A. J.
2018-04-01
Numerical simulations and finite-size scaling analysis have been carried out to study standard and inverse bond percolation of straight rigid rods on square lattices. In the case of standard percolation, the lattice is initially empty. Then, linear bond k-mers (sets of k linear nearest-neighbor bonds) are randomly and sequentially deposited on the lattice. Jamming coverage p_{j,k} and percolation threshold p_{c,k} are determined for a wide range of k (1 ≤ k ≤ 120). p_{j,k} and p_{c,k} exhibit a decreasing behavior with increasing k, with p_{j,k→∞} = 0.7476(1) and p_{c,k→∞} = 0.0033(9) being the limit values for large k-mer sizes. p_{j,k} is always greater than p_{c,k}, and consequently, the percolation phase transition occurs for all values of k. In the case of inverse percolation, the process starts with an initial configuration where all lattice bonds are occupied and, given that periodic boundary conditions are used, the opposite sides of the lattice are connected by nearest-neighbor occupied bonds. Then, the system is diluted by randomly removing linear bond k-mers from the lattice. The central idea here is based on finding the maximum concentration of occupied bonds (minimum concentration of empty bonds) for which connectivity disappears. This particular value of concentration is called the inverse percolation threshold p^i_{c,k}, and determines a geometrical phase transition in the system. On the other hand, the inverse jamming coverage p^i_{j,k} is the coverage of the limit state, in which no more objects can be removed from the lattice due to the absence of linear clusters of nearest-neighbor bonds of appropriate size. It is easy to understand that p^i_{j,k} = 1 - p_{j,k}. The obtained results for p^i_{c,k} show that the inverse percolation threshold is a decreasing function of k in the range 1 ≤ k ≤ 18. For k > 18, all jammed configurations are percolating states, and consequently, there is no nonpercolating phase.
In other words, the lattice remains connected even when the highest allowed concentration of removed bonds p^i_{j,k} is reached. In terms of network attacks, this striking behavior indicates that random attacks on single nodes (k = 1) are much more effective than correlated attacks on groups of close nodes (large k). Finally, the accurate determination of critical exponents reveals that standard and inverse bond percolation models on square lattices belong to the same universality class as random percolation, regardless of the size k considered.
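The standard-percolation setup described above can be sketched for the simplest case k = 1 (single bonds) with a union-find structure. The lattice size, the seed, and the spanning criterion (left-to-right connectivity via virtual boundary nodes rather than periodic boundaries) are simplifying assumptions for illustration, not the authors' simulation code.

```python
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def percolation_threshold(L=32, seed=0):
    """Estimate the standard bond-percolation threshold (k = 1) on an
    L x L square lattice: occupy bonds in random order until a cluster
    spans left to right, and return the bond fraction at that moment."""
    rng = random.Random(seed)
    site = lambda x, y: y * L + x
    LEFT, RIGHT = L * L, L * L + 1            # virtual source/sink nodes
    bonds = []
    for y in range(L):
        for x in range(L):
            if x + 1 < L:
                bonds.append((site(x, y), site(x + 1, y)))
            if y + 1 < L:
                bonds.append((site(x, y), site(x, y + 1)))
    rng.shuffle(bonds)
    uf = UnionFind(L * L + 2)
    for y in range(L):                        # tie boundary columns to the
        uf.union(site(0, y), LEFT)            # virtual nodes
        uf.union(site(L - 1, y), RIGHT)
    for i, (a, b) in enumerate(bonds, 1):
        uf.union(a, b)
        if uf.find(LEFT) == uf.find(RIGHT):
            return i / len(bonds)
    return 1.0
```

For k = 1 the exact threshold on the square lattice is 1/2; averaging over seeds recovers this up to finite-size effects. Extending the sketch to k-mers means depositing aligned runs of k bonds at once, which drives the threshold down as the abstract reports.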
Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.
Giulianini, Franco; Eskew, Rhea T
2007-09-01
A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.
Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y
2018-03-08
Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.
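The classification of coplanar and collinear points can be illustrated with the classical (non-robust) covariance eigenvalue criterion: one dominant eigenvalue indicates a linear neighborhood, two indicate a planar one. The function name and the `flat_tol` threshold are assumptions for illustration; the paper's robust PCA procedure is not reproduced here.

```python
import numpy as np

def classify_neighborhood(points, flat_tol=0.05):
    """Classify a local 3-D point neighborhood as 'linear', 'planar',
    or 'volumetric' from the eigenvalues of its covariance matrix.
    `flat_tol` is an illustrative threshold, not a value from the paper."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
    l1, l2, l3 = evals / evals.sum()                 # normalized spectrum
    if l2 < flat_tol:              # only one significant direction
        return "linear"
    if l3 < flat_tol:              # two significant directions
        return "planar"
    return "volumetric"
```

Because the sample covariance has a breakdown point of zero, a single gross outlier can flip this classification, which motivates the robust estimator used in the paper.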
Sex and Age Differences in the Risk Threshold for Delinquency
Loeber, Rolf; Slotboom, Anne-Marie; Bijleveld, Catrien C. J. H.; Hipwell, Alison E.; Stepp, Stephanie D.; Koot, Hans M.
2015-01-01
This study examines sex differences in the risk threshold for adolescent delinquency. Analyses were based on longitudinal data from the Pittsburgh Youth Study (n = 503) and the Pittsburgh Girls Study (n = 856). The study identified risk factors, promotive factors, and accumulated levels of risk as predictors of delinquency and nondelinquency, respectively. The risk thresholds for boys and girls were established at two developmental stages (late childhood: ages 10–12 years, and adolescence: ages 13–16 years) and compared between boys and girls. Sex similarities as well as differences existed in risk and promotive factors for delinquency. ROC analyses revealed only small sex differences in delinquency thresholds, which varied by age. Cumulative risk level had a linear relationship with boys’ delinquency and a quadratic relationship with girls’ delinquency, indicating stronger effects for girls at higher levels of risk. PMID:23183920
Noise masking of S-cone increments and decrements.
Wang, Quanhong; Richters, David P; Eskew, Rhea T
2014-11-12
S-cone increment and decrement detection thresholds were measured in the presence of bipolar, dynamic noise masks. Noise chromaticities were the L-, M-, and S-cone directions, as well as L-M, L+M, and achromatic (L+M+S) directions. Noise contrast power was varied to measure threshold Energy versus Noise (EvN) functions. S+ and S- thresholds were similarly, and weakly, raised by achromatic noise. However, S+ thresholds were much more elevated by S, L+M, L-M, L- and M-cone noises than were S- thresholds, even though the noises consisted of two symmetric chromatic polarities of equal contrast power. A linear cone combination model accounts for the overall pattern of masking of a single test polarity well. L and M cones have opposite signs in their effects upon raising S+ and S- thresholds. The results strongly indicate that the psychophysical mechanisms responsible for S+ and S- detection, presumably based on S-ON and S-OFF pathways, are distinct, unipolar mechanisms, and that they have different spatiotemporal sampling characteristics, or contrast gains, or both. © 2014 ARVO.
Wilson, Raymond C.
1997-01-01
Broad-scale variations in long-term precipitation climate may influence rainfall/debris-flow threshold values along the U.S. Pacific coast, where both the mean annual precipitation (MAP) and the number of rainfall days (#RDs) are controlled by topography, distance from the coastline, and geographic latitude. Previous authors have proposed that rainfall thresholds are directly proportional to MAP, but this appears to hold only within limited areas (<1° latitude), where rainfall frequency (#RDs) is nearly constant. MAP-normalized thresholds underestimate the critical rainfall when applied to areas to the south, where the #RDs decrease, and overestimate threshold rainfall when applied to areas to the north, where the #RDs increase. For normalization between climates where both MAP and #RDs vary significantly, thresholds may best be described as multiples of the rainy-day normal, RDN = MAP/#RDs. Using data from several storms that triggered significant debris-flow activity in southern California, the San Francisco Bay region, and the Pacific Northwest, peak 24-hour rainfalls were plotted against RDN values, displaying a linear relationship with a lower bound at about 14 RDN. RDN ratios in this range may provide a threshold for broad-scale regional forecasting of debris-flow activity.
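The rainy-day normal and the ~14 RDN lower bound reported above reduce to a one-line computation (a sketch; the function name is an assumption):

```python
def debris_flow_threshold(map_mm, rainfall_days, multiple=14.0):
    """Rainy-day normal RDN = MAP / #RDs, and the ~14*RDN peak 24-hour
    rainfall threshold suggested by the storm data described above."""
    rdn = map_mm / rainfall_days
    return rdn, multiple * rdn
```

For example, a site with MAP = 1000 mm spread over 60 rainfall days has an RDN of about 16.7 mm, giving a debris-flow threshold of roughly 233 mm of peak 24-hour rainfall.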
Aghamohammadi, Mahdieh; Rödel, Reinhold; Zschieschang, Ute; Ocal, Carmen; Boschker, Hans; Weitz, R Thomas; Barrena, Esther; Klauk, Hagen
2015-10-21
The mechanisms behind the threshold-voltage shift in organic transistors due to functionalizing of the gate dielectric with self-assembled monolayers (SAMs) are still under debate. We address the mechanisms by which SAMs determine the threshold voltage, by analyzing whether the threshold voltage depends on the gate-dielectric capacitance. We have investigated transistors based on five oxide thicknesses and two SAMs with rather diverse chemical properties, using the benchmark organic semiconductor dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene. Unlike several previous studies, we have found that the dependence of the threshold voltage on the gate-dielectric capacitance is completely different for the two SAMs. In transistors with an alkyl SAM, the threshold voltage does not depend on the gate-dielectric capacitance and is determined mainly by the dipolar character of the SAM, whereas in transistors with a fluoroalkyl SAM the threshold voltages exhibit a linear dependence on the inverse of the gate-dielectric capacitance. Kelvin probe force microscopy measurements indicate this behavior is attributed to an electronic coupling between the fluoroalkyl SAM and the organic semiconductor.
A Continuous Threshold Expectile Model.
Zhang, Feipeng; Li, Qunhua
2017-12-01
Expectile regression is a useful tool for exploring the relation between the response and the explanatory variables beyond the conditional mean. A continuous threshold expectile regression is developed for modeling data in which the effect of a covariate on the response variable is linear but varies below and above an unknown threshold in a continuous way. The estimators for the threshold and the regression coefficients are obtained using a grid search approach. The asymptotic properties for all the estimators are derived, and the estimator for the threshold is shown to achieve root-n consistency. A weighted CUSUM type test statistic is proposed for the existence of a threshold at a given expectile, and its asymptotic properties are derived under both the null and the local alternative models. This test only requires fitting the model under the null hypothesis in the absence of a threshold, so it is computationally more efficient than likelihood-ratio type tests. Simulation studies show that the proposed estimators and test have desirable finite sample performance in both homoscedastic and heteroscedastic cases. The application of the proposed method to a Dutch growth dataset and a baseball pitcher salary dataset reveals interesting insights. The proposed method is implemented in the R package cthreshER.
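The grid-search estimation of the bent-line model can be sketched as follows, using iteratively reweighted least squares for the expectile fit. This is a minimal illustration with assumed function names and grid; it is not the cthreshER implementation, and it omits the standard errors and the CUSUM test.

```python
import numpy as np

def expectile_fit(X, y, tau, iters=20):
    """Asymmetric least squares (expectile) regression by iteratively
    reweighted least squares: weight tau above the fit, 1-tau below."""
    w = np.full(len(y), 0.5)
    for _ in range(iters):
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
        w = np.where(y - X @ beta > 0, tau, 1.0 - tau)
    loss = float(np.sum(w * (y - X @ beta) ** 2))
    return beta, loss

def fit_continuous_threshold(x, y, tau=0.5, n_grid=81):
    """Grid search for the kink t in the continuous threshold model
    y = b0 + b1*x + b2*(x - t)_+ at expectile level tau."""
    grid = np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), n_grid)
    best_loss, best_t, best_beta = np.inf, None, None
    for t in grid:
        # (x - t)_+ makes the fit continuous at the threshold
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - t, 0.0)])
        beta, loss = expectile_fit(X, y, tau)
        if loss < best_loss:
            best_loss, best_t, best_beta = loss, t, beta
    return best_t, best_beta
```

At tau = 0.5 the expectile fit reduces to ordinary least squares, so the sketch recovers the mean-regression bent-line model as a special case.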
Beyea, Jan
2016-07-01
It is not true that successive groups of researchers from academia and research institutions-scientists who served on panels of the US National Academy of Sciences (NAS)-were duped into supporting a linear no-threshold model (LNT) by the opinions expressed in the genetic panel section of the 1956 "BEAR I" report. Successor reports had their own views of the LNT model, relying on mouse and human data, not fruit fly data. Nor was the 1956 report biased and corrupted, as has been charged in an article by Edward J. Calabrese in this journal. With or without BEAR I, the LNT model would likely have been accepted in the US for radiation protection purposes in the 1950s. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Abbott, Allan; Ghasemi-Kafash, Elaheh; Dedering, Åsa
2014-10-01
The purpose of this study was to evaluate the validity and preference for assessing pain magnitude with electrocutaneous testing (ECT) compared to the visual analogue scale (VAS) and Borg CR10 scale in men and women with cervical radiculopathy of varying sensory phenotypes. An additional purpose was to investigate ECT sensory and pain thresholds in men and women with cervical radiculopathy of varying sensory phenotypes. This is a cross-sectional study of 34 patients with cervical radiculopathy. Scatterplots and linear regression were used to investigate bivariate relationships between ECT, VAS and Borg CR10 methods of pain magnitude measurement as well as ECT sensory and pain thresholds. The use of the ECT pain magnitude matching paradigm for patients with cervical radiculopathy with normal sensory phenotype shows good linear association with arm pain VAS (R(2) = 0.39), neck pain VAS (R(2) = 0.38), arm pain Borg CR10 scale (R(2) = 0.50) and neck pain Borg CR10 scale (R(2) = 0.49), suggesting acceptable validity of the procedure. For patients with hypoesthesia and hyperesthesia sensory phenotypes, the ECT pain magnitude matching paradigm does not show adequate linear association with rating scale methods, rendering the validity of the procedure doubtful. ECT for sensory and pain threshold investigation, however, provides a method to objectively assess global sensory function in conjunction with sensory receptor specific bedside examination measures.
On the linear stability of blood flow through model capillary networks.
Davis, Jeffrey M
2014-12-01
Under the approximation that blood behaves as a continuum, a numerical implementation is presented to analyze the linear stability of capillary blood flow through model tree and honeycomb networks that are based on the microvascular structures of biological tissues. The tree network is comprised of a cascade of diverging bifurcations, in which a parent vessel bifurcates into two descendent vessels, while the honeycomb network also contains converging bifurcations, in which two parent vessels merge into one descendent vessel. At diverging bifurcations, a cell partitioning law is required to account for the nonuniform distribution of red blood cells as a function of the flow rate of blood into each descendent vessel. A linearization of the governing equations produces a system of delay differential equations involving the discharge hematocrit entering each network vessel and leads to a nonlinear eigenvalue problem. All eigenvalues in a specified region of the complex plane are captured using a transformation based on contour integrals to construct a linear eigenvalue problem with identical eigenvalues, which are then determined using a standard QR algorithm. The predicted value of the dimensionless exponent in the cell partitioning law at the instability threshold corresponds to a supercritical Hopf bifurcation in numerical simulations of the equations governing unsteady blood flow. Excellent agreement is found between the predictions of the linear stability analysis and nonlinear simulations. The relaxation of the assumption of plug flow made in previous stability analyses typically has a small, quantitative effect on the stability results that depends on the specific network structure. This implementation of the stability analysis can be applied to large networks with arbitrary structure provided only that the connectivity among the network segments is known.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
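The variance-inflation-factor check against the literature threshold of five can be sketched with the standard VIF definition (the function name is an assumption; this is not NASA's balance calibration code):

```python
import numpy as np

def max_vif(columns):
    """Maximum variance inflation factor over a set of regressor
    columns: VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from
    regressing column j on all the others (plus an intercept)."""
    A = np.column_stack(columns).astype(float)
    n, p = A.shape
    vifs = []
    for j in range(p):
        yj = A[:, j]
        Xj = np.column_stack([np.ones(n), np.delete(A, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Xj, yj, rcond=None)
        resid = yj - Xj @ beta
        ss_tot = np.sum((yj - yj.mean()) ** 2)
        r2 = 1.0 - np.sum(resid ** 2) / ss_tot
        vifs.append(1.0 / (1.0 - r2))
    return max(vifs)
```

Per the test described above, the check would be run once on the applied load components and once on the measured bridge outputs; both maxima must stay below five for the load-to-output mapping to be considered unique and reversible.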
Michel, Franck; Jørgensen, Kristoffer Foldager
2017-02-01
The objective of this study is to compare air-conduction thresholds obtained with ASSR evoked by narrow band (NB) CE-chirps and ABR evoked by tone pips (tpABR) in infants with various degrees of hearing loss. Thresholds were measured at 500, 1000, 2000 and 4000 Hz. Data on each participant were collected on the same day. Sixty-seven infants aged 4 days to 22 months (median age = 96 days) were included, yielding 57, 52, 87 and 56 ears at 500, 1000, 2000 and 4000 Hz, respectively. Statistical analysis was performed for ears with hearing loss (HL) and showed a very strong correlation between tpABR and ASSR evoked by NB CE-chirps: 0.90 (n = 28), 0.90 (n = 28), 0.96 (n = 42) and 0.95 (n = 30) for 500, 1000, 2000 and 4000 Hz, respectively. At these frequencies, the mean difference between tpABR and ASSR was -3.6 dB (±7.0), -5.2 dB (±7.3), -3.9 dB (±5.2) and -5.2 dB (±4.7). Linear regression analysis indicated that the relationship was not influenced by the degree of hearing loss. We propose that dB nHL to dB eHL correction values for ASSR evoked by NB CE-chirps should be 5 dB lower than the values used for tpABR.
NASA Astrophysics Data System (ADS)
Weerasinghe, H. W. Kushan; Dadashzadeh, Neda; Thirugnanasambandam, Manasadevi P.; Debord, Benoît.; Chafer, Matthieu; Gérôme, Frédéric; Benabid, Fetah; Corwin, Kristan L.; Washburn, Brian R.
2018-02-01
The effect of gas pressure, fiber length, and optical pump power on an acetylene mid-infrared hollow-core optical fiber gas laser (HOFGLAS) is experimentally determined in order to scale the laser to higher powers. The absorbed optical power and threshold power are measured for different pressures providing an optimum pressure for a given fiber length. We observe a linear dependence of both absorbed pump energy and lasing threshold for the acetylene HOFGLAS, while maintaining a good mode quality with an M-squared of 1.15. The threshold and mode behavior are encouraging for scaling to higher pressures and pump powers.
Dimits shift in realistic gyrokinetic plasma-turbulence simulations.
Mikkelsen, D R; Dorland, W
2008-09-26
In simulations of turbulent plasma transport due to long wavelength (k⊥ρi ≤ 1) electrostatic drift-type instabilities, we find a persistent nonlinear up-shift of the effective threshold. Next-generation tokamaks will likely benefit from the higher effective threshold for turbulent transport, and transport models should incorporate suitable corrections to linear thresholds. The gyrokinetic simulations reported here are more realistic than previous reports of a Dimits shift because they include nonadiabatic electron dynamics, strong collisional damping of zonal flows, and finite electron and ion collisionality together with realistic shaped magnetic geometry. Reversing previously reported results based on idealized adiabatic electrons, we find that increasing collisionality reduces the heat flux because collisionality reduces the nonadiabatic electron microinstability drive.
[Approach to the Development of Mind and Persona].
Sawaguchi, Toshiko
2018-01-01
To help health specialists working in the regional health field access medical specialists, the possibility of utilizing a voice-analysis approach for dissociative identity disorder (DID) patients as a health assessment for medical access (HAMA) was investigated. The first step is to investigate whether the plural personae in a single DID patient can be discriminated by voice analysis. Voices of DID patients, including those with different personae, were extracted from YouTube and analysed using the software PRAAT with respect to fundamental frequency and oral, chin and tongue factors. In addition, RAKUGO storyteller voices, produced artificially and dramatically, were analysed in the same manner. Quantitative and qualitative analyses were carried out, and a nested logistic regression and a nested generalized linear model were developed. The voices from different personae in one DID patient could be visually and easily distinguished using the fundamental frequency curve, cluster analysis and factor analysis. In the canonical analysis, only Roy's maximum root was <0.01. In the nested generalized linear model, the model using a standard deviation (SD) indicator fit best, and some other possibilities are shown here. In DID patients, a short transition time among plural personae could signal a risky situation such as suicide, so if the voice approach can reveal the time threshold of changes between the different personae, it would be useful as an Access Assessment in the form of a simple HAMA.
NASA Astrophysics Data System (ADS)
Michoski, Craig; Janhunen, Salomon; Faghihi, Danial; Carey, Varis; Moser, Robert
2017-10-01
The suppression of micro-turbulence and ultimately the inhibition of large-scale instabilities observed in tokamak plasmas is partially characterized by the onset of a global stationary state. This stationary attractor corresponds experimentally to a state of ``marginal stability'' in the plasma. The critical threshold that characterizes the onset in the nonlinear regime is observed both experimentally and numerically to exhibit an upshift relative to the linear theory. That is, the onset in the stationary state is up-shifted from that predicted by the linear theory as a function of the ion temperature gradient R0 /LT . Because the transition to this state with enhanced transport and therefore reduced confinement times is inaccessible to the linear theory, strategies for developing nonlinear reduced physics models to predict the upshift have been ongoing. As a complement to these efforts, the principal aim of this work is to establish low-fidelity surrogate models that can be used to predict instability-driven loss of confinement using training data from high-fidelity models. DE-SC0008454 and DE-AC02-09CH11466.
MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, L; Nyflot, M; Ford, E
2016-06-15
Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, most of which thresholding discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitudes of 17 intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features, using 180 gamma images, to classify images as with or without errors. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors.
This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
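The two feature families found most significant, histogram kurtosis and size-zone metrics, can be illustrated on a 2-D gamma map. This is a sketch with assumed names and a single size-zone surrogate (largest connected failing region); the full 17-feature radiomics set and the trained classifier are not reproduced.

```python
import numpy as np

def gamma_features(gamma_map, high=1.0):
    """Two illustrative features of a 2-D gamma distribution:
    excess kurtosis of the value histogram, and the size of the
    largest 4-connected zone of failing (gamma > high) pixels."""
    g = np.asarray(gamma_map, dtype=float)
    v = g.ravel()
    z = (v - v.mean()) / v.std()
    kurtosis = float(np.mean(z ** 4) - 3.0)
    mask = g > high                      # failing pixels
    seen = np.zeros_like(mask, dtype=bool)
    largest = 0
    for idx in zip(*np.nonzero(mask)):   # flood fill each failing zone
        if seen[idx]:
            continue
        stack, size = [idx], 0
        seen[idx] = True
        while stack:
            y, x = stack.pop()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        largest = max(largest, size)
    return kurtosis, largest
```

The size-zone surrogate captures exactly the effect described in the Results: a mispositioned MLC leaf produces one large cluster of failing pixels, whereas random noise produces many isolated ones, even when the overall passing rate is identical.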
NASA Astrophysics Data System (ADS)
Takahashi, Hajime; Hanafusa, Yuki; Kimura, Yoshinari; Kitamura, Masatoshi
2018-03-01
Oxygen plasma treatment has been carried out to control the threshold voltage in organic thin-film transistors (TFTs) having a SiO2 gate dielectric prepared by rf sputtering. The threshold voltage linearly changed in the range of -3.7 to 3.1 V with the increase in plasma treatment time. Although the amount of change is smaller than that for organic TFTs having thermally grown SiO2, the tendency of the change was similar to that for thermally grown SiO2. To realize different plasma treatment times on the same substrate, a certain region on the SiO2 surface was selected using a shadow mask, and was treated with oxygen plasma. Using the process, organic TFTs with negative threshold voltages and those with positive threshold voltages were fabricated on the same substrate. As a result, enhancement/depletion inverters consisting of the organic TFTs operated at supply voltages of 5 to 15 V.
FOKKER-PLANCK ANALYSIS OF TRANSVERSE COLLECTIVE INSTABILITIES IN ELECTRON STORAGE RINGS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindberg, R. R.
We analyze single bunch transverse instabilities due to wakefields using a Fokker-Planck model. We expand on the work of Suzuki [1], writing out the linear matrix equation including chromaticity, both dipolar and quadrupolar transverse wakefields, and the effects of damping and diffusion due to the synchrotron radiation. The eigenvalues and eigenvectors determine the collective stability of the beam, and we show that the predicted threshold current for transverse instability and the profile of the unstable mode agree well with tracking simulations. In particular, we find that predicting collective stability for high energy electron beams at moderate to large values of chromaticity requires the full Fokker-Planck analysis to properly account for the effects of damping and diffusion due to synchrotron radiation.
Flaw Growth of 6Al-4V Titanium in a Freon TF Environment
NASA Technical Reports Server (NTRS)
Tiffany, C. F.; Masters, J. N.; Bixler, W. D.
1969-01-01
The plane strain threshold stress intensity and sustained stress flaw growth rates were experimentally determined for 6Al-4V S.T.A. titanium forging and weldments in environments of Freon TF at room temperature. Sustained load tests of surface flawed specimens were conducted with the experimental approach based on linear elastic fracture mechanics. It was concluded that sustained stress flaw growth rates, in conjunction with threshold stress intensities, can be used in assessing the service life of pressure vessels.
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
Stevanovic, Stefan; Pervan, Boris
2018-01-01
We propose a novel GPS phase-lock loop (PLL) performance metric based on the standard deviation of tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) as compared to traditional linear models. The standard deviation of tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable/robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. PMID:29351250
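A rough illustration of the proposed metric follows. The noise level is an arbitrary assumption; the threshold of π/4 comes from the abstract's rule of half the arctangent discriminator pull-in region, which spans (-π/2, π/2).

```python
# Compute the standard deviation of simulated discriminator phase-error
# samples and compare it to pi/4 (half the arctangent pull-in region).
# The AWGN level (0.3 rad) is invented for illustration.
import math
import random

random.seed(1)
tracking_error = [random.gauss(0.0, 0.3) for _ in range(10_000)]  # radians

mean = sum(tracking_error) / len(tracking_error)
std = math.sqrt(sum((e - mean) ** 2 for e in tracking_error) / len(tracking_error))

threshold = math.pi / 4
print("PLL healthy" if std < threshold else "PLL at risk of losing lock")
```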
Chen, Xilin; Gestring, Mark L; Rosengart, Matthew R; Peitzman, Andrew B; Billiar, Timothy R; Sperry, Jason L; Brown, Joshua B
2018-05-04
Trauma is a time-sensitive disease. Helicopter emergency medical services (HEMS) have shown benefit over ground EMS (GEMS), which may be related to reduced prehospital time. The distance at which this time benefit emerges depends on many factors that can vary across regions. Our objective was to determine the threshold distance at which HEMS has shorter prehospital time than GEMS under different conditions. Patients in the PA trauma registry 2000-2013 were included. Distance between zip centroid and trauma center was calculated using straight-line distance for HEMS and driving distance from GIS network analysis for GEMS. Contrast margins from linear regression identified the threshold distance at which HEMS had a significantly lower prehospital time than GEMS, indicated by non-overlapping 95% confidence intervals. The effect of peak traffic times and adverse weather on the threshold distance was evaluated. Geographic effects across EMS regions were also evaluated. A total of 144,741 patients were included, with 19% transported by HEMS. Overall, HEMS became faster than GEMS at 7.7 miles from the trauma center (p=0.043). HEMS became faster at 6.5 miles during peak traffic (p=0.025) compared to 7.9 miles during off-peak traffic (p=0.048). Adverse weather increased the distance at which HEMS was faster to 17.1 miles (p=0.046) from 7.3 miles in clear weather (p=0.036). Significant variation occurred across EMS regions, with threshold distances ranging from 5.4 miles to 35.3 miles. There was an inverse but non-significant relationship between urban population and threshold distance across EMS regions (ρ = -0.351, p=0.28). This is the first study to demonstrate that traffic, weather, and geographic region significantly impact the threshold distance at which HEMS is faster than GEMS. HEMS was faster at shorter distances during peak traffic, while adverse weather increased this distance. The threshold distance varied widely across geographic regions.
These factors must be considered to guide appropriate HEMS triage protocols. III, Therapeutic.
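The threshold-distance concept reduces to finding where two linear time-versus-distance models cross. A minimal sketch, with invented intercepts and slopes rather than the study's fitted values:

```python
# HEMS has a large fixed overhead (dispatch, lift-off) but gains little time
# per mile; GEMS starts quickly but accumulates time faster with distance.
# The crossover of the two lines is the threshold distance.

def crossover(intercept_h, slope_h, intercept_g, slope_g):
    """Distance (miles) at which HEMS time equals GEMS time.

    Intercepts in minutes, slopes in minutes per mile.
    """
    if slope_h >= slope_g:
        raise ValueError("HEMS must accumulate time more slowly to overtake GEMS")
    return (intercept_h - intercept_g) / (slope_g - slope_h)

# Illustrative coefficients only (chosen to give a threshold near the
# overall figure reported above, not taken from the paper).
d = crossover(intercept_h=25.0, slope_h=0.8, intercept_g=10.0, slope_g=2.75)
print(f"HEMS faster beyond ~{d:.1f} miles")
```

Shifting either intercept (e.g., adverse weather delaying HEMS launch) or the GEMS slope (peak traffic) moves the crossover, which is the mechanism behind the condition-dependent thresholds reported.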
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-08-01
Orthodontic tooth movement is a complex procedure that occurs due to various biomechanical changes in the periodontium. Optimal orthodontic forces yield maximum tooth movement, whereas forces beyond the optimal threshold can cause deleterious effects. Among the various types of tooth movement, intrusion and lingual root torque are associated with root resorption, especially in the incisors. Therefore, in this study, the stress patterns in the periodontal ligament (PDL) were evaluated under intrusion and lingual root torque using the finite element method (FEM). A three-dimensional (3D) FEM model of the maxillary incisors was generated using SOLIDWORKS modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM in ANSYS software using linear stress analysis. It was observed that with the application of an intrusive load, compressive stresses were distributed at the apex, whereas tensile stress was seen at the cervical margin. With the application of lingual root torque, maximum compressive stress was distributed at the apex and tensile stress was distributed throughout the PDL. For intrusive and lingual root torque movements, stress values over the PDL were within the range of optimal stress values as proposed by Lee, for the force system given by Proffit as optimum forces for orthodontic tooth movement, using linear properties.
Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B
2015-01-01
Background: Orthodontic tooth movement is a complex procedure that occurs due to various biomechanical changes in the periodontium. Optimal orthodontic forces yield maximum tooth movement, whereas forces beyond the optimal threshold can cause deleterious effects. Among the various types of tooth movement, intrusion and lingual root torque are associated with root resorption, especially in the incisors. Therefore, in this study, the stress patterns in the periodontal ligament (PDL) were evaluated under intrusion and lingual root torque using the finite element method (FEM). Materials and Methods: A three-dimensional (3D) FEM model of the maxillary incisors was generated using SOLIDWORKS modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM in ANSYS software using linear stress analysis. Results: It was observed that with the application of an intrusive load, compressive stresses were distributed at the apex, whereas tensile stress was seen at the cervical margin. With the application of lingual root torque, maximum compressive stress was distributed at the apex and tensile stress was distributed throughout the PDL. Conclusion: For intrusive and lingual root torque movements, stress values over the PDL were within the range of optimal stress values as proposed by Lee, for the force system given by Proffit as optimum forces for orthodontic tooth movement, using linear properties. PMID:26464555
A THRESHOLD ANALYSIS OF THE TUNNEL INJECTION LASER.
A new threshold analysis of the tunnel injection laser is given that differs from previous treatments in that an additional loss mechanism is...a slight increase in the threshold current density of the tunnel laser. For a device one millimeter long composed of GaAs at 77K, the threshold
Seabirds as indicators of marine food supplies: Cairns revisited
Piatt, John F.; Harding, Ann M.A.; Shultz, Michael T.; Speckman, Suzann G.; van Pelt, Thomas I.; Drew, Gary S.; Kettle, Arthur B.
2007-01-01
In his seminal paper about using seabirds as indicators of marine food supplies, Cairns (1987, Biol Oceanogr 5:261–271) predicted that (1) parameters of seabird biology and behavior would vary in curvilinear fashion with changes in food supply, (2) the threshold of prey density over which birds responded would be different for each parameter, and (3) different seabird species would respond differently to variation in food availability depending on foraging behavior and ability to adjust time budgets. We tested these predictions using data collected at colonies of common murre Uria aalge and black-legged kittiwake Rissa tridactyla in Cook Inlet, Alaska. (1) Of 22 seabird responses fitted with linear and non-linear functions, 16 responses exhibited significant curvilinear shapes, and Akaike’s information criterion (AIC) analysis indicated that curvilinear functions provided the best-fitting model for 12 of those. (2) However, there were few differences among parameters in their threshold to prey density, presumably because most responses ultimately depend upon a single threshold for prey acquisition at sea. (3) There were similarities and some differences in how species responded to variability in prey density. Both murres and kittiwakes minimized variability (CV < 15%) in their own body condition and growth of chicks in the face of high annual variability (CV = 69%) in local prey density. Whereas kittiwake breeding success (CV = 63%, r2 = 0.89) reflected prey variability, murre breeding success did not (CV = 29%, r2 < 0.00). It appears that murres were able to buffer breeding success by reallocating discretionary ‘loafing’ time to foraging effort in response (r2 = 0.64) to declining prey density. Kittiwakes had little or no discretionary time, so fledging success was a more direct function of local prey density. Implications of these results for using ‘seabirds as indicators’ are discussed.
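The linear-versus-curvilinear model selection step can be sketched as below. The data and the saturating (Michaelis-Menten style) functional form are illustrative assumptions, not the study's values; AIC is computed from the Gaussian-likelihood identity.

```python
# Fit a linear and a saturating response to hypothetical seabird data and
# compare by AIC (lower is better).
import math

prey = [0.5, 1, 2, 4, 8, 16]                  # prey density index (invented)
resp = [0.18, 0.33, 0.52, 0.66, 0.74, 0.77]   # e.g., breeding success (invented)

def rss(pred):
    return sum((y - p) ** 2 for y, p in zip(resp, pred))

def aic(rss_val, k, n):
    # Gaussian-likelihood AIC up to a constant: n*ln(RSS/n) + 2k
    return n * math.log(rss_val / n) + 2 * k

n = len(prey)
# Linear fit y = a + b*x via closed-form least squares.
mx, my = sum(prey) / n, sum(resp) / n
b = sum((x - mx) * (y - my) for x, y in zip(prey, resp)) / \
    sum((x - mx) ** 2 for x in prey)
a = my - b * mx
aic_lin = aic(rss([a + b * x for x in prey]), k=2, n=n)

# Curvilinear fit y = Vm*x/(K+x) via a coarse grid search over (Vm, K).
best = min(((vm, kk) for vm in [v / 100 for v in range(50, 121)]
            for kk in [q / 10 for q in range(1, 51)]),
           key=lambda p: rss([p[0] * x / (p[1] + x) for x in prey]))
aic_cur = aic(rss([best[0] * x / (best[1] + x) for x in prey]), k=2, n=n)
print(aic_cur < aic_lin)  # the saturating model wins on these data
```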
A time series study on the effects of cold temperature on road traffic injuries in Seoul, Korea.
Lee, Won-Kyung; Lee, Hye-Ah; Hwang, Seung-sik; Kim, Ho; Lim, Youn-Hee; Hong, Yun-Chul; Ha, Eun-Hee; Park, Hyesook
2014-07-01
Although traffic accidents are associated with weather, the influence of temperature on injuries from traffic accidents has not been evaluated sufficiently. The objective of this study was to evaluate the effect of temperature, especially cold temperatures, on injuries from traffic accidents in Seoul, Korea. We also explored the relationship of temperature with different types of traffic accident. The daily frequencies of injuries from traffic accidents in Seoul were summarized from the integrated database established by the Korea Road Traffic Authority. Weather data included temperature, barometric pressure, rainfall, snow, and fog from May 2007 to December 2011. The qualitative relationship between daily mean temperature and injuries from traffic accidents was evaluated using a generalized additive model with Poisson distribution. Further analysis was performed using piecewise linear regression if the graph showed non-linearity with a threshold. The incidence of injuries was 216 per 100,000 person-months in Seoul. The effect of temperature on injuries from traffic accidents was minimal during spring and summer. However, injuries showed a more striking relationship with temperature in winter than in other seasons. In winter, the number of injuries increased as the temperature decreased to <0°C. Injuries increased by 2.1% per 1°C decrease below the daily mean temperature threshold of -5.7°C, a 10-fold greater effect than that of temperature above the threshold. Some groups, such as young and male drivers, were more susceptible to injury in certain types of traffic accident when the temperature fell below freezing. The incidence of injuries increased sharply when the temperature dropped below freezing in winter. Temperature can thus be used to signal a high risk of road traffic injuries and help prevent them. Copyright © 2014 Elsevier Inc. All rights reserved.
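The piecewise linear (threshold) regression step can be sketched with a grid search over candidate breakpoints, fitting a separate least-squares line on each side and keeping the breakpoint that minimizes the total residual sum of squares. The data below are synthetic and noiseless, shaped like the reported response (steeper increase below a threshold near -6 °C); this broken-stick fit is a simplification of the paper's model.

```python
def fit_line(xs, ys):
    """Closed-form least-squares intercept and slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def seg_rss(xs, ys):
    a, b = fit_line(xs, ys)
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def broken_stick_rss(tau, xs, ys):
    lo = [(x, y) for x, y in zip(xs, ys) if x < tau]
    hi = [(x, y) for x, y in zip(xs, ys) if x >= tau]
    return (seg_rss([p[0] for p in lo], [p[1] for p in lo]) +
            seg_rss([p[0] for p in hi], [p[1] for p in hi]))

temps = list(range(-15, 16))   # daily mean temperature (deg C), synthetic
# Injuries rise steeply (slope -8 per deg C) below a built-in threshold of -6.
injuries = [200 - 8.0 * (t + 6) if t < -6 else 200 - 0.8 * (t + 6) for t in temps]

best_tau = min(range(-13, 14),
               key=lambda tau: broken_stick_rss(tau, temps, injuries))
print(best_tau)
```

Because the synthetic data are continuous at the breakpoint, the grid search may land on the built-in threshold or the grid point just above it; real analyses profile the breakpoint over a fine grid or estimate it jointly with the slopes.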
Comparison between Humphrey Field Analyzer and Micro Perimeter 1 in normal and glaucoma subjects.
Ratra, Vineet; Ratra, Dhanashree; Gupta, Muneeswar; Vaitheeswaran, K
2012-05-01
To determine the correlation between fundus perimetry with Micro Perimeter 1 (MP1) and conventional automated static threshold perimetry using the Humphrey Field Analyzer (HFA) in healthy individuals and in subjects with glaucoma. In this study, we enrolled 45 eyes with glaucoma and 21 eyes of age-matched, healthy individuals. All subjects underwent complete ophthalmic examination. Differential light sensitivity was measured at 21 corresponding points in a rectangular test grid in both MP1 and HFA. Similar examination settings were used with Goldmann III stimulus, stimulus presentation time of 200 ms, and white background illumination (1.27 cd/m(2)). Statistical analysis was done with SPSS 14 using linear regression and independent t-test. The mean light thresholds of 21 matching points in the control group with MP1 and HFA were 14.97 ± 2.64 dB and 30.90 ± 2.08 dB, respectively. In subjects with glaucoma, the mean values were MP1: 11.73 ± 4.36 dB and HFA: 27.96 ± 5.41 dB. Mean difference of light thresholds between the two instruments was 15.86 ± 3.25 dB in normal subjects (P < 0.001) and 16.22 ± 2.77 dB in glaucoma subjects (P < 0.001). Pearson correlation analysis of the HFA and MP1 results for each test point location in both cases and control subjects showed significant positive correlation (controls, r = 0.439, P = 0.047; glaucoma subjects, r = 0.812, P < 0.001). There was no difference between nasal and temporal points but a slight vertical asymmetry was observed with MP1. There are significant and reproducible differences in the differential light threshold in MP1 and HFA in both normal and glaucoma subjects. We found a correction factor of 17.271 for comparison of MP1 with HFA. MP1 appeared to be more sensitive in predicting loss in glaucoma.
González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier
2008-01-01
Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions are variable in copy number in individuals suffering from disorders such as Prader-Willi, DiGeorge or autism, in which the method showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, resulting in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760
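A simplified sketch of the per-sample threshold idea follows. The dosage ratios and the widening factor `k` are invented, and a plain mean ± k·SD band stands in for the paper's mixed-model tolerance intervals; the point illustrated is only that the band is derived from each test sample's own reference-probe variability.

```python
# Estimate a test sample's residual variability from its reference probes
# and flag target probes whose dosage ratio falls outside the band.
import statistics

reference_ratios = [0.98, 1.03, 0.95, 1.05, 1.01, 0.99, 1.02, 0.97]  # invented
test_probes = {"probe_A": 1.02, "probe_B": 0.48, "probe_C": 1.51}    # invented

mu = statistics.mean(reference_ratios)
sd = statistics.stdev(reference_ratios)
k = 4.0   # widening factor standing in for the tolerance-interval multiplier

calls = {name: ("altered" if abs(r - mu) > k * sd else "normal")
         for name, r in test_probes.items()}
print(calls)
```

A noisy sample yields a larger `sd` and hence a wider band, which is the mechanism by which sample-specific thresholds trade off sensitivity against false calls.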
Evolutionary dynamics of general group interactions in structured populations
NASA Astrophysics Data System (ADS)
Li, Aming; Broom, Mark; Du, Jinming; Wang, Long
2016-02-01
The evolution of populations is influenced by many factors, and the simple classical models have been developed in a number of important ways. Both population structure and multiplayer interactions have been shown to significantly affect the evolution of important properties, such as the level of cooperation or of aggressive behavior. Here we combine these two key factors and develop the evolutionary dynamics of general group interactions in structured populations represented by regular graphs. The traditional linear and threshold public goods games are adopted as models to address the dynamics. We show that for linear group interactions, population structure can favor the evolution of cooperation compared to the well-mixed case, and we see that the more neighbors there are, the harder it is for cooperators to persist in structured populations. We further show that threshold group interactions could lead to the emergence of cooperation even in well-mixed populations. Here population structure sometimes inhibits cooperation for the threshold public goods game, where depending on the benefit to cost ratio, the outcomes are bistability or a monomorphic population of defectors or cooperators. Our results suggest, counterintuitively, that structured populations are not always beneficial for the evolution of cooperation for nonlinear group interactions.
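The two group-interaction payoffs contrasted above differ only in how the group benefit depends on the number of contributors. A toy sketch with invented parameters (cost c, multiplier r, benefit b, quorum m):

```python
# Linear public goods game: each contribution adds r*c/N to everyone's share.
def linear_pgg(n_cooperators, N, c=1.0, r=3.0):
    """Per-player payoffs (cooperator, defector) in a linear PGG."""
    share = r * c * n_cooperators / N
    return share - c, share

# Threshold public goods game: the benefit b is produced only if at least
# m of the N players contribute.
def threshold_pgg(n_cooperators, N, c=1.0, b=5.0, m=3):
    """Per-player payoffs (cooperator, defector) in a threshold PGG."""
    benefit = b if n_cooperators >= m else 0.0
    return benefit - c, benefit

print(linear_pgg(2, 5))     # cooperators always earn less than defectors
print(threshold_pgg(3, 5))  # quorum met: contributing still pays off
```

In the linear game a defector always out-earns a cooperator by exactly c, whereas in the threshold game a pivotal contributor can gain by contributing, which is the non-linearity behind the bistable outcomes described above.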
Networks of conforming or nonconforming individuals tend to reach satisfactory decisions.
Ramazi, Pouria; Riehl, James; Cao, Ming
2016-11-15
Binary decisions of agents coupled in networks can often be classified into two types: "coordination," where an agent takes an action if enough neighbors are using that action, as in the spread of social norms, innovations, and viral epidemics, and "anticoordination," where too many neighbors taking a particular action causes an agent to take the opposite action, as in traffic congestion, crowd dispersion, and division of labor. Both of these cases can be modeled using linear-threshold-based dynamics, and a fundamental question is whether the individuals in such networks are likely to reach decisions with which they are satisfied. We show that, in the coordination case, and perhaps more surprisingly, also in the anticoordination case, the agents will indeed always tend to reach satisfactory decisions, that is, the network will almost surely reach an equilibrium state. This holds for every network topology and every distribution of thresholds, for both asynchronous and partially synchronous decision-making updates. These results reveal that irregular network topology, population heterogeneity, and partial synchrony are not sufficient to cause cycles or nonconvergence in linear-threshold dynamics; rather, other factors such as imitation or the coexistence of coordinating and anticoordinating agents must play a role.
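The convergence claim can be demonstrated on a toy coordination network. The graph, thresholds, and initial state below are invented, and ties go to action 1; a potential-function argument guarantees the asynchronous sweeps terminate at an equilibrium.

```python
# Asynchronous linear-threshold ("coordination") dynamics on a small graph:
# each agent adopts action 1 iff at least a threshold fraction of its
# neighbors play 1; agents update one at a time until no one wants to switch.
import random

random.seed(7)
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
threshold = {i: 0.5 for i in neighbors}      # uniform thresholds here
state = {0: 1, 1: 0, 2: 1, 3: 0, 4: 0}       # arbitrary initial actions

def best_response(i):
    frac = sum(state[j] for j in neighbors[i]) / len(neighbors[i])
    return 1 if frac >= threshold[i] else 0

changed, steps = True, 0
while changed and steps < 1000:              # asynchronous random sweeps
    changed = False
    for i in random.sample(list(neighbors), len(neighbors)):
        new = best_response(i)
        if new != state[i]:
            state[i], changed = new, True
    steps += 1

# At termination every agent is satisfied: an equilibrium, consistent with
# the convergence result described in the abstract.
print(all(state[i] == best_response(i) for i in neighbors))
```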
NASA Astrophysics Data System (ADS)
Alberti, Stefano; Battista Crosta, Giovanni; Rivolta, Carlo
2016-04-01
Rockslides are characterized by complex spatial and temporal evolution. Forecasting their behaviour is a hard task, due to non-linear displacement trends and the significant effects of seasonal or occasional events. The displacement rate and the landslide evolution are influenced by various factors like lithology, structural and hydrological settings, as well as meteo-climatic factors (e.g. snowmelt and rainfall). The nature of the relationships among these factors is clearly non-linear, site specific and even specific to each sector that can be individuated within the main landslide mass. In this contribution, total displacement and displacement rate time series are extracted from ground-based interferometric synthetic aperture radar (GB-InSAR) surveys, monitoring of optical targets by total stations, a GPS network and multi-parametric borehole probes. Different Early Warning domains, characterized by different velocity regimes (slow to fast domains) and with different sensitivity to external perturbations (e.g. snowmelt and rainfall), have been identified in previous studies at the two sites. The Mont de La Saxe rockslide (ca. 8 × 10⁶ m³) is located in the Upper Aosta Valley, and it has been intensively monitored since 2009 by the Valle D'Aosta Geological Survey. The Ruinon landslide (ca. 15 × 10⁶ to 20 × 10⁶ m³) is located in the Upper Valtellina (Lombardy region), and monitoring data, available since 2006, have been provided by ARPA Lombardia. Both phenomena are alpine deep-seated rockslides characterized by different displacement velocities, from a few centimetres to over 1 metre per year, and both have undergone exceptional accelerations during specific events. We experiment with the use of normal probability plots for the analysis of displacement rates of specific points belonging to different landslide sectors and recorded during almost ten years of monitoring.
These analyses allow us to define: (i) values with a specific probability expressed in terms of percentiles; and (ii) values at which a specific change in behaviour is observed, which could be associated with a specific type of triggering event (e.g. rainfall intensity, duration or amount; snowmelt amount). These values could be used to support the choice of threshold values for the management of Early Warning Systems, by also considering the minimization of false alarms. The analyses have been performed using data averaged over different time intervals so as to study the effects of noise on the threshold values. Analyses of the false alarms triggered by the choice of different threshold values (i.e. different percentiles) have also been performed. This could be an innovative approach to defining velocity thresholds for Early Warning systems and to analysing the quantitative data derived from remote sensing monitoring and field surveys, by linking them to both spatial and temporal changes.
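The percentile-threshold idea can be sketched directly. The displacement rates below are synthetic, and a simple nearest-rank percentile is used; for each candidate percentile the sketch counts how many monitoring days would have met or exceeded the threshold, a crude proxy for the alarm frequency discussed above.

```python
# Candidate early-warning thresholds as high percentiles of a monitoring
# record of displacement rates (mm/day, invented).

def percentile(data, q):
    """Nearest-rank percentile (q in [0, 100]) on a sorted copy."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(q / 100 * (len(s) - 1))))
    return s[k]

rates = [0.4, 0.5, 0.6, 0.5, 0.7, 0.9, 1.2, 0.8, 0.6, 2.5,
         3.1, 0.7, 0.5, 0.6, 4.2, 0.8, 0.9, 1.1, 0.6, 0.5]

for q in (90, 95, 99):
    thr = percentile(rates, q)
    alarms = sum(r >= thr for r in rates)
    print(f"P{q}: threshold {thr} mm/day, {alarms} alarm day(s) in record")
```

Averaging the rates over longer windows before taking percentiles, as the abstract describes, damps noise and shifts the resulting thresholds.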
Vucetić, Vlatko; Sentija, Davor; Sporis, Goran; Trajković, Nebojsa; Milanović, Zoran
2014-06-01
The purpose of this study was to compare two methods for determination of anaerobic threshold from two different treadmill protocols. Forty-eight Croatian runners of national rank (ten sprinters, fifteen 400-m runners, ten middle distance runners and thirteen long distance runners), mean age 21.7 +/- 5.1 years, participated in the study. They performed two graded maximal exercise tests on a treadmill, a standard ramp treadmill test (T(SR), speed increments of 1 km x h(-1) every 60 seconds) and a fast ramp treadmill test (T(FR), speed increments of 1 km x h(-1) every 30 seconds) to determine and compare the parameters at peak values and at heart rate at the deflection point (HR(DP)) and ventilation threshold (VT). There were no significant differences between protocols (p > 0.05) for peak values of oxygen uptake (VO(2max), 4.48 +/- 0.43 and 4.44 +/- 0.45 L x min(-1)), weight related VO(2max) (62.5 +/- 6.2 and 62.0 +/- 6.0 mL x kg(-1) x min(-1)), pulmonary ventilation (VE(max), 163.1 +/- 18.7 and 161.3 +/- 19.9 L x min(-1)) and heart rate (HR(max), 192.3 +/- 8.5 and 194.4 +/- 8.7 bpm) (T(FR) and T(SR), respectively). Moreover, no significant differences between T(FR) and T(SR) were found for VT and HR(DP) when expressed as VO2 and HR. However, there was a significant effect of ramp slope on running speed at VO(2max) and at the anaerobic threshold (AnT), independent of the method used (VT: 16.0 +/- 2.2 vs 14.9 +/- 2.2 km x h(-1); HR(DP): 16.5 +/- 1.9 vs 14.9 +/- 2.0 km x h(-1) for T(FR) and T(SR) respectively). Linear regression analysis revealed high between-test and between-method correlations for VO2, HR and running speed parameters (r = 0.78-0.89, p < 0.01). The present study has indicated that the VT and HR(DP) for running (VO2, ventilation, and heart rate at VT/HR(DP)) are independent of test protocol, while there is a significant effect of ramp slope on VT and HR(DP) when expressed as running speed. 
Moreover, this study demonstrates that the point of deflection from linearity of heart rate may be an accurate predictor of the anaerobic threshold in trained runners, independently of the protocol used.
Radiation characterization report for the GPS Receiver microcontroller chip. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-06-20
The overall objective of this characterization test was to determine the sensitivity of the Motorola 68332 32-bit microcontroller to radiation induced single event upset and latch-up (SEU/SEL). The microcontroller is a key component of the GPS Receiver which will be a subsystem of the satellite required for the "FORTE" experiment. Testing was conducted at the Single Event Effects Laboratory at Brookhaven National Laboratory. The results obtained included a latch-up (SEL) threshold LET (Linear Energy Transfer) of 20 MeV-cm²/mg and an upset (SEU) threshold LET of 5 MeV-cm²/mg. The SEU threshold is typical of this technology, commercial 0.8 µm HCMOS. Some flow errors were observed that were not reset by the internal watchdog timer of the 68332. It is important that the Receiver design include a monitor of the device, such as an external watchdog timer, that would initiate a reset of the program when this type of upset occurs. The SEL threshold is lower than would be expected for this 12 µm epi-layer process and suggests the need for a strategy that would allow for a hard reset of the controller when a latch-up event occurs. Analysis of the galactic cosmic ray spectrum for the FORTE orbit was done and the results indicate a worst case latch-up rate for this device of 6.3 × 10⁻⁵ latch-ups per device day, or roughly one latch-up per 43.5 device years.
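The quoted rate conversion can be checked arithmetically: a latch-up rate of 6.3 × 10⁻⁵ per device-day inverts to a mean interval of about 43.5 device-years.

```python
# Convert a per-device-day event rate to a mean interval in device-years.
rate_per_day = 6.3e-5
mtbe_years = 1.0 / (rate_per_day * 365.25)   # mean time between events, years
print(round(mtbe_years, 1))                  # ~43.5, matching the abstract
```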
NASA Astrophysics Data System (ADS)
Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.
2017-12-01
Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP) using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge, which has previously been used as the default breakpoint in segmented regression models to discern differences between pre- and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics, as well as Bayesian regression intercepts and slopes. The SOM defined two clusters of high-flux basins: one in which PP flux was predominantly episodic and hydrologically driven, and another in which sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at pre-threshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response, but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
Optimizing fluence and debridement effects on cutaneous resurfacing carbon dioxide laser surgery.
Weisberg, N K; Kuo, T; Torkian, B; Reinisch, L; Ellis, D L
1998-10-01
To develop methods to compare carbon dioxide (CO2) resurfacing lasers, fluence, and debridement effects on tissue shrinkage and histological thermal denaturation. In vitro human or in vivo porcine skin samples received up to 5 passes with scanner or short-pulsed CO2 resurfacing lasers. Fluences ranging from 2.19 to 17.58 J/cm2 (scanner) and 1.11 to 5.56 J/cm2 (short pulsed) were used to determine each laser's threshold energy for clinical effect. Variable amounts of debridement were also studied. Tissue shrinkage was evaluated by using digital photography to measure linear distance change of the treated tissue. Tissue histological studies were evaluated using quantitative computer image analysis. Fluence-independent in vitro tissue shrinkage was seen with the scanned and short-pulsed lasers above threshold fluence levels of 5.9 and 2.5 J/cm2, respectively. Histologically, fluence-independent thermal depths of damage of 77 microns (scanner) and 25 microns (pulsed) were observed. Aggressive debridement of the tissue increased the shrinkage per pass of the laser, and decreased the fluence required for the threshold effect. In vivo experiments confirmed the in vitro results, although the in vivo threshold fluence level was slightly higher and the shrinkage obtained was slightly lower per pass. Our methods allow comparison of different resurfacing lasers' acute effects. We found equivalent laser tissue effects using lower fluences than those currently accepted clinically. This suggests that the morbidity associated with CO2 laser resurfacing may be minimized by lowering levels of tissue input energy and controlling for tissue debridement.
Absolute versus convective helical magnetorotational instability in a Taylor-Couette flow.
Priede, Jānis; Gerbeth, Gunter
2009-04-01
We analyze numerically the magnetorotational instability of a Taylor-Couette flow in a helical magnetic field [helical magnetorotational instability (HMRI)] using the inductionless approximation defined by a zero magnetic Prandtl number (Pr_{m}=0) . The Chebyshev collocation method is used to calculate the eigenvalue spectrum for small-amplitude perturbations. First, we carry out a detailed conventional linear stability analysis with respect to perturbations in the form of Fourier modes that corresponds to the convective instability which is not in general self-sustained. The helical magnetic field is found to extend the instability to a relatively narrow range beyond its purely hydrodynamic limit defined by the Rayleigh line. There is not only a lower critical threshold at which HMRI appears but also an upper one at which it disappears again. The latter distinguishes the HMRI from a magnetically modified Taylor vortex flow. Second, we find an absolute instability threshold as well. In the hydrodynamically unstable regime before the Rayleigh line, the threshold of absolute instability is just slightly above the convective one although the critical wavelength of the former is noticeably shorter than that of the latter. Beyond the Rayleigh line the lower threshold of absolute instability rises significantly above the corresponding convective one while the upper one descends significantly below its convective counterpart. As a result, the extension of the absolute HMRI beyond the Rayleigh line is considerably shorter than that of the convective instability. The absolute HMRI is supposed to be self-sustained and, thus, experimentally observable without any external excitation in a system of sufficiently large axial extension.
Chiang, H; Chang, K-C; Kan, H-W; Wu, S-W; Tseng, M-T; Hsueh, H-W; Lin, Y-H; Chao, C-C; Hsieh, S-T
2018-07-01
The study aimed to investigate the physiology, psychophysics, pathology, and their relationships in reversible nociceptive nerve degeneration, and the physiology of acute hyperalgesia. We enrolled 15 normal subjects to investigate intraepidermal nerve fibre (IENF) density, contact heat-evoked potentials (CHEP) and thermal thresholds during capsaicin-induced skin nerve degeneration and regeneration, and CHEP and thermal thresholds during capsaicin-induced acute hyperalgesia. After a 2-week capsaicin treatment, the IENF density of the skin was markedly reduced, with reduced amplitude and prolonged latency of CHEP and increased warm and heat pain thresholds. The time courses of skin nerve regeneration and the reversal of physiology and psychophysics differed: IENF density was still lower at 10 weeks after capsaicin treatment than at baseline, whereas CHEP amplitude and warm threshold normalized within 3 weeks after capsaicin treatment. Although CHEP amplitude and IENF density were best correlated in a multiple linear regression model, a one-phase exponential association model showed a better fit than a simple linear one; that is, in the regeneration phase, the slope of the regression line between CHEP amplitude and IENF density was steeper in the subgroup with lower IENF densities than in the one with higher IENF densities. During capsaicin-induced hyperalgesia, the recordable rate of CHEP to 43 °C heat stimulation was higher, with enhanced CHEP amplitude and pain perception compared to baseline. There was differential restoration of IENF density, CHEP, and thermal thresholds, and the CHEP-IENF relationship changed during skin reinnervation. CHEP can be a physiological signature of acute hyperalgesia. These observations suggest that the relationship between nociceptive nerve terminals and brain responses to thermal stimuli changes with different degrees of skin denervation, and that CHEP to a low-intensity heat stimulus can reflect the physiology of hyperalgesia.
© 2018 European Pain Federation - EFIC®.
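The one-phase exponential association fit described in the abstract above can be sketched as follows. The data, parameter values, and noise level are hypothetical stand-ins for the CHEP-amplitude vs. IENF-density measurements, chosen only to show why a saturating model out-fits a straight line:

```python
import numpy as np
from scipy.optimize import curve_fit

# One-phase exponential association: y approaches a plateau as x grows, so
# the slope is steeper at low x than at high x, mirroring the CHEP-IENF
# relationship described in the abstract. All values here are illustrative.
def exp_assoc(x, ymax, k):
    return ymax * (1.0 - np.exp(-k * x))

rng = np.random.default_rng(0)
x = np.linspace(0.5, 12.0, 40)                     # hypothetical IENF density
y = exp_assoc(x, ymax=30.0, k=0.4) + rng.normal(0, 1.0, x.size)

# Fit the exponential model.
p_exp, _ = curve_fit(exp_assoc, x, y, p0=(25.0, 0.3))
sse_exp = np.sum((y - exp_assoc(x, *p_exp)) ** 2)

# Fit a simple linear model for comparison.
slope, intercept = np.polyfit(x, y, 1)
sse_lin = np.sum((y - (slope * x + intercept)) ** 2)

print(sse_exp, sse_lin)
```

On saturating data like this, the exponential model's residual sum of squares is well below the linear model's, the same kind of comparison the study used to choose between the two.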
Desensitization of the cough reflex by exercise and voluntary isocapnic hyperpnea.
Lavorini, Federico; Fontana, Giovanni A; Chellini, Elisa; Magni, Chiara; Duranti, Roberto; Widdicombe, John
2010-05-01
Little is known about the effects of exercise on the sensory and cognitive aspects of coughing evoked by inhalation of tussigenic agents. The threshold for the cough reflex induced by inhalation of increasing nebulizer outputs of ultrasonically nebulized distilled water (fog), an index of cough reflex sensitivity, was assessed in twelve healthy humans in control conditions, during exercise and during voluntary isocapnic hyperpnea (VIH) at the same ventilatory level as the exercise. The intensity of the urge to cough (UTC), a cognitive component of coughing, was recorded throughout the trials on a linear scale. The relationships between inhaled fog nebulizer outputs and the correspondingly evoked UTC values, an index of the perceptual magnitude of the UTC sensitivity, were also calculated. Cough appearance was always assessed audiovisually. At an exercise level of 80% of anaerobic threshold, the median cough threshold was increased from a control value of 0.73 to 2.22 ml/min (P<0.01), i.e., cough sensitivity was downregulated. With VIH, the threshold increased from 0.73 to 2.22 ml/min (P<0.01), a similar downregulation. With exercise and VIH compared with control, mean UTC values at cough threshold were unchanged, i.e., control, 3.83 cm; exercise, 3.12 cm; VIH, 4.08 cm. The relationship of the fog nebulizer output/UTC value was linear in control conditions and logarithmic during both exercise and VIH. The perception of the magnitude of the UTC seems to be influenced by signals or sensations arising from exercising limb and thoracic muscles and/or by higher nervous (cortical) mechanisms. The results indicate that the adjustments brought into action by exercise-induced or voluntary hyperpnea exert inhibitory influences on the sensory and cognitive components of fog-induced cough.
Towards a threshold climate for emergency lower respiratory hospital admissions.
Islam, Muhammad Saiful; Chaussalet, Thierry J; Koizumi, Naoru
2017-02-01
Identification of 'cut-points' or thresholds of climate factors would play a crucial role in alerting to risks of climate change and providing guidance to policymakers. This study investigated a 'Climate Threshold' for emergency hospital admissions of chronic lower respiratory diseases by using a distributed lag non-linear model (DLNM). We analysed a unique longitudinal dataset (10 years, 2000-2009) on emergency hospital admissions, climate, and pollution factors for Greater London. Our study extends existing work on this topic by considering non-linearity and lag effects between climate factors and disease exposure within the DLNM model, using B-splines as the smoothing technique. The final model also considered natural cubic splines of time since exposure and 'day of the week' as confounding factors. The results of the DLNM indicated a significant improvement in model fit compared to a typical GLM model. The final model identified thresholds for several climate factors, including: high temperature (≥27°C), low relative humidity (≤40%), high PM10 level (≥70 µg/m³), low wind speed (≤2 knots) and high rainfall (≥30 mm). Beyond the threshold values, a significantly higher number of emergency admissions due to lower respiratory problems would be expected within the following 2-3 days after the climate shift in Greater London. The approach will be useful to initiate 'region and disease specific' climate mitigation plans. It will help identify spatial hot spots and the most sensitive areas and populations under climate change, and will eventually lead towards a diversified health warning system tailored to specific climate zones and populations. Copyright © 2016 Elsevier Inc. All rights reserved.
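The notion of a climate threshold can be illustrated with a hinge (linear-threshold) term whose breakpoint is chosen by profiling the fit over candidate values. This is a minimal sketch using ordinary least squares on synthetic data as a stand-in for the study's Poisson DLNM; the temperature range, effect size, and noise are all hypothetical, with only the 27°C breakpoint borrowed from the abstract:

```python
import numpy as np

# Hinge (linear-threshold) term: no effect below the threshold, a linear
# effect above it.
def hinge(x, t):
    return np.maximum(x - t, 0.0)

rng = np.random.default_rng(1)
temp = rng.uniform(5.0, 35.0, 500)        # hypothetical daily temperatures
true_t = 27.0                             # breakpoint taken from the abstract
y = 50.0 + 3.0 * hinge(temp, true_t) + rng.normal(0, 2.0, temp.size)

# Profile the residual sum of squares over candidate thresholds (OLS used
# here as a simple stand-in for the study's Poisson DLNM fit).
candidates = np.arange(20.0, 32.0, 0.25)
sse = []
for t in candidates:
    X = np.column_stack([np.ones_like(temp), hinge(temp, t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse.append(np.sum((y - X @ beta) ** 2))

best = candidates[int(np.argmin(sse))]
print(best)
```

The candidate minimizing the residual sum of squares recovers a breakpoint close to the one used to generate the data, which is the basic logic behind estimating a threshold from observational admissions data.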
Wang, Boshuo; Aberra, Aman S; Grill, Warren M; Peterchev, Angel V
2018-04-01
We present a theory and computational methods to incorporate transverse polarization of neuronal membranes into the cable equation to account for the secondary electric field generated by the membrane in response to transverse electric fields. The effect of transverse polarization on nonlinear neuronal activation thresholds is quantified and discussed in the context of previous studies using linear membrane models. The response of neuronal membranes to applied electric fields is derived under two time scales and a unified solution of transverse polarization is given for spherical and cylindrical cell geometries. The solution is incorporated into the cable equation re-derived using an asymptotic model that separates the longitudinal and transverse dimensions. Two numerical methods are proposed to implement the modified cable equation. Several common neural stimulation scenarios are tested using two nonlinear membrane models to compare thresholds of the conventional and modified cable equations. The implementations of the modified cable equation incorporating transverse polarization are validated against previous results in the literature. The test cases show that transverse polarization has limited effect on activation thresholds. The transverse field only affects thresholds of unmyelinated axons for short pulses and in low-gradient field distributions, whereas myelinated axons are mostly unaffected. The modified cable equation captures the membrane's behavior on different time scales and models more accurately the coupling between electric fields and neurons. It addresses the limitations of the conventional cable equation and allows sound theoretical interpretations. The implementation provides simple methods that are compatible with current simulation approaches to study the effect of transverse polarization on nonlinear membranes. 
The minimal influence by transverse polarization on axonal activation thresholds for the nonlinear membrane models indicates that predictions of stronger effects in linear membrane models with a fixed activation threshold are inaccurate. Thus, the conventional cable equation works well for most neuroengineering applications, and the presented modeling approach is well suited to address the exceptions.
Simulated mussel mortality thresholds as a function of mussel biomass and nutrient loading
Bril, Jeremy S.; Langenfeld, Kathryn; Just, Craig L.; Spak, Scott N.; Newton, Teresa
2017-01-01
A freshwater “mussel mortality threshold” was explored as a function of porewater ammonium (NH4+) concentration, mussel biomass, and total nitrogen (N) utilizing a numerical model calibrated with data from mesocosms with and without mussels. A mortality threshold of 2 mg-N L−1 porewater NH4+ was selected based on a study that estimated 100% mortality of juvenile Lampsilis mussels exposed to 1.9 mg-N L−1 NH4+ in equilibrium with 0.18 mg-N L−1 NH3. At the highest simulated mussel biomass (560 g m−2) and the lowest simulated influent water “food” concentration (0.1 mg-N L−1), the porewater NH4+ concentration after a 2,160 h timespan without mussels was 0.5 mg-N L−1 compared to 2.25 mg-N L−1 with mussels. Continuing these simulations while varying mussel biomass and N content yielded a mortality threshold contour that was essentially linear, which contradicted the non-linear and non-monotonic relationship suggested by Strayer (2014). Our model suggests that mussels spatially focus nutrients from the overlying water to the sediments, as evidenced by elevated porewater NH4+ in mesocosms with mussels. However, our previous work and the model utilized here show elevated concentrations of nitrite and nitrate in overlying waters as an indirect consequence of mussel activity. Even when the simulated overlying water food availability was quite low, the mortality threshold was reached at a mussel biomass of about 480 g m−2. At a food concentration of 10 mg-N L−1, the mortality threshold was reached at a biomass of about 250 g m−2. Our model suggests the mortality threshold for juvenile Lampsilis species could be exceeded at low mussel biomass if exposed for even a short time to the highly elevated total N loadings endemic to the agricultural Midwest.
Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K
2013-12-01
Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.
Messaoudi, Noureddine; Bekka, Raïs El'hadi; Ravier, Philippe; Harba, Rachid
2017-02-01
The purpose of this paper was to evaluate the effects of the longitudinal single differential (LSD), longitudinal double differential (LDD) and normal double differential (NDD) spatial filters, the electrode shape, and the inter-electrode distance (IED) on the non-Gaussianity and non-linearity levels of simulated surface EMG (sEMG) signals when the maximum voluntary contraction (MVC) varied from 10% to 100% in steps of 10%. The effects of the recruitment range thresholds (RR), the firing rate (FR) strategy and the peak firing rate (PFR) of motor units were also considered. A cylindrical multilayer model of the volume conductor and a model of motor unit (MU) recruitment and firing rate were used to simulate sEMG signals in a pool of 120 MUs for 5 s. First, the stationarity of the sEMG signals was tested by the runs, reverse arrangements (RA) and modified reverse arrangements (MRA) tests. Then non-Gaussianity was characterised with bicoherence and kurtosis, and non-linearity was evaluated with a linearity test. The kurtosis analysis showed that the sEMG signals detected by the LSD filter were the most Gaussian and those detected by the NDD filter were the least Gaussian. In addition, the sEMG signals detected by the LSD filter were the most linear. For a given filter, the sEMG signals detected using rectangular electrodes were more Gaussian and more linear than those detected with circular electrodes. Moreover, the sEMG signals were less non-Gaussian and more linear with the reverse onion-skin firing rate strategy than with the onion-skin strategy. The levels of sEMG signal Gaussianity and linearity increased with the increase of the IED, RR and PFR. Copyright © 2016 Elsevier Ltd. All rights reserved.
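Kurtosis as a Gaussianity measure, the statistic used in the abstract above, can be sketched with a toy composite signal. The sparse "motor-unit-like" trains, firing probability, and unit counts below are hypothetical, not the paper's simulation model; the sketch only shows the central-limit effect that summing more independent sources drives excess kurtosis toward zero (Gaussian):

```python
import numpy as np
from scipy.stats import kurtosis  # returns excess kurtosis (0 for Gaussian)

rng = np.random.default_rng(2)

def composite_signal(n_units, n_samples=20000):
    # Each "unit" is a sparse spiky train: mostly zero, occasional impulses.
    # This is an illustrative stand-in for a motor unit action potential train.
    spikes = rng.random((n_units, n_samples)) < 0.05
    trains = rng.normal(0, 1, (n_units, n_samples)) * spikes
    return trains.sum(axis=0)

k_few = kurtosis(composite_signal(3))     # few active units: super-Gaussian
k_many = kurtosis(composite_signal(200))  # many active units: near-Gaussian

print(k_few, k_many)
```

The few-unit signal has large positive excess kurtosis while the many-unit sum is close to zero, a simple analogue of the contraction-level-dependent Gaussianity levels reported in the study.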
Total and partial photoneutron cross sections for Pb isotopes
NASA Astrophysics Data System (ADS)
Kondo, T.; Utsunomiya, H.; Goriely, S.; Daoutidis, I.; Iwamoto, C.; Akimune, H.; Okamoto, A.; Yamagata, T.; Kamata, M.; Itoh, O.; Toyokawa, H.; Lui, Y.-W.; Harada, H.; Kitatani, F.; Hilaire, S.; Koning, A. J.
2012-07-01
Using quasimonochromatic laser-Compton scattering γ rays, total photoneutron cross sections were measured for 206,207,208Pb near the neutron threshold with a high-efficiency 4π neutron detector. Partial E1 and M1 photoneutron cross sections along with total cross sections were determined for 207,208Pb at four energies near threshold by measuring anisotropies in photoneutron emission with linearly polarized γ rays. The E1 strength dominates over the M1 strength in the neutron channel, where the E1 photoneutron cross sections show extra strength of the pygmy dipole resonance in 207,208Pb near the neutron threshold, corresponding to 0.32%-0.42% of the Thomas-Reiche-Kuhn sum rule. Several μN² units of B(M1)↑ strength were observed in 207,208Pb just above the neutron threshold, which correspond to an M1 cross section of less than 10% of the total photoneutron cross section.
An exploratory analysis of Indiana and Illinois biotic ...
EPA recognizes the importance of nutrient criteria in protecting designated uses from eutrophication effects associated with elevated phosphorus and nitrogen in streams and has worked with states over the past 12 years to assist them in developing nutrient criteria. Towards that end, EPA has provided states and tribes with technical guidance to assess nutrient impacts and to develop criteria. EPA published recommendations in 2000 on scientifically defensible empirical approaches for setting numeric criteria. EPA also published eco-regional criteria recommendations in 2000-2001 based on a frequency distribution approach meant to approximate reference condition concentrations. In 2010, EPA elaborated on one of these empirical approaches (i.e., stressor-response relationships) for developing nutrient criteria. The purpose of this report was to conduct exploratory analyses of state datasets from Illinois and Indiana to determine threshold values for nutrients and chlorophyll a that could guide Indiana and Illinois criteria development. Box and whisker plots were used to compare nutrient and chlorophyll a concentrations between Illinois and Indiana. Stressor response analyses, using piece-wise linear regression and change-point analysis (Illinois only), were conducted to determine thresholds of change in relationships between nutrients and biotic assemblages.
Passos, L T; Cruz, E A da; Fischer, V; Porciuncula, G C da; Werncke, D; Dalto, A G C; Stumpf, M T; Vizzotto, E F; da Silveira, I D B
2017-04-01
Lameness can negatively affect production, but there is still controversy about the perception of pain in dairy cows. This study aimed to verify the effects of hoof affections in dairy cows on locomotion score, physiological attributes, pressure nociceptive threshold, and thermographic variables, as well as to assess improvement in these variables after corrective trimming and treatment. Thirty-four lame lactating cows were gait-scored, and all cows with locomotion score ≥4 were retained for this study 1 day before trimming. Lame cows were diagnosed, pressure nociceptive thresholds at sound and affected hooves were measured, thermographic images were recorded, and physiological attributes were evaluated. Hooves with lesions were trimmed and treated, and cows were re-evaluated 1 week after such procedures. The experimental design was completely randomized. Each cow was considered an experimental unit, and traits were analyzed using paired t tests, linear correlation, and linear regression. Digital and interdigital dermatitis were classified as infectious diseases, while laminitis sequels, sole ulcers, and white line disease were classified as non-infectious diseases. After 1 week, the locomotion score was reduced on average by 1.5 points. Trimming increased the pressure nociceptive threshold for cows with non-infectious affections and tended to increase the pressure nociceptive threshold for cows with infectious affections. Physiological attributes and thermographic values did not change with trimming. Trimming and treatment have beneficial effects on animal welfare, as gait is improved and sensitivity to pain is reduced.
Kärkkäinen, Hanni P; Sillanpää, Mikko J
2013-09-04
Because of the increased availability of genome-wide sets of molecular markers along with reduced cost of genotyping large samples of individuals, genomic estimated breeding values have become an essential resource in plant and animal breeding. Bayesian methods for breeding value estimation have proven to be accurate and efficient; however, the ever-increasing data sets are placing heavy demands on the parameter estimation algorithms. Although a commendable number of fast estimation algorithms are available for Bayesian models of continuous Gaussian traits, there is a shortage for corresponding models of discrete or censored phenotypes. In this work, we consider a threshold approach of binary, ordinal, and censored Gaussian observations for Bayesian multilocus association models and Bayesian genomic best linear unbiased prediction and present a high-speed generalized expectation maximization algorithm for parameter estimation under these models. We demonstrate our method with simulated and real data. Our example analyses suggest that the use of the extra information present in an ordered categorical or censored Gaussian data set, instead of dichotomizing the data into case-control observations, increases the accuracy of genomic breeding values predicted by Bayesian multilocus association models or by Bayesian genomic best linear unbiased prediction. Furthermore, the example analyses indicate that the correct threshold model is more accurate than the directly used Gaussian model with a censored Gaussian data, while with a binary or an ordinal data the superiority of the threshold model could not be confirmed.
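The threshold model for discrete traits described above can be sketched in its simplest generative form: a latent Gaussian "liability" is cut into observed ordinal categories by fixed cut-points. The cut-point values below are hypothetical, and the sketch covers only the data-generating side, not the Bayesian estimation machinery of the paper:

```python
import numpy as np

# Threshold model for an ordinal trait: observed category = number of
# cut-points the latent Gaussian liability exceeds. Cut-points here are
# arbitrary illustrative values yielding three categories.
rng = np.random.default_rng(3)

cutpoints = np.array([-0.5, 0.8])

def to_ordinal(liability, cuts):
    # np.digitize returns 0 for values below the first cut, 1 between the
    # cuts, and 2 above the second cut.
    return np.digitize(liability, cuts)

liability = rng.normal(0.0, 1.0, 100_000)
y = to_ordinal(liability, cutpoints)

# Category frequencies should match the standard normal CDF between cuts:
# Phi(-0.5) ~ 0.3085, Phi(0.8) - Phi(-0.5) ~ 0.4796, 1 - Phi(0.8) ~ 0.2119.
freq = np.bincount(y, minlength=3) / y.size
print(freq)
```

Estimation then runs in the reverse direction: given the observed categories (and marker covariates), infer the liabilities and cut-points, which is what the generalized expectation maximization algorithm in the paper accelerates.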
Lafage, Renaud; Schwab, Frank; Challier, Vincent; Henry, Jensen K; Gum, Jeffrey; Smith, Justin; Hostin, Richard; Shaffrey, Christopher; Kim, Han J; Ames, Christopher; Scheer, Justin; Klineberg, Eric; Bess, Shay; Burton, Douglas; Lafage, Virginie
2016-01-01
Retrospective review of a prospective, multicenter database. The aim of the study was to determine age-specific spino-pelvic parameters, to extrapolate age-specific Oswestry Disability Index (ODI) values from published Short Form (SF)-36 Physical Component Score (PCS) data, and to propose age-specific realignment thresholds for adult spinal deformity (ASD). The Scoliosis Research Society-Schwab classification offers a framework for defining alignment in patients with ASD. Although age-specific changes in spinal alignment and patient-reported outcomes have been established in the literature, their relationship in the setting of ASD operative realignment has not been reported. ASD patients who received operative or nonoperative treatment were consecutively enrolled. Patients were stratified by age, consistent with published US-normative values (Norms) of the SF-36 PCS (<35, 35-44, 45-54, 55-64, 65-74, >75 y old). At baseline, relationships between radiographic spino-pelvic parameters (lumbar-pelvic mismatch [PI-LL], pelvic tilt [PT], sagittal vertical axis [SVA], and T1 pelvic angle [TPA]), age, and PCS were established using linear regression analysis; normative PCS values were then used to establish age-specific targets. Correlation analysis with ODI and PCS was used to determine age-specific ideal alignment. Baseline analysis included 773 patients (53.7 y old, 54% operative, 83% female). There was a strong correlation between ODI and PCS (r = 0.814, P < 0.001), allowing for the extrapolation of US-normative ODI by age group. Linear regression analysis (all with r > 0.510, P < 0.001) combined with US-normative PCS values demonstrated that ideal spino-pelvic values increased with age, ranging from PT = 10.9 degrees, PI-LL = -10.5 degrees, and SVA = 4.1 mm for patients under 35 years to PT = 28.5 degrees, PI-LL = 16.7 degrees, and SVA = 78.1 mm for patients over 75 years.
Clinically, older patients had greater compensation, more degenerative loss of lordosis, and were more pitched forward. This study demonstrated that sagittal spino-pelvic alignment varies with age. Thus, operative realignment targets should account for age, with younger patients requiring more rigorous alignment objectives.
NASA Astrophysics Data System (ADS)
Brunner, D.; Kuang, A. Q.; LaBombard, B.; Burke, W.
2017-07-01
A new servomotor drive system has been developed for the horizontal reciprocating probe on the Alcator C-Mod tokamak. Real-time measurements of plasma temperature and density—through use of a mirror Langmuir probe bias system—combined with a commercial linear servomotor and controller enable self-adaptive position control. Probe surface temperature and its rate of change are computed in real time and used to control probe insertion depth. It is found that a universal trigger threshold can be defined in terms of these two parameters; if the probe is triggered to retract when crossing the trigger threshold, it will reach the same ultimate surface temperature, independent of velocity, acceleration, or scrape-off layer heat flux scale length. In addition to controlling the probe motion, the controller is used to monitor and control all aspects of the integrated probe drive system.
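The self-adaptive retraction logic described above can be sketched as a simple control loop: retract once the measured surface temperature and its rate of change jointly cross a trigger threshold. The threshold values, the combined criterion, and the simulated heating trace below are all hypothetical illustrations, not the Alcator C-Mod calibration:

```python
# Hypothetical trigger parameters (NOT the C-Mod values).
T_TRIG = 900.0       # deg C: trigger temperature scale
RATE_TRIG = 4000.0   # deg C/s: trigger heating-rate scale

def should_retract(temp, rate):
    # Combined criterion: trigger when the normalized temperature plus the
    # normalized heating rate crosses unity. This is one plausible way to
    # define a threshold in the (T, dT/dt) plane, as the abstract describes.
    return (temp / T_TRIG) + (rate / RATE_TRIG) >= 1.0

# Simulated insertion: the heating rate grows as the probe goes deeper.
dt = 1e-4                       # control-loop time step, s
temp, rate, t = 300.0, 0.0, 0.0
while not should_retract(temp, rate):
    rate = 2.0e4 * (t / 0.05)   # illustrative ramp of scrape-off-layer heating
    temp += rate * dt
    t += dt

print(t, temp)
```

Because the criterion watches the rate as well as the temperature, a faster-heating insertion triggers earlier at a lower temperature, which is the mechanism that makes the ultimate surface temperature insensitive to velocity and heat-flux scale length.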
Voice tracking and spoken word recognition in the presence of other voices
NASA Astrophysics Data System (ADS)
Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar
2004-12-01
We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks: voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while the word-recognition results are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment, a response behavior associated with linear systems.
A simple two-stage model predicts response time distributions.
Carpenter, R H S; Reddi, B A J; Anderson, A J
2009-08-15
The neural mechanisms underlying reaction times have previously been modelled in two distinct ways. When stimuli are hard to detect, response time tends to follow a random-walk model that integrates noisy sensory signals. But studies investigating the influence of higher-level factors such as prior probability and response urgency typically use highly detectable targets, and response times then usually correspond to a linear rise-to-threshold mechanism. Here we show that a model incorporating both types of element in series - a detector integrating noisy afferent signals, followed by a linear rise-to-threshold performing decision - successfully predicts not only mean response times but, much more stringently, the observed distribution of these times and the rate of decision errors over a wide range of stimulus detectability. By reconciling what previously may have seemed to be conflicting theories, we are now closer to having a complete description of reaction time and the decision processes that underlie it.
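The serial two-stage model in the abstract above can be simulated directly: stage 1 integrates noisy sensory evidence to a detection threshold (a random walk), and stage 2 is a linear rise-to-threshold decision unit with a normally distributed rise rate (a LATER-style unit). All parameter values below are illustrative choices, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

def trial(drift=0.2, noise=1.0, detect_at=30.0,
          decide_at=1.0, rate_mu=5.0, rate_sd=1.0):
    # Stage 1: random-walk detector. Accumulate signal-plus-noise until the
    # detection threshold is reached (one step treated as ~1 ms, illustrative).
    evidence, t1 = 0.0, 0
    while evidence < detect_at:
        evidence += drift + noise * rng.normal()
        t1 += 1
    # Stage 2: linear rise to a decision threshold with a normally
    # distributed rate (per second); a floor keeps rare draws sensible.
    rate = max(rng.normal(rate_mu, rate_sd), 0.5)
    t2 = decide_at / rate * 1000.0          # convert to ms
    return t1 + t2                           # serial stages add

rts = np.array([trial() for _ in range(2000)])
print(rts.mean())
```

The resulting distribution is the convolution of a first-passage (Wald-like) time from stage 1 with a reciprocal-normal time from stage 2, which is the structure that lets the serial model span both hard-to-detect and highly detectable stimuli.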
Mask Matching for Linear Feature Detection.
1987-01-01
decide which matched masks are part of a linear feature by simple thresholding of the confidence measures. However, it is shown in a companion report...
Electrically Tunable Mid-Infrared Single-Mode High-Speed Semiconductor Laser
2010-11-01
effective and the net tunnel rate may decrease in spite of progressing carrier density buildup in the accumulation well. Enforcing the bias current at... [Figure 4 (axes: intensity, a.u. vs. E, eV): the dependence of the electroluminescence (EL) quantum energy on the bias voltage for a...] spectral maximum energy increases linearly with the bias voltage. Since the dependence is measured in the sub-threshold pumping region, the linear
Propagation Effects in the Assessment of Laser Damage Thresholds to the Eye and Skin
2007-01-01
Conference on Optical Interactions with Tissue and Cells [18th], held in San Jose, California, on January 22-24, 2007. ...evaluation of the role of propagation with regard to laser damage to tissues. Regions of the optical spectrum, where linear and non-linear propagation...photo-chemical toxicity. Exposure limits commonly address skin and eye hazards through separate definitions. Differing optical absorption and scattering
Janky, Kristen L; Shepard, Neil
2009-09-01
Vestibular evoked myogenic potential (VEMP) testing has gained increased interest in the diagnosis of a variety of vestibular etiologies. P13/N23 latency, amplitude, and threshold response curves have been used to compare pathologic groups to normal controls. Appropriate characterization of these etiologies requires normative data across the frequency spectrum and age range. The objective of the current study was to test the hypothesis that significant changes in VEMP responses occur as a function of increased age across all test stimuli, as well as to characterize the VEMP threshold response curve across age. This project incorporated a prospective study design using a sample of convenience. Openly recruited subjects were assigned to groups according to age. Forty-six normal controls ranging between 20 and 76 years of age participated in the study. Participants were separated by decade into five age categories from 20 to 60 plus years. Normal participants were characterized by having normal hearing sensitivity, no history of neurologic or balance/dizziness involvement, and negative results on a direct office vestibular examination. VEMP responses were measured at threshold to click and 250, 500, 750, and 1000 Hz tone burst stimuli and at a suprathreshold level to 500 Hz tone burst stimuli at 123 dB SPL. A mixed group factorial ANOVA (analysis of variance) and linear regression were performed to examine the effects of age on VEMP characteristics. There were no significant differences between ears for any of the test parameters. There were no significant differences between age groups for N23 latency or amplitude in response to any of the stimuli. Significant mean differences did exist between age groups for P13 latency (250, 750, and 1000 Hz) and threshold (500 and 750 Hz). Age was significantly correlated with VEMP parameters: VEMP threshold was positively correlated (250, 500, 750, 1000 Hz), and amplitude was negatively correlated (500 Hz maximum).
The threshold response curves revealed best frequency tuning at 500 Hz with the highest thresholds in response to click stimuli. However, this best frequency tuning dissipated with increased age. VEMP response rates also decreased with increased age. We have demonstrated that minor differences in VEMP responses occur with age. Given the reduced response rates and flattened frequency tuning curve for individuals over the age of 60, frequency tuning curves may not be a good diagnostic indicator for this age group.
Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.
Vikhe, P S; Thool, V R
2016-04-01
Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is a difficult task for the radiologist, due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement in mammographic images using a piecewise linear operator in combination with wavelet processing. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation for detection using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9 and 91 % True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection.
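The figures of merit reported above (TPF at an average FP/I) follow directly from detection counts. The sketch below shows how such numbers are computed; the counts used are hypothetical, not taken from the paper.

```python
def detection_metrics(true_positives, total_masses, false_positives, num_images):
    """Compute True Positive Fraction (TPF) and average False Positives
    per Image (FP/I) from raw detection counts.

    Illustrative only: the counts passed in below are hypothetical,
    chosen to reproduce numbers of the same form as "90.9% TPF at 2.35 FP/I".
    """
    tpf = true_positives / total_masses          # fraction of real masses found
    fp_per_image = false_positives / num_images  # false marks per mammogram
    return tpf, fp_per_image

# Hypothetical example: 40 of 44 masses detected, 94 false marks over 40 images
tpf, fpi = detection_metrics(40, 44, 94, 40)
```

A detector is then characterized by sweeping its threshold and plotting TPF against FP/I (an FROC curve).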
Magneto-Rayleigh-Taylor instability in solid media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y. B.; School of Physical Science and Technology, Lanzhou University, Lanzhou 73000; University of Chinese Academy of Sciences, Beijing 100049
2014-07-15
A linear analysis of the magneto-Rayleigh-Taylor instability at the interface between a Newtonian fluid and an elastic-plastic solid is performed by considering a uniform magnetic field B, parallel to the interface, which has diffused into the fluid but not into the solid. It is found that the magnetic field attributes elastic properties to the viscous fluid which enhance the stability region by stabilizing all perturbation wavelengths shorter than λ0 ∝ B² for any initial perturbation amplitude. Longer wavelengths are stabilized by the mechanical properties of the solid provided that the initial perturbation wavelength is smaller than a threshold value determined by the yield strength and the shear modulus of the solid. Beyond this threshold, the amplitude grows initially with a growth rate reduced by the solid strength properties. However, such properties do not affect the asymptotic growth rate, which is determined only by the magnetic field and the fluid viscosity. The described physical situation is intended to resemble some of the features present in recent experiments involving the magnetic shockless acceleration of flyer plates.
Flight test measurements and analysis of sonic boom phenomena near the shock wave extremity
NASA Technical Reports Server (NTRS)
Haglund, G. T.; Kane, E. J.
1973-01-01
The sonic boom flight test program conducted at Jackass Flats, Nevada, during the summer and fall of 1970 consisted of 121 sonic-boom-generating flights over the 1500 ft instrumented BREN tower. This test program was designed to provide information on several aspects of sonic boom, including caustics produced by longitudinal accelerations, caustics produced by steady flight near the threshold Mach number, sonic boom characteristics near lateral cutoff, and the vertical extent of shock waves attached to near-sonic airplanes. The measured test data, except for the near-sonic flight data, were analyzed in detail to determine sonic boom characteristics for these flight conditions and to determine the accuracy and the range of validity of linear sonic boom theory. The caustic phenomena observed during the threshold Mach number flights and during the transonic acceleration flights are documented and analyzed in detail. The theory of geometric acoustics is shown to be capable of predicting shock wave-ground intersections, and current methods for calculating sonic boom pressure signature away from caustics are shown to be reasonably accurate.
NASA Astrophysics Data System (ADS)
Feng, Xiao-Li; Li, Yu-Xiao; Gu, Jian-Zhong; Zhuo, Yi-Zhong
2009-10-01
The relaxation property of both the Eigen model and the Crow-Kimura model with a single-peak fitness landscape is studied from a phase-transition point of view. We first analyze the eigenvalue spectra of the replication-mutation matrices. For sufficiently long sequences, the near-crossing point between the largest and second-largest eigenvalues locates the error threshold, at which critical slowing-down behavior appears. We calculate the critical exponent in the limit of infinite sequence length and compare it with the result from numerical curve fitting at sufficiently long sequences. We find that for both models the relaxation time diverges with exponent 1 at the error (mutation) threshold point. Results obtained from both methods agree quite well. From the unlimited correlation length feature, the first-order phase transition is further confirmed. Finally, with linear stability theory, we show that the two model systems are stable for all ranges of mutation rate. The Eigen model is asymptotically stable in terms of mutant classes, and the Crow-Kimura model is completely stable.
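The eigenvalue-spectrum analysis described above can be illustrated on a toy Eigen model. The sketch below builds the replication-mutation matrix W = Q·diag(f) over the full binary sequence space for a very short sequence (full enumeration is only feasible for small L; the abstract's results concern the long-sequence limit) and shows that the gap between the two leading eigenvalues narrows as the mutation rate approaches the error threshold. All parameter values are illustrative assumptions.

```python
import itertools
import numpy as np

def eigen_gap(L, mu, peak_fitness=10.0):
    """Largest and second-largest eigenvalues of a toy Eigen-model
    replication-mutation matrix W = Q @ diag(f) for binary sequences of
    length L with a single-peak fitness landscape (small L only)."""
    seqs = list(itertools.product([0, 1], repeat=L))
    f = np.ones(len(seqs))
    f[0] = peak_fitness                      # master sequence at the peak
    # Q[i, j]: probability that replicating sequence j yields sequence i,
    # with per-site mutation probability mu
    Q = np.empty((len(seqs), len(seqs)))
    for i, si in enumerate(seqs):
        for j, sj in enumerate(seqs):
            d = sum(a != b for a, b in zip(si, sj))   # Hamming distance
            Q[i, j] = mu**d * (1 - mu)**(L - d)
    W = Q @ np.diag(f)
    ev = np.sort(np.linalg.eigvals(W).real)[::-1]
    return ev[0], ev[1]

# The leading eigenvalue gap shrinks as mu approaches the error threshold,
# which is the near-crossing behavior the abstract uses to locate it.
lo_gap = np.subtract(*eigen_gap(4, 0.01))   # far below threshold
hi_gap = np.subtract(*eigen_gap(4, 0.4))    # near threshold
```

Since the inverse of this gap sets the relaxation time, a closing gap corresponds to the critical slowing down discussed in the abstract.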
Variation of surface ozone in Campo Grande, Brazil: meteorological effect analysis and prediction.
Pires, J C M; Souza, A; Pavão, H G; Martins, F G
2014-09-01
The effect of meteorological variables on surface ozone (O3) concentrations was analysed based on the temporal variation of linear correlation and on artificial neural network (ANN) models defined by genetic algorithms (GAs). ANN models were also used to predict the daily average concentration of this air pollutant in Campo Grande, Brazil. Three methodologies were applied using GAs, two of them considering threshold models. In these models, the variables selected to define different regimes were daily average O3 concentration, relative humidity and solar radiation. The threshold model that considers two O3 regimes was the one that correctly described the effect of important meteorological variables on O3 behaviour, while also presenting good predictive performance. Solar radiation, relative humidity and rainfall were significant for both O3 regimes; however, wind speed (dispersion effect) was only significant for high concentrations. According to this model, high O3 concentrations corresponded to high solar radiation, low relative humidity and low wind speed. This model proved to be a powerful tool for interpreting O3 behaviour, useful for defining policy strategies for human health protection regarding air pollution.
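The core idea of a two-regime threshold model is to fit separate relationships below and above a cutoff on the regime variable, choosing the cutoff that best explains the data. The minimal sketch below illustrates that idea with ordinary least squares on synthetic data; it is not the GA/ANN machinery of the paper, and all numbers are hypothetical.

```python
import numpy as np

def fit_two_regime(x, y, candidate_thresholds):
    """Fit separate linear models below and above each candidate threshold
    on the regime variable, returning the threshold that minimises total
    squared error. A minimal sketch of the two-regime idea only."""
    best = None
    for t in candidate_thresholds:
        lo, hi = x <= t, x > t
        if lo.sum() < 2 or hi.sum() < 2:
            continue                       # need points on both sides
        sse = 0.0
        for mask in (lo, hi):
            coef = np.polyfit(x[mask], y[mask], 1)
            sse += ((np.polyval(coef, x[mask]) - y[mask]) ** 2).sum()
        if best is None or sse < best[1]:
            best = (t, sse)
    return best  # (threshold, total SSE)

# Synthetic data with a kink at x = 30 (hypothetical regime variable)
rng = np.random.default_rng(0)
x = rng.uniform(0, 60, 200)
y = np.where(x > 30, 2.0 * (x - 30), 0.0) + rng.normal(0, 0.5, 200)
threshold, _ = fit_two_regime(x, y, candidate_thresholds=range(10, 51, 5))
```

The recovered threshold sits near the true kink, which is the sense in which such models "define different regimes".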
NASA Technical Reports Server (NTRS)
Brucker, G. J.; Stassinopoulos, E. G.
1991-01-01
An analysis of the expected space radiation effects on the single event upset (SEU) properties of CMOS/bulk memories onboard the Combined Release and Radiation Effects Satellite (CRRES) is presented. Dose-imprint data from ground test irradiations of identical devices are applied to the predictions of cosmic-ray-induced space upset rates in the memories onboard the spacecraft. The calculations take into account the effect of total dose on the SEU sensitivity of the devices as the dose accumulates in orbit. Estimates of error rates, which involved an arbitrary selection of a single pair of threshold linear energy transfer (LET) and asymptotic cross-section values, were compared to the results of an integration over the cross-section curves versus LET. The integration gave lower upset rates than the use of the selected values of the SEU parameters. Since the integration approach is more accurate and eliminates the need for an arbitrary definition of threshold LET and asymptotic cross section, it is recommended for all error rate predictions where experimental sigma-versus-LET curves are available.
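The recommendation above is to integrate the measured cross-section-versus-LET curve against the particle flux spectrum rather than use a single (threshold LET, asymptotic cross-section) pair. The sketch below contrasts the two estimates on a hypothetical Weibull cross-section fit and a simple power-law LET spectrum; both the spectrum and the fit parameters are illustrative assumptions, not CRRES data.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (written out to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def weibull_xsec(let, l0, w, s, sigma_sat):
    """Hypothetical Weibull fit of SEU cross-section vs LET (cm^2/device)."""
    let = np.asarray(let, dtype=float)
    out = np.zeros_like(let)
    above = let > l0
    out[above] = sigma_sat * (1 - np.exp(-(((let[above] - l0) / w) ** s)))
    return out

# Hypothetical differential flux ~ LET^-3 (particles / area / time / LET)
grid = np.linspace(1.0, 100.0, 2000)
flux = lambda L: 1e-2 * L ** -3.0
xsec = lambda L: weibull_xsec(L, l0=3.0, w=20.0, s=1.5, sigma_sat=1e-6)

# Integration over the full cross-section curve (recommended approach)
rate_integrated = trapz(flux(grid) * xsec(grid), grid)
# Single-point estimate: all flux above threshold LET times saturated sigma
mask = grid > 3.0
rate_step = 1e-6 * trapz(flux(grid[mask]), grid[mask])
```

Because the real cross-section rises gradually above threshold instead of jumping to its asymptote, the integrated estimate is lower, matching the abstract's finding.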
Automating linear accelerator quality assurance.
Eckhause, Tobias; Al-Hallaq, Hania; Ritter, Timothy; DeMarco, John; Farrey, Karl; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Perez, Mario; Park, SungYong; Booth, Jeremy T; Thorwarth, Ryan; Moran, Jean M
2015-10-01
The purpose of this study was twofold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac, including jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold-off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. 
The standard deviation in MLC position, as determined by EPID measurements, across the consortium was 0.33 mm for IMRT fields. With respect to the log files, the deviations between expected and actual positions for parameters were small (<0.12 mm) for all Linacs. Considering both log files and EPID measurements, all parameters were well within published tolerance values. Variations in collimator angle, MLC position, and gantry sag were also evaluated for all Linacs. The performance of the TrueBeam Linac model was shown to be consistent based on automated analysis of trajectory log files and EPID images acquired during delivery of a standardized test suite. The results can be compared directly to tolerance thresholds. In addition, sharing of results from standard tests across institutions can facilitate the identification of QA process and Linac changes. These reference values are presented along with the standard deviation for common tests so that the test suite can be used by other centers to evaluate their Linac performance against those in this consortium.
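At its core, the log-file check described above compares expected and actual component positions against a tolerance. The sketch below shows that comparison in its simplest form; the function, positions, and tolerance are illustrative, not the consortium's software.

```python
def check_deviations(expected, actual, tolerance, hold_fraction=1.0):
    """Flag component positions (e.g. MLC leaf positions from a trajectory
    log) whose deviation from the expected value exceeds a fraction of the
    tolerance. A minimal sketch of an automated log-file QA check; names
    and thresholds are hypothetical."""
    flags = []
    for i, (e, a) in enumerate(zip(expected, actual)):
        deviation = abs(a - e)
        if deviation > hold_fraction * tolerance:
            flags.append((i, deviation))   # (component index, deviation)
    return flags

# Hypothetical MLC leaf positions in mm, with a 1.0 mm tolerance
expected = [10.0, 12.5, 15.0, 17.5]
actual = [10.05, 12.48, 16.2, 17.51]
flags = check_deviations(expected, actual, tolerance=1.0)
```

In a real system a flagged deviation would trigger further review (or, on the machine itself, a beam hold-off as the abstract notes); reporting deviations as a fraction of published tolerances, as the consortium does, makes results comparable across centers.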
Stefan, Sabina; Schorr, Barbara; Lopez-Rolon, Alex; Kolassa, Iris-Tatjana; Shock, Jonathan P; Rosenfelder, Martin; Heck, Suzette; Bender, Andreas
2018-04-17
We applied the following methods to resting-state EEG data from patients with disorders of consciousness (DOC) for consciousness indexing and outcome prediction: microstates, entropy (i.e., approximate and permutation entropy), power in the alpha and delta frequency bands, and connectivity (i.e., weighted symbolic mutual information, symbolic transfer entropy, and complex network analysis). Patients with unresponsive wakefulness syndrome (UWS) and patients in a minimally conscious state (MCS) were classified into these two categories by fitting and testing a generalised linear model. We subsequently aimed to develop an automated system for outcome prediction in severe DOC by selecting an optimal subset of features using sequential floating forward selection (SFFS). The two outcome categories were defined as UWS or dead, and MCS or emerged from MCS. Percentage of time spent in microstate D in the alpha frequency band performed best at distinguishing MCS from UWS patients. The average clustering coefficient obtained from thresholding beta coherence performed best at predicting outcome. The optimal subset of features selected with SFFS consisted of the frequency of microstate A in the 2-20 Hz frequency band, path length obtained from thresholding alpha coherence, and average path length obtained from thresholding alpha coherence. Combining these features seemed to afford high prediction power. Python and MATLAB toolboxes for the above calculations are freely available under the GNU public license for non-commercial use ( https://qeeg.wordpress.com ).
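SFFS, used above to pick the optimal feature subset, greedily adds the feature that most improves a score and then "floats": it conditionally removes previously chosen features if doing so improves the score further. The sketch below is a minimal generic implementation of that idea, not the authors' toolbox; the toy scoring function stands in for cross-validated classifier accuracy.

```python
def sffs(features, score, max_k):
    """Sequential floating forward selection (minimal sketch): greedily add
    the best feature, then conditionally remove any chosen feature whose
    removal improves the score. `score` maps a feature list to a number to
    maximise (in practice, cross-validated prediction accuracy)."""
    selected, best_score = [], float('-inf')
    while len(selected) < max_k:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        best = max(candidates, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= best_score:
            break                                # no further improvement
        selected = selected + [best]
        best_score = score(selected)
        improved = True                          # floating (exclusion) step
        while improved and len(selected) > 2:
            improved = False
            for f in list(selected):
                trial = [g for g in selected if g != f]
                if score(trial) > best_score:
                    selected, best_score = trial, score(trial)
                    improved = True
    return selected

# Toy score: the subset {'a', 'b'} is optimal; extra features cost a little
target = {'a', 'b'}
toy_score = lambda s: len(target & set(s)) - 0.1 * len(set(s) - target)
chosen = sffs(['a', 'b', 'c', 'd'], toy_score, max_k=3)
```

The floating step is what distinguishes SFFS from plain forward selection: it can back out of an early greedy choice once better combinations appear.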
Sharp, Madeleine E.; Viswanathan, Jayalakshmi; Lanyon, Linda J.; Barton, Jason J. S.
2012-01-01
Background There are few clinical tools that assess decision-making under risk. Tests that characterize sensitivity and bias in decisions between prospects varying in magnitude and probability of gain may provide insights in conditions with anomalous reward-related behaviour. Objective We designed a simple test of how subjects integrate information about the magnitude and the probability of reward, which can determine discriminative thresholds and choice bias in decisions under risk. Design/Methods Twenty subjects were required to choose between two explicitly described prospects, one with higher probability but lower magnitude of reward than the other, with the difference in expected value between the two prospects varying from 3 to 23%. Results Subjects showed a mean threshold sensitivity of 43% difference in expected value. Regarding choice bias, there was a ‘risk premium’ of 38%, indicating a tendency to choose higher probability over higher reward. An analysis using prospect theory showed that this risk premium is the predicted outcome of hypothesized non-linearities in the subjective perception of reward value and probability. Conclusions This simple test provides a robust measure of discriminative value thresholds and biases in decisions under risk. Prospect theory can also make predictions about decisions when subjective perception of reward or probability is anomalous, as may occur in populations with dopaminergic or striatal dysfunction, such as Parkinson's disease and schizophrenia. PMID:22493669
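The prospect-theory account above attributes the observed risk premium to non-linear subjective perception of value and probability. The sketch below evaluates two equal-expected-value prospects under the standard Tversky-Kahneman value and weighting functions (the 1992 parameter estimates, used purely for illustration; the prospects are hypothetical, not the study's stimuli).

```python
def pt_value(x, p, alpha=0.88, gamma=0.61):
    """Prospect-theory subjective value of a single gain prospect (win x
    with probability p): concave value function x**alpha and an inverse-
    S-shaped probability weighting function. Parameters are the classic
    Tversky-Kahneman estimates, used here only for illustration."""
    w = p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
    return w * x**alpha

# Two prospects with identical expected value (18): for moderate-to-high
# probabilities the model prefers the higher-probability, lower-magnitude
# prospect, i.e. it predicts a 'risk premium' as in the abstract.
safe = pt_value(20, 0.9)    # win 20 with probability 0.9
risky = pt_value(30, 0.6)   # win 30 with probability 0.6
```

Note the prediction reverses for very small probabilities (where the weighting function overweights rare gains), which is why the direction of the bias depends on the probability range a task uses.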
Matthews, P B
1999-01-01
This paper reviews two new facets of the behaviour of human motoneurones; these were demonstrated by modelling combined with analysis of long periods of low-frequency tonic motor unit firing (sub-primary range). 1) A novel transformation of the interval histogram has shown that the effective part of the membrane's post-spike voltage trajectory is a segment of an exponential (rather than linear), with most spikes being triggered by synaptic noise before the mean potential reaches threshold. The curvature of the motoneurone's trajectory affects virtually all measures of its behaviour and response to stimulation. The 'trajectory' is measured from threshold, and so includes any changes in threshold during the interspike interval. 2) A novel rhythmic stimulus (amplitude-modulated pulsed vibration) has been used to show that the motoneurone produces appreciable phase-advance during sinusoidal excitation. At low frequencies, the advance increases with rising stimulus frequency but then, slightly below the motoneurone's mean firing rate, it suddenly becomes smaller. The gain has a maximum for stimuli at the mean firing rate (the 'carrier'). Such behaviour is functionally important since it affects the motoneurone's response to any rhythmic input, whether generated peripherally by the receptors (as in tremor) or by the CNS (as with cortical oscillations). Low mean firing rates favour tremor, since the high gain and reduced phase advance at the 'carrier' reduce the stability of the stretch reflex.
Data Fitting to Study Ablated Hard Dental Tissues by Nanosecond Laser Irradiation.
Al-Hadeethi, Y; Al-Jedani, S; Razvi, M A N; Saeed, A; Abdel-Daiem, A M; Ansari, M Shahnawaze; Babkair, Saeed S; Salah, Numan A; Al-Mujtaba, A
2016-01-01
Laser ablation of dental hard tissues is one of the most important laser applications in dentistry. Many works have reported the interaction of laser radiation with tooth material to optimize laser parameters such as wavelength, energy density, etc. This work focused on determining the relationship between energy density and ablation thresholds using a pulsed, 5 ns neodymium-doped yttrium aluminum garnet (Nd:YAG; Nd:Y3Al5O12) laser at 1064 nm. For enamel and dentin tissues, the ablations were performed using the laser-induced breakdown spectroscopy (LIBS) technique. The ablation thresholds and the relationship between energy densities and peak areas of the calcium lines, which appeared in LIBS, were determined using data fitting. Furthermore, the morphological changes were studied using a Scanning Electron Microscope (SEM). Moreover, the chemical stability of the tooth material after ablation was studied using Energy-Dispersive X-Ray Spectroscopy (EDX). The differences between the carbon atomic % of non-irradiated and irradiated samples were tested using a statistical t-test. Results revealed that the best fits between energy densities and peak areas of the calcium lines were exponential and linear for enamel and dentin, respectively. In addition, the ablation threshold of the Nd:YAG laser in enamel was higher than that of dentin. The morphology of the region surrounding the ablated enamel showed thermal damage. For enamel, the EDX quantitative analysis showed that the atomic % of carbon increased significantly when laser energy density increased.
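Deciding whether an exponential or a linear model best relates energy density to LIBS peak area is a standard curve-fitting comparison: fit both (the exponential via a linear fit in log space) and compare goodness of fit. The sketch below illustrates this on synthetic "enamel-like" data; the numbers are invented, not the paper's measurements.

```python
import numpy as np

def r_squared(y, y_fit):
    """Coefficient of determination of a fit."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

def compare_fits(energy, area):
    """Compare a linear fit and an exponential fit (linear in log-area) of
    calcium-line peak area vs laser energy density. A sketch of the
    data-fitting step; inputs below are synthetic."""
    lin = np.polyfit(energy, area, 1)                    # area ~ a*E + b
    r2_lin = r_squared(area, np.polyval(lin, energy))
    expc = np.polyfit(energy, np.log(area), 1)           # log(area) ~ c*E + d
    r2_exp = r_squared(area, np.exp(np.polyval(expc, energy)))
    return r2_lin, r2_exp

# Synthetic 'enamel-like' data: peak area grows exponentially with energy
energy = np.linspace(10, 60, 12)        # hypothetical energy densities
area = 5.0 * np.exp(0.05 * energy)      # noiseless exponential growth
r2_lin, r2_exp = compare_fits(energy, area)
```

With real, noisy data the same comparison (optionally with information criteria rather than raw R²) distinguishes the exponential behaviour reported for enamel from the linear behaviour reported for dentin.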
Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance.
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-02-01
To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) and pure-tone hearing thresholds. Participants included 111 middle- to older-age adults (range = 45-78) with audiometric configurations ranging from normal hearing levels to moderate sensorineural hearing loss. In addition to using audiometric testing, the authors also used such evaluation measures as the QuickSIN, the SSQ, and the cABR. Multiple linear regression analysis indicated that the inclusion of brainstem variables in a model with QuickSIN, hearing thresholds, and age accounted for 30% of the variance in the Speech subtest of the SSQ, compared with significantly less variance (19%) when brainstem variables were not included. The authors' results demonstrate the cABR's efficacy for predicting self-reported speech-in-noise perception difficulties. The fact that the cABR predicts more variance in self-reported speech-in-noise (SIN) perception than either the QuickSIN or hearing thresholds indicates that the cABR provides additional insight into an individual's ability to hear in background noise. In addition, the findings underscore the link between the cABR and hearing in noise.
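The key statistical step above is hierarchical regression: compare the variance explained (R²) by a baseline model with that of a model that adds the brainstem variables. The sketch below reproduces that logic on synthetic data; the predictors merely stand in for QuickSIN, hearing thresholds, and a cABR measure, and the effect sizes are invented.

```python
import numpy as np

def r2_ols(X, y):
    """R-squared of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Synthetic illustration of the hierarchical-regression logic: how much
# variance an added predictor explains beyond a baseline model.
rng = np.random.default_rng(1)
n = 111                                    # same sample size as the study
baseline = rng.normal(size=(n, 2))         # stands in for QuickSIN, thresholds
brainstem = rng.normal(size=n)             # stands in for a cABR variable
y = baseline @ np.array([0.4, 0.3]) + 0.5 * brainstem + rng.normal(size=n)

r2_base = r2_ols(baseline, y)
r2_full = r2_ols(np.column_stack([baseline, brainstem]), y)
delta_r2 = r2_full - r2_base               # added variance explained
```

The abstract's 30% vs 19% comparison is exactly this delta: the increment in explained variance attributable to the cABR variables (with a significance test on the increment in practice).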
Zseq: An Approach for Preprocessing Next-Generation Sequencing Data.
Alkhateeb, Abedalrhman; Rueda, Luis
2017-08-01
Next-generation sequencing technology generates a huge number of reads (short sequences), which contain a vast amount of genomic data. The sequencing process, however, comes with artifacts. Preprocessing of sequences is mandatory for further downstream analysis. We present Zseq, a linear method that identifies the most informative genomic sequences and reduces the number of biased sequences, sequence duplications, and ambiguous nucleotides. Zseq finds the complexity of the sequences by counting the number of unique k-mers in each sequence as its corresponding score and also takes into account other factors, such as ambiguous nucleotides or a high GC-content percentage in k-mers. Based on a z-score threshold, Zseq sweeps through the sequences again and filters those with a z-score less than the user-defined threshold. The Zseq algorithm is able to provide a better mapping rate; it reduces the number of ambiguous bases significantly in comparison with other methods. Evaluation of the filtered reads has been conducted by aligning the reads and assembling the transcripts using the reference genome as well as de novo assembly. The assembled transcripts show a better discriminative ability to separate cancer and normal samples in comparison with another state-of-the-art method. Moreover, de novo assembled transcripts from the reads filtered by Zseq have longer genomic sequences than other tested methods. Estimating the threshold of the cutoff point is introduced using labeling rules with optimistic results.
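The scoring-and-filtering idea described above can be sketched in a few lines: score each read by its number of distinct k-mers (penalising ambiguous bases), convert scores to z-scores, and drop reads below a cutoff. This is a simplified sketch of the approach, not the Zseq tool itself; the penalty rule and cutoff are illustrative assumptions.

```python
import statistics

def kmer_score(seq, k=4):
    """Complexity score: number of distinct k-mers, minus a penalty for
    ambiguous bases (a simplified stand-in for Zseq's scoring factors)."""
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    penalty = seq.count("N")          # one point per ambiguous base
    return len(kmers) - penalty

def zseq_filter(reads, k=4, z_cutoff=-1.0):
    """Keep reads whose complexity z-score is at least z_cutoff."""
    scores = [kmer_score(r, k) for r in reads]
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores) or 1.0
    return [r for r, s in zip(reads, scores) if (s - mu) / sd >= z_cutoff]

reads = [
    "ACGTACGTACGTACGT",   # low complexity: one repeating 4-mer pattern
    "AAAAAAAAAAAAAAAA",   # extremely low complexity
    "ACGTTGCAAGTCCATG",   # high complexity: all 4-mers distinct
    "GGCATCGTANNNNCGT",   # ambiguous bases penalised
]
kept = zseq_filter(reads)
```

In this toy run only the homopolymer read falls below the z-score cutoff; real preprocessing would also weigh GC content and operate over millions of reads in a single linear pass, as the abstract emphasises.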
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeters, A. G.; Rath, F.; Buchholz, R.
2016-08-15
It is shown that Ion Temperature Gradient turbulence close to the threshold exhibits a long time behaviour, with smaller heat fluxes at later times. This reduction is connected with the slow growth of long wave length zonal flows, and consequently, the numerical dissipation on these flows must be sufficiently small. Close to the nonlinear threshold for turbulence generation, a relatively small dissipation can maintain a turbulent state with a sizeable heat flux, through the damping of the zonal flow. Lowering the dissipation causes the turbulence, for temperature gradients close to the threshold, to be subdued. The heat flux then does not go smoothly to zero when the threshold is approached from above. Rather, a finite minimum heat flux is obtained below which no fully developed turbulent state exists. The threshold value of the temperature gradient length at which this finite heat flux is obtained is up to 30% larger compared with the threshold value obtained by extrapolating the heat flux to zero, and the cyclone base case is found to be nonlinearly stable. Transport is subdued when a fully developed staircase structure in the E × B shearing rate forms. Just above the threshold, an incomplete staircase develops, and transport is mediated by avalanche structures which propagate through the marginally stable regions.
YADCLAN: yet another digitally-controlled linear artificial neuron.
Frenger, Paul
2003-01-01
This paper updates the author's 1999 RMBS presentation on digitally controlled linear artificial neuron design. Each neuron is based on a standard operational amplifier having excitatory and inhibitory inputs, variable gain, an amplified linear analog output and an adjustable threshold comparator for digital output. This design employs a 1-wire serial network of digitally controlled potentiometers and resistors whose resistance values are set and read back under microprocessor supervision. This system embodies several unique and useful features, including: enhanced neuronal stability, dynamic reconfigurability and network extensibility. This artificial neuron is being employed for feature extraction and pattern recognition in an advanced robotic application.
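Behaviourally, the op-amp neuron described above sums excitatory and inhibitory inputs, applies a variable gain for the linear analog output, and compares that output against an adjustable threshold for the digital output. The sketch below models only that input-output behaviour; it says nothing about the 1-wire potentiometer hardware, and the input values are hypothetical.

```python
def artificial_neuron(excitatory, inhibitory, gain, threshold):
    """Behavioural model of the described op-amp neuron: summed excitatory
    minus inhibitory inputs, variable gain giving a linear analog output,
    and a threshold comparator giving the digital output. In the actual
    design, gain and threshold are set by digitally controlled
    potentiometers under microprocessor supervision."""
    analog = gain * (sum(excitatory) - sum(inhibitory))
    digital = 1 if analog >= threshold else 0
    return analog, digital

# Hypothetical inputs: net drive 0.7, gain 2 -> analog 1.4, above threshold
analog, digital = artificial_neuron(
    excitatory=[0.6, 0.3], inhibitory=[0.2], gain=2.0, threshold=1.0)
```

Because gain and threshold are digitally adjustable at run time, a network of such neurons can be reconfigured without rewiring, which is the dynamic-reconfigurability feature the abstract highlights.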
Venkataraman, Vinay; Turaga, Pavan; Baran, Michael; Lehrer, Nicole; Du, Tingfang; Cheng, Long; Rikakis, Thanassis; Wolf, Steven L.
2016-01-01
In this paper, we propose a general framework for tuning component-level kinematic features using therapists’ overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system. We propose a linear combination of non-linear kinematic features to model wrist movement, and propose an approach to learn feature thresholds and weights using high-level labels of overall movement quality provided by a therapist. The kinematic features are chosen such that they correlate with clinical assessment scores of wrist-movement quality. Further, the proposed features are designed to be reliably extracted from an inexpensive and portable motion capture system using a single reflective marker on the wrist. Using a dataset collected from ten stroke survivors, we demonstrate that the framework can be reliably used for movement quality assessment in HAMRR systems. The system is currently being deployed for large-scale evaluations, and will represent an increasingly important application area of motion capture and activity analysis. PMID:25438331
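A "linear combination of non-linear kinematic features" can be read as: pass each raw feature through a threshold non-linearity, then sum with learned weights. The sketch below shows one plausible form of that computation; the feature names, the hinge-style non-linearity, and all numbers are hypothetical, not the paper's learned model.

```python
def movement_quality(features, thresholds, weights):
    """Linear combination of thresholded kinematic features: each raw
    feature contributes only the amount by which it exceeds its learned
    threshold, scaled by its learned weight. A hypothetical sketch of the
    framework's scoring form, with invented feature names."""
    score = 0.0
    for name, value in features.items():
        excess = max(0.0, value - thresholds[name])   # hinge non-linearity
        score += weights[name] * excess
    return score

# Hypothetical wrist features: only 'jerkiness' exceeds its threshold
features = {"jerkiness": 0.8, "trajectory_error": 0.3}
thresholds = {"jerkiness": 0.5, "trajectory_error": 0.4}
weights = {"jerkiness": 2.0, "trajectory_error": 1.0}
score = movement_quality(features, thresholds, weights)
```

Learning then amounts to fitting the thresholds and weights so that the score tracks the therapist's overall movement-quality labels.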
Global dynamics for switching systems and their extensions by linear differential equations
NASA Astrophysics Data System (ADS)
Huttinga, Zane; Cummins, Bree; Gedeon, Tomáš; Mischaikow, Konstantin
2018-03-01
Switching systems use piecewise constant nonlinearities to model gene regulatory networks. This choice provides advantages in the analysis of behavior and allows the global description of dynamics in terms of Morse graphs associated to nodes of a parameter graph. The parameter graph captures spatial characteristics of a decomposition of parameter space into domains with identical Morse graphs. However, there are many cellular processes that do not exhibit threshold-like behavior and thus are not well described by a switching system. We consider a class of extensions of switching systems formed by a mixture of switching interactions and chains of variables governed by linear differential equations. We show that the parameter graphs associated to the switching system and any of its extensions are identical. For each parameter graph node, there is an order-preserving map from the Morse graph of the switching system to the Morse graph of any of its extensions. We provide counterexamples that show why possible stronger relationships between the Morse graphs are not valid.
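A switching system replaces smooth regulation functions with piecewise constant (step) nonlinearities: production depends only on whether each regulator is above or below its threshold. The toy sketch below simulates a two-gene mutual-repression switch of this form; it is an example of the model class, not one of the paper's systems, and all parameter values are illustrative.

```python
def repress(u, theta=1.0, low=0.5, high=2.0):
    """Piecewise-constant repression: high production while the repressor
    is below its threshold, low production above it."""
    return low if u >= theta else high

def simulate(x0, y0, dt=0.01, steps=5000):
    """Euler integration of a toy two-gene mutual-repression switching
    system:  x' = -x + repress(y),  y' = -y + repress(x).
    With these parameters the system is bistable (a toggle switch)."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = (x + dt * (-x + repress(y)),
                y + dt * (-y + repress(x)))
    return x, y

# From this initial condition the toggle settles on its (x high, y low)
# stable equilibrium at (2.0, 0.5).
x, y = simulate(2.0, 0.0)
```

Because the right-hand side is constant within each rectangular region of phase space, the dynamics reduce to wall-crossing combinatorics, which is what makes the Morse-graph description over a finite parameter graph possible.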
Huang, Yuanyuan; Zhu, Lipeng; Zhao, Qiyi; Guo, Yaohui; Ren, Zhaoyu; Bai, Jintao; Xu, Xinlong
2017-02-08
Surface optical rectification was observed from the layered semiconductor molybdenum disulfide (MoS2) crystal via terahertz (THz) time-domain surface emission spectroscopy under linearly polarized femtosecond laser excitation. The radiated THz amplitude of MoS2 depends linearly on pump fluence, and thus quadratically on the pump electric field, which distinguishes it from the surface Dember-field-induced THz radiation in InAs and the transient photocurrent-induced THz generation in graphite. Theoretical analysis based on the space symmetry of the MoS2 crystal suggests that the underlying mechanism of THz radiation is surface optical rectification under the reflection configuration. This is consistent with the experimental results according to the dependence of the radiated THz amplitude on azimuthal and incident polarization angles. We also demonstrated the damage threshold of MoS2 due to microscopic bond breaking under femtosecond laser irradiation, which can be monitored via THz time-domain emission spectroscopy and Raman spectroscopy.
NASA Astrophysics Data System (ADS)
Nageshwari, M.; Jayaprakash, P.; Kumari, C. Rathika Thaya; Vinitha, G.; Caroline, M. Lydia
2017-04-01
An efficient nonlinear optical semiorganic material, L-valinium L-valine chloride (LVVCl), was synthesized and grown by the slow evaporation method. Single-crystal XRD showed that LVVCl belongs to the monoclinic system with the acentric space group P21. The various functional groups present in LVVCl were identified by FTIR spectral investigation. The UV-visible and photoluminescence spectra disclose the optical and electronic properties, respectively, of the grown crystal. Several optical properties, specifically the extinction coefficient, reflectance, linear refractive index, and electrical and optical conductivity, were also determined. SEM analysis was also carried out, portraying the surface morphology of LVVCl. The measured laser damage threshold was 2.59 GW/cm2. The mechanical and dielectric properties of LVVCl were investigated employing microhardness and dielectric studies. The second- and third-order nonlinear optical characteristics of LVVCl, characterized using the Kurtz-Perry and Z-scan techniques respectively, clearly suggest its suitability in the domain of optics and photonics.
Campolo, O; Malacrinò, A; Laudani, F; Maione, V; Zappalà, L; Palmeri, V
2014-10-01
Increasing worldwide trade has progressively decreased the impact of natural barriers on the movement of wild species. The exotic scale Chrysomphalus aonidum (L.) (Hemiptera: Diaspididae), recently reported on citrus in southern Italy, may represent a new threat to Mediterranean citriculture. We studied C. aonidum population dynamics under field conditions and documented its development at various temperatures. To describe temperature-dependent development using linear and non-linear models, low temperature thresholds and thermal constants for each developmental stage were estimated. Chrysomphalus aonidum completed four generations on green parts (leaves, sprouts) of citrus trees and three on fruits. In addition, an overall higher population density was observed in samples collected from the southern part of the tree canopy. Temperature had a significant effect on the developmental rate; females needed 625 degree-days (DD) to complete development, while males needed 833 DD. The low threshold temperatures, together with data from population dynamics, demonstrated that C. aonidum is able to overwinter as a second instar and as an adult. The results obtained, validated against those collected in the field, revealed few differences between predicted and observed dates of first occurrence of each C. aonidum instar in citrus orchards. Data on C. aonidum phenology and the definition of the thermal parameters (lower and upper threshold temperatures, optimum temperature, and the thermal constant) by non-linear models could allow estimation of the occurrence in the field of each life stage and would be helpful in developing effective integrated control strategies.
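The linear degree-day model underlying the thermal constants above accumulates, each day, the amount by which the mean temperature exceeds the lower development threshold; development completes when the accumulation reaches the thermal constant (e.g. 625 DD for females). The sketch below implements that standard model; the temperature series and threshold value are hypothetical, not the paper's estimates.

```python
def degree_days(daily_mean_temps, lower_threshold):
    """Accumulate degree-days above a lower development threshold
    (the standard linear degree-day model; upper-threshold and non-linear
    refinements, as used in the paper, are omitted here)."""
    return sum(max(0.0, t - lower_threshold) for t in daily_mean_temps)

def days_to_complete(daily_mean_temps, lower_threshold, thermal_constant):
    """First day on which accumulated degree-days reach the thermal
    constant (e.g. 625 DD for female development), or None if never."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(0.0, t - lower_threshold)
        if total >= thermal_constant:
            return day
    return None

# Hypothetical constant 20 degC days with a 10 degC lower threshold:
# 10 DD accumulate per day, so 625 DD is reached on day 63.
day = days_to_complete([20.0] * 100, lower_threshold=10.0, thermal_constant=625)
```

Run against a real temperature record, the same accumulation predicts the field occurrence dates of each life stage, which is how such models feed integrated control scheduling.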
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
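The final pooling step named above, Rubin's rules, can be sketched as follows. This is the generic rule for combining per-imputation estimates, not the authors' MIDC implementation, and the numbers in the example are made up.

```python
# Rubin's rules sketch: pool a point estimate across M imputed datasets and
# combine within- and between-imputation variance into a total variance.
def rubins_rules(estimates, variances):
    """estimates/variances: per-imputation point estimates and variances."""
    m = len(estimates)
    q_bar = sum(estimates) / m                    # pooled point estimate
    u_bar = sum(variances) / m                    # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total_var = u_bar + (1.0 + 1.0 / m) * b       # Rubin's total variance
    return q_bar, total_var

pooled, variance = rubins_rules([0.80, 0.84, 0.82], [0.010, 0.010, 0.010])
```

The between-imputation term is what lets the MIDC approach report the slightly larger, better-calibrated standard errors described above.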
Potgieter, Danielle; Simmers, Dale; Ryan, Lisa; Biccard, Bruce M; Lurati-Buse, Giovanna A; Cardinale, Daniela M; Chong, Carol P W; Cnotliwy, Miloslaw; Farzi, Sylvia I; Jankovic, Radmilo J; Lim, Wen Kwang; Mahla, Elisabeth; Manikandan, Ramaswamy; Oscarsson, Anna; Phy, Michael P; Rajagopalan, Sriram; Van Gaal, William J; Waliszek, Marek; Rodseth, Reitze N
2015-08-01
The prognostic utility of N-terminal pro-B-type natriuretic peptide (NT-proBNP) is commonly determined post hoc by identifying a single optimal discrimination threshold tailored to the individual study population. The authors aimed to determine how using these study-specific post hoc thresholds impacts meta-analysis results. The authors conducted a systematic review of studies reporting the ability of preoperative NT-proBNP measurements to predict the composite outcome of all-cause mortality and nonfatal myocardial infarction at 30 days after noncardiac surgery. Individual patient-level NT-proBNP thresholds were determined using two different methodologies. First, a single combined NT-proBNP threshold was determined for the entire cohort of patients, and a meta-analysis conducted using this single threshold. Second, study-specific thresholds were determined for each individual study, with meta-analysis being conducted using these study-specific thresholds. The authors obtained individual patient data from 14 studies (n = 2,196). Using a single NT-proBNP cohort threshold, the odds ratio (OR) associated with an increased NT-proBNP measurement was 3.43 (95% CI, 2.08 to 5.64). Using individual study-specific thresholds, the OR associated with an increased NT-proBNP measurement was 6.45 (95% CI, 3.98 to 10.46). In smaller studies (<100 patients), a single cohort threshold was associated with an OR of 5.4 (95% CI, 2.27 to 12.84), as compared with an OR of 14.38 (95% CI, 6.08 to 34.01) for study-specific thresholds. Post hoc identification of study-specific prognostic biomarker thresholds artificially maximizes biomarker predictive power, resulting in an amplification or overestimation during meta-analysis of these results. This effect is accentuated in small studies.
Quantitative comparisons of type 3 radio burst intensity and fast electron flux at 1 AU
NASA Technical Reports Server (NTRS)
Fitzenreiter, R. J.; Evans, L. G.; Lin, R. P.
1975-01-01
The flux of fast solar electrons and the intensity of the type III radio emission generated by these particles were compared at 1 AU. Two regimes were found in the generation of type III radiation: one where the radio intensity is linearly proportional to the electron flux, and another, occurring above a threshold electron flux, where the radio intensity is approximately proportional to the 2.4 power of the electron flux. This threshold appears to reflect a transition to a different emission mechanism.
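The two regimes can be summarised as a piecewise power law, sketched below with hypothetical constants and a continuity match at the threshold flux; the 2.4 exponent is the value reported in the abstract.

```python
# Piecewise intensity-flux sketch: linear below a threshold flux, then a
# 2.4-power dependence above it, matched so the curve is continuous at the
# threshold. The threshold and scale constants are hypothetical.
def radio_intensity(flux, f_thresh=1.0, scale=1.0):
    if flux <= f_thresh:
        return scale * flux                              # linear regime
    return scale * f_thresh * (flux / f_thresh) ** 2.4   # above-threshold regime
```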
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Analytical and Experimental Study of Near-Threshold Interactions Between Crack Closure Mechanisms
NASA Technical Reports Server (NTRS)
Newman, John A.; Riddell, William T.; Piascik, Robert S.
2003-01-01
The results of an analytical closure model that considers contributions and interactions between plasticity-, roughness-, and oxide-induced crack closure mechanisms are presented and compared with experimental data. The analytical model is shown to provide a good description of the combined influences of crack roughness, oxide debris, and plasticity in the near-threshold regime. Furthermore, analytical results indicate that closure mechanisms interact in a non-linear manner such that the total amount of closure is not the sum of closure contributions for each mechanism.
Macro-motion detection using ultra-wideband impulse radar.
Xin Li; Dengyu Qiao; Ye Li
2014-01-01
Radar has the advantage of being able to detect hidden individuals, which is useful in homeland security, disaster rescue, and healthcare-monitoring applications. Human macro-motion detection using ultra-wideband impulse radar is studied in this paper. First, a frequency-domain analysis shows that macro-motion yields a bandpass signal in slow-time. Second, fast-time frequency windowing (FTFW), which avoids reducing the measurement range, and a high-pass linear-phase filter (HLF), which preserves the motion signal effectively, are proposed to preprocess the radar echo. Finally, a threshold decision method based on the energy-detector structure is presented.
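The final threshold decision based on an energy-detector structure might look like the following sketch, which windows a slow-time signal and compares windowed energy against a threshold. The window length and threshold are toy values, and the paper's FTFW/HLF preprocessing is not reproduced here.

```python
# Energy-detector decision sketch: declare motion in a slow-time window
# when its energy exceeds a threshold (all values are toy stand-ins).
def detect_motion(samples, window=8, threshold=4.0):
    decisions = []
    for start in range(0, len(samples) - window + 1, window):
        energy = sum(s * s for s in samples[start:start + window])
        decisions.append(energy > threshold)
    return decisions

flags = detect_motion([0.0] * 8 + [1.0] * 8)  # quiet window, then motion
```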
Device for rapid quantification of human carotid baroreceptor-cardiac reflex responses
NASA Technical Reports Server (NTRS)
Sprenkle, J. M.; Eckberg, D. L.; Goble, R. L.; Schelhorn, J. J.; Halliday, H. C.
1986-01-01
A new device has been designed, constructed, and evaluated to rapidly characterize the human carotid baroreceptor-cardiac reflex response relation. This system was designed for studying the reflex responses of astronauts before, during, and after space travel. The system comprises a new tightly sealing silicone rubber neck chamber; a stepping motor-driven, electrodeposited nickel bellows pressure system capable of delivering sequential R-wave-triggered neck chamber pressure changes between +40 and -65 mmHg; and a microprocessor-based electronics system for control of pressure steps and analysis and display of responses. This new system provokes classic sigmoid baroreceptor-cardiac reflex responses with threshold, linear, and saturation ranges in most human volunteers during one held expiration.
Polynomial sequences for bond percolation critical thresholds
Scullard, Christian R.
2011-09-22
In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds pc(4, 6, 12) = 0.69377849... and pc(3^4, 6) = 0.43437077..., compared with Parviainen’s numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
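Extracting a threshold estimate from such a polynomial reduces to finding its root in [0, 1], e.g. by bisection. The sketch below uses the classic triangular-lattice polynomial p^3 - 3p + 1, whose root is the exact bond threshold 2 sin(pi/18), as a stand-in rather than the (4, 6, 12) polynomial from the paper.

```python
# Bisection for the root in [0, 1] of a threshold polynomial.
def root_in_unit_interval(poly, lo=0.0, hi=1.0, tol=1e-12):
    """poly: callable with poly(lo) and poly(hi) of opposite sign."""
    assert poly(lo) * poly(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poly(lo) * poly(mid) <= 0:
            hi = mid    # sign change in the left half
        else:
            lo = mid    # sign change in the right half
    return 0.5 * (lo + hi)

# Triangular-lattice bond polynomial: p^3 - 3p + 1 = 0, root 2*sin(pi/18)
p_c = root_in_unit_interval(lambda p: p ** 3 - 3.0 * p + 1.0)
```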
A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.
Mihalaş, Stefan; Niebur, Ernst
2009-03-01
For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
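A stripped-down cousin of such a model, with linear subthreshold dynamics, a variable threshold, and update rules applied at spike time, can be sketched as follows. All parameter values are illustrative, and the firing-induced currents of the full model are omitted.

```python
# Minimal variable-threshold integrate-and-fire sketch: linear membrane and
# threshold dynamics between spikes, plus update rules at threshold crossing.
def simulate(i_ext, dt=0.1, steps=2000):
    v, theta = 0.0, 1.0                   # potential and variable threshold
    tau_v, tau_theta = 10.0, 50.0
    spike_times = []
    for step in range(steps):
        v += dt * (-v / tau_v + i_ext)                  # linear membrane equation
        theta += dt * (-(theta - 1.0) / tau_theta)      # threshold relaxes to 1.0
        if v >= theta:
            spike_times.append(step * dt)
            v = 0.0        # update rule: reset potential
            theta += 0.2   # update rule: raise threshold (spike-rate adaptation)
    return spike_times

spikes = simulate(0.2)   # suprathreshold drive: repetitive, adapting firing
```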
Physics must join with biology in better assessing risk from low-dose irradiation.
Feinendegen, L E; Neumann, R D
2005-01-01
This review summarises the complex response of mammalian cells and tissues to low doses of ionising radiation. This response encompasses the induction of DNA damage and adaptive protection, both against renewed damage and against propagation of damage from the basic level of biological organisation to the clinical expression of detriment. The induction of DNA damage at low radiation doses apparently is proportional to absorbed dose at the physical/chemical level. However, any propagation of such damage to higher levels of biological organisation inherently follows a sigmoid function. Moreover, low-dose-induced inhibition of damage propagation is not linear, but instead follows a dose-effect function typical of adaptive protection: after an initial rapid rise, it disappears at doses higher than approximately 0.1-0.2 Gy to cells. This duality of the biological response at low radiation doses precludes the validity of the linear-no-threshold hypothesis in the attempt to relate absorbed dose to cancer. In fact, theory and observation support not only a lower cancer incidence than expected from the linear-no-threshold hypothesis, but also a reduction of spontaneously occurring cancer, a hormetic response, in the healthy individual.
Marginal Stability of Sweet–Parker Type Current Sheets at Low Lundquist Numbers
NASA Astrophysics Data System (ADS)
Shi, Chen; Velli, Marco; Tenerani, Anna
2018-06-01
Magnetohydrodynamic simulations have shown that a nonunique critical Lundquist number Sc exists, hovering around Sc ∼ 10^4, above which Sweet–Parker type stationary reconnecting configurations become unstable to a fast tearing mode dominated by plasmoid generation. It is known that the flow along the sheet plays a stabilizing role, though a satisfactory explanation of the nonuniversality and variable critical Lundquist numbers observed is still lacking. Here we discuss this question using 2D linear MHD simulations and linear stability analyses of Sweet–Parker type current sheets in the presence of background stationary inflows and outflows at low Lundquist numbers (S ≤ 10^4). Simulations show that the inhomogeneous outflow stabilizes the current sheet by stretching the growing magnetic islands and at the same time evacuating them from the current sheet. This limits the time during which fluctuations that begin at any given wavelength can remain unstable, rendering the instability nonexponential. We find that the linear theory based on the expanding-wavelength assumption works well for S larger than ∼1000. However, we also find that the inflow and the location of the initial perturbation also affect the stability threshold.
Perception of linear acceleration in weightlessness
NASA Technical Reports Server (NTRS)
Arrott, A. P.; Young, L. R.
1987-01-01
Eye movements and subjective detection of acceleration were measured in human experimental subjects during vestibular sled acceleration on the D1 Spacelab Mission. Methods and results are reported on the time to detection of small acceleration steps, the threshold for detection of linear acceleration, perceived motion path, and CLOAT. A consistently shorter time to detection of small acceleration steps was found. Subjective reports of perceived motion during sinusoidal oscillation in weightlessness were qualitatively similar to reports on earth.
Laser induced white lighting of tungsten filament
NASA Astrophysics Data System (ADS)
Strek, W.; Tomala, R.; Lukaszewicz, M.
2018-04-01
Sustained bright white light emission from a thin tungsten filament was induced by irradiation with the focused beam of a CW infrared laser diode. The broadband emission, centered at 600 nm, showed threshold behavior with respect to excitation power: its intensity increased non-linearly with excitation power. The emission occurred only at the focal spot of the excitation laser beam. The white lighting was accompanied by efficient photocurrent flow and photoelectron emission, both of which increased non-linearly with laser irradiation power.
Zhang, Yunquan; Li, Cunlu; Feng, Renjie; Zhu, Yaohui; Wu, Kai; Tan, Xiaodong; Ma, Lu
2016-01-01
Less evidence concerning the association between ambient temperature and mortality is available in developing countries/regions, especially inland areas of China, and few previous studies have compared the predictive ability of different temperature indicators (minimum, mean, and maximum temperature) for mortality. We assessed the effects of temperature on daily mortality from 2003 to 2010 in Jiang’an District of Wuhan, the largest city in central China. Quasi-Poisson generalized linear models combined with both non-threshold and double-threshold distributed lag non-linear models (DLNM) were used to examine the associations between different temperature indicators and cause-specific mortality. We found a U-shaped relationship between temperature and mortality in Wuhan. The double-threshold DLNM with mean temperature performed best in predicting the temperature-mortality relationship. The cold effect was delayed, whereas the hot effect was acute; both lasted for several days. For cold effects over lag 0–21 days, a 1 °C decrease in mean temperature below the cold thresholds was associated with a 2.39% (95% CI: 1.71, 3.08) increase in non-accidental mortality, 3.65% (95% CI: 2.62, 4.69) increase in cardiovascular mortality, 3.87% (95% CI: 1.57, 6.22) increase in respiratory mortality, 3.13% (95% CI: 1.88, 4.38) increase in stroke mortality, and 21.57% (95% CI: 12.59, 31.26) increase in ischemic heart disease (IHD) mortality. For hot effects over lag 0–7 days, a 1 °C increase in mean temperature above the hot thresholds was associated with a 25.18% (95% CI: 18.74, 31.96) increase in non-accidental mortality, 34.10% (95% CI: 25.63, 43.16) increase in cardiovascular mortality, 24.27% (95% CI: 7.55, 43.59) increase in respiratory mortality, 59.1% (95% CI: 41.81, 78.5) increase in stroke mortality, and 17.00% (95% CI: 7.91, 26.87) increase in IHD mortality. This study suggested that both low and high temperatures were associated with increased mortality in Wuhan, and that mean temperature had better predictive ability than minimum and maximum temperature in modelling the temperature-mortality association. PMID:27438847
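The double-threshold exposure-response shape can be sketched as a piecewise log-linear relative risk, zero between the two thresholds and linear beyond them. The slopes below mirror the reported 2.39% (cold, lag 0–21 days) and 25.18% (hot, lag 0–7 days) figures for non-accidental mortality, but the threshold temperatures used here are illustrative, not the Wuhan estimates.

```python
import math

# Double-threshold sketch: log relative risk is zero between the cold and
# hot thresholds and linear beyond them. Thresholds (12/28 degC) are
# illustrative; slopes are back-calculated from the reported percentages.
def relative_risk(temp, cold=12.0, hot=28.0,
                  beta_cold=math.log(1.0239), beta_hot=math.log(1.2518)):
    cold_excess = max(cold - temp, 0.0)   # degrees below the cold threshold
    hot_excess = max(temp - hot, 0.0)     # degrees above the hot threshold
    return math.exp(beta_cold * cold_excess + beta_hot * hot_excess)
```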
Gou, Faxiang; Liu, Xinfeng; He, Jian; Liu, Dongpeng; Cheng, Yao; Liu, Haixia; Yang, Xiaoting; Wei, Kongfu; Zheng, Yunhe; Jiang, Xiaojuan; Meng, Lei; Hu, Wenbiao
2018-01-08
To determine the linear and non-linear interacting relationships between weather factors and hand, foot and mouth disease (HFMD) in children in Gansu, China, and to inform an early warning signal for HFMD transmission based on weather variability. Weekly HFMD cases in children aged under 15 years and meteorological data from 2010 to 2014 in Jiuquan, Lanzhou and Tianshui, Gansu, China were collected. Generalized linear regression models (GLM) with a Poisson link and classification and regression trees (CART) were employed to determine the combined and interactive relationships of weather factors and HFMD in both linear and non-linear ways. GLM suggested an increase in weekly HFMD of 5.9% [95% confidence interval (CI): 5.4%, 6.5%] in Tianshui, 2.8% [2.5%, 3.1%] in Lanzhou and 1.8% [1.4%, 2.2%] in Jiuquan in association with a 1 °C increase in average temperature. A 1% increase in relative humidity was associated with increases in weekly HFMD of 2.47% [2.23%, 2.71%] in Lanzhou and 1.11% [0.72%, 1.51%] in Tianshui. CART revealed that average temperature and relative humidity were the two most important determinants; threshold values for average temperature decreased from 20 °C in Jiuquan to 16 °C in Tianshui, while those for relative humidity increased from 38% in Jiuquan to 65% in Tianshui. Average temperature was the primary weather factor in all three areas, with greater sensitivity in southeastern Tianshui than in northwestern Jiuquan; the effect of relative humidity on HFMD showed a non-linear interacting relationship with average temperature.
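The percent changes quoted above come from exponentiating Poisson-GLM coefficients, as in this sketch; the coefficient is back-calculated from the reported 5.9% figure for Tianshui purely for illustration.

```python
import math

# Converting a Poisson-GLM coefficient into the percent change in weekly
# cases per 1-unit increase of a covariate: 100 * (exp(beta) - 1).
def percent_change(beta):
    return (math.exp(beta) - 1.0) * 100.0

# beta back-calculated from the reported 5.9% per 1 degC (illustrative)
change = percent_change(math.log(1.059))
```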
Is the sky the limit? On the expansion threshold of a species' range.
Polechová, Jitka
2018-06-15
More than 100 years after Grigg's influential analysis of species' borders, the causes of limits to species' ranges still represent a puzzle that has never been understood with clarity. The topic has become especially important recently as many scientists have become interested in the potential for species' ranges to shift in response to climate change, and yet nearly all of those studies fail to recognise or incorporate evolutionary genetics in a way that relates to theoretical developments. I show that range margins can be understood based on just two measurable parameters: (i) the fitness cost of dispersal (a measure of environmental heterogeneity) and (ii) the strength of genetic drift, which reduces genetic diversity. Together, these two parameters define an 'expansion threshold': adaptation fails when genetic drift reduces genetic diversity below that required for adaptation to a heterogeneous environment. When the key parameters drop below this expansion threshold locally, a sharp range margin forms. When they drop below this threshold throughout the species' range, adaptation collapses everywhere, resulting in either extinction or formation of a fragmented metapopulation. Because the effects of dispersal differ fundamentally with dimension, the second parameter, the strength of genetic drift, is qualitatively different in two-dimensional habitats compared to a linear habitat. In two-dimensional habitats, genetic drift becomes effectively independent of selection and decreases with 'neighbourhood size', the number of individuals accessible by dispersal within one generation. Moreover, in contrast to earlier predictions, which neglected evolution of genetic variance and/or stochasticity in two dimensions, dispersal into small marginal populations aids adaptation. This is because the reduction of both genetic and demographic stochasticity has a stronger effect than the cost of dispersal through increased maladaptation.
The expansion threshold thus provides a novel, theoretically justified, and testable prediction for formation of the range margin and collapse of the species' range.
Thermal sensation and climate: a comparison of UTCI and PET thresholds in different climates.
Pantavou, Katerina; Lykoudis, Spyridon; Nikolopoulou, Marialena; Tsiros, Ioannis X
2018-06-07
The influence of physiological acclimatization and psychological adaptation on thermal perception is well documented and has revealed the importance of thermal experience and expectation in the evaluation of environmental stimuli. Seasonal patterns of thermal perception have been studied, and calibrated thermal indices' scales have been proposed to obtain meaningful interpretations of thermal sensation indices in different climate regions. The current work attempts to quantify the contribution of climate to the long-term thermal adaptation by examining the relationship between climate normal annual air temperature (1971-2000) and such climate-calibrated thermal indices' assessment scales. The thermal sensation ranges of two thermal indices, the Universal Thermal Climate Index (UTCI) and the Physiological Equivalent Temperature Index (PET), were calibrated for three warm temperate climate contexts (Cfa, Cfb, Csa), against the subjective evaluation of the thermal environment indicated by interviewees during field surveys conducted at seven European cities: Athens (GR), Thessaloniki (GR), Milan (IT), Fribourg (CH), Kassel (DE), Cambridge (UK), and Sheffield (UK), under the same research protocol. Then, calibrated scales for other climate contexts were added from the literature, and the relationship between the respective scales' thresholds and climate normal annual air temperature was examined. To maintain the maximum possible comparability, three methods were applied for the calibration, namely linear, ordinal, and probit regression. The results indicated that the calibrated UTCI and PET thresholds increase with the climate normal annual air temperature of the survey city. To investigate further climates, we also included in the analysis results of previous studies presenting only thresholds for neutral thermal sensation. 
The average increase of the respective thresholds in the case of neutral thermal sensation was about 0.6 °C for each 1 °C increase of the normal annual air temperature for both indices, although this was statistically significant only for PET.
Tyka, Aleksander; Pałka, Tomasz; Tyka, Anna; Cisoń, Tomasz; Szyguła, Zbigniew
2009-01-01
To compare the mechanical power and physiological parameters in males at the lactate (LAAT) and integrated electromyographic (IEMGAT) anaerobic thresholds during exercise testing at 23 degrees C, 31 degrees C and 37 degrees C. Fifteen men aged 21.9+/-1.80 years performed an incremental exercise test on a cycle ergometer at a pedal frequency of 60 rpm. The test began at a power output of 120 W, which was increased by 30 W every 3 min. Heart rate, oxygen uptake, carbon dioxide in expired air and minute ventilation were monitored. Venous blood samples were collected 30 s before the end of each 3-min stage to determine the lactate anaerobic threshold. IEMGAT for the vastus lateralis (VL) and rectus femoris (RF) muscles was defined as the inflection point at which a non-linear increase in IEMG occurred. IEMGAT for VL and RF was similar at all three temperatures. IEMGAT (VL and RF) correlated closely with LAAT at ambient temperatures of 23 degrees C (r = 0.91), 31 degrees C (r = 0.96) and 37 degrees C (r = 0.97). Repeated-measures analysis of variance (ANOVA) revealed that the mechanical power at LAAT and IEMGAT was higher at 23 degrees C (202+/-26.5 W vs. 205+/-22.9 W) than at 31 degrees C (186+/-20.2 W vs. 186.2+/-20.2 W) and 37 degrees C (175.5+/-25.2 W vs. 175.3+/-20.0 W) for LAAT and IEMGAT, respectively (p < 0.01). Higher ambient temperature induced a decrease in the mechanical power at which the anaerobic threshold occurred. The high correlation between LAAT and IEMGAT (r = 0.91-0.97) indicates that IEMGAT can be used as a practical, reliable and non-invasive method for assessment of the anaerobic threshold.
Can we set a global threshold age to define mature forests?
Martin, Philip; Jung, Martin; Brearley, Francis Q; Ribbons, Relena R; Lines, Emily R; Jacob, Aerin L
2016-01-01
Globally, mature forests appear to be increasing in biomass density (BD). There is disagreement whether these increases are the result of increases in atmospheric CO2 concentrations or a legacy effect of previous land-use. Recently, it was suggested that a threshold of 450 years should be used to define mature forests and that many forests increasing in BD may be younger than this. However, the study making these suggestions failed to account for the interactions between forest age and climate. Here we revisit the issue to identify: (1) how climate and forest age control global forest BD and (2) whether we can set a threshold age for mature forests. Using data from previously published studies we modelled the impacts of forest age and climate on BD using linear mixed effects models. We examined the potential biases in the dataset by comparing how representative it was of global mature forests in terms of its distribution, the climate space it occupied, and the ages of the forests used. BD increased with forest age, mean annual temperature and annual precipitation. Importantly, the effect of forest age increased with increasing temperature, but the effect of precipitation decreased with increasing temperatures. The dataset was biased towards northern hemisphere forests in relatively dry, cold climates. The dataset was also clearly biased towards forests <250 years of age. Our analysis suggests that there is not a single threshold age for forest maturity. Since climate interacts with forest age to determine BD, the age at which forests reach equilibrium BD can only be determined locally. We caution against using BD as the only determinant of forest maturity since this ignores forest biodiversity and tree size structure, which may take longer to recover. Future research should address the utility and cost-effectiveness of different methods for determining whether forests should be classified as mature.
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
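The statistical recipe, a dark-peak-height distribution mixed with Gaussian TIA noise and read off as a count probability at a detection threshold, can be sketched as below. The discrete pmf and noise sigma are toy stand-ins, not fitted McIntyre distributions.

```python
import math

# Count-probability sketch: for each discrete dark-peak amplitude, the
# chance that amplitude-plus-Gaussian-noise exceeds the detection threshold
# is an erfc tail; weight the tails by the peak-height pmf and sum.
def count_probability(amplitudes, pmf, sigma, threshold):
    total = 0.0
    for amp, p in zip(amplitudes, pmf):
        z = (threshold - amp) / (sigma * math.sqrt(2.0))
        total += p * 0.5 * math.erfc(z)  # P(amp + noise > threshold)
    return total

# Toy two-level dark distribution with unit-sigma TIA noise
p_low = count_probability([0.0, 2.0], [0.9, 0.1], 1.0, 0.0)
p_high = count_probability([0.0, 2.0], [0.9, 0.1], 1.0, 4.0)
```

Raising the threshold drives the false-count probability down, mirroring the empirical finding that false counts fall faster than photon counts as the threshold is raised.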
Iler, Amy M; Høye, Toke T; Inouye, David W; Schmidt, Niels M
2013-08-19
Many alpine and subalpine plant species exhibit phenological advancements in association with earlier snowmelt. While the phenology of some plant species does not advance beyond a threshold snowmelt date, the prevalence of such threshold phenological responses within plant communities is largely unknown. We therefore examined the shape of flowering phenology responses (linear versus nonlinear) to climate using two long-term datasets from plant communities in snow-dominated environments: Gothic, CO, USA (1974-2011) and Zackenberg, Greenland (1996-2011). For a total of 64 species, we determined whether a linear or nonlinear regression model best explained interannual variation in flowering phenology in response to increasing temperatures and advancing snowmelt dates. The most common nonlinear trend was for species to flower earlier as snowmelt advanced, with either no change or a slower rate of change when snowmelt was early (average 20% of cases). By contrast, some species advanced their flowering at a faster rate over the warmest temperatures relative to cooler temperatures (average 5% of cases). Thus, some species seem to be approaching their limits of phenological change in response to snowmelt but not temperature. Such phenological thresholds could either be a result of minimum springtime photoperiod cues for flowering or a slower rate of adaptive change in flowering time relative to changing climatic conditions.
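The linear-versus-nonlinear comparison can be sketched as fitting both a straight line and a flat-then-linear "threshold" form and comparing residual error. The data below are synthetic, and a real analysis would also search over breakpoints and penalise the extra parameters (e.g. via AIC).

```python
# Compare a straight-line fit with a "threshold" fit that is flat (mean
# only) below a breakpoint and a free line above it, via residual SSE.
def sse_linear(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))

def sse_threshold(x, y, breakpoint):
    below = [(a, b) for a, b in zip(x, y) if a < breakpoint]
    above = [(a, b) for a, b in zip(x, y) if a >= breakpoint]
    sse = 0.0
    if below:  # flat segment: just the mean below the breakpoint
        mean_b = sum(b for _, b in below) / len(below)
        sse += sum((b - mean_b) ** 2 for _, b in below)
    if len(above) >= 2:  # free line above the breakpoint
        sse += sse_linear([a for a, _ in above], [b for _, b in above])
    return sse

# Synthetic "no change, then advancing" response: flat at 5, then slope 1
xs = [float(v) for v in range(10)]
ys = [5.0] * 5 + [6.0, 7.0, 8.0, 9.0, 10.0]
```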
ECG compression using Slantlet and lifting wavelet transform with and without normalisation
NASA Astrophysics Data System (ADS)
Aggarwal, Vibha; Singh Patterh, Manjeet
2013-05-01
This article analyses the performance of: (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match a predefined user-specified percentage root mean square difference (UPRD) within a tolerance. A binary look-up table is then built to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding, and the look-up table is encoded by Huffman coding. The results show that the LWT gives the best results of the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the normalisation case, the TC are normalised by dividing them by N (where N is the number of samples) to reduce their range. The normalised coefficients (NC) are then thresholded, after which the procedure is the same as for coefficients without normalisation. The results show that the compression ratio (CR) with LWT and normalisation is improved compared with that obtained without normalisation.
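The bisection step that tunes the zeroing threshold to hit a user-specified PRD can be sketched as follows; the coefficient vector is made up, and quantisation and entropy coding are omitted.

```python
# Bisection on the zeroing threshold so that the percentage root-mean-
# square difference (PRD) of the thresholded coefficients matches a target.
def prd(original, kept):
    num = sum((o - k) ** 2 for o, k in zip(original, kept))
    den = sum(o ** 2 for o in original)
    return 100.0 * (num / den) ** 0.5

def threshold_for_prd(coeffs, target, iters=60):
    lo, hi = 0.0, max(abs(c) for c in coeffs)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        kept = [c if abs(c) >= mid else 0.0 for c in coeffs]
        if prd(coeffs, kept) < target:
            lo = mid   # still within target: try discarding more
        else:
            hi = mid   # too lossy: lower the threshold
    return 0.5 * (lo + hi)

t = threshold_for_prd([5.0, 1.0, 0.2, 0.1, 4.0], target=5.0)
```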
Martín-González, Sofía; Navarro-Mesa, Juan L.; Juliá-Serdá, Gabriel; Ramírez-Ávila, G. Marcelo; Ravelo-García, Antonio G.
2018-01-01
Our contribution focuses on the characterization of sleep apnea from a cardiac rate point of view, using Recurrence Quantification Analysis (RQA), based on a Heart Rate Variability (HRV) feature selection process. Three parameters are crucial in RQA: those related to the embedding process (dimension and delay) and the threshold distance. There are no overall accepted parameters for the study of HRV using RQA in sleep apnea. We focus on finding an overall acceptable combination, sweeping a range of values for each of them simultaneously. Together with the commonly used RQA measures, we include features related to recurrence times, and features originating in the complex network theory. To the best of our knowledge, no author has used them all for sleep apnea previously. The best performing feature subset is entered into a Linear Discriminant classifier. The best results in the “Apnea-ECG Physionet database” and the “HuGCDN2014 database” are, according to the area under the receiver operating characteristic curve, 0.93 (Accuracy: 86.33%) and 0.86 (Accuracy: 84.18%), respectively. Our system outperforms, using a relatively small set of features, previously existing studies in the context of sleep apnea. We conclude that working with dimensions around 7–8 and delays about 4–5, and using for the threshold distance the Fixed Amount of Nearest Neighbours (FAN) method with 5% of neighbours, yield the best results. Therefore, we would recommend these reference values for future work when applying RQA to the analysis of HRV in sleep apnea. We also conclude that, together with the commonly used vertical and diagonal RQA measures, there are newly used features that contribute valuable information for apnea minutes discrimination. Therefore, they are especially interesting for characterization purposes. Using two different databases supports that the conclusions reached are potentially generalizable, and are not limited by database variability. PMID:29621264
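The recommended RQA settings above can be made concrete with a minimal sketch of the Fixed Amount of Nearest Neighbours (FAN) recurrence matrix, assuming the reported reference values (dimension ≈ 7, delay ≈ 4, 5% of neighbours); the function name and the plain Euclidean embedding distance are my choices, not the authors'.

```python
import numpy as np

def recurrence_matrix_fan(x, dim=7, delay=4, neighbour_frac=0.05):
    """Recurrence matrix under the FAN criterion: each time-delay-embedded
    point is marked recurrent with its k nearest neighbours, where
    k = neighbour_frac * (number of embedded points)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay          # number of embedded vectors
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    k = max(1, int(neighbour_frac * n))
    order = np.argsort(dist, axis=1)
    R = np.zeros((n, n), dtype=bool)
    for i in range(n):
        R[i, order[i, 1 : k + 1]] = True    # skip the point itself
    return R
```

Note that FAN fixes the recurrence rate at k/n per row, at the cost of an asymmetric recurrence matrix.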
Sex ratio variation in Iberian pigs.
Toro, M A; Fernández, A; García-Cortés, L A; Rodrigáñez, J; Silió, L
2006-06-01
Within the area of sex allocation, one of the topics that has attracted a lot of attention is the sex ratio problem. Fisher (1930) proposed that equal numbers of males and females are promoted by natural selection and that the 1:1 ratio has adaptive significance. But the empirical success of Fisher's theory remains doubtful because a sex ratio of 0.50 is also expected from the chromosomal mechanism of sex determination. Another way of approaching the subject is to note that Fisher's argument relies on the underlying assumption that offspring inherit their parents' tendency towards a biased sex ratio, and therefore that genetic variance for this trait exists. Here, we analyzed sex ratio data of 56,807 piglets coming from 550 boars and 1893 dams. In addition to a classical analysis of heterogeneity, we performed analyses fitting linear and threshold animal models in a Bayesian framework using Gibbs sampling techniques. The marginal posterior mean of heritability was 2.63 x 10(-4) under the sire linear model and 9.17 x 10(-4) under the sire threshold model. Under the latter model, the posterior probability of the hypothesis h(2) = 0 was 0.996. Also, we did not detect any trend in sex ratio related to maternal age. From an evolutionary point of view, chromosomal sex determination acts as a constraint that precludes control of offspring sex ratio in vertebrates, and it should be included in the general theory of sex allocation. From a practical point of view, this means that the sex ratio in domestic species is hardly susceptible to modification by artificial selection.
Shain, Kellen S; Madigan, Michael L; Rowson, Steven; Bisplinghoff, Jill; Duma, Stefan M
2010-11-01
The goals of this study were to measure the ability of catcher's masks to attenuate head accelerations on impact with a baseball and to compare these head accelerations to established injury thresholds for mild traumatic brain injury. Testing involved using a pneumatic cannon to shoot baseballs at an instrumented Hybrid III headform (a 50th percentile male head and neck) with and without a catcher's mask on the head. The ball speed was controlled from approximately 26.8 to 35.8 m/s (60-80 mph), and regulation National Collegiate Athletic Association baseballs were used. The setting was a research laboratory, with no human participants; the independent variables were catcher's mask use and impact velocity, and the main outcome measures were the linear and angular head accelerations of the Hybrid III headform. Peak linear resultant acceleration was 140 to 180 g without a mask and 16 to 30 g with a mask over the range of ball speeds investigated. Peak angular resultant acceleration was 19,500 to 25,700 rad/s² without a mask and 2250 to 3230 rad/s² with a mask. The Head Injury Criterion was 93 to 181 without a mask and 3 to 13 with a mask, and the Severity Index was 110 to 210 without a mask and 3 to 15 with a mask. Catcher's masks reduced head acceleration metrics by approximately 85%. Head acceleration metrics with a catcher's mask were significantly lower than contemporary injury thresholds, yet reports in the mass media clearly indicate that baseball impacts to the mask still occasionally result in mild traumatic brain injuries. Further research is needed to address this apparent contradiction.
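The Head Injury Criterion cited above has a standard definition: the maximum, over time windows [t1, t2], of (t2 − t1) times the average resultant acceleration (in g) over that window raised to the power 2.5. A brute-force sketch of that computation (not the authors' code) is:

```python
def hic(t, a, max_window=0.036):
    """Head Injury Criterion for an acceleration trace a (in g) sampled
    at times t (in s): max over windows [t1, t2] with t2 - t1 <= max_window
    of (t2 - t1) * (average acceleration over the window) ** 2.5."""
    n = len(t)
    # cumulative trapezoidal integral of a(t)
    ca = [0.0]
    for i in range(1, n):
        ca.append(ca[-1] + 0.5 * (a[i] + a[i - 1]) * (t[i] - t[i - 1]))
    best = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > max_window:
                break                       # windows only grow with j
            mean_a = (ca[j] - ca[i]) / dt
            if mean_a > 0.0:
                best = max(best, dt * mean_a ** 2.5)
    return best
```

For a constant 100-g pulse the criterion reduces to (window length) × 100^2.5, which gives a quick sanity check on any implementation.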
An increase in visceral fat is associated with a decrease in the taste and olfactory capacity
Fernandez-Garcia, Jose Carlos; Alcaide, Juan; Santiago-Fernandez, Concepcion; Roca-Rodriguez, MM.; Aguera, Zaida; Baños, Rosa; Botella, Cristina; de la Torre, Rafael; Fernandez-Real, Jose M.; Fruhbeck, Gema; Gomez-Ambrosi, Javier; Jimenez-Murcia, Susana; Menchon, Jose M.; Casanueva, Felipe F.; Fernandez-Aranda, Fernando; Tinahones, Francisco J.; Garrido-Sanchez, Lourdes
2017-01-01
Introduction Sensory factors may play an important role in the determination of appetite and food choices. Also, some adipokines may alter or predict the perception and pleasantness of specific odors. We aimed to analyze differences in smell–taste capacity between females with different weights and relate them with fat mass, fat-free mass, visceral fat, and several adipokines. Materials and methods 179 females with different weights (from low weight to morbid obesity) were studied. We analyzed the relation between fat mass, fat-free mass, visceral fat (indirectly estimated by bioelectrical impedance analysis with visceral fat rating (VFR)), leptin, adiponectin and visfatin. The smell and taste assessments were performed through the "Sniffin' Sticks" and "Taste Strips", respectively. Results We found a lower score in the measurement of smell (TDI-score (Threshold, Discrimination and Identification)) in obese subjects. All the olfactory functions measured, such as threshold, discrimination, identification and the TDI-score, correlated negatively with age, body mass index (BMI), leptin, fat mass, fat-free mass and VFR. In a multiple linear regression model, VFR mainly predicted the TDI-score. With regard to the taste function measurements, the normal-weight subjects showed a higher score of taste functions. However, a tendency to decrease was observed in the groups with higher or lower BMI. In a multiple linear regression model, VFR and age mainly predicted the total taste scores. Discussion We show for the first time that a reverse relationship exists between visceral fat and sensory signals, such as smell and taste, across a population with different body weight conditions. PMID:28158237
Pre-operative renal volume predicts peak creatinine after congenital heart surgery in neonates.
Carmody, J Bryan; Seckeler, Michael D; Ballengee, Cortney R; Conaway, Mark; Jayakumar, K Anitha; Charlton, Jennifer R
2014-10-01
Acute kidney injury is common in neonates following surgery for congenital heart disease. We conducted a retrospective analysis to determine whether neonates with smaller pre-operative renal volume were more likely to develop post-operative acute kidney injury. We conducted a retrospective review of 72 neonates who underwent congenital heart surgery for any lesion other than patent ductus arteriosus at our institution from January 2007 to December 2011. Renal volume was calculated by ultrasound using the prolate ellipsoid formula. The presence and severity of post-operative acute kidney injury was determined both by measuring the peak serum creatinine in the first 7 days post-operatively and by using the Acute Kidney Injury Network scoring system. Using a linear change point model, a threshold renal volume of 17 cm³ was identified. Below this threshold, there was an inverse linear relationship between renal volume and peak post-operative creatinine for all patients (p = 0.036) and the subgroup with a single morphologic right ventricle (p = 0.046). There was a non-significant trend towards more acute kidney injury using Acute Kidney Injury Network criteria in all neonates with renal volume ≤17 cm³ (p = 0.11) and in the subgroup with a single morphologic right ventricle (p = 0.17). Pre-operative renal volume ≤17 cm³ is associated with a higher peak post-operative creatinine and potentially greater risk for post-operative acute kidney injury for neonates undergoing congenital heart surgery. Neonates with a single right ventricle may be at higher risk.
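Two computational pieces of the method above are easy to sketch: the prolate ellipsoid volume formula used for renal ultrasound, V = (π/6)·L·W·D, and a grid-search fit of a linear change-point ("hockey-stick") model. The code below is illustrative only; the data are synthetic, and the fitting routine is a generic change-point fit, not the study's statistical software.

```python
import math

def prolate_ellipsoid_volume(length, width, depth):
    # Prolate ellipsoid formula commonly used for renal ultrasound:
    # V = (pi / 6) * L * W * D
    return math.pi / 6.0 * length * width * depth

def fit_changepoint(v, y, candidates):
    """Grid-search fit of y = b0 + b1 * max(0, tau - v): an inverse
    linear relation below the threshold tau and flat above it.
    Returns the candidate tau with the smallest residual sum of squares."""
    def sse_for(tau):
        x = [max(0.0, tau - vi) for vi in v]
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b1 = 0.0 if sxx == 0.0 else \
            sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        b0 = my - b1 * mx
        return sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    return min(candidates, key=sse_for)
```

On synthetic data built with a break at 17 (mirroring the 17 cm³ threshold reported in the abstract), the grid search recovers the change point exactly.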
Relation Between Cochlear Mechanics and Performance of Temporal Fine Structure-Based Tasks.
Otsuka, Sho; Furukawa, Shigeto; Yamagishi, Shimpei; Hirota, Koich; Kashino, Makio
2016-12-01
This study examined whether the mechanical characteristics of the cochlea could influence individual variation in the ability to use temporal fine structure (TFS) information. Cochlear mechanical functioning was evaluated by swept-tone evoked otoacoustic emissions (OAEs), which are thought to comprise linear reflection by micromechanical impedance perturbations, such as spatial variations in the number or geometry of outer hair cells, on the basilar membrane (BM). Low-rate (2 Hz) frequency modulation detection limens (FMDLs) at a 1000-Hz carrier and interaural phase difference (IPD) thresholds were measured as indices of TFS sensitivity, and high-rate (16 Hz) FMDLs and amplitude modulation detection limens (AMDLs) were measured as indices of sensitivity to non-TFS cues. Significant correlations were found among low-rate FMDLs, low-rate AMDLs, and IPD thresholds (R = 0.47-0.59). A principal component analysis revealed a common factor that could account for 81.1, 74.1, and 62.9 % of the variance in low-rate FMDLs, low-rate AMDLs, and IPD thresholds, respectively. An OAE feature, specifically a characteristic dip around 2-2.5 kHz in OAE spectra, showed a significant correlation with the common factor (R = 0.54). High-rate FMDLs and AMDLs were correlated with each other (R = 0.56) but not with the other measures. The results can be interpreted as indicating that (1) the low-rate AMDLs, as well as the IPD thresholds and low-rate FMDLs, depend on the use of TFS information coded in neural phase locking and (2) the use of TFS information is influenced by a particular aspect of cochlear mechanics, such as mechanical irregularity along the BM.
Narrowing of ischiofemoral and quadratus femoris spaces in pediatric ischiofemoral impingement.
Goldberg-Stein, Shlomit; Friedman, Avi; Gao, Qi; Choi, Jaeun; Schulz, Jacob; Fornari, Eric; Taragin, Benjamin
2018-05-05
To correlate MRI findings of quadratus femoris muscle edema (QFME) with narrowing of the ischiofemoral space (IFS) and quadratus femoris space (QFS) in children, and to identify threshold values reflecting an anatomic architecture that may predispose to ischiofemoral impingement. A case-control retrospective MRI review of 49 hips in 27 children (mean age, 13 years) with QFME was compared to 49 hips in 27 gender- and age-matched controls. Two radiologists independently measured IFS and QFS. Generalized linear mixed-effects models were fit to compare IFS and QFS values between cases and controls, and to adjust for correlation in repeated measures from the same subject. Receiver operating characteristic (ROC) analysis determined optimal threshold values. Compared to controls, cases had significantly smaller IFS (p < 0.001, both readers) and QFS (reader 1: p < 0.001; reader 2: p = 0.003). When stratified as preteen (< 13) or teenage (≥ 13), lower mean IFS and QFS were observed in cases versus controls in both age groups. The area under the ROC curve for IFS and QFS was high in preteens (0.77 and 0.71) and teens (0.94 and 0.88). Threshold values were 14.9 mm (preteens) and 19 mm (teens) for IFS and 11.2 mm (preteens) and 11.1 mm (teens) for QFS. IFS and QFS were modestly correlated with age among controls only. Pediatric patients with QFME had significantly narrower QFS and IFS compared with controls. IFS and QFS were found to normally increase in size with age. Optimal cutoff threshold values were identified for QFS and IFS in preteens and teenagers.
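The abstract does not state which ROC criterion defined the "optimal" cutoffs; one common choice is Youden's J (sensitivity + specificity − 1), sketched below on toy data. Here a subject is called positive when the measured space is at or below the cutoff, matching "narrower spaces predict impingement"; the function name is mine.

```python
def youden_cutoff(case_values, control_values):
    """Sweep candidate cutoffs (the observed values) and return the one
    maximising Youden's J = sensitivity + specificity - 1, where a
    subject is classified positive when its value is <= the cutoff."""
    candidates = sorted(set(case_values) | set(control_values))
    n_case, n_ctrl = len(case_values), len(control_values)
    best_j, best_c = -1.0, None
    for c in candidates:
        sens = sum(v <= c for v in case_values) / n_case
        spec = sum(v > c for v in control_values) / n_ctrl
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j
```

With perfectly separated groups the routine returns the largest case value and J = 1, which is the expected degenerate behaviour.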
Spatiotemporal Characterization of Ambient PM2.5 Concentrations in Shandong Province (China).
Yang, Yong; Christakos, George
2015-11-17
China experiences severe particulate matter (PM) pollution problems closely linked to its rapid economic growth. Advancing the understanding and characterization of spatiotemporal air pollution distribution is an area where improved quantitative methods are of great benefit to risk assessment and environmental policy. This work uses the Bayesian maximum entropy (BME) method to assess the space-time variability of PM2.5 concentrations and predict their distribution in the Shandong province, China. Daily PM2.5 concentrations obtained at air quality monitoring sites during 2014 were used. On the basis of the space-time PM2.5 distributions generated by BME, we performed three kinds of querying analysis to reveal the main distribution features. The results showed that the entire region of interest is seriously polluted (BME maps identified heavy pollution clusters during 2014). Quantitative characterization of pollution severity included both pollution level and duration. The number of days during which regional PM2.5 exceeded 75, 115, 150, and 250 μg m(-3) varied: 43-253, 13-128, 4-66, and 0-15 days, respectively. The PM2.5 pattern exhibited an increasing trend from east to west, with the western part of Shandong being a heavily polluted area (PM2.5 exceeded 150 μg m(-3) during long time periods). Pollution was much more serious during winter than during other seasons. Site indicators of PM2.5 pollution intensity and space-time variation were used to assess regional uncertainties and risks with their interpretation depending on the pollutant threshold. The observed PM2.5 concentrations exceeding a specified threshold increased almost linearly with increasing threshold value, whereas the relative probability of excess pollution decreased sharply with increasing threshold.
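The exceedance-day querying described above reduces to counting, for each pollution threshold, the days on which the daily mean concentration exceeds it; a minimal sketch (thresholds taken from the abstract, function name mine) is:

```python
def exceedance_days(daily_pm25, thresholds=(75, 115, 150, 250)):
    """Count, for each threshold (ug/m3), the number of days on which
    the daily mean PM2.5 concentration exceeds it."""
    return {t: sum(v > t for v in daily_pm25) for t in thresholds}
```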
Vaccination intervention on epidemic dynamics in networks
NASA Astrophysics Data System (ADS)
Peng, Xiao-Long; Xu, Xin-Jian; Fu, Xinchu; Zhou, Tao
2013-02-01
Vaccination is an important measure available for preventing or reducing the spread of infectious diseases. In this paper, an epidemic model including susceptible, infected, and imperfectly vaccinated compartments is studied on Watts-Strogatz small-world, Barabási-Albert scale-free, and random scale-free networks. The epidemic threshold and prevalence are analyzed. For small-world networks, the effective vaccination intervention is suggested and its influence on the threshold and prevalence is analyzed. For scale-free networks, the threshold is found to be strongly dependent both on the effective vaccination rate and on the connectivity distribution. Moreover, so long as vaccination is effective, it can linearly decrease the epidemic prevalence in small-world networks, whereas for scale-free networks it acts exponentially. These results can help in adopting pragmatic treatment upon diseases in structured populations.
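The strong dependence of the threshold on the connectivity distribution can be illustrated with the classical heterogeneous mean-field result for SIS dynamics on an uncorrelated network, λc = ⟨k⟩/⟨k²⟩. This is a minimal sketch under that standard approximation and omits the vaccination compartment studied in the paper:

```python
def sis_epidemic_threshold(degrees):
    """Heterogeneous mean-field estimate of the SIS epidemic threshold
    for an uncorrelated network: lambda_c = <k> / <k^2>."""
    ks = [float(k) for k in degrees]
    mean_k = sum(ks) / len(ks)
    mean_k2 = sum(k * k for k in ks) / len(ks)
    return mean_k / mean_k2
```

A homogeneous (regular) degree sequence gives λc = 1/k, while adding a few high-degree hubs inflates ⟨k²⟩ and drives the threshold down, consistent with the scale-free behaviour discussed above.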
Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2014-01-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010
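As a deliberately simplified illustration of threshold counting on a Gaussian-broadened voltage peak, the fraction of events landing above a counting threshold is the complementary normal CDF. This is a toy model only: the authors' analytical model additionally handles fractional photon counting for voltage plumes spread across pixel edges and corners, which this one-pixel sketch ignores.

```python
import math

def fraction_above_threshold(v_thr, mean_v, sigma_v):
    """Fraction of a Gaussian-distributed detected voltage (single-pixel
    approximation) that exceeds a counting threshold: the complementary
    normal CDF, written via erfc."""
    return 0.5 * math.erfc((v_thr - mean_v) / (sigma_v * math.sqrt(2.0)))
```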
Muir, Ryan D; Pogranichney, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2014-09-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment.
Vance, Carol Grace T.; Rakel, Barbara A.; Blodgett, Nicole P.; DeSantana, Josimari Melo; Amendola, Annunziato; Zimmerman, Miriam Bridget; Walsh, Deirdre M.
2012-01-01
Background Transcutaneous electrical nerve stimulation (TENS) is commonly used for the management of pain; however, its effects on several pain and function measures are unclear. Objective The purpose of this study was to determine the effects of high-frequency TENS (HF-TENS) and low-frequency TENS (LF-TENS) on several outcome measures (pain at rest, movement-evoked pain, and pain sensitivity) in people with knee osteoarthritis. Design The study was a double-blind, randomized clinical trial. Setting The setting was a tertiary care center. Participants Seventy-five participants with knee osteoarthritis (29 men and 46 women; 31–94 years of age) were assessed. Intervention Participants were randomly assigned to receive HF-TENS (100 Hz) (n=25), LF-TENS (4 Hz) (n=25), or placebo TENS (n=25) (pulse duration=100 microseconds; intensity=10% below motor threshold). Measurements The following measures were assessed before and after a single TENS treatment: cutaneous mechanical pain threshold, pressure pain threshold (PPT), heat pain threshold, heat temporal summation, Timed “Up & Go” Test (TUG), and pain intensity at rest and during the TUG. A linear mixed-model analysis of variance was used to compare differences before and after TENS and among groups (HF-TENS, LF-TENS, and placebo TENS). Results Compared with placebo TENS, HF-TENS and LF-TENS increased PPT at the knee; HF-TENS also increased PPT over the tibialis anterior muscle. There was no effect on the cutaneous mechanical pain threshold, heat pain threshold, or heat temporal summation. Pain at rest and during the TUG was significantly reduced by HF-TENS, LF-TENS, and placebo TENS. Limitations This study tested only a single TENS treatment. Conclusions Both HF-TENS and LF-TENS increased PPT in people with knee osteoarthritis; placebo TENS had no significant effect on PPT. Cutaneous pain measures were unaffected by TENS. Subjective pain ratings at rest and during movement were similarly reduced by active TENS and placebo TENS, suggesting a strong placebo component of the effect of TENS. PMID:22466027
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation, available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
Can power-law scaling and neuronal avalanches arise from stochastic dynamics?
Touboul, Jonathan; Destexhe, Alain
2010-02-11
The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
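The "more rigorous" procedure alluded to above is commonly the Clauset-style maximum-likelihood fit of the tail exponent followed by a Kolmogorov-Smirnov distance between the empirical and fitted CDFs, rather than least-squares on log-log axes. A sketch of both pieces for a continuous power law (function names mine) is:

```python
import math

def powerlaw_alpha_mle(xs, xmin):
    """Continuous power-law exponent by maximum likelihood:
    alpha = 1 + n / sum(ln(x / xmin)) over the tail x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

def ks_distance(xs, xmin, alpha):
    """Kolmogorov-Smirnov distance between the empirical tail CDF and
    the fitted power-law CDF 1 - (x / xmin) ** (1 - alpha)."""
    tail = sorted(x for x in xs if x >= xmin)
    n = len(tail)
    d = 0.0
    for i, x in enumerate(tail):
        model = 1.0 - (x / xmin) ** (1.0 - alpha)
        d = max(d, abs((i + 1) / n - model), abs(i / n - model))
    return d
```

On data drawn from a genuine power law the KS distance stays near the sampling floor (~1.36/√n at the 5% level), whereas thresholded stochastic processes that merely look linear on log-log axes tend to fail this test, as the abstract argues.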
Radhakrishnan, Kirthi; Haworth, Kevin J; Peng, Tao; McPherson, David D.; Holland, Christy K.
2014-01-01
Echogenic liposomes (ELIP) are being developed for the early detection and treatment of atherosclerotic lesions. An 80% loss of echogenicity of ELIP (Radhakrishnan et al. 2013) has been shown to be concomitant with the onset of stable and inertial cavitation. The ultrasound pressure amplitude at which this occurs is weakly dependent on pulse duration. Smith et al. (2007) have reported that the rapid fragmentation threshold of ELIP (based on changes in echogenicity) is dependent on the insonation pulse repetition frequency (PRF). The current study evaluates the relationship between loss of echogenicity and cavitation emissions from ELIP insonified by duplex Doppler pulses at four PRFs (1.25 kHz, 2.5 kHz, 5 kHz, and 8.33 kHz). Loss of echogenicity was evaluated on B-mode images of ELIP. Cavitation emissions from ELIP were recorded passively on a focused single-element transducer and a linear array. Emissions recorded by the linear array were beamformed and the spatial widths of stable and inertial cavitation emissions were compared to the calibrated azimuthal beamwidth of the Doppler pulse exceeding the stable and inertial cavitation thresholds. The inertial cavitation thresholds had a very weak dependence on PRF and stable cavitation thresholds were independent of PRF. The spatial widths of the cavitation emissions recorded by the passive cavitation imaging system agreed with the calibrated Doppler beamwidths. The results also show that 64%–79% loss of echogenicity can be used to classify the presence or absence of cavitation emissions with greater than 80% accuracy. PMID:25438849
Variations in recollection: the effects of complexity on source recognition.
Parks, Colleen M; Murray, Linda J; Elfman, Kane; Yonelinas, Andrew P
2011-07-01
Whether recollection is a threshold or signal detection process is highly controversial, and the controversy has centered in part on the shape of receiver operating characteristics (ROCs) and z-transformed ROCs (zROCs). U-shaped zROCs observed in tests thought to rely heavily on recollection, such as source memory tests, have provided evidence in favor of the threshold assumption, but zROCs are not always as U-shaped as threshold theory predicts. Source zROCs have been shown to become more linear when the contribution of familiarity to source discriminations is increased, and this may account for the existing results. However, another way in which source zROCs may become more linear is if the recollection threshold begins to break down and recollection becomes more graded and Gaussian. We tested the "graded recollection" account in the current study. We found that increasing stimulus complexity (i.e., changing from single words to sentences) or increasing source complexity (i.e., changing the sources from audio to videos of speakers) resulted in flatter source zROCs. In addition, conditions expected to reduce recollection (i.e., divided attention and amnesia) had comparable effects on source memory in simple and complex conditions, suggesting that differences between simple and complex conditions were due to differences in the nature of recollection, rather than differences in the utility of familiarity. The results suggest that under conditions of high complexity, recollection can appear more graded, and it can produce curved ROCs. The results have implications for measurement models and for current theories of recognition memory.
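The zROC diagnostic at the heart of this debate is simple to compute: transform each (false alarm, hit) pair by the inverse normal CDF and inspect the shape. An equal-variance Gaussian signal-detection model predicts a straight zROC with slope 1, whereas threshold recollection predicts a U shape. The sketch below (function names mine) builds zROC points and a least-squares slope:

```python
from statistics import NormalDist

def zroc_points(hit_rates, fa_rates):
    """Map ROC points (false alarm rate, hit rate) into z-space via the
    inverse normal CDF; linearity and slope diagnose the underlying
    recognition model."""
    z = NormalDist().inv_cdf
    return [(z(f), z(h)) for h, f in zip(hit_rates, fa_rates)]

def slope(points):
    """Least-squares slope of y on x for a list of (x, y) pairs."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
```

Generating ROC points from an equal-variance model with d' = 1 and several criteria yields z(fa) = −c and z(hit) = d' − c, so the fitted slope is exactly 1, confirming the transform.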
Virgilio, Massimiliano; Jordaens, Kurt; Breman, Floris C; Backeljau, Thierry; De Meyer, Marc
2012-01-01
We propose a general working strategy to deal with incomplete reference libraries in the DNA barcoding identification of species. Considering that (1) queries with a large genetic distance from their best DNA barcode match are more likely to be misidentified and (2) imposing a distance threshold profitably reduces identification errors, we modelled relationships between identification performances and distance thresholds in four DNA barcode libraries of Diptera (n = 4270), Lepidoptera (n = 7577), Hymenoptera (n = 2067) and Tephritidae (n = 602 DNA barcodes). In all cases, more restrictive distance thresholds produced a gradual increase in the proportion of true negatives, a gradual decrease of false positives and more abrupt variations in the proportions of true positives and false negatives. More restrictive distance thresholds improved precision, yet negatively affected accuracy due to the higher proportions of queries discarded (viz. having a distance query-best match above the threshold). Using a simple linear regression, we calculated an ad hoc distance threshold for the tephritid library producing an estimated relative identification error <0.05. In line with expectations, when we used this threshold for the identification of 188 independently collected tephritids, less than 5% of queries with a distance query-best match below the threshold were misidentified. Ad hoc thresholds can be calculated for each particular reference library of DNA barcodes and should be used as a cut-off mark defining whether we can proceed with identifying the query with a known estimated error probability (e.g. 5%) or whether we should discard the query and consider alternative/complementary identification methods.
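The ad hoc threshold calculation above amounts to fitting a line relating candidate distance thresholds to estimated relative identification errors and inverting it at the target error (e.g. 5%). A sketch of that inversion on made-up numbers (the exact error-estimation procedure in the paper is richer than a single regression) is:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def ad_hoc_threshold(thresholds, relative_errors, target_error=0.05):
    """Invert the fitted linear relation between distance threshold and
    estimated relative identification error, returning the threshold
    whose expected error equals the target."""
    a, b = fit_line(thresholds, relative_errors)
    return (target_error - a) / b
```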
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
Ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Because of the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered for thresholding the forecast covariance and gain matrices: hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding (petroleum reservoir) cases with different levels of heterogeneity/nonlinearity. It should be noted that, besides adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
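The covariance-thresholding step described above amounts to shrinking small off-diagonal entries toward zero. A minimal sketch of the hard and soft rules (the lasso and SCAD variants follow the same pattern with different shrinkage functions; the matrix below is a toy example, not a forecast covariance from the benchmarks):

```python
def soft(x, t):
    """Soft thresholding: shrink toward zero by t, zero inside [-t, t]."""
    return 0.0 if abs(x) <= t else (abs(x) - t) * (1 if x > 0 else -1)

def hard(x, t):
    """Hard thresholding: zero inside [-t, t], unchanged outside."""
    return 0.0 if abs(x) <= t else x

def threshold_matrix(cov, t, rule=soft):
    """Apply a thresholding rule element-wise to a covariance matrix.
    The diagonal (variances) is left untouched; only cross-covariances,
    where spurious correlations live, are shrunk."""
    n = len(cov)
    return [[cov[i][j] if i == j else rule(cov[i][j], t)
             for j in range(n)] for i in range(n)]

cov = [[2.00, 0.05, 0.90],
       [0.05, 1.50, -0.02],
       [0.90, -0.02, 1.00]]
filtered = threshold_matrix(cov, 0.1, rule=hard)
# Small entries (0.05, -0.02) are removed; the strong 0.90 link survives.
```

In an adaptive scheme the level `t` would itself be estimated per assimilation cycle rather than fixed as here.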
Effects of whole body vibration on motor unit recruitment and threshold
Pollock, Ross D.; Woledge, Roger C.; Martin, Finbarr C.; Newham, Di J.
2012-01-01
Whole body vibration (WBV) has been suggested to elicit reflex muscle contractions but this has never been verified. We recorded from 32 single motor units (MU) in the vastus lateralis of 7 healthy subjects (34 ± 15.4 yr) during five 1-min bouts of WBV (30 Hz, 3 mm peak to peak), and the vibration waveform was also recorded. Recruitment thresholds were recorded from 38 MUs before and after WBV. The phase angle distribution of all MUs during WBV was nonuniform (P < 0.001) and displayed a prominent peak phase angle of firing. There was a strong linear relationship (r = −0.68, P < 0.001) between the change in recruitment threshold after WBV and average recruitment threshold; the lowest threshold MUs increased recruitment threshold (P = 0.008) while reductions were observed in the higher threshold units (P = 0.031). We investigated one possible cause of changed thresholds. Presynaptic inhibition in the soleus was measured in 8 healthy subjects (29 ± 4.6 yr). A total of 30 H-reflexes (stimulation intensity 30% Mmax) were recorded before and after WBV: 15 conditioned by prior stimulation (60 ms) of the antagonist and 15 unconditioned. There were no significant changes in the relationship between the conditioned and unconditioned responses. The consistent phase angle at which each MU fired during WBV indicates the presence of reflex muscle activity similar to the tonic vibration reflex. The varying response in high- and low-threshold MUs may be due to the different contributions of the mono- and polysynaptic pathways but not presynaptic inhibition. PMID:22096119
Romariz, Alexandre R S; Wagner, Kelvin H
2007-07-20
An optoelectronic implementation of a modified FitzHugh-Nagumo neuron model is proposed, analyzed, and experimentally demonstrated. The setup uses linear optics and linear electronics for implementing an optical wavelength-domain nonlinearity. The system attains instability through a bifurcation mechanism present in a class of neuron models, a fact that is shown analytically. The implementation exhibits basic features of neural dynamics including threshold, production of short pulses (or spikes), and refractoriness.
NASA Astrophysics Data System (ADS)
Kochetov, Andrey
2016-07-01
Numerical simulations are carried out of the dynamics of electromagnetic fields in a smoothly inhomogeneous nonlinear plasma layer, in the framework of the nonlinear Schrödinger equation with boundary conditions responsible for the pumping of the field into the layer by an incident wave and for the inverse radiation losses, supplemented by volume field dissipation due to the electromagnetic excitation of Langmuir turbulence. We investigate how the threshold of nonlinearity and its evolution, and the threshold and saturation levels of dissipation in the vicinity of the wave reflection point, affect the dynamics of the reflection and absorption indexes. We also consider hard damping that depends on the local field amplitude, as well as hysteresis losses whose "on" and "off" absorption thresholds differ by several times. The dependence of the thresholds of the steady-state, periodic and chaotic regimes of plasma-wave interaction on the scenario of turbulence evolution is demonstrated. The results are compared with experimental observations of the Langmuir stage of ionospheric modification.
Low threshold optical bistability in one-dimensional gratings based on graphene plasmonics.
Guo, Jun; Jiang, Leyong; Jia, Yue; Dai, Xiaoyu; Xiang, Yuanjiang; Fan, Dianyuan
2017-03-20
Optical bistability of graphene surface plasmons is investigated numerically, using a grating coupling method at normal light incidence. The linear surface plasmon resonance depends strongly on the Fermi level of graphene, and hence can be tuned over a large wavelength range. Owing to the field enhancement of the graphene surface plasmon resonance and the large third-order nonlinear response of graphene, a low-threshold optical hysteresis is observed, with a threshold of 20 MW/cm2 and a response time of 1.7 ps. Notably, this optical bistability phenomenon is insensitive to the incident angle for angles of up to about 15°. The threshold of optical bistability can be further lowered to 0.5 MW/cm2 by using graphene nanoribbons, with the response time also shortened to 800 fs. We believe that our results will find potential applications in bistable devices and all-optical switching from the mid-IR to the THz range.
Global gray-level thresholding based on object size.
Ranefall, Petter; Wählby, Carolina
2016-04-01
In this article, we propose a fast and robust global gray-level thresholding method based on object size, where the selection of threshold level is based on recall and maximum precision with regard to objects within a given size interval. The method relies on the component tree representation, which can be computed in quasi-linear time. Feature-based segmentation is especially suitable for biomedical microscopy applications where objects often vary in number, but have limited variation in size. We show that for real images of cell nuclei and synthetic data sets mimicking fluorescent spots the proposed method is more robust than all standard global thresholding methods available for microscopy applications in ImageJ and CellProfiler. The proposed method, provided as ImageJ and CellProfiler plugins, is simple to use and the only required input is an interval of the expected object sizes. © 2016 International Society for Advancement of Cytometry.
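As a rough illustration of size-based threshold selection, here is a deliberately simplified 1-D analogue: for each candidate gray level, segment the runs of pixels above that level and keep the level that yields the most objects inside the expected size interval. The published method operates on a 2-D component tree; this sketch only conveys the selection principle:

```python
def runs_above(signal, level):
    """Lengths of maximal runs of samples strictly above `level`."""
    runs, length = [], 0
    for v in signal:
        if v > level:
            length += 1
        elif length:
            runs.append(length)
            length = 0
    if length:
        runs.append(length)
    return runs

def best_threshold(signal, size_min, size_max, levels):
    """Gray level maximizing the count of objects within the size interval."""
    def score(level):
        return sum(size_min <= r <= size_max for r in runs_above(signal, level))
    return max(levels, key=score)

# Toy 1-D "image" with three bright objects on a noisy background.
signal = [0, 5, 5, 5, 0, 1, 9, 9, 9, 9, 1, 0, 2, 8, 8, 0]
t = best_threshold(signal, size_min=2, size_max=4, levels=range(10))
```

At too low a level the two middle objects merge (size 6, outside the interval); at too high a level objects shrink below the minimum size, so an intermediate level wins.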
Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.
Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S
2016-05-01
The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and compare 2 methods that detect muscle activity onset delay: the double-threshold method and the cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the lowest Detection Failures were the "8 SD-0 milliseconds" threshold (Detection Failures = 8) and the "8 SD-10 milliseconds" threshold (Detection Failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 milliseconds for the multifidus and 67 ± 116 milliseconds for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude threshold and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses to spinal manipulation and improve comparisons within and across investigations. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
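The double-threshold rule above combines an amplitude threshold (in baseline standard deviations) with a minimum-duration threshold. A sketch with the 8 SD / 10-sample settings mirroring the reported parameters, applied to a synthetic trace (not real EMG data; at a real sampling rate the duration would be converted from milliseconds to samples):

```python
from statistics import mean, stdev

def onset_index(emg, baseline_n, k=8, duration=10):
    """First index of a run exceeding baseline mean + k*SD for at least
    `duration` consecutive samples; None if no such run exists."""
    base = emg[:baseline_n]
    level = mean(base) + k * stdev(base)
    run_start, run_len = None, 0
    for i in range(baseline_n, len(emg)):
        if abs(emg[i]) > level:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len >= duration:
                return run_start
        else:
            run_len = 0
    return None

# Synthetic trace: 10 baseline samples, 5 sub-threshold samples,
# then a sustained burst.
baseline = [0.0, 0.1, -0.1, 0.1, -0.1, 0.0, 0.1, -0.1, 0.1, -0.1]
emg = baseline + [0.5] * 5 + [2.0] * 15
onset = onset_index(emg, baseline_n=10)  # -> 15
```

The brief 0.5 excursion never satisfies the duration criterion, so only the sustained burst registers as an onset; that is precisely what the second threshold buys over a plain amplitude criterion.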
Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R
2010-01-01
The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining adequate spectral quality for analysis is still challenging, given device constraints and the short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR(msr)) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selection of PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of principal component loads.
The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can reliably be used in the low SNR data set (set B) compared to the high SNR data set (set A). Despite the fact that no definitive threshold could be found, this method may help to determine the cutoff for the number of principal components used in discriminant analysis. Future analysis of a selection of spectral databases using this technique will allow optimum thresholds to be selected for different applications and spectral data quality levels.
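The selection rule discussed here, stop adding PCs once the reconstruction SNR no longer improves, can be sketched as below. `snr_msr` is a simplified stand-in (mean of the reconstruction over the standard deviation of the residual), not the paper's exact estimator, and the spectra are tiny synthetic vectors:

```python
from statistics import mean, stdev

def snr_msr(original, reconstruction):
    """Mean SNR proxy: reconstruction level over residual noise."""
    residual = [o - r for o, r in zip(original, reconstruction)]
    return mean(reconstruction) / stdev(residual)

def select_n_pcs(original, reconstructions):
    """reconstructions[k] = spectrum rebuilt from the first k+1 PCs.
    Returns the number of PCs at which the SNR proxy peaks."""
    best_k, best_snr = 0, float("-inf")
    for k, rec in enumerate(reconstructions):
        s = snr_msr(original, rec)
        if s > best_snr:
            best_k, best_snr = k + 1, s
    return best_k

original = [10.0, 12.0, 11.0, 13.0]
rec_1 = [9.0, 11.0, 12.0, 12.0]   # 1 PC: coarse fit
rec_2 = [9.8, 11.8, 11.2, 12.8]   # 2 PCs: close fit
rec_3 = [9.0, 12.5, 10.0, 13.5]   # 3 PCs: noisy component degrades fit
n_pcs = select_n_pcs(original, [rec_1, rec_2, rec_3])  # -> 2
```

Beyond the peak, additional PCs contribute as much noise as signal, which is exactly the cutoff behaviour the abstract describes.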
Gauging the likelihood of stable cavitation from ultrasound contrast agents
NASA Astrophysics Data System (ADS)
Bader, Kenneth B.; Holland, Christy K.
2013-01-01
The mechanical index (MI) was formulated to gauge the likelihood of adverse bioeffects from inertial cavitation. However, the MI formulation did not consider bubble activity from stable cavitation. This type of bubble activity can be readily nucleated from ultrasound contrast agents (UCAs) and has the potential to promote beneficial bioeffects. Here, the presence of stable cavitation is determined numerically by tracking the onset of subharmonic oscillations within a population of bubbles for frequencies up to 7 MHz and peak rarefactional pressures up to 3 MPa. In addition, the acoustic pressure rupture threshold of an UCA population was determined using the Marmottant model. The threshold for subharmonic emissions of optimally sized bubbles was found to be lower than the inertial cavitation threshold for all frequencies studied. The rupture thresholds of optimally sized UCAs were found to be lower than the threshold for subharmonic emissions for either single cycle or steady state acoustic excitations. Because the thresholds of both subharmonic emissions and UCA rupture are linearly dependent on frequency, an index of the form ICAV = Pr/f (where Pr is the peak rarefactional pressure in MPa and f is the frequency in MHz) was derived to gauge the likelihood of subharmonic emissions due to stable cavitation activity nucleated from UCAs.
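The proposed index is a direct transcription of the formula in the abstract, with Pr the peak rarefactional pressure in MPa and f the frequency in MHz; the example values are arbitrary:

```python
def i_cav(pr_mpa, f_mhz):
    """Cavitation index I_CAV = Pr / f for stable cavitation from UCAs
    (Pr in MPa, f in MHz)."""
    return pr_mpa / f_mhz

# e.g. a 1.5 MPa rarefactional pressure at 5 MHz:
index = i_cav(1.5, 5.0)  # -> 0.3
```

Note the contrast with the mechanical index, which divides by the square root of frequency; the linear frequency dependence here follows from the linear frequency scaling of the subharmonic and rupture thresholds reported above.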
Chuang, Michael L; Gona, Philimon; Hautvast, Gilion L T F; Salton, Carol J; Breeuwer, Marcel; O'Donnell, Christopher J; Manning, Warren J
2014-04-01
To determine sex-specific reference values for left ventricular (LV) volumes, mass, and ejection fraction (EF) in healthy adults using computer-aided analysis and to examine the effect of age on LV parameters. We examined data from 1494 members of the Framingham Heart Study Offspring cohort, obtained using short-axis stack cine SSFP CMR, identified a healthy reference group (without cardiovascular disease, hypertension, or LV wall motion abnormality) and determined sex-specific upper 95th percentile thresholds for LV volumes and mass, and lower 5th percentile thresholds for EF using computer-assisted border detection. In secondary analyses, we stratified participants by age-decade and tested for linear trend across age groups. The reference group comprised 685 adults (423F; 61 ± 9 years). Men had greater LV volumes and mass, before and after indexation to common measures of body size (all P < 0.001). Women had greater EF (73 ± 6 versus 71 ± 6%; P = 0.0002). LV volumes decreased with greater age in both sexes, even after indexation. Indexed LV mass did not vary with age. LV EF and concentricity increased with greater age in both sexes. We present CMR-derived LV reference values. There are significant age and sex differences in LV volumes, EF, and geometry, whereas mass differs between sexes but not age groups. Copyright © 2013 Wiley Periodicals, Inc.
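The reference limits described above are sample percentiles of the healthy reference group: upper 95th for volumes and mass, lower 5th for EF. A sketch with synthetic values (not Framingham data):

```python
from statistics import quantiles

def percentile(values, p):
    """p-th percentile (p = 1..99) using the inclusive method, which
    treats the data as the full observed reference sample."""
    return quantiles(values, n=100, method="inclusive")[p - 1]

# Synthetic sex-specific reference samples (illustrative units: % and g).
ef_women = [68, 70, 71, 72, 73, 74, 75, 76, 77, 80]
lv_mass_men = [120, 130, 135, 140, 145, 150, 155, 160, 170, 190]

lower_5th_ef = percentile(ef_women, 5)       # lower reference limit for EF
upper_95th_mass = percentile(lv_mass_men, 95)  # upper reference limit for mass
```

Values outside these limits (EF below the 5th percentile, mass above the 95th) would be flagged as outside the sex-specific reference range.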
Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J
2018-07-01
Network meta-analyses (NMA) have extensively been used to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold, and accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple tests/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whereas MMSE at threshold <25/30 appeared to have the best true negative rate. The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostic tests for decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
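The threshold constraint exploited by the model can be stated in miniature: for a "score < cut-off" rule, raising the cut-off can only flag more people, so sensitivity must be non-decreasing and specificity non-increasing in the cut-off. A sketch with illustrative, not fitted, values:

```python
def check_threshold_order(points):
    """points: list of (cutoff, sensitivity, specificity) for one test
    using a 'score < cutoff' positivity rule. Returns True if the
    monotonicity constraint holds across cut-offs."""
    pts = sorted(points)  # order by cut-off
    sens_ok = all(a[1] <= b[1] for a, b in zip(pts, pts[1:]))
    spec_ok = all(a[2] >= b[2] for a, b in zip(pts, pts[1:]))
    return sens_ok and spec_ok

# Illustrative accuracy values for MMSE at the two cut-offs in the abstract.
mmse = [(25, 0.71, 0.85), (27, 0.88, 0.60)]
ok = check_threshold_order(mmse)  # -> True
```

In the NMA this ordering is imposed as a prior constraint during MCMC fitting rather than checked after the fact, which is what allows data at different thresholds to be pooled coherently.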
Single-mode operation of mushroom structure surface emitting lasers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Y.J.; Dziura, T.G.; Wang, S.C.
1991-01-01
Mushroom structure vertical cavity surface emitting lasers with a 0.6 μm GaAs active layer sandwiched by two Al0.6Ga0.4As-Al0.08Ga0.92As multilayers as top and bottom mirrors exhibit a 15 mA pulsed threshold current at 880 nm. Single longitudinal and single transverse mode operation was achieved on lasers with a 5 μm diameter active region at current levels near 2 × I_th. The light output above threshold current was linearly polarized with a polarization ratio of 25:1.
Sputtering of cobalt and chromium by argon and xenon ions near the threshold energy region
NASA Technical Reports Server (NTRS)
Handoo, A. K.; Ray, P. K.
1993-01-01
Sputtering yields of cobalt and chromium by argon and xenon ions with energies below 50 eV are reported. The targets were electroplated on copper substrates. Measurable sputtering yields were obtained from cobalt with ion energies as low as 10 eV. The ion beams were produced by an ion gun. A radioactive tracer technique was used for the quantitative measurement of the sputtering yield. Co-57 and Cr-51 were used as tracers. The yield-energy curves are observed to be concave, which brings into question the practice of finding threshold energies by linear extrapolation.
Quantitative comparisons of type III radio burst intensity and fast electron flux at 1 AU
NASA Technical Reports Server (NTRS)
Fitzenreiter, R. J.; Evans, L. G.; Lin, R. P.
1976-01-01
We compare the flux of fast solar electrons and the intensity of the type III radio emission generated by these particles at 1 AU. We find that there are two regimes in the generation of type III radiation: one where the radio intensity is linearly proportional to the electron flux, and the second regime, which occurs above a threshold electron flux, where the radio intensity is proportional to the approximately 2.4 power of the electron flux. This threshold appears to reflect a transition to a different emission mechanism.
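The two-regime relation can be written as a piecewise power law, continuous at the threshold flux. The threshold and scale below are placeholders, not fitted values from the study:

```python
def type3_intensity(flux, flux_threshold=1.0, scale=1.0):
    """Type III radio intensity vs fast electron flux: linear below the
    threshold flux, proportional to flux**2.4 above it, matched so the
    curve is continuous at the threshold."""
    if flux <= flux_threshold:
        return scale * flux
    # Continuity at the threshold: scale*T * (flux/T)**2.4
    return scale * flux_threshold * (flux / flux_threshold) ** 2.4

low = type3_intensity(0.5)   # linear regime
high = type3_intensity(2.0)  # steeper power-law regime
```

On a log-log plot the two branches appear as straight lines of slope 1 and 2.4 meeting at the threshold, which is how such a regime change is identified in flux-intensity scatter data.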
Liang, C Jason; Budoff, Matthew J; Kaufman, Joel D; Kronmal, Richard A; Brown, Elizabeth R
2012-07-02
Extent of atherosclerosis, measured by the amount of coronary artery calcium (CAC) on computed tomography (CT), has traditionally been assessed using thresholded scoring methods, such as the Agatston score (AS). These thresholded scores have value in clinical prediction, but important information may exist below the threshold, and capturing it would have important advantages for understanding genetic, environmental, and other risk factors in atherosclerosis. We developed a semi-automated threshold-free scoring method, the spatially weighted calcium score (SWCS), for CAC in the Multi-Ethnic Study of Atherosclerosis (MESA). Chest CT scans were obtained from 6814 MESA participants. The SWCS and the AS were calculated for each of the scans. Cox proportional hazards models and linear regression models were used to evaluate the associations of the scores with CHD events and CHD risk factors. CHD risk factors were summarized using a linear predictor. Among all participants and participants with AS > 0, the SWCS and AS both showed similar strongly significant associations with CHD events (hazard ratios, 1.23 and 1.19 per doubling of SWCS and AS; 95% CI, 1.16 to 1.30 and 1.14 to 1.26) and CHD risk factors (slopes, 0.178 and 0.164; 95% CI, 0.162 to 0.195 and 0.149 to 0.179). Even among participants with AS = 0, an increase in the SWCS was still significantly associated with established CHD risk factors (slope, 0.181; 95% CI, 0.138 to 0.224). The SWCS appeared to be predictive of CHD events even in participants with AS = 0, though those events were rare as expected. The SWCS provides a valid, continuous measure of CAC suitable for quantifying the extent of atherosclerosis without a threshold, which will be useful for examining novel genetic and environmental risk factors for atherosclerosis.
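For contrast with the threshold-free idea, here is a sketch of an Agatston-style thresholded score (the standard 130 HU cut-off and 1-4 density weights) next to a naive threshold-free sum. The latter is only illustrative and is not the published spatially weighted algorithm:

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weighting above the 130 HU cut-off."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """lesions: list of (area_mm2, peak_hu) per calcified lesion."""
    return sum(area * agatston_weight(hu) for area, hu in lesions)

def threshold_free_score(voxels, voxel_area_mm2=0.25):
    """Naive threshold-free sum: every voxel contributes in proportion to
    its attenuation (illustrative only, not the published SWCS, which
    additionally applies spatial weighting)."""
    return voxel_area_mm2 * sum(max(hu, 0) for hu in voxels) / 130.0

lesions = [(4.0, 120), (2.0, 250)]
score = agatston_score(lesions)  # the 120 HU lesion contributes nothing
```

The sub-threshold lesion that the Agatston score discards entirely is exactly the information a threshold-free score is designed to retain.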
Optimal Sequential Rules for Computer-Based Instruction.
ERIC Educational Resources Information Center
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
Teaching Information Literacy to Generation Y.
ERIC Educational Resources Information Center
Manuel, Kate
2002-01-01
Discusses how to change library information literacy classes for Generation Y students (born after 1981) to accommodate their learning styles and preferences, based on experiences at California State University, Hayward. Topics include positive outlooks toward technology; orientation toward images, not linear text; low thresholds for boredom and…
Effect of particle stiffness on contact dynamics and rheology in a dense granular flow
NASA Astrophysics Data System (ADS)
Bharathraj, S.; Kumaran, V.
2018-01-01
Dense granular flows have been well described by the Bagnold rheology, even when the particles are in the multibody contact regime and the coordination number is greater than 1. This is surprising, because the Bagnold law should be applicable only in the instantaneous collision regime, where the time between collisions is much larger than the period of a collision. Here, the effect of particle stiffness on rheology is examined. It is found that there is a rheological threshold between a particle stiffness of 104-105 for the linear contact model and 105-106 for the Hertzian contact model above which Bagnold rheology (stress proportional to square of the strain rate) is valid and below which there is a power-law rheology, where all components of the stress and the granular temperature are proportional to a power of the strain rate that is less then 2. The system is in the multibody contact regime at the rheological threshold. However, the contact energy per particle is less than the kinetic energy per particle above the rheological threshold, and it becomes larger than the kinetic energy per particle below the rheological threshold. The distribution functions for the interparticle forces and contact energies are also analyzed. The distribution functions are invariant with height, but they do depend on the contact model. The contact energy distribution functions are well fitted by Gamma distributions. There is a transition in the shape of the distribution function as the particle stiffness is decreased from 107 to 106 for the linear model and 108 to 107 for the Hertzian model, when the contact number exceeds 1. Thus, the transition in the distribution function correlates to the contact regime threshold from the binary to multibody contact regime, and is clearly different from the rheological threshold. An order-disorder transition has recently been reported in dense granular flows. 
The Bagnold rheology applies for both the ordered and disordered states, even though the rheological constants differ by orders of magnitude. The effect of particle stiffness on the order-disorder transition is examined here. It is found that when the particle stiffness is above the rheological threshold, there is an order-disorder transition as the base roughness is increased. The order-disorder transition disappears after the crossover to the soft-particle regime when the particle stiffness is decreased below the rheological threshold, indicating that the transition is a hard-particle phenomenon.
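The rheological classification above amounts to fitting the exponent n in stress ∝ (strain rate)^n and checking whether n ≈ 2 (Bagnold) or n < 2 (soft-particle power law). A minimal sketch of such a fit on log-log axes, with synthetic data and an illustrative `fit_power_law` helper (not the authors' code):

```python
import math

def fit_power_law(strain_rates, stresses):
    """Least-squares fit of log(stress) = log(c) + n*log(strain_rate);
    returns (c, n). A Bagnold scaling corresponds to n close to 2."""
    xs = [math.log(g) for g in strain_rates]
    ys = [math.log(s) for s in stresses]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    n = sxy / sxx
    c = math.exp(ybar - n * xbar)
    return c, n

rates = [0.1, 0.2, 0.5, 1.0, 2.0]

# Synthetic "hard-particle" data obeying Bagnold scaling, stress = 3 * rate^2
_, n_hard = fit_power_law(rates, [3.0 * g ** 2 for g in rates])

# Synthetic "soft-particle" data with a sub-quadratic exponent, stress = 3 * rate^1.4
_, n_soft = fit_power_law(rates, [3.0 * g ** 1.4 for g in rates])
```

In practice one would compare goodness of fit of the free-exponent model against the fixed n = 2 Bagnold form, as the abstract describes.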
Nonlinearity and Scaling Behavior in Lead Zirconate Titanate Piezoceramic
NASA Astrophysics Data System (ADS)
Mueller, V.
1998-03-01
The results of a comprehensive study of the nonlinear dielectric and electromechanical response of lead zirconate titanate (PZT) piezoceramics are presented. The piezoelectric strain of a series of donor-doped (soft PZT) and acceptor-doped (hard PZT) polycrystalline systems was measured under quasistatic (nonresonant) conditions. The measuring field was applied both parallel and perpendicular to the poling direction of the ceramic in order to investigate the influence of different symmetry conditions. Dielectric properties were studied in addition to the electromechanical measurements, which enabled us to compare piezoelectric and dielectric nonlinearities. Due to the different levels and types of dopants, the piezoceramics examined differ significantly with regard to their Curie temperatures (190 °C
NASA Astrophysics Data System (ADS)
Zhang, Z.; Cardwell, D.; Sasikumar, A.; Kyle, E. C. H.; Chen, J.; Zhang, E. X.; Fleetwood, D. M.; Schrimpf, R. D.; Speck, J. S.; Arehart, A. R.; Ringel, S. A.
2016-04-01
The impact of proton irradiation on the threshold voltage (VT) of AlGaN/GaN heterostructures is systematically investigated to enhance the understanding of a primary component of the degradation of irradiated high electron mobility transistors. The value of VT was found to increase monotonically as a function of 1.8 MeV proton fluence in a sub-linear manner, reaching 0.63 V at a fluence of 1 × 10^14 cm^-2. Silvaco Atlas simulations of VT shifts caused by GaN buffer traps, using experimentally measured introduction rates and energy levels, closely match the experimental results. Different buffer designs lead to different VT dependences on proton irradiation, confirming that deep, acceptor-like defects in the GaN buffer are primarily responsible for the observed VT shifts. The proton-irradiation-induced VT shifts are found to depend on the barrier thickness in a linear fashion; thus, scaling the barrier thickness could be an effective way to reduce such degradation.
Langmuir wave turbulence transition in a model of stimulated Raman scatter
NASA Astrophysics Data System (ADS)
Rose, Harvey A.
2000-06-01
In a one-dimensional stationary slab model, it is found that once the stimulated Raman scatter (SRS) homogeneous growth rate, γ0, exceeds a threshold value, γT, there exists a local, finite amplitude instability, which leads to Langmuir wave turbulence (LWT). Given energetic enough initial conditions, this allows forward SRS, a linearly convective instability, to be nonlinearly self-sustaining for γ0>γT. Levels of forward scatter, much larger than predicted by the linear amplification of thermal fluctuations, are then accessible. The Stochastic quasilinear Markovian (SQM) model of SRS interacting with LWT predicts a jump in the value of <ɛ>, the mean energy injection rate from the laser to the plasma, across this threshold, while one-dimensional plasma slab simulations reveal large fluctuations in ɛ, and a smooth variation of <ɛ> with γ0. Away from γT, <ɛ> is well predicted by the SQM. If a background density ramp is imposed, LWT may lead to loss of SRS gradient stabilization for γ0≪γT.
Verduijn, J; Milaneschi, Y; Schoevers, R A; van Hemert, A M; Beekman, A T F; Penninx, B W J H
2015-09-29
Meta-analyses support the involvement of different pathophysiological mechanisms (inflammation, hypothalamic-pituitary (HPA)-axis, neurotrophic growth and vitamin D) in major depressive disorder (MDD). However, it remains unknown whether dysregulations in these mechanisms are more pronounced when MDD progresses toward multiple episodes and/or chronicity. We hypothesized that four central pathophysiological mechanisms of MDD are not only involved in etiology, but also associated with clinical disease progression. Therefore, we expected to find increasingly more dysregulation across consecutive stages of MDD progression. The sample from the Netherlands Study of Depression and Anxiety (18-65 years) consisted of 230 controls and 2333 participants assigned to a clinical staging model categorizing MDD in eight stages (0, 1A, 1B, 2, 3A, 3B, 3C and 4), from familial risk at MDD (stage 0) to chronic MDD (stage 4). Analyses of covariance examined whether pathophysiological mechanism markers (interleukin (IL)-6, C-reactive protein (CRP), cortisol, brain-derived neurotrophic factor and vitamin D) showed a linear trend across controls, those at risk for MDD (stages 0, 1A and 1B), and those with full-threshold MDD (stages 2, 3A, 3B, 3C and 4). Subsequently, pathophysiological differences across separate stages within those at risk and with full-threshold MDD were examined. A linear increase of inflammatory markers (CRP P=0.026; IL-6 P=0.090), cortisol (P=0.025) and decrease of vitamin D (P<0.001) was found across the entire sample (for example, from controls to those at risk and those with full-threshold MDD). Significant trends of dysregulations across stages were present in analyses focusing on at-risk individuals (IL-6 P=0.050; cortisol P=0.008; vitamin D P<0.001); however, no linear trends were found in dysregulations for any of the mechanisms across more progressive stages of full-threshold MDD. 
Our results support that the examined pathophysiological mechanisms are involved in MDD's etiology. These same mechanisms, however, are less important in clinical progression from first to later MDD episodes and toward chronicity.
Nonlinear Upshift of Trapped Electron Mode Critical Density Gradient: Simulation and Experiment
NASA Astrophysics Data System (ADS)
Ernst, D. R.
2012-10-01
A new nonlinear critical density gradient for pure trapped electron mode (TEM) turbulence increases strongly with collisionality, saturating at several times the linear threshold. The nonlinear TEM threshold appears to limit the density gradient in new experiments subjecting Alcator C-Mod internal transport barriers to modulated radio-frequency heating. Gyrokinetic simulations show the nonlinear upshift of the TEM critical density gradient is associated with long-lived zonal flow dominated states [1]. This introduces a strong temperature dependence that allows external RF heating to control TEM turbulent transport. During pulsed on-axis heating of ITB discharges, core electron temperature modulations of 50% were produced. Bursts of line-integrated density fluctuations, observed on phase contrast imaging, closely follow modulations of core electron temperature inside the ITB foot. Multiple edge fluctuation measurements show the edge response to modulated heating is out of phase with the core response. A new limit cycle stability diagram shows the density gradient appears to be clamped during on-axis heating by the nonlinear TEM critical density gradient, rather than by the much lower linear threshold. Fluctuation wavelength spectra will be quantitatively compared with nonlinear TRINITY/GS2 gyrokinetic transport simulations, using an improved synthetic diagnostic. In related work, we are implementing the first gyrokinetic exact linearized Fokker-Planck collision operator [2]. Initial results show short wavelength TEMs are fully stabilized by finite-gyroradius collisional effects for realistic collisionalities. The nonlinear TEM threshold and its collisionality dependence may impact predictions of density peaking based on quasilinear theory, which excludes zonal flows. In collaboration with M. Churchill, A. Dominguez, C. L. Fiore, Y. Podpaly, M. L. Reinke, J. Rice, J. L. Terry, N. Tsujii, M. A. Barnes, I. Bespamyatnov, R. Granetz, M. Greenwald, A. Hubbard, J. W. Hughes, M. Landreman, B. Li, Y. Ma, P. Phillips, M. Porkolab, W. Rowan, S. Wolfe, and S. Wukitch. [1] D. R. Ernst et al., Proc. 21st IAEA Fusion Energy Conference, Chengdu, China, paper IAEA-CN-149/TH/1-3 (2006). http://www-pub.iaea.org/MTCD/Meetings/FEC200/th1-3.pdf [2] B. Li and D.R. Ernst, Phys. Rev. Lett. 106, 195002 (2011).
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia
2017-04-01
Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are the following: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower-limit and upper-limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess threshold performance using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides that caused fatalities, injuries, missing people, and evacuated or homeless people in Portugal from 1865 to 2010. The DISASTER database was compiled by exploring several Portuguese daily and weekly newspapers. Using the same newspaper sources, the database was recently updated to also include landslides that did not cause any human damage; these were likewise considered in this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of its rainfall data, with records starting in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each rainfall quantity-duration combination, the return period was estimated using the Gumbel probability distribution.
The quantity-duration pair with the highest return period was considered the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combinations have a return period above 3 years were included; this criterion reduces the likelihood of including events whose trigger was something other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and potential regression. Because this threshold allows false negatives (i.e., events below the threshold), lower-limit and upper-limit rainfall thresholds were also identified. These limits were defined empirically by establishing the quantity-duration combinations below which no landslides were recorded (lower limit) and the quantity-duration combinations above which only landslides were recorded, without any false positives (upper limit). The zone between the lower-limit and upper-limit thresholds was analysed using a probabilistic approach, quantifying the uncertainty of each critical rainfall condition in the triggering of landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].
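The return-period step described above (fit a Gumbel distribution to cumulative rainfall totals, then convert an observed quantity into a return period) can be sketched as follows. This is an illustrative method-of-moments implementation with hypothetical rainfall values, not the FORLAND project's code:

```python
import math

EULER_GAMMA = 0.5772156649015329

def gumbel_fit(sample):
    """Method-of-moments fit of a Gumbel distribution; returns (mu, beta)."""
    m = len(sample)
    mean = sum(sample) / m
    var = sum((x - mean) ** 2 for x in sample) / (m - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def return_period(x, mu, beta):
    """Return period (in years, for annual maxima) of the value x."""
    cdf = math.exp(-math.exp(-(x - mu) / beta))
    return 1.0 / (1.0 - cdf)

# Hypothetical annual maxima of 30-day cumulative rainfall, in mm
annual_maxima = [121.0, 95.5, 143.2, 101.8, 160.4, 88.9, 132.6, 150.1, 99.3, 110.7]
mu, beta = gumbel_fit(annual_maxima)
t_150mm = return_period(150.0, mu, beta)  # return period of a 150 mm event
```

The event-selection criterion in the abstract would then keep only events whose most extreme quantity-duration pair has `return_period(...) > 3`.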
Data Fitting to Study Ablated Hard Dental Tissues by Nanosecond Laser Irradiation
Abdel-Daiem, A. M.; Ansari, M. Shahnawaze; Babkair, Saeed S.; Salah, Numan A.; Al-Mujtaba, A.
2016-01-01
Laser ablation of dental hard tissues is one of the most important laser applications in dentistry. Many works have reported the interaction of laser radiation with tooth material to optimize laser parameters such as wavelength and energy density. This work focused on determining the relationship between energy density and ablation thresholds using a pulsed 5 ns neodymium-doped yttrium aluminum garnet (Nd:YAG, Nd:Y3Al5O12) laser at 1064 nm. For enamel and dentin tissues, the ablations were performed using the laser-induced breakdown spectroscopy (LIBS) technique. The ablation thresholds and the relationship between energy densities and the peak areas of the calcium lines appearing in LIBS were determined by data fitting. Furthermore, the morphological changes were studied using a scanning electron microscope (SEM). Moreover, the chemical stability of the tooth material after ablation was studied using energy-dispersive X-ray spectroscopy (EDX). The differences between the carbon atomic % of non-irradiated and irradiated samples were tested using a statistical t-test. Results revealed that the best fits between energy densities and the peak areas of calcium lines were exponential for enamel and linear for dentin. In addition, the ablation threshold of the Nd:YAG laser in enamel was higher than that of dentin. The morphology of the region surrounding the ablated enamel showed thermal damage. For enamel, the EDX quantitative analysis showed that the atomic % of carbon increased significantly when the laser energy density increased. PMID:27228169
Constrained sampling experiments reveal principles of detection in natural scenes.
Sebastian, Stephen; Abrams, Jared; Geisler, Wilson S
2017-07-11
A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.
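The normalized matched-template detector invoked above divides the raw template response by a gain tied to the stimulus energy of the background. A toy sketch with made-up patch and template vectors; the constant `c0` stabilizing the division is an assumption of this sketch, and the paper's actual normalization pools luminance, contrast, and similarity statistics:

```python
import math

def normalized_template_response(patch, template, c0=1e-6):
    """Template response divided by a normalizing gain proportional to the
    patch's energy; c0 (assumed here) keeps the division numerically stable."""
    dot = sum(p * t for p, t in zip(patch, template))
    gain = math.sqrt(sum(p * p for p in patch)) + c0
    return dot / gain

# Made-up 4-pixel example: a flat background with and without an added target.
template = [1.0, 0.0, -1.0, 0.0]
background = [0.2, 0.2, 0.2, 0.2]
with_target = [b + 0.5 * t for b, t in zip(background, template)]

r_present = normalized_template_response(with_target, template)
r_absent = normalized_template_response(background, template)
```

A "target present" decision would compare the normalized response against a criterion; the normalization is what makes thresholds scale with background luminance, contrast, and similarity.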
Comparison of Various Anthropometric Indices as Risk Factors for Hearing Impairment in Asian Women.
Kang, Seok Hui; Jung, Da Jung; Lee, Kyu Yup; Choi, Eun Woo; Do, Jun Young
2015-01-01
The objective of the present study was to examine the associations between various anthropometric measures and metabolic syndrome and hearing impairment in Asian women. We identified 11,755 women who underwent voluntary routine health checkups at Yeungnam University Hospital between June 2008 and April 2014. Among these patients, 2,485 participants were <40 years old, and 1,072 participants lacked information regarding their laboratory findings or hearing and were therefore excluded. In total, 8,198 participants were recruited into our study. The AUROC value for metabolic syndrome was 0.790 for the waist-to-hip ratio (WHR), with a cutoff value of 0.939; the sensitivity and specificity for predicting metabolic syndrome were 72.7% and 71.7%, respectively. The AUROC value for hearing loss was 0.758 for the WHR, with a cutoff value of 0.932; the sensitivity and specificity for predicting hearing loss were 65.8% and 73.4%, respectively. The WHR had the highest AUC and was the best predictor of metabolic syndrome and hearing loss. Univariate and multivariate linear regression analyses showed that WHR levels were positively associated with four hearing thresholds: the averaged hearing threshold and the low-, middle-, and high-frequency thresholds. In addition, multivariate logistic analysis revealed that those with a high WHR had a 1.347-fold increased risk of hearing loss compared with the participants with a low WHR. Our results demonstrated that the WHR may be a surrogate marker for predicting the risk of hearing loss resulting from metabolic syndrome.
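The AUROC-and-cutoff analysis used above can be reproduced in miniature: AUC via the rank (Mann-Whitney) formulation, and the cutoff chosen to maximize sensitivity + specificity (Youden's J). The WHR values below are hypothetical, not the study data:

```python
def auc(pos, neg):
    """Probability that a random positive scores above a random negative
    (ties count half) — equivalent to the area under the ROC curve."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

def best_cutoff(pos, neg):
    """Cutoff maximizing sensitivity + specificity (Youden's J)."""
    best_c, best_j = None, -1.0
    for c in sorted(set(pos + neg)):
        sens = sum(1 for p in pos if p >= c) / len(pos)
        spec = sum(1 for n in neg if n < c) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c

# Hypothetical WHR values: cases (hearing loss) vs. controls
cases = [0.95, 0.97, 0.94, 0.90, 0.99]
controls = [0.85, 0.88, 0.91, 0.83, 0.87]
```

The quadratic pairwise AUC is fine at this scale; for thousands of participants a rank-sum formulation is the usual choice.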
Hemodynamic responses to acute and gradual renal artery stenosis in pigs.
Rognant, Nicolas; Rouvière, Olivier; Janier, Marc; Lê, Quoc Hung; Barthez, Paul; Laville, Maurice; Juillard, Laurent
2010-11-01
Reduction of renal blood flow (RBF) due to a renal artery stenosis (RAS) can lead to renal ischemia and atrophy. However, in pigs there are no data describing the relationship between the degree of RAS, the reduction of RBF, and the increase of systemic plasma renin activity (PRA). Therefore, we conducted a study to measure the effect of acute and gradual RAS on RBF, mean arterial pressure (MAP), and systemic PRA in pigs. RAS was induced experimentally in six pigs using an occluder placed around the renal artery downstream of an ultrasound flow probe. The vascular occluder was inflated gradually to reduce RBF. At each inflation step, the percentage of RAS was measured by digital subtraction angiography (DSA), with simultaneous measurements of RBF, MAP, and PRA. Data were normalized to baseline values obtained before RAS induction. Piecewise regression analysis was performed between the percentage of RAS and relative RBF, MAP, and PRA, respectively. In all pigs, the relationship between the degree of RAS and RBF was similar. RBF decreased over a threshold of 42% of RAS, with a rapid drop in RBF when RAS reached 70%. PRA increased dramatically over a threshold of 58% of RAS (+1,300% before occlusion). MAP increased slightly (+15% before occlusion) without an identifiable threshold. This study emphasizes that the relation between the degree of RAS and both RBF and systemic PRA is not linear and that a high degree of RAS must be reached before the occurrence of significant hemodynamic and humoral effects.
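A piecewise ("broken-stick") regression like the one used to locate the RAS thresholds can be sketched by a grid search over candidate breakpoints: fit a flat segment below the breakpoint and a line above it, and keep the breakpoint with the lowest squared error. The data below are synthetic stand-ins for the normalized RBF measurements, not the study's values:

```python
def _ols_sse(pts):
    """Sum of squared residuals of a simple least-squares line through pts."""
    n = len(pts)
    xbar = sum(x for x, _ in pts) / n
    ybar = sum(y for _, y in pts) / n
    sxx = sum((x - xbar) ** 2 for x, _ in pts)
    sxy = sum((x - xbar) * (y - ybar) for x, y in pts)
    slope = sxy / sxx if sxx else 0.0
    return sum((y - (ybar + slope * (x - xbar))) ** 2 for x, y in pts)

def broken_stick_threshold(xs, ys, candidates):
    """Flat segment below the breakpoint, OLS line above it; return the
    candidate breakpoint minimizing the total squared error."""
    best_b, best_sse = None, float("inf")
    pts = list(zip(xs, ys))
    for b in candidates:
        below = [(x, y) for x, y in pts if x <= b]
        above = [(x, y) for x, y in pts if x > b]
        if len(below) < 2 or len(above) < 2:
            continue
        ybar = sum(y for _, y in below) / len(below)
        sse = sum((y - ybar) ** 2 for _, y in below) + _ols_sse(above)
        if sse < best_sse:
            best_b, best_sse = b, sse
    return best_b

# Synthetic data: flow flat at 100% until a 40% stenosis, then declining
xs = list(range(0, 100, 5))
ys = [100.0 if x <= 40 else 100.0 - 2.0 * (x - 35) for x in xs]
```

Continuity constraints and confidence intervals on the breakpoint would be added in a real analysis; the grid search conveys the idea.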
Fokker-Planck analysis of transverse collective instabilities in electron storage rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindberg, Ryan R.
We analyze single bunch transverse instabilities due to wakefields using a Fokker-Planck model. We first expand on the work of T. Suzuki, Part. Accel. 12, 237 (1982) to derive the theoretical model including chromaticity, both dipolar and quadrupolar transverse wakefields, and the effects of damping and diffusion due to the synchrotron radiation. We reduce the problem to a linear matrix equation, whose eigenvalues and eigenvectors determine the collective stability of the beam. We then show that various predictions of the theory agree quite well with results from particle tracking simulations, including the threshold current for transverse instability and the profile of the unstable mode. In particular, we find that predicting collective stability for high energy electron beams at moderate to large values of chromaticity requires the full Fokker-Planck analysis to properly account for the effects of damping and diffusion due to synchrotron radiation.
Terahertz radiation-induced sub-cycle field electron emission across a split-gap dipole antenna
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jingdi; Averitt, Richard D., E-mail: xinz@bu.edu, E-mail: raveritt@ucsd.edu; Department of Physics, Boston University, Boston, Massachusetts 02215
We use intense terahertz pulses to excite the resonant mode (0.6 THz) of a micro-fabricated dipole antenna with a vacuum gap. The dipole antenna structure enhances the peak amplitude of the in-gap THz electric field by a factor of ∼170. Above an in-gap E-field threshold amplitude of ∼10 MV cm^-1, THz-induced field electron emission is observed as indicated by the field-induced electric current across the dipole antenna gap. Field emission occurs within a fraction of the driving THz period. Our analysis of the current (I) and incident electric field (E) is in agreement with a Millikan-Lauritsen analysis where log (I) exhibits a linear dependence on 1/E. Numerical estimates indicate that the electrons are accelerated to a value of approximately one tenth of the speed of light.
Effective field theory analysis on μ problem in low-scale gauge mediation
NASA Astrophysics Data System (ADS)
Zheng, Sibo
2012-02-01
Supersymmetric models based on the scenario of gauge mediation often suffer from the well-known μ problem. In this paper, we reconsider this problem in low-scale gauge mediation in terms of effective field theory analysis. In this paradigm, all high-energy input soft masses can be expressed via loop expansions. If the corrections coming from messenger thresholds are small, as we assume in this letter, then all RG evolutions can be treated as linear approximations for low-scale supersymmetry breaking. Due to these observations, the parameter space can be systematically classified and studied after constraints coming from electroweak symmetry breaking are imposed. We find that some old proposals in the literature are reproduced, and two new classes are uncovered. We refer to a microscopic model where the specific relations among coefficients in one of the new classes are well motivated. Also, we discuss some preliminary phenomenology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolesnikov, R.A.; Krommes, J.A.
The collisionless limit of the transition to ion-temperature-gradient-driven plasma turbulence is considered with a dynamical-systems approach. The importance of systematic analysis for understanding the differences in the bifurcations and dynamics of linearly damped and undamped systems is emphasized. A model with ten degrees of freedom is studied as a concrete example. A four-dimensional center manifold (CM) is analyzed, and fixed points of its dynamics are identified and used to predict a ''Dimits shift'' of the threshold for turbulence due to the excitation of zonal flows. The exact value of that shift in terms of physical parameters is established for the model; the effects of higher-order truncations on the dynamics are noted. Multiple-scale analysis of the CM equations is used to discuss possible effects of modulational instability on scenarios for the transition to turbulence in both collisional and collisionless cases.
Janky, Kristen L.; Shepard, Neil
2009-01-01
Background Vestibular evoked myogenic potential (VEMP) testing has gained increased interest in the diagnosis of a variety of vestibular etiologies. Comparisons of P13/N23 latency, amplitude and threshold response curves have been used to compare pathologic groups to normal controls. Appropriate characterization of these etiologies requires normative data across the frequency spectrum and age range. Purpose The objective of the current study was to test the hypothesis that significant changes in VEMP responses occur as a function of increased age across all test stimuli, as well as to characterize the VEMP threshold response curve across age. Research Design This project incorporated a prospective study design using a sample of convenience. Openly recruited subjects were assigned to groups according to age. Study Sample Forty-six normal controls ranging between 20 and 76 years of age participated in the study. Participants were separated by decade into 5 age categories from 20 to 60-plus years. Normal participants were characterized by having normal hearing sensitivity, no history of neurologic or balance/dizziness involvement and negative results on a direct office vestibular examination. Intervention VEMP responses were measured at threshold to click and 250, 500, 750, and 1000 Hz tone-burst stimuli and at a suprathreshold level to 500 Hz tone-burst stimuli at 123 dB SPL. Data Collection and Analysis A mixed-group factorial ANOVA and linear regression were performed to examine the effects of age on VEMP characteristics. Results There were no significant differences between ears for any of the test parameters. There were no significant differences between age groups for N23 latency or amplitude in response to any of the stimuli. Significant mean differences did exist between age groups for P13 latency (250, 750, and 1000 Hz) and threshold (500 and 750 Hz). Age was significantly correlated with VEMP parameters.
VEMP threshold was positively correlated (250, 500, 750, 1000 Hz); and amplitude was negatively correlated (500 Hz Maximum). The threshold response curves revealed best frequency tuning at 500 Hz with the highest thresholds in response to click stimuli. However, this best frequency tuning dissipated with increased age. VEMP response rates also decreased with increased age. Conclusion We have demonstrated that minor differences in VEMP responses occur with age. Given the reduced response rates and flattened frequency tuning curve for individuals over the age of 60, frequency tuning curves may not be a good diagnostic indicator for this age group. PMID:19764171
NASA Astrophysics Data System (ADS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-05-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are usually hardly achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake-field effects. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
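The exponential-operator integration mentioned above can be illustrated on a toy problem: operator (Strang) splitting for the logistic equation u' = u - u^2, where the exact flows of the linear and nonlinear parts are composed per step. This scalar sketch only conveys the splitting idea, not the Vlasov solver itself:

```python
import math

def strang_step(u, dt):
    """One Strang-splitting step for u' = u - u**2, composing the exact
    sub-flows of the linear part (u' = u) and the nonlinear part (u' = -u**2)."""
    u *= math.exp(0.5 * dt)   # half step of the linear flow
    u = u / (1.0 + u * dt)    # full step of the nonlinear flow (exact solution)
    u *= math.exp(0.5 * dt)   # half step of the linear flow
    return u

def integrate(u0, t_end, n_steps):
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = strang_step(u, dt)
    return u

# Exact logistic solution for u(0) = 0.5 at t = 1 is e / (1 + e)
u_exact = math.e / (1.0 + math.e)
u_coarse = integrate(0.5, 1.0, 100)
u_fine = integrate(0.5, 1.0, 200)
```

Strang splitting is second-order accurate, so halving the step size should cut the error by roughly a factor of four.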
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
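The regression step described in this abstract is straightforward to reproduce. The sketch below uses made-up inputs and coefficients (not the paper's cancer cure model): probabilistic sensitivity analysis (PSA) outcomes are regressed on standardized parameter draws, so the intercept recovers the base-case outcome and the slope coefficients rank parameter influence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PSA draws: 3 standardized input parameters, 10,000 cohorts.
n = 10_000
X = rng.standard_normal((n, 3))           # standardized parameter values
# Hypothetical "true" model: net benefit depends linearly on the inputs.
beta_true = np.array([5.0, -2.0, 0.7])
y = 40.0 + X @ beta_true + rng.normal(0.0, 1.0, n)   # simulated outcome

# Metamodel: regress the simulated outcome on the standardized inputs.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

intercept, slopes = coef[0], coef[1:]
# intercept ~ base-case outcome; |slopes| ranks parameter uncertainty.
```

Because the inputs are standardized, the coefficients are directly comparable, which is what makes the metamodel usable as a one-way sensitivity-analysis summary.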
Vestibular-dependent inter-stimulus interval effects on sound evoked potentials of central origin.
Todd, N P M; Govender, S; Colebatch, J G
2016-11-01
Todd et al. (2014a,b) have recently demonstrated the presence of vestibular-dependent contributions to auditory evoked potentials (AEPs) when passing through the vestibular threshold as determined by vestibular evoked myogenic potentials (VEMPs), including a particular deflection labeled as an N42/P52 prior to the long-latency AEPs N1 and P2. In this paper we report the results of an experiment to determine the effect of inter-stimulus interval (ISI) and regularity on potentials recorded above and below VEMP threshold. Five healthy, right-handed subjects were recruited, and evoked potentials were recorded to binaurally presented sound stimulation, above and below vestibular threshold, at seven stimulus rates with ISIs of 212, 300, 424, 600, 848, 1200 and 1696 ms. The inner five intervals, i.e. 300, 424, 600, 848 and 1200 ms, were presented twice, in both regular and irregular conditions. ANOVAs on the global field power (GFP) were conducted for each of four waves, N42, P52, N1 and P2, with factors of intensity, ISI and regularity. Both the N42 and P52 waves showed significant effects of intensity but no other main effects or interactions. In contrast, both N1 and P2 showed additional effects of ISI as well as intensity, and evidence of non-linear interactions between ISI and intensity. A source analysis was carried out, consistent with prior work, suggesting that above vestibular threshold, ocular, cerebellar and cingulate sources are recruited in addition to bilateral superior temporal cortex. Further statistical analysis of the source currents indicated that the origin of the interactions with intensity may be the ISI sensitivity of the vestibular-dependent sources. This in turn may reflect a specific vestibular preference for stimulus rates associated with locomotion, i.e. rates close to 2 Hz, or ISIs close to 500 ms, where saccular afferents show increased gain and the corresponding reflexes are most sensitive.
Jing, Xufeng; Shao, Jianda; Zhang, Junchao; Jin, Yunxia; He, Hongbo; Fan, Zhengxiu
2009-12-21
To predict the femtosecond-pulse laser-induced damage threshold more accurately, a theoretical model taking into account photoionization, avalanche ionization and electron decay is proposed by comparing several combined ionization models with published experimental measurements. In addition, the transmittance and near-field distribution of the 'moth eye' broadband antireflective microstructure, patterned directly into the substrate material, are computed as functions of the surface-structure period and groove depth using a rigorous Fourier modal method. The near-field distribution is found to depend strongly on the period of the surface structure for TE polarization, whereas for the TM wave it is insensitive to the period. Moreover, the dependence of the damage threshold of the surface microstructure on pulse duration, accounting for the local maximum electric-field enhancement, was calculated with the proposed ionization model. At the longer incident wavelength of 1064 nm the damage threshold shows a weak linear dependence on pulse duration, but at the shorter incident wavelength of 532 nm there is a surprising oscillation peak in the breakdown threshold as a function of pulse duration.
Stathopoulos, Angelike; Levine, Michael
2002-07-01
Differential activation of the Toll receptor leads to the formation of a broad Dorsal nuclear gradient that specifies at least three patterning thresholds of gene activity along the dorsoventral axis of precellular embryos. We investigate the activities of the Pelle kinase and Twist basic helix-loop-helix (bHLH) transcription factor in transducing Toll signaling. Pelle functions downstream of Toll to release Dorsal from the Cactus inhibitor. Twist is an immediate-early gene that is activated upon entry of Dorsal into nuclei. Transgenes misexpressing Pelle and Twist were introduced into different mutant backgrounds and the patterning activities were visualized using various target genes that respond to different thresholds of Toll-Dorsal signaling. These studies suggest that an anteroposterior gradient of Pelle kinase activity is sufficient to generate all known Toll-Dorsal patterning thresholds and that Twist can function as a gradient morphogen to establish at least two distinct dorsoventral patterning thresholds. We discuss how the Dorsal gradient system can be modified during metazoan evolution and conclude that Dorsal-Twist interactions are distinct from the interplay between Bicoid and Hunchback, which pattern the anteroposterior axis.
Symmetry, stability, and computation of degenerate lasing modes
NASA Astrophysics Data System (ADS)
Liu, David; Zhen, Bo; Ge, Li; Hernandez, Felipe; Pick, Adi; Burkhardt, Stephan; Liertzer, Matthias; Rotter, Stefan; Johnson, Steven G.
2017-02-01
We present a general method to obtain the stable lasing solutions for the steady-state ab initio lasing theory (SALT) for the case of a degenerate symmetric laser in two dimensions (2D). We find that under most regimes (with one pathological exception), the stable solutions are clockwise and counterclockwise circulating modes, generalizing previously known results of ring lasers to all 2D rotational symmetry groups. Our method uses a combination of semianalytical solutions close to lasing threshold and numerical solvers to track the lasing modes far above threshold. Near threshold, we find closed-form expressions for both circulating modes and other types of lasing solutions as well as for their linearized Maxwell-Bloch eigenvalues, providing a simple way to determine their stability without having to do a full nonlinear numerical calculation. Above threshold, we show that a key feature of the circulating mode is its "chiral" intensity pattern, which arises from spontaneous symmetry breaking of mirror symmetry, and whose symmetry group requires that the degeneracy persists even when nonlinear effects become important. Finally, we introduce a numerical technique to solve the degenerate SALT equations far above threshold even when spatial discretization artificially breaks the degeneracy.
Generation Process of Large-Amplitude Upper-Band Chorus Emissions Observed by Van Allen Probes
Kubota, Yuko; Omura, Yoshiharu; Kletzing, Craig; ...
2018-04-19
In this paper, we analyze large-amplitude upper-band chorus emissions measured near the magnetic equator by the Electric and Magnetic Field Instrument Suite and Integrated Science instrument package on board the Van Allen Probes. In setting up the parameters of source electrons exciting the emissions based on theoretical analyses and observational results measured by the Helium Oxygen Proton Electron instrument, we calculate threshold and optimum amplitudes with the nonlinear wave growth theory. We find that the optimum amplitude is larger than the threshold amplitude obtained in the frequency range of the chorus emissions and that the wave amplitudes grow between the threshold and optimum amplitudes. Finally, in the frame of the wave growth process, the nonlinear growth rates are much greater than the linear growth rates.
Luckey, T D
2008-01-01
Media reports of deaths and devastation produced by atomic bombs convinced people around the world that all ionizing radiation is harmful, concentrating attention on fear of minuscule doses of radiation. Soon the linear no-threshold (LNT) paradigm was converted into laws, and scientifically valid information about the health benefits of low-dose irradiation was ignored. Presented here are studies showing increased health in Japanese survivors of atomic bombs. Parameters include decreased mutation, leukemia and solid-tissue cancer mortality rates, and increased average lifespan. Each study exhibits a threshold that repudiates the LNT dogma. The average threshold for acute exposures to atomic bombs is about 100 cSv. The conclusions from these studies of atomic bomb survivors are: one burst of low-dose irradiation elicits a lifetime of improved health; improved health from low-dose irradiation negates the LNT paradigm; and effective triage should include radiation hormesis for survivor treatment.
Partial Photoneutron Cross Sections for 207,208Pb
NASA Astrophysics Data System (ADS)
Kondo, T.; Utsunomiya, H.; Goriely, S.; Iwamoto, C.; Akimune, H.; Yamagata, T.; Toyokawa, H.; Harada, H.; Kitatani, F.; Lui, Y.-W.; Hilaire, S.; Koning, A. J.
2014-05-01
Using linearly-polarized laser-Compton scattering γ-rays, partial E1 and M1 photoneutron cross sections along with total cross sections were determined for 207,208Pb at four energies near neutron threshold by measuring anisotropies in photoneutron emission. Separately, total photoneutron cross sections were measured for 207,208Pb with a high-efficiency 4π neutron detector. The partial cross section measurement provides direct evidence for the presence of the pygmy dipole resonance (PDR) in 207,208Pb in the vicinity of the neutron threshold. The strength of the PDR amounts to 0.32%-0.42% of the Thomas-Reiche-Kuhn sum rule. Several μN² units of B(M1)↑ strength were observed in 207,208Pb just above the neutron threshold, corresponding to M1 cross sections less than 10% of the total photoneutron cross sections.
A conditional probability analysis (CPA) approach has been developed for identifying biological thresholds of impact for use in the development of geographic-specific water quality criteria for protection of aquatic life. This approach expresses the threshold as the likelihood ...
Tsai, Shirley C; Tsai, Chen S
2013-08-01
A linear theory of the temporal instability of megahertz Faraday waves for monodisperse microdroplet ejection, based on mass conservation and the linearized Navier-Stokes equations, is presented using the recently observed micrometer-sized droplet ejection from a millimeter-sized spherical water ball as a specific example. The theory is verified in experiments utilizing silicon-based multiple-Fourier-horn ultrasonic nozzles at megahertz frequency to facilitate temporal instability of the Faraday waves. Specifically, the linear theory not only correctly predicted the Faraday wave frequency, the onset threshold of Faraday instability, the effect of viscosity, and the dynamics of droplet ejection, but also established the first theoretical formula for the size of the ejected droplets: the droplet diameter equals four-tenths of the Faraday wavelength involved. The high rate of increase in Faraday wave amplitude at megahertz drive frequency beyond the onset threshold, together with the enhanced excitation displacement on the nozzle end face facilitated by the megahertz multiple Fourier horns in resonance, led to high-rate ejection of micrometer-sized monodisperse droplets (>10^7 droplets/s) at low electrical drive power (<1 W) with short initiation time (<0.05 s). This is in stark contrast to the Rayleigh-Plateau instability of a liquid jet, which ejects one droplet at a time. The measured droplet diameters, ranging from 2.2 to 4.6 μm at 2 to 1 MHz drive frequency, fall within the optimum particle size range for pulmonary drug delivery.
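The abstract's droplet-size rule (diameter = 0.4 × Faraday wavelength) can be turned into numbers. The sketch below is an illustration, not the paper's derivation: it assumes Kelvin's capillary-wave dispersion relation for the Faraday wave (which oscillates at half the drive frequency) and room-temperature water properties.

```python
import math

def droplet_diameter(f_drive_hz, sigma=0.072, rho=1000.0):
    """Estimate ejected droplet diameter as 0.4 * Faraday wavelength.

    Assumes the Faraday wave oscillates at half the drive frequency and
    that its wavelength follows Kelvin's capillary-wave dispersion
    relation; sigma (N/m) and rho (kg/m^3) are assumed water properties.
    """
    f_faraday = f_drive_hz / 2.0
    wavelength = (2.0 * math.pi * sigma / (rho * f_faraday**2)) ** (1.0 / 3.0)
    return 0.4 * wavelength

d1 = droplet_diameter(1e6)   # ~4.9 um at 1 MHz drive
d2 = droplet_diameter(2e6)   # ~3.1 um at 2 MHz drive
```

Under these assumptions the estimates land in the same few-micrometer range as the measured 2.2-4.6 μm diameters, consistent with the pulmonary-delivery size window the abstract mentions.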
Susceptible-infected-recovered epidemics in random networks with population awareness
NASA Astrophysics Data System (ADS)
Wu, Qingchu; Chen, Shufang
2017-10-01
The influence of epidemic information-based awareness on the spread of infectious diseases on networks cannot be ignored. Within the effective degree modeling framework, we discuss the susceptible-infected-recovered model in complex networks with general awareness and general degree distribution. By performing a linear stability analysis, the conditions for an epidemic outbreak can be deduced, extending the results of previous research. Results show that local awareness can significantly suppress epidemic spreading on complex networks by raising the epidemic threshold, and such effects are closely related to the formulation of the awareness functions. In addition, our results suggest that recovered information-based awareness has no effect on the critical condition of epidemic outbreak.
An automatic frequency control loop using overlapping DFTs (Discrete Fourier Transforms)
NASA Technical Reports Server (NTRS)
Aguirre, S.
1988-01-01
An automatic frequency control (AFC) loop is introduced and analyzed in detail. The new scheme is a generalization of the well known Cross Product AFC loop that uses running overlapping discrete Fourier transforms (DFTs) to create a discriminator curve. Linear analysis is included and supported with computer simulations. The algorithm is tested in a low carrier to noise ratio (CNR) dynamic environment, and the probability of loss of lock is estimated via computer simulations. The algorithm discussed is a suboptimum tracking scheme with a larger frequency error variance compared to an optimum strategy, but offers simplicity of implementation and a very low operating threshold CNR. This technique can be applied during the carrier acquisition and re-acquisition process in the Advanced Receiver.
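The discriminator idea in this abstract can be sketched in a few lines. The example below is illustrative only (toy parameters, a noiseless tone, and a single DFT bin; not the Advanced Receiver implementation): for a pure carrier, the second of two overlapping DFTs equals the first rotated by the phase accumulated over the hop, so a cross product of the two outputs recovers the frequency error.

```python
import numpy as np

fs = 1000.0        # sample rate (Hz); all numbers here are illustrative
f_err = 12.0       # true carrier frequency offset to be estimated
N = 64             # DFT length
hop = N // 2       # 50% overlap between successive DFTs

t = np.arange(2 * N) / fs
x = np.exp(2j * np.pi * f_err * t)   # noiseless complex carrier

# Bin-0 DFTs of two overlapping windows (the residual offset is small
# compared with the bin spacing fs/N = 15.6 Hz, so bin 0 is the test bin).
k = 0
w = np.exp(-2j * np.pi * k * np.arange(N) / N)
X1 = np.dot(x[:N], w)
X2 = np.dot(x[hop:hop + N], w)

# Cross-product discriminator: for a pure tone, X2 = X1 * exp(j*phi) with
# phi = 2*pi*f_err*hop/fs, so the frequency error falls out of the angle.
f_hat = np.angle(np.conj(X1) * X2) * fs / (2.0 * np.pi * hop)
```

A real loop would filter this discriminator output and feed it back to a numerically controlled oscillator; the unambiguous pull-in range of this form is |f_err| < fs/(2·hop).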
Epidemic spread in bipartite network by considering risk awareness
NASA Astrophysics Data System (ADS)
Han, She; Sun, Mei; Ampimah, Benjamin Chris; Han, Dun
2018-02-01
Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. Exploring the interplay between human awareness and epidemic spreading is a topic that has been receiving increasing attention. Considering the fact that some well-known diseases spread only between different species, we propose a theoretical analysis of Susceptible-Infected-Susceptible (SIS) epidemic spread from the perspective of a bipartite network and risk aversion. Using mean-field theory, the epidemic threshold is calculated theoretically. Simulation results are consistent with the proposed analytic model. The results show that the final infection density is negatively and linearly related to individuals' risk awareness. Therefore, epidemic spread could be effectively suppressed by improving individuals' risk awareness.
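The qualitative claim running through these awareness abstracts, that awareness raises the epidemic threshold, can be illustrated numerically. The sketch below is not the paper's bipartite model: it uses the standard quenched mean-field SIS threshold on an ordinary (assumed Erdős-Rényi) contact network, with awareness modeled as a simple multiplicative reduction of the transmission rate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical contact network: symmetric Erdos-Renyi adjacency matrix.
n, p = 200, 0.05
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
A = A + A.T

gamma = 1.0          # recovery rate
alpha = 0.4          # strength of risk awareness (reduces transmission)

# Quenched mean-field SIS threshold: beta_c = gamma / lambda_max(A).
lam_max = np.max(np.linalg.eigvalsh(A))
beta_c_no_awareness = gamma / lam_max

# Awareness rescales the effective transmission rate beta -> beta*(1-alpha),
# so the threshold on the bare rate rises by a factor 1/(1-alpha).
beta_c_aware = beta_c_no_awareness / (1.0 - alpha)
```

Any awareness mechanism that multiplicatively damps transmission shifts the critical point in this way; the papers above differ in how the damping factor depends on local infection information.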
Parallel proton fire hose instability in the expanding solar wind: Hybrid simulations
NASA Astrophysics Data System (ADS)
Matteini, Lorenzo; Landi, Simone; Hellinger, Petr; Velli, Marco
2006-10-01
We report a study of the properties of the parallel proton fire hose instability, comparing results obtained from linear analysis, one-dimensional (1-D) standard hybrid simulations, and 1-D hybrid expanding box simulations. The three approaches converge toward the same instability threshold condition, which is in good agreement with in situ observations, suggesting that this instability is relevant in the solar wind context. We also investigate the effect of wave-particle interactions on shaping the proton distribution function and on the evolution of the spectrum of magnetic fluctuations during the expansion. We find that the resonant interaction can cause the proton distribution function to depart from the bi-Maxwellian form.
NASA Astrophysics Data System (ADS)
Wang, Qin; Xie, Hui; Chen, Yongshi; Liu, Chao
2017-04-01
The nucleation and growth of silver nanoparticles in a supersaturated system are investigated by molecular dynamics simulation at different temperatures and pressures. The number of atoms in the biggest cluster and the average cluster size in the system are tracked over time to reveal the relationship between nucleation and cluster growth. The nucleation rates in different situations are calculated with the threshold method. The effect of temperature and pressure on the nucleation rate is found to obey a linear function. Finally, the development of basal elements such as monomers, dimers and trimers is examined to reveal how temperature and pressure affect the nucleation and growth of the silver cluster.
Multi-mode sliding mode control for precision linear stage based on fixed or floating stator.
Fang, Jiwen; Long, Zhili; Wang, Michael Yu; Zhang, Lufan; Dai, Xufei
2016-02-01
This paper presents the control performance of a linear motion stage driven by a Voice Coil Motor (VCM). Unlike a conventional VCM, the stator of this VCM is adjustable: it can be configured as a floating stator or a fixed stator. A Multi-Mode Sliding Mode Control (MMSMC) scheme, comprising a conventional Sliding Mode Control (SMC) and an Integral Sliding Mode Control (ISMC), is designed to control the linear motion stage. The control is switched between SMC and ISMC based on an error threshold. To eliminate chattering, a smooth function is adopted in place of the signum function. The experimental results with the floating stator show that the positioning accuracy and tracking performance of the linear motion stage are improved with the MMSMC approach.
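The switching logic described in this abstract can be sketched on a toy plant. The example below assumes a double-integrator stage model x'' = u with made-up gains (not the paper's VCM dynamics): conventional SMC drives the large-error reaching phase, ISMC adds integral action near the target, and a smooth function replaces sign(s) to suppress chattering.

```python
# Minimal sketch of multi-mode sliding mode control on a hypothetical
# double-integrator stage model x'' = u (not the paper's VCM dynamics).
dt, T = 1e-3, 2.0
lam, k = 20.0, 50.0      # sliding-surface slope and reaching gain (assumed)
ki = 100.0               # integral gain for the ISMC mode (assumed)
switch_err = 0.05        # error threshold that triggers the mode switch

def smooth_sign(s, eps=0.01):
    # Smooth substitute for sign(s), suppressing chattering.
    return s / (abs(s) + eps)

x, v, ei = 0.0, 0.0, 0.0
ref = 1.0                # step reference
for _ in range(int(T / dt)):
    e, de = ref - x, -v
    if abs(e) > switch_err:          # conventional SMC far from the target
        s = de + lam * e
    else:                            # ISMC near the target for fine accuracy
        ei += e * dt
        s = de + lam * e + ki * ei
    u = k * smooth_sign(s)
    v += u * dt                      # explicit Euler integration
    x += v * dt
```

The integral term only accumulates inside the threshold band, which is one simple way to avoid windup during the reaching phase; the paper's exact switching and anti-windup details may differ.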
Modified optimal control pilot model for computer-aided design and analysis
NASA Technical Reports Server (NTRS)
Davidson, John B.; Schmidt, David K.
1992-01-01
This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input-compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm suitable for implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably with the measured experimental results than those of the other simplified models evaluated.
NASA Astrophysics Data System (ADS)
Barros, Ana P.; Bowden, Gavin J.
2008-08-01
Summary: Resiliency and effectiveness in water resources management of drought depend strongly on advance knowledge of drought onset, duration and severity. The motivation of this work is to extend the lead time of operational drought forecasts. The research strategy is to explore the predictability of drought severity from space-time varying indices of large-scale climate phenomena relevant to regional hydrometeorology (e.g. ENSO) by integrating linear and non-linear statistical data models, specifically self-organizing maps (SOM) and multivariate linear regression analysis. The methodology is demonstrated through the step-by-step development of a model to forecast monthly spatial patterns of the standardized precipitation index (SPI) within the Murray-Darling Basin (MDB) in Australia up to 12 months in advance. First, the rationale for the physical hypothesis and the exploratory data analysis, including principal components, wavelet and partial mutual information analysis to identify and select predictor variables, are presented. The focus is on spatial datasets of precipitation, sea surface temperature anomaly (SSTA) patterns over the Indian and Pacific Oceans, temporal and spatial gradients of outgoing longwave radiation (OLR) in the Pacific Ocean, and the far western Pacific wind-stress anomaly. Second, the process of model construction, calibration and evaluation is described. The experimental forecasts show that there is ample opportunity to increase the lead time of drought forecasts for decision support using parsimonious data models that capture the governing climate processes at regional scale. OLR gradients proved to be dispensable predictors, whereas SPI-based predictors appear to control predictability when the SSTA in the region [87.5°N-87.5°S; 27.5°E-67.5°W] and the eastward wind-stress anomalies in the region [4°N-4°S; 130°E-160°E] are small (within ±1° and ±0.01 dyne/cm², respectively), that is, when ENSO activity is weak.
The areal averaged 12-month lead-time forecasts of SPI in the MDB explain up to 60% of the variance in the observations ( r > 0.7). Based on a threshold SPI of -0.5 for severe drought at the regional scale and for a nominal 12-month lead time, the forecast of the timing of onset is within 0-2 months of the actual threshold being met by the observations, thus effectively a 10-month lead time forecast at a minimum. Spatial analysis suggests that forecast errors can be attributed in part to a mismatch between the spatial heterogeneity of rainfall and raingauge density in the observational network. Forecast uncertainty on the other hand appears associated with the number of redundant predictors used in the forecast model.
Rossi, Sergio; Anfodillo, Tommaso; Čufar, Katarina; Cuny, Henri E.; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gričar, Jožica; Gruber, Andreas; King, Gregory M.; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B. K.
2013-01-01
Background and Aims: Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Methods: Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. Key Results: The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersion around the average was observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, involving an exponential pattern. Conclusions: The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production.
The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions. PMID:24201138
Ozasa, Kotaro; Shimizu, Yukiko; Suyama, Akihiko; Kasagi, Fumiyoshi; Soda, Midori; Grant, Eric J; Sakata, Ritsu; Sugiyama, Hiromi; Kodama, Kazunori
2012-03-01
This is the 14th report in a series of periodic general reports on mortality in the Life Span Study (LSS) cohort of atomic bomb survivors followed by the Radiation Effects Research Foundation to investigate the late health effects of the radiation from the atomic bombs. During the period 1950-2003, 58% of the 86,611 LSS cohort members with DS02 dose estimates have died. The 6 years of additional follow-up since the previous report provide substantially more information at longer periods after radiation exposure (17% more cancer deaths), especially among those under age 10 at exposure (58% more deaths). Poisson regression methods were used to investigate the magnitude of the radiation-associated risks, the shape of the dose response, and effect modification by gender, age at exposure, and attained age. The risk of all causes of death was positively associated with radiation dose. Importantly, for solid cancers the additive radiation risk (i.e., excess cancer cases per 10^4 person-years per Gy) continues to increase throughout life with a linear dose-response relationship. The sex-averaged excess relative risk per Gy was 0.42 [95% confidence interval (CI): 0.32, 0.53] for all solid cancer at age 70 years after exposure at age 30 based on a linear model. The risk increased by about 29% per decade decrease in age at exposure (95% CI: 17%, 41%). The estimated lowest dose range with a significant ERR for all solid cancer was 0 to 0.20 Gy, and a formal dose-threshold analysis indicated no threshold; i.e., zero dose was the best estimate of the threshold. The risk of cancer mortality increased significantly for most major sites, including stomach, lung, liver, colon, breast, gallbladder, esophagus, bladder and ovary, whereas rectum, pancreas, uterus, prostate and kidney parenchyma did not have significantly increased risks.
An increased risk of non-neoplastic diseases including the circulatory, respiratory and digestive systems was observed, but whether these are causal relationships requires further investigation. There was no evidence of a radiation effect for infectious or external causes of death.
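The linear ERR model and the age-at-exposure modifier reported in this abstract combine into a simple risk calculation. The sketch below is a schematic reading of the reported point estimates only (it ignores attained-age dependence, confidence intervals, and the full LSS model specification):

```python
def solid_cancer_err(dose_gy, age_at_exposure=30, err_per_gy=0.42):
    """Schematic linear no-threshold ERR, using the abstract's numbers.

    err_per_gy is the reported sex-averaged excess relative risk per Gy
    (all solid cancer, attained age 70, exposure at age 30); the risk is
    reported to rise ~29% per decade decrease in age at exposure.
    """
    age_factor = 1.29 ** ((30 - age_at_exposure) / 10.0)
    return err_per_gy * age_factor * dose_gy

rr_1gy = 1.0 + solid_cancer_err(1.0)                      # ~1.42 at 1 Gy
rr_child = 1.0 + solid_cancer_err(1.0, age_at_exposure=10)  # higher risk
```

Linearity here means the ERR at, say, 0.1 Gy is simply one-tenth of the 1 Gy value, which is exactly the point contested by the threshold-based analyses elsewhere in this collection.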
N-nitrosamines as "special case" leachables in a metered dose inhaler drug product.
Norwood, Daniel L; Mullis, James O; Feinberg, Thomas N; Davis, Letha K
2009-01-01
N-nitrosamines are chemical entities, some of which are considered to be possible human carcinogens, which can be found at trace levels in some types of foods, tobacco smoke, certain cosmetics, and certain types of rubber. N-nitrosamines are of regulatory concern as leachables in inhalation drug products, particularly metered dose inhalers, which incorporate rubber seals into their container closure systems. The United States Food and Drug Administration considers N-nitrosamines (along with polycyclic aromatic hydrocarbons and 2-mercaptobenzothiazole) to be "special case" leachables in inhalation drug products, meaning that there are no recognized safety or analytical thresholds and these compounds must therefore be identified and quantitated at the lowest practical level. This report presents the development of a quantitative analytical method for target volatile N-nitrosamines in a metered dose inhaler drug product, Atrovent HFA. The method incorporates a target analyte recovery procedure from the drug product matrix with analysis by gas chromatography/thermal energy analysis detection. The capability of the method was investigated with respect to specificity, linearity/range, accuracy (linearity of recovery), precision (repeatability, intermediate precision), limits of quantitation, standard/sample stability, and system suitability. Sample analyses showed that Atrovent HFA contains no target N-nitrosamines at the trace level of 1 ng/canister.
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates of the acoustic pressures and intensities present in vivo during those experimental exposures by computing them with nonlinear rather than linear theory. In the current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f/1 transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude lower, than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
The Profile-Query Relationship.
ERIC Educational Resources Information Center
Shepherd, Michael A.; Phillips, W. J.
1986-01-01
Defines relationship between user profile and user query in terms of relationship between clusters of documents retrieved by each, and explores the expression of cluster similarity and cluster overlap as linear functions of similarity existing between original pairs of profiles and queries, given the desired retrieval threshold. (23 references)…
Does body mass index misclassify physically active young men.
Grier, Tyson; Canham-Chervak, Michelle; Sharp, Marilyn; Jones, Bruce H
2015-01-01
The purpose of this analysis was to determine the accuracy of age and gender adjusted BMI as a measure of body fat (BF) in U.S. Army Soldiers. BMI was calculated from measured height and weight (kg/m²) and body composition was determined by dual energy X-ray absorptiometry (DEXA). Linear regression was used to determine a BF prediction equation and examine the correlation between %BF and BMI. The sensitivity and specificity of BMI compared to %BF as measured by DEXA was calculated. Soldiers (n = 110) were on average 23 years old, with a BMI of 26.4, and approximately 18% BF. The correlation between BMI and %BF (R = 0.86) was strong (p < 0.01). A sensitivity of 77% and specificity of 100% were calculated when using Army age adjusted BMI thresholds. The overall accuracy in determining if a Soldier met Army BMI standards and was within the maximum allowable BF, or exceeded BMI standards and was over the maximum allowable BF, was 83%. Using adjusted BMI thresholds in populations where physical fitness and training are requirements of the job provides better accuracy in identifying those who are overweight or obese due to high BF.
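The screening-accuracy arithmetic described above (sensitivity, specificity, and overall accuracy of a BMI screen against DEXA-measured body fat) can be sketched as follows. The cutoffs and records here are invented for illustration and are not the Army's actual age- and gender-adjusted thresholds:

```python
# Hedged sketch: sensitivity/specificity of a BMI screen against a
# DEXA body-fat "gold standard". Cutoffs are illustrative assumptions.
def screen_accuracy(records, bmi_cut=27.5, bf_cut=0.20):
    tp = fp = tn = fn = 0
    for bmi, bf in records:
        flagged = bmi > bmi_cut   # screen positive by BMI
        over = bf > bf_cut        # truly over-fat by DEXA
        if flagged and over:
            tp += 1
        elif flagged and not over:
            fp += 1
        elif not flagged and over:
            fn += 1
        else:
            tn += 1
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    acc = (tp + tn) / len(records)
    return sens, spec, acc
```

With such a function, the 77%/100%/83% figures reported above correspond to the three returned values computed over the study's 110 Soldiers.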
Mercury demethylation in waterbird livers: Dose-response thresholds and differences among species
Eagles-Smith, Collin A.; Ackerman, Joshua T.; Julie, Y.E.E.; Adelsbach, T.L.
2009-01-01
We assessed methylmercury (MeHg) demethylation in the livers of adults and chicks of four waterbird species that commonly breed in San Francisco Bay: American avocets, black-necked stilts, Caspian terns, and Forster's terns. In adults (all species combined), we found strong evidence for a threshold model where MeHg demethylation occurred above a hepatic total mercury concentration threshold of 8.51 ± 0.93 μg/g dry weight, and there was a strong decline in %MeHg values as total mercury (THg) concentrations increased above 8.51 μg/g dry weight. Conversely, there was no evidence for a demethylation threshold in chicks, and we found that %MeHg values declined linearly with increasing THg concentrations. For adults, we also found taxonomic differences in the demethylation responses, with avocets and stilts showing a higher demethylation rate than that of terns when concentrations exceeded the threshold, whereas terns had a lower demethylation threshold (7.48 ± 1.48 μg/g dry wt) than that of avocets and stilts (9.91 ± 1.29 μg/g dry wt). Finally, we assessed the role of selenium (Se) in the demethylation process. Selenium concentrations were positively correlated with inorganic Hg in livers of birds above the demethylation threshold but not below. This suggests that Se may act as a binding site for demethylated Hg and may reduce the potential for secondary toxicity. Our findings indicate that waterbirds demethylate mercury in their livers if exposure exceeds a threshold value and suggest that taxonomic differences in demethylation ability may be an important factor in evaluating species-specific risk to MeHg exposure. Further, we provide strong evidence for a threshold of approximately 8.5 μg/g dry weight of THg in the liver where demethylation is initiated. © 2009 SETAC.
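A threshold ("hockey-stick") model of the kind fitted here, flat below a breakpoint and changing linearly above it, can be sketched with an ordinary least-squares grid search over candidate breakpoints. The data and candidate grid below are illustrative, not the study's measurements:

```python
import numpy as np

# Sketch of a hockey-stick threshold fit: y is constant below a
# breakpoint t and varies linearly in (x - t) above it. The breakpoint
# is chosen by grid search over candidates, minimizing squared error.
def fit_threshold(x, y, candidates):
    best = None
    for t in candidates:
        z = np.maximum(x - t, 0.0)               # 0 below t, linear above
        X = np.column_stack([np.ones_like(x), z])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(((X @ beta - y) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, t, beta)
    return best[1], best[2]                      # breakpoint, [intercept, slope]
```

In the demethylation setting above, the fitted slope would be negative (declining %MeHg beyond the hepatic THg threshold near 8.5 μg/g).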
Corneal Mechanical Thresholds Negatively Associate With Dry Eye and Ocular Pain Symptoms.
Spierer, Oriel; Felix, Elizabeth R; McClellan, Allison L; Parel, Jean Marie; Gonzalez, Alex; Feuer, William J; Sarantopoulos, Constantine D; Levitt, Roy C; Ehrmann, Klaus; Galor, Anat
2016-02-01
To examine associations between corneal mechanical thresholds and metrics of dry eye. This was a cross-sectional study of individuals seen in the Miami Veterans Affairs eye clinic. The evaluation consisted of questionnaires regarding dry eye symptoms and ocular pain, corneal mechanical detection and pain thresholds, and a comprehensive ocular surface examination. The main outcome measures were correlations between corneal thresholds and signs and symptoms of dry eye and ocular pain. A total of 129 subjects participated in the study (mean age 64 ± 10 years). Mechanical detection and pain thresholds on the cornea correlated with age (Spearman's ρ = 0.26, 0.23, respectively; both P < 0.05), implying decreased corneal sensitivity with age. Dry eye symptom severity scores and Neuropathic Pain Symptom Inventory (modified for the eye) scores negatively correlated with corneal detection and pain thresholds (range, r = -0.13 to -0.27, P < 0.05 for values between -0.18 and -0.27), suggesting increased corneal sensitivity in those with more severe ocular complaints. Ocular signs, on the other hand, correlated poorly and nonsignificantly with mechanical detection and pain thresholds on the cornea. A multivariable linear regression model found that both posttraumatic stress disorder (PTSD) score (β = 0.21, SE = 0.03) and corneal pain threshold (β = -0.03, SE = 0.01) were significantly associated with self-reported evoked eye pain (pain to wind, light, temperature) and explained approximately 32% of measurement variability (R = 0.57). Mechanical detection and pain thresholds measured on the cornea are correlated with dry eye symptoms and ocular pain. This suggests hypersensitivity within the corneal somatosensory pathways in patients with greater dry eye and ocular pain complaints.
Optimizing Retransmission Threshold in Wireless Sensor Networks
Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang
2016-01-01
The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That method did not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summed probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is not polynomially bounded, a linear programming-based (1 + p_min)-approximation algorithm is proposed to reduce the time complexity. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
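A minimal dynamic-programming sketch of the kind of optimization described, choosing a per-hop retransmission cap to maximize the probability of in-time delivery, is given below. This is an illustrative centralized recursion, not the paper's distributed algorithm; the per-attempt success probabilities and the one-slot-per-attempt model are assumptions:

```python
from functools import lru_cache

# Illustrative DP: pick each hop's retransmission threshold r so the
# packet traverses all hops within a slot deadline with maximum
# probability. p[i] is hop i's per-attempt success probability; each
# attempt consumes one time slot.
def best_delivery_prob(p, deadline, cap=10):
    n = len(p)

    @lru_cache(maxsize=None)
    def f(i, d):
        if i == n:          # past the last hop: delivered
            return 1.0
        if d <= 0:          # deadline exhausted
            return 0.0
        best = 0.0
        for r in range(1, min(cap, d) + 1):        # candidate threshold r
            prob = 0.0
            for k in range(1, r + 1):              # success on attempt k
                prob += p[i] * (1 - p[i]) ** (k - 1) * f(i + 1, d - k)
            best = max(best, prob)
        return best

    return f(0, deadline)
```

The state space (hop, remaining slots) mirrors the O(nΔ·max u_i) complexity quoted above: n·Δ states, each scanned over at most u_i threshold choices.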
Quantifying the Arousal Threshold Using Polysomnography in Obstructive Sleep Apnea.
Sands, Scott A; Terrill, Philip I; Edwards, Bradley A; Taranto Montemurro, Luigi; Azarbarzin, Ali; Marques, Melania; de Melo, Camila M; Loring, Stephen H; Butler, James P; White, David P; Wellman, Andrew
2018-01-01
Precision medicine for obstructive sleep apnea (OSA) requires noninvasive estimates of each patient's pathophysiological "traits." Here, we provide the first automated technique to quantify the respiratory arousal threshold-defined as the level of ventilatory drive triggering arousal from sleep-using diagnostic polysomnographic signals in patients with OSA. Ventilatory drive preceding clinically scored arousals was estimated from polysomnographic studies by fitting a respiratory control model (Terrill et al.) to the pattern of ventilation during spontaneous respiratory events. Conceptually, the magnitude of the airflow signal immediately after arousal onset reveals information on the underlying ventilatory drive that triggered the arousal. Polysomnographic arousal threshold measures were compared with gold standard values taken from esophageal pressure and intraoesophageal diaphragm electromyography recorded simultaneously (N = 29). Comparisons were also made to arousal threshold measures using continuous positive airway pressure (CPAP) dial-downs (N = 28). The validity of using (linearized) nasal pressure rather than pneumotachograph ventilation was also assessed (N = 11). Polysomnographic arousal threshold values were correlated with those measured using esophageal pressure and diaphragm EMG (R = 0.79, p < .0001; R = 0.73, p = .0001), as well as CPAP manipulation (R = 0.73, p < .0001). Arousal threshold estimates were similar using nasal pressure and pneumotachograph ventilation (R = 0.96, p < .0001). The arousal threshold in patients with OSA can be estimated using polysomnographic signals and may enable more personalized therapeutic interventions for patients with a low arousal threshold. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Załuska, Katarzyna; Kondrat-Wróbel, Maria W; Łuszczki, Jarogniew J
2018-05-01
The coexistence of seizures and arterial hypertension requires an adequate and efficacious treatment involving both protection from seizures and reduction of high arterial blood pressure. Accumulating evidence indicates that some diuretic drugs (with a well-established position in the treatment of arterial hypertension) also possess anticonvulsant properties in various experimental models of epilepsy. The aim of this study was to assess the anticonvulsant potency of 6 commonly used diuretic drugs (i.e., amiloride, ethacrynic acid, furosemide, hydrochlorothiazide, indapamide, and spironolactone) in the maximal electroshock-induced seizure threshold (MEST) test in mice. Doses of the studied diuretics and their corresponding threshold increases were linearly related, allowing for the determination of doses which increase the threshold for electroconvulsions in drug-treated animals by 20% (TID20 values) over the threshold in control animals. Amiloride, hydrochlorothiazide and indapamide administered systemically (intraperitoneally - i.p.) increased the threshold for maximal electroconvulsions in mice, and the experimentally-derived TID20 values in the maximal electroshock seizure threshold test were 30.2 mg/kg for amiloride, 68.2 mg/kg for hydrochlorothiazide and 3.9 mg/kg for indapamide. In contrast, ethacrynic acid (up to 100 mg/kg), furosemide (up to 100 mg/kg) and spironolactone (up to 50 mg/kg) administered i.p. had no significant impact on the threshold for electroconvulsions in mice. The studied diuretics can be arranged with respect to their anticonvulsant potency in the MEST test as follows: indapamide > amiloride > hydrochlorothiazide. No anticonvulsant effects were observed for ethacrynic acid, furosemide or spironolactone in the MEST test in mice.
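The TID20 derivation described, reading off the dose that raises the electroconvulsion threshold by 20% over control from a fitted linear dose-response, can be sketched as follows. The dose-response data below are invented for illustration, not the study's measurements:

```python
# Sketch: derive a TID20 (dose producing a 20% threshold increase over
# control) from a simple least-squares linear dose-response fit, as in
# the MEST test described above. Data points are illustrative.
def tid20(doses, pct_increase):
    n = len(doses)
    mx = sum(doses) / n
    my = sum(pct_increase) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(doses, pct_increase)) / \
            sum((x - mx) ** 2 for x in doses)
    intercept = my - slope * mx
    return (20.0 - intercept) / slope   # dose at a +20% threshold increase
```

Inverting the fitted line in this way is how dose-response summaries such as the 30.2 mg/kg (amiloride) value cited above are typically obtained.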
Liu, Yan; Yang, Dong; Xiong, Fen; Yu, Lan; Ji, Fei; Wang, Qiu-Ju
2015-09-01
Hearing loss affects more than 27 million people in mainland China. It would be helpful to develop a portable and self-testing audiometer for the timely detection of hearing loss so that the optimal clinical therapeutic schedule can be determined. The objective of this study was to develop a software-based hearing self-testing system. The software-based self-testing system consisted of a notebook computer, an external sound card, and a pair of 10-Ω insert earphones. The system allowed individuals to test their own hearing thresholds interactively using software. The reliability and validity of the system at octave frequencies of 0.25 to 8.0 kHz were analyzed in three series of experiments. Thirty-seven normal-hearing participants (74 ears) were enrolled in experiment 1. Forty individuals (80 ears) with sensorineural hearing loss (SNHL) participated in experiment 2. Thirteen normal-hearing participants (26 ears) and 37 participants (74 ears) with SNHL were enrolled in experiment 3. Each participant was enrolled in only one of the three experiments. In all experiments, pure-tone audiometry in a sound insulation room (standard test) was regarded as the gold standard. SPSS for Windows, version 17.0, was used for statistical analysis. The paired t-test was used to compare the hearing thresholds between the standard test and software-based self-testing (self-test) in experiments 1 and 2. In experiment 3 (main study), one-way analysis of variance and post hoc comparisons were used to compare the hearing thresholds among the standard test and two rounds of the self-test. Linear correlation analysis was carried out for the self-tests performed twice. The concordance was analyzed between the standard test and the self-test using the kappa method. p < 0.05 was considered statistically significant.
Experiments 1 and 2: The hearing thresholds determined by the two methods were not significantly different at frequencies of 250, 500, or 8000 Hz (p > 0.05) but were significantly different at frequencies of 1000, 2000, and 4000 Hz (p < 0.05), except for 1000 Hz in the right ear in experiment 2. Experiment 3: The hearing thresholds determined by the standard test and self-tests repeated twice were not significantly different at any frequency (p > 0.05). The overall sensitivity of the self-test method was 97.6%, and the specificity was 98.3%. The sensitivity was 97.6% and the specificity was 97% for the patients with SNHL. The self-test had significant concordance with the standard test (kappa value = 0.848, p < 0.001). This portable hearing self-testing system based on a notebook personal computer is a reliable and sensitive method for hearing threshold assessment and monitoring. American Academy of Audiology.
Sutou, Shizuyo
2017-01-01
The Japanese Environmental Mutagen Society (JEMS) was established in 1972 by 147 members, 11 of whom are still on the active list as of May 1, 2016. As one of them, I introduce some historic topics here. These include 1) establishment of JEMS, 2) the issue of 2-(2-furyl)-3-(3-nitro-2-furyl)acrylamide (AF-2), 3) the Mammalian Mutagenicity Study Group (MMS) and its achievements, and 4) the Collaborative Study Group of the Micronucleus Test (CSGMT) and its achievements. In addition to these historic matters, some of which are still ongoing, a new collaborative study is proposed on adaptive response or hormesis by mutagens. There is a close relationship between mutagens and carcinogens, the dose-response relationship of which has been thought to follow the linear no-threshold model (LNT). LNT was fabricated on the basis of Drosophila sperm experiments using high-dose radiation delivered in a short period. The fallacious 60-year-old LNT is applied to cancer induction by radiation without solid data and then to cancer induction by carcinogens, also without solid data. Therefore, even the smallest amount of a carcinogen is now postulated to be carcinogenic, without a threshold. Radiation hormesis is observed in a large variety of living organisms; radiation is beneficial at low doses, but hazardous at high doses. There is a threshold at the boundary between benefit and hazard. Hormesis denies LNT. Several papers report the existence of chemical hormesis. If mutagens and carcinogens show hormesis, the linear dose-response relationship in mutagenesis and carcinogenesis is denied and thresholds can be introduced.
[Application of artificial neural networks on the prediction of surface ozone concentrations].
Shen, Lu-Lu; Wang, Yu-Xuan; Duan, Lei
2011-08-01
Ozone is an important secondary air pollutant in the lower atmosphere. In order to predict the hourly maximum ozone one day in advance based on the meteorological variables for the Wanqingsha site in Guangzhou, Guangdong province, a neural network model (Multi-Layer Perceptron) and a multiple linear regression model were used and compared. Model inputs are meteorological parameters (wind speed, wind direction, air temperature, relative humidity, barometric pressure and solar radiation) of the next day and hourly maximum ozone concentration of the previous day. The OBS (optimal brain surgeon) method was adopted to prune the neural network, to reduce its complexity and to improve its generalization ability. We find that the pruned neural network has the capacity to predict the peak ozone, with an agreement index of 92.3%, a root mean square error of 0.0428 mg/m3, an R-square of 0.737 and a success index of threshold exceedance of 77.0% (at a threshold O3 mixing ratio of 0.20 mg/m3). When the neural classifier was added to the neural network model, the success index of threshold exceedance increased to 83.6%. Through comparison of the performance indices between the multiple linear regression model and the neural network model, we conclude that the neural network is the better choice for predicting peak ozone from meteorological forecasts, and it may be applied to practical prediction of ozone concentration.
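The "success index of threshold exceedance" reported above can be sketched as a hit rate on observed exceedance days; the exact definition used by the authors is assumed here, and the concentrations are invented:

```python
# Sketch of a threshold-exceedance success index: the fraction of
# observed exceedance days (O3 above the threshold) on which the model
# also predicted an exceedance. Definition assumed for illustration.
def exceedance_success_index(obs, pred, thresh=0.20):
    exceed_days = [(o, p) for o, p in zip(obs, pred) if o > thresh]
    if not exceed_days:
        return float("nan")        # no exceedances observed
    hits = sum(1 for o, p in exceed_days if p > thresh)
    return hits / len(exceed_days)
```

Under this reading, the 77.0% figure means the pruned network flagged roughly three out of four observed exceedance days.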
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calabrese, Edward J., E-mail: edwardc@schoolph.uma
This paper assesses the discovery of the dose-rate effect in radiation genetics and how it challenged fundamental tenets of the linear non-threshold (LNT) dose response model, including the assumptions that all mutational damage is cumulative and irreversible and that the dose-response is linear at low doses. Newly uncovered historical information also describes how a key 1964 report by the International Commission on Radiological Protection (ICRP) addressed the effects of dose rate in the assessment of genetic risk. This unique story involves assessments by two leading radiation geneticists, Hermann J. Muller and William L. Russell, who independently argued that the report's Genetic Summary Section on dose rate was incorrect while simultaneously offering vastly different views as to what the report's summary should have contained. This paper reveals occurrences of scientific disagreements, how conflicts were resolved, which view(s) prevailed and why. During this process the Nobel Laureate, Muller, provided incorrect information to the ICRP in what appears to have been an attempt to manipulate the decision-making process and to prevent the dose-rate concept from being adopted into risk assessment practices. - Highlights: • The discovery of the radiation dose-rate effect challenged the scientific basis of LNT. • The dose-rate effect occurred in males and females. • The dose-rate concept supported a threshold dose-response for radiation.
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
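The attenuation effect described, classical measurement error flattening and apparently linearizing a true threshold-shaped exposure-response, can be demonstrated with a small simulation. The threshold location, exposure range, and error magnitudes below are illustrative assumptions:

```python
import random

# Simulation sketch of regression dilution: adding classical (random)
# measurement error to a true threshold exposure shrinks the fitted
# slope of a simple linear regression of response on observed exposure.
def fitted_slope(noise_sd, n=20000, seed=1):
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x_true = rng.uniform(0, 10)
        y = max(x_true - 5.0, 0.0)                  # true threshold response
        x_obs = x_true + rng.gauss(0, noise_sd)     # classical error
        xs.append(x_obs)
        ys.append(y)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
```

Comparing `fitted_slope(0.0)` with `fitted_slope(3.0)` illustrates the bias: the noisier the exposure estimate, the flatter (and more spuriously linear) the fitted relationship.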
Johannesen, Peter T.; Pérez-González, Patricia; Kalluri, Sridhar; Blanco, José L.
2016-01-01
The aim of this study was to assess the relative importance of cochlear mechanical dysfunction, temporal processing deficits, and age on the ability of hearing-impaired listeners to understand speech in noisy backgrounds. Sixty-eight listeners took part in the study. They were provided with linear, frequency-specific amplification to compensate for their audiometric losses, and intelligibility was assessed for speech-shaped noise (SSN) and a time-reversed two-talker masker (R2TM). Behavioral estimates of cochlear gain loss and residual compression were available from a previous study and were used as indicators of cochlear mechanical dysfunction. Temporal processing abilities were assessed using frequency modulation detection thresholds. Age, audiometric thresholds, and the difference between audiometric threshold and cochlear gain loss were also included in the analyses. Stepwise multiple linear regression models were used to assess the relative importance of the various factors for intelligibility. Results showed that (a) cochlear gain loss was unrelated to intelligibility, (b) residual cochlear compression was related to intelligibility in SSN but not in a R2TM, (c) temporal processing was strongly related to intelligibility in a R2TM and much less so in SSN, and (d) age per se impaired intelligibility. In summary, all factors affected intelligibility, but their relative importance varied across maskers. PMID:27604779
Intrinsic suppression of turbulence in linear plasma devices
NASA Astrophysics Data System (ADS)
Leddy, J.; Dudson, B.
2017-12-01
Plasma turbulence is the dominant transport mechanism for heat and particles in magnetised plasmas in linear devices and tokamaks, so the study of turbulence is important in limiting and controlling this transport. Linear devices provide an axial magnetic field that serves to confine a plasma in cylindrical geometry as it travels along the magnetic field from the source to the strike point. Due to perpendicular transport, the plasma density and temperature have a roughly Gaussian radial profile with gradients that drive instabilities, such as resistive drift-waves and Kelvin-Helmholtz. If unstable, these instabilities cause perturbations to grow resulting in saturated turbulence, increasing the cross-field transport of heat and particles. When the plasma emerges from the source, there is a time, τ∥, that describes the lifetime of the plasma based on parallel velocity and length of the device. As the plasma moves down the device, it also moves azimuthally according to E × B and diamagnetic velocities. There is a balance point in these parallel and perpendicular times that sets the stabilisation threshold. We simulate plasmas with a variety of parallel lengths and magnetic fields to vary the parallel and perpendicular lifetimes, respectively, and find that there is a clear correlation between the saturated RMS density perturbation level and the balance between these lifetimes. The threshold of marginal stability is seen to exist where τ∥ ≈ 11 τ⊥. This is also associated with the product τ∥γ*, where γ* is the drift-wave linear growth rate, indicating that the instability must exist for roughly 100 times the growth time for the instability to enter the nonlinear growth phase. We explore the root of this correlation and the implications for linear device design.
NASA Astrophysics Data System (ADS)
Cheng, Mao-Hsun; Zhao, Chumin; Kanicki, Jerzy
2017-05-01
Current-mode active pixel sensor (C-APS) circuits based on amorphous indium-tin-zinc-oxide thin-film transistors (a-ITZO TFTs) are proposed for indirect X-ray imagers. The proposed C-APS circuits include a combination of a hydrogenated amorphous silicon (a-Si:H) p+-i-n+ photodiode (PD) and a-ITZO TFTs. Source-output (SO) and drain-output (DO) C-APS are investigated and compared. Acceptable signal linearity and high gains are realized for SO C-APS. APS circuit characteristics including voltage gain, charge gain, signal linearity, charge-to-current conversion gain, electron-to-voltage conversion gain are evaluated. The impact of the a-ITZO TFT threshold voltage shifts on C-APS is also considered. A layout for a pixel pitch of 50 μm and an associated fabrication process are suggested. Data line loadings for 4k-resolution X-ray imagers are computed and their impact on circuit performances is taken into consideration. Noise analysis is performed, showing a total input-referred noise of 239 e-.
Should the SCOPA-COG be modified? A Rasch analysis perspective.
Forjaz, M J; Frades-Payo, B; Rodriguez-Blazquez, C; Ayala, A; Martinez-Martin, P
2010-02-01
The SCales for Outcomes in PArkinson's disease-Cognition (SCOPA-COG) is a specific measure of cognitive function for Parkinson's disease (PD) patients. Previous studies, under the framework of classical test theory, indicate satisfactory psychometric properties. The Rasch model, an item response theory approach, provides new information about the scale and yields measurements on a linear scale. This study aims at analysing the SCOPA-COG according to the Rasch model and, on the basis of the results, suggesting modifications to the SCOPA-COG. Fit to the Rasch model was analysed using a sample of 384 PD patients. A good fit was obtained after rescoring for disordered thresholds. The person separation index, a reliability measure, was 0.83. Differential item functioning was observed by age for three items and by gender for one item. The SCOPA-COG is a unidimensional measure of global cognitive function in PD patients, with good scale targeting and no empirical evidence for use of the subscale scores. Its adequate reliability and internal construct validity were supported. The SCOPA-COG, with the proposed scoring scheme, generates true linear interval scores.
Weichenberger, Markus; Bauer, Martin; Kühler, Robert; Hensel, Johannes; Forlim, Caroline Garcia; Ihlenfeld, Albrecht; Ittermann, Bernd; Gallinat, Jürgen; Koch, Christian; Kühn, Simone
2017-01-01
In the present study, the brain's response towards near- and supra-threshold infrasound (IS) stimulation (sound frequency < 20 Hz) was investigated under resting-state fMRI conditions. The study involved two consecutive sessions. In the first session, 14 healthy participants underwent a hearing threshold measurement as well as a categorical loudness scaling measurement in which the individual loudness perception for IS was assessed across different sound pressure levels (SPL). In the second session, these participants underwent three resting-state acquisitions, one without auditory stimulation (no-tone), one with a monaurally presented 12-Hz IS tone (near-threshold) and one with a similar tone above the individual hearing threshold corresponding to a 'medium loud' hearing sensation (supra-threshold). Data analysis mainly focused on local connectivity measures by means of regional homogeneity (ReHo), but also involved independent component analysis (ICA) to investigate inter-regional connectivity. ReHo analysis revealed significantly higher local connectivity in right superior temporal gyrus (STG) adjacent to primary auditory cortex, in anterior cingulate cortex (ACC) and, when allowing smaller cluster sizes, also in the right amygdala (rAmyg) during the near-threshold condition, compared to both the supra-threshold and the no-tone condition. Additional ICA revealed large-scale changes of functional connectivity, reflected in a stronger activation of the rAmyg in the opposite contrast (no-tone > near-threshold) as well as of the right superior frontal gyrus (rSFG) during the near-threshold condition. In summary, this study is the first to demonstrate that infrasound near the hearing threshold may induce changes of neural activity across several brain regions, some of which are known to be involved in auditory processing, while others are regarded as key players in emotional and autonomic control.
These findings thus allow us to speculate on how continuous exposure to (sub-)liminal IS could exert a pathogenic influence on the organism, yet further (especially longitudinal) studies are required in order to substantiate these findings.
Linear instability of plane Couette and Poiseuille flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chefranov, S. G., E-mail: schefranov@mail.ru; Chefranov, A. G., E-mail: Alexander.chefranov@emu.edu.tr
2016-05-15
It is shown that linear instability of plane Couette flow can take place even at finite Reynolds numbers Re > Re_th ≈ 139, which agrees with the experimental value of Re_th ≈ 150 ± 5 [16, 17]. This new result of the linear theory of hydrodynamic stability is obtained by abandoning the traditional assumption of the longitudinal periodicity of disturbances in the flow direction. It is established that previous notions about linear stability of this flow at arbitrarily large Reynolds numbers relied directly upon the assumed separation of spatial variables of the field of disturbances and their longitudinal periodicity in the linear theory. By also abandoning these assumptions for plane Poiseuille flow, a new threshold Reynolds number Re_th ≈ 1035 is obtained, which agrees to within 4% with experiment, in contrast to the 500% discrepancy for the previous estimate of Re_th ≈ 5772 obtained in the framework of the linear theory under the assumption of the "normal" shape of disturbances [2].
NASA Astrophysics Data System (ADS)
Ikeda, Sho; Lee, Sang-Yeop; Ito, Hiroyuki; Ishihara, Noboru; Masu, Kazuya
2015-04-01
In this paper, we present a voltage-controlled oscillator (VCO), which achieves highly linear frequency tuning under a low supply voltage of 0.5 V. To obtain the linear frequency tuning of a VCO, the high linearity of the threshold voltage of a varactor versus its back-gate voltage is utilized. This enables the linear capacitance tuning of the varactor; thus, a highly linear VCO can be achieved. In addition, to decrease the power consumption of the VCO, a current-reuse structure is employed as a cross-coupled pair. The proposed VCO was fabricated using a 65 nm Si complementary metal oxide semiconductor (CMOS) process. It shows the ratio of the maximum VCO gain (KVCO) to the minimum one to be 1.28. The dc power consumption is 0.33 mW at a supply voltage of 0.5 V. The measured phase noise at 10 MHz offset is -123 dBc/Hz at an output frequency of 5.8 GHz.
Tang, M B Y; Goon, A T J; Goh, C L
2004-04-01
ELA-Max and EMLA cream are topical anesthetics that have been shown to have similar anesthetic efficacy in previous studies. To evaluate the analgesic efficacy of ELA-Max in comparison with EMLA cream using a novel method of thermosensory threshold analysis. A thermosensory analyzer was used to assess warmth- and heat-induced pain thresholds. No statistically significant difference was found in pain thresholds using either formulation. However, EMLA cream increased the heat-induced pain threshold to a greater extent than ELA-Max. Thermosensory measurement and analysis was well tolerated and no adverse events were encountered. EMLA cream may be superior to ELA-Max for heat-induced pain. This study suggests that thermosensory measurement may be another suitable tool for future topical anesthetic efficacy studies.