Sample records for discovery rate threshold

  1. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 1. The Russell-Muller debate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J., E-mail: edwardc@schoolph.uma

    This paper assesses the discovery of the dose-rate effect in radiation genetics and how it challenged fundamental tenets of the linear non-threshold (LNT) dose-response model, including the assumptions that all mutational damage is cumulative and irreversible and that the dose response is linear at low doses. Newly uncovered historical information also describes how a key 1964 report by the International Commission on Radiological Protection (ICRP) addressed the effects of dose rate in the assessment of genetic risk. This unique story involves assessments by two leading radiation geneticists, Hermann J. Muller and William L. Russell, who independently argued that the report's Genetic Summary Section on dose rate was incorrect while simultaneously offering vastly different views as to what the report's summary should have contained. This paper reveals occurrences of scientific disagreements, how conflicts were resolved, and which view(s) prevailed and why. During this process the Nobel laureate Muller provided incorrect information to the ICRP in what appears to have been an attempt to manipulate the decision-making process and to prevent the dose-rate concept from being adopted into risk-assessment practices. Highlights: • The discovery of the radiation dose-rate effect challenged the scientific basis of LNT. • The dose-rate effect occurred in males and females. • The dose-rate concept supported a threshold dose response for radiation.

  2. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

    PubMed

    Donoho, David; Jin, Jiashun

    2008-09-30

    In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
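
    The recipe above is mechanical enough to sketch in code. This minimal Python sketch (function names are ours, not the paper's) converts Z-scores to two-sided P-values, sorts them, maximizes the HC objective (i/p − π_(i)) / √((i/p)(1 − i/p)) over the smaller half of the P-values (a common restriction that also avoids the vanishing denominator at i = p), and returns the corresponding |Z| threshold.

```python
import math
import numpy as np

def hc_threshold(z):
    """Higher-criticism threshold selection, sketched from the abstract's recipe.

    z: array of per-feature Z-scores.  Returns the absolute Z-score whose
    two-sided P-value maximizes (i/p - pi_(i)) / sqrt((i/p)(1 - i/p)).
    """
    z = np.asarray(z, dtype=float)
    p = z.size
    # two-sided normal P-value: 2*Phi(-|z|) = erfc(|z| / sqrt(2))
    pvals = np.array([math.erfc(abs(v) / math.sqrt(2)) for v in z])
    order = np.argsort(pvals)
    frac = np.arange(1, p + 1) / p
    hc = (frac - pvals[order]) / np.sqrt(frac * (1 - frac))
    # search only the smaller half of the ordered P-values (common practice,
    # and it sidesteps the zero denominator at i = p)
    istar = int(np.argmax(hc[: p // 2]))
    return abs(z[order[istar]])
```

With a handful of strong features planted among many nulls, the returned threshold sits at the weakest planted feature, so all of them are selected.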

  3. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak

    PubMed Central

    Donoho, David; Jin, Jiashun

    2008-01-01

    In important application fields today—genomics and proteomics are examples—selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365

  4. Constraints on the FRB rate at 700-900 MHz

    NASA Astrophysics Data System (ADS)

    Connor, Liam; Lin, Hsiu-Hsien; Masui, Kiyoshi; Oppermann, Niels; Pen, Ue-Li; Peterson, Jeffrey B.; Roman, Alexander; Sievers, Jonathan

    2016-07-01

    Estimating the all-sky rate of fast radio bursts (FRBs) has been difficult due to small-number statistics and the fact that they are seen by disparate surveys in different regions of the sky. In this paper we provide limits for the FRB rate at 800 MHz based on the only burst detected at frequencies below 1.4 GHz, FRB 110523. We discuss the difficulties in rate estimation, particularly in providing an all-sky rate above a single fluence threshold. We find an implied rate between 700 and 900 MHz that is consistent with the rate at 1.4 GHz, scaling to 6.4^{+29.5}_{-5.0} × 10^3 sky^{-1} day^{-1} for an HTRU-like survey. This is promising for upcoming experiments below 1 GHz like CHIME and UTMOST, for which we forecast detection rates. Given FRB 110523's discovery at 32σ with nothing weaker detected down to the threshold of 8σ, we find consistency with a Euclidean flux distribution but disfavour steep distributions, ruling out γ > 2.2.
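
    The small-number-statistics problem the authors describe is essentially Poisson: with a single detection, the interval on the expected count comes from inverting the Poisson CDF, and dividing by the survey exposure gives a rate. Below is a self-contained sketch of a central Poisson confidence interval (a textbook construction, not the paper's fluence-threshold analysis; the bisection bracket is an illustrative choice).

```python
import math

def poisson_interval(n_obs=1, cl=0.90):
    """Central confidence interval on a Poisson mean given n_obs detections.
    Exposure scaling (counts -> rate) is left to the caller."""
    alpha = (1 - cl) / 2

    def cdf(lam, k):  # P(N <= k) for a Poisson with mean lam
        term = total = math.exp(-lam)
        for i in range(1, k + 1):
            term *= lam / i
            total += term
        return total

    def solve(f, lo, hi):  # bisection; assumes f(lo) > 0 > f(hi)
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # lower limit: P(N >= n_obs | lam) = alpha
    lam_lo = 0.0 if n_obs == 0 else solve(
        lambda l: alpha - (1 - cdf(l, n_obs - 1)), 0.0, 50.0)
    # upper limit: P(N <= n_obs | lam) = alpha
    lam_hi = solve(lambda l: cdf(l, n_obs) - alpha, 0.0, 50.0)
    return lam_lo, lam_hi
```

For one detection at 90% confidence this gives roughly (0.051, 4.74) expected events, which illustrates why a single burst yields rate limits spanning nearly two orders of magnitude.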

  5. Discrete False-Discovery Rate Improves Identification of Differentially Abundant Microbes.

    PubMed

    Jiang, Lingjing; Amir, Amnon; Morton, James T; Heller, Ruth; Arias-Castro, Ery; Knight, Rob

    2017-01-01

    Differential abundance testing is a critical task in microbiome studies that is complicated by the sparsity of data matrices. Here we adapt for microbiome studies a solution from the field of gene expression analysis to produce a new method, discrete false-discovery rate (DS-FDR), that greatly improves the power to detect differential taxa by exploiting the discreteness of the data. Additionally, DS-FDR is relatively robust to the number of noninformative features, and thus removes the problem of filtering taxonomy tables by an arbitrary abundance threshold. We show, using a combination of simulations and reanalysis of nine real-world microbiome data sets, that this new method outperforms existing methods at the differential abundance testing task, producing a false-discovery rate that is up to threefold more accurate, and halving the number of samples required to find a given difference (thus considerably increasing the efficiency of microbiome experiments). We therefore expect DS-FDR to be widely applied in microbiome studies. IMPORTANCE: DS-FDR can achieve higher statistical power to detect significant findings in sparse and noisy microbiome data compared to the commonly used Benjamini-Hochberg procedure and other FDR-controlling procedures.
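
    For context, the Benjamini-Hochberg step-up procedure that DS-FDR is benchmarked against can be sketched in a few lines (a standard textbook version, not the paper's implementation):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    hypotheses rejected at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # q * i / m, i = 1..m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))     # largest i with p_(i) <= q*i/m
        reject[order[: k + 1]] = True              # reject all smaller P-values too
    return reject
```

Note the step-up character: every hypothesis ranked at or below the largest qualifying index is rejected, even if its own P-value sits above the line.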

  6. Type I and Type II error concerns in fMRI research: re-balancing the scale

    PubMed Central

    Cunningham, William A.

    2009-01-01

    Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
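
    The joint intensity-plus-extent rule the authors recommend (P < 0.005 with a 10-voxel extent) is straightforward to apply to a voxelwise P-map. A minimal sketch using scipy.ndimage connected-component labeling (default 6-connectivity in 3-D; the function name is ours):

```python
import numpy as np
from scipy import ndimage

def joint_threshold(pmap, p_thresh=0.005, k_extent=10):
    """Keep suprathreshold voxels (pmap < p_thresh) only if they belong to a
    connected cluster of at least k_extent voxels."""
    supra = pmap < p_thresh                  # intensity threshold
    labels, n = ndimage.label(supra)         # connected components
    keep = np.zeros_like(supra)
    for lab in range(1, n + 1):
        cluster = labels == lab
        if cluster.sum() >= k_extent:        # extent threshold
            keep |= cluster
    return keep
```

An isolated suprathreshold voxel survives the intensity threshold but is discarded by the extent threshold, which is exactly the false-alarm mode this rule is meant to suppress.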

  7. Probing light sterile neutrino signatures at reactor and Spallation Neutron Source neutrino experiments

    NASA Astrophysics Data System (ADS)

    Kosmas, T. S.; Papoulias, D. K.; Tórtola, M.; Valle, J. W. F.

    2017-09-01

    We investigate the impact of a fourth sterile neutrino at reactor and Spallation Neutron Source neutrino detectors. Specifically, we explore the discovery potential of the TEXONO and COHERENT experiments to subleading sterile neutrino effects through the measurement of the coherent elastic neutrino-nucleus scattering event rate. Our dedicated χ2-sensitivity analysis employs realistic nuclear structure calculations adequate for high purity sub-keV threshold Germanium detectors.
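
    As a generic illustration of a binned χ²-sensitivity comparison (not the paper's actual likelihood, detector response, or nuclear structure inputs), one compares predicted event-count spectra with and without the sterile-neutrino hypothesis bin by bin:

```python
def chi2_sensitivity(n_standard, n_new):
    """Simple binned chi-square between two predicted event-count spectra.
    Illustrative only: a real analysis adds systematics and nuisance parameters."""
    return sum((a - b) ** 2 / a for a, b in zip(n_standard, n_new))
```

Larger values mean the two hypotheses are easier to distinguish at a given exposure; sub-keV thresholds matter here because the spectral distortion is concentrated at low recoil energies.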

  8. Poisson Statistics of Combinatorial Library Sampling Predict False Discovery Rates of Screening

    PubMed Central

    2017-01-01

    Microfluidic droplet-based screening of DNA-encoded one-bead-one-compound combinatorial libraries is a miniaturized, potentially widely distributable approach to small molecule discovery. In these screens, a microfluidic circuit distributes library beads into droplets of activity assay reagent, photochemically cleaves the compound from the bead, then incubates and sorts the droplets based on assay result for subsequent DNA sequencing-based hit compound structure elucidation. Pilot experimental studies revealed that Poisson statistics describe nearly all aspects of such screens, prompting the development of simulations to understand system behavior. Monte Carlo screening simulation data showed that increasing mean library sampling (ε), mean droplet occupancy, or library hit rate all increase the false discovery rate (FDR). Compounds identified as hits on k > 1 beads (the replicate k class) were much more likely to be authentic hits than singletons (k = 1), in agreement with previous findings. Here, we explain this observation by deriving an equation for authenticity, which reduces to the product of a library sampling bias term (exponential in k) and a sampling saturation term (exponential in ε) setting a threshold that the k-dependent bias must overcome. The equation thus quantitatively describes why each hit structure’s FDR is based on its k class, and further predicts the feasibility of intentionally populating droplets with multiple library beads, assaying the micromixtures for function, and identifying the active members by statistical deconvolution. PMID:28682059
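
    The Poisson picture lends itself to simulation. The sketch below is a toy Monte Carlo of our own (all rates, parameter names, and defaults are illustrative assumptions, not values from the paper): each compound lands on a Poisson-distributed number of beads, spurious calls occur at a fixed per-bead rate, and false-discovery fractions are tallied by replicate class k, reproducing the qualitative finding that k > 1 hits are far more likely authentic than singletons.

```python
import math
import random
from collections import Counter

def poisson_draw(rng, lam):
    """Knuth's multiplication method for a Poisson(lam) variate."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def simulate_fdr_by_k(n_compounds=1000, hit_rate=0.01, eps=3.0,
                      false_call=0.02, trials=200, seed=0):
    """Toy droplet-screen simulation; returns {k: false-discovery fraction
    among compounds called on exactly k beads}."""
    rng = random.Random(seed)
    tally = Counter()  # (k, is_true_hit) -> count
    for _ in range(trials):
        for _ in range(n_compounds):
            true_hit = rng.random() < hit_rate
            beads = poisson_draw(rng, eps)      # mean sampling depth eps
            # a true hit is called on every bead it occupies; otherwise each
            # assayed bead has a small chance of a spurious call
            k = beads if true_hit else sum(
                1 for _ in range(beads) if rng.random() < false_call)
            if k:
                tally[(k, true_hit)] += 1
    fdr = {}
    for k in range(1, 6):
        fp, tp = tally[(k, False)], tally[(k, True)]
        if fp + tp:
            fdr[k] = fp / (fp + tp)
    return fdr
```

A spurious call must recur independently to reach class k, so the false-discovery fraction falls steeply with k, consistent with the exponential-in-k bias term derived in the abstract.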

  9. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, its fold-change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold-change threshold, since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. They also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next-generation sequencing RNA-seq data analysis.
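
    As a small illustration of the resampling idea (a generic permutation-based FDR estimate, not the authors' empirical Bayes procedure; all names are ours): for each observed |t|-statistic taken as a cutoff, the FDR is estimated as the average number of permutation-null statistics exceeding the cutoff divided by the number of observed statistics exceeding it.

```python
import numpy as np

def permutation_fdr(x, y, n_perm=500, seed=0):
    """x, y: genes-by-samples arrays for the two groups.  Returns
    (|t| statistics, estimated FDR at each gene's own statistic as cutoff)."""
    rng = np.random.default_rng(seed)

    def abs_t(a, b):  # Welch-style |t| per gene (row)
        diff = a.mean(axis=1) - b.mean(axis=1)
        se = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1]
                     + b.var(axis=1, ddof=1) / b.shape[1])
        return np.abs(diff / se)

    obs = abs_t(x, y)
    pooled, n1 = np.hstack([x, y]), x.shape[1]
    null = np.empty((n_perm, x.shape[0]))
    for i in range(n_perm):
        cols = rng.permutation(pooled.shape[1])   # relabel the samples
        null[i] = abs_t(pooled[:, cols[:n1]], pooled[:, cols[n1:]])
    fdr = np.array([min(1.0, (null >= t).sum() / n_perm
                         / max(int((obs >= t).sum()), 1))
                    for t in obs])
    return obs, fdr
```

Because the null distribution is rebuilt from the data itself, no normality assumption is needed, which is the same robustness property the abstract highlights.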

  10. The Rate of Binary Black Hole Mergers Inferred from Advanced LIGO Observations Surrounding GW150914

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fong, H.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. 
A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. 
I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-12-01

    A transient gravitational-wave signal, GW150914, was identified in the twin Advanced LIGO detectors on 2015 September 14 at 09:50:45 UTC. To assess the implications of this discovery, the detectors remained in operation with unchanged configurations over a period of 39 days around the time of the signal. At the detection-statistic threshold corresponding to that observed for GW150914, our search of the 16 days of simultaneous two-detector observational data is estimated to have a false-alarm rate (FAR) of < 4.9 × 10^{-6} yr^{-1}, yielding a p-value for GW150914 of < 2 × 10^{-7}. Parameter-estimation follow-up on this trigger identifies its source as a binary black hole (BBH) merger with component masses (m_1, m_2) = (36^{+5}_{-4}, 29^{+4}_{-4}) M_⊙ at redshift z = 0.09^{+0.03}_{-0.04} (median and 90% credible range). Here, we report on the constraints these observations place on the rate of BBH coalescences. Considering only GW150914, assuming that all BBHs in the universe have the same masses and spins as this event, imposing a search FAR threshold of 1 per 100 years, and assuming that the BBH merger rate is constant in the comoving frame, we infer a 90% credible range of merger rates between 2–53 Gpc^{-3} yr^{-1} (comoving frame). Incorporating all search triggers that pass a much lower threshold while accounting for the uncertainty in the astrophysical origin of each trigger, we estimate a higher rate, ranging from 13–600 Gpc^{-3} yr^{-1} depending on assumptions about the BBH mass distribution. Altogether, our various rate estimates fall in the conservative range 2–600 Gpc^{-3} yr^{-1}.
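
    The simplest version of such a rate inference is a one-event Poisson posterior. The sketch below assumes a Jeffreys prior R^{-1/2} and exactly one certain detection in a known sensitive time-volume ⟨VT⟩, giving a Gamma(3/2, ⟨VT⟩) posterior whose central credible interval is found by numeric inversion. This is a deliberately reduced toy, not the paper's analysis, which marginalizes over many triggers of uncertain astrophysical origin and over the mass distribution.

```python
import math

def rate_interval(vt, cl=0.90):
    """Central cl credible interval on the merger rate R for one certain
    detection in sensitive time-volume vt, under a Jeffreys prior:
    posterior density proportional to sqrt(R) * exp(-R * vt)."""
    n = 200000
    r_max = 30.0 / vt                       # covers essentially all mass
    dr = r_max / n
    grid = [(i + 0.5) * dr for i in range(n)]
    pdf = [math.sqrt(r) * math.exp(-r * vt) for r in grid]
    total = sum(pdf)
    lo_q, hi_q = (1 - cl) / 2, 1 - (1 - cl) / 2
    acc, lo, hi = 0.0, None, None
    for r, f in zip(grid, pdf):             # walk the CDF to both quantiles
        acc += f / total
        if lo is None and acc >= lo_q:
            lo = r
        if acc >= hi_q:
            hi = r
            break
    return lo, hi
```

Even this toy shows the ~25-fold spread between the interval edges that a single event forces, mirroring the wide credible ranges quoted above.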

  11. "Variation in Student Learning" as a Threshold Concept

    ERIC Educational Resources Information Center

    Meyer, Jan H. F.

    2012-01-01

    The Threshold Concepts Framework acts as a catalyst in faculty development activities, energising and provoking discussion by faculty about their own courses in their own disciplines, and often leading to the discovery of transformational concepts that occasion epistemic and ontological shifts in their students. The present study focuses on…

  12. Essays on price dynamics, discovery, and dynamic threshold effects among energy spot markets in North America

    NASA Astrophysics Data System (ADS)

    Park, Haesun

    2005-12-01

    Given the role electricity and natural gas sectors play in the North American economy, an understanding of how markets for these commodities interact is important. This dissertation independently characterizes the price dynamics of major electricity and natural gas spot markets in North America by combining directed acyclic graphs with time series analyses. Furthermore, the dissertation explores a generalization of price difference bands associated with the law of one price. Interdependencies among 11 major electricity spot markets are examined in Chapter II using a vector autoregression model. Results suggest that the relationships between the markets vary by time. Western markets are separated from the eastern markets and the Electricity Reliability Council of Texas. At longer time horizons these separations disappear. Palo Verde is the important spot market in the west for price discovery. Southwest Power Pool is the dominant market in Eastern Interconnected System for price discovery. Interdependencies among eight major natural gas spot markets are investigated using a vector error correction model and the Greedy Equivalence Search Algorithm in Chapter III. Findings suggest that the eight price series are tied together through six long-run cointegration relationships, supporting the argument that the natural gas market has developed into a single integrated market in North America since deregulation. Results indicate that price discovery tends to occur in the excess consuming regions and move to the excess producing regions. Across North America, the U.S. Midwest region, represented by the Chicago spot market, is the most important for price discovery. The Ellisburg-Leidy Hub in Pennsylvania and Malin Hub in Oregon are important for eastern and western markets. In Chapter IV, a threshold vector error correction model is applied to the natural gas markets to examine nonlinearities in adjustments to the law of one price. 
Results show nonlinear adjustments to the law of one price in seven pairwise markets. Four alternative cases for the law of one price are presented as theoretical background. A methodology is developed for finding a threshold cointegration model that accounts for seasonality in the threshold levels. Results indicate that dynamic threshold effects vary with geographical location and with whether a market is excess producing or excess consuming.

  13. A Non-parametric Cutout Index for Robust Evaluation of Identified Proteins*

    PubMed Central

    Serang, Oliver; Paulo, Joao; Steen, Hanno; Steen, Judith A.

    2013-01-01

    This paper proposes a novel, automated method for evaluating sets of proteins identified using mass spectrometry. The remaining peptide-spectrum match score distributions of protein sets are compared to an empirical absent peptide-spectrum match score distribution, and a Bayesian non-parametric method reminiscent of the Dirichlet process is presented to accurately perform this comparison. Thus, for a given protein set, the process computes the likelihood that the proteins identified are correctly identified. First, the method is used to evaluate protein sets chosen using different protein-level false discovery rate (FDR) thresholds, assigning each protein set a likelihood. The protein set assigned the highest likelihood is used to choose a non-arbitrary protein-level FDR threshold. Because the method can be used to evaluate any protein identification strategy (and is not limited to mere comparisons of different FDR thresholds), we subsequently use the method to compare and evaluate multiple simple methods for merging peptide evidence over replicate experiments. The general statistical approach can be applied to other types of data (e.g. RNA sequencing) and generalizes to multivariate problems. PMID:23292186

  14. An effect size filter improves the reproducibility in spectral counting-based comparative proteomics.

    PubMed

    Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep

    2013-12-16

    The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to generate lists of differentially expressed genes, and has recommended complementing the p-value cutoff with effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results showed that the filter increased the number of true positives and decreased both the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments in which the effect-size filter was used to systematically evaluate varying fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cutoff and then applying a post-test filter based on effect-size and signal-level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC in the most abundant condition for the general practice of comparative proteomics. The implementation of feature-filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study establishes that an effect-size post-test filter improves the statistical results of spectral count-based quantitative proteomics. 
The results showed that the filter increased the number of true positives while decreasing the false positives and the false discovery rate of the datasets. The results presented here demonstrate that a post-test filter applying reasonable effect-size and signal-level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature-filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
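    As a concrete illustration, the recommended post-test filter can be sketched as follows. This is a hypothetical minimal implementation, not the authors' code: the pseudo-count of 0.5 and the relaxed p-value cutoff of 0.05 are assumptions, while the thresholds themselves (minimum absolute log2 fold change of 0.8, minimum of 2 SpC in the most abundant condition) follow the abstract.

```python
import math

def passes_filter(p_value, spc_a, spc_b, p_cutoff=0.05,
                  min_abs_log2_fc=0.8, min_signal=2.0):
    """Return True if a protein survives the relaxed p-value cutoff
    followed by the effect-size and signal-level post-test filter."""
    if p_value > p_cutoff:
        return False
    # Pseudo-count of 0.5 avoids log(0) for proteins absent in one
    # condition (an assumption, not from the paper).
    log2_fc = math.log2((spc_a + 0.5) / (spc_b + 0.5))
    # Spectral counts in the most abundant of the two conditions.
    signal = max(spc_a, spc_b)
    return abs(log2_fc) >= min_abs_log2_fc and signal >= min_signal

# A protein with 10 vs 4 spectral counts and p = 0.01 passes:
print(passes_filter(0.01, 10, 4))  # prints: True
```

    A protein that is significant by p-value alone but has a small fold change (e.g. 5 vs 4 SpC) is filtered out, which is exactly the mechanism the abstract credits with reducing false positives.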

  15. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data

    PubMed Central

    2014-01-01

    Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of the threshold value r (as a multiple of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Depending on the outcome, p-values were then calculated with either a two-sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, higher entropy cannot simply be equated with higher irregularity; entropy values instead indicate differences in regularity. N should be at least 200 data points for r = 0.2σ and should exceed 1000 for r = rChon. The weighting parameter n for the fuzzy membership function behaves differently when coupled with different r values, so the weighting parameters were chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. 
Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574

  16. Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.

    PubMed

    Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried

    2014-01-01

    Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. It deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of the threshold value r (as a multiple of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Depending on the outcome, p-values were then calculated with either a two-sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, higher entropy cannot simply be equated with higher irregularity; entropy values instead indicate differences in regularity. N should be at least 200 data points for r = 0.2σ and should exceed 1000 for r = rChon. The weighting parameter n for the fuzzy membership function behaves differently when coupled with different r values, so the weighting parameters were chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. 
Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.
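    To make the role of the parameters m, N, and r = 0.2σ concrete, here is a minimal sketch of one of the measures discussed, sample entropy. It is an illustration only, not the authors' implementation; the toy series and the edge-handling convention for the embedding vectors are assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of series x, with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def match_count(length):
        # Overlapping embedding vectors of the given length.
        emb = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(emb)):
            # Chebyshev distance to all vectors; exclude the self-match.
            d = np.max(np.abs(emb - emb[i]), axis=1)
            count += int(np.sum(d <= r)) - 1
        return count

    b, a = match_count(m), match_count(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

# A strictly periodic series is highly regular, so its sample entropy is low.
print(sample_entropy([1.0, 2.0] * 50))
```

    Increasing r admits more template matches and generally lowers the entropy estimate, which is why the cross-over behavior described in the abstract makes direct comparisons across r values delicate.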

  17. How to talk about protein‐level false discovery rates in shotgun proteomics

    PubMed Central

    The, Matthew; Tasnim, Ayesha; Käll, Lukas

    2016-01-01

    A frequently sought output from a shotgun proteomics experiment is a list of proteins that we believe to have been present in the analyzed sample before proteolytic digestion. The standard technique to control for errors in such lists is to enforce a preset threshold for the false discovery rate (FDR). Many consider protein‐level FDRs a difficult and vague concept, as the measurement entities, spectra, are manifestations of peptides and not proteins. Here, we argue that this confusion is unnecessary and provide a framework on how to think about protein‐level FDRs, starting from its basic principle: the null hypothesis. Specifically, we point out that two competing null hypotheses are used concurrently in today's protein inference methods, which has gone unnoticed by many. Using simulations of a shotgun proteomics experiment, we show how confusing one null hypothesis for the other can lead to serious discrepancies in the FDR. Furthermore, we demonstrate how the same simulations can be used to verify FDR estimates of protein inference methods. In particular, we show that, for a simple protein inference method, decoy models can be used to accurately estimate protein‐level FDRs for both competing null hypotheses. PMID:27503675

  18. A pleiotropy-informed Bayesian false discovery rate adapted to a shared control design finds new disease associations from GWAS summary statistics.

    PubMed

    Liley, James; Wallace, Chris

    2015-02-01

    Genome-wide association studies (GWAS) have been successful in identifying single nucleotide polymorphisms (SNPs) associated with many traits and diseases. However, at existing sample sizes, these variants explain only part of the estimated heritability. Leveraging GWAS results from related phenotypes may improve detection without the need for larger datasets. The Bayesian conditional false discovery rate (cFDR) constitutes an upper bound on the expected false discovery rate (FDR) across a set of SNPs whose p values for two diseases are both less than two disease-specific thresholds. Calculation of the cFDR requires only summary statistics and has several advantages over traditional GWAS analysis. However, existing methods require distinct control samples between studies. Here, we extend the technique to allow some or all controls to be shared, increasing applicability. Several different SNP sets can be defined with the same cFDR value, and we show that the expected FDR across the union of these sets may exceed the expected FDR in any single set. We describe a procedure to establish an upper bound for the expected FDR among the union of such sets of SNPs. We apply our technique to pairwise analysis of p values from ten autoimmune diseases with variable sharing of controls, enabling discovery of 59 SNP-disease associations that do not reach GWAS significance after genomic control in individual datasets. Most of the SNPs we highlight have previously been confirmed using replication studies or larger GWAS, a useful validation of our technique; we report eight SNP-disease associations across five diseases not previously reported. Our technique extends and strengthens the previous algorithm and establishes robust limits on the expected FDR. This approach can improve SNP detection in GWAS and give insight into shared aetiology between phenotypically related conditions.

  19. The picosecond structure of ultra-fast rogue waves

    NASA Astrophysics Data System (ADS)

    Klein, Avi; Shahal, Shir; Masri, Gilad; Duadi, Hamootal; Sulimani, Kfir; Lib, Ohad; Steinberg, Hadar; Kolpakov, Stanislav A.; Fridman, Moti

    2018-02-01

    We investigated ultrafast rogue waves in fiber lasers and found three different patterns: single-peaks, twin-peaks, and triple-peaks. The statistics of the different patterns as a function of the laser pump power reveal that the probability of all rogue-wave patterns increases close to the laser threshold. We developed a numerical model which shows that the ultrafast rogue-wave patterns result from both the polarization mode dispersion in the fiber and the non-instantaneous nature of the saturable absorber. This reveals that there are three different types of rogue waves in fiber lasers: slow, fast, and ultrafast, which relate to three different time-scales and are governed by three different sets of equations: the laser rate equations, the nonlinear Schrödinger equation, and the saturable absorber equations, respectively. This finding is important for analyzing rogue waves and other extreme events in fiber lasers and could enable the realization of rogue-wave types not demonstrated so far, such as triangular rogue waves.

  20. Genetic variants associated with severe retinopathy of prematurity in extremely low birth weight infants.

    PubMed

    Hartnett, M Elizabeth; Morrison, Margaux A; Smith, Silvia; Yanovitch, Tammy L; Young, Terri L; Colaizy, Tarah; Momany, Allison; Dagle, John; Carlo, Waldemar A; Clark, Erin A S; Page, Grier; Murray, Jeff; DeAngelis, Margaret M; Cotten, C Michael

    2014-08-12

    To determine genetic variants associated with severe retinopathy of prematurity (ROP) in a candidate gene cohort study of US preterm infants. Preterm infants in the discovery cohort were enrolled through the Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network, and those in the replication cohort were from the University of Iowa. All infants were phenotyped for ROP severity. Because of differences in the durations of enrollment between cohorts, severe ROP was defined as threshold disease in the discovery cohort and as threshold disease or type 1 ROP in the replication cohort. Whole genome amplified DNA from stored blood spot samples from the Neonatal Research Network biorepository was genotyped using an Illumina GoldenGate platform for candidate gene single nucleotide polymorphisms (SNPs) involving angiogenic, developmental, inflammatory, and oxidative pathways. Three analyses were performed to determine significant epidemiologic variables and SNPs associated with levels of ROP severity. Analyses controlled for multiple comparisons, ancestral eigenvalues, family relatedness, and significant epidemiologic variables. Single nucleotide polymorphisms significantly associated with ROP severity in the discovery cohort were analyzed in the replication cohort and in meta-analysis. Eight hundred seventeen infants in the discovery cohort and 543 in the replication cohort were analyzed. Severe ROP occurred in 126 infants in the discovery cohort and in 14 in the replication cohort. In both cohorts, ventilation days and seizure occurrence were associated with severe ROP. After controlling for significant factors and multiple comparisons, two intronic SNPs in the gene BDNF (rs7934165 and rs2049046, P < 3.1 × 10^-5) were associated with severe ROP in the discovery cohort but not in the replication cohort. 
However, when the cohorts were analyzed together in an exploratory meta-analysis, rs7934165 increased in associated significance with severe ROP (P = 2.9 × 10^-7). Variants in BDNF, encoding brain-derived neurotrophic factor, were associated with severe ROP in a large candidate gene study of infants with threshold ROP. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  1. Predicting missing values in a home care database using an adaptive uncertainty rule method.

    PubMed

    Konias, S; Gogou, G; Bamidis, P D; Vlahavas, I; Maglaveras, N

    2005-01-01

    Contemporary literature offers an abundance of adaptive algorithms for mining association rules. However, most of it does not deal with peculiarities, such as missing values and dynamic data creation, that are frequently encountered in fields like medicine. This paper proposes an uncertainty rule method that uses an adaptive threshold for filling missing values in newly added records. A new approach for mining uncertainty rules and filling missing values is proposed, which is particularly suitable for dynamic databases like the ones used in home care systems. In this study, a new data mining method named FiMV (Filling Missing Values) is presented, based on the mined uncertainty rules. Uncertainty rules have quite a similar structure to association rules and are extracted by an algorithm proposed in previous work, namely AURG (Adaptive Uncertainty Rule Generation). The main goal was to implement an appropriate method for recovering missing values in a dynamic database, where new records are continuously added, without needing to specify any thresholds beforehand. The method was applied to a home care monitoring system database. Multiple missing values were randomly introduced into each record's attributes in the initial dataset (at rates of 5-20%, in 5% increments). FiMV demonstrated 100% completion rates with over 90% success in each case, while usual approaches, in which all records with missing values are ignored or thresholds are required, experienced significantly reduced completion and success rates. It is concluded that the proposed method is appropriate for the data-cleaning step of the Knowledge Discovery in Databases process. This step, which strongly influences the output of any data mining technique, can improve the quality of the mined information.

  2. Lifting Transit Signals from the Kepler Noise Floor. I. Discovery of a Warm Neptune

    NASA Astrophysics Data System (ADS)

    Kunimoto, Michelle; Matthews, Jaymie M.; Rowe, Jason F.; Hoffman, Kelsey

    2018-01-01

    Light curves from the 4-year Kepler exoplanet hunting mission have been searched for transits by NASA's Kepler team and others, but there are still important discoveries to be made. We have searched the light curves of 400 Kepler Objects of Interest (KOIs) to find transit signals down to a signal-to-noise ratio (S/N) of ∼6, below the limit of S/N ∼ 7.1 that has been commonly adopted as a strict threshold to distinguish a transit candidate from a false alarm. We detect four new and convincing planet candidates ranging in radius from near-Mercury-size to slightly larger than Neptune. We highlight the discovery of KOI-408.05 (period = 637 days; radius = 4.9 R⊕; incident flux = 0.6 S⊕), a planet candidate within its host star's Habitable Zone. We dub this planet a “warm Neptune,” a likely volatile-rich world that deserves closer inspection. KOI-408.05 joins 21 other confirmed and candidate planets in the current Kepler sample with semimajor axes a > 1.4 au. These discoveries demonstrate that the S/N threshold for detection used by the Kepler project is open to debate.

  3. Estimating False Discovery Proportion Under Arbitrary Covariance Dependence*

    PubMed Central

    Fan, Jianqing; Han, Xu; Gu, Weijie

    2012-01-01

    Multiple hypothesis testing is a fundamental problem in high dimensional inference, with wide applications in many scientific fields. In genome-wide association studies, tens of thousands of tests are performed simultaneously to find whether any SNPs are associated with some traits, and those tests are correlated. When test statistics are correlated, false discovery control becomes very challenging under arbitrary dependence. In the current paper, we propose a novel method based on principal factor approximation, which successfully subtracts the common dependence and significantly weakens the correlation structure, to deal with an arbitrary dependence structure. We derive an approximate expression for the false discovery proportion (FDP) in large-scale multiple testing when a common threshold is used, and provide a consistent estimate of the realized FDP. This result has important applications in controlling FDR and FDP. Our estimate of realized FDP compares favorably with Efron's (2007) approach, as demonstrated in the simulated examples. Our approach is further illustrated by some real data applications. We also propose a dependence-adjusted procedure, which is more powerful than the fixed threshold procedure. PMID:24729644

  4. Functional Brain Connectome and Its Relation to Hoehn and Yahr Stage in Parkinson Disease.

    PubMed

    Suo, Xueling; Lei, Du; Li, Nannan; Cheng, Lan; Chen, Fuqin; Wang, Meiyun; Kemp, Graham J; Peng, Rong; Gong, Qiyong

    2017-12-01

    Purpose To use resting-state functional magnetic resonance (MR) imaging and graph theory approaches to investigate the brain functional connectome and its potential relation to disease severity in Parkinson disease (PD). Materials and Methods This case-control study was approved by the local research ethics committee, and all participants provided informed consent. There were 153 right-handed patients with PD and 81 healthy control participants recruited who were matched for age, sex, and handedness to undergo a 3-T resting-state functional MR examination. The whole-brain functional connectome was constructed by thresholding the Pearson correlation matrices of 90 brain regions, and the topologic properties were analyzed by using graph theory approaches. Nonparametric permutation tests were used to compare topologic properties, and their relationship to disease severity was assessed. Results The functional connectome in PD showed abnormalities at the global level (ie, decrease in clustering coefficient, global efficiency, and local efficiency, and increase in characteristic path length) and at the nodal level (decreased nodal centralities in the sensorimotor cortex, default mode, and temporal-occipital regions; P < .001, false discovery rate corrected). Further, the nodal centralities in left postcentral gyrus and left superior temporal gyrus correlated negatively with Unified Parkinson's Disease Rating Scale III score (P = .038, false discovery rate corrected, r = -0.198; and P = .009, false discovery rate corrected, r = -0.270, respectively) and decreased with increasing Hoehn and Yahr stage in patients with PD. Conclusion The configurations of brain functional connectome in patients with PD were perturbed and correlated with disease severity, notably with those responsible for motor functions. These results provide topologic insights into understanding the neural functional changes in relation to disease severity of PD. 
© RSNA, 2017 Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on September 11, 2017.
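    The global topologic properties named in the abstract (clustering coefficient, characteristic path length) can be computed from a binary adjacency matrix as in this minimal sketch. It is an illustration only, not the study's pipeline: the study analyzed 90-region functional connectomes and additional efficiency and nodal-centrality measures not shown here.

```python
import numpy as np
from collections import deque

def clustering_coefficient(adj):
    """Mean local clustering coefficient of a binary undirected graph."""
    n = len(adj)
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # Number of edges among the neighbors of node i.
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2
        coeffs.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

def characteristic_path_length(adj):
    """Mean shortest path length over reachable node pairs (BFS per node)."""
    n = len(adj)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:  # breadth-first search from source s
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += len(dist) - 1
    return total / pairs

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(clustering_coefficient(triangle), characteristic_path_length(triangle))  # prints: 1.0 1.0
```

    In this framework, the decreases in clustering coefficient and increases in characteristic path length reported for PD patients correspond to a less locally cohesive, less efficiently wired network.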

  5. Optimal False Discovery Rate Control for Dependent Data

    PubMed Central

    Xie, Jichun; Cai, T. Tony; Maris, John; Li, Hongzhe

    2013-01-01

    This paper considers the problem of optimal false discovery rate control when the test statistics are dependent. An optimal joint oracle procedure, which minimizes the false non-discovery rate subject to a constraint on the false discovery rate is developed. A data-driven marginal plug-in procedure is then proposed to approximate the optimal joint procedure for multivariate normal data. It is shown that the marginal procedure is asymptotically optimal for multivariate normal data with a short-range dependent covariance structure. Numerical results show that the marginal procedure controls false discovery rate and leads to a smaller false non-discovery rate than several commonly used p-value based false discovery rate controlling methods. The procedure is illustrated by an application to a genome-wide association study of neuroblastoma and it identifies a few more genetic variants that are potentially associated with neuroblastoma than several p-value-based false discovery rate controlling procedures. PMID:23378870
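    For context, the "p-value based false discovery rate controlling methods" that the marginal procedure is compared against include the classical Benjamini-Hochberg step-up procedure, sketched here. This is a generic textbook implementation, not the paper's code.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected by the BH step-up
    procedure at nominal FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # BH compares the i-th smallest p-value against alpha * i / m.
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= alpha*i/m
        rejected[order[:k + 1]] = True
    return rejected

print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]).tolist())  # prints: [True, True, True, False]
```

    BH treats the p-values marginally; the paper's point is that under short-range dependence a procedure exploiting the joint covariance structure can reject more hypotheses (a smaller false non-discovery rate) at the same FDR level.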

  6. How to talk about protein-level false discovery rates in shotgun proteomics.

    PubMed

    The, Matthew; Tasnim, Ayesha; Käll, Lukas

    2016-09-01

    A frequently sought output from a shotgun proteomics experiment is a list of proteins that we believe to have been present in the analyzed sample before proteolytic digestion. The standard technique to control for errors in such lists is to enforce a preset threshold for the false discovery rate (FDR). Many consider protein-level FDRs a difficult and vague concept, as the measurement entities, spectra, are manifestations of peptides and not proteins. Here, we argue that this confusion is unnecessary and provide a framework on how to think about protein-level FDRs, starting from its basic principle: the null hypothesis. Specifically, we point out that two competing null hypotheses are used concurrently in today's protein inference methods, which has gone unnoticed by many. Using simulations of a shotgun proteomics experiment, we show how confusing one null hypothesis for the other can lead to serious discrepancies in the FDR. Furthermore, we demonstrate how the same simulations can be used to verify FDR estimates of protein inference methods. In particular, we show that, for a simple protein inference method, decoy models can be used to accurately estimate protein-level FDRs for both competing null hypotheses. © 2016 The Authors. Proteomics Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Microsatellite markers associated with resistance to Marek's disease in commercial layer chickens.

    PubMed

    McElroy, J P; Dekkers, J C M; Fulton, J E; O'Sullivan, N P; Soller, M; Lipkin, E; Zhang, W; Koehler, K J; Lamont, S J; Cheng, H H

    2005-11-01

    The objective of the current study was to identify QTL conferring resistance to Marek's disease (MD) in commercial layer chickens. To generate the resource population, 2 partially inbred lines that differed in MD-caused mortality were intermated to produce 5 backcross families. Vaccinated chicks were challenged with very virulent plus (vv+) MD virus strain 648A at 6 d and monitored for MD symptoms. A recent field isolate of the MD virus was used because the lines were resistant to commonly used older laboratory strains. Selective genotyping was employed using 81 microsatellites selected based on prior results with selective DNA pooling. Linear regression and Cox proportional hazard models were used to detect associations between marker genotypes and survival. Significance thresholds were validated by simulation. Seven and six markers were significant based on proportion of false positives and false discovery rate thresholds of less than 0.2, respectively. Seventeen markers were associated with MD survival at a comparison-wise error rate of 0.10, about twice the number expected by chance, indicating that at least some of the associations represent true effects. Thus, the present study shows that loci affecting MD resistance can be mapped in commercial layer lines. More comprehensive studies are under way to confirm and extend these results.

  8. Low level lead exposure: history and discovery.

    PubMed

    Needleman, Herbert

    2009-04-01

    The history of lead toxicity spans two millennia. With increasingly sensitive methods, deficits due to lead exposure have been demonstrated at lower and lower doses. Persuasive evidence suggests that no threshold for lead toxicity exists.

  9. Experimental viscous fingering in a tapered radial Hele-Shaw cell

    NASA Astrophysics Data System (ADS)

    Bongrand, Gregoire; Tsai, Peichun Amy; Complex Fluids Group Team

    2017-11-01

    The fluid-fluid displacement in porous media is a common process with direct applications in various fields, such as enhanced oil recovery and geological CO2 sequestration. In this work, we experimentally investigate the influence of converging cells on viscous fingering instabilities using a radially tapered cell. For air displacing oil, in contrast to the classical Saffman-Taylor fingering, our results show that a converging gap gradient in radial propagation can provide a stabilizing effect and hinder fingering. For a fixed gap gradient and thickness, with increasing injection rates we find a stable displacement at small flow rates, whereas unstable fingering occurs above a certain threshold. We further investigate this critical flow rate delineating the stable and unstable regimes for different gap gradients. These results reveal that the displacement efficiency depends not only on the fluid properties but also on the interfacial velocity and channel structure. The latter factors provide a useful and convenient control to either trigger or inhibit fingering instability. NSERC Discovery, Accelerator, and CRC programs.

  10. Pharmacovigilance data mining with methods based on false discovery rates: a comparative simulation study.

    PubMed

    Ahmed, I; Thiessard, F; Miremont-Salamé, G; Bégaud, B; Tubert-Bitter, P

    2010-10-01

    The early detection of adverse reactions caused by drugs that are already on the market is the prime concern of pharmacovigilance efforts; the methods in use for postmarketing surveillance are aimed at detecting signals pointing to potential safety concerns, on the basis of reports from health-care providers and information available in various databases. Signal detection methods based on the estimation of the false discovery rate (FDR) have recently been proposed. They address the limitation of arbitrary detection thresholds in the automatic methods in current use, including those last updated by the US Food and Drug Administration and the World Health Organization's Uppsala Monitoring Centre. We used two simulation procedures to compare the false-positive performance of three current methods, namely the reporting odds ratio (ROR), the information component (IC), and the gamma Poisson shrinkage (GPS), and of two FDR-based methods derived from the GPS model and Fisher's test. Large differences in FDR were associated with the signal-detection methods currently in use, ranging from 0.01 to 12% in an analysis restricted to signals with at least three reports. The numbers of signals generated were also highly variable. Among fixed-size lists of signals, the FDR was lower when the FDR-based approaches were used. Overall, the outcomes of both simulation studies suggest that improvement in effectiveness can be expected from use of the FDR-based GPS method.

  11. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
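
    The empirical optimization described above can be sketched as follows: for synthetic lognormal "snapshots" (parameters illustrative, not the GATE values), scan candidate thresholds and keep the one maximizing the correlation between the area-average moment and the fractional coverage above threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical radar snapshots: lognormal rain rates (mm/h) whose
# parameters drift from snapshot to snapshot (illustrative, not GATE data).
snapshots = [rng.lognormal(mean=rng.normal(0.5, 0.3), sigma=1.0, size=2000)
             for _ in range(60)]

def threshold_correlation(snapshots, tau, moment=1):
    """Correlation across snapshots between the area-average rain-rate
    moment and the fractional coverage of rates exceeding tau."""
    m = np.array([np.mean(s ** moment) for s in snapshots])
    cover = np.array([np.mean(s > tau) for s in snapshots])
    return np.corrcoef(m, cover)[0, 1]

taus = np.linspace(0.5, 30.0, 60)
corrs = [threshold_correlation(snapshots, t) for t in taus]
tau_opt = float(taus[int(np.argmax(corrs))])
print(f"optimal threshold ~ {tau_opt:.1f} mm/h for the first moment")
```

    The snapshot-to-snapshot drift of the lognormal parameter plays the role of the sampling variation of the parent distribution that the abstract identifies as the origin of the optimum.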

  12. Effects of acute hypoxia on the determination of anaerobic threshold using the heart rate-work rate relationships during incremental exercise tests.

    PubMed

    Ozcelik, O; Kelestimur, H

    2004-01-01

    Anaerobic threshold, which describes the onset of a systematic increase in blood lactate concentration, is a widely used concept in clinical and sports medicine. A deflection point in the heart rate-work rate relationship has been introduced to determine the anaerobic threshold non-invasively. However, some researchers have consistently reported a heart rate deflection at higher work rates, while others have not. The present study was designed to investigate whether the heart rate deflection point accurately predicts the anaerobic threshold under the condition of acute hypoxia. Eight untrained males performed two incremental exercise tests using an electromagnetically braked cycle ergometer: one breathing room air and one breathing 12% O2. The anaerobic threshold was estimated using the V-slope method and determined from the increase in blood lactate and the decrease in standard bicarbonate concentration. This threshold was also estimated from the deflection point in the heart rate-work rate relationship. Not all subjects exhibited a heart rate deflection. Only two subjects in the control condition and four subjects in the hypoxia condition showed a heart rate deflection. Additionally, the heart rate deflection point overestimated the anaerobic threshold. In conclusion, the heart rate deflection point was not an accurate predictor of the anaerobic threshold, and acute hypoxia did not systematically affect the heart rate-work rate relationship.

  13. Building one molecule from a reservoir of two atoms

    NASA Astrophysics Data System (ADS)

    Liu, L. R.; Hood, J. D.; Yu, Y.; Zhang, J. T.; Hutzler, N. R.; Rosenband, T.; Ni, K.-K.

    2018-05-01

    Chemical reactions typically proceed via stochastic encounters between reactants. Going beyond this paradigm, we combined exactly two atoms in a single, controlled reaction. The experimental apparatus traps two individual laser-cooled atoms [one sodium (Na) and one cesium (Cs)] in separate optical tweezers and then merges them into one optical dipole trap. Subsequently, photoassociation forms an excited-state NaCs molecule. The discovery of previously unseen resonances near the molecular dissociation threshold and measurement of collision rates are enabled by the tightly trapped ultracold sample of atoms. As laser-cooling and trapping capabilities are extended to more elements, the technique will enable the study of more diverse, and eventually more complex, molecules in an isolated environment, as well as synthesis of designer molecules for qubits.

  14. WiseView: Visualizing motion and variability of faint WISE sources

    NASA Astrophysics Data System (ADS)

    Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume

    2018-06-01

    WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.

  15. Cardiovascular genetics: technological advancements and applicability for dilated cardiomyopathy.

    PubMed

    Kummeling, G J M; Baas, A F; Harakalova, M; van der Smagt, J J; Asselbergs, F W

    2015-07-01

    Genetics plays an important role in the pathophysiology of cardiovascular diseases and is increasingly being integrated into clinical practice. Since 2008, both the capacity and the cost-efficiency of DNA mutation screening have increased dramatically owing to the technological advances brought by next-generation sequencing. Hence, the discovery rate of genetic defects in cardiovascular genetics has grown rapidly and the financial threshold for gene diagnostics has been lowered, making large-scale DNA sequencing broadly accessible. In this review, genetic variants, mutations and inheritance models are briefly introduced, after which an overview is provided of current clinical and technological applications in gene diagnostics and research for cardiovascular disease and, in particular, dilated cardiomyopathy. Finally, a reflection on future perspectives in cardiogenetics is given.

  16. Neuroanatomical correlates of personality in chimpanzees (Pan troglodytes): Associations between personality and frontal cortex.

    PubMed

    Latzman, Robert D; Hecht, Lisa K; Freeman, Hani D; Schapiro, Steven J; Hopkins, William D

    2015-12-01

    Converging empirical data suggest that a set of largely consistent personality traits exists in both human and nonhuman primates; despite these similarities, almost nothing is known concerning the neurobiological basis of these traits in nonhuman primates. The current study examined associations between chimpanzee personality traits and the grey matter volume and asymmetry of various frontal cortex regions in 107 captive chimpanzees. Chimpanzees rated higher on Openness and Extraversion had greater bilateral grey matter volumes in the anterior cingulate cortex. Further, chimpanzees rated higher on Dominance had larger grey matter volumes in the left anterior cingulate cortex and right prefrontal cortex (PFC). Finally, apes rated higher on Reactivity/Unpredictability had higher grey matter volumes in the right mesial PFC. All associations survived after applying False Discovery Rate (FDR) thresholds. Results are discussed in terms of current neuroscientific models of personality, which suggest that the frontal cortex, and asymmetries in this region, play an important role in the neurobiological foundation of broad dispositional traits. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Chemical characterization of the acid alteration of diesel fuel: Non-targeted analysis by two-dimensional gas chromatography coupled with time-of-flight mass spectrometry with tile-based Fisher ratio and combinatorial threshold determination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parsons, Brendon A.; Pinkerton, David K.; Wright, Bob W.

    The illicit chemical alteration of petroleum fuels is of scientific interest, particularly to regulatory agencies which set fuel specifications, or excises based on those specifications. One type of alteration is the reaction of diesel fuel with concentrated sulfuric acid. Such reactions are known to subtly alter the chemical composition of the fuel, particularly the aromatic species native to the fuel. Comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC–TOFMS) is ideally suited for the analysis of diesel fuel, but may provide the analyst with an overwhelming amount of data, particularly in sample-class comparison experiments comprised of many samples. The tile-based Fisher-ratio (F-ratio) method reduces the abundance of data in a GC × GC–TOFMS experiment to only the peaks which significantly distinguish the unaltered and acid altered sample classes. Three samples of diesel fuel from different filling stations were each altered to discover chemical features, i.e., analyte peaks, which were consistently changed by the acid reaction. Using different fuels prioritizes the discovery of features which are likely to be robust to the variation present between fuel samples and which will consequently be useful in determining whether an unknown sample has been acid altered. The subsequent analysis confirmed that aromatic species are removed by the acid alteration, with the degree of removal consistent with predicted reactivity toward electrophilic aromatic sulfonation. Additionally, we observed that alkenes and alkynes were also removed from the fuel, and that sulfur dioxide or compounds that degrade to sulfur dioxide are generated by the acid alteration. In addition to applying the previously reported tile-based F-ratio method, this report also expands null distribution analysis to algorithmically determine an F-ratio threshold to confidently select only the features which are sufficiently class-distinguishing.
    When applied to the acid alteration of diesel fuel, the suggested per-hit F-ratio threshold was 12.4, which is predicted to maintain the false discovery rate (FDR) below 0.1%. Using this F-ratio threshold, 107 of the 3362 preliminary hits were deemed significantly changing due to the acid alteration, with the number of false positives estimated to be about 3.
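
    The null-distribution idea behind the combinatorial threshold determination can be sketched as follows. This is a hedged illustration on synthetic data, not the paper's tile-based implementation: class labels are permuted to build a null F-ratio distribution, and the threshold is set at a high null quantile to cap the expected false discovery rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_ratio(a, b):
    """Between-class over within-class variation for one feature."""
    grand = np.concatenate([a, b]).mean()
    between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    within = a.var(ddof=1) + b.var(ddof=1)
    return between / within

# Hypothetical experiment: 3000 features measured in 8 unaltered and
# 8 altered samples; only the first 40 features are genuinely changed.
unaltered = rng.normal(0.0, 1.0, size=(3000, 8))
altered = unaltered + rng.normal(0.0, 1.0, size=(3000, 8))
altered[:40] += 6.0

observed = np.array([f_ratio(unaltered[i], altered[i]) for i in range(3000)])

# Null distribution: permute class labels within each feature.
null = []
for i in range(3000):
    pooled = np.concatenate([unaltered[i], altered[i]])
    rng.shuffle(pooled)
    null.append(f_ratio(pooled[:8], pooled[8:]))

# A high null quantile caps the expected fraction of false hits.
threshold = np.quantile(null, 0.999)
hits = observed > threshold
print(int(hits.sum()), "features exceed the null-derived F-ratio threshold")
```

    The paper's threshold of 12.4 was obtained with a considerably more refined tile-based procedure; this sketch only shows the shape of the null-quantile argument.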

  18. Time-related changes in firing rates are influenced by recruitment threshold and twitch force potentiation in the first dorsal interosseous.

    PubMed

    Miller, Jonathan D; Herda, Trent J; Trevino, Michael A; Sterczala, Adam J; Ciccone, Anthony B

    2017-08-01

    What is the central question of this study? The influences of motor unit recruitment threshold and twitch force potentiation on the changes in firing rates during steady-force muscular contractions are not well understood. What is the main finding and its importance? The behaviour of motor units during steady force was influenced by recruitment threshold, such that firing rates decreased for lower-threshold motor units but increased for higher-threshold motor units. In addition, individuals with greater changes in firing rates possessed greater twitch force potentiation. There are contradictory reports regarding changes in motor unit firing rates during steady-force contractions. Inconsistencies are likely to be the result of previous studies disregarding motor unit recruitment thresholds and not examining firing rates on a subject-by-subject basis. It is hypothesized that firing rates are manipulated by twitch force potentiation during contractions. Therefore, in this study we examined time-related changes in firing rates at steady force in relationship to motor unit recruitment threshold in the first dorsal interosseous and the influence of twitch force potentiation on such changes in young versus aged individuals. Subjects performed a 12 s steady-force contraction at 50% maximal voluntary contraction, with evoked twitches before and after the contraction to quantify potentiation. Firing rates, in relationship to recruitment thresholds, were determined at the beginning, middle and end of the steady force. There were no firing rate changes for aged individuals. For the young, firing rates decreased slightly for lower-threshold motor units but increased for higher-threshold motor units. Twitch force potentiation was greater for young than aged subjects, and changes in firing rates were correlated with twitch force potentiation. 
Thus, individuals with greater increases in firing rates of higher-threshold motor units and decreases in lower-threshold motor units possessed greater twitch force potentiation. Overall, changes in firing rates during brief steady-force contractions are dependent on recruitment threshold and explained in part by twitch force potentiation. Given that firing rate changes were measured in relationship to recruitment threshold, this study illustrates a more complete view of firing rate changes during steady-force contractions. © 2017 The Authors. Experimental Physiology © 2017 The Physiological Society.

  19. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple and fast, and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, however, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing time and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source code for all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
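
    One family of such procedures, converting relevance scores into permutation p-values by permuting the response, can be sketched as follows (a toy correlation-based importance measure stands in for a machine-learning score; all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def importance(X, y):
    """Toy relevance score standing in for a machine-learning importance
    measure: absolute correlation of each feature with the response."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

n, p = 80, 50
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + rng.normal(size=n)   # only feature 0 is truly relevant

obs = importance(X, y)

# Null scores: permuting the response breaks every real association
# while preserving the feature distributions.
B = 200
null = np.array([importance(X, rng.permutation(y)) for _ in range(B)])

# Permutation p-value per feature: fraction of null scores >= observed.
pvals = (1 + (null >= obs).sum(axis=0)) / (1 + B)
print(f"feature 0: p = {pvals[0]:.3f}")
```

    The resulting p-values can then be fed into any standard multiple-testing correction, which is exactly the kind of statistical interpretability the abstract argues the raw scores lack.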

  20. ROCker: accurate detection and quantification of target genes in short-read metagenomic data sets by modeling sliding-window bitscores

    DOE PAGES

    Orellana, Luis H.; Rodriguez-R, Luis M.; Konstantinidis, Konstantinos T.

    2016-10-07

    Functional annotation of metagenomic and metatranscriptomic data sets relies on similarity searches based on e-value thresholds resulting in an unknown number of false positive and negative matches. To overcome these limitations, we introduce ROCker, aimed at identifying position-specific, most-discriminant thresholds in sliding windows along the sequence of a target protein, accounting for non-discriminative domains shared by unrelated proteins. ROCker employs the receiver operating characteristic (ROC) curve to minimize false discovery rate (FDR) and calculate the best thresholds based on how simulated shotgun metagenomic reads of known composition map onto well-curated reference protein sequences and thus, differs from HMM profiles and related methods. We showcase ROCker using ammonia monooxygenase (amoA) and nitrous oxide reductase (nosZ) genes, mediating oxidation of ammonia and the reduction of the potent greenhouse gas, N2O, to inert N2, respectively. ROCker typically showed 60-fold lower FDR when compared to the common practice of using fixed e-values. Previously uncounted ‘atypical’ nosZ genes were found to be two times more abundant, on average, than their typical counterparts in most soil metagenomes and the abundance of bacterial amoA was quantified against the highly-related particulate methane monooxygenase (pmoA). Therefore, ROCker can reliably detect and quantify target genes in short-read metagenomes.
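
    The core ROC step, choosing a bitscore cutoff for one sliding window that minimizes the FDR on reads of known origin, can be sketched as follows (synthetic scores and a hypothetical sensitivity floor; not the ROCker code itself):

```python
import numpy as np

rng = np.random.default_rng(3)

def best_threshold(true_scores, false_scores, min_sensitivity=0.9):
    """Among cutoffs retaining at least min_sensitivity of the true-read
    bitscores, return the (cutoff, FDR) pair with the lowest FDR."""
    best = None
    for t in np.unique(np.concatenate([true_scores, false_scores])):
        tp = int((true_scores >= t).sum())
        fp = int((false_scores >= t).sum())
        if tp / len(true_scores) < min_sensitivity:
            continue
        fdr = fp / (tp + fp)
        if best is None or fdr < best[1]:
            best = (float(t), fdr)
    return best

# Hypothetical window: reads from the target gene score higher, on average,
# than reads from unrelated proteins sharing a non-discriminative domain.
true_scores = rng.normal(60.0, 5.0, 500)
false_scores = rng.normal(45.0, 5.0, 500)
cutoff, fdr = best_threshold(true_scores, false_scores)
print(f"cutoff ~ {cutoff:.1f} bits, FDR ~ {fdr:.3f}")
```

    Repeating this per window is what makes the thresholds position-specific: windows covering a shared, non-discriminative domain get a stricter cutoff than windows covering unique sequence.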

  3. Thresholds of sea-level rise rate and sea-level acceleration rate in a vulnerable coastal wetland

    NASA Astrophysics Data System (ADS)

    Wu, W.; Biber, P.; Bethel, M.

    2017-12-01

    Feedback among inundation, sediment trapping, and vegetation productivity helps maintain coastal wetlands facing sea-level rise (SLR). However, when the SLR rate exceeds a threshold, coastal wetlands can collapse. Understanding this threshold helps address a key challenge in ecology, the nonlinear response of ecosystems to environmental change, and promotes communication between ecologists and policy makers. We studied the threshold of SLR rate and developed a new threshold of SLR acceleration rate for the sustainability of coastal wetlands, as SLR is likely to accelerate under enhanced anthropogenic forcing. We developed a mechanistic model to simulate wetland change and derived the SLR thresholds for Grand Bay, MS, a micro-tidal estuary with limited upland freshwater and sediment input in the northern Gulf of Mexico. The new SLR acceleration rate threshold complements the SLR rate threshold and can help explain the temporal lag before the rapid decline in wetland area becomes evident after the SLR rate threshold is exceeded. Deriving these two thresholds depends on the temporal scale, the interaction of SLR with other environmental factors, and landscape metrics, which had not been fully accounted for before this study. The derived SLR rate thresholds range from 7.3 mm/yr to 11.9 mm/yr. The thresholds of SLR acceleration rate are 3.02×10⁻⁴ m/yr² and 9.62×10⁻⁵ m/yr² for 2050 and 2100, respectively. Based on these thresholds, predicted SLR that will adversely impact the coastal wetlands in Grand Bay by 2100 falls within the likely range of SLR under a high warming scenario (RCP8.5) and beyond the very likely range under a low warming scenario (RCP2.6 or 3), highlighting the need to avoid the high warming scenario if these marshes are to be preserved.
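
    The notion of an SLR-rate threshold, a forcing level beyond which the accretion feedback can no longer keep pace and wetland area declines abruptly, can be illustrated with a toy marsh-elevation model (all numbers invented for illustration, not the parameters of the Grand Bay model):

```python
def final_wetland_area(slr_rate_mm_yr, years=100):
    """Toy marsh model: sediment accretion is a saturating function of
    inundation depth, so it keeps pace with slow SLR but not fast SLR;
    once depth exceeds the vegetation's tolerance, the marsh dies back.
    All numbers are invented for illustration."""
    elevation, sea, area = 0.0, 0.0, 1.0          # mm, mm, fraction remaining
    for _ in range(years):
        sea += slr_rate_mm_yr
        depth = max(0.0, sea - elevation)
        accretion = 10.0 * depth / (depth + 50.0)  # saturates at 10 mm/yr
        if depth > 200.0:                          # drowning threshold
            area *= 0.9
            accretion = 0.0
        elevation += accretion
    return area

rates = [r / 2 for r in range(2, 31)]              # 1.0 to 15.0 mm/yr
areas = {r: final_wetland_area(r) for r in rates}
threshold = min(r for r in rates if areas[r] < 0.5)
print(f"collapse threshold ~ {threshold} mm/yr in this toy model")
```

    The sharp transition from persistence to collapse as the forcing rate crosses the model's threshold is the qualitative behavior the abstract quantifies with its mechanistic model.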

  4. New field discovery rates in lower 48 states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, T.J.; Hugman, R.; Vidas, H.

    1989-03-01

    Through 1982, AAPG reported new field discovery rates. In 1985, a paper demonstrated that through 1975 the AAPG survey of new field discoveries had significantly underreported the larger new field discoveries. This presentation updates the new field discovery data reported in that paper and extends the data through the mid-1980s. Regional details of the new field discoveries, including an explicit breakout of discoveries below 15,000 ft, are reported. The extent to which the observed relative stabilization in new field discoveries per wildcat reflects regional shifts in exploration activity is discussed. Finally, the rate of reserve growth reflected in the passage of particular fields through the AAPG field size categories is discussed.

  5. Quantitative trait loci analysis using the false discovery rate.

    PubMed

    Benjamini, Yoav; Yekutieli, Daniel

    2005-10-01

    False discovery rate control has become an essential tool in any study that has a very large multiplicity problem. False discovery rate-controlling procedures have also been found to be very effective in QTL analysis, ensuring reproducible results with few falsely discovered linkages and offering increased power to discover QTL, although their acceptance has been slower than in microarray analysis, for example. The reason is partly that the methodological aspects of applying the false discovery rate to QTL mapping are not well developed. Our aim in this work is to lay a solid foundation for the use of the false discovery rate in QTL mapping. We review the false discovery rate criterion, the appropriate interpretation of the FDR, and alternative formulations of the FDR that have appeared in the statistical and genetics literature. We discuss important features of the FDR approach, some stemming from new developments in FDR theory and methodology, which make it especially useful in linkage analysis. We review false discovery rate-controlling procedures (the BH procedure, the resampling procedure, and the adaptive two-stage procedure) and discuss the validity of these procedures in single- and multiple-trait QTL mapping. Finally, we argue that control of the false discovery rate has an important role in suggesting, indicating the significance of, and confirming QTL, and we present guidelines for its use.
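
    The BH procedure reviewed above can be sketched in a few lines, a minimal implementation of the step-up rule applied to hypothetical marker p-values:

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up rule: find the largest k such that the k-th smallest
    p-value satisfies p_(k) <= k*q/m, and reject those k hypotheses.
    Returns the set of rejected (0-based) indices."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    return set(order[:k_max])

# Hypothetical single-trait scan: ten marker p-values.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.5, 0.7, 0.9]
print(sorted(benjamini_hochberg(pvals, q=0.05)))  # [0, 1]
```

    Note the step-up character: a p-value can be rejected even if it exceeds its own line, provided some larger-ranked p-value falls below its line; this is where the power advantage over Bonferroni comes from.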

  6. Discovery and validation of sub-threshold genome-wide association study loci using epigenomic signatures

    PubMed Central

    Wang, Xinchen; Tucker, Nathan R; Rizki, Gizem; Mills, Robert; Krijger, Peter HL; de Wit, Elzo; Subramanian, Vidya; Bartell, Eric; Nguyen, Xinh-Xinh; Ye, Jiangchuan; Leyton-Mange, Jordan; Dolmatova, Elena V; van der Harst, Pim; de Laat, Wouter; Ellinor, Patrick T; Newton-Cheh, Christopher; Milan, David J; Kellis, Manolis; Boyer, Laurie A

    2016-01-01

    Genetic variants identified by genome-wide association studies explain only a modest proportion of heritability, suggesting that meaningful associations lie 'hidden' below current thresholds. Here, we integrate information from association studies with epigenomic maps to demonstrate that enhancers significantly overlap known loci associated with the cardiac QT interval and QRS duration. We apply functional criteria to identify loci associated with QT interval that do not meet genome-wide significance and are missed by existing studies. We demonstrate that these 'sub-threshold' signals represent novel loci, and that epigenomic maps are effective at discriminating true biological signals from noise. We experimentally validate the molecular, gene-regulatory, cellular and organismal phenotypes of these sub-threshold loci, demonstrating that most sub-threshold loci have regulatory consequences and that genetic perturbation of nearby genes causes cardiac phenotypes in mouse. Our work provides a general approach for improving the detection of novel loci associated with complex human traits. DOI: http://dx.doi.org/10.7554/eLife.10557.001 PMID:27162171
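
    The filtering logic, relaxing the genome-wide significance cutoff and rescuing sub-threshold variants that overlap epigenomic annotations, can be sketched as follows (all SNP names, positions, intervals, and the relaxed cutoff are hypothetical):

```python
# Hypothetical association results: (snp, position, p_value).
results = [("rs1", 150, 3e-9), ("rs2", 520, 4e-6),
           ("rs3", 900, 2e-5), ("rs4", 1300, 0.02)]

# Hypothetical cardiac enhancer intervals from an epigenomic map.
enhancers = [(500, 600), (880, 950)]

GWS = 5e-8   # conventional genome-wide significance
SUB = 1e-4   # relaxed cutoff for sub-threshold candidates (illustrative)

def in_enhancer(pos):
    return any(lo <= pos <= hi for lo, hi in enhancers)

genome_wide = [s for s, pos, p in results if p < GWS]
sub_threshold = [s for s, pos, p in results
                 if GWS <= p < SUB and in_enhancer(pos)]
print(genome_wide, sub_threshold)  # ['rs1'] ['rs2', 'rs3']
```

    The study's contribution is showing experimentally that candidates rescued this way are enriched for true regulatory loci rather than noise.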

  7. The Upper Atmosphere; Threshold of Space.

    ERIC Educational Resources Information Center

    Bird, John

    This booklet contains illustrations of the upper atmosphere, describes some recent discoveries, and suggests future research questions. It contains many color photographs. Sections include: (1) "Where Does Space Begin?"; (2) "Importance of the Upper Atmosphere" (including neutral atmosphere, ionized regions, and balloon and investigations); (3)…

  8. Monopolar Detection Thresholds Predict Spatial Selectivity of Neural Excitation in Cochlear Implants: Implications for Speech Recognition

    PubMed Central

    2016-01-01

    The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating the spatial selectivity of neural excitation with cochlear implants and (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically decreased as the neural response to the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation) and spike-rate adaptation. Detection threshold for the low-rate stimulus correlated most strongly with the spread of the forward-masking patterns, and the correlation was reduced for long and high-rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated the five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with the high-threshold sites deactivated was better than performance with the subjects’ everyday clinical map with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than with the clinical map, but the difference was not significant.
These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658

  9. Restrictive transfusion threshold is safe in high-risk patients undergoing brain tumor surgery.

    PubMed

    Alkhalid, Yasmine; Lagman, Carlito; Sheppard, John P; Nguyen, Thien; Prashant, Giyarpuram N; Ziman, Alyssa F; Yang, Isaac

    2017-12-01

    To assess the safety of a restrictive threshold for the transfusion of red blood cells (RBCs) compared to a liberal threshold in high-risk patients undergoing brain tumor surgery. We reviewed patients who were 50 years of age or older with a preoperative American Society of Anesthesiologists physical status class II to V who underwent open craniotomy for tumor resection and were transfused packed RBCs during or after surgery. We retrospectively assigned patients to a restrictive-threshold group (a pretransfusion hemoglobin level <8 g/dL) or a liberal-threshold group (a pretransfusion hemoglobin level of 8-10 g/dL). The primary outcome was in-hospital mortality rate. Secondary outcomes were in-hospital complication rates, length of stay, and discharge disposition. Twenty-five patients were included in the study, of whom 17 were assigned to the restrictive-threshold group and 8 to the liberal-threshold group. The in-hospital mortality rates were 12% for the restrictive-threshold group (odds ratio [OR] 0.93, 95% confidence interval [CI] 0.07-12.11) and 13% for the liberal-threshold group. The in-hospital complication rates were 52.9% for the restrictive-threshold group (OR 1.13, 95% CI 0.21-6.05) and 50% for the liberal-threshold group. The average numbers of days in the intensive care unit and hospital were 8.6 and 22.4 in the restrictive-threshold group and 6 and 15 in the liberal-threshold group, respectively (P=0.69 and P=0.20). The rates of non-routine discharge were 71% in the restrictive-threshold group (OR 2.40, 95% CI 0.42-13.60) and 50% in the liberal-threshold group. A restrictive transfusion threshold did not significantly influence in-hospital mortality or complication rates, length of stay, or discharge disposition in patients at high operative risk. Copyright © 2017. Published by Elsevier B.V.
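
    The reported odds ratios follow from a standard Wald calculation on a 2x2 table. The counts below are inferred from the abstract (17 patients at 12% mortality implies 2 deaths; 8 patients at 13% implies 1) and are illustrative rather than taken from the paper's tables:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = events/non-events in group 1; c, d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Inferred counts: 2 deaths / 15 survivors (restrictive) vs 1 / 7 (liberal).
or_, lo, hi = odds_ratio_ci(2, 15, 1, 7)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # OR 0.93 (95% CI 0.07 to 12.11)
```

    The very wide interval reproduces the abstract's mortality OR of 0.93 (95% CI 0.07 to 12.11) and makes plain why a 25-patient series cannot establish more than the absence of a large effect.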

  10. Genome wide association studies on yield components using a lentil genetic diversity panel

    USDA-ARS?s Scientific Manuscript database

    The cool season food legume research community is now at the threshold of deploying the cutting-edge molecular genetics and genomics tools that have led to significant and rapid expansion of gene discovery, knowledge of gene function (including tolerance to biotic and abiotic stresses) and genetic ...

  11. Chemical characterization of the acid alteration of diesel fuel: Non-targeted analysis by two-dimensional gas chromatography coupled with time-of-flight mass spectrometry with tile-based Fisher ratio and combinatorial threshold determination.

    PubMed

    Parsons, Brendon A; Pinkerton, David K; Wright, Bob W; Synovec, Robert E

    2016-04-01

    The illicit chemical alteration of petroleum fuels is of keen interest, particularly to regulatory agencies that set fuel specifications, or taxes/credits based on those specifications. One type of alteration is the reaction of diesel fuel with concentrated sulfuric acid. Such reactions are known to subtly alter the chemical composition of the fuel, particularly the aromatic species native to the fuel. Comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS) is well suited for the analysis of diesel fuel, but may provide the analyst with an overwhelming amount of data, particularly in sample-class comparison experiments comprising many samples. Tile-based Fisher-ratio (F-ratio) analysis reduces the abundance of data in a GC×GC-TOFMS experiment to only the peaks that significantly distinguish the unaltered and acid-altered sample classes. Three samples of diesel fuel from differently branded filling stations were each altered to discover chemical features, i.e., analyte peaks, that were consistently changed by the acid reaction. Using different fuels prioritizes the discovery of features likely to be robust to the variation present between fuel samples and may consequently be useful in determining whether an unknown sample has been acid altered. The subsequent analysis confirmed that aromatic species are removed by the acid alteration, with the degree of removal consistent with predicted reactivity toward electrophilic aromatic sulfonation. Additionally, we observed that alkenes and alkynes were also removed from the fuel, and that sulfur dioxide or compounds that degrade to sulfur dioxide are generated by the acid alteration. In addition to applying the previously reported tile-based F-ratio method, this report also expands null distribution analysis to algorithmically determine an F-ratio threshold to confidently select only the features that are sufficiently class-distinguishing. 
When applied to the acid alteration of diesel fuel, the suggested per-hit F-ratio threshold was 12.4, which is predicted to maintain the false discovery rate (FDR) below 0.1%. Using this F-ratio threshold, 107 of the 3362 preliminary hits were deemed significantly changing due to the acid alteration, with the number of false positives estimated to be about 3. Validation of the F-ratio analysis was performed using an additional three fuels. Copyright © 2016 Elsevier B.V. All rights reserved.
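The core idea of F-ratio screening against a permutation null distribution can be sketched as follows. This is a simplified per-feature version (not the tile-based method of the paper), on synthetic data, and it uses a laxer null quantile than the paper's 0.1% FDR target; all sample sizes and shifts are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_ratio(x_a, x_b):
    """Between-class over within-class variation for one feature."""
    grand = np.concatenate([x_a, x_b]).mean()
    between = (len(x_a) * (x_a.mean() - grand) ** 2
               + len(x_b) * (x_b.mean() - grand) ** 2)
    within = ((len(x_a) - 1) * x_a.var(ddof=1)
              + (len(x_b) - 1) * x_b.var(ddof=1))
    return between / within

# Simulated peak table: 6 acid-altered vs 6 unaltered runs, 200 features,
# with only the first 5 features genuinely changed by the alteration.
unaltered = rng.normal(0.0, 1.0, size=(6, 200))
altered = rng.normal(0.0, 1.0, size=(6, 200))
altered[:, :5] += 4.0

ratios = np.array([fisher_ratio(altered[:, j], unaltered[:, j])
                   for j in range(200)])

# Null distribution by permuting class labels, as a stand-in for the
# paper's null-distribution analysis.
pooled = np.vstack([altered, unaltered])
null = []
for _ in range(200):
    perm = rng.permutation(12)
    a, b = pooled[perm[:6]], pooled[perm[6:]]
    null.extend(fisher_ratio(a[:, j], b[:, j]) for j in range(200))

# Threshold the F-ratios at a high quantile of the null distribution.
threshold = float(np.quantile(null, 0.99))
hits = np.flatnonzero(ratios > threshold)
print("threshold:", round(threshold, 2), "hits:", hits)
```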

  12. Effects of cochlear-implant pulse rate and inter-channel timing on channel interactions and thresholds

    NASA Astrophysics Data System (ADS)

    Middlebrooks, John C.

    2004-07-01

    Interactions among the multiple channels of a cochlear prosthesis limit the number of channels of information that can be transmitted to the brain. This study explored the influence on channel interactions of electrical pulse rates and temporal offsets between channels. Anesthetized guinea pigs were implanted with 2-channel scala-tympani electrode arrays, and spike activity was recorded from the auditory cortex. Channel interactions were quantified as the reduction of the threshold for pulse-train stimulation of the apical channel by sub-threshold stimulation of the basal channel. Pulse rates were 254 or 4069 pulses per second (pps) per channel. Maximum threshold reductions averaged 9.6 dB when channels were stimulated simultaneously. Among nonsimultaneous conditions, threshold reductions at the 254-pps rate were entirely eliminated by a 1966-μs inter-channel offset. When offsets were only 41 to 123 μs, however, maximum threshold shifts averaged 3.1 dB, which was comparable to the dynamic ranges of cortical neurons in this experimental preparation. Threshold reductions at 4069 pps averaged up to 1.3 dB greater than at 254 pps, which raises some concern in regard to high-pulse-rate speech processors. Thresholds for various paired-pulse stimuli, pulse rates, and pulse-train durations were measured to test possible mechanisms of temporal integration.

  13. Search strategy has influenced the discovery rate of human viruses.

    PubMed

    Rosenberg, Ronald; Johansson, Michael A; Powers, Ann M; Miller, Barry R

    2013-08-20

    A widely held concern is that the pace of infectious disease emergence has been increasing. We have analyzed the rate of discovery of pathogenic viruses, the preeminent source of newly discovered causes of human disease, from 1897 through 2010. The rate was highest during 1950-1969, after which it moderated. This general picture masks two distinct trends: for arthropod-borne viruses, which comprised 39% of pathogenic viruses, the discovery rate peaked at three per year during 1960-1969, but subsequently fell nearly to zero by 1980; however, the rate of discovery of nonarboviruses remained stable at about two per year from 1950 through 2010. The period of highest arbovirus discovery coincided with a comprehensive program supported by The Rockefeller Foundation of isolating viruses from humans, animals, and arthropod vectors at field stations in Latin America, Africa, and India. The productivity of this strategy illustrates the importance of location, approach, long-term commitment, and sponsorship in the discovery of emerging pathogens.

  14. Assessment of Metabolome Annotation Quality: A Method for Evaluating the False Discovery Rate of Elemental Composition Searches

    PubMed Central

    Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki

    2009-01-01

    Background In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation, used to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that FDR could be improved by using a smaller but more complete compound database. Conclusions/Significance High accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
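The spirit of the FDR estimate, the chance-hit rate against the database relative to the observed hit rate, can be sketched with a small decoy-query Monte Carlo. The database, tolerance, and observed hit rate below are all invented for illustration, not the paper's values:

```python
import bisect
import random

random.seed(1)

# Hypothetical compound database of monoisotopic masses (Da).
database = sorted(random.uniform(100, 1000) for _ in range(5000))

def hits_database(mass, tol_ppm=5.0):
    """True if any database mass falls inside the ppm tolerance window."""
    tol = mass * tol_ppm * 1e-6
    i = bisect.bisect_left(database, mass - tol)
    return i < len(database) and database[i] <= mass + tol

# Chance-hit rate estimated by Monte Carlo with decoy (random) query masses.
decoys = [random.uniform(100, 1000) for _ in range(20000)]
chance_hit_rate = sum(hits_database(m) for m in decoys) / len(decoys)

# If real peak queries hit at rate q, the FDR is roughly chance_hit_rate / q.
q = 0.60  # assumed observed hit rate for measured peaks (illustrative)
print(f"chance hit rate {chance_hit_rate:.3f}; "
      f"estimated FDR {chance_hit_rate / q:.1%}")
```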

  15. AREA RADIATION MONITOR

    DOEpatents

    Manning, F.W.; Groothuis, S.E.; Lykins, J.H.; Papke, D.M.

    1962-06-12

    An improved area radiation dose monitor is designed which is adapted to compensate continuously for background radiation below a threshold dose rate and to give warning when the dose integral of the dose rate of an above-threshold radiation excursion exceeds a selected value. This is accomplished by providing means for continuously charging an ionization chamber. The chamber provides a first current proportional to the incident radiation dose rate. Means are provided for generating a second current including means for nulling out the first current with the second current at all values of the first current corresponding to dose rates below a selected threshold dose rate value. The second current has a maximum value corresponding to that of the first current at the threshold dose rate. The excess of the first current over the second current, which occurs above the threshold, is integrated and an alarm is given at a selected integrated value of the excess corresponding to a selected radiation dose. (AEC)
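The monitor's logic, null out dose rate up to the threshold, integrate only the excess, alarm when the integral exceeds a set value, can be expressed in software terms. The threshold and alarm values here are illustrative, not from the patent:

```python
# Software analogue of the patent's excess-current integrator:
# only dose rate above the threshold accumulates toward the alarm.

def monitor(dose_rates, dt, threshold=1.0, alarm_dose=5.0):
    """dose_rates: samples (e.g., mrem/h); dt: sample interval (h)."""
    integral = 0.0
    for r in dose_rates:
        integral += max(0.0, r - threshold) * dt  # excess over threshold
        if integral >= alarm_dose:
            return True  # alarm
    return False

# Background at 0.5 stays nulled; a sustained 6.0 excursion trips the
# alarm after 5.0 / (6.0 - 1.0) = 1 hour.
assert not monitor([0.5] * 100, dt=0.1)
assert monitor([6.0] * 11, dt=0.1)
```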

  16. Proposing an Empirically Justified Reference Threshold for Blood Culture Sampling Rates in Intensive Care Units

    PubMed Central

    Castell, Stefanie; Schwab, Frank; Geffers, Christine; Bongartz, Hannah; Brunkhorst, Frank M.; Gastmeier, Petra; Mikolajczyk, Rafael T.

    2014-01-01

    Early and appropriate blood culture sampling is recommended as a standard of care for patients with suspected bloodstream infections (BSI) but is rarely taken into account when quality indicators for BSI are evaluated. To date, sampling of about 100 to 200 blood culture sets per 1,000 patient-days is recommended as the target range for blood culture rates. However, the empirical basis of this recommendation is not clear. The aim of the current study was to analyze the association between blood culture rates and observed BSI rates and to derive a reference threshold for blood culture rates in intensive care units (ICUs). This study is based on data from 223 ICUs taking part in the German hospital infection surveillance system. We applied locally weighted regression and segmented Poisson regression to assess the association between blood culture rates and BSI rates. Below 80 to 90 blood culture sets per 1,000 patient-days, observed BSI rates increased with increasing blood culture rates, while there was no further increase above this threshold. Segmented Poisson regression located the threshold at 87 (95% confidence interval, 54 to 120) blood culture sets per 1,000 patient-days. Only one-third of the investigated ICUs displayed blood culture rates above this threshold. We provided empirical justification for a blood culture target threshold in ICUs. In the majority of the studied ICUs, blood culture sampling rates were below this threshold. This suggests that a substantial fraction of BSI cases might remain undetected; reporting observed BSI rates as a quality indicator without sufficiently high blood culture rates might be misleading. PMID:25520442
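The breakpoint search behind a segmented (hinge) regression can be sketched as follows. This uses a least-squares fit as a stand-in for the segmented Poisson regression of the paper, on fully synthetic data; the 87 sets/1,000 patient-days figure is reused only to seed the simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated ICUs: the expected BSI count rises with the blood-culture rate
# up to a breakpoint (87 here), then plateaus. Entirely synthetic.
bc_rate = rng.uniform(20, 160, size=223)
true_mean = np.minimum(bc_rate, 87.0) * 0.5
bsi = rng.poisson(true_mean)

def sse_for_breakpoint(bp):
    """Least-squares fit of a rise-then-plateau mean via a hinge covariate;
    a simple stand-in for the segmented Poisson fit."""
    x = np.minimum(bc_rate, bp)
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, bsi, rcond=None)
    resid = bsi - A @ coef
    return float(resid @ resid)

# Grid search over candidate breakpoints; the minimum-SSE candidate wins.
grid = np.arange(40, 140)
best_bp = grid[np.argmin([sse_for_breakpoint(bp) for bp in grid])]
print("estimated breakpoint:", best_bp)
```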

  17. A common fluence threshold for first positive and second positive phototropism in Arabidopsis thaliana

    NASA Technical Reports Server (NTRS)

    Janoudi, A.; Poff, K. L.

    1990-01-01

    The relationship between the amount of light and the amount of response for any photobiological process can be based on the number of incident quanta per unit time (fluence rate-response) or on the number of incident quanta during a given period of irradiation (fluence-response). Fluence-response and fluence rate-response relationships have been measured for second positive phototropism by seedlings of Arabidopsis thaliana. The fluence-response relationships exhibit a single limiting threshold at about 0.01 micromole per square meter when measured at fluence rates from 2.4 x 10(-5) to 6.5 x 10(-3) micromoles per square meter per second. The threshold values in the fluence rate-response curves decrease with increasing time of irradiation, but show a common fluence threshold at about 0.01 micromole per square meter. These thresholds are the same as the threshold of about 0.01 micromole per square meter measured for first positive phototropism. Based on these data, it is suggested that second positive curvature has a threshold in time of about 10 minutes. Moreover, if the times of irradiation exceed the time threshold, there is a single limiting fluence threshold at about 0.01 micromole per square meter. Thus, the limiting fluence threshold for second positive phototropism is the same as the fluence threshold for first positive phototropism. Based on these data, we suggest that this common fluence threshold for first positive and second positive phototropism is set by a single photoreceptor pigment system.

  18. Speed discrimination predicts word but not pseudo-word reading rate in adults and children

    PubMed Central

    Main, Keith L.; Pestilli, Franco; Mezer, Aviv; Yeatman, Jason; Martin, Ryan; Phipps, Stephanie; Wandell, Brian

    2014-01-01

    Word familiarity may affect magnocellular processes of word recognition. To explore this idea, we measured reading rate, speed-discrimination, and contrast detection thresholds in adults and children with a wide range of reading abilities. We found that speed-discrimination thresholds are higher in children than in adults and are correlated with age. Speed discrimination thresholds are also correlated with reading rate, but only for words, not for pseudo-words. Conversely, we found no correlation between contrast sensitivity and reading rate and no correlation between speed discrimination thresholds and WASI subtest scores. These findings support the position that reading rate is influenced by magnocellular circuitry attuned to the recognition of familiar word-forms. PMID:25278418

  19. Comparison of algorithms of testing for use in automated evaluation of sensation.

    PubMed

    Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M

    1990-10-01

    Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing among algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second, and interspersion of null stimuli, Békésy with null stimuli, provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.

  20. Ionizing radiation sensitivity of the ocular lens and its dose rate dependence.

    PubMed

    Hamada, Nobuyuki

    2017-10-01

    In 2011, the International Commission on Radiological Protection reduced the threshold for the lens effects of low linear energy transfer (LET) radiation. On one hand, the revised threshold of 0.5 Gy is much lower than previously recommended thresholds, but mechanisms behind high radiosensitivity remain incompletely understood. On the other hand, such a threshold is independent of dose rate, in contrast to previously recommended separate thresholds each for single and fractionated/protracted exposures. Such a change was made predicated on epidemiological evidence suggesting that a threshold for fractionated/protracted exposures is not higher than an acute threshold, and that a chronic threshold is uncertain. Thus, the dose rate dependence is still unclear. This paper therefore reviews the current knowledge on the radiosensitivity of the lens and the dose rate dependence of radiation cataractogenesis, and discusses its mechanisms. Mounting biological evidence indicates that the lens cells are not necessarily radiosensitive to cell killing, and the high radiosensitivity of the lens thus appears to be attributable to other mechanisms (e.g., excessive proliferation, abnormal differentiation, a slow repair of DNA double-strand breaks, telomere, senescence, crystallin changes, non-targeted effects and inflammation). Both biological and epidemiological evidence generally supports the lack of dose rate effects. However, there is also biological evidence for the tissue sparing dose rate (or fractionation) effect of low-LET radiation and an enhancing inverse dose fractionation effect of high-LET radiation at a limited range of LET. Emerging epidemiological evidence in chronically exposed individuals implies the inverse dose rate effect. Further biological and epidemiological studies are warranted to gain deeper knowledge on the radiosensitivity of the lens and dose rate dependence of radiation cataractogenesis.

  1. Lobectomy is a more Cost-Effective Option than Total Thyroidectomy for 1 to 4 cm Papillary Thyroid Carcinoma that do not Possess Clinically Recognizable High-Risk Features.

    PubMed

    Lang, Brian Hung-Hin; Wong, Carlos K H

    2016-10-01

    Although lobectomy is a viable alternative to total thyroidectomy (TT) in low-risk 1 to 4 cm papillary thyroid carcinoma (PTC), lobectomy is associated with higher locoregional recurrence risk and need for completion TT upon discovery of a previously unrecognized histologic high-risk feature (HRF). The present study evaluated long-term cost-effectiveness between lobectomy and TT. Our base case was a hypothetical female cohort aged 40 years with a low-risk 2.5 cm PTC. A Markov decision tree model was constructed to compare cost-effectiveness between lobectomy and TT after 25 years. Patients with an unrecognized HRF (including aggressive histology, microscopic extrathyroidal extension, lymphovascular invasion, positive resection margin, nodal metastasis >5 mm, and multifocality) underwent completion TT after lobectomy. Outcome probabilities, utilities, and costs were estimated from the literature. The threshold for cost-effectiveness was set at US$50,000/quality-adjusted life-year (QALY). Sensitivity and threshold analyses were used to examine model uncertainty. After 25 years, each patient who underwent lobectomy instead of TT cost an extra US$772.08 but gained an additional 0.300 QALY. The incremental cost-effectiveness ratio was US$2577.65/QALY. In the sensitivity analysis, the lobectomy arm began to become cost-effective only after 3 years. Despite varying the reported prevalence of clinically unrecognized HRFs, complication from surgical procedures, annualized recurrence rates, unit cost of surgical procedure or complication, and utility score, lobectomy remained more cost-effective than TT. Despite the higher locoregional recurrence risk and having almost half of the patients undergoing completion TT after lobectomy upon discovery of a previously unrecognized HRF, initial lobectomy was a more cost-effective long-term option than initial TT for 1 to 4 cm PTCs without clinically recognized HRFs.
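The headline comparison reduces to an incremental cost-effectiveness ratio (ICER): extra cost divided by extra QALYs, judged against the willingness-to-pay threshold. Dividing the rounded figures quoted above gives about US$2,574/QALY; the paper's US$2,577.65 presumably reflects unrounded model outputs:

```python
# ICER from the reported incremental cost and QALY gain.

def icer(delta_cost, delta_qaly):
    return delta_cost / delta_qaly

ratio = icer(772.08, 0.300)   # lobectomy vs total thyroidectomy
threshold = 50_000            # US$/QALY willingness-to-pay
print(f"ICER = ${ratio:,.2f}/QALY; cost-effective: {ratio < threshold}")
```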

  2. Building one molecule from a reservoir of two atoms.

    PubMed

    Liu, L R; Hood, J D; Yu, Y; Zhang, J T; Hutzler, N R; Rosenband, T; Ni, K-K

    2018-05-25

    Chemical reactions typically proceed via stochastic encounters between reactants. Going beyond this paradigm, we combined exactly two atoms in a single, controlled reaction. The experimental apparatus traps two individual laser-cooled atoms [one sodium (Na) and one cesium (Cs)] in separate optical tweezers and then merges them into one optical dipole trap. Subsequently, photoassociation forms an excited-state NaCs molecule. The discovery of previously unseen resonances near the molecular dissociation threshold and measurement of collision rates are enabled by the tightly trapped ultracold sample of atoms. As laser-cooling and trapping capabilities are extended to more elements, the technique will enable the study of more diverse, and eventually more complex, molecules in an isolated environment, as well as synthesis of designer molecules for qubits. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  3. Plasma Polypyrrole Coated Hybrid Composites with Improved Mechanical and Electrical Properties for Aerospace Applications

    NASA Astrophysics Data System (ADS)

    Yavuz, Hande; Bai, Jinbo

    2018-06-01

    This paper deals with the dielectric barrier discharge assisted continuous plasma polypyrrole deposition on CNT-grafted carbon fibers for conductive composite applications. The simultaneous effects of three controllable factors have been studied on the electrical resistivity (ER) of these two material systems based on multivariate experimental design methodology. A posterior probability referring to the Benjamini-Hochberg (BH) false discovery rate was explored as a multiple-testing correction of the t-test p values. A BH significance threshold of 0.05 produced statistically significant coefficients to describe the ER of the two material systems. A group of plasma-modified samples was chosen to be used for composite manufacturing to drive an assessment of interlaminar shear properties under static loading. Transverse and longitudinal electrical resistivity (DC, ω = 0) of composite samples were studied to compare both the effects of CNT grafting and plasma modification on the ER of the resultant composites.

  5. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    PubMed

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  7. The Catalina Sky Survey for Near-Earth Objects

    NASA Astrophysics Data System (ADS)

    Christensen, E.

    The Catalina Sky Survey (CSS) specializes in the detection of the closest transients in our transient universe: near-Earth objects (NEOs). CSS has been the leading NEO survey program since 2005, with a discovery rate of 500-600 NEOs per year. This rate is set to substantially increase starting in 2014 with the deployment of wider FOV cameras at both survey telescopes, while a proposed 3-telescope system in Chile would provide a new and significant capability in the Southern Hemisphere beginning as early as 2015. Elements contributing to the success of CSS may be applied to other surveys, and include 1) Real-time processing, identification, and reporting of interesting transients; 2) Human-assisted validation to ensure a clean transient stream that is efficient to the limits of the system (˜ 1σ); 3) an integrated follow-up capability to ensure threshold or high-priority transients are properly confirmed and followed up. Additionally, the open-source nature of the CSS data enables considerable secondary science (i.e. CRTS), and CSS continues to pursue collaborations to maximize the utility of the data.

  8. Inferring Planet Occurrence Rates With a Q1-Q16 Kepler Planet Candidate Catalog Produced by a Machine Learning Classifier

    NASA Astrophysics Data System (ADS)

    Catanzarite, Joseph; Jenkins, Jon Michael; Burke, Christopher J.; McCauliff, Sean D.; Kepler Science Operations Center

    2015-01-01

    NASA's Kepler Space Telescope monitored the photometric variations of over 170,000 stars within a ~100 square degree field in the constellation Cygnus, at half-hour cadence, over its four year prime mission. The Kepler SOC (Science Operations Center) pipeline calibrates the pixels of the target apertures for each star, corrects light curves for systematic error, and detects TCEs (threshold-crossing events) that may be due to transiting planets. Finally the pipeline estimates planet parameters for all TCEs and computes quantitative diagnostics that are used by the TCERT (Threshold Crossing Event Review Team) to produce a catalog containing KOIs (Kepler Objects of Interest). KOIs are TCEs that are determined to be either likely transiting planets or astrophysical false positives such as background eclipsing binary stars. Using examples from the Q1-Q16 TCERT KOI catalog as a training set, we created a machine-learning classifier that dispositions the TCEs into categories of PC (planet candidate), AFP (astrophysical false positive) and NTP (non-transiting phenomenon). The classifier uniformly and consistently applies heuristics developed by TCERT as well as other diagnostics to the Q1-Q16 TCEs to produce a more robust and reliable catalog of planet candidates than is possible with only human classification. In this work, we estimate planet occurrence rates, based on the machine-learning-produced catalog of Kepler planet candidates. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
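The dispositioning step, mapping per-TCE diagnostics to PC/AFP/NTP labels, can be illustrated with a toy classifier. A nearest-centroid rule stands in for the actual machine-learning classifier, and the two diagnostic features and all values here are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the TCE dispositioner: three classes described by two
# made-up diagnostics. Not the pipeline's real features or model.
labels = ["PC", "AFP", "NTP"]
centers = np.array([[1.0, 0.1],   # PC
                    [1.0, 0.9],   # AFP
                    [0.1, 0.5]])  # NTP

# Synthetic "TCERT-labeled" training set around each class center.
train = np.vstack([c + rng.normal(0, 0.08, size=(50, 2)) for c in centers])
train_y = np.repeat([0, 1, 2], 50)

centroids = np.array([train[train_y == k].mean(axis=0) for k in range(3)])

def disposition(tce):
    """Assign the label of the nearest class centroid."""
    d = np.linalg.norm(centroids - tce, axis=1)
    return labels[int(np.argmin(d))]

print(disposition([0.98, 0.12]))  # lands near the PC centroid
```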

  9. 76 FR 9517 - Uniform National Threshold Entered Employment Rate for Veterans

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-18

    ...The Veterans' Employment and Training Service (VETS) of the Department of Labor (the Department) is proposing a rule to implement a uniform national threshold entered employment rate for veterans applicable to State employment service delivery systems. The Department undertakes this rulemaking in accordance with the Jobs for Veterans Act, which requires the Department to implement that threshold rate by regulation.

  10. Identification of differentially expressed genes and false discovery rate in microarray studies.

    PubMed

    Gusnanto, Arief; Calza, Stefano; Pawitan, Yudi

    2007-04-01

    To highlight the development in microarray data analysis for the identification of differentially expressed genes, particularly via control of false discovery rate. The emergence of high-throughput technology such as microarrays raises two fundamental statistical issues: multiplicity and sensitivity. We focus on the biological problem of identifying differentially expressed genes. First, multiplicity arises due to testing tens of thousands of hypotheses, rendering the standard P value meaningless. Second, known optimal single-test procedures such as the t-test perform poorly in the context of highly multiple tests. The standard approach of dealing with multiplicity is too conservative in the microarray context. The false discovery rate concept is fast becoming the key statistical assessment tool replacing the P value. We review the false discovery rate approach and argue that it is more sensible for microarray data. We also discuss some methods to take into account additional information from the microarrays to improve the false discovery rate. There is growing consensus on how to analyse microarray data using the false discovery rate framework in place of the classical P value. Further research is needed on the preprocessing of the raw data, such as the normalization step and filtering, and on finding the most sensitive test procedure.
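The FDR framework discussed above is most often operationalized with the Benjamini-Hochberg step-up procedure: sort the p-values, find the largest rank i with p_(i) <= (i/m)·q, and reject hypotheses 1..i. A minimal sketch:

```python
# Benjamini-Hochberg step-up procedure for FDR control at level q.

def benjamini_hochberg(pvalues, q=0.05):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k = rank  # largest rank passing the step-up criterion
    return sorted(order[:k])  # indices of rejected (significant) hypotheses

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(benjamini_hochberg(pvals, q=0.05))  # → [0, 1]
```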

  11. Shrinkage estimation of effect sizes as an alternative to hypothesis testing followed by estimation in high-dimensional biology: applications to differential gene expression.

    PubMed

    Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R

    2010-01-01

    Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. 
Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
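    The contrast drawn above between hard-threshold and shrinkage estimators can be sketched numerically. The settings below are hypothetical toys (1,000 genes, roughly 5% non-null, unit noise, an arbitrary linear shrinkage factor), not the paper's simulation design or its local-FDR estimator:

```python
import random
import statistics

random.seed(0)
# Toy setup: 1000 "genes", ~5% truly differentially expressed, unit noise.
n, sigma = 1000, 1.0
truth = [random.gauss(0, 2) if random.random() < 0.05 else 0.0 for _ in range(n)]
obs = [t + random.gauss(0, sigma) for t in truth]  # the MLE is the raw observation

def mse(est):
    """Mean-squared error of an estimate vector against the truth."""
    return statistics.fmean((e - t) ** 2 for e, t in zip(est, truth))

# Hard-threshold estimator: keep the estimate only if it clears 2*sigma.
hard = [x if abs(x) > 2 * sigma else 0.0 for x in obs]
# Linear shrinkage toward zero; the factor 0.2 is arbitrary, for illustration.
shrunk = [0.2 * x for x in obs]

print(f"MSE  mle={mse(obs):.3f}  hard={mse(hard):.3f}  shrunk={mse(shrunk):.3f}")
```

    Under these settings the raw MLE pays the full noise variance on every gene, while shrinking toward zero exploits the prior information that most genes are null; the relative ranking of the hard-threshold estimator depends on the signal fraction and strength, which is the paper's point.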

  12. Application of a Threshold Method to the TRMM Radar for the Estimation of Space-Time Rain Rate Statistics

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Jones, Jeffrey A.

    1997-01-01

    One of the TRMM radar products of interest is the monthly-averaged rain rates over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation, so that the negative bias of the high-resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. To estimate a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.

  13. Palomar Planet-Crossing Asteroid Survey (PCAS): Recent discovery rate

    NASA Technical Reports Server (NTRS)

    Helin, Eleanor F.

    1992-01-01

    The discovery rate of Near-Earth Asteroids (NEAs) has increased significantly in the last decade. As greater numbers of NEAs are discovered, worldwide interest has grown, leading to new programs. With the introduction of CCD telescopes throughout the world, an increase of 1-2 orders of magnitude in the discovery rate can be anticipated. Nevertheless, it will take several decades of dedicated searching to reach 95 percent completeness, even for large objects.

  14. Composite Material Switches

    NASA Technical Reports Server (NTRS)

    Javadi, Hamid (Inventor)

    2001-01-01

    A device to protect electronic circuitry from high voltage transients is constructed from a relatively thin piece of conductive composite sandwiched between two conductors so that conduction is through the thickness of the composite piece. The device is based on the discovery that conduction through conductive composite materials in this configuration switches to a high resistance mode when exposed to voltages above a threshold voltage.

  15. Composite Material Switches

    NASA Technical Reports Server (NTRS)

    Javadi, Hamid (Inventor)

    2002-01-01

    A device to protect electronic circuitry from high voltage transients is constructed from a relatively thin piece of conductive composite sandwiched between two conductors so that conduction is through the thickness of the composite piece. The device is based on the discovery that conduction through conductive composite materials in this configuration switches to a high resistance mode when exposed to voltages above a threshold voltage.

  16. A statistical physics approach to scale-free networks and their behaviors

    NASA Astrophysics Data System (ADS)

    Wu, Fang

    This thesis studies five problems of network properties from a unified local-to-global viewpoint of statistical physics: (1) We propose an algorithm that allows the discovery of communities within graphs of arbitrary size, based on Kirchhoff's theory of electric networks. Its time complexity scales linearly with the network size. We additionally show how this algorithm allows for the swift discovery of the community surrounding a given node without having to extract all the communities out of a graph. (2) We present a dynamical theory of opinion formation that explicitly takes into account the structure of the social network in which individuals are embedded. We show that the weighted fraction of the population that holds a certain opinion is a martingale. We show that the importance of a given node is proportional to its degree. We verify our predictions by simulations. (3) We show that, when the information transmissibility decays with distance, the epidemic spread on a scale-free network has a finite threshold. We test our predictions by measuring the spread of messages in an organization and by numerical experiments. (4) Suppose users can switch between two behaviors when entering a queueing system: one that never restarts an initial request and one that restarts infinitely often. We show the existence of two thresholds. When the system load is below the lower threshold, it is always better to be impatient. When the load is above the upper threshold, it is always better to be patient. Between the two thresholds there exists a homogeneous Nash equilibrium with non-trivial properties. We obtain exact solutions for the two thresholds. (5) We study the endogenous dynamics of reputations in a system consisting of firms with long horizons that provide services with varying levels of quality, and customers who assign to them reputations on the basis of the quality levels that they experience when interacting with them. 
We show that the dynamics can lead to either well-defined equilibria or persistent nonlinear oscillations in the number of customers visiting a firm, implying unstable reputations. We establish the stability criteria.

  17. A study of the threshold method utilizing raingage data

    NASA Technical Reports Server (NTRS)

    Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David

    1993-01-01

    The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
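    The closing point, that a suitable threshold makes the coefficient S(tau) = (area-average rain rate) / F(tau) insensitive to the scale parameter of a lognormal RRD, can be checked directly. The shape parameter and the two scale parameters below are assumed stand-ins for the two Darwin regimes, not fitted values:

```python
import math

def threshold_coeff(tau, mu, sigma):
    """S(tau) = (mean rain rate) / P(rate > tau) for a lognormal RRD."""
    mean = math.exp(mu + sigma ** 2 / 2)
    z = (mu - math.log(tau)) / sigma           # P(X > tau) = Phi(z)
    frac = 0.5 * math.erfc(-z / math.sqrt(2))
    return mean / frac

sigma = 1.2            # shared shape parameter (assumed value)
mu_a, mu_b = 0.2, 1.0  # two "regimes" differing only in scale
# Scan thresholds for the one whose coefficient differs least between regimes.
diff, tau_best = min(
    (abs(threshold_coeff(t, mu_a, sigma) - threshold_coeff(t, mu_b, sigma)), t)
    for t in (x / 10 for x in range(1, 300)))
print(f"least regime-sensitive threshold ~ {tau_best:.1f} mm/h")
```

    At very low thresholds S(tau) grows with the scale parameter (it approaches the mean), and at very high thresholds it shrinks with it, so an intermediate threshold exists where the coefficient is nearly regime-independent.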

  18. Experimental and environmental factors affect spurious detection of ecological thresholds

    USGS Publications Warehouse

    Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.

    2012-01-01

    Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.

  19. Effect of mental stress on cold pain in chronic tension-type headache sufferers.

    PubMed

    Cathcart, Stuart; Winefield, Anthony H; Lushington, Kurt; Rolan, Paul

    2009-10-01

    Mental stress is a noted contributing factor in chronic tension-type headache (CTH); however, the mechanisms underlying this are not clearly understood. One proposition is that stress aggravates already-increased pain sensitivity in CTH sufferers. This hypothesis could be partially tested by examining the effects of mental stress on threshold and supra-threshold experimental pain processing in CTH sufferers. Such studies have not been reported to date. The present study measured pain detection and tolerance thresholds and ratings of supra-threshold pain stimulation from a cold pressor test in CTH sufferers (CTH-S) and healthy control (CNT) subjects exposed to a 60-min stressful mental task, and in CTH sufferers exposed to a 60-min neutral condition (CTH-N). Headache sufferers had lower pain tolerance thresholds and increased pain intensity ratings compared to controls. Pain detection and tolerance thresholds decreased and pain intensity ratings increased during the stress task, with a greater reduction in pain detection threshold and increase in pain intensity ratings in the CTH-S group compared to the CNT group. The results support the hypothesis that mental stress contributes to CTH by aggravating already-increased pain sensitivity in CTH sufferers.

  20. Deactivating stimulation sites based on low-rate thresholds improves spectral ripple and speech reception thresholds in cochlear implant users.

    PubMed

    Zhou, Ning

    2017-03-01

    The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation was attributed to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. Results suggested that a smaller number of relatively independent channels provide a better outcome than using all channels that might interact.

  1. Quantifying Learning in Young Infants: Tracking Leg Actions During a Discovery-learning Task.

    PubMed

    Sargent, Barbara; Reimann, Hendrik; Kubo, Masayoshi; Fetters, Linda

    2015-06-01

    Task-specific actions emerge from spontaneous movement during infancy. It has been proposed that task-specific actions emerge through a discovery-learning process. Here a method is described in which 3- to 4-month-old infants learn a task by discovery and their leg movements are captured to quantify the learning process. This discovery-learning task uses an infant-activated mobile that rotates and plays music based on specified leg actions of infants. Supine infants activate the mobile by moving their feet vertically across a virtual threshold. This paradigm is unique in that as infants independently discover that their leg actions activate the mobile, the infants' leg movements are tracked using a motion capture system, allowing for the quantification of the learning process. Specifically, learning is quantified in terms of the duration of mobile activation, the position variance of the end effectors (feet) that activate the mobile, changes in hip-knee coordination patterns, and changes in hip and knee muscle torque. This information describes infant exploration and exploitation at the interplay of person and environmental constraints that support task-specific action. Subsequent research using this method can investigate how specific impairments of different populations of infants at risk for movement disorders influence the discovery-learning process for task-specific action.

  2. Thresholding functional connectomes by means of mixture modeling.

    PubMed

    Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F

    2018-05-01

    Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem, though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-false discovery rates derived from the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project. 
The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that with use of our method, we are able to extract similar information on the group level as can be achieved with permutation testing even though these two methods are not equivalent. We demonstrate that with both of these methods, we obtain functional decoupling between the two hemispheres in the higher order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in the visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
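    The core of the approach, fitting edge weights as a null component plus a sparse reliable component and keeping edges by posterior probability, can be sketched with a toy two-Gaussian EM fit. Both the edge weights and the mixture components below are synthetic assumptions; the authors' actual mixture family and pseudo-FDR step are not reproduced here:

```python
import math
import random

random.seed(2)
# Synthetic edge weights (illustrative only): many weak/"null" partial
# correlations near zero plus a sparse set of reliable connections.
edges = ([random.gauss(0.0, 0.08) for _ in range(900)] +
         [random.gauss(0.45, 0.08) for _ in range(100)])

def em_two_gaussians(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    mu, sd, w = [min(x), max(x)], [0.2, 0.2], [0.5, 0.5]
    resp = [0.0] * len(x)
    for _ in range(iters):
        # E-step: posterior responsibility of the "signal" component 1.
        for j, xj in enumerate(x):
            p = [w[k] / (sd[k] * math.sqrt(2 * math.pi)) *
                 math.exp(-0.5 * ((xj - mu[k]) / sd[k]) ** 2) for k in (0, 1)]
            resp[j] = p[1] / (p[0] + p[1])
        # M-step: re-estimate weights, means, and standard deviations.
        for k, rk in ((0, [1 - r for r in resp]), (1, resp)):
            s = sum(rk)
            w[k] = s / len(x)
            mu[k] = sum(r * xj for r, xj in zip(rk, x)) / s
            sd[k] = max(1e-3, math.sqrt(
                sum(r * (xj - mu[k]) ** 2 for r, xj in zip(rk, x)) / s))
    return mu, sd, w, resp

mu, sd, w, resp = em_two_gaussians(edges)
kept = [i for i, r in enumerate(resp) if r > 0.5]  # posterior-based threshold
print(f"kept {len(kept)} of {len(edges)} candidate edges")
```

    The point of such a model-based cut, as opposed to proportional thresholding, is that the retained edge count is determined by the fitted empirical null rather than by a fixed fraction chosen in advance.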

  3. Influence of the contractile properties of muscle on motor unit firing rates during a moderate-intensity contraction in vivo.

    PubMed

    Trevino, Michael A; Herda, Trent J; Fry, Andrew C; Gallagher, Philip M; Vardiman, John P; Mosier, Eric M; Miller, Jonathan D

    2016-08-01

    It is suggested that firing rate characteristics of motor units (MUs) are influenced by the physical properties of the muscle. However, no study has correlated MU firing rates at recruitment, targeted force, or derecruitment with the contractile properties of the muscle in vivo. Twelve participants (age = 20.67 ± 2.35 yr) performed a 40% isometric maximal voluntary contraction of the leg extensors that included linearly increasing, steady force, and decreasing segments. Muscle biopsies were collected with myosin heavy chain (MHC) content quantified, and surface electromyography (EMG) was recorded from the vastus lateralis. The EMG signal was decomposed into the firing events of single MUs. Slopes and y-intercepts were calculated for 1) firing rates at recruitment vs. recruitment threshold, 2) mean firing rates at steady force vs. recruitment threshold, and 3) firing rates at derecruitment vs. derecruitment threshold relationships for each subject. Correlations among type I %MHC isoform content and the slopes and y-intercepts from the three relationships were examined. Type I %MHC isoform content was correlated with MU firing rates at recruitment (y-intercepts: r = -0.577; slopes: r = 0.741) and targeted force (slopes: r = 0.853) vs. recruitment threshold and MU firing rates at derecruitment (y-intercept: r = -0.597; slopes: r = 0.701) vs. derecruitment threshold relationships. However, the majority of the individual MU firing rates vs. recruitment and derecruitment relationships were not significant (P > 0.05) and, thus, revealed no systematic pattern. In contrast, MU firing rates during the steady force demonstrated a systematic pattern with higher firing rates for the lower- than higher-threshold MUs and were correlated with the physical properties of MUs in vivo. Copyright © 2016 the American Physiological Society.

  4. Evaluating methods of correcting for multiple comparisons implemented in SPM12 in social neuroscience fMRI studies: an example from moral psychology.

    PubMed

    Han, Hyemin; Glenn, Andrea L

    2018-06-01

    In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections, which allow for more sensitivity, may be beneficial but also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting relevant regions or encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.

  5. Noise-induced cochlear synaptopathy: Past findings and future studies.

    PubMed

    Kobel, Megan; Le Prell, Colleen G; Liu, Jennifer; Hawks, John W; Bao, Jianxin

    2017-06-01

    For decades, we have presumed that the death of hair cells and spiral ganglion neurons is the main cause of hearing loss and difficulties understanding speech in noise, but new findings suggest synapse loss may be the key contributor. Specifically, recent preclinical studies suggest that the synapses between inner hair cells and spiral ganglion neurons with low spontaneous rates and high thresholds are the most vulnerable subcellular structures with respect to insults during aging and noise exposure. This cochlear synaptopathy can be "hidden" because the synaptic loss can occur without permanent hearing threshold shifts. This new discovery of synaptic loss opens doors to new research directions. Here, we review a number of recent studies and make suggestions in two critical future research directions. First, based on solid evidence of cochlear synaptopathy in animal models, it is time to apply molecular approaches to identify the underlying molecular mechanisms; improved understanding is necessary for developing rational, effective therapies against this cochlear synaptopathy. Second, in human studies, the data supporting cochlear synaptopathy are indirect, although rapid progress has been made. To fully identify changes in function that are directly related to this hidden synaptic damage, we argue that a battery of tests, including both electrophysiological and behavioral tests, should be combined for the diagnosis of "hidden hearing loss" in clinical studies. This new approach may provide a direct link between cochlear synaptopathy and perceptual difficulties. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Long-term trends in oil and gas discovery rates in lower 48 United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, T.J.

    1985-09-01

    The Gas Research Institute (GRI), in association with Energy and Environmental Analysis, Inc. (EEA), has developed a data base characterizing the discovered oil and gas fields in the lower 48 United States. The number of fields in this data base reported to have been discovered since 1947 substantially exceeds the count presented in the AAPG survey of new-field discoveries since 1947. The greatest relative difference between the field counts is for fields larger than 10 million bbl of oil equivalent (BOE) (AAPG Class C fields or larger). Two factors contribute to the difference in reported discoveries by field size. First, the AAPG survey does not capture all new-field discoveries, particularly in the offshore. Second, the AAPG survey does not update field sizes past 6 years after the field discovery date. Because of reserve appreciation to discovered fields, discovery-trend data based on field-size data should be used with caution, particularly when field-size estimates have not been updated for a substantial period of time. Based on the GRI/EEA data base, the major decline in the discovery rates of large, new oil and gas fields in the lower 48 United States appears to have ended by the early 1960s. Since then, discovery rates seem to have improved. Thus, the outlook for future discoveries of large fields may be much better than previously believed.

  7. Reduced Brain Gray Matter Concentration in Patients With Obstructive Sleep Apnea Syndrome

    PubMed Central

    Joo, Eun Yeon; Tae, Woo Suk; Lee, Min Joo; Kang, Jung Woo; Park, Hwan Seok; Lee, Jun Young; Suh, Minah; Hong, Seung Bong

    2010-01-01

    Study Objectives: To investigate differences in brain gray matter concentrations or volumes in patients with obstructive sleep apnea syndrome (OSA) and healthy volunteers. Designs: Optimized voxel-based morphometry, an automated processing technique for MRI, was used to characterize structural differences in gray matter in newly diagnosed male patients. Setting: University hospital. Patients and Participants: The study consisted of 36 male OSA and 31 non-apneic male healthy volunteers matched for age (mean age, 44.8 years). Interventions: Using the t-test, gray matter differences were identified. The statistical significance level was set to a false discovery rate P < 0.05 with an extent threshold of kE > 200 voxels. Measurements and Results: The mean apnea-hypopnea index (AHI) of patients was 52.5/h. On visual inspection of MRI, no structural abnormalities were observed. Compared to healthy volunteers, the gray matter concentrations of OSA patients were significantly decreased in the left gyrus rectus, bilateral superior frontal gyri, left precentral gyrus, bilateral frontomarginal gyri, bilateral anterior cingulate gyri, right insular gyrus, bilateral caudate nuclei, bilateral thalami, bilateral amygdalo-hippocampi, bilateral inferior temporal gyri, and bilateral quadrangular and biventer lobules in the cerebellum (false discovery rate P < 0.05). Gray matter volume was not different between OSA patients and healthy volunteers. Conclusions: The brain gray matter deficits may suggest that memory impairment, affective and cardiovascular disturbances, executive dysfunctions, and dysregulation of autonomic and respiratory control frequently found in OSA patients might be related to morphological differences in the brain gray matter areas. Citation: Joo EY; Tae WS; Lee MJ; Kang JW; Park HS; Lee JY; Suh M; Hong SB. Reduced brain gray matter concentration in patients with obstructive sleep apnea syndrome. SLEEP 2010;33(2):235-241. PMID:20175407

  8. ICan: An Optimized Ion-Current-Based Quantification Procedure with Enhanced Quantitative Accuracy and Sensitivity in Biomarker Discovery

    PubMed Central

    2015-01-01

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, over current popular pipelines. A spiked-in experiment was used to evaluate the ability of ICan to detect small changes. In this study, E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by the IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan is broadly applicable to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707

  9. Epigenetic Signatures of Cigarette Smoking.

    PubMed

    Joehanes, Roby; Just, Allan C; Marioni, Riccardo E; Pilling, Luke C; Reynolds, Lindsay M; Mandaviya, Pooja R; Guan, Weihua; Xu, Tao; Elks, Cathy E; Aslibekyan, Stella; Moreno-Macias, Hortensia; Smith, Jennifer A; Brody, Jennifer A; Dhingra, Radhika; Yousefi, Paul; Pankow, James S; Kunze, Sonja; Shah, Sonia H; McRae, Allan F; Lohman, Kurt; Sha, Jin; Absher, Devin M; Ferrucci, Luigi; Zhao, Wei; Demerath, Ellen W; Bressler, Jan; Grove, Megan L; Huan, Tianxiao; Liu, Chunyu; Mendelson, Michael M; Yao, Chen; Kiel, Douglas P; Peters, Annette; Wang-Sattler, Rui; Visscher, Peter M; Wray, Naomi R; Starr, John M; Ding, Jingzhong; Rodriguez, Carlos J; Wareham, Nicholas J; Irvin, Marguerite R; Zhi, Degui; Barrdahl, Myrto; Vineis, Paolo; Ambatipudi, Srikant; Uitterlinden, André G; Hofman, Albert; Schwartz, Joel; Colicino, Elena; Hou, Lifang; Vokonas, Pantel S; Hernandez, Dena G; Singleton, Andrew B; Bandinelli, Stefania; Turner, Stephen T; Ware, Erin B; Smith, Alicia K; Klengel, Torsten; Binder, Elisabeth B; Psaty, Bruce M; Taylor, Kent D; Gharib, Sina A; Swenson, Brenton R; Liang, Liming; DeMeo, Dawn L; O'Connor, George T; Herceg, Zdenko; Ressler, Kerry J; Conneely, Karen N; Sotoodehnia, Nona; Kardia, Sharon L R; Melzer, David; Baccarelli, Andrea A; van Meurs, Joyce B J; Romieu, Isabelle; Arnett, Donna K; Ong, Ken K; Liu, Yongmei; Waldenberger, Melanie; Deary, Ian J; Fornage, Myriam; Levy, Daniel; London, Stephanie J

    2016-10-01

    DNA methylation leaves a long-term signature of smoking exposure and is one potential mechanism by which tobacco exposure predisposes to adverse health outcomes, such as cancers, osteoporosis, and lung and cardiovascular disorders. To comprehensively determine the association between cigarette smoking and DNA methylation, we conducted a meta-analysis of genome-wide DNA methylation assessed using the Illumina BeadChip 450K array on 15 907 blood-derived DNA samples from participants in 16 cohorts (including 2433 current, 6518 former, and 6956 never smokers). Comparing current versus never smokers, 2623 cytosine-phosphate-guanine sites (CpGs), annotated to 1405 genes, were statistically significantly differentially methylated at a Bonferroni threshold of P<1×10⁻⁷ (18 760 CpGs at false discovery rate <0.05). Genes annotated to these CpGs were enriched for associations with several smoking-related traits in genome-wide studies including pulmonary function, cancers, inflammatory diseases, and heart disease. Comparing former versus never smokers, 185 of the CpGs that differed between current and never smokers remained significant at P<1×10⁻⁷ (2623 CpGs at false discovery rate <0.05), indicating a pattern of persistent altered methylation, with attenuation, after smoking cessation. Transcriptomic integration identified effects on gene expression at many differentially methylated CpGs. Cigarette smoking has a broad impact on genome-wide methylation that, at many loci, persists many years after smoking cessation. Many of the differentially methylated genes were novel genes with respect to biological effects of smoking and might represent therapeutic targets for prevention or treatment of tobacco-related diseases. Methylation at these sites could also serve as sensitive and stable biomarkers of lifetime exposure to tobacco smoke. © 2016 American Heart Association, Inc.

  10. The discovery rate of new comets in the age of large surveys. Trends, statistics, and an updated evaluation of the comet flux

    NASA Astrophysics Data System (ADS)

    Fernández, Julio A.

    We analyze a sample of 58 Oort cloud comets (OCCs) (original orbital energies x in the range 0 < x < 100, in units of 10⁻⁶ AU⁻¹), plus 45 long-period comets with negative orbital energies or poorly determined or undetermined x, discovered during the period 1999-2007. To analyze the degree of completeness of the sample, we use Everhart's (1967, Astron. J. 72, 716) concept of “excess magnitude” (in magnitudes × days), defined as the integrated magnitude excess that a given comet presents over the time above a threshold magnitude for detection. This quantity is a measure of the likelihood that the comet will finally be detected. We define two sub-samples of OCCs: 1) new comets (orbital energies 0 < x < 30), whose perihelia can shift from outside into the inner planetary region in a single revolution; and 2) inner cloud comets (orbital energies 30 ≤ x < 100), which come from the inner region of the Oort cloud and for which external perturbers (essentially galactic tidal forces and passing stars) are not strong enough to allow them to overshoot the Jupiter-Saturn barrier. From the observed comet flux, making allowance for missed discoveries, we find a flux of OCCs brighter than absolute total magnitude 9 of ≃0.65 ± 0.18 per year within Earth's orbit. Of this flux, about two-thirds corresponds to new comets and the rest to inner cloud comets. We find striking differences in the q-distributions of these two samples: while new comets appear to follow a uniform q-distribution, inner cloud comets show an increase in the rate of perihelion passages with q.

  11. A Common Fluence Threshold for First Positive and Second Positive Phototropism in Arabidopsis thaliana

    PubMed Central

    Janoudi, Abdul; Poff, Kenneth L.

    1990-01-01

    The relationship between the amount of light and the amount of response for any photobiological process can be based on the number of incident quanta per unit time (fluence rate-response) or on the number of incident quanta during a given period of irradiation (fluence-response). Fluence-response and fluence rate-response relationships have been measured for second positive phototropism by seedlings of Arabidopsis thaliana. The fluence-response relationships exhibit a single limiting threshold at about 0.01 micromole per square meter when measured at fluence rates from 2.4 × 10−5 to 6.5 × 10−3 micromoles per square meter per second. The threshold values in the fluence rate-response curves decrease with increasing time of irradiation, but show a common fluence threshold at about 0.01 micromole per square meter. These thresholds are the same as the threshold of about 0.01 micromole per square meter measured for first positive phototropism. Based on these data, it is suggested that second positive curvature has a threshold in time of about 10 minutes. Moreover, if the time of irradiation exceeds the time threshold, there is a single limiting fluence threshold at about 0.01 micromole per square meter. Thus, the limiting fluence threshold for second positive phototropism is the same as the fluence threshold for first positive phototropism. Based on these data, we suggest that this common fluence threshold for first positive and second positive phototropism is set by a single photoreceptor pigment system. PMID:11537470

  12. Clinical evaluation of an inspiratory impedance threshold device during standard cardiopulmonary resuscitation in patients with out-of-hospital cardiac arrest.

    PubMed

    Aufderheide, Tom P; Pirrallo, Ronald G; Provo, Terry A; Lurie, Keith G

    2005-04-01

    To determine whether an impedance threshold device, designed to enhance circulation, would increase acute resuscitation rates for patients in cardiac arrest receiving conventional manual cardiopulmonary resuscitation. Prospective, randomized, double-blind, intention-to-treat. Out-of-hospital trial conducted in the Milwaukee, WI, emergency medical services system. Adults in cardiac arrest of presumed cardiac etiology. On arrival of advanced life support, patients were treated with standard cardiopulmonary resuscitation combined with either an active or a sham impedance threshold device. We measured safety and efficacy of the impedance threshold device; the primary end point was intensive care unit admission. Statistical analyses performed included the chi-square test and multivariate regression analysis. One hundred sixteen patients were treated with a sham impedance threshold device, and 114 patients were treated with an active impedance threshold device. Overall intensive care unit admission rates were 17% with the sham device vs. 25% in the active impedance threshold device (p = .13; odds ratio, 1.64; 95% confidence interval, 0.87, 3.10). Patients in the subgroup presenting with pulseless electrical activity had intensive care unit admission and 24-hr survival rates of 20% and 12% in sham (n = 25) vs. 52% and 30% in active impedance threshold device groups (n = 27) (p = .018, odds ratio, 4.31; 95% confidence interval, 1.28, 14.5, and p = .12, odds ratio, 3.09; 95% confidence interval, 0.74, 13.0, respectively). A post hoc analysis of patients with pulseless electrical activity at any time during the cardiac arrest revealed that intensive care unit and 24-hr survival rates were 20% and 11% in the sham (n = 56) vs. 41% and 27% in the active impedance threshold device groups (n = 49) (p = .018, odds ratio, 2.82; 95% confidence interval, 1.19, 6.67, and p = .037, odds ratio, 3.01; 95% confidence interval, 1.07, 8.96, respectively). 
There were no statistically significant differences in outcomes for patients presenting in ventricular fibrillation and asystole. Adverse event and complication rates were also similar. During this first clinical trial of the impedance threshold device during standard cardiopulmonary resuscitation, use of the new device more than doubled short-term survival rates in patients presenting with pulseless electrical activity. A larger clinical trial is underway to determine the potential longer term benefits of the impedance threshold device in cardiac arrest.

  13. Disturbances of motor unit rate modulation are prevalent in muscles of spastic-paretic stroke survivors

    PubMed Central

    Heckman, C. J.; Powers, R. K.; Rymer, W. Z.; Suresh, N. L.

    2014-01-01

    Stroke survivors often exhibit abnormally low motor unit firing rates during voluntary muscle activation. Our purpose was to assess the prevalence of saturation in motor unit firing rates in the spastic-paretic biceps brachii muscle of stroke survivors. To achieve this objective, we recorded the incidence and duration of impaired lower- and higher-threshold motor unit firing rate modulation in spastic-paretic, contralateral, and healthy control muscle during increases in isometric force generated by the elbow flexor muscles. Impaired firing was considered to have occurred when firing rate became constant (i.e., saturated), despite increasing force. The duration of impaired firing rate modulation in the lower-threshold unit was longer for spastic-paretic (3.9 ± 2.2 s) than for contralateral (1.4 ± 0.9 s; P < 0.001) and control (1.1 ± 1.0 s; P = 0.005) muscles. The duration of impaired firing rate modulation in the higher-threshold unit was also longer for the spastic-paretic (1.7 ± 1.6 s) than contralateral (0.3 ± 0.3 s; P = 0.007) and control (0.1 ± 0.2 s; P = 0.009) muscles. This impaired firing rate of the lower-threshold unit arose, despite an increase in the overall descending command, as shown by the recruitment of the higher-threshold unit during the time that the lower-threshold unit was saturating, and by the continuous increase in averages of the rectified EMG of the biceps brachii muscle throughout the rising phase of the contraction. These results suggest that impairments in firing rate modulation are prevalent in motor units of spastic-paretic muscle, even when the overall descending command to the muscle is increasing. PMID:24572092

  14. Adaptive threshold control for auto-rate fallback algorithm in IEEE 802.11 multi-rate WLANs

    NASA Astrophysics Data System (ADS)

    Wu, Qilin; Lu, Yang; Zhu, Xiaolin; Ge, Fangzhen

    2012-03-01

    The IEEE 802.11 standard supports multiple rates for data transmission in the physical layer. Nowadays, to improve network performance, a rate adaptation scheme called auto-rate fallback (ARF) is widely adopted in practice. However, the ARF scheme suffers performance degradation in environments with multiple contending nodes. In this article, we propose a novel rate adaptation scheme called ARF with adaptive threshold control. In environments with multiple contending nodes, the proposed scheme can effectively mitigate the effect of frame collisions on rate adaptation decisions by adaptively adjusting the rate-up and rate-down thresholds according to the current collision level. Simulation results show that the proposed scheme can achieve significantly higher throughput than other existing rate adaptation schemes. The simulation results also demonstrate that the proposed scheme can effectively respond to varying channel conditions.
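    As a rough sketch of the baseline being modified, classic ARF steps the transmission rate up after a run of consecutive successes and falls back on failure; the adaptive scheme's contribution is tuning knobs like the `up_threshold` below to the current collision level. The rate set, the one-failure fallback, and the threshold value are simplifying assumptions, not the paper's algorithm:

```python
class ARF:
    """Toy auto-rate fallback: rate up after a success run, down on failure."""
    RATES = [6, 12, 24, 54]  # Mbps, an illustrative subset of 802.11 rates

    def __init__(self, up_threshold=10):
        self.idx = 0                  # start at the lowest rate
        self.successes = 0
        self.up_threshold = up_threshold  # consecutive successes needed to rate up

    def on_tx(self, success):
        """Record one transmission outcome; return the rate to use next."""
        if success:
            self.successes += 1
            if self.successes >= self.up_threshold and self.idx < len(self.RATES) - 1:
                self.idx += 1
                self.successes = 0
        else:
            self.successes = 0
            if self.idx > 0:
                self.idx -= 1         # simplified: one failure triggers fallback
        return self.RATES[self.idx]
```

    A collision-heavy channel makes failures look like bad link quality, driving the rate down needlessly; raising `up_threshold`-style parameters when collisions dominate is the kind of adjustment the proposed scheme automates.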

  15. PSYCHIATRIC COMORBIDITY DOES NOT ONLY DEPEND ON DIAGNOSTIC THRESHOLDS: AN ILLUSTRATION WITH MAJOR DEPRESSIVE DISORDER AND GENERALIZED ANXIETY DISORDER.

    PubMed

    van Loo, Hanna M; Schoevers, Robert A; Kendler, Kenneth S; de Jonge, Peter; Romeijn, Jan-Willem

    2016-02-01

    High rates of psychiatric comorbidity are a subject of debate: To what extent do they depend on classification choices such as diagnostic thresholds? This paper investigates the influence of different thresholds on rates of comorbidity between major depressive disorder (MDD) and generalized anxiety disorder (GAD). Point prevalence of comorbidity between MDD and GAD was measured in 74,092 subjects from the general population (LifeLines) according to Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) criteria. Comorbidity rates were compared for different thresholds by varying the number of necessary criteria from ≥ 1 to all nine symptoms for MDD, and from ≥ 1 to all seven symptoms for GAD. According to DSM thresholds, 0.86% had MDD only, 2.96% GAD only, and 1.14% both MDD and GAD (odds ratio (OR) 42.6). Lower thresholds for MDD led to higher rates of comorbidity (1.44% for ≥ 4 of nine MDD symptoms, OR 34.4), whereas lower thresholds for GAD hardly influenced comorbidity (1.16% for ≥ 3 of seven GAD symptoms, OR 38.8). Specific patterns in the distribution of symptoms within the population explained this finding: 37.3% of subjects with the core criteria of MDD and GAD reported subthreshold MDD symptoms, whereas only 7.6% reported subthreshold GAD symptoms. Lower thresholds for MDD increased comorbidity with GAD, but not vice versa, owing to specific symptom patterns in the population. In general, comorbidity rates result from both empirical symptom distributions and classification choices and cannot be reduced to either of these exclusively. This insight invites further research into the formation of disease concepts that allow for reliable predictions and targeted therapeutic interventions. © 2015 Wiley Periodicals, Inc.
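    The threshold-varying analysis can be pictured as a simple count over subjects: fix a symptom-count cutoff for each disorder and tally who clears both. The subject records and cutoffs below are invented for illustration:

```python
# Illustrative sketch: comorbidity point prevalence as a function of the
# symptom-count thresholds for the two disorders. Records are made up.
def comorbidity_rate(subjects, mdd_cut, gad_cut):
    """Fraction of subjects meeting both symptom-count thresholds."""
    both = sum(1 for s in subjects
               if s["mdd"] >= mdd_cut and s["gad"] >= gad_cut)
    return both / len(subjects)

subjects = [{"mdd": 9, "gad": 7}, {"mdd": 5, "gad": 2},
            {"mdd": 2, "gad": 6}, {"mdd": 0, "gad": 0}]
print(comorbidity_rate(subjects, mdd_cut=5, gad_cut=4))  # → 0.25
```

    Sweeping `mdd_cut` down while holding `gad_cut` fixed (and vice versa) reproduces the study's design; the asymmetry it found arises from how symptom counts co-occur in the population, not from the counting procedure itself.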

  16. Near-Earth asteroid discovery rate review

    NASA Technical Reports Server (NTRS)

    Helin, Eleanor F.

    1991-01-01

    Fifteen to twenty years ago, the discovery of 1 or 2 Near-Earth Asteroids (NEAs) per year was typical for the one systematic search program, the Palomar Planet-Crossing Asteroid Survey (PCAS), plus incidental discoveries from a variety of other astronomical programs. Sky coverage and limiting magnitude were both constrained by slower emulsions, which required longer exposures. The 1970's sky coverage of 15,000 to 25,000 sq. deg. per year led to about 1 NEA discovery every 13,000 sq. deg. Looking at the years 1987 through 1990, a comparison of 1987/1988 with 1989/1990 shows that the world discovery rate of NEAs went from 20 to 43. More specifically, PCAS' results, when grouped into the two-year periods, show an increase from 5 discoveries in the 1st period to 20 in the 2nd period, a fourfold increase. The discoveries also went from representing about 25 pct. of the world total to about 50 pct. of discoveries worldwide. The surge of discoveries enjoyed by PCAS in particular is attributed to new fine-grain sensitive emulsions, film hypering, more uniformity in the quality of the photographs, more equitable scheduling, better weather, and coordination of efforts. The maximum discovery rate appears to have been attained with the Palomar Schmidt.

  17. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    PubMed Central

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
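    The two-step structure described above (machine-learning screening of candidate SNPs, then hypothesis tests with a correction sized to the reduced set) can be sketched as follows. This is a toy stand-in, not COMBI itself: absolute Pearson correlation replaces the SVM weight ranking, a crude normal approximation replaces a proper association test, and the data are simulated:

```python
import math, random

def screen_then_test(X, y, k=2, alpha=0.05):
    """Stage 1: keep the k top-ranked features. Stage 2: Bonferroni over k."""
    n, p = len(X), len(X[0])

    def pearson(j):
        xj = [row[j] for row in X]
        mx, my = sum(xj) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xj, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in xj))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy) if sx and sy else 0.0

    # Stage 1: screening reduces p candidates to k
    ranked = sorted(range(p), key=lambda j: -abs(pearson(j)))[:k]
    # Stage 2: test survivors, correcting for k tests instead of p
    hits = []
    for j in ranked:
        z = abs(pearson(j)) * math.sqrt(n - 1)  # rough normal approximation
        pval = math.erfc(z / math.sqrt(2))      # two-sided
        if pval < alpha / k:
            hits.append(j)
    return hits

random.seed(0)
y = [random.choice([0.0, 1.0]) for _ in range(60)]
X = [[yi] + [random.random() for _ in range(9)] for yi in y]  # feature 0 tracks y
print(screen_then_test(X, y))
```

    The power gain comes from the second stage dividing alpha by the small screened set k rather than the full p, which is the intuition behind COMBI's "adequate threshold correction".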

  18. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies.

    PubMed

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-11-28

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.

  19. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    NASA Astrophysics Data System (ADS)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-11-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.

  20. Frequency modulation television analysis: Threshold impulse analysis. [with computer program]

    NASA Technical Reports Server (NTRS)

    Hodge, W. H.

    1973-01-01

    A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.

  1. Timing discriminator using leading-edge extrapolation

    DOEpatents

    Gottschalk, Bernard

    1983-01-01

    A discriminator circuit to recover timing information from slow-rising pulses by means of an output trailing edge occurring a fixed time after the starting corner of the input pulse, nearly independent of risetime and threshold setting. This apparatus comprises means for comparing pulses with a threshold voltage; a capacitor to be charged at a certain rate when the input signal is one-third of the threshold voltage, and at a lower rate when the input signal is two-thirds of the threshold voltage; current-generating means for charging the capacitor; means for comparing the capacitor voltage with a bias voltage; a flip-flop to be set when the input pulse reaches the threshold voltage and reset when the capacitor voltage reaches the bias voltage; and a clamping means for discharging the capacitor when the input signal returns below one-third of the threshold voltage.

  2. Parallel Density-Based Clustering for Discovery of Ionospheric Phenomena

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Gowanlock, M.; Blair, D. M.

    2015-12-01

    Ionospheric total electron content maps derived from global networks of dual-frequency GPS receivers can reveal a plethora of ionospheric features in real-time and are key to space weather studies and natural hazard monitoring. However, growing data volumes from expanding sensor networks are making manual exploratory studies challenging. As the community is heading towards Big Data ionospheric science, automation and Computer-Aided Discovery become indispensable tools for scientists. One problem of machine learning methods is that they require domain-specific adaptations in order to be effective and useful for scientists. Addressing this problem, our Computer-Aided Discovery approach allows scientists to express various physical models as well as perturbation ranges for parameters. The search space is explored through an automated system and parallel processing of batched workloads, which finds corresponding matches and similarities in empirical data. We discuss density-based clustering as a particular method we employ in this process. Specifically, we adapt Density-Based Spatial Clustering of Applications with Noise (DBSCAN). This algorithm groups geospatial data points based on density. Clusters of points can be of arbitrary shape, and the number of clusters is not predetermined by the algorithm; only two input parameters need to be specified: (1) a distance threshold, (2) a minimum number of points within that threshold. We discuss an implementation of DBSCAN for batched workloads that is amenable to parallelization on manycore architectures such as Intel's Xeon Phi accelerator with 60+ general-purpose cores. This manycore parallelization can cluster large volumes of ionospheric total electronic content data quickly. Potential applications for cluster detection include the visualization, tracing, and examination of traveling ionospheric disturbances or other propagating phenomena. Acknowledgments. We acknowledge support from NSF ACI-1442997 (PI V. Pankratius).
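    The two-parameter clustering step described above can be sketched in a few lines. This is a plain serial DBSCAN on toy 2-D points, not the batched manycore implementation the abstract discusses:

```python
def dbscan(points, eps, min_pts):
    """Return one label per point: a cluster id >= 0, or -1 for noise."""
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1          # noise (may later become a border point)
            continue
        labels[i] = cid             # i is a core point: start a new cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # border point, absorbed but not expanded
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:
                seeds.extend(nb_j)  # j is also a core point: keep expanding
        cid += 1
    return labels

pts = [(0, 0), (0.5, 0), (1, 0), (10, 10), (10.5, 10), (50, 50)]
print(dbscan(pts, eps=1.0, min_pts=2))  # → [0, 0, 0, 1, 1, -1]
```

    Here `eps` is the distance threshold and `min_pts` the minimum number of points within it; the isolated point at (50, 50) is labeled noise, and the cluster count falls out of the data rather than being specified up front.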

  3. An extended sequential goodness-of-fit multiple testing method for discrete data.

    PubMed

    Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo

    2017-10-01

    The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
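    The family of procedures being extended can be sketched for the continuous case: count how many p-values fall below a cutoff γ, run a binomial metatest against the complete-null expectation, and declare the excess as discoveries. This is a simplified reading of the original SGoF idea, not the discrete extension the paper introduces; the metatest gate and the definition of the declared excess below are assumptions of this sketch:

```python
import math

def sgof(pvals, gamma=0.05):
    """Toy SGoF-style count of declared effects at cutoff gamma."""
    n = len(pvals)
    f = sum(p <= gamma for p in pvals)  # observed count below the cutoff
    # one-sided binomial tail P(Bin(n, gamma) >= f) under the complete null
    tail = sum(math.comb(n, k) * gamma**k * (1 - gamma)**(n - k)
               for k in range(f, n + 1))
    if tail > gamma:
        return 0                         # metatest not rejected: no effects declared
    excess = f - math.ceil(n * gamma)    # observed minus expected-under-null
    return max(excess, 0)

print(sgof([0.001] * 10 + [0.5] * 10))
```

    With discrete test statistics the attainable p-values are a coarse grid, which makes the binomial null model conservative; correcting for that is the contribution of the paper's SGoF-type procedure.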

  4. Equilibrium analysis of a yellow fever dynamical model with vaccination.

    PubMed

    Martorano Raimundo, Silvia; Amaku, Marcos; Massad, Eduardo

    2015-01-01

    We propose an equilibrium analysis of a dynamical model of yellow fever transmission in the presence of a vaccine. The model considers both human and vector populations. We found threshold parameters that affect the development of the disease and the infectious status of the human population in the presence of a vaccine whose protection may wane over time. In particular, we derived a threshold vaccination rate, above which the disease would be eradicated from the human population. We show that if the mortality rate of the mosquitoes is greater than a given threshold, then the disease is naturally (without intervention) eradicated from the population. In contrast, if the mortality rate of the mosquitoes is less than that threshold, then the disease is eradicated from the populations only when the growth rate of humans is less than another threshold; otherwise, the disease is eradicated only if the reproduction number of the infection after vaccination is less than 1. When this reproduction number is greater than 1, the disease will be eradicated from the human population if the vaccination rate is greater than a given threshold; otherwise, the disease will establish itself among humans, reaching a stable endemic equilibrium. The analysis presented in this paper can be useful both for a better understanding of the disease dynamics and for the planning of vaccination strategies.
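    The paper derives a model-specific threshold vaccination rate; as a generic illustration of the same logic, the classic critical coverage for a perfect, non-waning vaccine is 1 − 1/R0 (a textbook herd-immunity result, not the paper's formula, which accounts for waning protection and vector dynamics):

```python
def critical_coverage(r0):
    """Textbook critical vaccination coverage for a perfect vaccine."""
    if r0 <= 1:
        return 0.0          # the disease dies out without intervention
    return 1 - 1 / r0

print(round(critical_coverage(5), 2))  # → 0.8
```

    The branching in the abstract mirrors this structure: when the effective reproduction number is already below 1 (here, r0 <= 1) no vaccination is needed, and otherwise coverage must exceed a threshold that grows with transmissibility.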

  5. The asymmetry of U.S. monetary policy: Evidence from a threshold Taylor rule with time-varying threshold values

    NASA Astrophysics Data System (ADS)

    Zhu, Yanli; Chen, Haiqiang

    2017-05-01

    In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by assuming the threshold value to be a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support that U.S. monetary policy operations are asymmetric across these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate over time to reflect their assessment of the general health of the economy.
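    The regime-switching rule described above can be sketched as a Taylor rule whose response coefficients flip when unemployment crosses a threshold. The coefficients, the fixed threshold, and the neutral-rate assumptions below are illustrative, not the paper's estimates (which let the threshold itself follow an AR process):

```python
def taylor_rate(inflation_gap, output_gap, unemployment, threshold=6.0,
                r_star=2.0, pi_star=2.0):
    """Toy threshold Taylor rule: coefficients depend on the unemployment regime."""
    if unemployment > threshold:      # recession regime
        a_pi, a_y = 0.5, 1.0          # weaker inflation, stronger output response
    else:                             # expansion regime
        a_pi, a_y = 1.5, 0.5
    return r_star + pi_star + a_pi * inflation_gap + a_y * output_gap

# Same gaps, different regimes: the implied policy rate differs.
print(taylor_rate(1.0, -2.0, unemployment=7.0))  # recession regime
print(taylor_rate(1.0, -2.0, unemployment=4.0))  # expansion regime
```

    The asymmetry finding corresponds to `a_pi` shrinking and `a_y` growing in the high-unemployment regime, so identical gaps imply a lower policy rate in recessions.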

  6. The Evaluation of Olfactory Function in Patients With Schizophrenia.

    PubMed

    Robabeh, Soleimani; Mohammad, Jalali Mir; Reza, Ahmadi; Mahan, Badri

    2015-04-23

    The aims of this study were (1) to compare olfactory threshold, smell identification, and intensity and pleasantness ratings between patients with schizophrenia and healthy controls, and (2) to evaluate correlations between ratings of olfactory probes and illness characteristics. Thirty-one patients with schizophrenia and 31 control subjects were assessed with the olfactory n-butanol threshold test, the Iran smell identification test (Ir-SIT), and the suprathreshold amyl acetate odor intensity and odor pleasantness rating test. All olfactory tasks were performed unirhinally. Patients with schizophrenia showed disrupted olfaction on all four measures. Longer duration of schizophrenia was associated with greater impairment of olfactory threshold or a score in the microsmic range on the Ir-SIT (P=0.04 and P=0.05, respectively). Among patients with schizophrenia, female subjects' ratings of pleasantness followed the same trend as control subjects, whereas male patients' ratings showed an opposite trend. Patients exhibiting a high positive score on the positive and negative syndrome scale (PANSS) performed better on the olfactory threshold test (r=0.37, P=0.04). Higher odor pleasantness ratings in patients were associated with the presence of positive symptoms. The results suggest that both male and female patients with schizophrenia had difficulties on the olfactory threshold and smell identification tests, but appraisal of odor pleasantness was more disrupted in male patients.

  7. Determination of Anaerobic Threshold by Heart Rate or Heart Rate Variability using Discontinuous Cycle Ergometry.

    PubMed

    Park, Sung Wook; Brenneman, Michael; Cooke, William H; Cordova, Alberto; Fogt, Donovan

    The purpose was to determine if heart rate (HR) and heart rate variability (HRV) responses would reflect anaerobic threshold (AT) using a discontinuous, incremental cycle test. AT was determined by ventilatory threshold (VT). Cyclists (30.6±5.9 y; 7 males, 8 females) completed a discontinuous cycle test consisting of 7 stages (6 min each, with 3 min of rest between). Three stages were performed at power outputs (W) below those corresponding to a previously established AT, one at the W corresponding to AT, and 3 at W above those corresponding to AT. The averaged stage data for Ve, HR, and time- and frequency-domain HRV metrics were plotted versus W, and the W at the intersection of the trend lines was considered each metric's "threshold". The W at the "threshold" for the metrics of interest were compared using correlation analysis and paired-sample t-tests. In all, several heart rate-related parameters accurately reflected AT: significant correlations (p≤0.05) were observed between the AT W and the threshold W (i.e., MRRTW, etc.) of HR, mean RR interval (MRR), low- and high-frequency spectral energy (LF and HF, respectively), high-frequency peak (fHF), and HF×fHF. Differences between HR or HRV metric threshold W and AT W for all subjects were less than 14 W. The steady-state data from discontinuous protocols may allow for a truer indication of steady-state physiologic stress responses and the corresponding W at AT, compared with continuous protocols using 1-2 min exercise stages.
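    The trend-line-intersection idea described above can be sketched directly: fit one line to the below-AT stages and one to the above-AT stages, and take the power output where they cross as the metric's "threshold". The data points and the two-segment split below are illustrative, not the study's measurements:

```python
def fit_line(pts):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

def threshold_watts(low_pts, high_pts):
    """Power output (W) where the two fitted trend lines intersect."""
    a1, b1 = fit_line(low_pts)
    a2, b2 = fit_line(high_pts)
    return (a1 - a2) / (b2 - b1)

low = [(100, 20), (150, 25), (200, 30)]    # Ve rises slowly below AT
high = [(250, 45), (300, 60), (350, 75)]   # and steeply above it
print(round(threshold_watts(low, high), 1))  # → 200.0
```

    The same intersection computation applies to each plotted metric (Ve, HR, MRR, and the spectral measures), which is what lets their threshold W values be compared against the ventilatory AT W.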

  8. High-frequency (8 to 16 kHz) reference thresholds and intrasubject threshold variability relative to ototoxicity criteria using a Sennheiser HDA 200 earphone.

    PubMed

    Frank, T

    2001-04-01

    The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose of this study was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of +/-10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth minus their first test session were also found to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift > or = 20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. 
This study verified that the output of high-frequency audiometers at 0 dB HL using Sennheiser HDA 200 earphones should equal the 1998 interim ISO RETSPLs from 8 to 16 kHz. Further, because the differences between repeated thresholds were well within +/-10 dB and had an extremely low false-positive rate in reference to the ASHA 1994 criteria for a significant threshold shift due to ototoxicity, a Sennheiser HDA 200 earphone can be used for serial monitoring to determine whether significant high-frequency threshold shifts have occurred for patients receiving potentially ototoxic drug therapy.

  9. PI3K Phosphorylation Is Linked to Improved Electrical Excitability in an In Vitro Engineered Heart Tissue Disease Model System.

    PubMed

    Kana, Kujaany; Song, Hannah; Laschinger, Carol; Zandstra, Peter W; Radisic, Milica

    2015-09-01

    Myocardial infarction, a prevalent cardiovascular disease, is associated with cardiomyocyte cell death and, eventually, heart failure. Cardiac tissue engineering has provided hope for alternative treatment options and high-fidelity tissue models for drug discovery. The signal transduction mechanisms relayed in response to mechanoelectrical (physical) stimulation or biochemical stimulation (hormones, cytokines, or drugs) in engineered heart tissues (EHTs) are poorly understood. In this study, an EHT model was used to elucidate the signaling mechanisms involved when insulin was applied in the presence of electrical stimulation, a stimulus that mimics the functional heart tissue environment in vitro. EHTs were treated with insulin, electrically stimulated, or given both treatments in combination. Electrical excitability parameters (excitation threshold and maximum capture rate) were measured. Protein kinase B (AKT) and phosphatidylinositol-3-kinase (PI3K) phosphorylation revealed that insulin and electrical stimulation relayed electrical excitability through two separate signaling cascades, while there was negative crosstalk between sustained activation of AKT and PI3K.

  10. A nanobuffer reporter library for fine-scale imaging and perturbation of endocytic organelles

    PubMed Central

    Wang, Chensu; Wang, Yiguang; Li, Yang; Bodemann, Brian; Zhao, Tian; Ma, Xinpeng; Huang, Gang; Hu, Zeping; DeBerardinis, Ralph J.; White, Michael A.; Gao, Jinming

    2015-01-01

    Endosomes, lysosomes and related catabolic organelles are a dynamic continuum of vacuolar structures that impact a number of cell physiological processes such as protein/lipid metabolism, nutrient sensing and cell survival. Here we develop a library of ultra-pH-sensitive fluorescent nanoparticles with chemical properties that allow fine-scale, multiplexed, spatio-temporal perturbation and quantification of catabolic organelle maturation at single organelle resolution to support quantitative investigation of these processes in living cells. Deployment in cells allows quantification of the proton accumulation rate in endosomes; illumination of previously unrecognized regulatory mechanisms coupling pH transitions to endosomal coat protein exchange; discovery of distinct pH thresholds required for mTORC1 activation by free amino acids versus proteins; broad-scale characterization of the consequence of endosomal pH transitions on cellular metabolomic profiles; and functionalization of a context-specific metabolic vulnerability in lung cancer cells. Together, these biological applications indicate the robustness and adaptability of this nanotechnology-enabled ‘detection and perturbation' strategy. PMID:26437053

  11. Drug-Based Lead Discovery: The Novel Ablative Antiretroviral Profile of Deferiprone in HIV-1-Infected Cells and in HIV-Infected Treatment-Naive Subjects of a Double-Blind, Placebo-Controlled, Randomized Exploratory Trial

    PubMed Central

    Saxena, Deepti; Spino, Michael; Tricta, Fernando; Connelly, John; Cracchiolo, Bernadette M.; Hanauske, Axel-Rainer; D’Alliessi Gandolfi, Darlene; Mathews, Michael B.; Karn, Jonathan; Holland, Bart; Park, Myung Hee; Pe’ery, Tsafi; Palumbo, Paul E.; Hanauske-Abel, Hartmut M.

    2016-01-01

    Antiretrovirals suppress HIV-1 production yet spare the sites of HIV-1 production, the HIV-1 DNA-harboring cells that evade immune detection and enable viral resistance on-drug and viral rebound off-drug. Therapeutic ablation of pathogenic cells markedly improves the outcome of many diseases. We extend this strategy to HIV-1 infection. Using drug-based lead discovery, we report the concentration threshold-dependent antiretroviral action of the medicinal chelator deferiprone and validate preclinical findings by a proof-of-concept double-blind trial. In isolate-infected primary cultures, supra-threshold concentrations during deferiprone monotherapy caused decline of HIV-1 RNA and HIV-1 DNA; did not allow viral breakthrough for up to 35 days on-drug, indicating resiliency against viral resistance; and prevented viral rebound for at least 87 days off-drug. Displaying a steep dose-effect curve, deferiprone produced infection-independent deficiency of hydroxylated hypusyl-eIF5A. However, unhydroxylated deoxyhypusyl-eIF5A accumulated particularly in HIV-infected cells, which preferentially underwent apoptotic DNA fragmentation. Since the threshold, ascertained at about 150 μM, is achievable in deferiprone-treated patients, we proceeded from cell culture directly to an exploratory trial. HIV-1 RNA was measured after 7 days on-drug and after 28 and 56 days off-drug. Subjects who attained supra-threshold concentrations in serum and completed the protocol of 17 oral doses experienced a zidovudine-like decline of HIV-1 RNA on-drug that was maintained off-drug without statistically significant rebound for 8 weeks, over 670 times the drug's half-life and thus long after its clearance from circulation. The uniform deferiprone threshold is in agreement with mapping of, and crystallographic 3D data on, the active site of deoxyhypusyl hydroxylase (DOHH), the eIF5A-hydroxylating enzyme.
We propose that deficiency of hypusine-containing eIF5A impedes the translation of mRNAs encoding proline cluster ('polyproline')-containing proteins, exemplified by Gag/p24, and, facilitated by the excess of deoxyhypusine-containing eIF5A, releases the innate apoptotic defense of HIV-infected cells from viral blockade, thus depleting the cellular reservoir of HIV-1 DNA that drives breakthrough and rebound. Trial Registration: ClinicalTrials.gov NCT02191657 PMID:27191165

  12. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…

  13. Forecasting petroleum discoveries in sparsely drilled areas: Nigeria and the North Sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attanasi, E.D.; Root, D.H.

    1988-10-01

    Decline function methods for projecting future discoveries generally capture the crowding effects of wildcat wells on the discovery rate. However, these methods do not accommodate easily situations where exploration areas and horizons are expanding. In this paper, a method is presented that uses a mapping algorithm for separating these often countervailing influences. The method is applied to Nigeria and the North Sea. For an amount of future drilling equivalent to past drilling (825 wildcat wells), future discoveries (in resources found) for Nigeria are expected to decline by 68% per well but still amount to 8.5 billion barrels of oil equivalent (BOE). Similarly, for the total North Sea for an equivalent amount and mix among areas of past drilling (1322 wildcat wells), future discoveries are expected to amount to 17.9 billion BOE, whereas the average discovery rate per well is expected to decline by 71%.

  14. Forecasting petroleum discoveries in sparsely drilled areas: Nigeria and the North Sea

    USGS Publications Warehouse

    Attanasi, E.D.; Root, D.H.

    1988-01-01

    Decline function methods for projecting future discoveries generally capture the crowding effects of wildcat wells on the discovery rate. However, these methods do not accommodate easily situations where exploration areas and horizons are expanding. In this paper, a method is presented that uses a mapping algorithm for separating these often countervailing influences. The method is applied to Nigeria and the North Sea. For an amount of future drilling equivalent to past drilling (825 wildcat wells), future discoveries (in resources found) for Nigeria are expected to decline by 68% per well but still amount to 8.5 billion barrels of oil equivalent (BOE). Similarly, for the total North Sea for an equivalent amount and mix among areas of past drilling (1322 wildcat wells), future discoveries are expected to amount to 17.9 billion BOE, whereas the average discovery rate per well is expected to decline by 71%. © 1988 International Association for Mathematical Geology.
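    The decline-function idea in the two records above can be sketched numerically: assume the discovery rate per wildcat well decays exponentially, calibrate the decay constant so the rate falls 68% over 825 wells (the Nigeria figures reported), and integrate to recover cumulative discoveries. The exponential form and all variable names are illustrative assumptions, not the authors' actual model.

    ```python
    import math

    # Discovery rate per well: r(w) = r0 * exp(-a * w).
    # Calibrate a so the rate declines 68% over W = 825 wildcat wells,
    # and r0 so the next W wells find 8.5 billion BOE (Nigeria figures).
    W = 825
    a = -math.log(1 - 0.68) / W                      # decay constant per well
    total_target = 8.5                               # billion BOE over W wells
    r0 = total_target * a / (1 - math.exp(-a * W))   # initial rate, billion BOE/well

    # Cumulative discoveries = integral of r(w) from 0 to W
    cumulative = r0 * (1 - math.exp(-a * W)) / a
    print(round(cumulative, 1))  # 8.5
    ```

    By construction the model reproduces both reported numbers at once: the per-well rate at well 825 is 32% of the initial rate, and the integral over the 825 future wells equals 8.5 billion BOE.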

  15. Control of growth of juvenile leaves of Eucalyptus globulus: effects of leaf age.

    PubMed

    Metcalfe, J C; Davies, W J; Pereira, J S

    1991-12-01

    Biophysical variables influencing the expansion of plant cells (yield threshold, cell wall extensibility and turgor) were measured in individual Eucalyptus globulus leaves from the time of emergence until cessation of growth. Leaf water relations variables and growth rates were determined as relative humidity was changed on an hourly basis. Yield threshold and cell wall extensibility were estimated from plots of leaf growth rate versus turgor. Cell wall extensibility was also measured by the Instron technique, and yield threshold was determined experimentally both by stress relaxation in a psychrometer chamber and by incubation in a range of polyethylene glycol solutions. Once emerging leaves reached approximately 5 cm² in size, increases in leaf area were rapid throughout the expansive phase and varied little between light and dark periods. Both leaf growth rate and turgor were sensitive to changes in humidity, and in the longer term, both yield threshold and cell wall extensibility changed as the leaf aged. Rapidly expanding leaves had a very low yield threshold and high cell wall extensibility, whereas mature leaves had low cell wall extensibility. Yield threshold increased with leaf age.

  16. Effects of fatigue on motor unit firing rate versus recruitment threshold relationships.

    PubMed

    Stock, Matt S; Beck, Travis W; Defreitas, Jason M

    2012-01-01

    The purpose of this study was to examine the influence of fatigue on the average firing rate versus recruitment threshold relationships for the vastus lateralis (VL) and vastus medialis. Nineteen subjects performed ten maximum voluntary contractions of the dominant leg extensors. Before and after this fatiguing protocol, the subjects performed a trapezoid isometric muscle action of the leg extensors, and bipolar surface electromyographic signals were detected from both muscles. These signals were then decomposed into individual motor unit action potential trains. For each subject and muscle, the relationship between average firing rate and recruitment threshold was examined using linear regression analyses. For the VL, the linear slope coefficients and y-intercepts for these relationships increased and decreased, respectively, after fatigue. For both muscles, many of the motor units decreased their firing rates. With fatigue, recruitment of higher threshold motor units resulted in an increase in slope for the VL. Copyright © 2011 Wiley Periodicals, Inc.
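    The per-muscle analysis described above — a linear regression of average firing rate against recruitment threshold, with fatigue shifting the slope and y-intercept — can be sketched as follows. The data points are hypothetical, not the study's measurements; they only illustrate the typical inverse relationship.

    ```python
    import numpy as np

    # Hypothetical motor unit data: recruitment threshold (% MVC) vs.
    # average firing rate (pulses/s). Higher-threshold units typically
    # fire at lower average rates, giving a negative regression slope.
    thresholds = np.array([5.0, 10.0, 20.0, 35.0, 50.0])    # % MVC
    firing_rates = np.array([22.0, 20.0, 17.0, 13.0, 9.0])  # pulses/s

    # First-degree least-squares fit, as in the study's linear regressions
    slope, intercept = np.polyfit(thresholds, firing_rates, 1)
    print(slope < 0)  # True: rate falls as recruitment threshold rises
    ```

    Fatigue-related changes of the kind reported (increased slope, decreased y-intercept for the VL) would show up directly as shifts in these two fitted coefficients between the pre- and post-fatigue contractions.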

  17. Closed-loop adaptation of neurofeedback based on mental effort facilitates reinforcement learning of brain self-regulation.

    PubMed

    Bauer, Robert; Fels, Meike; Royter, Vladislav; Raco, Valerio; Gharabaghi, Alireza

    2016-09-01

    Considering self-rated mental effort during neurofeedback may improve training of brain self-regulation. Twenty-one healthy, right-handed subjects performed kinesthetic motor imagery of opening their left hand, while threshold-based classification of beta-band desynchronization resulted in proprioceptive robotic feedback. The experiment consisted of two blocks in a cross-over design. The participants rated their perceived mental effort nine times per block. In the adaptive block, the threshold was adjusted on the basis of these ratings whereas adjustments were carried out at random in the other block. Electroencephalography was used to examine the cortical activation patterns during the training sessions. The perceived mental effort was correlated with the difficulty threshold of neurofeedback training. Adaptive threshold-setting reduced mental effort and increased the classification accuracy and positive predictive value. This was paralleled by an inter-hemispheric cortical activation pattern in low frequency bands connecting the right frontal and left parietal areas. Optimal balance of mental effort was achieved at thresholds significantly higher than maximum classification accuracy. Rating of mental effort is a feasible approach for effective threshold-adaptation during neurofeedback training. Closed-loop adaptation of the neurofeedback difficulty level facilitates reinforcement learning of brain self-regulation. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Timing discriminator using leading-edge extrapolation

    DOEpatents

    Gottschalk, B.

    1981-07-30

    A discriminator circuit is described that recovers timing information from slow-rising pulses by producing an output trailing edge a fixed time after the starting corner of the input pulse, nearly independent of risetime and threshold setting. This apparatus comprises means for comparing pulses with a threshold voltage; a capacitor to be charged at a certain rate when the input signal is one-third threshold voltage, and at a lower rate when the input signal is two-thirds threshold voltage; current-generating means for charging the capacitor; means for comparing the capacitor voltage with a bias voltage; a flip-flop to be set when the input pulse reaches threshold voltage and reset when the capacitor voltage reaches the bias voltage; and a clamping means for discharging the capacitor when the input signal returns below one-third threshold voltage.

  19. Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate

    NASA Astrophysics Data System (ADS)

    Li, Chun-Hsien

    2015-06-01

    This paper studies the dynamics of a network-based SIS epidemic model with nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spread in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and interestingly, the threshold is not dependent on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
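    The abstract does not state the threshold formula, but for degree-based mean-field SIS models on uncorrelated networks the standard result is that the epidemic threshold for the transmission rate is λc = ⟨k⟩/⟨k²⟩, which depends only on the degree distribution — consistent with the finding above that the threshold does not depend on the functional form of the incidence. A minimal sketch under that assumption (function name and degree sequences are illustrative):

    ```python
    import numpy as np

    def sis_threshold(degrees):
        """Heterogeneous mean-field SIS epidemic threshold <k>/<k^2>
        for an uncorrelated network with the given degree sequence."""
        k = np.asarray(degrees, dtype=float)
        return k.mean() / (k ** 2).mean()

    # Homogeneous network, all nodes of degree 4: threshold = 4/16
    print(sis_threshold([4] * 100))  # 0.25

    # A heavy-tailed degree sequence (a few hubs) lowers the threshold,
    # since <k^2> grows much faster than <k>
    print(sis_threshold([1] * 90 + [31] * 10) < 0.25)  # True
    ```

    Transmission rates above the returned value correspond to the permanent (endemic) regime described in the abstract; rates at or below it lead to die-out.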

  20. Implementation of false discovery rate for exploring novel paradigms and trait dimensions with ERPs.

    PubMed

    Crowley, Michael J; Wu, Jia; McCreary, Scott; Miller, Kelly; Mayes, Linda C

    2012-01-01

    False discovery rate (FDR) is a multiple comparison procedure that targets the expected proportion of false discoveries among the discoveries. Employing FDR methods in event-related potential (ERP) research provides an approach to explore new ERP paradigms and ERP-psychological trait/behavior relations. In Study 1, we examined neural responses to escape behavior from an aversive noise. In Study 2, we correlated a relatively unexplored trait dimension, ostracism, with neural response. In both studies we focused on the frontal cortical region, applying channel-by-time plots to display statistically significant uncorrected data and FDR-corrected data, controlling for multiple comparisons.
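    FDR control of this kind is commonly implemented with the Benjamini-Hochberg step-up procedure: sort the m P-values, find the largest k with p(k) ≤ (k/m)·q, and declare the k smallest P-values significant. A minimal sketch (not the authors' code; names are illustrative):

    ```python
    import numpy as np

    def fdr_bh(pvals, q=0.05):
        """Benjamini-Hochberg step-up procedure. Returns a boolean mask
        of discoveries: finds the largest k with p_(k) <= (k/m) * q and
        declares the k smallest P-values significant."""
        p = np.asarray(pvals, dtype=float)
        m = p.size
        order = np.argsort(p)
        ranked = p[order]
        below = ranked <= (np.arange(1, m + 1) / m) * q
        mask = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])  # largest passing rank (0-based)
            mask[order[: k + 1]] = True
        return mask

    # Example: 3 strong effects among 20 tests (e.g., channel-time bins)
    pvals = [0.001, 0.002, 0.003] + [0.2, 0.4, 0.6, 0.8] * 4 + [0.9]
    print(fdr_bh(pvals, q=0.05).sum())  # 3
    ```

    Applied per channel-time bin, the resulting mask is exactly what a channel-by-time plot of FDR-corrected significance would display.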

  1. 49 CFR 385.805 - Events triggering issuance of remedial directive and proposed determination of unfitness.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accordance with this subpart for threshold rate violations of any appendix C regulation or regulations that... determination of one or more threshold rate violations of any appendix C regulation are discovered. ...

  2. [Correlation analysis of hearing level and soft palate movement after palatoplasty].

    PubMed

    Lou, Qun; Ma, Xiaoran; Ma, Lian; Luo, Yi; Zhu, Hongping; Zhou, Zhibo

    2015-10-01

    To explore the relationship between hearing level and soft palate movement after palatoplasty, and to verify the importance of recovering soft palate movement function for improving middle ear function and reducing hearing loss. A total of 64 non-syndromic cleft palate patients were selected and lateral cephalometric radiographs were taken. The patients' hearing level was evaluated by pure tone hearing threshold examination. This study also analyzed the correlations between the patients' post-palatoplasty hearing threshold and, respectively, the soft palate elevation angle and the velopharyngeal closure rate. Kendall correlation analysis revealed a negative correlation between hearing threshold and soft palate elevation angle after palatoplasty (r = -0.339, P < 0.01): the hearing threshold decreased as the soft palate elevation angle increased. After palatoplasty, the correlation between hearing threshold and the rate of velopharyngeal closure was also negative (r = -0.277, P < 0.01): the hearing threshold decreased as the velopharyngeal closure rate increased. The hearing threshold was thus correlated with both the soft palate elevation angle and the velopharyngeal closure rate. The movement of the soft palate and the velopharyngeal closure function after palatoplasty both affect the patient's hearing level, with soft palate movement having the greater impact.

  3. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  4. Pleasure and pain: the effect of (almost) having an orgasm on genital and nongenital sensitivity.

    PubMed

    Paterson, Laurel Q P; Amsel, Rhonda; Binik, Yitzchak M

    2013-06-01

    The effect of sexual arousal and orgasm on genital sensitivity has received little research attention, and no study has assessed sensation pleasurableness as well as painfulness. To clarify the relationship between sexual arousal, orgasm, and sensitivity in a healthy female sample. Twenty-six women privately masturbated to orgasm and almost to orgasm at two separate sessions, during which standardized pressure stimulation was applied to the glans clitoris, vulvar vestibule, and volar forearm at three testing times: (i) baseline; (ii) immediately following masturbation; and (iii) following a subsequent 15-minute rest period. Touch thresholds (tactile detection sensitivity), sensation pleasurableness ratings (pleasurable sensitivity), and pain thresholds (pain sensitivity). Pleasurableness ratings were higher on the glans clitoris than the vulvar vestibule, and at most testing times on the vulvar vestibule than the volar forearm; and at baseline and immediately after masturbation than 15 minutes later, mainly on the genital locations only. Pain thresholds were lower on the genital locations than the volar forearm, and immediately and 15 minutes after masturbation than at baseline. After orgasm, genital pleasurableness ratings and vulvar vestibular pain thresholds were lower than after masturbation almost to orgasm. Post-masturbation pleasurableness ratings were positively correlated with pain thresholds but only on the glans clitoris. Hormonal contraception users had lower pleasurableness ratings and pain thresholds on all locations than nonusers. There were no significant effects for touch thresholds. Masturbation appears to maintain pleasurable genital sensitivity but increase pain sensitivity, with lower genital pleasurable sensitivity and higher vulvar vestibular pain sensitivity when orgasm occurs. 
Findings suggest that enhancing stimulation pleasurableness, psychological sexual arousal and lubrication mitigate normative increases in pain sensitivity during sexual activity, and underscore the importance of measuring both pleasure and pain in sensation research. © 2013 International Society for Sexual Medicine.

  5. Behavior of motor units in human biceps brachii during a submaximal fatiguing contraction.

    PubMed

    Garland, S J; Enoka, R M; Serrano, L P; Robinson, G A

    1994-06-01

    The activity of 50 single motor units was recorded in the biceps brachii muscle of human subjects while they performed submaximal isometric elbow flexion contractions that were sustained to induce fatigue. The purposes of this study were to examine the influence of fatigue on motor unit threshold force and to determine the relationship between the threshold force of recruitment and the initial interimpulse interval on the discharge rates of single motor units during a fatiguing contraction. The discharge rate of most motor units that were active from the beginning of the contraction declined during the fatiguing contraction, whereas the discharge rates of most newly recruited units were either constant or increased slightly. The absolute threshold forces of recruitment and derecruitment decreased, and the variability of interimpulse intervals increased after the fatigue task. The change in motor unit discharge rate during the fatigue task was related to the initial rate, but the direction of the change in discharge rate could not be predicted from the threshold force of recruitment or the variability in the interimpulse intervals. The discharge rate of most motor units declined despite an increase in the excitatory drive to the motoneuron pool during the fatigue task.

  6. On the expected discounted penalty functions for two classes of risk processes under a threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Lu, Zhaoyang; Xu, Wei; Sun, Decai; Han, Weiguo

    2009-10-01

    In this paper, the discounted penalty (Gerber-Shiu) functions for a risk model involving two independent classes of insurance risks under a threshold dividend strategy are developed. We assume that the two claim number processes are independent Poisson and generalized Erlang(2) processes, respectively. When the surplus is above the threshold level, dividends are paid at a constant rate that does not exceed the premium rate. Two systems of integro-differential equations for the discounted penalty functions are derived, based on whether the surplus is above the threshold level. Laplace transforms of the discounted penalty functions when the surplus is below the threshold level are obtained. We also derive, via the Dickson-Hipp operator, a system of renewal equations satisfied by the discounted penalty function with initial surplus above the threshold level. Finally, analytical solutions of the two systems of integro-differential equations are presented.

  7. Dynamic shear-stress-enhanced rates of nutrient consumption in gas-liquid semi-continuous-flow suspensions

    NASA Astrophysics Data System (ADS)

    Belfiore, Laurence A.; Volpato, Fabio Z.; Paulino, Alexandre T.; Belfiore, Carol J.

    2011-12-01

    The primary objective of this investigation is to establish guidelines for generating significant mammalian cell density in suspension bioreactors when stress-sensitive kinetics enhance the rate of nutrient consumption. Ultra-low-frequency dynamic modulations of the impeller (i.e., 35104 Hz) introduce time-dependent oscillatory shear into this transient analysis of cell proliferation under semi-continuous creeping flow conditions. Greater nutrient consumption is predicted when the amplitude A of modulated impeller rotation increases, and stress-kinetic contributions to nutrient consumption rates increase linearly at higher modulation frequency via an application of fluctuation-dissipation response. Interphase mass transfer is required to replace dissolved oxygen as it is consumed by aerobic nutrient consumption in the liquid phase. The theory and predictions described herein could be important at small length scales in the creeping flow regime where viscous shear is significant at the interface between the nutrient medium and isolated cells in suspension. Two-dimensional flow around spherically shaped mammalian cells, suspended in a Newtonian culture medium, is analyzed to calculate the surface-averaged magnitude of the velocity gradient tensor and modify homogeneous rates of nutrient consumption that are stimulated by viscous shear, via the formalism of stress-kinetic reciprocal relations that obey Curie's theorem in non-equilibrium thermodynamics. Time constants for stress-free (τfree) and stress-sensitive (τstress) nutrient consumption are defined and quantified to identify the threshold (τstress,threshold) below which the effect of stress cannot be neglected in accurate predictions of bioreactor performance. Parametric studies reveal that the threshold time constant for stress-sensitive nutrient consumption, τstress,threshold, decreases when the time constant for stress-free nutrient consumption, τfree, is shorter. Hence, τstress,threshold depends directly on τfree.
In other words, the threshold rate of stress-sensitive nutrient consumption is higher when the stress-free rate of nutrient consumption increases. Modulated rotation of the impeller, superimposed on steady shear, increases τstress,threshold when τfree is constant, and τstress,threshold depends directly on the amplitude A of these angular velocity modulations.

  8. Analysis of the rate of wildcat drilling and deposit discovery

    USGS Publications Warehouse

    Drew, L.J.

    1975-01-01

    The rate at which petroleum deposits were discovered during a 16-yr period (1957-72) was examined in relation to changes in a suite of economic and physical variables. The study area encompasses 11,000 mi² and is located on the eastern flank of the Powder River Basin. A two-stage multiple-regression model was used as a basis for this analysis. The variables employed in this model were: (1) the yearly wildcat drilling rate, (2) a measure of the extent of the physical exhaustion of the resource base of the region, (3) a proxy for the discovery expectation of the exploration operators active in the region, (4) an exploration price/cost ratio, and (5) the expected depths of the exploration targets sought. The rate at which wildcat wells were drilled was strongly correlated with the discovery expectation of the exploration operators. Small additional variations in the wildcat drilling rate were explained by the price/cost ratio and target-depth variables. The number of deposits discovered each year was highly dependent on the wildcat drilling rate, but the aggregate quantity of petroleum discovered each year was independent of the wildcat drilling rate. The independence between these last two variables is a consequence of the cyclical behavior of the exploration play mechanism. Although the discovery success ratio declined sharply during the initial phases of the two exploration plays which developed in the study area, a learning effect occurred whereby the discovery success ratio improved steadily with the passage of time during both exploration plays. © 1975 Plenum Publishing Corporation.

  9. Motor unit behaviour and contractile changes during fatigue in the human first dorsal interosseus

    PubMed Central

    Carpentier, Alain; Duchateau, Jacques; Hainaut, Karl

    2001-01-01

    In 67 single motor units, the mechanical properties, the recruitment and derecruitment thresholds, and the discharge rates were recorded concurrently in the first dorsal interosseus (FDI) of human subjects during intermittent fatiguing contractions. The task consisted of isometric ramp-and-hold contractions performed at 50% of the maximal voluntary contraction (MVC). The purpose of this study was to examine the influence of fatigue on the behaviour of motor units with a wide range of activation thresholds. For low-threshold (< 25% MVC) motor units, the mean twitch force increased with fatigue and the recruitment threshold either did not change or increased. In contrast, the twitch force and the activation threshold decreased for the high-threshold (> 25% MVC) units. The observation that in low-threshold motor units a quick stretch of the muscle at the end of the test reset the unit force and recruitment threshold to the prefatigue value suggests a significant role for fatigue-related changes in muscle stiffness but not twitch potentiation or motor unit synchronization. Although the central drive intensified during the fatigue test, as indicated by an increase in surface electromyogram (EMG), the discharge rate of the motor units during the hold phase of each contraction decreased progressively over the course of the task for motor units that were recruited at the beginning of the test, especially the low-threshold units. In contrast, the discharge rates of newly activated units first increased and then decreased. Such divergent behaviour of low- and high-threshold motor units could not be individually controlled by the central drive to the motoneurone pool. Rather, the different behaviours must be the consequence of variable contributions from motoneurone adaptation and afferent feedback from the muscle during the fatiguing contraction. PMID:11483719

  10. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    PubMed

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Absolute auditory threshold: testing the absolute.

    PubMed

    Heil, Peter; Matysiak, Artur

    2017-11-02

    The mechanisms underlying the detection of sounds in quiet, one of the simplest tasks for auditory systems, are debated. Several models proposed to explain the threshold for sounds in quiet and its dependence on sound parameters include a minimum sound intensity ('hard threshold'), below which sound has no effect on the ear. Also, many models are based on the assumption that threshold is mediated by integration of a neural response proportional to sound intensity. Here, we test these ideas. Using an adaptive forced choice procedure, we obtained thresholds of 95 normal-hearing human ears for 18 tones (3.125 kHz carrier) in quiet, each with a different temporal amplitude envelope. Grand-mean thresholds and standard deviations were well described by a probabilistic model according to which sensory events are generated by a Poisson point process with a low rate in the absence, and higher, time-varying rates in the presence, of stimulation. The subject actively evaluates the process and bases the decision on the number of events observed. The sound-driven rate of events is proportional to the temporal amplitude envelope of the bandpass-filtered sound raised to an exponent. We find no evidence for a hard threshold: When the model is extended to include such a threshold, the fit does not improve. Furthermore, we find an exponent of 3, consistent with our previous studies and further challenging models that are based on the assumption of the integration of a neural response that, at threshold sound levels, is directly proportional to sound amplitude or intensity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
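
    The probabilistic model described here can be sketched as a Poisson counting process: sensory events occur at a low spontaneous rate plus a stimulus-driven rate proportional to the amplitude envelope raised to an exponent (3, per the abstract), and the listener reports detection when the event count reaches a criterion. The following is a minimal illustrative simulation, not the authors' code; the function names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_count(envelope, dt, r0, c, k=3):
    """Expected number of sensory events over the stimulus: spontaneous
    rate r0 plus a driven rate proportional to envelope**k (hypothetical
    parameterization; k=3 follows the exponent reported in the abstract)."""
    rate = r0 + c * envelope**k
    return np.sum(rate * dt)

def detect(envelope, dt, r0, c, criterion, trials=1000):
    """Fraction of simulated trials whose Poisson event count reaches
    the observer's criterion."""
    mu = expected_count(envelope, dt, r0, c)
    counts = rng.poisson(mu, size=trials)
    return np.mean(counts >= criterion)
```

    With a flat envelope, a strong stimulus drives the count past the criterion on nearly every trial, while with no stimulus only spontaneous events remain and detection stays near the false-alarm floor; there is no hard intensity cutoff anywhere in the model, matching the paper's conclusion.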

  12. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology.

    PubMed

    Insel, Philip S; Mattsson, Niklas; Mackin, R Scott; Schöll, Michael; Nosheny, Rachel L; Tosun, Duygu; Donohue, Michael C; Aisen, Paul S; Jagust, William J; Weiner, Michael W

    2016-05-17

    To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. © 2016 American Academy of Neurology.
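
    The estimation of points where decline begins to accelerate can be illustrated with a broken-stick (hinge) least-squares fit. This is a simplified stand-in for the mixed-effects regression the study actually used; the function name, the grid search, and the simulated data are assumptions for illustration. The hinge term max(0, t - x) lets the slope change below a breakpoint t, matching the idea that decline accelerates below a level of CSF Aβ42.

```python
import numpy as np

def fit_hinge(x, y, candidates):
    """Grid-search a hinge model y = b0 + b1*x + b2*max(0, t - x) and
    return the breakpoint t with the smallest squared error (illustrative
    stand-in for a mixed-effects change-point analysis)."""
    best_sse, best_t = np.inf, None
    for t in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, t - x)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_t
```

    Fitting simulated data whose slope changes at a known point recovers that breakpoint, which is the sense in which the study estimates "points of initial acceleration."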

  13. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology

    PubMed Central

    Mattsson, Niklas; Mackin, R. Scott; Schöll, Michael; Nosheny, Rachel L.; Tosun, Duygu; Donohue, Michael C.; Aisen, Paul S.; Jagust, William J.; Weiner, Michael W.

    2016-01-01

    Objective: To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. Methods: In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Results: Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. Conclusions: A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. PMID:27164667

  14. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Insel, Philip S.; Mattsson, Niklas; Mackin, R. Scott

    Objective: Our objective is to estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. Methods: In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Results: Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. Conclusions: A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Lastly, future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline.

  15. Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology

    DOE PAGES

    Insel, Philip S.; Mattsson, Niklas; Mackin, R. Scott; ...

    2016-04-15

    Objective: Our objective is to estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. Methods: In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Results: Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. Conclusions: A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Lastly, future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline.

  16. Dispersive estimates for massive Dirac operators in dimension two

    NASA Astrophysics Data System (ADS)

    Erdoğan, M. Burak; Green, William R.; Toprak, Ebru

    2018-05-01

    We study the massive two-dimensional Dirac operator with an electric potential. In particular, we show that the t^-1 decay rate holds in the L^1 → L^∞ setting if the threshold energies are regular. We also show these bounds hold in the presence of s-wave resonances at the threshold. We further show that, if the threshold energies are regular, then a faster decay rate of t^-1 (log t)^-2 is attained for large t, at the cost of logarithmic spatial weights. The free Dirac equation does not satisfy this bound due to the s-wave resonances at the threshold energies.

  17. Accelerated Near-Threshold Fatigue Crack Growth Behavior of an Aluminum Powder Metallurgy Alloy

    NASA Technical Reports Server (NTRS)

    Piascik, Robert S.; Newman, John A.

    2002-01-01

    Fatigue crack growth (FCG) research conducted in the near-threshold regime has identified a room temperature creep crack growth damage mechanism for a fine grain powder metallurgy (PM) aluminum alloy (8009). At very low Delta K, an abrupt acceleration in room temperature FCG rate occurs at high stress ratio (R = Kmin/Kmax). The near-threshold accelerated FCG rates are exacerbated by increased levels of Kmax (Kmax less than 0.4 KIC). Detailed fractographic analysis correlates accelerated FCG with the formation of crack-tip process zone micro-void damage. Experimental results show that the near-threshold and Kmax-influenced accelerated crack growth is time and temperature dependent.

  18. Relationship Between Unusual High-Temperature Fatigue Crack Growth Threshold Behavior in Superalloys and Sudden Failure Mode Transitions

    NASA Technical Reports Server (NTRS)

    Telesman, J.; Smith, T. M.; Gabb, T. P.; Ring, A. J.

    2017-01-01

    An investigation of high temperature cyclic fatigue crack growth (FCG) threshold behavior of two advanced nickel disk alloys was conducted. The focus of the study was the unusual crossover effect in the near-threshold region of these types of alloys, where conditions that produce higher crack growth rates in the Paris regime produce higher resistance to crack growth in the near-threshold regime. It was shown that this crossover effect is associated with a sudden change in the fatigue failure mode from a predominantly transgranular mode in the Paris regime to a fully intergranular mode in the threshold fatigue crack growth region. This type of sudden change in the fracture mechanisms has not been previously reported and is surprising considering that intergranular failure is typically associated with faster crack growth rates, not the slow FCG rates of the near-threshold regime. By characterizing this behavior as a function of test temperature, environment, and cyclic frequency, it was determined that both the crossover effect and the onset of intergranular failure are caused by environmentally driven mechanisms which have not yet been fully identified. A plausible explanation for the observed behavior is proposed.

  19. A Specimen Size Effect on the Fatigue Crack Growth Rate Threshold of IN 718

    NASA Technical Reports Server (NTRS)

    Garr, K. R.; Hresko, G. C., III

    1998-01-01

    Fatigue crack growth rate (FCGR) tests were conducted on IN 718 in the solution annealed and aged condition at room temperature in accordance with ASTM E647-87. As part of each test, the FCGR threshold was measured using the decreasing Delta K method. A new heat of material was being tested, and some of this material was sent to a different laboratory which wanted to use a specimen with a 127 mm width. Threshold data previously had been established on specimens with a width of 50.8 mm. As a check of the laboratory, tests were conducted at room temperature and R equal to 0.1 for comparison with the earlier data. The result was a threshold significantly higher than previously observed. Interchanging of specimen sizes and laboratories showed that the results were not due to a heat-to-heat or lab-to-lab variation. The results presented here are those obtained at the original laboratory. Growth rates were measured using the electric potential drop technique at R values of 0.1, 0.7, and 0.9. Compact tension specimens with planar dimensions of 25.4 mm, 50.8 mm, and 127 mm were used. Crack growth rates at threshold were generally below 2.5 x 10(exp -8) mm/cycle. Closure measurements were made on some of the specimens by a manual procedure using a clip gage. When the crack growth rate data for the specimens tested at R equal to 0.1 were plotted as a function of applied Delta K, the thresholds varied with specimen width: the larger the width, the higher the threshold. The thresholds varied from 6.5 MPa-m(exp 1/2) for the 25.4 mm specimen to 15.4 MPa-m(exp 1/2) for the 127 mm specimen. At R equal to 0.7, the 25.4 mm and 50.8 mm specimens had essentially the same threshold, about 2.9 MPa-m(exp 1/2), while the 127 mm specimen had a threshold of 4.5 MPa-m(exp 1/2). When plotted as a function of effective Delta K, the R equal to 0.1 data are essentially normalized. Various aspects of the test procedure will be discussed, as well as the results of analyzing the data using several different closure models.

  20. Toward a Quantitative Theory of Intellectual Discovery (Especially in Physics).

    ERIC Educational Resources Information Center

    Fowler, Richard G.

    1987-01-01

    Studies time intervals in a list of critical ideas in physics. Infers that the rate of growth of ideas has been proportional to the totality of known ideas multiplied by the totality of people in the world. Indicates that the rate of discovery in physics has been decreasing. (CW)

  1. The voltage threshold for arcing for solar cells in Leo - Flight and ground test results

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.

    1986-01-01

    Ground and flight results of solar cell arcing in low Earth orbit (LEO) conditions are compared and interpreted. It is shown that an apparent voltage threshold for arcing may be produced by a strong power law dependence of arc rate on voltage, combined with a limited observation time. The change in this apparent threshold with plasma density is a reflection of the density dependence of the arc rate. A nearly linear dependence of arc rate on density is inferred from the data. A real voltage threshold for arcing for 2 by 2 cm solar cells may exist, however, independent of plasma density, near -230 V relative to the plasma. Here, arc rates may change by more than an order of magnitude for a change of only 30 V in array potential. For 5.9 by 5.9 cm solar cells, the voltage dependence of the arc rate is steeper, and the data are insufficient to indicate the existence of an arcing threshold. Arc rates are increased by an atomic oxygen plasma, as is found in LEO, and by arcing from the backs of welded-through substrates.

  2. The voltage threshold for arcing for solar cells in LEO: Flight and ground test results

    NASA Technical Reports Server (NTRS)

    Ferguson, D. C.

    1986-01-01

    Ground and flight results of solar cell arcing in low Earth orbit (LEO) conditions are compared and interpreted. It is shown that an apparent voltage threshold for arcing may be produced by a strong power law dependence of arc rate on voltage, combined with a limited observation time. The change in this apparent threshold with plasma density is a reflection of the density dependence of the arc rate. A nearly linear dependence of arc rate on density is inferred from the data. A real voltage threshold for arcing for 2 by 2 cm solar cells may exist, however, independent of plasma density, near -230 V relative to the plasma. Here, arc rates may change by more than an order of magnitude for a change of only 30 V in array potential. For 5.9 by 5.9 cm solar cells, the voltage dependence of the arc rate is steeper, and the data are insufficient to indicate the existence of an arcing threshold. Arc rates are increased by an atomic oxygen plasma, as is found in LEO, and by arcing from the backs of welded-through substrates.

  3. New Perspectives Through Data Discovery and Modeling

    NASA Astrophysics Data System (ADS)

    Vorosmarty, C. J.; McGuire, A. D.; Hinzman, L.; Holland, M.; Murray, M.; Schimel, J.; Warnick, W.; Weatherly, J.; Wiggins, H.

    2007-07-01

    Arctic System Synthesis Workshop, Seattle, Washington, 2-4 April 2007. Dramatic changes in the Arctic have captured the attention of the public. Understanding changes such as the decline of sea ice, the fate of polar bears and other marine mammals, melting of the Greenland ice sheet, changes in the terrestrial carbon budget, and impacts on and feedbacks from the human community requires a new approach to research involving synthesis of linkages between system components and threshold behaviors.

  4. Discovery and structure determination of a novel Maillard-derived sweetness enhancer by application of the comparative taste dilution analysis (cTDA).

    PubMed

    Ottinger, Harald; Soldo, Tomislav; Hofmann, Thomas

    2003-02-12

    Application of a novel screening procedure, the comparative taste dilution analysis (cTDA), to the non-solvent-extractable reaction products formed in a thermally processed aqueous solution of glucose and L-alanine led to the discovery of a sweetness-enhancing Maillard reaction product. Isolation, followed by LC-MS and 1D- and 2D-NMR measurements, and synthesis led to its unequivocal identification as N-(1-carboxyethyl)-6-(hydroxymethyl)pyridinium-3-ol inner salt. This so-called alapyridaine, although tasteless itself, is the first nonvolatile, sweetness-enhancing Maillard reaction product reported in the literature. Depending on the pH value, the detection thresholds of sweet sugars, amino acids, and aspartame were found to be significantly decreased when alapyridaine was present; for example, the threshold of glucose decreased by a factor of 16 in an equimolar mixture of glucose and alapyridaine. Studies on the influence of stereochemistry on taste-enhancing activity revealed that (+)-(S)-alapyridaine is the physiologically active enantiomer, whereas the (-)-(R)-enantiomer did not affect sweetness perception at all. Thermal processing of aqueous solutions of alapyridaine at 80 degrees C demonstrated the high thermal and hydrolytic stability of this sweetness enhancer; for example, more than 90 or 80% of alapyridaine was recovered when heated for 5 h at pH 7.0 or at pH 5.0 and 3.0, respectively.

  5. Construction of Protograph LDPC Codes with Linear Minimum Distance

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  6. Comparison of Ventilatory Measures and 20 km Time Trial Performance.

    PubMed

    Peveler, Willard W; Shew, Brandy; Johnson, Samantha; Sanders, Gabe; Kollock, Roger

    2017-01-01

    Performance threshold measures are used to predict cycling performance. Previous research has focused on long time trials (≥ 40 km), using power at ventilatory threshold and respiratory threshold to estimate time trial performance. Because intensity differs greatly during shorter time trials, applying findings from longer time trials may not be appropriate. The use of heart rate measures to determine 20 km time trial performance has yet to be examined. The purpose of this study was to determine the effectiveness of heart rate measures at ventilatory threshold (VE/VO2 plotted and VT determined by software) and respiratory threshold (RER of 0.95, 1.00, and 1.05) for predicting 20 km time trial performance. Eighteen cyclists completed a VO2max protocol and two 20 km time trials. Average heart rates from the 20 km time trials were compared with heart rates from the performance threshold measures (VT plotted, VT software, and RER at 0.95, 1.00, and 1.05) using repeated measures ANOVA. Significance was set a priori at P ≤ 0.05. The only measure not found to be significantly different from time trial performance was heart rate at an RER of 1.00 (166.61 ± 12.70 bpm vs. 165.89 ± 9.56 bpm, p = 0.671). VT plotting and VT determined by software were found to underestimate time trial performance by 3% and 8%, respectively. From these findings, it is recommended to use heart rate at an RER of 1.00 to determine 20 km time trial intensity.

  7. Projections for Achieving the Lancet Commission Recommended Surgical Rate of 5000 Operations per 100,000 Population by Region-Specific Surgical Rate Estimates.

    PubMed

    Uribe-Leitz, Tarsicio; Esquivel, Micaela M; Molina, George; Lipsitz, Stuart R; Verguet, Stéphane; Rose, John; Bickler, Stephen W; Gawande, Atul A; Haynes, Alex B; Weiser, Thomas G

    2015-09-01

    We previously identified a range of 4344-5028 annual operations per 100,000 people to be related to desirable health outcomes. From this and other evidence, the Lancet Commission on Global Surgery recommends a minimum rate of 5000 operations per 100,000 people. We evaluate rates of growth and estimate the time it will take to reach this minimum surgical rate threshold. We aggregated country-level surgical rate estimates from 2004 to 2012 into the twenty-one Global Burden of Disease (GBD) regions. We calculated mean rates of surgery proportional to population size for each year and assessed the rate of growth over time. We then extrapolated the time it will take each region to reach a surgical rate of 5000 operations per 100,000 population based on linear rates of change. All but two regions experienced growth in their surgical rates during the past 8 years. Fourteen regions did not meet the recommended threshold in 2012. If surgical capacity continues to grow at current rates, seven regions will not meet the threshold by 2035. Eastern Sub-Saharan Africa will not reach the recommended threshold until 2124. The rates of growth in surgical service delivery are exceedingly variable. At current rates of surgical and population growth, 6.2 billion people (73% of the world's population) will be living in countries below the minimum recommended rate of surgical care in 2035. A strategy for strengthening surgical capacity is essential if these targets are to be met in a timely fashion as part of the integrated health system development.
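
    The projection used here is a straight-line extrapolation: given a region's current surgical rate and its annual growth, compute the year in which the rate reaches 5000 operations per 100,000 population. A minimal sketch of that arithmetic follows; the function name and the example numbers are hypothetical, not figures from the study.

```python
def year_reaching_threshold(rate_now, annual_growth, year_now=2012, threshold=5000):
    """Year in which a linearly extrapolated surgical rate (operations
    per 100,000 population) reaches the threshold; None if growth is
    zero or negative, since the threshold is then never reached."""
    if annual_growth <= 0:
        return None
    years_needed = (threshold - rate_now) / annual_growth
    return year_now + max(0.0, years_needed)
```

    For example, a region at 3000 operations per 100,000 growing by 100 per year would reach the threshold in 2032 under this projection, while a region with flat or shrinking capacity never reaches it at all, which is why regional growth rates matter as much as current levels.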

  8. T wave alternans during exercise and atrial pacing in humans

    NASA Technical Reports Server (NTRS)

    Hohnloser, S. H.; Klingenheben, T.; Zabel, M.; Li, Y. G.; Albrecht, P.; Cohen, R. J.

    1997-01-01

    INTRODUCTION: Evidence is accumulating that microvolt T wave alternans (TWA) is a marker of increased risk for ventricular tachyarrhythmias. Initially, atrial pacing was used to elevate heart rate and elicit TWA. More recently, a noninvasive approach has been developed that elevates heart rate using exercise. METHODS AND RESULTS: In 30 consecutive patients with a history of ventricular tachyarrhythmias, the spectral method was used to detect TWA during both atrial pacing and submaximal exercise testing. The concordance rate for the presence or absence of TWA using the two measurement methods was 84%. There was a patient-specific heart rate threshold for the detection of TWA that averaged 100 +/- 14 beats/min during exercise compared with 97 +/- 9 beats/min during right atrial pacing (P = NS). Beyond this threshold, there was a significant and comparable increase in the level of TWA with decreasing pacing cycle length and increasing exercise heart rates. CONCLUSIONS: The present study is the first to demonstrate that microvolt TWA can be assessed reliably and noninvasively during exercise stress. There is a patient-specific heart rate threshold beyond which TWA continues to increase with increasing heart rates. Heart rate thresholds for the onset of TWA measured during atrial pacing and exercise stress were comparable, indicating that heart rate alone appears to be the main factor determining the onset of TWA during submaximal exercise stress.

  9. Immediate effects of 33 to 180 rad/min (60)Co exposure on performance and blood pressure in monkeys. Topical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruner, A.

    1976-09-01

    Four groups of monkeys received 1000 rads of (60)Co whole-body irradiation at 33, 50, 75, or 180 rad/min while performing a delayed matching-to-sample task. Systematic dose rate effects were observed on performance and blood pressure within the initial 20 min postirradiation. The incidence and severity of performance decrement (PD) increased with higher dose rate. The appearance of postirradiation hypotension was systematically delayed, and its rate of fall prolonged, at lower dose rates. The hypotension was likewise less deep with lower dose rate exposure. Based on the cumulative dose calculated to have been absorbed at the time of symptom appearance, two coactive thresholds were proposed to exist: a total dose threshold of approximately 300 rads (midbody measurement), and a dose rate threshold of about 25 rad/min.

  10. Separate class true discovery rate degree of association sets for biomarker identification.

    PubMed

    Crager, Michael R; Ahmed, Murat

    2014-01-01

    In 2008, Efron showed that biological features in a high-dimensional study can be divided into classes and a separate false discovery rate (FDR) analysis can be conducted in each class using information from the entire set of features to assess the FDR within each class. We apply this separate class approach to true discovery rate degree of association (TDRDA) set analysis, which is used in clinical-genomic studies to identify sets of biomarkers having strong association with clinical outcome or state while controlling the FDR. Careful choice of classes based on prior information can increase the identification power of the separate class analysis relative to the overall analysis.
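
    The separate class idea can be illustrated with a simplified sketch: run a false discovery rate procedure within each feature class rather than once over all features. Note that this stand-in uses the Benjamini-Hochberg step-up rule within each class, whereas Efron's separate-class analysis is based on local false discovery rates and borrows information from the entire feature set; the function names are hypothetical.

```python
import numpy as np

def bh_discoveries(pvals, q):
    """Benjamini-Hochberg step-up rule: boolean mask of discoveries
    controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    mask = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])  # largest rank passing the rule
        mask[order[: k + 1]] = True
    return mask

def separate_class_fdr(pvals, classes, q):
    """Apply the FDR procedure separately within each feature class and
    combine the per-class discovery masks (simplified separate-class
    analysis)."""
    p = np.asarray(pvals, dtype=float)
    cls = np.asarray(classes)
    mask = np.zeros(len(p), dtype=bool)
    for c in np.unique(cls):
        idx = np.nonzero(cls == c)[0]
        mask[idx] = bh_discoveries(p[idx], q)
    return mask
```

    Because the rule is applied per class, a class enriched with strong signals can yield discoveries even when a signal-free class would dilute them in a pooled analysis, which is the source of the identification power the record describes.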

  11. Performance of a web-based, realtime, tele-ultrasound consultation system over high-speed commercial telecommunication lines.

    PubMed

    Yoo, Sun K; Kim, D K; Jung, S M; Kim, E-K; Lim, J S; Kim, J H

    2004-01-01

    A Web-based, realtime, tele-ultrasound consultation system was designed. The system employed ActiveX control, MPEG-4 coding of full-resolution ultrasound video (640 x 480 pixels at 30 frames/s) and H.320 videoconferencing. It could be used via a Web browser. The system was evaluated over three types of commercial line: a cable connection, ADSL and VDSL. Three radiologists assessed the quality of compressed and uncompressed ultrasound video-sequences from 16 cases (10 abnormal livers, four abnormal kidneys and two abnormal gallbladders). The radiologists' scores showed that, at a given frame rate, increasing the bit rate was associated with increasing quality; however, at a certain threshold bit rate the quality did not increase significantly. The peak signal to noise ratio (PSNR) was also measured between the compressed and uncompressed images. In most cases, the PSNR increased as the bit rate increased, and increased as the number of dropped frames increased. There was a threshold bit rate, at a given frame rate, at which the PSNR did not improve significantly. Taking into account both sets of threshold values, a bit rate of more than 0.6 Mbit/s, at 30 frames/s, is suggested as the threshold for the maintenance of diagnostic image quality.
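
    The PSNR figure of merit used in this evaluation is standard: it compares a compressed frame against the uncompressed reference through the mean squared error. A minimal sketch follows, assuming 8-bit video (peak value 255); the function name is illustrative.

```python
import numpy as np

def psnr(reference, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an uncompressed reference
    frame and its compressed version; infinite for identical frames."""
    ref = np.asarray(reference, dtype=np.float64)
    comp = np.asarray(compressed, dtype=np.float64)
    mse = np.mean((ref - comp) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)
```

    Higher bit rates reduce the mean squared error of each decoded frame, so PSNR rises with bit rate until the plateau the radiologists' scores also showed.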

  12. Normative behavioral thresholds for short tone-bursts.

    PubMed

    Beattie, R C; Rochverger, I

    2001-10-01

    Although tone-bursts have been commonly used in auditory brainstem response (ABR) evaluations for many years, national standards describing normal calibration values have not been established. This study was designed to gather normative threshold data to establish a physical reference for tone-burst stimuli that can be reproduced across clinics and laboratories. More specifically, we obtained norms for 3-msec tone-bursts presented at two repetition rates (9.3/sec and 39/sec), two gating functions (Trapezoid and Blackman), and four frequencies (500, 1000, 2000, and 4000 Hz). Our results are specified using three physical references: dB peak sound pressure level, dB peak-to-peak equivalent sound pressure level, and dB SPL (fast meter response, rate = 50 stimuli/sec). These data are offered for consideration when calibrating ABR equipment. The 39/sec stimulus rate yielded tone-burst thresholds that were approximately 3 dB lower than the 9.3/sec rate. The improvement in threshold with increasing stimulus rate may reflect the ability of the auditory system to integrate energy that occurs within a time interval of 200 to 500 msec (temporal integration). The Trapezoid gating function yielded thresholds that averaged 1.4 dB lower than the Blackman function. Although these differences are small and of little clinical importance, the cumulative effects of several instrument and/or procedural variables may yield clinically important differences.

  13. Evidence Accumulator or Decision Threshold – Which Cortical Mechanism are We Observing?

    PubMed Central

    Simen, Patrick

    2012-01-01

    Most psychological models of perceptual decision making are of the accumulation-to-threshold variety. The neural basis of accumulation in parietal and prefrontal cortex is therefore a topic of great interest in neuroscience. In contrast, threshold mechanisms have received less attention, and their neural basis has usually been sought in subcortical structures. Here I analyze a model of a decision threshold that can be implemented in the same cortical areas as evidence accumulators, and whose behavior bears on two open questions in decision neuroscience: (1) When ramping activity is observed in a brain region during decision making, does it reflect evidence accumulation? (2) Are changes in speed-accuracy tradeoffs and response biases more likely to be achieved by changes in thresholds, or in accumulation rates and starting points? The analysis suggests that task-modulated ramping activity, by itself, is weak evidence that a brain area mediates evidence accumulation as opposed to threshold readout; and that signs of modulated accumulation are as likely to indicate threshold adaptation as adaptation of starting points and accumulation rates. These conclusions imply that how thresholds are modeled can dramatically impact accumulator-based interpretations of this data. PMID:22737136
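
    The accumulation-to-threshold family of models discussed here can be sketched as a drift-diffusion process: noisy evidence accumulates from a starting point until it crosses an upper or lower bound. The simulation below is a generic illustration of the speed-accuracy tradeoff such models predict (raising the threshold slows decisions but makes them more accurate); the parameter names and values are hypothetical, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_decision(drift, threshold, start=0.0, noise=1.0, dt=0.005, max_t=5.0):
    """One drift-diffusion trial: evidence drifts upward at `drift` with
    Gaussian noise until it crosses +threshold (choice 1) or -threshold
    (choice 0). Returns (choice, decision time); choice is None if no
    bound is crossed within max_t."""
    x, t = start, 0.0
    step_sd = noise * np.sqrt(dt)
    while t < max_t:
        x += drift * dt + rng.normal(0.0, step_sd)
        t += dt
        if x >= threshold:
            return 1, t
        if x <= -threshold:
            return 0, t
    return None, max_t
```

    Holding the drift fixed while raising the threshold increases both accuracy and mean decision time, which is why the article argues that observed changes in speed-accuracy tradeoffs cannot by themselves distinguish threshold adaptation from changes in accumulation rates or starting points.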

  14. The Use of Physiology-Based Pharmacokinetic and Pharmacodynamic Modeling in the Discovery of the Dual Orexin Receptor Antagonist ACT-541468.

    PubMed

    Treiber, Alexander; de Kanter, Ruben; Roch, Catherine; Gatfield, John; Boss, Christoph; von Raumer, Markus; Schindelholz, Benno; Muehlan, Clemens; van Gerven, Joop; Jenck, Francois

    2017-09-01

    The identification of new sleep drugs poses particular challenges in drug discovery owing to disease-specific requirements such as rapid onset of action, sleep maintenance throughout major parts of the night, and absence of residual next-day effects. Robust tools to estimate drug levels in human brain are therefore key for a successful discovery program. Animal models constitute an appropriate choice for drugs without species differences in receptor pharmacology or pharmacokinetics. Translation to man becomes more challenging when interspecies differences are prominent. This report describes the discovery of the dual orexin receptor 1 and 2 (OX1 and OX2) antagonist ACT-541468 out of a class of structurally related compounds, by use of physiology-based pharmacokinetic and pharmacodynamic (PBPK-PD) modeling applied early in drug discovery. Although all drug candidates exhibited similar target receptor potencies and efficacy in a rat sleep model, they exhibited large interspecies differences in key factors determining their pharmacokinetic profile. Human PK models were built on the basis of in vitro metabolism and physicochemical data and were then used to predict the time course of OX2 receptor occupancy in brain. An active ACT-541468 dose of 25 mg was estimated on the basis of OX2 receptor occupancy thresholds of about 65% derived from clinical data for two other orexin antagonists, almorexant and suvorexant. Modeling predictions for ACT-541468 in man were largely confirmed in a single-ascending dose trial in healthy subjects. PBPK-PD modeling applied early in drug discovery, therefore, has great potential to assist in the identification of drug molecules when specific pharmacokinetic and pharmacodynamic requirements need to be met. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.

  15. Examination of motor unit control properties of the vastus lateralis in an individual that had acute paralytic poliomyelitis.

    PubMed

    Herda, Trent J; Cooper, Michael A

    2014-08-01

    The purpose of the study was to examine motor unit (MU) recruitment and derecruitment thresholds and firing rates of the vastus lateralis between 2 healthy (HE) individuals (women, ages = 19 and 23 years) and 1 individual (man, age = 22 years) who acquired acute poliomyelitis (PO). Each participant performed submaximal isometric trapezoid muscle actions of the leg extensors from 20% to 90% maximal voluntary contraction in 10% increments with a sensor placed on the vastus lateralis to record electromyography. Electromyographic signals were decomposed into the firing events of single MUs. Linear regressions were performed on the firing rates at recruitment and peak firing rates versus the recruitment thresholds and the derecruitment versus recruitment thresholds. In addition, data were pooled together from all contractions to examine differences between PO and HE with independent samples t-tests calculated for firing rates at recruitment, peak firing rates, recruitment thresholds, derecruitment thresholds, and duration of MU activity. The results demonstrated systematic differences in MU control strategies between the PO and HE. There were differences in the recruitment thresholds (P < 0.001; HE = 30.5% ± 22.2% maximal voluntary contraction; PO = 14.5% ± 5.0% maximal voluntary contraction), firing rates at recruitment (P < 0.001; HE = 7.4 ± 2.5 pulses per second; PO = 6.2 ± 1.7 pulses per second) and peak firing rates across the force spectrum (P = 0.001; HE = 22.2 ± 5.8 pulses per second; PO = 20.3 ± 2.3 pulses per second), altered derecruitment versus recruitment relationships (HE slope = 0.82 derec/rec, PO slope = 1.78 derec/rec), and duration of MU activity (P < 0.001) between the PO (18.6 ± 2.4 seconds) and HE (15.3 ± 3.0 seconds). Future research should examine the possible differences in MU behavior between PO and HE as a result of fatigue to further elucidate disease-related changes in MU properties.

  16. Accelerated Threshold Fatigue Crack Growth Effect-Powder Metallurgy Aluminum Alloy

    NASA Technical Reports Server (NTRS)

    Piascik, R. S.; Newman, J. A.

    2002-01-01

    Fatigue crack growth (FCG) research conducted in the near threshold regime has identified a room temperature creep crack growth damage mechanism for a fine grain powder metallurgy (PM) aluminum alloy (8009). At very low ΔK, an abrupt acceleration in room temperature FCG rate occurs at high stress ratio (R = Kmin/Kmax). The near threshold accelerated FCG rates are exacerbated by increased levels of Kmax (Kmax = 0.4 KIC). Detailed fractographic analysis correlates accelerated FCG with the formation of crack-tip process zone micro-void damage. Experimental results show that the near threshold and Kmax influenced accelerated crack growth is time and temperature dependent.

  17. 77 FR 31050 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-24

    ...-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing and Immediate Effectiveness of Proposed Rule... threshold qualifications and corresponding rates applicable to Option Trading Permit (``OTP'') Holder and... restructure the threshold qualifications and corresponding rates applicable to OTP Holder and OTP Firm...

  18. Fitness Load and Exercise Time in Secondary Physical Education Classes.

    ERIC Educational Resources Information Center

    Li, Xiao Jun; Dunham, Paul, Jr.

    1993-01-01

    Investigates the effect of secondary school physical education on fitness load: the product of the mean heart rate above threshold (144 bpm) and the time duration of heart rate above that threshold. Highly and moderately skilled students achieved fitness load more frequently than their lower skilled colleagues. (GLR)
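
    The fitness-load measure described above (mean heart rate above the 144 bpm threshold multiplied by the time spent above it) can be computed directly; the heart-rate series and sampling interval below are hypothetical:

```python
THRESHOLD_BPM = 144  # fitness-load threshold used in the study

def fitness_load(hr_samples, sample_interval_min):
    """Mean heart rate above THRESHOLD_BPM times the time (minutes)
    spent above it. `hr_samples` are bpm readings at a fixed interval."""
    above = [hr for hr in hr_samples if hr > THRESHOLD_BPM]
    if not above:
        return 0.0
    time_above = len(above) * sample_interval_min
    return (sum(above) / len(above)) * time_above

# Hypothetical 1-minute samples from a 10-minute activity bout:
hr = [120, 138, 150, 156, 160, 158, 149, 140, 132, 125]
load = fitness_load(hr, sample_interval_min=1.0)
```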

  19. Family nurture intervention in preterm infants increases early development of cortical activity and independence of regional power trajectories.

    PubMed

    Welch, Martha G; Stark, Raymond I; Grieve, Philip G; Ludwig, Robert J; Isler, Joseph R; Barone, Joseph L; Myers, Michael M

    2017-12-01

    Premature delivery and maternal separation during hospitalisation increase infant neurodevelopmental risk. Previously, a randomised controlled trial of Family Nurture Intervention (FNI) in the neonatal intensive care unit demonstrated improvement across multiple mother and infant domains including increased electroencephalographic (EEG) power in the frontal polar region at term age. New aims were to quantify developmental changes in EEG power in all brain regions and frequencies and correlate developmental changes in EEG power among regions. EEG (128 electrodes) was obtained at 34-44 weeks postmenstrual age from preterm infants born 26-34 weeks. Forty-four infants were treated with Standard Care and 53 with FNI. EEG power was computed in 10 frequency bands (1-48 Hz) in 10 brain regions and in active and quiet sleep. Percent change/week in EEG power was increased in FNI in 132/200 tests (p < 0.05), 117 tests passed a 5% False Discovery Rate threshold. In addition, FNI demonstrated greater regional independence in those developmental rates of change. This study strengthens the conclusion that FNI promotes cerebral cortical development of preterm infants. The findings indicate that developmental changes in EEG may provide biomarkers for risk in preterm infants as well as proximal markers of effects of FNI. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
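
    A 5% False Discovery Rate threshold over many tests, such as the 200 regional/frequency comparisons here, is conventionally computed with the Benjamini-Hochberg step-up procedure; a minimal sketch (the abstract does not state which FDR procedure was used, and the p-values below are toy values):

```python
def bh_reject(pvalues, q=0.05):
    """Benjamini-Hochberg procedure: return a boolean list marking which
    of the m tests pass the false-discovery-rate threshold q. A test is
    rejected if its p-value ranks at or below the largest rank i with
    p_(i) <= q * i / m."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= q * rank / m:
            k = rank
    reject = [False] * m
    for idx in order[:k]:
        reject[idx] = True
    return reject

# Toy example: 3 small p-values among 10 tests at a 5% FDR.
pvals = [0.001, 0.008, 0.012, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
flags = bh_reject(pvals, q=0.05)
```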

  20. The Neural Substrate for Binaural Masking Level Differences in the Auditory Cortex

    PubMed Central

    Gilbert, Heather J.; Krumbholz, Katrin; Palmer, Alan R.

    2015-01-01

    The binaural masking level difference (BMLD) is a phenomenon whereby a signal that is identical at each ear (S0), masked by a noise that is identical at each ear (N0), can be made 12–15 dB more detectable by inverting the waveform of either the tone or noise at one ear (Sπ, Nπ). Single-cell responses to BMLD stimuli were measured in the primary auditory cortex of urethane-anesthetized guinea pigs. Firing rate was measured as a function of signal level of a 500 Hz pure tone masked by low-passed white noise. Responses were similar to those reported in the inferior colliculus. At low signal levels, the response was dominated by the masker. At higher signal levels, firing rate either increased or decreased. Detection thresholds for each neuron were determined using signal detection theory. Few neurons yielded measurable detection thresholds for all stimulus conditions, with a wide range in thresholds. However, across the entire population, the lowest thresholds were consistent with human psychophysical BMLDs. As in the inferior colliculus, the shape of the firing-rate versus signal-level functions depended on the neurons' selectivity for interaural time difference. Our results suggest that, in cortex, BMLD signals are detected from increases or decreases in the firing rate, consistent with predictions of cross-correlation models of binaural processing and that the psychophysical detection threshold is based on the lowest neural thresholds across the population. PMID:25568115

  1. Tectonic uplift, threshold hillslopes, and denudation rates in a developing mountain range

    USGS Publications Warehouse

    Binnie, S.A.; Phillips, W.M.; Summerfield, M.A.; Fifield, L.K.

    2007-01-01

    Studies across a broad range of drainage basins have established a positive correlation between mean slope gradient and denudation rates. It has been suggested, however, that this relationship breaks down for catchments where slopes are at their threshold angle of stability because, in such cases, denudation is controlled by the rate of tectonic uplift through the rate of channel incision and frequency of slope failure. This mechanism is evaluated for the San Bernardino Mountains, California, a nascent range that incorporates both threshold hill-slopes and remnants of pre-uplift topography. Concentrations of in situ-produced cosmogenic 10Be in alluvial sediments are used to quantify catchment-wide denudation rates and show a broadly linear relationship with mean slope gradient up to ~30°; above this value denudation rates vary substantially for similar mean slope gradients. We propose that this decoupling in the slope gradient-denudation rate relationship marks the emergence of threshold topography and coincides with the transition from transport-limited to detachment-limited denudation. The survival in the San Bernardino Mountains of surfaces formed prior to uplift provides information on the topographic evolution of the range, in particular the transition from slope-gradient-dependent rates of denudation to a regime where denudation rates are controlled by rates of tectonic uplift. This type of transition may represent a general model for the denudational response to orogenic uplift and topographic evolution during the early stages of mountain building. © 2007 The Geological Society of America.
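
    The conversion from a catchment-averaged cosmogenic 10Be concentration to a denudation rate typically uses the standard steady-state scaling sketched below; the production rate, attenuation length, and density values are illustrative textbook numbers, not the study's:

```python
import math

LAMBDA_10BE = math.log(2) / 1.387e6   # 10Be decay constant, 1/yr
ATT_LENGTH = 160.0                    # spallation attenuation length, g/cm^2
RHO = 2.7                             # rock density, g/cm^3

def denudation_rate(N, P):
    """Denudation rate (cm/yr) from measured concentration N (atoms/g
    quartz) and production rate P (atoms/g/yr), assuming steady-state
    erosion: N = P / (lambda + rho * eps / Lambda)."""
    return (ATT_LENGTH / RHO) * (P / N - LAMBDA_10BE)

eps = denudation_rate(N=2.0e5, P=10.0)   # hypothetical inputs
mm_per_kyr = eps * 1.0e4                 # cm/yr -> mm/kyr
```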

  2. Thermotactile perception thresholds measurement conditions.

    PubMed

    Maeda, Setsuo; Sakakibara, Hisataka

    2002-10-01

    The purpose of this paper is to investigate the effects of posture, push force and rate of temperature change on thermotactile thresholds and to clarify suitable measuring conditions for Japanese people. Thermotactile (warm and cold) thresholds on the right middle finger were measured with an HVLab thermal aesthesiometer. Subjects were eight healthy male Japanese students. The effects of posture in measurement were examined in the posture of a straight hand and forearm placed on a support, the same posture without a support, and the fingers and hand flexed at the wrist with the elbow placed on a desk. The finger push force applied to the applicator of the thermal aesthesiometer was controlled at 0.5, 1.0, 2.0 and 3.0 N. The applicator temperature was changed at rates of 0.5, 1.0, 1.5, 2.0 and 2.5 degrees C/s. After each measurement, subjects were asked about comfort under the measuring conditions. Three series of experiments were conducted on different days to evaluate repeatability. Repeated measures ANOVA showed that warm thresholds were affected by the push force and the rate of temperature change and that cold thresholds were influenced by posture and push force. The comfort assessment indicated that the measurement posture of a straight hand and forearm laid on a support was the most comfortable for the subjects. Relatively high repeatability was obtained under measurement conditions of a 1.0 degrees C/s rate of temperature change and a 0.5 N push force. Measurement posture, push force and rate of temperature change can affect the thermal threshold. Judging from the repeatability, a push force of 0.5 N and a rate of temperature change of 1.0 degrees C/s, in the posture with the straight hand and forearm laid on a support, are recommended for warm and cold threshold measurements.

  3. Comparison of body mass index, waist circumference, and waist to height ratio in the prediction of hypertension and diabetes mellitus: Filipino-American women cardiovascular study.

    PubMed

    Battie, Cynthia A; Borja-Hart, Nancy; Ancheta, Irma B; Flores, Rene; Rao, Goutham; Palaniappan, Latha

    2016-12-01

    The relative ability of three obesity indices to predict hypertension (HTN) and diabetes (DM), and the validity of using Asian-specific thresholds of these indices, were examined in Filipino-American women (FAW). Filipino-American women (n = 382), 40-65 years of age, were screened for hypertension (HTN) and diabetes (DM) in four major US cities. Body mass index (BMI), waist circumference (WC) and waist circumference to height ratio (WHtR) were measured. ROC analyses determined that the three obesity measurements were similar in predicting HTN and DM (AUC: 0.6-0.7). The universal WC threshold of ≥ 35 in. missed 13% of the hypertensive patients and 12% of the diabetic patients. The Asian WC threshold of ≥ 31.5 in. increased detection of HTN and DM, but with a high rate of false positives. The traditional BMI ≥ 25 kg/m2 threshold missed 35% of those with hypertension and 24% of those with diabetes. The Asian BMI threshold improved detection but resulted in a high rate of false positives. The suggested WHtR cut-off of ≥ 0.5 missed only 1% of those with HTN and 0% of those with DM. The three obesity measurements had similar but modest ability to predict HTN and DM in FAW. Using Asian-specific thresholds increased accuracy but with a high rate of false positives. Whether FAW, especially at older ages, should be encouraged to reach these lower thresholds needs further investigation because of the high false positive rates.
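
    The trade-off reported here, where a lower cut-off misses fewer cases but produces more false positives, can be sketched as follows; the waist-circumference values and hypertension labels are hypothetical:

```python
def threshold_rates(values, has_condition, cutoff):
    """For a screening cut-off, return (miss_rate, false_positive_rate):
    the fraction of true cases below the cut-off (missed) and the
    fraction of non-cases at or above it (false positives)."""
    cases = [v for v, d in zip(values, has_condition) if d]
    noncases = [v for v, d in zip(values, has_condition) if not d]
    miss = sum(1 for v in cases if v < cutoff) / len(cases)
    fp = sum(1 for v in noncases if v >= cutoff) / len(noncases)
    return miss, fp

# Hypothetical waist circumferences (inches) and hypertension status:
wc = [30, 33, 34, 36, 38, 29, 31, 32, 36, 40]
htn = [False, False, True, True, True, False, False, False, False, True]
miss_35, fp_35 = threshold_rates(wc, htn, cutoff=35.0)      # universal cut-off
miss_315, fp_315 = threshold_rates(wc, htn, cutoff=31.5)    # Asian cut-off
# Lowering the cut-off catches more cases but raises the false-positive rate.
```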

  4. Threshold analysis of reimbursing physicians for the application of fluoride varnish in young children.

    PubMed

    Hendrix, Kristin S; Downs, Stephen M; Brophy, Ginger; Carney Doebbeling, Caroline; Swigonski, Nancy L

    2013-01-01

    Most state Medicaid programs reimburse physicians for providing fluoride varnish, yet the only published studies of cost-effectiveness do not show cost-savings. Our objective is to apply state-specific claims data to an existing published model to quickly and inexpensively estimate the cost-savings of a policy consideration to better inform decisions - specifically, to assess whether Indiana Medicaid children's restorative service rates met the threshold to generate cost-savings. Threshold analysis was based on the 2006 model by Quiñonez et al. Simple calculations were used to "align" the Indiana Medicaid data with the published model. Quarterly likelihoods that a child would receive treatment for caries were annualized. The probability of a tooth developing a cavitated lesion was multiplied by the probability of using restorative services. Finally, this rate of restorative services given cavitation was multiplied by 1.5 to generate the threshold to attain cost-savings. Restorative services utilization rates, extrapolated from available Indiana Medicaid claims, were compared with these thresholds. For children 1-2 years old, restorative services utilization was 2.6 percent, which was below the 5.8 percent threshold for cost-savings. However, for children 3-5 years of age, restorative services utilization was 23.3 percent, exceeding the 14.5 percent threshold that suggests cost-savings. Combining a published model with state-specific data, we were able to quickly and inexpensively demonstrate that restorative service utilization rates for children 36 months and older in Indiana are high enough that fluoride varnish regularly applied by physicians to children starting at 9 months of age could save Medicaid funds over a 3-year horizon. © 2013 American Association of Public Health Dentistry.
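
    The threshold arithmetic described above fits in a few lines. The input probabilities below are hypothetical values chosen only to land near the quoted 14.5% threshold; the actual model inputs are in Quiñonez et al. (2006):

```python
def cost_savings_threshold(p_cavity, p_restore_given_cavity, margin=1.5):
    """Threshold restorative-service utilization rate above which varnish
    is projected to save money, per the alignment described above: the
    probability of a cavitated lesion times the probability of using
    restorative services given cavitation, times 1.5."""
    return p_cavity * p_restore_given_cavity * margin

# Hypothetical inputs reproducing the 14.5% threshold for 3-5-year-olds:
threshold = cost_savings_threshold(p_cavity=0.1932, p_restore_given_cavity=0.50)
observed = 0.233          # utilization extrapolated from Indiana claims
saves_money = observed > threshold
```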

  5. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test

    PubMed Central

    Ham, Joo-ho; Park, Hun-Young; Kim, Youn-ho; Bae, Sang-kon; Ko, Byung-hoon

    2017-01-01

    [Purpose] The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. [Methods] We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20–59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. [Results] Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. [Conclusion] These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. PMID:29036765

  6. Development of an anaerobic threshold (HRLT, HRVT) estimation equation using the heart rate threshold (HRT) during the treadmill incremental exercise test.

    PubMed

    Ham, Joo-Ho; Park, Hun-Young; Kim, Youn-Ho; Bae, Sang-Kon; Ko, Byung-Hoon; Nam, Sang-Seok

    2017-09-30

    The purpose of this study was to develop a regression model to estimate the heart rate at the lactate threshold (HRLT) and the heart rate at the ventilatory threshold (HRVT) using the heart rate threshold (HRT), and to test the validity of the regression model. We performed a graded exercise test with a treadmill in 220 normal individuals (men: 112, women: 108) aged 20-59 years. HRT, HRLT, and HRVT were measured in all subjects. A regression model was developed to estimate HRLT and HRVT using HRT with 70% of the data (men: 79, women: 76) through randomization (7:3), with the Bernoulli trial. The validity of the regression model developed with the remaining 30% of the data (men: 33, women: 32) was also examined. Based on the regression coefficient, we found that the independent variable HRT was a significant variable in all regression models. The adjusted R2 of the developed regression models averaged about 70%, and the standard error of estimation of the validity test results was 11 bpm, which is similar to that of the developed model. These results suggest that HRT is a useful parameter for predicting HRLT and HRVT. ©2017 The Korean Society for Exercise Nutrition
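
    The single-predictor regression used here (estimating HRLT from HRT) can be sketched with ordinary least squares; the data below are synthetic, with scatter chosen to mimic the ~11 bpm standard error of estimation, and the fitted coefficients are not those of the published model:

```python
import random

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x with one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Synthetic training data standing in for measured (HRT, HRLT) pairs:
random.seed(0)
hrt = [random.uniform(120, 180) for _ in range(150)]
hrlt = [10 + 0.9 * t + random.gauss(0, 11) for t in hrt]   # ~11 bpm scatter
a, b = fit_line(hrt, hrlt)
predicted_hrlt = a + b * 155   # estimate for a subject with HRT = 155 bpm
```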

  7. Hydroxyisohexyl 3-cyclohexene carboxaldehyde allergy: relationship between patch test and repeated open application test thresholds.

    PubMed

    Fischer, L A; Menné, T; Avnstorp, C; Kasting, G B; Johansen, J D

    2009-09-01

    Hydroxyisohexyl 3-cyclohexene carboxaldehyde (HICC) is a synthetic fragrance ingredient. Case reports of allergy to HICC appeared in the 1980s, and HICC has recently been included in the European baseline series. Human elicitation dose-response studies performed with different allergens have shown a significant relationship between the patch-test threshold and the repeated open application test (ROAT) threshold, which mimics some real-life exposure situations. Fragrance ingredients are special in that significant amounts of allergen may evaporate from the skin. The study aimed to investigate the relationship between elicitation threshold doses in the patch test and the ROAT, using HICC as the allergen. The expected evaporation rate was calculated. Seventeen HICC-allergic persons were tested with a dilution series of HICC in a patch test and a ROAT (duration up to 21 days). Seventeen persons with no HICC allergy were included as a control group for the ROAT. The response frequency to the ROAT (in microg HICC cm(-2) per application) was significantly higher than the response frequency to the patch test at one of the tested doses. Furthermore, the response rate to the accumulated ROAT dose was significantly lower at half of the doses compared with the patch test. The evaporation rate of HICC was calculated to be 72% over a 24-h period. The ROAT threshold in dose per area per application is lower than the patch test threshold; furthermore, the accumulated ROAT threshold is higher than the patch test threshold, which can probably be explained by the evaporation of HICC from the skin in the open test.

  8. Comparison of Intrinsic Rate of Different House Fly Densities in a Simulated Condition: A Prediction for House Fly Population and Control Threshold.

    PubMed

    Ong, Song-Quan; Ahmad, Hamdan; Jaal, Zairi; Rus, Adanan; Fadzlah, Fadhlina Hazwani Mohd

    2017-01-01

    Determining the control threshold for a pest is common prior to initiating a pest control program; however, previous studies related to the house fly control threshold for a poultry farm are insufficient for determining such a threshold. This study aimed to predict changes in the house fly population by comparing the intrinsic rate of increase (rm) for different house fly densities in a simulated system. This study first defined the knee points of a known population growth curve as a control threshold by comparing the rm of five densities of house flies in a simulated condition. Later, to understand the interactions between the larval and adult populations, the correlation between larval and adult capacity rate (rc) was studied. The rm values of 300- and 500-fly densities were significantly higher compared with the rm values at densities of 50 and 100 flies. This result indicated these densities as candidate indices for a control threshold. The rc of larval and adult populations were negatively correlated at densities of fewer than 300 flies; this implicated adult populations with fewer than 300 flies as declining while the larval population was growing; therefore, control approaches should focus on the immature stages. The results of the present study suggest a control threshold for house fly populations. Future work should focus on calibrating the threshold indices in field conditions. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Predicting coral bleaching hotspots: the role of regional variability in thermal stress and potential adaptation rates

    NASA Astrophysics Data System (ADS)

    Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.

    2012-03-01

    Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects bleaching predictions, and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show that the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates, as well as potential adaptation models, in future coral bleaching projections.
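
    A minimal sketch of a Degree-Heating-Month accumulation, assuming that monthly SST anomalies above the climatological maximum count only when they meet the stress threshold and accumulate over a rolling 4-month window (the window length is an assumption, not stated in the abstract; the SST series is hypothetical):

```python
def degree_heating_months(sst_monthly, clim_max, threshold=1.0, window=4):
    """Degree-Heating-Month index: monthly SST anomalies above the
    climatological maximum `clim_max` accumulate over a rolling window
    whenever the anomaly meets the stress threshold (1 degree C here,
    or a variability-based 2-sigma value as in the study)."""
    anom = [t - clim_max for t in sst_monthly]
    hot = [a if a >= threshold else 0.0 for a in anom]
    return [sum(hot[max(0, i - window + 1): i + 1]) for i in range(len(hot))]

# Hypothetical monthly SSTs around a 29.0 C climatological maximum:
sst = [28.5, 29.2, 30.1, 30.4, 29.6, 28.8]
dhm = degree_heating_months(sst, clim_max=29.0)
```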

  10. Beneficial laggards: multilevel selection, cooperative polymorphism and division of labour in threshold public good games

    PubMed Central

    2010-01-01

    Background: The origin and stability of cooperation is a hot topic in social and behavioural sciences. A complicated conundrum exists, as defectors have an advantage over cooperators whenever cooperation is costly, so not cooperating pays off. In addition, the discovery that humans and some animal populations, such as lions, are polymorphic, with cooperators and defectors stably living together while defectors go unpunished, is even more puzzling. Here we offer a novel explanation based on a Threshold Public Good Game (PGG) that includes the interaction of individual and group level selection, where individuals can contribute to multiple collective actions, in our model group hunting and group defense. Results: Our results show that there are polymorphic equilibria in Threshold PGGs and that multi-level selection does not select for the most cooperators per group but selects those close to the optimum number of cooperators (in terms of the Threshold PGG). In particular, for medium cost values, division of labour evolves within the group with regard to the two types of cooperative actions (hunting vs. defense). Moreover, we show evidence that spatial population structure promotes cooperation in multiple PGGs. We also demonstrate that these results apply for a wide range of non-linear benefit function types. Conclusions: We demonstrate that cooperation can be stable in the Threshold PGG, even when the proportion of so-called free riders is high in the population. A fundamentally new mechanism is proposed whereby laggards, individuals with a high tendency to defect during one specific group action, can actually contribute to the fitness of the group by playing a part in an optimal resource allocation in Threshold Public Good Games. In general, our results show that acknowledging a multilevel selection process will open up novel explanations for collective actions. PMID:21044340
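
    The payoff structure of a Threshold PGG, the feature that permits polymorphic equilibria, can be sketched in its generic game-theoretic form (illustrative numbers, not the paper's multilevel-selection model):

```python
def threshold_pgg_payoffs(n_cooperators, threshold, benefit, cost):
    """Per-capita payoffs in a threshold public goods game: the benefit
    is produced only if at least `threshold` members cooperate, while
    cooperators always pay `cost`. Returns (cooperator_payoff,
    defector_payoff)."""
    produced = benefit if n_cooperators >= threshold else 0.0
    return produced - cost, produced

coop, defect = threshold_pgg_payoffs(3, threshold=3, benefit=10.0, cost=2.0)
coop_low, defect_low = threshold_pgg_payoffs(2, threshold=3, benefit=10.0, cost=2.0)
# At exactly the threshold, defectors still earn more (10.0 vs 8.0), yet if
# one cooperator switched, everyone's benefit would be lost; this tension
# is what allows cooperators and defectors to coexist at equilibrium.
```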

  11. The anaerobic threshold: over-valued or under-utilized? A novel concept to enhance lipid optimization!

    PubMed

    Connolly, Declan A J

    2012-09-01

    The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent to improve exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Increased exercise participation within both clinical and other less healthy populations has increased our attention to optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercising intensity. The use of the anaerobic threshold and accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use nonlaboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.

  12. Application of Data Mining and Knowledge Discovery Techniques to Enhance Binary Target Detection and Decision-Making for Compromised Visual Images

    DTIC Science & Technology

    2004-11-01

    affords exciting opportunities in target detection. The input signal may be a sum of sine waves, it could be an auditory signal, or possibly a visual rendering of a scene. Since image processing is an area in which the original data are stationary in some sense (auditory signals suffer from...). Indexed contents include "Example 1 of SR - Identification of a Subliminal Signal below a Threshold" and "Example 2 of SR".

  13. The SEIS Experiment for the InSight Mission: Development and management plan

    NASA Astrophysics Data System (ADS)

    Laudet, P.

    2015-10-01

    SEIS is a Mars seismometer provided by CNES to JPL as the threshold instrument of the next Mars mission, InSight, to be launched by NASA in March 2016. Discovery missions impose a very strict development framework, in which the schedule drives the development and qualification plans. We will explain how this constraint was taken into account during the development phases, up to delivery of the flight model, in a context of international cooperation without exchange of funds between partners.

  14. Bioclimatic Thresholds, Thermal Constants and Survival of Mealybug, Phenacoccus solenopsis (Hemiptera: Pseudococcidae) in Response to Constant Temperatures on Hibiscus

    PubMed Central

    Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi

    2013-01-01

    Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae), were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to the adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and for estimating thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
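
    The linear degree-day model underlying the thermal-constant and lower-threshold estimates can be sketched as follows. The data are synthetic, constructed around the ~222 degree-days reported for females on hibiscus with an assumed lower threshold of 12 °C:

```python
def degree_day_parameters(temps, dev_rates):
    """Classic linear degree-day model: development rate r(T) = (T - Tmin) / K.
    Fit r = a + b*T by least squares; then Tmin = -a/b (lower developmental
    threshold) and K = 1/b (thermal constant, in degree-days)."""
    n = len(temps)
    mt, mr = sum(temps) / n, sum(dev_rates) / n
    b = sum((t - mt) * (r - mr) for t, r in zip(temps, dev_rates)) / \
        sum((t - mt) ** 2 for t in temps)
    a = mr - b * mt
    return -a / b, 1.0 / b

# Synthetic rates (1/days) generated from Tmin = 12 C, K = 222 degree-days:
temps = [15.0, 20.0, 25.0, 30.0, 35.0]
rates = [(t - 12.0) / 222.0 for t in temps]
tmin, K = degree_day_parameters(temps, rates)
```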

  15. On the Disappearance of Kilohertz Quasi-periodic Oscillations at a High Mass Accretion Rate in Low-Mass X-Ray Binaries

    NASA Astrophysics Data System (ADS)

    Cui, Wei

    2000-05-01

    For all sources in which the phenomenon of kilohertz quasi-periodic oscillation (kHz QPO) is observed, the QPOs disappear abruptly when the inferred mass accretion rate exceeds a certain threshold. Although the threshold cannot at present be accurately determined (or even quantified) observationally, it is clearly higher for bright Z sources than for faint atoll sources. Here we propose that the observational manifestation of kHz QPOs requires direct interaction between the neutron star magnetosphere and the Keplerian accretion disk, and that the cessation of kHz QPOs at a high accretion rate is due to the lack of such an interaction when the Keplerian disk terminates at the last stable orbit while the magnetosphere is pushed farther inward. The threshold therefore depends on the magnetic field strength: the stronger the magnetic field, the higher the threshold. This is certainly in agreement with the atoll/Z paradigm, but we argue that it also holds generally, even for individual sources within each (atoll or Z) category. For atoll sources, the kHz QPOs also seem to vanish at a low accretion rate. Perhaps the "disengagement" between the magnetosphere and the Keplerian disk also takes place under such circumstances because of, for instance, the presence of a quasi-spherical advection-dominated accretion flow (ADAF) close to the neutron star. Unfortunately, in this case, estimating the accretion rate threshold would require knowledge of the physical mechanisms that cause the disengagement. If the ADAF is responsible, the threshold is likely dependent on the magnetic field of the neutron star.

  16. Low-z Type Ia Supernova Calibration

    NASA Astrophysics Data System (ADS)

    Hamuy, Mario

    The discovery of cosmic acceleration and dark energy in 1998 arguably constitutes one of the most revolutionary findings in astrophysics in recent years. This paradigm shift was made possible by one of the most traditional cosmological tests: the redshift-distance relation for galaxies. The discovery rested on a differential measurement of the expansion rate of the universe: the current rate provided by nearby (low-z) type Ia supernovae and the past rate measured from distant (high-z) supernovae. This paper focuses on the first part of this journey: the calibration of type Ia supernova luminosities and of the local expansion rate of the universe, made possible by the introduction of CCD (charge-coupled device) digital photometry. The new technology allowed us, in the early 1990s, to turn supernovae into precise tools for measuring extragalactic distances through two key surveys: (1) the "Tololo Supernova Program", which made possible the critical discovery of the "peak luminosity-decline rate" relation for type Ia supernovae, the key idea underlying precision supernova cosmology today, and (2) the Calán/Tololo project, which provided the low-z type Ia supernova sample for the discovery of acceleration.

  17. Influence of proprioceptive feedback on the firing rate and recruitment of motoneurons

    NASA Astrophysics Data System (ADS)

    De Luca, C. J.; Kline, J. C.

    2012-02-01

    We investigated the relationships of the firing rate and maximal recruitment threshold of motoneurons recorded during isometric contraction with the number of spindles in individual muscles. At force levels above 10% of maximal voluntary contraction, the firing rate was inversely related to the number of spindles in a muscle, with the slope of the relationship increasing with force. The maximal recruitment threshold of motor units increased linearly with the number of spindles in the muscle. Thus, muscles with a greater number of spindles had lower firing rates and a greater maximal recruitment threshold. These findings may be explained by a mechanical interaction between muscle fibres and adjacent spindles. During low-level (0% to 10%) voluntary contractions, muscle fibres of recruited motor units produce force twitches that activate nearby spindles to respond with an immediate excitatory feedback that reaches a maximal level. As the force increases further, the twitches overlap and tend towards tetanization, the muscle fibres shorten, the spindles slacken, their excitatory firings decrease, and the net excitation to the homonymous motoneurons decreases. Motoneurons of muscles with a greater number of spindles receive a greater decrease in excitation, which reduces their firing rates, increases their maximal recruitment threshold, and changes the motoneuron recruitment distribution.

  18. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
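    The conditional exceedance model described above can be sketched as a logistic regression fitted by maximum likelihood; the covariate, coefficients, and data here are synthetic stand-ins, not the radiometer data of the study.

```python
import math
import random

random.seed(0)

# Synthetic illustration: covariate x (think of a satellite brightness
# measure) and binary y = 1 if rain rate exceeds a fixed threshold.
# The "true" coefficients below are invented for the demonstration.
n = 500
true_b0, true_b1 = -0.5, 2.0
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [1 if random.random() < 1.0 / (1.0 + math.exp(-(true_b0 + true_b1 * xi)))
     else 0 for xi in x]

# Fit the logistic model by gradient ascent on the mean log-likelihood.
b0, b1 = 0.0, 0.0
for _ in range(3000):
    g0 = g1 = 0.0
    for xi, yi in zip(x, y):
        mu = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
        g0 += yi - mu
        g1 += (yi - mu) * xi
    b0 += g0 / n
    b1 += g1 / n

def p_exceed(x_new):
    """Estimated conditional probability that rain rate exceeds the threshold."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x_new)))

print(round(b0, 2), round(b1, 2))
```

    The study's partial-likelihood device for dependent pixels is not reproduced here; this sketch only shows the exceedance-probability model itself.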

  19. On the asymmetric evolution of the perihelion distances of near-Earth Jupiter family comets around the discovery time

    NASA Astrophysics Data System (ADS)

    Sosa, A.; Fernández, J. A.; Pais, P.

    2012-12-01

    We study the dynamical evolution of the near-Earth Jupiter family comets (NEJFCs) that came close to or crossed the Earth's orbit at the epoch of their discovery (perihelion distances qdisc < 1.3 AU). We found a minimum in the time evolution of the mean perihelion distance q̄ of the NEJFCs at the discovery time of each comet (taken as t = 0) and a past-future asymmetry of q̄ in an interval [-1000 yr, +1000 yr] centred on t = 0, confirming previous results. The asymmetry indicates that there are more comets with greater q in the past than in the future. For comparison, we also analysed the population of near-Earth asteroids in cometary orbits (defined as those with aphelion distances Q > 4.5 AU) and with absolute magnitudes H < 18. We found some remarkable differences in the dynamical evolution of the two populations that argue against a common origin. To further analyse the dynamical evolution of NEJFCs, we integrated in time a large sample of fictitious comets, cloned from the observed NEJFCs, over a 20 000 yr interval, starting the integration before each comet's discovery time, when it had a perihelion distance q > 2 AU. By assuming that NEJFCs are mostly discovered when they first decrease their perihelion distances below a threshold qthre = 1.05 AU, we were able to reproduce the main features of the observed q̄ evolution in the interval [-1000, 1000] yr with respect to the discovery time. Our best fits indicate that 40% of the population of NEJFCs would be composed of young, fresh comets that entered the region q < 2 AU a few hundred years before decreasing their perihelion distances below qthre, while 60% would be composed of older, more evolved comets, discovered after spending at least 3000 yr in the q < 2 AU region before their perihelion distances dropped below qthre. As a byproduct, we put some constraints on the physical lifetime τphys of NEJFCs in the q < 2 AU region. 
We found a lower limit of a few hundred revolutions and an upper limit of about 10 000-12 000 yr, or about 1600-2000 revolutions, somewhat longer than some previous estimates. These constraints are consistent with other estimates of τphys based either on mass loss (sublimation, outbursts, splittings) or on the extinction rate of Jupiter family comets (JFCs).

  20. PERSONAL AND CIRCUMSTANTIAL FACTORS INFLUENCING THE ACT OF DISCOVERY.

    ERIC Educational Resources Information Center

    OSTRANDER, EDWARD R.

    HOW STUDENTS SAY THEY LEARN WAS INVESTIGATED. INTERVIEWS WITH A RANDOM SAMPLE OF 74 WOMEN STUDENTS POSED QUESTIONS ABOUT THE NATURE, FREQUENCY, PATTERNS, AND CIRCUMSTANCES UNDER WHICH ACTS OF DISCOVERY TAKE PLACE IN THE ACADEMIC SETTING. STUDENTS WERE ASSIGNED DISCOVERY RATINGS BASED ON READINGS OF TYPESCRIPTS. EACH STUDENT WAS CLASSIFIED AND…

  1. Step-rate cut-points for physical activity intensity in patients with multiple sclerosis: The effect of disability status.

    PubMed

    Agiovlasitis, Stamatis; Sandroff, Brian M; Motl, Robert W

    2016-02-15

    Evaluating the relationship between step rate and rate of oxygen uptake (VO2) may allow for practical physical activity assessment in patients with multiple sclerosis (MS) of differing disability levels. The aims were to examine whether the VO2 to step-rate relationship during over-ground walking differs across disability levels among patients with MS and to develop step-rate thresholds for moderate- and vigorous-intensity physical activity. Adults with MS (N=58; age: 51 ± 9 years; 48 women) completed one over-ground walking trial at comfortable speed, one at 0.22 m·s⁻¹ slower, and one at 0.22 m·s⁻¹ faster. Each trial lasted 6 min. VO2 was measured with portable spirometry and steps were counted by hand tally. Disability status was classified as mild, moderate, or severe based on Expanded Disability Status Scale scores. Multi-level regression indicated that step rate, disability status, and height significantly predicted VO2 (p<0.05). Based on this model, we developed step-rate thresholds for activity intensity that vary by disability status and height. A separate regression without height allowed for development of step-rate thresholds that vary only by disability status. VO2 during over-ground walking differs among ambulatory patients with MS based on disability level and height, yielding different step-rate thresholds for physical activity intensity. Copyright © 2015 Elsevier B.V. All rights reserved.
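    To make the cut-point idea concrete, here is a hypothetical sketch: given a fitted linear model VO2 = b0 + b1·(step rate), with coefficients invented here rather than taken from the study's multi-level estimates, the moderate-intensity cut-point is the step rate at which predicted VO2 reaches 3 METs (~10.5 mL·kg⁻¹·min⁻¹).

```python
# Invert a fitted VO2-on-step-rate regression to obtain an intensity
# cut-point. The coefficients are hypothetical, for illustration only;
# the study's model also includes disability status and height terms.
def step_rate_cutpoint(vo2_target, b0, b1):
    """Step rate (steps/min) at which predicted VO2 reaches vo2_target."""
    return (vo2_target - b0) / b1

moderate_vo2 = 10.5   # 3 METs, in mL·kg^-1·min^-1
b0, b1 = -2.0, 0.11   # illustrative intercept and slope

print(round(step_rate_cutpoint(moderate_vo2, b0, b1), 1))  # steps/min
```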

  2. False discovery rates in spectral identification.

    PubMed

    Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno

    2012-01-01

    Automated database search engines are among the fundamental tools of high-throughput proteomics, enabling daily identification of hundreds of thousands of peptides and proteins from tandem mass spectrometry (MS/MS) data. Nevertheless, this automation also makes it humanly impossible to manually validate the vast lists of resulting identifications from such high-throughput searches. This challenge is usually addressed by using a Target-Decoy Approach (TDA) to impose an empirical False Discovery Rate (FDR) at a pre-determined threshold x% with the expectation that at most x% of the returned identifications will be false positives. But despite the fundamental importance of FDR estimates in ensuring the utility of large lists of identifications, there is surprisingly little consensus on exactly how TDA should be applied to minimize the chances of biased FDR estimates. In fact, since less rigorous TDA/FDR estimates tend to result in more identifications (at a higher 'true' FDR), there is often little incentive to enforce strict TDA/FDR procedures in studies where the major metric of success is the size of the list of identifications and there are no follow-up studies imposing hard cost constraints on the number of reported false positives. Here we address the problem of the accuracy of TDA estimates of empirical FDR. Using MS/MS spectra from samples where we were able to define a factual estimator of 'true' FDR, we evaluate several popular variants of the TDA procedure in a variety of database search contexts. We show that the fraction of false identifications can sometimes be over 10× higher than reported and may be unavoidably high for certain types of searches. In addition, we report that the two-pass search strategy appears to be the most promising database search strategy. 
While unavoidably constrained by the particulars of any specific evaluation dataset, our observations support a series of recommendations towards maximizing the number of resulting identifications while controlling database searches with robust and reproducible TDA estimation of empirical FDR.
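    A minimal sketch of the basic TDA estimate the paper scrutinizes: count decoy and target identifications above a score threshold and report their ratio as the empirical FDR. This is one common convention; variants (e.g., 2·D/(T+D) for concatenated searches) also exist, and the scores below are invented.

```python
# Simplest target-decoy FDR estimate: decoys passing the score cutoff
# stand in for the unknown number of false target identifications.
def tda_fdr(target_scores, decoy_scores, threshold):
    targets = sum(s >= threshold for s in target_scores)
    decoys = sum(s >= threshold for s in decoy_scores)
    return decoys / targets if targets else 0.0

# Hypothetical search-engine scores for illustration.
targets = [9.1, 8.7, 8.2, 7.9, 7.5, 6.8, 6.1, 5.9, 5.2, 4.8]
decoys  = [6.5, 5.4, 4.9, 4.6, 4.1, 3.8, 3.5, 3.1, 2.7, 2.2]

print(tda_fdr(targets, decoys, 6.0))  # 1 decoy vs 7 targets above 6.0
```

    In practice the threshold is swept until the estimate first drops below the desired x%, which is exactly where the biases discussed above can distort the reported list size.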

  3. A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets.

    PubMed

    Savitski, Mikhail M; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus

    2015-09-01

    Calculating the number of confidently identified proteins and estimating the false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments adds to the challenge, and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target-decoy strategy that become particularly apparent when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic as well as a novel target-decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprising ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The "picked" protein FDR approach treats the target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the higher score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search-engine-specific differences. The "picked" target-decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein, yielding a stable number of true-positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used "classic" protein FDR approach that causes overprediction of false-positive protein identifications in large data sets. 
The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications and is readily implemented in proteomics analysis software. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
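    The pairing idea can be sketched in a few lines. The scores below are invented, and this simplified version scores each protein directly rather than via best peptide q-values as in the study: each protein's target and decoy sequence compete, only the higher-scoring member of the pair is retained, and the FDR is then computed over the survivors.

```python
# "Picked" target-decoy sketch: target and decoy of the same protein
# compete, and only the winner of each pair enters the FDR calculation.
def picked_fdr(pairs, threshold):
    """pairs: list of (target_score, decoy_score), one tuple per protein."""
    kept = []  # (score, is_decoy) for the winner of each pair
    for t, d in pairs:
        kept.append((t, False) if t >= d else (d, True))
    targets = sum(1 for s, is_decoy in kept if not is_decoy and s >= threshold)
    decoys = sum(1 for s, is_decoy in kept if is_decoy and s >= threshold)
    return decoys / targets if targets else 0.0

# Hypothetical (target, decoy) score pairs for five proteins.
pairs = [(12.0, 3.0), (9.5, 1.0), (2.0, 8.0), (7.0, 6.0), (1.5, 5.5)]

print(picked_fdr(pairs, 5.0))
```

    Because each protein contributes at most one entry, high-scoring decoys can no longer accumulate alongside their target counterparts, which is what inflates decoy counts in the classic approach on very large data sets.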

  4. A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets

    PubMed Central

    Savitski, Mikhail M.; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus

    2015-01-01

    Calculating the number of confidently identified proteins and estimating the false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments adds to the challenge, and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target–decoy strategy that become particularly apparent when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic as well as a novel target–decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprising ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The “picked” protein FDR approach treats the target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the higher score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search-engine-specific differences. The “picked” target–decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein, yielding a stable number of true-positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used “classic” protein FDR approach that causes overprediction of false-positive protein identifications in large data sets. 
The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications and is readily implemented in proteomics analysis software. PMID:25987413

  5. 78 FR 4032 - Prompt Corrective Action, Requirements for Insurance, and Promulgation of NCUA Rules and Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-18

    ... interest rate risk requirements. The amended IRPS increases the asset threshold that identifies credit... asset threshold used to define a ``complex'' credit union for determining whether risk-based net worth... or credit unions) with assets of $50 million or less from interest rate risk rule requirements. To...

  6. Time Poverty Thresholds and Rates for the US Population

    ERIC Educational Resources Information Center

    Kalenkoski, Charlene M.; Hamrick, Karen S.; Andrews, Margaret

    2011-01-01

    Time constraints, like money constraints, affect Americans' well-being. This paper defines what it means to be time poor based on the concepts of necessary and committed time and presents time poverty thresholds and rates for the US population and certain subgroups. Multivariate regression techniques are used to identify the key variables…

  7. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds

    PubMed Central

    Mehraei, Golbarg; Gallardo, Andreu Paredes; Shinn-Cunningham, Barbara G.; Dau, Torsten

    2017-01-01

    In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous-rate (low-SR) nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous-rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured the ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency, and 2) a preferential loss of low-SR fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater behavioral effect of a preceding masker. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments. PMID:28159652

  8. Evaluation of Mandarin Chinese Speech Recognition in Adults with Cochlear Implants Using the Spectral Ripple Discrimination Test

    PubMed Central

    Dai, Chuanfu; Zhao, Zeqi; Zhang, Duo; Lei, Guanxiong

    2018-01-01

    Background: The aim of this study was to explore the value of the spectral ripple discrimination test for speech recognition evaluation in a post-lingually deaf Mandarin-speaking population in China following cochlear implantation. Material/Methods: The study included 23 Mandarin-speaking adult subjects with normal hearing (normal-hearing group) and 17 post-lingually deaf adult Mandarin speakers with cochlear implants (cochlear implantation group). The normal-hearing subjects were divided into men (n=10) and women (n=13). Spectral ripple discrimination thresholds were compared between the groups, and the correlation between spectral ripple discrimination thresholds and Mandarin speech recognition rates in the cochlear implantation group was studied. Results: Spectral ripple discrimination thresholds did not correlate with age (r=−0.19; p=0.22), and there was no significant difference in thresholds between the male and female groups (p=0.654). Spectral ripple discrimination thresholds of deaf adults with cochlear implants were significantly correlated with monosyllabic recognition rates (r=0.84; p<0.001). Conclusions: In a Mandarin-speaking population, spectral ripple discrimination thresholds of normal-hearing individuals were unaffected by gender and age. Spectral ripple discrimination thresholds were correlated with Mandarin monosyllabic recognition rates of post-lingually deaf adults with cochlear implants. The spectral ripple discrimination test is a promising method for speech recognition evaluation in adults following cochlear implantation in China. PMID:29806954

  9. Evaluation of Mandarin Chinese Speech Recognition in Adults with Cochlear Implants Using the Spectral Ripple Discrimination Test.

    PubMed

    Dai, Chuanfu; Zhao, Zeqi; Shen, Weidong; Zhang, Duo; Lei, Guanxiong; Qiao, Yuehua; Yang, Shiming

    2018-05-28

    BACKGROUND The aim of this study was to explore the value of the spectral ripple discrimination test for speech recognition evaluation in a post-lingually deaf Mandarin-speaking population in China following cochlear implantation. MATERIAL AND METHODS The study included 23 Mandarin-speaking adult subjects with normal hearing (normal-hearing group) and 17 post-lingually deaf adult Mandarin speakers with cochlear implants (cochlear implantation group). The normal-hearing subjects were divided into men (n=10) and women (n=13). Spectral ripple discrimination thresholds were compared between the groups, and the correlation between spectral ripple discrimination thresholds and Mandarin speech recognition rates in the cochlear implantation group was studied. RESULTS Spectral ripple discrimination thresholds did not correlate with age (r=-0.19; p=0.22), and there was no significant difference in thresholds between the male and female groups (p=0.654). Spectral ripple discrimination thresholds of deaf adults with cochlear implants were significantly correlated with monosyllabic recognition rates (r=0.84; p<0.001). CONCLUSIONS In a Mandarin-speaking population, spectral ripple discrimination thresholds of normal-hearing individuals were unaffected by gender and age. Spectral ripple discrimination thresholds were correlated with Mandarin monosyllabic recognition rates of post-lingually deaf adults with cochlear implants. The spectral ripple discrimination test is a promising method for speech recognition evaluation in adults following cochlear implantation in China.

  10. A flexible cure rate model with dependent censoring and a known cure threshold.

    PubMed

    Bernhardt, Paul W

    2016-11-10

    We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Blood eosinophil count thresholds and exacerbations in patients with chronic obstructive pulmonary disease.

    PubMed

    Yun, Jeong H; Lamb, Andrew; Chase, Robert; Singh, Dave; Parker, Margaret M; Saferali, Aabida; Vestbo, Jørgen; Tal-Singer, Ruth; Castaldi, Peter J; Silverman, Edwin K; Hersh, Craig P

    2018-06-01

    Eosinophilic airway inflammation in patients with chronic obstructive pulmonary disease (COPD) is associated with exacerbations and responsivity to steroids, suggesting potential shared mechanisms with eosinophilic asthma. However, there is no consistent blood eosinophil count that has been used to define the increased exacerbation risk. We sought to investigate blood eosinophil counts associated with exacerbation risk in patients with COPD. Blood eosinophil counts and exacerbation risk were analyzed in patients with moderate-to-severe COPD by using 2 independent studies of former and current smokers with longitudinal data. The Genetic Epidemiology of COPD (COPDGene) study was analyzed for discovery (n = 1,553), and the Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints (ECLIPSE) study was analyzed for validation (n = 1,895). A subset of the ECLIPSE study subjects were used to assess the stability of blood eosinophil counts over time. COPD exacerbation risk increased with higher eosinophil counts. An eosinophil count threshold of 300 cells/μL or greater showed an adjusted incidence rate ratio for exacerbations of 1.32 in the COPDGene study (95% CI, 1.10-1.63). The cutoff of 300 cells/μL or greater was validated for prospective risk of exacerbation in the ECLIPSE study, with an adjusted incidence rate ratio of 1.22 (95% CI, 1.06-1.41) using 3-year follow-up data. Stratified analysis confirmed that the increased exacerbation risk associated with an eosinophil count of 300 cells/μL or greater was driven by subjects with a history of frequent exacerbations in both the COPDGene and ECLIPSE studies. Patients with moderate-to-severe COPD and blood eosinophil counts of 300 cells/μL or greater had an increased risk of exacerbations in the COPDGene study, which was prospectively validated in the ECLIPSE study. Copyright © 2018 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  12. False Discovery Control in Large-Scale Spatial Multiple Testing

    PubMed Central

    Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin

    2014-01-01

    This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses and derive oracle procedures that optimally control the false discovery rate, false discovery exceedance, and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
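    The conventional point-wise baseline that such oracle procedures are typically compared against is the Benjamini-Hochberg step-up rule; a minimal sketch (the p-values are invented):

```python
# Benjamini-Hochberg step-up rule: sort p-values, find the largest rank k
# with p_(k) <= q*k/m, and reject the k smallest p-values.
def benjamini_hochberg(pvals, q=0.1):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]

print(benjamini_hochberg(pvals, q=0.1))  # rejects the 6 smallest p-values
```

    Note the step-up character: 0.039 and 0.041 individually exceed their rank cutoffs but are still rejected because a later p-value (0.06) passes its own. The spatial procedures in the article improve on this baseline by borrowing strength across neighboring locations.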

  13. Experimental and Finite Element Modeling of Near-Threshold Fatigue Crack Growth for the K-Decreasing Test Method

    NASA Technical Reports Server (NTRS)

    Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.

    2015-01-01

    The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows, based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack-tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces, shielding the growing crack tip and resulting in a reduced crack-tip driving force and non-representative crack growth rate data. If such data are used in the life assessment of a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmentally assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effects of K-gradient, the maximum applied stress-intensity factor, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those produced in relatively small product forms.
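    For reference, the load-shedding schedule used in K-decreasing testing follows an exponential form, K(a) = K0·exp[C·(a − a0)], where C = (1/K)(dK/da) is the normalized K-gradient; E647's commonly cited recommendation is to keep C no more negative than about −0.08 mm⁻¹. A short sketch with illustrative values (K0 and the crack extensions are invented):

```python
import math

def k_applied(a, a0, k0, c):
    """Stress-intensity schedule K(a) = K0*exp(C*(a - a0)) for a
    K-decreasing test, with C the normalized K-gradient (1/K)(dK/da)."""
    return k0 * math.exp(c * (a - a0))

# Start at K0 = 10 MPa*sqrt(m) and shed load with gradient C = -0.08 per mm
# as the crack extends 5 mm.
for da in (0.0, 2.5, 5.0):  # crack extension in mm
    print(round(k_applied(da, 0.0, 10.0, -0.08), 3))
```

    Too steep a (negative) gradient sheds load faster than the plastic wake can accommodate, which is precisely the remote-contact shielding problem the abstract describes.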

  14. Theories of Lethal Mutagenesis: From Error Catastrophe to Lethal Defection.

    PubMed

    Tejero, Héctor; Montero, Francisco; Nuño, Juan Carlos

    2016-01-01

    RNA viruses become extinct, in a process called lethal mutagenesis, when subjected to an increase in their mutation rate, for instance by the action of mutagenic drugs. Several approaches have been proposed to understand this phenomenon. The idea that RNA viruses could be driven extinct by increased mutational pressure was inspired by the concept of the error threshold. The now classic quasispecies model predicts a limit to the mutation rate beyond which the genetic information of the wild type cannot be efficiently transmitted to the next generation. This limit was called the error threshold, and for mutation rates larger than this threshold the quasispecies was said to enter error catastrophe. This transition has been assumed to foster the extinction of the whole population. Alternative explanations of lethal mutagenesis have been proposed recently. First, a distinction is made between the error threshold and the extinction threshold, the mutation rate beyond which a population goes extinct; extinction is explained by the effect the mutation rate has, through the mutational load, on the reproductive ability of the whole population. Second, lethal defection also takes into account the effect of interactions within mutant spectra, which have been shown to be determinant for understanding the extinction of RNA viruses under augmented mutational pressure. Nonetheless, some relevant issues concerning lethal mutagenesis are not yet completely understood, such as survival of the flattest (the development of resistance to lethal mutagenesis by evolving towards mutationally more robust regions of sequence space) and sublethal mutagenesis (an increase of the mutation rate below the extinction threshold, which may boost the adaptability of RNA viruses and increase their ability to develop resistance to drugs, including mutagens). A better design of antiviral therapies will require improved knowledge of lethal mutagenesis.
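    The error threshold described above can be made concrete. In Eigen's quasispecies model the master sequence persists only while its copying fidelity (1 − μ)^L exceeds 1/σ, where σ is its selective superiority; solving for μ gives the critical mutation rate. A minimal sketch, with hypothetical values for L and σ (not taken from the paper):

```python
import math

def error_threshold(L, sigma):
    """Critical per-site mutation rate from Eigen's quasispecies model.

    The master sequence is maintained only while its copying fidelity
    Q = (1 - mu)**L exceeds 1/sigma, where sigma is the selective
    superiority of the master. Solving Q = 1/sigma for mu gives the
    error threshold.
    """
    return 1.0 - sigma ** (-1.0 / L)

# For long genomes this is well approximated by ln(sigma) / L.
L, sigma = 10_000, 10.0          # hypothetical RNA-virus-like values
mu_c = error_threshold(L, sigma)
approx = math.log(sigma) / L
```

    For mutation rates above `mu_c` the wild-type sequence is lost in this model; note that, as the abstract stresses, this error catastrophe is conceptually distinct from the extinction threshold of the whole population.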

  15. Influence of Threshold for Bedrock Erosion on River Long Profile Development and Knickzone Retreat in Response to Tectonic Perturbation

    NASA Astrophysics Data System (ADS)

    Attal, M.; Hobley, D.; Cowie, P. A.; Whittaker, A. C.; Tucker, G. E.; Roberts, G. P.

    2008-12-01

    Prominent convexities in channel long profiles, or knickzones, are an expected feature of bedrock rivers responding to a change in the rate of base-level fall driven by tectonic processes. In response to a change in relative uplift rate, the simple stream power model, characterized by a slope exponent equal to unity, predicts that knickzone retreat velocity is independent of uplift rate and that channel slope and uplift rate are linearly related along the reaches that have re-equilibrated with respect to the new uplift conditions (i.e., downstream of the profile convexity). However, a threshold for erosion has been shown to introduce non-linearity between slope and uplift rate when associated with stochastic rainfall variability. We present field data on the height and retreat rates of knickzones in rivers upstream of active normal faults in the central Apennines, Italy, where excellent constraints exist on the temporal and spatial history of fault movement. The knickzones developed in response to an independently constrained increase in fault throw rate at 0.75 Ma. Channel characteristics and Shields stress values suggest that these rivers lie close to the detachment-limited end-member, but the knickzone retreat velocity (calculated from the time since fault acceleration) has been found to scale systematically with the known fault throw rates, even after accounting for differences in drainage area. In addition, the relationship between measured channel slope and relative uplift rate is non-linear, suggesting that a threshold for erosion might be effective in this setting. We use the Channel-Hillslope Integrated Landscape Development (CHILD) model to quantify the effect of such a threshold on river long-profile development and knickzone retreat in response to tectonic perturbation. In particular, we investigate the evolution of three Italian catchments of different sizes, characterized by contrasting degrees of tectonic perturbation, using physically realistic threshold values based on sediment grain-size measurements along the studied rivers. We show that the threshold alone cannot account for field observations of the size, position, and retreat rate of profile convexities, and that other factors neglected by the simple stream power law (e.g., the role of sediments) have to be invoked to explain the discrepancy between field observations and modeled topographies.

  16. Broca’s area network in language function: a pooling-data connectivity study

    PubMed Central

    Bernal, Byron; Ardila, Alfredo; Rosselli, Monica

    2015-01-01

    Background and Objective: Modern neuroimaging developments have demonstrated that cognitive functions correlate with brain networks rather than specific areas. The purpose of this paper was to analyze the connectivity of Broca’s area based on language tasks. Methods: A connectivity modeling study was performed by pooling data of Broca’s area activation in language tasks. Fifty-seven papers that included 883 subjects in 84 experiments were analyzed. Activation likelihood estimation on the pooled data was utilized to generate the map; thresholds at p < 0.01 were corrected for multiple comparisons and false discovery rate. Resulting images were co-registered into MNI standard space. Results: A network consisting of 16 clusters of activation was obtained. Main clusters were located in the frontal operculum, left posterior temporal region, supplementary motor area, and the parietal lobe. Less common clusters were seen in sub-cortical structures including the left thalamus, left putamen, secondary visual areas, and the right cerebellum. Conclusion: Broca’s area (area 44)-related networks involved in language processing were demonstrated using a pooling-data connectivity study. The significance, interpretation, and limitations of the results are discussed. PMID:26074842

  17. Ammonia oxidation kinetics determine niche separation of nitrifying Archaea and Bacteria.

    PubMed

    Martens-Habbena, Willm; Berube, Paul M; Urakawa, Hidetoshi; de la Torre, José R; Stahl, David A

    2009-10-15

    The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon 'Candidatus Nitrosopumilus maritimus' strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (
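    The reported half-saturation constant implies standard Michaelis-Menten kinetics for the specific oxidation rate. A minimal sketch using the Km from the abstract; Vmax here is a placeholder, not a value from the paper:

```python
def oxidation_rate(S, Vmax, Km=133e-9):
    """Michaelis-Menten ammonia oxidation rate v = Vmax * S / (Km + S).

    Km = 133 nM (133e-9 M) total ammonium is the half-saturation
    constant reported for strain SCM1; Vmax is an assumed maximal
    specific rate, and S is the ammonium concentration in M.
    """
    return Vmax * S / (Km + S)
```

    A low Km like this means the organism still runs at an appreciable fraction of Vmax at the nanomolar ammonium concentrations of the open ocean, which is the basis of the niche separation argued in the title.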

  18. Laser heating and ablation at high repetition rate in thermal confinement regime

    NASA Astrophysics Data System (ADS)

    Brygo, François; Semerok, A.; Oltra, R.; Weulersse, J.-M.; Fomichev, S.

    2006-09-01

    Laser heating and ablation of materials with low absorption and thermal conductivity (paint and cement) were investigated experimentally and theoretically. The experiments used a high-repetition-rate Q-switched Nd:YAG laser (10 kHz, 90 ns pulse duration, λ = 532 nm). High-repetition-rate laser heating resulted in pulse-to-pulse heat accumulation. A theoretical model of laser heating was developed and showed good agreement between the temperatures measured with an infrared pyrometer and the calculated ones. For fixed wavelength and pulse duration, the ablation threshold fluence of paint was found to depend on the repetition rate and the number of applied pulses. At high repetition rate, the threshold fluence decreased significantly as the number of applied pulses increased. The experimentally obtained thresholds were well described by the developed theoretical model. Some specific features of paint heating and ablation with high-repetition-rate lasers are discussed.

  19. Folate network genetic variation, plasma homocysteine, and global genomic methylation content: a genetic association study

    PubMed Central

    2011-01-01

    Background Sequence variants in genes functioning in folate-mediated one-carbon metabolism are hypothesized to lead to changes in levels of homocysteine and DNA methylation, which, in turn, are associated with risk of cardiovascular disease. Methods 330 SNPs in 52 genes were studied in relation to plasma homocysteine and global genomic DNA methylation. SNPs were selected based on functional effects and gene coverage, and assays were completed on the Illumina GoldenGate platform. Age-, smoking-, and nutrient-adjusted genotype-phenotype associations were estimated in regression models. Results Using a nominal P ≤ 0.005 threshold for statistical significance, 20 SNPs were associated with plasma homocysteine, 8 with Alu methylation, and 1 with LINE-1 methylation. Using a more stringent false discovery rate threshold, SNPs in the FTCD, SLC19A1, and SLC19A3 genes remained associated with plasma homocysteine. Gene-by-vitamin-B-6 interactions were identified for both Alu and LINE-1 methylation, and epistatic interactions with the MTHFR rs1801133 SNP were identified for the plasma homocysteine phenotype. Pleiotropy involving the MTHFD1L and SARDH genes for both the plasma homocysteine and Alu methylation phenotypes was identified. Conclusions No single gene was associated with all three phenotypes, and the set of the most statistically significant SNPs predictive of homocysteine or Alu or LINE-1 methylation was unique to each phenotype. Genetic variation in folate-mediated one-carbon metabolism, beyond the well-known effects of MTHFR c.665C>T (also known as c.677C>T, rs1801133, p.Ala222Val), is predictive of cardiovascular disease biomarkers. PMID:22103680
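    The "more stringent false discovery rate threshold" referred to above is conventionally computed with the Benjamini-Hochberg step-up procedure. A minimal sketch of that standard procedure (the target FDR level q is an assumption, not stated in the abstract):

```python
def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg false discovery rate threshold.

    Sort the m p-values and find the largest p_(i) satisfying
    p_(i) <= q * i / m; every p-value at or below that cutoff is
    declared significant. Returns None if nothing passes.
    """
    m = len(pvals)
    cutoff = None
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= q * i / m:
            cutoff = p
    return cutoff
```

    Unlike a fixed nominal cutoff such as P ≤ 0.005, this data-dependent threshold adapts to how many of the 330 tests show signal, which is why fewer SNPs survive it.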

  20. Redefining the Speed Limit of Phase Change Memory Revealed by Time-resolved Steep Threshold-Switching Dynamics of AgInSbTe Devices

    NASA Astrophysics Data System (ADS)

    Shukla, Krishna Dayal; Saxena, Nishant; Durai, Suresh; Manivannan, Anbarasu

    2016-11-01

    Although phase-change memory (PCM) offers promising features for a ‘universal memory’ owing to its high speed and non-volatility, achieving fast electrical switching remains a key challenge. In this work, the correlation between the rate of applied voltage and the dynamics of threshold switching is investigated at picosecond timescale. A distinct characteristic feature, rapid threshold switching at a critical voltage known as the threshold voltage, is validated by an instantaneous steep current rise from the amorphous off state to the on state within 250 picoseconds, followed by a slower current rise leading to crystallization. We also demonstrate that the threshold-switching dynamics in AgInSbTe cells is independent of the rate of applied voltage, unlike other chalcogenide-based phase-change materials, which exhibit voltage-dependent transient switching characteristics. Furthermore, numerical solutions of the time-dependent conduction process validate the experimental results, revealing the electronic nature of threshold switching. These findings of steep threshold switching with ‘sub-50 ps delay time’ open up a new way to achieve high-speed non-volatile memory for mainstream computing.

  1. Discovery of Host Factors and Pathways Utilized in Hantaviral Infection

    DTIC Science & Technology

    2016-09-01

    AWARD NUMBER: W81XWH-14-1-0204. TITLE: Discovery of Host Factors and Pathways Utilized in Hantaviral Infection. PRINCIPAL INVESTIGATOR: Paul...Aug 2016 ...after significance values were calculated and corrected for false discovery rate. The top hit is ATP6V0A1, a gene encoding a subunit of a vacuolar

  2. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, lead to truncation of the distribution at the low and high ends. To estimate the distribution over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least-squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used to obtain the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path-averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method, or with the sample mean taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
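    The fitting step can be illustrated with a toy version: a lognormal is fit by least squares to an empirical distribution truncated at both ends. This sketch substitutes a coarse grid search for the paper's nonlinear least squares, uses synthetic data, and assumes the number of low-end censored samples is known:

```python
import math
import random

def lognorm_cdf(r, mu, sigma):
    """CDF of a lognormal with log-mean mu and log-stddev sigma."""
    return 0.5 * (1.0 + math.erf((math.log(r) - mu) / (sigma * math.sqrt(2.0))))

def fit_truncated(observed, n_below, n_total, mu_grid, sig_grid):
    """Least-squares fit of (mu, sigma) to a doubly truncated sample.

    `observed` holds the rain rates surviving both thresholds; ranks are
    offset by `n_below` (samples censored at the low end) so the empirical
    CDF refers to the full population of size `n_total`.
    """
    obs = sorted(observed)
    best = None
    for mu in mu_grid:
        for sig in sig_grid:
            sse = sum((lognorm_cdf(r, mu, sig) - (n_below + i + 1) / n_total) ** 2
                      for i, r in enumerate(obs))
            if best is None or sse < best[0]:
                best = (sse, mu, sig)
    return best[1], best[2]

random.seed(0)
full = [math.exp(random.gauss(0.5, 0.8)) for _ in range(2000)]   # synthetic rates
lo, hi = 0.5, 8.0                                                # truncation limits
kept = [r for r in full if lo <= r <= hi]
n_below = sum(r < lo for r in full)
mu_hat, sig_hat = fit_truncated(kept, n_below, len(full),
                                [0.3 + 0.05 * k for k in range(9)],
                                [0.6 + 0.05 * k for k in range(9)])
```

    Despite seeing only the central part of the distribution, the fit recovers parameters close to the generating values (0.5, 0.8), which is the essence of extrapolating the truncated distribution over all rain rates.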

  3. Effect of Mild Cognitive Impairment and Alzheimer Disease on Auditory Steady-State Responses

    PubMed Central

    Shahmiri, Elaheh; Jafari, Zahra; Noroozian, Maryam; Zendehbad, Azadeh; Haddadzadeh Niri, Hassan; Yoonessi, Ali

    2017-01-01

    Introduction: Mild Cognitive Impairment (MCI), a disorder of elderly people, is difficult to diagnose and often progresses to Alzheimer Disease (AD). The temporal region is one of the first areas to become impaired in the early stage of AD; therefore, auditory cortical evoked potentials could be a valuable neuromarker for detecting MCI and AD. Methods: In this study, the thresholds of the Auditory Steady-State Response (ASSR) at 40 Hz and 80 Hz were compared between AD, MCI, and control groups. A total of 42 participants (12 with AD, 15 with MCI, and 15 elderly normal controls) were tested for ASSR. Hearing thresholds at 500, 1000, and 2000 Hz in both ears with modulation rates of 40 and 80 Hz were obtained. Results: In normal subjects, significant differences were observed in estimated ASSR thresholds between the two modulation rates at all three frequencies in both ears. However, the difference was significant only at 500 Hz in the MCI group, and no significant differences were observed in the AD group. In addition, significant differences were observed between the normal subjects and AD patients in estimated ASSR thresholds at both modulation rates and all three frequencies in both ears. A significant difference was also observed between the normal and MCI groups at 2000 Hz. An increase in estimated 40 Hz ASSR thresholds in patients with AD and MCI suggests neural changes in the auditory cortex compared with normal ageing. Conclusion: Auditory threshold estimation with low and high modulation rates using the ASSR test could potentially be helpful for detecting cognitive impairment. PMID:29158880

  4. Biophysical Insights into How Spike Threshold Depends on the Rate of Membrane Potential Depolarization in Type I and Type II Neurons

    PubMed Central

    Yi, Guo-Sheng; Wang, Jiang; Tsang, Kai-Ming; Wei, Xi-Le; Deng, Bin

    2015-01-01

    Dynamic spike threshold plays a critical role in neuronal input-output relations. In many neurons, the threshold potential depends on the rate of membrane potential depolarization (dV/dt) preceding a spike. There are two basic classes of neural excitability, i.e., Type I and Type II, according to input-output properties. Although the dynamical and biophysical basis of their spike initiation has been established, the spike threshold dynamics for each cell type have not been well described. Here, we use a biophysical model to investigate how spike threshold depends on dV/dt in the two types of neuron. We observe that the Type II spike threshold is more depolarized and more sensitive to dV/dt than that of Type I. With phase plane analysis, we show that each threshold dynamic arises from differences in the separatrix and K+ current kinetics. By analyzing subthreshold properties of membrane currents, we find that the activation of a hyperpolarizing current prior to spike initiation is a major factor regulating the threshold dynamics. The outward K+ current in the Type I neuron does not activate at perithreshold potentials, which makes its spike threshold insensitive to dV/dt. The Type II K+ current activates prior to spike initiation, and there is a large net hyperpolarizing current at perithreshold potentials, which results in a depolarized threshold as well as a pronounced threshold dynamic. These predictions are further confirmed in several other functionally equivalent cases of neural excitability. Our study provides a fundamental description of how intrinsic biophysical properties contribute to the threshold dynamics of Type I and Type II neurons, which could help decipher their significant functions in neural coding. PMID:26083350

  5. KSC-06pp1618

    NASA Image and Video Library

    2006-07-17

    KENNEDY SPACE CENTER, FLA. - Vapor trails flow from Discovery's wing tips as it makes a speedy approach to Runway 15 at NASA's Shuttle Landing Facility, completing mission STS-121 to the International Space Station. At touchdown -- nominally about 2,500 ft. beyond the runway threshold -- the orbiter is traveling at a speed ranging from 213 to 226 mph. Discovery traveled 5.3 million miles, landing on orbit 202. Mission elapsed time was 12 days, 18 hours, 37 minutes and 54 seconds. Main gear touchdown occurred on time at 9:14:43 EDT. Wheel stop was at 9:15:49 EDT. The returning crew members aboard are Commander Steven Lindsey, Pilot Mark Kelly and Mission Specialists Piers Sellers, Michael Fossum, Lisa Nowak and Stephanie Wilson. Mission Specialist Thomas Reiter, who launched with the crew on July 4, remained on the station to join the Expedition 13 crew there. The landing is the 62nd at Kennedy Space Center and the 32nd for Discovery. During the mission, the STS-121 crew tested new equipment and procedures to improve shuttle safety, and delivered supplies and made repairs to the International Space Station. Photo credit: NASA/Tony Gray & Tim Powers

  6. KSC-06pp1619

    NASA Image and Video Library

    2006-07-17

    KENNEDY SPACE CENTER, FLA. - Vapor trails flow from Discovery's wing tips as it makes a speedy approach to Runway 15 at NASA's Shuttle Landing Facility, completing mission STS-121 to the International Space Station. At touchdown -- nominally about 2,500 ft. beyond the runway threshold -- the orbiter is traveling at a speed ranging from 213 to 226 mph. Discovery traveled 5.3 million miles, landing on orbit 202. Mission elapsed time was 12 days, 18 hours, 37 minutes and 54 seconds. Main gear touchdown occurred on time at 9:14:43 EDT. Wheel stop was at 9:15:49 EDT. The returning crew members aboard are Commander Steven Lindsey, Pilot Mark Kelly and Mission Specialists Piers Sellers, Michael Fossum, Lisa Nowak and Stephanie Wilson. Mission Specialist Thomas Reiter, who launched with the crew on July 4, remained on the station to join the Expedition 13 crew there. The landing is the 62nd at Kennedy Space Center and the 32nd for Discovery. During the mission, the STS-121 crew tested new equipment and procedures to improve shuttle safety, and delivered supplies and made repairs to the International Space Station. Photo credit: NASA/Tony Gray & Tim Powers

  7. Protograph LDPC Codes with Node Degrees at Least 3

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and thereby achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes, for fixed block length, that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes; this is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. The combined constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance; thus, having node degree at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees of at least three. A family of low- to high-rate codes, with minimum distance increasing linearly in block size and with capacity-approaching performance thresholds, is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.

  8. Defining ADHD symptom persistence in adulthood: optimizing sensitivity and specificity.

    PubMed

    Sibley, Margaret H; Swanson, James M; Arnold, L Eugene; Hechtman, Lily T; Owens, Elizabeth B; Stehli, Annamarie; Abikoff, Howard; Hinshaw, Stephen P; Molina, Brooke S G; Mitchell, John T; Jensen, Peter S; Howard, Andrea L; Lakes, Kimberley D; Pelham, William E

    2017-06-01

    Longitudinal studies of children diagnosed with ADHD report widely ranging ADHD persistence rates in adulthood (5-75%). This study documents how information source (parent vs. self-report), method (rating scale vs. interview), and symptom threshold (DSM vs. norm-based) influence reported ADHD persistence rates in adulthood. Five hundred seventy-nine children were diagnosed with DSM-IV ADHD-Combined Type at baseline (ages 7.0-9.9 years); 289 classmates served as a local normative comparison group (LNCG). Of these, 476 and 241, respectively, were evaluated in adulthood (mean age = 24.7). Parent and self-reports of symptoms and impairment on rating scales and structured interviews were used to investigate ADHD persistence in adulthood. Persistence rates were higher when using parent rather than self-reports, structured interviews rather than rating scales (for self-report but not parent report), and a norm-based (NB) threshold of 4 symptoms rather than DSM criteria. Receiver operating characteristic (ROC) analyses revealed that sensitivity and specificity were optimized by combining parent and self-reports on a rating scale and applying an NB threshold. The interview format optimizes young adult self-reporting when parent reports are not available. However, combining parent and self-reports from rating scales, using an 'or' rule and an NB threshold, optimized the balance between sensitivity and specificity. With this definition, 60% of the ADHD group demonstrated symptom persistence and 41% met both symptom and impairment criteria in adulthood. © 2016 Association for Child and Adolescent Mental Health.
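    The 'or' rule with a norm-based symptom threshold is simple to state in code. The cutoff of 4 symptoms follows the abstract; the combination function and the sensitivity/specificity bookkeeping below are illustrative, not the study's software:

```python
def persistent(parent_symptoms, self_symptoms, nb_cutoff=4):
    """'Or' rule: ADHD counts as persistent if either informant
    endorses at least the norm-based cutoff of symptoms (4, per the
    abstract)."""
    return parent_symptoms >= nb_cutoff or self_symptoms >= nb_cutoff

def sensitivity_specificity(predictions, truths):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(p and t for p, t in zip(predictions, truths))
    fn = sum((not p) and t for p, t in zip(predictions, truths))
    tn = sum((not p) and (not t) for p, t in zip(predictions, truths))
    fp = sum(p and (not t) for p, t in zip(predictions, truths))
    return tp / (tp + fn), tn / (tn + fp)
```

    The 'or' rule raises sensitivity (a case missed by one informant can still be caught by the other) at the cost of specificity, which is the trade-off the ROC analysis balances.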

  9. Estimating the rate of biological introductions: Lessepsian fishes in the Mediterranean.

    PubMed

    Belmaker, Jonathan; Brokovich, Eran; China, Victor; Golani, Daniel; Kiflawi, Moshe

    2009-04-01

    Sampling issues preclude the direct use of the discovery rate of exotic species as a robust estimate of their rate of introduction. Recently, a method was advanced that allows maximum-likelihood estimation of both the observational probability and the introduction rate from the discovery record. Here, we propose an alternative approach that utilizes the discovery record of native species to control for sampling effort. Implemented in a Bayesian framework using Markov chain Monte Carlo simulations, the approach provides estimates of the rate of introduction of the exotic species, and of additional parameters such as the size of the species pool from which they are drawn. We illustrate the approach using Red Sea fishes recorded in the eastern Mediterranean, after crossing the Suez Canal, and show that the two approaches may lead to different conclusions. The analytical framework is highly flexible and could provide a basis for easy modification to other systems for which first-sighting data on native and introduced species are available.

  10. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had been previously analyzed in great detail, but while taking a brute force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby the variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology while consistently excluding all but one of the benchmarked nineteen false positive metabolites previously identified. Copyright © 2016 Elsevier B.V. All rights reserved.
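    The null-distribution approach to choosing an F-ratio threshold can be sketched as a label-permutation test: shuffling class labels destroys any real class structure, and an upper quantile of the resulting F-ratios gives a cutoff below which variables are treated as not class-distinguishing. This is a one-variable toy sketch of the idea, not the tile-based GC×GC-TOFMS software:

```python
import random
import statistics as st

def f_ratio(a, b):
    """Fisher ratio: between-class variance relative to within-class scatter."""
    grand = st.mean(a + b)
    between = (len(a) * (st.mean(a) - grand) ** 2
               + len(b) * (st.mean(b) - grand) ** 2)
    within = (sum((x - st.mean(a)) ** 2 for x in a)
              + sum((x - st.mean(b)) ** 2 for x in b))
    return between / within

def null_threshold(a, b, n_perm=500, q=0.95, seed=1):
    """q-quantile of F-ratios computed after shuffling class labels."""
    rng = random.Random(seed)
    pooled, n = a + b, len(a)
    null = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        null.append(f_ratio(pooled[:n], pooled[n:]))
    return sorted(null)[int(q * n_perm) - 1]

# Example: a clearly class-separated variable vs. its permutation null
glucose = [1.0, 1.1, 0.9, 1.2, 0.8, 1.05]    # hypothetical repressed class
ethanol = [2.0, 2.1, 1.9, 2.2, 1.8, 2.05]    # hypothetical derepressed class
observed_f = f_ratio(glucose, ethanol)
cutoff = null_threshold(glucose, ethanol)
```

    Variables whose observed F-ratio falls below `cutoff` can be ignored, which is what lets the analyst trust the top of the hit table.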

  11. An assessment of nutrients and sedimentation in the St. Thomas East End Reserves, US Virgin Islands.

    PubMed

    Pait, Anthony S; Galdo, Francis R; Ian Hartwell, S; Apeti, Dennis A; Mason, Andrew L

    2018-04-09

    Nutrients and sedimentation were monitored for approximately 2 years at six sites in the St. Thomas East End Reserves (STEER), St. Thomas, USVI, as part of a NOAA project to develop an integrated environmental assessment. Concentrations of ammonium (NH4+) and dissolved inorganic nitrogen (DIN) were higher in Mangrove Lagoon and Benner Bay in the western portion of STEER than in the other sites further east (i.e., Cowpet Bay, Rotto Cay, St. James, and Little St. James). There was no correlation between rainfall and nutrient concentrations. Using a set of suggested nutrient thresholds developed to indicate the potential for the overgrowth of algae on reefs, approximately 60% of the samples collected in STEER were above the threshold for orthophosphate (HPO4^2-), while 55% of samples were above the DIN threshold. Benner Bay had the highest sedimentation rate of any site monitored in STEER, including Mangrove Lagoon. There were also east-to-west and north-to-south gradients in sedimentation, indicative of higher sedimentation rates in the western, more populated areas surrounding STEER and at sites closer to the shore of the main island of St. Thomas. Although none of the sites had a mean sedimentation rate above a suggested sedimentation threshold, the mean sedimentation rate in Benner Bay was just below the threshold.

  12. A study of life prediction differences for a nickel-base Alloy 690 using a threshold and a non-threshold model

    NASA Astrophysics Data System (ADS)

    Young, B. A.; Gao, Xiaosheng; Srivatsan, T. S.

    2009-10-01

    In this paper we compare and contrast the crack growth rates of a nickel-base superalloy (Alloy 690) in the Pressurized Water Reactor (PWR) environment. Over the last few years, a preponderance of test data has been gathered on both Alloy 690 thick plate and Alloy 690 tubing. The original model, essentially based on a small data set for thick plate, compensated for temperature, load ratio, and stress-intensity range, but did not account for the fatigue threshold of the material. As additional test data on both plate and tube product became available, the model was gradually revised to account for threshold properties. Both the original and revised models generated acceptable results for data above 1 × 10^-11 m/s. However, test data at lower growth rates were over-predicted by the non-threshold model. Since the original model did not take the fatigue threshold into account, it predicted no operating stress below which the material would be free of fatigue crack growth. Because of this over-prediction of the growth rate below 1 × 10^-11 m/s, arising from combinations of low stress, small crack size, and long rise time, the non-threshold model in general under-predicts the total available life of the components.
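    The contrast between the two model families can be sketched with a Paris-type growth law, with and without a fatigue-threshold term. These are generic textbook forms used for illustration; the report's actual models also compensate for temperature and load ratio:

```python
def growth_rate(dK, C, n, dK_th=None):
    """Fatigue crack growth per cycle, da/dN.

    Without a threshold, the Paris-type law C * dK**n predicts growth at
    any stress-intensity range dK. With a threshold dK_th, one common
    form is C * (dK**n - dK_th**n), clipped at zero: no growth below
    the threshold. C, n, and dK_th here are illustrative parameters.
    """
    if dK_th is None:
        return C * dK ** n
    return max(0.0, C * (dK ** n - dK_th ** n))
```

    Below the threshold the non-threshold form still predicts finite growth, so integrating it over many cycles under-predicts component life, exactly the discrepancy described above.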

  13. Security of a semi-quantum protocol where reflections contribute to the secret key

    NASA Astrophysics Data System (ADS)

    Krawec, Walter O.

    2016-05-01

    In this paper, we provide a proof of unconditional security for a semi-quantum key distribution protocol introduced in a previous work. This particular protocol demonstrated the possibility of using X basis states to contribute to the raw key of the two users (as opposed to using only direct measurement results) even though a semi-quantum participant cannot directly manipulate such states. In this work, we provide a complete proof of security by deriving a lower bound of the protocol's key rate in the asymptotic scenario. Using this bound, we are able to find an error threshold value such that for all error rates less than this threshold, it is guaranteed that A and B may distill a secure secret key; for error rates larger than this threshold, A and B should abort. We demonstrate that this error threshold compares favorably to several fully quantum protocols. We also comment on some interesting observations about the behavior of this protocol under certain noise scenarios.
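    For context, a standard fully quantum benchmark such error thresholds are compared against is the asymptotic BB84 key rate r(Q) = 1 − 2h(Q), which crosses zero near Q ≈ 11%. A minimal sketch of that benchmark (not the semi-quantum bound derived in the paper):

```python
import math

def h2(x):
    """Binary (Shannon) entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def bb84_key_rate(Q):
    """Asymptotic BB84 key rate 1 - 2*h(Q) under one-way
    post-processing; positive rate means a secret key is distillable."""
    return 1.0 - 2.0 * h2(Q)

# Scan for the error threshold: the largest Q with a positive key rate.
Q = 0.0
while bb84_key_rate(Q + 1e-4) > 0.0:
    Q += 1e-4
```

    Any semi-quantum threshold can then be read against this well-known ~11% figure to judge how much security is sacrificed for the simpler hardware.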

  14. Spin-transfer torque switched magnetic tunnel junctions in magnetic random access memory

    NASA Astrophysics Data System (ADS)

    Sun, Jonathan Z.

    2016-10-01

    Spin-transfer torque (or spin-torque, STT) based magnetic tunnel junctions (MTJs) are at the heart of a new generation of magnetism-based solid-state memory, the so-called spin-transfer-torque magnetic random access memory, or STT-MRAM. Over the past decades, the STT-switchable magnetic tunnel junction has seen progress on many fronts, including the discovery of (001) MgO as the most favored tunnel barrier, which, together with (bcc) Fe or FeCo alloys, yields the best demonstrated tunnel magnetoresistance (TMR); the development of perpendicularly magnetized ultrathin CoFeB-type thin films sufficient to support high-density memories, with junction sizes demonstrated down to 11 nm in diameter; and record-low spin-torque switching threshold currents, giving a best reported switching efficiency of over 5 kBT/μA. Here we review the basic device properties, focusing on perpendicularly magnetized MTJs, both in terms of switching efficiency as measured by sub-threshold, quasi-static methods, and in terms of switching speed under super-threshold, forced switching. We focus on device behaviors important for memory applications that are rooted in fundamental device physics, highlighting the trade-offs among device parameters for the most suitable system integration.

  15. Mitochondrial threshold effects.

    PubMed Central

    Rossignol, Rodrigue; Faustin, Benjamin; Rocher, Christophe; Malgat, Monique; Mazat, Jean-Pierre; Letellier, Thierry

    2003-01-01

    The study of mitochondrial diseases has revealed dramatic variability in the phenotypic presentation of mitochondrial genetic defects. To attempt to understand this variability, different authors have studied energy metabolism in transmitochondrial cell lines carrying different proportions of various pathogenic mutations in their mitochondrial DNA. The same kinds of experiments have been performed on isolated mitochondria and on tissue biopsies taken from patients with mitochondrial diseases. The results have shown that, in most cases, phenotypic manifestation of the genetic defect occurs only when a threshold level is exceeded, and this phenomenon has been named the 'phenotypic threshold effect'. Subsequently, several authors showed that it was possible to inhibit considerably the activity of a respiratory chain complex, up to a critical value, without affecting the rate of mitochondrial respiration or ATP synthesis. This phenomenon was called the 'biochemical threshold effect'. More recently, quantitative analysis of the effects of various mutations in mitochondrial DNA on the rate of mitochondrial protein synthesis has revealed the existence of a 'translational threshold effect'. In this review these different mitochondrial threshold effects are discussed, along with their molecular bases and the roles that they play in the presentation of mitochondrial diseases. PMID:12467494

  16. Study of impacts of different evaluation criteria on gamma pass rates in VMAT QA using MatriXX and EPID

    NASA Astrophysics Data System (ADS)

    Noufal, Manthala Padannayil; Abdullah, Kallikuzhiyil Kochunny; Niyas, Puzhakkal; Subha, Pallimanhayil Abdul Raheem

    2017-12-01

    Aim: This study evaluates the impact of using different evaluation criteria on gamma pass rates in two commercially available QA methods employed for the verification of VMAT plans, using different hypothetical planning target volumes (PTVs) and anatomical regions. Introduction: Volumetric modulated arc therapy (VMAT) is a widely accepted technique for delivering highly conformal treatment in a very efficient manner. As VMAT plans are more complex than intensity-modulated radiotherapy (IMRT) plans, the implementation of stringent quality assurance (QA) before treatment delivery is of paramount importance. Material and Methods: Two sets of VMAT plans were generated using the Eclipse planning system, one with five different complex hypothetical three-dimensional PTVs and one including three anatomical regions. Verification of these plans was performed using a MatriXX ionization chamber array embedded inside a MultiCube phantom and a Varian EPID dosimetric system attached to a Clinac iX. The plans were evaluated based on the 3%/3 mm, 2%/2 mm, and 1%/1 mm global gamma criteria, each with three low-dose threshold values (0%, 10%, and 20%). Results: The gamma pass rates were above 95% in all VMAT plans when the 3%/3 mm gamma criterion was used and no threshold was applied. In both systems, the pass rates decreased as the criteria became stricter. Higher pass rates were observed when no threshold was applied, and they tended to decrease for the 10% and 20% thresholds. Conclusion: The results confirm the suitability of the equipment used and the validity of the plans. The study also confirmed that the threshold settings greatly affect the gamma pass rates, especially for the stricter gamma criteria.
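    As a rough illustration of the gamma analysis used above, the following sketch computes a 1D global gamma pass rate for hypothetical dose profiles (the clinical systems operate on 2D measurements; the function name and example data are illustrative, not from the study):

```python
import numpy as np

def gamma_pass_rate(ref, evl, positions, dose_pct=3.0, dta_mm=3.0, threshold_pct=10.0):
    """1D global gamma analysis: percentage of reference points with gamma <= 1.
    Reference points below threshold_pct of the maximum dose are excluded."""
    d_max = ref.max()
    dd_crit = dose_pct / 100.0 * d_max            # global dose-difference criterion
    mask = ref >= threshold_pct / 100.0 * d_max   # low-dose threshold
    gammas = []
    for r_dose, r_pos in zip(ref[mask], positions[mask]):
        dd = (evl - r_dose) / dd_crit             # dose-difference term
        dta = (positions - r_pos) / dta_mm        # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dta**2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

# Identical measured and planned profiles pass at 100%
pos = np.linspace(-50.0, 50.0, 201)
ref = 100.0 * np.exp(-pos**2 / (2 * 15.0**2))
print(gamma_pass_rate(ref, ref, pos))  # 100.0
```

    Tightening `dose_pct`/`dta_mm` or raising `threshold_pct` changes the pass rate in the directions reported above.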

  17. Methods for threshold determination in multiplexed assays

    DOEpatents

    Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J

    2014-06-24

    Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
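    The general recipe described above (characterize the negative-sample distribution, derive its false-positive-rate curve, and read off the threshold where that curve meets a chosen criterion) can be sketched as follows; the empirical-quantile shortcut and all names here are illustrative assumptions, not the patented method:

```python
import numpy as np

def threshold_for_fpr(negative_signals, fpr_criterion=0.01):
    """Return the signature threshold at which the empirical false-positive-rate
    curve of the negative samples crosses the chosen false-positive criterion.
    FPR(t) = fraction of negative samples with signal > t, so the threshold is
    the (1 - fpr_criterion) quantile of the negative-sample signals."""
    s = np.asarray(negative_signals, dtype=float)
    return float(np.quantile(s, 1.0 - fpr_criterion))
```

    For standard-normal negative signals and a 5% criterion, the returned threshold lands near 1.64, the familiar normal quantile.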

  18. Extracellular Vesicles in Bile as Markers of Malignant Biliary Stenoses.

    PubMed

    Severino, Valeria; Dumonceau, Jean-Marc; Delhaye, Myriam; Moll, Solange; Annessi-Ramseyer, Isabelle; Robin, Xavier; Frossard, Jean-Louis; Farina, Annarita

    2017-08-01

    Algorithms for diagnosis of malignant common bile duct (CBD) stenoses are complex and lack accuracy. Malignant tumors secrete large numbers of extracellular vesicles (EVs) into surrounding fluids; EVs might therefore serve as biomarkers for diagnosis. We investigated whether concentrations of EVs in bile could discriminate malignant from nonmalignant CBD stenoses. We collected bile and blood samples from 50 patients undergoing therapeutic endoscopic retrograde cholangiopancreatography at university hospitals in Europe for CBD stenosis of malignant (pancreatic cancer, n = 20, or cholangiocarcinoma, n = 5) or nonmalignant (chronic pancreatitis [CP], n = 15) origin. Ten patients with CBD obstruction due to biliary stones were included as controls. EV concentrations in samples were determined by nanoparticle tracking analyses. The discovery cohort comprised the first 10 patients with a diagnosis of pancreatic cancer, based on tissue analysis, and 10 consecutive controls. Using samples from these subjects, we identified the threshold concentration of bile EVs that best discriminated patients with pancreatic cancer from controls. We verified the diagnostic performance of the bile EV concentration by analyzing samples from the 30 consecutive patients with a diagnosis of malignant (pancreatic cancer or cholangiocarcinoma, n = 15) or nonmalignant (CP, n = 15) CBD stenosis. Samples were compared using the Mann-Whitney test and nonparametric Spearman correlation analysis. Receiver operating characteristic area under the curve was used to determine diagnostic accuracy. In both cohorts, the median concentration of EVs was significantly higher in bile samples from patients with malignant CBD stenoses than from controls or patients with nonmalignant CBD stenoses (2.41 × 10¹⁵ vs 1.60 × 10¹⁴ nanoparticles/L in the discovery cohort, P < .0001, and 4.00 × 10¹⁵ vs 1.26 × 10¹⁴ nanoparticles/L in the verification cohort, P < .0001). A threshold of 9.46 × 10¹⁴ nanoparticles/L in bile best distinguished patients with malignant CBD stenoses from controls in the discovery cohort. In the verification cohort, this threshold discriminated malignant from nonmalignant CBD stenoses with 100% accuracy. Serum concentration of EVs distinguished patients with malignant vs nonmalignant CBD stenoses with 63.3% diagnostic accuracy. The concentration of EVs in bile discriminates between malignant and nonmalignant CBD stenoses with 100% accuracy. Further studies are needed to confirm these findings. Clinical Trial registration no: ISRCTN66835592. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
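    A minimal sketch of this kind of threshold selection on a discovery cohort, assuming hypothetical EV concentrations rather than the study's data; it scans candidate cutoffs and keeps the one with the highest accuracy:

```python
import numpy as np

def best_threshold(conc, is_malignant):
    """Scan cutoffs midway between consecutive observed concentrations and
    return (threshold, accuracy) for the cutoff that best separates
    malignant from nonmalignant samples."""
    conc = np.asarray(conc, dtype=float)
    y = np.asarray(is_malignant, dtype=bool)
    levels = np.unique(conc)
    mids = (levels[:-1] + levels[1:]) / 2.0       # candidate cutoffs
    acc, thr = max((float(np.mean((conc >= t) == y)), t) for t in mids)
    return thr, acc

# Hypothetical bile EV concentrations (nanoparticles/L)
conc = [1.2e14, 1.6e14, 2.0e14, 2.4e15, 4.0e15, 3.0e15]
labels = [False, False, False, True, True, True]
thr, acc = best_threshold(conc, labels)
```

    On well-separated data like this, the selected cutoff falls between the two groups and the training accuracy is perfect; verification on an independent cohort, as done above, is what makes the estimate credible.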

  19. Study on the threshold of a stochastic SIR epidemic model and its extensions

    NASA Astrophysics Data System (ADS)

    Zhao, Dianli

    2016-09-01

    This paper provides a simple but effective method for estimating the threshold of a class of stochastic epidemic models by use of the nonnegative semimartingale convergence theorem. First, the threshold R0SIR is obtained for the stochastic SIR model with a saturated incidence rate; whether its value is below or above 1 completely determines whether the disease goes extinct or prevails, for any size of the white noise. Moreover, when R0SIR > 1, the system is proved to be convergent in time mean. The thresholds of the stochastic SIVS models with and without a saturated incidence rate are then established by the same method. Compared with the previous literature, the related results are improved, and the method is simpler than before.
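    The flavor of such a threshold result can be illustrated with a generic Euler-Maruyama simulation; the drift/noise form below is one common choice with saturated incidence and multiplicative noise on the transmission term, an assumption for illustration rather than the paper's exact system:

```python
import numpy as np

def simulate_sir(beta=0.6, gamma=0.2, mu=0.05, alpha=0.1, sigma=0.1,
                 s0=0.9, i0=0.1, T=200.0, dt=0.01, seed=1):
    """Euler-Maruyama integration of a stochastic SIR model with saturated
    incidence beta*S*I/(1 + alpha*I) and multiplicative white noise of
    intensity sigma on the transmission term. Returns final (S, I)."""
    rng = np.random.default_rng(seed)
    S, I = s0, i0
    for _ in range(int(T / dt)):
        inc = beta * S * I / (1.0 + alpha * I)
        dW = np.sqrt(dt) * rng.standard_normal()
        noise = sigma * S * I / (1.0 + alpha * I) * dW
        S += (mu - mu * S - inc) * dt - noise
        I += (inc - (mu + gamma) * I) * dt + noise
        S, I = max(S, 0.0), max(I, 0.0)
    return S, I
```

    With the defaults, the deterministic basic reproduction number beta/(mu + gamma) = 2.4 and the infection persists; with beta = 0.2 it is 0.8 and the infection dies out, mirroring the below-1/above-1 threshold behavior established in the paper.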

  20. Spacewatch search for near-Earth asteroids

    NASA Technical Reports Server (NTRS)

    Gehrels, Tom

    1991-01-01

    The objective of the Spacewatch Program is to develop new techniques for the discovery of near-Earth asteroids and to prove the efficiency of the techniques. Extensive experience was obtained with the 0.91-m Spacewatch Telescope on Kitt Peak, which now has the largest CCD detector in the world: a Tektronix 2048 x 2048 with 27-micron pixel size. During the past year, software and hardware for optimizing the discovery of near-Earth asteroids were installed. As a result, automatic detection of objects that move at rates between 0.1 and 4 degrees per day has become routine since September 1990. On average, one or two near-Earth asteroids are discovered per month. Follow-up consists of astrometry over as long an arc as the geometry and faintness of the object allow, typically three months following the discovery observations. During the second half of 1990, replacing the 0.91-m mirror with a larger one, to increase the discovery rate, was considered. Studies and planning for this switch are proposed for funding during the coming year. It was also proposed that the Spacewatch Telescope be kept on the sky, instead of having the drive turned off, in order to increase the rate of discoveries by perhaps a factor of two.

  1. Influence of drug load on dissolution behavior of tablets containing a poorly water-soluble drug: estimation of the percolation threshold.

    PubMed

    Wenzel, Tim; Stillhart, Cordula; Kleinebudde, Peter; Szepes, Anikó

    2017-08-01

    Drug load plays an important role in the development of solid dosage forms, since it can significantly influence both processability and final product properties. The percolation threshold of the active pharmaceutical ingredient (API) corresponds to a critical concentration above which an abrupt change in drug product characteristics can occur. The objective of this study was to identify the percolation threshold of a poorly water-soluble drug with regard to the dissolution behavior from immediate-release tablets. The influence of the API particle size on the percolation threshold was also studied. Formulations with increasing drug loads were manufactured via roll compaction using constant process parameters and subsequent tableting. Drug dissolution was investigated in biorelevant medium. The percolation threshold was estimated via a model-dependent and a model-independent method based on the dissolution data. The intragranular concentration of mefenamic acid had a significant effect on granule and tablet characteristics, such as particle size distribution, compactibility and tablet disintegration. Increasing the intragranular drug concentration of the tablets resulted in lower dissolution rates. A percolation threshold of approximately 20% v/v could be determined for both particle sizes of the API, above which an abrupt decrease of the dissolution rate occurred. However, the increasing drug load had a more pronounced effect on the dissolution rate of tablets containing the micronized API, which can be attributed to the high agglomeration tendency of micronized substances during manufacturing steps such as roll compaction and tableting. Both methods applied for the estimation of the percolation threshold provided comparable values.

  2. Influence of proprioceptive feedback on the firing rate and recruitment of motoneurons

    PubMed Central

    De Luca, C J; Kline, J C

    2012-01-01

    We investigated the relationships of the firing rate and maximal recruitment threshold of motoneurons recorded during isometric contraction with the number of spindles in individual muscles. At force levels above 10% of maximal voluntary contraction, the firing rate was inversely related to the number of spindles in a muscle, with the slope of the relationship increasing with force. The maximal recruitment threshold of motor units increased linearly with the number of spindles in the muscle. Thus, muscles with a greater number of spindles had lower firing rates and a greater maximal recruitment threshold. These findings may be explained by a mechanical interaction between muscle fibres and adjacent spindles. During low-level (0 to 10%) voluntary contractions, muscle fibres of recruited motor units produce force-twitches that activate nearby spindles to respond with an immediate excitatory feedback that reaches maximal level. As the force increases further, the twitches overlap and tend towards tetanization, the muscle fibres shorten, the spindles slacken, their excitatory firings decrease, and the net excitation to the homonymous motoneurons decreases. Motoneurons of muscles with greater number of spindles receive a greater decrease in excitation which reduces their firing rates, increases their maximal recruitment threshold, and changes the motoneuron recruitment distribution. PMID:22183300

  3. Hierarchical control of motor units in voluntary contractions.

    PubMed

    De Luca, Carlo J; Contessa, Paola

    2012-01-01

    For the past five decades there has been wide acceptance of a relationship between the firing rate of motor units and the afterhyperpolarization of motoneurons. It has been promulgated that the higher-threshold, larger-soma, motoneurons fire faster than the lower-threshold, smaller-soma, motor units. This relationship was based on studies on anesthetized cats with electrically stimulated motoneurons. We questioned its applicability to motor unit control during voluntary contractions in humans. We found that during linearly force-increasing contractions, firing rates increased as exponential functions. At any time and force level, including at recruitment, the firing rate values were inversely related to the recruitment threshold of the motor unit. The time constants of the exponential functions were directly related to the recruitment threshold. From the Henneman size principle it follows that the characteristics of the firing rates are also related to the size of the soma. The "firing rate spectrum" presents a beautifully simple control scheme in which, at any given time or force, the firing rate value of earlier-recruited motor units is greater than that of later-recruited motor units. This hierarchical control scheme describes a mechanism that provides an effective economy of force generation for the earlier-recruited lower force-twitch motor units, and reduces the fatigue of later-recruited higher force-twitch motor units-both characteristics being well suited for generating and sustaining force during the fight-or-flight response.

  4. Hierarchical control of motor units in voluntary contractions

    PubMed Central

    Contessa, Paola

    2012-01-01

    For the past five decades there has been wide acceptance of a relationship between the firing rate of motor units and the afterhyperpolarization of motoneurons. It has been promulgated that the higher-threshold, larger-soma, motoneurons fire faster than the lower-threshold, smaller-soma, motor units. This relationship was based on studies on anesthetized cats with electrically stimulated motoneurons. We questioned its applicability to motor unit control during voluntary contractions in humans. We found that during linearly force-increasing contractions, firing rates increased as exponential functions. At any time and force level, including at recruitment, the firing rate values were inversely related to the recruitment threshold of the motor unit. The time constants of the exponential functions were directly related to the recruitment threshold. From the Henneman size principle it follows that the characteristics of the firing rates are also related to the size of the soma. The “firing rate spectrum” presents a beautifully simple control scheme in which, at any given time or force, the firing rate value of earlier-recruited motor units is greater than that of later-recruited motor units. This hierarchical control scheme describes a mechanism that provides an effective economy of force generation for the earlier-recruited lower force-twitch motor units, and reduces the fatigue of later-recruited higher force-twitch motor units—both characteristics being well suited for generating and sustaining force during the fight-or-flight response. PMID:21975447

  5. What is the optimal rate of caesarean section at population level? A systematic review of ecologic studies.

    PubMed

    Betran, Ana Pilar; Torloni, Maria Regina; Zhang, Jun; Ye, Jiangfeng; Mikolajczyk, Rafael; Deneux-Tharaux, Catherine; Oladapo, Olufemi Taiwo; Souza, João Paulo; Tunçalp, Özge; Vogel, Joshua Peter; Gülmezoglu, Ahmet Metin

    2015-06-21

    In 1985, WHO stated that there was no justification for caesarean section (CS) rates higher than 10-15% at the population level. While CS rates worldwide have continued to increase in an unprecedented manner over the subsequent three decades, concern has been raised about the validity of the 1985 landmark statement. We conducted a systematic review to identify, critically appraise and synthesize analyses of the ecologic association between CS rates and maternal, neonatal and infant outcomes. Four electronic databases were searched for ecologic studies published between 2000 and 2014 that analysed the possible association between CS rates and maternal, neonatal or infant mortality or morbidity. Two reviewers performed study selection, data extraction and quality assessment independently. We identified 11,832 unique citations, and eight studies were included in the review. Seven studies correlated CS rates with maternal mortality, five with neonatal mortality, four with infant mortality, two with low birthweight (LBW) and one with stillbirths. Except for one, all studies were cross-sectional in design, and five were global analyses of national-level CS rates versus mortality outcomes. Although the overall quality of the studies was acceptable, only two studies controlled for socio-economic factors and none controlled for clinical or demographic characteristics of the population. In unadjusted analyses, authors found a strong inverse relationship between CS rates and the mortality outcomes, such that maternal, neonatal and infant mortality decrease as CS rates increase up to a certain threshold. In the eight studies included in this review, this threshold lay at CS rates between 9 and 16%. However, in the two studies that adjusted for socio-economic factors, this relationship was either weakened or disappeared after controlling for these confounders. CS rates above the threshold of 9-16% were not associated with decreases in mortality outcomes regardless of adjustments. Our findings could be interpreted to mean that at CS rates below this threshold, socio-economic development may be driving the ecologic association between CS rates and mortality. On the other hand, at rates higher than this threshold, there is no association between CS and mortality outcomes regardless of adjustment. The ecological association between CS rates and relevant morbidity outcomes needs to be evaluated before drawing more definite conclusions at the population level.

  6. Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for health care policy and decision-making.

    PubMed

    Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J

    2018-07-01

    Network meta-analyses (NMA) have been used extensively to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the accuracy of two diagnostic tests: the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA), at two test thresholds each: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold and accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple test/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whereas MMSE at threshold <25/30 appeared to have the best true negative rate. The combined analysis of multiple tests at multiple thresholds allows more rigorous comparisons between competing diagnostic tests for decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Using threshold messages to promote physical activity: implications for public perceptions of health effects.

    PubMed

    Knox, Emily C L; Webb, Oliver J; Esliger, Dale W; Biddle, Stuart J H; Sherar, Lauren B

    2014-04-01

    The promotion of physical activity (PA) guidelines to the general public is an important issue that lacks empirical investigation. PA campaigns often feature participation thresholds that cite PA guidelines verbatim [e.g., 150 min/week moderate-to-vigorous physical activity (MVPA)]. Some campaigns instead prefer to use generic PA messages (e.g., do as much MVPA as possible). 'Thresholds' may disrupt understanding of the health benefits of modest PA participation. This study examined the perception of health benefits of PA after exposure to PA messages that did and did not contain a duration threshold. Brief structured interviews were conducted with a convenience sample of adults (n = 1100). Participants received a threshold message (150 min/week MVPA), a message that presented the threshold as a minimum; a generic message or no message. Participants rated perceived health effects of seven PA durations. One-way analyses of variance with post hoc tests for group differences were used to assess raw perception ratings for each duration of PA. Recipients of all three messages held more positive perceptions of >150 min/week of MVPA relative to those not receiving any message. For MVPA durations <150 min/week, the generic PA message group perceived the greatest health benefits. Those receiving the threshold message tended to have the least positive perceptions of durations <150 min/week. Threshold messages were associated with lower perceived health benefits for modest PA durations. Campaigns based on threshold messages may be limited when promoting small PA increases at a population level.

  8. Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.

    2003-01-01

    A method was developed for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage. The analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false-alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.
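    FM4, one of the two condition indicators named above, is commonly computed as the normalized kurtosis of the difference signal; the sketch below pairs it with a simple baseline-statistics threshold rule, which is an illustrative assumption rather than the paper's threshold-setting procedure:

```python
import numpy as np

def fm4(difference_signal):
    """FM4 gear-condition indicator: normalized kurtosis of the difference
    signal (time-synchronous average with regular mesh components removed).
    Near 3 for a healthy, Gaussian-like signal; rises with localized damage."""
    d = np.asarray(difference_signal, dtype=float)
    d = d - d.mean()
    return len(d) * np.sum(d**4) / np.sum(d**2) ** 2

def alarm_threshold(healthy_indicator_values, n_sigmas=3.0):
    """Threshold from healthy-baseline statistics: mean + n_sigmas * std,
    trading false-alarm rate against sensitivity to damage."""
    h = np.asarray(healthy_indicator_values, dtype=float)
    return h.mean() + n_sigmas * h.std()
```

    A Gaussian-like healthy signal gives FM4 near 3; a few damage-like impulses drive it well above the baseline-derived threshold.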

  9. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    PubMed

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001). However, in up to 19% of cases the DLMO derived from hourly sampling was >30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
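    The two DLMO thresholds compared above can be sketched as follows, assuming a melatonin profile sampled at fixed clock times and linear interpolation at the threshold crossing (illustrative data, not the study's):

```python
import numpy as np

def dlmo(times_h, melatonin_pg_ml, method="fixed"):
    """Dim light melatonin onset: clock time (hours) at which the melatonin
    profile first crosses threshold, with linear interpolation between samples.
    method='fixed': 3 pg/mL threshold.
    method='3k'   : mean + 2 SD of the first three low daytime samples."""
    t = np.asarray(times_h, dtype=float)
    m = np.asarray(melatonin_pg_ml, dtype=float)
    thr = 3.0 if method == "fixed" else m[:3].mean() + 2.0 * m[:3].std()
    i = int(np.nonzero(m >= thr)[0][0])   # first sample at/above threshold
    if i == 0:
        return t[0]
    frac = (thr - m[i - 1]) / (m[i] - m[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

# Hypothetical hourly saliva samples, 18:00 to 24:00
times = [18, 19, 20, 21, 22, 23, 24]
mel = [0.5, 0.8, 1.0, 2.0, 6.0, 12.0, 20.0]
```

    On this profile the fixed 3 pg/mL threshold gives a DLMO of 21.25 h, while the lower 3k threshold crosses earlier, matching the direction of the difference reported above.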

  10. Discharge variability and bedrock river incision on the Hawaiian island of Kaua'i

    NASA Astrophysics Data System (ADS)

    Huppert, K.; Deal, E.; Perron, J. T.; Ferrier, K.; Braun, J.

    2017-12-01

    Bedrock river incision occurs during floods that generate sufficient shear stress to strip riverbeds of sediment cover and erode underlying bedrock. Thresholds for incision can prevent erosion at low flows and slow down erosion at higher flows that do generate excess shear stress. Because discharge distributions typically display power-law tails, with non-negligible frequencies of floods much greater than the mean, models incorporating stochastic discharge and incision thresholds predict that discharge variability can sometimes have greater effects on long-term incision rates than mean discharge. This occurs when the commonly observed inverse scalings between mean discharge and discharge variability are weak or when incision thresholds are high. Because the effects of thresholds and discharge variability have only been documented in a few locations, their influence on long-term river incision rates remains uncertain. The Hawaiian island of Kaua'i provides an ideal natural laboratory to evaluate the effects of discharge variability and thresholds on bedrock river incision because it has one of Earth's steepest spatial gradients in mean annual rainfall and it also experiences dramatic spatial variations in rainfall and discharge variability, spanning a wide range of the conditions reported on Earth. Kaua'i otherwise has minimal variations in lithology, vertical motion, and other factors that can influence erosion. River incision rates averaged over 1.5 - 4.5 Myr timescales can be estimated along the lengths of Kauaian channels from the depths of river canyons and lava flow ages. We characterize rainfall and discharge variability on Kaua'i using records from an extensive network of rain and stream gauges spanning the past century. We use these characterizations to model long-term bedrock river incision along Kauaian channels with a threshold-dependent incision law, modulated by site-specific discharge-channel width scalings. 
Our comparisons between modeled and observed erosion rates suggest that variations in river incision rates on Kaua'i are dominated by variations in mean rainfall and discharge, rather than by differences in storminess across the island. We explore the implications of this result for the threshold dependence of river incision across Earth's varied climates.
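    The stochastic-threshold incision argument above can be sketched numerically: with a high incision threshold, a stormier (more variable) discharge distribution at fixed mean discharge yields a higher long-term incision rate. All parameter values and functional forms below are hypothetical:

```python
import numpy as np

def long_term_incision_rate(k=1e-6, tau_c=60.0, a=1.0, gamma_exp=0.5,
                            q_mean=1.0, variability=1.0, n=200_000, seed=0):
    """Monte-Carlo long-term incision rate E[k * max(tau(Q) - tau_c, 0)**a]
    with shear stress tau = c * Q**gamma_exp and lognormal daily discharge Q.
    'variability' is the sigma of log-discharge (larger = stormier), with the
    mean discharge held fixed at q_mean."""
    rng = np.random.default_rng(seed)
    mu_log = np.log(q_mean) - variability**2 / 2.0   # fixes E[Q] = q_mean
    q = rng.lognormal(mu_log, variability, n)
    c = 40.0                                         # stress coefficient (hypothetical units)
    tau = c * q**gamma_exp
    return k * np.mean(np.clip(tau - tau_c, 0.0, None) ** a)
```

    Because only floods exceeding the threshold erode, fattening the discharge tail while holding the mean fixed raises the long-term rate, which is the variability effect the abstract describes.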

  11. The relation between auditory-nerve temporal responses and perceptual rate integration in cochlear implants

    PubMed Central

    Hughes, Michelle L.; Baudhuin, Jacquelyn L.; Goehring, Jenny L.

    2014-01-01

    The purpose of this study was to examine auditory-nerve temporal response properties and their relation to psychophysical threshold for electrical pulse trains of varying rates (“rate integration”). The primary hypothesis was that better rate integration (steeper slope) would be correlated with smaller decrements in ECAP amplitude as a function of stimulation rate (shallower slope of the amplitude-rate function), reflecting a larger percentage of the neural population contributing more synchronously to each pulse in the train. Data were obtained for 26 ears in 23 cochlear-implant recipients. Electrically evoked compound action potential (ECAP) amplitudes were measured in response to each of 21 pulses in a pulse train for the following rates: 900, 1200, 1800, 2400, and 3500 pps. Psychophysical thresholds were obtained using a 3-interval, forced-choice adaptive procedure for 300-ms pulse trains of the same rates as used for the ECAP measures, which formed the rate-integration function. For each electrode, the slope of the psychophysical rate-integration function was compared to the following ECAP measures: (1) slope of the function comparing average normalized ECAP amplitude across pulses versus stimulation rate (“adaptation”), (2) the rate that produced the maximum alternation depth across the pulse train, and (3) rate at which the alternating pattern ceased (stochastic rate). Results showed no significant relations between the slope of the rate-integration function and any of the ECAP measures when data were collapsed across subjects. However, group data showed that both threshold and average ECAP amplitude decreased with increased stimulus rate, and within-subject analyses showed significant positive correlations between psychophysical thresholds and mean ECAP response amplitudes across the pulse train. 
These data suggest that ECAP temporal response patterns are complex and further study is required to better understand the relative contributions of adaptation, desynchronization, and firing probabilities of individual neurons that contribute to the aggregate ECAP response. PMID:25093283

  12. The safe volume threshold for chest drain removal following pulmonary resection.

    PubMed

    Yap, Kok Hooi; Soon, Jia Lin; Ong, Boon Hean; Loh, Yee Jim

    2017-11-01

    A best evidence topic in thoracic surgery was written according to a structured protocol. The question addressed was 'In patients undergoing pulmonary resection, is there a safe drainage volume threshold for chest drain removal?' Altogether 1054 papers were found, of which 5 represented the best evidence. The authors, journal, date and country of publication, patient group studied, study type, relevant outcomes and results of these papers are tabulated. The chest drainage threshold, where used, ranged from 250 to 500 ml/day. Both randomized controlled trials showed no significant difference in reintervention rates with a higher chest drainage volume threshold. Four studies that analysed other complications showed no statistically significant difference with a higher chest drainage volume threshold. Four studies evaluating length of hospital stay showed reduced or no difference in length of stay with a higher chest drainage volume threshold. Two cohort studies reported mortality rates of 0-0.01% with a higher chest drainage volume threshold. We conclude that early chest drain removal after pulmonary resection, accepting a higher chest drainage volume threshold of 250-500 ml/day, is safe and may result in shorter hospital stay without increasing reintervention, morbidity or mortality. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  13. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J., E-mail: edwardc@schoolph.uma

    This paper reveals that nearly 25 years after the BEIR I committee used Russell's dose-rate data to support the adoption of the linear no-threshold (LNT) dose-response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying the effect of dose rate on mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose-rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  14. Effects of heterogeneous convergence rate on consensus in opinion dynamics

    NASA Astrophysics Data System (ADS)

    Huang, Changwei; Dai, Qionglin; Han, Wenchen; Feng, Yuee; Cheng, Hongyan; Li, Haihong

    2018-06-01

    The Deffuant model has attracted much attention in the study of opinion dynamics. Here, we propose a modified version by introducing a heterogeneous convergence rate that depends on the opinion difference between interacting agents and a tunable parameter κ. We study the effects of the heterogeneous convergence rate on consensus by investigating the probability of complete consensus, the size of the largest opinion cluster, the number of opinion clusters, and the relaxation time. We find that decreasing the convergence rate lowers the confidence threshold above which the population always reaches complete consensus, and that there exists an optimal κ yielding the minimal bounded-confidence threshold. Moreover, we find that there exists a window below this threshold in which complete consensus may still be reached with nonzero probability when κ is not too large. We also find that, within a certain confidence range, decreasing the convergence rate reduces the relaxation time, which is somewhat counterintuitive.
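    The modified dynamics can be sketched in a few lines of code. The specific functional form of the difference-dependent convergence rate below (and all names) is an illustrative assumption, not necessarily the one used by the authors; at κ = 0 it reduces to the classic Deffuant rule with μ = 0.5.

```python
import random

def deffuant_step(opinions, d=0.3, kappa=1.0):
    """One pairwise update of a Deffuant-type model with a
    difference-dependent (heterogeneous) convergence rate."""
    i, j = random.sample(range(len(opinions)), 2)
    diff = opinions[j] - opinions[i]
    if abs(diff) < d:  # bounded confidence: interact only if close enough
        # Assumed form: closer pairs converge faster as kappa grows;
        # kappa = 0 recovers the classic constant rate mu = 0.5.
        mu = 0.5 * (1.0 - abs(diff) / d) ** kappa
        opinions[i] += mu * diff
        opinions[j] -= mu * diff

def simulate(n=200, steps=200_000, d=0.3, kappa=1.0, seed=1):
    """Random sequential updating from uniform initial opinions on [0, 1]."""
    random.seed(seed)
    opinions = [random.random() for _ in range(n)]
    for _ in range(steps):
        deffuant_step(opinions, d, kappa)
    return opinions
```

    Sweeping d and κ over many seeds and recording the fraction of runs ending in a single cluster reproduces the kind of consensus-probability curves the paper studies.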

  15. Eccentric muscle damage has variable effects on motor unit recruitment thresholds and discharge patterns in elbow flexor muscles.

    PubMed

    Dartnall, Tamara J; Rogasch, Nigel C; Nordstrom, Michael A; Semmler, John G

    2009-07-01

    The purpose of this study was to determine the effect of eccentric muscle damage on recruitment threshold force and repetitive discharge properties of low-threshold motor units. Ten subjects performed four tasks involving isometric contraction of elbow flexors while electromyographic (EMG) data were recorded from human biceps brachii and brachialis muscles. Tasks were 1) maximum voluntary contraction (MVC); 2) constant-force contraction at various submaximal targets; 3) motor unit recruitment threshold task; and 4) minimum motor unit discharge rate task. These tasks were performed on three separate days before, immediately after, and 24 h after eccentric exercise of elbow flexor muscles. MVC force declined (42%) immediately after exercise and remained depressed (29%) 24 h later, indicative of muscle damage. Mean motor unit recruitment threshold for biceps brachii was 8.4+/-4.2% MVC, (n=34) before eccentric exercise, and was reduced by 41% (5.0+/-3.0% MVC, n=34) immediately after and by 39% (5.2+/-2.5% MVC, n=34) 24 h after exercise. No significant changes in motor unit recruitment threshold were observed in the brachialis muscle. However, for the minimum tonic discharge rate task, motor units in both muscles discharged 11% faster (10.8+/-2.0 vs. 9.7+/-1.7 Hz) immediately after (n=29) exercise compared with that before (n=32). The minimum discharge rate variability was greater in brachialis muscle immediately after exercise (13.8+/-3.1%) compared with that before (11.9+/-3.1%) and 24 h after exercise (11.7+/-2.4%). No significant changes in minimum discharge rate variability were observed in the biceps brachii motor units after exercise. These results indicate that muscle damage from eccentric exercise alters motor unit recruitment thresholds for >or=24 h, but the effect is not the same in the different elbow flexor muscles.

  16. Recognition ROCs Are Curvilinear--Or Are They? On Premature Arguments against the Two-High-Threshold Model of Recognition

    ERIC Educational Resources Information Center

    Broder, Arndt; Schutz, Julia

    2009-01-01

    Recent reviews of recognition receiver operating characteristics (ROCs) claim that their curvilinear shape rules out threshold models of recognition. However, the shape of ROCs based on confidence ratings is not diagnostic to refute threshold models, whereas ROCs based on experimental bias manipulations are. Also, fitting predicted frequencies to…

  17. Demand for Colonoscopy in Colorectal Cancer Screening Using a Quantitative Fecal Immunochemical Test and Age/Sex-Specific Thresholds for Test Positivity.

    PubMed

    Chen, Sam Li-Sheng; Hsu, Chen-Yang; Yen, Amy Ming-Fang; Young, Graeme P; Chiu, Sherry Yueh-Hsia; Fann, Jean Ching-Yuan; Lee, Yi-Chia; Chiu, Han-Mo; Chiou, Shu-Ti; Chen, Hsiu-Hsi

    2018-06-01

    Background: Despite age and sex differences in fecal hemoglobin (f-Hb) concentrations, most fecal immunochemical test (FIT) screening programs use population-average cut-points for test positivity. The impact of age/sex-specific thresholds on FIT accuracy and colonoscopy demand for colorectal cancer screening is unknown. Methods: Using data from 723,113 participants enrolled in a Taiwanese population-based colorectal cancer screening program with a single FIT between 2004 and 2009, sensitivity and specificity were estimated for various f-Hb thresholds for test positivity. This included estimates based on a "universal" threshold, a receiver-operating-characteristic-curve-derived threshold, a targeted sensitivity, a targeted false-positive rate, and a colonoscopy-capacity-adjusted method integrating colonoscopy workload, with and without age/sex adjustments. Results: Optimal age/sex-specific thresholds were found to be equal to or lower than the universal 20 μg Hb/g threshold. For older males, a higher threshold (24 μg Hb/g) was identified using a 5% false-positive rate. Importantly, a nonlinear relationship was observed between sensitivity and colonoscopy workload, with workload rising disproportionately to sensitivity at 16 μg Hb/g. At this "colonoscopy-capacity-adjusted" threshold, the test positivity (colonoscopy workload) was 4.67% and sensitivity was 79.5%, compared with a lower 4.0% workload and a lower 78.7% sensitivity using 20 μg Hb/g. When constrained on capacity, age/sex-adjusted estimates were generally lower. However, optimizing age/sex-adjusted thresholds increased colonoscopy demand across models by 17% or more compared with a universal threshold. Conclusions: Age/sex-specific thresholds improve FIT accuracy with modest increases in colonoscopy demand. Impact: Colonoscopy-capacity-adjusted and age/sex-specific f-Hb thresholds may be useful in optimizing individual screening programs based on detection accuracy, population characteristics, and clinical capacity. Cancer Epidemiol Biomarkers Prev; 27(6); 704-9. ©2018 American Association for Cancer Research (AACR).
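    The trade-off the study quantifies, sensitivity versus test positivity (a proxy for colonoscopy workload) at each f-Hb cut-point, can be computed directly from screening outcomes. A minimal sketch with hypothetical data; names and values are illustrative, not from the study:

```python
def screening_metrics(f_hb, has_cancer, threshold):
    """Sensitivity, specificity and test positivity (a colonoscopy
    workload proxy) at a given f-Hb cut-point (ug Hb/g)."""
    tp = fp = tn = fn = 0
    for conc, cancer in zip(f_hb, has_cancer):
        positive = conc >= threshold  # FIT positive -> colonoscopy referral
        if positive and cancer:
            tp += 1
        elif positive:
            fp += 1
        elif cancer:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    positivity = (tp + fp) / len(f_hb)
    return sensitivity, specificity, positivity
```

    Lowering the cut-point raises sensitivity but also positivity (workload); age/sex-specific thresholds amount to applying a different cut-point per stratum and pooling the results.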

  18. Recovery of cat retinal ganglion cell sensitivity following pigment bleaching.

    PubMed Central

    Bonds, A B; Enroth-Cugell, C

    1979-01-01

    1. The threshold illuminance for small spot stimulation of on-centre cat retinal ganglion cells was plotted vs. time after exposure to adapting light sufficiently strong to bleach significant amounts of rhodopsin. 2. When the entire receptive field of an X- or Y-type ganglion cell is bleached by at most 40%, recovery of the cell's rod-system proceeds in two phases: an early relatively fast one during which the response appears transient, and a late, slower one during which responses become more sustained. Log threshold during the later phase is well fit by an exponential in time (tau = 11.5-38 min). 3. After bleaches of 90% of the underlying pigment, threshold is cone-determined for as long as 40 min. Rod threshold continues to decrease for at least 85 min after the bleach. 4. The rate of recovery is slower after strong than after weak bleaches; 10 and 90% bleaches yield time constants for the later phase of 11.5 and 38 min, respectively. This contrasts with an approximate time constant of 11 min for rhodopsin regeneration following any bleach. 5. The relationship between the initial elevation of log rod threshold extrapolated from the fitted exponential curves and the initial amount of pigment bleached is monotonic, but nonlinear. 6. After a bleaching exposure, the maintained discharge is initially very regular. The firing rate first rises, then falls to the pre-bleach level, with more extended time courses of change in firing rate after stronger exposures. The discharge rate is restored before threshold has recovered fully. 7. The change in the response vs. log stimulus relationship after bleaching is described as a shift of the curve to the right, paired with a decrease in slope of the linear segment of the curve. PMID:521963
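    The later-phase behavior in point 2 amounts to a simple exponential decay of log threshold elevation. A generic sketch (symbols are not the authors' notation):

```python
import math

def log_threshold_elevation(t_min, initial_elevation, tau_min):
    """Late-phase dark adaptation: log threshold elevation above the
    fully dark-adapted level decays exponentially in time; tau was
    11.5-38 min in the study, larger after stronger bleaches."""
    return initial_elevation * math.exp(-t_min / tau_min)
```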

  19. Effects of Airgun Sounds on Bowhead Whale Calling Rates: Evidence for Two Behavioral Thresholds

    PubMed Central

    Blackwell, Susanna B.; Nations, Christopher S.; McDonald, Trent L.; Thode, Aaron M.; Mathias, Delphine; Kim, Katherine H.; Greene, Charles R.; Macrander, A. Michael

    2015-01-01

    In proximity to seismic operations, bowhead whales (Balaena mysticetus) decrease their calling rates. Here, we investigate the transition from normal calling behavior to decreased calling and identify two threshold levels of received sound from airgun pulses at which calling behavior changes. Data were collected in August–October 2007–2010, during the westward autumn migration in the Alaskan Beaufort Sea. Up to 40 directional acoustic recorders (DASARs) were deployed at five sites offshore of the Alaskan North Slope. Using triangulation, whale calls localized within 2 km of each DASAR were identified and tallied every 10 minutes each season, so that the detected call rate could be interpreted as the actual call production rate. Moreover, airgun pulses were identified on each DASAR, analyzed, and a cumulative sound exposure level was computed for each 10-min period each season (CSEL10-min). A Poisson regression model was used to examine the relationship between the received CSEL10-min from airguns and the number of detected bowhead calls. Calling rates increased as soon as airgun pulses were detectable, compared to calling rates in the absence of airgun pulses. After the initial increase, calling rates leveled off at a received CSEL10-min of ~94 dB re 1 μPa2-s (the lower threshold). In contrast, once CSEL10-min exceeded ~127 dB re 1 μPa2-s (the upper threshold), whale calling rates began decreasing, and when CSEL10-min values were above ~160 dB re 1 μPa2-s, the whales were virtually silent. PMID:26039218
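    The central statistical step, a Poisson regression of call counts on received sound exposure, can be sketched with a small Newton (iteratively reweighted least squares) fit. This single-covariate version is a minimal illustration, not the authors' full model specification; CSEL values should be centered before fitting for numerical stability.

```python
import math

def poisson_fit(x, y, iters=25):
    """Fit log E[y] = b0 + b1 * x by Newton/IRLS for a Poisson GLM
    with the canonical log link (single covariate for illustration)."""
    b0, b1 = math.log(max(sum(y) / len(y), 1e-9)), 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Score X^T (y - mu) and Fisher information X^T diag(mu) X
        s0 = sum(yi - mi for yi, mi in zip(y, mu))
        s1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        i00 = sum(mu)
        i01 = sum(mi * xi for mi, xi in zip(mu, x))
        i11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = i00 * i11 - i01 * i01
        b0 += (i11 * s0 - i01 * s1) / det
        b1 += (i00 * s1 - i01 * s0) / det
    return b0, b1
```

    Fitting separate slopes above and below candidate CSEL break points is one simple way to locate the kind of behavioral thresholds reported here.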

  20. Effects of airgun sounds on bowhead whale calling rates: evidence for two behavioral thresholds.

    PubMed

    Blackwell, Susanna B; Nations, Christopher S; McDonald, Trent L; Thode, Aaron M; Mathias, Delphine; Kim, Katherine H; Greene, Charles R; Macrander, A Michael

    2015-01-01

    In proximity to seismic operations, bowhead whales (Balaena mysticetus) decrease their calling rates. Here, we investigate the transition from normal calling behavior to decreased calling and identify two threshold levels of received sound from airgun pulses at which calling behavior changes. Data were collected in August-October 2007-2010, during the westward autumn migration in the Alaskan Beaufort Sea. Up to 40 directional acoustic recorders (DASARs) were deployed at five sites offshore of the Alaskan North Slope. Using triangulation, whale calls localized within 2 km of each DASAR were identified and tallied every 10 minutes each season, so that the detected call rate could be interpreted as the actual call production rate. Moreover, airgun pulses were identified on each DASAR, analyzed, and a cumulative sound exposure level was computed for each 10-min period each season (CSEL10-min). A Poisson regression model was used to examine the relationship between the received CSEL10-min from airguns and the number of detected bowhead calls. Calling rates increased as soon as airgun pulses were detectable, compared to calling rates in the absence of airgun pulses. After the initial increase, calling rates leveled off at a received CSEL10-min of ~94 dB re 1 μPa2-s (the lower threshold). In contrast, once CSEL10-min exceeded ~127 dB re 1 μPa2-s (the upper threshold), whale calling rates began decreasing, and when CSEL10-min values were above ~160 dB re 1 μPa2-s, the whales were virtually silent.

  1. Sensitivity to Envelope Interaural Time Differences at High Modulation Rates

    PubMed Central

    Bleeck, Stefan; McAlpine, David

    2015-01-01

    Sensitivity to interaural time differences (ITDs) conveyed in the temporal fine structure of low-frequency tones and the modulated envelopes of high-frequency sounds are considered comparable, particularly for envelopes shaped to transmit similar fidelity of temporal information normally present for low-frequency sounds. Nevertheless, discrimination performance for envelope modulation rates above a few hundred Hertz is reported to be poor—to the point of discrimination thresholds being unattainable—compared with the much higher (>1,000 Hz) limit for low-frequency ITD sensitivity, suggesting the presence of a low-pass filter in the envelope domain. Further, performance for identical modulation rates appears to decline with increasing carrier frequency, supporting the view that the low-pass characteristics observed for envelope ITD processing is carrier-frequency dependent. Here, we assessed listeners’ sensitivity to ITDs conveyed in pure tones and in the modulated envelopes of high-frequency tones. ITD discrimination for the modulated high-frequency tones was measured as a function of both modulation rate and carrier frequency. Some well-trained listeners appear able to discriminate ITDs extremely well, even at modulation rates well beyond 500 Hz, for 4-kHz carriers. For one listener, thresholds were even obtained for a modulation rate of 800 Hz. The highest modulation rate for which thresholds could be obtained declined with increasing carrier frequency for all listeners. At 10 kHz, the highest modulation rate at which thresholds could be obtained was 600 Hz. The upper limit of sensitivity to ITDs conveyed in the envelope of high-frequency modulated sounds appears to be higher than previously considered. PMID:26721926

  2. Anaerobic Threshold: Its Concept and Role in Endurance Sport

    PubMed Central

    Ghosh, Asok Kumar

    2004-01-01

    The aerobic-to-anaerobic transition intensity is one of the most significant physiological variables in endurance sports. Scientists have described it in various ways: Lactate Threshold, Ventilatory Anaerobic Threshold, Onset of Blood Lactate Accumulation, Onset of Plasma Lactate Accumulation, Heart Rate Deflection Point and Maximum Lactate Steady State. All of these play an important role both in monitoring training schedules and in determining sports performance. Individuals endowed with the potential for a high oxygen uptake must complement it with a rigorous training program in order to achieve maximal performance. If they engage in endurance events, they must also develop the ability to sustain a high fractional utilization of their maximal oxygen uptake (%VO2 max) and become physiologically efficient in performing their activity. The anaerobic threshold correlates more highly with distance running performance than maximum aerobic capacity or VO2 max does, because sustaining a high fractional utilization of the VO2 max for a long time delays metabolic acidosis. Training at or slightly above the anaerobic threshold intensity improves both aerobic capacity and the anaerobic threshold level. The anaerobic threshold can also be determined from the speed-heart rate relationship in the field, without sophisticated laboratory techniques. However, controversies remain among scientists regarding its role in high-performance sports. PMID:22977357

  3. Anaerobic threshold: its concept and role in endurance sport.

    PubMed

    Ghosh, Asok Kumar

    2004-01-01

    The aerobic-to-anaerobic transition intensity is one of the most significant physiological variables in endurance sports. Scientists have described it in various ways: Lactate Threshold, Ventilatory Anaerobic Threshold, Onset of Blood Lactate Accumulation, Onset of Plasma Lactate Accumulation, Heart Rate Deflection Point and Maximum Lactate Steady State. All of these play an important role both in monitoring training schedules and in determining sports performance. Individuals endowed with the potential for a high oxygen uptake must complement it with a rigorous training program in order to achieve maximal performance. If they engage in endurance events, they must also develop the ability to sustain a high fractional utilization of their maximal oxygen uptake (%VO(2) max) and become physiologically efficient in performing their activity. The anaerobic threshold correlates more highly with distance running performance than maximum aerobic capacity or VO(2) max does, because sustaining a high fractional utilization of the VO(2) max for a long time delays metabolic acidosis. Training at or slightly above the anaerobic threshold intensity improves both aerobic capacity and the anaerobic threshold level. The anaerobic threshold can also be determined from the speed-heart rate relationship in the field, without sophisticated laboratory techniques. However, controversies remain among scientists regarding its role in high-performance sports.
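    The field-based determination from the speed-heart rate relationship mentioned above is commonly done by locating the heart rate deflection point. One simple approach, sketched here with illustrative names, fits two line segments and picks the split with the least total squared error:

```python
def two_segment_breakpoint(speeds, heart_rates):
    """Locate a heart rate deflection point by choosing the split that
    minimizes the total squared error of two separate linear fits;
    returns the speed at which the second segment begins."""
    def sse(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return sum((y - (my + b * (x - mx))) ** 2 for x, y in zip(xs, ys))

    # Require at least 3 points per segment for a meaningful fit.
    best_k = min(range(3, len(speeds) - 2),
                 key=lambda k: sse(speeds[:k], heart_rates[:k])
                             + sse(speeds[k:], heart_rates[k:]))
    return speeds[best_k]
```

    With stage speeds and steady-state heart rates from an incremental field test, the returned speed approximates the deflection-point intensity.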

  4. The colocalization transition of homologous chromosomes at meiosis

    NASA Astrophysics Data System (ADS)

    Nicodemi, Mario; Panning, Barbara; Prisco, Antonella

    2008-06-01

    Meiosis is the specialized cell division required in sexual reproduction. During its early stages, in the mother cell nucleus, homologous chromosomes recognize each other and colocalize in a crucial step that remains one of the most mysterious of meiosis. Starting from recent discoveries on the system molecular components and interactions, we discuss a statistical mechanics model of chromosome early pairing. Binding molecules mediate long-distance interaction of special DNA recognition sequences and, if their concentration exceeds a critical threshold, they induce a spontaneous colocalization transition of chromosomes, otherwise independently diffusing.

  5. The development of rating of perceived exertion-based tests of physical working capacity.

    PubMed

    Mielke, Michelle; Housh, Terry J; Malek, Moh H; Beck, Travis W; Schmidt, Richard J; Johnson, Glen O

    2008-01-01

    The purpose of the present study was to use ratings of perceived exertion (RPE) from the Borg (6-20) and OMNI-Leg (0-10) scales to determine the Physical Working Capacity at the Borg and OMNI thresholds (PWC(BORG) and PWC(OMNI)). PWC(BORG) and PWC(OMNI) were compared with other fatigue thresholds determined from the measurement of heart rate (the Physical Working Capacity at the Heart Rate Threshold: PWC(HRT)), and oxygen consumption (the Physical Working Capacity at the Oxygen Consumption Threshold, PWC(VO2)), as well as the ventilatory threshold (VT). Fifteen men and women volunteers (mean age +/- SD = 22 +/- 1 years) performed an incremental test to exhaustion on an electronically braked ergometer for the determination of VO2 peak and VT. The subjects also performed 4 randomly ordered workbouts to exhaustion at different power outputs (ranging from 60 to 206W) for the determination of PWC(BORG), PWC(OMNI), PWC(HRT), and PWC(VO2). The results indicated that there were no significant mean differences among the fatigue thresholds: PWC(BORG) (mean +/- SD = 133 +/- 37W; 67 +/- 8% of VO2 peak), PWC(OMNI) (137 +/- 44W; 68 +/- 9% of VO2 peak), PWC(HRT) (135 +/- 36W; 68 +/- 8% of VO2 peak), PWC(VO2) (145 +/- 41W; 72 +/- 7% of VO2 peak) and VT (131 +/- 45W; 66 +/- 8% of VO2 peak). The results of this study indicated that the mathematical model used to estimate PWC(HRT) and PWC(VO2) can be applied to ratings of perceived exertion to determine PWC(BORG) and PWC(OMNI) during cycle ergometry. Salient features of the PWC(BORG) and PWC(OMNI) tests are that they are simple to administer and require the use of only an RPE scale, a stopwatch, and a cycle ergometer. Furthermore, the power outputs at the PWC(BORG) and PWC(OMNI) may be useful to estimate the VT noninvasively and without the need for expired gas analysis.
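    The slope-based mathematical model referred to in the abstract can be sketched as follows. This is an illustrative reconstruction in the style of deVries-type PWC tests (regress each bout's response-vs-time slope on power output and take the zero-slope intercept), not necessarily the authors' exact procedure:

```python
def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def pwc_threshold(power_outputs, response_series):
    """Estimate a PWC-style fatigue threshold: compute the response-vs-
    time slope for each constant-power work bout (response = RPE, heart
    rate, or VO2), regress those slopes on power output, and return the
    power at which the predicted slope is zero, i.e. the highest power
    sustainable without a rising response."""
    slopes = [slope(t, r) for t, r in response_series]
    b1 = slope(power_outputs, slopes)
    b0 = sum(slopes) / len(slopes) - b1 * sum(power_outputs) / len(power_outputs)
    return -b0 / b1  # x-intercept of the slope-vs-power line
```

    Feeding in RPE series from the Borg or OMNI scale versus heart rate or VO2 series yields the PWC(BORG)/PWC(OMNI) versus PWC(HRT)/PWC(VO2) estimates compared in the study.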

  6. Influence of taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery.

    PubMed

    Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am

    2014-04-01

    [Purpose] This study was conducted to determine the influence of Taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects who participated in the experiment were subjected to an exercise loading test in which anaerobic threshold, value of ventilation, oxygen uptake, maximal oxygen uptake, heart rate, and maximal values of ventilation / heart rate were measured during the exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] At the anaerobic threshold time point, the exercise group showed a significantly longer time to reach anaerobic threshold. The exercise group showed significantly higher values for the time to reach VO2max, maximal values of ventilation, maximal oxygen uptake and maximal values of ventilation / heart rate. Significant changes were observed in the value of ventilation volumes at the 1- and 5-min recovery time points within the exercise group; oxygen uptake and maximal oxygen uptake were significantly different at the 5- and 10-min time points; heart rate was significantly different at the 1- and 3-min time points; and maximal values of ventilation / heart rate was significantly different at the 5-min time point. The exercise group showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The study results revealed that Taekwondo as a security martial arts training increases the maximal oxygen uptake and anaerobic threshold and accelerates an individual's recovery to the normal state of cardiorespiratory fitness and blood lactate level. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.

  7. Repetition rate dependency of low-density plasma effects during femtosecond-laser-based surgery of biological tissue

    NASA Astrophysics Data System (ADS)

    Kuetemeyer, K.; Baumgart, J.; Lubatschowski, H.; Heisterkamp, A.

    2009-11-01

    Femtosecond laser based nanosurgery of biological tissue is usually done in two different regimes. Depending on the application, low kHz repetition rates above the optical breakdown threshold or high MHz repetition rates in the low-density plasma regime are used. In contrast to the well understood optical breakdown, mechanisms leading to dissection below this threshold are not well known due to the complexity of chemical effects with high numbers of interacting molecules. Furthermore, the laser repetition rate may influence their efficiency. In this paper, we present our study on low-density plasma effects in biological tissue depending on repetition rate by static exposure of porcine corneal stroma to femtosecond pulses. We observed a continuous increase of the laser-induced damage with decreasing repetition rate over two orders of magnitude at constant numbers of applied laser pulses or constant laser pulse energies. Therefore, low repetition rates in the kHz regime are advantageous to minimize the total delivered energy to biological tissue during femtosecond laser irradiation. However, due to frequent excessive damage in this regime directly above the threshold, MHz repetition rates are preferable to create nanometer-sized cuts in the low-density plasma regime.

  8. A study of FM threshold extension techniques

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Loch, F. J.

    1972-01-01

    The characteristics of three postdetection threshold extension techniques are evaluated with respect to the ability of such techniques to improve the performance of a phase lock loop demodulator. These techniques include impulse-noise elimination, signal correlation for the detection of impulse noise, and delta modulation signal processing. Experimental results from signal to noise ratio data and bit error rate data indicate that a 2- to 3-decibel threshold extension is readily achievable by using the various techniques. This threshold improvement is in addition to the threshold extension that is usually achieved through the use of a phase lock loop demodulator.

  9. Ventilatory thresholds determined from HRV: comparison of 2 methods in obese adolescents.

    PubMed

    Quinart, S; Mourot, L; Nègre, V; Simon-Rigaud, M-L; Nicolet-Guénat, M; Bertrand, A-M; Meneveau, N; Mougin, F

    2014-03-01

    The development of personalised training programmes is crucial in the management of obesity. We evaluated the ability of 2 heart rate variability analyses to determine ventilatory thresholds (VT) in obese adolescents. 20 adolescents (mean age 14.3±1.6 years and body mass index z-score 4.2±0.1) performed an incremental test to exhaustion before and after a 9-month multidisciplinary management programme. The first (VT1) and second (VT2) ventilatory thresholds were identified by the reference method (gas exchanges). We recorded RR intervals to estimate VT1 and VT2 from heart rate variability using time-domain analysis and time-varying spectral-domain analysis. The correlation coefficients between thresholds were higher with spectral-domain than with time-domain analysis (heart rate at VT1: r=0.91 vs. 0.66, and at VT2: r=0.91 vs. 0.66; power at VT1: r=0.91 vs. 0.74, and at VT2: r=0.93 vs. 0.78; spectral-domain vs. time-domain analysis, respectively). No systematic bias in heart rate at VT1 and VT2 was found, with standard deviations <6 bpm, confirming that spectral-domain analysis could replace the reference method for the detection of ventilatory thresholds. Furthermore, this technique is sensitive to rehabilitation and re-training, which underlines its utility in clinical practice. This inexpensive and non-invasive tool is promising for prescribing physical activity programs in obese adolescents. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Petroleum-resource appraisal and discovery rate forecasting in partially explored regions

    USGS Publications Warehouse

    Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.

    1980-01-01

    PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period. PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation. 
    PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2½ years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of specific size deposits. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans. Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and discovery process model to predict the additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values.
The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.
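    The two exogenous parameters of Part B, exploration efficiency and effective basin size, lend themselves to a compact Monte Carlo sketch of the discovery process. The sampling scheme below only illustrates the area-of-influence idea; it is not the USGS model's actual estimation machinery, and it assumes efficiency × total target area does not exceed the effective basin area:

```python
import random

def simulate_discoveries(effective_basin_area, deposit_areas, n_wells,
                         efficiency, rng):
    """Monte Carlo sketch: a well discovers remaining deposit k with
    probability efficiency * area_k / effective_basin_area (at most one
    discovery per well), so larger targets tend to be found earlier --
    which is how such models predict the size distribution of future
    discoveries as a basin matures."""
    undiscovered = list(deposit_areas)
    found = []
    for _ in range(n_wells):
        # A dart thrown into the efficiency-shrunken basin: a hit occurs
        # if it lands inside some remaining target's area.
        r = rng.random() * effective_basin_area / efficiency
        for k, area in enumerate(undiscovered):
            if r < area:
                found.append(undiscovered.pop(k))
                break
            r -= area
    return found
```

    Running such simulations against the historical well and discovery record is one way to see how the efficiency and effective-basin-size parameters shape the declining discovery rate.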

  11. Sensitivity to coincidences and paranormal belief.

    PubMed

    Hadlaczky, Gergö; Westerlund, Joakim

    2011-12-01

    Often it is difficult to find a natural explanation as to why a surprising coincidence occurs. In attempting to find one, people may be inclined to accept paranormal explanations. The objective of this study was to investigate whether people with a lower threshold for being surprised by coincidences have a greater propensity to become believers compared to those with a higher threshold. Participants were exposed to artificial coincidences, which were formally defined as less or more probable, and were asked to provide remarkability ratings. Paranormal belief was measured by the Australian Sheep-Goat Scale. An analysis of the remarkability ratings revealed a significant interaction effect between Sheep-Goat score and type of coincidence, suggesting that people with lower thresholds of surprise, when experiencing coincidences, harbor higher paranormal belief than those with a higher threshold. The theoretical aspects of these findings were discussed.

  12. Very Low Threshold ASE and Lasing Using Auger-Suppressed Nanocrystal Quantum Dots

    NASA Astrophysics Data System (ADS)

    Park, Young-Shin; Bae, Wan Ki; Fidler, Andrew; Baker, Tomas; Lim, Jaehoon; Pietryga, Jeffrey; Klimov, Victor

    2015-03-01

    We report amplified spontaneous emission (ASE) and lasing with very low thresholds obtained using thin films made of engineered thick-shell CdSe/CdS QDs that have a CdSeS alloyed layer between the CdSe core and the CdS shell. These "alloyed" QDs exhibit a considerable reduction of Auger decay rates, which results in high biexciton emission quantum yields (QBX of ~12%) and extended biexciton lifetimes (τBX of ~4 ns). Using a fs laser (400 nm at 1 kHz repetition rate) as a pump source, we measured a biexciton ASE threshold intensity as low as 5 μJ/cm2, about 5 times lower than the lowest ASE thresholds reported for thick-shell QDs without interfacial alloying. Interestingly, we also observed biexciton random lasing from the same QD film. The lasing spectrum comprises several sharp peaks (linewidth ~0.2 nm), and the heights and spectral positions of these peaks depend strongly on the exact position of the excitation spot on the QD film. Our study suggests that further suppression of nonradiative Auger decay via even finer grading of the core/shell interface could lead to a further reduction in the lasing threshold and potentially the realization of lasing under continuous-wave excitation.

  13. Auditory sensitivity to spectral modulation phase reversal as a function of modulation depth

    PubMed Central

    Grose, John

    2018-01-01

    The present study evaluated auditory sensitivity to spectral modulation by determining the modulation depth required to detect modulation phase reversal. This approach may be preferable to spectral modulation detection with a spectrally flat standard, since listeners appear unable to perform the task based on the detection of temporal modulation. While phase reversal thresholds are often evaluated by holding modulation depth constant and adjusting modulation rate, holding rate constant and adjusting modulation depth supports rate-specific assessment of modulation processing. Stimuli were pink noise samples, filtered into seven octave-wide bands (0.125–8 kHz) and spectrally modulated in dB. Experiment 1 measured performance as a function of modulation depth to determine appropriate units for adaptive threshold estimation. Experiment 2 compared thresholds in dB for modulation detection with a flat standard and modulation phase reversal; results supported the idea that temporal cues were available at high rates for the former but not the latter. Experiment 3 evaluated spectral modulation phase reversal thresholds for modulation that was restricted to either one or two neighboring bands. Flanking bands of unmodulated noise had a larger detrimental effect on one-band than two-band targets. Thresholds for high-rate modulation improved with increasing carrier frequency up to 2 kHz, whereas low-rate modulation appeared more consistent across frequency, particularly in the two-band condition. Experiment 4 measured spectral weights for spectral modulation phase reversal detection and found higher weights for bands in the spectral center of the stimulus than for the lowest (0.125 kHz) or highest (8 kHz) band. Experiment 5 compared performance for highly practiced and relatively naïve listeners, and found weak evidence of a larger practice effect at high than low spectral modulation rates. 
These results provide preliminary data for a task that may yield a better estimate of sensitivity to spectral modulation than spectral modulation detection with a flat standard. PMID:29621338

  14. Potentiated antibodies to mu-opiate receptors: effect on integrative activity of the brain.

    PubMed

    Geiko, V V; Vorob'eva, T M; Berchenko, O G; Epstein, O I

    2003-01-01

    The effect of homeopathically potentiated antibodies to mu-receptors (10(-100) wt %) on integrative activity of rat brain was studied using the models of self-stimulation of the lateral hypothalamus and convulsions produced by electric current. Electric current was delivered through electrodes implanted into the ventromedial hypothalamus. Single treatment with potentiated antibodies to mu-receptors increased the rate of self-stimulation and decreased the threshold of convulsive seizures. Administration of these antibodies for 7 days led to further activation of the positive reinforcement system and decrease in seizure thresholds. Distilled water did not change the rate of self-stimulation and seizure threshold.

  15. Relation Between Cochlear Mechanics and Performance of Temporal Fine Structure-Based Tasks.

    PubMed

    Otsuka, Sho; Furukawa, Shigeto; Yamagishi, Shimpei; Hirota, Koich; Kashino, Makio

    2016-12-01

    This study examined whether the mechanical characteristics of the cochlea could influence individual variation in the ability to use temporal fine structure (TFS) information. Cochlear mechanical functioning was evaluated by swept-tone evoked otoacoustic emissions (OAEs), which are thought to comprise linear reflection by micromechanical impedance perturbations, such as spatial variations in the number or geometry of outer hair cells, on the basilar membrane (BM). Low-rate (2 Hz) frequency modulation detection limens (FMDLs) at a carrier frequency of 1000 Hz and interaural phase difference (IPD) thresholds were measured as indices of TFS sensitivity, and high-rate (16 Hz) FMDLs and amplitude modulation detection limens (AMDLs) as indices of sensitivity to non-TFS cues. Significant correlations were found among low-rate FMDLs, low-rate AMDLs, and IPD thresholds (R = 0.47-0.59). A principal component analysis was used to show a common factor that could account for 81.1, 74.1, and 62.9% of the variance in low-rate FMDLs, low-rate AMDLs, and IPD thresholds, respectively. An OAE feature, specifically a characteristic dip around 2-2.5 kHz in OAE spectra, showed a significant correlation with the common factor (R = 0.54). High-rate FMDLs and AMDLs were correlated with each other (R = 0.56) but not with the other measures. The results can be interpreted as indicating that (1) the low-rate AMDLs, as well as the IPD thresholds and low-rate FMDLs, depend on the use of TFS information coded in neural phase locking and (2) the use of TFS information is influenced by a particular aspect of cochlear mechanics, such as mechanical irregularity along the BM.

  16. Recruitment and rate coding organisation for soleus motor units across entire range of voluntary isometric plantar flexions.

    PubMed

    Oya, Tomomichi; Riek, Stephan; Cresswell, Andrew G

    2009-10-01

    Unlike for upper limb muscles, it remains undocumented how motor units in the soleus muscle are organised in terms of recruitment range and discharge rates with respect to their recruitment and de-recruitment thresholds. The possible influence of neuromodulation, such as persistent inward currents (PICs), on lower limb motor unit recruitment and discharge rates has also yet to be reported. To address these issues, electromyographic (EMG) activities from the soleus muscle were recorded using selective branched-wire intramuscular electrodes during ramp-and-hold contractions with intensities up to maximal voluntary contraction (MVC). The multiple single motor unit activities were then derived using a decomposition technique. The onset-offset hysteresis of motor unit discharge, i.e. a difference between recruitment and de-recruitment thresholds, as well as PIC magnitude calculated by a paired motor unit analysis, were used to examine the neuromodulatory effects on discharge behaviours, such as minimum firing rate, peak firing rate and degree of increase in firing rate. Forty-two clearly identified motor units from five subjects revealed that soleus motor units are recruited progressively from rest to contraction strengths close to 95% of MVC, with low-threshold motor units discharging action potentials more slowly at recruitment, and with lower peak rates, than later-recruited high-threshold units. This observation is in contrast to the 'onion skin phenomenon' often reported for the upper limb muscles. Based on positive correlations of the peak discharge rates, initial rates and recruitment order of the units with the magnitude of the onset-offset hysteresis, and not with PIC contribution, we conclude that discharge behaviours among motor units appear to be related to a variation in an intrinsic property other than PICs.

  17. Amyloid β deposition, neurodegeneration, and cognitive decline in sporadic Alzheimer's disease: a prospective cohort study.

    PubMed

    Villemagne, Victor L; Burnham, Samantha; Bourgeat, Pierrick; Brown, Belinda; Ellis, Kathryn A; Salvado, Olivier; Szoeke, Cassandra; Macaulay, S Lance; Martins, Ralph; Maruff, Paul; Ames, David; Rowe, Christopher C; Masters, Colin L

    2013-04-01

    Similar to most chronic diseases, Alzheimer's disease (AD) develops slowly from a preclinical phase into a fully expressed clinical syndrome. We aimed to use longitudinal data to calculate the rates of amyloid β (Aβ) deposition, cerebral atrophy, and cognitive decline. In this prospective cohort study, healthy controls, patients with mild cognitive impairment (MCI), and patients with AD were assessed at enrolment and every 18 months. At every visit, participants underwent neuropsychological examination, MRI, and a carbon-11-labelled Pittsburgh compound B ((11)C-PiB) PET scan. We included participants with three or more (11)C-PiB PET follow-up assessments. Aβ burden was expressed as (11)C-PiB standardised uptake value ratio (SUVR) with the cerebellar cortex as reference region. An SUVR of 1·5 was used to discriminate high from low Aβ burdens. The slope of the regression plots over 3-5 years was used to estimate rates of change for Aβ deposition, MRI volumetrics, and cognition. We included those participants with a positive rate of Aβ deposition to calculate the trajectory of each variable over time. 200 participants (145 healthy controls, 36 participants with MCI, and 19 participants with AD) were assessed at enrolment and every 18 months for a mean follow-up of 3·8 (95% CI 3·6-3·9) years. At baseline, significantly higher Aβ burdens were noted in patients with AD (2·27, SD 0·43) and those with MCI (1·94, 0·64) than in healthy controls (1·38, 0·39). At follow-up, 163 (82%) of the 200 participants showed positive rates of Aβ accumulation. Aβ deposition was estimated to take 19·2 (95% CI 16·8-22·5) years in an almost linear fashion, with a mean increase of 0·043 (95% CI 0·037-0·049) SUVR per year, to go from the threshold of (11)C-PiB positivity (1·5 SUVR) to the levels observed in AD. 
It was estimated to take 12·0 (95% CI 10·1-14·9) years from the levels observed in healthy controls with low Aβ deposition (1·2 [SD 0·1] SUVR) to the threshold of (11)C-PiB positivity. As AD progressed, the rate of Aβ deposition slowed towards a plateau. Our projections suggest a prolonged preclinical phase of AD in which Aβ deposition reaches our threshold of positivity at 17·0 (95% CI 14·9-19·9) years, hippocampal atrophy at 4·2 (3·6-5·1) years, and memory impairment at 3·3 (2·5-4·5) years before the onset of dementia (clinical dementia rating score 1). Aβ deposition is slow and protracted, likely to extend for more than two decades. Such predictions of the rate of preclinical changes and the onset of the clinical phase of AD will facilitate the design and timing of therapeutic interventions aimed at modifying the course of this illness. Science and Industry Endowment Fund (Australia), The Commonwealth Scientific and Industrial Research Organisation (Australia), The National Health and Medical Research Council of Australia Program and Project Grants, the Austin Hospital Medical Research Foundation, Victorian State Government, The Alzheimer's Drug Discovery Foundation, and the Alzheimer's Association. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Southeast PAVE PAWS Radar System. Environmental Assessment.

    DTIC Science & Technology

    1983-03-01

    reported, including fatigue, irritability, sleepiness, partial loss of memory, lower heartbeat rates, hypertension, hypotension, cardiac pain, and ... Because such audiograms do not test hearing above 8 kHz, binaural hearing thresholds were also determined for seven of the subjects for frequencies ... perception and hearing ability above 8 kHz as determined from the binaural thresholds. The average threshold pulse power density for 15-microsecond

  19. An Abrupt Transition to an Intergranular Failure Mode in the Near-Threshold Fatigue Crack Growth Regime in Ni-Based Superalloys

    NASA Astrophysics Data System (ADS)

    Telesman, J.; Smith, T. M.; Gabb, T. P.; Ring, A. J.

    2018-06-01

    Cyclic near-threshold fatigue crack growth (FCG) behavior of two disk superalloys was evaluated and was shown to exhibit an unexpected sudden failure mode transition from a mostly transgranular failure mode at higher stress intensity factor ranges to an almost completely intergranular failure mode in the threshold regime. The change in failure modes was associated with a crossover of FCG resistance curves in which the conditions that produced higher FCG rates in the Paris regime resulted in lower FCG rates and increased ΔKth values in the threshold region. High-resolution scanning and transmission electron microscopy were used to carefully characterize the crack tips at these near-threshold conditions. Formation of stable Al-oxide followed by Cr-oxide and Ti-oxides was found to occur at the crack tip prior to formation of unstable oxides. To contrast with the threshold failure mode regime, a quantitative assessment of the role that the intergranular failure mode has on cyclic FCG behavior in the Paris regime was also performed. It was demonstrated that even a very limited intergranular failure content dominates the FCG response under mixed mode failure conditions.

  20. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    NASA Astrophysics Data System (ADS)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
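
    The GPD fitting step described above can be sketched in pure Python. This is a minimal method-of-moments illustration on synthetic excesses, not the Bayesian estimation the study actually used; the data and parameter values are hypothetical.

```python
import random

def fit_gpd_mom(exceedances):
    """Method-of-moments fit of the Generalized Pareto Distribution
    to peaks-over-threshold excesses.

    For GPD(shape xi, scale sigma) with xi < 1/2:
        mean     m  = sigma / (1 - xi)
        variance s2 = sigma**2 / ((1 - xi)**2 * (1 - 2*xi))
    Inverting gives xi = (1 - m**2/s2) / 2 and sigma = m * (1 - xi).
    """
    n = len(exceedances)
    m = sum(exceedances) / n
    s2 = sum((x - m) ** 2 for x in exceedances) / (n - 1)
    xi = 0.5 * (1.0 - m * m / s2)
    sigma = m * (1.0 - xi)
    return xi, sigma

if __name__ == "__main__":
    # Exponential excesses are the xi = 0 special case of the GPD,
    # so the fitted shape should come out near zero.
    random.seed(0)
    excesses = [random.expovariate(1.0 / 2.0) for _ in range(20000)]
    xi, sigma = fit_gpd_mom(excesses)
    print(round(xi, 3), round(sigma, 3))
```

    In a disaggregation setting like the one above, such a fit would be repeated per duration and per threshold quantile, with the scaling relationship linking the resulting parameters across durations.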

  1. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
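
    The copy-and-permute ("lifting") construction described above can be sketched as follows. The base protograph here is a toy example chosen for illustration, not the article's code family.

```python
import random

def lift_protograph(base, N, rng):
    """Copy-and-permute lifting of a protograph parity-check matrix.

    base : small 0/1 matrix (list of rows) describing the protograph.
    Each 1-entry is replaced by an N x N permutation matrix and each
    0-entry by an N x N zero block, giving a (rows*N) x (cols*N)
    parity-check matrix whose node degrees match the protograph's.
    """
    rows, cols = len(base), len(base[0])
    H = [[0] * (cols * N) for _ in range(rows * N)]
    for r in range(rows):
        for c in range(cols):
            if base[r][c]:
                perm = list(range(N))
                rng.shuffle(perm)
                for i in range(N):
                    H[r * N + i][c * N + perm[i]] = 1
    return H

if __name__ == "__main__":
    rng = random.Random(1)
    base = [[1, 1, 1, 0],   # toy protograph: check-node degrees 3,
            [0, 1, 1, 1]]   # variable-node degrees 1 or 2
    H = lift_protograph(base, 8, rng)
    # Row and column degrees of H equal the protograph degrees.
    print(len(H), len(H[0]), sum(H[0]), sum(row[0] for row in H))
```

    Degree preservation is the point: the lifted code inherits the protograph's degree profile, which is what controls the trade-off between decoding threshold and minimum-distance growth discussed in the abstract.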

  2. A novel association rule mining approach using TID intermediate itemset.

    PubMed

    Aqra, Iyad; Herawan, Tutut; Abdul Ghani, Norjihan; Akhunzada, Adnan; Ali, Akhtar; Bin Razali, Ramdan; Ilahi, Manzoor; Raymond Choo, Kim-Kwang

    2018-01-01

    Designing an efficient association rule mining (ARM) algorithm for multilevel knowledge-based transactional databases that is appropriate for real-world deployments is of paramount concern. However, dynamic decision making that needs to modify the threshold, whether to minimize or maximize the output knowledge, forces extant state-of-the-art algorithms to rescan the entire database. This process incurs heavy computation cost and is not feasible for real-time applications. This paper efficiently addresses the problem of dynamically updating the threshold. It contributes a novel ARM approach that creates an intermediate itemset and applies a threshold to extract categorical frequent itemsets under diverse threshold values, improving overall efficiency because the whole database no longer needs to be rescanned. After the entire itemset is built, the real support can be obtained without rebuilding the itemset (e.g., itemset lists are intersected to obtain the actual support). Moreover, the algorithm supports extracting many frequent itemsets according to a pre-determined minimum support for an independent purpose. Additionally, the experimental results of our proposed approach demonstrate that it can be deployed in any mining system in a fully parallel mode, increasing the efficiency of the real-time association rule discovery process. The proposed approach outperforms the extant state of the art, reducing computation cost, increasing accuracy, and producing all possible itemsets.
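
    The TID-list idea can be sketched in a few lines of Python: build the intermediate itemset once, obtain actual supports by intersection, and re-filter under any new threshold without rescanning the database. The tiny transaction database is hypothetical; this illustrates the general technique, not the authors' implementation.

```python
def build_tid_lists(transactions):
    """Build a TID-list (intermediate itemset): item -> set of
    transaction IDs containing it. Requires a single database scan."""
    tids = {}
    for tid, items in transactions.items():
        for item in items:
            tids.setdefault(item, set()).add(tid)
    return tids

def support(tid_lists, itemset):
    """Actual support of an itemset via TID-list intersection --
    no rescan of the database."""
    sets = [tid_lists[i] for i in itemset]
    return len(set.intersection(*sets))

if __name__ == "__main__":
    db = {1: {"a", "b", "c"}, 2: {"a", "c"},
          3: {"b", "c"}, 4: {"a", "b", "c"}}
    tids = build_tid_lists(db)
    print(support(tids, ["a", "c"]))  # transactions 1, 2, 4 -> 3
    # Re-filtering under a new minimum support needs no DB rescan:
    for minsup in (3, 4):
        frequent = sorted(i for i, t in tids.items() if len(t) >= minsup)
        print(minsup, frequent)
```

    The key property is that changing the minimum support only re-filters the in-memory TID lists, which is what makes dynamic threshold updates cheap.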

  3. A novel association rule mining approach using TID intermediate itemset

    PubMed Central

    Ali, Akhtar; Bin Razali, Ramdan; Ilahi, Manzoor; Raymond Choo, Kim-Kwang

    2018-01-01

    Designing an efficient association rule mining (ARM) algorithm for multilevel knowledge-based transactional databases that is appropriate for real-world deployments is of paramount concern. However, dynamic decision making that needs to modify the threshold, whether to minimize or maximize the output knowledge, forces extant state-of-the-art algorithms to rescan the entire database. This process incurs heavy computation cost and is not feasible for real-time applications. This paper efficiently addresses the problem of dynamically updating the threshold. It contributes a novel ARM approach that creates an intermediate itemset and applies a threshold to extract categorical frequent itemsets under diverse threshold values, improving overall efficiency because the whole database no longer needs to be rescanned. After the entire itemset is built, the real support can be obtained without rebuilding the itemset (e.g., itemset lists are intersected to obtain the actual support). Moreover, the algorithm supports extracting many frequent itemsets according to a pre-determined minimum support for an independent purpose. Additionally, the experimental results of our proposed approach demonstrate that it can be deployed in any mining system in a fully parallel mode, increasing the efficiency of the real-time association rule discovery process. The proposed approach outperforms the extant state of the art, reducing computation cost, increasing accuracy, and producing all possible itemsets. PMID:29351287

  4. Pain thresholds, supra-threshold pain and lidocaine sensitivity in patients with erythromelalgia, including the I848T mutation in NaV 1.7.

    PubMed

    Helås, T; Sagafos, D; Kleggetveit, I P; Quiding, H; Jönsson, B; Segerdahl, M; Zhang, Z; Salter, H; Schmelz, M; Jørum, E

    2017-09-01

    Nociceptive thresholds and supra-threshold pain ratings, as well as their reduction upon local injection with lidocaine, were compared between healthy subjects and patients with erythromelalgia (EM). Lidocaine (0.25, 0.50, 1.0 or 10 mg/mL) or placebo (saline) was injected intradermally in non-painful areas of the lower arm, in a randomized, double-blind manner, to test the effect on dynamic and static mechanical sensitivity, mechanical pain sensitivity, thermal thresholds and supra-threshold heat pain sensitivity. Heat pain thresholds and pain ratings to supra-threshold heat stimulation did not differ between EM patients (n = 27) and controls (n = 25), and neither did the dose-response curves for lidocaine. Only the subgroup of EM patients with mutations in sodium channel subunits Na V 1.7, 1.8 or 1.9 (n = 8) had increased lidocaine sensitivity for supra-threshold heat stimuli, in contrast to lower sensitivity to strong mechanical stimuli. This pattern was particularly clear in the two patients carrying the Na V 1.7 I848T mutation, in whom lidocaine's hyperalgesic effect on mechanical pain sensitivity contrasted with more effective heat analgesia. Heat pain thresholds are not sensitized in EM patients, even in those with gain-of-function mutations in Na V 1.7. Differential lidocaine sensitivity was overt only for noxious stimuli in the supra-threshold range, suggesting that sensitized supra-threshold encoding is important for the clinical pain phenotype in EM in addition to a lower activation threshold. Intracutaneous lidocaine dose-dependently blocked nociceptive sensations, but we did not identify EM patients with particularly high lidocaine sensitivity that could have provided valuable therapeutic guidance. Acute pain thresholds and supra-threshold heat pain in controls and patients with erythromelalgia do not differ and have the same lidocaine sensitivity. 
Acute heat pain thresholds, even in EM patients with the Na V 1.7 I848T mutation, are normal, and only nociceptor sensitivity to intradermal lidocaine is changed. Only in EM patients with mutations in Na V 1.7, 1.8 or 1.9 do supra-threshold heat and mechanical pain show differential lidocaine sensitivity compared to controls. © 2017 European Pain Federation - EFIC®.

  5. Discrimination of nonlinear frequency glides.

    PubMed

    Thyer, Nick; Mahar, Doug

    2006-05-01

    Discrimination thresholds for short duration nonlinear tone glides that differed in glide rate were measured in order to determine whether cues related to rate of frequency change alone were sufficient for discrimination. Thresholds for rising and falling nonlinear glides of 50-ms and 400-ms duration, spanning three frequency excursions (0.5, 1, and 2 ERBs) at three center frequencies (0.5, 2.0, and 6.0 kHz) were measured. Results showed that glide discrimination was possible when duration and initial and final frequencies were identical. Thresholds were of a different order to those found in previous studies using linear frequency glides where endpoint frequency or duration information is available as added cues. The pattern of results was suggestive of a mechanism sensitive to spectral changes in time. Thresholds increased as the rate of transition span increased, particularly above spans of 1 ERB. The Weber fraction associated with these changes was 0.6-0.7. Overall, the results were consistent with an excitation pattern model of nonlinear glide detection that has difficulty in tracking signals with rapid frequency changes that exceed the width of an auditory filter and are of short duration.

  6. Assessing the Role of Place and Timing Cues in Coding Frequency and Amplitude Modulation as a Function of Age.

    PubMed

    Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J

    2017-08-01

    Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.

  7. Do poison center triage guidelines affect healthcare facility referrals?

    PubMed

    Benson, B E; Smith, C A; McKinney, P E; Litovitz, T L; Tandberg, W D

    2001-01-01

    The purpose of this study was to determine the extent to which poison center triage guidelines influence healthcare facility referral rates for acute, unintentional acetaminophen-only poisoning and acute, unintentional adult formulation iron poisoning. Managers of US poison centers were interviewed by telephone to determine their center's triage threshold value (mg/kg) for acute iron and acute acetaminophen poisoning in 1997. Triage threshold values and healthcare facility referral rates were fit to a univariate logistic regression model for acetaminophen and iron using maximum likelihood estimation. Triage threshold values ranged from 120-201 mg/kg (acetaminophen) and 16-61 mg/kg (iron). Referral rates ranged from 3.1% to 24% (acetaminophen) and 3.7% to 46.7% (iron). There was a statistically significant inverse relationship between the triage value and the referral rate for acetaminophen (p < 0.001) and iron (p = 0.0013). The model explained 31.7% of the referral variation for acetaminophen but only 4.1% of the variation for iron. There is great variability in poison center triage values and referral rates for iron and acetaminophen poisoning. Guidelines can account for a meaningful proportion of referral variation. Their influence appears to be substance dependent. These data suggest that efforts to determine and utilize the highest, safe, triage threshold value could substantially decrease healthcare costs for poisonings as long as patient medical outcomes are not compromised.
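
    The univariate logistic model relating triage threshold to referral rate can be sketched roughly as follows, assuming synthetic (hypothetical) data and a plain gradient-ascent fit in place of the study's maximum-likelihood estimation.

```python
import math

def fit_logistic(x, y, lr=0.5, steps=5000):
    """Fit p = 1 / (1 + exp(-(b0 + b1*x))) by gradient ascent on the
    Bernoulli log-likelihood (a simple stand-in for the maximum
    likelihood estimation used in the study)."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / len(x)
        b1 += lr * g1 / len(x)
    return b0, b1

if __name__ == "__main__":
    # Hypothetical triage thresholds (mg/kg, centered and scaled)
    # paired with hypothetical healthcare-facility referral rates.
    thresholds = [120, 140, 160, 180, 200]
    x = [(t - 160) / 40 for t in thresholds]
    y = [0.24, 0.18, 0.12, 0.08, 0.04]
    b0, b1 = fit_logistic(x, y)
    # A negative slope reproduces the reported inverse relationship:
    # higher triage threshold, lower referral rate.
    print(b1 < 0)
```

    The sign and magnitude of b1 are what the study's significance tests address; the numbers here are invented solely to show the model's shape.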

  8. Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.

    PubMed

    Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari

    2014-07-01

    [Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
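
    Breakpoint determination of the kind used for the double product can be sketched as a two-segment piecewise-linear fit: try every split point and keep the one minimizing the total squared error. This is a generic illustration with made-up workload data, not the authors' procedure.

```python
def line_sse(xs, ys):
    """Sum of squared errors of a single least-squares line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def breakpoint(xs, ys):
    """Workload at which splitting the data into two linear segments
    minimizes total SSE -- one simple way to locate a double-product
    (or salivary) breakpoint."""
    best, best_sse = None, float("inf")
    for k in range(2, len(xs) - 1):
        sse = line_sse(xs[:k], ys[:k]) + line_sse(xs[k:], ys[k:])
        if sse < best_sse:
            best, best_sse = k, sse
    return xs[best]

if __name__ == "__main__":
    watts = [20, 40, 60, 80, 100, 120, 140, 160]
    # Synthetic double product: shallow rise, then a steeper rise
    # beginning after 100 W.
    dp = [80, 85, 90, 95, 100, 130, 160, 190]
    print(breakpoint(watts, dp))
```

    On these constructed data the detected breakpoint falls at the workload where the slope changes, which is the behavior the double-product breakpoint method relies on.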

  9. Minimum follow-up time required for the estimation of statistical cure of cancer patients: verification using data from 42 cancer sites in the SEER database

    PubMed Central

    Tai, Patricia; Yu, Edward; Cserni, Gábor; Vlastos, Georges; Royce, Melanie; Kunkler, Ian; Vinh-Hung, Vincent

    2005-01-01

    Background The commonly used five-year survival rates are not adequate to represent statistical cure. In the present study, we established the minimum number of years of follow-up required to estimate the statistical cure rate, using a lognormal distribution of the survival time of those who died of their cancer. We introduced the term threshold year: the follow-up time that covers most of the survival data of patients dying from the specific cancer, leaving less than 2.25% uncovered; this is close enough to cure from that specific cancer. Methods Data from the Surveillance, Epidemiology and End Results (SEER) database were tested, using a minimum chi-square method, to determine whether the survival times of cancer patients who died of their disease followed a lognormal distribution. Patients diagnosed from 1973-1992 in the registries of Connecticut and Detroit were chosen so that a maximum of 27 years was allowed for follow-up to 1999. A total of 49 specific organ sites were tested. The parameters of the lognormal distributions were found for each cancer site. The cancer-specific survival rates at the threshold years were compared with the longest available Kaplan-Meier survival estimates. Results The cancer-specific survival times of patients who died of their disease at 42 of the 49 cancer sites were verified to follow lognormal distributions. The threshold years validated for statistical cure varied across cancer sites, from 2.6 years for pancreas cancer to 25.2 years for cancer of the salivary gland. At the threshold year, the statistical cure rates estimated for 40 cancer sites matched the actuarial long-term survival rates estimated by the Kaplan-Meier method within six percentage points. For two cancer sites, breast and thyroid, the threshold years were so long that the cancer-specific survival rates could not yet be obtained, because the SEER data do not provide sufficiently long follow-up. 
Conclusion The present study suggests that a certain threshold year must be reached before the statistical cure rate can be estimated for each cancer site. For some cancers, such as breast and thyroid, the 5- or 10-year survival rates inadequately reflect statistical cure rates, highlighting the need for long-term follow-up of these patients. PMID:15904508
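
    The threshold-year idea can be sketched numerically: fit a lognormal to the survival times of patients who died of the cancer, then take the quantile that leaves less than 2.25% of the distribution uncovered. A minimal sketch on synthetic data, using a maximum-likelihood fit via SciPy rather than the minimum chi-square method the study actually used:

```python
import numpy as np
from scipy import stats

def threshold_year(death_times, coverage=0.9775):
    """Follow-up time covering all but ~2.25% of the survival times of
    patients who died of their cancer. Illustrative only: uses a
    maximum-likelihood lognormal fit, not the study's minimum chi-square fit."""
    shape, loc, scale = stats.lognorm.fit(death_times, floc=0)
    return stats.lognorm.ppf(coverage, shape, loc=loc, scale=scale)

# Synthetic survival times (years) for a hypothetical cancer site.
rng = np.random.default_rng(0)
times = rng.lognormal(mean=1.0, sigma=0.6, size=500)
print("threshold year: %.1f" % threshold_year(times))
```

    Waiting until this quantile has passed is what makes the Kaplan-Meier estimate at the threshold year a reasonable stand-in for the statistical cure rate.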

  10. Science of the science, drug discovery and artificial neural networks.

    PubMed

    Patel, Jigneshkumar

    2013-03-01

    The drug discovery process often encounters complex problems that are difficult to solve by human intelligence alone. Artificial Neural Networks (ANNs) are one of the Artificial Intelligence (AI) technologies used for solving such problems. ANNs are widely used for primary virtual screening of compounds, quantitative structure-activity relationship studies, receptor modeling, formulation development, pharmacokinetics, and other processes involving complex mathematical modeling. Despite such advanced technologies and a good understanding of biological systems, drug discovery remains a lengthy, expensive, difficult, and inefficient process with a low rate of successful new therapeutic discovery. In this paper, the author discusses drug discovery science and ANNs from first principles, which may help readers understand how ANNs can be applied to improve the efficiency of drug discovery.

  11. A note on the false discovery rate of novel peptides in proteogenomics.

    PubMed

    Zhang, Kun; Fu, Yan; Zeng, Wen-Feng; He, Kun; Chi, Hao; Liu, Chao; Li, Yan-Chang; Gao, Yuan; Xu, Ping; He, Si-Min

    2015-10-15

    Proteogenomics has been well accepted as a tool to discover novel genes. In most conventional proteogenomic studies, a global false discovery rate (FDR) is used to filter out false positives when identifying credible novel peptides. However, the actual level of false positives among novel peptides is often out of control and behaves differently for different genomes. To model this problem quantitatively, we theoretically analyze the subgroup FDRs of annotated and novel peptides. Our analysis shows that the annotation completeness ratio of a genome is the dominant factor influencing the subgroup FDR of novel peptides. Experimental results on two real datasets of Escherichia coli and Mycobacterium tuberculosis support our conjecture. Contact: yfu@amss.ac.cn or xupingghy@gmail.com or smhe@ict.ac.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
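
    Why a global FDR under-controls the novel subgroup can be seen in a toy target-decoy simulation: when the annotation is nearly complete, almost all "novel" matches are random, so the decoys concentrate in that subgroup. All scores and sizes below are made up for illustration; this is not the paper's model:

```python
import numpy as np

def decoy_fdr(targets, decoys, threshold):
    """Target-decoy FDR estimate: decoys passing / targets passing."""
    passing_t = np.sum(targets >= threshold)
    passing_d = np.sum(decoys >= threshold)
    return passing_d / max(passing_t, 1)

rng = np.random.default_rng(1)
# Annotated peptides: mostly genuine, high-scoring matches.
ann_t = np.concatenate([rng.normal(4, 1, 9000), rng.normal(0, 1, 1000)])
ann_d = rng.normal(0, 1, 10000)
# Novel peptides: annotation is nearly complete, so most "novel"
# matches are random; only a few genuine hits remain to be found.
nov_t = np.concatenate([rng.normal(4, 1, 50), rng.normal(0, 1, 950)])
nov_d = rng.normal(0, 1, 1000)

pool_t = np.concatenate([ann_t, nov_t])
pool_d = np.concatenate([ann_d, nov_d])
# Lowest score threshold keeping the *global* FDR at or below 1%.
thr = min(s for s in np.linspace(0, 6, 601)
          if decoy_fdr(pool_t, pool_d, s) <= 0.01)
print("global %.3f  annotated %.3f  novel %.3f" % (
    decoy_fdr(pool_t, pool_d, thr),
    decoy_fdr(ann_t, ann_d, thr),
    decoy_fdr(nov_t, nov_d, thr)))
```

    The global and annotated-subgroup FDRs sit near the nominal 1%, while the novel subgroup's FDR is many times higher, mirroring the effect of the annotation completeness ratio described above.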

  12. Leveraging cell type specific regulatory regions to detect SNPs associated with tissue factor pathway inhibitor plasma levels.

    PubMed

    Dennis, Jessica; Medina-Rivera, Alejandra; Truong, Vinh; Antounians, Lina; Zwingerman, Nora; Carrasco, Giovana; Strug, Lisa; Wells, Phil; Trégouët, David-Alexandre; Morange, Pierre-Emmanuel; Wilson, Michael D; Gagnon, France

    2017-07-01

    Tissue factor pathway inhibitor (TFPI) regulates the formation of intravascular blood clots, which manifest clinically as ischemic heart disease, ischemic stroke, and venous thromboembolism (VTE). TFPI plasma levels are heritable, but the genetics underlying TFPI plasma level variability are poorly understood. Herein we report the first genome-wide association scan (GWAS) of TFPI plasma levels, conducted in 251 individuals from five extended French-Canadian families ascertained on VTE. To improve discovery, we also applied a hypothesis-driven (HD) GWAS approach that prioritized single nucleotide polymorphisms (SNPs) in (1) hemostasis pathway genes, and (2) vascular endothelial cell (EC) regulatory regions, ECs being among the highest expressers of TFPI. Our GWAS identified 131 SNPs with suggestive evidence of association (P-value < 5 × 10⁻⁸), but no SNPs reached the genome-wide threshold for statistical significance. Hemostasis pathway genes were not enriched for TFPI plasma level associated SNPs (global hypothesis test P-value = 0.147), but EC regulatory regions contained more TFPI plasma level associated SNPs than expected by chance (global hypothesis test P-value = 0.046). We therefore stratified our genome-wide SNPs, prioritizing those in EC regulatory regions via stratified false discovery rate (sFDR) control, and reranked the SNPs by q-value. The minimum q-value was 0.27, and the top-ranked SNPs did not show association evidence in the MARTHA replication sample of 1,033 unrelated VTE cases. Although this study did not yield new loci for TFPI, our work lays out a strategy for utilizing epigenomic data in prioritization schemes for future GWAS. © 2017 WILEY PERIODICALS, INC.
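
    The stratification idea, controlling the FDR within each stratum so that a small, prioritized stratum is not swamped by the genome-wide bulk, can be sketched with per-stratum Benjamini-Hochberg q-values. This is a simplified stand-in for the sFDR procedure the study used, with hypothetical p-values:

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg q-values for a 1-D array of p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    q = np.empty(n)
    running_min = 1.0
    # Step-up: q for the i-th smallest p-value is min over j >= i of p_(j)*n/j.
    for rank in range(n, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, p[idx] * n / rank)
        q[idx] = running_min
    return q

def stratified_qvalues(pvals, strata):
    """q-values computed separately within each stratum, so a small
    prioritized stratum (e.g. SNPs in EC regulatory regions) competes
    only against itself before the pooled reranking."""
    p = np.asarray(pvals, dtype=float)
    s = np.asarray(strata)
    q = np.empty_like(p)
    for label in np.unique(s):
        mask = s == label
        q[mask] = bh_qvalues(p[mask])
    return q

# Hypothetical p-values; stratum 1 marks SNPs in prioritized regulatory regions.
p = np.array([1e-4, 0.03, 0.2, 0.6, 0.01, 0.5])
strata = np.array([1, 1, 0, 0, 1, 0])
print(stratified_qvalues(p, strata))
```

    Reranking by these within-stratum q-values is what lets a modest signal in a small, biologically motivated stratum rise above the genome-wide background.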

  13. Genome-Wide Association of CKD Progression: The Chronic Renal Insufficiency Cohort Study.

    PubMed

    Parsa, Afshin; Kanetsky, Peter A; Xiao, Rui; Gupta, Jayanta; Mitra, Nandita; Limou, Sophie; Xie, Dawei; Xu, Huichun; Anderson, Amanda Hyre; Ojo, Akinlolu; Kusek, John W; Lora, Claudia M; Hamm, L Lee; He, Jiang; Sandholm, Niina; Jeff, Janina; Raj, Dominic E; Böger, Carsten A; Bottinger, Erwin; Salimi, Shabnam; Parekh, Rulan S; Adler, Sharon G; Langefeld, Carl D; Bowden, Donald W; Groop, Per-Henrik; Forsblom, Carol; Freedman, Barry I; Lipkowitz, Michael; Fox, Caroline S; Winkler, Cheryl A; Feldman, Harold I

    2017-03-01

    The rate of decline of renal function varies significantly among individuals with CKD. To better understand the contribution of genetics to CKD progression, we performed a genome-wide association study among participants in the Chronic Renal Insufficiency Cohort Study. Our outcome of interest was CKD progression measured as change in eGFR over time among 1331 blacks and 1476 whites with CKD. We stratified all analyses by race and subsequently, diabetes status. Single-nucleotide polymorphisms (SNPs) that surpassed a significance threshold of P < 1 × 10⁻⁶ for association with eGFR slope were selected as candidates for follow-up and secondarily tested for association with proteinuria and time to ESRD. We identified 12 such SNPs among black patients and six among white patients. We were able to conduct follow-up analyses of three candidate SNPs in similar (replication) cohorts and eight candidate SNPs in phenotype-related (validation) cohorts. Among blacks without diabetes, rs653747 in LINC00923 replicated in the African American Study of Kidney Disease and Hypertension cohort (discovery P = 5.42 × 10⁻⁷; replication P = 0.039; combined P = 7.42 × 10⁻⁹). This SNP also associated with ESRD (hazard ratio, 2.0; 95% confidence interval, 1.5 to 2.7; P = 4.90 × 10⁻⁶). Similarly, rs931891 in LINC00923 associated with eGFR decline (P = 1.44 × 10⁻⁴) in white patients without diabetes. In summary, SNPs in LINC00923, an RNA gene expressed in the kidney, significantly associated with CKD progression in individuals with nondiabetic CKD. However, the lack of equivalent cohorts hampered replication for most discovery loci. Further replication of our findings in comparable study populations is warranted. Copyright © 2017 by the American Society of Nephrology.

  14. Comprehensive analysis of yeast metabolite GC x GC-TOFMS data: combining discovery-mode and deconvolution chemometric software.

    PubMed

    Mohler, Rachel E; Dombek, Kenneth M; Hoggard, Jamin C; Pierce, Karisa M; Young, Elton T; Synovec, Robert E

    2007-08-01

    The first extensive study of yeast metabolite GC x GC-TOFMS data from cells grown under fermenting (R) and respiring (DR) conditions is reported. In this study, recently developed chemometric software for three-dimensional instrumentation data was implemented, using a statistically based Fisher ratio method. The Fisher ratio method is fully automated and rapidly reduces the data to pinpoint the two-dimensional chromatographic peaks that differentiate sample types, while utilizing all of the mass channels. The effect of lowering the Fisher ratio threshold on peak identification was studied. At the lowest threshold (just above the noise level), 73 metabolite peaks were identified, nearly three-fold more than the 26 metabolite peaks previously reported. In addition to the 73 identified metabolites, 81 unknown metabolites were also located. A Parallel Factor Analysis graphical user interface (PARAFAC GUI) was applied to selected mass channels to obtain a concentration ratio for each metabolite under the two growth conditions. Of the 73 known metabolites identified by the Fisher ratio method, 54 differed significantly between the DR and R conditions at the 95% confidence level according to a rigorous Student's t-test. PARAFAC determined the concentration ratio and provided a fully deconvoluted (i.e., mathematically resolved) mass spectrum for each of the metabolites. The combination of the Fisher ratio method with the PARAFAC GUI provides high-throughput software for discovery-based metabolomics research, and is novel for GC x GC-TOFMS data due to the use of the entire data set in the analysis (640 MB x 70 runs, double-precision floating point).
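
    The core of the Fisher ratio method is ranking each feature by its between-class variance relative to its within-class variance across replicate runs, then keeping features above a noise-level threshold. A generic sketch on synthetic data (not the authors' software; sizes and the induced feature are invented):

```python
import numpy as np

def fisher_ratios(class_a, class_b):
    """Per-feature Fisher ratio: between-class variance divided by pooled
    within-class variance. Rows are replicate runs, columns are features
    (e.g. flattened retention-time x mass-channel points)."""
    a, b = np.asarray(class_a, float), np.asarray(class_b, float)
    n_a, n_b = len(a), len(b)
    grand = np.vstack([a, b]).mean(axis=0)
    between = n_a * (a.mean(0) - grand) ** 2 + n_b * (b.mean(0) - grand) ** 2
    within = ((n_a - 1) * a.var(0, ddof=1) + (n_b - 1) * b.var(0, ddof=1)) \
             / (n_a + n_b - 2)
    return between / within

# Synthetic data: 200 features; only feature 0 differs between conditions.
rng = np.random.default_rng(2)
repressed = rng.normal(0, 1, (10, 200))
derepressed = rng.normal(0, 1, (10, 200))
derepressed[:, 0] += 3.0  # a metabolite induced under the second condition
ratios = fisher_ratios(repressed, derepressed)
print("top feature:", int(np.argmax(ratios)))
```

    Lowering the acceptance threshold on these ratios toward the noise floor is what let the study recover many more class-distinguishing peaks, at the price of manual vetting of borderline hits.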

  15. Active Duty - U.S. Army Noise Induced Hearing Injury Surveillance Calendar Years 2009-2013

    DTIC Science & Technology

    2014-06-01

    rates for sensorineural hearing loss, significant threshold shift, tinnitus, and noise-induced hearing loss. The intention is to monitor the morbidity...surveillance. These code groups include sensorineural hearing loss (SNHL), significant threshold shift (STS), noise-induced hearing loss (NIHL) and tinnitus... Tinnitus) was analyzed using a regression model to determine the trend of incidence rates from 2007 to the current year. Statistical significance of a

  16. Comparison of Heart Rate Response to Tennis Activity between Persons with and without Spinal Cord Injuries: Implications for a Training Threshold

    ERIC Educational Resources Information Center

    Barfield, J. P.; Malone, Laurie A.; Coleman, Tristica A.

    2009-01-01

    The purpose of this study was to evaluate the ability of individuals with spinal cord injury (SCI) to reach a training threshold during on-court sport activity. Monitors collected heart rate (HR) data every 5 s for 11 wheelchair tennis players (WCT) with low paraplegia and 11 able-bodied controls matched on experience and skill level (ABT).…

  17. Characterization of Mode 1 and Mode 2 delamination growth and thresholds in graphite/peek composites

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.; Murri, Gretchen B.

    1988-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, was characterized for mode 1 and mode 2 loadings, using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to the strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be used reliably as a life prediction tool: a small error in the estimated applied loads could lead to large errors in the predicted delamination growth rates. Hence, strain energy release rate thresholds, G_th, below which no delamination would occur, were also measured. Mode 1 and 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred by at least 1,000,000 cycles was taken as the threshold strain energy release rate. Comments are given on how testing effects, such as facial interference or delamination front damage, may invalidate the experimental determination of the constants in the power-law expression.
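
    The load-sensitivity point can be made concrete with a short worked example. For beam-type specimens, G scales with the square of the applied load P, and growth follows a power law da/dN = c·Gⁿ, so a relative load error e multiplies the predicted growth rate by ((1 + e)²)ⁿ. The exponent below is illustrative, not a value from the report:

```python
# da/dN = c * G**n with G proportional to P**2, so a relative load error e
# multiplies the predicted growth rate by ((1 + e)**2)**n.
n = 10            # illustrative power-law exponent (assumed, not measured here)
load_error = 0.05
amplification = (1 + load_error) ** (2 * n)
print("5%% load error -> %.1fx error in predicted da/dN" % amplification)
```

    With exponents of this size, a few percent uncertainty in load translates into a severalfold error in predicted growth rate, which is why the no-growth threshold G_th is the more robust design quantity.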

  18. Characterization of Mode I and Mode II delamination growth and thresholds in AS4/PEEK composites

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.; Murri, Gretchen Bostaph

    1990-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, was characterized for mode 1 and mode 2 loadings, using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to the strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be used reliably as a life prediction tool: a small error in the estimated applied loads could lead to large errors in the predicted delamination growth rates. Hence, strain energy release rate thresholds, G_th, below which no delamination would occur, were also measured. Mode 1 and 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred by at least 1,000,000 cycles was taken as the threshold strain energy release rate. Comments are given on how testing effects, such as facial interference or delamination front damage, may invalidate the experimental determination of the constants in the power-law expression.

  19. Comparing the effects of age on amplitude modulation and frequency modulation detection.

    PubMed

    Wallaert, Nicolas; Moore, Brian C J; Lorenzi, Christian

    2016-06-01

    Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured at 40 dB sensation level for young (22-28 yrs) and older (44-66 yrs) listeners with normal audiograms for a carrier frequency of 500 Hz and modulation rates of 2 and 20 Hz. The number of modulation cycles, N, varied between 2 and 9. For FM detection, uninformative AM at the same rate as the FM was superimposed to disrupt excitation-pattern cues. For both groups, AM and FM detection thresholds were lower for the 2-Hz than for the 20-Hz rate, and AM and FM detection thresholds decreased with increasing N. Thresholds were higher for older than for younger listeners, especially for FM detection at 2 Hz, possibly reflecting the effect of age on the use of temporal-fine-structure cues for 2-Hz FM detection. The effect of increasing N was similar across groups for both AM and FM. However, at 20 Hz, older listeners showed a greater effect of increasing N than younger listeners for both AM and FM. The results suggest that ageing reduces sensitivity to both excitation-pattern and temporal-fine-structure cues for modulation detection, but more so for the latter, while sparing temporal integration of these cues at low modulation rates.

  20. From Extrasolar Planets to Exo-Earths

    NASA Astrophysics Data System (ADS)

    Fischer, Debra

    2018-06-01

    The ancient Greeks debated whether the Earth was unique, or innumerable worlds existed around other Suns. Twenty-five years ago, technology and human ingenuity enabled the discovery of the first extrasolar planet candidates. The architectures of these first systems, with gas giant planets in star-skirting orbits, were unexpected and again raised an echo of that ancient question: is the Earth typical or unique? We are interested in this seemingly anthropocentric question because with all of our searching and discoveries, Earth is the only place where life has been found. It is the question of whether life exists elsewhere that energizes the search for exoplanets. The trajectory of this field has been stunning. After a steady stream of detections with the radial velocity method, a burst of discovery was made possible with the NASA Kepler mission. While thousands of smaller planets have now been found, true Earth analogs have eluded robust detection. However, we are sharpening the knives of our technology and without a doubt we now stand at the threshold of detecting hundreds of Earth analogs. Using Gaia, TESS, WFIRST, JWST and new ground-based spectrographs, we will learn the names and addresses of the worlds that orbit nearby stars and we will be ready to probe their atmospheres. We will finally resolve the ancient question of whether life is unique or common.

  1. A Size Effect on the Fatigue Crack Growth Rate Threshold of Alloy 718

    NASA Technical Reports Server (NTRS)

    Garr, K. R.; Hresko, G. C., III

    1998-01-01

    Fatigue crack growth rate (FCGR) tests were conducted on Alloy 718 in the solution annealed and aged condition at room temperature. In each test, the FCGR threshold was measured using the decreasing-ΔK method. Initial testing was at two facilities, one of which used C(T) specimens with W = 127 mm. Previous data at the other facility had been obtained with specimens with W = 50.8 mm. A comparison of test results at R = 0.1 showed that the threshold for the 127 mm specimen was considerably higher than that of the 50.8 mm specimen. A check showed that this difference was not due to a heat-to-heat or lab-to-lab variation. Additional tests were conducted on specimens with W = 25.4 mm and at other R values. Data for the various specimens are presented along with parameters usually used to describe threshold behavior.

  2. Fault tolerance with noisy and slow measurements and preparation.

    PubMed

    Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason

    2010-09-03

    It is not so well known that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in terms of relaxed accuracy, speed, and addressing requirements, they have usually been overlooked because they are expected to yield a very poor threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76 × 10⁻⁵ (30% of the best known result for the same code using measurement) while admitting up to 1/3 error rates for measurements and placing no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.

  3. Variability of argon laser-induced sensory and pain thresholds on human oral mucosa and skin.

    PubMed Central

    Svensson, P.; Bjerring, P.; Arendt-Nielsen, L.; Kaaber, S.

    1991-01-01

    The variability of laser-induced pain perception on human oral mucosa and hairy skin was investigated in order to establish a new method for evaluation of pain in the orofacial region. A high-energy argon laser was used for experimental pain stimulation, and sensory and pain thresholds were determined. The intra-individual coefficients of variation for oral thresholds were comparable to cutaneous thresholds. However, inter-individual variation was smaller for oral thresholds, which could be due to larger variation in cutaneous optical properties. The short-term and 24-hr changes in thresholds on both surfaces were less than 9%. The results indicate that habituation to laser thresholds may account for part of the intra-individual variation observed. However, the subjective ratings of the intensity of the laser stimuli were constant. Thus, oral thresholds may, like cutaneous thresholds, be used for assessment and quantification of analgesic efficacies and to investigate various pain conditions. PMID:1814248

  4. Discovery of stationary operation of quiescent H-mode plasmas with net-zero neutral beam injection torque and high energy confinement on DIII-D

    DOE PAGES

    Burrell, Keith H.; Barada, Kshitish; Chen, Xi; ...

    2016-03-11

    Here, recent experiments in DIII-D have led to the discovery of a means of modifying edge turbulence to achieve stationary, high-confinement operation without Edge Localized Mode (ELM) instabilities and with no net external torque input. Eliminating the ELM-induced heat bursts and controlling plasma stability at low rotation represent two of the great challenges for fusion energy. By exploiting edge turbulence in a novel manner, we achieved excellent tokamak performance, well above the H98y2 international tokamak energy confinement scaling (H98y2 = 1.25), thus meeting an additional confinement challenge that is usually difficult at low torque. The new regime is triggered in double-null plasmas by ramping the injected torque to zero and then maintaining it there. This lowers E×B rotation shear in the plasma edge, allowing low-k, broadband, electromagnetic turbulence to increase. In the H-mode edge, a narrow transport barrier usually grows until MHD instability (a peeling-ballooning mode) leads to the ELM heat burst. However, the increased turbulence reduces the pressure gradient, allowing the development of a broader and thus higher transport barrier. A 60% increase in pedestal pressure and a 40% increase in energy confinement result. An increase in the E×B shearing rate inside the edge pedestal is a key factor in the confinement increase. Strong double-null plasma shaping raises the threshold for the ELM instability, allowing the plasma to reach a transport-limited state near but below the explosive ELM stability boundary. The resulting plasmas have burning-plasma-relevant β_N = 1.6-1.8 and run without the need for extra torque from 3D magnetic fields. To date, stationary conditions have been produced for 2 s, or 12 energy confinement times, limited only by external hardware constraints. Stationary operation with improved pedestal conditions is highly significant for future burning plasma devices, since operation without ELMs at low rotation and good confinement is key for fusion energy production.

  6. Serendipity in Cancer Drug Discovery: Rational or Coincidence?

    PubMed

    Prasad, Sahdeo; Gupta, Subash C; Aggarwal, Bharat B

    2016-06-01

    Novel drug development leading to final approval by the US FDA can cost as much as two billion dollars. Why novel drug discovery is so expensive is unclear, but high failure rates at the preclinical and clinical stages are major reasons. Although therapies targeting a given cell signaling pathway or protein have become prominent in drug discovery, such treatments alone have done little to prevent or treat any disease, because most chronic diseases have been found to be multigenic. A review of the discovery of numerous drugs currently used for various diseases, including cancer, diabetes, and cardiovascular, pulmonary, and autoimmune diseases, indicates that serendipity has played a major role in their discovery. In this review we provide evidence that rational drug discovery and targeted therapies have played minimal roles in drug discovery, and that serendipity and coincidence have played, and continue to play, major roles. The primary focus of this review is on cancer-related drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Large-scale exploratory genetic analysis of cognitive impairment in Parkinson's disease.

    PubMed

    Mata, Ignacio F; Johnson, Catherine O; Leverenz, James B; Weintraub, Daniel; Trojanowski, John Q; Van Deerlin, Vivianna M; Ritz, Beate; Rausch, Rebecca; Factor, Stewart A; Wood-Siverio, Cathy; Quinn, Joseph F; Chung, Kathryn A; Peterson-Hiller, Amie L; Espay, Alberto J; Revilla, Fredy J; Devoto, Johnna; Yearout, Dora; Hu, Shu-Ching; Cholerton, Brenna A; Montine, Thomas J; Edwards, Karen L; Zabetian, Cyrus P

    2017-08-01

    Cognitive impairment is a common and disabling problem in Parkinson's disease (PD). Identification of genetic variants that influence the presence or severity of cognitive deficits in PD might provide a clearer understanding of the pathophysiology underlying this important nonmotor feature. We genotyped 1105 PD patients from the PD Cognitive Genetics Consortium for 249,336 variants using the NeuroX array. Participants underwent assessments of learning and memory (Hopkins Verbal Learning Test-Revised [HVLT-R]), working memory/executive function (Letter-Number Sequencing and Trail Making Test [TMT] A and B), language processing (semantic and phonemic verbal fluency), visuospatial abilities (Benton Judgment of Line Orientation [JoLO]), and global cognitive function (Montreal Cognitive Assessment). For common variants, we used linear regression to test for association between genotype and cognitive performance with adjustment for important covariates. Rare variants were analyzed using the optimal unified sequence kernel association test. The significance threshold was defined as a false discovery rate-corrected p-value (P_FDR) of 0.05. Eighteen common variants in 13 genomic regions exceeded the significance threshold for one of the cognitive tests. These included GBA rs2230288 (E326K; P_FDR = 2.7 × 10⁻⁴) for JoLO, PARP4 rs9318600 (P_FDR = 0.006) and rs9581094 (P_FDR = 0.006) for HVLT-R total recall, and MTCL1 rs34877994 (P_FDR = 0.01) for TMT B-A. Analysis of rare variants did not yield any significant gene regions. We have conducted the first large-scale PD cognitive genetics analysis and nominated several new putative susceptibility genes for cognitive impairment in PD. These results will require replication in independent PD cohorts. Published by Elsevier Inc.

  8. Finite mixture modeling for vehicle crash data with application to hotspot identification.

    PubMed

    Park, Byung-Jung; Lord, Dominique; Lee, Chungwon

    2014-10-01

    The application of finite mixture regression models has recently gained interest among highway safety researchers because of its considerable potential for addressing unobserved heterogeneity. Finite mixture models assume that the observations of a sample arise from two or more unobserved components with unknown proportions. Both fixed and varying weight parameter models have been shown to be useful for explaining the heterogeneity and the nature of the dispersion in crash data. Given the superior performance of the finite mixture model, this study, using observed and simulated data, investigated the relative performance of the finite mixture model and the traditional negative binomial (NB) model in terms of hotspot identification. For the observed data, rural multilane segment crash data for divided highways in California and Texas were used. The results showed that the difference, measured by the percentage deviation in ranking orders, was relatively small for this dataset. Nevertheless, the ranking results from the finite mixture model were considered more reliable than those of the NB model because of the better model specification. This finding was also supported by the simulation study, which produced a high number of false positives and negatives when a mis-specified model was used for hotspot identification. Regarding an optimal threshold value for identifying hotspots, another simulation analysis indicated a trade-off: as the threshold value changes, the false discovery rate increases while the false negative rate decreases. Since the costs associated with false positives and false negatives differ, the optimal threshold value should be chosen by weighing these two costs against each other so that unnecessary expenses are minimized. Copyright © 2014 Elsevier Ltd. All rights reserved.
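
    The cost-weighted threshold choice described above can be sketched with a generic decision-theoretic toy, not the authors' procedure: given an estimated probability that each site is a true hotspot, flag the set of sites minimizing expected cost, where the costs and probabilities below are hypothetical:

```python
import numpy as np

def optimal_cutoff(p_hotspot, cost_fp=1.0, cost_fn=5.0):
    """Probability cutoff minimizing expected cost of hotspot flagging.
    Flagging a non-hotspot costs cost_fp (false positive); missing a
    true hotspot costs cost_fn (false negative)."""
    p = np.sort(np.asarray(p_hotspot, float))[::-1]
    # Expected false positives / false negatives if we flag the top-k sites.
    fp = np.concatenate([[0.0], np.cumsum(1.0 - p)])
    fn = p.sum() - np.concatenate([[0.0], np.cumsum(p)])
    k = int(np.argmin(cost_fp * fp + cost_fn * fn))
    return p[k - 1] if k > 0 else 1.0

# The analytic optimum flags every site with p > cost_fp / (cost_fp + cost_fn).
probs = np.linspace(0.01, 0.99, 99)
print("cutoff:", round(optimal_cutoff(probs), 2))
```

    Raising cost_fn relative to cost_fp lowers the cutoff and flags more sites, which is exactly the trade-off between unnecessary treatment expenses and missed hotspots that the abstract describes.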

  9. Predicting pain relief: Use of pre-surgical trigeminal nerve diffusion metrics in trigeminal neuralgia.

    PubMed

    Hung, Peter S-P; Chen, David Q; Davis, Karen D; Zhong, Jidan; Hodaie, Mojgan

    2017-01-01

    Trigeminal neuralgia (TN) is a chronic neuropathic facial pain disorder that commonly responds to surgery. A proportion of patients, however, do not benefit and suffer ongoing pain. There are currently no imaging tools that permit the prediction of treatment response. To address this paucity, we used diffusion tensor imaging (DTI) to determine whether pre-surgical trigeminal nerve microstructural diffusivities can prognosticate response to TN treatment. In 31 TN patients and 16 healthy controls, multi-tensor tractography was used to extract DTI-derived metrics (axial diffusivity (AD), radial diffusivity (RD), mean diffusivity (MD), and fractional anisotropy (FA)) from the cisternal segment, root entry zone, and pontine segment of the trigeminal nerves for false discovery rate-corrected Student's t-tests. Ipsilateral diffusivities were bootstrap resampled to visualize group-level diffusivity thresholds of long-term response. To obtain an individual-level statistical classifier of surgical response, we conducted discriminant function analysis (DFA) with the type of surgery chosen, together with ipsilateral measurements and ipsilateral/contralateral ratios of AD and RD from all regions of interest, as prediction variables. Abnormal diffusivity in the trigeminal pontine fibers, demonstrated by increased AD, distinguished non-responders (n = 14) from controls. Bootstrap resampling revealed three ipsilateral diffusivity thresholds of response (pontine AD, pontine MD, and cisternal FA) that together separated 85% of non-responders from responders. DFA produced an 83.9% (71.0% under leave-one-out cross-validation) accurate prognosticator of response that successfully identified 12/14 non-responders. Our study demonstrates that pre-surgical DTI metrics can serve as a highly predictive, individualized tool to prognosticate surgical response.
We further highlight abnormal pontine segment diffusivities as key features of treatment non-response and confirm the axiom that central pain does not commonly benefit from peripheral treatments.

  10. Contributions of adaptation currents to dynamic spike threshold on slow timescales: Biophysical insights from conductance-based models

    NASA Astrophysics Data System (ADS)

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin; Li, Huiyan; Che, Yanqiu

    2017-06-01

    Spike-frequency adaptation (SFA), mediated by various adaptation currents such as the voltage-gated K+ current (I_M), the Ca2+-gated K+ current (I_AHP), or the Na+-activated K+ current (I_KNa), exists in many types of neurons and has been shown to effectively shape their information transmission properties on slow timescales. Here we use conductance-based models to investigate how the activation of three adaptation currents regulates the threshold voltage for action potential (AP) initiation during the course of SFA. We observe that the spike threshold becomes depolarized and the rate of membrane depolarization (dV/dt) preceding an AP is reduced as adaptation currents reduce the firing rate. The presence of inhibitory adaptation currents thus enables the neuron to generate a dynamic threshold, inversely correlated with the preceding dV/dt, on slower timescales than the fast dynamics of AP generation. By analyzing the interactions of ionic currents at subthreshold potentials, we find that the activation of adaptation currents increases the outward component of the net membrane current prior to AP initiation, which antagonizes the inward Na+ current and results in a depolarized threshold and lower dV/dt from one AP to the next. Our simulations demonstrate that the threshold dynamics on slow timescales is a secondary effect caused by the activation of adaptation currents. These findings provide a biophysical interpretation of the relationship between adaptation currents and spike threshold.
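The subthreshold current balance described above can be illustrated numerically: an outward adaptation conductance subtracts from the depolarizing drive and lowers dV/dt. All conductances and reversal potentials below are made-up illustrative values, not parameters from the paper's models.

```python
def dvdt(v, g_adapt, *, c_m=1.0, g_l=0.1, e_l=-65.0, e_k=-90.0, i_stim=2.0):
    """Membrane depolarization rate (mV/ms) at subthreshold voltage v (mV):
    stimulus current minus leak minus an inhibitory K+-like adaptation
    current. Parameter values are illustrative only."""
    i_leak = g_l * (v - e_l)
    i_adapt = g_adapt * (v - e_k)   # outward for v > E_K
    return (i_stim - i_leak - i_adapt) / c_m

no_adapt = dvdt(-55.0, g_adapt=0.0)
with_adapt = dvdt(-55.0, g_adapt=0.01)
# activating the adaptation conductance reduces dV/dt before spike onset
```

A slower approach to threshold of this kind is what couples the adaptation currents to the depolarized, dV/dt-dependent spike threshold reported above.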

  11. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears

    USGS Publications Warehouse

    Laufenberg, Jared S.; Clark, Joseph D.; Chandler, Richard B.

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that the annual apparent survival rate for adult females averaged over 5 years was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [formula: see text], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.

  12. Estimating population extinction thresholds with categorical classification trees for Louisiana black bears.

    PubMed

    Laufenberg, Jared S; Clark, Joseph D; Chandler, Richard B

    2018-01-01

    Monitoring vulnerable species is critical for their conservation. Thresholds or tipping points are commonly used to indicate when populations become vulnerable to extinction and to trigger changes in conservation actions. However, quantitative methods to determine such thresholds have not been well explored. The Louisiana black bear (Ursus americanus luteolus) was removed from the list of threatened and endangered species under the U.S. Endangered Species Act in 2016, and our objectives were to determine the most appropriate parameters and thresholds for monitoring and management action. Capture-mark-recapture (CMR) data from 2006 to 2012 were used to estimate population parameters and variances. We used stochastic population simulations and conditional classification trees to identify demographic rates for monitoring that would be most indicative of heightened extinction risk. We then identified thresholds that would be reliable predictors of population viability. Conditional classification trees indicated that the annual apparent survival rate for adult females averaged over 5 years ([Formula: see text]) was the best predictor of population persistence. Specifically, population persistence was estimated to be ≥95% over 100 years when [Formula: see text], suggesting that this statistic can be used as a threshold to trigger management intervention. Our evaluation produced monitoring protocols that reliably predicted population persistence and were cost-effective. We conclude that population projections and conditional classification trees can be valuable tools for identifying extinction thresholds used in monitoring programs.
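A univariate threshold of the kind these classification trees identify can be sketched as a depth-1 tree (a decision stump) searched exhaustively over candidate split points; the simulated survival rates and persistence outcomes below are fabricated for illustration.

```python
def best_threshold(survival_rates, persisted):
    """Search all one-variable split points (a depth-1 classification
    tree) for the one that best separates persisting from
    non-persisting simulated populations."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(survival_rates)):
        # rule: predict persistence when survival rate >= t
        err = sum((s >= t) != p for s, p in zip(survival_rates, persisted))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Hypothetical simulation output: adult-female survival vs. persistence
rates = [0.80, 0.82, 0.85, 0.88, 0.90, 0.92, 0.94, 0.96]
persisted = [False, False, False, False, True, True, True, True]
t, err = best_threshold(rates, persisted)  # t == 0.90, err == 0
```

In practice the split would be chosen on many stochastic projections, but the principle of the tree-derived monitoring threshold is the same.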

  13. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low-density parity-check (LDPC) codes built from protographs. The described coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edge assignments in the graph are then searched, and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode-and-forward relay channels.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelson, Alan S; Gerrish, Philip J

    The constructive creativity of natural selection originates from its paradoxical ability to foster cooperation through competition. Cooperating communities, ranging from complex societies to somatic tissue, are constantly under attack, however, by non-cooperating mutants or transformants, called 'cheaters'. Structure in these communities promotes the formation of cooperating clusters whose competitive superiority can alone be sufficient to thwart outgrowths of cheaters and thereby maintain cooperation. But we find that when cheaters appear too frequently, exceeding a threshold mutation or transformation rate, their scattered outgrowths infiltrate and break up cooperating clusters, resulting in a cascading loss of community integrity, a switch to net positive selection for cheaters, and ultimately in the loss of cooperation. We find that this threshold mutation rate is directly proportional to the fitness support received from each cooperating neighbor minus the individual fitness benefit of cheating. When mutation rate also evolves, this threshold is crossed spontaneously after thousands of generations, at which point cheaters rapidly invade. In a structured community, cooperation can persist only if the mutation rate remains below a critical value.
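The reported proportionality can be written as a one-line sketch; the proportionality constant k and the numeric values below are assumptions for illustration only, not quantities given in the abstract.

```python
def threshold_mutation_rate(neighbor_support, cheat_benefit, k=1.0):
    """Sketch of the reported relation: the critical mutation or
    transformation rate scales with the per-neighbor fitness support
    minus the individual fitness benefit of cheating. The constant k
    is an assumed proportionality factor."""
    return k * (neighbor_support - cheat_benefit)

# Cooperation persists only while the actual rate stays below this value.
mu_c = threshold_mutation_rate(neighbor_support=0.05, cheat_benefit=0.02)  # ≈ 0.03
```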

  15. A Knowledge Discovery Approach to Diagnosing Intracranial Hematomas on Brain CT: Recognition, Measurement and Classification

    NASA Astrophysics Data System (ADS)

    Liao, Chun-Chih; Xiao, Furen; Wong, Jau-Min; Chiang, I.-Jen

    Computed tomography (CT) of the brain is the preferred study in neurological emergencies. Physicians use CT to diagnose various types of intracranial hematomas, including epidural, subdural, and intracerebral hematomas, according to their locations and shapes. We propose a novel method that can automatically diagnose intracranial hematomas by combining machine vision and knowledge discovery techniques. The skull on the CT slice is located and the depth of each intracranial pixel is labeled. After normalization of the pixel intensities by their depth, the hyperdense area of intracranial hematoma is segmented with multi-resolution thresholding and region-growing. We then apply the C4.5 algorithm to construct a decision tree using the features of the segmented hematoma and the diagnoses made by physicians. The algorithm was evaluated on 48 pathological images from a single institute. The two discovered rules closely resemble those used by human experts and were able to make correct diagnoses in all cases.
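A minimal stand-in for the region-growing step might look like the following; the toy intensity array, seed, and threshold are invented, and the paper's multi-resolution thresholding is not reproduced.

```python
from collections import deque

def region_grow(image, seed, threshold):
    """Grow a region from a seed pixel by breadth-first search, adding
    4-connected neighbours whose intensity is at least `threshold`
    (a simplified sketch of thresholded region-growing)."""
    rows, cols = len(image), len(image[0])
    seen = {seed}
    queue = deque([seed])
    region = []
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and image[nr][nc] >= threshold):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy "CT slice": the hyperdense hematoma is the block of 80s
slice_ = [
    [10, 10, 10, 10],
    [10, 80, 85, 10],
    [10, 82, 88, 10],
    [10, 10, 10, 10],
]
hematoma = region_grow(slice_, seed=(1, 1), threshold=60)  # 4 pixels found
```

Features of the segmented region (area, shape, location relative to the skull) would then feed the decision-tree learner.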

  16. Catch bonds govern adhesion through L-selectin at threshold shear.

    PubMed

    Yago, Tadayuki; Wu, Jianhua; Wey, C Diana; Klopocki, Arkadiusz G; Zhu, Cheng; McEver, Rodger P

    2004-09-13

    Flow-enhanced cell adhesion is an unexplained phenomenon that might result from a transport-dependent increase in on-rates or a force-dependent decrease in off-rates of adhesive bonds. L-selectin requires a threshold shear to support leukocyte rolling on P-selectin glycoprotein ligand-1 (PSGL-1) and other vascular ligands. Low forces decrease L-selectin-PSGL-1 off-rates (catch bonds), whereas higher forces increase off-rates (slip bonds). We determined that a force-dependent decrease in off-rates dictated flow-enhanced rolling of L-selectin-bearing microspheres or neutrophils on PSGL-1. Catch bonds enabled increasing force to convert short-lived tethers into longer-lived tethers, which decreased rolling velocities and increased the regularity of rolling steps as shear rose from the threshold to an optimal value. As shear increased above the optimum, transitions to slip bonds shortened tether lifetimes, which increased rolling velocities and decreased rolling regularity. Thus, force-dependent alterations of bond lifetimes govern L-selectin-dependent cell adhesion below and above the shear optimum. These findings establish the first biological function for catch bonds as a mechanism for flow-enhanced cell adhesion.

  17. Altering gender role expectations: effects on pain tolerance, pain threshold, and pain ratings.

    PubMed

    Robinson, Michael E; Gagnon, Christine M; Riley, Joseph L; Price, Donald D

    2003-06-01

    The literature demonstrating sex differences in pain is sizable. Most explanations for these differences have focused on biologic mechanisms, and only a few studies have examined social learning. The purpose of this study was to examine the contribution of gender-role stereotypes to sex differences in pain. This study used experimental manipulation of gender-role expectations for men and women. One hundred twenty students participated in the cold pressor task. Before the pain task, participants were given 1 of 3 instructional sets: no expectation, 30-second performance expectation, or a 90-second performance expectation. Pain ratings, threshold, and tolerance were recorded. Significant sex differences in the "no expectation" condition for pain tolerance (t = 2.32, df = 38, P <.05) and post-cold pressor pain ratings (t = 2.6, df = 37, P <.05) were found. Women had briefer tolerance times and higher post-cold pressor ratings than men. When given gender-specific tolerance expectations, men and women did not differ in their pain tolerance, pain threshold, or pain ratings. This is the first empirical study to show that manipulation of expectations alters sex differences in laboratory pain.

  18. Decay rates of magnetic modes below the threshold of a turbulent dynamo.

    PubMed

    Herault, J; Pétrélis, F; Fauve, S

    2014-04-01

    We measure the decay rates of magnetic field modes in a turbulent flow of liquid sodium below the dynamo threshold. We observe that turbulent fluctuations induce energy transfers between modes with different symmetries (dipolar and quadrupolar). Using symmetry properties, we show how to measure the decay rate of each mode without being restricted to the one with the smallest damping rate. We observe that the respective values of the decay rates of these modes depend on the shape of the propellers driving the flow. Dynamical regimes, including field reversals, are observed only when the modes are both nearly marginal. This is in line with a recently proposed model.

  19. Accounting for graduate medical education production of primary care physicians and general surgeons: timing of measurement matters.

    PubMed

    Petterson, Stephen; Burke, Matthew; Phillips, Robert; Teevan, Bridget

    2011-05-01

    Legislation proposed in 2009 to expand GME set institutional primary care and general surgery production eligibility thresholds at 25% at entry into training. The authors measured institutions' production of primary care physicians and general surgeons on completion of first residency versus two to four years after graduation to inform debate and explore residency expansion and physician workforce implications. Production of primary care physicians and general surgeons was assessed by retrospective analysis of the 2009 American Medical Association Masterfile, which includes physicians' training institution, residency specialty, and year of completion for up to six training experiences. The authors measured production rates for each institution based on physicians completing their first residency during 2005-2007 in family or internal medicine, pediatrics, or general surgery. They then reassessed rates to account for those who completed additional training. They compared these rates with proposed expansion eligibility thresholds and current workforce needs. Of 116,004 physicians completing their first residency, 54,245 (46.8%) were in primary care and general surgery. Of 683 training institutions, 586 met the 25% threshold for expansion eligibility. At two to four years out, only 29,963 physicians (25.8%) remained in primary care or general surgery, and 135 institutions lost eligibility. A 35% threshold eliminated 314 institutions collectively training 93,774 residents (80.8%). Residency expansion thresholds that do not account for production at least two to four years after completion of first residency overestimate eligibility. The overall primary care production rate from GME will not sustain the current physician workforce composition. Copyright © by the Association of American Medical Colleges.

  20. Laser damage of free-standing nanometer membranes

    NASA Astrophysics Data System (ADS)

    Morimoto, Yuya; Roland, Iännis; Rennesson, Stéphanie; Semond, Fabrice; Boucaud, Philippe; Baum, Peter

    2017-12-01

    Many high-field/attosecond and ultrafast electron diffraction/microscopy experiments on condensed matter require samples in the form of free-standing membranes with nanometer thickness. Here, we report the measurement of the laser-induced damage threshold of 11 different free-standing nanometer-thin membranes of metallic, semiconducting, and insulating materials for 1-ps, 1030-nm laser pulses at a 50 kHz repetition rate. We find a laser damage threshold very similar to that of the corresponding bulk material. The measurements also reveal a band-gap dependence of the damage threshold as a consequence of different ionization rates. These results establish the suitability of free-standing nanometer membranes for high-field pump-probe experiments.

  1. Transfusion strategy for acute upper gastrointestinal bleeding.

    PubMed

    Handel, James; Lang, Eddy

    2015-09-01

    Clinical question: Does a hemoglobin transfusion threshold of 70 g/L yield better patient outcomes than a threshold of 90 g/L in patients with acute upper gastrointestinal bleeding? Article chosen: Villanueva C, Colomo A, Bosch A, et al. Transfusion strategies for acute upper gastrointestinal bleeding. N Engl J Med 2013;368(1):11-21. Study objectives: The authors of this study measured mortality, from any cause, within the first 45 days, in patients with acute upper gastrointestinal bleeding who were managed with a hemoglobin threshold for red cell transfusion of either 70 g/L or 90 g/L. The secondary outcome measures included the rate of further bleeding and the rate of adverse events.

  2. Study of Near-Threshold Fatigue Crack Propagation in Pipeline Steels in High Pressure Environments

    NASA Technical Reports Server (NTRS)

    Mitchell, M.

    1981-01-01

    Near-threshold fatigue crack propagation in pipeline steels in high-pressure environments was studied. The objective was to determine the level of threshold stress intensity for fatigue crack growth rate behavior in a high-strength low-alloy X60 pipeline-type steel. Complete results have been generated for gaseous hydrogen at ambient pressure, laboratory air at ambient pressure and approximately 60% relative humidity, and a vacuum of 0.000067 Pa (0.0000005 torr), at R-ratios = K(min)/K(max) of 0.1, 0.5, and 0.8. Fatigue crack growth rate behavior in gaseous hydrogen, methane, and methane plus 10 percent hydrogen at 6.89 MPa (1000 psi) was determined.

  3. Electric Organ Discharges of Mormyrid Fish as a Possible Cue for Predatory Catfish

    NASA Astrophysics Data System (ADS)

    Hanika, S.; Kramer, B.

    During reproductive migration the electroreceptive African sharptooth catfish, Clarias gariepinus (Siluriformes), preys mainly on a weakly electric fish, the bulldog Marcusenius macrolepidotus (Mormyridae; Merron 1993). This is puzzling because the electric organ discharges of known Marcusenius species are pulses of a duration (<1 ms) too short to be detected by the catfishes' low-frequency electroreceptive system (optimum sensitivity, 10-30 Hz; Peters and Bretschneider 1981). Following the recent discovery that M. macrolepidotus males emit discharges lasting approximately ten times longer than those of females (Kramer 1997a), we determined behavioral thresholds for discharges of both sexes, using synthetic playbacks of field-recorded discharges. C. gariepinus detected M. macrolepidotus male discharges down to a field gradient of 103 μV (peak-to-peak)/cm and up to a distance of 1.5 m under natural field conditions. In contrast, thresholds for female discharges were not reached with our setup, and we presume the bulldogs eaten by catfish are predominantly male.

  4. The probability of probability and research truths.

    PubMed

    Fatovich, Daniel M; Phillips, Michael

    2017-04-01

    The foundation of much medical research rests on the statistical significance of the P-value, but we have fallen prey to the seductive certainty of significance. Other scientific disciplines work to a different standard. This may partly explain why medical reversal is an increasing phenomenon, whereby new studies (based on the 0.05 standard) overturn previous significant findings. This has generated a crisis in the rigour of evidence-based medicine, as many people erroneously believe that a P < 0.05 means the treatment effect is clinically important. However, statistics are not facts about the world. Nor should they be based on an arbitrary threshold that arose for historical reasons. This arbitrary threshold encourages an unthinking automatic response that contributes to industry's influence on medical research. Examples from emergency medicine practice illustrate these themes. Study replication needs to be valued as much as discovery. Careful and thoughtful unbiased thinking about the results we do have is undervalued. © 2017 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  5. Retrospective analysis of natural products provides insights for future discovery trends.

    PubMed

    Pye, Cameron R; Bertin, Matthew J; Lokey, R Scott; Gerwick, William H; Linington, Roger G

    2017-05-30

    Understanding of the capacity of the natural world to produce secondary metabolites is important to a broad range of fields, including drug discovery, ecology, biosynthesis, and chemical biology, among others. Both the absolute number and the rate of discovery of natural products have increased significantly in recent years. However, there is a perception and concern that the fundamental novelty of these discoveries is decreasing relative to previously known natural products. This study presents a quantitative examination of the field from the perspective of both number of compounds and compound novelty using a dataset of all published microbial and marine-derived natural products. This analysis aimed to explore a number of key questions, such as how the rate of discovery of new natural products has changed over the past decades, how the average natural product structural novelty has changed as a function of time, whether exploring novel taxonomic space affords an advantage in terms of novel compound discovery, and whether it is possible to estimate how close we are to having described all of the chemical space covered by natural products. Our analyses demonstrate that most natural products being published today bear structural similarity to previously published compounds, and that the range of scaffolds readily accessible from nature is limited. However, the analysis also shows that the field continues to discover appreciable numbers of natural products with no structural precedent. Together, these results suggest that the development of innovative discovery methods will continue to yield compounds with unique structural and biological properties.

  6. Study on system dynamics of evolutionary mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Chengling; Guo, Xiaoqian; Chen, Fang

    2008-11-01

    The Mix-game model is adapted from the agent-based minority game (MG) model, which is used to simulate real financial markets. Unlike MG, there are two groups of agents in Mix-game: Group 1 plays a majority game and Group 2 plays a minority game. The two groups have different bounded abilities to process historical information and to track their own performance. In this paper, we modify the Mix-game model by giving agents the ability to evolve: if an agent's winning rate falls below a threshold, it copies the best strategies held by other agents, and agents repeat this evolution at fixed time intervals. Through simulations this paper finds: (1) the average winning rates of agents in Group 1 and the mean volatilities increase as the threshold of Group 1 increases; (2) the average winning rates of both groups decrease, but the mean volatility of the system increases, as the threshold of Group 2 increases; (3) the thresholds of Group 2 have a greater impact on system dynamics than the thresholds of Group 1; (4) the characteristics of the system dynamics under different time intervals of strategy change are qualitatively similar but differ quantitatively; (5) as the time interval of strategy change increases from 1 to 20, the system becomes more and more stable and the performance of agents in both groups also improves.
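The copy-the-best evolution rule can be sketched in a few lines; the agent strategies, winning rates, and threshold below are illustrative assumptions, not values from the paper.

```python
def evolve(strategies, win_rates, threshold):
    """One evolution step of the modified Mix-game sketched above:
    every agent whose winning rate falls below `threshold` copies the
    strategy of the current best-performing agent."""
    best = max(range(len(win_rates)), key=win_rates.__getitem__)
    return [strategies[best] if w < threshold else s
            for s, w in zip(strategies, win_rates)]

strategies = ["A", "B", "C", "D"]
win_rates = [0.55, 0.38, 0.61, 0.42]
new = evolve(strategies, win_rates, threshold=0.45)
# new == ["A", "C", "C", "C"]
```

Raising the threshold forces more agents to imitate the leader, which is the mechanism behind the volatility effects reported in findings (1) and (2).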

  7. The mutation-drift balance in spatially structured populations.

    PubMed

    Schneider, David M; Martins, Ayana B; de Aguiar, Marcus A M

    2016-08-07

    In finite populations the action of neutral mutations is balanced by genetic drift, leading to a stationary distribution of alleles that displays a transition between two different behaviors. For small mutation rates most individuals will carry the same allele at equilibrium, whereas for high mutation rates the alleles will be randomly distributed, with frequencies close to one half for a biallelic gene. For well-mixed haploid populations the mutation threshold is μc = 1/(2N), where N is the population size. In this paper we study how spatial structure affects this mutation threshold. Specifically, we study the stationary allele distribution for populations placed on regular networks where connected nodes represent potential mating partners. We show that the mutation threshold is sensitive to spatial structure only if the number of potential mates is very small. In this limit, the mutation threshold decreases substantially, increasing the diversity of the population at considerably lower mutation rates. Defining kc as the degree of the network for which the mutation threshold drops to half of its value in well-mixed populations, we show that kc grows slowly as a function of the population size, following a power law. Our calculations and simulations are based on the Moran model and on a mapping between the Moran model with mutations and the voter model with opinion makers. Copyright © 2016 Elsevier Ltd. All rights reserved.
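The well-mixed threshold μc = 1/(2N) can be stated directly; the helper name is ours, and the values merely illustrate how the threshold shrinks as the population grows.

```python
def mutation_threshold(n_individuals):
    """Critical mutation rate for a well-mixed haploid population of
    size N in the Moran model: mu_c = 1/(2N). Below mu_c, most
    individuals carry the same allele at equilibrium; above it,
    allele frequencies approach one half for a biallelic gene."""
    return 1.0 / (2 * n_individuals)

# The threshold shrinks as populations grow:
# N = 100    -> mu_c = 0.005
# N = 10000  -> mu_c = 5e-05
```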

  8. Accelerating the Rate of Astronomical Discovery

    NASA Astrophysics Data System (ADS)

    Norris, Ray P.; Ruggles, Clive L. N.

    2010-05-01

    Special Session 5 on Accelerating the Rate of Astronomical Discovery addressed a range of potential limits to progress (paradigmatic, technological, organisational, and political), examining each issue from both modern and historical perspectives and drawing lessons to guide future progress. A number of issues were identified which potentially regulate the flow of discoveries, such as the balance between large strongly-focussed projects and instruments, designed to answer the most fundamental questions confronting us, and the need to maintain a creative environment with room for unorthodox thinkers and bold, high-risk projects. Also important is the need to maintain historical and cultural perspectives, and the need to engage the minds of the most brilliant young people on the planet, regardless of their background, ethnicity, gender, or geography.

  9. SpS5: Accelerating the Rate of Astronomical Discovery

    NASA Astrophysics Data System (ADS)

    Norris, Ray P.

    2010-11-01

    Special Session 5 on Accelerating the Rate of Astronomical Discovery addressed a range of potential limits to progress: paradigmatic, technological, organizational, and political. It examined each issue both from modern and historical perspectives, and drew lessons to guide future progress. A number of issues were identified which may regulate the flow of discoveries, such as the balance between large strongly-focussed projects and instruments, designed to answer the most fundamental questions confronting us, and the need to maintain a creative environment with room for unorthodox thinkers and bold, high risk, projects. Also important is the need to maintain historical and cultural perspectives, and the need to engage the minds of the most brilliant young people on the planet, regardless of their background, ethnicity, gender, or geography.

  10. Climatic shocks associate with innovation in science and technology.

    PubMed

    De Dreu, Carsten K W; van Dijk, Mathijs A

    2018-01-01

    Human history is shaped by landmark discoveries in science and technology. However, across both time and space the rate of innovation is erratic: periods of relative inertia alternate with bursts of creative science and rapid cascades of technological innovations. While the origins of the rise and fall in rates of discovery and innovation remain poorly understood, they may reflect adaptive responses to exogenously emerging threats and pressures. Here we examined this possibility by fitting annual rates of scientific discovery and technological innovation to climatic variability and its associated economic pressures and resource scarcity. In time-series data from Europe (1500-1900 CE), we indeed found that rates of innovation are higher during prolonged periods of cold (versus warm) surface temperature and during the presence (versus absence) of volcanic dust veils. This negative temperature-innovation link was confirmed in annual time-series for France, Germany, and the United Kingdom (1901-1965 CE). Combined, across almost 500 years and over 5,000 documented innovations and discoveries, a 0.5°C increase in temperature is associated with a sizable 0.30-0.60 standard deviation decrease in innovation. Results were robust to controlling for fluctuations in population size. Furthermore, and consistent with economic theory and micro-level data on group innovation, path analyses revealed that the relation between harsher climatic conditions from 1500 to 1900 CE and more innovation is mediated by climate-induced economic pressures and resource scarcity.

  11. CO32- concentration and pCO2 thresholds for calcification and dissolution on the Molokai reef flat, Hawaii

    USGS Publications Warehouse

    Yates, K.K.; Halley, R.B.

    2006-01-01

    The severity of the impact of elevated atmospheric pCO2 to coral reef ecosystems depends, in part, on how sea-water pCO2 affects the balance between calcification and dissolution of carbonate sediments. Presently, there are insufficient published data that relate concentrations of pCO2 and CO32- to in situ rates of reef calcification in natural settings to accurately predict the impact of elevated atmospheric pCO2 on calcification and dissolution processes. Rates of net calcification and dissolution, CO32- concentrations, and pCO2 were measured, in situ, on patch reefs, bare sand, and coral rubble on the Molokai reef flat in Hawaii. Rates of calcification ranged from 0.03 to 2.30 mmol CaCO3 m-2 h-1 and dissolution ranged from -0.05 to -3.3 mmol CaCO3 m-2 h-1. Calcification and dissolution varied diurnally with net calcification primarily occurring during the day and net dissolution occurring at night. These data were used to calculate threshold values for pCO2 and CO32- at which rates of calcification and dissolution are equivalent. Results indicate that calcification and dissolution are linearly correlated with both CO32- and pCO2. Threshold pCO2 and CO32- values for individual substrate types showed considerable variation. The average pCO2 threshold value for all substrate types was 654 ± 195 μatm and ranged from 467 to 1003 μatm. The average CO32- threshold value was 152 ± 24 μmol kg-1, ranging from 113 to 184 μmol kg-1. Ambient seawater measurements of pCO2 and CO32- indicate that CO32- and pCO2 threshold values for all substrate types were both exceeded, simultaneously, 13% of the time at present day atmospheric pCO2 concentrations. It is predicted that atmospheric pCO2 will exceed the average pCO2 threshold value for calcification and dissolution on the Molokai reef flat by the year 2100.
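Because calcification and dissolution are reported to be linearly correlated with pCO2, a threshold (the pCO2 at which the net rate is zero) can be sketched as the root of a least-squares line; the data points below are invented for illustration.

```python
def zero_crossing_threshold(x, y):
    """Fit a least-squares line y = a + b*x through (pCO2, net rate)
    data and return the pCO2 at which the net rate is zero, i.e. where
    calcification balances dissolution."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return -a / b  # root of a + b*x = 0

# Made-up net calcification rates declining with pCO2
pco2 = [300, 400, 500, 600, 700, 800]     # uatm
rate = [1.4, 1.0, 0.6, 0.2, -0.2, -0.6]   # mmol CaCO3 m-2 h-1
threshold = zero_crossing_threshold(pco2, rate)  # ≈ 650 uatm
```

The same crossing-point calculation, applied per substrate type, yields the spread of threshold values reported above.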

  12. Innovative Methodology in the Discovery of Novel Drug Targets in the Free-Living Amoebae

    PubMed

    Baig, Abdul Mannan

    2018-04-25

    Despite advances in drug discovery and modifications of chemotherapeutic regimens, human infections caused by free-living amoebae (FLA) have high mortality rates (~95%). The FLA that cause fatal human cerebral infections include Naegleria fowleri, Balamuthia mandrillaris and Acanthamoeba spp. Novel drug-target discovery remains the only viable option to tackle these central nervous system (CNS) infections and lower the mortality rates caused by the FLA. Of these FLA, N. fowleri causes primary amoebic meningoencephalitis (PAM), while A. castellanii and B. mandrillaris are known to cause granulomatous amoebic encephalitis (GAE). The infections caused by the FLA have been treated with drugs like rifampin, fluconazole, amphotericin-B and miltefosine. Miltefosine is an anti-leishmanial agent and an experimental anti-cancer drug. With only rare instances of success, these drugs have remained unable to lower the mortality rates of the cerebral infections caused by FLA. Recently, with the help of bioinformatic computational tools and the published genomic data of the FLA, the discovery of newer drug targets has become possible. These cellular targets are proteins that are either unique to the FLA or shared between humans and these unicellular eukaryotes. The latter group of proteins has been shown to be targeted by some FDA-approved drugs prescribed in non-infectious diseases. This review outlines the bioinformatic methodologies that can be used in the discovery of such novel drug targets, their past evaluation in in-vitro assays, and the translational value of such target discoveries in human diseases caused by FLA. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  13. Wind tunnel simulation of Martian sand storms

    NASA Technical Reports Server (NTRS)

    Greeley, R.

    1980-01-01

    The physics and geological relationships of particles driven by the wind under near-Martian conditions were examined in the Martian Surface Wind Tunnel. Emphasis was placed on aeolian activity as a planetary process. Threshold speeds, rates of erosion, trajectories of windblown particles, and flow fields over various landforms were among the factors considered. Results of experiments on particle thresholds, rates of erosion, and the effects of electrostatics on particles in the aeolian environment are presented.

  14. Salivary Cortisol and Cold Pain Sensitivity in Female Twins

    PubMed Central

    Godfrey, Kathryn M; Strachan, Eric; Dansie, Elizabeth; Crofford, Leslie J; Buchwald, Dedra; Goldberg, Jack; Poeschla, Brian; Succop, Annemarie; Noonan, Carolyn; Afari, Niloofar

    2013-01-01

    Background There is a dearth of knowledge about the link between cortisol and pain sensitivity. Purpose We examined the association of salivary cortisol with indices of cold pain sensitivity in 198 female twins and explored the role of familial confounding. Methods Three-day saliva samples were collected for cortisol levels and a cold pressor test was used to collect pain ratings and time to threshold and tolerance. Linear regression modeling with generalized estimating equations examined the overall and within-pair associations. Results Lower diurnal variation of cortisol was associated with higher pain ratings at threshold (p = 0.02) and tolerance (p < 0.01). The relationship of diurnal variation with pain ratings at threshold and tolerance was minimally influenced by familial factors (i.e., genetics and common environment). Conclusions Understanding the genetic and non-genetic mechanisms underlying the link between HPA axis dysregulation and pain sensitivity may help to prevent chronic pain development and maintenance. PMID:23955075

  15. Study of communications data compression methods

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1978-01-01

    A simple monochrome conditional replenishment system was extended to higher compression and to higher motion levels, by incorporating spatially adaptive quantizers and field repeating. Conditional replenishment combines intraframe and interframe compression, and both areas are investigated. The gain of conditional replenishment depends on the fraction of the image changing, since only changed parts of the image need to be transmitted. If the transmission rate is set so that only one fourth of the image can be transmitted in each field, greater change fractions will overload the system. A computer simulation was prepared which incorporated (1) field repeat of changes, (2) a variable change threshold, (3) frame repeat for high change, and (4) two mode, variable rate Hadamard intraframe quantizers. The field repeat gives 2:1 compression in moving areas without noticeable degradation. Variable change threshold allows some flexibility in dealing with varying change rates, but the threshold variation must be limited for acceptable performance.
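    The core conditional-replenishment logic described above, transmitting only changed samples past a change threshold and falling back to frame repeat when the change fraction overloads the channel, can be sketched as follows (1-D "frames"; all values are hypothetical):

```python
# Minimal sketch of conditional replenishment with a change threshold and a
# frame-repeat fallback for overload.
def replenish(prev, curr, change_threshold=2, capacity_fraction=0.25):
    # find samples that changed by more than the threshold
    changed = [i for i, (a, b) in enumerate(zip(prev, curr))
               if abs(a - b) > change_threshold]
    budget = int(len(curr) * capacity_fraction)   # channel capacity per field
    if len(changed) > budget:
        return list(prev), "frame-repeat"         # too much motion: repeat frame
    out = list(prev)
    for i in changed:
        out[i] = curr[i]                          # transmit only changed samples
    return out, "replenished"

low_motion, mode_low = replenish([10] * 8, [10, 10, 15, 10, 10, 10, 10, 10])
high_motion, mode_high = replenish([10] * 8, [20] * 8)
```

With a quarter-capacity budget, the single-sample change is transmitted, while the whole-frame change triggers the frame-repeat fallback, mirroring the overload behavior the abstract describes.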

  16. Thresholds and tolerance of physical pain among young adults who self-injure

    PubMed Central

    McCoy, Katrina; Fremouw, William; McNeil, Daniel W

    2010-01-01

    Prevalence rates of nonsuicidal self-injury among college students range from 17% to 38%. Research indicates that individuals with borderline personality disorder who self-injure sometimes report an absence of pain during self-injury. Furthermore, self-injury in the absence of pain has been associated with more frequent suicide attempts. The present study examined pain thresholds and tolerance among 44 college students (11 who engaged in self-injury and 33 who did not). Pain thresholds and tolerance were measured using an algometer pressure device that was used to produce pain in previous laboratory research. Participants who engaged in self-injury had a higher pain tolerance than those who did not. In addition, participants who engaged in self-injury rated the pain as less intense than participants who did not. ANCOVAs revealed that depression was associated with pain rating and pain tolerance. PMID:21165371

  17. Erosive Burning Study Utilizing Ultrasonic Measurement Techniques

    NASA Technical Reports Server (NTRS)

    Furfaro, James A.

    2003-01-01

    A 6-segment subscale motor was developed to generate a range of internal environments from which multiple propellants could be characterized for erosive burning. The motor test bed was designed to provide a high Mach number, high mass flux environment. Propellant regression rates were monitored for each segment utilizing ultrasonic measurement techniques. These data were obtained for three propellants, RSRM, ETM-03, and Castor® IVA, which span two propellant types, PBAN (polybutadiene acrylonitrile) and HTPB (hydroxyl-terminated polybutadiene). The characterization of these propellants indicates a remarkably similar erosive burning response to the induced flow environment. Propellant burn rates for each type had a conventional response with respect to pressure up to a bulk flow velocity threshold. Each propellant, however, had a unique threshold at which it would experience an increase in observed propellant burn rate. Above the observed threshold, each propellant again demonstrated a similar enhanced burn rate response corresponding to the local flow environment.
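    The described behavior can be summarized as a piecewise burn-rate law: a conventional pressure response (often modeled as r = a·P^n) below a bulk flow velocity threshold, with an erosive enhancement above it. All constants below are hypothetical, not fitted values from the test program:

```python
# Piecewise burn-rate sketch: conventional pressure exponent law below the
# velocity threshold, linear erosive enhancement above it (constants invented).
def burn_rate(pressure, velocity, a=0.05, n=0.35, v_threshold=200.0, k=0.002):
    base = a * pressure ** n              # conventional r = a * P^n response
    if velocity <= v_threshold:
        return base
    return base * (1.0 + k * (velocity - v_threshold))   # erosive enhancement

below_a = burn_rate(5.0, 100.0)
below_b = burn_rate(5.0, 150.0)
above = burn_rate(5.0, 400.0)
```

Below the threshold the rate depends on pressure alone; above it, the local flow velocity enhances the rate, matching the qualitative response reported for all three propellants.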

  18. Epidemic spreading between two coupled subpopulations with inner structures

    NASA Astrophysics Data System (ADS)

    Ruan, Zhongyuan; Tang, Ming; Gu, Changgui; Xu, Jinshan

    2017-10-01

    The structure of the underlying contact network and the mobility of agents are two decisive factors for epidemic spreading in reality. Here, we study a model consisting of two coupled subpopulations with intra-structures that emphasizes both the contact structure and the recurrent mobility pattern of individuals simultaneously. We show that the coupling of the two subpopulations (via interconnections between them and round trips of individuals) makes the epidemic thresholds of the two subnetworks identical. Moreover, we find that the interconnection probability between the two subpopulations and the travel rate are important factors for the spreading dynamics. In particular, the epidemic threshold in each subpopulation decreases monotonically as a function of the interconnection probability, which enhances the risk of an epidemic, while the epidemic threshold displays a non-monotonic variation as the travel rate increases. Moreover, the asymptotic infected density as a function of travel rate in each subpopulation behaves differently depending on the interconnection probability.
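    A quenched mean-field sketch illustrates the trend in interconnection probability: for SIS dynamics on a network with adjacency matrix A, the epidemic threshold is beta_c = gamma / lambda_max(A), so adding interconnections raises the largest eigenvalue and lowers the threshold. The block sizes and probabilities below are illustrative, and this is a simplified stand-in for the paper's metapopulation model:

```python
import numpy as np

def epidemic_threshold(q, n=100, p=0.1, gamma=1.0, seed=0):
    # Two random blocks of size n: intra-block edges with probability p,
    # inter-block edges with probability q.
    rng = np.random.default_rng(seed)
    A = np.zeros((2 * n, 2 * n))
    for i in range(2 * n):
        for j in range(i + 1, 2 * n):
            same_block = (i < n) == (j < n)
            if rng.random() < (p if same_block else q):
                A[i, j] = A[j, i] = 1.0
    # quenched mean-field SIS threshold: beta_c = gamma / lambda_max(A)
    return gamma / np.max(np.linalg.eigvalsh(A))

isolated = epidemic_threshold(0.0)       # no interconnections
coupled = epidemic_threshold(0.05)       # interconnected subpopulations
```

Increasing q adds cross links, raising lambda_max and lowering beta_c, which reproduces the monotonic decrease of the threshold with interconnection probability reported above.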

  19. Radiation damage limits to XPCS studies of protein dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vodnala, Preeti, E-mail: preeti.vodnala@gmail.com; Karunaratne, Nuwan; Lurio, Laurence

    2016-07-27

    The limitations to x-ray photon correlation spectroscopy (XPCS) imposed by radiation damage have been evaluated for suspensions of alpha crystallin. We find that the threshold for radiation damage to the measured protein diffusion rate is significantly lower than the threshold for damage to the protein structure. We provide damage thresholds beyond which the measured diffusion coefficients are modified, using both XPCS and dynamic light scattering (DLS).

  20. Ultrashort pulsed laser (USPL) application in dentistry: basic investigations of ablation rates and thresholds on oral hard tissue and restorative materials.

    PubMed

    Schelle, Florian; Polz, Sebastian; Haloui, Hatim; Braun, Andreas; Dehn, Claudia; Frentzen, Matthias; Meister, Jörg

    2014-11-01

    Modern ultrashort pulse lasers with scanning systems provide a huge set of parameters affecting the suitability for dental applications. The present study investigates thresholds and ablation rates of oral hard tissues and restorative materials with a view towards a clinical application system. The functional system consists of a 10 W Nd:YVO4 laser emitting pulses with a duration of 8 ps at 1,064 nm. Measurements were performed on dentin, enamel, ceramic, composite, and mammoth ivory at a repetition rate of 500 kHz. By employing a scanning system, square-shaped cavities with an edge length of 1 mm were created. Ablation threshold and rate measurements were assessed by variation of the applied fluence. Examinations were carried out employing a scanning electron microscope and an optical profilometer. Irradiation time was recorded by the scanner software in order to calculate the overall ablated volume per time. First high-power ablation rate measurements were performed employing a laser source with up to 50 W. Threshold values in the range of 0.45 J/cm² (composite) to 1.54 J/cm² (enamel) were observed. Differences between any two materials are statistically significant (p < 0.05). Preparation speeds up to 37.53 mm³/min (composite) were achieved with the 10 W laser source and differed statistically significantly between any two materials (p < 0.05), with the exception of dentin and mammoth ivory (p > 0.05). By employing the 50 W laser source, increased rates up to ∼50 mm³/min for dentin were obtained. The results indicate that modern USPL systems provide sufficient ablation rates to be seen as a promising technology for dental applications.
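    Ablation above threshold in the ultrashort-pulse regime is commonly modeled with a logarithmic law, depth per pulse d = δ·ln(F/F_th) for F > F_th. The sketch below uses the composite (0.45 J/cm²) and enamel (1.54 J/cm²) thresholds reported above; δ and the test fluences are hypothetical, and this law is a common model, not necessarily the study's fit:

```python
import math

# Logarithmic ablation law: zero removal below threshold, depth growing with
# ln(F / F_th) above it (delta is a hypothetical penetration-depth constant).
def depth_per_pulse(fluence, threshold, delta=1.0):
    if fluence <= threshold:
        return 0.0
    return delta * math.log(fluence / threshold)

below = depth_per_pulse(0.40, 0.45)       # under the composite threshold
composite = depth_per_pulse(3.0, 0.45)    # composite threshold from the study
enamel = depth_per_pulse(3.0, 1.54)       # enamel threshold from the study
```

At equal fluence the material with the lower threshold ablates deeper per pulse, consistent with composite showing the highest preparation speed and enamel the highest threshold.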

  1. Analysis of parenchymal patterns using conspicuous spatial frequency features in mammograms applied to the BI-RADS density rating scheme

    NASA Astrophysics Data System (ADS)

    Perconti, Philip; Loew, Murray

    2006-03-01

    Automatic classification of the density of breast parenchyma is shown using a measure that is correlated with human observer performance, and compared against the BI-RADS density rating. Increasingly popular in the United States, the Breast Imaging Reporting and Data System (BI-RADS) is used to draw attention to the increased screening difficulty associated with greater breast density; however, the BI-RADS rating scheme is subjective and is not intended as an objective measure of breast density. So, while popular, BI-RADS does not define density classes using a standardized measure, which leads to increased variability among observers. The adaptive thresholding technique is a more quantitative approach for assessing percentage breast density, but considerable reader interaction is required. We calculate an objective density rating derived from a measure of local feature salience. Previously, this measure was shown to correlate well with radiologists' localization and discrimination of true positive and true negative regions-of-interest. Using conspicuous spatial frequency features, an objective density rating is obtained and correlated with adaptive thresholding and the subjectively ascertained BI-RADS density ratings. Using 100 cases obtained from the University of South Florida's DDSM database, we show that an automated breast density measure can be derived that is correlated with the interactive thresholding method for continuous percentage breast density, but not with the BI-RADS density rating categories for the selected cases. Comparison between interactive thresholding and the new salience percentage density resulted in a Pearson correlation of 76.7%. Using a four-category scale equivalent to the BI-RADS density categories, a Spearman correlation coefficient of 79.8% was found.
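    The two agreement measures used above can be sketched directly: Pearson correlation for continuous percent-density scores and Spearman (rank) correlation for ordinal category data. The paired scores below are hypothetical:

```python
import numpy as np

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    # rank-transform both variables, then take Pearson on the ranks
    # (this sketch ignores ties)
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

# hypothetical percent-density scores from two methods
thresholding = np.array([10.0, 25.0, 40.0, 55.0, 70.0])
salience = np.array([12.0, 22.0, 45.0, 50.0, 75.0])

r = pearson(thresholding, salience)
rho = spearman(thresholding, salience)
```

Because the two toy score sets are perfectly rank-concordant, Spearman is 1.0 while Pearson is slightly lower, illustrating why the study reports both statistics for the continuous and categorical comparisons.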

  2. Ia Afferent input alters the recruitment thresholds and firing rates of single human motor units.

    PubMed

    Grande, G; Cafarelli, E

    2003-06-01

    Vibration of the patellar tendon recruits motor units in the knee extensors via excitation of muscle spindles and subsequent Ia afferent input to the alpha-motoneuron pool. Our first purpose was to determine if the recruitment threshold and firing rate of the same motor unit differed when recruited involuntarily via reflex or voluntarily via descending spinal pathways. Although Ia input is excitatory to the alpha-motoneuron pool, it has also been shown paradoxically to inhibit itself. Our second purpose was to determine if vibration of the patellar tendon during a voluntary knee extension causes a change in the firing rate of already recruited motor units. In the first protocol, 10 subjects voluntarily reproduced the same isometric force profile of the knee extensors that was elicited by vibration of the patellar tendon. Single motor unit recordings from the vastus lateralis (VL) were obtained with tungsten microelectrodes and unitary behaviour was examined during both reflex and voluntary knee extensions. Recordings from 135 single motor units showed that both recruitment thresholds and firing rates were lower during reflex contractions. In the second protocol, 7 subjects maintained a voluntary knee extension at 30 N for approximately 40-45 s. Three bursts of patellar tendon vibration were superimposed at regular intervals throughout the contraction and changes in the firing rate of already recruited motor units were examined. A total of 35 motor units were recorded and each burst of superimposed vibration caused a momentary reduction in the firing rates and recruitment of additional units. Our data provide evidence that Ia input modulates the recruitment thresholds and firing rates of motor units providing more flexibility within the neuromuscular system to grade force at low levels of force production.

  3. Application of Johnson et al.'s speciation threshold model to apparent colonization times of island biotas.

    PubMed

    Ricklefs, Robert E; Bermingham, Eldredge

    2004-08-01

    Understanding patterns of diversity can be furthered by analysis of the dynamics of colonization, speciation, and extinction on islands using historical information provided by molecular phylogeography. The land birds of the Lesser Antilles are one of the most thoroughly described regional faunas in this context. In an analysis of colonization times, Ricklefs and Bermingham (2001) found that the cumulative distribution of lineages with respect to increasing time since colonization exhibits a striking change in slope at a genetic distance of about 2% mitochondrial DNA sequence divergence (about one million years). They further showed how this heterogeneity could be explained by either an abrupt increase in colonization rates or a mass extinction event. Cherry et al. (2002), referring to a model developed by Johnson et al. (2000), argued instead that the pattern resulted from a speciation threshold for reproductive isolation of island populations from their continental source populations. Prior to this threshold, genetic divergence is slowed by migration from the source, and species of varying age accumulate at a low genetic distance. After the threshold is reached, source and island populations diverge more rapidly, creating heterogeneity in the distribution of apparent ages of island taxa. We simulated Johnson et al.'s speciation-threshold model, incorporating genetic divergence at rate k and fixation at rate M of genes that have migrated between the source and the island population. Fixation resets the divergence clock to zero. The speciation-threshold model fits the distribution of divergence times of Lesser Antillean birds well with biologically plausible parameter estimates. Application of the model to the Hawaiian avifauna, which does not exhibit marked heterogeneity of genetic divergence, and the West Indian herpetofauna, which does, required unreasonably high migration-fixation rates, several orders of magnitude greater than the colonization rate. However, the plausibility of the speciation-threshold model for Lesser Antillean birds emphasizes the importance of further investigation of historical biogeography on a regional scale for whole biotas, as well as the migration of genes between populations on long time scales and the achievement of reproductive isolation.
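    The simulated dynamic, divergence accruing at rate k while migrant-gene fixation at rate M resets the clock below the threshold, can be sketched as follows (parameters illustrative, not the fitted estimates):

```python
import random

def divergence_after(t_total, k=0.02, M=0.5, threshold=0.02, dt=0.01, seed=1):
    # Divergence grows at rate k; while below the threshold, a migrant gene
    # may fix (rate M) and reset the divergence clock to zero. Once past the
    # threshold the population escapes and diverges freely.
    random.seed(seed)
    d = 0.0
    for _ in range(int(round(t_total / dt))):
        d += k * dt
        if d < threshold and random.random() < M * dt:
            d = 0.0   # migrant gene fixes; clock resets
    return d

no_migration = divergence_after(5.0, M=0.0)    # clock never resets: d = k * t
with_migration = divergence_after(5.0, M=5.0)  # resets delay the escape
```

With no migration, divergence is simply k·t; with migration, repeated resets hold lineages near zero divergence until they cross the threshold, producing the pile-up of young apparent ages described above.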

  4. Stress/strain changes and triggered seismicity at The Geysers, California

    USGS Publications Warehouse

    Gomberg, J.; Davis, S.

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedance of some constant threshold but also the requirements for generating instabilities.

  5. Stress/strain changes and triggered seismicity at The Geysers, California

    NASA Astrophysics Data System (ADS)

    Gomberg, Joan; Davis, Scott

    1996-01-01

    The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedance of some constant threshold but also the requirements for generating instabilities.

  6. Tertiary oil discoveries whet explorer interest off Tunisia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, M.

    Prospects for increased Tertiary oil production in the southern Mediterranean have brightened with discoveries off Tunisia, but more evaluation is needed before commercial potential is known. Several groups of U.S. and European companies have tested oil in the relatively unexplored Miocene in the Gulf of Hammamet. These include groups operated by Buttes Resources Tunisia, Elf-Aquitaine Tunisia, and Shell Tunirex. Oil test rates of 1,790 to 1,800 bpd have been reported by the Buttes group in 2 Gulf of Hammamet wells. The initial discovery probably was the first Tertiary oil ever tested in that part of the Mediterranean. The discoveries have helped boost exploratory interest in the northern waters of Tunisia and northeast toward Sicily. There are reports more U.S. and European companies are requesting exploration permits from the government of Tunisia. Companies with permits are planning new exploration for 1978. Probably the most significant discovery to date has been the Buttes group's 1 Jasmine (2 BGH). The group tested high-quality 39.5°-gravity oil at a rate of 1,790 bpd. Test flow was from the Sabri Sand at 6,490 to 6,590 ft. The well was drilled in 458 ft of water.

  7. High-throughput discovery of rare human nucleotide polymorphisms by Ecotilling

    PubMed Central

    Till, Bradley J.; Zerr, Troy; Bowers, Elisabeth; Greene, Elizabeth A.; Comai, Luca; Henikoff, Steven

    2006-01-01

    Human individuals differ from one another at only ∼0.1% of nucleotide positions, but these single nucleotide differences account for most heritable phenotypic variation. Large-scale efforts to discover and genotype human variation have been limited to common polymorphisms. However, these efforts overlook rare nucleotide changes that may contribute to phenotypic diversity and genetic disorders, including cancer. Thus, there is an increasing need for high-throughput methods to robustly detect rare nucleotide differences. Toward this end, we have adapted the mismatch discovery method known as Ecotilling for the discovery of human single nucleotide polymorphisms. To increase throughput and reduce costs, we developed a universal primer strategy and implemented algorithms for automated band detection. Ecotilling was validated by screening 90 human DNA samples for nucleotide changes in 5 gene targets and by comparing results to public resequencing data. To increase throughput for discovery of rare alleles, we pooled samples 8-fold and found Ecotilling to be efficient relative to resequencing, with a false negative rate of 5% and a false discovery rate of 4%. We identified 28 new rare alleles, including some that are predicted to damage protein function. The detection of rare damaging mutations has implications for models of human disease. PMID:16893952

  8. Missing value imputation strategies for metabolomics data.

    PubMed

    Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral

    2015-12-01

    The origin of missing values can be caused by different reasons, and depending on these origins missing values should be considered and dealt with in different ways. In this research, four methods of imputation have been compared with respect to their effects on the normality and variance of data, on statistical significance, and on the approximation of a suitable threshold to accept missing data as truly missing. Additionally, the effects of different strategies for controlling the familywise error rate or false discovery rate, and how they work with the different strategies for missing value imputation, have been evaluated. Missing values were found to affect the normality and variance of data, and k-means nearest neighbour imputation was the best method tested for restoring this. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as low as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area", and therefore a strategy has been proposed that provides a balance between the optimal imputation strategy, k-means nearest neighbour, and the best approximation of positioning real zeros.
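    The Bonferroni familywise-error-rate correction that performed best here is simple to state: each of m tests is judged at level alpha/m. A minimal sketch with illustrative p-values:

```python
# Bonferroni correction: control the familywise error rate at alpha by
# testing each of the m p-values against alpha / m.
def bonferroni_significant(p_values, alpha=0.05):
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# illustrative p-values; with m = 4 the per-test cutoff is 0.05 / 4 = 0.0125
flags = bonferroni_significant([0.001, 0.009, 0.012, 0.049])
```

The last p-value (0.049) would pass an uncorrected 0.05 cutoff but fails the corrected one, which is how Bonferroni trades a stricter per-test level for fewer false positives.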

  9. Enhanced storage capacity with errors in scale-free Hopfield neural networks: An analytical study.

    PubMed

    Kim, Do-Hyun; Park, Jinha; Kahng, Byungnam

    2017-01-01

    The Hopfield model is a pioneering neural network model with associative memory retrieval. The analytical solution of the model in the mean-field limit revealed that memories can be retrieved without any error up to a finite storage capacity of O(N), where N is the system size. Beyond the threshold, they are completely lost. Since the introduction of the Hopfield model, the theory of neural networks has been further developed toward realistic neural networks using analog neurons, spiking neurons, etc. Nevertheless, those advances are based on fully connected networks, which are inconsistent with the recent experimental discovery that the number of connections of each neuron seems to be heterogeneous, following a heavy-tailed distribution. Motivated by this observation, we consider the Hopfield model on scale-free networks and obtain a pattern of associative memory retrieval different from that on the fully connected network: the storage capacity becomes tremendously enhanced, but with some error in the memory retrieval, which appears as the heterogeneity of the connections is increased. Moreover, the error rates obtained on several real neural networks are indeed similar to those on scale-free model networks.
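    The classical fully connected baseline the abstract starts from can be sketched with Hebbian storage and iterated sign updates; the scale-free variant changes only the connectivity pattern. Sizes and the corruption level below are illustrative:

```python
import numpy as np

def train(patterns):
    # Hebbian weights: W = (1/N) * sum over patterns of outer products
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-coupling
    return W

def recall(W, state, steps=10):
    # synchronous sign updates (ties broken toward +1)
    state = state.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # 3 memories, N = 64
W = train(patterns)
probe = patterns[0].copy()
probe[:6] *= -1                                    # corrupt 6 of 64 bits
overlap = recall(W, probe) @ patterns[0] / 64.0    # 1.0 means perfect recall
```

With the load well below the classical capacity (about 0.138·N patterns), a moderately corrupted probe relaxes back to the stored memory; on heterogeneous networks the same dynamics yields the higher capacity with retrieval errors described above.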

  10. A masking level difference due to harmonicity.

    PubMed

    Treurniet, W C; Boucher, D R

    2001-01-01

    The role of harmonicity in masking was studied by comparing the effect of harmonic and inharmonic maskers on the masked thresholds of noise probes using a three-alternative, forced-choice method. Harmonic maskers were created by selecting sets of partials from a harmonic series with an 88-Hz fundamental and 45 consecutive partials. Inharmonic maskers differed in that the partial frequencies were perturbed to nearby values that were not integer multiples of the fundamental frequency. Average simultaneous-masked thresholds were as much as 10 dB lower with the harmonic masker than with the inharmonic masker, and this difference was unaffected by masker level. It was reduced or eliminated when the harmonic partials were separated by more than 176 Hz, suggesting that the effect is related to the extent to which the harmonics are resolved by auditory filters. The threshold difference was not observed in a forward-masking experiment. Finally, an across-channel mechanism was implicated when the threshold difference was found between a harmonic masker flanked by harmonic bands and a harmonic masker flanked by inharmonic bands. A model developed to explain the observed difference recognizes that an auditory filter output envelope is modulated when the filter passes two or more sinusoids, and that the modulation rate depends on the differences among the input frequencies. For a harmonic masker, the frequency differences of adjacent partials are identical, and all auditory filters have the same dominant modulation rate. For an inharmonic masker, however, the frequency differences are not constant and the envelope modulation rate varies across filters. The model proposes that a lower variability facilitates detection of a probe-induced change in the variability, thus accounting for the masked threshold difference. The model was supported by significantly improved predictions of observed thresholds when the predictor variables included envelope modulation rate variance measured using simulated auditory filters.

  11. 38 CFR 4.16 - Total disability ratings for compensation based on unemployability of the individual.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the U.S. Department of Commerce, Bureau of the Census, as the poverty threshold for one person... income exceeds the poverty threshold. Consideration shall be given in all claims to the nature of the...

  12. 38 CFR 4.16 - Total disability ratings for compensation based on unemployability of the individual.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the U.S. Department of Commerce, Bureau of the Census, as the poverty threshold for one person... income exceeds the poverty threshold. Consideration shall be given in all claims to the nature of the...

  13. 38 CFR 4.16 - Total disability ratings for compensation based on unemployability of the individual.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the U.S. Department of Commerce, Bureau of the Census, as the poverty threshold for one person... income exceeds the poverty threshold. Consideration shall be given in all claims to the nature of the...

  14. The development of biomarkers to reduce attrition rate in drug discovery focused on oncology and central nervous system.

    PubMed

    Safavi, Maliheh; Sabourian, Reyhaneh; Abdollahi, Mohammad

    2016-10-01

    The task of discovery and development of novel therapeutic agents remains an expensive, uncertain, time-consuming, competitive, and inefficient enterprise. Due to a steady increase in the cost and time of drug development and the considerable amount of resources required, a predictive tool is needed for assessing the safety and efficacy of a new chemical entity. This study is focused on the high attrition rate in discovery and development of oncology and central nervous system (CNS) medicines, because the failure rate of these medicines is higher than others. Some approaches valuable in reducing attrition rates are proposed and the judicious use of biomarkers is discussed. Unlike the significant progress made in identifying and characterizing novel mechanisms of disease processes and targeted therapies, the process of novel drug development is associated with an unacceptably high attrition rate. The application of clinically qualified predictive biomarkers holds great promise for further development of therapeutic targets, improved survival, and ultimately personalized medicine sets for patients. Decisions such as candidate selection, development risks, dose ranging, early proof of concept/principle, and patient stratification are based on the measurements of biologically and/or clinically validated biomarkers.

  15. Periodical cicadas: A minimal automaton model

    NASA Astrophysics Data System (ADS)

    de O. Cardozo, Giovano; de A. M. M. Silvestre, Daniel; Colato, Alexandre

    2007-08-01

    The Magicicada spp. life cycles, with their prime-numbered periods and highly synchronized emergence, have defied reasonable scientific explanation since their discovery. During the last decade several models and explanations for this phenomenon appeared in the literature, along with a great deal of discussion. Despite this considerable effort, there is no final conclusion about this long-standing biological problem. Here, we construct a minimal automaton model without predation/parasitism which reproduces some of these aspects. Our results point towards competition between different strains with a limited dispersal threshold as the main factor leading to the emergence of prime-numbered life cycles.

  16. Temperature-dependent change in the nature of glass fracture under electron bombardment

    NASA Astrophysics Data System (ADS)

    Kravchenko, A. A.

    1991-04-01

    We report the experimental discovery of a temperature-dependent change in the nature of glass fracture under low-energy (<10 keV) electron bombardment. This is shown to depend on the transition from the thermal-shock to the thermal-fluctuation mechanism of fracture at the limiting temperature T1 = (Tg - 150) °C. The high-temperature cleavage fracture of K8 and TF1 glasses was studied and the threshold value of the critical power initiating cleavage fracture was determined (for the glasses studied, Θthr = 50-70 W·sec·cm-2).

  17. Interplay between the local information based behavioral responses and the epidemic spreading in complex networks.

    PubMed

    Liu, Can; Xie, Jia-Rong; Chen, Han-Shuang; Zhang, Hai-Feng; Tang, Ming

    2015-10-01

    The spreading of an infectious disease can trigger human behavioral responses to the disease, which in turn play a crucial role in the spreading of the epidemic. In this study, to illustrate the impacts of human behavioral responses, a new class of individuals, S(F), is introduced into the classical susceptible-infected-recovered model. In the model, the S(F) state represents susceptible individuals who take self-initiated protective measures to lower the probability of being infected; a susceptible individual may enter the S(F) state with a response rate when contacting an infectious neighbor. Via the percolation method, theoretical formulas for the epidemic threshold as well as the prevalence of the epidemic are derived. Our finding indicates that, as the response rate increases, the epidemic threshold is enhanced and the prevalence of the epidemic is reduced. The analytical results are also verified by numerical simulations. In addition, we demonstrate that, because the mean-field method neglects dynamic correlations, it yields an incorrect result: the epidemic threshold appears unrelated to the response rate, i.e., the additional S(F) state has no impact on the epidemic threshold.
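
    The qualitative effect of the response rate on prevalence can be sketched with a minimal deterministic compartment model (all rates here are hypothetical, and this well-mixed ODE sketch is not the paper's network percolation analysis): susceptibles move to a protected S_F state on infectious contact, and the final epidemic size shrinks as the response rate grows.

```python
# Minimal deterministic sketch of an SIR model extended with a protected
# susceptible state S_F (hypothetical parameters; the paper itself derives
# results via percolation on networks, not these well-mixed Euler-step ODEs).
def final_epidemic_size(alpha, beta=0.4, gamma=0.1, delta=0.3,
                        dt=0.01, steps=20000):
    # alpha: response rate S -> S_F per infectious contact
    # delta: relative infectivity experienced by protected S_F individuals
    S, SF, I, R = 0.99, 0.0, 0.01, 0.0
    for _ in range(steps):
        inf_S = beta * S * I            # new infections from ordinary susceptibles
        inf_F = delta * beta * SF * I   # new infections from protected susceptibles
        fear = alpha * S * I            # susceptibles adopting protection
        S += -(inf_S + fear) * dt
        SF += (fear - inf_F) * dt
        I += (inf_S + inf_F - gamma * I) * dt
        R += gamma * I * dt
    return R  # final fraction ever infected (epidemic prevalence)

# Prevalence falls as the response rate alpha grows, matching the abstract.
print(final_epidemic_size(0.0) > final_epidemic_size(0.5) > final_epidemic_size(2.0))
```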

  18. Mississippi State University Center for Air Sea Technology. FY93 and FY 94 Research Program in Navy Ocean Modeling and Prediction

    DTIC Science & Technology

    1994-09-30

    relational versus object-oriented DBMS, knowledge discovery, data models, metadata, data filtering, clustering techniques, and synthetic data. A secondary...The first was the investigation of AI/ES applications (knowledge discovery, data mining, and clustering). Here CAST collaborated with Dr. Fred Petry...knowledge discovery system based on clustering techniques; implemented an on-line data browser to the DBMS; completed preliminary efforts to apply object

  19. New Perspectives on How to Discover Drugs from Herbal Medicines: CAM's Outstanding Contribution to Modern Therapeutics.

    PubMed

    Pan, Si-Yuan; Zhou, Shu-Feng; Gao, Si-Hua; Yu, Zhi-Ling; Zhang, Shuo-Feng; Tang, Min-Ke; Sun, Jian-Ning; Ma, Dik-Lung; Han, Yi-Fan; Fong, Wang-Fun; Ko, Kam-Ming

    2013-01-01

    With tens of thousands of plant species on earth, we are endowed with an enormous wealth of medicinal remedies from Mother Nature. Natural products and their derivatives represent more than 50% of all the drugs in modern therapeutics. Because of the low success rate and the huge capital investment needed, the research and development of conventional drugs are very costly and difficult. Over the past few decades, researchers have focused on drug discovery from herbal medicines or botanical sources, an important group of complementary and alternative medicine (CAM) therapies. With a long history of herbal usage for the clinical management of a variety of diseases in indigenous cultures, the success rate of developing a new drug from herbal medicinal preparations should, in theory, be higher than that from chemical synthesis. While the endeavor of drug discovery from herbal medicines is "experience driven," the search for a therapeutically useful synthetic drug is, like "looking for a needle in a haystack," a daunting task. In this paper, we first illustrate various approaches to drug discovery from herbal medicines. Typical examples of successful drug discovery from botanical sources are given. In addition, problems in drug discovery from herbal medicines are described and possible solutions are proposed. Finally, the prospects for drug discovery from herbal medicines in the postgenomic era are considered, with future directions for this area of drug development.

  20. 75 FR 22394 - Combined Notice of Filings No. 2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-28

    ... 21, 2010. Take notice that the Commission has received the following Natural Gas Pipeline Rate and Refund Report filings: Docket Numbers: RP10-539-001. Applicants: Discovery Gas Transmission LLC. Description: Discovery Gas Transmission, LLC submits Substitute First Revised Sheet 225 et al. to FERC Gas...

  1. The ventilatory anaerobic threshold is related to, but is lower than, the critical power, but does not explain exercise tolerance at this workrate.

    PubMed

    Okudan, N; Gökbel, H

    2006-03-01

    The aim of the present study was to investigate the relationships between critical power (CP), maximal aerobic power and the anaerobic threshold, and whether exercise time to exhaustion and work at the CP can be used as an index in the determination of endurance. An incremental maximal cycle exercise test was performed on 30 untrained males aged 18-22 years. Lactate analysis was carried out on capillary blood samples every 2 minutes. From gas exchange parameters, heart rate and lactate values, the ventilatory anaerobic threshold, heart rate deflection point and the onset of blood lactate accumulation were determined. CP was determined with the linear work-time method using 3 loads. The subjects exercised until they could no longer maintain a cadence above 24 rpm at their CP, and exercise time to exhaustion was determined. CP was lower than the power output corresponding to VO2max and higher than the power output corresponding to the anaerobic threshold. CP was correlated with VO2max and the anaerobic threshold. Exercise time to exhaustion and work at CP were not correlated with VO2max or the anaerobic threshold. Because CP correlated with VO2max and the anaerobic threshold, whereas exercise time to exhaustion and work at the CP did not, we conclude that exercise time to exhaustion and work at the CP cannot be used as an index in the determination of endurance.

  2. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d′) in Yes-No and forced-choice tasks

    PubMed Central

    Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
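
    The sensitivity level the methods target (d′ = 1) comes from standard Signal Detection Theory; a textbook computation of d′ from hit and false-alarm rates in a Yes-No task is shown below (this is the classical formula only, not the paper's qYN/qFC adaptive procedures).

```python
from statistics import NormalDist

# Classical SDT sensitivity index for a Yes-No task: d' = z(hit) - z(false alarm),
# where z is the inverse standard-normal CDF. (Textbook formula only; the
# qYN/qFC Bayesian adaptive procedures from the paper are not reproduced here.)
z = NormalDist().inv_cdf

def d_prime(hit_rate, false_alarm_rate):
    return z(hit_rate) - z(false_alarm_rate)

# An unbiased observer at ~69% hits and ~31% false alarms sits near the
# d' = 1 sensitivity level that the qYN threshold targets.
print(round(d_prime(0.69, 0.31), 2))  # → 0.99
```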

  3. Testing efficacy of distance and tree-based methods for DNA barcoding of grasses (Poaceae tribe Poeae) in Australia

    PubMed Central

    Walsh, Neville G.; Cantrill, David J.; Holmes, Gareth D.; Murphy, Daniel J.

    2017-01-01

    In Australia, Poaceae tribe Poeae are represented by 19 genera and 99 species, including economically and environmentally important native and introduced pasture grasses [e.g. Poa (Tussock-grasses) and Lolium (Ryegrasses)]. We used this tribe, which are well characterised in regards to morphological diversity and evolutionary relationships, to test the efficacy of DNA barcoding methods. A reference library was generated that included 93.9% of species in Australia (408 individuals, x¯ = 3.7 individuals per species). Molecular data were generated for official plant barcoding markers (rbcL, matK) and the nuclear ribosomal internal transcribed spacer (ITS) region. We investigated accuracy of specimen identifications using distance- (nearest neighbour, best-close match, and threshold identification) and tree-based (maximum likelihood, Bayesian inference) methods and applied species discovery methods (automatic barcode gap discovery, Poisson tree processes) based on molecular data to assess congruence with recognised species. Across all methods, success rate for specimen identification of genera was high (87.5–99.5%) and of species was low (25.6–44.6%). Distance- and tree-based methods were equally ineffective in providing accurate identifications for specimens to species rank (26.1–44.6% and 25.6–31.3%, respectively). The ITS marker achieved the highest success rate for specimen identification at both generic and species ranks across the majority of methods. For distance-based analyses the best-close match method provided the greatest accuracy for identification of individuals with a high percentage of “correct” (97.6%) and a low percentage of “incorrect” (0.3%) generic identifications, based on the ITS marker. For tribe Poeae, and likely for other grass lineages, sequence data in the standard DNA barcode markers are not variable enough for accurate identification of specimens to species rank. 
For recently diverged grass species similar challenges are encountered in the application of genetic and morphological data to species delimitations, with taxonomic signal limited by extensive infra-specific variation and shared polymorphisms among species in both data types. PMID:29084279

  4. Discovery of a 105 ms X-Ray Pulsar in Kesteven 79: On the Nature of Compact Central Objects in Supernova Remnants

    NASA Astrophysics Data System (ADS)

    Gotthelf, E. V.; Halpern, J. P.; Seward, F. D.

    2005-07-01

    We report the discovery of 105 ms X-ray pulsations from the compact central object (CCO) in the supernova remnant Kes 79 using data acquired with the Newton X-Ray Multi-Mirror Mission (XMM-Newton). Two observations of the pulsar taken 6 days apart yield an upper limit on its spin-down rate of P˙<7×10-14 s s-1 and no evidence for binary orbital motion. The implied energy loss rate is E˙<2×1036 ergs s-1, the surface magnetic field strength is Bp<3×1012 G, and the spin-down age is τ>24 kyr. The latter exceeds the remnant's estimated age, suggesting that the pulsar was born spinning near its current period. The X-ray spectrum of PSR J1852+0040 is best characterized by a blackbody model of temperature kTBB=0.44+/-0.03 keV, radius RBB~0.9 km, and Lbol=3.7×1033 ergs s-1 at d=7.1 kpc. The sinusoidal light curve is modulated with a pulsed fraction of >45%, suggestive of a small hot spot on the surface of the rotating neutron star. The lack of a discernible pulsar wind nebula is consistent with an interpretation of PSR J1852+0040 as a rotation-powered pulsar whose spin-down luminosity falls below the empirical threshold for generating bright wind nebulae, E˙c~4×1036 ergs s-1. The age discrepancy implies that its E˙ has always been below E˙c, perhaps a distinguishing property of the CCOs. Alternatively, the X-ray spectrum of PSR J1852+0040 suggests a low-luminosity anomalous X-ray pulsar (AXP), but the weak inferred Bp field is incompatible with a magnetar theory of its X-ray luminosity. We cannot exclude accretion from a fallback disk. The ordinary spin parameters discovered from PSR J1852+0040 highlight the difficulty that existing theories of isolated neutron stars have in explaining the high luminosities and temperatures of CCO thermal X-ray spectra.
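
    The quoted spin-down age follows from the standard pulsar characteristic-age formula τ = P / (2Ṗ), which assumes a braking index of 3; plugging in the reported period and spin-down limit reproduces the abstract's τ > 24 kyr.

```python
# Characteristic spin-down age tau = P / (2 * Pdot): the standard pulsar
# estimate (braking index n = 3 assumed) behind the quoted tau > 24 kyr.
SEC_PER_YR = 3.156e7   # seconds per year

P = 0.105              # spin period, s
Pdot = 7e-14           # upper limit on spin-down rate, s/s (from the abstract)

tau_kyr = P / (2 * Pdot) / SEC_PER_YR / 1e3
# Since Pdot is an upper limit, tau is a lower limit: tau > ~24 kyr.
print(round(tau_kyr, 1))  # → 23.8
```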

  5. Testing efficacy of distance and tree-based methods for DNA barcoding of grasses (Poaceae tribe Poeae) in Australia.

    PubMed

    Birch, Joanne L; Walsh, Neville G; Cantrill, David J; Holmes, Gareth D; Murphy, Daniel J

    2017-01-01

    In Australia, Poaceae tribe Poeae are represented by 19 genera and 99 species, including economically and environmentally important native and introduced pasture grasses [e.g. Poa (Tussock-grasses) and Lolium (Ryegrasses)]. We used this tribe, which are well characterised in regards to morphological diversity and evolutionary relationships, to test the efficacy of DNA barcoding methods. A reference library was generated that included 93.9% of species in Australia (408 individuals, [Formula: see text] = 3.7 individuals per species). Molecular data were generated for official plant barcoding markers (rbcL, matK) and the nuclear ribosomal internal transcribed spacer (ITS) region. We investigated accuracy of specimen identifications using distance- (nearest neighbour, best-close match, and threshold identification) and tree-based (maximum likelihood, Bayesian inference) methods and applied species discovery methods (automatic barcode gap discovery, Poisson tree processes) based on molecular data to assess congruence with recognised species. Across all methods, success rate for specimen identification of genera was high (87.5-99.5%) and of species was low (25.6-44.6%). Distance- and tree-based methods were equally ineffective in providing accurate identifications for specimens to species rank (26.1-44.6% and 25.6-31.3%, respectively). The ITS marker achieved the highest success rate for specimen identification at both generic and species ranks across the majority of methods. For distance-based analyses the best-close match method provided the greatest accuracy for identification of individuals with a high percentage of "correct" (97.6%) and a low percentage of "incorrect" (0.3%) generic identifications, based on the ITS marker. For tribe Poeae, and likely for other grass lineages, sequence data in the standard DNA barcode markers are not variable enough for accurate identification of specimens to species rank. 
For recently diverged grass species similar challenges are encountered in the application of genetic and morphological data to species delimitations, with taxonomic signal limited by extensive infra-specific variation and shared polymorphisms among species in both data types.

  6. Discovery of a 105-ms X-ray Pulsar in Kesteven-79: On the Nature of Compact Central Objects in Supernova Remnants

    NASA Technical Reports Server (NTRS)

    Gotthelf, E. V.; Halpern, J. P.; Seward, F. D.

    2005-01-01

    We report the discovery of 105-ms X-ray pulsations from the compact central object (CCO) in the supernova remnant Kes 79 using data acquired with the Newton X-Ray Multi-Mirror Mission (XMM-Newton). Using two observations of the pulsar taken 6 days apart, we derive an upper limit on its spin-down rate of P˙<9×10-14 s s-1 and find no evidence for binary orbital motion. The implied energy loss rate is E˙<3×1036 ergs s-1, the polar magnetic field strength is Bp<3×1012 G, and the spin-down age is τ>18.5 kyr. The latter exceeds the remnant's estimated age, suggesting that the pulsar was born spinning near its current period. The X-ray spectrum of PSR J1852+0040 is best characterized as a blackbody of temperature kTBB=0.43+/-0.02 keV, radius RBB~1.3 km, and Lbol=5.2×1033 ergs s-1 at d=7.1 kpc. The sinusoidal light curve is modulated with a pulsed fraction of >45%, suggestive of a small hot spot on the surface of the rotating neutron star. The lack of a discernible pulsar wind nebula is consistent with an interpretation of PSR J1852+0040 as a rotation-powered pulsar whose spin-down luminosity falls below the empirical threshold for generating bright wind nebulae, E˙c~4×1036 ergs s-1. The age discrepancy suggests that its E˙ has always been below E˙c, perhaps a distinguishing property of the CCOs. Alternatively, the X-ray spectrum of PSR J1852+0040 suggests a low-luminosity AXP, but the weak inferred Bp field is incompatible with a magnetar theory of its X-ray luminosity. The ordinary spin parameters discovered from PSR J1852+0040 highlight the inability of existing theories to explain the high luminosities and temperatures of CCO thermal X-ray spectra.

  7. Barostat testing of rectal sensation and compliance in humans: comparison of results across two centres and overall reproducibility.

    PubMed

    Cremonini, F; Houghton, L A; Camilleri, M; Ferber, I; Fell, C; Cox, V; Castillo, E J; Alpers, D H; Dewit, O E; Gray, E; Lea, R; Zinsmeister, A R; Whorwell, P J

    2005-12-01

    We assessed the reproducibility of measurements of rectal compliance and sensation in health in studies conducted at two centres, and estimated the sample sizes necessary to show clinically meaningful changes in future studies. We performed rectal barostat tests three times (day 1, day 1 after 4 h, and 14-17 days later) in 34 healthy participants. We measured compliance and pressure thresholds for first sensation, urgency, discomfort and pain using the ascending method of limits, and symptom ratings for gas, urgency, discomfort and pain during four phasic distensions (12, 24, 36 and 48 mmHg) in random order. Results obtained at the two centres differed minimally. Reproducibility of sensory end points varies with the type of sensation, pressure level and method of distension. Pressure threshold for pain and sensory ratings for non-painful sensations at 36 and 48 mmHg distension were the most reproducible at the two centres. Sample size calculations suggested that a crossover design is preferable in therapeutic trials: for each dose of medication tested, a sample of 21 should be sufficient to demonstrate 30% changes in all sensory thresholds and almost all sensory ratings. We conclude that reproducibility varies with sensation type, pressure level and distension method, but in a two-centre study, differences in observed results of sensation are minimal, and pressure threshold for pain and sensory ratings at 36-48 mmHg of distension are reproducible.

  8. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure.

    PubMed

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-08-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau's recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families' expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM's 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a post-tax/post-transfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time.

  9. Progress on Poverty? New Estimates of Historical Trends Using an Anchored Supplemental Poverty Measure

    PubMed Central

    Wimer, Christopher; Fox, Liana; Garfinkel, Irwin; Kaushal, Neeraj; Waldfogel, Jane

    2016-01-01

    This study examines historical trends in poverty using an anchored version of the U.S. Census Bureau’s recently developed Research Supplemental Poverty Measure (SPM) estimated back to 1967. Although the SPM is estimated each year using a quasi-relative poverty threshold that varies over time with changes in families’ expenditures on a core basket of goods and services, this study explores trends in poverty using an absolute, or anchored, SPM threshold. We believe the anchored measure offers two advantages. First, setting the threshold at the SPM’s 2012 levels and estimating it back to 1967, adjusted only for changes in prices, is more directly comparable to the approach taken in official poverty statistics. Second, it allows for a better accounting of the roles that social policy, the labor market, and changing demographics play in trends in poverty rates over time, given that changes in the threshold are held constant. Results indicate that unlike official statistics that have shown poverty rates to be fairly flat since the 1960s, poverty rates have dropped by 40 % when measured using a historical anchored SPM over the same period. Results obtained from comparing poverty rates using a pretax/pretransfer measure of resources versus a posttax/posttransfer measure of resources further show that government policies, not market incomes, are driving the declines observed over time. PMID:27352076

  10. Critical Mutation Rate Has an Exponential Dependence on Population Size in Haploid and Diploid Populations

    PubMed Central

    Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.

    2013-01-01

    Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations of 100 individuals or fewer; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
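
    The abstract's exponential dependence can be illustrated with a hypothetical saturating-exponential form (the functional shape and all parameter values below are invented for illustration, not taken from the paper): the tolerated critical mutation rate rises steeply for small populations and flattens out as N grows.

```python
import math

# Illustrative saturating-exponential dependence of the critical mutation
# rate on population size N (hypothetical form and parameters, shape only):
#   u_c(N) = u_inf * (1 - exp(-N / N0))
def critical_mutation_rate(N, u_inf=0.4, N0=50.0):
    return u_inf * (1.0 - math.exp(-N / N0))

# Small populations tolerate much lower mutation rates; the curve
# approaches u_inf once N grows past a few hundred individuals.
for N in (10, 100, 1000):
    print(N, round(critical_mutation_rate(N), 3))
```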

  11. Neural Correlates and Mechanisms of Spatial Release From Masking: Single-Unit and Population Responses in the Inferior Colliculus

    PubMed Central

    Lane, Courtney C.; Delgutte, Bertrand

    2007-01-01

    Spatial release from masking (SRM), a factor in listening in noisy environments, is the improvement in auditory signal detection obtained when a signal is separated in space from a masker. To study the neural mechanisms of SRM, we recorded from single units in the inferior colliculus (IC) of barbiturate-anesthetized cats, focusing on low-frequency neurons sensitive to interaural time differences. The stimulus was a broadband chirp train with a 40-Hz repetition rate in continuous broadband noise, and the unit responses were measured for several signal and masker (virtual) locations. Masked thresholds (the lowest signal-to-noise ratio, SNR, for which the signal could be detected for 75% of the stimulus presentations) changed systematically with signal and masker location. Single-unit thresholds did not necessarily improve with signal and masker separation; instead, they tended to reflect the units’ azimuth preference. Both how the signal was detected (through a rate increase or decrease) and how the noise masked the signal response (suppressive or excitatory masking) changed with signal and masker azimuth, consistent with a cross-correlator model of binaural processing. However, additional processing, perhaps related to the signal’s amplitude modulation rate, appeared to influence the units’ responses. The population masked thresholds (the most sensitive unit’s threshold at each signal and masker location) did improve with signal and masker separation as a result of the variety of azimuth preferences in our unit sample. The population thresholds were similar to human behavioral thresholds in both SNR value and shape, indicating that these units may provide a neural substrate for low-frequency SRM. PMID:15857966

  12. D-dimer threshold increase with pretest probability unlikely for pulmonary embolism to decrease unnecessary computerized tomographic pulmonary angiography

    PubMed Central

    Hogg, Melanie M.; Courtney, D. Mark; Miller, Chadwick D.; Jones, Alan E.; Smithline, Howard A.

    2012-01-01

    Background Increasing the threshold to define a positive D-dimer could reduce unnecessary computed tomographic pulmonary angiography (CTPA) for suspected PE but might increase rates of missed PE and missed pneumonia, the most common nonthromboembolic diagnosis seen on CTPA. Objective Measure the effect of doubling the standard D-dimer threshold for “PE unlikely” Revised Geneva (RGS) or Wells’ scores on the exclusion rate, frequency and size of missed PE and missed pneumonia. Methods Patients evaluated for suspected PE with 64-channel CTPA were prospectively enrolled from EDs and inpatient units of four hospitals. Pretest probability data were collected in real time and the D-dimer was measured in a central laboratory. Criterion standard was CTPA interpretation by two independent radiologists combined with clinical outcome at 30 days. Results Of 678 patients enrolled, 126 (19%) were PE+ and 93 (14%) had pneumonia. Use of either Wells≤4 or RGS≤6 produced similar results. For example, with RGS≤6 and standard threshold (<500 ng/mL), D-dimer was negative in 110/678 (16%), and 4/110 were PE+ (posterior probability 3.8%), and 9/110 (8.2%) had pneumonia. With RGS≤6 and a threshold <1000 ng/mL, D-dimer was negative in 208/678 (31%) and 11/208 (5.3%) were PE+, but 10/11 missed PEs were subsegmental, and none had concomitant DVT. Pneumonia was found in 12/208 (5.4%) with RGS≤6 and D-dimer<1000 ng/mL. Conclusions Doubling the threshold for a positive D-dimer with a PE unlikely pretest probability could reduce CTPA scanning with a slightly increased risk of missed isolated subsegmental PE, and no increase in rate of missed pneumonia. PMID:22284935
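
    The headline exclusion and miss rates in the abstract are simple ratios of the reported counts, and can be recomputed directly:

```python
# Recomputing the headline rates from the counts reported in the abstract.
total = 678

# RGS <= 6 with the standard D-dimer threshold (<500 ng/mL)
neg_500 = 110
# RGS <= 6 with the doubled threshold (<1000 ng/mL)
neg_1000, missed_1000 = 208, 11

exclusion_500 = neg_500 / total       # fraction who could skip CTPA, ≈ 16%
exclusion_1000 = neg_1000 / total     # ≈ 31% with the doubled threshold
miss_1000 = missed_1000 / neg_1000    # PE+ among D-dimer-negatives, ≈ 5.3%

# Doubling the threshold roughly doubles the fraction of patients who can
# skip CTPA, at the cost of a modestly higher post-test probability of PE.
print(round(exclusion_500, 2), round(exclusion_1000, 2), round(miss_1000, 3))
# → 0.16 0.31 0.053
```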

  13. Seizure threshold increases can be predicted by EEG quality in right unilateral ultrabrief ECT.

    PubMed

    Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Waite, Susan; Loo, Colleen K

    2017-12-01

    Increases in seizure threshold (ST) over a course of brief pulse ECT can be predicted by decreases in EEG quality, informing ECT dose adjustment to maintain adequate supra-threshold dosing. ST increases also occur over a course of right unilateral ultrabrief (RUL UB) ECT, but no data exist on the relationship between ST increases and EEG indices. This study (n = 35) investigated if increases in ST over RUL UB ECT treatments could be predicted by a decline in seizure quality. ST titration was performed at ECT session one and seven, with treatment dosing maintained stable (at 6-8 times ST) in intervening sessions. Seizure quality indices (slow-wave onset, mid-ictal amplitude, regularity, stereotypy, and post-ictal suppression) were manually rated at the first supra-threshold treatment, and last supra-threshold treatment before re-titration, using a structured rating scale, by a single trained rater blinded to the ECT session being rated. Twenty-one subjects (60%) had a ST increase. The association between ST changes and EEG quality indices was analysed by logistic regression, yielding a significant model (p < 0.001). Initial ST (p < 0.05) and percentage change in mid-ictal amplitude (p < 0.05) were significant predictors of change in ST. Percentage change in post-ictal suppression reached trend level significance (p = 0.065). Increases in ST over a RUL UB ECT course may be predicted by decreases in seizure quality, specifically decline in mid-ictal amplitude and potentially in post-ictal suppression. Such EEG indices may be able to inform when dose adjustments are necessary to maintain adequate supra-threshold dosing in RUL UB ECT.

  14. Stability of plasma cylinder with current in a helical plasma flow

    NASA Astrophysics Data System (ADS)

    Leonovich, Anatoly S.; Kozlov, Daniil A.; Zong, Qiugang

    2018-04-01

    Stability of a plasma cylinder with a current wrapped by a helical plasma flow is studied. Unstable surface modes of magnetohydrodynamic (MHD) oscillations develop at the boundary of the cylinder enwrapped by the plasma flow. Unstable eigenmodes can also develop for which the plasma cylinder is a waveguide. The growth rate of the surface modes is much higher than that for the eigenmodes. It is shown that the asymmetric MHD modes in the plasma cylinder are stable if the velocity of the plasma flow is below a certain threshold. Such a plasma flow velocity threshold is absent for the symmetric modes. They are unstable in any arbitrarily slow plasma flows. For all surface modes there is an upper threshold for the flow velocity above which they are stable. The helicity index of the flow around the plasma cylinder significantly affects both the Mach number dependence of the surface wave growth rate and the velocity threshold values. The higher the index, the lower the upper threshold of the velocity jump above which the surface waves become stable. Calculations have been carried out for the growth rates of unstable oscillations in an equilibrium plasma cylinder with current serving as a model of the low-latitude boundary layer (LLBL) of the Earth's magnetic tail. A tangential discontinuity model is used to simulate the geomagnetic tail boundary. It is shown that the magnetopause in the geotail LLBL is unstable to a surface wave (having the highest growth rate) in low- and medium-speed solar wind flows, but becomes stable to this wave in high-speed flows. However, it can remain weakly unstable to the radiative modes of MHD oscillations.

  15. Maximal lipidic power in high competitive level triathletes and cyclists

    PubMed Central

    González‐Haro, C; Galilea, P A; González‐de‐Suso, J M; Drobnic, F; Escanero, J F

    2007-01-01

    Objective To describe the fat‐oxidation rate in triathlon and different modalities of endurance cycling. Methods 34 endurance athletes (15 male triathletes, 4 female triathletes, 11 road cyclists and 4 male mountain bikers) underwent a progressive cycloergometer test until exhaustion. Relative work intensity (VO2max), minimal lactate concentration (La−min), lactic threshold, individual lactic threshold (ILT), maximal fat‐oxidation rate (Fatmax, Fatmax zone) and minimal fat‐oxidation rate (Fatmin) were determined in each of the groups and were compared by means of one‐way analysis of variance. Results No significant differences were found for Fatmax, Fatmin or for the Fatmax zone expressed as fat oxidation rate (g/min). Intensities −20%, −10% and −5% Fatmax were significantly lower for mountain bikers with respect to road cyclists and female triathletes, expressed as % VO2max. Intensities 20%, 10% and 5% Fatmax were significantly lower for mountain bikers with respect to male triathletes and female triathletes, and for male triathletes in comparison with female triathletes, expressed as % VO2max. Lactic threshold and La−min did not show significant differences with respect to Fatmax. Lactic threshold was found at the same VO2max with respect to the higher part of the Fatmax zone, and La−min at the same VO2max with respect to the lower part of the Fatmax zone. Conclusions The VO2max of Fatmax and the Fatmax zone may explain the different endurance adaptations of the athletes according to their sporting discipline. Lactic threshold and La−min were found at different relative work intensities with respect to those of Fatmax even though they belonged to the Fatmax zone. PMID:17062656

  16. Using perceptual cues for brake response to a lead vehicle: Comparing threshold and accumulator models of visual looming.

    PubMed

    Xue, Qingwan; Markkula, Gustav; Yan, Xuedong; Merat, Natasha

    2018-06-18

    Previous studies have shown the effect of a lead vehicle's speed, deceleration rate and headway distance on drivers' brake response times. However, how drivers perceive this information and use it to determine when to apply braking is still unclear. To better understand the underlying mechanisms, a driving simulator experiment was performed in which each participant experienced nine deceleration scenarios. Previously reported effects of the lead vehicle's speed, deceleration rate and headway distance on brake response time were first verified, using a multilevel model. Then, as an alternative to measures of speed, deceleration rate and distance, two visual looming-based metrics (the angular expansion rate θ̇ of the lead vehicle on the driver's retina, and inverse tau τ⁻¹, the ratio between θ̇ and the optical size θ), considered to be more in line with typical human psycho-perceptual responses, were adopted to quantify situation urgency. These metrics were used in two previously proposed mechanistic models predicting brake onset: braking begins either when looming surpasses a threshold, or when the accumulated evidence (looming and other cues) reaches a threshold. Results showed that the looming threshold model did not capture the distribution of brake response times. Regardless of looming metric, the accumulator models fitted the distribution of brake response times better than the pure threshold models, and accumulator models including brake lights provided a better fit than looming-only versions. For all versions of the mechanistic models, those using τ⁻¹ as the measure of looming fitted better than those using θ̇, indicating that the visual cues drivers use during rear-end collision avoidance may be closer to τ⁻¹.
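The two model classes being compared can be sketched in a few lines: compute the optical size θ and the looming signal τ⁻¹ for a braking lead vehicle, then trigger braking either when τ⁻¹ crosses a fixed threshold or when accumulated supra-gate looming evidence reaches a bound. The kinematics and every parameter value below are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

# Illustrative kinematics (not the study's scenarios): following at 40 m
# headway; the lead vehicle brakes at 3 m/s^2 from t = 0.
dt = 0.001
t = np.arange(0.0, 5.0, dt)
W = 1.8                                    # lead-vehicle width (m), assumed
gap = np.maximum(40.0 - 0.5 * 3.0 * t**2, 0.1)

theta = 2 * np.arctan(W / (2 * gap))       # optical size of the lead vehicle
theta_dot = np.gradient(theta, dt)         # angular expansion rate
inv_tau = theta_dot / theta                # inverse tau (1/s)

# (a) Pure threshold model: brake the moment looming exceeds a fixed value.
LOOM_THRESHOLD = 0.2                       # 1/s, assumed
onset_threshold = t[np.argmax(inv_tau > LOOM_THRESHOLD)]

# (b) Accumulator model: integrate supra-gate looming evidence to a bound,
# so the response is necessarily delayed relative to (a).
GATE, BOUND = 0.05, 0.3                    # assumed parameters
evidence = np.cumsum(np.maximum(inv_tau - GATE, 0.0) * dt)
onset_accumulator = t[np.argmax(evidence > BOUND)]

print(onset_threshold, onset_accumulator)
```

Because accumulation takes time, model (b) produces later and more variable brake onsets than model (a) for the same looming input, which is the behavioural signature the study exploits.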

  17. Retrospective analysis of natural products provides insights for future discovery trends

    PubMed Central

    Pye, Cameron R.; Bertin, Matthew J.; Lokey, R. Scott; Gerwick, William H.

    2017-01-01

    Understanding of the capacity of the natural world to produce secondary metabolites is important to a broad range of fields, including drug discovery, ecology, biosynthesis, and chemical biology, among others. Both the absolute number and the rate of discovery of natural products have increased significantly in recent years. However, there is a perception and concern that the fundamental novelty of these discoveries is decreasing relative to previously known natural products. This study presents a quantitative examination of the field from the perspective of both number of compounds and compound novelty using a dataset of all published microbial and marine-derived natural products. This analysis aimed to explore a number of key questions, such as how the rate of discovery of new natural products has changed over the past decades, how the average natural product structural novelty has changed as a function of time, whether exploring novel taxonomic space affords an advantage in terms of novel compound discovery, and whether it is possible to estimate how close we are to having described all of the chemical space covered by natural products. Our analyses demonstrate that most natural products being published today bear structural similarity to previously published compounds, and that the range of scaffolds readily accessible from nature is limited. However, the analysis also shows that the field continues to discover appreciable numbers of natural products with no structural precedent. Together, these results suggest that the development of innovative discovery methods will continue to yield compounds with unique structural and biological properties. PMID:28461474

  18. Academic drug discovery: current status and prospects.

    PubMed

    Everett, Jeremy R

    2015-01-01

    The contraction in pharmaceutical drug discovery operations in the past decade has been counter-balanced by a significant rise in the number of academic drug discovery groups. In addition, pharmaceutical companies that used to operate in completely independent, vertically integrated operations for drug discovery are now collaborating more with each other, and with academic groups. We are in a new era of drug discovery. This review provides an overview of the current status of academic drug discovery groups, their achievements and the challenges they face, together with perspectives on ways to achieve improved outcomes. Academic groups have made important contributions to drug discovery from its earliest days, and they continue to do so today. However, modern drug discovery and development is exceedingly complex and has high failure rates, principally because human biology is complex and poorly understood. Academic drug discovery groups need to play to their strengths and not just copy what has gone before. There are lessons to be learnt from the experiences of the industrial drug discoverers, and four areas are highlighted for attention: i) increased validation of targets; ii) elimination of false hits from high throughput screening (HTS); iii) increasing the quality of molecular probes; and iv) investing in a high-quality informatics infrastructure.

  19. Auditory-nerve single-neuron thresholds to electrical stimulation from scala tympani electrodes.

    PubMed

    Parkins, C W; Colombo, J

    1987-12-31

    Single auditory-nerve neuron thresholds were studied in sensory-deafened squirrel monkeys to determine the effects of electrical stimulus shape and frequency on single-neuron thresholds. Frequency was separated into its components, pulse width and pulse rate, which were analyzed separately. Square and sinusoidal pulse shapes were compared. There were no, or at most questionably significant, threshold differences in charge per phase between sinusoidal and square pulses of the same pulse width. There was a small (less than 0.5 dB) but significant threshold advantage for 200 microseconds/phase pulses delivered at low pulse rates (156 pps) compared to higher pulse rates (625 pps and 2500 pps). Pulse width was demonstrated to be the prime determinant of single-neuron threshold, resulting in strength-duration curves similar to those of other mammalian myelinated neurons, but with longer chronaxies. The most efficient electrical stimulus pulse width for cochlear implant stimulation was determined to be 100 microseconds/phase, the pulse width that delivers the lowest charge/phase at threshold. The single-neuron strength-duration curves were compared to strength-duration curves of a computer model based on the specific anatomy of auditory-nerve neurons. The membrane capacitance and resulting chronaxie of the model can be varied by altering the length of the unmyelinated termination of the neuron, representing the unmyelinated portion of the neuron between the habenula perforata and the hair cell. This unmyelinated segment of the auditory-nerve neuron may be subject to aminoglycoside damage. Simulating a 10 micron unmyelinated termination for this model neuron produces a strength-duration curve that closely fits the single-neuron data obtained from aminoglycoside-deafened animals. Both the model and the single-neuron strength-duration curves differ significantly from behavioral threshold data obtained from monkeys and humans with cochlear implants.
This discrepancy can best be explained by the involvement of higher level neurologic processes in the behavioral responses. These findings suggest that the basic principles of neural membrane function must be considered in developing or analyzing electrical stimulation strategies for cochlear prostheses if the appropriate stimulation of frequency specific populations of auditory-nerve neurons is the objective.
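The strength-duration curves discussed here follow the classical Lapicque form, I(PW) = I_rheobase · (1 + chronaxie/PW), where chronaxie is by definition the pulse width at which threshold current is twice rheobase. A minimal sketch with illustrative parameter values (not the paper's fits; the abstract only notes chronaxies longer than those of typical mammalian myelinated neurons):

```python
# Lapicque strength-duration sketch (rheobase and chronaxie are assumed,
# illustrative values, not the paper's fitted parameters).
I_RHEOBASE = 100.0  # µA, assumed
CHRONAXIE = 350.0   # µs, assumed

def threshold_current(pw_us):
    """Lapicque strength-duration curve: I(PW) = I_rh * (1 + t_ch / PW)."""
    return I_RHEOBASE * (1 + CHRONAXIE / pw_us)

for pw in (25, 50, 100, 200, 400, 800):
    i_th = threshold_current(pw)
    print(f"PW {pw:4d} µs/phase  I_th {i_th:7.1f} µA  Q {i_th * pw * 1e-6:.4f} µC/phase")
```

Note that under this idealised relation, threshold charge per phase (Q = I · PW) keeps falling as pulses shorten, so the 100 µs/phase optimum reported above presumably reflects the tested range and practical constraints (such as the very high currents short pulses require) on top of the bare strength-duration curve.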

  20. Long-term outcomes in patients with septic shock transfused at a lower versus a higher haemoglobin threshold: the TRISS randomised, multicentre clinical trial.

    PubMed

    Rygård, Sofie L; Holst, Lars B; Wetterslev, Jørn; Winkel, Per; Johansson, Pär I; Wernerman, Jan; Guttormsen, Anne B; Karlsson, Sari; Perner, Anders

    2016-11-01

    We assessed the predefined long-term outcomes in patients randomised in the Transfusion Requirements in Septic Shock (TRISS) trial. In 32 Scandinavian ICUs, we randomised 1005 patients with septic shock and haemoglobin of 9 g/dl or less to receive single units of leuko-reduced red cells when haemoglobin level was 7 g/dl or less (lower threshold) or 9 g/dl or less (higher threshold) during ICU stay. We assessed mortality rates 1 year after randomisation and again in all patients at time of longest follow-up in the intention-to-treat population (n = 998) and health-related quality of life (HRQoL) 1 year after randomisation in the Danish patients only (n = 777). Mortality rates in the lower- versus higher-threshold group at 1 year were 53.5 % (268/501 patients) versus 54.6 % (271/496) [relative risk 0.97; 95 % confidence interval (CI) 0.85-1.09; P = 0.62]; at longest follow-up (median 21 months), they were 56.7 % (284/501) versus 61.0 % (302/495) (hazard ratio 0.88; 95 % CI 0.75-1.03; P = 0.12). We obtained HRQoL data at 1 year in 629 of the 777 (81 %) Danish patients, and mean differences between the lower- and higher-threshold group in scores of physical HRQoL were 0.4 (95 % CI -2.4 to 3.1; P = 0.79) and in mental HRQoL 0.5 (95 % CI -3.1 to 4.0; P = 0.79). Long-term mortality rates and HRQoL did not differ in patients with septic shock and anaemia who were transfused at a haemoglobin threshold of 7 g/dl versus a threshold of 9 g/dl. We may reject a more than 3 % increased hazard of death in the lower- versus higher-threshold group at the time of longest follow-up.
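The 1-year mortality comparison can be approximately reproduced from the counts in the abstract. This crude, unadjusted relative risk with a normal-approximation confidence interval will not exactly match the published estimate of 0.97 (0.85-1.09), which comes from the trial's full analysis.

```python
import math

# Crude 1-year relative risk from the abstract's counts (unadjusted; the
# trial's published estimate uses the full stratified analysis).
deaths_lower, n_lower = 268, 501
deaths_higher, n_higher = 271, 496

p1 = deaths_lower / n_lower
p2 = deaths_higher / n_higher
rr = p1 / p2

# 95% CI via the normal approximation on log(RR).
se = math.sqrt((1 - p1) / deaths_lower + (1 - p2) / deaths_higher)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The interval straddles 1.0, consistent with the trial's conclusion of no long-term mortality difference between the two transfusion thresholds.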

  1. Information transmission using non-Poisson regular firing.

    PubMed

    Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru

    2013-04-01

    In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
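The regularity of firing at issue here is conventionally summarised by the coefficient of variation (CV) of interspike intervals: 1 for a Poisson process, smaller for more regular non-Poisson firing. A minimal illustration, using gamma-distributed intervals as a stand-in for regular firing (rate and shape values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rate = 100_000, 10.0  # number of intervals; mean firing rate (spikes/s)

# Poisson firing has exponential ISIs (CV = 1). A gamma process with shape
# k = 4 keeps the same mean rate but is more regular: CV = 1/sqrt(k) = 0.5.
isi_poisson = rng.exponential(1 / rate, n)
isi_regular = rng.gamma(4.0, 1 / (4.0 * rate), n)

cv_poisson = isi_poisson.std() / isi_poisson.mean()
cv_regular = isi_regular.std() / isi_regular.mean()
print(cv_poisson, cv_regular)
```

In the study's terms, the more regular train (lower CV) lets a downstream Bayesian decoder detect proportionally smaller rate fluctuations.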

  2. The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.

    PubMed

    Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R

    2013-01-01

    In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
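For context, the step-up procedure of Benjamini and Hochberg that underlies the false discovery rate paradigm discussed here can be sketched as follows. This is the generic procedure, not the authors' analysis, and the p-value mixture is invented for illustration.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Step-up BH procedure: reject the k smallest p-values, where
    k = max{ i : p_(i) <= q * i / m }."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.flatnonzero(below).max()   # largest rank satisfying the bound
        reject[order[: k + 1]] = True
    return reject

# Illustrative mixture: 90 null p-values and 10 strong signals.
rng = np.random.default_rng(2)
pvals = np.concatenate([rng.uniform(0, 1, 90), rng.uniform(0, 1e-4, 10)])
print(benjamini_hochberg(pvals).sum())
```

The article's point is that controlling this FDR criterion exactly need not maximise power when the test statistic violates the monotone likelihood ratio condition.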

  3. 48 CFR 41.401 - Monthly and annual review.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... values exceeding the simplified acquisition threshold, on an annual basis. Annual reviews of accounts with annual values at or below the simplified acquisition threshold shall be conducted when deemed... services to each facility under the utility's most economical, applicable rate and to examine competitive...

  4. Security of six-state quantum key distribution protocol with threshold detectors

    PubMed Central

    Kato, Go; Tamaki, Kiyoshi

    2016-01-01

    The security of quantum key distribution (QKD) is established by a security proof, and the security proof puts some assumptions on the devices constituting a QKD system. Among such assumptions, security proofs of the six-state protocol assume the use of a photon-number-resolving (PNR) detector, and as a result the bit error rate threshold for secure key generation for the six-state protocol is higher than that for the BB84 protocol. Unfortunately, however, this type of detector is demanding in terms of technological level compared to the standard threshold detector, and removing the necessity of such a detector enhances the feasibility of implementing the six-state protocol. Here, we develop a security proof for the six-state protocol and show that the threshold detector can be used for the six-state protocol. Importantly, the bit error rate threshold for key generation for the six-state protocol (12.611%) remains almost the same as the one (12.619%) derived from the existing security proofs assuming the use of PNR detectors. This clearly demonstrates the feasibility of the six-state protocol with practical devices. PMID:27443610

  5. A test of critical thresholds and their indicators in a desertification-prone ecosystem: more resilience than we thought

    USGS Publications Warehouse

    Bestelmeyer, Brandon T.; Duniway, Michael C.; James, Darren K.; Burkett, Laura M.; Havstad, Kris M.

    2013-01-01

    Theoretical models predict that drylands can cross critical thresholds, but experimental manipulations to evaluate them are non-existent. We used a long-term (13-year) pulse-perturbation experiment featuring heavy grazing and shrub removal to determine if critical thresholds and their determinants can be demonstrated in Chihuahuan Desert grasslands. We asked if cover values or patch-size metrics could predict vegetation recovery, supporting their use as early-warning indicators. We found that season of grazing, but not the presence of competing shrubs, mediated the severity of grazing impacts on dominant grasses. Recovery occurred at the same rate irrespective of grazing history, suggesting that critical thresholds were not crossed, even at low cover levels. Grass cover, but not patch size metrics, predicted variation in recovery rates. Some transition-prone ecosystems are surprisingly resilient; management of grazing impacts and simple cover measurements can be used to avert undesired transitions and initiate restoration.

  6. Discrete diffraction managed solitons: Threshold phenomena and rapid decay for general nonlinearities

    NASA Astrophysics Data System (ADS)

    Choi, Mi-Ran; Hundertmark, Dirk; Lee, Young-Ran

    2017-10-01

    We prove a threshold phenomenon for the existence/non-existence of energy-minimizing solitary solutions of the diffraction management equation for strictly positive and zero average diffraction. Our methods allow for a large class of nonlinearities (they are, for example, allowed to change sign) and impose the weakest possible condition on the local diffraction profile: it only has to be locally integrable. The solutions are found as minimizers of a nonlinear and nonlocal variational problem which is translation invariant. There exists a critical threshold λcr such that minimizers for this variational problem exist if their power is larger than λcr, and no minimizers exist with power less than the critical threshold. We also give simple criteria for the finiteness and strict positivity of the critical threshold. Our proof of the existence of minimizers is rather direct and avoids the use of Lions' concentration compactness argument. Furthermore, we give precise quantitative lower bounds on the exponential decay rate of the diffraction management solitons, which confirm the physical heuristic prediction for the asymptotic decay rate. Moreover, for ground state solutions, these bounds give a quantitative lower bound for the divergence of the exponential decay rate in the limit of vanishing average diffraction. For zero average diffraction, we prove quantitative bounds which show that the solitons decay much faster than exponentially. Our results considerably extend and strengthen the results of Hundertmark and Lee [J. Nonlinear Sci. 22, 1-38 (2012) and Commun. Math. Phys. 309(1), 1-21 (2012)].

  7. Coincidence detection in the medial superior olive: mechanistic implications of an analysis of input spiking patterns

    PubMed Central

    Franken, Tom P.; Bremen, Peter; Joris, Philip X.

    2014-01-01

    Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying "ipsi-" vs. "contralateral" inputs; ρ was studied by using responses to different noises. We varied the number of inputs, the monaural and binaural thresholds, and the coincidence window duration. We examined the physiological plausibility of output "spike trains" by comparing their rate and tuning to ITD and ρ to those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, the monaural threshold almost invariably needed to exceed the binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes.
PMID:24822037
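A toy version of the binaural coincidence-counting scheme: pool Poisson input spike trains per side into short coincidence windows and emit an output "spike" when both sides contribute and the pooled count reaches the binaural threshold. The rates, window, and fibre counts are arbitrary stand-ins, but two qualitative effects reported above fall out directly: more input fibres raise the output rate, and raising the binaural threshold above 2 sharply lowers it.

```python
import numpy as np

rng = np.random.default_rng(3)
DURATION = 1.0   # s of simulated activity
RATE = 100.0     # spikes/s per input fibre (assumed)
WINDOW = 0.0005  # 0.5 ms coincidence window (assumed)

def poisson_train():
    """Spike times of one homogeneous Poisson input fibre."""
    n = rng.poisson(RATE * DURATION)
    return rng.uniform(0.0, DURATION, n)

def output_rate(n_fibres_per_side, binaural_threshold=2):
    """Output 'spikes': coincidence windows where both sides contribute and
    the pooled spike count reaches the binaural threshold."""
    edges = np.arange(0.0, DURATION + WINDOW, WINDOW)
    ipsi = np.histogram(np.concatenate([poisson_train() for _ in range(n_fibres_per_side)]), edges)[0]
    contra = np.histogram(np.concatenate([poisson_train() for _ in range(n_fibres_per_side)]), edges)[0]
    fires = (ipsi >= 1) & (contra >= 1) & (ipsi + contra >= binaural_threshold)
    return fires.sum() / DURATION

rate_2, rate_10, rate_10_strict = output_rate(2), output_rate(10), output_rate(10, 4)
print(rate_2, rate_10, rate_10_strict)
```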

  8. The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide

    PubMed Central

    Folly, Walter Sydney Dutra

    2011-01-01

    Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Linear extrapolations of the parameter values obtained for these years were then performed to estimate the values for 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431
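The extrapolation step described (parameters fitted for 2001 and 2002, extended linearly to 2003) amounts to one line per parameter. The parameter names and values below are placeholders, not the paper's six fitted parameters.

```python
# Linear extrapolation of model parameters to the next year: with fits for
# two consecutive years, the one-year slope carries each parameter forward.
# Names and values are hypothetical placeholders.
params_2001 = {"a": 1.10, "b": 0.42, "c": 17.0}
params_2002 = {"a": 1.15, "b": 0.40, "c": 17.5}

params_2003 = {
    k: params_2002[k] + (params_2002[k] - params_2001[k])  # slope over one year
    for k in params_2001
}
print(params_2003)
```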

  9. The threshold bias model: a mathematical model for the nomothetic approach of suicide.

    PubMed

    Folly, Walter Sydney Dutra

    2011-01-01

    Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Linear extrapolations of the parameter values obtained for these years were then performed to estimate the values for 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health.

  10. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
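Of the three procedures compared, the up-down staircase is the simplest to sketch: a 1-up-2-down rule converges on the ~70.7%-correct point of the psychometric function, and the threshold is estimated from the last few reversals. The simulated observer's parameters below are invented for illustration; they are not the study's gap-detection data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated observer (assumed logistic psychometric function for gap
# detection; threshold and slope in ms, plus a small lapse rate).
TRUE_THRESHOLD, SLOPE, LAPSE = 5.0, 1.0, 0.02

def p_correct(gap_ms):
    p = 1 / (1 + np.exp(-(gap_ms - TRUE_THRESHOLD) / SLOPE))
    return (1 - LAPSE) * p + LAPSE * 0.5

# 1-up-2-down staircase: converges on the ~70.7%-correct point.
level, step = 12.0, 1.0
correct_streak, reversals, last_dir = 0, [], 0
for _ in range(200):
    if rng.random() < p_correct(level):
        correct_streak += 1
        if correct_streak < 2:
            continue               # need two correct in a row to step down
        correct_streak, direction = 0, -1
    else:
        correct_streak, direction = 0, +1
    if last_dir and direction != last_dir:
        reversals.append(level)    # track reversals for the threshold estimate
    last_dir = direction
    level = max(0.5, level + direction * step)

estimate = float(np.mean(reversals[-8:]))
print(estimate)
```

Note that this rule yields only a point on the psychometric function; estimating slope and lapse rate as well is what motivates the Bayesian and updated maximum-likelihood procedures the study compares it against.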

  11. Climatic shocks associate with innovation in science and technology

    PubMed Central

    van Dijk, Mathijs A.

    2018-01-01

    Human history is shaped by landmark discoveries in science and technology. However, across both time and space the rate of innovation is erratic: Periods of relative inertia alternate with bursts of creative science and rapid cascades of technological innovations. While the origins of the rise and fall in rates of discovery and innovation remain poorly understood, they may reflect adaptive responses to exogenously emerging threats and pressures. Here we examined this possibility by fitting annual rates of scientific discovery and technological innovation to climatic variability and its associated economic pressures and resource scarcity. In time-series data from Europe (1500–1900 CE), we indeed found that rates of innovation are higher during prolonged periods of cold (versus warm) surface temperature and during the presence (versus absence) of volcanic dust veils. This negative temperature–innovation link was confirmed in annual time-series for France, Germany, and the United Kingdom (1901–1965 CE). Combined, across almost 500 years and over 5,000 documented innovations and discoveries, a 0.5°C increase in temperature associates with a sizable 0.30–0.60 standard deviation decrease in innovation. Results were robust to controlling for fluctuations in population size. Furthermore, and consistent with economic theory and micro-level data on group innovation, path analyses revealed that the relation between harsher climatic conditions between 1500 and 1900 CE and more innovation is mediated by climate-induced economic pressures and resource scarcity. PMID:29364910

  12. Biophysical mechanism of spike threshold dependence on the rate of rise of the membrane potential by sodium channel inactivation or subthreshold axonal potassium current

    PubMed Central

    Wester, Jason C.

    2013-01-01

    Spike threshold filters incoming inputs and thus gates activity flow through neuronal networks. Threshold is variable, and in many types of neurons there is a relationship between the threshold voltage and the rate of rise of the membrane potential (dVm/dt) leading to the spike. In primary sensory cortex this relationship enhances the sensitivity of neurons to a particular stimulus feature. While Na+ channel inactivation may contribute to this relationship, recent evidence indicates that K+ currents located in the spike initiation zone are crucial. Here we used a simple Hodgkin-Huxley biophysical model to systematically investigate the role of K+ and Na+ current parameters (activation voltages and kinetics) in regulating spike threshold as a function of dVm/dt. Threshold was determined empirically and not estimated from the shape of the Vm prior to a spike. This allowed us to investigate intrinsic currents and values of gating variables at the precise voltage threshold. We found that Na+ inactivation is sufficient to produce the relationship provided it occurs at hyperpolarized voltages combined with slow kinetics. Alternatively, hyperpolarization of the K+ current activation voltage, even in the absence of Na+ inactivation, is also sufficient to produce the relationship. This hyperpolarized shift of K+ activation allows an outward current prior to spike initiation to antagonize the Na+ inward current such that it becomes self-sustaining at a more depolarized voltage. Our simulations demonstrate parameter constraints on Na+ inactivation and the biophysical mechanism by which an outward current regulates spike threshold as a function of dVm/dt. PMID:23344915

  13. Leptophobic Z' in models with multiple Higgs doublet fields

    NASA Astrophysics Data System (ADS)

    Chiang, Cheng-Wei; Nomura, Takaaki; Yagyu, Kei

    2015-05-01

    We study the collider phenomenology of the leptophobic Z' boson from an extra U(1)' gauge symmetry in models with N-Higgs doublet fields. We assume that the Z' boson at tree level has (i) no Z-Z' mixing, (ii) no interaction with the charged leptons, and (iii) no flavour-changing neutral current. Under such a setup, it is shown that in the N = 1 case, all the U(1)' charges of left-handed quark doublets and right-handed up- and down-type quarks are required to be the same, while in the N ≥ 3 case one can take different charges for the three types of quarks. The N = 2 case is not well-defined under the above three requirements. We study the Z'V processes (V = γ, Z and W±) with the leptonic decays of Z and W± at the LHC. The most promising discovery channel or the most stringent constraint on the U(1)' gauge coupling constant comes from the Z'γ process below the threshold and from the Z'W process above the threshold. Assuming a collision energy of 8 TeV and an integrated luminosity of 19.6 fb-1, we find that the constraint from the Z'γ search in the lower mass regime can be stronger than that from the UA2 experiment. In the N ≥ 3 case, we consider four benchmark points for the Z' couplings with quarks. If such a Z' is discovered, a careful comparison between the Z'γ and Z'W signals is crucial to reveal the nature of Z' couplings with quarks. We also present the discovery reach of the Z' boson at the 14-TeV LHC in both the N = 1 and N ≥ 3 cases.

  14. Flaw Growth of 6Al-4V Titanium in a Freon TF Environment

    NASA Technical Reports Server (NTRS)

    Tiffany, C. F.; Masters, J. N.; Bixler, W. D.

    1969-01-01

    The plane strain threshold stress intensity and sustained stress flaw growth rates were experimentally determined for 6Al-4V S.T.A. titanium forging and weldments in environments of Freon TF at room temperature. Sustained load tests of surface-flawed specimens were conducted with the experimental approach based on linear elastic fracture mechanics. It was concluded that sustained stress flaw growth rates, in conjunction with threshold stress intensities, can be used in assessing the service life of pressure vessels.

  15. Threshold analysis of pulsed lasers with application to a room-temperature Co:MgF2 laser

    NASA Technical Reports Server (NTRS)

    Harrison, James; Welford, David; Moulton, Peter F.

    1989-01-01

    Rate-equation calculations are used to model accurately the near-threshold behavior of a Co:MgF2 laser operating at room temperature. The results demonstrate the limitations of the conventional threshold analysis in cases of practical interest. This conclusion is applicable to pulsed solid-state lasers in general. The calculations, together with experimental data, are used to determine emission cross sections for the Co:MgF2 laser.
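The conventional threshold analysis the paper improves upon comes from the steady state of the standard two-equation laser rate-equation model: above threshold the inversion clamps at N_th = 1/(B·τ_c) and the photon number rises linearly with pump rate. A sketch with illustrative parameters (not the Co:MgF2 values); the near-threshold regime, where spontaneous emission smooths this sharp knee, is exactly where the paper's fuller rate-equation treatment is needed.

```python
# Minimal steady-state sketch of the standard two-equation laser model
# (all parameter values are illustrative assumptions):
#   dN/dt   = P - N/tau - B*N*phi     (upper-level population N, pump rate P)
#   dphi/dt = B*N*phi - phi/tau_c     (cavity photon number phi)
TAU = 1e-3    # upper-state lifetime (s), assumed
TAU_C = 1e-8  # cavity photon lifetime (s), assumed
B = 1e-2      # stimulated-emission coupling (1/s per atom), assumed

n_th = 1 / (B * TAU_C)  # inversion clamps at this value above threshold
p_th = n_th / TAU       # threshold pump rate

def steady_state_photons(pump):
    """Steady-state photon number: zero below threshold, linear above."""
    if pump <= p_th:
        return 0.0
    return (pump - p_th) * TAU_C

for pump in (0.5 * p_th, p_th, 1.5 * p_th, 2.0 * p_th, 3.0 * p_th):
    print(f"pump {pump:.2e} 1/s -> photons {steady_state_photons(pump):.2e}")
```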

  16. Temperature-activity relationships in Meligethes aeneus: implications for pest management

    PubMed Central

    Ferguson, Andrew W; Nevard, Lucy M; Clark, Suzanne J; Cook, Samantha M

    2015-01-01

    BACKGROUND Pollen beetle (Meligethes aeneus F.) management in oilseed rape (Brassica napus L.) has become an urgent issue in the light of insecticide resistance. Risk prediction advice has relied upon flight temperature thresholds, while risk assessment uses simple economic thresholds. However, there is variation in the reported temperature of migration, and economic thresholds vary widely across Europe, probably owing to climatic factors interacting with beetle activity and plant compensation for damage. The effect of temperature on flight, feeding and oviposition activity of M. aeneus was examined in controlled conditions. RESULTS Escape from a release vial was taken as evidence of flight and was supported by video observations. The propensity to fly followed a sigmoid temperature–response curve between 6 and 23 °C; the 10, 25 and 50% flight temperature thresholds were 12.0–12.5 °C, 13.6–14.2 °C and 15.5–16.2 °C, respectively. Thresholds were slightly higher in the second of two flight bioassays, suggesting an effect of beetle age. Strong positive relationships were found between temperature (6–20 °C) and the rates of feeding and oviposition on flower buds of oilseed rape. CONCLUSION These temperature relationships could be used to improve M. aeneus migration risk assessment, refine weather-based decision support systems and modulate damage thresholds according to rates of bud damage. PMID:25052810
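With a logistic temperature-response curve, the p% flight thresholds reported above invert in closed form. The midpoint and slope below are chosen so the implied thresholds land inside the reported ranges; they are not the paper's fitted values.

```python
import math

# Logistic propensity-to-fly curve; T50 and S are illustrative choices,
# not fitted parameters from the study.
T50 = 15.8   # °C, 50% flight threshold
S = 1.65     # °C, slope parameter

def flight_propensity(temp_c):
    return 1 / (1 + math.exp(-(temp_c - T50) / S))

def flight_threshold(p):
    """Temperature at which a proportion p of beetles fly (logit inversion)."""
    return T50 + S * math.log(p / (1 - p))

for p in (0.10, 0.25, 0.50):
    print(f"{p:.0%} flight threshold: {flight_threshold(p):.1f} °C")
```

This closed-form inversion is what makes such curves convenient for weather-based decision support: any risk level maps directly to a forecastable temperature.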

  17. Estimation of pulse rate from ambulatory PPG using ensemble empirical mode decomposition and adaptive thresholding.

    PubMed

    Pittara, Melpo; Theocharides, Theocharis; Orphanidou, Christina

    2017-07-01

    A new method for deriving pulse rate from PPG obtained from ambulatory patients is presented. The method employs Ensemble Empirical Mode Decomposition to identify the pulsatile component from noise-corrupted PPG, and then uses a set of physiologically-relevant rules followed by adaptive thresholding, in order to estimate the pulse rate in the presence of noise. The method was optimized and validated using 63 hours of data obtained from ambulatory hospital patients. The F1 score obtained with respect to expertly annotated data was 0.857 and the mean absolute errors of estimated pulse rates with respect to heart rates obtained from ECG collected in parallel were 1.72 bpm for "good" quality PPG and 4.49 bpm for "bad" quality PPG. Both errors are within the clinically acceptable margin-of-error for pulse rate/heart rate measurements, showing the promise of the proposed approach for inclusion in next generation wearable sensors.
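    The abstract's EEMD pipeline is beyond a short sketch, but the adaptive-thresholding idea it ends with can be illustrated in isolation. A toy pure-Python peak detector (this is our simplification, not the paper's method): the detection threshold is reset to a fraction of each accepted peak, and a refractory period suppresses double detections.

    ```python
    import math

    def pulse_rate_bpm(signal, fs, decay=0.8, refractory=0.3):
        """Toy adaptive-threshold peak detector (NOT the paper's EEMD
        pipeline). Accepts a local maximum only if it exceeds the current
        threshold and falls outside the refractory window; the threshold
        then adapts to a fraction of that peak's amplitude.
        Returns the mean pulse rate in beats per minute."""
        threshold = max(signal) * decay
        min_gap = int(refractory * fs)   # samples to skip after a peak
        peaks, last = [], -min_gap
        for i in range(1, len(signal) - 1):
            is_local_max = signal[i - 1] < signal[i] >= signal[i + 1]
            if is_local_max and signal[i] >= threshold and i - last >= min_gap:
                peaks.append(i)
                last = i
                threshold = signal[i] * decay   # adapt to the latest peak
        if len(peaks) < 2:
            return 0.0
        mean_interval = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / fs
        return 60.0 / mean_interval

    # Synthetic 1.2 Hz "pulse" sampled at 100 Hz -> expect about 72 bpm
    sig = [math.sin(2 * math.pi * 1.2 * n / 100) for n in range(1000)]
    print(round(pulse_rate_bpm(sig, fs=100)))  # -> 72
    ```

    On real ambulatory PPG the signal would first be denoised (the paper uses EEMD) before any such thresholding is applied.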

  18. 64 x 64 thresholding photodetector array for optical pattern recognition

    NASA Astrophysics Data System (ADS)

    Langenbacher, Harry; Chao, Tien-Hsin; Shaw, Timothy; Yu, Jeffrey W.

    1993-10-01

    A high-performance 32 X 32 peak detector array is introduced. This detector consists of a 32 X 32 array of thresholding photo-transistor cells, manufactured with a standard MOSIS digital 2-micron CMOS process. A built-in thresholding function able to perform 1024 thresholding operations in parallel strongly distinguishes this chip from available CCD detectors. This high-speed detector offers response times from 1 to 10 milliseconds, much faster than commercially available CCD detectors operating at a TV frame rate. The parallel multiple-peak thresholding detection capability makes it particularly suitable for optical correlators and optoelectronically implemented neural networks. The principle of operation, circuit design and performance characteristics are described. Experimental demonstration of correlation peak detection is also provided. Recently, we have also designed and built an advanced version, a 64 X 64 thresholding photodetector array chip. Experimental investigation of using this chip for pattern recognition is ongoing.

  19. New Perspectives on How to Discover Drugs from Herbal Medicines: CAM's Outstanding Contribution to Modern Therapeutics

    PubMed Central

    Pan, Si-Yuan; Zhou, Shu-Feng; Gao, Si-Hua; Yu, Zhi-Ling; Zhang, Shuo-Feng; Tang, Min-Ke; Sun, Jian-Ning; Han, Yi-Fan; Fong, Wang-Fun; Ko, Kam-Ming

    2013-01-01

    With tens of thousands of plant species on earth, we are endowed with an enormous wealth of medicinal remedies from Mother Nature. Natural products and their derivatives represent more than 50% of all the drugs in modern therapeutics. Because of the low success rate and huge capital investment need, the research and development of conventional drugs are very costly and difficult. Over the past few decades, researchers have focused on drug discovery from herbal medicines or botanical sources, an important group of complementary and alternative medicine (CAM) therapy. With a long history of herbal usage for the clinical management of a variety of diseases in indigenous cultures, the success rate of developing a new drug from herbal medicinal preparations should, in theory, be higher than that from chemical synthesis. While the endeavor for drug discovery from herbal medicines is “experience driven,” the search for a therapeutically useful synthetic drug, like “looking for a needle in a haystack,” is a daunting task. In this paper, we first illustrated various approaches of drug discovery from herbal medicines. Typical examples of successful drug discovery from botanical sources were given. In addition, problems in drug discovery from herbal medicines were described and possible solutions were proposed. The prospect of drug discovery from herbal medicines in the postgenomic era was made with the provision of future directions in this area of drug development. PMID:23634172

  20. Unimolecular HCl and HF elimination reactions of 1,2-dichloroethane, 1,2-difluoroethane, and 1,2-chlorofluoroethane: assignment of threshold energies.

    PubMed

    Duncan, Juliana R; Solaka, Sarah A; Setser, D W; Holmes, Bert E

    2010-01-21

    The recombination of CH(2)Cl and CH(2)F radicals generates vibrationally excited CH(2)ClCH(2)Cl, CH(2)FCH(2)F, and CH(2)ClCH(2)F molecules with about 90 kcal mol(-1) of energy in a room temperature bath gas. New experimental data for CH(2)ClCH(2)F have been obtained that are combined with previously published studies for C(2)H(4)Cl(2) and C(2)H(4)F(2) to define reliable rate constants of 3.0 x 10(8) (C(2)H(4)F(2)), 2.4 x 10(8) (C(2)H(4)Cl(2)), and 1.9 x 10(8) (CH(2)ClCH(2)F) s(-1) for HCl and HF elimination. The product branching ratio for CH(2)ClCH(2)F is approximately 1. These experimental rate constants are compared to calculated statistical rate constants (RRKM) to assign threshold energies for HF and HCl elimination. The calculated rate constants are based on transition-state models obtained from calculations of electronic structures; the energy levels of the asymmetric, hindered, internal rotation were directly included in the state counting to obtain a more realistic measure for the density of internal states for the molecules. The assigned threshold energies for C(2)H(4)F(2) and C(2)H(4)Cl(2) are both 63 +/- 2 kcal mol(-1). The threshold energies for CH(2)ClCH(2)F are 65 +/- 2 (HCl) and 63 +/- 2 (HF) kcal mol(-1). These threshold energies are 5-7 kcal mol(-1) higher than the corresponding values for C(2)H(5)Cl or C(2)H(5)F, and beta-substitution of F or Cl atoms raises threshold energies for HF or HCl elimination reactions. The treatment presented here for obtaining the densities of states and the entropy of activation from models with asymmetric internal rotations with high barriers can be used to judge the validity of using a symmetric internal-rotor approximation for other cases. Finally, threshold energies for the 1,2-fluorochloroethanes are compared to those of the 1,1-fluorochloroethanes to illustrate substituent effects on the relative energies of the isomeric transition states.

  1. Probing the Cosmic Gamma-Ray Burst Rate with Trigger Simulations of the Swift Burst Alert Telescope

    NASA Technical Reports Server (NTRS)

    Lien, Amy; Sakamoto, Takanori; Gehrels, Neil; Palmer, David M.; Barthelmy, Scott D.; Graziani, Carlo; Cannizzo, John K.

    2013-01-01

    The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae and stellar evolution. Additionally, the GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs are observed by Swift, it remains difficult to determine the intrinsic GRB rate due to the complex trigger algorithm of Swift. Current studies of the GRB rate usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike previously flown GRB instruments, Swift has over 500 trigger criteria based on photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we develop a program that is capable of simulating all the rate trigger criteria and mimicking the image threshold. Our simulations show that adopting the complex trigger algorithm of Swift increases the detection rate of dim bursts. As a result, our simulations suggest bursts need to be dimmer than previously expected to avoid over-producing the number of detections and to match Swift observations. Moreover, our results indicate that these dim bursts are more likely to be high-redshift events than low-luminosity GRBs. This would imply an even higher cosmic GRB rate at large redshifts than previous expectations based on star-formation rate measurements, unless other factors, such as luminosity evolution, are taken into account. The GRB rate from our best result gives a total of 4568 (+825/-1429) GRBs per year that are beamed toward us in the whole universe.

  2. Gram-scale cryogenic calorimeters for rare-event searches

    NASA Astrophysics Data System (ADS)

    Strauss, R.; Rothe, J.; Angloher, G.; Bento, A.; Gütlein, A.; Hauff, D.; Kluck, H.; Mancuso, M.; Oberauer, L.; Petricca, F.; Pröbst, F.; Schieck, J.; Schönert, S.; Seidel, W.; Stodolsky, L.

    2017-07-01

    The energy threshold of a cryogenic calorimeter can be lowered by reducing its size. This is of importance since the resulting increase in signal rate enables new approaches in rare-event searches, including the detection of MeV-mass dark matter and coherent scattering of reactor or solar neutrinos. A scaling law for energy threshold vs detector size is given. We analyze the possibility of lowering the threshold of a gram-scale cryogenic calorimeter to the few-eV regime. A prototype 0.5 g Al2O3 device achieved an energy threshold of Eth = (19.7 ± 0.9) eV, the lowest value reported for a macroscopic calorimeter.

  3. Erosive Augmentation of Solid Propellant Burning Rate: Motor Size Scaling Effect

    NASA Technical Reports Server (NTRS)

    Strand, L. D.; Cohen, Norman S.

    1990-01-01

    Two different independent variable forms, a difference form and a ratio form, were investigated for correlating the normalized magnitude of the measured erosive burning rate augmentation above the threshold in terms of the amount that the driving parameter (mass flux or Reynolds number) exceeds the threshold value for erosive augmentation at the test condition. The latter was calculated from the previously determined threshold correlation. Either variable form provided a correlation for each of the two motor size data bases individually. However, the data showed a motor size effect, supporting the general observation that the magnitude of erosive burning rate augmentation is reduced for larger rocket motors. For both independent variable forms, the required motor size scaling was attained by including the motor port radius raised to a power in the independent parameter. A boundary layer theory analysis confirmed the experimental finding, but showed that the magnitude of the scale effect is itself dependent upon scale, tending to diminish with increasing motor size.

  4. Using generalized additive modeling to empirically identify thresholds within the ITERS in relation to toddlers' cognitive development.

    PubMed

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-04-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints, in the child care quality measures used in these benchmarking efforts that differentiate between different levels of children's cognitive functioning. To date, research has provided little guidance to policymakers as to where these thresholds should be set. Using the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) data set, this study explores the use of generalized additive modeling (GAM) as a method of identifying thresholds on the Infant/Toddler Environment Rating Scale (ITERS) in relation to toddlers' performance on the Mental Development subscale of the Bayley Scales of Infant Development (the Bayley Mental Development Scale Short Form-Research Edition, or BMDSF-R). The present findings suggest that simple linear models do not always correctly depict the relationships between ITERS scores and BMDSF-R scores and that GAM-derived thresholds were more effective at differentiating among children's performance levels on the BMDSF-R. Additionally, the present findings suggest that there is a minimum threshold on the ITERS that must be exceeded before significant improvements in children's cognitive development can be expected. There may also be a ceiling threshold on the ITERS, such that beyond a certain level, only marginal increases in children's BMDSF-R scores are observed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  5. Population dynamics of obligate cooperators

    PubMed Central

    Courchamp, F.; Grenfell, B.; Clutton-Brock, T.

    1999-01-01

    Obligate cooperative breeding species demonstrate a high rate of group extinction, which may be due to the existence of a critical number of helpers below which the group cannot subsist. Through a simple model, we study the population dynamics of obligate cooperative breeding species, taking into account the existence of a lower threshold below which the instantaneous growth rate becomes negative. The model successively incorporates (i) a distinction between species that need helpers for reproduction, survival or both, (ii) the existence of a migration rate accounting for dispersal, and (iii) stochastic mortality to simulate the effects of random catastrophic events. Our results suggest that the need for a minimum number of helpers increases the risk of extinction for obligate cooperative breeding species. The constraint imposed by this threshold is higher when helpers are needed for reproduction only or for both reproduction and survival. By driving them below this lower threshold, stochastic mortality of lower amplitude and/or lower frequency than for non-cooperative breeders may be sufficient to cause the extinction of obligate cooperative breeding groups. Migration may have a buffering effect only for groups where immigration is higher than emigration; otherwise (when immigrants from nearby groups are not available) it lowers the difference between actual group size and critical threshold, thereby constituting a higher constraint.

  6. A Tutorial on Multiple Testing: False Discovery Control

    NASA Astrophysics Data System (ADS)

    Chatelain, F.

    2016-09-01

    This paper presents an overview of criteria and methods in multiple testing, with an emphasis on the false discovery rate control. The popular Benjamini and Hochberg procedure is described. The rationale for this approach is explained through a simple Bayesian interpretation. Some state-of-the-art variations and extensions are also presented.
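    The Benjamini-Hochberg step-up rule described above is short enough to sketch directly. A minimal pure-Python version (function name is ours): sort the p-values, find the largest rank k with p_(k) <= (k/m)·q, and reject the k hypotheses with the smallest p-values.

    ```python
    def benjamini_hochberg(pvalues, q=0.05):
        """Benjamini-Hochberg step-up procedure. Returns a boolean list,
        in the original order, marking which hypotheses are rejected
        while controlling the false discovery rate at level q."""
        m = len(pvalues)
        # Indices sorted by ascending p-value (1-indexed ranks below).
        order = sorted(range(m), key=lambda i: pvalues[i])
        # Largest rank k with p_(k) <= (k / m) * q.
        k_max = 0
        for rank, idx in enumerate(order, start=1):
            if pvalues[idx] <= rank / m * q:
                k_max = rank
        rejected = [False] * m
        for rank, idx in enumerate(order, start=1):
            if rank <= k_max:
                rejected[idx] = True
        return rejected

    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
    print(benjamini_hochberg(pvals, q=0.05))  # only the first two survive
    ```

    Note the step-up character: a p-value above its own line can still be rejected if some larger rank satisfies the inequality.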

  7. A petroleum discovery-rate forecast revisited-The problem of field growth

    USGS Publications Warehouse

    Drew, L.J.; Schuenemeyer, J.H.

    1992-01-01

    A forecast of the future rates of discovery of crude oil and natural gas for the 123,027-km2 Miocene/Pliocene trend in the Gulf of Mexico was made in 1980. This forecast was evaluated in 1988 by comparing two sets of data: (1) the actual versus the forecasted number of fields discovered, and (2) the actual versus the forecasted volumes of crude oil and natural gas discovered with the drilling of 1,820 wildcat wells along the trend between January 1, 1977, and December 31, 1985. The forecast specified that this level of drilling would result in the discovery of 217 fields containing 1.78 billion barrels of oil equivalent; however, 238 fields containing 3.57 billion barrels of oil equivalent were actually discovered. This underestimation is attributed to biases introduced by field growth and, to a lesser degree, the artificially low, pre-1970's price of natural gas that prevented many smaller gas fields from being brought into production at the time of their discovery; most of these fields contained less than 50 billion cubic feet of producible natural gas. © 1992 Oxford University Press.

  8. Network-based discovery through mechanistic systems biology. Implications for applications--SMEs and drug discovery: where the action is.

    PubMed

    Benson, Neil

    2015-08-01

    Phase II attrition remains the most important challenge for drug discovery. Tackling the problem requires improved understanding of the complexity of disease biology. Systems biology approaches to this problem can, in principle, deliver this. This article reviews the reports of the application of mechanistic systems models to drug discovery questions and discusses the added value. Although we are on the journey to the virtual human, the length, path and rate of learning from this remain an open question. Success will be dependent on the will to invest and make the most of the insight generated along the way. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Drug Discovery for Neglected Diseases: Molecular Target-Based and Phenotypic Approaches

    PubMed Central

    2013-01-01

    Drug discovery for neglected tropical diseases is carried out using both target-based and phenotypic approaches. In this paper, target-based approaches are discussed, with a particular focus on human African trypanosomiasis. Target-based drug discovery can be successful, but careful selection of targets is required. There are still very few fully validated drug targets in neglected diseases, and there is a high attrition rate in target-based drug discovery for these diseases. Phenotypic screening is a powerful method in both neglected and non-neglected diseases and has been very successfully used. Identification of molecular targets from phenotypic approaches can be a way to identify potential new drug targets. PMID:24015767

  10. No minimum threshold for ozone-induced changes in soybean canopy fluxes

    USDA-ARS?s Scientific Manuscript database

    Tropospheric ozone concentrations [O3] are increasing at rates that exceed any other pollutant. This highly reactive gas drives reductions in plant productivity and canopy water use while also increasing canopy temperature and sensible heat flux. It is not clear whether a minimum threshold of ozone ...

  11. Absorbed dose thresholds and absorbed dose rate limitations for studies of electron radiation effects on polyetherimides

    NASA Technical Reports Server (NTRS)

    Long, Edward R., Jr.; Long, Sheila Ann T.; Gray, Stephanie L.; Collins, William D.

    1989-01-01

    The threshold values of total absorbed dose for causing changes in tensile properties of a polyetherimide film and the limitations of the absorbed dose rate for accelerated-exposure evaluation of the effects of electron radiation in geosynchronous orbit were studied. Total absorbed doses from 1 kGy to 100 MGy and absorbed dose rates from 0.01 MGy/hr to 100 MGy/hr were investigated, where 1 Gy equals 100 rads. Total doses less than 2.5 MGy did not significantly change the tensile properties of the film whereas doses higher than 2.5 MGy significantly reduced elongation-to-failure. There was no measurable effect of the dose rate on the tensile properties for accelerated electron exposures.

  12. Photo-assisted etching of silicon in chlorine- and bromine-containing plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Weiye; Sridhar, Shyam; Liu, Lei

    2014-05-28

    Cl2, Br2, HBr, Br2/Cl2, and HBr/Cl2 feed gases diluted in Ar (50%–50% by volume) were used to study etching of p-type Si(100) in a rf inductively coupled, Faraday-shielded plasma, with a focus on the photo-assisted etching component. Etching rates were measured as a function of ion energy. Etching at ion energies below the threshold for ion-assisted etching was observed in all cases, with Br2/Ar and HBr/Cl2/Ar plasmas having the lowest and highest sub-threshold etching rates, respectively. Sub-threshold etching rates scaled with the product of surface halogen coverage (measured by X-ray photoelectron spectroscopy) and Ar emission intensity (7504 Å). Etching rates measured under MgF2, quartz, and opaque windows showed that sub-threshold etching is due to photon-stimulated processes on the surface, with vacuum ultraviolet photons being much more effective than longer wavelengths. Scanning electron and atomic force microscopy revealed that photo-etched surfaces were very rough, quite likely due to the inability of the photo-assisted process to remove contaminants from the surface. Photo-assisted etching in Cl2/Ar plasmas resulted in the formation of 4-sided pyramidal features with bases that formed an angle of 45° with respect to 〈110〉 cleavage planes, suggesting that photo-assisted etching can be sensitive to crystal orientation.

  13. Use of video-based education and tele-health home monitoring after liver transplantation: Results of a novel pilot study.

    PubMed

    Ertel, Audrey E; Kaiser, Tiffany E; Abbott, Daniel E; Shah, Shimul A

    2016-10-01

    In this observational study, we analyzed the feasibility and early results of a perioperative, video-based educational program and tele-health home monitoring model on postoperative care management and readmissions for patients undergoing liver transplantation. Twenty consecutive liver transplantation recipients were provided with tele-health home monitoring and an educational video program during the perioperative period. Vital statistics were tracked and monitored daily with emphasis placed on readings outside of the normal range (threshold violations). Additionally, responses to effectiveness questionnaires were collected retrospectively for analysis. In the study, 19 of the 20 patients responded to the effectiveness questionnaire, with 95% reporting having watched all 10 videos, 68% watching some more than once, and 100% finding them effective in improving their preparedness for understanding their postoperative care. Among these 20 patients, there was an observed 19% threshold violation rate for systolic blood pressure, 6% threshold violation rate for mean blood glucose concentrations, and 8% threshold violation rate for mean weights. This subset of patients had a 90-day readmission rate of 30%. This observational study demonstrates that tele-health home monitoring and video-based educational programs are feasible in liver transplantation recipients and seem to be effective in enhancing the monitoring of vital statistics postoperatively. These data suggest that smart technology is effective in creating a greater awareness and understanding of how to manage postoperative care after liver transplantation. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. The self-perception of dyspnoea threshold during the 6-min walk test: a good alternative to estimate the ventilatory threshold in chronic obstructive pulmonary disease.

    PubMed

    Couillard, Annabelle; Tremey, Emilie; Prefaut, Christian; Varray, Alain; Heraud, Nelly

    2016-12-01

    To determine and/or adjust exercise training intensity for patients when the cardiopulmonary exercise test is not accessible, the determination of the dyspnoea threshold (defined as the onset of self-perceived breathing discomfort) during the 6-min walk test (6MWT) could be a good alternative. The aim of this study was to evaluate the feasibility and reproducibility of the self-perceived dyspnoea threshold and to determine whether a useful equation to estimate the ventilatory threshold from the self-perceived dyspnoea threshold could be derived. A total of 82 patients were included and performed two 6MWTs, during which they raised a hand to signal the self-perceived dyspnoea threshold. The reproducibility in terms of heart rate (HR) was analysed. On a subsample of patients (n=27), a stepwise regression analysis was carried out to obtain a predictive equation of HR at the ventilatory threshold measured during a cardiopulmonary exercise test, estimated from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s. Overall, 80% of patients could identify the self-perceived dyspnoea threshold during the 6MWT. The self-perceived dyspnoea threshold was reproducibly expressed in HR (coefficient of variation=2.8%). A stepwise regression analysis enabled estimation of HR at the ventilatory threshold from HR at the self-perceived dyspnoea threshold, age and forced expiratory volume in 1 s (adjusted r = 0.79, r² = 0.63, relative standard deviation = 9.8 bpm). This study shows that a majority of patients with chronic obstructive pulmonary disease can identify a self-perceived dyspnoea threshold during the 6MWT. This HR at the dyspnoea threshold is highly reproducible and enables estimation of the HR at the ventilatory threshold.

  15. [Determination of the anaerobic threshold by the rate of ventilation and cardio interval variability].

    PubMed

    Seluianov, V N; Kalinin, E M; Pak, G D; Maevskaia, V I; Konrad, A H

    2011-01-01

    The aim of this work is to develop methods for determining the anaerobic threshold from the rate of ventilation and from cardio-interval variability during a test with stepwise increasing load on a cycle ergometer and a treadmill. In the first phase, a method for determining the anaerobic threshold from lung ventilation was developed. 49 highly skilled skiers took part in the experiment. They performed a treadmill ski-walking test with poles, with the slope gradually increasing from 0 to 25 degrees, by one degree every minute. In the second phase, a method for determining the anaerobic threshold from the dynamics of cardio-interval variability during the test was developed. The study included 86 athletes of different sports specialties who pedaled on a "Monarch" cycle ergometer. Initial power was 25 W and increased by 25 W every 2 min; the cadence was held steady at 75 rev/min. Pulmonary ventilation and oxygen and carbon dioxide content were measured using a COSMED K4 gas analyzer. Arterial blood was sampled from the ear lobe or finger, and blood lactate concentration was determined using an "Akusport" instrument. RR intervals were registered using a Polar s810i heart rate monitor. As a result, it was shown that the graphical method for determining the onset of the ventilatory anaerobic threshold (VAnT) coincides with a blood lactate accumulation of 3.8 +/- 0.1 mmol/l when testing on the treadmill and 4.1 +/- 0.6 mmol/l on the cycle ergometer. The connection between oxygen consumption at VAnT and the dispersion of cardio intervals (SD1) yielded the regression equation: VO2AnT = 0.35 + 0.01SD1W + 0.0016SD1HR + 0.106SD1(ms), l/min (R = 0.98, error of estimate 0.26 l/min, p < 0.001), where W is power (W), HR is heart rate (beats/min), and SD1 is the cardio-interval dispersion (ms) at the moment the cardio-interval threshold is registered.
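    The reported regression can be turned into a one-line estimator. Note a loud caveat: the record's term grouping ("SD1W", "SD1HR", "SD1(ms)") is ambiguous, and this sketch assumes the three regressors are simply power W, heart rate HR, and SD1 at the moment the threshold is registered. That reading is ours, not confirmed by the record.

    ```python
    def vo2_anaerobic_threshold(power_w, hr_bpm, sd1_ms):
        """Estimate VO2 at the anaerobic threshold (l/min) from the
        regression reported in the abstract. ASSUMPTION: the record's
        ambiguous terms "SD1W", "SD1HR", "SD1(ms)" are read here as the
        separate regressors power (W), heart rate (bpm), and SD1 (ms)."""
        return 0.35 + 0.01 * power_w + 0.0016 * hr_bpm + 0.106 * sd1_ms

    # Example call with made-up inputs (150 W, 140 bpm, SD1 = 12 ms):
    print(vo2_anaerobic_threshold(150, 140, 12))
    ```

    If the original terms are in fact products involving SD1, the coefficients apply to those products instead and the function body would change accordingly.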

  16. Testing the limits of Rodent Sperm Analysis: azoospermia in an otherwise healthy wild rodent population.

    PubMed

    Tannenbaum, Lawrence V; Thran, Brandolyn H; Willams, Keith J

    2009-01-01

    By comparing the sperm parameters of small rodents trapped at contaminated terrestrial sites and nearby habitat-matched noncontaminated locations, the patent-pending Rodent Sperm Analysis (RSA) method provides a direct health status appraisal for the maximally chemical-exposed mammalian ecological receptor in the wild. RSA outcomes have consistently allowed for as definitive determinations of receptor health as are possible at the present time, thereby streamlining the ecological risk assessment (ERA) process. Here, we describe the unanticipated discovery, at a contaminated US EPA Superfund National Priorities List site, of a population of Hispid cotton rats (Sigmodon hispidus), with a high percentage of adult males lacking sperm entirely (azoospermia). In light of the RSA method's role in streamlining ERAs and in bringing contaminated Superfund-type site investigations to closure, we consider the consequences of the discovery. The two matters specifically discussed are (1) the computation of a population's average sperm count where azoospermia is present and (2) the merits of the RSA method and its sperm parameter thresholds-for-effect when azoospermia is masked in an otherwise apparently healthy rodent population.

  17. KSC-06pd2878

    NASA Image and Video Library

    2006-12-22

    KENNEDY SPACE CENTER, FLA. -- Bill Gerstenmaier, NASA associate administrator for Space Operations; Sigmar Wittig, head of the DLR, the German Space Agency; Mike Griffin, NASA administrator; and Michel Tognini, head of the European Astronaut Center, examine the thermal protection system tiles beneath Space Shuttle Discovery following the landing of mission STS-116 on Runway 15 at NASA Kennedy Space Center's Shuttle Landing Facility. During the STS-116 mission, three spacewalks attached the P5 integrated truss structure to the station, and completed the rewiring of the orbiting laboratory's power system. A fourth spacewalk retracted a stubborn solar array. Main gear touchdown was at 5:32 p.m. EST. Nose gear touchdown was at 5:32:12 p.m. and wheel stop was at 5:32:52 p.m. At touchdown -- nominally about 2,500 ft. beyond the runway threshold -- the orbiter is traveling at a speed ranging from 213 to 226 mph. Discovery traveled 5,330,000 miles, landing on orbit 204. Mission elapsed time was 12 days, 20 hours, 44 minutes and 16 seconds. This is the 64th landing at KSC. Photo credit: NASA/Kim Shiflett

  18. KSC-06pd2879

    NASA Image and Video Library

    2006-12-22

    KENNEDY SPACE CENTER, FLA. -- Sigmar Wittig, head of the DLR, the German Space Agency; Bill Gerstenmaier, NASA associate administrator for Space Operations; Mike Griffin, NASA administrator; Michel Tognini, head of the European Astronaut Center; and Bill Parsons, Kennedy Space Center deputy director, examine the thermal protection system tiles beneath Space Shuttle Discovery following the landing of mission STS-116 on Runway 15 at NASA Kennedy Space Center's Shuttle Landing Facility. During the STS-116 mission, three spacewalks attached the P5 integrated truss structure to the station, and completed the rewiring of the orbiting laboratory's power system. A fourth spacewalk retracted a stubborn solar array. Main gear touchdown was at 5:32 p.m. EST. Nose gear touchdown was at 5:32:12 p.m. and wheel stop was at 5:32:52 p.m. At touchdown -- nominally about 2,500 ft. beyond the runway threshold -- the orbiter is traveling at a speed ranging from 213 to 226 mph. Discovery traveled 5,330,000 miles, landing on orbit 204. Mission elapsed time was 12 days, 20 hours, 44 minutes and 16 seconds. This is the 64th landing at KSC. Photo credit: NASA/Kim Shiflett

  19. Effective Tooling for Linked Data Publishing in Scientific Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purohit, Sumit; Smith, William P.; Chappell, Alan R.

    Challenges that make it difficult to find, share, and combine published data, such as data heterogeneity and resource discovery, have led to increased adoption of semantic data standards and data publishing technologies. To make data more accessible, interconnected and discoverable, some domains are being encouraged to publish their data as Linked Data. Consequently, this trend greatly increases the amount of data that semantic web tools are required to process, store, and interconnect. In attempting to process and manipulate large data sets, tools, ranging from simple text editors to modern triplestores, eventually break down upon reaching undefined thresholds. This paper offers a systematic approach that data publishers can use to categorize suitable tools to meet their data publishing needs. We present a real-world use case, the Resource Discovery for Extreme Scale Collaboration (RDESC), which features a scientific dataset (maximum size of 1.4 billion triples) used to evaluate a toolbox for data publishing in climate research. This paper also introduces a semantic data publishing software suite developed for the RDESC project.

  20. Cup inclination angle of greater than 50 degrees increases whole blood concentrations of cobalt and chromium ions after metal-on-metal hip resurfacing.

    PubMed

    Hart, A J; Buddhdev, P; Winship, P; Faria, N; Powell, J J; Skinner, J A

    2008-01-01

    A cup inclination angle greater than 45 degrees is associated with increased wear rates of metal-on-polyethylene (MOP) hip replacements. The same may be true for metal-on-metal (MOM) hips, yet this has not been clearly shown. We measured the acetabular inclination angle from plain radiographs, and whole blood metal ion levels using inductively coupled plasma mass spectrometry, in 26 patients (mean Harris Hip Score 94; mean time post-op 22 months) with Birmingham Hip Resurfacings. We identified a threshold level of 50 degrees cup inclination. Below this threshold, the mean whole blood cobalt and chromium were 1.6 ppb and 1.88 ppb respectively; above this threshold, the mean blood cobalt and chromium were 4.45 ppb and 4.3 ppb respectively. These differences were significant for both cobalt (p<0.01) and chromium (p=0.01). All patients above the threshold had metal levels greater than any of the patients below the threshold. For 14 patients who returned one year later for a repeat blood metal level measurement, cobalt and chromium levels were very similar. The effect of an acetabular inclination angle of greater than 50 degrees on wear rates of MOM hips, as measured through blood metal ion levels, appears to be similar to that seen with MOP hips. Additionally, our new analytical methods may allow blood metal levels to be used as a realistic biomarker of in vivo wear rate of MOM hips. The implication is that metal levels can be minimised with optimal orientation of the acetabular component.

  1. Pulse oximeter based mobile biotelemetry application.

    PubMed

    Işik, Ali Hakan; Güler, Inan

    2012-01-01

    Quality and features of tele-homecare are improved by information and communication technologies. In this context, a pulse oximeter-based mobile biotelemetry application was developed. With this application, patients can measure their own oxygen saturation and heart rate at home through a Bluetooth pulse oximeter. The Bluetooth virtual serial port protocol is used to send the test results from the pulse oximeter to a smart phone. These data are converted into XML and transmitted to a remote web server database via the smart phone; GPRS, WLAN or 3G can be used for transmission. A rule-based algorithm is used in the decision-making process. By default, the threshold value for oxygen saturation is 80, and the lower and upper heart rate thresholds are 40 and 150, respectively. If the patient's heart rate is outside the threshold values or the oxygen saturation is below its threshold, an emergency SMS is sent to the doctor, who can then direct an ambulance to the patient. The doctor can change these threshold values for individual patients. The conversion of the evaluated data into the SMS XML template is done on the web server. Another important component of the application is web-based monitoring of pulse oximeter data. The web page provides access to all patient data, so doctors can follow their patients and send e-mail related to the evaluation of the disease. In addition, patients can follow their own data on this page. Eight patients took part in the procedure. It is believed that the developed application will facilitate pulse oximeter-based measurement from anywhere and at any time.
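
The alert rule described in this abstract is simple enough to sketch directly. The thresholds (oxygen saturation below 80, heart rate outside 40-150) are taken from the abstract; the function and constant names are illustrative assumptions, not from the application's actual code:

```python
# Default thresholds reported in the abstract (names are hypothetical).
SPO2_MIN = 80              # oxygen saturation threshold (%)
HR_MIN, HR_MAX = 40, 150   # heart rate thresholds (bpm)

def needs_emergency_sms(spo2: float, heart_rate: float) -> bool:
    """Return True if a reading should trigger an emergency SMS:
    saturation below its threshold, or heart rate outside its range."""
    return spo2 < SPO2_MIN or not (HR_MIN <= heart_rate <= HR_MAX)
```

In the described system this decision runs server-side after the smart phone uploads each XML-encoded reading; the per-patient thresholds would simply replace the module-level defaults.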

  2. Pain Intensity Recognition Rates via Biopotential Feature Patterns with Support Vector Machines

    PubMed Central

    Gruss, Sascha; Treister, Roi; Werner, Philipp; Traue, Harald C.; Crawcour, Stephen; Andrade, Adriano; Walter, Steffen

    2015-01-01

    Background The clinically used methods of pain diagnosis do not allow for objective and robust measurement, and physicians must rely on the patient’s report on the pain sensation. Verbal scales, visual analog scales (VAS) or numeric rating scales (NRS) count among the most common tools, which are restricted to patients with normal mental abilities. There also exist instruments for pain assessment in people with verbal and / or cognitive impairments and instruments for pain assessment in people who are sedated and automated ventilated. However, all these diagnostic methods either have limited reliability and validity or are very time-consuming. In contrast, biopotentials can be automatically analyzed with machine learning algorithms to provide a surrogate measure of pain intensity. Methods In this context, we created a database of biopotentials to advance an automated pain recognition system, determine its theoretical testing quality, and optimize its performance. Eighty-five participants were subjected to painful heat stimuli (baseline, pain threshold, two intermediate thresholds, and pain tolerance threshold) under controlled conditions and the signals of electromyography, skin conductance level, and electrocardiography were collected. A total of 159 features were extracted from the mathematical groupings of amplitude, frequency, stationarity, entropy, linearity, variability, and similarity. Results We achieved classification rates of 90.94% for baseline vs. pain tolerance threshold and 79.29% for baseline vs. pain threshold. The most selected pain features stemmed from the amplitude and similarity group and were derived from facial electromyography. Conclusion The machine learning measurement of pain in patients could provide valuable information for a clinical team and thus support the treatment assessment. PMID:26474183
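
As a rough illustration of the amplitude-grouping features mentioned above, two classic amplitude descriptors of a biopotential window can be computed as follows. This is a minimal sketch only; the study's actual pipeline of 159 features across seven groupings and its classifier are not reproduced here:

```python
import math

def amplitude_features(signal):
    """Two simple amplitude-domain features of a biopotential window:
    mean absolute value (MAV) and root-mean-square (RMS). Illustrative
    examples of the kind of amplitude features used in pain recognition."""
    n = len(signal)
    mav = sum(abs(x) for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    return mav, rms
```

Feature vectors like these, computed per stimulus window from EMG, skin conductance, and ECG, would then be fed to a classifier such as a support vector machine.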

  3. Baseline Tumor Lipiodol Uptake after Transarterial Chemoembolization for Hepatocellular Carcinoma: Identification of a Threshold Value Predicting Tumor Recurrence.

    PubMed

    Matsui, Yusuke; Horikawa, Masahiro; Jahangiri Noudeh, Younes; Kaufman, John A; Kolbeck, Kenneth J; Farsad, Khashayar

    2017-12-01

    The aim of the study was to evaluate the association between baseline Lipiodol uptake in hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with early tumor recurrence, and to identify a threshold baseline uptake value predicting tumor response. A single-institution retrospective database of HCC treated with Lipiodol-TACE was reviewed. Forty-six tumors in 30 patients treated with a Lipiodol-chemotherapy emulsion and no additional particle embolization were included. Baseline Lipiodol uptake was measured as the mean Hounsfield units (HU) on a CT within one week after TACE. Washout rate was calculated dividing the difference in HU between the baseline CT and follow-up CT by time (HU/month). Cox proportional hazard models were used to correlate baseline Lipiodol uptake and other variables with tumor response. A receiver operating characteristic (ROC) curve was used to identify the optimal threshold for baseline Lipiodol uptake predicting tumor response. During the follow-up period (mean 5.6 months), 19 (41.3%) tumors recurred (mean time to recurrence = 3.6 months). In a multivariate model, low baseline Lipiodol uptake and higher washout rate were significant predictors of early tumor recurrence ( P = 0.001 and < 0.0001, respectively). On ROC analysis, a threshold Lipiodol uptake of 270.2 HU was significantly associated with tumor response (95% sensitivity, 93% specificity). Baseline Lipiodol uptake and washout rate on follow-up were independent predictors of early tumor recurrence. A threshold value of baseline Lipiodol uptake > 270.2 HU was highly sensitive and specific for tumor response. These findings may prove useful for determining subsequent treatment strategies after Lipiodol TACE.
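
The washout-rate definition and the ROC-derived uptake threshold from this abstract can be sketched as follows (function names are illustrative; 270.2 HU is the reported cut-off):

```python
def washout_rate(baseline_hu, followup_hu, months_elapsed):
    """Washout rate as defined in the abstract: the difference in mean
    Lipiodol attenuation (HU) between baseline and follow-up CT,
    divided by the elapsed time, in HU/month."""
    return (baseline_hu - followup_hu) / months_elapsed

RESPONSE_THRESHOLD_HU = 270.2  # baseline uptake cut-off from the ROC analysis

def predicts_response(baseline_hu):
    """Baseline uptake above the threshold predicted tumor response
    (95% sensitivity, 93% specificity in the reported cohort)."""
    return baseline_hu > RESPONSE_THRESHOLD_HU
```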

  4. Load type influences motor unit recruitment in biceps brachii during a sustained contraction.

    PubMed

    Baudry, Stéphane; Rudroff, Thorsten; Pierpoint, Lauren A; Enoka, Roger M

    2009-09-01

    Twenty subjects participated in four experiments designed to compare time to task failure and motor-unit recruitment threshold during contractions sustained at 15% of maximum as the elbow flexor muscles either supported an inertial load (position task) or exerted an equivalent constant torque against a rigid restraint (force task). Subcutaneous branched bipolar electrodes were used to record single motor unit activity from the biceps brachii muscle during ramp contractions performed before and at 50 and 90% of the time to failure for the position task during both fatiguing contractions. The time to task failure was briefer for the position task than for the force task (P=0.0002). Thirty and 29 motor units were isolated during the force and position tasks, respectively. The recruitment threshold declined by 48 and 30% (P=0.0001) during the position task for motor units with an initial recruitment threshold below and above the target force, respectively, whereas no significant change in recruitment threshold was observed during the force task. Changes in recruitment threshold were associated with a decrease in the mean discharge rate (-16%), an increase in discharge rate variability (+40%), and a prolongation of the first two interspike intervals (+29 and +13%). These data indicate that there were faster changes in motor unit recruitment and rate coding during the position task than the force task despite a similar net muscle torque during both tasks. Moreover, the results suggest that the differential synaptic input observed during the position task influences most of the motor unit pool.

  5. The Perth Automated Supernova Search

    NASA Astrophysics Data System (ADS)

    Williams, A. J.

    1997-12-01

    An automated search for supernovae in late spiral galaxies has been established at Perth Observatory, Western Australia. This automated search uses three low-cost PC-clone computers, a liquid nitrogen cooled CCD camera built locally, and a 61-cm telescope automated for the search. The images are all analysed automatically in real-time by routines in Perth Vista, the image processing system ported to the PC architecture for the search system. The telescope control software written for the project, Teljoy, maintains open-loop position accuracy better than 30" of arc after hundreds of jumps over an entire night. Total capital cost to establish and run this supernova search over the seven years of development and operation was around US$30,000. To date, the system has discovered a total of 6 confirmed supernovae, made an independent detection of a seventh, and detected one unconfirmed event assumed to be a supernova. The various software and hardware components of the search system are described in detail, the analysis of the first three years of data is discussed, and results presented. We find a Type Ib/c rate of 0.43 +/- 0.43 SNu, and a Type IIP rate of 0.86 +/- 0.49 SNu, where SNu are 'supernova units', expressed in supernovae per 10^10 solar blue luminosity galaxy per century. These values are for a Hubble constant of 75 km/s per Mpc, and scale as (H0/75)^2. The small number of discoveries has left large statistical uncertainties, but our strategy of frequent observations has reduced systematic errors - altering detection threshold or peak supernova luminosity by +/- 0.5 mag changes estimated rates by only around 20%. Similarly, adoption of different light curve templates for Type Ia and Type IIP supernovae has a minimal effect on the final statistics (2% and 4% change, respectively).
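
The quoted rates are for H0 = 75 km/s per Mpc and scale as (H0/75)^2; a one-line sketch of that rescaling (function name is an assumption):

```python
def scale_rate(rate_snu, h0):
    """Rescale a supernova rate quoted for H0 = 75 km/s/Mpc to another
    Hubble constant, using the (H0/75)^2 scaling stated in the abstract."""
    return rate_snu * (h0 / 75.0) ** 2
```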

  6. The dark energy survey Y1 supernova search: Survey strategy compared to forecasts and the photometric Type Ia SN volumetric rate

    NASA Astrophysics Data System (ADS)

    Fischer, John Arthur

    For 70 years, the physics community operated under the assumption that the expansion of the Universe must be slowing due to gravitational attraction. Then, in 1998, two teams of scientists used Type Ia supernovae to discover that cosmic expansion was actually accelerating due to a mysterious "dark energy." As a result, Type Ia supernovae have become the most cosmologically important transient events in the last 20 years, with a large amount of effort going into their discovery as well as understanding their progenitor systems. One such probe for understanding Type Ia supernovae is to use rate measurements to determine the time delay between star formation and supernova explosion. For the last 30 years, the discovery of individual Type Ia supernova events has been accelerating. However, those discoveries were happening in time-domain surveys that probed only a portion of the redshift range where expansion was impacted by dark energy. The Dark Energy Survey (DES) is the first project in the "next generation" of time-domain surveys that will discover thousands of Type Ia supernovae out to a redshift of 1.2 (where dark energy becomes subdominant), and DES will have better systematic uncertainties over that redshift range than any survey to date. In order to gauge the discovery effectiveness of this survey, we will use the first season's 469 photometrically typed supernovae and compare them with simulations in order to update the full survey Type Ia projections from 3500 to 2250. We will then use 165 of the 469 supernovae out to a redshift of 0.6 to measure the supernova rate both as a function of comoving volume and of the star formation rate as it evolves with redshift. We find the most statistically significant prompt fraction of any survey to date (with a 3.9σ prompt fraction detection). We will also reinforce the already existing tension in the measurement of the delayed fraction between high (z > 1.2) and low redshift rate measurements, where we find no significant evidence of a delayed fraction at all in our photometric sample.

  7. Effects of linoleic acid on sweet, sour, salty, and bitter taste thresholds and intensity ratings of adults.

    PubMed

    Mattes, Richard D

    2007-05-01

    Evidence supporting a taste component for dietary fat has prompted study of plausible transduction mechanisms. One hypothesis holds that long-chain, unsaturated fatty acids block selected delayed-rectifying potassium channels, resulting in a sensitization of taste receptor cells to stimulation by other taste compounds. This was tested in 17 male and 17 female adult (mean +/- SE age = 23.4 +/- 0.7 yr) propylthiouracil tasters with normal resting triglyceride concentrations (87.3 +/- 5.6 mg/dl) and body mass index (23.3 +/- 0.4 kg/m(2)). Participants were tested during two approximately 30-min test sessions per week for 8 wk. Eight stimuli were assessed in duplicate via an ascending, three-alternative, forced-choice procedure. Qualities were randomized over weeks. Stimuli were presented as room-temperature, 5-ml portions. They included 1% solutions of linoleic acid with added sodium chloride (salty), sucrose (sweet), citric acid (sour), and caffeine (bitter) as well as solutions of these taste compounds alone. Participants also rated the intensity of the five strongest concentrations using the general labeled magnitude scale. The suprathreshold samples were presented in random order with a rinse between each. Subjects made the ratings at their own pace while wearing nose clips. It was hypothesized that taste thresholds would be lower and absolute intensity ratings or slopes of intensity functions would be higher for the stimuli mixed with the linoleic acid. Thresholds were compared by paired t-tests and intensity ratings by repeated measures analysis of variance. Thresholds were significantly higher (i.e., lower sensitivity) for the sodium chloride, citric acid, and caffeine solutions with added fatty acid. Sweet, sour, and salty intensity ratings were lower or unchanged by the addition of a fatty acid. The two highest concentrations of caffeine were rated as weaker in the presence of linoleic acid. 
These data do not support a mechanism for detecting dietary fats whereby fatty acids sensitize taste receptor cells to stimulation by taste compounds.

  8. Intensity level for exercise training in fibromyalgia by using mathematical models.

    PubMed

    Lemos, Maria Carolina D; Valim, Valéria; Zandonade, Eliana; Natour, Jamil

    2010-03-22

    It has not been assessed before whether mathematical models described in the literature for prescriptions of exercise can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations as well as to find out which mathematical models are more accurate to control exercise intensity. A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at anaerobic threshold. Age-predicted formulas for maximum heart rate ("220 minus age" and "208 minus 0.7 x age") were correlated with achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for correlations, with residual analysis for the adequacy of the models. Age-predicted HRMax and HRAT formulas had a good correlation with achieved heart rate obtained in spiroergometry (r = 0.642; p < 0.05). For exercise prescription at anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using heart rate achieved in the ergometric test showed a better correlation. For the prescription of exercises at threshold intensity, 52% to 60% HRR or 75% to 80% HRMax must be used in sedentary women with fibromyalgia syndrome; these values are higher for trained patients and must be corrected accordingly.
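
The two age-predicted HRMax formulas and the heart-rate-reserve (Karvonen) calculation discussed above can be sketched as follows. Function names are illustrative; the 0.52-0.60 HRR fraction is the anaerobic-threshold range reported for sedentary patients:

```python
def hr_max_classic(age):
    """Age-predicted maximum heart rate, "220 minus age"."""
    return 220 - age

def hr_max_tanaka(age):
    """Age-predicted maximum heart rate, "208 minus 0.7 x age"."""
    return 208 - 0.7 * age

def karvonen_target(hr_max, hr_rest, fraction):
    """Karvonen (heart rate reserve) formula: target training HR at a
    given fraction of HRR, e.g. 0.52-0.60 for the anaerobic-threshold
    intensity reported in the study."""
    return hr_rest + fraction * (hr_max - hr_rest)
```

For example, a 40-year-old with a resting heart rate of 60 bpm training at 52% HRR would target roughly karvonen_target(hr_max_classic(40), 60, 0.52) bpm.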

  9. Intensity level for exercise training in fibromyalgia by using mathematical models

    PubMed Central

    2010-01-01

    Background It has not been assessed before whether mathematical models described in the literature for prescriptions of exercise can be used for fibromyalgia syndrome patients. The objective of this paper was to determine how age-predicted heart rate formulas can be used with fibromyalgia syndrome populations as well as to find out which mathematical models are more accurate to control exercise intensity. Methods A total of 60 women aged 18-65 years with fibromyalgia syndrome were included; 32 were randomized to walking training at anaerobic threshold. Age-predicted formulas for maximum heart rate ("220 minus age" and "208 minus 0.7 × age") were correlated with achieved maximum heart rate (HRMax) obtained by spiroergometry. Subsequently, six mathematical models using heart rate reserve (HRR) and age-predicted HRMax formulas were studied to estimate the intensity level of exercise training corresponding to heart rate at anaerobic threshold (HRAT) obtained by spiroergometry. Linear and nonlinear regression models were used for correlations, with residual analysis for the adequacy of the models. Results Age-predicted HRMax and HRAT formulas had a good correlation with achieved heart rate obtained in spiroergometry (r = 0.642; p < 0.05). For exercise prescription at anaerobic threshold intensity, the percentages were 52.2-60.6% HRR and 75.5-80.9% HRMax. Formulas using HRR and the achieved HRMax showed better correlation. Furthermore, the percentages of HRMax and HRR were significantly higher for the trained individuals (p < 0.05). Conclusion Age-predicted formulas can be used for estimating HRMax and for exercise prescriptions in women with fibromyalgia syndrome. Karvonen's formula using heart rate achieved in the ergometric test showed a better correlation. 
For the prescription of exercises in the threshold intensity, 52% to 60% HRR or 75% to 80% HRMax must be used in sedentary women with fibromyalgia syndrome and these values are higher and must be corrected for trained patients. PMID:20307323

  10. Bitter-tasting and kokumi-enhancing molecules in thermally processed avocado (Persea americana Mill.).

    PubMed

    Degenhardt, Andreas Georg; Hofmann, Thomas

    2010-12-22

    Sequential application of solvent extraction and RP-HPLC in combination with taste dilution analyses (TDA) and comparative TDA, followed by LC-MS and 1D/2D NMR experiments, led to the discovery of 10 C(17)-C(21) oxylipins with 1,2,4-trihydroxy-, 1-acetoxy-2,4-dihydroxy-, and 1-acetoxy-2-hydroxy-4-oxo motifs, respectively, besides 1-O-stearoyl-glycerol and 1-O-linoleoyl-glycerol as bitter-tasting compounds in thermally processed avocado (Persea americana Mill.). On the basis of quantitative data, dose-over-threshold (DoT) factors, and taste re-engineering experiments, these phytochemicals, among which 1-acetoxy-2-hydroxy-4-oxo-octadeca-12-ene was found with the highest taste impact, were confirmed to be the key contributors to the bitter off-taste developed upon thermal processing of avocado. For the first time, those C(17)-C(21) oxylipins exhibiting a 1-acetoxy-2,4-dihydroxy- and a 1-acetoxy-2-hydroxy-4-oxo motif, respectively, were discovered to induce a mouthfulness (kokumi)-enhancing activity in sub-bitter threshold concentrations.

  11. Implications of heavy quark-diquark symmetry for excited doubly heavy baryons and tetraquarks

    NASA Astrophysics Data System (ADS)

    Mehen, Thomas

    2017-11-01

    We give heavy quark-diquark symmetry predictions for doubly heavy baryons and tetraquarks in light of the recent discovery of the Ξcc ++ by LHCb. For five excited doubly charm baryons that are predicted to lie below the ΛcD threshold, we give predictions for their electromagnetic and strong decays using a previously developed chiral Lagrangian with heavy quark-diquark symmetry. Based on the mass of the Ξcc ++, the existence of a doubly heavy bottom I =0 tetraquark that is stable to strong and electromagnetic decays has been predicted. If the mass of this state is below 10405 MeV, as predicted in some models, we argue using heavy quark-diquark symmetry that the JP=1+ I =1 doubly bottom tetraquark state will lie just below the open bottom threshold and likely be a narrow state as well. In this scenario, we compute the strong decay width for this state using a new Lagrangian for doubly heavy tetraquarks which is related to the singly heavy baryon Lagrangian by heavy quark-diquark symmetry.

  12. Fault detection and diagnosis for refrigerator from compressor sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keres, Stephen L.; Gomes, Alberto Regio; Litch, Andrew D.

    A refrigerator, a sealed refrigerant system, and method are provided where the refrigerator includes at least a refrigerated compartment and a sealed refrigerant system including an evaporator, a compressor, a condenser, a controller, an evaporator fan, and a condenser fan. The method includes monitoring a frequency of the compressor, and identifying a fault condition in the at least one component of the refrigerant sealed system in response to the compressor frequency. The method may further comprise calculating a compressor frequency rate based upon the rate of change of the compressor frequency, wherein a fault in the condenser fan is identified if the compressor frequency rate is positive and exceeds a condenser fan fault threshold rate, and wherein a fault in the evaporator fan is identified if the compressor frequency rate is negative and exceeds an evaporator fan fault threshold rate.
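
The fault logic paraphrased from this patent abstract might be sketched as follows. The threshold values and all names are hypothetical illustrations, not taken from any actual appliance firmware:

```python
def diagnose_fan_fault(freq_rate, condenser_threshold, evaporator_threshold):
    """Classify a fan fault from the compressor frequency rate of change,
    per the abstract: a positive rate beyond one threshold implicates the
    condenser fan; a negative rate whose magnitude exceeds another
    threshold implicates the evaporator fan."""
    if freq_rate > 0 and freq_rate > condenser_threshold:
        return "condenser_fan_fault"
    if freq_rate < 0 and abs(freq_rate) > evaporator_threshold:
        return "evaporator_fan_fault"
    return "no_fault"
```

A controller would call this periodically with a smoothed estimate of d(frequency)/dt; the two thresholds would be tuned per appliance model.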

  13. Growth rate and mitotic index analysis of Vicia faba L. roots exposed to 60-Hz electric fields.

    PubMed

    Inoue, M; Miller, M W; Cox, C; Carstesen, E L

    1985-01-01

    Growth, mitotic index, and growth rate recovery were determined for Vicia faba L. roots exposed to 60-Hz electric fields of 200, 290, and 360 V/m in an aqueous inorganic nutrient medium (conductivity 0.07-0.09 S/m). Root growth rate decreased in proportion to the increasing strength; the electric field threshold for a growth rate effect was about 230 V/m. The induced transmembrane potential at the threshold exposure was about 4-7 mV. The mitotic index was not affected by an electric field exposure sufficient to reduce root growth rate to about 35% of control. Root growth rate recovery from 31-96% of control occurred in 4 days after cessation of the 360 V/m exposure. The results support the postulate that the site of action of the applied electric fields is the cell membrane.

  14. Neurometric amplitude-modulation detection threshold in the guinea-pig ventral cochlear nucleus

    PubMed Central

    Sayles, Mark; Füllgrabe, Christian; Winter, Ian M

    2013-01-01

    Amplitude modulation (AM) is a pervasive feature of natural sounds. Neural detection and processing of modulation cues is behaviourally important across species. Although most ecologically relevant sounds are not fully modulated, physiological studies have usually concentrated on fully modulated (100% modulation depth) signals. Psychoacoustic experiments mainly operate at low modulation depths, around detection threshold (∼5% AM). We presented sinusoidal amplitude-modulated tones, systematically varying modulation depth between zero and 100%, at a range of modulation frequencies, to anaesthetised guinea-pigs while recording spikes from neurons in the ventral cochlear nucleus (VCN). The cochlear nucleus is the site of the first synapse in the central auditory system. At this locus significant signal processing occurs with respect to representation of AM signals. Spike trains were analysed in terms of the vector strength of spike synchrony to the amplitude envelope. Neurons showed either low-pass or band-pass temporal modulation transfer functions, with the proportion of band-pass responses increasing with increasing sound level. The proportion of units showing a band-pass response varies with unit type: sustained chopper (CS) > transient chopper (CT) > primary-like (PL). Spike synchrony increased with increasing modulation depth. At the lowest modulation depth (6%), significant spike synchrony was only observed near to the unit's best modulation frequency for all unit types tested. Modulation tuning therefore became sharper with decreasing modulation depth. AM detection threshold was calculated for each individual unit as a function of modulation frequency. Chopper units have significantly better AM detection thresholds than do primary-like units. AM detection threshold is significantly worse at 40 dB vs. 10 dB above pure-tone spike rate threshold. 
Mean modulation detection thresholds for sounds 10 dB above pure-tone spike rate threshold at best modulation frequency are (95% CI) 11.6% (10.0–13.1) for PL units, 9.8% (8.2–11.5) for CT units, and 10.8% (8.4–13.2) for CS units. The most sensitive guinea-pig VCN single unit AM detection thresholds are similar to human psychophysical performance (∼3% AM), while the mean neurometric thresholds approach whole animal behavioural performance (∼10% AM). PMID:23629508
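
Vector strength, the synchrony measure used in the analysis above, treats each spike as a unit vector at its phase within the modulation cycle and takes the length of the mean vector; a minimal sketch:

```python
import math

def vector_strength(spike_times, modulation_freq):
    """Vector strength of spike synchrony to a sinusoidal envelope:
    1 = perfect phase locking, 0 = spikes uniformly spread in phase."""
    phases = [2 * math.pi * modulation_freq * t for t in spike_times]
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)
```

Spikes landing at the same envelope phase every cycle give a value near 1, while spikes scattered evenly across the cycle give a value near 0.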

  15. Melodic interval perception by normal-hearing listeners and cochlear implant users

    PubMed Central

    Luo, Xin; Masterson, Megan E.; Wu, Ching-Chih

    2014-01-01

    The perception of melodic intervals (sequential pitch differences) is essential to music perception. This study tested melodic interval perception in normal-hearing (NH) listeners and cochlear implant (CI) users. Melodic interval ranking was tested using an adaptive procedure. CI users had slightly higher interval ranking thresholds than NH listeners. Both groups' interval ranking thresholds, although not affected by root note, significantly increased with standard interval size and were higher for descending intervals than for ascending intervals. The pitch direction effect may be due to a procedural artifact or a difference in central processing. In another test, familiar melodies were played with all the intervals scaled by a single factor. Subjects rated how in tune the melodies were and adjusted the scaling factor until the melodies sounded the most in tune. CI users had lower final interval ratings and less change in interval rating as a function of scaling factor than NH listeners. For CI users, the root-mean-square error of the final scaling factors and the width of the interval rating function were significantly correlated with the average ranking threshold for ascending rather than descending intervals, suggesting that CI users may have focused on ascending intervals when rating and adjusting the melodies. PMID:25324084

  16. Better cancer biomarker discovery through better study design.

    PubMed

    Rundle, Andrew; Ahsan, Habibul; Vineis, Paolo

    2012-12-01

    High-throughput laboratory technologies coupled with sophisticated bioinformatics algorithms have tremendous potential for discovering novel biomarkers, or profiles of biomarkers, that could serve as predictors of disease risk, response to treatment or prognosis. We discuss methodological issues in wedding high-throughput approaches for biomarker discovery with the case-control study designs typically used in biomarker discovery studies, especially focusing on nested case-control designs. We review principles for nested case-control study design in relation to biomarker discovery studies and describe how the efficiency of biomarker discovery can be affected by study design choices. We develop a simulated prostate cancer cohort data set and a series of biomarker discovery case-control studies nested within the cohort to illustrate how study design choices can influence the biomarker discovery process. Common elements of nested case-control design, incidence density sampling and matching of controls to cases, are not typically factored correctly into biomarker discovery analyses, inducing bias in the discovery process. We illustrate how incidence density sampling and matching of controls to cases reduce the apparent specificity of truly valid biomarkers 'discovered' in a nested case-control study. We also propose and demonstrate a new case-control matching protocol, which we call 'antimatching', that improves the efficiency of biomarker discovery studies. For a valid but as yet undiscovered biomarker, disjunctions between correctly designed epidemiologic studies and the practice of biomarker discovery reduce the likelihood that the true biomarker will be discovered and increase the false-positive discovery rate.

  17. Temperature-activity relationships in Meligethes aeneus: implications for pest management.

    PubMed

    Ferguson, Andrew W; Nevard, Lucy M; Clark, Suzanne J; Cook, Samantha M

    2015-03-01

    Pollen beetle (Meligethes aeneus F.) management in oilseed rape (Brassica napus L.) has become an urgent issue in the light of insecticide resistance. Risk prediction advice has relied upon flight temperature thresholds, while risk assessment uses simple economic thresholds. However, there is variation in the reported temperature of migration, and economic thresholds vary widely across Europe, probably owing to climatic factors interacting with beetle activity and plant compensation for damage. The effect of temperature on flight, feeding and oviposition activity of M. aeneus was examined in controlled conditions. Escape from a release vial was taken as evidence of flight and was supported by video observations. The propensity to fly followed a sigmoid temperature-response curve between 6 and 23 °C; the 10, 25 and 50% flight temperature thresholds were 12.0-12.5 °C, 13.6-14.2 °C and 15.5-16.2 °C, respectively. Thresholds were slightly higher in the second of two flight bioassays, suggesting an effect of beetle age. Strong positive relationships were found between temperature (6-20 °C) and the rates of feeding and oviposition on flower buds of oilseed rape. These temperature relationships could be used to improve M. aeneus migration risk assessment, refine weather-based decision support systems and modulate damage thresholds according to rates of bud damage.
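
The sigmoid temperature-response relationship described above can be illustrated with a logistic curve. The 50% threshold used below falls within the reported 15.5-16.2 °C range, but the slope parameter is purely illustrative, not a fitted value from the study:

```python
import math

def flight_propensity(temp_c, t50=15.8, slope=0.6):
    """Illustrative logistic temperature-response curve for flight
    propensity: t50 is the temperature at which 50% of beetles fly
    (within the reported range); slope is an assumed steepness."""
    return 1 / (1 + math.exp(-slope * (temp_c - t50)))
```

Given a fitted curve of this form, the 10% and 25% flight thresholds follow by solving for the temperature at which the propensity equals 0.10 or 0.25.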

  18. Viewpoint: Sustainability of piñon-juniper ecosystems - A unifying perspective of soil erosion thresholds

    USGS Publications Warehouse

    Davenport, David W.; Breshears, D.D.; Wilcox, B.P.; Allen, Craig D.

    1998-01-01

    Many piñon-juniper ecosystems in the western U.S. are subject to accelerated erosion, while others are undergoing little or no erosion. Controversy has developed over whether invading or encroaching piñon and juniper species are inherently harmful to rangeland ecosystems. We developed a conceptual model of soil erosion in piñon-juniper ecosystems that is consistent with both sides of the controversy and suggests that the diverse perspectives on this issue arise from threshold effects operating under very different site conditions. Soil erosion rate can be viewed as a function of (1) site erosion potential (SEP), determined by climate, geomorphology and soil erodibility; and (2) ground cover. Site erosion potential and cover act synergistically to determine soil erosion rates, as is evident even from simple USLE predictions of erosion. In piñon-juniper ecosystems with high SEP, the erosion rate is highly sensitive to ground cover and can cross a threshold so that erosion increases dramatically in response to a small decrease in cover. The sensitivity of erosion rate to SEP and cover can be visualized as a cusp catastrophe surface on which changes may occur rapidly and irreversibly. The mechanisms associated with a rapid shift from a low to a high erosion rate can be illustrated using percolation theory to incorporate spatial, temporal, and scale-dependent patterns of water storage capacity on a hillslope. Percolation theory demonstrates how hillslope runoff can undergo a threshold response to a minor change in storage capacity. Our conceptual model suggests that piñon and juniper contribute to accelerated erosion only under a limited range of site conditions, which, however, may exist over large areas.

  19. High-threshold motor unit firing reflects force recovery following a bout of damaging eccentric exercise.

    PubMed

    Macgregor, Lewis J; Hunter, Angus M

    2018-01-01

    Exercise-induced muscle damage (EIMD) is associated with impaired muscle function and reduced neuromuscular recruitment. However, motor unit firing behaviour throughout the recovery period is unclear. We hypothesized that EIMD impairment of maximal voluntary contraction (MVC) force would, in part, be caused by reduced high-threshold motor unit firing, which would subsequently increase as MVC recovered. Fourteen healthy active males completed a bout of eccentric exercise on the knee extensors, with measurements of MVC, rate of torque development and surface electromyography performed pre-exercise and 2, 3, 7 and 14 days post-exercise, on both the damaged and control limbs. EIMD was associated with decreased MVC (235.2 ± 49.3 Nm vs. 161.3 ± 52.5 Nm; p < 0.001) and rate of torque development (495.7 ± 136.9 Nm·s⁻¹ vs. 163.4 ± 163.7 Nm·s⁻¹; p < 0.001) 48 h post-exercise. Mean motor unit firing rate was reduced (16.4 ± 2.2 Hz vs. 12.6 ± 1.7 Hz; p < 0.01) in high-threshold motor units only, 48 h post-exercise, and common drive was elevated (0.36 ± 0.027 vs. 0.56 ± 0.032; p < 0.001) 48 h post-exercise. The firing rate of high-threshold motor units was reduced in parallel with impaired muscle function, whilst early-recruited motor units remained unaltered. Common drive of motor units increased in offset to the firing rate impairment. These alterations correlated with the recovery of the force decrement, but not with the elevation in pain. This study provides fresh insight into the central mechanisms associated with EIMD recovery, relative to muscle function. These findings may in turn lead to the development of novel management and preventative procedures.

  20. Discharge properties of abductor hallucis before, during, and after an isometric fatigue task.

    PubMed

    Kelly, Luke A; Racinais, Sebastien; Cresswell, Andrew G

    2013-08-01

    Abductor hallucis is the largest muscle in the arch of the human foot and comprises few motor units relative to its physiological cross-sectional area. It has been described as a postural muscle, aiding in the stabilization of the longitudinal arch during stance and gait. The purpose of this study was to describe the discharge properties of abductor hallucis motor units during ramp and hold isometric contractions, as well as its discharge characteristics during fatigue. Intramuscular electromyographic recordings from abductor hallucis were made in 5 subjects; from those recordings, 42 single motor units were decomposed. Data were recorded during isometric ramp contractions at 60% maximum voluntary contraction (MVC), performed before and after a submaximal isometric contraction to failure (mean force 41.3 ± 15.3% MVC, mean duration 233 ± 116 s). Motor unit recruitment thresholds ranged from 10.3 to 54.2% MVC. No significant difference was observed between recruitment and derecruitment thresholds or their respective discharge rates for both the initial and postfatigue ramp contractions (all P > 0.25). Recruitment threshold was positively correlated with recruitment discharge rate (r = 0.47, P < 0.03). All motor units attained similar peak discharge rates (14.0 ± 0.25 pulses/s) and were not correlated with recruitment threshold. Thirteen motor units could be followed during the isometric fatigue task, with a decline in discharge rate and increase in discharge rate variability occurring in the final 25% of the task (both P < 0.05). We have shown that abductor hallucis motor units discharge relatively slowly and are considerably resistant to fatigue. These characteristics may be effective for generating and sustaining the substantial level of force that is required to stabilize the longitudinal arch during weight bearing.

  1. High-threshold motor unit firing reflects force recovery following a bout of damaging eccentric exercise

    PubMed Central

    Macgregor, Lewis J.

    2018-01-01

    Exercise-induced muscle damage (EIMD) is associated with impaired muscle function and reduced neuromuscular recruitment. However, motor unit firing behaviour throughout the recovery period is unclear. We hypothesized that EIMD impairment of maximal voluntary contraction (MVC) force would, in part, be caused by reduced high-threshold motor unit firing, which would subsequently increase as MVC recovered. Fourteen healthy active males completed a bout of eccentric exercise on the knee extensors, with measurements of MVC, rate of torque development and surface electromyography performed pre-exercise and 2, 3, 7 and 14 days post-exercise, on both the damaged and control limbs. EIMD was associated with decreased MVC (235.2 ± 49.3 Nm vs. 161.3 ± 52.5 Nm; p < 0.001) and rate of torque development (495.7 ± 136.9 Nm·s⁻¹ vs. 163.4 ± 163.7 Nm·s⁻¹; p < 0.001) 48 h post-exercise. Mean motor unit firing rate was reduced (16.4 ± 2.2 Hz vs. 12.6 ± 1.7 Hz; p < 0.01) in high-threshold motor units only, 48 h post-exercise, and common drive was elevated (0.36 ± 0.027 vs. 0.56 ± 0.032; p < 0.001) 48 h post-exercise. The firing rate of high-threshold motor units was reduced in parallel with impaired muscle function, whilst early-recruited motor units remained unaltered. Common drive of motor units increased in offset to the firing rate impairment. These alterations correlated with the recovery of the force decrement, but not with the elevation in pain. This study provides fresh insight into the central mechanisms associated with EIMD recovery, relative to muscle function. These findings may in turn lead to the development of novel management and preventative procedures. PMID:29630622

  2. Emilio Segrè and Spontaneous Fission

    Science.gov Websites

    … fissioned instead. The discovery of fission led in turn to the discovery of the chain reaction … material apart before it had a chance to undergo an efficient chain reaction … If a similar rate was found in plutonium, it might rule out the use of that element as …

  3. Nd:YAG 1.44 laser ablation of human cartilage

    NASA Astrophysics Data System (ADS)

    Cummings, Robert S.; Prodoehl, John A.; Rhodes, Anthony L.; Black, Johnathan D.; Sherk, Henry H.

    1993-07-01

    This study determined the effectiveness of a neodymium:YAG laser operating at a 1.44 µm wavelength on human cartilage. This wavelength is strongly absorbed by water. Cadaveric meniscal fibrocartilage and articular hyaline cartilage were harvested and placed in normal saline during the study. A 600 µm quartz fiber was applied perpendicularly to the tissues with a force of 0.098 N. Quantitative measurements were then made of the ablation rate as a function of fluence. The laser energy was delivered at a constant repetition rate of 5 Hz, a 650 µs pulse width, and energy levels ranging from 0.5 J to 2.0 J. Following ablation of the tissue, the specimens were fixed in formalin for histologic evaluation. The results of the study indicate that the ablation rate is 0.03 mm per mJ/mm² for both hyaline cartilage and fibrocartilage; fibrocartilage was cut at approximately the same rate as hyaline cartilage. There was a threshold fluence, projected to be 987 mJ/mm², for hyaline cartilage and fibrocartilage. Our results indicate that the pulsed Nd:YAG laser operating at 1.44 µm has a threshold fluence above which it will ablate human cartilage, and that its ablation rate is directly proportional to fluence over the range of parameters tested. Fibrocartilage and hyaline cartilage demonstrated similar threshold fluences and ablation rates, which is related to the high water content of these tissues.

  4. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
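    The core finding, that Z-based outlier removal inflates the Type I error rate for skewed sum scores, can be reproduced in a small simulation. This is an illustrative sketch, not the authors' simulation code; the binomial "sum score" parameters are assumptions chosen to mimic a short, difficult test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def type1_rate(remove_outliers, n=25, n_sims=2000, z_cut=2.0):
    """Estimate the Type I error rate of an independent-samples t test on
    skewed 'sum scores', with or without Z-based outlier removal."""
    rejections = 0
    for _ in range(n_sims):
        # Both groups come from the same skewed distribution (null is true);
        # binomial(10, 0.2) mimics scores on a short, difficult test.
        a = rng.binomial(10, 0.2, size=n).astype(float)
        b = rng.binomial(10, 0.2, size=n).astype(float)
        if remove_outliers:
            # Common but problematic practice: drop |Z| > z_cut per group.
            a = a[np.abs(stats.zscore(a)) <= z_cut]
            b = b[np.abs(stats.zscore(b)) <= z_cut]
        _, p = stats.ttest_ind(a, b)
        rejections += p < 0.05
    return rejections / n_sims

print("no removal:   ", type1_rate(False))  # typically near the nominal 0.05
print("Z > 2 removed:", type1_rate(True))   # inflated above the nominal rate
```

    Removing the high tail independently in each group perturbs the two sample means while shrinking the pooled variance estimate, which is what drives the inflation.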

  5. Estimating sensitivity and specificity for technology assessment based on observer studies.

    PubMed

    Nishikawa, Robert M; Pesce, Lorenzo L

    2013-07-01

    The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated by thresholding the rating data at different threshold values; a question that directly asked the radiologists for their biopsy recommendations was used as the "truth," because it is the actual recall decision and thus their subjective truth. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using the rating data compared to using the biopsy recommendation data. When pooled together, the estimates from thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating-data estimates was lower. By simply asking observers for their explicit recommendation (eg, biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability, and the power of the study can be maximized. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  6. Effect of Repetition Rate on Femtosecond Laser-Induced Homogenous Microstructures

    PubMed Central

    Biswas, Sanchari; Karthikeyan, Adya; Kietzig, Anne-Marie

    2016-01-01

    We report on the effect of repetition rate on the formation and surface texture of the laser induced homogenous microstructures. Different microstructures were micromachined on copper (Cu) and titanium (Ti) using femtosecond pulses at 1 and 10 kHz. We studied the effect of the repetition rate on structure formation by comparing the threshold accumulated pulse (FΣpulse) values and the effect on the surface texture through lacunarity analysis. Machining both metals at low FΣpulse resulted in microstructures with higher lacunarity at 10 kHz compared to 1 kHz. On increasing FΣpulse, the microstructures showed higher lacunarity at 1 kHz. The effect of the repetition rate on the threshold FΣpulse values were, however, considerably different on the two metals. With an increase in repetition rate, we observed a decrease in the threshold FΣpulse on Cu, while on Ti we observed an increase. These differences were successfully allied to the respective material characteristics and the resulting melt dynamics. While machining Ti at 10 kHz, the melt layer induced by one laser pulse persists until the next pulse arrives, acting as a dielectric for the subsequent pulse, thereby increasing FΣpulse. However, on Cu, the melt layer quickly resolidifies and no such dielectric like phase is observed. Our study contributes to the current knowledge on the effect of the repetition rate as an irradiation parameter. PMID:28774143

  7. Empirical Bayes method for reducing false discovery rates of correlation matrices with block diagonal structure.

    PubMed

    Pacini, Clare; Ajioka, James W; Micklem, Gos

    2017-04-12

    Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n = 10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.
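    The idea of pulling a noisy sample covariance toward an assumed block-diagonal structure can be sketched with a simple linear shrinkage estimator. This is a simplified stand-in for the paper's empirical Bayes procedure, not its actual method; the block assignments and the shrinkage weight below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def shrink_block_diagonal(X, blocks, lam=0.5):
    """Shrink the sample covariance of X toward its block-diagonal part.

    X: (n_samples, p) data matrix; blocks: list of index arrays defining the
    assumed block structure; lam: weight in [0, 1] on the block target.
    """
    S = np.cov(X, rowvar=False)
    target = np.zeros_like(S)
    for idx in blocks:
        target[np.ix_(idx, idx)] = S[np.ix_(idx, idx)]
    return lam * target + (1.0 - lam) * S

# Small-sample setting similar to the paper's n = 10.
X = rng.normal(size=(10, 6))
blocks = [np.arange(0, 3), np.arange(3, 6)]
S_shrunk = shrink_block_diagonal(X, blocks)
# Within-block covariances are preserved; off-block entries are scaled down,
# damping the spurious correlations that drive false discoveries.
print(np.allclose(S_shrunk[:3, :3], np.cov(X, rowvar=False)[:3, :3]))  # True
```

    In this toy estimator the off-block entries are multiplied by (1 − lam), so stronger shrinkage suppresses more of the noise between blocks while leaving the assumed regulatory units intact.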

  8. The Localized Discovery and Recovery for Query Packet Losses in Wireless Sensor Networks with Distributed Detector Clusters

    PubMed Central

    Teng, Rui; Leibnitz, Kenji; Miura, Ryu

    2013-01-01

    An essential application of wireless sensor networks is to successfully respond to user queries. Query packet losses occur in the query dissemination due to wireless communication problems such as interference, multipath fading, packet collisions, etc. The losses of query messages at sensor nodes result in the failure of sensor nodes reporting the requested data. Hence, the reliable and successful dissemination of query messages to sensor nodes is a non-trivial problem. The target of this paper is to enable highly successful query delivery to sensor nodes by localized and energy-efficient discovery, and recovery of query losses. We adopt local and collective cooperation among sensor nodes to increase the success rate of distributed discoveries and recoveries. To enable the scalability in the operations of discoveries and recoveries, we employ a distributed name resolution mechanism at each sensor node to allow sensor nodes to self-detect the correlated queries and query losses, and then efficiently locally respond to the query losses. We prove that the collective discovery of query losses has a high impact on the success of query dissemination and reveal that scalability can be achieved by using the proposed approach. We further study the novel features of the cooperation and competition in the collective recovery at PHY and MAC layers, and show that the appropriate number of detectors can achieve optimal successful recovery rate. We evaluate the proposed approach with both mathematical analyses and computer simulations. The proposed approach enables a high rate of successful delivery of query messages and it results in short route lengths to recover from query losses. The proposed approach is scalable and operates in a fully distributed manner. PMID:23748172

  9. Stochastic dynamics of dengue epidemics.

    PubMed

    de Souza, David R; Tomé, Tânia; Pinho, Suani T R; Barreto, Florisneide R; de Oliveira, Mário J

    2013-01-01

    We use a stochastic Markovian dynamics approach to describe the spreading of vector-transmitted diseases, such as dengue, and the threshold of the disease. The coexistence space is composed of two structures representing the human and mosquito populations. The human population follows a susceptible-infected-recovered (SIR) type dynamics and the mosquito population follows a susceptible-infected-susceptible (SIS) type dynamics. The human infection is caused by infected mosquitoes and vice versa, so that the SIS and SIR dynamics are interconnected. We develop a truncation scheme to solve the evolution equations from which we get the threshold of the disease and the reproductive ratio. The threshold of the disease is also obtained by performing numerical simulations. We found that for certain values of the infection rates the spreading of the disease is impossible, for any death rate of infected mosquitoes.

  10. Error suppression via complementary gauge choices in Reed-Muller codes

    NASA Astrophysics Data System (ADS)

    Chamberland, Christopher; Jochym-O'Connor, Tomas

    2017-09-01

    Concatenation of two quantum error-correcting codes with complementary sets of transversal gates can provide a means toward universal fault-tolerant quantum computation. We first show that it is generally preferable to choose the inner code with the higher pseudo-threshold to achieve lower logical failure rates. We then explore the threshold properties of a wide range of concatenation schemes. Notably, we demonstrate that the concatenation of complementary sets of Reed-Muller codes can increase the code capacity threshold under depolarizing noise when compared to extensions of previously proposed concatenation models. We also analyze the properties of logical errors under circuit-level noise, showing that smaller codes perform better for all sampled physical error rates. Our work provides new insights into the performance of universal concatenated quantum codes for both code capacity and circuit-level noise.

  11. Generation Process of Large-Amplitude Upper-Band Chorus Emissions Observed by Van Allen Probes

    DOE PAGES

    Kubota, Yuko; Omura, Yoshiharu; Kletzing, Craig; ...

    2018-04-19

    In this paper, we analyze large-amplitude upper-band chorus emissions measured near the magnetic equator by the Electric and Magnetic Field Instrument Suite and Integrated Science instrument package on board the Van Allen Probes. In setting up the parameters of source electrons exciting the emissions based on theoretical analyses and observational results measured by the Helium Oxygen Proton Electron instrument, we calculate threshold and optimum amplitudes with the nonlinear wave growth theory. We find that the optimum amplitude is larger than the threshold amplitude obtained in the frequency range of the chorus emissions and that the wave amplitudes grow between the threshold and optimum amplitudes. Finally, in the frame of the wave growth process, the nonlinear growth rates are much greater than the linear growth rates.

  12. Prefixed-threshold real-time selection method in free-space quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Wenyuan; Xu, Feihu; Lo, Hoi-Kwong

    2018-03-01

    Free-space quantum key distribution allows two parties to share a random key with unconditional security, between ground stations, between mobile platforms, and even in satellite-ground quantum communications. Atmospheric turbulence causes fluctuations in transmittance, which further affect the quantum bit error rate and the secure key rate. Previous postselection methods to combat atmospheric turbulence require a threshold value determined after all quantum transmission. In contrast, here we propose a method where we predetermine the optimal threshold value even before quantum transmission. Therefore, the receiver can discard useless data immediately, thus greatly reducing data storage requirements and computing resources. Furthermore, our method can be applied to a variety of protocols, including, for example, not only single-photon BB84 but also asymptotic and finite-size decoy-state BB84, which can greatly increase its practicality.

  13. Generation Process of Large-Amplitude Upper-Band Chorus Emissions Observed by Van Allen Probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubota, Yuko; Omura, Yoshiharu; Kletzing, Craig

    In this paper, we analyze large-amplitude upper-band chorus emissions measured near the magnetic equator by the Electric and Magnetic Field Instrument Suite and Integrated Science instrument package on board the Van Allen Probes. In setting up the parameters of source electrons exciting the emissions based on theoretical analyses and observational results measured by the Helium Oxygen Proton Electron instrument, we calculate threshold and optimum amplitudes with the nonlinear wave growth theory. We find that the optimum amplitude is larger than the threshold amplitude obtained in the frequency range of the chorus emissions and that the wave amplitudes grow between the threshold and optimum amplitudes. Finally, in the frame of the wave growth process, the nonlinear growth rates are much greater than the linear growth rates.

  14. The timing of sequences of saccades in visual search.

    PubMed Central

    Van Loon, E M; Hooge, I Th C; Van den Berg, A V

    2002-01-01

    According to the LATER model (linear approach to thresholds with ergodic rate), the latency of a single saccade in response to target appearance can be understood as a decision process, which is subject to (i) variations in the rate of (visual) information processing; and (ii) the threshold for the decision. We tested whether the LATER model can also be applied to the sequences of saccades in a multiple fixation search, during which latencies of second and subsequent saccades are typically shorter than that of the initial saccade. We found that the distributions of the reciprocal latencies for later saccades, unlike those of the first saccade, are highly asymmetrical, much like a gamma distribution. This suggests that the normal distribution of the rate r, which the LATER model assumes, is not appropriate to describe the rate distributions of subsequent saccades in a scanning sequence. By contrast, the gamma distribution is also appropriate to describe the distribution of reciprocal latencies for the first saccade. The change of the gamma distribution parameters as a function of the ordinal number of the saccade suggests a lowering of the threshold for second and later saccades, as well as a reduction in the number of target elements analysed. PMID:12184827
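    The distributional contrast the authors describe, roughly symmetric (normal-like) reciprocal latencies for first saccades versus right-skewed (gamma-like) reciprocal latencies for later saccades, can be illustrated with simulated data. All parameters here are hypothetical, chosen only to show how skewness separates the two shapes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated reciprocal latencies ("rates", 1/s). First-saccade rates follow
# a normal distribution, as the LATER model assumes; later-saccade rates are
# drawn from a right-skewed gamma distribution, as the abstract suggests.
first_rates = rng.normal(loc=5.0, scale=0.8, size=1000)
later_rates = rng.gamma(shape=3.0, scale=2.5, size=1000)

# Sample skewness separates the two shapes: near 0 for the normal sample,
# clearly positive for the gamma sample (theoretical value 2/sqrt(shape)).
print(round(stats.skew(first_rates), 2))
print(round(stats.skew(later_rates), 2))
```

    In practice one would fit both candidate distributions to the observed reciprocal latencies and compare goodness of fit, which is essentially the comparison the abstract reports.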

  15. Dynamic Multiple-Threshold Call Admission Control Based on Optimized Genetic Algorithm in Wireless/Mobile Networks

    NASA Astrophysics Data System (ADS)

    Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin

    Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multiple service classes in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which assigns different priorities to different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components (encoding, population initialization, fitness function, mutation, etc.) are all tailored to the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms similar schemes, which means the optimization is realized. Finally, the simulation shows the efficiency of OGA.

  16. Estimation of ultrashort laser irradiation effect over thin transparent biopolymer films morphology

    NASA Astrophysics Data System (ADS)

    Daskalova, A.; Nathala, C.; Bliznakova, I.; Slavov, D.; Husinsky, W.

    2015-01-01

    Collagen-elastin biopolymer thin films treated with a CPA Ti:sapphire laser (Femtopower Compact Pro) at an 800 nm central wavelength, 30 fs pulse duration and 1 kHz repetition rate are investigated. A process of surface modification and microporous scaffold creation after ultrashort laser irradiation has been observed. The single-shot (N = 1) and multi-shot (N > 1) ablation threshold values were estimated by exploiting the linear relationship between the square of the crater diameter D² and the logarithm of the laser fluence F, to determine the threshold fluences for N = 1, 2, 5, 10, 15 and 30 laser pulses. Incubation was analyzed by calculating the incubation coefficient ξ from the multi-shot fluence thresholds of the selected materials using the power-law relationship Fth(N) = Fth(1)·N^(ξ−1). In this paper, we also present an alternative calculation of the multi-shot ablation threshold, based on the logarithmic dependence of the ablation rate d on the laser fluence. The morphological surface changes of the modified regions were characterized by scanning electron microscopy to assess the variations generated by the laser treatment.
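    The incubation power law Fth(N) = Fth(1)·N^(ξ−1) is linear in log-log coordinates, so ξ can be recovered by a straight-line fit. A minimal sketch using synthetic thresholds (the Fth(1) and ξ values below are arbitrary illustration values, not the study's measurements):

```python
import numpy as np

# Synthetic multi-shot thresholds obeying Fth(N) = Fth(1) * N**(xi - 1),
# with Fth(1) = 0.8 (arbitrary units) and xi = 0.85; NOT measured data.
N = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 30.0])
F_th = 0.8 * N ** (0.85 - 1.0)

# In log-log coordinates the power law is a straight line:
# log Fth(N) = log Fth(1) + (xi - 1) * log N.
slope, intercept = np.polyfit(np.log(N), np.log(F_th), 1)
xi = slope + 1.0
Fth1 = np.exp(intercept)
print(round(xi, 3), round(Fth1, 3))  # recovers 0.85 and 0.8
```

    With real measurements the fitted slope plus one gives the incubation coefficient, and the intercept gives the single-shot threshold.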

  17. Dynamical predictors of an imminent phenotypic switch in bacteria

    NASA Astrophysics Data System (ADS)

    Wang, Huijing; Ray, J. Christian J.

    2017-08-01

    Single cells can stochastically switch across thresholds imposed by regulatory networks. Such thresholds can act as a tipping point, drastically changing global phenotypic states. In ecology and economics, imminent transitions across such tipping points can be predicted using dynamical early warning indicators. A typical example is ‘flickering’ of a fast variable, predicting a longer-lasting switch from a low to a high state or vice versa. Considering the different timescales between metabolite and protein fluctuations in bacteria, we hypothesized that metabolic early warning indicators predict imminent transitions across a network threshold caused by enzyme saturation. We used stochastic simulations to determine if flickering predicts phenotypic transitions, accounting for a variety of molecular physiological parameters, including enzyme affinity, burstiness of enzyme gene expression, homeostatic feedback, and rates of metabolic precursor influx. In most cases, we found that metabolic flickering rates are robustly peaked near the enzyme saturation threshold. The degree of fluctuation was amplified by product inhibition of the enzyme. We conclude that sensitivity to flickering in fast variables may be a possible natural or synthetic strategy to prepare physiological states for an imminent transition.

  18. Comparisons of Fatty Acid Taste Detection Thresholds in People Who Are Lean vs. Overweight or Obese: A Systematic Review and Meta-Analysis.

    PubMed

    Tucker, Robin M; Kaiser, Kathryn A; Parman, Mariel A; George, Brandon J; Allison, David B; Mattes, Richard D

    2017-01-01

    Given the increasing evidence that supports the ability of humans to taste non-esterified fatty acids (NEFA), recent studies have sought to determine whether relationships exist between oral sensitivity to NEFA (measured as thresholds), food intake and obesity. Published findings suggest there is either no association or an inverse association. A systematic review and meta-analysis was conducted to determine whether differences in fatty acid taste sensitivity or intensity ratings exist between individuals who are lean and individuals who are overweight or obese. A total of 7 studies that reported measurement of taste sensations to non-esterified fatty acids by psychophysical methods (i.e., studies using model systems rather than foods), with detection thresholds measured by a 3-alternative forced-choice ascending methodology, were included in the meta-analysis. Two other studies that measured intensity ratings at graded suprathreshold NEFA concentrations were evaluated qualitatively. No significant differences in fatty acid taste thresholds or intensity were observed. Thus, differences in fatty acid taste sensitivity do not appear to precede or result from obesity.

  19. Effects of anaerobic digestion on chlortetracycline and oxytetracycline degradation efficiency for swine manure.

    PubMed

    Yin, Fubin; Dong, Hongmin; Ji, Chao; Tao, Xiuping; Chen, Yongxing

    2016-10-01

    Manure containing antibiotics is considered a hazardous substance that poses a serious risk to the environment and to human health. Anaerobic digestion (AD) can not only treat animal waste but also generate valuable biogas. However, the interaction between antibiotics in manure and the AD process has not been clearly understood. In this study, biochemical methane potential (BMP) experiments were conducted to determine the inhibition of the AD process by antibiotics and the threshold for complete antibiotic removal. The thresholds for complete antibiotic removal were 60 and 40 mg/kg·TS for chlortetracycline (CTC) and oxytetracycline (OTC), respectively. CTC and OTC at concentrations below these thresholds could increase the BMP of manure. When the CTC and OTC concentrations exceeded the thresholds, they inhibited manure fermentation, and the CTC removal rate declined exponentially with concentration (60-500 mg/kg·TS). The relationship between OTC concentration and its removal rate in AD treatment was described with exponential (40-100 mg/kg·TS) and linear (100-500 mg/kg·TS) equations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Lower thresholds for lifetime health effects in mammals from high-LET radiation - Comparison with chronic low-LET radiation.

    PubMed

    Sazykina, Tatiana G; Kryshev, Alexander I

    2016-12-01

    Lower threshold dose rates and confidence limits are quantified for lifetime radiation effects in mammalian animals from internally deposited alpha-emitting radionuclides. Extensive datasets on effects from internal alpha-emitters are compiled from the International Radiobiological Archives. In total, the compiled database includes 257 records, which are analyzed by means of non-parametric order statistics. The generic lower threshold for alpha-emitters in mammalian animals (combined datasets) is 6.6·10⁻⁵ Gy day⁻¹. Thresholds for individual alpha-emitting elements differ considerably: plutonium and americium, 2.0·10⁻⁵ Gy day⁻¹; radium, 2.1·10⁻⁴ Gy day⁻¹. The threshold for chronic low-LET radiation was previously estimated at 1·10⁻³ Gy day⁻¹. For low exposures, the following alpha radiation weighting factors w_R for internally deposited alpha-emitters in mammals are quantified: w_R(α) = 15 as a generic value for the whole group of alpha-emitters; w_R(Pu) = 50 for plutonium; w_R(Am) = 50 for americium; w_R(Ra) = 5 for radium. These values are proposed to serve as radiation weighting factors in calculations of equivalent doses to non-human biota. The lower threshold dose rate for long-lived mammals (dogs) is significantly lower than that for short-lived mammals (mice): 2.7·10⁻⁵ Gy day⁻¹ and 2.0·10⁻⁴ Gy day⁻¹, respectively. The difference in thresholds exactly reflects the relationship between the natural longevities of these two species. A graded scale of severity for lifetime radiation effects in mammals is developed from the compiled datasets. Placed on this severity scale, the effects of internal alpha-emitters fall in zones of considerably lower dose rates than effects of the same severity caused by low-LET radiation. RBE values, calculated for effects of equal severity, are found to depend on the intensity of chronic exposure: different RBE values are characteristic of low, moderate, and high lifetime exposures (30, 70, and 13, respectively). The results of the study provide a basis for selecting correct values of radiation weighting factors in dose assessments for non-human biota. Copyright © 2016 Elsevier Ltd. All rights reserved.
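
    The proposed weighting factors can be applied directly: the equivalent dose rate is the w_R-weighted sum of absorbed dose rates per radiation type. A minimal sketch using the values quoted above (the example absorbed dose rates are invented for illustration):

```python
# Radiation weighting factors proposed in the abstract (dimensionless).
W_R = {"alpha_generic": 15, "Pu": 50, "Am": 50, "Ra": 5, "low_LET": 1}

def equivalent_dose_rate(absorbed_gy_per_day):
    """Sum of w_R-weighted absorbed dose rates (Gy-eq per day)."""
    return sum(W_R[src] * d for src, d in absorbed_gy_per_day.items())

# Hypothetical absorbed dose rates (Gy/day) for a mammal:
doses = {"Pu": 1e-6, "Ra": 1e-5, "low_LET": 1e-4}
h = equivalent_dose_rate(doses)
print(h)  # ~2.0e-4 Gy-eq/day
# Compare against the low-LET lifetime-effects threshold of 1e-3 Gy/day:
print(h < 1e-3)  # True
```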

  1. Nonintrusive Flow Rate Determination Through Space Shuttle Water Coolant Loop Floodlight Coldplate

    NASA Technical Reports Server (NTRS)

    Werlink, Rudolph; Johnson, Harry; Margasahayam, Ravi

    1997-01-01

    Using a Nonintrusive Flow Measurement System (NFMS), the flow rates through the Space Shuttle water coolant coldplate were determined. The objective of this in situ flow measurement was to determine whether a potential blockage inside the affected coldplate had contributed to a reduced flow rate and the subsequent ice formation on the Space Shuttle Discovery. Flow through the coldplate was originally calculated to be 35 to 38 pounds per hour. This application of ultrasonic technology pushed the envelope of flow measurement by working with 1/4-inch-diameter tubing, which carries extremely low flow rates (5 to 30 pounds per hour). In situ measurements on the orbiters Discovery and Atlantis indicated that both vehicles, on average, experienced similar flow rates through the coldplate (around 25 pounds per hour), but lower than the designed flow. Based on these noninvasive checks, further invasive troubleshooting was eliminated. Permanent monitoring using the NFMS was recommended.

  2. Comparisons between detection threshold and loudness perception for individual cochlear implant channels

    PubMed Central

    Bierer, Julie Arenberg; Nye, Amberly D

    2014-01-01

    Objective The objective of the present study, performed in cochlear implant listeners, was to examine how the level of current required to detect single-channel electrical pulse trains relates to loudness perception on the same channel. The working hypothesis was that channels with relatively high thresholds, when measured with a focused current pattern, interface poorly to the auditory nerve. For such channels a smaller dynamic range between perceptual threshold and the most comfortable loudness would result, in part, from a greater sensitivity to changes in electrical field spread compared to low-threshold channels. The narrower range of comfortable listening levels may have important implications for speech perception. Design Data were collected from eight, adult cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp.). The partial tripolar (pTP) electrode configuration, consisting of one intracochlear active electrode, two flanking electrodes carrying a fraction (σ) of the return current, and an extracochlear ground, was used for stimulation. Single-channel detection thresholds and most comfortable listening levels were acquired using the most focused pTP configuration possible (σ ≥ 0.8) to identify three channels for further testing – those with the highest, median, and lowest thresholds – for each subject. Threshold, equal-loudness contours (at 50% of the monopolar dynamic range), and loudness growth functions were measured for each of these three test channels using various partial tripolar fractions. Results For all test channels, thresholds increased as the electrode configuration became more focused. The rate of increase with the focusing parameter σ was greatest for the high-threshold channel compared to the median- and low-threshold channels. The 50% equal-loudness contours exhibited similar rates of increase in level across test channels and subjects. 
Additionally, test channels with the highest thresholds had the narrowest dynamic ranges (for σ ≥ 0.5) and steepest growth of loudness functions for all electrode configurations. Conclusions Together with previous studies using focused stimulation, the results suggest that auditory responses to electrical stimuli at both threshold and suprathreshold current levels are not uniform across the electrode array of individual cochlear implant listeners. Specifically, the steeper growth of loudness and thus smaller dynamic ranges observed for high-threshold channels are consistent with a degraded electrode-neuron interface, which could stem from lower numbers of functioning auditory neurons or a relatively large distance between the neurons and electrodes. These findings may have potential implications for how stimulation levels are set during the clinical mapping procedure, particularly for speech-processing strategies that use focused electrical fields. PMID:25036146

  3. Regular threshold-energy increase with charge for neutral-particle emission in collisions of electrons with oligonucleotide anions.

    PubMed

    Tanabe, T; Noda, K; Saito, M; Starikov, E B; Tateno, M

    2004-07-23

    Electron-DNA anion collisions were studied using an electrostatic storage ring with a merged electron-beam technique. The rate of neutral-particle emission in collisions began to increase at definite threshold energies, which increased regularly with ion charge in steps of about 10 eV. These threshold energies were almost independent of the length and sequence of the DNA but depended strongly on the ion charge. The neutral particles arose from DNA breakup rather than electron detachment. The step of the threshold-energy increase approximately agreed with the plasmon excitation energy, from which it is deduced that plasmon excitation is closely related to the reaction mechanism. Copyright 2004 The American Physical Society

  4. Discovery and Classification in Astronomy

    NASA Astrophysics Data System (ADS)

    Dick, Steven J.

    2012-01-01

    Three decades after Martin Harwit's pioneering Cosmic Discovery (1981), and following on the recent IAU Symposium "Accelerating the Rate of Astronomical Discovery," we have revisited the problem of discovery in astronomy, emphasizing new classes of objects. Eighty-two such classes have been identified and analyzed: 22 in the realm of the planets, 36 in the realm of the stars, and 24 in the realm of the galaxies. We find an extended structure of discovery, consisting of detection, interpretation, and understanding, each with its own nuances and a microstructure including conceptual, technological, and social roles. This holds with a remarkable degree of consistency over the last 400 years of telescopic astronomy, ranging from Galileo's discovery of satellites, planetary rings, and star clusters to the discovery of quasars and pulsars. Telescopes have served as "engines of discovery" in several ways, ranging from telescope size and sensitivity (planetary nebulae and spiral galaxies) to specialized detectors (TNOs) and the opening of the electromagnetic spectrum for astronomy (pulsars, pulsar planets, and most active galaxies). A few classes (radiation belts, the solar wind, and cosmic rays) were initially discovered without the telescope. Classification also plays an important role in discovery. While it might seem that classification marks the end of discovery, or a post-discovery phase, in fact it often marks the beginning, even a pre-discovery phase. Nowhere is this more clearly seen than in the classification of stellar spectra, long before dwarfs, giants, and supergiants were known, or their evolutionary sequence recognized. Classification may also be part of a post-discovery phase, as in the MK system of stellar classification, constructed after the discovery of stellar luminosity classes. Some classes are declared rather than discovered, as in the case of gas and ice giant planets and, infamously, Pluto as a dwarf planet.

  5. How molecular profiling could revolutionize drug discovery.

    PubMed

    Stoughton, Roland B; Friend, Stephen H

    2005-04-01

    Information from genomic, proteomic and metabolomic measurements has already benefited target discovery and validation, assessment of efficacy and toxicity of compounds, identification of disease subgroups and the prediction of responses of individual patients. Greater benefits can be expected from the application of these technologies on a significantly larger scale; by simultaneously collecting diverse measurements from the same subjects or cell cultures; by exploiting the steadily improving quantitative accuracy of the technologies; and by interpreting the emerging data in the context of underlying biological models of increasing sophistication. The benefits of applying molecular profiling to drug discovery and development will include much lower failure rates at all stages of the drug development pipeline, faster progression from discovery through to clinical trials and more successful therapies for patient subgroups. Upheavals in existing organizational structures in the current 'conveyor belt' models of drug discovery might be required to take full advantage of these methods.

  6. Probing the cosmic gamma-ray burst rate with trigger simulations of the swift burst alert telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lien, Amy; Cannizzo, John K.; Sakamoto, Takanori

    The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae, and stellar evolution. Additionally, the GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs have been observed by Swift, it remains difficult to determine the intrinsic GRB rate due to the complex trigger algorithm of Swift. Current studies of the GRB rate usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike previously flown GRB instruments, Swift has over 500 trigger criteria based on photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we developed a program capable of simulating all the rate-trigger criteria and mimicking the image threshold. Our simulations show that adopting the complex trigger algorithm of Swift increases the detection rate of dim bursts. As a result, our simulations suggest that bursts need to be dimmer than previously expected to avoid overproducing the number of detections and to match the Swift observations. Moreover, our results indicate that these dim bursts are more likely to be high-redshift events than low-luminosity GRBs. This would imply an even higher cosmic GRB rate at large redshifts than previous expectations based on star formation rate measurements, unless other factors, such as luminosity evolution, are taken into account. The GRB rate from our best result gives a total of 4568 (+825/−1429) GRBs per year beamed toward us in the whole universe.
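
    A single rate-trigger criterion of the kind described can be sketched as a Poisson significance test on foreground counts; the background rate, window length, and 6.5σ cut here are illustrative assumptions, not Swift's actual trigger configuration:

```python
import math

def rate_trigger(fg_counts, bg_rate, fg_window, snr_threshold=6.5):
    """Toy rate trigger: compare foreground counts in a window against the
    expected background, in units of Poisson sigma (Gaussian approximation)."""
    expected = bg_rate * fg_window
    snr = (fg_counts - expected) / math.sqrt(expected)
    return snr, snr > snr_threshold

bg_rate = 10_000.0           # background counts/s (assumed)
window = 1.024               # s, accumulation window (assumed)
quiet = round(bg_rate * window)   # background-only interval
burst = quiet + 2_000             # interval containing a burst excess

print(rate_trigger(quiet, bg_rate, window))   # (~0, False)
print(rate_trigger(burst, bg_rate, window))   # (~19.8, True)
```

    A full simulation would sweep many such criteria (different windows, energy bands, thresholds), which is precisely why a single-threshold approximation biases the inferred burst population.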

  7. Carbon cycling at the tipping point: Does ecosystem structure predict resistance to disturbance?

    NASA Astrophysics Data System (ADS)

    Gough, C. M.; Bond-Lamberty, B. P.; Stuart-Haentjens, E.; Atkins, J.; Haber, L.; Fahey, R. T.

    2017-12-01

    Ecosystems worldwide are subjected to disturbances that reshape their physical and biological structure and modify biogeochemical processes, including carbon storage and cycling rates. Disturbances, including those from insect pests, pathogens, and extreme weather, span a continuum of severity and, accordingly, may have different effects on carbon cycling processes. Some ecosystems resist biogeochemical changes following disturbance until a critical threshold of severity is exceeded. The ecosystem properties underlying such functional resistance, and signifying when a tipping point will occur, however, are almost entirely unknown. Here, we present observational and experimental results from forests in the Great Lakes region, showing that ecosystem structure is closely coupled with carbon cycling responses to disturbance, with shifts in structure predicting thresholds in, and in some cases increases in, carbon storage. We find, among forests in the region, that carbon storage regularly exhibits a non-linear threshold response to increasing disturbance levels, but the severity at which a threshold is reached varies among disturbed forests. More biologically and structurally complex forest ecosystems sometimes exhibit greater functional resistance than simpler forests, and consequently may have a higher disturbance severity threshold. Counter to model predictions but consistent with some theoretical frameworks, empirical data show moderate levels of disturbance may increase ecosystem complexity to a point, thereby increasing rates of carbon storage. Disturbances that increase complexity therefore may stimulate carbon storage, while severe disturbances at or beyond thresholds may simplify structure, leading to carbon storage declines.
We conclude that ecosystem structural attributes are closely coupled with biogeochemical thresholds across disturbance severity gradients, suggesting that improved predictions of disturbance-related changes in the carbon cycle require better representation of ecosystem structure in models.

  8. Prolonged noise exposure-induced auditory threshold shifts in rats

    PubMed Central

    Chen, Guang-Di; Decker, Brandon; Muthaiah, Vijaya Prakash Krishnan; Sheppard, Adam; Salvi, Richard

    2014-01-01

    Noise-induced hearing loss (NIHL) initially increases with exposure duration but eventually reaches an asymptotic threshold shift (ATS) once the exposure duration exceeds 18-24 h. Equations for predicting the ATS have been developed for several species, but not for rats, even though this species is extensively used in noise exposure research. To fill this void, we exposed rats to narrowband noise (NBN, 16-20 kHz) for 5 weeks, starting at 80 dB SPL in the first week and then increasing the level by 6 dB per week to a final level of 104 dB SPL. Auditory brainstem responses (ABR) were recorded before, during, and following the exposure to determine the amount of hearing loss. The noise-induced threshold shift from continuous long-term exposure, defined as the compound threshold shift (CTS), within and above 16-20 kHz increased with noise level at a rate of 1.82 dB of threshold shift per dB of noise level (NL) above a critical level (C) of 77.2 dB SPL, i.e., CTS = 1.82(NL − 77.2). The normalized amplitude of the largest ABR peak measured at 100 dB SPL decreased at a rate of 3.1% per dB of NL above a critical level of 76.9 dB SPL, i.e., %ABR reduction = 3.1(NL − 76.9). ABR thresholds measured >30 days post-exposure recovered only partially, resulting in a permanent threshold shift of 30-40 dB along with severe hair cell loss in the basal, high-frequency region of the cochlea. In the rat, CTS increases with noise level with a slope similar to that of humans and chinchillas. The critical level (C) in the rat is similar to that of humans but higher than that of chinchillas. PMID:25219503
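
    The two fitted relations reported above can be written as simple functions of noise level (NL, dB SPL), clipped at the reported critical levels:

```python
def compound_threshold_shift(noise_level_db):
    """CTS in dB for continuous long-term exposure, per the fitted rat data:
    CTS = 1.82 * (NL - 77.2) above the critical level, else 0."""
    return max(0.0, 1.82 * (noise_level_db - 77.2))

def abr_amplitude_reduction(noise_level_db):
    """Percent reduction of the largest ABR peak:
    3.1% per dB of NL above the critical level of 76.9 dB SPL."""
    return max(0.0, 3.1 * (noise_level_db - 76.9))

print(compound_threshold_shift(104.0))  # ~48.8 dB at the final exposure level
print(abr_amplitude_reduction(104.0))   # ~84.0 %
```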

  9. Accelerometer thresholds: Accounting for body mass reduces discrepancies between measures of physical activity for individuals with overweight and obesity.

    PubMed

    Raiber, Lilian; Christensen, Rebecca A G; Jamnik, Veronica K; Kuk, Jennifer L

    2017-01-01

    The objective of this study was to explore whether accelerometer thresholds adjusted to account for differences in body mass influence the discrepancies between self-reported and accelerometer-measured physical activity (PA) volume for individuals with overweight and obesity. We analyzed 6164 adults from the National Health and Nutrition Examination Survey between 2003 and 2006. Established accelerometer thresholds were adjusted for differences in body mass to produce an energy expenditure (EE) rate similar to that of individuals with normal weight. Moderate-, vigorous-, and moderate-to-vigorous-intensity PA (MVPA) durations were measured using the established and adjusted accelerometer thresholds and compared with self-report. Self-reported durations were longer than accelerometer-measured MVPA using the established thresholds (normal weight: 57.8 ± 2.4 vs 9.0 ± 0.5 min/day; overweight: 56.1 ± 2.7 vs 7.4 ± 0.5 min/day; obesity: 46.5 ± 2.2 vs 3.7 ± 0.3 min/day). Durations of subjective and objective PA were negatively associated with body mass index (BMI) (P < 0.05). Using the adjusted thresholds increased MVPA durations and reduced the discrepancies between accelerometer and self-report measures for the overweight and obese groups by 6.0 ± 0.3 min/day and 17.7 ± 0.8 min/day, respectively (P < 0.05). Using accelerometer thresholds that represent equal EE rates across BMI categories reduced the discrepancies between durations of subjective and objective PA for the overweight and obese groups. However, accelerometer-measured PA generally remained shorter than self-reported durations within all BMI categories. Further research may be necessary to improve analytical approaches when using objective measures of PA for individuals with overweight or obesity.
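
    One way to picture the adjustment is to scale a count threshold inversely with body mass so that it represents the same absolute EE rate for a heavier body. The inverse-mass scaling, the 70 kg reference, and the 2020 counts/min cut-point below are illustrative assumptions, not the calibration used in the study:

```python
def adjusted_cut_point(established_cpm, body_mass_kg, reference_mass_kg=70.0):
    """Scale an accelerometer cut-point (counts/min) so it represents the same
    absolute energy-expenditure rate for a heavier (or lighter) body.
    Assumes EE per count is proportional to body mass, so the count threshold
    scales inversely with mass -- an illustrative assumption only."""
    return established_cpm * reference_mass_kg / body_mass_kg

MODERATE_CPM = 2020  # a commonly cited moderate-intensity cut-point (assumed here)
print(adjusted_cut_point(MODERATE_CPM, 100.0))  # 1414.0 counts/min
```

    A lower cut-point for a heavier person classifies more of their movement as MVPA, which is the direction of the reported reduction in self-report/accelerometer discrepancy.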

  10. Pulsed DF chain-laser breakdown induced by maritime aerosols

    NASA Astrophysics Data System (ADS)

    Amimoto, S. T.; Whittier, J. S.; Ronkowski, F. G.; Valenzuela, P. R.; Harper, G.

    1982-08-01

    Thresholds for breakdown induced by liquid and solid aerosols in room air were measured for a 1-microsec-duration pulsed D2-F2 laser with 3.58-4.78 micron bandwidth. The DF laser beam was directed into an aerosol chamber that simulated maritime atmospheres on the open sea. Both focused and collimated beams were studied. For a focused beam in which the largest encountered aerosol particles were 1 to 4 microns in diameter, pulsed DF breakdown thresholds were measured to lie in the range 0.6 to 1.8 GW/sq cm. Salt-water aerosol breakdown thresholds for micron-size particles were found to be 15 to 30% higher than the corresponding thresholds for fresh-water particles. For a collimated beam that encountered particle diameters as large as 100 microns, breakdown could not be induced using 0.5-microsec (FWHM) pulses at peak intensities of 59 MW/sq cm. Image-converter camera measurements of the radial plasma growth rate of 1.3 cm/microsec (at 1.4 GW/sq cm) were consistent with measurements of the cutoff rate of the transmitted laser beam. Pulsed DF breakdown thresholds of 32 MW/sq cm for 30-micron-diameter Al2O3 particles were also measured to permit comparison with the earlier pulsed-HF breakdown results of Lencioni et al.; the solid-particle threshold measurements agree with the Lencioni data if one assumes that the thresholds for microsecond-duration pulses scale as 1/lambda. An approximate theoretical model of the water-particle breakdown process is presented that permits scaling of the present results to other laser pulse durations, aerosol distributions, and transmission path lengths.

  11. Thresholds for the perception of whole-body linear sinusoidal motion in the horizontal plane

    NASA Technical Reports Server (NTRS)

    Mah, Robert W.; Young, Laurence R.; Steele, Charles R.; Schubert, Earl D.

    1989-01-01

    An improved linear sled has been developed to provide precise motion stimuli without generating perceptible extraneous motion cues (a noiseless environment). A modified adaptive forced-choice method was employed to determine perceptual thresholds to whole-body linear sinusoidal motion in 25 subjects. Thresholds for the detection of movement in the horizontal plane were found to be lower than those reported previously. At frequencies of 0.2 to 0.5 Hz, thresholds were shown to be independent of frequency, while at frequencies of 1.0 to 3.0 Hz, sensitivity decreased with increasing frequency, indicating that the perceptual process is not sensitive to the rate of change of acceleration of the motion stimulus. The results suggest that the perception of motion behaves as an integrating accelerometer with a bandwidth of at least 3 Hz.

  12. An economics systems analysis of land mobile radio telephone services

    NASA Technical Reports Server (NTRS)

    Leroy, B. E.; Stevenson, S. M.

    1980-01-01

    The economic interaction of terrestrial and satellite systems is considered. Parametric equations are formulated to allow examination of the necessary user thresholds and growth rates as a function of system costs. Conversely, first-order allowable system costs are found as a function of user thresholds and growth rates. Transitions between satellite and terrestrial service systems are examined. User growth rate density (users/year/sq km) is shown to be a key parameter in the analysis of systems compatibility. The concept of system design matching the price/demand curves is introduced and examples are given. The role of satellite systems is critically examined, and the economic conditions necessary for the introduction of satellite service are identified.

  13. A globally convergent MC algorithm with an adaptive learning rate.

    PubMed

    Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian

    2012-02-01

    This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to perform MCA. Recent research shows that convergence of neural-network-based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, computing these thresholds requires information about the eigenvalues of the autocorrelation matrix of the data set, which is unavailable during online extraction of the minor component from an input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information and can be easily satisfied in practical applications.
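
    The flavor of such an algorithm can be sketched with an anti-Hebbian Oja-type update (a schematic stand-in for the OJAn rule, not the paper's exact algorithm), in which a simple decaying schedule replaces the paper's adaptive learning rate so that no eigenvalue information is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data with a known covariance: variances 5 and 0.5 along rotated axes.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = R @ np.diag([np.sqrt(5.0), np.sqrt(0.5)])
v_minor = R[:, 1]                      # true minor eigenvector

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta0, tau = 0.05, 1000.0
for k in range(20000):
    x = A @ rng.standard_normal(2)     # sample with covariance R diag(5, .5) R^T
    eta = eta0 / (1.0 + k / tau)       # decaying rate: no eigenvalue info needed
    y = w @ x
    w -= eta * (y * x - (y ** 2) * w)  # anti-Hebbian (minor-component) step
    w /= np.linalg.norm(w)             # keep unit norm

print(abs(w @ v_minor))  # close to 1: w aligned with the minor component
```

    The per-step renormalization here plays the stabilizing role that, in the published algorithm, is handled by the adaptive rate itself.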

  14. Can Functional Magnetic Resonance Imaging Improve Success Rates in CNS Drug Discovery?

    PubMed Central

    Borsook, David; Hargreaves, Richard; Becerra, Lino

    2011-01-01

    Introduction The bar for developing new treatments for CNS disease is getting progressively higher and fewer novel mechanisms are being discovered, validated and developed. The high costs of drug discovery necessitate early decisions to ensure the best molecules and hypotheses are tested in expensive late stage clinical trials. The discovery of brain imaging biomarkers that can bridge preclinical to clinical CNS drug discovery and provide a ‘language of translation’ affords the opportunity to improve the objectivity of decision-making. Areas Covered This review discusses the benefits, challenges and potential issues of using a science based biomarker strategy to change the paradigm of CNS drug development and increase success rates in the discovery of new medicines. The authors have summarized PubMed and Google Scholar based publication searches to identify recent advances in functional, structural and chemical brain imaging and have discussed how these techniques may be useful in defining CNS disease state and drug effects during drug development. Expert opinion The use of novel brain imaging biomarkers holds the bold promise of making neuroscience drug discovery smarter by increasing the objectivity of decision making thereby improving the probability of success of identifying useful drugs to treat CNS diseases. Functional imaging holds the promise to: (1) define pharmacodynamic markers as an index of target engagement (2) improve translational medicine paradigms to predict efficacy; (3) evaluate CNS efficacy and safety based on brain activation; (4) determine brain activity drug dose-response relationships and (5) provide an objective evaluation of symptom response and disease modification. PMID:21765857

  15. A stimulus-dependent spike threshold is an optimal neural coder

    PubMed Central

    Jones, Douglas L.; Johnson, Erik C.; Ratnam, Rama

    2015-01-01

    A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. PMID:26082710
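
    The encode/decode loop described here can be sketched minimally: the internal decoder is a leaky low-pass reconstruction of the spike train, and a spike is emitted whenever the coding error reaches the threshold. The kernel amplitude, time constants, and test signal are invented for illustration:

```python
import math

dt, tau = 0.001, 0.020          # time step and decoder time constant (s)
theta = 0.1                     # spike threshold on the coding error
amp = theta                     # reconstruction jump per spike (assumed = theta)
decay = math.exp(-dt / tau)

signal = [1.0 + 0.5 * math.sin(2 * math.pi * 2.0 * k * dt) for k in range(2000)]
recon, spikes, errors = 0.0, [], []
for k, s in enumerate(signal):
    recon *= decay              # internal decoder: leaky low-pass of past spikes
    if s - recon >= theta:      # coding error has reached the spike threshold
        spikes.append(k)
        recon += amp            # each spike adds a fixed kernel to the decoder
    errors.append(s - recon)

print(len(spikes))                        # hundreds of spikes
print(max(abs(e) for e in errors[100:]))  # error stays bounded near theta
```

    The deterministic firing rule keeps the reconstruction error within one threshold of the signal, illustrating how a signal-dependent threshold trades spike count against fidelity.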

  16. Experimental investigation of different regimes of mode-locking in a high repetition rate passively mode-locked semiconductor quantum-dot laser.

    PubMed

    Kéfélian, Fabien; O'Donoghue, Shane; Todaro, Maria Teresa; McInerney, John; Huyet, Guillaume

    2009-04-13

    We report experimental investigations of a two-section 16-GHz repetition rate InAs/GaAs quantum-dot passively mode-locked laser. Near the threshold current, pseudo-periodic Q-switching with complex dynamics is exhibited. Mode-locking regimes characterized by different repetition rates and timing-jitter levels are encountered up to twice the threshold current. The evolution of the RF spectrum and the optical spectrum with current is compared. The different mode-locked regimes are shown to be associated with different spectral and temporal shapes, ranging from 1.3 to 6 ps. This point is discussed by invoking the existence of two different supermodes. The repetition rate evolution and timing-jitter increase are attributed to coupling between the dominant and secondary supermodes.

  17. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error-floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected-graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold; a code with too many tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes with thresholds that adhere closely to their respective channel capacity thresholds is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
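
    The protograph idea can be illustrated by "lifting" a small base matrix: each 1 becomes a randomly shifted Z×Z circulant permutation block, each 0 a zero block, so the node degrees of the protograph are preserved in the expanded code. The base matrix and lift size below are toy choices, not the constructions of the paper:

```python
import numpy as np

def lift_protograph(base, Z, rng):
    """Expand a 0/1 protograph base matrix into an (m*Z) x (n*Z) parity-check
    matrix by replacing each 1 with a randomly shifted Z x Z circulant
    permutation and each 0 with a zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                shift = rng.integers(Z)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shift, axis=1)
    return H

base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])        # toy protograph (rate 1/2, n = 4)
H = lift_protograph(base, Z=8, rng=np.random.default_rng(0))
print(H.shape)                # (16, 32)
print(set(H.sum(axis=1)))     # {3}: every check keeps the protograph degree
```

    Because each block is a permutation, row and column weights of H equal the corresponding row and column sums of the base matrix, which is what makes degree-sensitive properties (such as the proportion of degree-2 variable nodes) carry over from the protograph to the lifted code.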

  18. A Critical, Nonlinear Threshold Dictates Bacterial Invasion and Initial Kinetics During Influenza

    NASA Astrophysics Data System (ADS)

    Smith, Amber M.; Smith, Amanda P.

    2016-12-01

    Secondary bacterial infections increase morbidity and mortality of influenza A virus (IAV) infections. Bacteria are able to invade due to virus-induced depletion of alveolar macrophages (AMs), but this is not the only contributing factor. By analyzing a kinetic model, we uncovered a nonlinear initial dose threshold that is dependent on the amount of virus-induced AM depletion. The threshold separates the growth and clearance phenotypes such that bacteria decline for dose-AM depletion combinations below the threshold, stay constant near the threshold, and increase above the threshold. In addition, the distance from the threshold correlates with the growth rate. Because AM depletion changes throughout an IAV infection, the dose requirement for bacterial invasion also changes accordingly. Using the threshold, we found that the dose requirement drops dramatically during the first 7 days of IAV infection. We then validated these analytical predictions by infecting mice with doses below or above the predicted threshold over the course of IAV infection. These results identify the nonlinear way in which two independent factors work together to support successful post-influenza bacterial invasion. They provide insight into coinfection timing, the heterogeneity in outcome, the probability of acquiring a coinfection, and the use of new therapeutic strategies to combat viral-bacterial coinfections.

  19. A Critical, Nonlinear Threshold Dictates Bacterial Invasion and Initial Kinetics During Influenza.

    PubMed

    Smith, Amber M; Smith, Amanda P

    2016-12-15

    Secondary bacterial infections increase morbidity and mortality of influenza A virus (IAV) infections. Bacteria are able to invade due to virus-induced depletion of alveolar macrophages (AMs), but this is not the only contributing factor. By analyzing a kinetic model, we uncovered a nonlinear initial dose threshold that is dependent on the amount of virus-induced AM depletion. The threshold separates the growth and clearance phenotypes such that bacteria decline for dose-AM depletion combinations below the threshold, stay constant near the threshold, and increase above the threshold. In addition, the distance from the threshold correlates with the growth rate. Because AM depletion changes throughout an IAV infection, the dose requirement for bacterial invasion also changes accordingly. Using the threshold, we found that the dose requirement drops dramatically during the first 7 days of IAV infection. We then validated these analytical predictions by infecting mice with doses below or above the predicted threshold over the course of IAV infection. These results identify the nonlinear way in which two independent factors work together to support successful post-influenza bacterial invasion. They provide insight into coinfection timing, the heterogeneity in outcome, the probability of acquiring a coinfection, and the use of new therapeutic strategies to combat viral-bacterial coinfections.

  20. Computational modeling approaches to quantitative structure-binding kinetics relationships in drug discovery.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2018-03-21

    Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Motor unit recruitment in human biceps brachii during sustained voluntary contractions.

    PubMed

    Riley, Zachary A; Maerz, Adam H; Litsey, Jane C; Enoka, Roger M

    2008-04-15

    The purpose of the study was to examine the influence of the difference between the recruitment threshold of a motor unit and the target force of the sustained contraction on the discharge of the motor unit at recruitment. The discharge characteristics of 53 motor units in biceps brachii were recorded after being recruited during a sustained contraction. Some motor units (n = 22) discharged action potentials tonically after being recruited, whereas others (n = 31) discharged intermittent trains of action potentials. The two groups of motor units were distinguished by the difference between the recruitment threshold of the motor unit and the target force for the sustained contraction: tonic, 5.9 ± 2.5%; intermittent, 10.7 ± 2.9%. Discharge rate for the tonic units decreased progressively (13.9 ± 2.7 to 11.7 ± 2.6 pulses s⁻¹; P = 0.04) during the 99 ± 111 s contraction. Train rate, train duration and average discharge rate for the intermittent motor units did not change across 211 ± 153 s of intermittent discharge. The initial discharge rate at recruitment during the sustained contraction was lower for the intermittent motor units (11.0 ± 3.3 pulses s⁻¹) than the tonic motor units (13.7 ± 3.3 pulses s⁻¹; P = 0.005), and the coefficient of variation for interspike interval was higher for the intermittent motor units (34.6 ± 12.3%) than the tonic motor units (21.2 ± 9.4%) at recruitment (P = 0.001) and remained elevated for discharge duration (34.6 ± 9.2% versus 19.1 ± 11.7%, P < 0.001). In an additional experiment, 12 motor units were recorded at two different target forces below recruitment threshold (5.7 ± 1.9% and 10.5 ± 2.4%). Each motor unit exhibited the two discharge patterns (tonic and intermittent) as observed for the 53 motor units. The results suggest that newly recruited motor units with recruitment thresholds closer to the target force experienced less synaptic noise at the time of recruitment, which caused them to discharge action potentials at higher and more regular rates than motor units with recruitment thresholds further from the target force.

  2. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    PubMed

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. 
A subject in a 3I-3AFC task is assumed to choose the interval in which the greatest number of events occurred or randomly chooses among intervals which are tied for the greatest number of events. The subject is further assumed to count events over the duration of an evaluation interval that has the same timing and duration as the expected stimulus. The increase in the rate of the events caused by stimulation is proportional to the time-varying amplitude envelope of the bandpass-filtered signal raised to an exponent. We find the exponent to be about 3, consistent with our previous studies. This challenges models that are based on the assumption of the integration of a neural response that is directly proportional to the stimulus amplitude or proportional to its square (i.e., proportional to the stimulus intensity or power). Copyright © 2017 Elsevier B.V. All rights reserved.
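
    The decision rule described above lends itself to a small simulation. The sketch below uses illustrative event rates only (not the paper's fitted parameters): Poisson counts are drawn for three intervals, the stimulus adds events to one of them, and the subject picks the interval with the most events, breaking ties at random.

```python
import math
import random

def poisson(rate, rng):
    # Knuth's inversion-by-multiplication sampler; adequate for small rates.
    limit = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def percent_correct(spont_rate, stim_rate, trials, seed=1):
    """Simulate 3I-3AFC: choose the interval with the most events (ties random)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        counts = [poisson(spont_rate, rng),
                  poisson(spont_rate, rng),
                  poisson(spont_rate + stim_rate, rng)]  # signal in interval 3
        best = max(counts)
        winners = [i for i, c in enumerate(counts) if c == best]
        if rng.choice(winners) == 2:
            correct += 1
    return correct / trials
```

    With no stimulus-driven events the simulated subject is at chance (about 1/3 correct); performance rises toward 1 as the stimulus-driven rate grows, which is how a rate proportional to the amplitude envelope raised to an exponent maps onto a psychometric function.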

  3. Sign language spotting with a threshold model based on conditional random fields.

    PubMed

    Yang, Hee-Deok; Sclaroff, Stan; Lee, Seong-Whan

    2009-07-01

    Sign language spotting is the task of detecting and recognizing signs in a signed utterance, in a set vocabulary. The difficulty of sign language spotting is that instances of signs vary in both motion and appearance. Moreover, signs appear within a continuous gesture stream, interspersed with transitional movements between signs in a vocabulary and nonsign patterns (which include out-of-vocabulary signs, epentheses, and other movements that do not correspond to signs). In this paper, a novel method for designing threshold models in a conditional random field (CRF) model is proposed which performs an adaptive threshold for distinguishing between signs in a vocabulary and nonsign patterns. A short-sign detector, a hand appearance-based sign verification method, and a subsign reasoning method are included to further improve sign language spotting accuracy. Experiments demonstrate that our system can spot signs from continuous data with an 87.0 percent spotting rate and can recognize signs from isolated data with a 93.5 percent recognition rate versus 73.5 percent and 85.4 percent, respectively, for CRFs without a threshold model, short-sign detection, subsign reasoning, and hand appearance-based sign verification. Our system can also achieve a 15.0 percent sign error rate (SER) from continuous data and a 6.4 percent SER from isolated data versus 76.2 percent and 14.5 percent, respectively, for conventional CRFs.

  4. Acute effects of dynamic exercises on the relationship between the motor unit firing rate and the recruitment threshold.

    PubMed

    Ye, Xin; Beck, Travis W; DeFreitas, Jason M; Wages, Nathan P

    2015-04-01

    The aim of this study was to compare the acute effects of concentric versus eccentric exercise on motor control strategies. Fifteen men performed six sets of 10 repetitions of maximal concentric exercises or eccentric isokinetic exercises with their dominant elbow flexors on separate experimental visits. Before and after the exercise, maximal strength testing and submaximal trapezoid isometric contractions (40% of the maximal force) were performed. Both exercise conditions caused significant strength loss in the elbow flexors, but the loss was greater following the eccentric exercise (t=2.401, P=.031). The surface electromyographic signals obtained from the submaximal trapezoid isometric contractions were decomposed into individual motor unit action potential trains. For each submaximal trapezoid isometric contraction, the relationship between the average motor unit firing rate and the recruitment threshold was examined using linear regression analysis. In contrast to the concentric exercise, which did not cause significant changes in the mean linear slope coefficient and y-intercept of the linear regression line, the eccentric exercise resulted in a lower mean linear slope and an increased mean y-intercept, thereby indicating that increasing the firing rates of low-threshold motor units may be more important than recruiting high-threshold motor units to compensate for eccentric exercise-induced strength loss. Copyright © 2014 Elsevier B.V. All rights reserved.
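
    The per-contraction firing-rate-versus-recruitment-threshold relationship reduces to an ordinary least-squares line fit. A generic sketch (synthetic numbers, not the study's decomposed motor unit data):

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and y-intercept of y on x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x
```

    In this framing, the eccentric-exercise finding is a post-exercise fit whose slope is smaller and whose y-intercept is larger than the pre-exercise fit, i.e. relatively higher firing rates among low-threshold units.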

  5. False-positive rate determination of protein target discovery using a covalent modification- and mass spectrometry-based proteomics platform.

    PubMed

    Strickland, Erin C; Geer, M Ariel; Hong, Jiyong; Fitzgerald, Michael C

    2014-01-01

    Detection and quantitation of protein-ligand binding interactions is important in many areas of biological research. Stability of proteins from rates of oxidation (SPROX) is an energetics-based technique for identifying the protein targets of ligands in complex biological mixtures. Knowing the false-positive rate of protein target discovery in proteome-wide SPROX experiments is important for the correct interpretation of results. Reported here are the results of a control SPROX experiment in which chemical denaturation data are obtained on the proteins in two samples that originated from the same yeast lysate, as would be done in a typical SPROX experiment except that one sample would be spiked with the test ligand. False-positive rates of 1.2-2.2% and <0.8% are calculated for SPROX experiments using Q-TOF and Orbitrap mass spectrometer systems, respectively. Our results indicate that the false-positive rate is largely determined by random errors associated with the mass spectral analysis of the isobaric mass tag (e.g., iTRAQ®) reporter ions used for peptide quantitation. Our results also suggest that technical replicates can be used to effectively eliminate such false positives that result from this random error, as is demonstrated in a SPROX experiment to identify yeast protein targets of the drug manassantin A. The impact of ion purity in the tandem mass spectral analyses and of background oxidation on the false-positive rate of protein target discovery using SPROX is also discussed.

  6. Estimating Alarm Thresholds for Process Monitoring Data under Different Assumptions about the Data Generating Mechanism

    DOE PAGES

    Burr, Tom; Hamada, Michael S.; Howell, John; ...

    2013-01-01

    Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
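
    Under the simplest data-generating assumption (i.i.d. in-control residuals), an alarm threshold for a target false alarm rate can be set from an empirical quantile. This is a generic sketch, not the model-selection machinery the paper discusses; the Gaussian residuals below are synthetic.

```python
import random

def alarm_threshold(residuals, false_alarm_rate):
    """Empirical (1 - false_alarm_rate) quantile of in-control residuals."""
    ordered = sorted(residuals)
    idx = min(int((1.0 - false_alarm_rate) * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Hypothetical in-control residuals (residual = data - prediction):
rng = random.Random(42)
in_control = [rng.gauss(0.0, 1.0) for _ in range(20000)]
threshold = alarm_threshold(in_control, 0.01)  # alarm when residual > threshold
```

    For Gaussian residuals this lands near the 2.33-sigma point. The mixture and autocorrelated cases described in the abstract are precisely where such a naive quantile must give way to a model-based estimate.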

  7. Genetic variation in threshold reaction norms for alternative reproductive tactics in male Atlantic salmon, Salmo salar.

    PubMed

    Piché, Jacinthe; Hutchings, Jeffrey A; Blanchard, Wade

    2008-07-07

    Alternative reproductive tactics may be a product of adaptive phenotypic plasticity, such that discontinuous variation in life history depends on both the genotype and the environment. Phenotypes that fall below a genetically determined threshold adopt one tactic, while those exceeding the threshold adopt the alternative tactic. We report evidence of genetic variability in maturation thresholds for male Atlantic salmon (Salmo salar) that mature either as large (more than 1 kg) anadromous males or as small (10-150 g) parr. Using a common-garden experimental protocol, we find that the growth rate at which the sneaker parr phenotype is expressed differs among pure- and mixed-population crosses. Maturation thresholds of hybrids were intermediate to those of pure crosses, consistent with the hypothesis that the life-history switch points are heritable. Our work provides evidence, for a vertebrate, that thresholds for alternative reproductive tactics differ genetically among populations and can be modelled as discontinuous reaction norms for age and size at maturity.

  8. Cost-effectiveness of colorectal cancer screening with computed tomography colonography according to a polyp size threshold for polypectomy.

    PubMed

    Heresbach, Denis; Chauvin, Pauline; Hess-Migliorretti, Aurélie; Riou, Françoise; Grolier, Jacques; Josselin, Jean-Michel

    2010-06-01

    Computed tomography colonography (CTC) has acceptable accuracy in detecting colonic lesions, especially for polyps of at least 6 mm. The aim of this analysis is to determine the cost-effectiveness of population-based screening for colorectal cancer (CRC) using CTC with a polyp size threshold. The cost-effectiveness ratios of CTC performed at 50, 60 and 70 years old, without (PL strategy) or with (TS strategy) a polyp size threshold, were compared using a Markov process. Incremental cost-effectiveness ratios (ICER) were calculated per life-year gained (LYG) for a time horizon of 30 years. The ICERs of the PL and TS strategies were 12 042 and 2765 euro/LYG, associated with CRC prevention rates of 37.9 and 36.5%. The ICERs of the PL and TS strategies dropped to 9687 and 1857 euro/LYG when advanced adenoma (AA) prevalence increased from 6.9 to 8.6% for male participants and from 3.8 to 4.9% for female participants, or to 9482 and 2067 euro/LYG when adenoma and AA annual recurrence rates dropped to 3.2 and 0.25%. The ICERs for the PL and TS strategies decreased to 7947 and 954 euro/LYG when only two CTCs were performed, at 50 and 60 years old. Conversely, the ICER did not significantly change when varying the population participation rate or the accuracy of CTC. CTC with a 6 mm threshold for polypectomy is associated with a substantial cost reduction without significant loss of efficacy. Cost-effectiveness depends more on the AA prevalence or transition rate to CRC than on CTC accuracy or screening compliance.

  9. Net reclassification index at event rate: properties and relationships.

    PubMed

    Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B

    2017-12-10

    The net reclassification improvement (NRI) is an attractively simple summary measure quantifying the improvement in performance from the addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision-analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. Then, these plots can be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
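
    The Kolmogorov-Smirnov interpretation can be made concrete: scan all classification thresholds and take the largest gap between the empirical risk distributions of events and non-events. A minimal sketch with hypothetical risk values (the NRI at event rate is then the difference of this quantity between the new and old models):

```python
def ks_distance(event_risks, nonevent_risks):
    """Max over thresholds t of |F_events(t) - F_nonevents(t)|."""
    def ecdf(sample, t):
        return sum(x <= t for x in sample) / len(sample)
    thresholds = sorted(set(event_risks) | set(nonevent_risks))
    return max(abs(ecdf(event_risks, t) - ecdf(nonevent_risks, t))
               for t in thresholds)
```

    Perfectly separated risk distributions give a distance of 1, identical ones give 0, matching the intuition that a useful model pushes events and non-events apart.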

  10. Identification and Classification of Conserved RNA Secondary Structures in the Human Genome

    PubMed Central

    Pedersen, Jakob Skou; Bejerano, Gill; Siepel, Adam; Rosenbloom, Kate; Lindblad-Toh, Kerstin; Lander, Eric S; Kent, Jim; Miller, Webb; Haussler, David

    2006-01-01

    The discoveries of microRNAs and riboswitches, among others, have shown functional RNAs to be biologically more important and genomically more prevalent than previously anticipated. We have developed a general comparative genomics method based on phylogenetic stochastic context-free grammars for identifying functional RNAs encoded in the human genome and used it to survey an eight-way genome-wide alignment of the human, chimpanzee, mouse, rat, dog, chicken, zebrafish, and pufferfish genomes for deeply conserved functional RNAs. At a loose threshold for acceptance, this search resulted in a set of 48,479 candidate RNA structures. This screen finds a large number of known functional RNAs, including 195 miRNAs, 62 histone 3′UTR stem loops, and various types of known genetic recoding elements. Among the highest-scoring new predictions are 169 new miRNA candidates, as well as new candidate selenocysteine insertion sites, RNA editing hairpins, RNAs involved in transcript autoregulation, and many folds that form singletons or small functional RNA families of completely unknown function. While the rate of false positives in the overall set is difficult to estimate and is likely to be substantial, the results nevertheless provide evidence for many new human functional RNAs and present specific predictions to facilitate their further characterization. PMID:16628248

  11. Trans-10, cis-12-conjugated linoleic acid alters hepatic gene expression in a polygenic obese line of mice displaying hepatic lipidosis.

    PubMed

    Ashwell, Melissa S; Ceddia, Ryan P; House, Ralph L; Cassady, Joseph P; Eisen, Eugene J; Eling, Thomas E; Collins, Jennifer B; Grissom, Sherry F; Odle, Jack

    2010-09-01

    The trans-10, cis-12 isomer of conjugated linoleic acid (CLA) causes a rapid reduction of body and adipose mass in mice. In addition to changes in adipose tissue, numerous studies have reported alterations in hepatic lipid metabolism. Livers of CLA-fed mice gain mass, partly due to lipid accumulation; however, the precise molecular mechanisms are unknown. To elucidate these mechanisms, we examined fatty acid composition and gene expression profiles of livers from a polygenic obese line of mice fed 1% trans-10, cis-12-CLA for 14 days. Analysis of gene expression data led to the identification of 1393 genes differentially expressed in the liver of CLA-fed male mice at a nominal P value of 0.01, and 775 were considered significant using a false discovery rate (FDR) threshold of 0.05. While surprisingly few genes in lipid metabolism were impacted, pathway analysis found that protein kinase A (PKA) and cyclic adenosine monophosphate (cAMP) signaling pathways were affected by CLA treatment, and 98 of the 775 genes were found to be regulated by hepatocyte nuclear factor 4alpha, a transcription factor important in controlling liver metabolic status. Copyright 2010 Elsevier Inc. All rights reserved.
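
    FDR thresholds like the 0.05 cutoff above are conventionally applied with the Benjamini-Hochberg step-up procedure. A generic sketch (not tied to this study's specific analysis pipeline):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of hypotheses rejected at FDR level q (BH step-up)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest 1-based rank i with p_(i) <= (i / m) * q
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * q:
            k = rank
    return sorted(order[:k])
```

    Rejecting every hypothesis up to the largest rank satisfying the inequality controls the expected fraction of false discoveries at q, which is the kind of criterion behind an "FDR threshold of 0.05" in a gene expression screen.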

  12. Gene Expression Profiling in Human Lung Cells Exposed to Isoprene-Derived Secondary Organic Aerosol.

    PubMed

    Lin, Ying-Hsuan; Arashiro, Maiko; Clapp, Phillip W; Cui, Tianqu; Sexton, Kenneth G; Vizuete, William; Gold, Avram; Jaspers, Ilona; Fry, Rebecca C; Surratt, Jason D

    2017-07-18

    Secondary organic aerosol (SOA) derived from the photochemical oxidation of isoprene contributes a substantial mass fraction to atmospheric fine particulate matter (PM2.5). The formation of isoprene SOA is influenced largely by anthropogenic emissions through multiphase chemistry of its multigenerational oxidation products. Considering the abundance of isoprene SOA in the troposphere, understanding mechanisms of adverse health effects through inhalation exposure is critical to mitigating its potential impact on public health. In this study, we assessed the effects of isoprene SOA on gene expression in human airway epithelial cells (BEAS-2B) through an air-liquid interface exposure. Gene expression profiling of 84 oxidative stress and 249 inflammation-associated human genes was performed. Our results show that the expression levels of 29 genes were significantly altered upon isoprene SOA exposure under noncytotoxic conditions (p < 0.05), with the majority (22/29) of genes passing a false discovery rate threshold of 0.3. The most significantly affected genes belong to the nuclear factor (erythroid-derived 2)-like 2 (Nrf2) transcription factor network. Nrf2 function was confirmed using a reporter cell line. Together with detailed characterization of SOA constituents, this study reveals the impact of isoprene SOA exposure on lung responses and highlights the importance of further understanding its potential health outcomes.

  13. Bayesian hierarchical modeling for subject-level response classification in peptide microarray immunoassays

    PubMed Central

    Imholte, Gregory; Gottardo, Raphael

    2017-01-01

    The peptide microarray immunoassay simultaneously screens sample serum against thousands of peptides, determining the presence of antibodies bound to array probes. Peptide microarrays tiling immunogenic regions of pathogens (e.g. envelope proteins of a virus) are an important high-throughput tool for querying and mapping antibody binding. Because of the assay’s many steps, from probe synthesis to incubation, peptide microarray data can be noisy, with extreme outliers. In addition, subjects may produce different antibody profiles in response to an identical vaccine stimulus or infection, due to variability among subjects’ immune systems. We present a robust Bayesian hierarchical model for peptide microarray experiments, pepBayes, to estimate the probability of antibody response for each subject/peptide combination. Heavy-tailed error distributions accommodate outliers and extreme responses, and tailored random effect terms automatically incorporate technical effects prevalent in the assay. We apply our model to two vaccine trial datasets to demonstrate model performance. Our approach enjoys high sensitivity and specificity when detecting vaccine induced antibody responses. A simulation study shows an adaptive thresholding classification method has appropriate false discovery rate control with high sensitivity, and receiver operating characteristics generated on vaccine trial data suggest that pepBayes clearly separates responses from non-responses. PMID:27061097

  14. Identifying Key Drivers of Return Reversal with Dynamical Bayesian Factor Graph.

    PubMed

    Zhao, Shuai; Tong, Yunhai; Wang, Zitian; Tan, Shaohua

    2016-01-01

    In the stock market, return reversal occurs when investors sell overbought stocks and buy oversold stocks, reversing the stocks' price trends. In this paper, we develop a new method to identify key drivers of return reversal by incorporating a comprehensive set of factors derived from different economic theories into one unified dynamical Bayesian factor graph. We then use the model to depict factor relationships and their dynamics, from which we make some interesting discoveries about the mechanism behind return reversals. Through extensive experiments on the US stock market, we conclude that among the various factors, the liquidity factors consistently emerge as key drivers of return reversal, which is in support of the theory of liquidity effect. Specifically, we find that stocks with high turnover rates or high Amihud illiquidity measures have a greater probability of experiencing return reversals. Apart from the consistent drivers, we find other drivers of return reversal that generally change from year to year, and they serve as important characteristics for evaluating the trends of stock returns. Besides, we also identify some seldom discussed yet enlightening inter-factor relationships, one of which shows that stocks in Finance and Insurance industry are more likely to have high Amihud illiquidity measures in comparison with those in other industries. These conclusions are robust for return reversals under different thresholds.

  15. The Condensate Database for Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Gallaher, D. W.; Lv, Q.; Grant, G.; Campbell, G. G.; Liu, Q.

    2014-12-01

    Although massive amounts of cryospheric data have been and are being generated at an unprecedented rate, a vast majority of the otherwise valuable data have been "sitting in the dark", with very limited quality assurance or runtime access for higher-level data analytics such as anomaly detection. This has significantly hindered data-driven scientific discovery and advances in the polar research and Earth sciences community. In an effort to solve this problem, we have investigated and developed innovative techniques for the construction of a "condensate database", which is much smaller than the original data yet still captures the key characteristics (e.g., spatio-temporal norms and changes). In addition, we take advantage of parallel databases that make use of low-cost GPU processors. As a result, efficient anomaly detection and quality assurance can be achieved with in-memory data analysis or limited I/O requests. The challenges lie in the fact that cryospheric data are massive and diverse, with normal/abnormal patterns spanning a wide range of spatial and temporal scales. This project consists of investigations in three main areas: (1) adaptive neighborhood-based thresholding in both space and time; (2) compressive-domain pattern detection and change analysis; and (3) hybrid and adaptive condensation of multi-modal, multi-scale cryospheric data.

  16. Crack Growth Behavior in the Threshold Region for High Cycle Fatigue Loading

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Zanganeh, M.

    2014-01-01

    This paper describes the results of a research program conducted to improve the understanding of fatigue crack growth rate behavior in the threshold growth rate region and to answer a question on the validity of threshold region test data. The validity question relates to the view held by some experimentalists that using the ASTM load shedding test method does not produce valid threshold test results and material properties. The question involves the fanning behavior observed in the threshold region of da/dN plots for some materials, in which the low R-ratio data fan out from the high R-ratio data. This fanning behavior, or elevation of threshold values in the low R-ratio tests, is generally assumed to be caused by an increase in crack closure in the low R-ratio tests. Also, the increase in crack closure is assumed by some experimentalists to result from using the ASTM load shedding test procedure. The belief is that this procedure induces load history effects which cause remote closure from plasticity and/or roughness changes in the surface morphology. However, experimental studies performed by the authors have shown that the increase in crack closure is a result of extensive crack tip bifurcations that can occur in some materials, particularly in aluminum alloys, when the crack tip cyclic yield zone size becomes less than the grain size of the alloy. This behavior is related to the high stacking fault energy (SFE) property of aluminum alloys, which results in easier slip characteristics. Therefore, the fanning behavior that occurs in aluminum alloys is a function of the intrinsic dislocation properties of the alloy, and the fanned data therefore do represent the true threshold properties of the material. However, for the corrosion-sensitive steel alloys tested in laboratory air, the occurrence of fanning results from fretting corrosion at the crack tips, and these results should not be considered representative of valid threshold properties, because the fanning is eliminated when testing is performed in dry air.

  17. EAP recordings in ineraid patients--correlations with psychophysical measures and possible implications for patient fitting.

    PubMed

    Zimmerling, Martin J; Hochmair, Erwin S

    2002-04-01

    Objective measurements can be helpful for cochlear implant fitting of difficult populations, as for example very young children. One method, the recording of the electrically evoked compound action potential (EAP), measures the nerve recruitment in the cochlea in response to stimulation through the implant. For coding strategies implemented at a moderate stimulation rate of 250 pps per channel, useful correlations between EAP data and psychophysical data have been already found. With new systems running at higher rates, it is important to check these correlations again. This study investigates the correlations between psychophysical data and EAP measures calculated from EAP amplitude growth functions. EAP data were recorded in 12 Ineraid subjects. Additionally, behavioral thresholds (THR) and maximum acceptable loudness levels (MAL) were determined for stimulation rates of 80 pps and 2,020 pps for each electrode. Useful correlations between EAP data and psychophysical data were found at the low stimulation rate (80 pps). However, at the higher stimulation rate (2,020 pps) correlations were not significant. They were improved substantially, however, by introducing a factor that corrected for disparities due to temporal integration. Incorporation of this factor, which controls for the influence of the stimulation rate on the threshold, improved the correlations between EAP measures recorded at 80 pps and psychophysical MALs measured at 2,020 pps to better than r = 0.70. EAP data as such can only be used to predict behavioral THRs or MCLs at low stimulation rates. To cope with temporal integration effects at higher stimulation rates, EAP data must be rate corrected. The introduction of a threshold-rate-factor is a promising way to achieve that goal. Further investigations need to be performed.

  18. 75 FR 68790 - Medicare Program; Medicare Part B Monthly Actuarial Rates, Premium Rate, and Annual Deductible...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-09

    ... 0938-AP81 Medicare Program; Medicare Part B Monthly Actuarial Rates, Premium Rate, and Annual... (SMI) program beginning January 1, 2011. In addition, this notice announces the monthly premium for... beneficiaries with modified adjusted gross income above certain threshold amounts. The monthly actuarial rates...

  19. 76 FR 67572 - Medicare Program; Medicare Part B Monthly Actuarial Rates, Premium Rate, and Annual Deductible...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-01

    ... 0938-AQ16 Medicare Program; Medicare Part B Monthly Actuarial Rates, Premium Rate, and Annual... (SMI) program beginning January 1, 2012. In addition, this notice announces the monthly premium for... beneficiaries with modified adjusted gross income above certain threshold amounts. The monthly actuarial rates...

  20. 78 FR 64943 - Medicare Program; Medicare Part B Monthly Actuarial Rates, Premium Rate, and Annual Deductible...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-30

    ... 0938-AR58 Medicare Program; Medicare Part B Monthly Actuarial Rates, Premium Rate, and Annual... (SMI) program beginning January 1, 2014. In addition, this notice announces the monthly premium for... beneficiaries with modified adjusted gross income above certain threshold amounts. The monthly actuarial rates...

  1. Should English healthcare providers be penalised for failing to collect patient-reported outcome measures? A retrospective analysis

    PubMed Central

    Street, Andrew; Gomes, Manuel; Bojke, Chris

    2015-01-01

    Objective: The best practice tariff for hip and knee replacement in the English National Health Service (NHS) rewards providers based on improvements in patient-reported outcome measures (PROMs) collected before and after surgery. Providers only receive a bonus if at least 50% of their patients complete the preoperative questionnaire. We determined how many providers failed to meet this threshold prior to the policy introduction and assessed longitudinal stability of participation rates. Design: Retrospective observational study using data from Hospital Episode Statistics and the national PROM programme from April 2009 to March 2012. We calculated participation rates based on either (a) all PROM records or (b) only those that could be linked to inpatient records; constructed confidence intervals around rates to account for sampling variation; applied precision weighting to allow for volume; and applied risk adjustment. Setting: NHS hospitals and private providers in England. Participants: NHS patients undergoing elective unilateral hip and knee replacement surgery. Main outcome measures: Number of providers with participation rates statistically significantly below 50%. Results: Crude rates identified many providers that failed to achieve the 50% threshold but there were substantially fewer after adjusting for uncertainty and precision. While important, risk adjustment required restricting the analysis to linked data. Year-on-year correlation between provider participation rates was moderate. Conclusions: Participation rates have improved over time and only a small number of providers now fall below the threshold, but administering preoperative questionnaires remains problematic in some providers. We recommend that participation rates are based on linked data and take into account sampling variation. PMID:25827906

  2. Should English healthcare providers be penalised for failing to collect patient-reported outcome measures? A retrospective analysis.

    PubMed

    Gutacker, Nils; Street, Andrew; Gomes, Manuel; Bojke, Chris

    2015-08-01

    The best practice tariff for hip and knee replacement in the English National Health Service (NHS) rewards providers based on improvements in patient-reported outcome measures (PROMs) collected before and after surgery. Providers only receive a bonus if at least 50% of their patients complete the preoperative questionnaire. We determined how many providers failed to meet this threshold prior to the policy introduction and assessed longitudinal stability of participation rates. Retrospective observational study using data from Hospital Episode Statistics and the national PROM programme from April 2009 to March 2012. We calculated participation rates based on either (a) all PROM records or (b) only those that could be linked to inpatient records; constructed confidence intervals around rates to account for sampling variation; applied precision weighting to allow for volume; and applied risk adjustment. NHS hospitals and private providers in England. NHS patients undergoing elective unilateral hip and knee replacement surgery. Number of providers with participation rates statistically significantly below 50%. Crude rates identified many providers that failed to achieve the 50% threshold but there were substantially fewer after adjusting for uncertainty and precision. While important, risk adjustment required restricting the analysis to linked data. Year-on-year correlation between provider participation rates was moderate. Participation rates have improved over time and only a small number of providers now fall below the threshold, but administering preoperative questionnaires remains problematic in some providers. We recommend that participation rates are based on linked data and take into account sampling variation. © The Royal Society of Medicine.

  3. Successes in drug discovery and design.

    PubMed

    2004-04-01

    The Society for Medicines Research (SMR) held a one-day meeting on case histories in drug discovery on December 4, 2003, at the National Heart and Lung Institute in London. These meetings have been organized by the SMR biannually for many years, and this latest meeting proved extremely popular, attracting a capacity audience of more than 130 registrants. The purpose of these meetings is educational; they allow those interested in drug discovery to hear key lessons from recent successful drug discovery programs. There was no overall linking theme between the talks, other than that each success story had led to the introduction of a new and improved product of therapeutic use. The drug discovery stories covered in the meeting were extremely varied and, taken together, they emphasized that each successful story is unique and special. This meeting is also special for the SMR because it presents the "SMR Award for Drug Discovery" in recognition of outstanding achievement and contribution in the area. It should be remembered that drug discovery is an extremely risky, costly and complicated business in which the success rate is, at best, low. (c) 2004 Prous Science. All rights reserved.

  4. Strategies for bringing drug delivery tools into discovery.

    PubMed

    Kwong, Elizabeth; Higgins, John; Templeton, Allen C

    2011-06-30

    The past decade has yielded a significant body of literature discussing approaches for development and discovery collaboration in the pharmaceutical industry. As a result, collaborations between discovery groups and development scientists have increased considerably. The productivity of pharmaceutical companies in delivering new drugs to the market, however, has not increased, and development costs continue to rise. Inability to predict clinical and toxicological response underlies the high attrition rate of leads at every step of drug development. A partial solution to this high attrition rate could be provided by better preclinical pharmacokinetic measurements that inform pharmacodynamic (PD) response based on key pathways that drive disease progression and therapeutic response. A critical link between these key pharmacology, pharmacokinetics and toxicology studies is the formulation. The challenges in preclinical formulation development include limited availability of compounds, rapid turnaround requirements and the frequently unoptimized physical properties of lead compounds. Despite these challenges, this paper illustrates some successes resulting from close collaboration between formulation scientists and discovery teams. This close collaboration has resulted in the development of formulations that meet biopharmaceutical needs from early-stage preclinical in vivo model development through toxicity testing and development risk assessment of preclinical drug candidates. Published by Elsevier B.V.

  5. Use of Biotechnological Devices in the Quantification of Psychophysiological Workload of Professional Chess Players.

    PubMed

    Fuentes, Juan P; Villafaina, Santos; Collado-Mateo, Daniel; de la Vega, Ricardo; Gusi, Narcis; Clemente-Suárez, Vicente Javier

    2018-01-19

    The psychophysiological requirements of chess players are poorly understood, and periodization of training is often made without any empirical basis. For this reason, the aim of the present study was to investigate the psychophysiological response and quantify the player's internal load during and after a chess game. The participant was an elite 33-year-old male chess player ranked among the 300 best chess players in the world. Cortical arousal (via critical flicker fusion threshold), electroencephalographic activity (via the theta Fz/alpha Pz ratio) and autonomic modulation (via heart rate variability) were analyzed. Data revealed that the critical flicker fusion threshold and the theta Fz/alpha Pz ratio increased, while heart rate variability decreased, during the chess game. All these changes indicated that internal load increased during the game. In addition, pre-activation was detected in the pre-game measure, suggesting that the prefrontal cortex might be preparatorily activated. For these reasons, electroencephalogram, critical flicker fusion threshold and heart rate variability analyses may be highly applicable tools to control and monitor workload in chess players.

  6. 75 FR 42835 - Medicare Program; Inpatient Rehabilitation Facility Prospective Payment System for Federal Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... estimated cost of the case exceeds the adjusted outlier threshold. We calculate the adjusted outlier... to 80 percent of the difference between the estimated cost of the case and the outlier threshold. In... Federal Prospective Payment Rates VI. Update to Payments for High-Cost Outliers under the IRF PPS A...

  7. Spatially implicit approaches to understand the manipulation of mating success for insect invasion management

    Treesearch

    Takehiko Yamanaka; Andrew M. Liebhold

    2009-01-01

    Recent work indicates that Allee effects (the positive relationship between population size and per capita growth rate) are critical in determining the successful establishment of invading species. Allee effects may create population thresholds, and failure to establish is likely if invading populations fall below these thresholds. There are many mechanisms that may...

  8. Does Citrulline Malate Enhance Physical Performance

    DTIC Science & Technology

    2010-10-01

    Jakeman, P.M. (2010). Citrulline malate enhances athletic anaerobic performance and relieves muscle soreness. Journal of Strength and Conditioning...returned for their third and final VO2max test. VO2max, lactate threshold, maximum watts reached, ratings of perceived exertion and pre- and post...Figure 7. Lactate threshold (as % of VO2max) for all three conditions

  9. Using Generalized Additive Modeling to Empirically Identify Thresholds within the ITERS in Relation to Toddlers' Cognitive Development

    ERIC Educational Resources Information Center

    Setodji, Claude Messan; Le, Vi-Nhuan; Schaack, Diana

    2013-01-01

    Research linking high-quality child care programs and children's cognitive development has contributed to the growing popularity of child care quality benchmarking efforts such as quality rating and improvement systems (QRIS). Consequently, there has been an increased interest in and a need for approaches to identifying thresholds, or cutpoints,…

  10. Retrospective analysis of pulse oximeter alarm settings in an intensive care unit patient population.

    PubMed

    Lansdowne, Krystal; Strauss, David G; Scully, Christopher G

    2016-01-01

    The cacophony of alerts and alarms produced by medical devices in a hospital results in alarm fatigue. The pulse oximeter is one of the most common sources of alarms. One way to reduce alarm rates is to adjust alarm settings at the bedside. This study retrospectively examined the effect of individual pulse oximeter alarm settings on alarm rates and on inter- and intra-patient variability. Nine hundred sixty-two previously collected intensive care unit (ICU) patient records were obtained from the Multiparameter Intelligent Monitoring in Intensive Care II Database (Beth Israel Deaconess Medical Center, Boston, MA). Inclusion criteria were patient records that contained SpO2 trend data sampled at 1 Hz for at least 1 h and a matching clinical record. SpO2 alarm rates were simulated by applying a range of thresholds (84, 86, 88, and 90%) and delay times (10 to 60 s) to the SpO2 data. Patient records with at least 12 h of SpO2 data were examined for variability in alarm rate over time. Decreasing SpO2 thresholds and increasing delay times resulted in decreased alarm rates. A limited number of patient records accounted for most alarms, and this number increased as alarm settings loosened (the top 10% of patient records were responsible for 57.4% of all alarms at an SpO2 threshold of 90% and a 15 s delay, and for 81.6% at an SpO2 threshold of 84% and a 45 s delay). Alarm rates were not consistent over time for individual patients, with periods of high and low alarm activity for all alarm settings. Pulse oximeter SpO2 alarm rates are variable between patients and over time, and both the alarm rate and the extent of inter- and intra-patient variability can be affected by the alarm settings. Personalized alarm settings for a patient's current status may help to reduce alarm fatigue for nurses.
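
    The alarm-rate simulation described above (a threshold plus a delay applied to a 1 Hz SpO2 trace) can be sketched as follows. This is a minimal illustration, not the study's code; the counting rule — an alarm fires once per episode in which SpO2 stays below the threshold for at least the delay time — is an assumption about how such simulations are typically implemented:

    ```python
    def count_alarms(spo2, threshold, delay_s):
        """Count alarm events in a 1 Hz SpO2 trace (one sample per second).

        An alarm fires when SpO2 stays below `threshold` for at least
        `delay_s` consecutive seconds; the episode then counts once
        until SpO2 recovers above the threshold.
        """
        alarms = 0
        below = 0      # consecutive seconds below threshold
        fired = False  # alarm already raised for this episode
        for s in spo2:
            if s < threshold:
                below += 1
                if below >= delay_s and not fired:
                    alarms += 1
                    fired = True
            else:
                below = 0
                fired = False
        return alarms

    # A synthetic 60 s trace with one 20 s desaturation to 85%:
    trace = [98] * 30 + [85] * 20 + [97] * 10
    print(count_alarms(trace, threshold=90, delay_s=15))  # → 1
    print(count_alarms(trace, threshold=84, delay_s=15))  # → 0
    ```

    Loosening the settings (a lower threshold or a longer delay) can only suppress alarms, consistent with the monotone effect reported in the abstract.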

  11. A quantitative model of honey bee colony population dynamics.

    PubMed

    Khoury, David S; Myerscough, Mary R; Barron, Andrew B

    2011-04-18

    Since 2006 the rate of honey bee colony failure has increased significantly. As an aid to testing hypotheses for the causes of colony failure, we have developed a compartment model of honey bee colony population dynamics to explore the impact of different death rates of forager bees on colony growth and development. The model predicts a critical threshold forager death rate below which colonies regulate a stable population size. If death rates are sustained above this threshold, rapid population decline is predicted and colony failure is inevitable. The model also predicts that high forager death rates draw hive bees into the foraging population at much younger ages than normal, which acts to accelerate colony failure. The model suggests that colony failure can be understood in terms of observed principles of honey bee population dynamics, and provides a theoretical framework for experimental investigation of the problem.
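
    A numerical sketch of such a compartment model — hive bees H recruited into a forager class F that dies at rate m — is given below. The recruitment function and every parameter value here are illustrative assumptions in the spirit of the abstract, not the paper's fitted equations:

    ```python
    def simulate_colony(m, days=300, dt=0.1, L=2000.0, w=27000.0,
                        alpha=0.25, sigma=0.75, H0=16000.0, F0=8000.0):
        """Euler-integrate a two-compartment hive-bee/forager model.

        dH/dt = L*N/(w+N) - H*R    (eclosion minus recruitment, N = H+F)
        dF/dt = H*R - m*F          (recruitment minus forager death)
        R     = alpha - sigma*F/N  (recruitment slowed by social inhibition)

        Returns the total adult population after `days` days.
        """
        H, F = H0, F0
        for _ in range(int(days / dt)):
            N = H + F
            if N <= 0:
                return 0.0
            R = max(alpha - sigma * F / N, 0.0)
            dH = L * N / (w + N) - H * R
            dF = H * R - m * F
            H = max(H + dt * dH, 0.0)
            F = max(F + dt * dF, 0.0)
        return H + F

    # Below some forager death rate the colony settles at a stable size;
    # above it, the population collapses:
    print(simulate_colony(m=0.10) > simulate_colony(m=0.80))  # → True
    ```

    Sweeping m and finding the value at which the long-run population first drops toward zero is one simple way to locate the critical threshold the abstract describes.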

  12. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    PubMed Central

    Zhang, Weina; Quan, Wei; Guo, Lei

    2012-01-01

    The precision of star point location is critical for identifying the star map and acquiring the aircraft attitude with star sensors. Under dynamic conditions, star images are not only corrupted by various noises, but also blurred due to the angular rate of the star sensor. To handle different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on an adaptive wavelet threshold and a restoration method for large angular rates. The adaptive threshold is adopted to denoise the star image when the angular rate is in the dynamic range. Then, a mathematical model of the motion blur is derived in order to restore star maps blurred by large angular rates. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
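
    Wavelet-threshold denoising of the kind described rests on a coefficient-shrinkage step. The sketch below shows the standard soft-thresholding rule, with Donoho's universal threshold as one common way to choose the cut-off; the paper's adaptive, angular-rate-dependent threshold selection is not reproduced here:

    ```python
    import math

    def soft_threshold(coeffs, t):
        """Shrink wavelet detail coefficients toward zero by t,
        zeroing any coefficient whose magnitude is below t."""
        return [math.copysign(abs(c) - t, c) if abs(c) > t else 0.0
                for c in coeffs]

    def universal_threshold(coeffs):
        """Donoho's universal threshold sigma*sqrt(2*ln(n)), with the
        noise level sigma estimated from the median absolute detail
        coefficient (MAD / 0.6745 for Gaussian noise)."""
        n = len(coeffs)
        sigma = sorted(abs(c) for c in coeffs)[n // 2] / 0.6745
        return sigma * math.sqrt(2.0 * math.log(n))

    print(soft_threshold([3.0, -0.5, 1.5], t=1.0))  # → [2.0, 0.0, 0.5]
    ```

    In a full pipeline these functions would be applied to the detail coefficients of a wavelet decomposition of the star image, with the approximation coefficients left untouched.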

  13. Heart rate variability and pain: associations of two interrelated homeostatic processes.

    PubMed

    Appelhans, Bradley M; Luecken, Linda J

    2008-02-01

    Between-person variability in pain sensitivity remains poorly understood. Given a conceptualization of pain as a homeostatic emotion, we hypothesized inverse associations between measures of resting heart rate variability (HRV), an index of autonomic regulation of heart rate that has been linked to emotionality, and sensitivity to subsequently administered thermal pain. Resting electrocardiography was collected, and frequency-domain measures of HRV were derived through spectral analysis. Fifty-nine right-handed participants provided ratings of pain intensity and unpleasantness following exposure to 4 degrees C thermal pain stimulation, and indicated their thresholds for barely noticeable and moderate pain during three exposures to decreasing temperature. Greater low-frequency HRV was associated with lower ratings of 4 degrees C pain unpleasantness and higher thresholds for barely noticeable and moderate pain. High-frequency HRV was unrelated to measures of pain sensitivity. Findings suggest pain sensitivity is influenced by characteristics of a central homeostatic system also involved in emotion.

  14. Extended range radiation dose-rate monitor

    DOEpatents

    Valentine, Kenneth H.

    1988-01-01

    An extended range dose-rate monitor is provided which utilizes the pulse pileup phenomenon that occurs in conventional counting systems to alter the dynamic response of the system to extend the dose-rate counting range. The current pulses from a solid-state detector generated by radiation events are amplified and shaped prior to applying the pulses to the input of a comparator. The comparator generates one logic pulse for each input pulse which exceeds the comparator reference threshold. These pulses are integrated and applied to a meter calibrated to indicate the measured dose-rate in response to the integrator output. A portion of the output signal from the integrator is fed back to vary the comparator reference threshold in proportion to the output count rate to extend the sensitive dynamic detection range by delaying the asymptotic approach of the integrator output toward full scale as measured by the meter.

  15. Real-time detection of faecally contaminated drinking water with tryptophan-like fluorescence: defining threshold values.

    PubMed

    Sorensen, James P R; Baker, Andy; Cumberland, Susan A; Lapworth, Dan J; MacDonald, Alan M; Pedley, Steve; Taylor, Richard G; Ward, Jade S T

    2018-05-01

    We assess the use of fluorescent dissolved organic matter at excitation-emission wavelengths of 280 nm and 360 nm, termed tryptophan-like fluorescence (TLF), as an indicator of faecally contaminated drinking water. A significant logistic regression model was developed using TLF as a predictor of thermotolerant coliforms (TTCs) using data from groundwater- and surface water-derived drinking water sources in India, Malawi, South Africa and Zambia. A TLF threshold of 1.3 ppb dissolved tryptophan was selected to classify TTC contamination. Validation of the TLF threshold indicated a false-negative error rate of 15% and a false-positive error rate of 18%. The threshold was unsuccessful at classifying contaminated sources containing <10 TTC cfu per 100 mL, which we consider the current limit of detection. If only sources above this limit were classified, the false-negative error rate was very low at 4%. TLF intensity was very strongly correlated with TTC concentration (ρs = 0.80). A higher threshold of 6.9 ppb dissolved tryptophan is proposed to indicate heavily contaminated sources (≥100 TTC cfu per 100 mL). Current commercially available fluorimeters are easy to use, suitable for use online and in remote environments, require neither reagents nor consumables, and crucially provide an instantaneous reading. TLF measurements are not appreciably impaired by common interferents, such as pH, turbidity and temperature, within typical natural ranges. The technology is a viable option for the real-time screening of faecally contaminated drinking water globally. Copyright © 2017 Natural Environment Research Council (NERC), as represented by the British Geological Survey (BGS). Published by Elsevier B.V. All rights reserved.
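
    Validating a fixed fluorescence cut-off against TTC ground truth, as described, reduces to counting false negatives and false positives at that threshold. A minimal sketch follows; the readings passed in are invented, and only the 1.3 ppb cut-off comes from the abstract:

    ```python
    def threshold_error_rates(tlf_ppb, contaminated, threshold=1.3):
        """Classify a source as contaminated when its TLF reading (ppb
        dissolved tryptophan) meets the threshold, and return the
        (false_negative_rate, false_positive_rate) against TTC-based
        ground truth (True = thermotolerant coliforms detected)."""
        fn = fp = pos = neg = 0
        for value, truth in zip(tlf_ppb, contaminated):
            predicted = value >= threshold
            if truth:
                pos += 1
                if not predicted:
                    fn += 1
            else:
                neg += 1
                if predicted:
                    fp += 1
        return (fn / pos if pos else 0.0,
                fp / neg if neg else 0.0)

    # Invented readings for four sources; the last three contained TTCs:
    fnr, fpr = threshold_error_rates([0.5, 2.0, 1.0, 3.5],
                                     [False, True, True, True])
    print(round(fnr, 2), fpr)  # → 0.33 0.0
    ```

    Raising the threshold trades false positives for false negatives, which is why the abstract pairs the 1.3 ppb screening cut-off with a higher 6.9 ppb cut-off for flagging heavy contamination.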

  16. Near-threshold fatigue behavior of copper alloys in air and aqueous environments: A high cyclic frequency study

    NASA Astrophysics Data System (ADS)

    Ahmed, Tawfik M.

    The near-threshold fatigue crack propagation behavior of alpha-phase copper alloys in desiccated air and several aqueous environments has been investigated. Three commercial alloys of nominal composition Cu-30Ni (Cu-Ni), Cu-30Zn (Cu-Zn) and 90Cu-7Al-3Fe (Cu-Al) were tested. Fatigue tests were conducted using standard prefatigued single edge notched (SEN) specimens loaded in tension at a high frequency of ~100 Hz. Different R-ratios were employed, mostly R = 0.5. Low loading levels were used, corresponding to the threshold and near-threshold regions where ΔK_th ≤ ΔK ≤ 11 MPa√m. Fatigue tests in the aqueous solutions showed that the effect of different corrosive environments during high-frequency testing (~100 Hz) was not as pronounced as expected relative to air. Further testing revealed that environmental effects were present and that fatigue crack growth rates were influenced by fluid-induced closure effects, which are generally reported in the fatigue literature to be operative only in viscous liquids, not in aqueous solutions. It was concluded that high-frequency testing in aqueous environments consistently decreased crack growth rates in a manner similar to crack retardation effects in viscous fluids. Several theoretical models reported in the literature have underestimated, if not entirely failed to predict, the fluid-induced closure in aqueous solutions. Results from the desiccated air tests confirmed that, under closure-free conditions (high R-ratios), both threshold values and stage II fatigue crack growth rates can be related to Young's modulus, in agreement with results from the literature. The role of different mechanical and environmental variables on fatigue behavior becomes most visible in the low R-ratio regime, where they contribute to various closure processes.

  17. STS-131 Discovery Launch

    NASA Image and Video Library

    2010-04-04

    Contrails are seen as workers leave the Launch Control Center after the launch of the space shuttle Discovery and the start of the STS-131 mission at NASA Kennedy Space Center in Cape Canaveral, Fla. on Monday April 5, 2010. Discovery is carrying a multi-purpose logistics module filled with science racks for the laboratories aboard the station. The mission has three planned spacewalks, with work to include replacing an ammonia tank assembly, retrieving a Japanese experiment from the station’s exterior, and switching out a rate gyro assembly on the station’s truss structure. Photo Credit: (NASA/Bill Ingalls)

  18. STS-131 Discovery Launch

    NASA Image and Video Library

    2010-04-04

    NASA Administrator Charles Bolden looks out the window of Firing Room Four in the Launch Control Center during the launch of the space shuttle Discovery and the start of the STS-131 mission at NASA Kennedy Space Center in Cape Canaveral, Fla. on Monday April 5, 2010. Discovery is carrying a multi-purpose logistics module filled with science racks for the laboratories aboard the station. The mission has three planned spacewalks, with work to include replacing an ammonia tank assembly, retrieving a Japanese experiment from the station’s exterior, and switching out a rate gyro assembly on the station’s truss structure. Photo Credit: (NASA/Bill Ingalls)

  19. 42 CFR 422.260 - Appeals of quality bonus payment determinations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... overall star rating. (ii) The reconsideration official's decision is final and binding unless a request... the star ratings (including the calculation of the overall star ratings); cut-off points for determining measure thresholds; the set of measures included in the star rating system; and the methodology...

  20. 42 CFR 422.260 - Appeals of quality bonus payment determinations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... overall star rating. (ii) The reconsideration official's decision is final and binding unless a request... the star ratings (including the calculation of the overall star ratings); cut-off points for determining measure thresholds; the set of measures included in the star rating system; and the methodology...
