Sample records for sharp utilization thresholds

  1. Single-event effects experienced by astronauts and microelectronic circuits flown in space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNulty, P.J.

    Models developed for explaining the light flashes experienced by astronauts on Apollo and Skylab missions were used with slight modification to explain upsets observed in microelectronic circuits. Both phenomena can be explained by the simple assumption that an event occurs whenever a threshold number of ionizations or isomerizations are generated within a sensitive volume. Evidence is consistent with the threshold being sharp in both cases, but fluctuations in the physical stimuli lead to a gradual rather than sharp increase in cross section with LET. Successful use of the model requires knowledge of the dimensions of the sensitive volume and the value of the threshold. Techniques have been developed to determine these SEU parameters in modern circuits.
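
    As a rough numerical illustration of the sensitive-volume model sketched in this abstract, the snippet below treats the number of ionizations as Poisson-distributed with a mean proportional to LET, so a sharp critical count still produces a gradual rise of upset probability with LET. The critical count and the ionizations-per-LET constant are invented for illustration, not values from the paper.

```python
# Minimal sketch of a sensitive-volume upset model (assumptions: Poisson
# ionization statistics, mean ionization count proportional to LET; the
# constants below are illustrative, not values from the paper).
import numpy as np
from scipy.stats import poisson

N_CRIT = 100          # hypothetical threshold number of ionizations
K_ION_PER_LET = 2.0   # hypothetical mean ionizations per unit LET

def upset_probability(let):
    """P(at least N_CRIT ionizations) at a given LET, with Poisson fluctuations."""
    mean_ionizations = K_ION_PER_LET * np.asarray(let, dtype=float)
    # sf(k - 1) = P(N >= k) for a Poisson-distributed count N
    return poisson.sf(N_CRIT - 1, mean_ionizations)

if __name__ == "__main__":
    for let in (30, 45, 50, 55, 70):
        print(f"LET={let:3d}  P(upset)={upset_probability(let):.3f}")
```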

  2. Forecasting Solar Flares Using Magnetogram-based Predictors and Machine Learning

    NASA Astrophysics Data System (ADS)

    Florios, Kostas; Kontogiannis, Ioannis; Park, Sung-Hong; Guerra, Jordan A.; Benvenuto, Federico; Bloomfield, D. Shaun; Georgoulis, Manolis K.

    2018-02-01

    We propose a forecasting approach for solar flares based on data from Solar Cycle 24, taken by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) mission. In particular, we use the Space-weather HMI Active Region Patches (SHARP) product that facilitates cut-out magnetograms of solar active regions (AR) in the Sun in near-real-time (NRT), taken over a five-year interval (2012 - 2016). Our approach utilizes a set of thirteen predictors, which are not included in the SHARP metadata, extracted from line-of-sight and vector photospheric magnetograms. We exploit several machine learning (ML) and conventional statistics techniques to predict flares of peak magnitude > M1 and > C1 within a 24 h forecast window. The ML methods used are multi-layer perceptrons (MLP), support vector machines (SVM), and random forests (RF). We conclude that random forests could be the prediction technique of choice for our sample, with the second-best method being multi-layer perceptrons, subject to an entropy objective function. A Monte Carlo simulation showed that the best-performing method gives accuracy ACC=0.93(0.00), true skill statistic TSS=0.74(0.02), and Heidke skill score HSS=0.49(0.01) for > M1 flare prediction with probability threshold 15% and ACC=0.84(0.00), TSS=0.60(0.01), and HSS=0.59(0.01) for > C1 flare prediction with probability threshold 35%.
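
    The quoted ACC, TSS and HSS values are standard categorical skill scores computed from a 2x2 contingency table after thresholding the forecast probability. The sketch below shows those textbook definitions on made-up forecasts; it is not the authors' pipeline.

```python
# Categorical skill scores (ACC, TSS, HSS) from a 2x2 contingency table
# obtained by thresholding forecast probabilities. The counts in the demo
# are made up for illustration.
def skill_scores(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    tss = tp / (tp + fn) - fp / (fp + tn)              # true skill statistic
    hss = 2.0 * (tp * tn - fp * fn) / (
        (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)  # Heidke skill score
    )
    return acc, tss, hss

def apply_threshold(probabilities, labels, threshold):
    """Binarize forecast probabilities and tally the contingency table."""
    tp = fp = fn = tn = 0
    for p, y in zip(probabilities, labels):
        forecast = p >= threshold
        if forecast and y:
            tp += 1
        elif forecast and not y:
            fp += 1
        elif not forecast and y:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

if __name__ == "__main__":
    probs = [0.05, 0.20, 0.40, 0.80, 0.10, 0.60]
    labels = [False, False, True, True, False, True]
    print(skill_scores(*apply_threshold(probs, labels, threshold=0.15)))
```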

  3. Ultra-low threshold polariton condensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steger, Mark; Fluegel, Brian; Alberi, Kirstin

    Here, we demonstrate the condensation of microcavity polaritons with a very sharp threshold occurring at a two orders of magnitude pump intensity lower than previous demonstrations of condensation. The long cavity lifetime and trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or 'polariton lasing' has long been proposed as a promising source of coherent light at a lower threshold than traditional lasing, and these results indicate some considerations for optimizing designs for lower thresholds.

  4. Ultra-low threshold polariton condensation

    DOE PAGES

    Steger, Mark; Fluegel, Brian; Alberi, Kirstin; ...

    2017-03-13

    Here, we demonstrate the condensation of microcavity polaritons with a very sharp threshold occurring at a two orders of magnitude pump intensity lower than previous demonstrations of condensation. The long cavity lifetime and trapping and pumping geometries are crucial to the realization of this low threshold. Polariton condensation, or 'polariton lasing' has long been proposed as a promising source of coherent light at a lower threshold than traditional lasing, and these results indicate some considerations for optimizing designs for lower thresholds.

  5. Micro-scale patterning of indium tin oxide film by spatially modulated pulsed Nd:YAG laser beam

    NASA Astrophysics Data System (ADS)

    Lee, Jinsoo; Kim, Seongsu; Lee, Myeongkyu

    2012-09-01

    Here we demonstrate that indium tin oxide (ITO) films deposited on glass can be directly patterned by a spatially modulated pulsed Nd:YAG laser beam (wavelength = 1064 nm, pulse width = 6 ns) incident onto the film. This method utilizes a pulsed laser-induced thermo-elastic force exerted on the film, which detaches it from the substrate. Sharp-edged, clean patterns with feature sizes as small as 4 μm could be obtained. The threshold pulse energy density for patterning was estimated to be ˜0.8 J/cm2 for a 150 nm-thick ITO film, making it possible to pattern over one square centimeter with a single pulse of 850 mJ. Besides being free of photoresist and chemical etching steps, the presented method can also provide much higher throughput than the traditional photoablation process utilizing a tightly focused beam.

  6. The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex

    PubMed Central

    Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.; Roland, Per E.

    2016-01-01

    Most neurons have a threshold separating the silent non-spiking state and the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus easily be distinguished. In addition, the trajectories of spontaneous ongoing states were slow, frequently changing direction. In single trials, sharp as well as smooth and slow transients transform the trajectories to become outward directed, fast, and threshold-crossing, i.e. evoked. Although the speeds of the evolution of the evoked states differ, the same domain of the state space is explored, indicating uniformity of the evoked states. All evoked states return to the spontaneous ongoing spiking state as in a typical mono-stable dynamical system. In single trials, neither the original spiking rates nor the temporal evolution in state space could distinguish simple visual scenes. PMID:27582693

  7. Ultra-high spatial resolution multi-energy CT using photon counting detector technology

    NASA Astrophysics Data System (ADS)

    Leng, S.; Gutjahr, R.; Ferrero, A.; Kappler, S.; Henning, A.; Halaweish, A.; Zhou, W.; Montoya, J.; McCollough, C.

    2017-03-01

    Two ultra-high-resolution imaging modes, referred to as UHR and sharp, each with two energy thresholds, were implemented on a research, whole-body photon-counting-detector (PCD) CT scanner. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial-resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the achieved higher spatial resolution, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used. This was due to excessive noise in the higher resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode with regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose or application of noise reduction techniques is needed.

  8. Sharp threshold of blow-up and scattering for the fractional Hartree equation

    NASA Astrophysics Data System (ADS)

    Guo, Qing; Zhu, Shihui

    2018-02-01

    We consider the fractional Hartree equation in the $L^2$-supercritical case, and find a sharp threshold for the scattering versus blow-up dichotomy for radial data: if $M[u_0]^{(s-s_c)/s_c} E[u_0] < M[Q]^{(s-s_c)/s_c} E[Q]$ and $M[u_0]^{(s-s_c)/s_c} \|u_0\|_{\dot H^s}^2 < M[Q]^{(s-s_c)/s_c} \|Q\|_{\dot H^s}^2$, then the solution $u(t)$ is globally well-posed and scatters; if $M[u_0]^{(s-s_c)/s_c} E[u_0] < M[Q]^{(s-s_c)/s_c} E[Q]$ and $M[u_0]^{(s-s_c)/s_c} \|u_0\|_{\dot H^s}^2 > M[Q]^{(s-s_c)/s_c} \|Q\|_{\dot H^s}^2$, then the solution $u(t)$ blows up in finite time. This condition is sharp in the sense that the solitary wave solution $e^{it} Q(x)$, which satisfies equality in the above conditions, is global but does not scatter. Here, $Q$ is the ground-state solution of the fractional Hartree equation.

  9. Real time algorithms for sharp wave ripple detection.

    PubMed

    Sethi, Ankit; Kemere, Caleb

    2014-01-01

    Neural activity during sharp wave ripples (SWR), short bursts of co-ordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study is an investigation into testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. Proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
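
    For context, the power-thresholding baseline that such detectors are compared against can be sketched as: causal band-pass filtering in the ripple band, a smoothed power estimate, and a mean-plus-k-sigma threshold. The band edges, smoothing window and k below are illustrative choices, not the study's parameters.

```python
# Generic power-threshold ripple detector (a stand-in for the baseline the
# abstract mentions, not the authors' algorithm). Band, smoothing window and
# k are illustrative.
import numpy as np
from scipy.signal import butter, lfilter

def detect_ripples(lfp, fs, band=(150.0, 250.0), smooth_ms=8.0, k=3.0):
    b, a = butter(3, band, btype="bandpass", fs=fs)
    ripple_band = lfilter(b, a, lfp)                 # causal, real-time friendly
    power = ripple_band ** 2
    win = max(1, int(smooth_ms * 1e-3 * fs))
    smoothed = np.convolve(power, np.ones(win) / win, mode="same")
    threshold = smoothed.mean() + k * smoothed.std()
    return np.flatnonzero(smoothed > threshold)      # sample indices above threshold

if __name__ == "__main__":
    fs = 1500.0
    t = np.arange(0, 2.0, 1.0 / fs)
    lfp = np.random.randn(t.size) * 0.1
    burst = (t > 1.0) & (t < 1.05)                   # synthetic 200 Hz ripple burst
    lfp[burst] += np.sin(2 * np.pi * 200 * t[burst])
    print("samples above threshold:", detect_ripples(lfp, fs).size)
```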

  10. Molecular clouds without detectable CO

    NASA Technical Reports Server (NTRS)

    Blitz, Leo; Bazell, David; Desert, F. Xavier

    1990-01-01

    The clouds identified by Desert, Bazell, and Boulanger (DBB clouds) in their search for high-latitude molecular clouds were observed in the CO (J = 1-0) line, but only 13 percent of the sample was detected. The remaining 87 percent are diffuse molecular clouds with CO abundances of about 10 to the -6th, a typical value for diffuse clouds. This hypothesis is shown to be consistent with Copernicus data. The DBB clouds are shown to be an essentially complete catalog of diffuse molecular clouds in the solar vicinity. The total molecular surface density in the vicinity of the sun is then only about 20 percent greater than the 1.3 solar masses/sq pc determined by Dame et al. (1987). Analysis of the CO detections indicates that there is a sharp threshold in extinction of 0.25 mag before CO is detectable, derived from the IRAS I(100 micron) threshold of 4 MJy/sr. This threshold is presumably where the CO abundance exhibits a sharp increase.

  11. Molecular clouds without detectable CO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blitz, L.; Bazell, D.; Desert, F.X.

    1990-03-01

    The clouds identified by Desert, Bazell, and Boulanger (DBB clouds) in their search for high-latitude molecular clouds were observed in the CO (J = 1-0) line, but only 13 percent of the sample was detected. The remaining 87 percent are diffuse molecular clouds with CO abundances of about 10 to the -6th, a typical value for diffuse clouds. This hypothesis is shown to be consistent with Copernicus data. The DBB clouds are shown to be an essentially complete catalog of diffuse molecular clouds in the solar vicinity. The total molecular surface density in the vicinity of the sun is then only about 20 percent greater than the 1.3 solar masses/sq pc determined by Dame et al. (1987). Analysis of the CO detections indicates that there is a sharp threshold in extinction of 0.25 mag before CO is detectable, derived from the IRAS I(100 micron) threshold of 4 MJy/sr. This threshold is presumably where the CO abundance exhibits a sharp increase.

  12. Decoding synchronized oscillations within the brain: phase-delayed inhibition provides a robust mechanism for creating a sharp synchrony filter.

    PubMed

    Patel, Mainak; Joshi, Badal

    2013-10-07

    The widespread presence of synchronized neuronal oscillations within the brain suggests that a mechanism must exist that is capable of decoding such activity. Two realistic designs for such a decoder include: (1) a read-out neuron with a high spike threshold, or (2) a phase-delayed inhibition network motif. Despite requiring a more elaborate network architecture, phase-delayed inhibition has been observed in multiple systems, suggesting that it may provide inherent advantages over simply imposing a high spike threshold. In this work, we use a computational and mathematical approach to investigate the efficacy of the phase-delayed inhibition motif in detecting synchronized oscillations. We show that phase-delayed inhibition is capable of creating a synchrony detector with sharp synchrony filtering properties that depend critically on the time course of inputs. Additionally, we show that phase-delayed inhibition creates a synchrony filter that is far more robust than that created by a high spike threshold. Copyright © 2013 Elsevier Ltd. All rights reserved.
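
    A toy, rate-based sketch of the phase-delayed inhibition motif is given below: a readout unit receives an excitatory input and a delayed, scaled inhibitory copy of the same input, so tightly synchronized bursts escape the inhibition while temporally dispersed inputs are cancelled. All parameters are illustrative and the model is far simpler than the networks studied in the paper.

```python
# Toy rate-based illustration of phase-delayed inhibition as a synchrony filter.
# Parameters and the rectified-linear readout are assumptions for illustration.
import numpy as np

def readout_response(input_rate, delay_steps=5, inh_gain=1.0, threshold=0.2):
    # delayed, scaled inhibitory copy of the same input
    delayed = np.concatenate([np.zeros(delay_steps), input_rate[:-delay_steps]])
    drive = input_rate - inh_gain * delayed - threshold
    return np.maximum(drive, 0.0)            # rectified output of the readout

def gaussian_burst(t, width):
    return np.exp(-0.5 * ((t - t.mean()) / width) ** 2)

if __name__ == "__main__":
    t = np.arange(0, 200.0)                   # time in arbitrary 1-ms steps
    narrow = gaussian_burst(t, width=2.0)     # synchronized input
    broad = gaussian_burst(t, width=20.0)     # dispersed input, same peak rate
    print("synchronous drive :", readout_response(narrow).sum())
    print("asynchronous drive:", readout_response(broad).sum())
```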

  13. Low threshold lasing of bubble-containing glass microspheres by non-whispering gallery mode excitation over a wide wavelength range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumagai, Tsutaru, E-mail: kumagai.t.af@m.titech.ac.jp; Kishi, Tetsuo; Yano, Tetsuji

    2015-03-21

    Bubble-containing Nd3+-doped tellurite glass microspheres were fabricated by a localized laser heating technique to investigate their optical properties for use as microresonators. Fluorescence and excitation spectra were measured by pumping with a tunable CW Ti:Sapphire laser. The excitation spectra manifested several sharp peaks due to the conventional whispering gallery mode (WGM) when the pump laser was directed at the edge of the microsphere. However, when the excitation light was directed at the bubble position inside the microsphere, “non-WGM excitation” was induced, giving rise to numerous peaks over a broad wavelength range in the excitation spectra. Thus, efficient excitation was achieved over a wide wavelength range. The lasing threshold for excitation at the bubble position was much lower than that for excitation at the edge of the microsphere. The lowest laser threshold was 34 μW for a 4 μm sphere containing a 0.5 μm bubble. The efficiency of excitation at the bubble position with broadband light was calculated to be 5 times higher than that at the edge of the microsphere. The bubble-containing microsphere enables efficient utilization of broadband light excitation from light-emitting diodes and solar light.

  14. Composition-dependent nanoelectronics of amido-phenazines: non-volatile RRAM and WORM memory devices.

    PubMed

    Maiti, Dilip K; Debnath, Sudipto; Nawaz, Sk Masum; Dey, Bapi; Dinda, Enakhi; Roy, Dipanwita; Ray, Sudipta; Mallik, Abhijit; Hussain, Syed A

    2017-10-17

    A metal-free three-component cyclization reaction with amidation is devised for the direct synthesis of a DFT-designed amido-phenazine derivative bearing noncovalent gluing interactions to fabricate organic nanomaterials. Composition-dependent organic nanoelectronics for nonvolatile memory devices are discovered using mixed phenazine-stearic acid (SA) nanomaterials. We simultaneously discovered two different types of nonmagnetic, non-moisture-sensitive switching resistance properties in devices fabricated from the mixed organic nanomaterials: (a) sample-1 (8:SA = 1:3) is initially off, turning on at a threshold, but it does not turn off again with the application of any voltage, and (b) sample-2 (8:SA = 3:1) is initially off, turning on at a sharp threshold and off again by reversing the polarity. No negative differential resistance is observed in either type. These samples have different device implementations: sample-1 is attractive for write-once-read-many-times memory devices, such as non-editable databases, archival memory, electronic voting and radio frequency identification, while sample-2 is useful for resistive-switching random access memory applications.

  15. Threshold law for positron-atom impact ionisation

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1982-01-01

    The threshold law for ionisation of atoms by positron impact is adduced in analogy with our approach to electron-atom ionization. It is concluded that the Coulomb-dipole region of the potential gives the essential part of the interaction in both cases and leads to the same kind of result: a modulated linear law. An additional process which enters positron ionization is positronium formation in the continuum, but that will not dominate the threshold yield. The result is in sharp contrast to the positron threshold law recently derived by Klar on the basis of a Wannier-type analysis.

  16. Biodiversity response to natural gradients of multiple stressors on continental margins

    PubMed Central

    Sperling, Erik A.; Frieder, Christina A.; Levin, Lisa A.

    2016-01-01

    Sharp increases in atmospheric CO2 are resulting in ocean warming, acidification and deoxygenation that threaten marine organisms on continental margins and their ecological functions and resulting ecosystem services. The relative influence of these stressors on biodiversity remains unclear, as well as the threshold levels for change and when secondary stressors become important. One strategy to interpret adaptation potential and predict future faunal change is to examine ecological shifts along natural gradients in the modern ocean. Here, we assess the explanatory power of temperature, oxygen and the carbonate system for macrofaunal diversity and evenness along continental upwelling margins using variance partitioning techniques. Oxygen levels have the strongest explanatory capacity for variation in species diversity. Sharp drops in diversity are seen as O2 levels decline through the 0.5–0.15 ml l⁻¹ (approx. 22–6 µM; approx. 21–5 matm) range, and as temperature increases through the 7–10°C range. pCO2 is the best explanatory variable in the Arabian Sea, but explains little of the variance in diversity in the eastern Pacific Ocean. By contrast, very little variation in evenness is explained by these three global change variables. The identification of sharp thresholds in ecological response is used here to predict areas of the seafloor where diversity is most at risk to future marine global change, noting that the existence of clear regional differences cautions against applying global thresholds. PMID:27122565

  17. On the performance of digital phase locked loops in the threshold region

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    Extended Kalman filter algorithms are used to obtain a digital phase lock loop structure for demodulation of angle modulated signals. It is shown that the error variance equations obtained directly from this structure enable one to predict threshold if one retains higher frequency terms. This is in sharp contrast to the similar analysis of the analog phase lock loop, where the higher frequency terms are filtered out because of the low pass filter in the loop. Results are compared to actual simulation results and threshold region results obtained previously.

  18. Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography.

    PubMed

    Zaki, Farzana; Wang, Yahui; Su, Hao; Yuan, Xin; Liu, Xuan

    2017-05-01

    Optical coherence tomography (OCT) is based on coherence detection of interferometric signals and hence inevitably suffers from speckle noise. To remove speckle noise in OCT images, wavelet domain thresholding has demonstrated significant advantages in suppressing noise magnitude while preserving image sharpness. However, speckle noise in OCT images has different characteristics in different spatial scales, which has not been considered in previous applications of wavelet domain thresholding. In this study, we demonstrate a noise adaptive wavelet thresholding (NAWT) algorithm that exploits the difference of noise characteristics in different wavelet sub-bands. The algorithm is simple, fast, effective and is closely related to the physical origin of speckle noise in OCT image. Our results demonstrate that NAWT outperforms conventional wavelet thresholding.
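
    A generic version of per-subband wavelet soft-thresholding is sketched below; the per-band rule used here (MAD noise estimate plus universal threshold) is a common stand-in and not the NAWT rule from the paper. It assumes the PyWavelets package (pywt) is available.

```python
# Per-subband wavelet soft-thresholding sketch (generic rule, not NAWT).
import numpy as np
import pywt

def denoise_per_subband(image, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0]]                                  # keep approximation band
    for detail_bands in coeffs[1:]:
        thresholded = []
        for band in detail_bands:                      # horizontal, vertical, diagonal
            sigma = np.median(np.abs(band)) / 0.6745   # per-band noise estimate (MAD)
            thr = sigma * np.sqrt(2.0 * np.log(band.size))
            thresholded.append(pywt.threshold(band, thr, mode="soft"))
        out.append(tuple(thresholded))
    recon = pywt.waverec2(out, wavelet)
    return recon[: image.shape[0], : image.shape[1]]   # guard against size overshoot

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((128, 128))
    clean[40:90, 40:90] = 1.0
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)
    print("residual std before/after:",
          np.std(noisy - clean), np.std(denoise_per_subband(noisy) - clean))
```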

  19. Wavelet tree structure based speckle noise removal for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Liu, Xuan; Liu, Yang

    2018-02-01

    We report a new speckle noise removal algorithm in optical coherence tomography (OCT). Though wavelet domain thresholding algorithms have demonstrated superior advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm via exploiting the tree structure in wavelet coefficients to remove the speckle noise in OCT images. The threshold for each wavelet band is adaptively selected following a special rule to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.

  20. Summer High School Apprenticeship Research Program (SHARP)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The summer of 1997 will not only be noted by NASA for the mission to Mars by the Pathfinder but also for the 179 brilliant apprentices that participated in the SHARP Program. Apprentice participation increased 17% over last year's total of 153 participants. As indicated by the End-of-the-Program Evaluations, 96% of the programs' participants rated the summer experience from very good to excellent. The SHARP Management Team began the year by meeting in Cocoa Beach, Florida for the annual SHARP Planning Conference. Participants strengthened their Education Division Computer Aided Tracking System (EDCATS) skills, toured the world-renowned Kennedy Space Center, and took a journey into space during the Alien Encounter Exercise. The participants returned to their Centers with the same goals and objectives in mind. The 1997 SHARP Program goals were: (1) Utilize NASA's mission, unique facilities and specialized workforce to provide exposure, education, and enrichment experiences to expand participants' career horizons and inspire excellence in formal education and lifelong learning. (2) Develop and implement innovative education reform initiatives which support NASA's Education Strategic Plan and national education goals. (3) Utilize established statistical indicators to measure the effectiveness of SHARP's program goals. (4) Explore new recruiting methods which target the student population for which SHARP was specifically designed. (5) Increase the number of participants in the program. All of the SHARP Coordinators reported that the goals and objectives for the overall program as well as their individual program goals were achieved. Some of the goals and objectives for the Centers were: (1) To increase the students' awareness of science, mathematics, engineering, and computer technology; (2) To provide students with the opportunity to broaden their career objectives; and (3) To expose students to a variety of enrichment activities. Most of the Center goals and objectives were consistent with the overall program goals. Modern Technology Systems, Inc. (MTSI), was able to meet the SHARP Apprentices, Coordinators and Mentors during their site visits to Stennis Space Center, Ames Research Center and Dryden Flight Research Center. All three Centers had very efficient programs and adhered to SHARP's general guidelines and procedures. MTSI was able to meet the apprentices from the other Centers via satellite in July during the SHARP Video-Teleconference (ViTS). The ViTS offered the apprentices and the NASA and SHARP Coordinators the opportunity to introduce themselves. The apprentices from each Center presented topical "Cutting Edge Projects". Some of the accomplishments for the 1997 SHARP Program year included: MTSI hiring apprentices from four of the nine NASA Centers, the full utilization of the EDCATS by apprentices and NASA/SHARP Coordinators, the distribution of the SHARP Apprentice College and Scholarship Directory, a reunion with former apprentices from Langley Research Center and the development of a SHARP Recruitment Poster. MTSI developed another exciting newsletter containing graphics and articles submitted by the apprentices and the SHARP Management Team.

  1. Ultrasensitivity and sharp threshold theorems for multisite systems

    NASA Astrophysics Data System (ADS)

    Dougoud, M.; Mazza, C.; Vinckenbosch, L.

    2017-02-01

    This work studies the ultrasensitivity of multisite binding processes where ligand molecules can bind to several binding sites. It considers, in particular, recent models involving complex chemical reactions in allosteric phosphorylation processes and for transcription factors and nucleosomes competing for binding on DNA. New statistics-based formulas for the Hill coefficient and the effective Hill coefficient are provided and necessary conditions for a system to be ultrasensitive are exhibited. It is first shown that the ultrasensitivity of binding processes can be approached using sharp-threshold theorems which have been developed in applied probability theory and statistical mechanics for studying sharp threshold phenomena in reliability theory, random graph theory and percolation theory. Special classes of binding process are then introduced and are described as density-dependent birth-and-death processes. New precise large deviation results for the steady state distribution of the process are obtained, which makes it possible to show that switch-like ultrasensitive responses are strongly related to the multi-modality of the steady state distribution. Ultrasensitivity occurs if and only if the entropy of the dynamical system has more than one global minimum for some critical ligand concentration. In this case, the Hill coefficient is proportional to the number of binding sites, and the system is highly ultrasensitive. The classical effective Hill coefficient I is extended to a new cooperativity index I_q, for which we recommend the computation of a broad range of values of q instead of just the standard one I = I_{0.9} corresponding to the 10%-90% variation in the dose-response. It is shown that this single choice can sometimes mislead the conclusion by not detecting ultrasensitivity. This new approach allows a better understanding of multisite ultrasensitive systems and provides new tools for the design of such systems.
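
    The idea of scanning a range of q values can be illustrated with a small script. The generalization used below, I_q = ln((q/(1-q))^2) / ln(x_q / x_{1-q}), is chosen only because it reduces to the classical I_{0.9} = ln(81)/ln(x_{0.9}/x_{0.1}); it is not claimed to be the paper's exact definition.

```python
# Effective Hill coefficient and a generalized cooperativity index I_q
# computed from a normalized, monotone dose-response curve (illustrative
# generalization, not the paper's exact definition).
import numpy as np

def dose_at_response(x, y, q):
    """Ligand level x_q at which the normalized response y first reaches q."""
    return np.interp(q, y, x)          # assumes y is monotonically increasing

def cooperativity_index(x, y, q=0.9):
    xq, x1q = dose_at_response(x, y, q), dose_at_response(x, y, 1.0 - q)
    return np.log((q / (1.0 - q)) ** 2) / np.log(xq / x1q)

if __name__ == "__main__":
    x = np.logspace(-2, 2, 2000)
    for n in (1, 4):                   # Hill functions with known coefficient n
        y = x ** n / (1.0 + x ** n)
        qs = [0.6, 0.75, 0.9]
        print(f"n={n}:", [round(cooperativity_index(x, y, q), 2) for q in qs])
```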

  2. Survival of translocated sharp-tailed grouse: Temporal threshold and age effects

    USGS Publications Warehouse

    Mathews, Steven; Coates, Peter S.; Delehanty, David J.

    2016-01-01

    Context: The Columbian sharp-tailed grouse (Tympanuchus phasianellus columbianus) is a subspecies of conservation concern in the western United States, currently occupying ≤10% of its historic range. Land and management agencies are employing translocation techniques to restore Columbian sharp-tailed grouse (CSTG) populations. However, establishing self-sustaining populations by translocating grouse often is unsuccessful, owing, in part, to low survivorship of translocated grouse following release. Aims: We measured and modelled patterns of CSTG mortality for 150 days following translocation into historic range, to better understand patterns and causes of success or failure in conservation efforts to re-establish grouse populations. Methods: We conducted two independent multi-year translocations and evaluated individual and temporal factors associated with CSTG survival up to 150 days following their release. Both translocations were reintroduction attempts in Nevada, USA, to establish viable populations of CSTG into their historic range. Key results: We observed a clear temporal threshold in survival probability, with CSTG mortality substantially higher during the first 50 days following release than during the subsequent 100 days. Additionally, translocated yearling grouse exhibited higher overall survival (0.669 ± 0.062) than did adults (0.420 ± 0.052) across the 150-day period and higher survival than adults both before and after the 50-day temporal threshold. Conclusions: Translocated CSTG are especially vulnerable to mortality for 50 days following release, whereas translocated yearling grouse are more resistant to mortality than are adult grouse. On the basis of the likelihood of survival, yearling CSTG are better candidates for population restoration through translocation than are adult grouse. Implications: Management actions that ameliorate mortality factors for 50 days following translocation and translocations that employ yearling grouse will increase the likelihood of population establishment.

  3. The effect of pumping noise on the characteristics of a single-stage parametric amplifier

    NASA Astrophysics Data System (ADS)

    Medvedev, S. Iu.; Muzychuk, O. V.

    1983-10-01

    An analysis is made of the operation of a single-stage parametric amplifier based on a varactor with a sharp transition. Analytical expressions are obtained for the statistical moments of the output signal, the signal-noise ratio, and other characteristics in the case when the output signal and the pump are a mixture of harmonic oscillation and Gaussian noise. It is shown that, when a noise component is present in the pump, an increase of its harmonic component to values close to the threshold leads to a sharp decrease in the signal-noise ratio at the amplifier output.

  4. Correlation of Retinal Nerve Fiber Layer Thickness and Visual Fields in Glaucoma: A broken stick model

    PubMed Central

    Alasil, Tarek; Wang, Kaidi; Yu, Fei; Field, Matthew G.; Lee, Hang; Baniasadi, Neda; de Boer, Johannes F.; Coleman, Anne L.; Chen, Teresa C.

    2015-01-01

    Purpose: To determine the retinal nerve fiber layer (RNFL) thickness at which visual field (VF) damage becomes detectable and associated with structural loss. Design: Retrospective cross-sectional study. Methods: Eighty-seven healthy and 108 glaucoma subjects (one eye per subject) were recruited from an academic institution. All patients had VF examinations (Swedish Interactive Threshold Algorithm 24-2 test of the Humphrey visual field analyzer 750i; Carl Zeiss Meditec, Dublin, CA) and spectral domain optical coherence tomography RNFL scans (Spectralis, Heidelberg Engineering, Heidelberg, Germany). Comparison of RNFL thickness values with VF threshold values showed a plateau of VF threshold values at high RNFL thickness values and then a sharp decrease at lower RNFL thickness values. A broken stick statistical analysis was utilized to estimate the tipping point at which RNFL thickness values are associated with VF defects. The slope for the association between structure and function was computed for data above and below the tipping point. Results: The mean RNFL thickness value that was associated with initial VF loss was 89 μm. The superior RNFL thickness value that was associated with initial corresponding inferior VF loss was 100 μm. The inferior RNFL thickness value that was associated with initial corresponding superior VF loss was 73 μm. The differences between all the slopes above and below the aforementioned tipping points were statistically significant (p<0.001). Conclusions: In open angle glaucoma, substantial RNFL thinning or structural loss appears to be necessary before functional visual field defects become detectable. PMID:24487047
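
    The broken-stick analysis amounts to fitting a two-segment piecewise-linear model and reading off the breakpoint. The sketch below fits such a model to synthetic data standing in for RNFL thickness and VF sensitivity; it is an illustration of the technique, not the study's code.

```python
# Broken-stick (two-segment piecewise-linear) fit with a free breakpoint.
# Synthetic data stand in for RNFL thickness (x) and VF sensitivity (y).
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(x, breakpoint, intercept, slope_below, slope_above):
    """Two line segments joined continuously at `breakpoint`."""
    return np.where(
        x < breakpoint,
        intercept + slope_below * (x - breakpoint),
        intercept + slope_above * (x - breakpoint),
    )

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rnfl = rng.uniform(50, 120, 300)                        # um, synthetic
    vf = broken_stick(rnfl, 89.0, 30.0, 0.6, 0.02) + rng.normal(0, 1.0, rnfl.size)
    p0 = [np.median(rnfl), vf.mean(), 1.0, 0.0]             # crude initial guess
    params, _ = curve_fit(broken_stick, rnfl, vf, p0=p0)
    print("estimated tipping point: %.1f um" % params[0])
```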

  5. Unconscious Inhibition and Facilitation at the Objective Detection Threshold: Replicable and Qualitatively Different Unconscious Perceptual Effects

    ERIC Educational Resources Information Center

    Snodgrass, Michael; Shevrin, Howard

    2006-01-01

    Although the veridicality of unconscious perception is increasingly accepted, core issues remain unresolved [Jack, A., & Shallice, T. (2001). Introspective physicalism as an approach to the science of consciousness. "Cognition, 79," 161-196], and sharp disagreement persists regarding fundamental methodological and theoretical issues. The most…

  6. Audibility threshold spectrum for prominent discrete tone analysis

    NASA Astrophysics Data System (ADS)

    Kimizuka, Ikuo

    2005-09-01

    To evaluate the annoyance of tonal components in noise emissions, ANSI S1.13 (for general purposes) and/or ISO 7779/ECMA-74 (dedicatedfor IT equipment) state two similar metrics: tone-to-noise ratio (TNR) and prominence ratio(PR). By these or either of these two parameters, noise of question with a sharp spectral peak is analyzed by high resolution FFF and classified as prominent when it exceeds some criterion curve. According to present procedures, however this designation is dependent on only the spectral shape. To resolve this problem, the author proposes a threshold spectrum of human ear audibility. The spectrum is based on the reference threshold of hearing which is defined in ISO 389-7 and/or ISO 226. With this spectrum, one can objectively define whether the noise peak of question is audible or not, by simple comparison of the peak amplitude of noise emission and the corresponding value of threshold. Applying the threshold, one can avoid overkilling or unnecessary action for noise. Such a peak with absolutely low amplitude is not audible.

  7. Polarization asymmetry in two-electron photodetachment - A cogent test of the ionization threshold law

    NASA Technical Reports Server (NTRS)

    Temkin, A.; Bhatia, A. K.

    1988-01-01

    A very sensitive test of the electron-atom ionization threshold law is suggested: for spin-aligned heavy negative ions it consists of measuring the polarization asymmetry A(PA) coming from double detachment by left- versus right-circularly polarized light. The respective yields are worked out for the Te(-) (5p)5 2P(3/2) ion. The Coulomb-dipole theory predicts A(PA) to be the ratio of two oscillating functions in sharp contrast to any power law (specifically that of Wannier, 1953) for which the ratio is expected to be a smooth function of energy.

  8. The Effects of School Wide Bonuses on Student Achievement: Regression Discontinuity Evidence from North Carolina

    ERIC Educational Resources Information Center

    Lauen, Douglas Lee

    2011-01-01

    This study examines the incentive effects of North Carolina's practice of awarding performance bonuses on test score achievement on the state tests. Bonuses were awarded based solely on whether a school exceeds a threshold on a continuous performance metric. The study uses a sharp regression discontinuity design, an approach with strong internal…
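
    A sharp regression-discontinuity estimate of this kind can be sketched as two local linear fits on either side of the eligibility cutoff, with the treatment effect read off as the jump at the cutoff. The data, cutoff and bandwidth below are invented for illustration; this is not the study's estimation code.

```python
# Sharp regression-discontinuity sketch: local linear fits on both sides of a
# cutoff, treatment effect = jump in the fitted outcome at the cutoff.
import numpy as np

def sharp_rd_estimate(running, outcome, cutoff, bandwidth):
    left = (running >= cutoff - bandwidth) & (running < cutoff)
    right = (running >= cutoff) & (running <= cutoff + bandwidth)
    # local linear fits in the running variable, centered at the cutoff
    bl = np.polyfit(running[left] - cutoff, outcome[left], 1)
    br = np.polyfit(running[right] - cutoff, outcome[right], 1)
    return np.polyval(br, 0.0) - np.polyval(bl, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    score = rng.uniform(-1, 1, 5000)                 # school performance metric
    treated = score >= 0.0                           # bonus awarded at the threshold
    growth = 0.5 * score + 0.2 * treated + rng.normal(0, 0.1, score.size)
    print("estimated effect at cutoff:",
          round(sharp_rd_estimate(score, growth, 0.0, 0.25), 3))
```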

  9. Threshold Switching Characteristics of Nb/NbO2/TiN Vertical Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yuhan; Comes, Ryan B.; Wolf, Stuart A.

    2016-01-01

    Nb/NbO2/TiN vertical structures were synthesized in situ and patterned into devices with different contact areas. The devices exhibited threshold resistive switching with minimal hysteresis and a small threshold field E_threshold (60-90 kV/cm). The switching behavior was unipolar and demonstrated good repeatability. A less sharp but still sizable change in the device resistance was observed up to 150 °C. It was found that the resistive switching without the Nb capping layer exhibited hysteretic behavior and a much larger E_threshold (~250 kV/cm), likely due to a 2-3 nm surface Nb2O5 layer. The stable threshold switching behavior well above room temperature shows the potential applications of this device as an electronic switch.

  10. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    Ecological thresholds are values of system state variables at which small changes produce substantial changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  11. New technique for mouse oocyte injection via a modified holding pipette.

    PubMed

    Lyu, Q F; Deng, L; Xue, S G; Cao, S F; Liu, X Y; Jin, W; Wu, L Q; Kuang, Y P

    2010-11-01

    To improve mouse oocyte survival from intracytoplasmic sperm injection, the sharp tip of the injection pipette has been modified to have a flat end. Here, for the same goal but for a more convenient manipulation, a sharp injection pipette was kept whereas the holding pipette was modified to have a trumpet-shaped opening, which allows deeper injection into the oocyte as it is held. Mouse oocyte injection with mouse and human spermatozoa was performed at 37°C. For the injection of mouse oocyte with mouse sperm head, a significantly higher survival rate (83%) was achieved by utilizing the modified holding pipette than the conventional one (21%; P<0.001) and the fertilization rates were normal and comparable for both methods (82% versus 81%). A superior survival rate (82%) and acceptable normal fertilization rate (71%) were also achieved by utilizing the modified holding pipette for interspecies ICSI (injecting mouse oocyte with human spermatozoon). Taken together, by utilizing a holding pipette with a trumpet-shaped opening, acceptable rates of mouse oocyte survival and fertilization can be achieved using a sharp injection pipette under conditions usual for human oocyte injection. Copyright © 2010 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  12. On the Structure of the Iron K-Edge

    NASA Technical Reports Server (NTRS)

    Palmeri, P.; Mendoza, C.; Kallman, T. R.; Bautista, M. A.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    It is shown that the commonly held view of a sharp Fe K edge must be modified if the decay pathways of the series of resonances converging to the K thresholds are adequately taken into account. These resonances display damped Lorentzian profiles of nearly constant widths that are smeared to impose continuity across the threshold. By modeling the effects of K damping on opacities, it is found that the broadening of the K edge grows with the ionization level of the plasma, and the appearance at high ionization of a localized absorption feature at 7.2 keV is identified as the Kβ unresolved transition array.

  13. Edge Sharpness Assessment by Parametric Modeling: Application to Magnetic Resonance Imaging.

    PubMed

    Ahmad, R; Ding, Y; Simonetti, O P

    2015-05-01

    In biomedical imaging, edge sharpness is an important yet often overlooked image quality metric. In this work, a semi-automatic method to quantify edge sharpness in the presence of significant noise is presented with application to magnetic resonance imaging (MRI). The method is based on parametric modeling of image edges. First, an edge map is automatically generated and one or more edges-of-interest (EOI) are manually selected using graphical user interface. Multiple exclusion criteria are then enforced to eliminate edge pixels that are potentially not suitable for sharpness assessment. Second, at each pixel of the EOI, an image intensity profile is read along a small line segment that runs locally normal to the EOI. Third, the profiles corresponding to all EOI pixels are individually fitted with a sigmoid function characterized by four parameters, including one that represents edge sharpness. Last, the distribution of the sharpness parameter is used to quantify edge sharpness. For validation, the method is applied to simulated data as well as MRI data from both phantom imaging and cine imaging experiments. This method allows for fast, quantitative evaluation of edge sharpness even in images with poor signal-to-noise ratio. Although the utility of this method is demonstrated for MRI, it can be adapted for other medical imaging applications.
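
    The core of the method is fitting a four-parameter sigmoid to each intensity profile across the edge and using the width parameter as the sharpness measure. The logistic form and parameter names below are assumptions for illustration, not necessarily the authors' exact model.

```python
# Fit a four-parameter sigmoid to an edge profile; the width parameter gives a
# sharpness measure. Logistic form and parameters are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_edge(x, baseline, amplitude, center, width):
    return baseline + amplitude / (1.0 + np.exp(-(x - center) / width))

def edge_sharpness(profile, positions):
    p0 = [profile.min(), np.ptp(profile), positions.mean(), 1.0]
    params, _ = curve_fit(sigmoid_edge, positions, profile, p0=p0, maxfev=5000)
    return 1.0 / abs(params[3])       # sharper edge -> smaller width -> larger value

if __name__ == "__main__":
    x = np.linspace(-10, 10, 81)
    rng = np.random.default_rng(3)
    noisy_edge = sigmoid_edge(x, 10.0, 100.0, 0.5, 1.5) + rng.normal(0, 3.0, x.size)
    print("sharpness estimate:", round(edge_sharpness(noisy_edge, x), 3))
```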

  14. Effect of an Expenditure Cap on Low-Income Seniors' Drug Use and Spending in a State Pharmacy Assistance Program

    PubMed Central

    Bishop, Christine E; Ryan, Andrew M; Gilden, Daniel M; Kubisiak, Joanna; Thomas, Cindy Parks

    2009-01-01

    Objective: To estimate the impact of a soft cap (a ceiling on utilization beyond which insured enrollees pay a higher copayment) on low-income elders' use of prescription drugs. Data Sources and Setting: Claims and enrollment files for the first year (June 2002 through May 2003) of the Illinois SeniorCare program, a state pharmacy assistance program, and Medicare claims and enrollment files, 2001 through 2003. SeniorCare enrolled non-Medicaid-eligible elders with income less than 200 percent of the Federal Poverty Level. Minimal copays increased by 20 percent of prescription cost when enrollee expenditures reached $1,750. Research Design: Models were estimated for three dependent variables: enrollees' average monthly utilization (number of prescriptions), spending, and the proportion of drugs that were generic rather than brand. Observations included all program enrollees who exceeded the cap and covered two periods, before and after the cap was exceeded. Principal Findings: On average, enrollees exceeding the cap reduced the number of drugs they purchased by 14 percent, monthly expenditures decreased by 19 percent, and the proportion generic increased by 4 percent, all significant at p<.01. Impacts were greater for enrollees with greater initial spending, for enrollees without one of five chronic illness diagnoses in the previous calendar year, and for enrollees with lower income. Conclusions: Near-poor elders enrolled in plans with caps or coverage gaps, including Part D plans, may face sharp declines in utilization when they exceed these thresholds. PMID:19291168

  15. Dangers of dermatologic surgery: protect your feet.

    PubMed

    Barr, Jerome; Siegel, Daniel

    2004-12-01

    Dermatologists frequently utilize scalpels, which are reported to be the culprit in around seven percent of the 385,000 sharps-related injuries sustained by healthcare personnel each year. Injuries from sharp devices are associated with the occupational transmission of more than 20 pathogens. Dropped scalpels may penetrate unprotected lower-extremity skin, and there are no published data regarding the actual degree of protection a shoe provides against falling sharps. The purpose of this study was to evaluate and determine which types of shoes will protect their wearers. Although every shoe decreased a falling sharp's depth of penetration into the foot, shoes cannot be relied on to prevent injury. More than half of the shoes allowed the scalpel blade to pass through and penetrate into the meat.

  16. Differential equation models for sharp threshold dynamics.

    PubMed

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
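
    The modeling idea can be sketched with a generic compartmental stand-in: integrate the pre-detection dynamics until a threshold event fires, then restart the integration with modified dynamics. The equations and parameters below are illustrative, not the authors' model.

```python
# Threshold/detection event inside an ODE model: integrate until the event
# fires, then continue with modified dynamics. Generic SIR-style stand-in.
import numpy as np
from scipy.integrate import solve_ivp

BETA, GAMMA, CLEANUP = 0.4, 0.05, 0.6

def pre_detection(t, y):
    s, i, r = y
    return [-BETA * s * i, BETA * s * i - GAMMA * i, GAMMA * i]

def post_detection(t, y):
    s, i, r = y                      # after detection, infected hosts are cleaned faster
    return [-BETA * s * i, BETA * s * i - (GAMMA + CLEANUP) * i, (GAMMA + CLEANUP) * i]

def detection(t, y):
    return y[1] - 0.2                # detection triggers when infections reach 20%
detection.terminal = True
detection.direction = 1

if __name__ == "__main__":
    phase1 = solve_ivp(pre_detection, (0, 200), [0.99, 0.01, 0.0],
                       events=detection, max_step=0.5)
    print("detection time:", phase1.t_events[0])
    if phase1.t_events[0].size:
        phase2 = solve_ivp(post_detection, (phase1.t[-1], 200), phase1.y[:, -1],
                           max_step=0.5)
        print("peak infected after detection:", round(phase2.y[1].max(), 3))
```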

  17. Field emission from isolated individual vertically aligned carbon nanocones

    NASA Astrophysics Data System (ADS)

    Baylor, L. R.; Merkulov, V. I.; Ellis, E. D.; Guillorn, M. A.; Lowndes, D. H.; Melechko, A. V.; Simpson, M. L.; Whealton, J. H.

    2002-04-01

    Field emission from isolated individual vertically aligned carbon nanocones (VACNCs) has been measured using a small-diameter moveable probe. The probe was scanned parallel to the sample plane to locate the VACNCs, and perpendicular to the sample plane to measure the emission turn-on electric field of each VACNC. Individual VACNCs can be good field emitters. The emission threshold field depends on the geometric aspect ratio (height/tip radius) of the VACNC and is lowest when a sharp tip is present. VACNCs exposed to a reactive ion etch process demonstrate a lowered emission threshold field while maintaining a similar aspect ratio. Individual VACNCs can have low emission thresholds, carry high current densities, and have long emission lifetime. This makes them very promising for various field emission applications for which deterministic placement of the emitter with submicron accuracy is needed.

  18. Critical dynamics on a large human Open Connectome network

    NASA Astrophysics Data System (ADS)

    Ódor, Géza

    2016-12-01

    Extended numerical simulations of threshold models have been performed on a human brain network with N = 836 733 connected nodes available from the Open Connectome Project. While in the case of simple threshold models a sharp discontinuous phase transition without any critical dynamics arises, variable threshold models exhibit extended power-law scaling regions. This is attributed to the fact that Griffiths effects, stemming from the topological or interaction heterogeneity of the network, can become relevant if the input sensitivity of nodes is equalized. I have studied the effects of link directness, as well as the consequence of inhibitory connections. Nonuniversal power-law avalanche size and time distributions have been found, with exponents agreeing with the values obtained in electrode experiments of the human brain. The dynamical critical region occurs in an extended control parameter space without the assumption of self-organized criticality.
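
    A minimal threshold-model update of the kind simulated here can be sketched on a small random graph: a node activates once the fraction of its active neighbours reaches a threshold. The relative-threshold rule, graph and parameters below are illustrative simplifications, not the paper's setup on the Open Connectome network; networkx is assumed to be available.

```python
# Simple relative-threshold cascade on a random graph (illustrative stand-in
# for the threshold models simulated in the paper).
import random
import networkx as nx

def threshold_cascade(graph, threshold=0.25, seed_fraction=0.05, steps=50, rng=None):
    rng = rng or random.Random(0)
    active = {n for n in graph if rng.random() < seed_fraction}
    for _ in range(steps):
        newly_active = set()
        for node in graph:
            if node in active:
                continue
            neighbors = list(graph.neighbors(node))
            if neighbors:
                frac = sum(nb in active for nb in neighbors) / len(neighbors)
                if frac >= threshold:        # node fires once enough neighbours are active
                    newly_active.add(node)
        if not newly_active:
            break
        active |= newly_active
    return active

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(2000, 0.005, seed=1)
    for theta in (0.1, 0.25, 0.5):
        size = len(threshold_cascade(g, threshold=theta))
        print(f"threshold={theta:.2f}  final active fraction={size / g.number_of_nodes():.2f}")
```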

  19. Quantitative measurement of electron number in nanosecond and picosecond laser-induced air breakdown

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yue; Sawyer, Jordan C.; Su, Liu

    2016-05-07

    Here we present quantitative measurements of total electron numbers in laser-induced air breakdown at pressures ranging from atmospheric to 40 bar(g) by 10 ns and 100 ps laser pulses. A quantifiable definition for the laser-induced breakdown threshold is identified by a sharp increase in the measurable total electron numbers via dielectric-calibrated coherent microwave scattering. For the 10 ns laser pulse, the threshold of laser-induced breakdown in atmospheric air is defined as the total electron number of ∼10^6. This breakdown threshold decreases with an increase of pressure and laser photon energy (shorter wavelength), which is consistent with the theory of initial multiphoton ionization and subsequent avalanche processes. For the 100 ps laser pulse cases, a clear threshold is not present and only marginal pressure effects can be observed, which is due to the short pulse duration leading to stronger multiphoton ionization and minimal collisional avalanche ionization.

  20. Carbon dioxide laser polishing of fused silica surfaces for increased laser-damage resistance at 1064 nm.

    PubMed

    Temple, P A; Lowdermilk, W H; Milam, D

    1982-09-15

    Mechanically polished fused silica surfaces were heated with continuous-wave CO(2) laser radiation. Laser-damage thresholds of the surfaces were measured with 1064-nm 9-nsec pulses focused to small spots and with large-spot, 1064-nm, 1-nsec irradiation. A sharp transition from laser-damage-prone to highly laser-damage-resistant took place over a small range in CO(2) laser power. The transition to high damage resistance occurred at a silica surface temperature where material softening began to take place as evidenced by the onset of residual strain in the CO(2) laser-processed part. The small-spot damage measurements show that some CO(2) laser-treated surfaces have a local damage threshold as high as the bulk damage threshold of SiO(2). On some CO(2) laser-treated surfaces, large-spot damage thresholds were increased by a factor of 3-4 over thresholds of the original mechanically polished surface. These treated parts show no obvious change in surface appearance as seen in bright-field, Nomarski, or total internal reflection microscopy. They also show little change in transmissive figure. Further, antireflection films deposited on CO(2) laser-treated surfaces have thresholds greater than the thresholds of antireflection films on mechanically polished surfaces.

  1. Differential Equation Models for Sharp Threshold Dynamics

    DTIC Science & Technology

    2012-08-01

    We apply our approach to a model of infection, where a detection event drastically changes dynamics, and to the Lanchester model of armed conflict, where the loss of a key capability drastically changes dynamics. We derive and demonstrate a step-by-step, repeatable method for analyzing probabilistic threshold dynamics using differential equations. Subject terms: Differential Equations, Markov Population Process, S-I-R Epidemic, Lanchester Model.

  2. On the Mechanisms of Formation of Memory Channels and Development of Negative Differential Resistance in Solid Solutions of the TlInTe2-TlYbTe2 System

    NASA Astrophysics Data System (ADS)

    Akhmedova, A. M.

    2018-04-01

    The behavior of an electronic subsystem is investigated in the course of formation and development of a memory channel in solid solutions of the TlInTe2-TlYbTe2 system. An analysis of the current-voltage characteristics provides insight into the reason for the sharp change in the electrical conductance of the specimens under study during their transition from the high-resistance to the high-conductance state, and into the reasons for the well-known instability of threshold converters, which makes it possible to design devices with high threshold-voltage stability.

  3. Neural tuning characteristics of auditory primary afferents in the chicken embryo.

    PubMed

    Jones, S M; Jones, T A

    1995-02-01

    Primary afferent activity was recorded from the cochlear ganglion in chicken embryos (Gallus domesticus) at 19 days of incubation (E19). The ganglion was accessed via the recessus scala tympani and impaled with glass micropipettes. Frequency tuning curves were obtained using a computerized threshold tracking procedure. Tuning curves were evaluated to determine characteristic frequencies (CFs), CF thresholds, slopes of low and high frequency flanks, and tip sharpness (Q10dB). The majority of tuning curves exhibited the typical 'V' shape described for older birds and, on average, appeared relatively mature based on mean values for CF thresholds (59.6 +/- 20.3 dBSPL) and tip sharpness (Q10dB = 5.2 +/- 3). The mean slopes of the low (61.9 +/- 37 dB/octave) and high (64.6 +/- 33 dB/octave) frequency flanks, although comparable, were somewhat less than those reported for 21-day-old chickens. Approximately 14% of the tuning curves displayed an unusual 'saw-tooth' pattern. CFs ranged from 188 to 1623 Hz. The highest CF was well below those reported for post-hatch birds. In addition, a broader range of Q10dB values (1.2 to 16.9) may be related to a greater variability in embryonic tuning curves. Overall, these data suggest that an impressive functional maturity exists in the embryo at E19. The most significant sign of immaturity was the limited expression of high frequencies. It is argued that the limited high CF in part may be due to the developing middle ear transfer function and/or to a functionally immature cochlear base.

  4. Neural tuning characteristics of auditory primary afferents in the chicken embryo

    NASA Technical Reports Server (NTRS)

    Jones, S. M.; Jones, T. A.

    1995-01-01

    Primary afferent activity was recorded from the cochlear ganglion in chicken embryos (Gallus domesticus) at 19 days of incubation (E19). The ganglion was accessed via the recessus scala tympani and impaled with glass micropipettes. Frequency tuning curves were obtained using a computerized threshold tracking procedure. Tuning curves were evaluated to determine characteristic frequencies (CFs), CF thresholds, slopes of low and high frequency flanks, and tip sharpness (Q10dB). The majority of tuning curves exhibited the typical 'V' shape described for older birds and, on average, appeared relatively mature based on mean values for CF thresholds (59.6 +/- 20.3 dBSPL) and tip sharpness (Q10dB = 5.2 +/- 3). The mean slopes of the low (61.9 +/- 37 dB/octave) and high (64.6 +/- 33 dB/octave) frequency flanks, although comparable, were somewhat less than those reported for 21-day-old chickens. Approximately 14% of the tuning curves displayed an unusual 'saw-tooth' pattern. CFs ranged from 188 to 1623 Hz. The highest CF was well below those reported for post-hatch birds. In addition, a broader range of Q10dB values (1.2 to 16.9) may be related to a greater variability in embryonic tuning curves. Overall, these data suggest that an impressive functional maturity exists in the embryo at E19. The most significant sign of immaturity was the limited expression of high frequencies. It is argued that the limited high CF in part may be due to the developing middle ear transfer function and/or to a functionally immature cochlear base.

  5. Financial states of world financial and commodities markets around sovereign debt crisis

    NASA Astrophysics Data System (ADS)

    Nobi, Ashadun; Lee, Jae Woo

    2017-11-01

    We applied a threshold method to construct a complex network from the cross-correlation coefficients of 46 daily time series comprising 23 global indices and 23 commodity futures from 2010 to 2014. We identify financial states of both global indices and commodity futures based on the change of the network structure. The average correlation trends downward over the study period, except for sharp peaks during crises. The threshold networks are generated at a threshold value of θ = 0.1, and the change in the degree of each node over time is used to identify the financial state of each index. We observe that commodity futures, such as EU CO2 emission, live cattle and natural gas, as well as the financial indices of the Jakarta and Indonesia stock exchange (JKSE) and Kuala Lumpur stock exchange (KLSE), change states frequently. By the average change in links we identify the indices which are more reactive to crises.
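
    The threshold-network construction can be sketched directly: compute the cross-correlation matrix of the series, keep links whose correlation exceeds θ, and read off node degrees. The random series below stand in for the 46 real index and commodity futures series; networkx is assumed to be available.

```python
# Build a threshold network from a correlation matrix and inspect node degrees.
# The synthetic factor-plus-noise series are illustrative stand-ins.
import numpy as np
import networkx as nx

def threshold_network(returns, theta=0.1):
    corr = np.corrcoef(returns)                      # series are rows
    adjacency = (corr > theta) & ~np.eye(corr.shape[0], dtype=bool)
    return nx.from_numpy_array(adjacency.astype(int))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    common = rng.normal(size=250)                    # shared "market" factor
    series = 0.4 * common + rng.normal(size=(46, 250))
    g = threshold_network(series, theta=0.1)
    degrees = dict(g.degree())
    print("mean degree:", sum(degrees.values()) / len(degrees))
```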

  6. Thin-phase screen estimates of TID effects on midlatitude transionospheric radio paths

    NASA Astrophysics Data System (ADS)

    Reilly, Michael H.

    1993-11-01

    The thin-phase screen model for ionospheric irregularity perturbations to transionospheric radio propagation is redefined. It is argued that the phase screen normal should be along the line of sight (LOS) between a receiver on the ground and a space transmitter, rather than in the zenith direction at the point of intersection with the LOS, which is traditional. The model is applied to a calculation of TID strength thresholds for the occurrence of multipath and scintillation. The results are in sharp disagreement with the traditional model, which predicts thresholds lower by an order of magnitude in typical cases. Midlatitude observations of TID strengths are reviewed, and it is found that multipath thresholds can be exceeded under one or more favorable circumstances, which include frequencies below about 100 MHz, low elevation angles, winter, night, atmospheric gravity wave velocity near the magnetic field direction and away from parallel with the LOS, and low solar activity.

  7. Magnon Polarons in the Spin Seebeck Effect.

    PubMed

    Kikkawa, Takashi; Shen, Ka; Flebus, Benedetta; Duine, Rembert A; Uchida, Ken-Ichi; Qiu, Zhiyong; Bauer, Gerrit E W; Saitoh, Eiji

    2016-11-11

    Sharp structures in the magnetic field-dependent spin Seebeck effect (SSE) voltages of Pt/Y_{3}Fe_{5}O_{12} at low temperatures are attributed to the magnon-phonon interaction. Experimental results are well reproduced by a Boltzmann theory that includes magnetoelastic coupling. The SSE anomalies coincide with magnetic fields tuned to the threshold of magnon-polaron formation. The effect gives insight into the relative quality of the lattice and magnetization dynamics.

  8. Lacosamide and Levetiracetam Have No Effect on Sharp-Wave Ripple Rate.

    PubMed

    Kudlacek, Jan; Chvojka, Jan; Posusta, Antonin; Kovacova, Lubica; Hong, Seung Bong; Weiss, Shennan; Volna, Kamila; Marusic, Petr; Otahal, Jakub; Jiruska, Premysl

    2017-01-01

    Pathological high-frequency oscillations (HFOs) are a novel marker used to improve the delineation of epileptogenic tissue and, hence, the outcome of epilepsy surgery. Their practical clinical utilization is curtailed by the inability to discriminate them from physiological oscillations due to frequency overlap. Although it is well documented that pathological HFOs are suppressed by antiepileptic drugs (AEDs), the effect of AEDs on normal HFOs is not well known. In this experimental study, we have explored whether physiological HFOs (sharp-wave ripples) of hippocampal origin respond to AED treatment. The results show that application of a single dose of levetiracetam or lacosamide does not reduce the rate of sharp-wave ripples. In addition, it seems that these new-generation drugs do not negatively affect the cellular and network mechanisms involved in sharp-wave ripple generation, which may provide a plausible explanation for the absence of significant negative effects of these drugs on cognitive functions, particularly memory.

  9. K-shell Photoionization of Na-like to Cl-like Ions of Mg, Si, S, Ar, and Ca

    NASA Technical Reports Server (NTRS)

    Witthoeft, M. C.; Garcia, J.; Kallman, T. R.; Bautista, M. A.; Mendoza, C.; Palmeri, P.; Quinet, P.

    2010-01-01

    We present R-matrix calculations of photoabsorption and photoionization cross sections across the K edge of Mg, Si, S, Ar, and Ca ions with more than 10 electrons. The calculations include the effects of radiative and Auger damping by means of an optical potential. The wave functions are constructed from single-electron orbital bases obtained using a Thomas-Fermi-Dirac statistical model potential. Configuration interaction is considered among all states up to n = 3. The damping processes affect the resonances converging to the K thresholds, causing them to display symmetric profiles of constant width that smear the otherwise sharp edge at the photoionization threshold. These data are important for the modeling of features found in photoionized plasmas.

  10. Phase-locking transition in a chirped superconducting Josephson resonator.

    PubMed

    Naaman, O; Aumentado, J; Friedland, L; Wurtele, J S; Siddiqi, I

    2008-09-12

    We observe a sharp threshold for dynamic phase locking in a high-Q transmission line resonator embedded with a Josephson tunnel junction, and driven with a purely ac, chirped microwave signal. When the drive amplitude is below a critical value, which depends on the chirp rate and is sensitive to the junction critical current I0, the resonator is only excited near its linear resonance frequency. For a larger amplitude, the resonator phase locks to the chirped drive and its amplitude grows until a deterministic maximum is reached. Near threshold, the oscillator evolves smoothly in one of two diverging trajectories, providing a way to discriminate small changes in I0 with a nonswitching detector, with potential applications in quantum state measurement.

  11. Pulse Width Affects Scalp Sensation of Transcranial Magnetic Stimulation.

    PubMed

    Peterchev, Angel V; Luber, Bruce; Westin, Gregory G; Lisanby, Sarah H

    Scalp sensation and pain comprise the most common side effect of transcranial magnetic stimulation (TMS), which can reduce tolerability and complicate experimental blinding. We explored whether changing the width of single TMS pulses affects the quality and tolerability of the resultant somatic sensation. Using a controllable pulse parameter TMS device with a figure-8 coil, single monophasic magnetic pulses inducing electric field with initial phase width of 30, 60, and 120 µs were delivered in 23 healthy volunteers. Resting motor threshold of the right first dorsal interosseus was determined for each pulse width, as reported previously. Subsequently, pulses were delivered over the left dorsolateral prefrontal cortex at each of the three pulse widths at two amplitudes (100% and 120% of the pulse-width-specific motor threshold), with 20 repetitions per condition delivered in random order. After each pulse, subjects rated 0-to-10 visual analog scales for Discomfort, Sharpness, and Strength of the sensation. Briefer TMS pulses with amplitude normalized to the motor threshold were perceived as slightly more uncomfortable than longer pulses (with an average 0.89-point increase on the Discomfort scale for pulse width of 30 µs compared to 120 µs). The sensation of the briefer pulses was felt to be substantially sharper (a 2.95-point increase for 30 µs compared to 120 µs pulse width), but not stronger than longer pulses. As expected, higher amplitude pulses increased the perceived discomfort and strength, and, to a lesser degree, the perceived sharpness. Our findings contradict a previously published hypothesis that briefer TMS pulses are more tolerable. We discovered that the opposite is true, which merits further study as a means of enhancing tolerability in the context of repetitive TMS. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Pulse width affects scalp sensation of transcranial magnetic stimulation

    PubMed Central

    Peterchev, Angel V.; Luber, Bruce; Westin, Gregory G.; Lisanby, Sarah H.

    2016-01-01

    Background Scalp sensation and pain comprise the most common side effect of transcranial magnetic stimulation (TMS), which can reduce tolerability and complicate experimental blinding. Objective We explored whether changing the width of single TMS pulses affects the quality and tolerability of the resultant somatic sensation. Methods Using a controllable pulse parameter TMS device with a figure-8 coil, single monophasic magnetic pulses inducing electric field with initial phase width of 30, 60, and 120 µs were delivered in 23 healthy volunteers. Resting motor threshold of the right first dorsal interosseus was determined for each pulse width, as reported previously. Subsequently, pulses were delivered over the left dorsolateral prefrontal cortex at each of the three pulse widths at two amplitudes (100% and 120% of the pulse-width-specific motor threshold), with 20 repetitions per condition delivered in random order. After each pulse, subjects rated 0-to-10 visual analog scales for Discomfort, Sharpness, and Strength of the sensation. Results Briefer TMS pulses with amplitude normalized to the motor threshold were perceived as slightly more uncomfortable than longer pulses (with an average 0.89-point increase on the Discomfort scale for pulse width of 30 µs compared to 120 µs). The sensation of the briefer pulses was felt to be substantially sharper (a 2.95-point increase for 30 µs compared to 120 µs pulse width), but not stronger than longer pulses. As expected, higher amplitude pulses increased the perceived discomfort and strength, and, to a lesser degree, the perceived sharpness. Conclusions Our findings contradict a previously published hypothesis that briefer TMS pulses are more tolerable. We discovered that the opposite is true, which merits further study as a means of enhancing tolerability in the context of repetitive TMS. PMID:28029593

  13. Noradrenaline decreases spike voltage threshold and induces electrographic sharp waves in turtle medial cortex in vitro.

    PubMed

    Lorenzo, Daniel; Velluti, Julio C

    2004-01-01

    The noradrenergic modulation of neuronal properties has been described at different levels of the mammalian brain. Although the anatomical characteristics of the noradrenergic system are well known in reptiles, functional data are scarce. In our study the noradrenergic modulation of cortical electrogenesis in the turtle medial cortex was studied in vitro using a combination of field and intracellular recordings. Turtle EEG consists of a low voltage background interspersed by spontaneous large sharp waves (LSWs). Noradrenaline (NA, 5-40 microM) induced (or enhanced) the generation of LSWs in a dose-dependent manner. Pharmacological experiments suggest the participation of alpha and beta receptors in this effect. In medial cortex neurons NA induced a hyperpolarization of the resting potential and a decrease of input resistance. Both effects were observed also after TTX treatment. Noradrenaline increased the response of the cells to depolarizing pulses, resulting in an upward shift of the frequency/current relation. In most cells the excitability change was mediated by a decrease of the spike voltage threshold resulting in the reduction of the amount of depolarization needed to fire the cell (voltage threshold minus resting potential). As opposed to the mechanisms reported in mammalian neurons, no changes in the frequency adaptation or the post-train afterhyperpolarization were observed. The NA effects at the cellular level were not reproduced by noradrenergic agonists. Age- and species-dependent properties in the pharmacology of adrenergic receptors could be involved in this result. Cellular effects of NA in turtle cortex are similar to those described in mammals, although the increase in cellular excitability seems to be mediated by a different mechanism. Copyright 2004 S. Karger AG, Basel

  14. Investigation of micromixing by acoustically oscillated sharp-edges

    PubMed Central

    Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco

    2016-01-01

    Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel. PMID:27158292

  15. Investigation of micromixing by acoustically oscillated sharp-edges.

    PubMed

    Nama, Nitesh; Huang, Po-Hsun; Huang, Tony Jun; Costanzo, Francesco

    2016-03-01

    Recently, acoustically oscillated sharp-edges have been utilized to achieve rapid and homogeneous mixing in microchannels. Here, we present a numerical model to investigate acoustic mixing inside a sharp-edge-based micromixer in the presence of a background flow. We extend our previously reported numerical model to include the mixing phenomena by using perturbation analysis and the Generalized Lagrangian Mean (GLM) theory in conjunction with the convection-diffusion equation. We divide the flow variables into zeroth-order, first-order, and second-order variables. This results in three sets of equations representing the background flow, acoustic response, and the time-averaged streaming flow, respectively. These equations are then solved successively to obtain the mean Lagrangian velocity which is combined with the convection-diffusion equation to predict the concentration profile. We validate our numerical model via a comparison of the numerical results with the experimentally obtained values of the mixing index for different flow rates. Further, we employ our model to study the effect of the applied input power and the background flow on the mixing performance of the sharp-edge-based micromixer. We also suggest potential design changes to the previously reported sharp-edge-based micromixer to improve its performance. Finally, we investigate the generation of a tunable concentration gradient by a linear arrangement of the sharp-edge structures inside the microchannel.

  16. SHARP Multiphysics Tutorials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Mahadevan, Vijay S.

    SHARP, developed under the NEAMS Reactor Product Line, is an advanced modeling and simulation toolkit for the analysis of advanced nuclear reactors. SHARP currently comprises three physics modules: neutronics, thermal hydraulics, and structural mechanics. SHARP empowers designers to produce accurate results for modeling physical phenomena that have been identified as important for nuclear reactor analysis. SHARP can use existing physics codes and take advantage of existing infrastructure capabilities in the MOAB framework and the coupling driver/solver library, the Coupled Physics Environment (CouPE), which utilizes the widely used, scalable PETSc library. This report aims at demonstrating the coupled-physics simulation capability of SHARP by introducing the demonstration example called sahex in advance of the SHARP release expected by Mar 2016. sahex consists of 6 fuel pins with cladding, 1 control rod, sodium coolant and an outer duct wall that encloses all the other components. This example is carefully chosen to demonstrate the proof of concept for solving more complex demonstration examples such as EBR II assembly and ABTR full core. The workflow of preparing the input files, running the case and analyzing the results is demonstrated in this report. Moreover, an extension of the sahex model called sahex_core, which adds six homogenized neighboring assemblies to the full heterogeneous sahex model, is presented to test homogenization capabilities in both Nek5000 and PROTEUS. Some primary information on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and configure/install with optimal flags in an architecture-aware fashion, is also covered by this report. Step-by-step instructions are provided to help users create their own cases. Details on these processes will be provided in the SHARP user manual that will accompany the first release.

  17. WOOLLY CUSPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nauenberg, M.; Pais, A.

    1962-04-01

    A study is made of the elastic scattering 1 + 2 yields 1 + 2 in the energy region where the inelastic process 1 + 2 yields 3 + 4 sets in, for the case that particle 3 is unstable. By 'woolly cusp' is meant the phenomenon that corresponds to the sharp cusp in the stable case. The procedure followed is to consider the inelastic channel to be of the three-body type, where the three-body states are parametrized by a Breit-Wigner formula around a mean mass m of particle 3. The connection between a woolly and a sharp cusp is made evident. The problem is studied in terms of a two-channel S-wave K matrix. In the two-channel approximation the woolly cusp necessarily shows a decrease in the elastic cross section sigma above a characteristic energy. As a function of energy, sigma must either show a maximum or an inflection point. In either case, the energy at which this happens may lie above or below the inelastic threshold for the fictitious case that particle 3 has a sharp mass m. The sign and magnitude of the elastic scattering phase shift at this 'm point' approximately determines which case is actually realized. (auth)

  18. Spectrally selective solar absorber with sharp and temperature dependent cut-off based on semiconductor nanowire arrays

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Zhou, Lin; Zheng, Qinghui; Lu, Hong; Gan, Qiaoqiang; Yu, Zongfu; Zhu, Jia

    2017-05-01

    Spectrally selective absorbers (SSA) with high selectivity of absorption and a sharp cut-off between high absorptivity and low emissivity are critical for efficient solar energy conversion. Here, we report a semiconductor-nanowire-enabled SSA with not only high absorption selectivity but also a temperature-dependent sharp absorption cut-off. By taking advantage of the temperature-dependent bandgap of semiconductors, we systematically demonstrate that the absorption cut-off profile of the semiconductor-nanowire-based SSA can be flexibly tuned, which is quite different from most of the other SSAs reported so far. As an example, silicon nanowire based selective absorbers are fabricated, with the measured absorption efficiency above (below) the bandgap of ˜97% (15%) combined with an extremely sharp absorption cut-off (transition region ˜200 nm), the sharpest SSA demonstrated so far. The demonstrated semiconductor-nanowire-based SSA can enable a high solar thermal efficiency of ≳86% under a wide range of operating conditions, making these absorbers competitive candidates for concentrated solar energy utilization.

  19. Enhancement of dielectric constant at percolation threshold in CaCu3Ti4O12 ceramic fabricated by both solid state and sol-gel process

    NASA Astrophysics Data System (ADS)

    Mukherjee, Rupam; Garcia, Lucia; Lawes, Gavin; Nadgorny, Boris

    2014-03-01

    We have investigated the large dielectric enhancement at the percolation threshold by introducing metallic RuO2 grains into a matrix of CaCu3Ti4O12 (CCTO). The intrinsic response of the pure CCTO samples prepared by solid state and sol-gel processes results in a dielectric constant on the order of 10⁴ and 10³ respectively with low loss. Scanning electron microscopy and energy dispersive x-ray spectroscopy indicate that a difference in the thickness of the copper oxide enriched grain boundary is the main reason for the different dielectric properties between these two samples. Introducing RuO2 metallic fillers in these CCTO samples yields a sharp increase of the dielectric constant at percolation threshold f_c, by a factor of 6 and 3 respectively. The temperature dependence of the dielectric constant shows that the dipolar relaxation plays an important role in enhancing dielectric constant in composite systems.

  20. Quantitative analysis reveals how EGFR activation and downregulation are coupled in normal but not in cancer cells

    PubMed Central

    Capuani, Fabrizio; Conte, Alexia; Argenzio, Elisabetta; Marchetti, Luca; Priami, Corrado; Polo, Simona; Di Fiore, Pier Paolo; Sigismund, Sara; Ciliberto, Andrea

    2015-01-01

    Ubiquitination of the epidermal growth factor receptor (EGFR) that occurs when Cbl and Grb2 bind to three phosphotyrosine residues (pY1045, pY1068 and pY1086) on the receptor displays a sharp threshold effect as a function of EGF concentration. Here we use a simple modelling approach together with experiments to show that the establishment of the threshold requires both the multiplicity of binding sites and cooperative binding of Cbl and Grb2 to the EGFR. While the threshold is remarkably robust, a more sophisticated model predicted that it could be modulated as a function of EGFR levels on the cell surface. We confirmed experimentally that the system has evolved to perform optimally at physiological levels of EGFR. As a consequence, this system displays an intrinsic weakness that causes—at the supraphysiological levels of receptor and/or ligand associated with cancer—uncoupling of the mechanisms leading to signalling through phosphorylation and attenuation through ubiquitination. PMID:26264748
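
    The switch-like dependence described above can be illustrated with a generic Hill-type dose-response curve; the sketch below is only an illustration of how multiple sites with cooperative binding steepen the response into a threshold, not the authors' model, and all parameter values are arbitrary.

```python
import numpy as np

def hill_response(ligand, K=1.0, n=1.0):
    """Generic Hill-type dose-response: fraction of receptors in the
    ubiquitination-competent state as a function of ligand concentration."""
    return ligand**n / (K**n + ligand**n)

egf = np.logspace(-2, 2, 9)      # arbitrary EGF concentrations (a.u.)
for n in (1, 3):                 # n=1: no cooperativity; n=3: three cooperative sites
    print(f"n={n}:", np.round(hill_response(egf, K=1.0, n=n), 3))
# the n=3 curve switches from low to high over a much narrower concentration
# range than the n=1 curve, i.e. it behaves like a sharp threshold
```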

  1. Retinal image quality assessment based on image clarity and content

    NASA Astrophysics Data System (ADS)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.

  2. Field emission and photoluminescence characteristics of ZnS nanowires via vapor phase growth

    NASA Astrophysics Data System (ADS)

    Chang, Yongqin; Wang, Mingwei; Chen, Xihong; Ni, Saili; Qiang, Weijing

    2007-05-01

    Large-area ZnS nanowires were synthesized through a vapor phase deposition method. X-ray diffraction and electron microscopy results show that the products are composed of single crystalline ZnS nanowires with a cubic structure. The nanowires have sharp tips and are distributed uniformly on silicon substrates. The diameter of the bases is in the range of 320-530 nm and that of the tips is around 20-30 nm. The strong ultraviolet emission in the photoluminescence spectra also demonstrates that the ZnS nanowires are of high crystalline perfection. Field emission measurements reveal that the ZnS nanowires have a fairly low threshold field, which may be ascribed to their very sharp tips, rough surfaces and high crystal quality. The perfect field emission ability of the ZnS nanowires makes them a promising candidate for the fabrication of flexible cold cathodes.

  3. Sparse Recovery via Differential Inclusions

    DTIC Science & Technology

    2014-07-01

    2242. [Wai09] Martin J. Wainwright, Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming...solution, (1.11) β_t = 0 if t < 1/y, and β_t = y(1 − e^(−κ(t−1/y))) otherwise, which converges to the unbiased Bregman ISS estimator exponentially fast. Let us ...are not given the support set S, so the following two properties are used to evaluate the performance of an estimator β̂. 1. Model selection

  4. Very Low Threshold ASE and Lasing Using Auger-Suppressed Nanocrystal Quantum Dots

    NASA Astrophysics Data System (ADS)

    Park, Young-Shin; Bae, Wan Ki; Fidler, Andrew; Baker, Tomas; Lim, Jaehoon; Pietryga, Jeffrey; Klimov, Victor

    2015-03-01

    We report amplified spontaneous emission (ASE) and lasing with very low thresholds obtained using thin films made of engineered thick-shell CdSe/CdS QDs that have a CdSeS alloyed layer between the CdSe core and the CdS shell. These 'alloyed' QDs exhibit considerable reduction of Auger decay rates, which results in high biexciton emission quantum yields (QBX of ~12%) and extended biexciton lifetimes (τBX of ~4 ns). By using a fs laser (400 nm at 1 kHz repetition rate) as a pump source, we measured the threshold intensity of biexciton ASE as low as 5 μJ/cm², which is about 5 times lower than the lowest ASE thresholds reported for thick-shell QDs without interfacial alloying. Interestingly, we also observed biexciton random lasing from the same QD film. The lasing spectrum comprises several sharp peaks (linewidth ~0.2 nm), and the heights and spectral positions of these peaks show a strong dependence on the exact position of the excitation spot on the QD film. Our study suggests that further suppression of nonradiative Auger decay rates via even finer grading of the core/shell interface could lead to a further reduction in the lasing threshold and potentially realization of lasing under continuous-wave excitation.

  5. Quantifying and Modelling the Effect of Cloud Shadows on the Surface Irradiance at Tropical and Midlatitude Forests

    NASA Astrophysics Data System (ADS)

    Kivalov, Sergey N.; Fitzjarrald, David R.

    2018-02-01

    Cloud shadows lead to alternating light and dark periods at the surface, with the most abrupt changes occurring in the presence of low-level forced cumulus clouds. We examine multiyear irradiance time series observed at a research tower in a midlatitude mixed deciduous forest (Harvard Forest, Massachusetts, USA: 42.53°N, 72.17°W) and at a similar tower in a tropical rain forest (Tapajós National Forest, Pará, Brazil: 2.86°S, 54.96°W). We link the durations of these periods statistically to conventional meteorological reports of sky type and cloud height at the two forests and present a method to synthesize the surface irradiance time series from sky-type information. Four classes of events describing distinct sequential irradiance changes at transitions between cloud shadow and direct sunlight are identified: sharp-to-sharp, slow-to-slow, sharp-to-slow, and slow-to-sharp. Lognormal and Weibull statistical distributions distinguish among cloudy-sky types. Observers' qualitative reports of 'scattered' and 'broken' clouds are quantitatively distinguished by a threshold value of the ratio of mean clear to cloudy period durations. Generated synthetic time series based on these statistics adequately simulate the temporal "radiative forcing" linked to sky type. Our results offer a quantitative way to connect the conventional meteorological sky type to the time series of irradiance experienced at the surface.
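
    A hedged sketch of the kind of synthesis described above: alternating clear and shadow periods with lognormally distributed durations are concatenated into an irradiance series, and the clear-to-cloudy time ratio is computed. The distribution parameters and irradiance levels are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def synthetic_irradiance(n_periods=200, clear_lognorm=(5.0, 0.8),
                         shadow_lognorm=(4.0, 0.8), i_clear=900.0,
                         i_shadow=300.0, seed=0):
    """Alternate clear and cloud-shadow periods whose durations (in seconds)
    are drawn from lognormal distributions, and return a 1-s irradiance series."""
    rng = np.random.default_rng(seed)
    pieces = []
    for k in range(n_periods):
        mu, sigma = clear_lognorm if k % 2 == 0 else shadow_lognorm
        duration = max(1, int(rng.lognormal(mu, sigma)))
        level = i_clear if k % 2 == 0 else i_shadow
        pieces.append(np.full(duration, level))
    return np.concatenate(pieces)

irr = synthetic_irradiance()
clear_fraction = (irr > 600.0).mean()              # arbitrary split level (W/m^2)
ratio = clear_fraction / (1.0 - clear_fraction)    # clear time relative to cloudy time
print(f"{irr.size} s of synthetic irradiance, clear/cloudy time ratio = {ratio:.2f}")
```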

  6. Characteristics of indium-gallium-nitride multiple-quantum-well blue laser diodes grown by MOCVD

    NASA Astrophysics Data System (ADS)

    Mack, M. P.; Abare, A. C.; Hansen, M.; Kozodoy, P.; Keller, S.; Mishra, U.; Coldren, L. A.; DenBaars, S. P.

    1998-06-01

    Room temperature (RT) pulsed operation of blue (420 nm) nitride-based multi-quantum well (MQW) laser diodes grown on c-plane sapphire substrates has been demonstrated. Atmospheric pressure MOCVD was used to grow the active region of the device, which consisted of a 10-pair In0.21Ga0.79N (2.5 nm)/In0.07Ga0.93N (5 nm) InGaN MQW. Threshold current densities as low as 12.6 kA/cm² were observed for 10×1200 μm lasers with uncoated reactive ion etched (RIE) facets. The emission is strongly TE polarized and has a sharp transition in the far-field pattern above threshold. Laser diodes were tested under pulsed conditions and lasted up to 6 h at room temperature.

  7. Field-induced dielectric response saturation in o-TaS3

    DOE PAGES

    Ma, Yongchang; Lu, Cuimin; Wang, Xuewei; ...

    2016-08-03

    The temperature and electric field dependent conductivity spectra of an o-TaS3 sample with a 10 μm² cross section were measured. Besides the classical electric threshold E_T-Cl, we observed another novel threshold E_T-N at a larger electric field, where an S-shaped I-V relation was revealed. The appearance of E_T-N may be due to the establishment of coherence among small charge-density-wave domains. Under a stable field E > E_T-N, a sharp dispersion emerged below kHz. At a fixed temperature, the scattering rate of the charged condensate was extremely small and decreased with increasing field. With decreasing temperature, the scattering Fröhlich-mode conductivity would be consistent with the meta-stable state.

  8. The effect of the stability threshold on time to stabilization and its reliability following a single leg drop jump landing.

    PubMed

    Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H

    2016-02-08

    We aimed to provide insight into how threshold selection affects time to stabilization (TTS) and its reliability, to support the selection of methods to determine TTS. Eighty-two elite youth soccer players performed six single leg drop jump landings. The TTS was calculated based on four processed signals: the raw ground reaction force (GRF) signal (RAW), a moving root mean square window (RMS), a sequential average (SA) or an unbounded third order polynomial fit (TOP). For each trial and processing method a wide range of thresholds was applied. Per threshold, reliability of the TTS was assessed through intra-class correlation coefficients (ICC) for the vertical (V), anteroposterior (AP) and mediolateral (ML) direction of force. Low thresholds resulted in a sharp increase of TTS values and in the percentage of trials in which TTS exceeded trial duration. The TTS and ICC were essentially similar for RAW and RMS in all directions; ICCs were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) for the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V, and 'moderate' (0.6-0.8) for AP and ML. The ICCs for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold to assess TTS in elite youth soccer players following a single leg drop jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players. Copyright © 2016 Elsevier Ltd. All rights reserved.
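
    A minimal sketch of one common TTS definition, here a sequential-average (SA) variant: the cumulative mean of the ground reaction force is compared against a threshold band around its final value. The exact definitions and thresholds used in the study may differ; the landing trace below is synthetic.

```python
import numpy as np

def time_to_stabilization(grf, fs=1000.0, threshold=5.0):
    """TTS from a ground-reaction-force trace using a sequential-average rule:
    the sequential average is the cumulative mean of the signal, and TTS is the
    last time it lies farther than `threshold` (N) from its final value."""
    seq_avg = np.cumsum(grf) / np.arange(1, grf.size + 1)
    outside = np.abs(seq_avg - seq_avg[-1]) > threshold
    if not outside.any():
        return 0.0
    return (outside.nonzero()[0][-1] + 1) / fs          # seconds

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(0, 3.0, 1 / 1000.0)
    # toy landing trace: decaying oscillation around body weight plus noise
    grf = 700 + 900 * np.exp(-3 * t) * np.cos(20 * t) + rng.normal(0, 10, t.size)
    for thr in (2.0, 5.0, 20.0):
        print(f"threshold {thr:5.1f} N -> TTS = {time_to_stabilization(grf, 1000.0, thr):.3f} s")
```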

  9. Application of threshold concepts to ecological management problems: occupancy of Golden Eagles in Denali National Park, Alaska: Chapter 5

    USGS Publications Warehouse

    Eaton, Mitchell J.; Martin, Julien; Nichols, James D.; McIntyre, Carol; McCluskie, Maggie C.; Schmutz, Joel A.; Lubow, Bruce L.; Runge, Michael C.; Edited by Guntenspergen, Glenn R.

    2014-01-01

    In this chapter, we demonstrate the application of the various classes of thresholds, detailed in earlier chapters and elsewhere, via an actual but simplified natural resource management case study. We intend our example to provide the reader with the ability to recognize and apply the theoretical concepts of utility, ecological and decision thresholds to management problems through a formalized decision-analytic process. Our case study concerns the management of human recreational activities in Alaska’s Denali National Park, USA, and the possible impacts of such activities on nesting Golden Eagles, Aquila chrysaetos. Managers desire to allow visitors the greatest amount of access to park lands, provided that eagle nesting-site occupancy is maintained at a level determined to be acceptable by the managers themselves. As these two management objectives are potentially at odds, we treat minimum desired occupancy level as a utility threshold which, then, serves to guide the selection of annual management alternatives in the decision process. As human disturbance is not the only factor influencing eagle occupancy, we model nesting-site dynamics as a function of both disturbance and prey availability. We incorporate uncertainty in these dynamics by considering several hypotheses, including a hypothesis that site occupancy is affected only at a threshold level of prey abundance (i.e., an ecological threshold effect). By considering competing management objectives and accounting for two forms of thresholds in the decision process, we are able to determine the optimal number of annual nesting-site restrictions that will produce the greatest long-term benefits for both eagles and humans. Setting a utility threshold of 75 occupied sites, out of a total of 90 potential nesting sites, the optimization specified a decision threshold at approximately 80 occupied sites. At the point that current occupancy falls below 80 sites, the recommended decision is to begin restricting access to humans; above this level, it is recommended that all eagle territories be opened to human recreation. We evaluated the sensitivity of the decision threshold to uncertainty in system dynamics and to management objectives (i.e., to the utility threshold).

  10. Enhancements to the SHARP Build System and NEK5000 Coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alex; Bennett, Andrew R.; Billings, Jay Jay

    The SHARP project for the Department of Energy's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program provides a multiphysics framework for coupled simulations of advanced nuclear reactor designs. It provides an overall coupling environment that utilizes custom interfaces to couple existing physics codes through a common spatial decomposition and unique solution transfer component. As of this writing, SHARP couples neutronics, thermal hydraulics, and structural mechanics using PROTEUS, Nek5000, and Diablo respectively. This report details two primary SHARP improvements regarding the Nek5000 and Diablo individual physics codes: (1) an improved Nek5000 coupling interface that lets SHARP achieve a vast increase in overall solution accuracy by manipulating the structure of the internal Nek5000 spatial mesh, and (2) the capability to seamlessly couple structural mechanics calculations into the framework through improvements to the SHARP build system. The Nek5000 coupling interface now uses a barycentric Lagrange interpolation method that takes the vertex-based power and density computed from the PROTEUS neutronics solver and maps it to the user-specified, general-order Nek5000 spectral element mesh. Before this work, SHARP handled this vertex-based solution transfer in an averaging-based manner. SHARP users can now achieve higher levels of accuracy by specifying any arbitrary Nek5000 spectral mesh order. This improvement takes the average percentage error between the PROTEUS power solution and the Nek5000 interpolated result down drastically from over 23% to just above 2%, and maintains the correct power profile. We have integrated Diablo into the SHARP build system to facilitate the future coupling of structural mechanics calculations into SHARP. Previously, simulations involving Diablo were done in an iterative manner, requiring a large amount of manual work, and were left only as a task for advanced users. This report will detail a new Diablo build system that was implemented using GNU Autotools, mirroring much of the current SHARP build system, and easing the use of structural mechanics calculations for end-users of the SHARP multiphysics framework. It lets users easily build and use Diablo as a stand-alone simulation, as well as fully couple it with the other SHARP physics modules. The top-level SHARP build system was modified to allow Diablo to hook in directly. New dependency handlers were implemented to let SHARP users easily build the framework with these new simulation capabilities. The remainder of this report will describe this work in full, with a detailed discussion of the overall design philosophy of SHARP, the new solution interpolation method introduced, and the Diablo integration work. We will conclude with a discussion of possible future SHARP improvements that will serve to increase solution accuracy and framework capability.

  11. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare estimated thresholds when using the proposed expected utility approach and purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data of a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze the data of an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.
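
    A simplified, uncensored sketch of the idea: grid-search candidate marker thresholds and keep the one that maximizes mean QALYs under a treat-above-threshold rule. The actual method (implemented in the R package ROCt) handles censoring and time dependence; the outcome model and data below are placeholders.

```python
import numpy as np

def expected_utility(marker, qaly_treated, qaly_untreated, cutoff):
    """Mean QALYs when patients with marker >= cutoff are treated and the
    rest are not (simplified: no censoring, no discounting)."""
    return np.where(marker >= cutoff, qaly_treated, qaly_untreated).mean()

def optimal_threshold(marker, qaly_treated, qaly_untreated):
    """Grid search over observed marker values for the utility-maximizing cut-off."""
    grid = np.unique(marker)
    utils = [expected_utility(marker, qaly_treated, qaly_untreated, c) for c in grid]
    best = int(np.argmax(utils))
    return grid[best], utils[best]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    marker = rng.uniform(0, 100, 500)                 # e.g. a prognostic score (placeholder)
    # toy outcome model: treatment only helps patients with high marker values
    qaly_treated = 5 + 0.03 * marker + rng.normal(0, 0.5, 500)
    qaly_untreated = 7 - 0.02 * marker + rng.normal(0, 0.5, 500)
    cutoff, util = optimal_threshold(marker, qaly_treated, qaly_untreated)
    print(f"utility-maximizing threshold ~ {cutoff:.1f}, expected QALYs = {util:.2f}")
```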

  12. Observation of random lasing in gold-silica nanoshell/water solution

    NASA Astrophysics Data System (ADS)

    Kang, Jin U.

    2006-11-01

    The author reports experimental observation of resonant surface plasmon enhanced random lasing in gold-silica nanoshells in de-ionized water. The gold-silica nanoshell/water solution with a concentration of 8×10⁹ particles/ml was pumped above the surface plasmon resonance frequency using a 514 nm argon-krypton laser. When the pumping power was above the lasing threshold, sharp random lasing peaks occurred near and below the plasmon peak from 720 to 860 nm with a lasing linewidth of less than 1 nm.

  13. The Emergence of Visual Awareness: Temporal Dynamics in Relation to Task and Mask Type

    PubMed Central

    Kiefer, Markus; Kammer, Thomas

    2017-01-01

    One aspect of consciousness phenomena, the temporal emergence of visual awareness, has been the subject of a controversial debate. How can visual awareness, that is, the experiential quality of visual stimuli, be characterized best? Is there a sharp discontinuous or dichotomous transition between unaware and fully aware states, or does awareness emerge gradually, encompassing intermediate states? Previous studies yielded conflicting results and supported both dichotomous and gradual views. It is well conceivable that these conflicting results are more than noise, but reflect the dynamic nature of the temporal emergence of visual awareness. Using a psychophysical approach, the present research tested whether the emergence of visual awareness is context-dependent with a temporal two-alternative forced choice task. During backward masking of word targets, it was assessed whether the relative temporal sequence of stimulus thresholds is modulated by the task (stimulus presence, letter case, lexical decision, and semantic category) and by mask type. Four masks with different similarity to the target features were created. Psychophysical functions were then fitted to the accuracy data in the different task conditions as a function of the stimulus-mask SOA in order to determine the inflection point (conscious threshold of each feature) and slope of the psychophysical function (transition from unaware to aware within each feature). Depending on feature-mask similarity, thresholds in the different tasks were either highly dispersed, suggesting a graded transition from unawareness to awareness, or less differentiated, indicating that clusters of features probed by the tasks contribute to the percept almost simultaneously. The latter observation, although not compatible with the notion of a sharp all-or-none transition between unaware and aware states, suggests a less gradual or more discontinuous emergence of awareness. Analyses of the slopes of the fitted psychophysical functions also indicated that the emergence of awareness of single features is variable and might be influenced by the continuity of the feature dimensions. The present work thus suggests that the emergence of awareness is neither purely gradual nor dichotomous, but highly dynamic depending on the task and mask type. PMID:28316583
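
    A hedged sketch of the fitting step described above: a logistic psychometric function of stimulus-mask SOA is fitted to two-alternative forced-choice accuracy to recover the inflection point (threshold) and slope. The functional form, lapse/guess rates, and data are assumptions, not the study's exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(soa, threshold, slope, guess=0.5, lapse=0.02):
    """Logistic psychometric function: 2-AFC accuracy as a function of
    stimulus-mask SOA, with inflection point `threshold` and `slope`."""
    return guess + (1 - guess - lapse) / (1 + np.exp(-slope * (soa - threshold)))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    soa = np.arange(10, 110, 10, dtype=float)                    # ms, placeholder SOAs
    acc = rng.binomial(40, psychometric(soa, 45.0, 0.15)) / 40   # 40 simulated trials per SOA
    (thr, slope), _ = curve_fit(lambda s, t, k: psychometric(s, t, k),
                                soa, acc, p0=[50.0, 0.1])
    print(f"fitted threshold (inflection) = {thr:.1f} ms, slope = {slope:.3f}")
```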

  14. Identifying Thresholds for Ecosystem-Based Management

    PubMed Central

    Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.

    2010-01-01

    Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647
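
    A minimal sketch of detecting a nonlinearity in an attribute-pressure relationship by scanning candidate breakpoints of a two-segment linear fit and taking the best-fitting one as the utility threshold. This only illustrates the principle; the ecosystem model, indicators, and data below are synthetic.

```python
import numpy as np

def piecewise_sse(x, y, bp):
    """Sum of squared errors of two independent linear fits split at breakpoint bp."""
    sse = 0.0
    for mask in (x <= bp, x > bp):
        if mask.sum() < 3:
            return np.inf
        coef = np.polyfit(x[mask], y[mask], 1)
        sse += np.sum((y[mask] - np.polyval(coef, x[mask])) ** 2)
    return sse

def utility_threshold(pressure, attribute):
    """Pressure level at which the attribute-pressure relationship bends most sharply."""
    candidates = np.quantile(pressure, np.linspace(0.1, 0.9, 33))
    sses = [piecewise_sse(pressure, attribute, bp) for bp in candidates]
    return candidates[int(np.argmin(sses))]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    fishing = rng.uniform(0, 1, 300)                      # synthetic pressure
    # toy indicator: flat response until ~0.4, then a steep decline
    indicator = 10 - 12 * np.clip(fishing - 0.4, 0, None) + rng.normal(0, 0.4, 300)
    print(f"estimated utility threshold ~ {utility_threshold(fishing, indicator):.2f}")
```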

  15. Threshold analysis of reimbursing physicians for the application of fluoride varnish in young children.

    PubMed

    Hendrix, Kristin S; Downs, Stephen M; Brophy, Ginger; Carney Doebbeling, Caroline; Swigonski, Nancy L

    2013-01-01

    Most state Medicaid programs reimburse physicians for providing fluoride varnish, yet the only published studies of cost-effectiveness do not show cost-savings. Our objective is to apply state-specific claims data to an existing published model to quickly and inexpensively estimate the cost-savings of a policy consideration to better inform decisions - specifically, to assess whether Indiana Medicaid children's restorative service rates met the threshold to generate cost-savings. Threshold analysis was based on the 2006 model by Quiñonez et al. Simple calculations were used to "align" the Indiana Medicaid data with the published model. Quarterly likelihoods that a child would receive treatment for caries were annualized. The probability of a tooth developing a cavitated lesion was multiplied by the probability of using restorative services. Finally, this rate of restorative services given cavitation was multiplied by 1.5 to generate the threshold to attain cost-savings. Restorative services utilization rates, extrapolated from available Indiana Medicaid claims, were compared with these thresholds. For children 1-2 years old, restorative services utilization was 2.6 percent, which was below the 5.8 percent threshold for cost-savings. However, for children 3-5 years of age, restorative services utilization was 23.3 percent, exceeding the 14.5 percent threshold that suggests cost-savings. Combining a published model with state-specific data, we were able to quickly and inexpensively demonstrate that restorative service utilization rates for children 36 months and older in Indiana are high enough that fluoride varnish regularly applied by physicians to children starting at 9 months of age could save Medicaid funds over a 3-year horizon. © 2013 American Association of Public Health Dentistry.
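
    The comparison described above reduces to simple arithmetic; the sketch below reproduces it with the utilization rates and thresholds quoted in the abstract, plus the quarterly-to-annual conversion step. The quarterly probability used in the example is a placeholder, not a value from the model.

```python
def annualize(quarterly_prob):
    """Annual probability of caries treatment from a quarterly probability."""
    return 1 - (1 - quarterly_prob) ** 4

# hypothetical quarterly probability, only to show the conversion step
print(f"quarterly 2% -> annual {annualize(0.02):.1%}")

# utilization rates and cost-savings thresholds quoted in the abstract
for group, utilization, threshold in [("1-2 years", 0.026, 0.058),
                                      ("3-5 years", 0.233, 0.145)]:
    verdict = "exceeds" if utilization > threshold else "is below"
    print(f"{group}: utilization {utilization:.1%} {verdict} "
          f"the {threshold:.1%} cost-savings threshold")
```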

  16. Density thresholds for Mopeia virus invasion and persistence in its host Mastomys natalensis.

    PubMed

    Goyens, J; Reijniers, J; Borremans, B; Leirs, H

    2013-01-21

    Well-established theoretical models predict host density thresholds for invasion and persistence of parasites with a density-dependent transmission. Studying such thresholds in reality, however, is not obvious because it requires long-term data for several fluctuating populations of different size. We developed a spatially explicit and individual-based SEIR model of Mopeia virus in multimammate mice Mastomys natalensis. This is an interesting model system for studying abundance thresholds because the host is the most common African rodent, populations fluctuate considerably and the virus is closely related to Lassa virus but non-pathogenic to humans so can be studied safely in the field. The simulations show that, while host density clearly is important, sharp thresholds are only to be expected for persistence (and not for invasion), since at short time-spans (as during invasion), stochasticity is determining. Besides host density, also the spatial extent of the host population is important. We observe the repeated local occurrence of herd immunity, leading to a decrease in transmission of the virus, while even a limited amount of dispersal can have a strong influence in spreading and re-igniting the transmission. The model is most sensitive to the duration of the infectious stage, the size of the home range and the transmission coefficient, so these are important factors to determine experimentally in the future. Copyright © 2012 Elsevier Ltd. All rights reserved.
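
    A toy, non-spatial stochastic SEIR with density-dependent transmission and simple host turnover, illustrating how invasion and persistence can be scored across host densities. This is far simpler than the spatially explicit, individual-based model described above, and every parameter value is arbitrary.

```python
import numpy as np

def seir_run(density, beta=0.2, sigma=1/7, gamma=1/30, mu=1/365,
             days=365, area=1.0, seed=0):
    """Discrete-time stochastic SEIR with density-dependent transmission and
    simple host turnover (immune hosts replaced by susceptibles at rate mu).
    Returns (invaded, persisted): whether more than 10 hosts were ever infected,
    and whether infection was still present on the last day."""
    rng = np.random.default_rng(seed)
    n = max(int(density * area), 2)
    s, e, i, r = n - 1, 0, 1, 0
    ever_infected = 1
    for _ in range(days):
        p_inf = 1 - np.exp(-beta * i / (area * 1000))   # density-dependent FOI (arbitrary scaling)
        new_e = rng.binomial(s, p_inf)
        new_i = rng.binomial(e, 1 - np.exp(-sigma))
        new_r = rng.binomial(i, 1 - np.exp(-gamma))
        reborn = rng.binomial(r, 1 - np.exp(-mu))       # turnover of immune hosts
        s, e = s - new_e + reborn, e + new_e - new_i
        i, r = i + new_i - new_r, r + new_r - reborn
        ever_infected += new_e
        if e + i == 0:
            return ever_infected > 10, False
    return ever_infected > 10, True

if __name__ == "__main__":
    for density in (50, 200, 800, 3200):                # hosts per unit area (arbitrary)
        runs = [seir_run(density, seed=k) for k in range(100)]
        inv = np.mean([a for a, _ in runs])
        per = np.mean([b for _, b in runs])
        print(f"density {density:5d}: invasion {inv:.0%}, persistence {per:.0%}")
```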

  17. A high accuracy femto-/picosecond laser damage test facility dedicated to the study of optical thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mangote, B.; Gallais, L.; Zerrad, M.

    2012-01-15

    A laser damage test facility delivering pulses from 100 fs to 3 ps and designed to operate at 1030 nm is presented. The different details of its implementation and performances are given. The originality of this system relies on the online damage detection system based on Nomarski microscopy and the use of a non-conventional energy detection method based on the utilization of a cooled CCD that offers the possibility to obtain the laser induced damage threshold (LIDT) with high accuracy. Applications of this instrument to study thin films under laser irradiation are presented. In particular, the deterministic behavior of the sub-picosecond damage is investigated in the case of fused silica and oxide films. It is demonstrated that the transition from 0 to 1 damage probability is very sharp and the LIDT is perfectly deterministic at few hundreds of femtoseconds. The damage process in dielectric materials being the result of electronic processes, specific information such as the material bandgap is needed for the interpretation of results and applications of scaling laws. A review of the different approaches for the estimation of the absorption gap of optical dielectric coatings is conducted and the results given by the different methods are compared and discussed. The LIDT and gap of several oxide materials are then measured with the presented instrument: Al2O3, Nb2O5, HfO2, SiO2, Ta2O5, and ZrO2. The obtained relation between the LIDT and gap at 1030 nm confirms the linear evolution of the threshold with the bandgap that exists at 800 nm, and our work expands the number of tested materials.

  18. Pulsed operation of (Al,Ga,In)N blue laser diodes

    NASA Astrophysics Data System (ADS)

    Abare, Amber C.; Mack, Michael P.; Hansen, Mark W.; Sink, R. K.; Kozodoy, Peter; Keller, Sarah L.; Hu, Evelyn L.; Speck, James S.; Bowers, John E.; Mishra, Umesh K.; Coldren, Larry A.; DenBaars, Steven P.

    1998-04-01

    Room temperature (RT) pulsed operation of blue (420 nm) nitride-based multi-quantum well (MQW) laser diodes grown on a-plane and c-plane sapphire substrates has been demonstrated. A combination of atmospheric and low pressure metal organic chemical vapor deposition (MOCVD) using a modified two-flow horizontal reactor was employed. The emission is strongly TE polarized and has a sharp transition in the far-field pattern above threshold. Threshold current densities as low as 12.6 kA/cm² were observed for 10×1200 μm lasers with uncoated reactive ion etched (RIE) facets on c-plane sapphire. Cleaved facet lasers were also demonstrated with similar performance on a-plane sapphire. Differential efficiencies as high as 7% and output powers up to 77 mW were observed. Laser diodes tested under pulsed conditions operated up to 6 hours at room temperature. Performance was limited by resistive heating during the electrical pulses. Lasing was achieved up to 95 degrees Celsius and up to a 150 ns pulse length (RT). Threshold current increased with temperature with a characteristic temperature, T0, of 125 K.

  19. The fragmentation threshold and implications for explosive eruptions

    NASA Astrophysics Data System (ADS)

    Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.

    2003-04-01

    The fragmentation threshold is the minimum pressure differential required to cause a porous volcanic rock to form pyroclasts. This is a critical parameter when considering the shift from effusive to explosive eruptions. We fragmented a variety of natural volcanic rock samples at room temperature (20°C) and high temperature (850°C) using a shock tube modified after Alidibirov and Dingwell (1996). This apparatus creates a pressure differential which drives fragmentation. Pressurized gas in the vesicles of the rock suddenly expands, blowing the sample apart. For this reason, the porosity is the primary control on the fragmentation threshold. On a graph of porosity against fragmentation threshold, our results from a variety of natural samples at both low and high temperatures all plot on the same curve and show the threshold increasing steeply at low porosities. A sharp decrease in the fragmentation threshold occurs as porosity increases from 0-15%, while a more gradual decrease is seen from 15-85%. The high temperature experiments form a curve with less variability than the low temperature experiments. For this reason, we have chosen to model the high temperature thresholds. The curve can be roughly predicted by the tensile strength of glass (140 MPa) divided by the porosity. Fractured phenocrysts in the majority of our samples reduce the overall strength of the sample. For this reason, the threshold values can be more accurately predicted by % matrix × tensile strength / porosity. At very high porosities the fragmentation threshold varies significantly due to the effect of bubble shape and size distributions on the permeability (Mueller et al., 2003). For example, high thresholds are seen for samples with very high permeabilities, where gas flow reduces the local pressure differential. These results allow us to predict the fragmentation threshold for any volcanic rock for which the porosity and crystal contents are known. During explosive eruptions, the fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold or (2) by unloading and exposing lithostatically pressurised magma to lower pressures. Using this data, we can in principle estimate the height of dome collapse or amount of overpressure necessary to produce an explosive eruption.
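
    The predictive relations quoted above (tensile strength divided by porosity, refined by the matrix fraction) as a small hedged calculation; porosity is taken in percent here, which is the reading that yields thresholds of a few to a few tens of MPa, and the numbers are illustrative only.

```python
def fragmentation_threshold(porosity_pct, matrix_fraction=1.0, tensile_strength_mpa=140.0):
    """Predicted fragmentation threshold (MPa): matrix fraction x tensile strength
    / porosity, with porosity expressed in percent (the reading that gives
    thresholds of a few to a few tens of MPa); 140 MPa is the value quoted above."""
    return matrix_fraction * tensile_strength_mpa / porosity_pct

for phi in (5, 15, 50, 85):                                      # porosity in %
    t_matrix = fragmentation_threshold(phi)
    t_cryst = fragmentation_threshold(phi, matrix_fraction=0.8)  # 20% fractured phenocrysts
    print(f"porosity {phi:2d}%: ~{t_matrix:5.1f} MPa (pure matrix), "
          f"~{t_cryst:5.1f} MPa (80% matrix)")
```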

  20. Identifying Threshold Temperatures Associated with Bristlecone Pine Growth Signals in the Great Basin, USA

    NASA Astrophysics Data System (ADS)

    Weiss, S. B.; Bunn, A. G.; Tran, T. J.; Bruening, J. M.; Salzer, M. W.; Hughes, M. K.

    2016-12-01

    The interpretation of ring-width patterns in high elevation Great Basin bristlecone pine is hampered by the presence of sharp ecophysiological gradients that can lead to mixed growth signals depending on the topographic setting of individual trees. We have identified a temperature threshold near the upper forest border above which trees are limited more strongly by temperature, and below which trees tend to be moisture limited. We combined temperature loggers and GIS modeling at a scale of tens of meters to examine trees with different limiting factors. We found that the dual-signal patterns in radial growth can be partially explained by the topoclimate setting of individual trees: trees in locations with growing season mean temperatures below about 7.4°C to 8°C were more strongly associated with temperature variability than with moisture availability. Using this threshold we show that it is possible to build both temperature and drought reconstructions over the common era from bristlecone pine near the alpine treeline. While our findings might allow for a better physiological understanding of bristlecone pine growth, they also raise questions about the interpretation of temperature reconstructions given the threshold nature of the growth response and the dynamic nature of the treeline ecotone over past millennia.

  1. Gas composition sensing using carbon nanotube arrays

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor); Meyyappan, Meyya (Inventor)

    2008-01-01

    A method and system for estimating one, two or more unknown components in a gas. A first array of spaced apart carbon nanotubes ("CNTs") is connected to a variable pulse voltage source at a first end of at least one of the CNTs. A second end of the at least one CNT is provided with a relatively sharp tip and is located at a distance within a selected range of a constant voltage plate. A sequence of voltage pulses {V(t_n)} at times t = t_n (n = 1, ..., N1; N1 ≥ 3) is applied to the at least one CNT, and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of a curve I(t_n) for current or a curve e(t_n) for electric charge transported from the at least one CNT to the constant voltage plate. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas.
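
    A hedged sketch of the threshold-estimation idea: scan the current-versus-pulse-voltage curve for the first amplitude at which the collected current jumps well above the pre-breakdown baseline, then match it against a table of known component thresholds. The detection rule, the synthetic I-V sweep, and the reference values are assumptions, not the patented procedure.

```python
import numpy as np

def breakdown_threshold(pulse_voltages, currents, jump_factor=10.0):
    """Lowest pulse voltage at which the collected current jumps well above the
    pre-breakdown baseline (median of the lowest-voltage readings)."""
    order = np.argsort(pulse_voltages)
    v, i = np.asarray(pulse_voltages)[order], np.asarray(currents)[order]
    baseline = np.median(i[: max(3, v.size // 5)])
    above = i > jump_factor * max(baseline, 1e-12)
    return v[above][0] if above.any() else None

def identify(threshold_v, known, tol=10.0):
    """Match an estimated threshold against a table of known component thresholds."""
    return [gas for gas, v in known.items() if abs(v - threshold_v) <= tol]

if __name__ == "__main__":
    # synthetic I-V sweep: negligible current until breakdown sets in near 410-420 V
    v = np.arange(200, 600, 10.0)
    i = 1e-9 + 5e-6 / (1 + np.exp(-(v - 420) / 2))
    thr = breakdown_threshold(v, i)
    known = {"NO2": 414.0, "NH3": 520.0}        # hypothetical reference thresholds
    print(f"estimated threshold ~ {thr:.0f} V, candidate component(s): {identify(thr, known)}")
```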

  2. A New Optical Technique for Rapid Determination of Creep and Fatigue Thresholds at High Temperature.

    DTIC Science & Technology

    1984-04-01

    measurements, made far away from the crack tip, produced much smoother and more sensible results. Measurements by Macha et al. (16) agree very well with ... dependent upon the measurement position. It becomes independent of position far enough away from the tip; this is consistent with the results of Macha, et ... D. E. Macha, W. N. Sharpe, Jr., and A. P. ..., "A Laser Interferometry Method for Experimental Stress Intensity Factor Calibration", AST...

  3. Dynamics of a Sonoluminescing Bubble in Sulfuric Acid

    NASA Astrophysics Data System (ADS)

    Hopkins, Stephen D.; Putterman, Seth J.; Kappus, Brian A.; Suslick, Kenneth S.; Camara, Carlos G.

    2005-12-01

    The spectral shape and observed sonoluminescence emission from Xe bubbles in concentrated sulfuric acid is consistent only with blackbody emission from a spherical surface that fills the bubble. The interior of the observed 7000 K blackbody must be at least 4 times hotter than the emitting surface in order that the equilibrium light-matter interaction length be smaller than the radius. Bright emission is correlated with long emission times (˜10ns), sharp thresholds, unstable translational motion, and implosions that are sufficiently weak that contributions from the van der Waals hard core are small.

  4. Dynamics of a sonoluminescing bubble in sulfuric acid.

    PubMed

    Hopkins, Stephen D; Putterman, Seth J; Kappus, Brian A; Suslick, Kenneth S; Camara, Carlos G

    2005-12-16

    The spectral shape and observed sonoluminescence emission from Xe bubbles in concentrated sulfuric acid is consistent only with blackbody emission from a spherical surface that fills the bubble. The interior of the observed 7000 K blackbody must be at least 4 times hotter than the emitting surface in order that the equilibrium light-matter interaction length be smaller than the radius. Bright emission is correlated with long emission times (approximately 10 ns), sharp thresholds, unstable translational motion, and implosions that are sufficiently weak that contributions from the van der Waals hard core are small.

  5. An improved moving average technical trading rule

    NASA Astrophysics Data System (ADS)

    Papailias, Fotis; Thomakos, Dimitrios D.

    2015-06-01

    This paper proposes a modified version of the widely used price and moving average cross-over trading strategies. The suggested approach (presented in its 'long only' version) is a combination of cross-over 'buy' signals and a dynamic threshold value which acts as a dynamic trailing stop. The trading behaviour and performance from this modified strategy are different from the standard approach with results showing that, on average, the proposed modification increases the cumulative return and the Sharpe ratio of the investor while exhibiting smaller maximum drawdown and smaller drawdown duration than the standard strategy.
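
    A rough illustration of the idea (not the authors' exact rule): the sketch below combines a price/moving-average cross-over 'buy' signal with a dynamic threshold used as a trailing stop, so that once long, the position is closed when price falls below the highest price seen since entry minus a fixed fraction. The window length and trail fraction are arbitrary choices for illustration.

        def long_only_signals(prices, window=50, trail=0.05):
            """Return a list of 0/1 position flags, one per day (simplified cross-over logic)."""
            position, peak, flags = 0, None, []
            for i, p in enumerate(prices):
                if i >= window:
                    ma = sum(prices[i - window:i]) / window
                    if position == 0 and p > ma:           # cross-over 'buy' signal
                        position, peak = 1, p
                    elif position == 1:
                        peak = max(peak, p)
                        if p < peak * (1.0 - trail):        # dynamic trailing-stop exit
                            position, peak = 0, None
                flags.append(position)
            return flags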

  6. Electron attachment in F2 - Conclusive demonstration of nonresonant, s-wave coupling in the limit of zero electron energy

    NASA Technical Reports Server (NTRS)

    Chutjian, A.; Alajajian, S. H.

    1987-01-01

    Dissociative electron attachment to F2 has been observed in the energy range 0-140 meV, at a resolution of 6 meV (full width at half maximum). Results show conclusively a sharp, resolution-limited threshold behavior consistent with an s-wave cross section varying as 1/√ε. Two accurate theoretical calculations predict only p-wave behavior, varying as √ε. Several nonadiabatic coupling effects leading to s-wave behavior are outlined.

  7. The Role of Threshold Concepts in an Interdisciplinary Curriculum: A Case Study in Neuroscience

    ERIC Educational Resources Information Center

    Holley, Karri A.

    2018-01-01

    Threshold concepts have been widely utilized to understand learning in academic disciplines and student experiences in a disciplinary curriculum. This study considered how threshold concepts might operate within an interdisciplinary setting. Data were collected through interviews with 40 doctoral students enrolled in an interdisciplinary program…

  8. K-Shell Photoionization of Nickel Ions Using R-Matrix

    NASA Technical Reports Server (NTRS)

    Witthoeft, M. C.; Bautista, M. A.; Garcia, J.; Kallman, T. R.; Mendoza, C.; Palmeri, P.; Quinet, P.

    2011-01-01

    We present R-matrix calculations of photoabsorption and photoionization cross sections across the K edge of the Li-like to Ca-like ion stages of Ni. Level-resolved, Breit-Pauli calculations were performed for the Li-like to Na-like stages. Term-resolved calculations, which include the mass-velocity and Darwin relativistic corrections, were performed for the Mg-like to Ca-like ion stages. This data set is extended up to Fe-like Ni using the distorted-wave approximation as implemented in AUTOSTRUCTURE. The R-matrix calculations include the effects of radiative and Auger damping by means of an optical potential. The damping processes affect the absorption resonances converging to the K thresholds, causing them to display symmetric profiles of constant width that smear the otherwise sharp edge at the K-shell photoionization threshold. These data are important for the modeling of features found in photoionized plasmas.

  9. Nonequilibrium Steady State Generated by a Moving Defect: The Supersonic Threshold

    NASA Astrophysics Data System (ADS)

    Bastianello, Alvise; De Luca, Andrea

    2018-02-01

    We consider the dynamics of a system of free fermions on a 1D lattice in the presence of a defect moving at constant velocity. The defect has the form of a localized time-dependent variation of the chemical potential and induces at long times a nonequilibrium steady state (NESS), which spreads around the defect. We present a general formulation that allows recasting the time-dependent protocol in a scattering problem on a static potential. We obtain a complete characterization of the NESS. In particular, we show a strong dependence on the defect velocity and the existence of a sharp threshold when such velocity exceeds the speed of sound. Beyond this value, the NESS is not produced and, remarkably, the defect travels without significantly perturbing the system. We present an exact solution for a δ -like defect traveling with an arbitrary velocity and we develop a semiclassical approximation that provides accurate results for smooth defects.

  10. Collisionless microtearing modes in hot tokamaks: Effect of trapped electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swamy, Aditya K.; Ganesh, R., E-mail: ganesh@ipr.res.in; Brunner, S.

    2015-07-15

    Collisionless microtearing modes have recently been found linearly unstable in sharp temperature gradient regions of large aspect ratio tokamaks. The magnetic drift resonance of passing electrons has been found to be sufficient to destabilise these modes above a threshold plasma β. A global gyrokinetic study, including both passing electrons as well as trapped electrons, shows that the non-adiabatic contribution of the trapped electrons provides a resonant destabilization, especially at large toroidal mode numbers, for a given aspect ratio. The global 2D mode structures show important changes to the destabilising electrostatic potential. The β threshold for the onset of the instability is found to be generally downshifted by the inclusion of trapped electrons. A scan in the aspect ratio of the tokamak configuration, from medium to large but finite values, clearly indicates a significant destabilizing contribution from trapped electrons at small aspect ratio, with a diminishing role at larger aspect ratios.

  11. Optical isolation with nonlinear topological photonics

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Wang, You; Leykam, Daniel; Chong, Y. D.

    2017-09-01

    It is shown that the concept of topological phase transitions can be used to design nonlinear photonic structures exhibiting power thresholds and discontinuities in their transmittance. This provides a novel route to devising nonlinear optical isolators. We study three representative designs: (i) a waveguide array implementing a nonlinear 1D Su-Schrieffer-Heeger model, (ii) a waveguide array implementing a nonlinear 2D Haldane model, and (iii) a 2D lattice of coupled-ring waveguides. In the first two cases, we find a correspondence between the topological transition of the underlying linear lattice and the power threshold of the transmittance, and show that the transmission behavior is attributable to the emergence of a self-induced topological soliton. In the third case, we show that the topological transition produces a discontinuity in the transmittance curve, which can be exploited to achieve sharp jumps in the power-dependent isolation ratio.

  12. A NASA SHARP Mentoring Experience Utilizing GP-B

    NASA Technical Reports Server (NTRS)

    Estes, Howard

    2004-01-01

    The goal of this program is to increase the participation and success rates of students who are traditionally underrepresented in science, mathematics, technology, and geography. Students are selected based on aptitude and interests. It is a NASA-sponsored, 8-week program.

  13. A coordinated sequence of distinct flagellar waveforms enables a sharp flagellar turn mediated by squid sperm pH-taxis.

    PubMed

    Iida, Tomohiro; Iwata, Yoko; Mohri, Tatsuma; Baba, Shoji A; Hirohashi, Noritaka

    2017-10-11

    Animal spermatozoa navigate by sensing ambient chemicals to reach the site of fertilization. Generally, such chemicals derive from the female reproductive organs or cells. Exceptionally, squid spermatozoa mutually release and perceive carbon dioxide to form clusters after ejaculation. We previously identified the pH-taxis by which each spermatozoon can execute a sharp turn, but how flagellar dynamics enable this movement remains unknown. Here, we show that initiation of the turn motion requires a swim down a steep proton gradient (a theoretical estimation of ≥0.025 pH/s), crossing a threshold pH value of ~5.5. Time-resolved kinematic analysis revealed that the turn sequence results from the rhythmic exercise of two flagellar motions: a stereotypical flagellar 'bent-cane' shape followed by asymmetric wave propagation, which enables a sharp turn in the realm of low Reynolds numbers. This turning episode is terminated by an 'overshoot' trajectory that differs from either straight-line motility or turning. As with bidirectional pH-taxes in some bacteria, squid spermatozoa also showed repulsion from strong acid conditions with similar flagellar kinematics as in positive pH-taxis. These findings indicate that squid spermatozoa might have a unique reorientation mechanism, which could be dissimilar to that of classical egg-guided sperm chemotaxis in other marine invertebrates.

  14. Transition Prediction in Hypersonic Boundary Layers Using Receptivity and Freestream Spectra

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Chou, Amanda

    2016-01-01

    Boundary-layer transition in hypersonic flows over a straight cone can be predicted using measured freestream spectra, receptivity, and threshold values for the wall pressure fluctuations at the transition onset points. Simulations are performed for hypersonic boundary-layer flows over a 7-degree half-angle straight cone with varying bluntness at a freestream Mach number of 10. The steady and the unsteady flow fields are obtained by solving the two-dimensional Navier-Stokes equations in axisymmetric coordinates using a fifth-order accurate weighted essentially non-oscillatory (WENO) scheme for space discretization and a third-order total-variation-diminishing (TVD) Runge-Kutta scheme for time integration. The calculated N-factors at the transition onset location increase gradually with increasing unit Reynolds number for flow over a sharp cone and remain almost the same for flow over a blunt cone. The receptivity coefficients increase slightly with increasing unit Reynolds number; they are on the order of 4 for a sharp cone and on the order of 1 for a blunt cone. The location of transition onset predicted from the simulation, including the freestream spectrum, receptivity, and the linear and weakly nonlinear evolutions, is close to the measured onset location for the sharp cone. The simulations over-predict transition onset by about twenty percent for the blunt cone.

  15. Processing circuitry for single channel radiation detector

    NASA Technical Reports Server (NTRS)

    Holland, Samuel D. (Inventor); Delaune, Paul B. (Inventor); Turner, Kathryn M. (Inventor)

    2009-01-01

    Processing circuitry is provided for a high voltage operated radiation detector. An event detector utilizes a comparator configured to produce an event signal based on a leading edge threshold value. A preferred event detector does not produce another event signal until a trailing edge threshold value is satisfied. The event signal can be utilized for counting the number of particle hits and also for controlling data collection operation for a peak detect circuit and timer. The leading edge threshold value is programmable such that it can be reprogrammed by a remote computer. A digital high voltage control is preferably operable to monitor and adjust high voltage for the detector.
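
    The leading-edge / trailing-edge behaviour described above is essentially a comparator with hysteresis: an event fires when the signal rises past the (programmable) leading-edge threshold, and no further event can fire until the signal drops back below the trailing-edge threshold. A minimal software model of that logic; the threshold values are illustrative, not taken from the patent.

        class EventDetector:
            def __init__(self, leading=3.0, trailing=1.0):
                self.leading = leading      # programmable leading-edge threshold
                self.trailing = trailing    # must be re-crossed before re-arming
                self.armed = True
                self.count = 0              # number of particle hits counted

            def sample(self, value):
                """Process one sample; return True if an event signal is produced."""
                if self.armed and value >= self.leading:
                    self.armed = False
                    self.count += 1
                    return True             # would also gate peak-detect and timer capture
                if not self.armed and value <= self.trailing:
                    self.armed = True       # trailing-edge threshold satisfied, re-arm
                return False

        det = EventDetector()
        hits = [det.sample(v) for v in [0.2, 3.5, 4.0, 2.0, 0.5, 3.2]]
        print(det.count)   # 2 events: one at 3.5, one at 3.2 after re-arming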

  16. Indian pharmaceutical patent prosecution: The changing role of Section 3(d).

    PubMed

    Sampat, Bhaven N; Shadlen, Kenneth C

    2018-01-01

    India, like many developing countries, only recently began to grant pharmaceutical product patents. Indian patent law includes a provision, Section 3(d), which tries to limit the grant of "secondary" pharmaceutical patents, i.e. patents on new forms of existing molecules and drugs. Previous research suggests the provision was rarely used against secondary applications in the years immediately following its enactment, and where it was, it was redundant to other aspects of the patent law, raising concerns that 3(d) was being under-utilized by the Indian Patent Office. This paper uses a novel data source, the patent office's first examination reports, to examine changes in the use of the provision. We find a sharp increase over time in the use of Section 3(d), including on the main claims of patent applications, though it continues to be used in conjunction with other types of objections to patentability. More surprisingly, we see a sharp increase in the use of the provision against primary patent applications, contrary to its intent, raising concerns about potential over-utilization.

  17. N-terminal pro-B-type Natriuretic Peptides' Prognostic Utility Is Overestimated in Meta-analyses Using Study-specific Optimal Diagnostic Thresholds.

    PubMed

    Potgieter, Danielle; Simmers, Dale; Ryan, Lisa; Biccard, Bruce M; Lurati-Buse, Giovanna A; Cardinale, Daniela M; Chong, Carol P W; Cnotliwy, Miloslaw; Farzi, Sylvia I; Jankovic, Radmilo J; Lim, Wen Kwang; Mahla, Elisabeth; Manikandan, Ramaswamy; Oscarsson, Anna; Phy, Michael P; Rajagopalan, Sriram; Van Gaal, William J; Waliszek, Marek; Rodseth, Reitze N

    2015-08-01

    N-terminal fragment B-type natriuretic peptide (NT-proBNP) prognostic utility is commonly determined post hoc by identifying a single optimal discrimination threshold tailored to the individual study population. The authors aimed to determine how using these study-specific post hoc thresholds impacts meta-analysis results. The authors conducted a systematic review of studies reporting the ability of preoperative NT-proBNP measurements to predict the composite outcome of all-cause mortality and nonfatal myocardial infarction at 30 days after noncardiac surgery. Individual patient-level data NT-proBNP thresholds were determined using two different methodologies. First, a single combined NT-proBNP threshold was determined for the entire cohort of patients, and a meta-analysis conducted using this single threshold. Second, study-specific thresholds were determined for each individual study, with meta-analysis being conducted using these study-specific thresholds. The authors obtained individual patient data from 14 studies (n = 2,196). Using a single NT-proBNP cohort threshold, the odds ratio (OR) associated with an increased NT-proBNP measurement was 3.43 (95% CI, 2.08 to 5.64). Using individual study-specific thresholds, the OR associated with an increased NT-proBNP measurement was 6.45 (95% CI, 3.98 to 10.46). In smaller studies (<100 patients) a single cohort threshold was associated with an OR of 5.4 (95% CI, 2.27 to 12.84) as compared with an OR of 14.38 (95% CI, 6.08 to 34.01) for study-specific thresholds. Post hoc identification of study-specific prognostic biomarker thresholds artificially maximizes biomarker predictive power, resulting in an amplification or overestimation during meta-analysis of these results. This effect is accentuated in small studies.
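
    The amplification the authors describe can be reproduced in a toy simulation: draw a biomarker with only a modest true association with the outcome, then compare the odds ratio at a single pre-specified threshold with the odds ratio at the threshold chosen post hoc to maximize it within each small study. All numbers below are synthetic and purely illustrative.

        import random

        def odds_ratio(events, threshold):
            """2x2 odds ratio for outcome vs biomarker >= threshold (0.5 continuity correction)."""
            a = sum(1 for x, y in events if x >= threshold and y) + 0.5
            b = sum(1 for x, y in events if x >= threshold and not y) + 0.5
            c = sum(1 for x, y in events if x < threshold and y) + 0.5
            d = sum(1 for x, y in events if x < threshold and not y) + 0.5
            return (a * d) / (b * c)

        def simulate_study(n=80):
            """Biomarker ~ N(1,1) in cases, N(0,1) in controls; ~15% event rate."""
            return [(random.gauss(1 if y else 0, 1), y)
                    for y in (random.random() < 0.15 for _ in range(n))]

        random.seed(0)
        fixed, post_hoc = [], []
        for _ in range(200):
            study = simulate_study()
            fixed.append(odds_ratio(study, 0.5))                            # pre-specified cut-off
            candidates = sorted(x for x, _ in study)
            post_hoc.append(max(odds_ratio(study, t) for t in candidates))  # study-specific "optimal" cut-off
        print(sum(fixed) / len(fixed), sum(post_hoc) / len(post_hoc))       # post hoc ORs come out larger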

  18. Utilizing Objective Drought Thresholds to Improve Drought Monitoring with the SPI

    NASA Astrophysics Data System (ADS)

    Leasor, Z. T.; Quiring, S. M.

    2017-12-01

    Drought is a prominent climatic hazard in the south-central United States. Droughts are frequently monitored using the severity categories determined by the U.S. Drought Monitor (USDM). This study uses the Standardized Precipitation Index (SPI) to conduct a drought frequency analysis across Texas, Oklahoma, and Kansas using PRISM precipitation data from 1900-2015. The SPI is shown to be spatiotemporally variant across the south-central United States. In particular, utilizing the default USDM severity thresholds may underestimate drought severity in arid regions. Objective drought thresholds were implemented by fitting a CDF to each location's SPI distribution. This approach results in a more homogeneous distribution of drought frequencies across each severity category. Results also indicate that it may be beneficial to develop objective drought thresholds for each season and SPI timescale. This research serves as a proof-of-concept and demonstrates how drought thresholds should be objectively developed so that they are appropriate for each climatic region.
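
    "Objective thresholds" in the sense described above amount to mapping the USDM percentile categories onto each location's own SPI distribution instead of applying fixed SPI cut-offs everywhere. A minimal sketch of that mapping; the percentile boundaries used here are the commonly cited USDM ones, and the SPI series is a random placeholder rather than PRISM-derived data.

        import random

        # Approximate USDM category upper percentile bounds (percent).
        USDM_PERCENTILES = {"D0": 30, "D1": 20, "D2": 10, "D3": 5, "D4": 2}

        def objective_thresholds(spi_series):
            """Return the SPI value at each USDM percentile for this location."""
            spi = sorted(spi_series)
            n = len(spi)
            return {cat: spi[max(0, int(round(p / 100.0 * n)) - 1)]
                    for cat, p in USDM_PERCENTILES.items()}

        # A location whose SPI record is drier-skewed than a standard normal would get
        # less negative category cut-offs than the default SPI thresholds.
        random.seed(1)
        spi_record = [random.gauss(0, 1) for _ in range(1380)]   # e.g. 115 years of monthly SPI
        print(objective_thresholds(spi_record))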

  19. Solving Cordelia's Dilemma: Threshold Concepts within a Punctuated Model of Learning

    ERIC Educational Resources Information Center

    Kinchin, Ian M.

    2010-01-01

    The consideration of threshold concepts is offered in the context of biological education as a theoretical framework that may have utility in the teaching and learning of biology at all levels. Threshold concepts may provide a mechanism to explain the observed punctuated nature of conceptual change. This perspective raises the profile of periods…

  20. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    PubMed

    Xu, Yifang; Collins, Leslie M

    2004-04-01

    The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to bi-phasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an NofM model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, which is considered to be the brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce calculational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.

  1. Detectability Thresholds and Optimal Algorithms for Community Structure in Dynamic Networks

    NASA Astrophysics Data System (ADS)

    Ghasemian, Amir; Zhang, Pan; Clauset, Aaron; Moore, Cristopher; Peel, Leto

    2016-07-01

    The detection of communities within a dynamic network is a common means for obtaining a coarse-grained view of a complex system and for investigating its underlying processes. While a number of methods have been proposed in the machine learning and physics literature, we lack a theoretical analysis of their strengths and weaknesses, or of the ultimate limits on when communities can be detected. Here, we study the fundamental limits of detecting community structure in dynamic networks. Specifically, we analyze the limits of detectability for a dynamic stochastic block model where nodes change their community memberships over time, but where edges are generated independently at each time step. Using the cavity method, we derive a precise detectability threshold as a function of the rate of change and the strength of the communities. Below this sharp threshold, we claim that no efficient algorithm can identify the communities better than chance. We then give two algorithms that are optimal in the sense that they succeed all the way down to this threshold. The first uses belief propagation, which gives asymptotically optimal accuracy, and the second is a fast spectral clustering algorithm, based on linearizing the belief propagation equations. These results extend our understanding of the limits of community detection in an important direction, and introduce new mathematical tools for similar extensions to networks with other types of auxiliary information.

  2. Contrast-enhanced ultrasound evaluation of pancreatic cancer xenografts in nude mice after irradiation with sub-threshold focused ultrasound for tumor ablation

    PubMed Central

    Wang, Rui; Guo, Qian; Chen, Yi Ni; Hu, Bing; Jiang, Li Xin

    2017-01-01

    We evaluated the efficacy of contrast-enhanced ultrasound for assessing tumors after irradiation with sub-threshold focused ultrasound (FUS) ablation in pancreatic cancer xenografts in nude mice. Thirty tumor-bearing nude mice were divided into three groups: Group A received sham irradiation, Group B received a moderate-acoustic energy dose (sub-threshold), and Group C received a high-acoustic energy dose. In Group B, B-mode ultrasound (US), color Doppler US, and dynamic contrast-enhanced ultrasound (DCE-US) studies were conducted before and after irradiation. After irradiation, tumor growth was inhibited in Group B, and the tumors shrank in Group C. In Group A, the tumor sizes were unchanged. In Group B, contrast-enhanced ultrasound (CEUS) images showed a rapid rush of contrast agent into and out of tumors before irradiation. After irradiation, CEUS revealed contrast agent perfusion only at the tumor periphery and irregular, un-perfused volumes of contrast agent within the tumors. DCE-US perfusion parameters, including peak intensity (PI) and area under the curve (AUC), had decreased 24 hours after irradiation. PI and AUC were increased 48 hours and 2 weeks after irradiation. Time to peak (TP) and sharpness were increased 24 hours after irradiation. TP decreased at 48 hours and 2 weeks after irradiation. CEUS is thus an effective method for early evaluation after irradiation with sub-threshold FUS. PMID:28402267

  3. Chronic Widespread Back Pain is Distinct From Chronic Local Back Pain: Evidence From Quantitative Sensory Testing, Pain Drawings, and Psychometrics.

    PubMed

    Gerhardt, Andreas; Eich, Wolfgang; Janke, Susanne; Leisner, Sabine; Treede, Rolf-Detlef; Tesarz, Jonas

    2016-07-01

    Whether chronic localized pain (CLP) and chronic widespread pain (CWP) have different mechanisms or to what extent they overlap in their pathophysiology is controversial. This study compared quantitative sensory testing profiles of nonspecific chronic back pain patients with CLP (n=48) and CWP (n=29) with those of fibromyalgia syndrome (FMS) patients (n=90) and pain-free controls (n = 40). The quantitative sensory testing protocol of the "German Research Network on Neuropathic Pain" was used to measure evoked pain on the painful area in the lower back and the pain-free hand (thermal and mechanical detection and pain thresholds, vibration threshold, pain sensitivity to sharp and blunt mechanical stimuli). Ongoing pain and psychometrics were captured with pain drawings and questionnaires. CLP patients did not differ from pain-free controls, except for a lower pressure pain threshold (PPT) on the back. CWP and FMS patients showed a lower heat pain threshold and higher wind-up ratio on the back and a lower heat pain threshold and cold pain threshold on the hand. FMS patients showed lower PPT on back and hand, higher comorbidity of anxiety and depression, and more functional impairment than all other groups. Even after long duration, CLP presents with a local hypersensitivity for PPT, suggesting a somatotopically specific sensitization of nociceptive processing. However, CWP patients show widespread ongoing pain and hyperalgesia for different stimuli that is generalized in space, suggesting the involvement of descending control systems, as also suggested for FMS patients. Because mechanisms in nonspecific chronic back pain with CLP and CWP differ, these patients should be distinguished in future research and allocated to different treatments.

  4. Rational use of Xpert testing in patients with presumptive TB: clinicians should be encouraged to use the test-treat threshold.

    PubMed

    Decroo, Tom; Henríquez-Trujillo, Aquiles R; De Weggheleire, Anja; Lynen, Lutgarde

    2017-10-11

    A recently published Ugandan study on tuberculosis (TB) diagnosis in HIV-positive patients with presumptive smear-negative TB showed that, out of 90 patients who started TB treatment, 20% (18/90) had a positive Xpert MTB/RIF (Xpert) test, 24% (22/90) had a negative Xpert test, and 56% (50/90) were started without Xpert testing. Although Xpert testing was available, clinicians did not use it systematically. Here we aim to show more objectively the process of clinical decision-making. First, we estimated that the pre-test probability of TB, or the prevalence of TB in smear-negative HIV-infected patients with signs of presumptive TB in Uganda, was 17%. Second, we argue that the treatment threshold, the probability of disease at which the utility of treating and not treating is the same, and above which treatment should be started, should be determined. In Uganda, the treatment threshold has not yet been formally established. In Rwanda, the calculated treatment threshold was 12%. Hence, one could argue that the threshold was reached without even considering additional tests. Still, Xpert testing can be useful when the probability of disease is above the treatment threshold, but only when a negative Xpert result can lower the probability of disease enough to cross the treatment threshold. This occurs when the pre-test probability is lower than the test-treat threshold, the probability of disease at which the utility of testing and the utility of treating without testing are the same. We estimated that the test-treat threshold was 28%. Finally, to show the effect of the presence or absence of arguments on the probability of TB, we use confirming and excluding power, and a log10 odds scale to combine arguments. If the pre-test probability is above the test-treat threshold, empirical treatment is justified, because even a negative Xpert will not lower the post-test probability below the treatment threshold. However, Xpert testing for the diagnosis of TB should be performed in patients for whom the probability of TB is lower than the test-treat threshold. Especially in resource-constrained settings, clinicians should be encouraged to take clinical decisions and use scarce resources rationally.
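
    The decision logic sketched above can be written down directly: combine the pre-test probability with a test's likelihood ratio on the odds scale, then compare the resulting probabilities with the treatment and test-treat thresholds. The 12% and 28% thresholds below are the values quoted in the abstract; the negative likelihood ratio for Xpert is an illustrative assumption, not a figure from the article.

        def post_test_probability(pre_test_prob, likelihood_ratio):
            """Update a probability with a likelihood ratio via the odds form of Bayes' rule."""
            pre_odds = pre_test_prob / (1.0 - pre_test_prob)
            post_odds = pre_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        def decide(pre_test_prob, treat_threshold=0.12, test_treat_threshold=0.28,
                   lr_negative=0.25):
            """Return 'treat empirically', 'test', or 'neither' for one patient."""
            if pre_test_prob >= test_treat_threshold:
                return "treat empirically"       # a negative test cannot cross the
                                                 # treatment threshold, so testing adds nothing
            if pre_test_prob >= treat_threshold:
                # testing is useful only if a negative result drops us below the threshold
                if post_test_probability(pre_test_prob, lr_negative) < treat_threshold:
                    return "test"
                return "treat empirically"
            return "neither"

        print(decide(0.17))   # pre-test probability from the abstract -> 'test'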

  5. The Next Generation of Ground Operations Command and Control; Scripting in C Sharp and Visual Basic

    NASA Technical Reports Server (NTRS)

    Ritter, George; Pedoto, Ramon

    2010-01-01

    This slide presentation reviews the use of scripting languages in Ground Operations Command and Control. It describes the use of scripting languages in a historical context and the advantages and disadvantages of scripts. It describes the Enhanced and Redesigned Scripting (ERS) language, which was designed to combine the graphical and IDE richness of a programming language with the utility of scripting languages. ERS uses the Microsoft Visual Studio programming environment and offers custom controls that enable an ERS developer to extend the Visual Basic and C sharp language interface with the Payload Operations Integration Center (POIC) telemetry and command system.

  6. Threshold concepts: implications for the management of natural resources

    USGS Publications Warehouse

    Guntenspergen, Glenn R.; Gross, John

    2014-01-01

    Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty revolving around threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers need new tools and approaches that will help them assess the existence and detection of conditions that demand management actions. Additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system), alongside ecological thresholds. All of these concepts provide a framework for considering the use of threshold concepts in natural resource decision making.

  7. An Interpreter’s Interpretation: Sign Language Interpreters’ View of Musculoskeletal Disorders

    DTIC Science & Technology

    2003-01-01

    utilization (11%) and massage therapy (11.1%) in 1997 as the current study. However, in sharp contrast to the current study, Eisenberg and colleagues...should be examined include the use of complementary alternative medicine such as chiropractic care, acupuncture, massage therapy, and relaxation

  8. Grid Standards and Codes | Grid Modernization | NREL

    Science.gov Websites

    Simulations that take advantage of advanced concepts such as hardware-in-the-loop testing ... Projects: Accelerating Systems Integration Standards ... The goal of this project is to develop streamlined and accurate methods for New York utilities to determine ...

  9. Optical properties of solid-core photonic crystal fibers filled with nonlinear absorbers.

    PubMed

    Butler, James J; Bowcock, Alec S; Sueoka, Stacey R; Montgomery, Steven R; Flom, Steven R; Friebele, E Joseph; Wright, Barbara M; Peele, John R; Pong, Richard G S; Shirk, James S; Hu, Jonathan; Menyuk, Curtis R; Taunay, T F

    2013-09-09

    A theoretical and experimental investigation of the transmission of solid-core photonic crystal fibers (PCFs) filled with nonlinear absorbers shows a sharp change in the threshold for optical limiting and in leakage loss as the refractive index of the material in the holes approaches that of the glass matrix. Theoretical calculations of the mode profiles and leakage loss of the PCF are in agreement with experimental results and indicate that the change in limiting response is due to the interaction of the evanescent field of the guided mode with the nonlinear absorbers in the holes.

  10. Electrical detection of microwave assisted magnetization reversal by spin pumping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Siddharth; Subhra Mukherjee, Sankha; Elyasi, Mehrdad

    2014-03-24

    Microwave assisted magnetization reversal has been investigated in a bilayer system of Pt/ferromagnet by detecting a change in the polarity of the spin pumping signal. The reversal process is studied in two material systems, Pt/CoFeB and Pt/NiFe, for different aspect ratios. The onset of the switching behavior is indicated by a sharp transition in the spin pumping voltage. At a threshold value of the external field, the switching process changes from partial to full reversal with increasing microwave power. The proposed method provides a simple way to detect microwave assisted magnetization reversal.

  11. Real-Time Symbol Extraction From Grey-Level Images

    NASA Astrophysics Data System (ADS)

    Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.

    1988-04-01

    A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real-time is presented. This 3 Giga operation per second processor uses large kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel wide and noise-free contours without thresholding even from grey-level images with quite varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real-time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.

  12. Anaerobic Threshold: Its Concept and Role in Endurance Sport

    PubMed Central

    Ghosh, Asok Kumar

    2004-01-01

    Aerobic to anaerobic transition intensity is one of the most significant physiological variables in endurance sports. Scientists have explained the term in various ways, such as Lactate Threshold, Ventilatory Anaerobic Threshold, Onset of Blood Lactate Accumulation, Onset of Plasma Lactate Accumulation, Heart Rate Deflection Point and Maximum Lactate Steady State. All of these have a great role both in monitoring training schedules and in determining sports performance. Individuals endowed with the possibility to obtain a high oxygen uptake need to complement it with a rigorous training program in order to achieve maximal performance. If they engage in endurance events, they must also develop the ability to sustain a high fractional utilization of their maximal oxygen uptake (%VO2 max) and become physiologically efficient in performing their activity. Anaerobic threshold is more highly correlated with distance running performance than maximum aerobic capacity or VO2 max, because sustaining a high fractional utilization of the VO2 max for a long time delays the metabolic acidosis. Training at or slightly above the anaerobic threshold intensity improves both the aerobic capacity and the anaerobic threshold level. Anaerobic threshold can also be determined from the speed-heart rate relationship in the field, without sophisticated laboratory techniques. However, controversies also exist among scientists regarding its role in high-performance sports. PMID:22977357

  13. Anaerobic threshold: its concept and role in endurance sport.

    PubMed

    Ghosh, Asok Kumar

    2004-01-01

    Aerobic to anaerobic transition intensity is one of the most significant physiological variables in endurance sports. Scientists have explained the term in various ways, such as Lactate Threshold, Ventilatory Anaerobic Threshold, Onset of Blood Lactate Accumulation, Onset of Plasma Lactate Accumulation, Heart Rate Deflection Point and Maximum Lactate Steady State. But all of these have a great role both in monitoring training schedules and in determining sports performance. Individuals endowed with the possibility to obtain a high oxygen uptake need to complement it with a rigorous training program in order to achieve maximal performance. If they engage in endurance events, they must also develop the ability to sustain a high fractional utilization of their maximal oxygen uptake (%VO(2) max) and become physiologically efficient in performing their activity. Anaerobic threshold is more highly correlated with distance running performance than maximum aerobic capacity or VO(2) max, because sustaining a high fractional utilization of the VO(2) max for a long time delays the metabolic acidosis. Training at or slightly above the anaerobic threshold intensity improves both the aerobic capacity and the anaerobic threshold level. Anaerobic threshold can also be determined from the speed-heart rate relationship in the field, without sophisticated laboratory techniques. However, controversies also exist among scientists regarding its role in high-performance sports.

  14. SHARP's systems engineering challenge: rectifying integrated product team requirements with performance issues in an evolutionary spiral development acquisition

    NASA Astrophysics Data System (ADS)

    Kuehl, C. Stephen

    2003-08-01

    Completing its final development and early deployment on the Navy's multi-role aircraft, the F/A-18 E/F Super Hornet, the SHAred Reconnaissance Pod (SHARP) provides the war fighter with the latest digital tactical reconnaissance (TAC Recce) Electro-Optical/Infrared (EO/IR) sensor system. The SHARP program is an evolutionary acquisition that used a spiral development process across a prototype development phase tightly coupled into overlapping Engineering and Manufacturing Development (EMD) and Low Rate Initial Production (LRIP) phases. Under a tight budget environment with a highly compressed schedule, SHARP challenged traditional acquisition strategies and systems engineering (SE) processes. Adopting tailored state-of-the-art systems engineering process models allowed the SHARP program to overcome the technical knowledge transition challenges imposed by a compressed program schedule. The program's original goal was the deployment of digital TAC Recce mission capabilities to the fleet customer by summer of 2003. Hardware and software integration technical challenges resulted from requirements definition and analysis activities performed across a government-industry led Integrated Product Team (IPT) involving Navy engineering and test sites, Boeing, and RTSC-EPS (with its subcontracted hardware and government furnished equipment vendors). Requirements development from a bottom-up approach was adopted using an electronic requirements capture environment to clarify and establish the SHARP EMD product baseline specifications as relevant technical data became available. Applying Earned Value Management (EVM) against an Integrated Master Schedule (IMS) resulted in efficiently managing SE task assignments and product deliveries in a dynamically evolving customer requirements environment. Application of Six Sigma improvement methodologies uncovered root causes of errors in wiring interconnectivity drawings, pod manufacturing processes, and avionics requirements specifications. Utilizing the draft NAVAIR SE guideline handbook and the ANSI/EIA-632 standard, Processes for Engineering a System, a tailored systems engineering process approach was adopted for the accelerated SHARP EMD program. Tailoring SE processes in this accelerated product delivery environment provided unique opportunities to be technically creative in the establishment of a product performance baseline. This paper provides an historical overview of the systems engineering activities spanning the prototype phase through the EMD SHARP program phase, the performance requirement capture activities and refinement process challenges, and what SE process improvements can be applied to future SHARP-like programs adopting a compressed, evolutionary spiral development acquisition paradigm.

  15. Excess vitamin intake: An unrecognized risk factor for obesity.

    PubMed

    Zhou, Shi-Sheng; Zhou, Yiming

    2014-02-15

    Over the past few decades, food fortification and infant formula supplementation with high levels of vitamins have led to a sharp increase in vitamin intake among infants, children and adults. This is followed by a sharp increase in the prevalence of obesity and related diseases, with significant disparities among countries and different groups within a country. It has long been known that B vitamins at doses below their toxicity threshold strongly promote body fat gain. Studies have demonstrated that formulas, which have very high levels of vitamins, significantly promote infant weight gain, especially fat mass gain, a known risk factor for children developing obesity. Furthermore, ecological studies have shown that increased B vitamin consumption is strongly correlated with the prevalence of obesity and diabetes. We therefore hypothesize that excess vitamins may play a causal role in the increased prevalence of obesity. This review will discuss: (1) the causes of increased vitamin intake; (2) the non-monotonic effect of excess vitamin intake on weight and fat gain; and (3) the role of vitamin fortification in obesity disparities among countries and different groups within a country.

  16. Excess vitamin intake: An unrecognized risk factor for obesity

    PubMed Central

    Zhou, Shi-Sheng; Zhou, Yiming

    2014-01-01

    Over the past few decades, food fortification and infant formula supplementation with high levels of vitamins have led to a sharp increase in vitamin intake among infants, children and adults. This is followed by a sharp increase in the prevalence of obesity and related diseases, with significant disparities among countries and different groups within a country. It has long been known that B vitamins at doses below their toxicity threshold strongly promote body fat gain. Studies have demonstrated that formulas, which have very high levels of vitamins, significantly promote infant weight gain, especially fat mass gain, a known risk factor for children developing obesity. Furthermore, ecological studies have shown that increased B vitamin consumption is strongly correlated with the prevalence of obesity and diabetes. We therefore hypothesize that excess vitamins may play a causal role in the increased prevalence of obesity. This review will discuss: (1) the causes of increased vitamin intake; (2) the non-monotonic effect of excess vitamin intake on weight and fat gain; and (3) the role of vitamin fortification in obesity disparities among countries and different groups within a country. PMID:24567797

  17. Highly tunable magnetism in silicene doped with Cr and Fe atoms under isotropic and uniaxial tensile strain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Rui; Ni, Jun, E-mail: junni@mail.tsinghua.edu.cn; Collaborative Innovative Center of Quantum Matter, Beijing 100084

    2015-12-28

    We have investigated the magnetic properties of silicene doped with Cr and Fe atoms under isotropic and uniaxial tensile strain by first-principles calculations. We find that Cr and Fe doped silicenes show strain-tunable magnetism. (1) The magnetism of Cr and Fe doped silicenes exhibits sharp transitions from low spin states to high spin states under a small isotropic tensile strain. Specially for Fe doped silicene, a nearly nonmagnetic state changes to a high magnetic state under a small isotropic tensile strain. (2) The magnetic moments of Fe doped silicene also show a sharp jump to ∼2 μB at a small threshold of the uniaxial strain, and the magnetic moments of Cr doped silicene increase gradually to ∼4 μB with the increase of uniaxial strain. (3) The electronic and magnetic properties of Cr and Fe doped silicenes are sensitive to the magnitude and direction of the external strain. The highly tunable magnetism may be applied in spintronic devices.

  18. Excitation and inhibition compete to control spiking during hippocampal ripples: intracellular study in behaving mice.

    PubMed

    English, Daniel F; Peyrache, Adrien; Stark, Eran; Roux, Lisa; Vallentin, Daniela; Long, Michael A; Buzsáki, György

    2014-12-03

    High-frequency ripple oscillations, observed most prominently in the hippocampal CA1 pyramidal layer, are associated with memory consolidation. The cellular and network mechanisms underlying the generation of the rhythm and the recruitment of spikes from pyramidal neurons are still poorly understood. Using intracellular, sharp electrode recordings in freely moving, drug-free mice, we observed consistent large depolarizations in CA1 pyramidal cells during sharp wave ripples, which are associated with ripple frequency fluctuation of the membrane potential ("intracellular ripple"). Despite consistent depolarization, often exceeding pre-ripple spike threshold values, current pulse-induced spikes were strongly suppressed, indicating that spiking was under the control of concurrent shunting inhibition. Ripple events were followed by a prominent afterhyperpolarization and spike suppression. Action potentials during and outside ripples were orthodromic, arguing against ectopic spike generation, which has been postulated by computational models of ripple generation. These findings indicate that dendritic excitation of pyramidal neurons during ripples is countered by shunting of the membrane and postripple silence is mediated by hyperpolarizing inhibition.

  19. Largely ignored: the impact of the threshold value for a QALY on the importance of a transferability factor.

    PubMed

    Vemer, Pepijn; Rutten-van Mölken, Maureen P M H

    2011-10-01

    Recently, several checklists systematically assessed factors that affect the transferability of cost-effectiveness (CE) studies between jurisdictions. The role of the threshold value for a QALY has been given little consideration in these checklists, even though the importance of a factor as a cause of between country differences in CE depends on this threshold. In this paper, we study the impact of the willingness-to-pay (WTP) per QALY on the importance of transferability factors in the case of smoking cessation support (SCS). We investigated, for several values of the WTP, how differences between six countries affect the incremental net monetary benefit (INMB) of SCS. The investigated factors were demography, smoking prevalence, mortality, epidemiology and costs of smoking-related diseases, resource use and unit costs of SCS, utility weights and discount rates. We found that when the WTP decreased, factors that mainly affect health outcomes became less important and factors that mainly effect costs became more important. With a WTP below 1,000, the factors most responsible for between country differences in INMB were resource use and unit costs of SCS and the costs of smoking-related diseases. Utility values had little impact. At a threshold above 10,000, between country differences were primarily due to different discount rates, utility weights and epidemiology of smoking-related diseases. Costs of smoking-related diseases had little impact. At all thresholds, demography had little impact. We concluded that, when judging the transferability of a CE study, we should consider the between country differences in WTP threshold values.
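
    The quantity being compared across countries here is the incremental net monetary benefit, INMB = WTP × ΔQALY − ΔCost, so which between-country differences matter most depends directly on the WTP threshold. A small worked sketch of that dependence; all input values are invented for illustration, not taken from the study.

        def inmb(wtp, delta_qaly, delta_cost):
            """Incremental net monetary benefit of an intervention at a given WTP per QALY."""
            return wtp * delta_qaly - delta_cost

        # Two hypothetical countries differing in intervention cost and in QALY gain.
        country_a = {"delta_qaly": 0.040, "delta_cost": 150.0}
        country_b = {"delta_qaly": 0.055, "delta_cost": 400.0}

        for wtp in (500.0, 20000.0):
            diff = inmb(wtp, **country_a) - inmb(wtp, **country_b)
            print(wtp, round(diff, 1))
        # At WTP = 500 the cheaper country A has the higher INMB (diff = 242.5, cost-driven);
        # at WTP = 20000 the larger QALY gain in country B dominates (diff = -50.0).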

  20. Using a Photon Beam for Thermal Nociceptive Threshold Experiments

    NASA Astrophysics Data System (ADS)

    Walker, Azida; Anderson, Jeffery; Sherwood, Spencer

    In humans, the risk of diabetes and diabetic complications increases with age and the duration of the prediabetic state. In an effort to understand the progression of this disease, scientists have evaluated the deterioration of the nervous system. One of the current methods used to evaluate this deterioration is thermal threshold experiments. An incremental Hot/Cold Plate Analgesia Meter (IITC Life Science, CA) is used to linearly increase the plate temperature at a rate of 10 °C/min with a cutoff temperature of 55 °C. The hind-limb heat pain threshold (HPT) is defined as the plate temperature at which the animal abruptly withdraws either of its hind feet from the plate surface in a sharp move, typically followed by licking of the lifted paw. One disadvantage of this hot plate method is in determining the true temperature at which the paw was withdrawn. While the temperature of the plate is known, the position of the paw on the surface may vary, occasionally being cupped, resulting in a temperature difference between the plate and the paw. During experiments the rats may urinate onto the plate, changing the temperature of the surface and again reducing the accuracy of the withdrawal threshold. We propose here a new method for nociceptive somatic experiments involving the heat pain threshold. This design employs a photon beam to detect the thermal response of an animal model. The details of this design are presented. Funded by the Undergraduate Research Council at the University of Central Arkansas.

  1. Post-abortion and induced abortion services in two public hospitals in Colombia.

    PubMed

    Darney, Blair G; Simancas-Mendoza, Willis; Edelman, Alison B; Guerra-Palacio, Camilo; Tolosa, Jorge E; Rodriguez, Maria I

    2014-07-01

    Until 2006, legal induced abortion was completely banned in Colombia. Few facilities are equipped or willing to offer abortion services; often adolescents experience even greater barriers of access in this context. We examined post abortion care (PAC) and legal induced abortion in two large public hospitals. We tested the association of hospital site, procedure type (manual vacuum aspiration vs. sharp curettage), and age (adolescents vs. women 20 years and over) with service type (PAC or legal induced abortion). Retrospective cohort study using 2010 billing data routinely collected for reimbursement (N=1353 procedures). We utilized descriptive statistics, multivariable logistic regression and predicted probabilities. Adolescents made up 22% of the overall sample (300/1353). Manual vacuum aspiration was used in one-third of cases (vs. sharp curettage). Adolescents had lower odds of documented PAC (vs. induced abortion) compared with women over age 20 (OR=0.42; 95% CI=0.21-0.86). The absolute difference of service type by age, however, is very small, controlling for hospital site and procedure type (.97 probability of PAC for adolescents compared with .99 for women 20 and over). Regardless of age, PAC via sharp curettage is the current standard in these two public hospitals. Both adolescents and women over 20 are in need of access to legal abortion services utilizing modern technologies in the public sector in Colombia. Documentation of abortion care is an essential first step to determining barriers to access and opportunities for quality improvement and better health outcomes for women. Following partial decriminalization of abortion in Colombia, in public hospitals nearly all abortion services are post-abortion care, not induced abortion. Sharp curettage is the dominant treatment for both adolescents and women over 20. Women seek care in the public sector for abortion, and must have access to safe, quality services.

  2. The variance needed to accurately describe jump height from vertical ground reaction force data.

    PubMed

    Richter, Chris; McGuinness, Kevin; O'Connor, Noel E; Moran, Kieran

    2014-12-01

    In functional principal component analysis (fPCA) a threshold is chosen to define the number of retained principal components, which corresponds to the amount of preserved information. A variety of thresholds have been used in previous studies and the chosen threshold is often not evaluated. The aim of this study is to identify the optimal threshold that preserves the information needed to describe a jump height accurately utilizing vertical ground reaction force (vGRF) curves. To find an optimal threshold, a neural network was used to predict jump height from vGRF curve measures generated using different fPCA thresholds. The findings indicate that a threshold from 99% to 99.9% (6-11 principal components) is optimal for describing jump height, as these thresholds generated significantly lower jump height prediction errors than other thresholds.
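
    The retention rule in question is simply "keep the smallest number of principal components whose cumulative explained variance reaches the chosen threshold." A small sketch of that rule, using ordinary PCA via the SVD as a stand-in for the functional PCA used in the study (requires NumPy); the data matrix here is a random placeholder, not vGRF curves.

        import numpy as np

        def components_for_threshold(X, threshold=0.99):
            """Number of principal components needed to preserve `threshold` of the variance."""
            Xc = X - X.mean(axis=0)                       # centre each time sample
            # singular values -> variance explained by each component
            s = np.linalg.svd(Xc, full_matrices=False, compute_uv=False)
            explained = (s ** 2) / np.sum(s ** 2)
            cumulative = np.cumsum(explained)
            return int(np.searchsorted(cumulative, threshold) + 1)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))                    # e.g. 60 jumps x 500 vGRF samples
        for thr in (0.95, 0.99, 0.999):
            print(thr, components_for_threshold(X, thr))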

  3. Kanerva's sparse distributed memory with multiple hamming thresholds

    NASA Technical Reports Server (NTRS)

    Pohja, Seppo; Kaski, Kimmo

    1992-01-01

    If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, which SDM implementations can benefit from.
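
    In Kanerva's SDM a hard location participates in a read or write whenever the Hamming distance between the input address and the location's address is at or below a selection threshold; the variation described above simply gives each hard location its own threshold instead of one global radius. A compact sketch of that scheme, with sizes and per-location thresholds chosen arbitrarily for illustration (in practice the thresholds would be tuned to the address distribution).

        import random

        N_BITS, N_LOCATIONS = 64, 200
        random.seed(0)
        addresses = [[random.getrandbits(1) for _ in range(N_BITS)] for _ in range(N_LOCATIONS)]
        counters = [[0] * N_BITS for _ in range(N_LOCATIONS)]
        # Per-location Hamming thresholds instead of a single global access radius.
        thresholds = [random.randint(24, 30) for _ in range(N_LOCATIONS)]

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def selected(addr):
            return [i for i in range(N_LOCATIONS) if hamming(addr, addresses[i]) <= thresholds[i]]

        def write(addr, data):
            for i in selected(addr):
                for j, bit in enumerate(data):
                    counters[i][j] += 1 if bit else -1

        def read(addr):
            sums = [0] * N_BITS
            for i in selected(addr):
                for j in range(N_BITS):
                    sums[j] += counters[i][j]
            return [1 if s > 0 else 0 for s in sums]

        pattern = [random.getrandbits(1) for _ in range(N_BITS)]
        write(pattern, pattern)                       # autoassociative store
        print(hamming(read(pattern), pattern))        # ideally 0 (or close to it)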

  4. Investigations of ionospheric sporadic Es layer using oblique sounding method

    NASA Astrophysics Data System (ADS)

    Minullin, R.

    The characteristics of the Es layer have been studied using oblique sounding on 28 radio paths at frequencies of 34 -- 73 MHz, over transmission paths 400 -- 1600 km long, during 30 years. Reflections from the Es layer lasting a few hours were observed. The amplitude of the reflected signal reached 1000 μV with a registration threshold of 0.1 μV. For decameter waves, the borderlines between reflected and scattered signals appeared as sharp bends in the 60 -- 100 s range of the distributions of the duration of reflected signals. The duration of continuous Es reflections decreased as the oblique sounding frequency increased. For meter waves, the distributions of the duration of reflected signals showed sharp bends in the 200 -- 300 s range, marking the borderline between signals reflected from meteoric trails and from the Es layer. The filling coefficient for oblique sounding, as well as the Es layer occurrence probability for vertical sounding, was shown to undergo daily, seasonal and periodic variations. The daily variations of the filling coefficient of Es signals showed clear maxima at 10 -- 12 and 18 -- 20 hours and a minimum at 4 -- 6 hours on all paths in summer, and a maximum at 12 -- 14 hours in winter. The values of the filling coefficient for the Es layer declined with increasing oblique sounding frequency. The minimal values of the filling coefficient were observed in winter and early spring, while the maximal values were observed from May to August. Taking the averaged filling coefficient to be unity in summer, it reaches 0.25 at the equinoxes and does not exceed 0.12 in winter, as evidenced by the oblique sounding data. The dependence of the filling coefficient on the voltage detection threshold was approximated by a power law. The filling coefficients for the summer period showed an exponential relation with the equivalent sounding frequencies. The experimental evidence was generalized in an analytical model; using this model, the averaged Es layer filling coefficients for a particular season of the year can be forecast for a given sounding frequency, path length, and voltage threshold.

  5. Theoretical and experimental approaches to possible thresholds of response in carcinogenicity

    EPA Science Inventory

    The determination and utilization of the actual low dose-response relationship for chemical carcinogens has long interested toxicologists, experimental pathologists, modelers and risk assessors. To date, no unequivocal examples of carcinogenic thresholds in humans are known. Ho...

  6. Performance comparison of two resolution modeling PET reconstruction algorithms in terms of physical figures of merit used in quantitative imaging.

    PubMed

    Matheoud, R; Ferrando, O; Valzano, S; Lizio, D; Sacchetti, G; Ciarmiello, A; Foppiano, F; Brambilla, M

    2015-07-01

    Resolution modeling (RM) of PET systems has been introduced in iterative reconstruction algorithms for oncologic PET. RM recovers the loss of resolution and reduces the associated partial volume effect. While these methods improve observer performance, particularly in the detection of small and faint lesions, their impact on quantification accuracy still requires thorough investigation. The aim of this study was to characterize the performance of RM algorithms under controlled conditions simulating a typical (18)F-FDG oncologic study, using an anthropomorphic phantom and selected physical figures of merit used for image quantification. Measurements were performed on Biograph HiREZ (B_HiREZ) and Discovery 710 (D_710) PET/CT scanners, and reconstructions were performed using the standard iterative reconstructions and the RM algorithms associated with each scanner: TrueX and SharpIR, respectively. RM produced a significant improvement in contrast recovery for small targets (≤17 mm diameter) only for the D_710 scanner. The maximum standardized uptake value (SUVmax) increased when RM was applied on both scanners. The SUVmax of small targets was on average lower with the B_HiREZ than with the D_710. SharpIR improved the accuracy of SUVmax determination, whilst TrueX showed an overestimation of SUVmax for sphere dimensions greater than 22 mm. The goodness of fit of adaptive threshold algorithms worsened significantly when RM algorithms were employed, for both scanners. Differences in general quantitative performance were observed between the PET scanners analyzed. Segmentation of PET images using adaptive threshold algorithms should not be undertaken in conjunction with RM reconstructions. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. The effect of climate variability on urinary stone attacks: increased incidence associated with temperature over 18 °C: a population-based study.

    PubMed

    Park, Hyoung Keun; Bae, Sang Rak; Kim, Satbyul E; Choi, Woo Suk; Paick, Sung Hyun; Ho, Kim; Kim, Hyeong Gon; Lho, Yong Soo

    2015-02-01

    The aim of this study was to evaluate the effect of seasonal variation and climate parameters on urinary tract stone attacks and to investigate whether stone attacks increase sharply at a specific point. Nationwide counts of urinary tract stone attacks per month between January 2006 and December 2010 were obtained from the Korean Health Insurance Review and Assessment Service. The effects of climatic factors on monthly urinary stone attacks were assessed using the auto-regressive integrated moving average (ARIMA) regression method. A total of 1,702,913 stone attack cases were identified. Mean monthly and monthly average daily urinary stone attack cases were 28,382 ± 2,760 and 933 ± 85, respectively. Stone attacks showed a seasonal pattern: a sharp rise in June, a peak plateau from July to September, and a sharp decline thereafter. The correlation analysis showed that ambient temperature (r = 0.557, p < 0.001) and relative humidity (r = 0.513, p < 0.001) were significantly associated with urinary stone attack cases. However, after adjustment for trend and seasonality, ambient temperature was the only climate factor associated with stone attack cases in the ARIMA regression (p = 0.04). The threshold temperature was estimated as 18.4 °C. The risk of urinary stone attack increases significantly, by 1.71% (95% confidence interval 1.02-2.41%), for each 1 °C increase in ambient temperature above the threshold. In conclusion, monthly urinary stone attack counts varied with season. Among the climate variables, only temperature showed a consistent association with stone attacks, and when the temperature exceeds 18.4 °C, stone attacks increase sharply.
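
    A minimal sketch, not the study's code, of the kind of ARIMA-with-exogenous-regressor analysis described: monthly attack counts are regressed on the number of degrees by which temperature exceeds a candidate threshold, with ARIMA errors absorbing trend and seasonality. The synthetic data, the model order, and the way the 18.4 °C threshold enters are illustrative assumptions.

```python
# ARIMA regression of monthly counts on above-threshold temperature (synthetic data).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
months = 60
temp = 12 + 10 * np.sin(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 1, months)
excess = np.clip(temp - 18.4, 0, None)            # degrees above the candidate threshold
attacks = 28000 + 500 * excess + rng.normal(0, 400, months)

model = ARIMA(attacks, exog=excess, order=(1, 0, 0))
fit = model.fit()
print(fit.params)   # the exog coefficient approximates extra attacks per degree above threshold
```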

  8. Liquid-phase epitaxy grown PbSnTe distributed feedback laser diodes with broad continuous single-mode tuning range

    NASA Technical Reports Server (NTRS)

    Hsieh, H.-H.; Fonstad, C. G.

    1980-01-01

    Distributed feedback (DFB) pulsed laser operation has been demonstrated in stripe geometry Pb(1-x)Sn(x)Te double-heterostructures grown by liquid-phase epitaxy. The grating structure of 0.79 micron periodicity operates in first order near 12.8 microns and was fabricated prior to the liquid-phase epitaxial growth using holographic exposure techniques. These DFB lasers had moderate thresholds, 3.6 kA/sq cm, and the output power versus current curves exhibited a sharp turn-on free of kinks. Clean, single-mode emission spectra, continuously tunable over a range in excess of 20 per cm, centered about 780 per cm (12.8 microns), and at an average rate of 1.2 per cm-K from 9 to 26 K, were observed. While weaker modes could at times be seen in the spectrum, substantially single-mode operation was obtained over the entire operating range and to over 10 times threshold.

  9. Projecting the need for hip replacement over the next three decades: influence of changing demography and threshold for surgery

    PubMed Central

    Birrell, F.; Johnell, O.; Silman, A.

    1999-01-01

    OBJECTIVES—To estimate the requirement for total hip replacement in the United Kingdom over the next three decades.
METHODS—Projection of age and sex specific hip replacements in the UK over 10 year intervals, taking account of demographic change and the extrapolation of arthroplasty rates from Sweden, a country with recently introduced guidelines.
RESULTS—Assuming no change in the age and sex specific arthroplasty rates, the estimated number of hip replacements will increase by 40% over the next 30 year period because of demographic change alone. The proportionate change will be substantially higher in men (51%) than women (33%), with a doubling of the number of male hip replacements in those aged over 85. Changes in the threshold for surgery may increase this further—up to double the current number.
CONCLUSION—A sharp rise in hip replacements will be needed to satisfy needs in the UK population over the next 30 years.

 PMID:10460191

  10. Sense, Nonsense, and Violence: Levinas and the Internal Logic of School Shootings

    ERIC Educational Resources Information Center

    Keehn, Gabriel; Boyles, Deron

    2015-01-01

    Utilizing a broadly Levinasian framework, specifically the interplay among his ideas of possession, violence, and negation, Gabriel Keehn and Deron Boyles illustrate how the relatively recent sharp turn toward the hypercorporatized school and the concomitant transition of the student from simple (potential) customer to a type of hybrid…

  11. Focused Group Interviews as an Innovative Quanti-Qualitative Methodology (QQM): Integrating Quantitative Elements into a Qualitative Methodology

    ERIC Educational Resources Information Center

    Grim, Brian J.; Harmon, Alison H.; Gromis, Judy C.

    2006-01-01

    There is a sharp divide between quantitative and qualitative methodologies in the social sciences. We investigate an innovative way to bridge this gap that incorporates quantitative techniques into a qualitative method, the "quanti-qualitative method" (QQM). Specifically, our research utilized small survey questionnaires and experiment-like…

  12. Indian pharmaceutical patent prosecution: The changing role of Section 3(d)

    PubMed Central

    2018-01-01

    India, like many developing countries, only recently began to grant pharmaceutical product patents. Indian patent law includes a provision, Section 3(d), which tries to limit grant of “secondary” pharmaceutical patents, i.e. patents on new forms of existing molecules and drugs. Previous research suggests the provision was rarely used against secondary applications in the years immediately following its enactment, and where it was used, it was redundant to other aspects of the patent law, raising concerns that 3(d) was being under-utilized by the Indian Patent Office. This paper uses a novel data source, the patent office’s first examination reports, to examine changes in the use of the provision. We find a sharp increase over time in the use of Section 3(d), including on the main claims of patent applications, though it continues to be used in conjunction with other types of objections to patentability. More surprisingly, we see a sharp increase in the use of the provision against primary patent applications, contrary to its intent, raising concerns about potential over-utilization. PMID:29608604

  13. Regression Discontinuity for Causal Effect Estimation in Epidemiology.

    PubMed

    Oldenburg, Catherine E; Moscoe, Ellen; Bärnighausen, Till

    Regression discontinuity analyses can generate estimates of the causal effects of an exposure when a continuously measured variable is used to assign the exposure to individuals based on a threshold rule. Individuals just above the threshold are expected to be similar in their distribution of measured and unmeasured baseline covariates to individuals just below the threshold, resulting in exchangeability. At the threshold, exchangeability is guaranteed if there is random variation in the continuous assignment variable, e.g., due to random measurement error. Under exchangeability, causal effects can be identified at the threshold. The regression discontinuity intention-to-treat (RD-ITT) effect on an outcome can be estimated as the difference in the outcome between individuals just above (or below) versus just below (or above) the threshold. This effect is analogous to the ITT effect in a randomized controlled trial. Instrumental variable methods can be used to estimate the effect of the exposure itself, utilizing the threshold as the instrument. We review the recent epidemiologic literature reporting regression discontinuity studies and find that while regression discontinuity designs are beginning to be utilized in a variety of applications in epidemiology, they are still relatively rare, and analytic and reporting practices vary. Regression discontinuity has the potential to greatly contribute to the evidence base in epidemiology, in particular on the real-life and long-term effects and side-effects of medical treatments that are provided based on threshold rules, such as treatments for low birth weight, hypertension or diabetes.
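
    A hedged sketch of a local-linear RD-ITT estimate of the kind described: fit separate linear trends within a bandwidth on either side of the threshold and take the difference of the two fitted values at the cut-off. The data, cut-off, and bandwidth are invented for illustration.

```python
# Local-linear regression discontinuity (RD-ITT) estimate on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
assign = rng.uniform(100, 200, 5000)                 # continuous assignment variable
treated = assign >= 140.0                            # threshold rule
outcome = 50 - 0.1 * assign + 4.0 * treated + rng.normal(0, 2, assign.size)

def rd_itt(x, y, cutoff, bandwidth):
    est = {}
    for side, mask in (("below", (x >= cutoff - bandwidth) & (x < cutoff)),
                       ("above", (x >= cutoff) & (x <= cutoff + bandwidth))):
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        est[side] = slope * cutoff + intercept       # predicted outcome at the cut-off
    return est["above"] - est["below"]

print(rd_itt(assign, outcome, cutoff=140.0, bandwidth=10.0))  # close to 4, the ITT effect
```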

  14. Thresholds and the rising pion inclusive cross section

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, S.T.

    In the context of the hypothesis of the Pomeron-f identity, it is shown that the rising pion inclusive cross section can be explained over a wide range of energies as a series of threshold effects. Low-mass thresholds are seen to be important. In order to understand the contributions of high-mass thresholds (flavoring), a simple two-channel multiperipheral model is examined. The analysis sheds light on the relation between thresholds and Mueller-Regge couplings. In particular, it is seen that inclusive-, and total-cross-section threshold mechanisms may differ. A quantitative model based on this idea and utilizing previous total-cross-section fits is seen to agree well with experiment.

  15. Methods for automatic trigger threshold adjustment

    DOEpatents

    Welch, Benjamin J; Partridge, Michael E

    2014-03-18

    Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time-based or counter-based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
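
    A sketch of the idea in the abstract, not the patented implementation: the quiescent level is re-measured periodically, the trigger threshold is shifted to sit a fixed offset above it, and a qualification-width counter requires the criterion to hold for several consecutive samples before a recording event is declared.

```python
# Drift-compensated trigger threshold with a qualification-width requirement.
import numpy as np

class DriftCompensatedTrigger:
    def __init__(self, offset, qualification_width, recal_every):
        self.offset = offset                    # threshold offset above quiescent level
        self.qual_width = qualification_width   # samples the criterion must hold
        self.recal_every = recal_every          # samples between re-calibrations
        self.quiescent = 0.0
        self.count = 0
        self.hits = 0

    def update(self, sample, recent_quiet_samples):
        self.count += 1
        if self.count % self.recal_every == 0:  # time/counter-based re-computation
            self.quiescent = float(np.mean(recent_quiet_samples))
        threshold = self.quiescent + self.offset
        self.hits = self.hits + 1 if sample > threshold else 0
        return self.hits >= self.qual_width     # True -> start recording event

trig = DriftCompensatedTrigger(offset=0.5, qualification_width=3, recal_every=100)
quiet = np.zeros(10)
print([trig.update(s, quiet) for s in [0.1, 0.7, 0.8, 0.9, 0.2]])
```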

  16. Note: Simple hysteresis parameter inspector for camera module with liquid lens

    NASA Astrophysics Data System (ADS)

    Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung

    2010-05-01

    A method to inspect the hysteresis parameter is presented in this article. The hysteresis of the whole camera module with a liquid lens can be measured, rather than that of a single lens only. Because variation in focal length influences image quality, we propose utilizing the sharpness of images captured from the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. It can therefore be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection process takes only 20 s to complete. Thus, compared with other instruments, this inspection method is more suitable for integration into mass production lines for online quality assurance.
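
    A hedged sketch of the inspection idea: sweep the liquid-lens drive voltage up and then down, score image sharpness at each step, and report the largest up/down sharpness gap as a hysteresis figure. The sharpness metric (variance of a finite-difference Laplacian), the voltage range, and the hypothetical grab_frame capture function are assumptions, not the authors' setup.

```python
# Sharpness-based hysteresis estimate for an up/down drive-voltage sweep.
import numpy as np

def sharpness(img):
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(lap.var())

def hysteresis_gap(voltages, grab_frame):
    up = [sharpness(grab_frame(v)) for v in voltages]
    down = [sharpness(grab_frame(v)) for v in reversed(voltages)]
    # Re-align the downward sweep with ascending voltages and take the worst gap.
    return max(abs(u - d) for u, d in zip(up, reversed(down)))

# Dummy stand-in for the camera: image statistics depend on the drive voltage.
rng = np.random.default_rng(3)
def grab_frame(v):
    return rng.normal(0, 1 + abs(v - 40) / 40, size=(64, 64))

print(hysteresis_gap(np.linspace(36, 44, 9), grab_frame))
```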

  17. Three-Dimensional Self-Assembled Photonic Crystal Waveguide

    NASA Astrophysics Data System (ADS)

    Baek, Kang-Hyun

    Photonic crystals (PCs), two- or three-dimensionally periodic, artificial, and dielectric structures, have a specific forbidden band for electromagnetic waves, referred to as photonic bandgap (PBG). The PBG is analogous to the electronic bandgap in natural crystal structures with periodic atomic arrangement. A well-defined and embedded planar, line, or point defect within the PCs causes a break in its structural periodicity, and introduces a state in the PBG for light localization. It offers various applications in integrated optics and photonics including optical filters, sharp bending light guides and very low threshold lasers. Using nanofabrication processes, PCs of the 2-D slab-type and 3-D layer-by-layer structures have been investigated widely. Alternatively, simple and low-cost self-assembled PCs with full 3-D PBG, inverse opals, have been suggested. A template with face centered cubic closed packed structure, opal, may initially be built by self-assembly of colloidal spheres, and is selectively removed after infiltrating high refractive index materials into the interstitials of spheres. In this dissertation, the optical waveguides utilizing the 3-D self-assembled PCs are discussed. The waveguides were fabricated by microfabrication technology. For high-quality colloidal silica spheres and PCs, reliable synthesis, self-assembly, and characterization techniques were developed. Its theoretical and experimental demonstrations are provided and correlated. They suggest that the self-assembled PCs with PBG are feasible for the applications in integrated optics and photonics.

  18. The anaerobic threshold: over-valued or under-utilized? A novel concept to enhance lipid optimization!

    PubMed

    Connolly, Declan A J

    2012-09-01

    The purpose of this article is to assess the value of the anaerobic threshold for use in clinical populations with the intent to improve exercise adaptations and outcomes. The anaerobic threshold is generally poorly understood, improperly used, and poorly measured. It is rarely used in clinical settings and often reserved for athletic performance testing. Increased exercise participation within both clinical and other less healthy populations has increased our attention to optimizing exercise outcomes. Of particular interest is the optimization of lipid metabolism during exercise in order to improve numerous conditions such as blood lipid profile, insulin sensitivity and secretion, and weight loss. Numerous authors report on the benefits of appropriate exercise intensity in optimizing outcomes even though regulation of intensity has proved difficult for many. Despite limited use, selected exercise physiology markers have considerable merit in exercise-intensity regulation. The anaerobic threshold, and other markers such as heart rate, may well provide a simple and valuable mechanism for regulating exercising intensity. The use of the anaerobic threshold and accurate target heart rate to regulate exercise intensity is a valuable approach that is under-utilized across populations. The measurement of the anaerobic threshold can be simplified to allow clients to use nonlaboratory measures, for example heart rate, in order to self-regulate exercise intensity and improve outcomes.

  19. Improved liquid-film electron stripper

    DOEpatents

    Gavin, B.F.

    1984-11-01

    An improved liquid-film electron stripper, particularly for high intensity heavy ion beams, which produces constantly regenerated, stable, free-standing liquid films having an adjustable thickness between 0.05 and 0.3 microns. The improved electron stripper is basically composed of at least one high speed, rotating disc with a very sharp, precision-ground edge on one side of the disc's periphery and with a highly polished, flat radial surface adjacent the sharp edge. A fine stream of liquid, such as oil, impinges at a 90° angle adjacent the disc's sharp outer edge. Film terminators, located at a selected distance from the disc perimeter, are positioned approximately perpendicular to the film. The terminators support, shape, and stretch the film and are arranged to assist in the prevention of liquid droplet formation by directing the collected film to a reservoir below without breaking or interfering with the film. One embodiment utilizes two rotating discs and associated terminators, with the discs rotating so as to form films in opposite directions, and with the second disc located down the beam line relative to the first disc.

  20. Liquid-film electron stripper

    DOEpatents

    Gavin, Basil F.

    1986-01-01

    An improved liquid-film electron stripper, particularly for high intensity heavy ion beams, which produces constantly regenerated, stable, free-standing liquid films having an adjustable thickness between 0.05 and 0.3 microns. The improved electron stripper is basically composed of at least one high speed, rotating disc with a very sharp, precision-ground edge on one side of the disc's periphery and with a highly polished, flat radial surface adjacent the sharp edge. A fine stream of liquid, such as oil, impinges at a 90° angle adjacent the disc's sharp outer edge. Film terminators, located at a selected distance from the disc perimeter, are positioned approximately perpendicular to the film. The terminators support, shape, and stretch the film and are arranged to assist in the prevention of liquid droplet formation by directing the collected film to a reservoir below without breaking or interfering with the film. One embodiment utilizes two rotating discs and associated terminators, with the discs rotating so as to form films in opposite directions, and with the second disc being located down the beam line relative to the first disc.

  1. Improving the segmentation of therapy-induced leukoencephalopathy using apriori information and a gradient magnitude threshold

    NASA Astrophysics Data System (ADS)

    Glass, John O.; Reddick, Wilburn E.; Reeves, Cara; Pui, Ching-Hon

    2004-05-01

    Reliably quantifying therapy-induced leukoencephalopathy in children treated for cancer is a challenging task due to its varying MR properties and similarity to normal tissues and imaging artifacts. T1, T2, PD, and FLAIR images were analyzed for a subset of 15 children from an institutional protocol for the treatment of acute lymphoblastic leukemia. Three different analysis techniques were compared to examine improvements in the segmentation accuracy of leukoencephalopathy versus manual tracings by two expert observers. The first technique utilized no a priori information and a white matter mask based on the segmentation of the first serial examination of each patient. MR images were then segmented with a Kohonen Self-Organizing Map. The other two techniques combine a priori maps from the ICBM atlas, spatially normalized to each patient and resliced using SPM99 software. The a priori maps were included as input, and a gradient magnitude threshold calculated on the FLAIR images was also utilized. The second technique used a 2-dimensional threshold, while the third algorithm utilized a 3-dimensional threshold. Kappa values for the three techniques were compared against each observer, and improvements were seen with each addition to the original algorithm (Observer 1: 0.651, 0.653, 0.744; Observer 2: 0.603, 0.615, 0.699).
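
    A minimal sketch of the gradient-magnitude gating step described above, not the authors' pipeline: voxels are kept as candidates only where the local FLAIR gradient magnitude is below a threshold and the a priori white-matter probability is sufficiently high. The arrays and cut-off values are illustrative.

```python
# Candidate-voxel mask from a FLAIR gradient-magnitude threshold plus a priori map.
import numpy as np

def candidate_mask(flair, wm_prior, grad_thresh, prior_thresh=0.5):
    gx, gy, gz = np.gradient(flair.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return (grad_mag < grad_thresh) & (wm_prior > prior_thresh)

flair = np.random.default_rng(4).normal(100, 10, size=(32, 32, 16))
wm_prior = np.random.default_rng(5).uniform(0, 1, size=flair.shape)
mask = candidate_mask(flair, wm_prior, grad_thresh=8.0)
print(mask.sum(), "candidate voxels")
```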

  2. 48 CFR 41.401 - Monthly and annual review.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... values exceeding the simplified acquisition threshold, on an annual basis. Annual reviews of accounts with annual values at or below the simplified acquisition threshold shall be conducted when deemed... services to each facility under the utility's most economical, applicable rate and to examine competitive...

  3. Frequency-Locked Detector Threshold Setting Criteria Based on Mean-Time-To-Lose-Lock (MTLL) for GPS Receivers

    PubMed Central

    Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa

    2017-01-01

    The frequency-locked detector (FLD) has been widely utilized in the tracking loops of Global Positioning System (GPS) receivers to indicate their locking status, but the relation between the FLD and lock status has seldom been discussed, and traditional PLL experience does not transfer to the FLL. In this paper, threshold setting criteria for the frequency-locked detector in GPS receivers are proposed by analyzing the statistical characteristics of the FLD output. The approximate probability distribution of the FLD output is theoretically derived using a statistical approach, which reveals the relationship between the FLD probabilities and the carrier-to-noise ratio (C/N0) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold, and lock probability as a function of C/N0 can then be derived from this distribution. A theoretical basis for threshold setting criteria in frequency-locked loops for GPS receivers is thereby provided, based on mean-time-to-lose-lock analysis. PMID:29207546

  4. Frequency-Locked Detector Threshold Setting Criteria Based on Mean-Time-To-Lose-Lock (MTLL) for GPS Receivers.

    PubMed

    Jin, Tian; Yuan, Heliang; Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa

    2017-12-04

    The frequency-locked detector (FLD) has been widely utilized in the tracking loops of Global Positioning System (GPS) receivers to indicate their locking status, but the relation between the FLD and lock status has seldom been discussed, and traditional PLL experience does not transfer to the FLL. In this paper, threshold setting criteria for the frequency-locked detector in GPS receivers are proposed by analyzing the statistical characteristics of the FLD output. The approximate probability distribution of the FLD output is theoretically derived using a statistical approach, which reveals the relationship between the FLD probabilities and the carrier-to-noise ratio (C/N₀) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold, and lock probability as a function of C/N₀ can then be derived from this distribution. A theoretical basis for threshold setting criteria in frequency-locked loops for GPS receivers is thereby provided, based on mean-time-to-lose-lock analysis.

  5. Dual processing model of medical decision-making.

    PubMed

    Djulbegovic, Benjamin; Hozo, Iztok; Beckstead, Jason; Tsalatsanis, Athanasios; Pauker, Stephen G

    2012-09-03

    Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and an analytical, deliberative processing system (system II). To date, no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making, and we show how this approach can be used to better understand decision-making at the bedside and to explain the widespread variation in treatments observed in clinical practice. We show that a physician's inclination to treat at probability levels above (or below) the prescriptive therapeutic threshold obtained via system II processing is moderated by system I and by the ratio of benefits and harms as evaluated by both systems. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II; this can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable or when the cognitive processes of decision-makers are biased through recent experience: the threshold will then increase relative to the normative, expected-utility threshold derived via system II. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. We have developed the first dual processing model of medical decision-making, which has the potential to enrich a field that is still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive versus default-interventionist theories).
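
    For reference, a sketch of the classical expected-utility ("system II") treatment threshold that the model builds on: treat when the probability of disease exceeds harms divided by benefits plus harms. The numerical benefit and harm values are invented for illustration, and the paper's dual-processing adjustment is not reproduced here.

```python
# Classical expected-utility treatment threshold: treat if P(disease) > H / (B + H).
def treatment_threshold(net_benefit_if_diseased, net_harm_if_healthy):
    """Probability of disease above which treating maximizes expected utility."""
    return net_harm_if_healthy / (net_benefit_if_diseased + net_harm_if_healthy)

p_star = treatment_threshold(net_benefit_if_diseased=0.20, net_harm_if_healthy=0.05)
print(f"treat if P(disease) > {p_star:.2f}")   # 0.20 with these illustrative values
```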

  6. Space Resources Utilization Roundtable

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This volume contains abstracts that have been accepted for presentation at the Space Resources Utilization Roundtable, October 27-29, 1999, in Golden, Colorado. The program committee consisted of M. B. Duke (Lunar and Planetary Institute), G. Baughman (Colorado School of Mines), D. Criswell (University of Houston), C. Graham (Canadian Mining Industry Research Organization), H. H. Schmitt (Apollo Astronaut), W. Sharp (Colorado School of Mines), L. Taylor (University of Tennessee), and a space manufacturing representative. Administration and publications support for this meeting were provided by the staff of the Publications and Program Services Department at the Lunar and Planetary Institute.

  7. Vehicle anti-rollover control strategy based on load transferring rate

    NASA Astrophysics Data System (ADS)

    Dai, W. T.; Du, H. Q.; Zhang, L.

    2018-03-01

    When a vehicle is driven on a low-adhesion road or through a high-speed, sharp turn, it is prone to lateral stability problems such as sideslip or rollover. In order to improve vehicle anti-rollover stability under these limit conditions, an SUV model with a high center of mass was built in the CarSim software, and a rollover stability controller was designed using a static threshold on the lateral load transfer rate (LTR). The simulations on the SUV model show that the vehicle's anti-rollover stability under limit conditions is improved.
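
    A minimal sketch of a static-threshold rollover monitor based on the lateral load transfer rate, LTR = (Fz_right - Fz_left) / (Fz_right + Fz_left), where |LTR| approaching 1 means one side of the vehicle is about to lift off. The 0.8 warning threshold and the wheel loads are assumed values, not taken from the paper or from CarSim.

```python
# Static-threshold rollover warning from the lateral load transfer rate (LTR).
def load_transfer_ratio(fz_left, fz_right):
    return (fz_right - fz_left) / (fz_right + fz_left)

def rollover_warning(fz_left, fz_right, threshold=0.8):
    return abs(load_transfer_ratio(fz_left, fz_right)) >= threshold

print(rollover_warning(fz_left=1000.0, fz_right=11000.0))   # True: heavy load transfer
```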

  8. Exploring the robustness of a noise correlation resonance in a Zeeman EIT system

    NASA Astrophysics Data System (ADS)

    O'Leary, Shannon; Crescimanno, Michael; Strehlow, Henry; Snider, Chad

    2011-05-01

    Using a single diode laser with large phase noise (linewidth ~100 MHz) resonant with Zeeman EIT in rubidium vapor, we examine intensity noise correlations of orthogonally-polarized laser components. A sharp correlation feature (~100 Hz) is shown to be power-broadening resistant at low powers. However, the limitations of this resistance are revealed, with the onset of a power-broadening regime once a threshold power is crossed. Possible mechanisms for this broadening, due to decoherence of the ground state superposition, are experimentally explored and results are compared to a model. Understanding the limits of this noise correlation feature is essential to practical applications such as magnetometry.

  9. Synthesis of Copper–Silica Core–Shell Nanostructures with Sharp and Stable Localized Surface Plasmon Resonance

    DOE PAGES

    Crane, Cameron C.; Wang, Feng; Li, Jun; ...

    2017-02-21

    Copper nanoparticles exhibit intense and sharp localized surface plasmon resonance (LSPR) in the visible region; however, the LSPR peaks become weak and broad when exposed to air due to the oxidation of Cu. In this work, the Cu nanoparticles are successfully encapsulated in SiO2 by employing trioctyl-n-phosphine (TOP)-capped Cu nanoparticles for the sol–gel reaction, yielding an aqueous Cu–SiO2 core–shell suspension with stable and well-preserved LSPR properties of the Cu cores. With the TOP capping, the oxidation of the Cu cores in the microemulsion was significantly reduced, thus allowing the Cu cores to sustain the sol–gel process used for coating the SiO2 protection layer. It was found that the self-assembled TOP-capped Cu nanoparticles were spontaneously disassembled during the sol–gel reaction, thus recovering the LSPR of individual particles. During the disassembly process, the extinction spectrum of the nanocube agglomerates evolved from a broad extinction profile to a narrow and sharp peak. For a mixture of nanocubes and nanorods, the spectra evolved to two distinct peaks during the disassembly process. The observed spectra match well with the numerical simulations. In conclusion, these Cu–SiO2 core–shell nanoparticles with sharp and stable LSPR may greatly expand the utilization of Cu nanoparticles in aqueous environments.

  10. Synthesis of Copper–Silica Core–Shell Nanostructures with Sharp and Stable Localized Surface Plasmon Resonance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crane, Cameron C.; Wang, Feng; Li, Jun

    Copper nanoparticles exhibit intense and sharp localized surface plasmon resonance (LSPR) in the visible region; however, the LSPR peaks become weak and broad when exposed to air due to the oxidation of Cu. In this work, the Cu nanoparticles are successfully encapsulated in SiO2 by employing trioctyl-n-phosphine (TOP)-capped Cu nanoparticles for the sol–gel reaction, yielding an aqueous Cu–SiO2 core–shell suspension with stable and well-preserved LSPR properties of the Cu cores. With the TOP capping, the oxidation of the Cu cores in the microemulsion was significantly reduced, thus allowing the Cu cores to sustain the sol–gel process used for coating the SiO2 protection layer. It was found that the self-assembled TOP-capped Cu nanoparticles were spontaneously disassembled during the sol–gel reaction, thus recovering the LSPR of individual particles. During the disassembly process, the extinction spectrum of the nanocube agglomerates evolved from a broad extinction profile to a narrow and sharp peak. For a mixture of nanocubes and nanorods, the spectra evolved to two distinct peaks during the disassembly process. The observed spectra match well with the numerical simulations. In conclusion, these Cu–SiO2 core–shell nanoparticles with sharp and stable LSPR may greatly expand the utilization of Cu nanoparticles in aqueous environments.

  11. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.
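
    An illustrative sketch only, in the spirit of the framework above: a one-step decision matrix picks, for each patch-occupancy state, the action with the higher expected objective value, and the state values where the optimal action switches act as decision thresholds. The response model, management cost, and utility threshold are invented numbers.

```python
# Toy decision matrix: a utility threshold in the objective induces decision thresholds.
import numpy as np

states = np.linspace(0.0, 1.0, 21)          # current patch occupancy
UTILITY_THRESHOLD = 0.6                     # objective: occupancy >= 0.6 next year

def next_occupancy(x, action):
    return min(1.0, x + (0.15 if action == "manage" else 0.02))   # toy response model

def objective(x, action):
    cost = 0.05 if action == "manage" else 0.0
    return (1.0 if next_occupancy(x, action) >= UTILITY_THRESHOLD else 0.0) - cost

policy = ["manage" if objective(x, "manage") > objective(x, "no action") else "no action"
          for x in states]
switches = [s for s, a_prev, a in zip(states[1:], policy, policy[1:]) if a != a_prev]
print(list(zip(np.round(states, 2), policy)))
print("decision threshold(s) near:", switches)
```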

  12. 77 FR 12930 - Federal Acquisition Regulation: Socioeconomic Program Parity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... on May 6, 2011, reinstating the Rule of Two. C. Sole Source Dollar Thresholds Vary Among the... all socioeconomic programs had the same sole source dollar threshold. Response: The sole source dollar... business socioeconomic contracting program to utilize. D. Sole Source Authority Under the SDVOSB Program...

  13. Intensively exploited Mediterranean aquifers: resilience to seawater intrusion and proximity to critical thresholds

    NASA Astrophysics Data System (ADS)

    Mazi, K.; Koussis, A. D.; Destouni, G.

    2014-05-01

    We investigate seawater intrusion in three prominent Mediterranean aquifers that are subject to intensive exploitation and modified hydrologic regimes by human activities: the Nile Delta, Israel Coastal and Cyprus Akrotiri aquifers. Using a generalized analytical sharp interface model, we review the salinization history and current status of these aquifers, and quantify their resilience/vulnerability to current and future seawater intrusion forcings. We identify two different critical limits of seawater intrusion under groundwater exploitation and/or climatic stress: a limit of well intrusion, at which intruded seawater reaches key locations of groundwater pumping, and a tipping point of complete seawater intrusion up to the prevailing groundwater divide of a coastal aquifer. Either limit can be reached, and ultimately crossed, under intensive aquifer exploitation and/or climate-driven change. We show that seawater intrusion vulnerability for different aquifer cases can be directly compared in terms of normalized intrusion performance curves. The site-specific assessments show that (a) the intruding seawater currently seriously threatens the Nile Delta aquifer, (b) in the Israel Coastal aquifer the sharp interface toe approaches the well location and (c) the Cyprus Akrotiri aquifer is currently somewhat less threatened by increased seawater intrusion.

  14. Functional Imaging of Sleep Vertex Sharp Transients

    PubMed Central

    Stern, John M.; Caporro, Matteo; Haneef, Zulfi; Yeh, Hsiang J.; Buttinelli, Carla; Lenartowicz, Agatha; Mumford, Jeanette A.; Parvizi, Josef; Poldrack, Russell A.

    2011-01-01

    Objective The vertex sharp transient (VST) is an electroencephalographic (EEG) discharge that is an early marker of non-REM sleep. It has been recognized since the beginning of sleep physiology research, but its source and function remain mostly unexplained. We investigated VST generation using functional MRI (fMRI). Methods Simultaneous EEG and fMRI were recorded from 7 individuals in drowsiness and light sleep. VST occurrences on EEG were modeled with fMRI using an impulse function convolved with a hemodynamic response function to identify cerebral regions correlating to the VSTs. A resulting statistical image was thresholded at Z>2.3. Results Two hundred VSTs were identified. Significantly increased signal was present bilaterally in medial central, lateral precentral, posterior superior temporal, and medial occipital cortex. No regions of decreased signal were present. Conclusion The regions are consistent with electrophysiologic evidence from animal models and functional imaging of human sleep, but the results are specific to VSTs. The regions principally encompass the primary sensorimotor cortical regions for vision, hearing, and touch. Significance The results depict a network comprising the presumed VST generator and its associated regions. The associated regions functional similarity for primary sensation suggests a role for VSTs in sensory experience during sleep. PMID:21310653

  15. Photofragment slice imaging studies of pyrrole and the Xe···pyrrole cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubio-Lago, L.; Zaouris, D.; Sakellariou, Y.

    The photolysis of pyrrole has been studied in a molecular beam at wavelengths of 250, 240, and 193.3 nm, using two different carrier gases, He and Xe. A broad bimodal distribution of H-atom fragment velocities has been observed at all wavelengths. Near threshold, at both 240 and 250 nm, sharp features have been observed in the fast part of the H-atom distribution. Under appropriate molecular beam conditions, the entire H-atom loss signal from the photolysis of pyrrole at both 240 and 250 nm (including the sharp features) disappears when using Xe as opposed to He as the carrier gas. We attribute this phenomenon to cluster formation between Xe and pyrrole, and this assumption is supported by the observation of resonance enhanced multiphoton ionization spectra for the Xe···pyrrole cluster followed by photofragmentation of the nascent cation cluster. Ab initio calculations are presented for the ground states of the neutral and cationic Xe···pyrrole clusters as a means of understanding their structural and energetic properties.

  16. THRESH—Software for tracking rainfall thresholds for landslide and debris-flow occurrence, user manual

    USGS Publications Warehouse

    Baum, Rex L.; Fischer, Sarah J.; Vigil, Jacob C.

    2018-02-28

    Precipitation thresholds are used in many areas to provide early warning of precipitation-induced landslides and debris flows, and the software distribution THRESH is designed for automated tracking of precipitation, including precipitation forecasts, relative to thresholds for landslide occurrence. This software is also useful for analyzing multiyear precipitation records to compare timing of threshold exceedance with dates and times of historical landslides. This distribution includes the main program THRESH for comparing precipitation to several kinds of thresholds, two utility programs, and a small collection of Python and shell scripts to aid the automated collection and formatting of input data and the graphing and further analysis of output results. The software programs can be deployed on computing platforms that support Fortran 95, Python 2, and certain Unix commands. The software handles rainfall intensity-duration thresholds, cumulative recent-antecedent precipitation thresholds, and peak intensity thresholds as well as various measures of antecedent precipitation. Users should have predefined rainfall thresholds before running THRESH.
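
    A hedged sketch of one threshold type that such software tracks: a rainfall intensity-duration criterion of the common power-law form I >= a * D^b. The coefficients below are placeholders, not an operational USGS threshold, and this is not the THRESH Fortran code.

```python
# Rainfall intensity-duration threshold check (placeholder coefficients).
def exceeds_id_threshold(total_rain_mm, duration_hr, a=12.0, b=-0.6):
    intensity = total_rain_mm / duration_hr          # mm/h averaged over the storm
    return intensity >= a * duration_hr ** b

print(exceeds_id_threshold(total_rain_mm=90.0, duration_hr=12.0))   # True for these values
```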

  17. Discriminatory bio-adhesion over nano-patterned polymer brushes

    NASA Astrophysics Data System (ADS)

    Gon, Saugata

    Surfaces functionalized with bio-molecular targeting agents are conventionally used for highly-specific protein and cell adhesion. This thesis explores an alternative approach: Small non-biological adhesive elements are placed on a surface randomly, with the rest of the surface rendered repulsive towards biomolecules and cells. While the adhesive elements themselves, for instance in solution, typically exhibit no selectivity for various compounds within an analyte suspension, selective adhesion of targeted objects or molecules results from their placement on the repulsive surface. The mechanism of selectivity relies on recognition of length scales of the surface distribution of adhesive elements relative to species in the analyte solution, along with the competition between attractions and repulsions between various species in the suspension and different parts of the collecting surface. The resulting binding selectivity can be exquisitely sharp; however, complex mixtures generally require the use of multiple surfaces to isolate the various species: Different components will be adhered, sharply, with changes in collector composition. The key feature of these surface designs is their lack of reliance on biomolecular fragments for specificity, focusing entirely on physicochemical principles at length scales from 1 - 100 nm. This, along with a lack of formal patterning, provides the advantages of simplicity and cost effectiveness. This PhD thesis demonstrates these principles using a system in which cationic poly-L-lysine (PLL) patches (10 nm) are deposited randomly on a silica substrate and the remaining surface is passivated with a bio-compatible PEG brush. TIRF microscopy revealed that the patches were randomly arranged, not clustered. By precisely controlling the number of patches per unit area, the interfaces provide sharp selectivity for adhesion of proteins and bacterial cells. For instance, it was found that a critical density of patches (on the order of 1000/μm²) was required for fibrinogen adsorption, while a greater density comprised the adhesion threshold for albumin. Surface compositions between these two thresholds discriminated binding of the two proteins. The binding behavior of the two proteins from a mixture was well anticipated by the single-protein binding behaviors of the individual proteins. The mechanism for protein capture was shown to be multivalent: protein adhesion always occurred for average spacings of the adhesive patches smaller than the dimensions of the protein of interest. For some backfill brush architectures, the spacing between the patches at the threshold for protein capture clearly corresponded to the major dimension of the target protein. For denser PEG brush backfills, however, larger adhesion thresholds were observed, corresponding to greater numbers of patches involved with the adhesion of each protein molecule. The thesis demonstrates the tuning of the position of the adhesion thresholds, using fibrinogen as a model protein, via variations in brush properties and ionic strength. The directions of the trends indicate that the brushes do indeed exert steric repulsions toward the proteins while the attractions are electrostatic in nature. The surfaces also demonstrated sharp adhesion thresholds for S. aureus bacteria, at smaller concentrations of adhesive surface elements than those needed for protein capture.
The results suggest that bacteria may be captured while proteins are rejected from these surfaces, and there may be potential to discriminate different bacterial types. Such discrimination from protein-containing bacterial suspensions was investigated briefly in this thesis using S. aureus and fibrinogen as a model mixture. However, due to binding of fibrinogen to the bacterial surface, the separation did not succeed. It is still expected, however, that these surfaces could be used to selectively capture bacteria in the presence of non-interacting proteins. The interactions of these brushes with two different cationic species, PLL and lysozyme, were studied. The thesis documents rapid and complete brush displacement by PLL, highlighting a major limitation of using such brushes in some applications. Also unanticipated, lysozyme, a small cationic protein, was found to adhere to the brushes in increasing amounts with the PEG content of the brush. This finding contradicts current understanding of protein-brush interactions, which suggests that increases in interfacial PEG content increase biocompatibility.

  18. Carbon deposition thresholds on nickel-based solid oxide fuel cell anodes II. Steam:carbon ratio and current density

    NASA Astrophysics Data System (ADS)

    Kuhn, J.; Kesler, O.

    2015-03-01

    In this second part of a two-part publication, coking thresholds with respect to the molar steam:carbon ratio (SC) and current density in nickel-based solid oxide fuel cells were determined. Anode-supported button cell samples were exposed to 2-component and 5-component gas mixtures with 1 ≤ SC ≤ 2 and zero fuel utilization for 10 h, followed by measurement of the resulting carbon mass. The effect of current density was explored by measuring carbon mass under conditions known to be prone to coking while increasing the current density until the cell was carbon-free. The SC coking thresholds were measured to be ∼1.04 and ∼1.18 at 600 and 700 °C, respectively. Current density experiments validated the thresholds measured with respect to fuel utilization and steam:carbon ratio. Coking thresholds at 600 °C could be predicted with thermodynamic equilibrium calculations when the Gibbs free energy of carbon was appropriately modified. Here, the Gibbs free energy of carbon on nickel-based anode support cermets was measured to be -6.91 ± 0.08 kJ mol⁻¹. The results of this two-part publication show that thermodynamic equilibrium calculations with appropriate modification to the Gibbs free energy of solid-phase carbon can be used to predict coking thresholds on nickel-based anodes at 600-700 °C.

  19. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    PubMed

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.

  20. SIGMA Release v1.2 - Capabilities, Enhancements and Fixes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Vijay; Grindeanu, Iulian R.; Ray, Navamita

    In this report, we present details on the SIGMA toolkit, including its component structure, capabilities, feature additions in FY15, release cycles, and continuous integration process. These software processes, along with updated documentation, are imperative for successful integration and use in several applications, including the SHARP coupled analysis toolkit for reactor core systems funded under the NEAMS DOE-NE program.

  1. Ecological thresholds: The key to successful environmental management or an important concept with no practical application?

    USGS Publications Warehouse

    Groffman, P.M.; Baron, Jill S.; Blett, T.; Gold, A.J.; Goodman, I.; Gunderson, L.H.; Levinson, B.M.; Palmer, Margaret A.; Paerl, H.W.; Peterson, G.D.; Poff, N.L.; Rejeski, D.W.; Reynolds, J.F.; Turner, M.G.; Weathers, K.C.; Wiens, J.

    2006-01-01

    An ecological threshold is the point at which there is an abrupt change in an ecosystem quality, property or phenomenon, or where small changes in an environmental driver produce large responses in the ecosystem. Analysis of thresholds is complicated by nonlinear dynamics and by multiple factor controls that operate at diverse spatial and temporal scales. These complexities have challenged the use and utility of threshold concepts in environmental management despite great concern about preventing dramatic state changes in valued ecosystems, the need for determining critical pollutant loads and the ubiquity of other threshold-based environmental problems. In this paper we define the scope of the thresholds concept in ecological science and discuss methods for identifying and investigating thresholds using a variety of examples from terrestrial and aquatic environments, at ecosystem, landscape and regional scales. We end with a discussion of key research needs in this area.

  2. How to determine an optimal threshold to classify real-time crash-prone traffic conditions?

    PubMed

    Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang

    2018-08-01

    One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: once crash risk evaluation models have produced the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point on that posterior probability used to separate potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold. Only when discussing the predictive performance of their models have a few studies used subjective methods to choose the threshold, and such subjective methods cannot automatically identify optimal thresholds under different traffic and weather conditions in real applications. A theoretical method for selecting the threshold value is therefore needed to avoid subjective judgment. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across roadway segments, a mixed logit model was used to develop the crash risk evaluation model and evaluate crash risk. Cross-entropy, between-class variance, and other theories were investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model achieves good performance, and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to these criteria. The method can automatically identify thresholds in crash prediction by minimizing the cross-entropy between the original dataset, with its continuous crash probabilities, and the binarized dataset obtained after applying the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
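
    A hedged sketch of minimum cross-entropy threshold selection (a Li/Lee-style formulation) applied to predicted crash probabilities: for each candidate cut-off, the continuous probabilities are summarized by their class means below and above the cut-off, and the cut-off that loses the least cross-entropy is kept. The exact formulation and data used in the paper may differ.

```python
# Minimum cross-entropy selection of a probability cut-off (synthetic probabilities).
import numpy as np

def min_cross_entropy_threshold(probs, candidates):
    best_t, best_eta = None, np.inf
    for t in candidates:
        low, high = probs[probs < t], probs[probs >= t]
        if low.size == 0 or high.size == 0:
            continue
        eta = (np.sum(low * np.log(low / low.mean())) +
               np.sum(high * np.log(high / high.mean())))
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

rng = np.random.default_rng(6)
probs = np.clip(np.concatenate([rng.beta(1, 20, 950), rng.beta(6, 3, 50)]), 1e-6, 1)
print(min_cross_entropy_threshold(probs, np.linspace(0.05, 0.95, 19)))
```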

  3. Breast imaging with ultrasound tomography: update on a comparative study with MR

    NASA Astrophysics Data System (ADS)

    Ranger, Bryan; Littrup, Peter; Duric, Neb; Li, Cuiping; Schmidt, Steven; Rama, Olsi; Bey-Knight, Lisa

    2011-03-01

    The objective of this study is to present imaging parameters and display thresholds of an ultrasound tomography (UST) prototype in order to demonstrate analogous visualization of overall breast anatomy and lesions relative to magnetic resonance (MR). Thirty-six women were imaged with MR and our UST prototype. The UST scan generated sound speed, attenuation, and reflection images, which were subjected to variable thresholds and then fused into a single UST image. Qualitative and quantitative comparisons of MR and UST images were utilized to identify anatomical similarities and mass characteristics. Overall, UST demonstrated the ability to visualize and characterize breast tissues in a manner comparable to MR without the use of IV contrast. For optimal visualization, fused images utilized thresholds of 1.46 ± 0.1 km/s for sound speed to represent architectural features of the breast, including parenchyma. An arithmetic combination of images using the logical .AND. and .OR. operators, along with thresholds of 1.52 ± 0.03 km/s for sound speed and 0.16 ± 0.04 dB/cm for attenuation, allowed for mass detection and characterization similar to MR.
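
    A minimal sketch of the fusion rule described above on synthetic arrays: a sound-speed threshold yields a parenchyma mask, and the quoted sound-speed and attenuation thresholds are combined with logical AND/OR to form conservative and sensitive mass masks. The image data are invented, and the prototype's exact fusion procedure is not reproduced.

```python
# Threshold-based fusion of sound-speed and attenuation maps (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
sound_speed = rng.normal(1.45, 0.05, size=(128, 128))   # km/s
attenuation = rng.normal(0.10, 0.06, size=(128, 128))   # dB/cm

parenchyma = sound_speed > 1.46                          # architectural features
mass_and = (sound_speed > 1.52) & (attenuation > 0.16)   # conservative mass mask
mass_or = (sound_speed > 1.52) | (attenuation > 0.16)    # sensitive mass mask

print(parenchyma.mean(), mass_and.mean(), mass_or.mean())
```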

  4. Gas Composition Sensing Using Carbon Nanotube Arrays

    NASA Technical Reports Server (NTRS)

    Li, Jing; Meyyappan, Meyya

    2012-01-01

    This innovation is a lightweight, small sensor for inert gases that consumes a relatively small amount of power and provides measurements that are as accurate as conventional approaches. The sensing approach is based on generating an electrical discharge and measuring the specific gas breakdown voltage associated with each gas present in a sample. An array of carbon nanotubes (CNTs) in a substrate is connected to a variable-pulse voltage source. The CNT tips are spaced appropriately from the second electrode maintained at a constant voltage. A sequence of voltage pulses is applied and a pulse discharge breakdown threshold voltage is estimated for one or more gas components, from an analysis of the current-voltage characteristics. Each estimated pulse discharge breakdown threshold voltage is compared with known threshold voltages for candidate gas components to estimate whether at least one candidate gas component is present in the gas. The procedure can be repeated at higher pulse voltages to estimate a pulse discharge breakdown threshold voltage for a second component present in the gas. The CNTs in the gas sensor have a sharp (low radius of curvature) tip; they are preferably multi-wall carbon nanotubes (MWCNTs) or carbon nanofibers (CNFs), to generate high-strength electrical fields adjacent to the tips for breakdown of the gas components with lower voltage application and generation of high current. The sensor system can provide a high-sensitivity, low-power-consumption tool that is very specific for identification of one or more gas components. The sensor can be multiplexed to measure current from multiple CNT arrays for simultaneous detection of several gas components.
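
    A hedged sketch of the detection logic described, not the actual sensor firmware: step the pulse voltage upward, take the first voltage at which the measured current jumps above a limit as the breakdown threshold, and match it against a table of candidate gases. The current model, current limit, and candidate breakdown voltages are illustrative placeholders.

```python
# Estimate a pulse-discharge breakdown threshold from an I-V sweep and match a gas.
import numpy as np

CANDIDATE_BREAKDOWN_V = {"N2": 410.0, "Ar": 305.0, "He": 220.0}   # placeholder values

def estimate_threshold(voltages, currents, current_limit=1e-6):
    above = np.nonzero(currents > current_limit)[0]
    return voltages[above[0]] if above.size else None

def identify_gas(v_threshold, tolerance=15.0):
    matches = {g: v for g, v in CANDIDATE_BREAKDOWN_V.items()
               if abs(v - v_threshold) <= tolerance}
    return min(matches, key=lambda g: abs(matches[g] - v_threshold)) if matches else None

voltages = np.arange(100.0, 500.0, 10.0)
currents = np.where(voltages >= 300.0, 5e-6, 1e-9)       # synthetic current response
vth = estimate_threshold(voltages, currents)
print(vth, identify_gas(vth))
```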

  5. Dependence of the Onset of the Runaway Greenhouse Effect on the Latitudinal Surface Water Distribution of Earth-Like Planets

    NASA Astrophysics Data System (ADS)

    Kodama, T.; Nitta, A.; Genda, H.; Takao, Y.; O'ishi, R.; Abe-Ouchi, A.; Abe, Y.

    2018-02-01

    Liquid water is one of the most important materials affecting the climate and habitability of a terrestrial planet. Liquid water vaporizes entirely when planets receive insolation above a certain critical value, which is called the runaway greenhouse threshold. This threshold forms the innermost limit of the habitable zone. Here we investigate the effects of the distribution of surface water on the runaway greenhouse threshold for Earth-sized planets using a three-dimensional dynamic atmosphere model. We considered a 1 bar atmosphere whose composition is similar to the current Earth's atmosphere with a zonally uniform distribution of surface water. As previous studies have already shown, we also recognized two climate regimes: the land planet regime, which has dry low-latitude and wet high-latitude regions, and the aqua planet regime, which is globally wet. We showed that each regime is controlled by the width of the Hadley circulation, the amount of surface water, and the planetary topography. We found that the runaway greenhouse threshold varies continuously with the surface water distribution from about 130% (an aqua planet) to 180% (the extreme case of a land planet) of the present insolation at Earth's orbit. Our results indicate that the inner edge of the habitable zone is not a single sharp boundary, but a border whose location varies depending on planetary surface conditions, such as the amount of surface water. Since land planets have wider habitable zones and less cloud cover, land planets would be good targets for future observations investigating planetary habitability.

  6. Photoionization of the Fe Ions: Structure of the K-Edge

    NASA Technical Reports Server (NTRS)

    Palmeri, P.; Mendoza, C.; Kallman, T.; Bautista, M.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    X-ray absorption and emission features arising from the inner-shell transitions in iron are of practical importance in astrophysics due to the Fe cosmic abundance and to the absence of traits from other elements in the nearby spectrum. As a result, the strengths and energies of such features can constrain the ionization stage, elemental abundance, and column density of the gas in the vicinity of exotic cosmic objects, e.g. active galactic nuclei (AGN) and galactic black hole candidates. Although the observational technology in X-ray astronomy is still evolving and currently lacks high spectroscopic resolution, the astrophysical models have been based on atomic calculations that predict a sudden and high step-like increase of the cross section at the K-shell threshold (see, for instance, ...). New Breit-Pauli R-matrix calculations of the photoionization cross section of the ground states of Fe XVII in the region near the K threshold are presented. They strongly support the view that the previously assumed sharp edge behaviour is not correct. The latter has been caused by the neglect of spectator Auger channels in the decay of the resonances converging to the K threshold. These decay channels include the dominant KLL channels and give rise to constant widths (independent of n). As a consequence, these series display damped Lorentzian components that rapidly blend to impose continuity at threshold, thus revising the previously held picture of the edge. Apparent broadened iron edges detected in the spectra of AGN and galactic black hole candidates seem to indicate that these quantum effects may be at least partially responsible for the observed broadening.

  7. Sharp-tailed Grouse Restoration; Colville Tribes Restore Habitat for Sharp-tailed Grouse, Annual Report 2002-2003.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Richard

    2004-01-01

    Columbian Sharp-Tailed Grouse (Tympanuchus phasianellus columbianus) (CSTG) are an important traditional and cultural species to the Colville Confederated Tribes (CCT), Spokane Tribe of Indians (STOI), and other Tribes in the Region. They were once the most abundant upland bird in the Region. Currently, the largest remaining population in Washington State occurs on the CCT Reservation in Okanogan County. Increasing agricultural practices and other land uses have contributed to the decline of sharp-tail habitat and populations, putting this species at risk. The decline of this species is not new (Yokum, 1952; Buss and Dziedzic, 1955; Zeigler, 1979; Meints, 1991; Crawford and Snyder, 1994). The Tribes (CCT and STOI) are determined to protect, enhance, and restore habitat for this species' continued existence. When the Grand Coulee and Chief Joseph hydro-projects were constructed, habitat used by this species was inundated and lost, adding to the overall decline. To compensate and prevent further habitat loss, the CCT proposed a project with Bonneville Power Administration (BPA) funding to address this species and its habitat requirements. The project's main focus is to address habitat utilized by the current CSTG population and to determine ways to protect, restore, and enhance habitats for the conservation of this species over time. The project went through the NPPC Review Process and was funded through FY03 by BPA. This report addresses part of the current CCT effort to conserve this species on the Colville Reservation.

  8. Quantifying Climate Feedbacks from Abrupt Changes in High-Latitude Trace-Gas Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlosser, Courtney Adam; Walter-Anthony, Katey; Zhuang, Qianlai

    2013-04-26

    Our overall goal was to quantify the potential for threshold changes in natural emission rates of trace gases, particularly methane and carbon dioxide, from pan-arctic terrestrial systems under the spectrum of anthropogenically forced climate warming, and the extent to which these emissions provide a strong feedback mechanism to global climate warming. This goal is motivated under the premise that polar amplification of global climate warming will induce widespread thaw and degradation of the permafrost, and would thus cause substantial changes in the extent of wetlands and lakes, especially thermokarst (thaw) lakes, over the Arctic. Through a coordinated effort of field measurements, model development, and numerical experimentation with an integrated assessment model framework, we have investigated the following hypothesis: There exists a climate-warming threshold beyond which permafrost degradation becomes widespread and thus instigates strong and/or sharp increases in methane emissions (via thermokarst lakes and wetland expansion). These would outweigh any increased uptake of carbon (e.g. from peatlands) and would result in a strong, positive feedback to global climate warming.

  9. STOCHASTICITY AND EFFICIENCY IN SIMPLIFIED MODELS OF CORE-COLLAPSE SUPERNOVA EXPLOSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardall, Christian Y.; Budiardja, Reuben D., E-mail: cardallcy@ornl.gov, E-mail: reubendb@utk.edu

    2015-11-01

    We present an initial report on 160 simulations of a highly simplified model of the post-bounce core-collapse supernova environment in three spatial dimensions (3D). We set different values of a parameter characterizing the impact of nuclear dissociation at the stalled shock in order to regulate the post-shock fluid velocity, thereby determining the relative importance of convection and the stationary accretion shock instability (SASI). While our convection-dominated runs comport with the paradigmatic notion of a “critical neutrino luminosity” for explosion at a given mass accretion rate (albeit with a nontrivial spread in explosion times just above threshold), the outcomes of our SASI-dominated runs are much more stochastic: a sharp threshold critical luminosity is “smeared out” into a rising probability of explosion over a ∼20% range of luminosity. We also find that the SASI-dominated models are able to explode with 3–4 times less efficient neutrino heating, indicating that progenitor properties, and fluid and neutrino microphysics, conducive to the SASI would make the neutrino-driven explosion mechanism more robust.

  10. Stochasticity and efficiency of convection-dominated vs. SASI-dominated supernova explosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-10-22

    We present an initial report on 160 simulations of a highly simplified model of the post-bounce supernova environment in three position space dimensions (3D). We set different values of a parameter characterizing the impact of nuclear dissociation at the stalled shock in order to regulate the post-shock fluid velocity, thereby determining the relative importance of convection and the stationary accretion shock instability (SASI). While our convection-dominated runs comport with the paradigmatic notion of a 'critical neutrino luminosity' for explosion at a given mass accretion rate (albeit with a nontrivial spread in explosion times just above threshold), the outcomes of our SASI-dominated runs are more stochastic: a sharp threshold critical luminosity is 'smeared out' into a rising probability of explosion over a ∼20% range of luminosity. We also find that the SASI-dominated models are able to explode with 3 to 4 times less efficient neutrino heating, indicating that progenitor properties, and fluid and neutrino microphysics, conducive to the SASI would make the neutrino-driven explosion mechanism more robust.

  11. Spatially discrete thermal drawing of biodegradable microneedles for vascular drug delivery.

    PubMed

    Choi, Chang Kuk; Lee, Kang Ju; Youn, Young Nam; Jang, Eui Hwa; Kim, Woong; Min, Byung-Kwon; Ryu, WonHyoung

    2013-02-01

    Spatially discrete thermal drawing is introduced as a novel method for the fabrication of biodegradable microneedles with ultra-sharp tip ends. This method provides enhanced control of microneedle shapes by spatially controlling the temperature of the drawn polymer as well as the drawing steps and speeds. Particular focus is given to the formation of sharp tip ends of the microneedles at the end of thermal drawing. Previous works relied on fracture of the polymer neck by fast drawing, which often causes uncontrolled shapes of microneedle tips. Instead, this approach utilizes the surface energy of the heated polymer to form ultra-sharp tip ends. We have investigated the effect of such temperature control, drawing speed, and drawing steps in the thermal drawing process on the final shape of microneedles using biodegradable polymers. XRD analysis was performed to analyze the effect of the thermal cycle on the biodegradable polymer. Load-displacement measurements also showed the dependence of the mechanical strength of the microneedles on their shape. Ex vivo vascular tissue insertion and drug delivery demonstrated microneedle insertion into the tunica media layer of canine aorta and drug distribution in the tissue layer. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. EGRET/COMPTEL Observations of an Unusual, Steep-Spectrum Gamma-Ray Source

    NASA Technical Reports Server (NTRS)

    Thompson, D. J.; Bertsch, D. L.; Hartman, R. C.; Collmar, W.; Johnson, W. N.

    1999-01-01

    During analysis of sources below the threshold of the third EGRET catalog, we have discovered a source, named GRO J1400-3956 based on the best position, with a remarkably steep spectrum. Archival analysis of COMPTEL data shows that the spectrum must have a strong turn-over in the energy range between COMPTEL and EGRET. The EGRET data show some evidence of time variability, suggesting an AGN, but the spectral change of slope is larger than that seen for most gamma-ray blazars. The sharp cutoff resembles the high-energy spectral breaks seen in some gamma-ray pulsars. There have as yet been no OSSE observations of this source.

  13. Application of the BINS superheated drop detector spectrometer to the 9Be(p,xn) neutron energy spectrum determination

    NASA Astrophysics Data System (ADS)

    Di Fulvio, A.; Ciolini, R.; Mirzajani, N.; Romei, C.; d'Errico, F.; Bedogni, R.; Esposito, J.; Zafiropoulos, D.; Colautti, P.

    2013-07-01

    In the framework of the TRASCO-BNCT project, a Bubble Interactive Neutron Spectrometer (BINS) device was applied to the characterization of the angle- and energy-differential neutron spectra generated by the 9Be(p,xn) reaction. The BINS spectrometer uses two superheated emulsion detectors, sequentially operated at different temperatures, and thus provides a series of six sharp threshold responses covering the 0.1-10 MeV neutron energy range. Spectrum unfolding of the data was performed by means of the MAXED code. The obtained angle- and energy-differential spectra were compared with those measured with a Bonner sphere spectrometer, a silicon telescope spectrometer, and literature data.

  14. Rayleigh-Taylor instability in accelerated elastic-solid slabs

    NASA Astrophysics Data System (ADS)

    Piriz, S. A.; Piriz, A. R.; Tahir, N. A.

    2017-12-01

    We develop the linear theory for the asymptotic growth of the incompressible Rayleigh-Taylor instability of an accelerated solid slab of density ρ2, shear modulus G, and thickness h, placed over a semi-infinite ideal fluid of density ρ1 < ρ2. It extends previous results for Atwood number AT = 1 [B. J. Plohr and D. H. Sharp, Z. Angew. Math. Phys. 49, 786 (1998), 10.1007/s000330050121] to arbitrary values of AT and unveils the singular feature of an instability threshold below which the slab is stable for any perturbation wavelength. As a consequence, an accelerated elastic-solid slab is stable if ρ2gh/G ≤ 2(1 − AT)/AT.
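
    The quoted stability criterion translates directly into a one-line check; the helper below simply evaluates ρ2gh/G ≤ 2(1 − AT)/AT for given slab parameters. It is a sketch for illustration, and the example numbers are not taken from the paper.

        def elastic_slab_is_stable(rho2, rho1, g, h, shear_modulus):
            """Check the quoted criterion rho2*g*h/G <= 2*(1 - AT)/AT (SI units)."""
            at = (rho2 - rho1) / (rho2 + rho1)   # Atwood number, with rho1 < rho2
            return rho2 * g * h / shear_modulus <= 2.0 * (1.0 - at) / at

        # Example (invented numbers): a 1 mm slab, densities 2700 vs 1000 kg/m^3,
        # shear modulus 1 MPa, acceleration 1e6 m/s^2.
        print(elastic_slab_is_stable(2700.0, 1000.0, 1e6, 1e-3, 1e6))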

  15. Erosive Burning Study Utilizing Ultrasonic Measurement Techniques

    NASA Technical Reports Server (NTRS)

    Furfaro, James A.

    2003-01-01

    A 6-segment subscale motor was developed to generate a range of internal environments from which multiple propellants could be characterized for erosive burning. The motor test bed was designed to provide a high Mach number, high mass flux environment. Propellant regression rates were monitored for each segment utilizing ultrasonic measurement techniques. These data were obtained for three propellants (RSRM, ETM-03, and Castor® IVA), which span two propellant types, PBAN (polybutadiene acrylonitrile) and HTPB (hydroxyl-terminated polybutadiene). The characterization of these propellants indicates a remarkably similar erosive burning response to the induced flow environment. Propellant burn rates for each type had a conventional response with respect to pressure up to a bulk flow velocity threshold. Each propellant, however, had a unique threshold at which it would experience an increase in observed propellant burn rate. Above the observed threshold, each propellant again demonstrated a similar enhanced burn rate response corresponding to the local flow environment.

  16. Evaluation of markers and risk prediction models: Overview of relationships between NRI and decision-analytic measures

    PubMed Central

    Calster, Ben Van; Vickers, Andrew J; Pencina, Michael J; Baker, Stuart G; Timmerman, Dirk; Steyerberg, Ewout W

    2014-01-01

    BACKGROUND For the evaluation and comparison of markers and risk prediction models, various novel measures have recently been introduced as alternatives to the commonly used difference in the area under the ROC curve (ΔAUC). The Net Reclassification Improvement (NRI) is increasingly popular for comparing predictions at one or more risk thresholds, but decision-analytic approaches have also been proposed. OBJECTIVE We aimed to identify the mathematical relationships between novel performance measures for the situation in which a single risk threshold T is used to classify patients as having the outcome or not. METHODS We considered the NRI and three utility-based measures that take misclassification costs into account: difference in Net Benefit (ΔNB), difference in Relative Utility (ΔRU), and weighted NRI (wNRI). We illustrate the behavior of these measures in 1938 women suspected of having ovarian cancer (prevalence 28%). RESULTS The three utility-based measures appear to be transformations of each other, and hence always lead to consistent conclusions. On the other hand, conclusions may differ when using the standard NRI, depending on the adopted risk threshold T, prevalence P and the obtained differences in sensitivity and specificity of the two models that are compared. In the case study, adding the CA-125 tumor marker to a baseline set of covariates yielded a negative NRI yet a positive value for the utility-based measures. CONCLUSIONS The decision-analytic measures are each appropriate to indicate the clinical usefulness of an added marker or to compare prediction models, since these measures each reflect misclassification costs. This is of practical importance as these measures may thus adjust conclusions based on purely statistical measures. A range of risk thresholds should be considered in applying these measures. PMID:23313931
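
    For reference, the Net Benefit at a single risk threshold T is conventionally computed as the true-positive fraction minus the false-positive fraction weighted by the odds T/(1 − T). The snippet below sketches ΔNB for two sets of predicted risks under that standard definition; the simulated outcomes and risk scores are invented for illustration.

        import numpy as np

        def net_benefit(y, risk, threshold):
            """Decision-curve Net Benefit: TP/n - FP/n * threshold/(1 - threshold)."""
            y = np.asarray(y)
            treat = np.asarray(risk) >= threshold
            n = y.size
            tp = np.sum(treat & (y == 1)) / n
            fp = np.sum(treat & (y == 0)) / n
            return tp - fp * threshold / (1.0 - threshold)

        # Toy comparison of a baseline model and one with an added marker (simulated data).
        rng = np.random.default_rng(2)
        y = rng.binomial(1, 0.28, size=1938)   # prevalence as in the case study
        risk_base = np.clip(0.28 + rng.normal(0.0, 0.10, y.size), 0.01, 0.99)
        risk_ext = np.clip(risk_base + 0.05 * (y - 0.28), 0.01, 0.99)
        T = 0.20
        print(net_benefit(y, risk_ext, T) - net_benefit(y, risk_base, T))   # Delta NB at T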

  17. Dual processing model of medical decision-making

    PubMed Central

    2012-01-01

    Background Dual processing theory of human cognition postulates that reasoning and decision-making can be described as a function of both an intuitive, experiential, affective system (system I) and/or an analytical, deliberative (system II) processing system. To date no formal descriptive model of medical decision-making based on dual processing theory has been developed. Here we postulate such a model and apply it to a common clinical situation: whether treatment should be administered to a patient who may or may not have a disease. Methods We developed a mathematical model in which we linked a recently proposed descriptive psychological model of cognition with the threshold model of medical decision-making and show how this approach can be used to better understand decision-making at the bedside and explain the widespread variation in treatments observed in clinical practice. Results We show that the physician's beliefs about whether to treat at higher (lower) probability levels compared to the prescriptive therapeutic thresholds obtained via system II processing are moderated by system I and by the ratio of benefits and harms as evaluated by both systems I and II. Under some conditions, the system I decision maker's threshold may drop dramatically below the expected utility threshold derived by system II. This can explain the overtreatment often seen in contemporary practice. The opposite can also occur, as in situations where empirical evidence is considered unreliable, or when cognitive processes of decision-makers are biased through recent experience: the threshold will increase relative to the normative threshold value derived via system II using the expected utility threshold. This inclination toward higher diagnostic certainty may, in turn, explain the undertreatment that is also documented in current medical practice. Conclusions We have developed the first dual processing model of medical decision-making that has the potential to enrich the current medical decision-making field, which is still to a large extent dominated by expected utility theory. The model also provides a platform for reconciling two groups of competing dual processing theories (parallel competitive versus default-interventionalist theories). PMID:22943520

  18. Cost-effectiveness of febrile neutropenia prevention with primary versus secondary G-CSF prophylaxis for adjuvant chemotherapy in breast cancer: a systematic review.

    PubMed

    Younis, T; Rayson, D; Jovanovic, S; Skedgel, C

    2016-10-01

    The adoption of primary (PP) versus secondary prophylaxis (SP) of febrile neutropenia (FN), with granulocyte colony-stimulating factors (G-CSF), for adjuvant chemotherapy (AC) regimens in breast cancer (BC) could be affected by its "value for money". This systematic review examined (i) the cost-effectiveness of PP versus SP, (ii) the FN threshold at which PP is cost-effective, including the guidelines' 20% threshold, and (iii) the potential impact of G-CSF efficacy assumptions on outcomes. The systematic review identified all cost-effectiveness/cost-utility analyses (CEA/CUA) involving PP versus SP G-CSF for AC in BC that met predefined inclusion/exclusion criteria. Five relevant CEA/CUA were identified. These CEA/CUA examined different AC regimens (TAC = 2; FEC-D = 1; TC = 2) and G-CSF formulations (filgrastim "F" = 4; pegfilgrastim "P" = 4) with varying baseline FN risk (range 22-32%), mortality (range 1.4-6.0%) and utility (range 0.33-0.47). The potential G-CSF benefit, including FN risk reduction with P versus F, varied among models. Overall, relative to SP, PP was not associated with good value for money, as per commonly utilized CE thresholds, at the baseline FN rates examined, including the consensus 20% FN threshold, in most of these studies. The value for money associated with PP versus SP was primarily dependent on G-CSF benefit assumptions, including reduced FN mortality and improved BC survival. PP G-CSF for FN prevention in BC patients undergoing AC may not be a cost-effective strategy at the guidelines' 20% FN threshold.

  19. Efficient computational model for classification of protein localization images using Extended Threshold Adjacency Statistics and Support Vector Machines.

    PubMed

    Tahir, Muhammad; Jan, Bismillah; Hayat, Maqsood; Shah, Shakir Ullah; Amin, Muhammad

    2018-04-01

    Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images so that drug development can be more effective. The objective of this paper is to propose a novel modification of the Threshold Adjacency Statistics (TAS) technique and to enhance its discriminative power. In this work, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through a majority voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using 5-fold cross-validation. We observed that our proposed utilization of the TAS technique improved the discriminative power of the classifier. The ETAS-SubLoc system achieved 99.2% accuracy, 99.3% sensitivity and 99.1% specificity for the Endogenous dataset, outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity and 91.6% specificity values were achieved for the Transfected dataset. Simulation results validated the effectiveness of ETAS-SubLoc, which provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to the pharmaceutical industry as well as the research community towards better drug design and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at: https://drive.google.com/file/d/0B7IyGPObWbSqRTRMcXI2bG5CZWs/view?usp=sharing. Copyright © 2018 Elsevier B.V. All rights reserved.
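
    The ensemble structure described (seven threshold ranges, one feature space and one SVM per range, majority voting) could be organized along the lines below. The feature extractor is a stub, the threshold ranges are assumed, and the whole sketch is an assumption about structure rather than the authors' released code.

        import numpy as np
        from sklearn.svm import SVC

        THRESHOLD_RANGES = [(m, m + 10) for m in range(10, 80, 10)]   # 7 assumed ranges

        def tas_features(images, threshold_range):
            """Stub for Threshold Adjacency Statistics in one threshold range.

            A real implementation would threshold each image within the given
            range and count adjacency patterns of above-threshold pixels; here
            we return fixed-length placeholder features.
            """
            rng = np.random.default_rng(abs(hash(threshold_range)) % (2**32))
            return rng.normal(size=(len(images), 27))

        def fit_ensemble(images, labels):
            """Train one SVM per threshold range (labels assumed binary 0/1)."""
            return [(tr, SVC(kernel="rbf").fit(tas_features(images, tr), labels))
                    for tr in THRESHOLD_RANGES]

        def predict_majority(models, images):
            """Majority vote across the per-threshold-range classifiers."""
            votes = np.array([clf.predict(tas_features(images, tr)) for tr, clf in models])
            return (votes.mean(axis=0) >= 0.5).astype(int)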

  20. Interplay between spherical confinement and particle shape on the self-assembly of rounded cubes.

    PubMed

    Wang, Da; Hermes, Michiel; Kotni, Ramakrishna; Wu, Yaoting; Tasios, Nikos; Liu, Yang; de Nijs, Bart; van der Wee, Ernest B; Murray, Christopher B; Dijkstra, Marjolein; van Blaaderen, Alfons

    2018-06-08

    Self-assembly of nanoparticles (NPs) inside drying emulsion droplets provides a general strategy for hierarchical structuring of matter at different length scales. The local orientation of neighboring crystalline NPs can be crucial to optimize for instance the optical and electronic properties of the self-assembled superstructures. By integrating experiments and computer simulations, we demonstrate that the orientational correlations of cubic NPs inside drying emulsion droplets are significantly determined by their flat faces. We analyze the rich interplay of positional and orientational order as the particle shape changes from a sharp cube to a rounded cube. Sharp cubes strongly align to form simple-cubic superstructures whereas rounded cubes assemble into icosahedral clusters with additionally strong local orientational correlations. This demonstrates that the interplay between packing, confinement and shape can be utilized to develop new materials with novel properties.

  1. Modeling of Gate Bias Modulation in Carbon Nanotube Field-Effect-Transistors

    NASA Technical Reports Server (NTRS)

    Yamada, Toshishige; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The threshold voltages of a carbon nanotube (CNT) field-effect transistor (FET) are derived and compared with those of the metal oxide-semiconductor (MOS) FETs. The CNT channel is so thin that there is no voltage drop perpendicular to the gate electrode plane, which is the CNT diameter direction, and this makes the CNTFET characteristics quite different from those in MOSFETs. The relation between the voltage and the electrochemical potentials, and the mass action law for electrons and holes are examined in the context of CNTs, and it is shown that the familiar relations are still valid because of the macroscopic number of states available in the CNTs. This is in sharp contrast to the cases of quantum dots. Using these relations, we derive an inversion threshold voltage V(sub Ti) and an accumulation threshold voltage V(sub Ta) as a function of the Fermi level E(sub F) in the channel, where E(sub F) is a measure of channel doping. V(sub Ti) of the CNTFETs has a much stronger dependence than that of MOSFETs, while V(sub Ta)s of both CNTFETs and MOSFETs depend quite weakly on E(sub F) with the same functional form. This means the transition from normally-off mode to normally-on mode is much sharper in CNTFETs as the doping increases, and this property has to be taken into account in circuit design.

  2. Impact of saturation on the polariton renormalization in III-nitride based planar microcavities

    NASA Astrophysics Data System (ADS)

    Rossbach, Georg; Levrat, Jacques; Feltin, Eric; Carlin, Jean-François; Butté, Raphaël; Grandjean, Nicolas

    2013-10-01

    It has been widely observed that an increasing carrier density in a strongly coupled semiconductor microcavity (MC) alters the dispersion of cavity polaritons, below and above the condensation threshold. The interacting nature of cavity polaritons stems from their excitonic fraction being intrinsically subject to Coulomb interactions and the Pauli-blocking principle at high carrier densities. By means of injection-dependent photoluminescence studies performed nonresonantly on a GaN-based MC at various temperatures, it is shown that already below the condensation threshold saturation effects generally dominate over any energy variation in the excitonic resonance. This observation is in sharp contrast to the usually assumed picture in strongly coupled semiconductor MCs, where the impact of saturation is widely neglected. These experimental findings are confirmed by tracking the exciton emission properties of the bare MC active medium and those of a high-quality single GaN quantum well up to the Mott density. The systematic investigation of renormalization up to the polariton condensation threshold as a function of lattice temperature and exciton-cavity photon detuning is strongly hampered by photonic disorder. However, when overcoming the latter by averaging over a larger spot size, a behavior in agreement with a saturation-dominated polariton renormalization is revealed. Finally, a comparison with other inorganic material systems suggests that for correctly reproducing polariton renormalization, exciton saturation effects should be taken into account systematically.

  3. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. Proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
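
    The soft-thresholding step mentioned above is the proximal operator of the ℓ1 norm; a minimal sketch of that operator (the rest of the variable-splitting pipeline, the FFT solves, and the SHARP filtering are omitted) is:

        import numpy as np

        def soft_threshold(x, lam):
            """Proximal operator of lam*||x||_1: shrink each element toward zero by lam."""
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        # Example: small coefficients are zeroed, large ones are shrunk by lam.
        print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))   # [-1.5 -0.  0.  1. ]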

  4. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.

  5. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain

    NASA Astrophysics Data System (ADS)

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C. M.; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest, e.g. brain, such as air-tissue interface. In the vicinity of air-tissue boundary, e.g. skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient and these regions often need to be excluded by brain mask erosion at the expense of losing information of local field and thus susceptibility measures in these regions. In this paper, we propose an extension to the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy "functional" reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated from external sources can be effectively removed to get a more accurate estimation of the local field and thus of the QSM dipole inversion to map local tissue susceptibility sources. Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions are preserved in the brain mask. Shadow artifacts due to strong susceptibility variations in the derived QSM maps could also be largely eliminated using the R-SHARP method, leading to more accurate QSM reconstruction.

  6. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain.

    PubMed

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C M; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest, e.g. brain, such as air-tissue interface. In the vicinity of air-tissue boundary, e.g. skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient and these regions often need to be excluded by brain mask erosion at the expense of losing information of local field and thus susceptibility measures in these regions. In this paper, we propose an extension to the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy "functional" reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated from external sources can be effectively removed to get a more accurate estimation of the local field and thus of the QSM dipole inversion to map local tissue susceptibility sources. Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions are preserved in the brain mask. Shadow artifacts due to strong susceptibility variations in the derived QSM maps could also be largely eliminated using the R-SHARP method, leading to more accurate QSM reconstruction. Copyright © 2017. Published by Elsevier Inc.

  7. Magnetic substorms and northward IMF turning

    NASA Astrophysics Data System (ADS)

    Troshichev, Oleg; Podorozhkina, Nataly

    To determine the relation of northward IMF turnings to substorm sudden onsets, we selected all events with sharp northward IMF turnings observed in years of solar maximum (1999-2002) and solar minimum (2007-2008). The events (N=261) were classified into 5 groups according to the average magnetic activity in the auroral zone (low, moderate, or high levels of the AL index) at unchanged or slightly changed PC index, and according to the dynamics of PC (steady distinct growth or distinct decline) at arbitrary values of the AL index. Statistical analysis of the relationships between the IMF turning and changes of the PC and AL indices was carried out separately for each of the 5 classes. The results show that, irrespective of geophysical conditions and solar activity epoch, the magnetic activity in the polar caps and in the auroral zone demonstrates no response to a sudden northward IMF turning if the moment of northward turning is taken as the key date. Sharp increases of magnetic disturbance in the auroral zone are observed only under conditions of a growing PC index, and statistically they are related to the moment when the PC index exceeds the threshold level (~1.5 mV/m), not to the northward turnings, which, as a rule, occur after the moment of sudden onset. Magnetic disturbances observed in these cases in the auroral zone (magnetic substorms) are guided by the behavior of the PC index, like ordinary magnetic substorms or substorms developing under conditions of prolonged northward IMF impact on the magnetosphere. The evident inconsistency between the sharp IMF changes measured outside the magnetosphere and the behavior of the ground-based PC index, the latter determining the substorm development, provides an additional argument in favor of the PC index as a ground-based proxy of the solar wind energy that enters the magnetosphere.

  8. Strategies for mapping synaptic inputs on dendrites in vivo by combining two-photon microscopy, sharp intracellular recording, and pharmacology

    PubMed Central

    Levy, Manuel; Schramm, Adrien E.; Kara, Prakash

    2012-01-01

    Uncovering the functional properties of individual synaptic inputs on single neurons is critical for understanding the computational role of synapses and dendrites. Previous studies combined whole-cell patch recording to load neurons with a fluorescent calcium indicator and two-photon imaging to map subcellular changes in fluorescence upon sensory stimulation. By hyperpolarizing the neuron below spike threshold, the patch electrode ensured that changes in fluorescence associated with synaptic events were isolated from those caused by back-propagating action potentials. This technique holds promise for determining whether the existence of unique cortical feature maps across different species may be associated with distinct wiring diagrams. However, the use of whole-cell patch for mapping inputs on dendrites is challenging in large mammals, due to brain pulsations and the accumulation of fluorescent dye in the extracellular milieu. Alternatively, sharp intracellular electrodes have been used to label neurons with fluorescent dyes, but the current passing capabilities of these high impedance electrodes may be insufficient to prevent spiking. In this study, we tested whether sharp electrode recording is suitable for mapping functional inputs on dendrites in the cat visual cortex. We compared three different strategies for suppressing visually evoked spikes: (1) hyperpolarization by intracellular current injection, (2) pharmacological blockade of voltage-gated sodium channels by intracellular QX-314, and (3) GABA iontophoresis from a perisomatic electrode glued to the intracellular electrode. We found that functional inputs on dendrites could be successfully imaged using all three strategies. However, the best method for preventing spikes was GABA iontophoresis with low currents (5–10 nA), which minimally affected the local circuit. Our methods advance the possibility of determining functional connectivity in preparations where whole-cell patch may be impractical. PMID:23248588

  9. Post-traumatic stress disorder dimensions and asthma morbidity in World Trade Center rescue and recovery workers.

    PubMed

    Mindlis, I; Morales-Raveendran, E; Goodman, E; Xu, K; Vila-Castelar, C; Keller, K; Crawford, G; James, S; Katz, C L; Crowley, L E; de la Hoz, R E; Markowitz, S; Wisnivesky, J P

    2017-09-01

    Using data from a cohort of World Trade Center (WTC) rescue and recovery workers with asthma, we assessed whether meeting criteria for post-traumatic stress disorder (PTSD), sub-threshold PTSD, and for specific PTSD symptom dimensions are associated with increased asthma morbidity. Participants underwent a Structured Clinical Interview for Diagnostic and Statistical Manual to assess the presence of PTSD following DSM-IV criteria during in-person interviews between December 2013 and April 2015. We defined sub-threshold PTSD as meeting criteria for two of three symptom dimensions: re-experiencing, avoidance, or hyper-arousal. Asthma control, acute asthma-related healthcare utilization, and asthma-related quality of life data were collected using validated scales. Unadjusted and multiple regression analyses were performed to assess the relationship between sub-threshold PTSD and PTSD symptom domains with asthma morbidity measures. Of the 181 WTC workers with asthma recruited into the study, 28% had PTSD and 25% had sub-threshold PTSD. Patients with PTSD showed worse asthma control, higher rates of inpatient healthcare utilization, and poorer asthma quality of life than those with sub-threshold or no PTSD. After adjusting for potential confounders, among patients not meeting the criteria for full PTSD, those presenting symptoms of re-experiencing exhibited poorer quality of life (p = 0.003). Avoidance was associated with increased acute healthcare use (p = 0.05). Sub-threshold PTSD was not associated with asthma morbidity (p > 0.05 for all comparisons). There may be benefit in assessing asthma control in patients with sub-threshold PTSD symptoms as well as those with full PTSD to more effectively identify ongoing asthma symptoms and target management strategies.

  10. Computed radiography utilizing laser-stimulated luminescence: detectability of simulated low-contrast radiographic objects.

    PubMed

    Higashida, Y; Moribe, N; Hirata, Y; Morita, K; Doudanuki, S; Sonoda, Y; Katsuda, N; Hiai, Y; Misumi, W; Matsumoto, M

    1988-01-01

    Threshold contrasts of low-contrast objects in computed radiography (CR) images were compared with those of blue- and green-emitting screen-film systems by employing the 18-alternative forced-choice (18-AFC) procedure. The dependence of the threshold contrast on the incident X-ray exposure and also on the object size was studied. The results indicated that the threshold contrasts of the CR system were comparable to those of the blue and green screen-film systems; they decreased with increasing object size and increased with decreasing incident X-ray exposure. The increase in threshold contrast was small when the relative incident exposure decreased from 1 to 1/4, and was large when the incident exposure was decreased further.

  11. Carbon deposition thresholds on nickel-based solid oxide fuel cell anodes I. Fuel utilization

    NASA Astrophysics Data System (ADS)

    Kuhn, J.; Kesler, O.

    2015-03-01

    In the first of a two part publication, the effect of fuel utilization (Uf) on carbon deposition rates in solid oxide fuel cell nickel-based anodes was studied. Representative 5-component CH4 reformate compositions (CH4, H2, CO, H2O, & CO2) were selected graphically by plotting the solutions to a system of mass-balance constraint equations. The centroid of the solution space was chosen to represent a typical anode gas mixture for each nominal Uf value. Selected 5-component and 3-component gas mixtures were then delivered to anode-supported cells for 10 h, followed by determination of the resulting deposited carbon mass. The empirical carbon deposition thresholds were affected by atomic carbon (C), hydrogen (H), and oxygen (O) fractions of the delivered gas mixtures and temperature. It was also found that CH4-rich gas mixtures caused irreversible damage, whereas atomically equivalent CO-rich compositions did not. The coking threshold predicted by thermodynamic equilibrium calculations employing graphite for the solid carbon phase agreed well with empirical thresholds at 700 °C (Uf ≈ 32%); however, at 600 °C, poor agreement was observed with the empirical threshold of ∼36%. Finally, cell operating temperatures correlated well with the difference in enthalpy between the supplied anode gas mixtures and their resulting thermodynamic equilibrium gas mixtures.

  12. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
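
    For reference, the (isotropic) total variation penalty used in the superiorization step can be written as the sum over pixels of the local gradient magnitude. The sketch below computes it with forward differences; it is an illustration of the penalty itself, not of the superiorized CG algorithm.

        import numpy as np

        def total_variation(img):
            """Isotropic TV of a 2-D image: sum of forward-difference gradient magnitudes."""
            dx = np.diff(img, axis=1, append=img[:, -1:])
            dy = np.diff(img, axis=0, append=img[-1:, :])
            return np.sum(np.sqrt(dx**2 + dy**2))

        # A sharp-edged square: the TV value is proportional to the edge length.
        img = np.zeros((64, 64))
        img[16:48, 16:48] = 1.0
        print(total_variation(img))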

  13. Functional imaging of sleep vertex sharp transients.

    PubMed

    Stern, John M; Caporro, Matteo; Haneef, Zulfi; Yeh, Hsiang J; Buttinelli, Carla; Lenartowicz, Agatha; Mumford, Jeanette A; Parvizi, Josef; Poldrack, Russell A

    2011-07-01

    The vertex sharp transient (VST) is an electroencephalographic (EEG) discharge that is an early marker of non-REM sleep. It has been recognized since the beginning of sleep physiology research, but its source and function remain mostly unexplained. We investigated VST generation using functional MRI (fMRI). Simultaneous EEG and fMRI were recorded from seven individuals during drowsiness and light sleep. VST occurrences on EEG were modeled with fMRI using an impulse function convolved with a hemodynamic response function to identify cerebral regions correlating to the VSTs. The resulting statistical image was thresholded at Z>2.3. Two hundred VSTs were identified. Significantly increased signal was present bilaterally in medial central, lateral precentral, posterior superior temporal, and medial occipital cortex. No regions of decreased signal were present. The regions are consistent with electrophysiologic evidence from animal models and functional imaging of human sleep, but the results are specific to VSTs. The regions principally encompass the primary sensorimotor cortical regions for vision, hearing, and touch. The results depict a network comprising the presumed VST generator and its associated regions. The associated regions' functional similarity for primary sensation suggests a role for VSTs in sensory experience during sleep. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Product surface hardening in non-self-sustained glow discharge plasma before synthesis of superhard coatings

    NASA Astrophysics Data System (ADS)

    Krasnov, P. S.; Metel, A. S.; Nay, H. A.

    2017-05-01

    Before the synthesis of a superhard coating, the product surface is hardened by means of plasma nitriding, which prevents surface deformation and brittle rupture of the coating. Heating the product with ions accelerated from the plasma by a bias voltage applied to the product leads to overheating and blunting of the product's sharp edges. To prevent this blunting, it is proposed to heat the products with a broad beam of fast nitrogen molecules. Injection of the beam into a working vacuum chamber fills the chamber with quite homogeneous plasma suitable for nitriding. Immersing an electrode in the plasma and raising its potential to 50-100 V initiates a non-self-sustained glow discharge between the electrode and the chamber. This enhances the plasma density by an order of magnitude and reduces its spatial nonuniformity to 5-10%. When a cutting tool is isolated from the chamber, it is bombarded by plasma ions with an energy corresponding to its floating potential, which is lower than the sputtering threshold. Hence, the sharp edges are sputtered only by fast nitrogen molecules, at the same rate as other parts of the tool surface. This leads to sharpening of the cutting tools instead of blunting.

  15. Elevation of pain threshold by vaginal stimulation in women.

    PubMed

    Whipple, B; Komisaruk, B R

    1985-04-01

    In 2 studies with 10 women each, vaginal self-stimulation significantly increased the threshold to detect and tolerate painful finger compression, but did not significantly affect the threshold to detect innocuous tactile stimulation. The vaginal self-stimulation was applied with a specially designed pressure transducer assembly to produce a report of pressure or pleasure. In the first study, 6 of the women perceived the vaginal stimulation as producing pleasure. During that condition, the pain tolerance threshold increased significantly by 36.8% and the pain detection threshold increased significantly by 53%. A second study utilized other types of stimuli. Vaginal self-stimulation perceived as pressure significantly increased the pain tolerance threshold by 40.3% and the pain detection threshold by 47.4%. In the second study, when the vaginal stimulation was self-applied in a manner that produced orgasm, the pain tolerance threshold and pain detection threshold increased significantly by 74.6% and 106.7% respectively, while the tactile threshold remained unaffected. A variety of control conditions, including various types of distraction, did not significantly elevate pain or tactile thresholds. We conclude that in women, vaginal self-stimulation decreases pain sensitivity, but does not affect tactile sensitivity. This effect is apparently not due to painful or non-painful distraction.

  16. FY16 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Shemon, E. R.; Smith, M. A.

    2016-09-30

    The goal of the NEAMS neutronics effort is to develop a neutronics toolkit for use on sodium-cooled fast reactors (SFRs) which can be extended to other reactor types. The neutronics toolkit includes the high-fidelity deterministic neutron transport code PROTEUS and many supporting tools such as the cross section generation code MC2-3, a cross section library generation code, alternative cross section generation tools, mesh generation and conversion utilities, and an automated regression test tool. The FY16 effort for NEAMS neutronics focused on supporting the release of the SHARP toolkit and existing and new users, continuing to develop PROTEUS functions necessary for performance improvement as well as the SHARP release, verifying PROTEUS against available existing benchmark problems, and developing new benchmark problems as needed. The FY16 research effort was focused on further updates of PROTEUS-SN and PROTEUS-MOCEX and cross section generation capabilities as needed.

  17. JPRS Report, Near East & South Asia, India

    DTIC Science & Technology

    1991-09-04

    effectively utilized. The national Scheduled Castes and Scheduled Tribes ... High priority will, therefore, be accorded to education. Finance and Development ... the Government's reaction. Whatever is banned or is obstructive is a negative development ... Unless I find substantial improvement in tax compliance in the ... there was a sharp reduction in our foreign exchange ... distribution system will be increased by 85 paise per kg to Rs 6.10 per kg with effect

  18. Is the sky the limit? On the expansion threshold of a species' range.

    PubMed

    Polechová, Jitka

    2018-06-15

    More than 100 years after Grigg's influential analysis of species' borders, the causes of limits to species' ranges still represent a puzzle that has never been understood with clarity. The topic has become especially important recently as many scientists have become interested in the potential for species' ranges to shift in response to climate change, and yet nearly all of those studies fail to recognise or incorporate evolutionary genetics in a way that relates to theoretical developments. I show that range margins can be understood based on just two measurable parameters: (i) the fitness cost of dispersal, a measure of environmental heterogeneity, and (ii) the strength of genetic drift, which reduces genetic diversity. Together, these two parameters define an 'expansion threshold': adaptation fails when genetic drift reduces genetic diversity below that required for adaptation to a heterogeneous environment. When the key parameters drop below this expansion threshold locally, a sharp range margin forms. When they drop below this threshold throughout the species' range, adaptation collapses everywhere, resulting in either extinction or formation of a fragmented metapopulation. Because the effects of dispersal differ fundamentally with dimension, the second parameter, the strength of genetic drift, is qualitatively different compared to a linear habitat. In two-dimensional habitats, genetic drift becomes effectively independent of selection. It decreases with 'neighbourhood size', the number of individuals accessible by dispersal within one generation. Moreover, in contrast to earlier predictions, which neglected evolution of genetic variance and/or stochasticity in two dimensions, dispersal into small marginal populations aids adaptation. This is because the reduction of both genetic and demographic stochasticity has a stronger effect than the cost of dispersal through increased maladaptation. The expansion threshold thus provides a novel, theoretically justified, and testable prediction for formation of the range margin and collapse of the species' range.

  19. A study of the threshold method utilizing raingage data

    NASA Technical Reports Server (NTRS)

    Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David

    1993-01-01

    The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assumption of lognormal distributions with different scale parameters and the same shape parameters.
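
    For context, the basic relation exploited by the threshold method can be written as follows (standard notation assumed here, not quoted from the paper): the area-average rain rate is proportional to the fractional area exceeding the threshold, with the proportionality set by a threshold coefficient that depends on the climatological rain-rate distribution.

        \langle R \rangle \;=\; S(\tau)\, F(\tau),
        \qquad
        F(\tau) \;=\; \Pr\{\, R > \tau \,\}

    The paper's empirical finding, in this notation, is that at tau = 10 mm/h the fitted S(tau) is nearly the same for the premonsoon and monsoon regimes, i.e. insensitive to changes in the lognormal scale parameter when the shape parameter is shared.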

  20. Electric-field-driven phase transition in vanadium dioxide

    NASA Astrophysics Data System (ADS)

    Wu, B.; Zimmers, A.; Aubin, H.; Gosh, R.; Liu, Y.; Lopez, R.

    2011-03-01

    In recent years, various strongly correlated materials have shown sharp switching from an insulating to a metallic state in their I(V) transport curves. Determining whether this is purely an out-of-equilibrium phenomenon (due to the strong electric field applied across the sample) or simply a Joule heating issue is still an open question. To address this issue, we first measured local I(V) curves in the vanadium dioxide (VO2) Mott insulator at various temperatures using a conducting AFM setup and determined the voltage threshold of the insulator-to-metal switching. By lifting the tip above the surface (> 35 nm), we then measured the purely electrostatic force between the tip and the sample surface as the voltage between the two was increased. In a very narrow temperature range (below 360 K), tip height range (below 60 nm), and applied voltage range (above 8 V), we observed switching in the electrostatic force (telegraphic noise vs. time and vs. voltage). This purely electric-field effect shows that the switching phenomenon is still present even without Joule heating in VO2.

  1. Application of a free-energy-landscape approach to study tension-dependent bilayer tubulation mediated by curvature-inducing proteins.

    PubMed

    Tourdot, Richard W; Ramakrishnan, N; Baumgart, Tobias; Radhakrishnan, Ravi

    2015-10-01

    We investigate the phenomenon of protein-induced tubulation of lipid bilayer membranes within a continuum framework using Monte Carlo simulations coupled with the Widom insertion technique to compute excess chemical potentials. Tubular morphologies are spontaneously formed when the density and the curvature-field strength of the membrane-bound proteins exceed their respective thresholds and this transition is marked by a sharp drop in the excess chemical potential. We find that the planar to tubular transition can be described by a micellar model and that the corresponding free-energy barrier increases with an increase in the curvature-field strength (i.e., of protein-membrane interactions) and also with an increase in membrane tension.
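
    For reference, the Widom insertion estimate of the excess chemical potential mentioned above takes the standard form (notation assumed, not quoted from the paper):

        \mu_{\mathrm{ex}} \;=\; -\,k_{\mathrm{B}}T\,
        \ln \left\langle e^{-\Delta U / k_{\mathrm{B}}T} \right\rangle_{N}

    where Delta U is the energy change upon inserting a test (ghost) curvature field into an equilibrated membrane configuration and the average runs over the Monte Carlo ensemble of the N-protein system; the sharp drop in mu_ex reported above marks the onset of tubulation.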

  2. Localized superconductivity in the quantum-critical region of the disorder-driven superconductor-insulator transition in TiN thin films.

    PubMed

    Baturina, T I; Mironov, A Yu; Vinokur, V M; Baklanov, M R; Strunk, C

    2007-12-21

    We investigate low-temperature transport properties of thin TiN superconducting films in the vicinity of the disorder-driven superconductor-insulator transition. In a zero magnetic field, we find an extremely sharp separation between superconducting and insulating phases, evidencing a direct superconductor-insulator transition without an intermediate metallic phase. At moderate temperatures, in the insulating films we reveal thermally activated conductivity with the magnetic field-dependent activation energy. At very low temperatures, we observe a zero-conductivity state, which is destroyed at some depinning threshold voltage V_T. These findings indicate the formation of a distinct collective state of the localized Cooper pairs in the critical region at both sides of the transition.

  3. Environmental Assessment for Proposed Utility Corridors at Edwards Air Force Base, California

    DTIC Science & Technology

    2016-07-01

    AFB. Coordinating with local communities will serve to ensure all communications towers, wind turbines , residential development and other...Minimis Thresholds in Nonattainment Areas ...................................................................... 35 Table 3-4 Wind Erodibility...125 Table 4-3 Summary of Cultural Resources Associated with Proposed Utility Corridors ........................ 126 Table 4-4 Wind

  4. Optimal estimation of recurrence structures from time series

    NASA Astrophysics Data System (ADS)

    beim Graben, Peter; Sellers, Kristin K.; Fröhlich, Flavio; Hutt, Axel

    2016-05-01

    Recurrent temporal dynamics is a phenomenon observed frequently in high-dimensional complex systems and its detection is a challenging task. Recurrence quantification analysis utilizing recurrence plots may extract such dynamics; however, it still encounters an unsolved pertinent problem: the optimal selection of distance thresholds for estimating the recurrence structure of dynamical systems. The present work proposes a stochastic Markov model for the recurrent dynamics that allows for the analytical derivation of a criterion for the optimal distance threshold. The goodness of fit is assessed by a utility function which assumes a local maximum for the threshold reflecting the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
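
    As a rough illustration of the threshold-selection problem (not the Markov-model utility criterion derived in the paper), the sketch below scans candidate distance thresholds for a toy one-dimensional signal and scores each by how close the resulting recurrence rate comes to a target value; the signal, the target rate, and the scoring rule are illustrative assumptions.

        import numpy as np

        def recurrence_matrix(x, eps):
            """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| <= eps."""
            d = np.abs(x[:, None] - x[None, :])        # pairwise distances (1-D signal)
            return (d <= eps).astype(int)

        def recurrence_rate(R):
            """Fraction of recurrent pairs, excluding the trivial diagonal."""
            n = R.shape[0]
            return (R.sum() - n) / (n * (n - 1))

        # toy signal standing in for an observed time series
        rng = np.random.default_rng(0)
        t = np.linspace(0, 8 * np.pi, 400)
        x = np.sin(t) + 0.1 * rng.standard_normal(t.size)

        # scan candidate thresholds; the score -|RR - target| is a simple placeholder
        # for the paper's utility function over the recurrence structure
        target_rr = 0.10
        candidates = np.linspace(0.05, 1.0, 40)
        scores = [-abs(recurrence_rate(recurrence_matrix(x, e)) - target_rr)
                  for e in candidates]
        eps_opt = candidates[int(np.argmax(scores))]
        print(f"selected distance threshold: {eps_opt:.3f}")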

  5. A comparison of South Asian specific and established BMI thresholds for determining obesity prevalence in pregnancy and predicting pregnancy complications: findings from the Born in Bradford cohort.

    PubMed

    Bryant, M; Santorelli, G; Lawlor, D A; Farrar, D; Tuffnell, D; Bhopal, R; Wright, J

    2014-03-01

    To describe how maternal obesity prevalence varies by established international and South Asian specific body mass index (BMI) cut-offs in women of Pakistani origin and investigate whether different BMI thresholds can help to identify women at risk of adverse pregnancy and birth outcomes. Prospective bi-ethnic birth cohort study (the Born in Bradford (BiB) cohort). Bradford, a deprived city in the North of the UK. A total of 8478 South Asian and White British pregnant women participated in the BiB cohort study. Maternal obesity prevalence; prevalence of known obesity-related adverse pregnancy outcomes: mode of birth, hypertensive disorders of pregnancy (HDP), gestational diabetes, macrosomia and pre-term births. Application of South Asian BMI cut-offs increased prevalence of obesity in Pakistani women from 18.8 (95% confidence interval (CI) 17.6-19.9) to 30.9% (95% CI 29.5-32.2). With the exception of pre-term births, there was a positive linear relationship between BMI and prevalence of adverse pregnancy and birth outcomes, across almost the whole BMI distribution. Risk of gestational diabetes and HDP increased more sharply in Pakistani women after a BMI threshold of at least 30 kg m(-2), but there was no evidence of a sharp increase in any risk factors at the new, lower thresholds suggested for use in South Asian women. BMI was a good single predictor of outcomes (area under the receiver operating curve: 0.596-0.685 for different outcomes); prediction was more discriminatory and accurate with BMI as a continuous variable than as a binary variable for any possible cut-off point. Applying the new South Asian threshold to pregnant women would markedly increase those who were referred for monitoring and lifestyle advice. However, our results suggest that lowering the BMI threshold in South Asian women would not improve the predictive ability for identifying those who were at risk of adverse pregnancy outcomes.

  6. Dynamic modeling for rigid rotor bearing systems with a localized defect considering additional deformations at the sharp edges

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Shao, Yimin

    2017-06-01

    Rotor bearing systems (RBSs) play a very valuable role in wind turbine gearboxes, aero-engines, high speed spindles, and other rotational machinery. An in-depth understanding of the vibrations of RBSs is very useful for condition monitoring and diagnosis applications of these machines. A new twelve-degree-of-freedom dynamic model for rigid RBSs with a localized defect (LOD) is proposed. This model can formulate the housing support stiffness, interfacial frictional moments including load dependent and load independent components, time-varying displacement excitation caused by a LOD, additional deformations at the sharp edges of the LOD, and lubricating oil film. The time-varying displacement model is determined by a half-sine function. A new method for calculating the additional deformations at the sharp edges of the LOD is analytically derived based on an elastic quarter-space method presented in the literature. The proposed dynamic model is utilized to analyze the influences of the housing support stiffness and LOD sizes on the vibration characteristics of the rigid RBS, which cannot be predicted by the previous dynamic models in the literature. The results show that the presented method provides a new dynamic modeling approach for formulating the vibrations of a rigid RBS with and without a LOD on the races.

  7. Metastable growth of pure wurtzite InGaAs microstructures.

    PubMed

    Ng, Kar Wei; Ko, Wai Son; Lu, Fanglu; Chang-Hasnain, Connie J

    2014-08-13

    III-V compound semiconductors can exist in two major crystal phases, namely, zincblende (ZB) and wurtzite (WZ). While ZB is thermodynamically favorable in conventional III-V epitaxy, the pure WZ phase can be stable in nanowires with diameters smaller than certain critical values. However, thin nanowires are more vulnerable to surface recombination, and this can ultimately limit their performance as practical devices. In this work, we study a metastable growth mechanism that can yield purely WZ-phased InGaAs microstructures on silicon. InGaAs nucleates as sharp nanoneedles and expands along both axial and radial directions simultaneously in a core-shell fashion. While the base can scale from tens of nanometers to over a micron, the tip can remain sharp over the entire growth. The sharpness maintains a high local surface-to-volume ratio, favoring axial growth of the hexagonal lattice. These unique features lead to the formation of microsized pure WZ InGaAs structures on silicon. To verify that the WZ microstructures are truly metastable, we demonstrate, for the first time, the in situ transformation from WZ to the energy-favorable ZB phase inside a transmission electron microscope. This unconventional core-shell growth mechanism can potentially be applied to other III-V materials systems, enabling the effective utilization of the extraordinary properties of the metastable wurtzite crystals.

  8. Novel EPR characterization of the antioxidant activity of tea leaves

    NASA Astrophysics Data System (ADS)

    Morsy, M. A.; Khaled, M. M.

    2002-04-01

    Electron paramagnetic resonance (EPR) spectroscopy is utilized to investigate several categories of green and black tea: Twining green tea (TGT), Chinese green tea (CGT), and Red-label black tea (RBT). Two EPR signals are observed in all of the studied samples: one is a very weak, sharp EPR signal with ΔHpp ≅ 10 G and g-factor = 2.00023, superimposed on a broad signal with ΔHpp ≅ 550 G and g-factor = 2.02489. The broad signal is characteristic of a manganese(II) complex, while the sharp signal is related to a stable radical of aromatic origin existing in the powder. The features of the manganese EPR signal are attributed to the manganese(II) complex and reflect the molecular behavior of Mn(II) in the protein system of the natural leaves. The sharp signal, which is most probably due to semiquinone radicals, is observed at room temperature and its intensity is markedly affected by photodegradation of the studied samples. The intensity of the manganese(II) EPR signal is found to be related to ageing and disintegration of the tea leaves. Moreover, a direct relation between the relative intensity of the semiquinone radical signal and the antioxidant activity of the studied samples was also established.

  9. Comparison of field olfactometers in a controlled chamber using hydrogen sulfide as the test odorant.

    PubMed

    McGinley, M A; McGinley, C M

    2004-01-01

    A standard method for measuring and quantifying odour in the ambient air utilizes a portable odour detecting and measuring device known as a field olfactometer (US Public Health Service Project Grant A-58-541). The field olfactometer dynamically dilutes the ambient air with carbon-filtered air in distinct ratios known as "Dilutions-to-Threshold" dilution factors (D/Ts), i.e. 2, 4, 7, 15, etc. Thirteen US states and several cities in North America currently utilize field olfactometry as a key component of determining compliance to odour regulations and ordinances. A controlled environmental chamber was utilized, with hydrogen sulfide as the known test odorant. A hydrogen sulfide environment was created in this controlled chamber using an Advanced Calibration Designs, Inc. Cal2000 Hydrogen Sulfide Generator. The hydrogen sulfide concentration inside the chamber was monitored using an Arizona Instruments, Inc. Jerome Model 631 H2S Analyzer. When the environmental chamber reached a desired test concentration, test operators entered the chamber. The dilution-to-threshold odour concentration was measured using a Nasal Ranger Field Olfactometer (St Croix Sensory, Inc.) and a Barnebey Sutcliffe Corp. Scentometer. The actual hydrogen sulfide concentration was also measured at the location in the room where the operators were standing while using the two types of field olfactometers. This paper presents a correlation between dilution-to-threshold values (D/T) and hydrogen sulfide ambient concentration. For example, a D/T of 7 corresponds to ambient H2S concentrations of 5.7-15.6 microg/m3 (4-11 ppbv). During this study, no significant difference was found between results obtained using the Scentometer or the Nasal Ranger (r = 0.82). Also, no significant difference was found between results of multiple Nasal Ranger users (p = 0.309). The field olfactometers yielded hydrogen sulfide thresholds of 0.7-3.0 microg/m3 (0.5-2.0 ppbv). Laboratory olfactometry yielded comparable thresholds of 0.64-1.3 microg/m3 (0.45-0.9 ppbv). These thresholds are consistent with published values.
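
    A dilution-to-threshold reading can be translated into an approximate odorant concentration presented to the assessor if the ambient concentration is known. The sketch below assumes the common definition of D/T as volumes of carbon-filtered air per volume of odorous air, so the diluted concentration is C/(D/T + 1); the definition is an assumption, while the numerical values are taken from the abstract.

        def at_nose_concentration(ambient_ugm3: float, dt: float) -> float:
            """Odorant concentration delivered to the assessor at a D/T setting,
            assuming D/T = filtered-air volumes per volume of odorous air."""
            return ambient_ugm3 / (dt + 1.0)

        # Example with the abstract's figures: D/T = 7 at ambient H2S of 5.7-15.6 ug/m3
        for ambient in (5.7, 15.6):
            diluted = at_nose_concentration(ambient, 7)
            print(f"ambient {ambient:4.1f} ug/m3 -> {diluted:.2f} ug/m3 at the nose")
        # The results (~0.7-2.0 ug/m3) are consistent with the reported
        # field-olfactometer H2S thresholds of 0.7-3.0 ug/m3.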

  10. A vacuum ultraviolet laser pulsed field ionization-photoion study of methane (CH4): Determination of the appearance energy of methylium from methane with unprecedented precision and the resulting impact on the bond dissociation energies of CH4 and CH4+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yih-Chung; Xiong, Bo; Bross, David H.

    Here, we report on the successful implementation of a high-resolution vacuum ultraviolet (VUV) laser pulsed field ionization-photoion (PFI-PI) detection method for the study of unimolecular dissociation of quantum-state- or energy-selected molecular ions. As a test case, we have determined the 0 K appearance energy (AE0) for the formation of methylium, CH3+, from methane, CH4, as AE0(CH3+/CH4) = 14.32271 ± 0.00013 eV. This value has a significantly smaller error limit, but is otherwise consistent with previous laboratory and/or synchrotron-based studies of this dissociative photoionization onset. Furthermore, the sum of the VUV laser PFI-PI spectra obtained for the parent CH4+ ion and the fragment CH3+ ions of methane is found to agree with the earlier VUV pulsed field ionization-photoelectron (VUV-PFI-PE) spectrum of methane, providing unambiguous validation of the previous interpretation that the sharp VUV-PFI-PE step observed at the AE0(CH3+/CH4) threshold ensues because of higher PFI detection efficiency for fragment CH3+ than for parent CH4+. This, in turn, is a consequence of the underlying high-n Rydberg dissociation mechanism for the dissociative photoionization of CH4, which was proposed in previous synchrotron-based VUV-PFI-PE and VUV-PFI-PEPICO studies of CH4. The present highly accurate 0 K dissociative ionization threshold for CH4 can be utilized to derive accurate values for the bond dissociation energies of methane and the methane cation. For methane, the straightforward application of sequential thermochemistry via the positive ion cycle leads to some ambiguity because of two competing VUV-PFI-PE literature values for the ionization energy of the methyl radical. The ambiguity is successfully resolved by applying the Active Thermochemical Tables (ATcT) approach, resulting in D0(H-CH3) = 432.463 ± 0.027 kJ/mol and D0(H-CH3+) = 164.701 ± 0.038 kJ/mol.

  11. A vacuum ultraviolet laser pulsed field ionization-photoion study of methane (CH4): Determination of the appearance energy of methylium from methane with unprecedented precision and the resulting impact on the bond dissociation energies of CH4 and CH4+

    DOE PAGES

    Chang, Yih-Chung; Xiong, Bo; Bross, David H.; ...

    2017-03-27

    Here, we report on the successful implementation of a high-resolution vacuum ultraviolet (VUV) laser pulsed field ionization-photoion (PFI-PI) detection method for the study of unimolecular dissociation of quantum-state- or energy-selected molecular ions. As a test case, we have determined the 0 K appearance energy (AE0) for the formation of methylium, CH3+, from methane, CH4, as AE0(CH3+/CH4) = 14.32271 ± 0.00013 eV. This value has a significantly smaller error limit, but is otherwise consistent with previous laboratory and/or synchrotron-based studies of this dissociative photoionization onset. Furthermore, the sum of the VUV laser PFI-PI spectra obtained for the parent CH4+ ion and the fragment CH3+ ions of methane is found to agree with the earlier VUV pulsed field ionization-photoelectron (VUV-PFI-PE) spectrum of methane, providing unambiguous validation of the previous interpretation that the sharp VUV-PFI-PE step observed at the AE0(CH3+/CH4) threshold ensues because of higher PFI detection efficiency for fragment CH3+ than for parent CH4+. This, in turn, is a consequence of the underlying high-n Rydberg dissociation mechanism for the dissociative photoionization of CH4, which was proposed in previous synchrotron-based VUV-PFI-PE and VUV-PFI-PEPICO studies of CH4. The present highly accurate 0 K dissociative ionization threshold for CH4 can be utilized to derive accurate values for the bond dissociation energies of methane and the methane cation. For methane, the straightforward application of sequential thermochemistry via the positive ion cycle leads to some ambiguity because of two competing VUV-PFI-PE literature values for the ionization energy of the methyl radical. The ambiguity is successfully resolved by applying the Active Thermochemical Tables (ATcT) approach, resulting in D0(H-CH3) = 432.463 ± 0.027 kJ/mol and D0(H-CH3+) = 164.701 ± 0.038 kJ/mol.

  12. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    PubMed

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

    We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers under the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fibers of different lengths as the Brillouin gain medium. For a 5-km-long single-mode fiber, the calculated threshold power for SBS is about 16 mW with the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave.
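
    For orientation, the classic order-of-magnitude estimate of the SBS threshold (the Smith criterion, P_th ≈ 21·K·A_eff/(g_B·L_eff)) can be evaluated as in the sketch below. The fiber parameters are generic textbook values, not the ones used in the paper, so the result only roughly approaches the reported ~16 mW for a 5-km fiber with conventional (non-recycled) pumping.

        import numpy as np

        def sbs_threshold_w(length_m, alpha_db_per_km=0.2, a_eff_m2=80e-12,
                            g_b=5e-11, k_pol=2.0):
            """Smith-criterion estimate of the SBS threshold power (W):
            P_th ~ 21 * K * A_eff / (g_B * L_eff), with L_eff the effective length."""
            alpha = alpha_db_per_km / 4.343 / 1e3          # attenuation in 1/m
            l_eff = (1.0 - np.exp(-alpha * length_m)) / alpha
            return 21.0 * k_pol * a_eff_m2 / (g_b * l_eff)

        # 5-km standard single-mode fiber with generic parameter values
        print(f"SBS threshold ~ {1e3 * sbs_threshold_w(5e3):.1f} mW")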

  13. Smeared spectrum jamming suppression based on generalized S transform and threshold segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xin; Wang, Chunyang; Tan, Ming; Fu, Xiaolong

    2018-04-01

    Smeared spectrum (SMSP) jamming is an effective jamming technique against linear frequency modulation (LFM) radar. Exploiting the difference between the time-frequency distributions of the jamming and the echo, a jamming suppression method based on the generalized S transform (GST) and threshold segmentation is proposed. The sub-pulse period is first estimated from the autocorrelation function. Next, the time-frequency image and the corresponding gray-scale image are obtained via the GST. Finally, the Tsallis cross entropy is utilized to compute the optimal segmentation threshold, and the jamming suppression filter is constructed based on that threshold. Simulation results show that the proposed method performs well in suppressing the false targets produced by SMSP jamming.

  14. AMPLITUDE DISCRIMINATOR HAVING SEPARATE TRIGGERING AND RECOVERY CONTROLS UTILIZING AUTOMATIC TRIGGERING

    DOEpatents

    Chase, R.L.

    1962-01-23

    A transistorized amplitude discriminator circuit is described in which the initial triggering sensitivity and the recovery threshold are separately adjustable in a convenient manner. The discriminator is provided with two independent bias components, one of which is for circuit hysteresis (recovery) and one of which is for trigger threshold level. A switching circuit is provided to remove the second bias component upon activation of the trigger so that the recovery threshold is always at the point where the trailing edge of the input signal pulse goes through zero or other desired value. (AEC)

  15. Effects of pulse duration on magnetostimulation thresholds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr; Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800; National Magnetic Resonance Research Center

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be the number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing a homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations. Results: The magnetostimulation limits decreased with increasing pulse duration (T_pulse). For T_pulse < 18 ms, the thresholds were significantly higher than at the longest pulse durations (p < 0.01, paired Wilcoxon signed-rank test). The normalized magnetostimulation threshold (B_Norm) vs duration curves at all three frequencies agreed almost identically, indicating that the observed effect is independent of the operating frequency. At the shortest pulse duration (T_pulse ≈ 2 ms), the thresholds were approximately 24% higher than at the asymptotes. The thresholds decreased to within 4% of their asymptotic values for T_pulse > 20 ms. These trends were well characterized (R² = 0.78) by a stretched exponential function given by B_Norm = 1 + α·exp(−(T_pulse/β)^γ), where the fitted parameters were α = 0.44, β = 4.32, and γ = 0.60. Conclusions: This work shows for the first time that the magnetostimulation thresholds decrease with increasing pulse duration, and that this effect is independent of the operating frequency. Normalized threshold vs duration trends are almost identical for a 20-fold range of frequencies: the thresholds are significantly higher at short pulse durations and settle to within 4% of their asymptotic values for durations longer than 20 ms. These results emphasize the importance of matching the human-subject experiments to the imaging conditions of a particular setup. Knowing the dependence of the safety limits on all contributing factors is critical for increasing the time-efficiency of imaging systems that utilize time-varying magnetic fields.
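
    The reported stretched-exponential trend can be reproduced numerically. The sketch below fits B_Norm(T_pulse) = 1 + α·exp(−(T_pulse/β)^γ) to synthetic data generated from the published parameter values (α = 0.44, β = 4.32, γ = 0.60); the data points are synthetic and for illustration only, not measurements from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def b_norm(t_pulse, alpha, beta, gamma):
            """Stretched-exponential threshold model: B_norm = 1 + a*exp(-(T/b)^g)."""
            return 1.0 + alpha * np.exp(-(t_pulse / beta) ** gamma)

        # synthetic durations (ms) and noisy "thresholds" generated from the published
        # parameters purely to exercise the fit -- not measured data
        rng = np.random.default_rng(1)
        t = np.array([2, 4, 8, 16, 32, 63, 94, 125], dtype=float)
        y = b_norm(t, 0.44, 4.32, 0.60) + 0.01 * rng.standard_normal(t.size)

        popt, _ = curve_fit(b_norm, t, y, p0=(0.5, 5.0, 0.5))
        print("alpha, beta, gamma =", np.round(popt, 2))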

  16. Polymer collapse, protein folding, and the percolation threshold.

    PubMed

    Meirovitch, Hagai

    2002-01-15

    We study the transition of polymers in the dilute regime from a swollen shape at high temperatures to their low-temperature structures. The polymers are modeled by a single self-avoiding walk (SAW) on a lattice for which l of the monomers (the H monomers) are self-attracting, i.e., if two nonbonded H monomers become nearest neighbors on the lattice they gain energy of interaction (ε = −|ε|); the second type of monomers, denoted P, are neutral. This HP model was suggested by Lau and Dill (Macromolecules 1989, 22, 3986-3997) to study protein folding, where H and P are the hydrophobic and polar amino acid residues, respectively. The model is simulated on the square and simple cubic (SC) lattices using the scanning method. We show that the ground state and the sharpness of the transition depend on the lattice, the fraction g of the H monomers, as well as on their arrangement along the chain. In particular, if the H monomers are distributed at random and g is larger than the site percolation threshold of the lattice, a collapse transition is very likely to occur. This conclusion, drawn for the lattice models, is also applicable to proteins where an effective lattice with coordination number between that of the SC lattice and the body centered cubic lattice is defined. Thus, the average fraction of hydrophobic amino acid residues in globular proteins is found to be close to the percolation threshold of the effective lattice.

  17. Finding the signal in the noise: Could social media be utilized for early hospital notification of multiple casualty events?

    PubMed Central

    Moore, Sara; Wakam, Glenn; Hubbard, Alan E.; Cohen, Mitchell J.

    2017-01-01

    Introduction: Delayed notification and lack of early information hinder timely hospital-based activations in large-scale multiple casualty events. We hypothesized that Twitter real-time data would produce a unique and reproducible signal within minutes of multiple casualty events, and we investigated the timing of the signal compared with other hospital disaster notification mechanisms. Methods: Using disaster-specific search terms, all relevant tweets from the event to 7 days post-event were analyzed for 5 recent US-based multiple casualty events (Boston Bombing [BB], SF Plane Crash [SF], Napa Earthquake [NE], Sandy Hook [SH], and Marysville Shooting [MV]). Quantitative and qualitative analyses of tweet utilization were compared across events. Results: Over 3.8 million tweets were analyzed (SH 1.8m, BB 1.1m, SF 430k, MV 250k, NE 205k). Peak tweets per min ranged from 209 to 3326. The mean followers per tweeter ranged from 3382 to 9992 across events. Retweets were tweeted a mean of 82-564 times per event. Tweets occurred very rapidly for all events (<2 mins) and reached 1% of the total event-specific tweets within a median of 13 minutes of the first 911 calls. A 200 tweets/min threshold was reached fastest with NE (2 min), BB (7 min), and SF (18 min). If this threshold were utilized as a signaling mechanism to place local hospitals on standby for possible large-scale events, in all case studies this signal would have preceded patient arrival. Importantly, this signaling threshold would also have preceded traditional disaster notification mechanisms in SF and NE, and was simultaneous with them in BB and MV. Conclusions: These social media data demonstrate that this mechanism is a powerful, predictable, and potentially important resource for optimizing disaster response. Further investigation is warranted to assess the utility of prospective signaling thresholds for hospital-based activation. PMID:28982201
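
    A 200 tweets/min standby trigger of the kind described can be prototyped as a sliding-window rate check over event-specific tweet timestamps. This is an illustrative sketch, not the authors' pipeline; the threshold value comes from the abstract, while the one-minute window and the offline, sorted-timestamp interface are assumptions.

        from collections import deque

        def first_trigger_time(timestamps_s, rate_threshold=200, window_s=60.0):
            """Return the first timestamp (seconds) at which the number of
            event-specific tweets in the trailing window reaches the threshold,
            or None if it is never reached. Timestamps are assumed sorted ascending."""
            window = deque()
            for t in timestamps_s:
                window.append(t)
                while window and window[0] < t - window_s:
                    window.popleft()
                if len(window) >= rate_threshold:
                    return t
            return None

        # example: dense burst of timestamps (seconds since the first 911 call)
        times = [i * 0.25 for i in range(1000)]      # ~240 tweets/min, illustrative
        print(first_trigger_time(times))             # time at which standby would trigger

    In a live system the same check would run on the incoming stream; here it is applied offline to a sorted list of timestamps.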

  18. What is known about the cost-effectiveness of orphan drugs? Evidence from cost-utility analyses.

    PubMed

    Picavet, E; Cassiman, D; Simoens, S

    2015-06-01

    In times of financial and economic hardship, governments are looking to contain pharmaceutical expenditure by focusing on cost-effective drugs. Because of their high prices and difficulties in demonstrating effectiveness in small patient populations, orphan drugs are often perceived as not able to meet traditional reimbursement threshold value for money. The aim of this study was to provide an overview of the available evidence on the cost-effectiveness of orphan drugs. All orphan drugs listed as authorized on the website of the European Medicines Agency on 21 November 2013 were included in the analysis. Cost-utility analyses (CUAs) were identified by searching the Tufts Medical Center Cost-Effectiveness Analysis Registry and Embase. For each CUA, a number of variables were collected. The search identified 23 articles on the Tufts registry and 167 articles on Embase. The final analysis included 45 CUAs and 61 incremental cost-utility ratios (ICURs) for 19 orphan drugs. Of all ICURS, 16·3% were related to dominant drugs (i.e. more effective and less expensive than the comparator), 70·5% were related to drugs that are more effective, but at a higher cost, and 13·1% were related to dominated drugs (i.e. less effective and more expensive than the comparator). The median overall ICUR was €40 242 per quality-adjusted life year (QALY) with a minimum ICUR of €6311/QALY and a maximum ICUR of €974,917/QALY. This study demonstrates that orphan drugs can meet traditional reimbursement thresholds. Considering a threshold of £30,000/QALY, in this study, ten (52·6%) of a total of 19 orphan drugs for which data were available meet the threshold. As much as fifteen orphan drugs (78·9%) are eligible for reimbursement if a threshold of €80,000/QALY is considered. © 2015 John Wiley & Sons Ltd.

  19. Development of Novel Composite and Random Materials for Nonlinear Optics and Lasers

    NASA Technical Reports Server (NTRS)

    Noginov, Mikhail

    2002-01-01

    A qualitative model explaining sharp spectral peaks in emission of solid-state random laser materials with broad-band gain is proposed. The suggested mechanism of coherent emission relies on synchronization of phases in an ensemble of emitting centers, via time delays provided by a network of random scatterers, and amplification of spontaneous emission that supports the spontaneously organized coherent state. Laser-like emission from powders of solid-state luminophosphors, characterized by dramatic narrowing of the emission spectrum and shortening of emission pulses above the threshold, was first observed by Markushev et al. and further studied by a number of research groups. In particular, it has been shown that when the pumping energy significantly exceeds the threshold, one or several narrow emission lines can be observed in broad-band gain media with scatterers, such as films of ZnO nanoparticles, films of pi-conjugated polymers, or infiltrated opals. The experimental features, commonly observed in various solid-state random laser materials characterized by different particle sizes, different values of the photon mean free path l*, different indexes of refraction, etc., can be described as follows. (Liquid dye random lasers are not discussed here.)

  20. Asymmetric bubble collapse and jetting in generalized Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Shukla, Ratnesh K.; Freund, Jonathan B.

    2017-11-01

    The jetting dynamics of a gas bubble near a rigid wall in a non-Newtonian fluid are investigated using an axisymmetric simulation model. The bubble gas is assumed to be homogeneous, with density and pressure related through a polytropic equation of state. An Eulerian numerical description, based on a sharp interface capturing method for the shear-free bubble-liquid interface and an incompressible Navier-Stokes flow solver for generalized fluids, is developed specifically for this problem. Detailed simulations for a range of rheological parameters in the Carreau model show both the stabilizing and destabilizing non-Newtonian effects on the jet formation and impact. In general, for fixed driving pressure ratio, stand-off distance and reference zero-shear-rate viscosity, shear-thinning and shear-thickening promote and suppress jet formation and impact, respectively. For a sufficiently large high-shear-rate limit viscosity, the jet impact is completely suppressed. Thresholds are also determined for the Carreau power-index and material time constant. The dependence of these threshold rheological parameters on the non-dimensional driving pressure ratio and wall stand-off distance is similarly established. Implications for tissue injury in therapeutic ultrasound will be discussed.
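
    The Carreau viscosity law referred to above has the standard form (with zero- and infinite-shear-rate viscosities mu_0 and mu_inf, material time constant lambda, and power index n; notation assumed, not quoted from the paper):

        \mu(\dot{\gamma}) \;=\; \mu_{\infty}
        \;+\; \left(\mu_{0}-\mu_{\infty}\right)
        \left[\,1+(\lambda\dot{\gamma})^{2}\right]^{\frac{n-1}{2}}

    Values n < 1 give shear-thinning and n > 1 shear-thickening behavior, while n = 1 (or lambda → 0) recovers a Newtonian fluid; the thresholds on the power index and time constant quoted above should be read in this sense.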

  1. Suppression tuning of distortion-product otoacoustic emissions: Results from cochlear mechanics simulation

    PubMed Central

    Liu, Yi-Wen; Neely, Stephen T.

    2013-01-01

    This paper presents the results of simulating the acoustic suppression of distortion-product otoacoustic emissions (DPOAEs) from a computer model of cochlear mechanics. A tone suppressor was introduced, causing the DPOAE level to decrease, and the decrement was plotted against an increasing suppressor level. Suppression threshold was estimated from the resulting suppression growth functions (SGFs), and suppression tuning curves (STCs) were obtained by plotting the suppression threshold as a function of suppressor frequency. Results show that the slope of SGFs is generally higher for low-frequency suppressors than high-frequency suppressors, resembling those obtained from normal hearing human ears. By comparing responses of normal (100%) vs reduced (50%) outer-hair-cell sensitivities, the model predicts that the tip-to-tail difference of the STCs correlates well with that of intra-cochlear iso-displacement tuning curves. The correlation is poorer, however, between the sharpness of the STCs and that of the intra-cochlear tuning curves. These results agree qualitatively with what was recently reported from normal-hearing and hearing-impaired human subjects, and examination of intra-cochlear model responses can provide the needed insight regarding the interpretation of DPOAE STCs obtained in individual ears. PMID:23363112

  2. Biological applications and effects of optical masers

    NASA Astrophysics Data System (ADS)

    Ham, William T., Jr.; Mueller, Harold A.; Williams, Ray C.; Geeraets, Walter J.; Ruffolo, John J., Jr.

    1988-02-01

    Research experiments and projects pertaining to the ocular hazards of lasers and other optical sources are reviewed and discussed in some detail. Early studies to determine threshold retinal damage in the rabbit from ruby and neodymium lasers are described and followed by more elaborate experiments with monkeys using the He-Ne laser. A comparison between threshold retinal lesions in human volunteers, monkeys and rabbits is given. Retinal damage in the rhesus monkey is evaluated in terms of visual acuity. Quantitative data on solar retinitis as determined in the rhesus monkey are provided and the effects of wavelength on light toxicity are evaluated for eight monochromatic laser lines. Experiments performed at Los Alamos to evaluate the ocular effects and hazards of picosecond and nanosecond pulses of radiation from CO2 and HF lasers are described. A list of published papers describing the above research is included. The HF laser effects on the rabbit cornea were very similar to those caused by exposure to the CO2 laser. The energy from the CO2 and HF lasers is completely absorbed in the superficial layers of the cornea, creating a sharp sonic or shock wave which can disrupt the epithelial cells but does not penetrate to the stroma.

  3. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    NASA Astrophysics Data System (ADS)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate the epidemic spreading with awareness cascade, we propose a local awareness controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of epidemic threshold βc with the local awareness ratio α approximating 0.5 , which induces two-stage effects on epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5 . The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential access to suppressing and controlling the awareness cascading systems.

  4. Size and stochasticity in irrigated social-ecological systems

    NASA Astrophysics Data System (ADS)

    Puy, Arnald; Muneepeerakul, Rachata; Balbo, Andrea L.

    2017-03-01

    This paper presents a systematic study of the relation between the size of irrigation systems and the management of uncertainty. We specifically focus on studying, through a stylized theoretical model, how stochasticity in water availability and taxation interacts with the stochastic behavior of the population within irrigation systems. Our results indicate the existence of two key population thresholds for the sustainability of any irrigation system: the critical population size required to keep the irrigation system operative, and N*, the population threshold at which the incentive to work inside the irrigation system equals the incentive to work elsewhere. Crossing the former threshold irretrievably leads to system collapse. N* is the population level with a sub-optimal per capita payoff towards which irrigation systems tend to gravitate. When subjected to strong stochasticity in water availability or taxation, irrigation systems might suffer sharp population drops and irreversibly disintegrate into a system collapse, via a mechanism we dub 'collapse trap'. Our conceptual study establishes the basis for further work aiming at appraising the dynamics between size and stochasticity in irrigation systems, whose understanding is key for devising mitigation and adaptation measures to ensure their sustainability in the face of increasing and inevitable uncertainty.

  5. Each procedure matters: threshold for surgeon volume to minimize complications and decrease cost associated with adrenalectomy.

    PubMed

    Anderson, Kevin L; Thomas, Samantha M; Adam, Mohamed A; Pontius, Lauren N; Stang, Michael T; Scheri, Randall P; Roman, Sanziana A; Sosa, Julie A

    2018-01-01

    An association has been suggested between increasing surgeon volume and improved patient outcomes, but a threshold has not been defined for what constitutes a "high-volume" adrenal surgeon. Adult patients who underwent adrenalectomy by an identifiable surgeon between 1998 and 2009 were selected from the Healthcare Cost and Utilization Project National Inpatient Sample. Logistic regression modeling with restricted cubic splines was utilized to estimate the association between annual surgeon volume and complication rates in order to identify a volume threshold. A total of 3,496 surgeons performed adrenalectomies on 6,712 patients; median annual surgeon volume was 1 case. After adjustment, the likelihood of experiencing a complication decreased with increasing annual surgeon volume up to 5.6 cases (95% confidence interval, 3.27-5.96). After adjustment, patients undergoing resection by low-volume surgeons (<6 cases/year) were more likely to experience complications (odds ratio 1.71, 95% confidence interval 1.27-2.31, P = .005), to have a longer hospital stay (relative risk 1.46, 95% confidence interval 1.25-1.70, P = .003), and to incur greater cost (+26.2%, 95% confidence interval 12.6-39.9, P = .02). This study suggests an annual surgeon volume threshold (≥6 cases/year) that is associated with improved patient outcomes and decreased hospital cost. This volume threshold has implications for quality improvement, surgical referral and reimbursement, and surgical training. Copyright © 2017 Elsevier Inc. All rights reserved.
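
    The modelling step described (logistic regression with restricted cubic splines on annual surgeon volume) can be sketched with patsy's natural cubic spline basis inside a statsmodels formula, as below. The data here are synthetic stand-ins with hypothetical column names; in the study the rows would come from the HCUP National Inpatient Sample, and the spline degrees of freedom is an assumption.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # synthetic patient-level stand-in data: one row per adrenalectomy
        rng = np.random.default_rng(0)
        n = 4000
        volume = rng.integers(1, 30, size=n)               # annual surgeon volume (cases/yr)
        age = rng.normal(55, 12, size=n)
        risk = 1 / (1 + np.exp(1.5 + 0.15 * np.minimum(volume, 6) - 0.01 * (age - 55)))
        df = pd.DataFrame({"complication": rng.binomial(1, risk),
                           "surgeon_volume": volume,
                           "age": age})

        # logistic regression with a restricted (natural) cubic spline on volume,
        # via patsy's cr() basis inside the statsmodels formula interface
        model = smf.logit("complication ~ cr(surgeon_volume, df=4) + age", data=df).fit()

        # adjusted complication probability across the volume range; a threshold is
        # read off where the curve flattens out
        grid = pd.DataFrame({"surgeon_volume": np.arange(1, 30), "age": 55.0})
        print(model.predict(grid).round(3))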

  6. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) recordings are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises such as baseline wander, power line interference, and electromagnetic interference during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform is an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted for processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising proves more efficient than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root-mean-square difference are calculated as quantitative measures of denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals denoised with the proposed method coincide with those of the original ECG signals.
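
    A sigmoid-based compromise between hard and soft thresholding can be sketched with PyWavelets as below. The shrinkage function used here is one plausible form of such a compromise, not necessarily the exact function proposed in the paper, and the test signal, wavelet choice, and universal-threshold rule are illustrative assumptions.

        import numpy as np
        import pywt

        def sigmoid_threshold(coeffs, thr, k=10.0):
            """Shrink wavelet coefficients with a smooth, sigmoid-weighted rule:
            behaves like hard thresholding for |w| >> thr and strongly attenuates
            |w| << thr, avoiding the discontinuity at +/-thr."""
            w = np.asarray(coeffs)
            weight = 1.0 / (1.0 + np.exp(-k * (np.abs(w) - thr) / thr))
            return weight * w

        def denoise_ecg(signal, wavelet="db4", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # universal threshold estimated from the finest-scale detail coefficients
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
            denoised = [coeffs[0]] + [sigmoid_threshold(c, thr) for c in coeffs[1:]]
            return pywt.waverec(denoised, wavelet)[: len(signal)]

        # toy "ECG-like" trace: spiky periodic signal plus noise (illustration only)
        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 4096)
        ecg = np.exp(-((t % 1.0) - 0.5) ** 2 / 0.002) + 0.2 * rng.standard_normal(t.size)
        clean = denoise_ecg(ecg)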

  7. The Utility of Selection for Military and Civilian Jobs

    DTIC Science & Technology

    1989-07-01

    parsimonious use of information; the relative ease in making threshold (break-even) judgments compared to estimating actual SDy values higher than a... threshold value, even though judges are unlikely to agree on the exact point estimate for the SDy parameter; and greater understanding of how even small...ability, spatial ability, introversion , anxiety) considered to vary or differ across individuals. A construct (sometimes called a latent variable) is not

  8. Modelling the future Israel EEWS performance using synthetic catalogue

    NASA Astrophysics Data System (ADS)

    Pinsky, V.

    2017-10-01

    In June 2012, the Israeli government decided to build an Earthquake Early Warning System (EEWS) in the country. The suggested network configuration was a staggered line of ~100 stations along the regional main faults, the Dead Sea fault and the Carmel fault, with an additional ~22 stations spread over the country. The EEWS alarm system was to utilize two approaches: a P-wave-based algorithm combined with the S-threshold method. The former utilizes the first wave arrivals at several of the closest stations for prompt location and the three initial seconds of the waveform data for magnitude estimation. The latter issues an alarm when the surface shaking (velocity or acceleration) exceeds a relatively high threshold, corresponding to a magnitude 5 earthquake at a short distance of about 10 km, at a minimum of two neighbouring stations. For each of the approaches, and for a reasonable combination of them, we simulate the EEWS performance based on a synthetic catalogue. The input seismicity parameters for the processing are extracted from the real instrumental catalogue. Using general ground-motion prediction equations for peak ground acceleration, τc and Pd, we then evaluate how the false and missed alarm rates depend on the corresponding thresholds. In practice, these dependencies in turn guide the choice of initial EEWS thresholds that provide suitable false and missed alarm rates.
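
    The S-threshold criterion described (raise an alarm only when ground shaking exceeds a high threshold at two or more neighbouring stations) can be sketched as follows; the station geometry, the PGA threshold value, and the neighbour radius are illustrative assumptions, not values from the study.

        import numpy as np

        def s_threshold_alert(pga, coords, pga_thresh, neighbour_km=30.0):
            """Return True if at least two stations within `neighbour_km` of each
            other both exceed the PGA threshold.
            pga    : array of peak ground accelerations, one per station
            coords : (n, 2) array of station easting/northing in km
            """
            hot = np.flatnonzero(np.asarray(pga) >= pga_thresh)
            for a in hot:
                for b in hot:
                    if a < b and np.linalg.norm(coords[a] - coords[b]) <= neighbour_km:
                        return True
            return False

        # toy example: three stations, two nearby ones exceeding the threshold
        coords = np.array([[0.0, 0.0], [12.0, 5.0], [80.0, 40.0]])
        pga = np.array([0.09, 0.11, 0.01])          # in g, illustrative values
        print(s_threshold_alert(pga, coords, pga_thresh=0.08))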

  9. SHARP pre-release v1.0 - Current Status and Documentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahadevan, Vijay S.; Rahaman, Ronald O.

    The NEAMS Reactor Product Line effort aims to develop an integrated multiphysics simulation capability for the design and analysis of future generations of nuclear power plants. The Reactor Product Line code suite’s multi-resolution hierarchy is being designed to ultimately span the full range of length and time scales present in relevant reactor design and safety analyses, as well as scale from desktop to petaflop computing platforms. In this report, building on a previous report issued in September 2014, we describe our continued efforts to integrate thermal/hydraulics, neutronics, and structural mechanics modeling codes to perform coupled analysis of a representative fast sodium-cooled reactor core in preparation for a unified release of the toolkit. The work reported in the current document covers the software engineering aspects of managing the entire stack of components in the SHARP toolkit and the continuous integration efforts ongoing to prepare a release candidate for interested reactor analysis users. Here we report on the continued integration effort of PROTEUS/Nek5000 and Diablo into the NEAMS framework and the software processes that enable users to utilize the capabilities without losing scientific productivity. Due to the complexity of the individual modules and their necessary/optional dependency library chain, we focus on the configuration and build aspects of the SHARP toolkit, which includes the capability to auto-download dependencies and configure/install with optimal flags in an architecture-aware fashion. Such complexity is untenable without strong software engineering processes such as source management, source control, change reviews, unit tests, integration tests and continuous test suites. Details on these processes are provided in the report as a building step for a SHARP user guide that will accompany the first release, expected by March 2016.

  10. Current maternal age recommendations for prenatal diagnosis: a reappraisal using the expected utility theory.

    PubMed

    Sicherman, N; Bombard, A T; Rappoport, P

    1995-01-01

    The expected utility theory suggests eliminating an age-specific criterion for recommending prenatal diagnosis to patients. We isolate the factors which patients and physicians need to consider intelligently in prenatal diagnosis, and show that the sole use of a threshold age as a screening device is inadequate. Such a threshold fails to consider adequately patients' attitudes regarding many of the possible outcomes of prenatal diagnosis, in particular the birth of a chromosomally abnormal child and procedure-related miscarriages. It also precludes testing younger women and encourages testing in patients who do not necessarily require or desire it. All pregnant women should be informed about their prenatal diagnosis options, screening techniques, and diagnostic procedures, including their respective limitations, risks, and benefits.

  11. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied directly to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928

  12. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied directly to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.
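
    A minimal SART-type iteration interleaved with a shrinkage step can be sketched as below. For brevity, a soft-threshold stand-in is applied directly to the image, whereas the papers apply the analytic half-threshold operator of Xu et al. in a (pseudo-invertible) sparsifying transform; the toy system matrix and parameters are illustrative assumptions, so this shows the loop structure only.

        import numpy as np

        def sart_shrink(A, b, n_iter=50, relax=1.0, thr=0.01):
            """SART update followed by a soft-threshold shrinkage (a stand-in for the
            half-threshold operator applied in a sparsifying transform)."""
            m, n = A.shape
            row_sums = A.sum(axis=1)            # a_{i+}
            col_sums = A.sum(axis=0)            # a_{+j}
            x = np.zeros(n)
            for _ in range(n_iter):
                resid = (b - A @ x) / np.maximum(row_sums, 1e-12)
                x = x + relax * (A.T @ resid) / np.maximum(col_sums, 1e-12)
                x = np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)   # shrinkage step
            return x

        # toy test: recover a sparse non-negative "image" from noisy projections
        rng = np.random.default_rng(3)
        A = rng.random((80, 120))
        x_true = np.zeros(120)
        x_true[rng.choice(120, 8, replace=False)] = 1.0
        b = A @ x_true + 0.01 * rng.standard_normal(80)
        x_rec = sart_shrink(A, b)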

  13. A principled approach to setting optimal diagnostic thresholds: where ROC and indifference curves meet.

    PubMed

    Irwin, R John; Irwin, Timothy C

    2011-06-01

    Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
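
    One standard way to write the optimal operating point described above (conventions vary, so this is an assumed notation rather than the paper's): the slope of the ROC curve at the expected-utility-maximizing threshold equals the prior odds against disease multiplied by the ratio of net utilities,

        \left.\frac{d\,\mathrm{TPR}}{d\,\mathrm{FPR}}\right|_{\mathrm{opt}}
        \;=\;
        \frac{P(D^{-})}{P(D^{+})}\cdot
        \frac{U_{\mathrm{TN}}-U_{\mathrm{FP}}}{U_{\mathrm{TP}}-U_{\mathrm{FN}}}

    where D+ and D− denote the diseased and non-diseased states and U_TP, U_FP, U_TN, U_FN are the utilities of the four decision outcomes; the indifference curve tangent to the ROC curve has exactly this slope.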

  14. Understanding sharps injuries in home healthcare: The Safe Home Care qualitative methods study to identify pathways for injury prevention.

    PubMed

    Markkanen, Pia; Galligan, Catherine; Laramie, Angela; Fisher, June; Sama, Susan; Quinn, Margaret

    2015-04-11

    Home healthcare is one of the fastest growing sectors in the United States. Percutaneous injuries from sharp medical devices (sharps) are a source of bloodborne pathogen infections among home healthcare workers and community members. Sharps use and disposal practices in the home are highly variable and there is no comprehensive analysis of the system of sharps procurement, use and disposal in home healthcare. This gap is a barrier to effective public health interventions. The objectives of this study were to i) identify the full range of pathways by which sharps enter and exit the home, the stakeholders involved, and barriers to using sharps with injury prevention features; and ii) assess the leverage points for preventive interventions. This study employed qualitative research methods to develop two systems maps of the use of sharps and prevention of sharps injuries in home healthcare. Twenty-six in-depth interview sessions were conducted, including home healthcare agency clinicians, public health practitioners, sharps device manufacturers, injury prevention advocates, pharmacists and others. Interviews were audio-recorded and the transcripts were analyzed thematically using NVivo qualitative research analysis software. Analysis of supporting archival material was also conducted. All findings guided development of the two maps. Sharps enter the home via multiple complex pathways involving home healthcare providers and home users. The providers reported using sharps with injury prevention features. However, home users' sharps seldom had injury prevention features and sharps were commonly re-used for convenience and cost-savings. Improperly discarded sharps present hazards to caregivers, waste handlers, and community members. The most effective intervention potential exists at the beginning of the sharps systems maps, where interventions can eliminate or minimize sharps injuries, in particular with needleless treatment methods and sharps with injury prevention features. Manufacturers and insurance providers can improve safety with more affordable and accessible sharps with injury prevention features for home users. Sharps disposal campaigns, free-of-charge disposal containers, and convenient disposal options remain essential. Sharps injuries are preventable through public health actions that promote needleless treatment methods, sharps with injury prevention features, and safe disposal practices. Communication about sharps hazards is needed for all home healthcare stakeholders.

  15. Comparisons of two moments‐based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    USGS Publications Warehouse

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-01-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.
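
    For orientation, the sketch below shows only the gage-record baseline against which such estimators are judged, a method-of-moments log Pearson type III fit to systematic annual peaks; it is not the EMA or the B17H historical-weighting procedure, and the 40-year record is hypothetical.

      # Gage-data-only log-Pearson III quantile by the method of moments in log space.
      import numpy as np
      from scipy import stats

      def lp3_quantile(annual_peaks_cfs, return_period=100.0):
          logq = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
          mean, std = logq.mean(), logq.std(ddof=1)
          skew = stats.skew(logq, bias=False)        # station skew, no regional weighting
          p = 1.0 - 1.0 / return_period              # non-exceedance probability
          return 10.0 ** stats.pearson3.ppf(p, skew, loc=mean, scale=std)

      rng = np.random.default_rng(1)
      peaks = 10 ** rng.normal(3.5, 0.25, size=40)   # hypothetical annual peaks (cfs)
      print(f"X100 is roughly {lp3_quantile(peaks, 100):,.0f} cfs for this record")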

  16. Comparisons of two moments-based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    NASA Astrophysics Data System (ADS)

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-09-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed-threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed-threshold exceedance cases. EMA performed comparatively much better in other fixed-threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV-simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.

  17. Complex symmetric matrices with strongly stable iterates

    NASA Technical Reports Server (NTRS)

    Tadmor, E.

    1985-01-01

    Complex-valued symmetric matrices are studied. A simple expression for the spectral norm of such matrices is obtained, by utilizing a unitarily congruent invariant form. A sharp criterion is provided for identifying those symmetric matrices whose spectral norm is not exceeding one: such strongly stable matrices are usually sought in connection with convergent difference approximations to partial differential equations. As an example, the derived criterion is applied to conclude the strong stability of a Lax-Wendroff scheme.

  18. Convex lattice polygons of fixed area with perimeter-dependent weights.

    PubMed

    Rajesh, R; Dhar, Deepak

    2005-01-01

    We study fully convex polygons with a given area, and variable perimeter length on square and hexagonal lattices. We attach a weight t^m to a convex polygon of perimeter m and show that the sum of weights of all polygons with a fixed area s varies as s^{-θ_conv} e^{K(t)√s} for large s and t less than a critical threshold t_c, where K(t) is a t-dependent constant, and θ_conv is a critical exponent which does not change with t. Using heuristic arguments, we find that θ_conv is 1/4 for the square lattice, but -1/4 for the hexagonal lattice. The reason for this unexpected nonuniversality of θ_conv is traced to the existence of sharp corners in the asymptotic shape of these polygons.

  19. Photodetachment of electrons from amide and arsenide ions - The electron affinities of NH2., and AsH2.

    NASA Technical Reports Server (NTRS)

    Smyth, K. C.; Brauman, J. I.

    1972-01-01

    The relative cross section for the gas-phase photodetachment of electrons has been determined for NH2(-) in the wavelength region of 1195 to 1695 nm and for AsH2(-) in the region from 620 to 1010 nm. An ion cyclotron resonance spectrometer was used to generate, trap, and detect negative ions. A 1000-W xenon arc lamp with a grating monochromator was used as the light source, except for one series of experiments in which a tunable laser was employed. Single sharp thresholds were observed in both cross sections, and the following electron affinity values were determined: 0.744 ± 0.022 eV for NH2. and 1.27 ± 0.03 eV for AsH2.

  20. Highly pH-responsive sensor based on amplified spontaneous emission coupled to colorimetry.

    PubMed

    Zhang, Qi; Castro Smirnov, Jose R; Xia, Ruidong; Pedrosa, Jose M; Rodriguez, Isabel; Cabanillas-Gonzalez, Juan; Huang, Wei

    2017-04-07

    We demonstrated a simple, directly-readable approach for high resolution pH sensing. The method was based on sharp changes in Amplified Spontaneous Emission (ASE) of a Stilbene 420 (ST) laser dye triggered by the pH-dependent absorption of Bromocresol Green (BG). The ASE threshold of BG:ST solution mixtures exhibited a strong dependence on BG absorption, which was drastically changed by the variations of the pH of BG solution. As a result, ASE on-off or off-on switching was observed at different pH levels achieved by ammonia doping. By changing the concentration of the BG solution and the BG:ST blend ratio, this approach allowed the detection of pH changes with a sensitivity down to 0.05 in the 10-11 pH range.

  1. Spatial nonlinearities: Cascading effects in the earth system

    USGS Publications Warehouse

    Peters, Debra P.C.; Pielke, R.A.; Bestelmeyer, B.T.; Allen, Craig D.; Munson-McGee, Stuart; Havstad, K. M.; Canadell, Josep G.; Pataki, Diane E.; Pitelka, Louis F.

    2006-01-01

    Nonlinear behavior is prevalent in all aspects of the Earth System, including ecological responses to global change (Gallagher and Appenzeller 1999; Steffen et al. 2004). Nonlinear behavior refers to a large, discontinuous change in response to a small change in a driving variable (Rial et al. 2004). In contrast to linear systems where responses are smooth, well-behaved, continuous functions, nonlinear systems often undergo sharp or discontinuous transitions resulting from the crossing of thresholds. These nonlinear responses can result in surprising behavior that makes forecasting difficult (Kaplan and Glass 1995). Given that many system dynamics are nonlinear, it is imperative that conceptual and quantitative tools be developed to increase our understanding of the processes leading to nonlinear behavior in order to determine if forecasting can be improved under future environmental changes (Clark et al. 2001).

  2. Intermittent dynamics in complex systems driven to depletion.

    PubMed

    Escobar, Juan V; Pérez Castillo, Isaac

    2018-03-19

    When complex systems are driven to depletion by some external factor, their non-stationary dynamics can present an intermittent behaviour between relative tranquility and bursts of activity whose consequences are often catastrophic. To understand and ultimately be able to predict such dynamics, we propose an underlying mechanism based on sharp thresholds of a local generalized energy density that naturally leads to negative feedback. We find a transition from a continuous regime to an intermittent one, in which avalanches can be predicted despite the stochastic nature of the process. This model may have applications in many natural and social complex systems where a rapid depletion of resources or generalized energy drives the dynamics. In particular, we show how this model accurately describes the time evolution and avalanches present in a real social system.

  3. Characteristics of edge breakdowns on Teflon samples

    NASA Technical Reports Server (NTRS)

    Yadlowsky, E. J.; Hazelton, R. C.; Churchill, R. J.

    1980-01-01

    The characteristics of electrical discharges induced on silverbacked Teflon samples irradiated by a monoenergetic electron beam have been studied under controlled laboratory conditions. Measurements of breakdown threshold voltages indicate a marked anisotropy in the electrical breakdown properties of Teflon: differences of up to 10 kV in breakdown threshold voltage are observed depending on the sample orientation. The material anisotropy can be utilized in spacecraft construction to reduce the magnitude of discharge currents.

  4. Characterization of two subsurface H2-utilizing bacteria, Desulfomicrobium hypogeium sp. nov. and Acetobacterium psammolithicum sp. nov., and their ecological roles.

    PubMed

    Krumholz, L R; Harris, S H; Tay, S T; Suflita, J M

    1999-06-01

    We examined the relative roles of acetogenic and sulfate-reducing bacteria in H2 consumption in a previously characterized subsurface sandstone ecosystem. Enrichment cultures originally inoculated with ground sandstone material obtained from a Cretaceous formation in central New Mexico were grown with hydrogen in a mineral medium supplemented with 0.02% yeast extract. Sulfate reduction and acetogenesis occurred in these cultures, and the two most abundant organisms carrying out the reactions were isolated. Based on 16S rRNA analysis data and on substrate utilization patterns, these organisms were named Desulfomicrobium hypogeium sp. nov. and Acetobacterium psammolithicum sp. nov. The steady-state H2 concentrations measured in sandstone-sediment slurries (threshold concentration, 5 nM), in pure cultures of sulfate reducers (threshold concentration, 2 nM), and in pure cultures of acetogens (threshold concentrations 195 to 414 nM) suggest that sulfate reduction is the dominant terminal electron-accepting process in the ecosystem examined. In an experiment in which direct competition for H2 between D. hypogeium and A. psammolithicum was examined, sulfate reduction was the dominant process.

  5. Metro passengers’ route choice model and its application considering perceived transfer threshold

    PubMed Central

    Jin, Fanglei; Zhang, Yongsheng; Liu, Shasha

    2017-01-01

    With the rapid development of the Metro network in China, the greatly increased route alternatives make passengers’ route choice behavior and passenger flow assignment more complicated, which presents challenges to the operation management. In this paper, a path sized logit model is adopted to analyze passengers’ route choice preferences considering such parameters as in-vehicle time, number of transfers, and transfer time. Moreover, the “perceived transfer threshold” is defined and included in the utility function to reflect the penalty difference caused by transfer time on passengers’ perceived utility under various numbers of transfers. Next, based on the revealed preference data collected in the Guangzhou Metro, the proposed model is calibrated. The appropriate perceived transfer threshold value and the route choice preferences are analyzed. Finally, the model is applied to a personalized route planning case to demonstrate the engineering practicability of route choice behavior analysis. The results show that the introduction of the perceived transfer threshold is helpful to improve the model’s explanatory abilities. In addition, personalized route planning based on route choice preferences can meet passengers’ diversified travel demands. PMID:28957376
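
    A minimal, hypothetical sketch of how a perceived transfer threshold can enter a path-size logit utility follows; the coefficients, the 5-minute threshold, and the example routes are invented for illustration and are not the calibrated Guangzhou Metro model.

      # Transfer minutes beyond the perceived threshold incur an additional penalty.
      import numpy as np

      BETA_IVT, BETA_TRANSFER, BETA_TT, BETA_EXCESS, BETA_PS = -0.04, -0.60, -0.05, -0.10, 1.0
      PERCEIVED_THRESHOLD = 5.0   # minutes of transfer time perceived as "free"

      def utility(in_vehicle_min, n_transfers, transfer_min, path_size):
          excess = max(0.0, transfer_min - PERCEIVED_THRESHOLD)
          return (BETA_IVT * in_vehicle_min + BETA_TRANSFER * n_transfers +
                  BETA_TT * transfer_min + BETA_EXCESS * excess +
                  BETA_PS * np.log(path_size))        # path-size correction for route overlap

      def choice_probabilities(routes):
          v = np.array([utility(*r) for r in routes])
          expv = np.exp(v - v.max())                  # numerically stable softmax
          return expv / expv.sum()

      # (in-vehicle min, transfers, transfer min, path-size factor in (0, 1])
      routes = [(38.0, 1, 4.0, 0.8), (33.0, 2, 9.0, 0.6), (45.0, 0, 0.0, 1.0)]
      print(choice_probabilities(routes))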

  6. Morphology dependent near-field response in atomistic plasmonic nanocavities.

    PubMed

    Chen, Xing; Jensen, Lasse

    2018-06-21

    In this work we examine how the atomistic morphologies of plasmonic dimers control the near-field response by using an atomistic electrodynamics model. At large separations, the field enhancement in the junction follows a simple inverse power law as a function of the gap separation, which agrees with classical antenna theory. However, when the separations are smaller than 0.8 nm, the so-called quantum size regime, the field enhancement is screened and thus deviates from the simple power law. Our results show that the threshold distance for the deviation depends on the specific morphology of the junction. The near field in the junction can be localized to an area of less than 1 nm2 in the presence of an atomically sharp tip, but the separation distances leading to a large confinement of near field depend strongly on the specific atomistic configuration. More importantly, the highly confined fields lead to large field gradients particularly in a tip-to-surface junction, which indicates that such a plasmonic structure favors observing strong field gradient effects in near-field spectroscopy. We find that for atomically sharp tips the field gradient becomes significant and depends strongly on the local morphology of a tip. We expect our findings to be crucial for understanding the origin of high-resolution near-field spectroscopy and for manipulating optical cavities through atomic structures in the strongly coupled plasmonic systems.

  7. Can triggered electromyography monitoring throughout retraction predict postoperative symptomatic neuropraxia after XLIF? Results from a prospective multicenter trial.

    PubMed

    Uribe, Juan S; Isaacs, Robert E; Youssef, Jim A; Khajavi, Kaveh; Balzer, Jeffrey R; Kanter, Adam S; Küelling, Fabrice A; Peterson, Mark D

    2015-04-01

    This multicenter study aims to evaluate the utility of triggered electromyography (t-EMG) recorded throughout psoas retraction during lateral transpsoas interbody fusion to predict postoperative changes in motor function. Three hundred and twenty-three patients undergoing L4-5 minimally invasive lateral interbody fusion from 21 sites were enrolled. Intraoperative data collection included initial t-EMG thresholds in response to posterior retractor blade stimulation and subsequent t-EMG threshold values collected every 5 min throughout retraction. Additional data collection included dimensions/duration of retraction as well as pre- and postoperative lower extremity neurologic exams. Prior to expanding the retractor, the lowest t-EMG threshold was identified posterior to the retractor in 94 % of cases. Postoperatively, 13 (4.5 %) patients had a new motor weakness that was consistent with symptomatic neuropraxia (SN) of lumbar plexus nerves on the approach side. There were no significant differences between patients with or without a corresponding postoperative SN with respect to initial posterior blade reading (p = 0.600), or retraction dimensions (p > 0.05). Retraction time was significantly longer in those patients with SN vs. those without (p = 0.031). Stepwise logistic regression showed a significant positive relationship between the presence of new postoperative SN and total retraction time (p < 0.001), as well as change in t-EMG thresholds over time (p < 0.001), although false positive rates (increased threshold in patients with no new SN) remained high regardless of the absolute increase in threshold used to define an alarm criterion. Prolonged retraction time and coincident increases in t-EMG thresholds are predictors of declining nerve integrity. Increasing t-EMG thresholds, while predictive of injury, were also observed in a large number of patients without iatrogenic injury, with a greater predictive value in cases with extended duration. In addition to a careful approach with minimal muscle retraction and consistent lumbar plexus directional retraction, the incidence of postoperative motor neuropraxia may be reduced by limiting retraction time and utilizing t-EMG throughout retraction, while understanding that the specificity of this monitoring technique is low during initial retraction and increases with longer retraction duration.

  8. Development of Thresholds and Exceedance Probabilities for Influent Water Quality to Meet Drinking Water Regulations

    NASA Astrophysics Data System (ADS)

    Reeves, K. L.; Samson, C.; Summers, R. S.; Balaji, R.

    2017-12-01

    Drinking water treatment utilities (DWTU) are tasked with the challenge of meeting disinfection and disinfection byproduct (DBP) regulations to provide safe, reliable drinking water under changing climate and land surface characteristics. DBPs form in drinking water when disinfectants, commonly chlorine, react with organic matter as measured by total organic carbon (TOC), and physical removal of pathogen microorganisms are achieved by filtration and monitored by turbidity removal. Turbidity and TOC in influent waters to DWTUs are expected to increase due to variable climate and more frequent fires and droughts. Traditional methods for forecasting turbidity and TOC require catchment specific data (i.e. streamflow) and have difficulties predicting them under non-stationary climate. A modelling framework was developed to assist DWTUs with assessing their risk for future compliance with disinfection and DBP regulations under changing climate. A local polynomial method was developed to predict surface water TOC using climate data collected from NOAA, Normalized Difference Vegetation Index (NDVI) data from the IRI Data Library, and historical TOC data from three DWTUs in diverse geographic locations. Characteristics from the DWTUs were used in the EPA Water Treatment Plant model to determine thresholds for influent TOC that resulted in DBP concentrations within compliance. Lastly, extreme value theory was used to predict probabilities of threshold exceedances under the current climate. Results from the utilities were used to produce a generalized TOC threshold approach that only requires water temperature and bromide concentration. The threshold exceedance model will be used to estimate probabilities of exceedances under projected climate scenarios. Initial results show that TOC can be forecasted using widely available data via statistical methods, where temperature, precipitation, Palmer Drought Severity Index, and NDVI with various lags were shown to be important predictors of TOC, and TOC thresholds can be determined using water temperature and bromide concentration. Results include a model to predict influent turbidity and turbidity thresholds, similar to the TOC models, as well as probabilities of threshold exceedances for TOC and turbidity under changing climate.
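
    The final step can be illustrated with a common peaks-over-threshold construction: fit a generalized Pareto distribution to excesses over a modeling threshold and convert it into an exceedance probability for the treatment-derived threshold. The modeling threshold, compliance threshold, and synthetic TOC series below are assumptions for the example, not the study's fitted model.

      import numpy as np
      from scipy import stats

      def exceedance_probability(toc_mg_per_l, u, compliance_threshold):
          toc = np.asarray(toc_mg_per_l, dtype=float)
          excess = toc[toc > u] - u
          zeta_u = excess.size / toc.size            # empirical rate of exceeding u
          shape, _, scale = stats.genpareto.fit(excess, floc=0.0)
          return zeta_u * stats.genpareto.sf(compliance_threshold - u, shape, loc=0.0, scale=scale)

      rng = np.random.default_rng(2)
      toc = rng.lognormal(mean=1.0, sigma=0.35, size=3 * 365)   # hypothetical daily TOC (mg/L)
      print(exceedance_probability(toc, u=4.0, compliance_threshold=6.0))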

  9. Development of wireless, chipless neural stimulator by using one-port surface acoustic wave delay line and diode-capacitor interface

    NASA Astrophysics Data System (ADS)

    Kim, Jisung; Kim, Saehan; Lee, Keekeun

    2017-06-01

    For the first time, a wireless and chipless neuron stimulator was developed by utilizing a surface acoustic wave (SAW) delay line, a diode-capacitor interface, a sharp metal tip, and antennas for the stimulation of neurons in the brain. The SAW delay line supersedes presently existing complex wireless transmission systems composed of a few thousand transistors, enabling the fabrication of wireless and chipless transceiver systems. The diode-capacitor interface was used to convert AC signals to DC signals and induce stimulus pulses at a sharp metal probe. A 400 MHz RF signal was wirelessly radiated from antennas and then stimulation pulses were observed at a sharp gold probe. A ~5 m reading distance was obtained using a 1 mW power from a network analyzer. The cycles of electromagnetic (EM) radiation from an antenna were controlled by shielding the antenna with an EM absorber. Stimulation pulses with different amplitudes and durations were successfully observed at the probe. The obtained pulses were ~0.08 mV in amplitude and 3-10 Hz in frequency. Coupling-of-modes (COM) and SPICE modeling simulations were also used to determine the optimal structural parameters for the SAW delay line and the values of the passive elements. On the basis of the extracted parameters, the entire system was experimentally implemented and characterized.

  10. Misinterpretation of lateral acoustic variations on high-resolution seismic reflection profiles as fault offsets of Holocene bay mud beneath the southern part of San Francisco Bay, California

    USGS Publications Warehouse

    Marlow, M. S.; Hart, P.E.; Carlson, P.R.; Childs, J. R.; Mann, D. M.; Anima, R.J.; Kayen, R.E.

    1996-01-01

    We collected high-resolution seismic reflection profiles in the southern part of San Francisco Bay in 1992 and 1993 to investigate possible Holocene faulting along postulated transbay bedrock fault zones. The initial analog records show apparent offsets of reflection packages along sharp vertical boundaries. These records were originally interpreted as showing a complex series of faults along closely spaced, sharp vertical boundaries in the upper 10 m (0.013 s two-way travel time) of Holocene bay mud. A subsequent survey in 1994 was run with a different seismic reflection system, which utilized a higher power source. This second system generated records with deeper penetration (max. 20 m, 0.026 s two-way travel time) and demonstrated that the reflections originally interpreted as offsets caused by faulting were actually laterally continuous reflection horizons. The pitfall in the original interpretations was caused by lateral variations in the amplitude brightness of reflection events, coupled with a long (greater than 15 ms) source signature of the low-power system. These effects combined to show apparent offsets of reflection packages along sharp vertical boundaries. These boundaries, as shown by the second system, in fact occur where the reflection amplitude diminishes abruptly on laterally continuous reflection events. This striking lateral variation in reflection amplitude is attributable to the localized presence of biogenic(?) gas.

  11. A holistic framework for design of cost-effective minimum water utilization network.

    PubMed

    Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N

    2008-07-01

    Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for synthesis of MWR network, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.

  12. A Compact Multiple Notched Ultra-Wide Band Antenna with an Analysis of the CSRR-TO-CSRR Coupling for Portable UWB Applications.

    PubMed

    Rahman, MuhibUr; Ko, Dong-Sik; Park, Jung-Dong

    2017-09-25

    We present a compact ultra-wideband (UWB) antenna integrated with sharp notches with a detailed analysis of the mutual coupling of the multiple notch resonators. By utilizing complementary split ring resonators (CSRR) on the radiating semi-circular patch, we achieve the sharp notch-filtering of various bands within the UWB band without increasing the antenna size. The notched frequency bands include WiMAX, INSAT, and lower and upper WLAN. In order to estimate the frequency shifts of the notch due to the coupling of the nearby CSRRs, an analysis of the coupling among the multiple notch resonators is carried out and we construct the lumped-circuit equivalent model. The time domain analysis of the proposed antenna is performed to show its validity on the UWB application. The measured frequency response of the input port corresponds quite well with the calculations and simulations. The radiation pattern of the implemented quad-notched UWB antenna is nearly omnidirectional in the passband.

  13. A Compact Multiple Notched Ultra-Wide Band Antenna with an Analysis of the CSRR-TO-CSRR Coupling for Portable UWB Applications

    PubMed Central

    Ko, Dong-Sik

    2017-01-01

    We present a compact ultra-wideband (UWB) antenna integrated with sharp notches with a detailed analysis of the mutual coupling of the multiple notch resonators. By utilizing complementary split ring resonators (CSRR) on the radiating semi-circular patch, we achieve the sharp notch-filtering of various bands within the UWB band without increasing the antenna size. The notched frequency bands include WiMAX, INSAT, and lower and upper WLAN. In order to estimate the frequency shifts of the notch due to the coupling of the nearby CSRRs, an analysis of the coupling among the multiple notch resonators is carried out and we construct the lumped-circuit equivalent model. The time domain analysis of the proposed antenna is performed to show its validity on the UWB application. The measured frequency response of the input port corresponds quite well with the calculations and simulations. The radiation pattern of the implemented quad-notched UWB antenna is nearly omnidirectional in the passband. PMID:28946658

  14. Graphene-edge dielectrophoretic tweezers for trapping of biomolecules.

    PubMed

    Barik, Avijit; Zhang, Yao; Grassi, Roberto; Nadappuram, Binoy Paulose; Edel, Joshua B; Low, Tony; Koester, Steven J; Oh, Sang-Hyun

    2017-11-30

    The many unique properties of graphene, such as the tunable optical, electrical, and plasmonic response, make it ideally suited for applications such as biosensing. As with other surface-based biosensors, however, the performance is limited by the diffusive transport of target molecules to the surface. Here we show that atomically sharp edges of monolayer graphene can generate singular electrical field gradients for trapping biomolecules via dielectrophoresis. Graphene-edge dielectrophoresis pushes the physical limit of gradient-force-based trapping by creating atomically sharp tweezers. We have fabricated locally backgated devices with an 8-nm-thick HfO2 dielectric layer and chemical-vapor-deposited graphene to generate 10× higher gradient forces as compared to metal electrodes. We further demonstrate near-100% position-controlled particle trapping at voltages as low as 0.45 V with nanodiamonds, nanobeads, and DNA from bulk solution within seconds. This trapping scheme can be seamlessly integrated with sensors utilizing graphene as well as other two-dimensional materials.

  15. Building Practical Apertureless Scanning Near-Field Microscopy

    NASA Astrophysics Data System (ADS)

    Gungordu, M. Zeki

    The fundamental objective of this study is to establish a functional, practical apertureless-type scanning near-field optical microscope (a-SNOM) and to elucidate the working mechanism behind it. A far-field microscope can measure only the propagating components of the field, which provide little information about the fine features of the sample, and its resolution is limited to about half of the wavelength of the illuminating light. The a-SNOM system, in contrast, gives access to the non-propagating (evanescent) components of the field, which carry more detail about the sample's features. This field is difficult to measure because its amplitude decays exponentially as the tip is moved away from the sample. The sharpness of the tip is the only limitation on the resolution of the a-SNOM system. Consequently, sharp tips are produced by electrochemical etching and used to detect the near-field signal. To separate the weak a-SNOM signal from the undesired background, higher-harmonic demodulation with lock-in detection is utilized for background suppression.

  16. Dynamics of electron injection in a laser-wakefield accelerator

    NASA Astrophysics Data System (ADS)

    Xu, J.; Buck, A.; Chou, S.-W.; Schmid, K.; Shen, B.; Tajima, T.; Kaluza, M. C.; Veisz, L.

    2017-08-01

    The detailed temporal evolution of the laser-wakefield acceleration process with controlled injection, producing reproducible high-quality electron bunches, has been investigated. The localized injection of electrons into the wakefield has been realized in a simple way—called shock-front injection—utilizing a sharp drop in plasma density. Both experimental and numerical results reveal the electron injection and acceleration process as well as the electron bunch's temporal properties. The possibility to visualize the plasma wave gives invaluable spatially resolved information about the local background electron density, which in turn allows for an efficient suppression of electron self-injection before the controlled process of injection at the sharp density jump. Upper limits for the electron bunch duration of 6.6 fs FWHM, or 2.8 fs (r.m.s.) were found. These results indicate that shock-front injection not only provides stable and tunable, but also few-femtosecond short electron pulses for applications such as ultrashort radiation sources, time-resolved electron diffraction or for the seeding of further acceleration stages.

  17. Short-time quantum dynamics of sharp boundaries potentials

    NASA Astrophysics Data System (ADS)

    Granot, Er'el; Marchewka, Avi

    2015-02-01

    Despite the high prevalence of singular potentials in general, and rectangular potentials in particular, in applied scattering models, to date little is known about their short-time effects. The reason is that singular potentials cause a mixture of complicated local as well as non-local effects. The object of this work is to derive a generic method to calculate analytically the short-time impact of any singular potential. In this paper it is shown that the scattering of a smooth wavefunction on a singular potential is totally equivalent, in the short-time regime, to the free propagation of a singular wavefunction. However, the latter problem has been fully addressed analytically in Ref. [7]. Therefore, this equivalence can be utilized to solve analytically the short-time dynamics of any smooth wavefunction in the presence of a singular potential. In particular, with this method the short-time dynamics of any problem where a sharp-boundary potential (e.g., a rectangular barrier) is turned on instantaneously can easily be solved analytically.

  18. Using a sharp metal tip to control the polarization and direction of emission from a quantum dot.

    PubMed

    Ghimire, Anil; Shafran, Eyal; Gerton, Jordan M

    2014-09-24

    Optical antennas can be used to manipulate the direction and polarization of radiation from an emitter. Usually, these metallic nanostructures utilize localized plasmon resonances to generate highly directional and strongly polarized emission, which is determined predominantly by the antenna geometry alone, and is thus not easily tuned. Here we show experimentally that the emission polarization can be manipulated using a simple, nonresonant scanning probe consisting of the sharp metallic tip of an atomic force microscope; finite element simulations reveal that the emission simultaneously becomes highly directional. Together, the measurements and simulations demonstrate that interference between light emitted directly into the far field with that elastically scattered from the tip apex in the near field is responsible for this control over polarization and directionality. Due to the relatively weak emitter-tip coupling, the tip must be positioned very precisely near the emitter, but this weak coupling also leads to highly tunable emission properties with a similar degree of polarization and directionality compared to resonant antennas.

  19. Exploring the utility of real-time hydrologic data for landslide early warning

    NASA Astrophysics Data System (ADS)

    Mirus, B. B.; Smith, J. B.; Becker, R.; Baum, R. L.; Koss, E.

    2017-12-01

    Early warning systems can provide critical information for operations managers, emergency planners, and the public to help reduce fatalities, injuries, and economic losses due to landsliding. For shallow, rainfall-triggered landslides early warning systems typically use empirical rainfall thresholds, whereas the actual triggering mechanism involves the non-linear hydrological processes of infiltration, evapotranspiration, and hillslope drainage that are more difficult to quantify. Because hydrologic monitoring has demonstrated that shallow landslides are often preceded by a rise in soil moisture and pore-water pressures, some researchers have developed early warning criteria that attempt to account for these antecedent wetness conditions through relatively simplistic storage metrics or soil-water balance modeling. Here we explore the potential for directly incorporating antecedent wetness into landslide early warning criteria using recent landslide inventories and in-situ hydrologic monitoring near Seattle, WA, and Portland, OR. We use continuous, near-real-time telemetered soil moisture and pore-water pressure data measured within a few landslide-prone hillslopes in combination with measured and forecasted rainfall totals to inform easy-to-interpret landslide initiation thresholds. Objective evaluation using somewhat limited landslide inventories suggests that our new thresholds based on subsurface hydrologic monitoring and rainfall data compare favorably to the capabilities of existing rainfall-only thresholds for the Seattle area, whereas there are no established rainfall thresholds for the Portland area. This preliminary investigation provides a proof-of-concept for the utility of developing landslide early warning criteria in two different geologic settings using real-time subsurface hydrologic measurements from in-situ instrumentation.

  20. Conductive paint-filled cement paste sensor for accelerated percolation

    NASA Astrophysics Data System (ADS)

    Laflamme, Simon; Pinto, Irvin; Saleem, Hussam S.; Elkashef, Mohamed; Wang, Kejin; Cochran, Eric

    2015-04-01

    Cementitious-based strain sensors can be used as robust monitoring systems for civil engineering applications, such as road pavements and historic structures. To enable large-scale deployments, the fillers used in creating a conductive material must be inexpensive and easy to mix homogeneously. Carbon black (CB) particles constitute a promising filler due to their low cost and ease of dispersion. However, a relatively high quantity of these particles needs to be mixed with cement in order to reach the percolation threshold. Such a level may influence the physical properties of the cementitious material itself, such as compressive and tensile strengths. In this paper, we investigate the possibility of utilizing a polymer to create conductive chains of CB more quickly than in a cementitious-only medium. This way, while the resulting material would have a higher conductivity, the percolation threshold would be reached with fewer CB particles. Building on the principle that the percolation threshold provides great sensing sensitivity, it would be possible to fabricate sensors using fewer conducting particles. We present results from a preliminary investigation comparing the utilization of a conductive paint fabricated from a poly-Styrene-co-Ethylene-co-Butylene-co-Styrene (SEBS) polymer matrix and CB, and CB-only as fillers to create cementitious sensors. Preliminary results show that the percolation threshold can be attained with significantly less CB using the SEBS+CB mix. Also, the study of the strain sensing properties indicates that the SEBS+CB sensor has a strain sensitivity comparable to that of a CB-only cementitious sensor when comparing specimens fabricated at their respective percolation thresholds.

  1. Effects of isoconcentration surface threshold values on the characteristics of needle-shaped precipitates in atom probe tomography data from an aged Al-Mg-Si alloy.

    PubMed

    Aruga, Yasuhiro; Kozuka, Masaya

    2016-04-01

    Needle-shaped precipitates in an aged Al-0.62Mg-0.93Si (mass%) alloy were identified using a compositional threshold method, an isoconcentration surface, in atom probe tomography (APT). The influence of thresholds on the morphological and compositional characteristics of the precipitates was investigated. Utilizing optimum parameters for the concentration space, a reliable number density of the precipitates is obtained without dependence on the elemental concentration threshold in comparison with evaluation by transmission electron microscopy (TEM). It is suggested that careful selection of the concentration space in APT can lead to a reasonable average Mg/Si ratio for the precipitates. It was found that the maximum length and maximum diameter of the precipitates are affected by the elemental concentration threshold. Adjustment of the concentration threshold gives better agreement with the precipitate dimensions measured by TEM.
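
    The threshold-sensitivity question can be mimicked on synthetic data as follows: sweep an isoconcentration threshold over a made-up 3-D solute concentration grid and report how the number and extent of connected clusters change. The grid, the seeded needle shapes, and the thresholds are purely illustrative, not APT reconstructions.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(3)
      conc = rng.normal(1.0, 0.4, size=(60, 60, 60))            # matrix solute level (at.%)
      for z, y, x in rng.integers(8, 52, size=(15, 3)):          # seed needle-like clusters along x
          conc[z - 2:z + 3, y - 1:y + 2, x - 6:x + 7] += 4.0

      for threshold in (2.0, 3.0, 4.0):
          labels, n = ndimage.label(conc >= threshold)
          sizes = np.bincount(labels.ravel())[1:]                # voxels per cluster
          objects = ndimage.find_objects(labels)
          max_len = max((s[2].stop - s[2].start for s in objects if s is not None), default=0)
          print(f"threshold {threshold:.1f} at.%: {n} clusters, "
                f"largest {sizes.max() if n else 0} voxels, max length {max_len} voxels")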

  2. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    NASA Astrophysics Data System (ADS)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
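
    A simplified sketch of the block-wise thresholding step follows; it omits the tensor voting and saliency weighting, approximates the foreground/background sampling with local max/min filters around Canny edge pixels, and runs on invented test data.

      import numpy as np
      from scipy import ndimage
      from skimage import feature

      def local_threshold(image, block=64, radius=2):
          norm = (image - image.min()) / np.ptp(image)
          edges = feature.canny(norm)                                  # initial edge detection
          fg = ndimage.maximum_filter(image, size=2 * radius + 1)      # bright side near an edge
          bg = ndimage.minimum_filter(image, size=2 * radius + 1)      # dark side near an edge
          ny, nx = np.ceil(np.array(image.shape) / block).astype(int)
          t_blocks = np.full((ny, nx), image.mean())
          have = np.zeros((ny, nx), bool)
          for by in range(ny):
              for bx in range(nx):
                  sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
                  e = edges[sl]
                  if not e.any():
                      continue
                  fg_s, bg_s = fg[sl][e], bg[sl][e]
                  candidates = np.unique(np.r_[fg_s, bg_s])
                  errors = [(fg_s < t).sum() + (bg_s >= t).sum() for t in candidates]
                  t_blocks[by, bx] = candidates[int(np.argmin(errors))]
                  have[by, bx] = True
          if have.any():
              t_blocks[~have] = t_blocks[have].mean()                  # blocks without any edges
          zoom = (image.shape[0] / ny, image.shape[1] / nx)
          t_pixels = ndimage.zoom(t_blocks, zoom, order=1)             # per-pixel threshold
          return image >= t_pixels

      # Synthetic "nuclei": bright blobs on a darker, noisy background.
      rng = np.random.default_rng(4)
      img = rng.normal(50.0, 5.0, (256, 256))
      yy, xx = np.ogrid[:256, :256]
      for cy, cx in rng.integers(20, 236, size=(20, 2)):
          img[(yy - cy) ** 2 + (xx - cx) ** 2 < 81] += 60.0
      print(local_threshold(img).sum(), "foreground pixels detected")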

  3. Statistical CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121

    2012-09-15

    Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using the projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of 'salt-and-pepper' noise and mosaic artifacts can be avoided. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the inter-view sampling rate varies.
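
    For readers unfamiliar with PWLS, the single-scale, 1-D sketch below shows only the penalized weighted least squares objective itself, a weighted data-fidelity term plus a quadratic roughness penalty solved as a sparse linear system; the multiscale decomposition, the 2-D projection geometry, and the edge enhancement of the proposed method are not reproduced, and the synthetic profile is hypothetical.

      # Minimize sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_{i+1} - x_i)^2.
      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import spsolve

      def pwls_denoise_1d(y, weights, beta=20.0):
          n = y.size
          W = sparse.diags(weights)
          D = sparse.diags([np.ones(n - 1), -np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
          A = (W + beta * (D.T @ D)).tocsc()         # normal equations: (W + beta D^T D) x = W y
          return spsolve(A, W @ y)

      rng = np.random.default_rng(5)
      truth = np.r_[np.zeros(50), np.linspace(0, 3, 50), 3 * np.ones(50), np.zeros(50)]
      counts = 1e4 * np.exp(-truth)                  # higher counts => larger statistical weight
      noisy = truth + rng.normal(0.0, 1.0 / np.sqrt(counts))
      smoothed = pwls_denoise_1d(noisy, weights=counts)
      print(float(np.abs(smoothed - truth).mean()), "mean absolute error after PWLS")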

  4. Prediction of spatially explicit rainfall intensity–duration thresholds for post-fire debris-flow generation in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-01-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity–duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity–duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity–duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity–duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity–duration thresholds do not currently exist.
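
    A hypothetical sketch of how a likelihood model can be inverted into an intensity threshold follows; the logistic form, the combined predictor, and all coefficients are placeholders for illustration and are not the published model.

      import numpy as np

      def debris_flow_likelihood(intensity_mm_h, burned_steep_fraction, soil_erodibility, beta):
          b0, b1 = beta
          x = intensity_mm_h * (burned_steep_fraction + soil_erodibility)   # combined predictor
          return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

      def intensity_threshold(p_target, burned_steep_fraction, soil_erodibility, beta):
          # Rainfall intensity at which the modeled likelihood equals p_target.
          b0, b1 = beta
          logit = np.log(p_target / (1.0 - p_target))
          return (logit - b0) / (b1 * (burned_steep_fraction + soil_erodibility))

      beta = (-3.6, 0.25)                             # made-up intercept and slope
      basin = dict(burned_steep_fraction=0.55, soil_erodibility=0.35)
      i50 = intensity_threshold(0.5, beta=beta, **basin)
      p30 = debris_flow_likelihood(30.0, beta=beta, **basin)
      print(f"intensity threshold at P=0.5: {i50:.1f} mm/h; likelihood at 30 mm/h: {p30:.2f}")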

  5. Prediction of spatially explicit rainfall intensity-duration thresholds for post-fire debris-flow generation in the western United States

    NASA Astrophysics Data System (ADS)

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-02-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity-duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity-duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity-duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity-duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity-duration thresholds do not currently exist.

  6. Two Bistable Switches Govern M Phase Entry.

    PubMed

    Mochida, Satoru; Rata, Scott; Hino, Hirotsugu; Nagai, Takeharu; Novák, Béla

    2016-12-19

    The abrupt and irreversible transition from interphase to M phase is essential to separate DNA replication from chromosome segregation. This transition requires the switch-like phosphorylation of hundreds of proteins by the cyclin-dependent kinase 1 (Cdk1):cyclin B (CycB) complex. Previous studies have ascribed these switch-like phosphorylations to the auto-activation of Cdk1:CycB through the removal of inhibitory phosphorylations on Cdk1-Tyr15 [1, 2]. The positive feedback in Cdk1 activation creates a bistable switch that makes mitotic commitment irreversible [2-4]. Here, we surprisingly find that Cdk1 auto-activation is dispensable for irreversible, switch-like mitotic entry due to a second mechanism, whereby Cdk1:CycB inhibits its counteracting phosphatase (PP2A:B55). We show that the PP2A:B55-inhibiting Greatwall (Gwl)-endosulfine (ENSA) pathway is both necessary and sufficient for switch-like phosphorylations of mitotic substrates. Using purified components of the Gwl-ENSA pathway in a reconstituted system, we found a sharp Cdk1 threshold for phosphorylation of a luminescent mitotic substrate. The Cdk1 threshold to induce mitotic phosphorylation is distinctly higher than the Cdk1 threshold required to maintain these phosphorylations, which is evidence for bistability. A combination of mathematical modeling and biochemical reconstitution shows that the bistable behavior of the Gwl-ENSA pathway emerges from its mutual antagonism with PP2A:B55. Our results demonstrate that two interlinked bistable mechanisms provide a robust solution for irreversible and switch-like mitotic entry.

  7. Cloud-Scale Genomic Signals Processing for Robust Large-Scale Cancer Genomic Microarray Data Analysis.

    PubMed

    Harvey, Benjamin Simeon; Ji, Soo-Yeon

    2017-01-01

    As microarray data available to scientists continues to increase in size and complexity, it has become overwhelmingly important to find multiple ways to bring forth oncological inference to the bioinformatics community through the analysis of large-scale cancer genomic (LSCG) DNA and mRNA microarray data that is useful to scientists. Though there have been many attempts to elucidate the issue of bringing forth biological interpretation by means of wavelet preprocessing and classification, there has not been a research effort that focuses on a cloud-scale distributed parallel (CSDP) separable 1-D wavelet decomposition technique for denoising through differential expression thresholding and classification of LSCG microarray data. This research presents a novel methodology that utilizes a CSDP separable 1-D method for wavelet-based transformation in order to initialize a threshold which will retain significantly expressed genes through the denoising process for robust classification of cancer patients. Additionally, the overall study was implemented and encompassed within CSDP environment. The utilization of cloud computing and wavelet-based thresholding for denoising was used for the classification of samples within the Global Cancer Map, Cancer Cell Line Encyclopedia, and The Cancer Genome Atlas. The results proved that separable 1-D parallel distributed wavelet denoising in the cloud and differential expression thresholding increased the computational performance and enabled the generation of higher quality LSCG microarray datasets, which led to more accurate classification results.
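
    The wavelet-thresholding step can be sketched on a single machine as follows (the cloud-scale distribution of profiles across workers is omitted); the wavelet choice, the universal threshold, and the synthetic expression profile are assumptions for the example.

      import numpy as np
      import pywt

      def wavelet_denoise(profile, wavelet="db4", level=3):
          coeffs = pywt.wavedec(profile, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest detail level
          thr = sigma * np.sqrt(2.0 * np.log(len(profile)))     # universal threshold
          denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)[:len(profile)]

      rng = np.random.default_rng(6)
      signal = np.repeat(rng.normal(0.0, 2.0, 20), 50)          # blocky "expression" profile, 1000 probes
      noisy = signal + rng.normal(0.0, 0.8, signal.size)
      print(np.abs(wavelet_denoise(noisy) - signal).mean(), "mean absolute error after denoising")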

  8. Evaluation of early weight loss thresholds for identifying nonresponders to an intensive lifestyle intervention.

    PubMed

    Unick, Jessica L; Hogan, Patricia E; Neiberg, Rebecca H; Cheskin, Lawrence J; Dutton, Gareth R; Evans-Hudnall, Gina; Jeffery, Robert; Kitabchi, Abbas E; Nelson, Julie A; Pi-Sunyer, F Xavier; West, Delia Smith; Wing, Rena R

    2014-07-01

    Weight losses in lifestyle interventions are variable, yet prediction of long-term success is difficult. The utility of using various weight loss thresholds in the first 2 months of treatment for predicting 1-year outcomes was examined. Participants included 2327 adults with type 2 diabetes (BMI:35.8 ± 6.0) randomized to the intensive lifestyle intervention (ILI) of the Look AHEAD trial. ILI included weekly behavioral sessions designed to increase physical activity and reduce caloric intake. 1-month, 2-month, and 1-year weight changes were calculated. Participants failing to achieve a ≥2% weight loss at Month 1 were 5.6 (95% CI:4.5, 7.0) times more likely to also not achieve a ≥10% weight loss at Year 1, compared to those losing ≥2% initially. These odds were increased to 11.6 (95% CI:8.6, 15.6) when using a 3% weight loss threshold at Month 2. Only 15.2% and 8.2% of individuals failing to achieve the ≥2% and ≥3% thresholds at Months 1 and 2, respectively, go on to achieve a ≥10% weight loss at Year 1. Given the association between initial and 1-year weight loss, the first few months of treatment may be an opportune time to identify those who are unsuccessful and utilize rescue efforts. clinicaltrials.gov Identifier: NCT00017953.
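
    For context, the kind of calculation behind a statement such as '5.6 times more likely' is an odds ratio from a 2x2 table; the sketch below uses invented counts, not the Look AHEAD data.

      import numpy as np

      def odds_ratio_ci(a, b, c, d, z=1.96):
          # a, b: missed / achieved the Year-1 goal among early non-responders;
          # c, d: the same among early responders.  Wald 95% CI on the log scale.
          odds_ratio = (a * d) / (b * c)
          se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
          low, high = np.exp(np.log(odds_ratio) + np.array([-z, z]) * se_log)
          return odds_ratio, low, high

      print(odds_ratio_ci(a=480, b=90, c=700, d=760))   # hypothetical counts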

  9. Interactions between cold and water limitation along a climate gradient produce sharp thresholds in ecosystem type, carbon balance, and water cycling

    NASA Astrophysics Data System (ADS)

    Kelly, A. E.; Goulden, M.; Fellows, A. W.

    2013-12-01

    California's Mediterranean climate supports a broad diversity of ecosystem types, including Sequoia forests in the mid-montane Sierra Nevada. Understanding how winter cold and summer drought interact to produce the lush forest in the Sierra is critical to predicting the impacts of projected climate change on California's ecosystems, water supply, and carbon cycling. We investigated how smooth gradients of temperature and water availability produced sharp thresholds in biomass, productivity, growing season, water use, and ultimately ecosystem type and function. We used the climate gradient of the western slope of the Sierra Nevada as a study system. Four eddy covariance towers were situated in the major ecosystem types of the Sierra Nevada at approximately 800-m elevation intervals. Eddy flux data were combined with remote sensing and direct measurements of biomass, productivity, soil available water, and evapotranspiration to understand how weather and available water control ecosystem production and function. We found that production at the high elevation lodgepole site at 2700 m was strongly limited by winter cold. Production at the low elevation oak woodland site at 400 m was strongly limited by summer drought. The yellow pine site at 1200 m was only 4 °C cooler than the oak woodland site, yet had an order of magnitude more biomass and productivity with year-round growth. The mixed conifer site at 2000 m is 3.5 °C warmer than the lodgepole forest, yet also has higher biomass, ten times higher productivity, and year-round growth. We conclude that there is a broad climatological 'sweet spot' within the Sierra Nevada, in which the Mediterranean climate can support large-statured forest with high growth rates. The range of the mid-elevation forest was sharply bounded by water limitation at the lower edge and cold limitation at the upper edge despite small differences in precipitation and temperature across these boundaries. Our results suggest that small changes in precipitation or winter warming could markedly alter ecosystem structure and function as well as carbon and water cycling in the Sierra Nevada.

  10. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS) that have their origins in meteorological forecasting are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over an entire range of lead-times, there is a need for multivariate extensions. To this end, we smooth the marginal parameter estimates over lead-times. In order to obtain realistic scenarios of discharge evolution over time, the marginal distributions have to be linked with each other. To this end, the multivariate dependence structure can either be adopted from the raw ensemble like in Ensemble Copula Coupling (ECC), or be estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators for the dependence structure than the raw ensemble.
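
    A minimal sketch of the censored predictive distribution described above follows: a left-censored normal with a point mass at the censoring threshold, together with a brute-force numerical CRPS. The parameter values are invented, and the operational method additionally links mu and sigma to the raw ensemble and fits them by minimizing the mean CRPS over a training period.

      import numpy as np
      from scipy import integrate, stats

      def censored_normal_cdf(y, mu, sigma, c):
          # Left-censored normal: point mass P(Y = c) = Phi((c - mu) / sigma), continuous above c.
          return 0.0 if y < c else float(stats.norm.cdf(y, loc=mu, scale=sigma))

      def crps(mu, sigma, c, obs):
          # CRPS(F, obs) = integral of (F(y) - 1{y >= obs})^2 dy; the integrand vanishes
          # below min(c, obs) and decays above the bulk of the distribution.
          lower = min(c, obs) - 1.0
          upper = max(c, obs, mu) + 10.0 * sigma
          integrand = lambda y: (censored_normal_cdf(y, mu, sigma, c) - float(y >= obs)) ** 2
          value, _ = integrate.quad(integrand, lower, upper, points=[c, obs], limit=200)
          return value

      # Hypothetical forecast on the Box-Cox-transformed discharge scale, censored below c = 2.0.
      print(crps(mu=3.2, sigma=0.8, c=2.0, obs=3.5))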

  11. A general, cryogenically-based analytical technique for the determination of trace quantities of volatile organic compounds in the atmosphere

    NASA Technical Reports Server (NTRS)

    Coleman, R. A.; Cofer, W. R., III; Edahl, R. A., Jr.

    1985-01-01

    An analytical technique for the determination of trace (sub-ppbv) quantities of volatile organic compounds in air was developed. It uses a liquid nitrogen-cooled trap operated at reduced pressure, in series with a DuPont Nafion-based drying tube and a gas chromatograph. The technique is capable of analyzing a variety of organic compounds, from simple alkanes to alcohols, while offering a high level of precision, peak sharpness, and sensitivity.

  12. MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, E.D.; Glide-Hurst, C.; Wayne State University, Detroit, MI

    2016-06-15

    Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines data neighboring a phase reconstruction point, and the temporal gate is widened until the difference between the reconstruction point and the waveform exceeds a threshold value, defined as a percent difference between the maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed that a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived, such as the liver. Conclusion: We demonstrated that patient-specific velocity-based 4DCT reconstruction is feasible. Image noise was reduced with AVB, suggesting potential applications for low-dose acquisitions and to improve 4DCT reconstruction for irregular breathing patients. The submitting institution holds research agreements with Philips Healthcare.
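
    A minimal sketch of the gate-widening rule described above, under the assumption that the gate grows symmetrically around the phase reconstruction point; the function name, the symmetric growth, and the synthetic waveform are illustrative, not the vendor implementation.

    ```python
    import numpy as np

    def adaptive_gate(resp, t_idx, threshold_pct, min_width, max_width):
        """Widen the temporal gate around a phase reconstruction point until the
        waveform deviates from that point by more than threshold_pct of the overall
        amplitude range (a stand-in for the percent-difference criterion).
        resp: filtered respiratory waveform, t_idx: index of the phase point.
        Returns (start, stop) sample indices of the gate."""
        amp_range = resp.max() - resp.min()
        tol = threshold_pct / 100.0 * amp_range
        lo = hi = t_idx
        while hi - lo < max_width:
            cand_lo, cand_hi = max(lo - 1, 0), min(hi + 1, len(resp) - 1)
            if cand_lo == lo and cand_hi == hi:
                break  # reached the ends of the waveform
            if (abs(resp[cand_lo] - resp[t_idx]) > tol or
                    abs(resp[cand_hi] - resp[t_idx]) > tol):
                break
            lo, hi = cand_lo, cand_hi
        # AVB only modulates the reconstruction if the widened gate exceeds a minimum width.
        return (lo, hi) if (hi - lo) >= min_width else (t_idx, t_idx)

    # Example: end-exhale point (low respiratory velocity) on a synthetic waveform, 0.5% threshold.
    t = np.linspace(0.0, 20.0, 2000)
    resp = np.cos(2.0 * np.pi * t / 4.0)
    print(adaptive_gate(resp, t_idx=1000, threshold_pct=0.5, min_width=10, max_width=200))
    ```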

  13. SU-C-9A-01: Parameter Optimization in Adaptive Region-Growing for Tumor Segmentation in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, S; Huazhong University of Science and Technology, Wuhan, Hubei; Xue, M

    Purpose: To design a reliable method to determine the optimal parameter in the adaptive region-growing (ARG) algorithm for tumor segmentation in PET. Methods: The ARG uses an adaptive similarity criterion m − f·σ ≤ I_PET ≤ m + f·σ, so that a neighboring voxel is appended to the region based on its similarity to the current region. When the relaxing factor f (f ≥ 0) is increased, the resulting volumes increase monotonically, with a sharp rise when the region just grows into the background. The optimal f that separates the tumor from the background is defined as the first point with the local maximum curvature on an error function fitted to the f-volume curve. The ARG was tested on a tumor segmentation benchmark that includes ten lung cancer patients with 3D pathologic tumor volume as ground truth. For comparison, the widely used 42% and 50% SUVmax thresholding, Otsu optimal thresholding, Active Contours (AC), Geodesic Active Contours (GAC), and Graph Cuts (GC) methods were tested. The Dice similarity index (DSI), volume error (VE), and maximum axis length error (MALE) were calculated to evaluate the segmentation accuracy. Results: The ARG provided the highest accuracy among all tested methods. Specifically, the ARG has an average DSI, VE, and MALE of 0.71, 0.29, and 0.16, respectively, better than the absolute 42% thresholding (DSI=0.67, VE=0.57, and MALE=0.23), the relative 42% thresholding (DSI=0.62, VE=0.41, and MALE=0.23), the absolute 50% thresholding (DSI=0.62, VE=0.48, and MALE=0.21), the relative 50% thresholding (DSI=0.48, VE=0.54, and MALE=0.26), Otsu (DSI=0.44, VE=0.63, and MALE=0.30), AC (DSI=0.46, VE=0.85, and MALE=0.47), GAC (DSI=0.40, VE=0.85, and MALE=0.46), and GC (DSI=0.66, VE=0.54, and MALE=0.21) methods. Conclusions: The results suggest that the proposed method reliably identified the optimal relaxing factor in ARG for tumor segmentation in PET. This work was supported in part by National Cancer Institute Grant R01 CA172638; the dataset is provided by AAPM TG211.
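
    A brief sketch of the parameter-selection step described above, assuming the f-volume curve is fitted with an error-function model and the optimal relaxing factor is taken as the first local maximum of curvature of that fit; the model form, data, and initial guesses are illustrative.

    ```python
    import numpy as np
    from scipy.special import erf
    from scipy.optimize import curve_fit

    def vol_model(f, v0, dv, f0, s):
        # Error-function model of the f-volume curve: a plateau (tumor) followed by a
        # sharp rise once the region grows into the background.
        return v0 + dv * (1.0 + erf((f - f0) / s)) / 2.0

    def optimal_relaxing_factor(f, volume):
        popt, _ = curve_fit(vol_model, f, volume,
                            p0=[volume.min(), volume.max() - volume.min(), f.mean(), 0.2],
                            maxfev=10000)
        ff = np.linspace(f.min(), f.max(), 2000)
        v = vol_model(ff, *popt)
        d1 = np.gradient(v, ff)
        d2 = np.gradient(d1, ff)
        curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
        # First local maximum of curvature on the fitted curve = proposed optimal f.
        peaks = np.where((curvature[1:-1] > curvature[:-2]) &
                         (curvature[1:-1] > curvature[2:]))[0] + 1
        return ff[peaks[0]] if peaks.size else ff[np.argmax(curvature)]

    # Synthetic f-volume curve: the volume jumps once f relaxes into the background.
    f = np.linspace(0.0, 3.0, 31)
    vols = vol_model(f, 20.0, 300.0, 1.8, 0.15) + np.random.default_rng(1).normal(0, 2, f.size)
    print(optimal_relaxing_factor(f, vols))
    ```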

  14. Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
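
    As a sketch of the objective-threshold idea above, one simple instantiation is to scan candidate intensities and keep the value that maximizes a skill score balancing Type I and Type II errors (here the true skill statistic); the choice of score, the candidate grid, and the toy data are assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    def objective_threshold(intensity, debris_flow, candidates):
        """Pick the rainfall-intensity threshold that best balances false positives
        and false negatives by maximizing the true skill statistic (TSS).
        intensity: triggering intensities per storm; debris_flow: 1 if a flow occurred."""
        best_thr, best_tss = None, -np.inf
        for thr in candidates:
            pred = intensity >= thr
            tp = np.sum(pred & (debris_flow == 1))
            fn = np.sum(~pred & (debris_flow == 1))   # Type II errors (failed alarms)
            fp = np.sum(pred & (debris_flow == 0))    # Type I errors (false alarms)
            tn = np.sum(~pred & (debris_flow == 0))
            tss = tp / max(tp + fn, 1) - fp / max(fp + tn, 1)
            if tss > best_tss:
                best_thr, best_tss = thr, tss
        return best_thr, best_tss

    # Hypothetical 15-minute intensities (mm/h) and observed debris-flow response.
    i15 = np.array([12, 35, 8, 52, 27, 61, 18, 44, 30, 9])
    flow = np.array([0, 1, 0, 1, 0, 1, 0, 1, 1, 0])
    print(objective_threshold(i15, flow, candidates=np.arange(5, 70, 1)))
    ```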

  15. An epidemiologic cohort study reviewing the practice of blood product transfusions among a population of pediatric oncology patients.

    PubMed

    Lieberman, Lani; Liu, Yang; Portwine, Carol; Barty, Rebecca L; Heddle, Nancy M

    2014-10-01

    Despite the high utilization of blood products by pediatric oncology patients, literature in this population remains scarce. The primary objective of this study was to assess red blood cell (RBC) and platelet (PLT) utilization rates and transfusion thresholds in pediatric oncology patients. The secondary objective was to describe transfusion-related complications, including RBC alloantibody development and transfusion reactions. This epidemiologic cohort study involved pediatric oncology patients at a Canadian academic children's hospital between April 2002 and December 2011. Demographic, clinical, laboratory, and transfusion variables were collected from the Transfusion Registry for Utilization Statistics and Tracking database, a large database that captures more than 50 demographic and clinical variables as well as comprehensive transfusion information and laboratory test results. Of 647 pediatric oncology patients, 430 (66%) received an RBC or PLT transfusion or both during this time period. The median transfusion threshold before an RBC and a PLT transfusion was a hemoglobin (Hb) value of 72 g/L (interquartile range [IQR], 68-76 g/L) and a PLT count of 16 × 10⁹/L (IQR, 10 × 10⁹ to 23 × 10⁹/L), respectively. Ninety-two percent of the issued RBC and PLT products (7507/8154) were cytomegalovirus negative and 90% were irradiated (7299/8154). RBC alloantibody development and transfusion reactions were reported infrequently, in 0.5% (2/423) and 4.5% (8/179) of the patients, respectively. This study assessed utilization rates, transfusion thresholds, alloantibody development, and transfusion reactions in pediatric oncology patients. The descriptive results from this epidemiologic study provide baseline information to generate hypotheses to be tested in future interventional studies. © 2014 AABB.

  16. Analysis of sharpness increase by image noise

    NASA Astrophysics Data System (ADS)

    Kurihara, Takehito; Aoki, Naokazu; Kobayashi, Hiroyuki

    2009-02-01

    Motivated by the reported increase in sharpness caused by image noise, we investigated how noise affects sharpness perception. We first used natural images of tree bark with different amounts of noise to see whether noise enhances sharpness. Although the result showed that sharpness decreased as the noise amount increased, some observers seemed to perceive more sharpness with increasing noise, while others did not. We next used 1D and 2D uni-frequency patterns as stimuli in an attempt to reduce such variability in the judgment. The result showed that, for higher-frequency stimuli, sharpness decreased as the noise amount increased, while sharpness of the lower-frequency stimuli increased at a certain noise level. From this result, we hypothesized that image noise might reduce sharpness at edges, but could improve the sharpness of lower-frequency components or texture in an image. To test this prediction, we experimented again with the natural image used in the first experiment. Stimuli were made by applying noise separately to the edge or to the texture part of the image. The result showed that noise added to the edge region only decreased sharpness, whereas noise added to the texture could improve sharpness. We think it is the interaction between noise and texture that sharpens an image.

  17. The effect of crack blunting on the competition between dislocation nucleation and cleavage

    NASA Astrophysics Data System (ADS)

    Fischer, Lisa L.; Beltz, Glenn E.

    2001-03-01

    To better understand the ductile versus brittle fracture behavior of crystalline materials, attention should be directed towards physically realistic crack geometries. Currently, continuum models of ductile versus brittle behavior are typically based on the analysis of a pre-existing sharp crack in order to use analytical solutions for the stress fields around the crack tip. This paper examines the effects of crack blunting on the competition between dislocation nucleation and atomic decohesion using continuum methods. We accomplish this by assuming that the crack geometry is elliptical, which has the primary advantage that the stress fields are available in closed form. These stress field solutions are then used to calculate the thresholds for dislocation nucleation and atomic decohesion. A Peierls-type framework is used to obtain the thresholds for dislocation nucleation, in which the region of the slip plane ahead of the crack develops a distribution of slip discontinuity prior to nucleation. This slip distribution increases as the applied load is increased until an instability is reached and the governing integral equation can no longer be solved. These calculations are carried out for various crack tip geometries to ascertain the effects of crack tip blunting. The thresholds for atomic decohesion are calculated using a cohesive zone model, in which the region of the crack front develops a distribution of opening displacement prior to atomic decohesion. Again, loading of the elliptical crack tip eventually results in an instability, which marks the onset of crack advance. These calculations are carried out for various crack tip geometries. The results of these separate calculations are presented as the critical energy release rates versus the crack tip radius of curvature for a given crack length. The two threshold curves are compared simultaneously to determine which failure mode is energetically more likely at various crack tip curvatures. From these comparisons, four possible types of material fracture behavior are identified: intrinsically brittle, quasi-brittle, intrinsically ductile, and quasi-ductile. Finally, real material examples are discussed.

  18. Serotonin inhibits low-threshold spike interneurons in the striatum

    PubMed Central

    Cains, Sarah; Blomeley, Craig P; Bracci, Enrico

    2012-01-01

    Low-threshold spike interneurons (LTSIs) are important elements of the striatal architecture and the only known source of nitric oxide in this nucleus, but their rarity has so far prevented systematic studies. Here, we used transgenic mice in which green fluorescent protein is expressed under control of the neuropeptide Y (NPY) promoter and striatal NPY-expressing LTSIs can be easily identified, to investigate the effects of serotonin on these neurons. In sharp contrast with its excitatory action on other striatal interneurons, serotonin (30 μm) strongly inhibited LTSIs, reducing or abolishing their spontaneous firing activity and causing membrane hyperpolarisations. These hyperpolarisations persisted in the presence of tetrodotoxin, were mimicked by 5-HT2C receptor agonists and reversed by 5-HT2C antagonists. Voltage-clamp slow-ramp experiments showed that serotonin caused a strong increase in an outward current activated by depolarisations that was blocked by the specific M current blocker XE 991. In current-clamp experiments, XE 991 per se caused membrane depolarisations in LTSIs and subsequent application of serotonin (in the presence of XE 991) failed to affect these neurons. We concluded that serotonin strongly inhibits striatal LTSIs acting through postsynaptic 5-HT2C receptors and increasing an M type current. PMID:22495583

  19. Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images

    PubMed Central

    Osadebey, Michael; Pedersen, Marius; Arnold, Douglas; Wendel-Mitoraj, Katrina

    2017-01-01

    We describe a postacquisition, attribute-based quality assessment method for brain magnetic resonance imaging (MRI) images. It is based on the application of Bayes theory to the relationship between entropy and image quality attributes. The entropy feature image of a slice is segmented into low- and high-entropy regions. For each entropy region, there are three separate observations of contrast, standard deviation, and sharpness quality attributes. A quality index for a quality attribute is the posterior probability of an entropy region given any corresponding region in a feature image where the quality attribute is observed. Prior belief in each entropy region is determined from the normalized total clique potential (TCP) energy of the slice. For TCP below the predefined threshold, the prior probability for a region is determined by the deviation of its percentage composition in the slice from a standard normal distribution built from 250 MRI volumes provided by the Alzheimer's Disease Neuroimaging Initiative. For TCP above the threshold, the prior is computed using a mathematical model that describes the TCP–noise level relationship in brain MRI images. Our proposed method assesses the image quality of each entropy region and the global image. Experimental results demonstrate good correlation with subjective opinions of radiologists for different types and levels of quality distortions. PMID:28630885

  20. An observation of ablation effect of soft biotissue by pulsed Er:YAG laser

    NASA Astrophysics Data System (ADS)

    Zhang, Xianzeng; Xie, Shusen; Ye, Qing; Zhan, Zhenlin

    2007-02-01

    Because of its unique absorption properties in organic tissue, the pulsed Er:YAG laser has attracted considerable interest for various applications in medicine, such as dermatology, dentistry, and cosmetic surgery. However, consensus regarding the optimal parameters for clinical use of this tool has not been reached. In this paper, the ablation characteristics of soft tissue under Er:YAG laser irradiation were studied. Porcine skin tissue in vitro was used in the experiment. Laser fluences ranged from 25 mJ/mm² to 200 mJ/mm², the repetition rate was 5 Hz, and the spot size on the tissue surface was 2 mm. The ablation effects were assessed by means of an optical microscope, and ablation diameters and depths were measured with a reading microscope. It was shown that the ablation of soft biotissue by the pulsed Er:YAG laser is a threshold process. With an appropriate choice of irradiation parameters, high quality ablation with clean, sharp cuts following closely the spatial contour of the incident beam can be achieved. The curves of ablation crater diameter and depth versus laser fluence were obtained, the ablation threshold and ablation yield were then calculated, and the influence of the number of pulses fired into a crater on the crater depth was also discussed.

  1. Unified planar process for fabricating heterojunction bipolar transistors and buried-heterostructure lasers utilizing impurity-induced disordering

    NASA Astrophysics Data System (ADS)

    Thornton, R. L.; Mosby, W. J.; Chung, H. F.

    1988-12-01

    We describe results on a novel geometry of heterojunction bipolar transistor that has been realized by impurity-induced disordering. This structure is fabricated by a method that is compatible with techniques for the fabrication of low threshold current buried-heterostructure lasers. We have demonstrated this compatibility by fabricating a hybrid laser/transistor structure that operates as a laser with a threshold current of 6 mA at room temperature, and as a transistor with a current gain of 5.

  2. An efficient (t,n) threshold quantum secret sharing without entanglement

    NASA Astrophysics Data System (ADS)

    Qin, Huawang; Dai, Yuewei

    2016-04-01

    An efficient (t,n) threshold quantum secret sharing (QSS) scheme is proposed. In our scheme, a hash function is used to check for eavesdropping, and no particles need to be published, so the utilization efficiency of the particles is a full 100%. No entanglement is used in our scheme. The dealer uses single particles to encode the secret information, and the participants obtain the secret by measuring the single particles. Compared to existing schemes, our scheme is simpler and more efficient.

  3. Stratified cost-utility analysis of C-Leg versus mechanical knees: Findings from an Italian sample of transfemoral amputees.

    PubMed

    Cutti, Andrea Giovanni; Lettieri, Emanuele; Del Maestro, Martina; Radaelli, Giovanni; Luchetti, Martina; Verni, Gennero; Masella, Cristina

    2017-06-01

    The fitting rate of the C-Leg electronic knee (Otto-Bock, D) has increased steadily over the last 15 years. Current cost-utility studies, however, have not considered the patients' characteristics. To complete a cost-utility analysis involving C-Leg and mechanical knee users, "age at the time of enrollment," "age at the time of first prosthesis," and "experience with the current type of prosthesis" are assumed as non-nested stratification parameters. Retrospective cohort. In all, 70 C-Leg and 57 mechanical knee users were selected. For each stratification criterion, we evaluated the cost-utility of C-Leg versus mechanical knees by computing the incremental cost-utility ratio, that is, the ratio of the "difference in cost" and the "difference in utility" of the two technologies. Cost consisted of acquisition, maintenance, transportation, and lodging expenses. Utility was measured in terms of quality-adjusted life years, computed on the basis of participants' answers to the EQ-5D questionnaire. Patients over 40 years at the time of first prosthesis were the only group featuring an incremental cost-utility ratio (88,779 €/quality-adjusted life year) above the National Institute for Health and Care Excellence practical cost-utility threshold (54,120 €/quality-adjusted life year): C-Leg users experience a significant improvement in "mobility," but limited outcomes on "usual activities," "self-care," "depression/anxiety," and reduction of "pain/discomfort." The stratified cost-utility results have relevant clinical implications and provide useful information for practitioners in tailoring interventions. Clinical relevance: A cost-utility analysis that considered patient characteristics provided insights on the "affordability" of C-Leg compared to mechanical knees. In particular, results suggest that C-Leg has a significant impact on "mobility" for first-time prosthetic users over 40 years, but implementation of specific low-cost physical/psychosocial interventions is required to return within cost-utility thresholds.
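
    The incremental cost-utility ratio used above is a simple quotient; here is a small sketch with hypothetical per-patient costs and QALYs (only the 54,120 €/QALY threshold is taken from the abstract):

    ```python
    def incremental_cost_utility_ratio(cost_new, qaly_new, cost_old, qaly_old):
        """ICUR = (difference in cost) / (difference in utility), in euro per QALY gained."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Hypothetical per-patient figures for an electronic vs a mechanical knee.
    icur = incremental_cost_utility_ratio(cost_new=38000.0, qaly_new=1.10,
                                          cost_old=12000.0, qaly_old=0.72)
    threshold = 54120.0  # cost-utility threshold cited in the study, euro per QALY
    print(f"ICUR = {icur:.0f} euro/QALY -> {'within' if icur <= threshold else 'above'} threshold")
    ```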

  4. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.

  5. Double Threshold Energy Detection Based Cooperative Spectrum Sensing for Cognitive Radio Networks with QoS Guarantee

    NASA Astrophysics Data System (ADS)

    Hu, Hang; Yu, Hong; Zhang, Yongzhi

    2013-03-01

    Cooperative spectrum sensing, which can greatly improve the ability to discover spectrum opportunities, is regarded as an enabling mechanism for cognitive radio (CR) networks. In this paper, we employ a double-threshold detection method in the energy detector to perform spectrum sensing; only the CR users with reliable sensing information are allowed to transmit a one-bit local decision to the fusion center. Simulation results show that the proposed double-threshold detection method not only improves the sensing performance but also saves the bandwidth of the reporting channel compared with the conventional single-threshold detection method. By weighting the sensing performance and the consumption of system resources in a utility function that is maximized with respect to the number of CR users, we show that the optimal number of CR users is related to the price of these Quality-of-Service (QoS) requirements.
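
    A small sketch of double-threshold energy detection with one-bit reporting, assuming an OR fusion rule and illustrative threshold values; the actual thresholds, signal model, and fusion rule in the paper may differ.

    ```python
    import numpy as np

    def local_decision(samples, lam_low, lam_high):
        """Double-threshold energy detector for one CR user.
        Returns 1 (primary user present), 0 (absent), or None (unreliable, do not report)."""
        energy = np.sum(np.abs(samples) ** 2) / samples.size
        if energy >= lam_high:
            return 1
        if energy <= lam_low:
            return 0
        return None  # energy between thresholds: stay silent, saving reporting bandwidth

    def fusion_center(decisions, rule="OR"):
        reported = [d for d in decisions if d is not None]
        if not reported:
            return 0  # no reliable reports; illustrative fallback
        return int(any(reported)) if rule == "OR" else int(all(reported))

    # Toy example: 5 users sense a noisy channel; thresholds and signal level are illustrative.
    rng = np.random.default_rng(2)
    signal_present = True
    users = [rng.normal(0, 1, 200) + (0.8 if signal_present else 0.0) for _ in range(5)]
    decisions = [local_decision(u, lam_low=1.1, lam_high=1.4) for u in users]
    print(decisions, "->", fusion_center(decisions))
    ```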

  6. Threshold kinetics of a solar-simulator-pumped iodine laser

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Lee, Y.; Weaver, W. R.; Humes, D. H.; Lee, J. H.

    1984-01-01

    A model of the chemical kinetics of the n-C3F7I solar-simulator-pumped iodine laser is utilized to study the major kinetic processes associated with the threshold behavior of this experimental system. Excited-state diffusion to the cell wall is the dominant limiting factor below 5 torr. Excited-state recombination with the alkyl radical and quenching by the parent gas control the threshold at higher pressures. Treatment of the hyperfine splitting and uncertainty in the pressure broadening are important factors in fixing the threshold level. In spite of scatter in the experimental data caused by instabilities in the simulator's high-pressure arc, reasonable agreement is achieved between the model and experiment. The model parameters arrived at are within the uncertainty range of values found in the literature.

  7. Simulated mussel mortality thresholds as a function of mussel biomass and nutrient loading

    USGS Publications Warehouse

    Bril, Jeremy S.; Langenfeld, Kathryn; Just, Craig L.; Spak, Scott N.; Newton, Teresa

    2017-01-01

    A freshwater “mussel mortality threshold” was explored as a function of porewater ammonium (NH4+) concentration, mussel biomass, and total nitrogen (N), utilizing a numerical model calibrated with data from mesocosms with and without mussels. A mortality threshold of 2 mg-N L−1 porewater NH4+ was selected based on a study that estimated 100% mortality of juvenile Lampsilis mussels exposed to 1.9 mg-N L−1 NH4+ in equilibrium with 0.18 mg-N L−1 NH3. At the highest simulated mussel biomass (560 g m−2) and the lowest simulated influent water “food” concentration (0.1 mg-N L−1), the porewater NH4+ concentration after a 2,160 h timespan without mussels was 0.5 mg-N L−1, compared to 2.25 mg-N L−1 with mussels. Continuing these simulations while varying mussel biomass and N content yielded a mortality threshold contour that was essentially linear, which contradicts the non-linear and non-monotonic relationship suggested by Strayer (2014). Our model suggests that mussels spatially focus nutrients from the overlying water to the sediments, as evidenced by elevated porewater NH4+ in mesocosms with mussels. However, our previous work and the model utilized here show elevated concentrations of nitrite and nitrate in overlying waters as an indirect consequence of mussel activity. Even when the simulated overlying-water food availability was quite low, the mortality threshold was reached at a mussel biomass of about 480 g m−2. At a food concentration of 10 mg-N L−1, the mortality threshold was reached at a biomass of about 250 g m−2. Our model suggests the mortality threshold for juvenile Lampsilis species could be exceeded at low mussel biomass if exposed for even a short time to the highly elevated total N loadings endemic to the agricultural Midwest.

  8. Improving performance of Si/CdS micro-/nanoribbon p-n heterojunction light emitting diodes by trenched structure

    NASA Astrophysics Data System (ADS)

    Huang, Shiyuan; Wu, Yuanpeng; Ma, Xiangyang; Yang, Zongyin; Liu, Xu; Yang, Qing

    2018-05-01

    Realizing high-performance silicon-based light sources has been an unremitting pursuit for researchers. In this letter, we propose a simple structure to enhance the electroluminescence emission and reduce the injected-current threshold of silicon/CdS micro-/nanoribbon p-n heterojunction visible light emitting diodes, by fabricating a trenched structure on the silicon substrate to mount the CdS micro-/nanoribbon. A series of experiments and simulation analyses supports the validity of our mounting design. After mounting the CdS micro-/nanoribbon, the optical field confinement increases, and absorption and losses from the high-refractive-index silicon substrate are effectively reduced. Meanwhile, the sharp change of the silicon substrate near the heterojunction also facilitates the balance between the electron current and the hole current, which substantially contributes to the stable amplification of the electroluminescence emission in the CdS micro-/nanoribbon.

  9. Sub-bandgap Voltage Electroluminescence and Magneto-oscillations in a WSe2 Light-Emitting van der Waals Heterostructure.

    PubMed

    Binder, Johannes; Withers, Freddie; Molas, Maciej R; Faugeras, Clement; Nogajewski, Karol; Watanabe, Kenji; Taniguchi, Takashi; Kozikov, Aleksey; Geim, Andre K; Novoselov, Kostya S; Potemski, Marek

    2017-03-08

    We report on experimental investigations of an electrically driven WSe2-based light-emitting van der Waals heterostructure. We observe a threshold voltage for electroluminescence significantly lower than the corresponding single particle band gap of monolayer WSe2. This observation can be interpreted by considering the Coulomb interaction and a tunneling process involving excitons, well beyond the picture of independent charge carriers. An applied magnetic field reveals pronounced magneto-oscillations in the electroluminescence of the free exciton emission intensity with a 1/B periodicity. This effect is ascribed to a modulation of the tunneling probability resulting from the Landau quantization in the graphene electrodes. A sharp feature in the differential conductance indicates that the Fermi level is pinned and allows for an estimation of the acceptor binding energy.

  10. Structural diversity of supercoiled DNA

    PubMed Central

    Irobalieva, Rossitza N.; Fogg, Jonathan M.; Catanese, Daniel J.; Sutthibutpong, Thana; Chen, Muyuan; Barker, Anna K.; Ludtke, Steven J.; Harris, Sarah A.; Schmid, Michael F.; Chiu, Wah; Zechiedrich, Lynn

    2015-01-01

    By regulating access to the genetic code, DNA supercoiling strongly affects DNA metabolism. Despite its importance, however, much about supercoiled DNA (positively supercoiled DNA, in particular) remains unknown. Here we use electron cryo-tomography together with biochemical analyses to investigate structures of individual purified DNA minicircle topoisomers with defined degrees of supercoiling. Our results reveal that each topoisomer, negative or positive, adopts a unique and surprisingly wide distribution of three-dimensional conformations. Moreover, we uncover striking differences in how the topoisomers handle torsional stress. As negative supercoiling increases, bases are increasingly exposed. Beyond a sharp supercoiling threshold, we also detect exposed bases in positively supercoiled DNA. Molecular dynamics simulations independently confirm the conformational heterogeneity and provide atomistic insight into the flexibility of supercoiled DNA. Our integrated approach reveals the three-dimensional structures of DNA that are essential for its function. PMID:26455586

  11. Structural diversity of supercoiled DNA

    NASA Astrophysics Data System (ADS)

    Irobalieva, Rossitza N.; Fogg, Jonathan M.; Catanese, Daniel J.; Sutthibutpong, Thana; Chen, Muyuan; Barker, Anna K.; Ludtke, Steven J.; Harris, Sarah A.; Schmid, Michael F.; Chiu, Wah; Zechiedrich, Lynn

    2015-10-01

    By regulating access to the genetic code, DNA supercoiling strongly affects DNA metabolism. Despite its importance, however, much about supercoiled DNA (positively supercoiled DNA, in particular) remains unknown. Here we use electron cryo-tomography together with biochemical analyses to investigate structures of individual purified DNA minicircle topoisomers with defined degrees of supercoiling. Our results reveal that each topoisomer, negative or positive, adopts a unique and surprisingly wide distribution of three-dimensional conformations. Moreover, we uncover striking differences in how the topoisomers handle torsional stress. As negative supercoiling increases, bases are increasingly exposed. Beyond a sharp supercoiling threshold, we also detect exposed bases in positively supercoiled DNA. Molecular dynamics simulations independently confirm the conformational heterogeneity and provide atomistic insight into the flexibility of supercoiled DNA. Our integrated approach reveals the three-dimensional structures of DNA that are essential for its function.

  12. Sharp increase in central Oklahoma seismicity 2009-2014 induced by massive wastewater injection

    USGS Publications Warehouse

    Keranen, Kathleen M.; Abers, Geoffrey A.; Weingarten, Matthew; Bekins, Barbara A.; Ge, Shemin

    2014-01-01

    Unconventional oil and gas production provides a rapidly growing energy source; however, high-producing states in the United States, such as Oklahoma, face sharply rising numbers of earthquakes. Subsurface pressure data required to unequivocally link earthquakes to injection are rarely accessible. Here we use seismicity and hydrogeological models to show that distant fluid migration from high-rate disposal wells in Oklahoma is likely responsible for the largest swarm. Earthquake hypocenters occur within the disposal formations and the upper basement, between 2 and 5 km depth. The modeled fluid pressure perturbation propagates throughout the same depth range and tracks earthquakes to distances of 35 km, with a triggering threshold of ~0.07 MPa. Although thousands of disposal wells may operate aseismically, four of the highest-rate wells likely induced 20% of 2008-2013 central US seismicity.

  13. New technique for excitation of bulk and surface spin waves in ferromagnets

    NASA Astrophysics Data System (ADS)

    Bogacz, S. A.; Ketterson, J. B.

    1985-09-01

    A meander-line magnetic transducer is discussed in the context of bulk and surface spin-wave generation in ferromagnets. The magnetic field created by the transducer was calculated in closed analytic form for this model. The linear response of the ferromagnet to the inhomogeneous surface disturbance of arbitrary ω and k was obtained as a self-consistent solution of the Bloch equation of motion and the Maxwell equations, subject to appropriate boundary conditions. In particular, the energy flux through the boundary displays a sharp resonant-like absorption maximum concentrated at the frequency of the magnetostatic Damon-Eshbach (DE) surface mode; furthermore, the energy transfer spectrum is cut off abruptly below the threshold frequency of the bulk spin waves. The application of the meander line to the spin diffusion problem in NMR is also discussed.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siddiqui, Afzal; Marnay, Chris

    This paper examines a California-based microgrid's decision to invest in a distributed generation (DG) unit that operates on natural gas. While the long-term natural gas generation cost is stochastic, we initially assume that the microgrid may purchase electricity at a fixed retail rate from its utility. Using the real options approach, we find natural gas generating cost thresholds that trigger DG investment. Furthermore, the consideration of operational flexibility by the microgrid accelerates DG investment, while the option to disconnect entirely from the utility is not attractive. By allowing the electricity price to be stochastic, we next determine an investment threshold boundary and find that high electricity price volatility relative to that of the natural gas generating cost delays investment while simultaneously increasing the value of the investment. We conclude by using this result to find the implicit option value of the DG unit.

  15. Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks.

    PubMed

    Zhang, Jing; Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho

    2017-09-15

    In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance of nodes between receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy-efficiency proportional fairness in WPCNs. The main idea is to use stochastic geometry to derive the mean proportional-fairness utility function with respect to the user association probability and the receive threshold. Subsequently, we prove that the relaxed proportional-fairness utility function is concave in the user association probability and the receive threshold, respectively. In addition, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm can obtain a result close to the optimal energy-efficiency proportional fairness with a significant reduction in computational complexity.

  16. Optimal Energy Efficiency Fairness of Nodes in Wireless Powered Communication Networks

    PubMed Central

    Zhou, Qingjie; Ng, Derrick Wing Kwan; Jo, Minho

    2017-01-01

    In wireless powered communication networks (WPCNs), it is essential to study energy efficiency fairness in order to evaluate the balance of nodes between receiving information and harvesting energy. In this paper, we propose an efficient iterative algorithm for optimal energy-efficiency proportional fairness in WPCNs. The main idea is to use stochastic geometry to derive the mean proportional-fairness utility function with respect to the user association probability and the receive threshold. Subsequently, we prove that the relaxed proportional-fairness utility function is concave in the user association probability and the receive threshold, respectively. In addition, a sub-optimal algorithm exploiting an alternating optimization approach is proposed. Through numerical simulations, we demonstrate that our sub-optimal algorithm can obtain a result close to the optimal energy-efficiency proportional fairness with a significant reduction in computational complexity. PMID:28914818

  17. Cost effectiveness of a pragmatic exercise intervention (EXIMS) for people with multiple sclerosis: economic evaluation of a randomised controlled trial.

    PubMed

    Tosh, J; Dixon, S; Carter, A; Daley, A; Petty, J; Roalfe, A; Sharrack, B; Saxton, J M

    2014-07-01

    Exercise is a safe, non-pharmacological adjunctive treatment for people with multiple sclerosis, but cost-effective approaches to implementing exercise within health care settings are needed. The objective of this paper is to assess the cost effectiveness of a pragmatic exercise intervention in conjunction with usual care compared to usual care only in people with mild to moderate multiple sclerosis. A cost-utility analysis of a pragmatic randomised controlled trial over nine months of follow-up was conducted. A total of 120 people with multiple sclerosis were randomised (1:1) to the intervention or usual care. Exercising participants received 18 supervised and 18 home exercise sessions over 12 weeks. The primary outcome for the cost-utility analysis was the incremental cost per quality-adjusted life year (QALY) gained, calculated using utilities measured by the EQ-5D questionnaire. The incremental cost-effectiveness ratio of the intervention was £10,137 per QALY gained compared to usual care. The probability of being cost effective at a £20,000 per QALY threshold was 0.75, rising to 0.78 at a £30,000 per QALY threshold. The pragmatic exercise intervention is highly likely to be cost effective at currently established thresholds, and there is scope for it to be tailored to particular sub-groups of patients or services to reduce its cost impact. © The Author(s) 2013.

  18. Viral Load Criteria and Threshold Optimization to Improve HIV Incidence Assay Characteristics - A CEPHIA Analysis

    PubMed Central

    Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex

    2016-01-01

    Objective Assays for classifying HIV infections as ‘recent’ or ‘non-recent’ for incidence surveillance fail to simultaneously achieve large mean durations of ‘recent’ infection (MDRIs) and low ‘false-recent’ rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning thresholds used to dichotomize quantitative measures, is explored. Design The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods MDRIs were estimated using regression, and FRRs as observed ‘recent’ proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in scenarios considered. Conclusions The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561

  19. Utility of Decision Rules for Transcutaneous Bilirubin Measurements

    PubMed Central

    Burgos, Anthony E.; Flaherman, Valerie; Chung, Esther K.; Simpson, Elizabeth A.; Goyal, Neera K.; Von Kohorn, Isabelle; Dhepyasuwan, Niramol

    2016-01-01

    BACKGROUND: Transcutaneous bilirubin (TcB) meters are widely used for screening newborns for jaundice, with a total serum bilirubin (TSB) measurement indicated when the TcB value is classified as “positive” by using a decision rule. The goal of our study was to assess the clinical utility of 3 recommended TcB screening decision rules. METHODS: Paired TcB/TSB measurements were collected at 34 newborn nursery sites. At 27 sites (sample 1), newborns were routinely screened with a TcB measurement. For sample 2, sites that typically screen with TSB levels also obtained a TcB measurement for the study. Three decision rules to define a positive TcB measurement were evaluated: ≥75th percentile on the Bhutani nomogram, 70% of the phototherapy level, and within 3 mg/dL of the phototherapy threshold. The primary outcome was a TSB level at/above the phototherapy threshold. The rate of false-negative TcB screens and percentage of blood draws avoided were calculated for each decision rule. RESULTS: For sample 1, data were analyzed on 911 paired TcB-TSB measurements from a total of 8316 TcB measurements. False-negative rates were <10% with all decision rules; none identified all 31 newborns with a TSB level at/above the phototherapy threshold. The percentage of blood draws avoided ranged from 79.4% to 90.7%. In sample 2, each rule correctly identified all 8 newborns with TSB levels at/above the phototherapy threshold. CONCLUSIONS: Although all of the decision rules can be used effectively to screen newborns for jaundice, each will “miss” some infants with a TSB level at/above the phototherapy threshold. PMID:27244792
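
    The three decision rules evaluated above can be expressed as simple boolean checks; in this sketch the hour-specific Bhutani 75th-percentile value and phototherapy threshold are supplied by the caller, and the example numbers are hypothetical.

    ```python
    def tcb_positive(tcb, bhutani_p75, photo_threshold, rule):
        """Return True if the TcB screen is 'positive' (a confirmatory TSB is indicated)
        under one of the three decision rules evaluated in the study. The 75th-percentile
        and phototherapy values are hour-specific lookups supplied by the caller."""
        if rule == "p75":              # at/above the 75th percentile on the Bhutani nomogram
            return tcb >= bhutani_p75
        if rule == "70pct":            # at/above 70% of the phototherapy level
            return tcb >= 0.70 * photo_threshold
        if rule == "within3":          # within 3 mg/dL of the phototherapy threshold
            return tcb >= photo_threshold - 3.0
        raise ValueError("unknown rule")

    # Hypothetical newborn: TcB 9.8 mg/dL, hour-specific P75 of 9.5 and phototherapy level of 15.
    for r in ("p75", "70pct", "within3"):
        print(r, tcb_positive(9.8, bhutani_p75=9.5, photo_threshold=15.0, rule=r))
    ```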

  20. Characterization of Two Subsurface H2-Utilizing Bacteria, Desulfomicrobium hypogeium sp. nov. and Acetobacterium psammolithicum sp. nov., and Their Ecological Roles

    PubMed Central

    Krumholz, Lee R.; Harris, Steve H.; Tay, Stephen T.; Suflita, Joseph M.

    1999-01-01

    We examined the relative roles of acetogenic and sulfate-reducing bacteria in H2 consumption in a previously characterized subsurface sandstone ecosystem. Enrichment cultures originally inoculated with ground sandstone material obtained from a Cretaceous formation in central New Mexico were grown with hydrogen in a mineral medium supplemented with 0.02% yeast extract. Sulfate reduction and acetogenesis occurred in these cultures, and the two most abundant organisms carrying out the reactions were isolated. Based on 16S rRNA analysis data and on substrate utilization patterns, these organisms were named Desulfomicrobium hypogeium sp. nov. and Acetobacterium psammolithicum sp. nov. The steady-state H2 concentrations measured in sandstone-sediment slurries (threshold concentration, 5 nM), in pure cultures of sulfate reducers (threshold concentration, 2 nM), and in pure cultures of acetogens (threshold concentrations 195 to 414 nM) suggest that sulfate reduction is the dominant terminal electron-accepting process in the ecosystem examined. In an experiment in which direct competition for H2 between D. hypogeium and A. psammolithicum was examined, sulfate reduction was the dominant process. PMID:10347005

  1. A novel automated instrument designed to determine photosensitivity thresholds (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Aguilar, Mariela C.; Gonzalez, Alex; Rowaan, Cornelis; De Freitas, Carolina; Rosa, Potyra R.; Alawa, Karam; Lam, Byron L.; Parel, Jean-Marie A.

    2016-03-01

    As there is no clinically available instrument to systematically and reliably determine the photosensitivity thresholds of patients with dry eyes, blepharospasm, migraines, traumatic brain injuries, and genetic disorders such as achromatopsia, retinitis pigmentosa, and other retinal dysfunctions, a computer-controlled optoelectronic system was designed. The BPEI Photosensitivity System provides light stimuli from a concave bi-cupola array of 210 white LEDs, with intensity varying from 1 to 32,000 lux. The system can use either a normal or an enhanced testing mode for subjects with low light tolerance. The automated instrument adjusts the intensity of each light stimulus, and the subject is instructed to indicate discomfort by pressing a hand-held button. Reliability of the responses is tracked during the test. The photosensitivity threshold is then calculated after 10 response reversals. In a preliminary study, we demonstrated that subjects suffering from achromatopsia experienced lower photosensitivity thresholds than normal subjects. Hence, the system can safely and reliably determine the photosensitivity thresholds of healthy and light-sensitive subjects by detecting and quantifying individual differences. Future studies will be performed with this system to determine the photosensitivity threshold differences between normal subjects and subjects suffering from other conditions that affect light sensitivity.
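
    A sketch of a reversal-based staircase of the kind described above, assuming a simple up-down rule with multiplicative steps and taking the threshold as the mean of the reversal intensities; the step size, starting point, and estimator are assumptions rather than the instrument's documented settings.

    ```python
    import numpy as np

    def staircase_threshold(respond_discomfort, start_lux=100.0, step=1.25,
                            n_reversals=10, lo=1.0, hi=32000.0, max_trials=500):
        """Adaptive staircase: raise intensity after comfortable responses and lower it
        after discomfort; stop after n_reversals and average the reversal intensities.
        respond_discomfort(intensity) -> bool models the subject's button press."""
        intensity, going_up = start_lux, True
        reversals = []
        for _ in range(max_trials):
            uncomfortable = respond_discomfort(intensity)
            next_up = not uncomfortable
            if next_up != going_up:          # direction change = one reversal
                reversals.append(intensity)
                going_up = next_up
                if len(reversals) >= n_reversals:
                    break
            intensity = float(np.clip(intensity * (step if next_up else 1.0 / step), lo, hi))
        return float(np.mean(reversals))

    # Simulated observer with a true discomfort threshold near 4000 lux (plus response noise).
    rng = np.random.default_rng(3)
    subject = lambda lux: lux * rng.lognormal(0.0, 0.05) > 4000.0
    print(staircase_threshold(subject))
    ```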

  2. Low power ovonic threshold switching characteristics of thin GeTe6 films using conductive atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manivannan, Anbarasu, E-mail: anbarasu@iiti.ac.in, E-mail: ranjith@iith.ac.in; Sahu, Smriti; Myana, Santosh Kumar

    2014-12-15

    Minimizing the dimensions of the electrode could directly impact the energy-efficient threshold switching and programming characteristics of phase change memory devices. A ∼12–15 nm AFM probe tip was employed as one of the electrodes for a systematic study of threshold switching of as-deposited amorphous GeTe6 thin films. This configuration enables low power threshold switching with an extremely low steady-state current in the on state of 6–8 nA. Analysis of over 48 different probe locations on the sample reveals a stable Ovonic threshold switching behavior at a threshold voltage V_TH of 2.4 ± 0.5 V, and the off state was retained below a holding voltage V_H of 0.6 ± 0.1 V. All these probe locations exhibit repeatable on-off transitions for more than 175 pulses at each location. Furthermore, by utilizing longer biasing voltages while scanning, a plausible nano-scale control over the phase change behavior from the as-deposited amorphous to the crystalline phase was studied.

  3. Magnetic resonance electrical impedance tomography (MREIT) based on the solution of the convection equation using FEM with stabilization.

    PubMed

    Oran, Omer Faruk; Ider, Yusuf Ziya

    2012-08-21

    Most algorithms for magnetic resonance electrical impedance tomography (MREIT) concentrate on reconstructing the internal conductivity distribution of a conductive object from the Laplacian of only one component of the magnetic flux density (∇²B(z)) generated by the internal current distribution. In this study, a new algorithm is proposed to solve this ∇²B(z)-based MREIT problem which is mathematically formulated as the steady-state scalar pure convection equation. Numerical methods developed for the solution of the more general convection-diffusion equation are utilized. It is known that the solution of the pure convection equation is numerically unstable if sharp variations of the field variable (in this case conductivity) exist or if there are inconsistent boundary conditions. Various stabilization techniques, based on introducing artificial diffusion, are developed to handle such cases and in this study the streamline upwind Petrov-Galerkin (SUPG) stabilization method is incorporated into the Galerkin weighted residual finite element method (FEM) to numerically solve the MREIT problem. The proposed algorithm is tested with simulated and also experimental data from phantoms. Successful conductivity reconstructions are obtained by solving the related convection equation using the Galerkin weighted residual FEM when there are no sharp variations in the actual conductivity distribution. However, when there is noise in the magnetic flux density data or when there are sharp variations in conductivity, it is found that SUPG stabilization is beneficial.

  4. Dissolved organic phosphorus utilization and alkaline phosphatase activity of the dinoflagellate Gymnodinium impudicum isolated from the South Sea of Korea

    NASA Astrophysics Data System (ADS)

    Oh, Seok Jin; Kwon, Hyeong Kyu; Noh, Il Hyeon; Yang, Han-Soeb

    2010-09-01

    This study investigated alkaline phosphatase (APase) activity and dissolved organic and inorganic phosphorus utilization by the harmful dinoflagellate Gymnodinium impudicum (Fraga et Bravo) Hansen et Moestrup isolated from the South Sea of Korea. Under conditions of limited phosphorus, observation of growth kinetics in batch culture yielded a maximum growth rate (μmax) of 0.41 /day and a half saturation constant (Ks) of 0.71 μM. In time-course experiments, APase was induced as dissolved inorganic phosphorus (DIP) concentrations fell below 0.83 μM, a threshold near the estimated Ks; APase activity increased with further DIP depletion to a maximum of 0.70 pmol/cell/h in the senescent phase. Thus, Ks may be an important index of the threshold DIP concentration for APase induction. G. impudicum utilizes a wide variety of dissolved organic phosphorus compounds in addition to DIP. These results suggest that DIP limitation in the Southern Sea of Korea may have led to the spread of G. impudicum along with the harmful dinoflagellate Cochlodinium polykrikoides in recent years.
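
    The reported maximum growth rate and half-saturation constant correspond to a Monod-type (Michaelis-Menten) dependence of the specific growth rate on DIP; the short sketch below evaluates that curve at a few concentrations, assuming the standard Monod form (the abstract reports the constants but not the model equation explicitly).

    ```python
    def monod_growth_rate(dip_um, mu_max=0.41, k_s=0.71):
        """Monod growth kinetics: specific growth rate (per day) as a function of the
        dissolved inorganic phosphorus concentration (uM), using the reported
        mu_max = 0.41 /day and K_s = 0.71 uM for G. impudicum."""
        return mu_max * dip_um / (k_s + dip_um)

    # Evaluate at a few DIP levels, including the ~0.83 uM APase induction threshold.
    for dip in (0.1, 0.71, 0.83, 5.0):
        print(f"DIP = {dip:4.2f} uM -> mu = {monod_growth_rate(dip):.3f} /day")
    ```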

  5. Development of water level estimation algorithms using SARAL/Altika dataset and validation over the Ukai reservoir, India

    NASA Astrophysics Data System (ADS)

    Chander, Shard; Ganguly, Debojyoti

    2017-01-01

    Water level was estimated, using the AltiKa radar altimeter onboard the SARAL satellite, over the Ukai reservoir using algorithms modified specifically for inland water bodies. The methodology was based on waveform classification, waveform retracking, and dedicated inland range-correction algorithms. The 40-Hz waveforms were classified based on linear discriminant analysis and a Bayesian classifier. Waveforms were retracked using the Brown, Ice-2, threshold, and offset center of gravity methods. Retracking algorithms were applied to the full waveform and to subwaveforms (containing only one leading edge) to estimate the improvement in the retrieved range. European Centre for Medium-Range Weather Forecasts (ECMWF) operational and ECMWF re-analysis pressure fields, as well as global ionosphere maps, were used to estimate the range corrections accurately. Microwave and optical images were used to estimate the extent of the water body and the altimeter track location. Four global positioning system (GPS) field trips were conducted on the same day as the SARAL pass using two dual-frequency GPS receivers. One GPS was mounted close to the dam in static mode and the other was used on a moving vehicle within the reservoir in kinematic mode. An in situ gauge dataset was provided by the Ukai dam authority for the time period January 1972 to March 2015. The altimeter-retrieved water levels were then validated against the GPS survey and the in situ gauge dataset. With a good selection of the virtual station (based on waveform classification and backscattering coefficient), the Ice-2 retracker and the subwaveform retracker both perform well, with an overall root-mean-square error of <15 cm. The results support that the AltiKa dataset, owing to the smaller footprint and the sharp trailing edge of the Ka-band waveform, can be utilized for more accurate water level information over inland water bodies.
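
    A minimal sketch of a threshold retracker of the kind named above: the retracking gate is the point where the leading edge first crosses a chosen fraction of the peak amplitude above the noise floor. The synthetic waveform, threshold fraction, nominal tracking gate, and gate-to-range conversion are all placeholders, not AltiKa processing constants.

    ```python
    import numpy as np

    def threshold_retrack(waveform, frac=0.5, noise_gates=10):
        """Threshold retracking of an altimeter waveform: return the (fractional) gate
        index where the leading edge first crosses frac * (peak - noise) + noise."""
        noise = waveform[:noise_gates].mean()
        level = noise + frac * (waveform.max() - noise)
        above = np.where(waveform >= level)[0]
        if above.size == 0:
            return float(np.argmax(waveform))
        k = int(above[0])
        if k == 0:
            return 0.0
        # Linear interpolation between the two gates bracketing the crossing.
        return (k - 1) + (level - waveform[k - 1]) / (waveform[k] - waveform[k - 1])

    # Synthetic Ka-band-like waveform: flat noise floor, sharp leading edge, decaying trail.
    gates = np.arange(128)
    wf = 0.05 + 1.0 / (1.0 + np.exp(-(gates - 60) / 1.5)) * np.exp(-np.maximum(gates - 60, 0) / 80.0)
    gate = threshold_retrack(wf, frac=0.5)
    gate_spacing_m = 0.3          # placeholder: instrument-specific gate-to-range conversion
    nominal_gate = 64.0           # placeholder nominal tracking gate
    print(gate, (gate - nominal_gate) * gate_spacing_m)
    ```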

  6. Idealized numerical studies of gravity wave alteration in the tropopause region

    NASA Astrophysics Data System (ADS)

    Bense, Vera; Spichtinger, Peter

    2017-04-01

    When travelling through the tropopause region, characterised by strong gradients in static stability, wind shear, and trace gases, the properties of gravity waves often change drastically. Within this work, the EULAG model (Prusa et al., 2008) is used to provide an idealized setup for sensitivity studies on these modifications. The characteristics of the tropopause are introduced by specifying environmental profiles of Brunt-Väisälä frequency and horizontal wind speed, partly extracted from measurement and reanalysis data. Tropospheric and stratospheric wave spectra extracted for flows under varying tropopause sharpness are analysed. In particular, different regimes of transmission behaviour are classified for a series of Brunt-Väisälä frequency profiles showing a tropopause inversion layer (TIL, see e.g. Birner et al., 2002). Furthermore, this study focusses on the comparison of transmission coefficients deduced from numerical simulations with values derived from asymptotic analysis of the governing equations, and investigates where the thresholds of linear behaviour lie for the respective setups. Wave generation is implemented in the model both through topography at the lower boundary of the model domain and through the prescription of wave packets at the initialization of the simulations. References: Prusa, J. M., P. K. Smolarkiewicz, and A. A. Wyszogrodzki, 2008: EULAG, a computational model for multiscale flows, Computers & Fluids 37, 1193-1207. Birner, T., A. Doernbrack, and U. Schumann, 2002: How sharp is the tropopause at midlatitudes?, Geophys. Res. Lett., 29, 1700, doi:10.1029/2002GL015142.

  7. Corner smoothing of 2D milling toolpath using b-spline curve by optimizing the contour error and the feedrate

    NASA Astrophysics Data System (ADS)

    Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François

    2018-05-01

    In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality of the part. To optimize it, the machining time needs to be as low as possible and the quality needs to meet the requirements. For a 2D milling toolpath defined by sharp corners, the reachable feedrate differs from the programmed feedrate because of the kinematic limits of the motor drives. This phenomenon leads to a loss of productivity. Smoothing the toolpath allows the machining time to be reduced significantly, but the dimensional accuracy should not be neglected. Therefore, a way to address the problem of optimizing a toolpath in part manufacturing is to take into account both the manufacturing time and the part quality. On one hand, maximizing the feedrate will minimize the manufacturing time; on the other hand, the maximum contour error needs to be kept under a threshold to meet the quality requirements. This paper presents a method to optimize sharp corner smoothing using b-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and the difference between the programmed feedrate and an estimate of the reachable feedrate, where the estimate of the reachable feedrate is based on geometrical information. Some simulation results are presented in the paper and the machining times are compared in each case.
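
    A toy sketch of the trade-off described above, using a quadratic Bezier blend as a simplified stand-in for the b-spline: the contour error is the distance from the corner to the curve, the reachable feedrate is bounded by curvature and an acceleration limit, and a penalized objective trades the two off. All constants, units, and the 1-D search over the blend length are assumptions; the paper optimizes the b-spline control points directly.

    ```python
    import numpy as np

    def corner_blend_metrics(p0, corner, p2, a_max=2000.0):
        """Quadratic Bezier blend B(t) = (1-t)^2 p0 + 2t(1-t) corner + t^2 p2 over a corner.
        Returns (contour_error, reachable_feedrate): the error is the distance from the
        corner to the curve midpoint, and the feedrate bound comes from v^2 * kappa <= a_max
        (a_max in mm/s^2), using the curvature at the midpoint."""
        p0, c, p2 = map(np.asarray, (p0, corner, p2))
        mid = 0.25 * p0 + 0.5 * c + 0.25 * p2          # B(0.5)
        contour_error = np.linalg.norm(mid - c)
        d1 = p2 - p0                                   # B'(0.5) for a quadratic Bezier
        d2 = 2.0 * (p0 - 2.0 * c + p2)                 # B'' (constant for a quadratic)
        kappa = abs(d1[0] * d2[1] - d1[1] * d2[0]) / np.linalg.norm(d1) ** 3
        v_reach = np.sqrt(a_max / kappa) if kappa > 0 else np.inf
        return contour_error, v_reach

    def toy_objective(blend_len, corner=(10.0, 0.0), f_prog=100.0, tol=0.05, w=1e-3):
        # Shrinking the blend keeps the contour error small but lowers the reachable feedrate.
        p0 = (corner[0] - blend_len, corner[1])
        p2 = (corner[0], corner[1] + blend_len)        # 90-degree corner, mm units, mm/s feedrates
        err, v = corner_blend_metrics(p0, corner, p2)
        penalty = 1e6 if err > tol else 0.0            # hard bound on the contour error
        return w * max(f_prog - v, 0.0) + penalty

    lengths = np.linspace(0.02, 1.0, 50)
    best = lengths[np.argmin([toy_objective(L) for L in lengths])]
    print(f"best blend length = {best:.3f} mm")
    ```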

  8. Quantitative sensory testing of temperature, pain, and touch in adults with Down syndrome.

    PubMed

    de Knegt, Nanda; Defrin, Ruth; Schuengel, Carlo; Lobbezoo, Frank; Evenhuis, Heleen; Scherder, Erik

    2015-12-01

    The spinothalamic pathway mediates sensations of temperature, pain, and touch. These functions seem impaired in children with Down syndrome (DS), but have not been extensively examined in adults. The objective of the present study was to compare spinothalamic-mediated sensory functions between adults with DS and adults from the general population, and to examine in the DS group the relationship between the sensory functions and level of intellectual functioning. Quantitative sensory testing (QST) was performed in 188 adults with DS (mean age 37.5 years) and 142 age-matched control participants (median age 40.5 years). Temperature, pain, and touch were evaluated with tests for cold-warm discrimination, sharp-dull discrimination (pinprick), and tactile threshold, respectively. Level of intellectual functioning was estimated with the Social Functioning Scale for Intellectual Disability (intellectual disability level) and the Wechsler Preschool and Primary Scale of Intelligence--Revised (intelligence level). Overall, the difference in spinothalamic-mediated sensory functions between the DS and control groups was not statistically significant. However, DS participants with a lower intelligence level performed significantly worse on the sharp-dull discrimination test than DS participants with a higher intelligence level (adjusted p=.006) and control participants (adjusted p=.017). It was concluded that intellectual functioning level is an important factor to take into account in the assessment of spinothalamic-mediated sensory functioning in adults with DS: a lower level could coincide with impaired sensory functioning, but could also hamper QST assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Sharps Injuries and Other Blood and Body Fluid Exposures Among Home Health Care Nurses and Aides

    PubMed Central

    Markkanen, Pia K.; Galligan, Catherine J.; Kriebel, David; Chalupka, Stephanie M.; Kim, Hyun; Gore, Rebecca J.; Sama, Susan R.; Laramie, Angela K.; Davis, Letitia

    2009-01-01

    Objectives. We quantified risks of sharp medical device (sharps) injuries and other blood and body fluid exposures among home health care nurses and aides, identified risk factors, assessed the use of sharps with safety features, and evaluated underreporting in workplace-based surveillance. Methods. We conducted a questionnaire survey and workplace-based surveillance, collaborating with 9 home health care agencies and 2 labor unions from 2006 to 2007. Results. Approximately 35% of nurses and 6.4% of aides had experienced at least 1 sharps injury during their home health care career; corresponding figures for other blood and body fluid exposures were 15.1% and 6.7%, respectively. Annual sharps injuries incidence rates were 5.1 per 100 full-time equivalent (FTE) nurses and 1.0 per 100 FTE aides. Medical procedures contributing to sharps injuries were injecting medications, administering fingersticks and heelsticks, and drawing blood. Other contributing factors were sharps disposal, contact with waste, and patient handling. Sharps with safety features frequently were not used. Underreporting of sharps injuries to the workplace-based surveillance system was estimated to be about 50%. Conclusions. Sharps injuries and other blood and body fluid exposures are serious hazards for home health care nurses and aides. Improvements in hazard intervention are needed. PMID:19890177

  10. Needlestick and Sharps Injuries in Dermatologic Surgery: A Review of Preventative Techniques and Post-exposure Protocols

    PubMed Central

    Monroe, Holly; Orengo, Ida; Rosen, Theodore

    2016-01-01

    Background: Needlestick and sharps injuries are the leading causes of morbidity in the dermatologic field. Among medical specialties, surgeons and dermatologists have the highest rates of needlestick and sharps injuries. The high rates of needlestick and sharps injuries in dermatology not only apply to physicians, but also to nurses, physician assistants, and technicians in the dermatologic field. Needlestick and sharps injuries are of great concern due to the monetary, opportunity, social, and emotional costs associated with their occurrence. Objective: A review of preventative techniques and post-exposure protocols for the major types of sharps injuries encountered in dermatologic practice. Design: The terms “needle-stick injury,” “sharps injury,” “dermatologic surgery,” “post-exposure prophylaxis,” and “health-care associated injury” were used in combinations to search the PubMed database. Relevant studies were reviewed for validity and included. Results: The authors discuss the major types of sharps injuries that occur in the dermatologic surgery setting and summarize preventative techniques with respect to each type of sharps injury. The authors also summarize and discuss relevant post-exposure protocols in the event of a sharps injury. Conclusion: The adoption of the discussed methods, techniques, practices, and attire can result in the elimination of the vast majority of dermatologic sharps injuries. PMID:27847548

  11. Dynamics of Transformation from Segregation to Mixed Wealth Cities

    PubMed Central

    Sahasranaman, Anand; Jensen, Henrik Jeldtoft

    2016-01-01

    We model the dynamics of a variation of the Schelling model for agents described simply by a continuously distributed variable, wealth. Agent movement is not dictated by agent choice as in the classic Schelling model, but by their wealth status. Agents move to neighborhoods where their wealth is not less than that of some proportion of their neighbors, the threshold level. As in the case of the classic Schelling model, we find here that wealth-based segregation occurs and persists. However, when uncertainty is introduced into the decision to move (that is, when agents are allowed, with some probability, to move even though the threshold condition is contravened), we find that even for small proportions of such disallowed moves, the dynamics no longer yield segregation but instead sharply transition into a persistent mixed wealth distribution, consistent with the empirical findings of Benenson, Hatna, and Or. We investigate the nature of this sharp transformation, and find that it arises from a non-linear relationship between allowed moves (moves where the threshold condition is satisfied) and disallowed moves (moves where it is not). For small increases in disallowed moves, there is a rapid corresponding increase in allowed moves (before the rate of increase tapers off and tends to zero), and it is the effect of this non-linearity on the dynamics of the system that causes the rapid transition from a segregated to a mixed wealth state. The contravention of the tolerance condition, sanctioning disallowed moves, could be interpreted as public policy intervention to drive de-segregation. Our finding therefore suggests that it might require limited, but continually implemented, public intervention, just sufficient to enable a small, persistently sustained fraction of disallowed moves, to trigger the dynamics that drive the transformation from a segregated to a mixed equilibrium. PMID:27861578
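
    A minimal sketch of the wealth-based move rule described above, on a ring of agents with uniformly distributed wealth, a tolerance threshold f, and a probability eps of permitting otherwise disallowed moves; the neighborhood size, the swap-based move, and the parameter values are illustrative assumptions, not the authors' exact implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 500, 4          # agents on a ring, K nearest neighbours on each side
    f, eps = 0.5, 0.02     # tolerance threshold, probability of a disallowed move
    wealth = rng.random(N) # continuously distributed wealth

    def neighbours(i):
        return [(i + d) % N for d in range(-K, K + 1) if d != 0]

    def satisfied(i, w):
        nb = w[neighbours(i)]
        # agent i is content if its wealth is not less than that of a fraction f of neighbours
        return np.mean(w[i] >= nb) >= f

    for step in range(200000):
        i, j = rng.integers(N, size=2)           # candidate swap of two agents' locations
        w = wealth.copy()
        w[i], w[j] = w[j], w[i]
        allowed = satisfied(i, w) and satisfied(j, w)
        if allowed or rng.random() < eps:        # eps sanctions a disallowed move
            wealth = w

    # crude segregation measure: correlation of wealth with neighbourhood mean wealth
    nb_mean = np.array([wealth[neighbours(i)].mean() for i in range(N)])
    print(np.corrcoef(wealth, nb_mean)[0, 1])
    ```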

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marnay, Chris; Siddiqui, Afzal

    This paper examines a California-based microgrid's decision to invest in a distributed generation (DG) unit fuelled by natural gas. While the long-term natural gas generation cost is stochastic, we initially assume that the microgrid may purchase electricity at a fixed retail rate from its utility. Using the real options approach, we find a natural gas generation cost threshold that triggers DG investment. Furthermore, the consideration of operational flexibility by the microgrid increases DG investment, while the option to disconnect from the utility is not attractive. By allowing the electricity price to be stochastic, we next determine an investment threshold boundary and find that high electricity price volatility relative to that of natural gas generation cost delays investment while simultaneously increasing the value of the investment. We conclude by using this result to find the implicit option value of the DG unit when two sources of uncertainty exist.

  13. Validation testing of shallow notched round-bar screening test specimens. [for the space shuttle main engine

    NASA Technical Reports Server (NTRS)

    Vroman, G. A.

    1975-01-01

    The capability of shallow-notched, round-bar, tensile specimens for screening critical environments as they affect the material fracture properties of the space shuttle main engine was tested and analyzed. Specimens containing a 0.050-inch-deep circumferential sharp notch were cyclically loaded in a 5000-psi hydrogen environment at temperatures of +70 and -15 F. Replication of test results and a marked change in cyclic life because of temperature variation demonstrated the validity of the specimen type to be utilized for screening tests.

  14. Fluoroscopy-Guided Endoscopic Removal of Foreign Bodies.

    PubMed

    Kim, Junhwan; Ahn, Ji Yong; So, Seol; Lee, Mingee; Oh, Kyunghwan; Jung, Hwoon-Yong

    2017-03-01

    In most cases of ingested foreign bodies, endoscopy is the first treatment of choice. Moreover, emergency endoscopic removal is required for sharp and pointed foreign bodies such as animal or fish bones, food boluses, and button batteries due to the increased risks of perforation, obstruction, and bleeding. Here, we presented two cases that needed emergency endoscopic removal of foreign bodies without sufficient fasting time. Foreign bodies could not be visualized by endoscopy due to food residue; therefore, fluoroscopic imaging was utilized for endoscopic removal of foreign bodies in both cases.

  15. Controlling upconversion nanocrystals for emerging applications

    NASA Astrophysics Data System (ADS)

    Zhou, Bo; Shi, Bingyang; Jin, Dayong; Liu, Xiaogang

    2015-11-01

    Lanthanide-doped upconversion nanocrystals enable anti-Stokes emission with pump intensities several orders of magnitude lower than required by conventional nonlinear optical techniques. Their exceptional properties, namely large anti-Stokes shifts, sharp emission spectra and long excited-state lifetimes, have led to a diversity of applications. Here, we review upconversion nanocrystals from the perspective of fundamental concepts and examine the technical challenges in relation to emission colour tuning and luminescence enhancement. In particular, we highlight the advances in functionalization strategies that enable the broad utility of upconversion nanocrystals for multimodal imaging, cancer therapy, volumetric displays and photonics.

  16. Efficient iris recognition by characterizing key local variations.

    PubMed

    Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin

    2004-06-01

    Unlike other biometrics such as fingerprints and face, the distinctiveness of the iris comes from its randomly distributed features. This leads to its high reliability for personal identification and, at the same time, to the difficulty of effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition by characterizing key local variations. The basic idea is that local sharp variation points, denoting the appearing or vanishing of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on the exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2,255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithm found in the current literature.
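
    The matching step described above reduces to an exclusive-OR comparison of binary codes derived from the positions of local sharp variation points. Below is a minimal sketch, assuming each iris has already been encoded as a fixed-length binary sequence; the circular-shift compensation for rotation is an illustrative addition, not necessarily the authors' exact scheme.

    ```python
    import numpy as np

    def xor_similarity(code_a, code_b, max_shift=8):
        """Similarity between two binary iris codes via exclusive OR.

        Small circular shifts of one code compensate for rotation between captures.
        Returns a value in [0, 1]; 1 means identical codes.
        """
        code_a = np.asarray(code_a, dtype=np.uint8)
        code_b = np.asarray(code_b, dtype=np.uint8)
        best_distance = min(
            np.mean(np.bitwise_xor(code_a, np.roll(code_b, s)))
            for s in range(-max_shift, max_shift + 1)
        )
        return 1.0 - best_distance

    rng = np.random.default_rng(6)
    enrolled = rng.integers(0, 2, 512, dtype=np.uint8)
    probe = enrolled.copy()
    probe[rng.integers(0, 512, 30)] ^= 1          # flip a few bits to mimic acquisition noise
    print(xor_similarity(enrolled, np.roll(probe, 3)))
    ```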

  17. Three-dimensional vibrations of cylindrical elastic solids with V-notches and sharp radial cracks

    NASA Astrophysics Data System (ADS)

    McGee, O. G.; Kim, J. W.

    2010-02-01

    This paper provides free vibration data for cylindrical elastic solids, specifically thick circular plates and cylinders with V-notches and sharp radial cracks, for which no extensive previously published database is known to exist. Bending moment and shear force singularities are known to exist at the sharp reentrant corner of a thick V-notched plate under transverse vibratory motion, and three-dimensional (3-D) normal and transverse shear stresses are known to exist at the sharp reentrant terminus edge of a V-notched cylindrical elastic solid under 3-D free vibration. A theoretical analysis is done in this work utilizing a variational Ritz procedure including these essential singularity effects. The procedure incorporates a complete set of admissible algebraic-trigonometric polynomials in conjunction with an admissible set of "edge functions" that explicitly model the 3-D stress singularities which exist along a reentrant terminus edge (i.e., α>180°) of the V-notch. The first set of polynomials guarantees convergence to exact frequencies, as sufficient terms are retained. The second set of edge functions, in addition to representing the corner stress singularities, substantially accelerates the convergence of frequency solutions. This is demonstrated through extensive convergence studies that have been carried out by the investigators. Numerical analysis has been carried out and the results have been given for cylindrical elastic solids with various V-notch angles and depths. The relative depth of the V-notch is defined as (1 - c/a), and the notch angle is defined as (360° - α). For a very small notch angle (1° or less), the notch may be regarded as a "sharp radial crack." Accurate (four significant figure) frequencies are presented for a wide spectrum of notch angles (360° - α), depths (1 - c/a), and thickness ratios (a/h for plates and h/a for cylinders). An extended database of frequencies for completely free thick sectorial, semi-circular, and segmented plates and cylinders is also reported herein as interesting special cases. A generalization of the elasticity-based Ritz analysis and findings applicable here is an arbitrarily shaped V-notched cylindrical solid, being a surface traced out by a family of generatrix lines which pass through the circumference of an arbitrarily shaped V-notched directrix curve, r(θ), several of which are described for future investigations and close extensions of this work.

  18. Development and validation of a prediction model for measurement variability of lung nodule volumetry in patients with pulmonary metastases.

    PubMed

    Hwang, Eui Jin; Goo, Jin Mo; Kim, Jihye; Park, Sang Joon; Ahn, Soyeon; Park, Chang Min; Shin, Yeong-Gil

    2017-08-01

    To develop a prediction model for the variability range of lung nodule volumetry and validate the model in detecting nodule growth. For model development, 50 patients with metastatic nodules were prospectively included. Two consecutive CT scans were performed to assess volumetry for 1,586 nodules. Nodule volume, surface voxel proportion (SVP), attachment proportion (AP) and absolute percentage error (APE) were calculated for each nodule, and quantile regression analyses were performed to model the 95th percentile of APE. For validation, 41 patients who underwent metastasectomy were included. After volumetry of resected nodules, sensitivity and specificity for diagnosis of metastatic nodules were compared between two different thresholds of nodule growth determination: a uniform 25% volume change threshold and an individualized threshold calculated from the model (the estimated 95th percentile APE). SVP and AP were included in the final model: Estimated 95th percentile APE = 37.82·SVP + 48.60·AP − 10.87. In the validation session, the individualized threshold showed significantly higher sensitivity for diagnosis of metastatic nodules than the uniform 25% threshold (75.0% vs. 66.0%, P = 0.004). CONCLUSION: Estimated 95th percentile APE as an individualized threshold of nodule growth showed greater sensitivity in diagnosing metastatic nodules than a global 25% threshold. • The 95th percentile APE of a particular nodule can be predicted. • Estimated 95th percentile APE can be utilized as an individualized threshold. • More sensitive diagnosis of metastasis can be made with an individualized threshold. • Tailored nodule management can be provided during nodule growth follow-up.
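
    The 95th-percentile model above is a quantile regression. A minimal sketch with statsmodels follows, using synthetic SVP/AP/APE arrays; the coefficients recovered here are illustrative, not the published ones.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1500
    svp = rng.uniform(0.1, 0.9, n)          # surface voxel proportion
    ap = rng.uniform(0.0, 0.5, n)           # attachment proportion
    # synthetic absolute percentage error whose spread grows with SVP and AP
    ape = np.abs(rng.normal(0, 10 + 30 * svp + 40 * ap))

    X = sm.add_constant(np.column_stack([svp, ap]))
    model = sm.QuantReg(ape, X)
    res = model.fit(q=0.95)                  # model the 95th percentile of APE
    print(res.params)                        # intercept, SVP and AP coefficients

    # individualized growth threshold for a new nodule
    new_nodule = np.array([1.0, 0.45, 0.10])  # [const, SVP, AP]
    print("growth threshold (%):", new_nodule @ res.params)
    ```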

  19. Pre-Processes for Urban Areas Detection in SAR Images

    NASA Astrophysics Data System (ADS)

    Altay Açar, S.; Bayır, Ş.

    2017-11-01

    In this study, pre-processes for urban area detection in synthetic aperture radar (SAR) images are examined. These pre-processes are image smoothing, thresholding and white-coloured region determination. Image smoothing is carried out to remove noise, then thresholding is applied to obtain a binary image. Finally, candidate urban areas are detected by determining white-coloured regions. All pre-processes are applied using the developed software. Two different SAR images acquired by TerraSAR-X are used in the experimental study. The obtained results are shown visually.
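
    A minimal sketch of this pre-processing chain with scikit-image; the use of a Gaussian filter and Otsu's threshold is an assumption for illustration, since the abstract does not specify the exact smoothing and thresholding operators.

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    def candidate_urban_regions(sar_image, min_area=50):
        """Pre-processing chain: smoothing -> thresholding -> white-region extraction."""
        smoothed = filters.gaussian(sar_image, sigma=2)          # suppress speckle-like noise
        thresh = filters.threshold_otsu(smoothed)                # global threshold
        binary = smoothed > thresh                               # bright (white) pixels
        binary = morphology.remove_small_objects(binary, min_area)
        labels = measure.label(binary)                           # connected white regions
        return [r.bbox for r in measure.regionprops(labels)]     # candidate-area bounding boxes

    # usage with a synthetic image containing one bright block
    img = np.random.rand(256, 256) * 0.3
    img[100:150, 80:160] += 0.7
    print(candidate_urban_regions(img))
    ```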

  20. Progress and prospects of silicon-based design for optical phased array

    NASA Astrophysics Data System (ADS)

    Hu, Weiwei; Peng, Chao; Chang-Hasnain, Connie

    2016-03-01

    A high-speed, highly efficient, compact phase modulator array is indispensable in an optical phased array (OPA), which has been considered a promising technology for realizing flexible and efficient beam steering. In our research, two methods are presented to utilize high-contrast gratings (HCGs) as high-efficiency phase modulators. One is that the HCG possesses high-Q resonances that originate from the cancellation of leaky waves; as a result, sharp resonance peaks appear in the reflection spectrum and HCGs can be utilized as efficient phase shifters. The other is that a low-Q-mode HCG is utilized as an ultra-lightweight mirror. With MEMS technology, a small HCG displacement (~50 nm) leads to a large phase change (~1.7π). Effective beam steering is achieved in Connie Chang-Hasnain's group. On the other hand, we theoretically and experimentally investigate the system design for a silicon-based optical phased array, including the star coupler, phased array, emission elements and far-field patterns. Furthermore, a non-uniform optical phased array is presented.

  1. Commercial Building Tenant Energy Usage Data Aggregation and Privacy: Technical Appendix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livingston, Olga V.; Pulsipher, Trenton C.; Anderson, David M.

    2014-11-12

    This technical appendix accompanies report PNNL–23786 “Commercial Building Tenant Energy Usage Data Aggregation and Privacy”. The objective is to provide background information on the methods utilized in the statistical analysis of the aggregation thresholds.

  2. Coral Reef Resilience, Tipping Points and the Strength of Herbivory

    PubMed Central

    Holbrook, Sally J.; Schmitt, Russell J.; Adam, Thomas C.; Brooks, Andrew J.

    2016-01-01

    Coral reefs increasingly are undergoing transitions from coral to macroalgal dominance. Although the functional roles of reef herbivores in controlling algae are becoming better understood, identifying possible tipping points in the herbivory-macroalgae relationships has remained a challenge. Assessment of where any coral reef ecosystem lies in relation to the coral-to-macroalgae tipping point is fundamental to understanding resilience properties, forecasting state shifts, and developing effective management practices. We conducted a multi-year field experiment in Moorea, French Polynesia to estimate these properties. While we found a sharp herbivory threshold where macroalgae escape control, ambient levels of herbivory by reef fishes were well above that needed to prevent proliferation of macroalgae. These findings are consistent with previously observed high resilience of the fore reef in Moorea. Our approach can identify vulnerable coral reef systems in urgent need of management action to both forestall shifts to macroalgae and preserve properties essential for resilience. PMID:27804977

  3. Temperature Evolution of Excitonic Absorptions in Cd(1-x)Zn(x)Te Materials

    NASA Technical Reports Server (NTRS)

    Quijada, Manuel A.; Henry, Ross

    2007-01-01

    The studies consist of measuring the frequency dependent transmittance (T) and reflectance (R) above and below the optical band-gap in the UV/Visible and infrared frequency ranges for Cd(1-x)Zn(x)Te materials for x=0 and x=0.04. Measurements were also done in the temperature range from 5 to 300 K. The results show that the optical gap near 1.49 eV at 300 K increases to 1.62 eV at 5 K. Finally, we observe sharp absorption peaks near this gap energy at low temperatures. The close proximity of these peaks to the optical transition threshold suggests that they originate from the creation of bound electron-hole pairs or excitons. The decay of these excitonic absorptions may contribute to a photoluminescence and transient background response of these back-illuminated HgCdTe CCD detectors.

  4. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs

    PubMed Central

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C.; Hütt, Marc-Thorsten

    2017-01-01

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience. PMID:28186182
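
    A minimal sketch of the relative-threshold excitation rule described above, on an Erdős-Rényi random graph with three node states (susceptible, excited, refractory); the state encoding, update order and parameter values are illustrative assumptions, not the authors' exact model.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    G = nx.gnp_random_graph(200, 0.05, seed=2)
    A = nx.to_numpy_array(G)                       # adjacency matrix
    deg = A.sum(axis=1)

    S, E, R = 0, 1, 2                              # susceptible, excited, refractory
    kappa = 0.25                                   # relative excitation threshold

    state = np.full(len(G), S)
    state[rng.integers(len(G))] = E                # a single initial excitation

    active_fraction = []
    for t in range(100):
        excited_nb = A @ (state == E)              # number of excited neighbours per node
        new_state = state.copy()
        # susceptible nodes fire if the excited fraction of their neighbourhood reaches kappa
        new_state[(state == S) & (deg > 0) & (excited_nb / np.maximum(deg, 1) >= kappa)] = E
        new_state[state == E] = R                  # excited -> refractory
        new_state[state == R] = S                  # refractory -> susceptible
        state = new_state
        active_fraction.append(np.mean(state == E))

    print("mean activity:", np.mean(active_fraction))
    ```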

  5. A mathematical model of algae growth in a pelagic-benthic coupled shallow aquatic ecosystem.

    PubMed

    Zhang, Jimin; Shi, Junping; Chang, Xiaoyuan

    2018-04-01

    A coupled system of ordinary differential equations and partial differential equations is proposed to describe the interaction of pelagic algae, benthic algae and one essential nutrient in an oligotrophic shallow aquatic ecosystem with ample supply of light. The existence and uniqueness of non-negative steady states are completely determined for all possible parameter ranges, and these results characterize sharp threshold conditions for the regime shift from extinction to coexistence of pelagic and benthic algae. The influence of environmental parameters on algal biomass density, an important indicator of algal blooms, is also considered. Our studies suggest that the nutrient recycling from loss of algal biomass may be an important factor in the algal bloom process, and that the presence of benthic algae may limit the pelagic algal biomass density as they consume common resources, even if the sediment nutrient level is high.

  6. Sequential buckling of an elastic wall

    NASA Astrophysics Data System (ADS)

    Bico, Jose; Bense, Hadrien; Keiser, Ludovic; Roman, Benoit; Melo, Francisco; Abkarian, Manouk

    A beam under quasistatic compression classically buckles beyond a critical threshold. In the case of a free beam, the lowest buckling mode is selected. We investigate the case of a long ``wall'' grounded on a compliant base and compressed in the axial direction. In the case of a wall of slender rectangular cross section, the selected buckling mode adopts a nearly fixed wavelength proportional to the height of the wall. Higher compressive loads only increase the amplitude of the buckle. However, if the cross section has a sharp shape (such as an Eiffel tower profile), we observe successive buckling modes of increasing wavelength. We interpret this unusual evolution in terms of scaling arguments. At small scales, this variable periodicity might be used to develop tunable optical devices. We thank ECOS C12E07, CNRS-CONICYT, and Fondecyt Grant No. N1130922 for partially funding this work.

  7. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates.

    PubMed

    Brunel, N; Hakim, V

    1999-10-01

    We study analytically the dynamics of a network of sparsely connected inhibitory integrate-and-fire neurons in a regime where individual neurons emit spikes irregularly and at a low rate. In the limit when the number of neurons tends to infinity, the network exhibits a sharp transition between a stationary and an oscillatory global activity regime where neurons are weakly synchronized. The activity becomes oscillatory when the inhibitory feedback is strong enough. The period of the global oscillation is found to be mainly controlled by synaptic times but depends also on the characteristics of the external input. In large but finite networks, the analysis shows that global oscillations of finite coherence time generically exist both above and below the critical inhibition threshold. Their characteristics are determined as functions of system parameters in these two different regions. The results are found to be in good agreement with numerical simulations.

  8. Percolation transition in carbon composite on the basis of fullerenes and exfoliated graphite

    NASA Astrophysics Data System (ADS)

    Berezkin, V. I.; Popov, V. V.

    2018-01-01

    The electrical conductivity of a carbon composite on the basis of C60 fullerenes and exfoliated graphite is investigated in the range of relative contents of components from 0 to 100%. The samples are obtained by the thermal treatment of the initial dispersed mixtures in vacuum in the diffusion-adsorption process and their further cold pressing. The resistivity of the samples gradually increases with an increase in the fraction of fullerenes, and a sharp transition from the conductive state to the dielectric one is observed after certain concentrations of C60 are reached. The interpretation of the results within percolation theory makes it possible to evaluate the percolation threshold (expressed as a relative content of graphite) as 4.45 wt % and the critical conductivity index as 1.85, which is typical for three-dimensional two-component disordered media, including those having pores.
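
    The critical index quoted above comes from fitting the conductivity to the percolation scaling law σ ∝ (p − p_c)^t above the threshold. A minimal sketch of such a fit with SciPy follows, using synthetic conductivity data; the numerical values are illustrative, not the measurements of this study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def percolation_law(p, sigma0, pc, t):
        """Conductivity above the percolation threshold: sigma = sigma0 * (p - pc)**t."""
        return sigma0 * np.clip(p - pc, 1e-12, None) ** t

    # synthetic data: graphite content (wt %) and measured conductivity (arbitrary units)
    p = np.linspace(5.0, 100.0, 25)
    true = percolation_law(p, 50.0, 4.45, 1.85)
    sigma = true * (1 + 0.05 * np.random.default_rng(3).normal(size=p.size))

    popt, pcov = curve_fit(percolation_law, p, sigma, p0=[10.0, 4.0, 2.0])
    sigma0_fit, pc_fit, t_fit = popt
    print(f"threshold pc = {pc_fit:.2f} wt %, critical index t = {t_fit:.2f}")
    ```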

  9. Idiosyncrasies of volcanic sulfur viscosity and the triggering of unheralded volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Scolamacchia, Teresa; Cronin, Shane

    2016-03-01

    Unheralded "blue-sky" eruptions from dormant volcanoes cause serious fatalities, such as at Mt. Ontake (Japan) on 27 September 2014. Could these events result from magmatic gas being trapped within hydrothermal system aquifers by elemental sulfur (Se) clogging pores, due to sharp increases in its viscosity when heated above 159oC? This mechanism was thought to prime unheralded eruptions at Mt. Ruapehu in New Zealand. Impurities in sulfur (As, Te, Se) are known to modify S-viscosity and industry experiments showed that organic compounds, H2S, and halogens dramatically influence Se viscosity under typical hydrothermal heating/cooling rates and temperature thresholds. However, the effects of complex sulfur compositions are currently ignored at volcanoes, despite its near ubiquity in long-lived volcano-hydrothermal systems. Models of impure S behavior must be urgently formulated to detect pre-eruptive warning signs before the next "blue-sky" eruption

  10. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs.

    PubMed

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C; Hütt, Marc-Thorsten

    2017-02-10

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience.

  11. Longitudinal Coupled-Bunch Instability Around 1 GHz at the CERN PS Booster

    NASA Astrophysics Data System (ADS)

    Schönauer, H.; Caspers, F.; Chanel, M.; Soby, L.; D'Yachkov, M.

    1997-05-01

    The fast-growing "Ring 4" instability occurring at intensities above 6.5 10^12 protons in the top one of the four PSB rings is finally explained by an asymmetry in the 42 vacuum pump manifolds common to all rings. Impedance measurements (wire method) and numerical calculations show a sharp resonant peak (Q 2000) at 1100 MHz and shunt impedances two times higher for the Ring 4 ports as compared to the other rings. This factor is sufficient to explain that the threshold of the instability falls below the maximum intensity only in Ring 4. A final, but labour-intensive and expensive, cure consists of inserting short-circuiting sleeves into all 168 beam ports. A temporary antidote is attempted by fitting ceramic damping resistors penetrating the top cavity through spare gauge ports. Results of beam and impedance measurements and of the cure will be presented and discussed.

  12. Spin-torque resonant expulsion of the vortex core for an efficient radiofrequency detection scheme.

    PubMed

    Jenkins, A S; Lebrun, R; Grimaldi, E; Tsunegi, S; Bortolotti, P; Kubota, H; Yakushiji, K; Fukushima, A; de Loubens, G; Klein, O; Yuasa, S; Cros, V

    2016-04-01

    It has been proposed that high-frequency detectors based on the so-called spin-torque diode effect in spin transfer oscillators could eventually replace conventional Schottky diodes due to their nanoscale size, frequency tunability and large output sensitivity. Although a promising candidate for information and communications technology applications, the output voltage generated from this effect has still to be improved and, more pertinently, reduces drastically with decreasing radiofrequency (RF) current. Here we present a scheme for a new type of spintronics-based high-frequency detector based on the expulsion of the vortex core in a magnetic tunnel junction (MTJ). The resonant expulsion of the core leads to a large and sharp change in resistance associated with the difference in magnetoresistance between the vortex ground state and the final C-state configuration. Interestingly, this reversible effect is independent of the incoming RF current amplitude, offering a fast real-time RF threshold detector.

  13. Spin transfer driven resonant expulsion of a magnetic vortex core for efficient rf detector

    NASA Astrophysics Data System (ADS)

    Menshawy, S.; Jenkins, A. S.; Merazzo, K. J.; Vila, L.; Ferreira, R.; Cyrille, M.-C.; Ebels, U.; Bortolotti, P.; Kermorvant, J.; Cros, V.

    2017-05-01

    Spin transfer magnetization dynamics have led to considerable advances in spintronics, including opportunities for new nanoscale radiofrequency devices. Among the new functionalities is radiofrequency (rf) detection using the spin diode rectification effect in spin torque nano-oscillators (STNOs). In this study, we focus on a new phenomenon, the resonant expulsion of a magnetic vortex in STNOs. This effect is observed when the excited vortex radius, due to the spin torques associated with rf currents, becomes larger than the actual radius of the STNO. This vortex expulsion leads to a sharp variation of the voltage at the resonant frequency. Here we show that the detected frequency can be tuned by different parameters; furthermore, simultaneous detection of different rf signals can be achieved by real-time measurements with several STNOs having different diameters. This result constitutes a first proof-of-principle towards the development of a new kind of nanoscale rf threshold detector.

  14. Controllable Edge Feature Sharpening for Dental Applications

    PubMed Central

    2014-01-01

    This paper presents a new approach to sharpen blurred edge features in scanned tooth preparation surfaces generated by structured-light scanners. It aims to efficiently enhance the edge features so that the embedded feature lines can be easily identified in dental CAD systems, and to avoid unnatural oversharpening geometry. We first separate the feature regions using graph-cut segmentation, which does not require a user-defined threshold. Then, we filter the face normal vectors to propagate the geometry from the smooth region to the feature region. In order to control the degree of the sharpness, we propose a feature distance measure which is based on normal tensor voting. Finally, the vertex positions are updated according to the modified face normal vectors. We have applied the approach to scanned tooth preparation models. The results show that the blurred edge features are enhanced without unnatural oversharpening geometry. PMID:24741376

  15. Controllable edge feature sharpening for dental applications.

    PubMed

    Fan, Ran; Jin, Xiaogang

    2014-01-01

    This paper presents a new approach to sharpen blurred edge features in scanned tooth preparation surfaces generated by structured-light scanners. It aims to efficiently enhance the edge features so that the embedded feature lines can be easily identified in dental CAD systems, and to avoid unnatural oversharpening geometry. We first separate the feature regions using graph-cut segmentation, which does not require a user-defined threshold. Then, we filter the face normal vectors to propagate the geometry from the smooth region to the feature region. In order to control the degree of the sharpness, we propose a feature distance measure which is based on normal tensor voting. Finally, the vertex positions are updated according to the modified face normal vectors. We have applied the approach to scanned tooth preparation models. The results show that the blurred edge features are enhanced without unnatural oversharpening geometry.

  16. Memory properties of a Ge nanoring MOS device fabricated by pulsed laser deposition.

    PubMed

    Ma, Xiying

    2008-07-09

    The non-volatile charge-storage properties of memory devices with MOS structure based on Ge nanorings have been studied. The two-dimensional Ge nanorings were prepared on a p-Si(100) matrix by means of pulsed laser deposition (PLD) using the droplet technique combined with rapid annealing. Complete planar nanorings with well-defined sharp inner and outer edges were formed via an elastic self-transformation droplet process, which is probably driven by the lateral strain of the Ge/Si layers and the surface tension in the presence of Ar gas. The low leakage current was attributed to the small roughness and the few interface states in the planar Ge nanorings, and also to the effect of Coulomb blockade preventing injection. A significant threshold-voltage shift of 2.5 V was observed when an operating voltage of 8 V was implemented on the device.

  17. Coral Reef Resilience, Tipping Points and the Strength of Herbivory.

    PubMed

    Holbrook, Sally J; Schmitt, Russell J; Adam, Thomas C; Brooks, Andrew J

    2016-11-02

    Coral reefs increasingly are undergoing transitions from coral to macroalgal dominance. Although the functional roles of reef herbivores in controlling algae are becoming better understood, identifying possible tipping points in the herbivory-macroalgae relationships has remained a challenge. Assessment of where any coral reef ecosystem lies in relation to the coral-to-macroalgae tipping point is fundamental to understanding resilience properties, forecasting state shifts, and developing effective management practices. We conducted a multi-year field experiment in Moorea, French Polynesia to estimate these properties. While we found a sharp herbivory threshold where macroalgae escape control, ambient levels of herbivory by reef fishes were well above that needed to prevent proliferation of macroalgae. These findings are consistent with previously observed high resilience of the fore reef in Moorea. Our approach can identify vulnerable coral reef systems in urgent need of management action to both forestall shifts to macroalgae and preserve properties essential for resilience.

  18. Autoresonant Control of Elliptical Non-neutral Plasmas

    NASA Astrophysics Data System (ADS)

    Friedland, Lazar

    1999-11-01

    It is shown that placing a magnetized non-neutral plasma column in a weak oscillating transverse quadrupolar potential with chirped oscillation frequency allows excitation and control of the ellipticity and rotation phase of the plasma cross section. For a given chirp rate of the driving frequency, the phenomenon has a sharp threshold on the amplitude of the perturbing potential. The effect is analogous to that reported in controlling Kirchhoff vortices in fluid dynamics [1]. The ellipticity of the plasma cross section is manipulated by using autoresonance (nonlinear phase locking) in the system between the ExB drifting plasma particles and adiabatically varying driving potential. A similar idea was used recently in controlling the l=1 diocotron mode in a non-neutral plasma [2]. [1] L. Friedland, Phys. Rev. E59, 4106 (1999). [2] J. Fajans, E. Gilson, and L. Friedland, Phys. Rev. Lett. 82, 4444 (1999).

  19. Topological determinants of self-sustained activity in a simple model of excitable dynamics on graphs

    NASA Astrophysics Data System (ADS)

    Fretter, Christoph; Lesne, Annick; Hilgetag, Claus C.; Hütt, Marc-Thorsten

    2017-02-01

    Simple models of excitable dynamics on graphs are an efficient framework for studying the interplay between network topology and dynamics. This topic is of practical relevance to diverse fields, ranging from neuroscience to engineering. Here we analyze how a single excitation propagates through a random network as a function of the excitation threshold, that is, the relative amount of activity in the neighborhood required for the excitation of a node. We observe that two sharp transitions delineate a region of sustained activity. Using analytical considerations and numerical simulation, we show that these transitions originate from the presence of barriers to propagation and the excitation of topological cycles, respectively, and can be predicted from the network topology. Our findings are interpreted in the context of network reverberations and self-sustained activity in neural systems, which is a question of long-standing interest in computational neuroscience.

  20. Graphene barristor, a triode device with a gate-controlled Schottky barrier.

    PubMed

    Yang, Heejun; Heo, Jinseong; Park, Seongjun; Song, Hyun Jae; Seo, David H; Byun, Kyung-Eun; Kim, Philip; Yoo, InKyeong; Chung, Hyun-Jong; Kim, Kinam

    2012-06-01

    Despite several years of research into graphene electronics, sufficient on/off current ratio I(on)/I(off) in graphene transistors with conventional device structures has been impossible to obtain. We report on a three-terminal active device, a graphene variable-barrier "barristor" (GB), in which the key is an atomically sharp interface between graphene and hydrogenated silicon. Large modulation on the device current (on/off ratio of 10(5)) is achieved by adjusting the gate voltage to control the graphene-silicon Schottky barrier. The absence of Fermi-level pinning at the interface allows the barrier's height to be tuned to 0.2 electron volt by adjusting graphene's work function, which results in large shifts of diode threshold voltages. Fabricating GBs on respective 150-mm wafers and combining complementary p- and n-type GBs, we demonstrate inverter and half-adder logic circuits.

  1. Induced earthquakes. Sharp increase in central Oklahoma seismicity since 2008 induced by massive wastewater injection.

    PubMed

    Keranen, K M; Weingarten, M; Abers, G A; Bekins, B A; Ge, S

    2014-07-25

    Unconventional oil and gas production provides a rapidly growing energy source; however, high-production states in the United States, such as Oklahoma, face sharply rising numbers of earthquakes. Subsurface pressure data required to unequivocally link earthquakes to wastewater injection are rarely accessible. Here we use seismicity and hydrogeological models to show that fluid migration from high-rate disposal wells in Oklahoma is potentially responsible for the largest swarm. Earthquake hypocenters occur within disposal formations and upper basement, between 2- and 5-kilometer depth. The modeled fluid pressure perturbation propagates throughout the same depth range and tracks earthquakes to distances of 35 kilometers, with a triggering threshold of ~0.07 megapascals. Although thousands of disposal wells operate aseismically, four of the highest-rate wells are capable of inducing 20% of 2008 to 2013 central U.S. seismicity. Copyright © 2014, American Association for the Advancement of Science.

  2. Linear fitting of multi-threshold counting data with a pixel-array detector for spectral X-ray imaging

    PubMed Central

    Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.

    2014-01-01

    Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010

  3. Linear fitting of multi-threshold counting data with a pixel-array detector for spectral X-ray imaging.

    PubMed

    Muir, Ryan D; Pogranichney, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J

    2014-09-01

    Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment.

  4. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    NASA Astrophysics Data System (ADS)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the most mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology, i.e. shot segmentation. In this paper, a method for automatic video shot boundary detection with K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely, frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is utilized to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
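
    A minimal sketch of the two-stage idea, assuming frame-difference magnitudes have already been computed from a video (for example, per-frame histogram differences); deriving the dual thresholds from the K-means cluster statistics is one plausible reading of "adaptive", not the authors' exact formulation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def detect_shot_boundaries(frame_diffs):
        """Label abrupt and candidate-gradual shot boundaries from frame differences."""
        d = np.asarray(frame_diffs, dtype=float).reshape(-1, 1)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(d)
        # cluster with the larger centre = "significant change" frames
        hi = int(np.argmax(km.cluster_centers_.ravel()))
        changed = km.labels_ == hi

        # adaptive dual thresholds from the statistics of the two clusters
        t_high = km.cluster_centers_.ravel()[hi]                 # abrupt-cut threshold
        t_low = d[~changed].mean() + 2 * d[~changed].std()       # gradual-transition threshold

        abrupt = np.where(d.ravel() >= t_high)[0]
        gradual = np.where((d.ravel() >= t_low) & (d.ravel() < t_high))[0]
        return abrupt, gradual

    diffs = np.concatenate([np.random.rand(50) * 0.1, [0.9], np.random.rand(50) * 0.1])
    print(detect_shot_boundaries(diffs))
    ```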

  5. Otoliths - Accelerometer and seismometer; Implications in Vestibular Evoked Myogenic Potential (VEMP).

    PubMed

    Grant, Wally; Curthoys, Ian

    2017-09-01

    Vestibular otolithic organs are recognized as transducers of head acceleration and they function as such up to their corner frequency or undamped natural frequency. It is well recognized that these organs respond to frequencies above their corner frequency up to the 2-3 kHz range (Curthoys et al., 2016). A mechanics model for the transduction of these organs is developed that predicts the response below the undamped natural frequency as an accelerometer and above that frequency as a seismometer. The model is converted to a transfer function using hair cell bundle deflection. Measured threshold acceleration stimuli are used along with threshold deflections to obtain threshold transfer function values. These are compared to model-predicted values, both below and above the undamped natural frequency. Threshold deflection values are adjusted to match the model transfer function. The resulting threshold deflection values were well within measured threshold bundle deflection ranges. Vestibular evoked myogenic potential (VEMP) testing today routinely uses stimulus frequencies of 500 and 1000 Hz, and otoliths have been established incontrovertibly by clinical and neural evidence as the stimulus source. The mechanism for stimulation at these frequencies above the undamped natural frequency of the otoliths is presented, in which the otoliths utilize a seismometer mode of response for VEMP transduction. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Spacecraft Health Automated Reasoning Prototype (SHARP): The fiscal year 1989 SHARP portability evaluations task for NASA Solar System Exploration Division's Voyager project

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.; Doyle, Richard J.; James, Mark L.; Kaufman, Tim; Martin, R. Gaius

    1990-01-01

    A Spacecraft Health Automated Reasoning Prototype (SHARP) portability study is presented. Some specific progress is described on the portability studies, plans for technology transfer, and potential applications of SHARP and related artificial intelligence technology to telescience operations. The application of SHARP to Voyager telecommunications was a proof-of-capability demonstration of artificial intelligence as applied to the problem of real time monitoring functions in planetary mission operations. An overview of the design and functional description of the SHARP system is also presented as it was applied to Voyager.

  7. Model Fit and Item Factor Analysis: Overfactoring, Underfactoring, and a Program to Guide Interpretation.

    PubMed

    Clark, D Angus; Bowles, Ryan P

    2018-04-23

    In exploratory item factor analysis (IFA), researchers may use model fit statistics and commonly invoked fit thresholds to help determine the dimensionality of an assessment. However, these indices and thresholds may mislead as they were developed in a confirmatory framework for models with continuous, not categorical, indicators. The present study used Monte Carlo simulation methods to investigate the ability of popular model fit statistics (chi-square, root mean square error of approximation, the comparative fit index, and the Tucker-Lewis index) and their standard cutoff values to detect the optimal number of latent dimensions underlying sets of dichotomous items. Models were fit to data generated from three-factor population structures that varied in factor loading magnitude, factor intercorrelation magnitude, number of indicators, and whether cross loadings or minor factors were included. The effectiveness of the thresholds varied across fit statistics, and was conditional on many features of the underlying model. Together, results suggest that conventional fit thresholds offer questionable utility in the context of IFA.

  8. Utilization of Satellite Data to Identify and Monitor Changes in Frequency of Meteorological Events

    NASA Astrophysics Data System (ADS)

    Mast, J. C.; Dessler, A. E.

    2017-12-01

    Increases in temperature and climate variability due to human-induced climate change are increasing the frequency and magnitude of extreme heat events (i.e., heatwaves). This will have a detrimental impact on the health of human populations and on the habitability of certain land locations. Here we seek to utilize satellite data records to identify and monitor extreme heat events. We analyze satellite data sets (MODIS and AIRS land surface temperatures (LST) and water vapor profiles (WV)) due to their global coverage and stable calibration. Heat waves are identified based on the frequency of maximum daily temperatures above a threshold, determined as follows. Land surface temperatures are gridded into uniform latitude/longitude bins. Maximum daily temperatures per bin are determined and probability density functions (PDF) of these maxima are constructed monthly and seasonally. For each bin, a threshold is calculated at the 95th percentile of the PDF of maximum temperatures. For each bin, an extreme heat event is defined based on the frequency of monthly and seasonal days exceeding the threshold. To account for the decreased ability of the human body to thermoregulate with increasing moisture, and to assess the lethality of the heat events, we determine the wet-bulb temperature at the locations of extreme heat events. Preliminary results will be presented.
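
    A minimal sketch of the per-bin thresholding step, assuming daily maximum land surface temperatures already gridded into an array of shape (days, lat_bins, lon_bins); the 95th-percentile threshold and exceedance count follow the procedure described above, but the array and the final flagging rule are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # synthetic daily maximum LST for one season: (days, lat bins, lon bins), in kelvin
    lst_max = 290 + 15 * rng.random((92, 36, 72))

    # 95th-percentile threshold per grid bin, from the PDF of daily maxima
    threshold = np.percentile(lst_max, 95, axis=0)          # shape (36, 72)

    # frequency of days exceeding the threshold in each bin
    exceedance_days = (lst_max > threshold).sum(axis=0)     # shape (36, 72)

    # flag bins where the seasonal exceedance frequency itself is unusually high
    heat_event = exceedance_days >= np.percentile(exceedance_days, 95)
    print("bins flagged as extreme-heat events:", int(heat_event.sum()))
    ```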

  9. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery is present in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one specific domain and do not utilize the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, which is based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is utilized to adjust the noise level band by band. This new algorithm uses BayesShrink for threshold estimation, and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold functions, the improved one, which is first-order differentiable and has a smooth transition region between noise and signal, preserves more edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and artificial noise that may have been introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is tested on a set of Hyperion imageries acquired in 2007. The result shows that the new spatial-spectral denoising algorithm provides more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving the local spectral absorption features.
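
    A minimal sketch of the two-domain idea, assuming a hyperspectral cube stored as a NumPy array of shape (bands, rows, cols); it uses an ordinary soft threshold with a universal-style noise estimate rather than the paper's BayesShrink-plus-shape-parameter variant, and PyWavelets/SciPy as stand-ins for the authors' implementation.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import savgol_filter

    def spatial_spectral_denoise(cube, wavelet="db4", level=2, window=9, poly=3):
        """Spatial wavelet shrinkage band-by-band, then spectral Savitzky-Golay smoothing."""
        bands, rows, cols = cube.shape
        out = np.empty_like(cube, dtype=float)

        # --- spatial domain: stationary wavelet shrinkage per band ---
        for b in range(bands):
            coeffs = pywt.swt2(cube[b], wavelet, level=level)
            shrunk = []
            for cA, (cH, cV, cD) in coeffs:
                sigma = np.median(np.abs(cD)) / 0.6745          # noise estimate per level
                thr = sigma * np.sqrt(2 * np.log(cD.size))      # universal threshold
                shrunk.append((cA, tuple(pywt.threshold(c, thr, mode="soft")
                                         for c in (cH, cV, cD))))
            out[b] = pywt.iswt2(shrunk, wavelet)

        # --- spectral domain: cubic Savitzky-Golay filter along the band axis ---
        return savgol_filter(out, window_length=window, polyorder=poly, axis=0)

    # usage; rows/cols must be divisible by 2**level for the stationary transform
    cube = np.random.rand(32, 64, 64)
    denoised = spatial_spectral_denoise(cube)
    print(denoised.shape)
    ```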

  10. Supra-threshold epidermis injury from near-infrared laser radiation prior to ablation onset

    NASA Astrophysics Data System (ADS)

    DeLisi, Michael P.; Peterson, Amanda M.; Lile, Lily A.; Noojin, Gary D.; Shingledecker, Aurora D.; Stolarski, David J.; Zohner, Justin J.; Kumru, Semih S.; Thomas, Robert J.

    2017-02-01

    With continued advancement of solid-state laser technology, high-energy lasers operating in the near-infrared (NIR) band are being applied in an increasing number of manufacturing techniques and medical treatments. Safety-related investigations of potentially harmful laser interaction with skin are commonplace, consisting of establishing the maximum permissible exposure (MPE) thresholds under various conditions, often utilizing the minimally-visible lesion (MVL) metric as an indication of damage. Likewise, characterization of ablation onset and velocity is of interest for therapeutic and surgical use, and concerns exceptionally high irradiance levels. However, skin injury response between these two exposure ranges is not well understood. This study utilized a 1070-nm Yb-doped, diode-pumped fiber laser to explore the response of excised porcine skin tissue to high-energy exposures within the supra-threshold injury region without inducing ablation. Concurrent high-speed videography was employed to assess the effect on the epidermis, with a dichotomous response determination given for three progressive damage event categories: observable permanent distortion on the surface, formation of an epidermal bubble due to bounded intra-cutaneous water vaporization, and rupture of said bubble during laser exposure. ED50 values were calculated for these categories under various pulse configurations and beam diameters, and logistic regression models predicted injury events with approximately 90% accuracy. The distinction of skin response into categories of increasing degrees of damage expands the current understanding of high-energy laser safety while also underlining the unique biophysical effects during induced water phase change in tissue. These observations could prove useful in augmenting biothermomechanical models of laser exposure in the supra-threshold region.

  11. Net reclassification index at event rate: properties and relationships.

    PubMed

    Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B

    2017-12-10

    The net reclassification improvement (NRI) is an attractively simple summary measure quantifying improvement in performance because of addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. Then, these plots can be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
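
    A minimal sketch of the two-category NRI evaluated at the event-rate threshold, assuming arrays of predicted risks from the old and new models plus binary outcomes; it is written from the standard NRI definition, not from the authors' code.

    ```python
    import numpy as np

    def nri_at_event_rate(p_old, p_new, y):
        """Net reclassification improvement with a single cutoff at the event rate."""
        y = np.asarray(y, dtype=bool)
        rho = y.mean()                                  # event rate used as the threshold
        up_old, up_new = p_old > rho, p_new > rho       # "high risk" classifications
        # events should move up, non-events should move down
        nri_events = np.mean(up_new[y]) - np.mean(up_old[y])
        nri_nonevents = np.mean(~up_new[~y]) - np.mean(~up_old[~y])
        return nri_events + nri_nonevents

    rng = np.random.default_rng(5)
    x1, x2 = rng.normal(size=2000), rng.normal(size=2000)
    logit = -2.0 + 1.0 * x1 + 0.8 * x2                  # x2 plays the role of the new marker
    y = rng.random(2000) < 1 / (1 + np.exp(-logit))
    p_old = 1 / (1 + np.exp(-(-2.0 + 1.0 * x1)))        # model without the new marker
    p_new = 1 / (1 + np.exp(-logit))                    # model with the new marker
    print(nri_at_event_rate(p_old, p_new, y))
    ```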

  12. A comparison of quality and utilization problems in large and small group practices.

    PubMed

    Gleason, S C; Richards, M J; Quinnell, J E

    1995-12-01

    Physicians practicing in large, multispecialty medical groups share an organizational culture that differs from that of physicians in small or independent practices. Since 1980, there has been a sharp increase in the size of multispecialty group practice organizations, in part because of increased efficiencies of large group practices. The greater number of physicians and support personnel in a large group practice also requires a relatively more sophisticated management structure. The efficiencies, conveniences, and management structure of a large group practice provide an optimal environment to practice medicine. However, a search of the literature found no data linking a large group practice environment to practice outcomes. The purpose of the study reported in this article was to determine if physicians in large practices have fewer quality and utilization problems than physicians in small or independent practices.

  13. A modified Holly-Preissmann scheme for simulating sharp concentration fronts in streams with steep velocity gradients using RIV1Q

    NASA Astrophysics Data System (ADS)

    Liu, Zhao-wei; Zhu, De-jun; Chen, Yong-can; Wang, Zhi-gang

    2014-12-01

    RIV1Q is the stand-alone water quality program of CE-QUAL-RIV1, a hydraulic and water quality model developed by the U.S. Army Corps of Engineers Waterways Experiment Station. It utilizes an operator-splitting algorithm, and the advection term in the governing equation is treated using the explicit two-point, fourth-order accurate Holly-Preissmann scheme in order to preserve numerical accuracy for advection of sharp concentration gradients. In the scheme, the spatial derivative of the transport equation, in which the derivative of velocity appears, is introduced to update the first derivative of the dependent variable. In streams with large cross-sectional variation, steep velocity gradients are common and must be estimated correctly. In the original version of RIV1Q, however, the derivative of velocity is approximated by a finite difference that is only first-order accurate. Its leading truncation error produces a numerical error in concentration that is related to the velocity and concentration gradients and increases with decreasing Courant number. The simulation may also become unstable when a sharp velocity drop occurs. In the present paper, the derivative of velocity is estimated with a modified second-order accurate scheme and the corresponding numerical error in concentration decreases. Additionally, the stability of the simulation is improved. The modified scheme is verified with a hypothetical channel case and the results demonstrate that satisfactory accuracy and stability can be achieved even when the Courant number is very low. Finally, the applicability of the modified scheme is discussed.
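    To make the order-of-accuracy point concrete, the sketch below (not RIV1Q code) compares a first-order and a second-order finite-difference estimate of du/dx for a velocity profile with a steep gradient; the grid, profile, and end-point formulas are illustrative choices.

```python
# Minimal sketch: first-order vs. second-order estimates of the velocity derivative du/dx.
import numpy as np

x = np.linspace(0.0, 10.0, 101)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.tanh(2.0 * (x - 5.0))            # velocity with a steep gradient
dudx_exact = 1.0 / np.cosh(2.0 * (x - 5.0)) ** 2    # analytic derivative for reference

dudx_1st = np.empty_like(u)                          # first-order one-sided differences
dudx_1st[1:] = (u[1:] - u[:-1]) / dx
dudx_1st[0] = (u[1] - u[0]) / dx

dudx_2nd = np.empty_like(u)                          # second-order central + one-sided ends
dudx_2nd[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)
dudx_2nd[0] = (-3.0 * u[0] + 4.0 * u[1] - u[2]) / (2.0 * dx)
dudx_2nd[-1] = (3.0 * u[-1] - 4.0 * u[-2] + u[-3]) / (2.0 * dx)

print("max |error|, first order :", np.max(np.abs(dudx_1st - dudx_exact)))
print("max |error|, second order:", np.max(np.abs(dudx_2nd - dudx_exact)))
```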

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Zhigang, E-mail: zsun@dicp.ac.cn; Yu, Dequan; Xie, Wenbo

    The O + O₂ isotope exchange reactions play an important role in determining the oxygen isotopic composition of a number of trace gases in the atmosphere, and their temperature dependence and kinetic isotope effects (KIEs) provide important constraints on our understanding of the origin and mechanism of these and other unusual oxygen KIEs important in the atmosphere. This work reports a quantum dynamics study of the title reactions on the newly constructed Dawes-Lolur-Li-Jiang-Guo (DLLJG) potential energy surface (PES). The thermal reaction rate coefficients of both the ¹⁸O + ³²O₂ and ¹⁶O + ³⁶O₂ reactions obtained using the DLLJG PES exhibit a clear negative temperature dependence, in sharp contrast with the positive temperature dependence obtained using the earlier modified Siebert-Schinke-Bittererova (mSSB) PES. In addition, the calculated KIE shows an improved agreement with the experiment. These results strongly support the absence of the “reef” structure in the entrance/exit channels of the DLLJG PES, which is present in the mSSB PES. The quantum dynamics results on both PESs attribute the marked KIE to strong near-threshold reactive resonances, presumably stemming from the mass differences and/or zero point energy difference between the diatomic reactant and product. The accurate characterization of the reactivity for these near-thermoneutral reactions immediately above the reaction threshold is important for correct characterization of the thermal reaction rate coefficients.

  15. Leptokurtic portfolio theory

    NASA Astrophysics Data System (ADS)

    Kitt, R.; Kalda, J.

    2006-03-01

    The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called Leptokurtic, because it minimises the effects from “fat tails” of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk aversion as an unwillingness to experience sharp drawdowns in asset prices. Two types of risk in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the “noise kernel”, an ellipsoidal cloud of points in the space of asset returns. The size of the ellipsoid is controlled by the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling inside the kernel are used for the calculation of fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of drawdown risks. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risk involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.

  16. On the classification of seawater intrusion

    NASA Astrophysics Data System (ADS)

    Werner, Adrian D.

    2017-08-01

    Seawater intrusion (SWI) arising from aquifer depletion is often classified as 'active' or 'passive', depending on whether seawater moves in the same direction as groundwater flow or not. However, recent studies have demonstrated that alternative forms of active SWI show distinctly different characteristics, to the degree that the term 'active SWI' may be misleading without additional qualification. In response, this article proposes to modify the hydrogeology lexicon by defining and characterizing three classes of SWI, namely passive SWI, passive-active SWI and active SWI. The threshold parameter combinations for the onset of each form of SWI are developed using sharp-interface, steady-state analytical solutions. Numerical simulation is then applied to a hypothetical case study to test the developed theory and to provide additional insights into dispersive SWI behavior. The results indicate that the three classes of SWI are readily predictable, with the exception of active SWI occurring in the presence of distributed recharge. The key characteristics of each SWI class are described to distinguish their most defining features. For example, active SWI occurring in aquifers receiving distributed recharge only creates watertable salinization downstream of the groundwater mound and only where dispersion effects are significant. The revised classification of SWI proposed in this article, along with the analysis of thresholds and SWI characteristics, provides coastal aquifer custodians with an improved basis upon which to expect salinization mechanisms to impact freshwater availability following aquifer depletion.

  17. Functionalised polyurethane for efficient laser micromachining

    NASA Astrophysics Data System (ADS)

    Brodie, G. W. J.; Kang, H.; MacMillan, F. J.; Jin, J.; Simpson, M. C.

    2017-02-01

    Pulsed laser ablation is a valuable tool that offers a much cleaner and more flexible etching process than conventional lithographic techniques. Although much research has been undertaken on commercially available polymers, many challenges still remain, including contamination by debris on the surface, a rough etched appearance and high ablation thresholds. Functionalizing polymers with a photosensitive group is a novel and effective way to improve the efficiency of laser micromachining. In this study, several polyurethane films grafted with different concentrations of the chromophore anthracene, specifically designed for 248 nm KrF excimer laser ablation, have been synthesized. A series of lines was etched into each polyurethane film with the nanosecond laser using varying numbers of pulses and fluences. The resultant ablation behaviours were studied through optical interference tomography and scanning electron microscopy (SEM). The anthracene-grafted polyurethanes showed a vast improvement in edge quality and a reduction in debris compared with the unmodified polyurethane. Under the same laser fluence and number of pulses, the spots etched in the anthracene-containing polyurethane show sharp depth profiles and smooth surfaces, whereas the spots etched in polyurethane without the grafted anthracene group present rough cavities with debris, according to the SEM images. The addition of a small amount of anthracene (1.47%) reduces the ablation threshold relative to unmodified polyurethane, showing that the desired effect can be achieved with very little modification of the polymer.

  18. Measurement of D-7Li Neutron Production in Neutron Generators Using the Threshold Activation Foil Technique

    NASA Astrophysics Data System (ADS)

    Coventry, M. D.; Krites, A. M.

    Measurements to determine the absolute D-D and D-7Li neutron production rates with a neutron generator running at 100-200 kV acceleration potential were performed using the threshold activation foil technique. This technique provides a clear measure of fast neutron flux and, with a suitable model, the neutron output. This approach requires little specialized equipment and is used to calibrate real-time neutron detectors and to verify neutron output. We discuss the activation foil measurement technique, describe its use in determining the relative contributions of the D-D and D-7Li reactions to the total neutron yield and real-time detector response, and compare the results to model predictions. The D-7Li reaction produces neutrons with a continuum of energies and a sharp peak around 13.5 MeV, enabling measurement techniques beyond what D-D generators alone can perform. The ability to perform measurements with D-D neutrons alone, then add D-7Li neutrons for inelastic gamma production, presents additional measurement modalities with the same neutron source without the use of tritium. Typically, D-T generators are employed for inelastic scattering applications but have a high regulatory burden from a radiological aspect (tritium inventory, liability concerns) and are export-controlled. D-D and D-7Li generators avoid these issues completely.

  19. Image quality, threshold contrast and mean glandular dose in CR mammography

    NASA Astrophysics Data System (ADS)

    Jakubiak, R. R.; Gamba, H. R.; Neves, E. B.; Peixoto, J. E.

    2013-09-01

    In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, reporting on anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. The exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, the optimal beam quality was defined as that giving a target CNR reaching the threshold contrast of the CDMAM images at an acceptable MGD. These results were used for adjustments to the automatic exposure control (AEC) by the maintenance team. Using the optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for these exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm. While in group 1 there was no detection of the 0.1 mm critical diameter detail with threshold contrast below 23%, after the optimization, detection occurred in 47.6% of the images. There was also an average MGD reduction of 7.5%. The clinical image quality criteria were met in 91.7% of cases for all breast thicknesses evaluated in both patient groups. Finally, this study also concluded that use of the x-ray unit's AEC based on a constant dose to the detector may make it difficult for CR systems to operate under optimal conditions. More studies must be performed so that the compatibility between systems and optimization methodologies, including the method proposed here, can be evaluated. Most methods are developed for phantoms, so comparative studies including clinical images must be developed.
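    For readers unfamiliar with the CNR figure used above, the sketch below shows one common definition (difference of mean pixel values between a contrast-object ROI and a background ROI, divided by the background standard deviation) on synthetic data; the actual ROI definitions and target CNR in the study may differ.

```python
# Minimal sketch: contrast-to-noise ratio (CNR) from two regions of interest. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
background = rng.normal(100.0, 5.0, size=(50, 50))   # hypothetical background ROI pixel values
signal = rng.normal(112.0, 5.0, size=(50, 50))       # hypothetical contrast-object ROI pixel values

cnr = (signal.mean() - background.mean()) / background.std(ddof=1)
print(f"CNR ~ {cnr:.2f}")
```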

  20. 355 E Riverwalk, February 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 2,600 cpm to 4,300 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.

  1. 230 E. Ontario, May 2018, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 2,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,366 cpm.

  2. 77 FR 3070 - Electric Engineering, Architectural Services, Design Policies and Construction Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-23

    ... Engineering, Architectural Services, Design Policies and Construction Standards AGENCY: Rural Utilities..., engineering services and architectural services for transactions above the established threshold dollar levels... Code of Federal Regulations as follows: PART 1724--ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND...

  3. The threshold strength of laminar ceramics utilizing molar volume changes and porosity

    NASA Astrophysics Data System (ADS)

    Pontin, Michael Gene

    It has been shown that uniformly spaced thin compressive layers within a ceramic body can arrest the propagation of an otherwise catastrophic crack, producing a threshold strength: a strength below which the probability of failure is zero. Previous work has shown that the threshold strength increases with both the magnitude of the compressive stress and the fracture toughness of the thin layer material, and finite element analysis predicts that the threshold strength can be further increased when the elastic modulus of the compressive layer is much smaller than that of the thicker layer. The current work describes several new approaches to increase the threshold strength of a laminar ceramic system. The initial method utilized a molar volume expansion within the thin layers, produced by the tetragonal-to-monoclinic phase transformation of unstabilized zirconia during cooling, to generate large compressive stresses within them. High threshold strengths were measured for this system, but they remained relatively constant as the zirconia content was increased. It was determined that microcracking produced during the transformation reduced the magnitude of the compressive stresses, but may also have served to reduce the modulus of the thin compressive layer, providing an additional strengthening mechanism. The second approach studied the addition of porosity to reduce the elastic modulus of the thin compressive layers. A new processing method was created and analyzed, in which thick layers of the laminate were fabricated by tape-casting and then dip-coated in a slurry containing rice starch to create thin porous compressive layers upon densification. The effects of porosity on the residual compressive stress, elastic modulus, and fracture toughness of the thin layers were measured and calculated, and it was found that the elastic modulus mismatch between the thin and thick layers produced a large strengthening effect for volume fractions of porosity below a critical level. Specimens with greater volume fractions of porosity exhibited complete crack arrest, typically followed by non-catastrophic failure, as cracks initiating in adjacent thick layers coalesced by cracking or delamination along the thin porous layers.

  4. Optimising health care within given budgets: primary prevention of cardiovascular disease in different regions of Sweden.

    PubMed

    Löfroth, Emil; Lindholm, Lars; Wilhelmsen, Lars; Rosén, Måns

    2006-01-01

    This study investigated the consequences of applying strict health maximisation to the choice between three different interventions with a defined budget. We analysed three interventions for preventing cardiovascular disease: doctors' advice on smoking cessation, blood-pressure-lowering drugs, and lipid-lowering drugs. A state transition model was used to estimate the cost-utility ratios for the entire population in three different county councils in Sweden, where the populations were stratified into mutually exclusive risk groups. The incremental cost-utility ratios are presented in a league table and combined with the local resources and local epidemiological data as a proxy for the need for treatment. All interventions with an incremental cost-utility ratio exceeding the threshold ratios are excluded from being funded. The threshold varied between 1687 Euro and 6192 Euro. The general reallocation of resources between the three interventions was a 60% reduction of blood-pressure-lowering drugs with redistribution of resources to advice on smoking cessation and to lipid-lowering drugs. One advantage of this method is that the results are very concrete. Recommendations can thereby be more precise, which will hopefully create a public debate between decision-makers, practising physicians and patient groups.
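    The league-table logic described above can be sketched in a few lines: rank interventions by cost-utility ratio, fund in order within the budget, and exclude anything above the threshold. The intervention names echo the abstract, but the costs, QALY gains, and budget below are invented for illustration.

```python
# Minimal sketch: league-table allocation under a budget and a cost-utility threshold.
# All numbers are hypothetical.
interventions = [
    # (name, cost in Euro, QALYs gained)
    ("advice on smoking cessation",   200_000, 150.0),
    ("lipid-lowering drugs",          900_000, 300.0),
    ("blood-pressure-lowering drugs", 1_500_000, 250.0),
]
budget = 2_000_000
threshold_eur_per_qaly = 6192          # upper threshold quoted in the abstract

funded = []
for name, cost, qalys in sorted(interventions, key=lambda i: i[1] / i[2]):
    ratio = cost / qalys
    if ratio <= threshold_eur_per_qaly and cost <= budget:
        funded.append((name, round(ratio)))
        budget -= cost

print("funded:", funded)
print("remaining budget (Euro):", budget)
```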

  5. Utility of Intraoperative Neuromonitoring during Minimally Invasive Fusion of the Sacroiliac Joint.

    PubMed

    Woods, Michael; Birkholz, Denise; MacBarb, Regina; Capobianco, Robyn; Woods, Adam

    2014-01-01

    Study Design. Retrospective case series. Objective. To document the clinical utility of intraoperative neuromonitoring during minimally invasive surgical sacroiliac joint fusion for patients diagnosed with sacroiliac joint dysfunction (as a direct result of sacroiliac joint disruptions or degenerative sacroiliitis) and determine stimulated electromyography thresholds reflective of favorable implant position. Summary of Background Data. Intraoperative neuromonitoring is a well-accepted adjunct to minimally invasive pedicle screw placement. The utility of intraoperative neuromonitoring during minimally invasive surgical sacroiliac joint fusion using a series of triangular, titanium porous plasma coated implants has not been evaluated. Methods. A medical chart review of consecutive patients treated with minimally invasive surgical sacroiliac joint fusion was undertaken at a single center. Baseline patient demographics and medical history, intraoperative electromyography thresholds, and perioperative adverse events were collected after obtaining IRB approval. Results. 111 implants were placed in 37 patients. Sensitivity of EMG was 80% and specificity was 97%. Intraoperative neuromonitoring potentially avoided neurologic sequelae as a result of improper positioning in 7% of implants. Conclusions. The results of this study suggest that intraoperative neuromonitoring may be a useful adjunct to minimally invasive surgical sacroiliac joint fusion in avoiding nerve injury during implant placement.

  6. High-throughput shock investigation of thin film thermites and thermites in fluoropolymer binder

    NASA Astrophysics Data System (ADS)

    Matveev, Sergey; Basset, Will; Dlott, Dana; Lee, Evyn; Maria, Jon-Paul; University of Illinois at Urbana-Champaign Collaboration; North Carolina State University Collaboration

    2017-06-01

    Investigation of nanofabricated thermite systems with respect to their energy release is presented. The knowledge obtained by utilization of a high-throughput tabletop shock system provides essential information that can be used to tune properties of reactive materials towards a desired application. Our shock system launches 0.25-0.75 mm flyer plates, which can reach velocities of 0.5-6 km s⁻¹ and produce shock durations of 4-16 ns. In the current studies, emission was detected with a home-built pyrometer. Various reactive materials with differing compositions (Al/CuO and Zr/CuO nanolaminates; Al/CuO/PVDF; Al, Zr, and CuO standards) and varying interfacial areas were impacted at velocities spanning the available range to ascertain reaction thresholds. Our results show that the reaction-impact threshold for the thermite systems under consideration is <1 km/s and that the reaction starts in as little as 20 ns. Utilization of a graybody approximation provides temperature profiles over the course of the reaction. In the future, our goal is to expand the detection capabilities using infrared absorption to analyze the formation of products after the shock. The work is supported by the U.S. Army Research Office under Award W911NF-16-1-0406.

  7. Do manatees utilize infrasonic communication or detection?

    NASA Astrophysics Data System (ADS)

    Gerstein, Edmund; Gerstein, Laura; Forsythe, Steve; Blue, Joseph

    2004-05-01

    Some researchers speculate Sirenians might utilize infrasonic communication like their distant elephant cousins; however, audiogram measurements and calibrated manatee vocalizations do not support this contention. A comprehensive series of hearing tests conducted with West Indian manatees yielded the first and most definitive audiogram for any Sirenian. The manatee hearing tests were also the first controlled underwater infrasonic psychometric tests with any marine mammal. Auditory thresholds were measured from 0.4 to 46 kHz, but detection thresholds of possible vibrotactile origin were measured as low as 0.015 kHz. Manatees have short hairs on their bodies that may be sensitive vibrotactile receptors capable of detecting particle displacement in the near field. To detect these signals the manatee rotated on axis, exposing the densest portion of hairs toward the projector. Manatees inhabit shallow water where particle motion detection may be more useful near the water's surface, where sound pressures are low due to the Lloyd mirror effect. With respect to intraspecific communication, no infrasonic spectra have been identified in hundreds of calibrated calls. Low source levels and propagation limits in shallow-water habitats suggest low-frequency manatee calls have limited utility over long distances and infrasonic communication is not an attribute shared with elephants.

  8. Integrating real-time subsurface hydrologic monitoring with empirical rainfall thresholds to improve landslide early warning

    USGS Publications Warehouse

    Mirus, Benjamin B.; Becker, Rachel E.; Baum, Rex L.; Smith, Joel B.

    2018-01-01

    Early warning for rainfall-induced shallow landsliding can help reduce fatalities and economic losses. Although these commonly occurring landslides are typically triggered by subsurface hydrological processes, most early warning criteria rely exclusively on empirical rainfall thresholds and other indirect proxies for subsurface wetness. We explore the utility of explicitly accounting for antecedent wetness by integrating real-time subsurface hydrologic measurements into landslide early warning criteria. Our efforts build on previous progress with rainfall thresholds, monitoring, and numerical modeling along the landslide-prone railway corridor between Everett and Seattle, Washington, USA. We propose a modification to previously established recent versus antecedent (RA) cumulative rainfall thresholds by replacing the antecedent 15-day rainfall component with an average saturation observed over the same timeframe. We calculate this antecedent saturation with real-time telemetered measurements from five volumetric water content probes installed in the shallow subsurface within a steep vegetated hillslope. Our hybrid rainfall versus saturation (RS) threshold still relies on the same recent 3-day rainfall component as the existing RA thresholds, to facilitate ready integration with quantitative precipitation forecasts. During the 2015–2017 monitoring period, this hybrid RS approach yielded more true positives and fewer false positives and false negatives than the previous rainfall-only RA thresholds. We also demonstrate that alternative hybrid threshold formats could be even more accurate, which suggests that further development and testing during future landslide seasons is needed. The positive results confirm that accounting for antecedent wetness conditions with direct subsurface hydrologic measurements can improve thresholds for alert systems and early warning of rainfall-induced shallow landsliding.
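    A stripped-down version of the hybrid RS criterion might look like the function below: an alert is issued when the recent 3-day rainfall exceeds a threshold that decreases as the 15-day mean saturation rises. The linear threshold and its coefficients are hypothetical placeholders, not the calibrated values from the monitoring site.

```python
# Minimal sketch: hybrid recent-rainfall vs. antecedent-saturation (RS) alert criterion.
def rs_alert(recent_3day_rain_mm: float, antecedent_saturation: float) -> bool:
    """Return True when 3-day rainfall exceeds a saturation-dependent threshold.

    antecedent_saturation is the 15-day mean volumetric saturation (0-1) from in-situ
    probes; wetter antecedent conditions lower the rainfall needed to trigger an alert.
    The linear form and coefficients below are illustrative only.
    """
    threshold_mm = 80.0 - 60.0 * antecedent_saturation
    return recent_3day_rain_mm >= threshold_mm

print(rs_alert(35.0, 0.85))  # wet antecedent conditions -> True
print(rs_alert(35.0, 0.40))  # drier antecedent conditions -> False
```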

  9. Electrophysiological gap detection thresholds: effects of age and comparison with a behavioral measure.

    PubMed

    Palmer, Shannon B; Musiek, Frank E

    2014-01-01

    Temporal processing ability has been linked to speech understanding ability and older adults often complain of difficulty understanding speech in difficult listening situations. Temporal processing can be evaluated using gap detection procedures. There is some research showing that gap detection can be evaluated using an electrophysiological procedure. However, there is currently no research establishing gap detection threshold using the N1-P2 response. The purposes of the current study were to 1) determine gap detection thresholds in younger and older normal-hearing adults using an electrophysiological measure, 2) compare the electrophysiological gap detection threshold and behavioral gap detection threshold within each group, and 3) investigate the effect of age on each gap detection measure. This study utilized an older adult group and younger adult group to compare performance on an electrophysiological and behavioral gap detection procedure. The subjects in this study were 11 younger, normal-hearing adults (mean = 22 yrs) and 11 older, normal-hearing adults (mean = 64.36 yrs). All subjects completed an adaptive behavioral gap detection procedure in order to determine their behavioral gap detection threshold (BGDT). Subjects also completed an electrophysiologic gap detection procedure to determine their electrophysiologic gap detection threshold (EGDT). Older adults demonstrated significantly larger gap detection thresholds than the younger adults. However, EGDT and BGDT were not significantly different in either group. The mean difference between EGDT and BGDT for all subjects was 0.43 msec. Older adults show poorer gap detection ability when compared to younger adults. However, this study shows that gap detection thresholds can be measured using evoked potential recordings and yield results similar to a behavioral measure. American Academy of Audiology.

  10. A study of the high-frequency hearing thresholds of dentistry professionals

    PubMed Central

    Lopes, Andréa Cintra; de Melo, Ana Dolores Passarelli; Santos, Cibele Carmelo

    2012-01-01

    Summary Introduction: In dental practice, dentists are exposed to harmful effects caused by several factors, such as the noise produced by their work instruments. In 1959, the American Dental Association recommended periodical hearing assessments and the use of ear protectors. Acquiring more information regarding dentists', dental nurses', and prosthodontists' hearing abilities is necessary to propose prevention measures and early treatment strategies. Objective: To investigate the auditory thresholds of dentists, dental nurses, and prosthodontists. Method: In this clinical and experimental study, 44 dentists (Group I; GI), 36 dental nurses (Group II; GII), and 28 prosthodontists (Group III; GIII) were included, for a total of 108 professionals. The procedures that were performed included a specific interview, ear canal inspection, conventional and high-frequency threshold audiometry, a speech reception threshold test, and an acoustic impedance test. Results: In the 3 groups that were tested, the comparison of the mean hearing thresholds provided evidence of worsening hearing ability with increasing frequency. For the three-tone averages at 500 to 2,000 Hz and 3,000 to 6,000 Hz, GIII presented the worst thresholds. For the mean of the high frequencies (9,000 and 16,000 Hz), GII presented the worst thresholds. Conclusion: The conventional hearing threshold evaluation did not demonstrate alterations in the 3 groups that were tested; however, complementary tests such as high-frequency audiometry provided greater efficacy in the early detection of hearing problems, since this population's hearing loss affects frequencies that are not covered by conventional tests. Therefore, we emphasize the need to utilize high-frequency threshold audiometry in the hearing assessment routine in combination with other audiological tests. PMID:25991940

  11. Mice Lacking the Circadian Modulators SHARP1 and SHARP2 Display Altered Sleep and Mixed State Endophenotypes of Psychiatric Disorders

    PubMed Central

    Shahmoradi, Ali; Reinecke, Lisa; Kroos, Christina; Wichert, Sven P.; Oster, Henrik; Wehr, Michael C.; Taneja, Reshma; Hirrlinger, Johannes; Rossner, Moritz J.

    2014-01-01

    Increasing evidence suggests that clock genes may be implicated in a spectrum of psychiatric diseases, including sleep- and mood-related disorders as well as schizophrenia. The bHLH transcription factors SHARP1/DEC2/BHLHE41 and SHARP2/DEC1/BHLHE40 are modulators of the circadian system, and SHARP1/DEC2/BHLHE41 has been shown to regulate homeostatic sleep drive in humans. In this study, we characterized Sharp1 and Sharp2 double mutant mice (S1/2-/-) using online EEG recordings in living animals, behavioral assays and global gene expression profiling. EEG recordings revealed attenuated sleep/wake amplitudes and alterations of theta oscillations. Increased sleep in the dark phase is paralleled by reduced voluntary activity, and cortical gene expression signatures reveal associations with psychiatric diseases. S1/2-/- mice display alterations in novelty-induced activity, anxiety and curiosity. Moreover, mutant mice exhibit impaired working memory and deficits in prepulse inhibition resembling symptoms of psychiatric diseases. Network modeling indicates a connection between neural plasticity and clock genes, particularly for SHARP1 and PER1. Our findings support the hypothesis that abnormal sleep and certain (endo)phenotypes of psychiatric diseases may be caused by common mechanisms involving components of the molecular clock including SHARP1 and SHARP2. PMID:25340473

  12. Laser damage threshold of gelatin and a copper phthalocyanine doped gelatin optical limiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brant, M.C.; McLean, D.G.; Sutherland, R.L.

    1996-12-31

    The authors demonstrate optical limiting in a unique guest-host system which uses neither a typical liquid nor a typical solid host. Instead, they dope a gelatin gel host with a water-soluble copper(II) phthalocyaninetetrasulfonic acid, tetrasodium salt (CuPcTs). They report on the gelatin's viscoelasticity, laser damage threshold, and self-healing of this damage. The viscoelastic gelatin has mechanical properties quite different from those of a liquid or a solid. The authors' laser measurements demonstrate that the single-shot damage threshold of the undoped gelatin host increases with decreasing gelatin concentration. The gelatin also has a much higher laser damage threshold than a stiff acrylic. Unlike brittle solids, the soft gelatin self-heals from laser-induced damage. Optical limiting tests also show the utility of a gelatin host doped with CuPcTs. The CuPcTs/gelatin matrix is not damaged at incident laser energies 5 times the single-shot damage threshold of the gelatin host. However, at this high laser energy the CuPcTs is photo-bleached at the beam waist. The authors repair photo-bleached sites by annealing the CuPcTs/gelatin matrix.

  13. [Perceptual sharpness metric for visible and infrared color fusion images].

    PubMed

    Gao, Shao-Shu; Jin, Wei-Qi; Wang, Xia; Wang, Ling-Xue; Luo, Yuan

    2012-12-01

    For visible and infrared color fusion images, an objective sharpness assessment model is proposed to measure the clarity of detail and edge definition of the fusion image. Firstly, the contrast sensitivity function (CSF) of the human visual system is used to reduce insensitive frequency components under certain viewing conditions. Secondly, a perceptual contrast model, which takes the human luminance masking effect into account, is proposed based on the local band-limited contrast model. Finally, the perceptual contrast is calculated in the region of interest (containing image details and edges) in the fusion image to evaluate image perceptual sharpness. Experimental results show that the proposed perceptual sharpness metric provides better predictions, more closely matched to human perceptual evaluations, than five existing sharpness (blur) metrics for color images. The proposed metric can evaluate the perceptual sharpness of color fusion images effectively.

  14. Time trend in the impact of heat waves on daily mortality in Spain for a period of over thirty years (1983-2013).

    PubMed

    Díaz, J; Carmona, R; Mirón, I J; Luna, M Y; Linares, C

    2018-07-01

    Many of the studies that analyze the future impact of climate change on mortality assume that the temperature that constitutes a heat wave will not change over time. This is unlikely, however, given processes of adaptation to heat, prevention plans, and improvements in social and health infrastructure. The objective of this study is to analyze whether, during the 1983-2013 period, there has been a temporal change in the maximum daily temperatures that constitute a heat wave (T threshold) in Spain, and to investigate whether there has been variation in the attributable risk (AR) associated with mortality due to high temperatures in this period. This study uses daily mortality data for natural causes except accidents (CIE-X: A00-R99) in municipalities of over 10,000 inhabitants in 10 Spanish provinces and maximum temperature data from observatories located in the province capitals. The time series is divided into three periods: 1983-1992, 1993-2003 and 2004-2013. For each period and each province, the value of T threshold was calculated using scatter-plot diagrams of the pre-whitened daily mortality series. For each period and each province capital, the number of heat waves was calculated and the impact on mortality was quantified using generalized linear models (GLM) with a Poisson link, which yielded relative risks (RR) and attributable risks (AR). Via a meta-analysis of the global RR and AR, the heat impact was calculated for the 10 provinces combined. The results show that in the first two periods the RR remained constant (RR: 1.14, 95% CI: 1.09-1.19 and RR: 1.14, 95% CI: 1.10-1.18), while the third period shows a sharp decrease with respect to the prior two periods (RR: 1.01, 95% CI: 1.00-1.01); the difference is statistically significant. In Spain there has been a sharp decrease in mortality attributable to heat over the past 10 years. The observed variation in RR puts into question the results of numerous studies that analyze the future impact of heat on mortality in different temporal scenarios and assume it to be constant over time. Copyright © 2018 Elsevier Ltd. All rights reserved.
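    The RR and AR figures above come from Poisson regression of daily deaths on a heat-wave indicator; a generic version of that computation is sketched below with simulated counts (statsmodels is an assumed tool choice, and AR is taken as (RR - 1)/RR, the attributable proportion among exposed days).

```python
# Minimal sketch: relative risk and attributable risk from a Poisson GLM. Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days = 1000
heat = rng.binomial(1, 0.05, n_days)          # 1 on days whose maximum exceeds T_threshold
lam = np.exp(3.0 + 0.12 * heat)               # simulated truth: RR = exp(0.12)
deaths = rng.poisson(lam)

X = sm.add_constant(heat.astype(float))
fit = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()

rr = np.exp(fit.params[1])                    # relative risk for heat-wave days
ar = (rr - 1.0) / rr                          # attributable proportion among exposed days
print(f"RR ~ {rr:.3f}, AR ~ {ar:.3f}")
```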

  15. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, HVLP COATING EQUIPMENT, SHARPE MANUFACTURING COMPANY PLATINUM 2012 HVLP SPRAY GUN

    EPA Science Inventory

    This report presents the results of the verification test of the Sharpe Platinum 2013 high-volume, low-pressure gravity-feed spray gun, hereafter referred to as the Sharpe Platinum, which is designed for use in automotive refinishing. The test coating chosen by Sharpe Manufacturi...

  16. 77 FR 52061 - Notice of Proposed Exemption Involving Sharp HealthCare Located in San Diego, CA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-28

    ...This document contains a notice of pendency (the Notice) before the Department of Labor (the Department) of a proposed individual exemption from certain prohibited transaction restrictions of the Employee Retirement Income Security Act of 1974 (the Act or ERISA). The transactions involve the Sharp HealthCare Health and Dental Plan (the Plan). The proposed exemption, if granted, would affect the Plan, its participants and beneficiaries, Sharp Healthcare (Sharp), and the Sharp Health Plan (the HMO).

  17. Occupation and its relationship with health and wellbeing: the threshold concept for occupational therapy.

    PubMed

    Fortune, Tracy; Kennedy-Jones, Mary

    2014-10-01

    We introduce the educational framework of 'threshold concepts' and discuss its utility in understanding the fundamental difficulties learners have in understanding ways of thinking and practising as occupational therapists. We propose that the relationship between occupation and health is a threshold concept for occupational therapy because of the trouble students have in achieving lasting conceptual change in their understanding of it. The authors present and discuss key ideas drawn from educational writings on threshold concepts, review the emerging literature on threshold concepts in occupational therapy, and pose a series of questions in order to prompt consideration of the pedagogical issues requiring action by academic and fieldwork educators. Threshold concepts in occupational therapy have been considered in a primarily cross-disciplinary sense, that is, the understandings that occupational therapy learners grapple with are relevant to learners in other disciplines. In contrast, we present a more narrowly defined conception that emphasises the 'bounded-ness' of the concept to the discipline. A threshold concept that captures the essential nature of occupational therapy is likely to be (highly) troublesome in terms of a learner's acquisition of it. Rather than simplifying these learning 'jewels', educators are encouraged to sit with the discomfort that they and the learner may experience as the learner struggles to grasp them. Moreover, they should reshape their curricula to provoke such struggles if transformative learning is to be the outcome. © 2014 Occupational Therapy Australia.

  18. Adaptive time-sequential binary sensing for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Hu, Chenhui; Lu, Yue M.

    2012-06-01

    We present a novel image sensor for high dynamic range imaging. The sensor performs an adaptive one-bit quantization at each pixel, with the pixel output switched from 0 to 1 only if the number of photons reaching that pixel is greater than or equal to a quantization threshold. With oracle knowledge of the incident light intensity, one can pick an optimal threshold (for that light intensity), and the corresponding Fisher information contained in the output sequence closely follows that of an ideal unquantized sensor over a wide range of intensity values. This observation suggests the potential gains one may achieve by adaptively updating the quantization thresholds. As the main contribution of this work, we propose a time-sequential threshold-updating rule that asymptotically approaches the performance of the oracle scheme. With every threshold mapped to one of a number of ordered states, the dynamics of the proposed scheme can be modeled as a parametric Markov chain. We show that the frequencies of different thresholds converge to a steady-state distribution that is concentrated around the optimal choice. Moreover, numerical experiments show that the theoretical performance measures (Fisher information and Cramér-Rao bounds) can be achieved by a maximum likelihood estimator, which is guaranteed to find the globally optimal solution due to the concavity of the log-likelihood functions. Compared with conventional image sensors and the constant single-photon threshold strategy considered in previous work, the proposed scheme attains orders-of-magnitude improvement in sensor dynamic range.
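    The flavor of time-sequential threshold adaptation can be conveyed with a much simpler rule than the one analyzed in the paper: step the threshold up after a '1' and down after a '0', so it random-walks toward the level where the output bit is equally likely to be 0 or 1. The sketch below uses that illustrative up/down rule with Poisson photon counts; it is not the authors' updating rule.

```python
# Minimal sketch: one-bit sensing with a simple up/down threshold update (illustrative rule).
import numpy as np

rng = np.random.default_rng(3)
light_intensity = 20.0                 # mean photons per exposure (unknown to the sensor)
thresholds = np.arange(1, 101)         # admissible quantization thresholds (ordered states)
state = 0                              # start at the lowest threshold

visited = []
for frame in range(5000):
    photons = rng.poisson(light_intensity)
    bit = int(photons >= thresholds[state])
    visited.append(thresholds[state])
    # move toward the level where P(bit = 1) is about 0.5
    state = min(state + 1, thresholds.size - 1) if bit else max(state - 1, 0)

print("steady-state mean threshold ~", round(float(np.mean(visited[1000:])), 1))
print("true mean photon count       =", light_intensity)
```
    Because the update depends only on the current state and the latest bit, the sequence of states forms a Markov chain whose distribution settles near the intensity-matched threshold, which is the qualitative behavior the abstract describes.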

  19. 371 E. Lower Wacker Drive, March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 2,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.

  20. 220 E. Illinois St., March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,500 cpm to 5,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.

  1. 8-37 W. Hubbard, March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 5,200 cpm. No count rates were found at any time that exceeded the threshold limit of 7,389 cpm.

  2. 429 E. Grand Ave, March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 3,700 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.

  3. 201-211 E. Grand Ave, January 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 3,900 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.

  4. 230 N. Michigan Ave, April 2018, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,400 cpm to 3,800 cpm. No count rates were found at any time that exceeded the threshold limit of 6,542 cpm.

  5. 36 W. Illinois St, March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 2,400 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.

  6. 1-37 W. Hubbard, March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 5,000 cpm. No count rates were found at any time that exceeded the threshold limit of 7,389 cpm.

  7. 211 E. Ohio St., March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 2,300 cpm. No count rates were found at any time that exceeded the threshold limit of 6,338 cpm.

  8. 140-200 E. Grand Ave, February 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 2,400 cpm. No count rates were found at any time that exceeded the threshold limit of 6,738 cpm.

  9. 430 N. Michigan Ave, January 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 2,100 cpm. No count rates were found at any time that exceeded the threshold limit of 6,338 cpm.

  10. 401-599 N. Dearborn St., March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 5,800 cpm. No count rates were found at any time that exceeded the threshold limit of 6,738 cpm.

  11. 237 E. Ontario St., January 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The measurements within the excavations and of the soil did not exceed the instrument USEPA threshold and ranged from a minimum of 4,800 cpm to a maximum of 8,300 cpm unshielded.

  12. Percolation of binary disk systems: Modeling and theory

    DOE PAGES

    Meeks, Kelsey; Tencer, John; Pantoya, Michelle L.

    2017-01-12

    The dispersion and connectivity of particles with a high degree of polydispersity is relevant to problems involving composite material properties and reaction decomposition prediction and has been the subject of much study in the literature. This paper utilizes Monte Carlo models to predict percolation thresholds for two-dimensional systems containing disks of two different radii. Monte Carlo simulations and spanning probability are used to extend prior models into regions of higher polydispersity than those previously considered. A correlation to predict the percolation threshold for binary disk systems is proposed based on the extended dataset presented in this work and compared to previously published correlations. Finally, a set of boundary conditions necessary for a good fit is presented, and a condition for maximizing the percolation threshold for binary disk systems is suggested.
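    A bare-bones version of such a Monte Carlo spanning test is sketched below: disks of two radii are dropped uniformly at random into a unit box, disks are connected when they overlap, and a trial 'spans' when a cluster touches both the left and right walls. The counts, radii, and spanning criterion are illustrative and need not match the filling protocol used in the paper.

```python
# Minimal sketch: Monte Carlo spanning probability for a binary (two-radius) disk system.
import numpy as np

def spans(n_small, n_large, r_small=0.02, r_large=0.05, rng=None):
    rng = rng or np.random.default_rng()
    r = np.concatenate([np.full(n_small, r_small), np.full(n_large, r_large)])
    xy = rng.random((r.size, 2))                              # disk centres in the unit box
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    adj = dist < (r[:, None] + r[None, :])                    # overlap criterion
    np.fill_diagonal(adj, False)
    touches_left = xy[:, 0] - r < 0.0
    touches_right = xy[:, 0] + r > 1.0
    seen = touches_left.copy()                                # flood fill from the left wall
    stack = list(np.flatnonzero(touches_left))
    while stack:
        i = stack.pop()
        if touches_right[i]:
            return True
        for j in np.flatnonzero(adj[i] & ~seen):
            seen[j] = True
            stack.append(j)
    return False

trials = 200
p_span = np.mean([spans(400, 100, rng=np.random.default_rng(k)) for k in range(trials)])
print(f"estimated spanning probability ~ {p_span:.2f}")
```
    Sweeping the disk counts (or area fractions) and locating where the spanning probability crosses 0.5 gives one common working definition of the percolation threshold for a finite system.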

  13. Considering the filler network as a third phase in polymer/CNT nanocomposites to predict the tensile modulus using Hashin-Hansen model

    NASA Astrophysics Data System (ADS)

    Kim, Sanghoon; Jamalzadeh, Navid; Zare, Yasser; Hui, David; Rhee, Kyong Yop

    2018-07-01

    In this paper, a conventional Hashin-Hansen model is developed to analyze the tensile modulus of polymer/CNT nanocomposites above the percolation threshold. This model for composites containing dispersed particles utilizes the aspect ratio of the nanofiller (α), the number of nanotubes per unit area (N), the percolation threshold (φp) and the modulus of the filler network (EN), assuming that the filler network constitutes a third phase in the nanocomposites. The experimental results and the predictions agree well, verifying the proposed relations between the modulus and the other parameters in the Hashin-Hansen model. Moreover, large values of "α", "N" and "EN" result in an improved modulus of the polymer/CNT nanocomposites, while a low percolation threshold results in a high modulus.

  14. Pattern formation in a liquid-crystal light valve with feedback, including polarization, saturation, and internal threshold effects

    NASA Astrophysics Data System (ADS)

    Neubecker, R.; Oppo, G.-L.; Thuering, B.; Tschudi, T.

    1995-07-01

    The use of liquid-crystal light valves (LCLV's) as nonlinear elements in diffractive optical systems with feedback leads to the formation of a variety of optical patterns. The spectrum of possible spatial instabilities is shown to be even richer when the LCLV's capability for polarization modulation is utilized and internal threshold and saturation effects are considered. We derive a model for the feedback system based on a realistic description of the LCLV's internal function and coupling to a polarizer. Thresholds of pattern formation are compared to the common Kerr-type approximation and show transitions involving rolls, squares, hexagons, and tiled patterns. Numerical and experimental results confirm our theoretical predictions and unveil how patterns and their typical length scales can be easily controlled by changes of the parameters.

  15. Photofragment slice imaging studies of pyrrole and the Xe…pyrrole cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubio-Lago, L.; Zaouris, D.; Sakellariou, Y.

    The photolysis of pyrrole has been studied in a molecular beam at wavelengths of 250 nm, 240 nm and 193.3 nm, using two different carrier gases, He and Xe. A broad bimodal distribution of H atom fragment velocities has been observed at all wavelengths. Near threshold, at both 240 and 250 nm, sharp features have been observed in the fast part of the H-atom distribution. Under appropriate molecular beam conditions, these sharp features and the photolysis of pyrrole at both 240 and 250 nm disappear when using Xe as opposed to He as the carrier gas. We attribute this phenomenon to cluster formation between Xe and pyrrole, and this assumption is supported by observation of resonance enhanced multiphoton ionization spectra for the (Xe…pyrrole) cluster followed by photofragmentation of the nascent cation cluster. Ab initio calculations are performed to support the experimental data. Part of this work is supported by the transfer of knowledge program SOUTHERN DYNAMICS MTKD-CT-2004-014306. The experimental work was performed at the Ultraviolet Laser Facility operating at IESL-FORTH and has been supported in part by the European Commission through the Research Infrastructures activity of FP6 (“Laserlab-Europe” RII3-CT-2003-506350). We also wish to thank the graduate program Applied Molecular Spectroscopy (EPEAEK). Part of this work was supported by the Division of Chemical Sciences, Geosciences and Biosciences, Office of Basic Energy Sciences, US Department of Energy with Battelle Memorial Institute, which operates the Pacific Northwest National Laboratory. Computer resources were provided by the Office of Science, US Department of Energy.

  16. Antimicrobial activity of highly stable silver nanoparticles embedded in agar-agar matrix as a thin film.

    PubMed

    Ghosh, S; Kaushik, R; Nagalakshmi, K; Hoti, S L; Menezes, G A; Harish, B N; Vasan, H N

    2010-10-13

    Highly stable silver nanoparticles (Ag NPs) in agar-agar (Ag/agar) as an inorganic-organic hybrid were obtained as a free-standing film by in situ reduction of silver nitrate by ethanol. The antimicrobial activity of the Ag/agar film on Escherichia coli (E. coli), Staphylococcus aureus (S. aureus), and Candida albicans (C. albicans) was evaluated in a nutrient broth and also in saline solution. In particular, films were repeatedly tested for antimicrobial activity after recycling. UV-vis absorption and TEM studies were carried out on films at different stages and morphological studies on the microbes were carried out by SEM. Results showed spherical Ag NPs of size 15-25 nm, having a sharp surface plasmon resonance (SPR) band. The antimicrobial activity of the Ag/agar film was found to be in the order C. albicans > E. coli > S. aureus, and the antimicrobial activity against C. albicans was almost maintained even after the third cycle. In contrast, in the case of E. coli and S. aureus there was a sharp decline in antimicrobial activity after the second cycle. Agglomeration of Ag NPs in the Ag/agar film on exposure to microbes was observed by TEM studies. Cytotoxicity experiments carried out on HeLa cells showed a threshold Ag NP concentration of 60 μg/mL, much higher than the minimum inhibitory concentration of Ag NPs (25.8 μg/mL) for E. coli. The mechanical strength of the film, determined by nanoindentation, was almost fully retained even after repeated cycles. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

    The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most of the diagnostic information in the image. To validate the conjecture, we investigate a segmentation-based compression algorithm built on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
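    The segmentation step can be illustrated with a generic threshold-plus-morphology pass of the kind the sentence above refers to; the HU cutoff, structuring elements, and scipy.ndimage implementation below are assumptions for illustration, not the algorithm evaluated in the paper.

```python
# Minimal sketch: bone mask from a CT slice via intensity thresholding and morphology.
import numpy as np
from scipy import ndimage

def skull_mask(ct_slice_hu, bone_hu=300.0):
    mask = ct_slice_hu > bone_hu                                       # bone-like intensities
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))     # bridge small gaps
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))     # drop isolated pixels
    return mask

# toy example: a bright ring standing in for the skull on a noisy soft-tissue background
yy, xx = np.mgrid[:256, :256]
radius = np.hypot(yy - 128, xx - 128)
ring = (radius > 90) & (radius < 100)
slice_hu = np.where(ring, 1000.0, 40.0) + np.random.default_rng(4).normal(0, 10, (256, 256))
print("skull pixels:", int(skull_mask(slice_hu).sum()))
```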

  18. Energy efficient engine shroudless, hollow fan blade technology report

    NASA Technical Reports Server (NTRS)

    Michael, C. J.

    1981-01-01

    The Shroudless, Hollow Fan Blade Technology program was structured to support the design, fabrication, and subsequent evaluation of advanced hollow and shroudless blades for the Energy Efficient Engine fan component. Rockwell International was initially selected to produce hollow airfoil specimens employing the superplastic forming/diffusion bonding (SPF/DB) fabrication technique. Rockwell demonstrated that a titanium hollow structure could be fabricated utilizing SPF/DB manufacturing methods. However, some problems such as sharp internal cavity radii and unsatisfactory secondary bonding of the edge and root details prevented production of the required quantity of fatigue test specimens. Subsequently, TRW was selected to (1) produce hollow airfoil test specimens utilizing a laminate-core/hot isostatic press/diffusion bond approach, and (2) manufacture full-size hollow prototype fan blades utilizing the technology that evolved from the specimen fabrication effort. TRW established elements of blade design and defined laminate-core/hot isostatic press/diffusion bonding fabrication techniques to produce test specimens. This fabrication technology was utilized to produce full size hollow fan blades in which the HIP'ed parts were cambered/twisted/isothermally forged, finish machined, and delivered to Pratt & Whitney Aircraft and NASA for further evaluation.

  19. Implementing AORN recommended practices for sharps safety.

    PubMed

    Ford, Donna A

    2014-01-01

    Prevention of percutaneous sharps injuries in perioperative settings remains a challenge. Occupational transmission of bloodborne pathogens, not only from patients to health care providers but also from health care providers to patients, is a significant concern. Legislation and position statements geared toward ensuring the safety of patients and health care workers have not resulted in significantly reduced sharps injuries in perioperative settings. Awareness and understanding of the types of percutaneous injuries that occur in perioperative settings is fundamental to developing an effective sharps injury prevention program. The AORN "Recommended practices for sharps safety" clearly delineates evidence-based recommendations for sharps injury prevention. Perioperative RNs can lead efforts to change practice for the safety of patients and perioperative team members by promoting the elimination of sharps hazards; the use of engineering, work practice, and administrative controls; and the proper use of personal protective equipment, including double gloving. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  20. The SHARP Program: Giving Kids Chances to Excel

    ERIC Educational Resources Information Center

    Kenney, Rich

    2007-01-01

    In this article, the author describes the Sports, Habilitation, and Recreation Program (SHARP), a program of the Foundation for Blind Children in Phoenix, Arizona. The SHARP program aims to help children, who have visual impairments, achieve goals, develop independence, and make friends. One of the unique features of the SHARP program is that it…

  1. Clinical Utility of Risk Models to Refer Patients with Adnexal Masses to Specialized Oncology Care: Multicenter External Validation Using Decision Curve Analysis.

    PubMed

    Wynants, Laure; Timmerman, Dirk; Verbakel, Jan Y; Testa, Antonia; Savelli, Luca; Fischerova, Daniela; Franchi, Dorella; Van Holsbeke, Caroline; Epstein, Elisabeth; Froyman, Wouter; Guerriero, Stefano; Rossi, Alberto; Fruscio, Robert; Leone, Francesco Pg; Bourne, Tom; Valentin, Lil; Van Calster, Ben

    2017-09-01

    Purpose: To evaluate the utility of preoperative diagnostic models for ovarian cancer based on ultrasound and/or biomarkers for referring patients to specialized oncology care. The investigated models were RMI, ROMA, and 3 models from the International Ovarian Tumor Analysis (IOTA) group [LR2, ADNEX, and the Simple Rules risk score (SRRisk)]. Experimental Design: A secondary analysis of prospectively collected data from 2 cross-sectional cohort studies was performed to externally validate diagnostic models. A total of 2,763 patients (2,403 in dataset 1 and 360 in dataset 2) from 18 centers (11 oncology centers and 7 nononcology hospitals) in 6 countries participated. Excised tissue was histologically classified as benign or malignant. The clinical utility of the preoperative diagnostic models was assessed with net benefit (NB) at a range of risk thresholds (5%-50% risk of malignancy) to refer patients to specialized oncology care. We visualized results with decision curves and generated bootstrap confidence intervals. Results: The prevalence of malignancy was 41% in dataset 1 and 40% in dataset 2. For thresholds up to 10% to 15%, RMI and ROMA had a lower NB than referring all patients. SRRisk and ADNEX demonstrated the highest NB. At a threshold of 20%, the NBs of ADNEX, SRRisk, and RMI were 0.348, 0.350, and 0.270, respectively. Results by menopausal status and type of center (oncology vs. nononcology) were similar. Conclusions: All tested IOTA methods, especially ADNEX and SRRisk, are clinically more useful than RMI and ROMA to select patients with adnexal masses for specialized oncology care. Clin Cancer Res; 23(17); 5082-90. ©2017 American Association for Cancer Research.
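
    Net benefit at a risk threshold, the quantity compared above, has a simple closed form; the sketch below shows the calculation (variable names and data are illustrative, not from the study).

```python
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of referring patients whose predicted risk of malignancy >= threshold.

    y_true: 1 = malignant, 0 = benign; risk: predicted probability of malignancy.
    NB = TP/n - (FP/n) * threshold / (1 - threshold)
    """
    y_true = np.asarray(y_true)
    refer = np.asarray(risk) >= threshold
    n = len(y_true)
    tp = np.sum(refer & (y_true == 1))
    fp = np.sum(refer & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1.0 - threshold)

def net_benefit_refer_all(prevalence, threshold):
    """Reference strategy of referring every patient, used to anchor decision curves."""
    return prevalence - (1.0 - prevalence) * threshold / (1.0 - threshold)
```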

  2. Discrimination of Swiss cheese from 5 different factories by high impact volatile organic compound profiles determined by odor activity value using selected ion flow tube mass spectrometry and odor threshold.

    PubMed

    Taylor, Kaitlyn; Wick, Cheryl; Castada, Hardy; Kent, Kyle; Harper, W James

    2013-10-01

    Swiss cheese contains more than 200 volatile organic compounds (VOCs). Gas chromatography-mass spectrometry has been utilized for the analysis of volatile compounds in food products; however, it is not sensitive enough to measure VOCs directly in the headspace of a food at low concentrations. Selected ion flow tube mass spectrometry (SIFT-MS) provides a basis for determining the concentrations of VOCs in the head space of the sample in real time at low concentration levels of parts per billion/trillion by volume. Of the Swiss cheese VOCs, relatively few have a major impact on flavor quality. VOCs with odor activity values (OAVs) (concentration/odor threshold) greater than one are considered high-impact flavor compounds. The objective of this study was to utilize SIFT-MS concentrations in conjunction with odor threshold values to determine OAVs thereby identifying high-impact VOCs to use for differentiating Swiss cheese from five factories and identify the factory variability. Seventeen high-impact VOCs were identified for Swiss cheese based on an OAV greater than one in at least 1 of the 5 Swiss cheese factories. Of these, 2,3-butanedione was the only compound with significantly different OAVs in all factories; however, cheese from any pair of factories had multiple statistically different compounds based on OAV. Principal component analysis using soft independent modeling of class analogy statistical differentiation plots, with all of the OAVs, showed differentiation between the 5 factories. Overall, Swiss cheese from different factories was determined to have different OAV profiles utilizing SIFT-MS to determine OAVs of high impact compounds. © 2013 Institute of Food Technologists®
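
    The odor activity value used above is simply the measured headspace concentration divided by the odor threshold; a toy calculation is sketched below (the compound names and numbers are placeholders, not the study's measurements).

```python
# Toy odor activity value (OAV) calculation: OAV = headspace concentration / odor threshold.
# All concentrations and thresholds below are placeholders, not the study's data.
headspace_ppb = {"2,3-butanedione": 45.0, "acetic acid": 900.0, "ethyl butanoate": 0.8}
odor_threshold_ppb = {"2,3-butanedione": 5.0, "acetic acid": 6000.0, "ethyl butanoate": 1.0}

oav = {c: headspace_ppb[c] / odor_threshold_ppb[c] for c in headspace_ppb}
high_impact = sorted(c for c, v in oav.items() if v > 1.0)   # OAV > 1 marks a high-impact VOC
print(oav, high_impact)
```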

  3. Insulin stimulates the expression of the SHARP-1 gene via multiple signaling pathways.

    PubMed

    Takagi, K; Asano, K; Haneishi, A; Ono, M; Komatsu, Y; Yamamoto, T; Tanaka, T; Ueno, H; Ogawa, W; Tomita, K; Noguchi, T; Yamada, K

    2014-06-01

    The rat enhancer of split- and hairy-related protein-1 (SHARP-1) is a basic helix-loop-helix transcription factor. An issue of whether SHARP-1 is an insulin-inducible transcription factor was examined. Insulin rapidly increased the level of SHARP-1 mRNA both in vivo and in vitro. Then, signaling pathways involved with the increase of SHARP-1 mRNA by insulin were determined in H4IIE rat hepatoma cells. Pretreatments with LY294002, wortmannin, and staurosporine completely blocked the induction effect, suggesting the involvement of both phosphoinositide 3-kinase (PI 3-K) and protein kinase C (PKC) pathways. In fact, overexpression of a dominant negative form of atypical protein kinase C lambda (aPKCλ) significantly decreased the induction of the SHARP-1 mRNA. In addition, inhibitors for the small GTPase Rac or Jun N-terminal kinase (JNK) also blocked the induction of SHARP-1 mRNA by insulin. Overexpression of a dominant negative form of Rac1 prevented the activation by insulin. Furthermore, actinomycin D and cycloheximide completely blocked the induction of SHARP-1 mRNA by insulin. Finally, when a SHARP-1 expression plasmid was transiently transfected with various reporter plasmids into H4IIE cells, the promoter activity of PEPCK reporter plasmid was specifically decreased. Thus, we conclude that insulin induces the SHARP-1 gene expression at the transcription level via a both PI 3-K/aPKCλ/JNK- and a PI 3-K/Rac/JNK-signaling pathway; protein synthesis is required for this induction; and that SHARP-1 is a potential repressor of the PEPCK gene expression. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with varying 2-14 mm kernel sizes. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP. The REV-SHARP result exhibited the highest correlation between the true local field and estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method combining variable spherical kernel sizes and Tikhonov regularization. This technique might enable more accurate background field removal and help to achieve better accuracy in QSM. Copyright © 2016 Elsevier Inc. All rights reserved.
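
    As a rough illustration of the spherical-mean-value plus Tikhonov idea behind RESHARP-style background removal, here is a heavily simplified, single-kernel sketch. The kernel radius, regularization weight, masking, and FFT formulation are all assumptions of this sketch rather than the authors' implementation, which additionally varies the kernel size.

```python
import numpy as np

def smv_kernel(shape, radius, voxel_size=(1.0, 1.0, 1.0)):
    """Spherical mean value (SMV) kernel of the given radius (mm), normalized to sum to 1."""
    axes = [np.arange(n) - n // 2 for n in shape]
    zz, yy, xx = np.meshgrid(*axes, indexing="ij")
    r = np.sqrt((zz * voxel_size[0]) ** 2 + (yy * voxel_size[1]) ** 2 + (xx * voxel_size[2]) ** 2)
    kernel = (r <= radius).astype(float)
    kernel /= kernel.sum()
    return np.fft.ifftshift(kernel)      # move the kernel centre to index (0, 0, 0)

def resharp_like(total_field, mask, radius=4.0, lam=1e-2, voxel_size=(1.0, 1.0, 1.0)):
    """Single-kernel, Tikhonov-regularized SMV deconvolution (RESHARP-flavoured sketch)."""
    s = smv_kernel(total_field.shape, radius, voxel_size)
    C = 1.0 - np.fft.fftn(s)             # Fourier transform of (delta - SMV)
    # (delta - SMV) suppresses the harmonic background field inside the mask ...
    hp = np.fft.fftn(mask * np.real(np.fft.ifftn(C * np.fft.fftn(total_field * mask))))
    # ... so a Tikhonov-regularized inverse of (delta - SMV) recovers the local field.
    local = np.real(np.fft.ifftn(np.conj(C) * hp / (np.abs(C) ** 2 + lam)))
    return local * mask
```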

  5. Hierarchically Superstructured Prussian Blue Analogues: Spontaneous Assembly Synthesis and Applications as Pseudocapacitive Materials

    DOE PAGES

    Yue, Yanfeng; Zhang, Zhiyong; Binder, Andrew J.; ...

    2014-11-10

    Hierarchically superstructured Prussian blue analogues (hexacyanoferrate, M = Ni(II), Co(II) and Cu(II)) are synthesized through a spontaneous assembly technique. In sharp contrast to macroporous-only Prussian blue analogues, the hierarchically superstructured porous Prussian blue materials are demonstrated to possess a high capacitance, which is similar to those of the conventional hybrid graphene/MnO2 nanostructured textiles. Because sodium or potassium ions are involved in energy storage processes, more environmentally neutral electrolytes can be utilized, making the superstructured porous Prussian blue analogues a great contender for applications as high-performance pseudocapacitors.

  6. Consumerism and wellness: rising tide, falling cost.

    PubMed

    Domaszewicz, Alexander

    2008-01-01

    Annual employer-sponsored health plan cost increases have been slowing incrementally due to slowing health care utilization--a phenomenon very likely tied to the proliferation of health management activities, wellness programs and other consumerism strategies. This article describes the sharp rise in recent years of consumer-directed health plans (CDHPs) and explains what developments must happen for genuine consumer-directed health care to realize its full potential. These developments include gathering transparent health care information, increasing consumer demand for that information and creating truly intuitive data solutions that allow consumers to easily access information in order to make better health care decisions.

  7. ASSESSMENT OF LOW-FREQUENCY HEARING WITH NARROW-BAND CHIRP EVOKED 40-HZ SINUSOIDAL AUDITORY STEADY STATE RESPONSE

    PubMed Central

    Wilson, Uzma S.; Kaf, Wafaa A.; Danesh, Ali A.; Lichtenhan, Jeffery T.

    2016-01-01

    Objective: To determine the clinical utility of narrow-band chirp evoked 40-Hz sinusoidal auditory steady state responses (s-ASSR) in the assessment of low-frequency hearing in noisy participants. Design: Tone bursts and narrow-band chirps were used to respectively evoke auditory brainstem responses (tb-ABR) and 40-Hz s-ASSR thresholds with the Kalman-weighted filtering technique and were compared to behavioral thresholds at 500, 2000, and 4000 Hz. A repeated-measures ANOVA, post-hoc t-tests, and simple regression analyses were performed for each of the three stimulus frequencies. Study Sample: Thirty young adults aged 18–25 with normal hearing participated in this study. Results: When 4000 equivalent response averages were used, the mean s-ASSR thresholds at 500, 2000, and 4000 Hz were 17–22 dB lower (better) than when 2000 averages were used. The mean tb-ABR thresholds were lower by 11–15 dB for 2000 and 4000 Hz when twice as many equivalent response averages were used, while mean tb-ABR thresholds for 500 Hz were indistinguishable regardless of additional response averaging. Conclusion: Narrow-band chirp evoked 40-Hz s-ASSR requires a ~15 dB smaller correction factor than tb-ABR for estimating low-frequency auditory threshold in noisy participants when adequate response averaging is used. PMID:26795555

  8. Are current cost-effectiveness thresholds for low- and middle-income countries useful? Examples from the world of vaccines.

    PubMed

    Newall, A T; Jit, M; Hutubessy, R

    2014-06-01

    The World Health Organization's CHOosing Interventions that are Cost Effective (WHO-CHOICE) thresholds for averting a disability-adjusted life-year of one to three times per capita income have been widely cited and used as a measure of cost effectiveness in evaluations of vaccination for low- and middle-income countries (LMICs). These thresholds were based upon criteria set out by the WHO Commission on Macroeconomics and Health, which reflected the potential economic returns of interventions. The CHOICE project sought to evaluate a variety of health interventions at a subregional level and classify them into broad categories to help assist decision makers, but the utility of the thresholds for within-country decision making for individual interventions (given budgetary constraints) has not been adequately explored. To examine whether the 'WHO-CHOICE thresholds' reflect funding decisions, we examined the results of two recent reviews of cost-effectiveness analyses of human papillomavirus and rotavirus vaccination in LMICs, and we assessed whether the results of these studies were reflected in funding decisions for these vaccination programmes. We found that in many cases, programmes that were deemed cost effective were not subsequently implemented in the country. We consider the implications of this finding, the advantages and disadvantages of alternative methods to estimate thresholds, and how cost perspectives and the funders of healthcare may impact on these choices.

  9. Optimal Sequential Rules for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  10. Examination of Laser Microprobe Vacuum Ultraviolet Ionization Mass Spectrometry with Application to Mapping Mars Returned Samples

    NASA Astrophysics Data System (ADS)

    Burton, A. S.; Berger, E. L.; Locke, D. R.; Lewis, E. K.; Moore, J. F.

    2018-04-01

    Laser microprobe analysis of surfaces utilizing a two-laser setup, in which the desorption laser is kept below the ionization threshold and the resulting neutral plume is examined using 157 nm vacuum ultraviolet laser light, for mass spectrometric surface mapping.

  11. Optimizing Environmental Monitoring Networks with Direction-Dependent Distance Thresholds.

    ERIC Educational Resources Information Center

    Hudak, Paul F.

    1993-01-01

    In the direction-dependent approach to location modeling developed herein, the distance within which a point of demand can find service from a facility depends on direction of measurement. The utility of the approach is illustrated through an application to groundwater remediation. (Author/MDH)

  12. SHARP: Spacecraft Health Automated Reasoning Prototype

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1991-01-01

    Planetary spacecraft mission operations as applied to SHARP are studied. The knowledge systems involved in this study are detailed. The SHARP development task and Voyager telecom link analysis were examined. It was concluded that artificial intelligence has a proven capability to deliver useful functions in a real-time space flight operations environment. SHARP has precipitated a major change in the acceptance of automation at JPL. The potential payoff from automation using AI is substantial. SHARP and other AI technology are being transferred into systems in development, including mission operations automation, science data systems, and infrastructure applications.

  13. Transition to Chaos in Random Neuronal Networks

    NASA Astrophysics Data System (ADS)

    Kadmon, Jonathan; Sompolinsky, Haim

    2015-10-01

    Firing patterns in the central nervous system often exhibit strong temporal irregularity and considerable heterogeneity in time-averaged response properties. Previous studies suggested that these properties are the outcome of the intrinsic chaotic dynamics of the neural circuits. Indeed, simplified rate-based neuronal networks with synaptic connections drawn from Gaussian distribution and sigmoidal nonlinearity are known to exhibit chaotic dynamics when the synaptic gain (i.e., connection variance) is sufficiently large. In the limit of an infinitely large network, there is a sharp transition from a fixed point to chaos, as the synaptic gain reaches a critical value. Near the onset, chaotic fluctuations are slow, analogous to the ubiquitous, slow irregular fluctuations observed in the firing rates of many cortical circuits. However, the existence of a transition from a fixed point to chaos in neuronal circuit models with more realistic architectures and firing dynamics has not been established. In this work, we investigate rate-based dynamics of neuronal circuits composed of several subpopulations with randomly diluted connections. Nonzero connections are either positive for excitatory neurons or negative for inhibitory ones, while single neuron output is strictly positive with output rates rising as a power law above threshold, in line with known constraints in many biological systems. Using dynamic mean field theory, we find the phase diagram depicting the regimes of stable fixed-point, unstable-dynamic, and chaotic-rate fluctuations. We focus on the latter and characterize the properties of systems near this transition. We show that dilute excitatory-inhibitory architectures exhibit the same onset to chaos as the single population with Gaussian connectivity. In these architectures, the large mean excitatory and inhibitory inputs dynamically balance each other, amplifying the effect of the residual fluctuations. Importantly, the existence of a transition to chaos and its critical properties depend on the shape of the single-neuron nonlinear input-output transfer function, near firing threshold. In particular, for nonlinear transfer functions with a sharp rise near threshold, the transition to chaos disappears in the limit of a large network; instead, the system exhibits chaotic fluctuations even for small synaptic gain. Finally, we investigate transition to chaos in network models with spiking dynamics. We show that when synaptic time constants are slow relative to the mean inverse firing rates, the network undergoes a transition from fast spiking fluctuations with constant rates to a state where the firing rates exhibit chaotic fluctuations, similar to the transition predicted by rate-based dynamics. Systems with finite synaptic time constants and firing rates exhibit a smooth transition from a regime dominated by stationary firing rates to a regime of slow rate fluctuations. This smooth crossover obeys scaling properties, similar to crossover phenomena in statistical mechanics. The theoretical results are supported by computer simulations of several neuronal architectures and dynamics. Consequences for cortical circuit dynamics are discussed. These results advance our understanding of the properties of intrinsic dynamics in realistic neuronal networks and their functional consequences.
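
    For readers unfamiliar with the baseline result the abstract builds on, the sketch below simulates the classic single-population random rate network (Gaussian couplings, tanh transfer function), whose quiescent fixed point gives way to chaotic fluctuations as the synaptic gain crosses its critical value near 1. Network size, integration step, and gain values are illustrative choices, and the tanh nonlinearity is the textbook sigmoidal case rather than the threshold power-law units analyzed in the paper.

```python
import numpy as np

def simulate_rate_network(N=400, g=1.5, T=200.0, dt=0.1, seed=0):
    """Euler-integrate dx/dt = -x + J.phi(x) with phi = tanh and J_ij ~ N(0, g^2/N).

    For large N the quiescent fixed point destabilizes near g = 1 and the rates
    fluctuate chaotically, the transition discussed in the abstract above.
    """
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 0.1, size=N)
    trace = np.empty(int(T / dt))
    for t in range(trace.size):
        x = x + dt * (-x + J @ np.tanh(x))
        trace[t] = x[0]
    return trace

# Sub- vs. super-critical gain: activity decays to the fixed point for g < 1 and
# keeps fluctuating for g > 1.
for g in (0.5, 1.5):
    tail = simulate_rate_network(g=g)[-500:]
    print(f"g = {g}: late-time std of a sample unit = {tail.std():.3f}")
```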

  14. When is rational to order a diagnostic test, or prescribe treatment: the threshold model as an explanation of practice variation.

    PubMed

    Djulbegovic, Benjamin; van den Ende, Jef; Hamm, Robert M; Mayrhofer, Thomas; Hozo, Iztok; Pauker, Stephen G

    2015-05-01

    The threshold model represents an important advance in the field of medical decision-making. It is a linchpin between evidence (which exists on the continuum of credibility) and decision-making (which is a categorical exercise - we decide to act or not act). The threshold concept is closely related to the question of rational decision-making. When should the physician act, that is order a diagnostic test, or prescribe treatment? The threshold model embodies the decision theoretic rationality that says the most rational decision is to prescribe treatment when the expected treatment benefit outweighs its expected harms. However, the well-documented large variation in the way physicians order diagnostic tests or decide to administer treatments is consistent with a notion that physicians' individual action thresholds vary. We present a narrative review summarizing the existing literature on physicians' use of a threshold strategy for decision-making. We found that the observed variation in decision action thresholds is partially due to the way people integrate benefits and harms. That is, explanation of variation in clinical practice can be reduced to a consideration of thresholds. Limited evidence suggests that non-expected utility threshold (non-EUT) models, such as regret-based and dual-processing models, may explain current medical practice better. However, inclusion of costs and recognition of risk attitudes towards uncertain treatment effects and comorbidities may improve the explanatory and predictive value of the EUT-based threshold models. The decision when to act is closely related to the question of rational choice. We conclude that the medical community has not yet fully defined criteria for rational clinical decision-making. The traditional notion of rationality rooted in EUT may need to be supplemented by reflective rationality, which strives to integrate all aspects of medical practice - medical, humanistic and socio-economic - within a coherent reasoning system. © 2015 Stichting European Society for Clinical Investigation Journal Foundation.
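
    The expected-utility action threshold invoked above has the classic closed form, threshold probability = harm / (harm + benefit); a toy sketch follows (the numbers are purely illustrative).

```python
def action_threshold(benefit, harm):
    """Expected-utility threshold: act (test or treat) when P(disease) exceeds harm / (harm + benefit).

    benefit: expected net gain of acting on a patient who has the disease;
    harm: expected net loss of acting on a patient who does not.
    """
    return harm / (harm + benefit)

# Illustrative numbers only: a high-benefit, low-harm treatment gives a low threshold,
# so acting is rational even when the disease probability is modest.
print(action_threshold(benefit=10.0, harm=1.0))   # 1/11 ~ 0.09
```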

  15. Exploring Moho sharpness in Northeastern North China Craton with frequency-dependence analysis of Ps receiver function

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Yao, H.; Chen, L.; WANG, X.; Fang, L.

    2017-12-01

    The North China Craton (NCC), one of the oldest cratons in the world, has attracted wide attention in Earth Science for decades because of the unusual Mesozoic destruction of its cratonic lithosphere. Understanding the deep processes and mechanism of this craton destruction demands detailed knowledge about the deep structure of this region. In this study, we calculate P-wave receiver functions (RFs) with two-year teleseismic records from the North China Seismic Array (~200 stations) deployed in the northeastern NCC. We observe both diffuse and concentrated PpPs signals from the Moho in RF waveforms, which indicates heterogeneous Moho sharpness variations in the study region. Synthetic Ps phases generated from broad positive velocity gradients at the depth of the Moho (referred to as Pms) show a clear frequency-dependent nature, which in turn is required to constrain the sharpness of the velocity gradient. Practically, characterizing such frequency dependence in real data is challenging because of low signal-to-noise ratio, contamination by multiples generated from shallow structure, distorted signal stacking especially in double-peak Pms signals, etc. We attempt to address these issues by, firstly, utilizing a high-resolution Moho depth model of this region to predict theoretical delay times of Pms that facilitate more accurate Pms identification. The Moho depth model is derived by wave-equation based poststack depth migration of both the Ps phase and surface-reflected multiples in RFs in our previous study (Zhang et al., submitted to JGR). Second, we select data from a major back-azimuth range of 100° - 220° that includes 70% of the teleseismic events, because of the uneven data coverage and to avoid azimuthal influence. Finally, we apply an adaptive cross-correlation stacking of Pms signals in RFs for each station within different frequency bands. High-quality Pms signals at different frequencies will be selected after careful visual inspection and adaptive cross-correlation stacking. Lastly, we will model the stacked Pms signals within different frequency bands to obtain the final sharpness of the crust-mantle boundary, which may shed new light on the mechanism of cratonic reactivation and destruction in the NCC.

  16. Willingness to pay per quality-adjusted life year: is one threshold enough for decision-making?: results from a study in patients with chronic prostatitis.

    PubMed

    Zhao, Fei-Li; Yue, Ming; Yang, Hua; Wang, Tian; Wu, Jiu-Hong; Li, Shu-Chuen

    2011-03-01

    To estimate the willingness to pay (WTP) per quality-adjusted life year (QALY) ratio with stated preference data and compare the results obtained between chronic prostatitis (CP) patients and the general population (GP). WTP per QALY was calculated with the subjects' own health-related utility and the WTP value. Two widely used preference-based health-related quality of life instruments, EuroQol (EQ-5D) and Short Form 6D (SF-6D), were used to elicit utility for participants' own health. The monthly WTP values for moving from participants' current health to perfect health were elicited using a closed-ended iterative bidding contingent valuation method. A total of 268 CP patients and 364 participants from the GP completed the questionnaire. We obtained 4 WTP/QALY ratios ranging from $4700 to $7400, which are close to the lower bound of local gross domestic product per capita, a threshold proposed by the World Health Organization. Nevertheless, these values were lower than other proposed thresholds and published empirical research on diseases with mortality risk. Furthermore, the WTP/QALY ratios from the GP were significantly lower than those from the CP patients, and different determinants were associated with the within-group variation identified by multiple linear regression. Preference elicitation methods are acceptable and feasible in the socio-cultural context of an Asian environment, and the calculation of the WTP/QALY ratio produced meaningful answers. The necessity of considering the QALY type or disease-specific QALY in estimating the WTP/QALY ratio was highlighted, and 1 to 3 times gross domestic product per capita, as recommended by the World Health Organization, could potentially serve as a benchmark threshold in this Asian context.

  17. Joint Dictionary Learning for Multispectral Change Detection.

    PubMed

    Lu, Xiaoqiang; Yuan, Yuan; Zheng, Xiangtao

    2017-04-01

    Change detection is one of the most important applications of remote sensing technology. It is a challenging task due to the obvious variations in the radiometric value of the spectral signature and the limited capability of utilizing spectral information. In this paper, an improved sparse coding method for change detection is proposed. The intuition of the proposed method is that unchanged pixels in different images can be well reconstructed by the joint dictionary, which corresponds to knowledge of unchanged pixels, while changed pixels cannot. First, a query image pair is projected onto the joint dictionary to constitute the knowledge of unchanged pixels. Then the reconstruction error is obtained to discriminate between the changed and unchanged pixels in the different images. To select the proper thresholds for determining changed regions, an automatic threshold selection strategy is presented that minimizes the reconstruction errors of the changed pixels. Extensive experiments on multispectral data have been conducted, and the experimental results compared with the state-of-the-art methods prove the superiority of the proposed method. Contributions of the proposed method can be summarized as follows: 1) joint dictionary learning is proposed to explore the intrinsic information of different images for change detection. In this case, change detection can be transformed into a sparse representation problem. To the authors' knowledge, few publications utilize joint dictionary learning in change detection; 2) an automatic threshold selection strategy is presented, which minimizes the reconstruction errors of the changed pixels without prior assumptions about the spectral signature. As a result, the threshold value provided by the proposed method can adapt to different data due to the characteristics of joint dictionary learning; and 3) the proposed method makes no prior assumption about the modeling and handling of the spectral signature, so it can be adapted to different data.
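
    The core test described above, reconstructing a stacked image pair with a joint dictionary and flagging pixels with large reconstruction error, can be sketched in a few lines. The OMP sparsity level and the mean-plus-two-sigma threshold below are illustrative assumptions; the paper itself selects the threshold by minimizing the reconstruction errors of the changed pixels, and learns the dictionary rather than taking it as given.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def change_map(X1, X2, D, n_nonzero=5):
    """Toy joint-dictionary change detector.

    X1, X2: (n_pixels, n_bands) spectra of the two acquisition dates.
    D: (2 * n_bands, n_atoms) joint dictionary, assumed learned from unchanged pixel pairs.
    Pixels whose stacked spectra are poorly reconstructed by D are flagged as changed.
    """
    Y = np.hstack([X1, X2]).T                             # stacked pairs, (2*n_bands, n_pixels)
    A = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)    # sparse codes, (n_atoms, n_pixels)
    err = np.linalg.norm(Y - D @ A, axis=0)               # per-pixel reconstruction error
    threshold = err.mean() + 2.0 * err.std()              # illustrative threshold choice
    return err > threshold
```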

  18. Predicted Deepwater Bathymetry from Satellite Altimetry: Non-Fourier Transform Alternatives

    NASA Astrophysics Data System (ADS)

    Salazar, M.; Elmore, P. A.

    2017-12-01

    Robert Parker (1972) demonstrated the effectiveness of Fourier Transforms (FT) to compute gravitational potential anomalies caused by uneven, non-uniform layers of material. This important calculation relates the gravitational potential anomaly to sea-floor topography. As outlined by Sandwell and Smith (1997), a six-step procedure, utilizing the FT, then demonstrated how satellite altimetry measurements of marine geoid height are inverted into seafloor topography. However, FTs are not local in space and produce Gibb's phenomenon around discontinuities. Seafloor features exhibit spatial locality and features such as seamounts and ridges often have sharp inclines. Initial tests compared the windowed-FT to wavelets in reconstruction of the step and saw-tooth functions and resulted in lower RMS error with fewer coefficients. This investigation, thus, examined the feasibility of utilizing sparser base functions such as the Mexican Hat Wavelet, which is local in space, to first calculate the gravitational potential, and then relate it to sea-floor topography.

  19. Water and Wastewater Rate Hikes Outpace CPI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratton, Hannah; Fuchs, Heidi; Chen, Yuting

    Water and wastewater treatment and delivery is the most capital-intensive of all utility services. Historically underpriced, water and wastewater rates have exhibited unprecedented growth in the past fifteen years. Steep annual increases in water and wastewater rates that outpace the Consumer Price Index (CPI) have increasingly become the norm across the United States. In this paper, we analyze water and wastewater rates across U.S. census regions between 2000 and 2014. We also examine some of the driving factors behind these rate increases, including drought, water source, required infrastructure investment, population patterns, and conservation effects. Our results demonstrate that water andmore » wastewater prices have consistently increased and have outstripped CPI throughout the study period nationwide, as well as within each census region. Further, evaluation of the current and upcoming challenges facing water and wastewater utilities suggests that sharp rate increases are likely to continue in the foreseeable future.« less

  20. A systematic review of intervention thresholds based on FRAX : A report prepared for the National Osteoporosis Guideline Group and the International Osteoporosis Foundation

    PubMed Central

    Kanis, John A; Harvey, Nicholas C; Cooper, Cyrus; Johansson, Helena; Odén, Anders; McCloskey, Eugene V

    2016-01-01

    In most assessment guidelines, treatment for osteoporosis is recommended in individuals with prior fragility fractures, especially fractures of the spine and hip. However, for those without prior fractures, the intervention thresholds can be derived using different methods. The aim of this report was to undertake a systematic review of the available information on the use of FRAX® in assessment guidelines, in particular the setting of thresholds and their validation. We identified 120 guidelines or academic papers that incorporated FRAX, of which 38 provided no clear statement on how the fracture probabilities derived are to be used in decision-making in clinical practice. The remainder recommended a fixed intervention threshold (n=58), most commonly as a component of more complex guidance (e.g. bone mineral density (BMD) thresholds) or an age-dependent threshold (n=22). Two guidelines have adopted both age-dependent and fixed thresholds. Fixed probability thresholds have ranged from 4% to 20% for a major fracture and from 1.3% to 5% for hip fracture. More than one half (39) of the 58 publications identified utilized a threshold probability of 20% for a major osteoporotic fracture, many of which also mention a hip fracture probability of 3% as an alternative intervention threshold. In nearly all instances, no rationale is provided other than that this was the threshold used by the National Osteoporosis Foundation of the US. Where undertaken, fixed probability thresholds have been determined from tests of discrimination (Hong Kong), health economic assessment (US, Switzerland), to match the prevalence of osteoporosis (China) or to align with pre-existing guidelines or reimbursement criteria (Japan, Poland). Age-dependent intervention thresholds, first developed by the National Osteoporosis Guideline Group (NOGG), are based on the rationale that if a woman with a prior fragility fracture is eligible for treatment, then, at any given age, a man or woman with the same fracture probability but in the absence of a previous fracture (i.e. at the ‘fracture threshold’) should also be eligible. Under current NOGG guidelines, based on age-dependent probability thresholds, inequalities in access to therapy arise especially at older ages (≥ 70 years) depending on the presence or absence of a prior fracture. An alternative threshold using a hybrid model reduces this disparity. The use of FRAX (fixed or age-dependent thresholds) as the gateway to assessment identifies individuals at high risk more effectively than the use of BMD. However, the setting of intervention thresholds needs to be country-specific. PMID:27465509

  1. Design and Analysis of Bionic Cutting Blades Using Finite Element Method.

    PubMed

    Li, Mo; Yang, Yuwang; Guo, Li; Chen, Donghui; Sun, Hongliang; Tong, Jin

    2015-01-01

    The praying mantis is one of the most efficient predators in the insect world, possessing a pair of powerful tools: two sharp and strong forelegs. Its femur and tibia are both armed with a double row of strong spines along their posterior edges, which can firmly grasp the prey when the femur and tibia fold on each other during capture. These spines are so sharp that they can easily and quickly cut into the prey. The geometrical characteristics of the praying mantis's foreleg, especially its tibia, have important reference value for the design of agricultural soil-cutting tools. Learning from the profile and arrangement of these spines, cutting blades with a tooth profile were designed in this work. Two different sizes of tooth structure and arrangement were utilized in the design of the cutting edge. A conventional smooth-edge blade was used for comparison with the bionic serrate-edge blades. To compare the working efficiency of the conventional and bionic blades, 3D finite element simulation analysis and experimental measurements were carried out in the present work. Both the simulation and experimental results indicated that the bionic serrate-edge blades showed better performance in cutting efficiency.

  2. Design and Analysis of Bionic Cutting Blades Using Finite Element Method

    PubMed Central

    Li, Mo; Yang, Yuwang; Guo, Li; Chen, Donghui; Sun, Hongliang; Tong, Jin

    2015-01-01

    The praying mantis is one of the most efficient predators in the insect world, possessing a pair of powerful tools: two sharp and strong forelegs. Its femur and tibia are both armed with a double row of strong spines along their posterior edges, which can firmly grasp the prey when the femur and tibia fold on each other during capture. These spines are so sharp that they can easily and quickly cut into the prey. The geometrical characteristics of the praying mantis's foreleg, especially its tibia, have important reference value for the design of agricultural soil-cutting tools. Learning from the profile and arrangement of these spines, cutting blades with a tooth profile were designed in this work. Two different sizes of tooth structure and arrangement were utilized in the design of the cutting edge. A conventional smooth-edge blade was used for comparison with the bionic serrate-edge blades. To compare the working efficiency of the conventional and bionic blades, 3D finite element simulation analysis and experimental measurements were carried out in the present work. Both the simulation and experimental results indicated that the bionic serrate-edge blades showed better performance in cutting efficiency. PMID:27019583

  3. A fracture mechanics study of the phase separating planar electrodes: Phase field modeling and analytical results

    NASA Astrophysics Data System (ADS)

    Haftbaradaran, H.; Maddahian, A.; Mossaiby, F.

    2017-05-01

    It is well known that phase separation could severely intensify mechanical degradation and expedite capacity fading in lithium-ion battery electrodes during electrochemical cycling. Experiments have frequently revealed that such degradation effects could be substantially mitigated via reducing the electrode feature size to the nanoscale. The purpose of this work is to present a fracture mechanics study of the phase separating planar electrodes. To this end, a phase field model is utilized to predict how phase separation affects evolution of the solute distribution and stress profile in a planar electrode. Behavior of the preexisting flaws in the electrode in response to the diffusion induced stresses is then examined via computing the time dependent stress intensity factor arising at the tip of flaws during both the insertion and extraction half-cycles. Further, adopting a sharp-interphase approximation of the system, a critical electrode thickness is derived below which the phase separating electrode becomes flaw tolerant. Numerical results of the phase field model are also compared against analytical predictions of the sharp-interphase model. The results are further discussed with reference to the available experiments in the literature. Finally, some of the limitations of the model are cautioned.

  4. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.

  5. Design of High Performance Microstrip LPF with Analytical Transfer Function

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyed Mohammad Hadi; Raziani, Saeed; Falihi, Ali

    2017-12-01

    By exploiting butterfly and T-shaped resonators, a new design of microstrip lowpass filter (LPF) is proposed and analyzed. The LPF is investigated in four sections, which focus, respectively, on analyzing the initial resonator and its equations in detail, providing a sharp skirt by using a series configuration, suppression at middle frequencies, and suppression at high frequencies. To present a theoretical design, the LC equivalent circuit and transfer function are precisely calculated. The measured insertion loss of the LPF is less than 0.4 dB in the frequency range from DC up to 1.25 GHz, and the return loss is better than 16 dB. A narrow transition band of 217 MHz and a roll-off rate of 170.5 dB/GHz are indicative of a sharp skirt. By utilizing T-shaped and modified T-shaped resonators in the third and fourth sections, respectively, a relative stopband bandwidth (RSB) of 166% is obtained. Furthermore, the proposed LPF occupies a small circuit area of 0.116 λg × 0.141 λg, where λg is the guided wavelength at the cut-off frequency (1.495 GHz). Finally, the proposed LPF is fabricated and the measured results agree well with the simulated ones.
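
    The quoted roll-off rate is consistent with the transition-band figures; a quick arithmetic check is sketched below, assuming the conventional 3 dB pass-band and 40 dB stop-band attenuation points for this figure of merit (the 40 dB point is an assumption, not stated in the abstract).

```python
# Roll-off rate check: (alpha_stop - alpha_pass) / (f_stop - f_pass).
alpha_pass, alpha_stop = 3.0, 40.0   # dB; conventional reference points (assumed)
f_pass = 1.495                       # GHz, quoted cut-off frequency
f_stop = f_pass + 0.217              # GHz, cut-off plus the 217 MHz transition band
roll_off = (alpha_stop - alpha_pass) / (f_stop - f_pass)
print(f"{roll_off:.1f} dB/GHz")      # ~170.5 dB/GHz, matching the quoted value
```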

  6. Theory and applications of refractive index-based optical microscopy to measure protein mass transfer in spherical adsorbent particles.

    PubMed

    Bankston, Theresa E; Stone, Melani C; Carta, Giorgio

    2008-04-25

    This work provides the theoretical foundation and a range of practical application examples of a recently developed method to measure protein mass transfer in adsorbent particles using refractive index-based optical microscopy. A ray-theoretic approach is first used to predict the behavior of light traveling through a particle during transient protein adsorption. When the protein concentration gradient in the particle is sharp, resulting in a steep refractive index gradient, the rays bend and intersect, thereby concentrating light in a sharp ring that marks the position of the adsorption front. This behavior is observed when mass transfer is dominated by pore diffusion and the adsorption isotherm is highly favorable. Applications to protein cation-exchange, hydrophobic interaction, and affinity adsorption are then considered using, as examples, the three commercial, agarose-based stationary phases SP-Sepharose-FF, Butyl Sepharose 4FF, and MabSelect. In all three cases, the method provides results that are consistent with measurements based on batch adsorption and previously published data confirming its utility for the determination of protein mass transfer kinetics under a broad range of practically relevant conditions.

  7. Comparative study on aerodynamic heating under perfect and nonequilibrium hypersonic flows

    NASA Astrophysics Data System (ADS)

    Wang, Qiu; Li, JinPing; Zhao, Wei; Jiang, ZongLin

    2016-02-01

    In this study, comparative heat flux measurements for a sharp cone model were conducted by utilizing the high-enthalpy shock tunnel JF-10 and the large-scale shock tunnel JF-12, providing nonequilibrium and perfect gas flows, respectively. Experiments were performed at the Key Laboratory of High Temperature Gas Dynamics (LHD), Institute of Mechanics, Chinese Academy of Sciences. Corresponding numerical simulations were also conducted in an effort to better understand the phenomena accompanying these experiments. By assessing the consistency and accuracy of all the data gathered during this study, a detailed comparison of sharp cone heat transfer under totally different freestream conditions was built and analyzed. One specific parameter, defined as the product of the Stanton number and the square root of the Reynolds number, was found to be more characteristic of the aerodynamic heating phenomena encountered in hypersonic flight. Adequate use of this parameter practically eliminates the variability caused by the different flow conditions, regardless of whether the flow is in dissociation or the boundary condition is catalytic. Essentially, the parameter identified in this study reduces the amount of ground experimental data necessary and eases data extrapolation to flight.
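
    The correlating parameter is simply St·√Re; a one-line illustration follows (the Stanton and Reynolds numbers below are placeholders, not the tunnel conditions reported in the study).

```python
import math

def heating_parameter(stanton, reynolds):
    """Correlating parameter discussed above: St * sqrt(Re)."""
    return stanton * math.sqrt(reynolds)

# Placeholder values, shown only to illustrate the call:
print(heating_parameter(stanton=2.0e-3, reynolds=1.5e6))   # ~2.45
```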

  8. Focus information is used to interpret binocular images

    PubMed Central

    Hoffman, David M.; Banks, Martin S.

    2011-01-01

    Focus information—blur and accommodation—is highly correlated with depth in natural viewing. We examined the use of focus information in solving the binocular correspondence problem and in interpreting monocular occlusions. We presented transparent scenes consisting of two planes. Observers judged the slant of the farther plane, which was seen through the nearer plane. To do this, they had to solve the correspondence problem. In one condition, the two planes were presented with sharp rendering on one image plane, as is done in conventional stereo displays. In another condition, the planes were presented on two image planes at different focal distances, simulating focus information in natural viewing. Depth discrimination performance improved significantly when focus information was correct, which shows that the visual system utilizes the information contained in depth-of-field blur in solving binocular correspondence. In a second experiment, we presented images in which one eye could see texture behind an occluder that the other eye could not see. When the occluder's texture was sharp along with the occluded texture, binocular rivalry was prominent. When the occluded and occluding textures were presented with different blurs, rivalry was significantly reduced. This shows that blur aids the interpretation of scene layout near monocular occlusions. PMID:20616139

  9. Behavior of healthcare workers after injuries from sharp instruments.

    PubMed

    Adib-Hajbaghery, Mohsen; Lotfi, Mohammad Sajjad

    2013-09-01

    Injuries with sharps are common occupational hazards for healthcare workers. Such injuries predispose staff to dangerous infections such as hepatitis B, hepatitis C and HIV. The present study was conducted to investigate the behaviors of healthcare workers in Kashan healthcare centers after needle sticks and injuries with sharps in 2012. A cross-sectional study was conducted on 298 healthcare workers of medical centers governed by Kashan University of Medical Sciences. A questionnaire was used in this study. The first part included questions about demographic characteristics. The second part of the questionnaire consisted of 16 items related to sharp instrument injuries. For data analysis, descriptive and analytical statistics (chi-square, ANOVA and Pearson correlation coefficient) were computed using SPSS version 16.0. Of the 298 healthcare workers, 114 (38.3%) had a history of injury from needles and sharp instruments in the last six months. Most needle stick and sharp instrument injuries had occurred among operating room nurses and midwives; 32.5% of injuries from sharp instruments occurred in the morning shift. Needles were responsible for 46.5% of injuries. The most common actions taken after needle stick injuries were compression (27.2%) and washing the area with soap and water (15.8%). Only 44.6% of the injured personnel pursued follow-up measures after a needle stick or sharp instrument injury. More than half of the healthcare workers with a needle stick or sharp instrument injury had refused follow-up for various reasons. The authorities should implement education programs along with protocols to be followed after needle stick or sharps injuries.

  10. The impact of sharps injuries on student nurses: a systematic review.

    PubMed

    Hambridge, Kevin; Nichols, Andrew; Endacott, Ruth

    2016-10-27

    The purpose of this review was to discover the impact of sharps injuries in the student nurse population. Much is known and reported about sharps injuries in registered nurses, but there has been a lack of published evidence regarding sharps injuries within the student nurse population. A systematic review of nursing, health and psychology databases was conducted. The limits set were publications between 1980 and 2014 in the English language. Studies were identified then, following a rigorous critical and quality appraisal with validated tools, were selected for the systematic review. A total of 40 articles met the inclusion criteria, reporting studies conducted in 18 countries. Psychological and physical impacts of sharps injuries in student nurses were reported, such as fear, anxiety and depression, although these impacts were not quantified using a validated instrument. The impact of sharps injuries can be severe, both psychological and physical. This systematic review shows that further research is needed into this, especially in under-researched areas such as the UK, to establish the impact of sharps injuries within this population. Further research would also aid the education and prevention of this harmful problem. The review also emphasises the psychological issues relating to sharps injuries, the impact these can have on individuals and the support and counselling that student nurses require after injury. These findings highlight the potential psychological issues that can result from sharps injuries in this population.

  11. An acoustofluidic micromixer based on oscillating sidewall sharp-edges.

    PubMed

    Huang, Po-Hsun; Xie, Yuliang; Ahmed, Daniel; Rufo, Joseph; Nama, Nitesh; Chen, Yuchao; Chan, Chung Yu; Huang, Tony Jun

    2013-10-07

    Rapid and homogeneous mixing inside a microfluidic channel is demonstrated via the acoustic streaming phenomenon induced by the oscillation of sidewall sharp-edges. By optimizing the design of the sharp-edges, excellent mixing performance and fast mixing speed can be achieved in a simple device, making our sharp-edge-based acoustic micromixer a promising candidate for a wide variety of applications.

  12. Composite, ordered material having sharp surface features

    DOEpatents

    D'Urso, Brian R.; Simpson, John T.

    2006-12-19

    A composite material having sharp surface features includes a recessive phase and a protrusive phase, the recessive phase having a higher susceptibility to a preselected etchant than the protrusive phase, the composite material having an etched surface wherein the protrusive phase protrudes from the surface to form a sharp surface feature. The sharp surface features can be coated to make the surface super-hydrophobic.

  13. Adaptive threshold determination for efficient channel sensing in cognitive radio network using mobile sensors

    NASA Astrophysics Data System (ADS)

    Morshed, M. N.; Khatun, S.; Kamarudin, L. M.; Aljunid, S. A.; Ahmad, R. B.; Zakaria, A.; Fakir, M. M.

    2017-03-01

    Spectrum saturation is a major issue in wireless communication systems all over the world. A huge number of users joins the existing fixed frequency bands each day, but the bandwidth is not increasing. These requirements demand efficient and intelligent use of the spectrum. To solve this issue, Cognitive Radio (CR) is the best choice. Spectrum sensing of a wireless heterogeneous network is a fundamental issue in detecting the presence of primary users' signals in CR networks. In order to protect primary users (PUs) from harmful interference, the spectrum sensing scheme is required to perform well even in low signal-to-noise ratio (SNR) environments. Meanwhile, the sensing period is usually required to be short enough so that secondary (unlicensed) users (SUs) can fully utilize the available spectrum. CR networks can be designed to manage the radio spectrum more efficiently by utilizing the spectrum holes in primary users' licensed frequency bands. In this paper, we propose an adaptive threshold detection method to detect the presence of a PU signal using the free space path loss (FSPL) model in a 2.4 GHz WLAN network. The model is designed for mobile sensors embedded in smartphones. The mobile sensors act as SUs while the existing WLAN network (channels) acts as the PU. The theoretical results show that the desired detection threshold range of the mobile sensors mainly depends on the noise floor level of the location in consideration.
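
    A sketch of the free-space path loss relation on which such an adaptive threshold would be built is given below; the detection margin and noise floor are illustrative assumptions, not the paper's rule.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

def adaptive_threshold_dbm(tx_power_dbm, distance_m, freq_hz=2.4e9,
                           noise_floor_dbm=-95.0, margin_db=3.0):
    """Set the detection threshold a few dB below the expected PU receive level,
    but never below the local noise floor (margin and noise floor are assumed values)."""
    expected_rx = tx_power_dbm - fspl_db(distance_m, freq_hz)
    return max(expected_rx - margin_db, noise_floor_dbm)

print(round(fspl_db(100.0, 2.4e9), 1))   # ~80.1 dB at 100 m and 2.4 GHz
```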

  14. Dynamic Multiple-Threshold Call Admission Control Based on Optimized Genetic Algorithm in Wireless/Mobile Networks

    NASA Astrophysics Data System (ADS)

    Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin

    Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role in increasing the resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class services in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which assigns different priorities to different service classes and call types through different reward/penalty policies according to the network load and average call arrival rate. To reduce the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components, such as encoding, population initialization, fitness function and mutation, are all tailored to the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms similar schemes, indicating that the intended optimization is realized.

  15. A new temperature threshold detector - Application to missile monitoring

    NASA Astrophysics Data System (ADS)

    Coston, C. J.; Higgins, E. V.

    Comprehensive thermal surveys within the case of solid propellant ballistic missile flight motors are highly desirable. For example, a problem involving motor failures due to insulator cracking at motor ignition, which took several years to solve, could have been identified immediately on the basis of a suitable thermal survey. Using conventional point measurements, such as those utilizing typical thermocouples, for such a survey on a full-scale motor is not feasible because of the great number of sensors and measurements required. An alternate approach recognizes that temperatures below a threshold (which depends on the material being monitored) are acceptable, but higher temperatures exceed design margins. In this case, hot spots can be located by a grid of wire-like sensors that are sensitive to temperatures above the threshold anywhere along the sensor. A new type of temperature threshold detector is being developed for flight missile use. The device under consideration consists of KNO3 separating copper and Constantan metals. Above the melting point of KNO3, galvanic action provides a voltage output of a few tenths of a volt.

  16. The Vulnerability of People to Landslides: A Case Study on the Relationship between the Casualties and Volume of Landslides in China.

    PubMed

    Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi

    2017-02-21

    The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collect information on the landslides that have caused casualties in China and establish the Landslides Casualties Inventory of China. One hundred landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by the occurrence of the event. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to the landslide cases that occurred in 2013 and 2014. The validation results show that the casualties estimated from the threshold curve were in good agreement with the real casualties, with a small deviation. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research.

  17. Rainfall threshold definition using an entropy decision approach and radar data

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.

    2011-07-01

    Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. If the threshold values are exceeded, a critical situation can arise at river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.

  18. Novel Analog For Muscle Deconditioning

    NASA Technical Reports Server (NTRS)

    Ploutz-Snyder, Lori; Ryder, Jeff; Buxton, Roxanne; Redd, Elizabeth; Scott-Pandorf, Melissa; Hackney, Kyle; Fiedler, James; Ploutz-Snyder, Robert; Bloomberg, Jacob

    2011-01-01

    Existing models (such as bed rest) of muscle deconditioning are cumbersome and expensive. We propose a new model utilizing a weighted suit to manipulate strength, power, or endurance (function) relative to body weight (BW). Methods: 20 subjects performed 7 occupational astronaut tasks while wearing a suit weighted with 0-120% of BW. Models of the full relationship between muscle function/BW and task completion time were developed using fractional polynomial regression and verified by the addition of pre- and postflight astronaut performance data for the same tasks. Spline regression was used to identify muscle function thresholds below which task performance was impaired. Results: Thresholds of performance decline were identified for each task. Seated egress & walk (the most difficult task) showed thresholds of leg press (LP) isometric peak force/BW of 18 N/kg, LP power/BW of 18 W/kg, LP work/BW of 79 J/kg, isokinetic knee extension (KE)/BW of 6 Nm/kg, and KE torque/BW of 1.9 Nm/kg. Conclusions: Laboratory manipulation of relative strength has promise as an appropriate analog for spaceflight-induced loss of muscle function, for predicting occupational task performance and establishing operationally relevant strength thresholds.
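
    A hedged sketch of the kind of broken-stick ("spline"-style) threshold fit described above follows; the strength and task-time values and the starting parameters are hypothetical, not the study's measurements.

```python
# A minimal broken-stick sketch for locating a performance threshold: below the
# breakpoint, task time rises as relative strength falls; above it, time is flat.
# Data and starting values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

strength = np.array([5, 8, 10, 12, 15, 18, 20, 25, 30], dtype=float)      # e.g. LP force / BW, N/kg
task_time = np.array([95, 60, 45, 32, 25, 22, 21, 20, 20], dtype=float)   # seconds (hypothetical)

def broken_stick(x, breakpoint, slope, plateau):
    # plateau above the breakpoint, linear increase in time below it
    return plateau + slope * np.maximum(breakpoint - x, 0.0)

(bp, slope, plateau), _ = curve_fit(broken_stick, strength, task_time, p0=[15.0, 5.0, 20.0])
print(f"estimated threshold ~ {bp:.1f} N/kg")
```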

  19. Development of water quality thresholds during dredging for the protection of benthic primary producer habitats.

    PubMed

    Sofonia, Jeremy J; Unsworth, Richard K F

    2010-01-01

    Given the potential for adverse effects of ocean dredging on marine organisms, particularly benthic primary producer communities, the management and monitoring of those activities which cause elevated turbidity and sediment loading is critical. In practice, however, this has proven challenging, as the development of water quality threshold values, upon which management responses are based, is subject to a large number of physical and biological parameters that are spatially and temporally specific. As a consequence, monitoring programs to date have taken a wide range of different approaches, most focusing on measures of turbidity reported as nephelometric turbidity units (NTU). This paper presents a potential approach to the determination of water quality thresholds which utilises data gathered through the long-term deployment of in situ water instruments, but suggests a focus on photosynthetically active radiation (PAR) rather than NTU, as it is more relevant biologically and inclusive of other site conditions. A simple mathematical approach to data interpretation is also presented, which expresses the threshold not as individual concentration values over specific intervals, but as an equation which may be utilized in numerical modelling.

  20. The Vulnerability of People to Landslides: A Case Study on the Relationship between the Casualties and Volume of Landslides in China

    PubMed Central

    Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi

    2017-01-01

    The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collected information on the landslides that have caused casualties in China and established the Landslides Casualties Inventory of China. 100 landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by its occurrence. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to the landslide cases that occurred in 2013 and 2014. The validation results show that the casualties estimated from the threshold curve were in good agreement with the actual casualties, with only a small deviation. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research. PMID:28230810

  1. Frequency modulation television analysis: Threshold impulse analysis. [with computer program

    NASA Technical Reports Server (NTRS)

    Hodge, W. H.

    1973-01-01

    A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.

  2. Ultralow-threshold multiphoton-pumped lasing from colloidal nanoplatelets in solution

    PubMed Central

    Li, Mingjie; Zhi, Min; Zhu, Hai; Wu, Wen-Ya; Xu, Qing-Hua; Jhon, Mark Hyunpong; Chan, Yinthai

    2015-01-01

    Although multiphoton-pumped lasing from a solution of chromophores is important in the emerging fields of nonlinear optofluidics and bio-photonics, conventionally used organic dyes are often rendered unsuitable because of relatively small multiphoton absorption cross-sections and low photostability. Here, we demonstrate highly photostable, ultralow-threshold multiphoton-pumped biexcitonic lasing from a solution of colloidal CdSe/CdS nanoplatelets within a cuvette-based Fabry–Pérot optical resonator. We find that colloidal nanoplatelets surprisingly exhibit an optimal lateral size that minimizes lasing threshold. These nanoplatelets possess very large gain cross-sections of 7.3 × 10−14 cm2 and ultralow lasing thresholds of 1.2 and 4.3 mJ cm−2 under two-photon (λexc=800 nm) and three-photon (λexc=1.3 μm) excitation, respectively. The highly polarized emission from the nanoplatelet laser shows no significant photodegradation over 107 laser shots. These findings constitute a more comprehensive understanding of the utility of colloidal semiconductor nanoparticles as the gain medium in high-performance frequency-upconversion liquid lasers. PMID:26419950

  3. 200-300 N. Stetson, January 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates throughout the grading ranged from 4,500 cpm to 8,000 cpm. No count rates were found at any time that exceeded the threshold limits of 17,246 cpm and 18,098 cpm.

  4. 0 - 36 W. Illinois St., January 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 3,700 cpm. No count rates were found at any time that exceeded the threshold limits of 6,738 cpm and 7,029 cpm.

  5. 400-449 N. State St, March 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 4,300 cpm. No count rates were found at any time that exceeded the threshold limits of 6,338 cpm and 7,038 cpm.

  6. Walsh Preprocessor.

    DTIC Science & Technology

    1980-08-01

    the sequency threshold does not utilize the DC level information and the time thresholding adaptively adjusts for DC level. This characteristic...lowest 256/8 = 32 elements. The above observation can be mathematically proven to also relate the fact that the lowest (NT/W) elements can, at worst case

  7. Identifying postural control and thresholds of instability utilizing a motion-based ATV simulator.

    DOT National Transportation Integrated Search

    2017-01-01

    Our ATV simulator is currently the only one in existence that allows studies of human subjects engaged in active riding, a process that is necessary for ATV operators to perform in order to maintain vehicle control, in a virtual reality environ...

  8. A design tool for predicting the capillary transport characteristics of fuel cell diffusion media using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Kumbur, E. C.; Sharp, K. V.; Mench, M. M.

    Developing a robust, intelligent design tool for multivariate optimization of multi-phase transport in fuel cell diffusion media (DM) is of utmost importance for the development of advanced DM materials. This study explores the development of a DM design algorithm based on an artificial neural network (ANN) that can be used as a powerful tool for predicting the capillary transport characteristics of fuel cell DM. Direct measurements of drainage capillary pressure-saturation curves of the differently engineered DMs (5, 10 and 20 wt.% PTFE) were performed at room temperature under three compressions (0, 0.6 and 1.4 MPa) [E.C. Kumbur, K.V. Sharp, M.M. Mench, J. Electrochem. Soc. 154(12) (2007) B1295-B1304; E.C. Kumbur, K.V. Sharp, M.M. Mench, J. Electrochem. Soc. 154(12) (2007) B1305-B1314; E.C. Kumbur, K.V. Sharp, M.M. Mench, J. Electrochem. Soc. 154(12) (2007) B1315-B1324]. The generated benchmark data were utilized to systematically train a three-layered ANN framework that uses the feed-forward error back-propagation methodology. The designed ANN successfully predicts the measured capillary pressures within an average uncertainty of ±5.1% of the measured data, confirming that the present ANN model can be used as a design tool within the range of tested parameters. The ANN simulations reveal that tailoring the DM with high PTFE loading and applying high compression pressure lead to a higher capillary pressure, thereby promoting liquid water transport within the pores of the DM. Any increase in hydrophobicity of the DM is found to amplify the compression effect, thus yielding a higher capillary pressure for the same saturation level and compression.
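
    The sketch below shows a comparable three-layer feed-forward network built with scikit-learn rather than the authors' own implementation; the input variables (PTFE loading, compression, saturation) follow the abstract, but the synthetic training data and network settings are assumptions.

```python
# A minimal sketch of a three-layer feed-forward ANN for capillary pressure
# prediction. The synthetic target function is a stand-in for measured data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform([5, 0.0, 0.1], [20, 1.4, 0.9], size=(200, 3))   # wt% PTFE, MPa, liquid saturation
y = 2.0 * X[:, 0] + 5.0 * X[:, 1] + 10.0 * X[:, 2] + rng.normal(0, 0.5, 200)  # stand-in for Pc

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),  # one hidden layer
)
model.fit(X, y)
print(model.predict([[10, 0.6, 0.5]]))  # predicted capillary pressure for one DM condition
```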

  9. Distributed Feedback Laser Based on Single Crystal Perovskite

    NASA Astrophysics Data System (ADS)

    Sun, Shang; Xiao, Shumin; Song, Qinghai

    2017-06-01

    We demonstrate a highly polarized distributed feedback laser based on a single crystal perovskite with a grating-structured photoresist on top. A lower lasing threshold than that of the Fabry-Perot mode lasers from the same single crystal CH3NH3PbBr3 microplate was obtained. Single crystal CH3NH3PbBr3 microplates were synthesized with a one-step solution-processed precipitation method. Once the photoresist on top of a microplate was patterned by electron beam, the device was realized. This one-step fabrication process exploits the advantages of the single crystal to the greatest extent. The ultra-low defect density in the single crystalline microplate offers an opportunity for lower threshold lasing action compared with poly-crystal perovskite films. In the experiment, the lasing action based on the distributed feedback grating design showed a lower threshold and a higher intensity than the Fabry-Perot mode lasers supported by the flat facets of the same microplate.

  10. Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios

    NASA Astrophysics Data System (ADS)

    Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui

    2018-01-01

    The multi-scale method is widely used in analyzing time series of financial markets and it can provide market information for different economic entities that focus on different periods. Through constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationships between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. Through combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
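
    A minimal sketch of the threshold-based edge deletion step is given below, assuming the return series have already been decomposed to the desired time scale; the synthetic data and threshold values are illustrative only.

```python
# A minimal sketch of thresholding a price-fluctuation correlation network.
# Random returns stand in for real (scale-decomposed) series.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
returns = rng.normal(size=(250, 8))            # 250 days x 8 listed companies (synthetic)
corr = np.corrcoef(returns, rowvar=False)      # 8 x 8 correlation matrix (fully connected network)

def threshold_network(corr_matrix, threshold):
    """Keep only edges whose |correlation| is at least the threshold."""
    n = corr_matrix.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr_matrix[i, j]) >= threshold:
                g.add_edge(i, j, weight=corr_matrix[i, j])
    return g

for thr in (0.1, 0.3, 0.5):
    g = threshold_network(corr, thr)
    print(thr, g.number_of_edges(), nx.density(g))   # network indicators vs. threshold
```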

  11. Phosphatase activity tunes two-component system sensor detection threshold.

    PubMed

    Landry, Brian P; Palanki, Rohan; Dyulgyarov, Nikola; Hartsough, Lucas A; Tabor, Jeffrey J

    2018-04-12

    Two-component systems (TCSs) are the largest family of multi-step signal transduction pathways in biology, and a major source of sensors for biotechnology. However, the input concentrations to which biosensors respond are often mismatched with application requirements. Here, we utilize a mathematical model to show that TCS detection thresholds increase with the phosphatase activity of the sensor histidine kinase. We experimentally validate this result in engineered Bacillus subtilis nitrate and E. coli aspartate TCS sensors by tuning their detection threshold up to two orders of magnitude. We go on to apply our TCS tuning method to recently described tetrathionate and thiosulfate sensors by mutating a widely conserved residue previously shown to impact phosphatase activity. Finally, we apply TCS tuning to engineer B. subtilis to sense and report a wide range of fertilizer concentrations in soil. This work will enable the engineering of tailor-made biosensors for diverse synthetic biology applications.
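
    The qualitative claim that a higher sensor phosphatase rate raises the detection threshold can be illustrated with a toy steady-state model (not the authors' model): if the output is k_kin*u / (k_kin*u + k_phos), the half-maximal input is u_50 = k_phos / k_kin, which grows with phosphatase activity.

```python
# A toy steady-state sketch (an assumption for illustration, not the paper's model):
# the fraction of phosphorylated response regulator saturates with input u, and the
# input giving half-maximal output scales with the phosphatase rate.
def steady_state_output(u, k_kin=1.0, k_phos=1.0):
    """Fraction of phosphorylated response regulator for input concentration u."""
    return k_kin * u / (k_kin * u + k_phos)

for k_phos in (0.1, 1.0, 10.0):
    u50 = k_phos / 1.0          # half-maximal input for k_kin = 1
    print(f"k_phos={k_phos:5.1f}  half-maximal input ~ {u50:5.1f}")
```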

  12. Study on reservoir time-varying design flood of inflow based on Poisson process with time-dependent parameters

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Huang, Jing; Li, Jianchang

    2018-06-01

    The time-varying design flood makes full use of the measured data and provides the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods and a Poisson process with time-dependent parameters for simulating the reservoir's time-varying design flood. Considering the relationship between the model parameters and its hypotheses, the over-threshold intensity, the goodness of fit of the Poisson distribution and the design flood parameters are used as the basis for selecting the unit period and threshold of the time-varying design flood, and the time-varying design flood process of the Longyangxia reservoir is deduced at nine design frequencies. The time-varying design flood of inflow is closer to the reservoir's actual inflow conditions and can be used to adjust the operating water level in the flood season and to plan the resource utilization of floods in the basin.
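
    A minimal peak-over-threshold (POT) sampling sketch for a single unit period is shown below; the declustering rule (one peak per contiguous exceedance run) and the example threshold are illustrative, not the paper's calibrated values.

```python
# A minimal POT sampling sketch: extract one peak per contiguous run of days above
# the threshold. Inflow values and the threshold are hypothetical.
import numpy as np

def pot_sample(flow, threshold):
    """Return the peak discharge of each contiguous cluster of values above the threshold."""
    flow = np.asarray(flow, dtype=float)
    above = flow > threshold
    peaks = []
    i = 0
    while i < len(flow):
        if above[i]:
            j = i
            while j < len(flow) and above[j]:
                j += 1
            peaks.append(flow[i:j].max())   # one peak per exceedance cluster
            i = j
        else:
            i += 1
    return np.array(peaks)

daily_inflow = np.array([800, 1200, 3500, 4200, 3900, 1500, 900, 5200, 6100, 2000])
print(pot_sample(daily_inflow, threshold=3000.0))   # -> [4200. 6100.]
```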

  13. Establishing seasonal and alert influenza thresholds in Cambodia using the WHO method: implications for effective utilization of influenza surveillance in the tropics and subtropics.

    PubMed

    Ly, Sovann; Arashiro, Takeshi; Ieng, Vanra; Tsuyuoka, Reiko; Parry, Amy; Horwood, Paul; Heng, Seng; Hamid, Sarah; Vandemaele, Katelijn; Chin, Savuth; Sar, Borann; Arima, Yuzo

    2017-01-01

    To establish seasonal and alert thresholds and transmission intensity categories for influenza to provide timely triggers for preventive measures or upscaling control measures in Cambodia. Using Cambodia's influenza-like illness (ILI) and laboratory-confirmed influenza surveillance data from 2009 to 2015, three parameters were assessed to monitor influenza activity: the proportion of ILI patients among all outpatients, proportion of ILI samples positive for influenza and the product of the two. With these parameters, four threshold levels (seasonal, moderate, high and alert) were established and transmission intensity was categorized based on a World Health Organization alignment method. Parameters were compared against their respective thresholds. Distinct seasonality was observed using the two parameters that incorporated laboratory data. Thresholds established using the composite parameter, combining syndromic and laboratory data, had the least number of false alarms in declaring season onset and were most useful in monitoring intensity. Unlike in temperate regions, the syndromic parameter was less useful in monitoring influenza activity or for setting thresholds. Influenza thresholds based on appropriate parameters have the potential to provide timely triggers for public health measures in a tropical country where monitoring and assessing influenza activity has been challenging. Based on these findings, the Ministry of Health plans to raise general awareness regarding influenza among the medical community and the general public. Our findings have important implications for countries in the tropics/subtropics and in resource-limited settings, and categorized transmission intensity can be used to assess severity of potential pandemic influenza as well as seasonal influenza.
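
    The sketch below illustrates the composite parameter (proportion ILI times proportion positive) and a percentile-style grading into four levels; the percentile choices are placeholders, and the full WHO alignment method involves additional steps not reproduced here.

```python
# A minimal sketch of the composite surveillance parameter and illustrative
# percentile-based threshold levels. Percentiles and data are assumptions.
import numpy as np

def composite_parameter(ili_patients, all_outpatients, flu_positive, samples_tested):
    """Proportion ILI x proportion of ILI samples positive for influenza."""
    p_ili = np.asarray(ili_patients) / np.asarray(all_outpatients)
    p_pos = np.asarray(flu_positive) / np.asarray(samples_tested)
    return p_ili * p_pos

def thresholds(historical_composite):
    """Seasonal / moderate / high / alert levels from historical weekly values."""
    c = np.asarray(historical_composite)
    return {
        "seasonal": np.percentile(c, 40),
        "moderate": np.percentile(c, 90),
        "high": np.percentile(c, 95),
        "alert": np.percentile(c, 99),
    }

rng = np.random.default_rng(5)
weekly = composite_parameter(rng.integers(50, 200, 156), 1000, rng.integers(5, 40, 156), 60)
print(thresholds(weekly))
```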

  14. An acoustofluidic micromixer based on oscillating sidewall sharp-edges†

    PubMed Central

    Huang, Po-Hsun; Xie, Yuliang; Ahmed, Daniel; Rufo, Joseph; Nama, Nitesh; Chen, Yuchao; Chan, Chung Yu; Huang, Tony Jun

    2014-01-01

    Rapid and homogeneous mixing inside a microfluidic channel is demonstrated via the acoustic streaming phenomenon induced by the oscillation of sidewall sharp-edges. By optimizing the design of the sharp-edges, excellent mixing performance and fast mixing speed can be achieved in a simple device, making our sharp-edge-based acoustic micromixer a promising candidate for a wide variety of applications. PMID:23896797

  15. An enhancement of ROC curves made them clinically relevant for diagnostic-test comparison and optimal-threshold determination.

    PubMed

    Subtil, Fabien; Rabilloud, Muriel

    2015-07-01

    The receiver operating characteristic curves (ROC curves) are often used to compare continuous diagnostic tests or to determine the optimal threshold of a test; however, they do not consider the costs of misclassifications or the disease prevalence. The ROC graph was extended to allow for these aspects. Two new lines are added to the ROC graph: a sensitivity line and a specificity line. Their slopes depend on the disease prevalence and on the ratio of the net benefit of treating a diseased subject to the net cost of treating a nondiseased one. First, these lines help researchers determine the range of specificities within which comparison of partial areas under the curves is clinically relevant. Second, the point of the ROC curve farthest from the specificity line is shown to be the optimal threshold in terms of expected utility. This method was applied: (1) to determine the optimal threshold of the ratio of specific immunoglobulin G (IgG) to total IgG for the diagnosis of congenital toxoplasmosis and (2) to select, between two markers, the more accurate for the diagnosis of left ventricular hypertrophy in hypertensive subjects. The two additional lines transform the statistically valid ROC graph into a clinically relevant tool for test selection and threshold determination. Copyright © 2015 Elsevier Inc. All rights reserved.
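
    The sketch below selects the utility-optimal ROC point by maximizing TPR - m*FPR, where the slope m depends on prevalence and the benefit-to-cost ratio; this is the standard expected-utility criterion that the paper's added specificity line expresses geometrically. The data are synthetic.

```python
# A minimal sketch of expected-utility-based threshold selection on a ROC curve.
# m = (1 - prevalence) / prevalence / (benefit-to-cost ratio) is the iso-utility slope.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 500)                       # disease status (synthetic)
score = y * 1.0 + rng.normal(0, 1.2, 500)         # continuous test result (synthetic)

def utility_optimal_threshold(y_true, y_score, prevalence, benefit_to_cost):
    fpr, tpr, thr = roc_curve(y_true, y_score)
    m = (1.0 - prevalence) / prevalence / benefit_to_cost   # slope of the iso-utility lines
    best = np.argmax(tpr - m * fpr)
    return thr[best], tpr[best], 1.0 - fpr[best]            # threshold, sensitivity, specificity

print(utility_optimal_threshold(y, score, prevalence=0.3, benefit_to_cost=2.0))
```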

  16. Cost utility analysis of endoscopic biliary stent in unresectable hilar cholangiocarcinoma: decision analytic modeling approach.

    PubMed

    Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln

    2014-01-01

    Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about its cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in patients with unresectable complex (Bismuth type II-IV) HCA. A decision analytic (Markov) model was used to evaluate the cost and quality-adjusted life years (QALY) of endoscopic biliary drainage in unresectable HCA. Costs of treatment and utilities of each Markov state were derived from hospital charges and from unresectable HCA patients at a tertiary care hospital in Thailand, respectively. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent, with an incremental cost per additional QALY gained of 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, or 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared with plastic stent drainage in unresectable complex HCA.
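
    The decision rule reported above reduces to comparing the incremental cost-effectiveness ratio with a willingness-to-pay threshold; a minimal sketch using the figures quoted in the abstract (and omitting the underlying Markov model) follows.

```python
# A minimal sketch of the ICER-versus-willingness-to-pay comparison. The reported
# ICER and the two WTP thresholds come from the abstract; the Markov model itself
# is not reproduced here.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio (cost per additional QALY)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(icer_value, willingness_to_pay):
    return icer_value <= willingness_to_pay

reported_icer = 192_650                 # baht per QALY gained (metal vs plastic stent)
for wtp in (158_000, 474_000):          # 1x and 3x GDP per capita thresholds
    print(wtp, cost_effective(reported_icer, wtp))
```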

  17. Acceptable regret in medical decision making.

    PubMed

    Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M

    1999-09-01

    When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.

  18. SHARP User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.

    SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It is comprised of several components including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user to either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user should first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user to troubleshoot issues.

  19. Microstructural and hardness investigations on a dissimilar metal weld between low alloy steel and Alloy 82 weld metal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z.R., E-mail: raymix@aliyun.com

    An investigation of the microstructure and hardness at the fusion boundary (FB) region of a dissimilar metal weld (DMW) between low alloy steel (LAS) A508-III and Alloy 82 weld metal (WM) was carried out. The results indicated that there were two kinds of FBs, martensite FB and sharp FB, with obviously different microstructures, alternately distributed in the same FB. The martensite FB region had a gradual change of elemental concentration across the FB, columnar WM grains with high length/width ratios, a thick martensite layer and a wide heat affected zone (HAZ) with large prior austenite grains. By comparison, the sharp FB region had a relatively sharp change of elemental concentration across the FB, WM grains with low length/width ratios and a narrow HAZ with smaller prior austenite grains. The martensite possessed a K-S orientation relationship with WM grains, while no orientation relationship was found between the HAZ grains and WM grains at the sharp FB. Compared with the sharp FB there were many more Σ3 boundaries in the HAZ beside the martensite FB. The hardness maximum of the martensite FB was much higher than that of the sharp FB, which was attributed to the martensite layer at the martensite FB. - Highlights: •Martensite and sharp FBs with different microstructures were found in the same FB. •There were high length/width-ratio WM grains and a wide HAZ beside martensite FB. •There were low length/width-ratio WM grains and a narrow HAZ beside sharp FB. •Compared with sharp FB, there were much more Σ3 boundaries in HAZ of martensite FB. •Hardness maximum of martensite FB was much higher than that of sharp FB.

  20. Long Term Upper Ocean Study (LOTUS) at 34 deg N, 70 deg W: Meteorological Sensors, Data, and Heat Fluxes for May-October 1982 (LOTUS-3 and LOTUS-4).

    DTIC Science & Technology

    1983-09-01

    6061 aluminum alloy. The watertight hull is divided into three chambers and a central instrument well. The entire hull, except for the instrument...see calibration curve in Fig. 7). The aluminum cups have a turning radius of 4.4 cm and a threshold less than 0.7 m s-1. The cup assembly has a...of sheet aluminum and has a threshold less than 0.7 m s-1. The vane utilizes a 10 ohm conductive-plastic potentiometer mounted in the lower part of the

  1. Reducing Threshold of Multi Quantum Wells InGaN Laser Diode by Using InGaN/GaN Waveguide

    NASA Astrophysics Data System (ADS)

    Abdullah, Rafid A.; Ibrahim, Kamarulazizi

    2010-07-01

    ISE TCAD (Integrated System Engineering Technology Computer Aided Design) simulation software has been utilized to study the effect of using an InGaN/GaN waveguide instead of a conventional GaN waveguide for a multi-quantum-well violet InGaN laser diode (LD). Simulation results indicate that the threshold of the LD is reduced by using the InGaN/GaN waveguide, which increases the optical confinement factor and thereby enhances carrier confinement in the active region of the LD.

  2. 1-99 W. Hubbard St, May 2018, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 2,100 cpm to 4,200 cpm. No count rates were found at any time that exceeded the instrument-specific threshold limits of 7,366 and 6,415 cpm.

  3. Notes on breeding sharp-shinned hawks and cooper’s hawks in Barnwell County, South Carolina

    Treesearch

    Mark Vukovich; John C. Kilgo

    2009-01-01

    Breeding records of Accipiter striatus (Sharp-shinned Hawks) in the southeastern US are scattered and isolated. We documented a Sharp-shinned Hawk and Accipiter cooperii (Cooper’s Hawk) nest while conducting a telemetry study on Melanerpes erythrocephalus (Red-headed Woodpeckers) in Barnwell County, SC in 2006 and 2007. We report the first known nest of a Sharp-shinned...

  4. S-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Simon, H.

    1998-01-01

    Computational science problems with adaptive meshes involve dynamic load balancing when implemented on parallel machines. This dynamic load balancing requires fast partitioning of computational meshes at run time. The authors present in this report a fast parallel dynamic partitioner, called S-HARP. The underlying principles of S-HARP are the fast feature of inertial partitioning and the quality feature of spectral partitioning. S-HARP partitions a graph from scratch, requiring no partition information from previous iterations. Two types of parallelism have been exploited in S-HARP, fine grain loop level parallelism and coarse grain recursive parallelism. The parallel partitioner has been implemented in Message Passing Interface on Cray T3E and IBM SP2 for portability. Experimental results indicate that S-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.2 seconds on a 64 processor Cray T3E. S-HARP is much more scalable than other dynamic partitioners, giving over 15 fold speedup on 64 processors while ParaMeTiS1.0 gives a few fold speedup. Experimental results demonstrate that S-HARP is three to 10 times faster than the dynamic partitioners ParaMeTiS and Jostle on six computational meshes of size over 100,000 vertices.

  5. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    PubMed Central

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only meet the requirement of real-time processing, but also achieve satisfactory noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290
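
    A software-only sketch of the DWT-threshold-IDWT pipeline is given below using PyWavelets with the standard universal soft threshold; it is not the paper's adaptive scheme or its FPGA implementation, and the noisy signal is synthetic.

```python
# A minimal DWT -> soft threshold -> IDWT de-noising sketch with PyWavelets.
# The universal threshold and the synthetic test signal are illustrative choices.
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 1.5 * t))  # has sharp edges
noisy = clean + 0.25 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest scale
uthresh = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, uthresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")

# Mean squared error before and after de-noising
print(np.mean((noisy - clean) ** 2), np.mean((denoised[: clean.size] - clean) ** 2))
```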

  6. Proconvulsant Actions of Intrahippocampal Botulinum Neurotoxin B in the Rat

    PubMed Central

    Bröer, Sonja; Zolkowska, Dorota; Gernert, Manuela; Rogawski, Michael A.

    2013-01-01

    Botulinum neurotoxins (BoNTs) may affect the excitability of brain circuits by inhibiting neurotransmitter release at central synapses. There is evidence that local delivery of BoNT serotypes A and E, which target SNAP-25, a component of the release machinery specific to excitatory synapses, can inhibit seizure generation. BoNT serotype B (BoNT/B) targets VAMP2, which is expressed in both excitatory and inhibitory terminals. Here we assessed the effects of unilateral intrahippocampal infusion of BoNT/B in the rat on intravenous pentylenetetrazol (PTZ) seizure thresholds, and on the expression of spontaneous behavioral and electrographic seizures. Infusion of BoNT/B (500 and 1000 unit) by convection-enhanced delivery caused a reduction in myoclonic twitch and clonic seizure thresholds in response to intravenous PTZ beginning about 6 days after the infusion. Handling-evoked and spontaneous convulsive seizures were observed in many BoNT/B-treated animals but not in vehicle-treated controls. Spontaneous electrographic seizure discharges were recorded in the dentate gyrus of animals that received local BoNT/B infusion. In addition, there was an increased frequency of interictal epileptiform spikes and sharp waves at the same recording site. BoNT/B treated animals also exhibited tactile hyperresponsivity in comparison with vehicle-treated controls. This is the first demonstration that BoNT/B causes a delayed proconvulsant action when infused into the hippocampus. Local infusion of BoNT/B could be useful as a focal epilepsy model. PMID:23906638

  7. The effect of symmetrical and asymmetrical hearing impairment on music quality perception.

    PubMed

    Cai, Yuexin; Zhao, Fei; Chen, Yuebo; Liang, Maojin; Chen, Ling; Yang, Haidi; Xiong, Hao; Zhang, Xueyuan; Zheng, Yiqing

    2016-09-01

    The purpose of this study was to investigate the effect of symmetrical, asymmetrical and unilateral hearing impairment on music quality perception. Six validated music pieces in the categories of classical music, folk music and pop music were used to assess music quality in terms of its 'pleasantness', 'naturalness', 'fullness', 'roughness' and 'sharpness'. 58 participants with sensorineural hearing loss [20 with unilateral hearing loss (UHL), 20 with bilateral symmetrical hearing loss (BSHL) and 18 with bilateral asymmetrical hearing loss (BAHL)] and 29 normal hearing (NH) subjects participated in the present study. Hearing impaired (HI) participants had greater difficulty in overall music quality perception than NH participants. Participants with BSHL rated music pleasantness and naturalness to be higher than participants with BAHL. Moreover, the hearing thresholds of the better ears from BSHL and BAHL participants as well as the hearing thresholds of the worse ears from BSHL participants were negatively correlated to the pleasantness and naturalness perception. HI participants rated the familiar music pieces higher than unfamiliar music pieces in the three music categories. Music quality perception in participants with hearing impairment appeared to be affected by symmetry of hearing loss, degree of hearing loss and music familiarity when they were assessed using the music quality rating test (MQRT). This indicates that binaural symmetrical hearing is important to achieve a high level of music quality perception in HI listeners. This emphasizes the importance of provision of bilateral hearing assistive devices for people with asymmetrical hearing impairment.

  8. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    PubMed

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-10-16

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only meet the requirement of real-time processing, but also achieve satisfactory noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz.

  9. Role of Slip Mode on Stress Corrosion Cracking Behavior

    NASA Astrophysics Data System (ADS)

    Vasudevan, A. K.; Sadananda, K.

    2011-02-01

    In this article, we examine the effect of aging treatment and the role of planarity of slip on stress corrosion cracking (SCC) behavior in precipitation-hardened alloys. With aging, the slip mode can change from a planar slip in the underage (UA) to a wavy slip in the overage (OA) region. This, in turn, results in sharpening the crack tip in the UA compared to blunting in the OA condition. We propose that the planar slip enhances the stress concentration effects by making the alloys more susceptible to SCC. In addition, the planarity of slip enhances plateau velocities, reduces thresholds for SCC, and reduces component life. We show that the effect of slip planarity is somewhat similar to the effects of mechanically induced stress concentrations such as due to the presence of sharp notches. Aging treatment also causes variations in the matrix and grain boundary (GB) microstructures, along with typical mechanical and SCC properties. These properties include yield stress, work hardening rate, fracture toughness K_IC, threshold K_Iscc, and steady-state plateau velocity (da/dt). The SCC data for a wide range of ductile alloys including 7050, 7075, 5083, 5456 Al, MAR M steels, and solid solution copper-base alloys are collected from the literature. Our assertion is that slip mode and the resulting stress concentration are important factors in SCC behavior. This is further supported by similar observations in many other systems including some steels, Al alloys, and Cu alloys.

  10. Dipole response of neutron-rich Sn isotopes

    NASA Astrophysics Data System (ADS)

    Klimkiewicz, A.; Adrich, P.; Boretzky, K.; Fallot, M.; Aumann, T.; Cortina-Gil, D.; Datta Pramanik, U.; Elze, Th. W.; Emling, H.; Geissel, H.; Hellstroem, M.; Jones, K. L.; Kratz, J. V.; Kulessa, R.; Leifels, Y.; Nociforo, C.; Palit, R.; Simon, H.; Surowka, G.; Sümmerer, K.; Typel, S.; Walus, W.

    2007-05-01

    The neutron-rich isotopes 129-133Sn were studied in a Coulomb excitation experiment at about 500 AMeV using the FRS-LAND setup at GSI. From the exclusive measurement of all projectile-like particles following the excitation and decay of the projectile in a high-Z target, the energy differential cross section can be extracted. At these beam energies dipole transitions are dominating, and within the semi-classical approach the Coulomb excitation cross sections can be transformed into photoabsorption cross sections. In contrast to stable Sn nuclei, a substantial fraction of dipole strength is observed at energies below the giant dipole resonance (GDR). For 130Sn and 132Sn this strength is located in a peak-like structure around 10 MeV excitation energy and exhibits a few percent of the Thomas-Reiche-Kuhn (TRK) sum-rule strength. Several calculations predict the appearance of dipole strength at low excitation energies in neutron-rich nuclei. This low-lying strength is often referred to as pygmy dipole resonance (PDR) and, in a macroscopic picture, is discussed in terms of a collective oscillation of excess neutrons versus the core nucleons. Moreover, a sharp rise is observed at the neutron separation threshold around 5 MeV for the odd isotopes. A possible contribution of 'threshold strength', which can be described within the direct-breakup model, is discussed. The results for the neutron-rich Sn isotopes are confronted with results on stable nuclei investigated in experiments using real photons.

  11. Fabrication of a Highly Sensitive Single Aligned TiO2 and Gold Nanoparticle Embedded TiO2 Nano-Fiber Gas Sensor.

    PubMed

    Nikfarjam, Alireza; Hosseini, Seyedsina; Salehifar, Nahideh

    2017-05-10

    In this research, single aligned nanofibers of pure TiO2 and gold nanoparticle (GNP)-TiO2 were fabricated for gas sensing applications using a novel electro-spinning procedure equipped with secondary electrostatic fields on highly sharp triangular and rectangular electrodes. The sol used for spinning the nanofibers consisted of titanium tetraisopropoxide (C12H28O4Ti), acetic acid (CH3COOH), ethanol (C2H5OH), polyvinylpyrrolidone (PVP), and gold nanoparticle solution. FE-SEM, TEM, and XRD were used to characterize the single nanofiber. For the triangular electrodes, the electrostatic voltage for aligning a single nanofiber between the electrodes depends on the tip angle of the electrode, and was around 1.4-2.1, 2-2.9, and 3.2-4.1 kV for 30°, 45°, and 60°, respectively. However, by changing the shape of the electrodes to the rectangular samples and by increasing the distance between electrodes from 100 to 200 μm, the applied electro-spinning voltage decreased. The response of the pure TiO2 single-nanofiber sensor was measured for 30-200 ppb carbon monoxide gas. The triangular sample revealed a better response and a lower threshold than the rectangular sample. Adding appropriate amounts of GNP decreased the operating temperature and increased the responses. The CO concentration thresholds for the pure TiO2 and GNP-TiO2 triangular samples were about 5 ppb and 700 ppt, respectively.

  12. LinkImputeR: user-guided genotype calling and imputation for non-model organisms.

    PubMed

    Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean

    2017-07-10

    Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from NGS technologies. It enables the user to quickly and easily examine the effects of varying thresholds and filters on the number and quality of the resulting genotype calls. In this manner, users can decide on thresholds that are most suitable for their purposes. We show that LinkImputeR can significantly augment the value and utility of NGS data sets, especially in non-model organisms with poor genomic resources.
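
    The read-depth trade-off described above can be illustrated with a generic filter (not LinkImputeR's actual calling model): raising the minimum depth improves per-genotype confidence but sets more genotypes to missing.

```python
# A minimal sketch of the depth-threshold trade-off: higher minimum read depth
# means more genotypes set to missing. Depth data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
depth = rng.poisson(6, size=(100, 1000))   # read depth, 100 samples x 1000 markers (synthetic)

for min_depth in (2, 4, 8, 16):
    missing = np.mean(depth < min_depth)
    print(f"min depth {min_depth:2d}: {missing:.1%} of genotypes set to missing")
```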

  13. Induction of the SHARP-2 mRNA level by insulin is mediated by multiple signaling pathways.

    PubMed

    Kanai, Yukiko; Asano, Kosuke; Komatsu, Yoshiko; Takagi, Katsuhiro; Ono, Moe; Tanaka, Takashi; Tomita, Koji; Haneishi, Ayumi; Tsukada, Akiko; Yamada, Kazuya

    2017-02-01

    The rat enhancer of split- and hairy-related protein-2 (SHARP-2) is an insulin-inducible transcription factor which represses transcription of the rat phosphoenolpyruvate carboxykinase gene. In this study, the regulatory mechanism of the SHARP-2 mRNA level by insulin was analyzed. Insulin rapidly induced the level of SHARP-2 mRNA. This induction was blocked by inhibitors of phosphoinositide 3-kinase (PI 3-K), protein kinase C (PKC), and mammalian target of rapamycin (mTOR), as well as by actinomycin D and cycloheximide. Whereas adenoviral expression of a dominant negative form of atypical PKC lambda (aPKCλ) blocked the insulin induction of the SHARP-2 mRNA level, insulin rapidly activated mTOR. Insulin did not enhance transcriptional activity from a 3.7 kb upstream region of the rat SHARP-2 gene. Thus, we conclude that insulin induces the expression of the rat SHARP-2 gene at the transcription level via both the PI 3-K/aPKCλ and PI 3-K/mTOR pathways and that protein synthesis is required for this induction.

  14. Audit of sharp weapon deaths in metropolis of Karachi--an autopsy based study.

    PubMed

    Mirza, Farhat Hussain; Hasan, Qudsia; Memon, Akhtar Amin; Adil, Syeda Ezz-e-Rukhshan

    2010-01-01

    Sharp weapons are one of the most violent and abhorrent means of death. This study assesses the frequency of sharp weapon deaths in Karachi. This was a cross-sectional study of deaths by sharp weapons autopsied in Karachi during March 2008-February 2009. The study finds that the frequency of sharp weapon deaths in Karachi is similar to that reported in other studies conducted in different regions of Pakistan, yet it remains very high given that the population of Karachi far exceeds that of any other metropolis in Pakistan. Out of 2090 medico-legal deaths in Karachi during the study period, 91 deaths were due to sharp weapons, including 73 (80.2%) males and 18 (19.8%) females. All of the deaths were homicides; none were suicides. Deaths were most frequent in the 20-39 years age group (59.3%). Sharp weapon deaths continue to account for a considerable number of deaths in Karachi. Such violence reflects the intolerant and frustrated nature of the citizens.

  15. Diagnostic performance of BMI percentiles to identify adolescents with metabolic syndrome.

    PubMed

    Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C

    2014-02-01

    To compare the diagnostic performance of the Centers for Disease Control and Prevention (CDC) and FITNESSGRAM (FGram) BMI standards for quantifying metabolic risk in youth. Adolescents in the NHANES (n = 3385) were measured for anthropometric variables and metabolic risk factors. BMI percentiles were calculated, and youth were categorized by weight status (using CDC and FGram thresholds). Participants were also categorized by presence or absence of metabolic syndrome. The CDC and FGram standards were compared by prevalence of metabolic abnormalities, various diagnostic criteria, and odds of metabolic syndrome. Receiver operating characteristic curves were also created to identify optimal BMI percentiles to detect metabolic syndrome. The prevalence of metabolic syndrome in obese youth was 19% to 35%, compared with <2% in the normal-weight groups. The odds of metabolic syndrome for obese boys and girls were 46 to 67 and 19 to 22 times greater, respectively, than for normal-weight youth. The receiver operating characteristic analyses identified optimal thresholds similar to the CDC standards for boys and the FGram standards for girls. Overall, BMI thresholds were more strongly associated with metabolic syndrome in boys than in girls. Both the CDC and FGram standards are predictive of metabolic syndrome. The diagnostic utility of the CDC thresholds outperformed the FGram values for boys, whereas FGram standards were slightly better thresholds for girls. The use of a common set of thresholds for school and clinical applications would provide advantages for public health and clinical research and practice.

  16. Damage in fatigue: A new outlook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, K.J.

    1995-12-01

    This paper concentrates on the difficulties produced by linear elastic fracture mechanics (LEFM) and how recent research has removed many of these difficulties thereby permitting the design engineer to have a much improved basis for solving complex problems of engineering plant subjected to cyclic loading. This paper intends to show that: (1) In polycrystalline materials the period of initiation is in reality, zero and fatigue lifetime is entirely composed of crack propagation. (2) The fatigue limit of a metal, component or structure is related to whether or not a crack can propagate. (3) Elastic Fracture Mechanics is only a beginning in the science of, and application of, fracture mechanics. (4) Fatigue Damage is current crack length and the rate of damage accumulation is the rate of crack growth. (5) Only two basic forms of crack extension occur when any combination of the three loading mode mechanisms (Modes 1, 2, and 3) are applied, namely Stage 1 (shear crack growth) and Stage 2 (tensile crack growth). (6) Three fundamentally different fatigue crack growth thresholds exist. (7) The fatigue resistance of a metal is predominantly concerned with a crack changing its crack-growth direction, ie from Stage 1 to Stage 2, or vice versa. (8) Notches fall into two clearly defined categories; sharp notches where failure is related to the mechanical threshold condition, and shallow notches where failure is related to the material threshold condition. (9) Complex three-dimensional cyclic stress systems should be evaluated with respect to the possible Stage 1 and Stage 2 crack growth planes. (10) Barriers to fatigue crack growth can have origins in the microstructure (eg: grain boundaries) and in the mechanical state (eg: other crack systems). (11) The removal of a fatigue limit by a corrosive environment can be evaluated by the interface conditions between the Elastic-Plastic Fracture Mechanics (EPFM) and Microstructural Fracture Mechanics (MFM) regimes.

  17. Prestin Regulation and Function in Residual Outer Hair Cells after Noise-Induced Hearing Loss

    PubMed Central

    Xia, Anping; Song, Yohan; Wang, Rosalie; Gao, Simon S.; Clifton, Will; Raphael, Patrick; Chao, Sung-il; Pereira, Fred A.; Groves, Andrew K.; Oghalai, John S.

    2013-01-01

    The outer hair cell (OHC) motor protein prestin is necessary for electromotility, which drives cochlear amplification and produces exquisitely sharp frequency tuning. TectaC1509G transgenic mice have hearing loss, and surprisingly have increased OHC prestin levels. We hypothesized, therefore, that prestin up-regulation may represent a generalized response to compensate for a state of hearing loss. In the present study, we sought to determine the effects of noise-induced hearing loss on prestin expression. After noise exposure, we performed cytocochleograms and observed OHC loss only in the basal region of the cochlea. Next, we patch clamped OHCs from the apical turn (9–12 kHz region), where no OHCs were lost, in noise-exposed and age-matched control mice. The non-linear capacitance was significantly higher in noise-exposed mice, consistent with higher functional prestin levels. We then measured prestin protein and mRNA levels in whole-cochlea specimens. Both Western blot and qPCR studies demonstrated increased prestin expression after noise exposure. Finally, we examined the effect of the prestin increase in vivo following noise damage. Immediately after noise exposure, ABR and DPOAE thresholds were elevated by 30–40 dB. While most of the temporary threshold shifts recovered within 3 days, there were additional improvements over the next month. However, DPOAE magnitudes, basilar membrane vibration, and CAP tuning curve measurements from the 9–12 kHz cochlear region demonstrated no differences between noise-exposed mice and control mice. Taken together, these data indicate that prestin is up-regulated by 32–58% in residual OHCs after noise exposure and that the prestin is functional. These findings are consistent with the notion that prestin increases in an attempt to partially compensate for reduced force production because of missing OHCs. However, in regions where there is no OHC loss, the cochlea is able to compensate for the excess prestin in order to maintain stable auditory thresholds and frequency discrimination. PMID:24376553

  18. The vertical profile of radar reflectivity of convective cells: A strong indicator of storm intensity and lightning probability?

    NASA Technical Reports Server (NTRS)

    Zipser, Edward J.; Lutz, Kurt R.

    1994-01-01

    Reflectivity data from Doppler radars are used to construct vertical profiles of radar reflectivity (VPRR) of convective cells in mesoscale convective systems (MCSs) in three different environmental regimes. The National Center for Atmospheric Research CP-3 and CP-4 radars are used to calculate median VPRR for MCSs in the Oklahoma-Kansas Preliminary Regional Experiment for STORM-Central in 1985. The National Oceanic and Atmospheric Administration-Tropical Ocean Global Atmosphere radar in Darwin, Australia, is used to calculate VPRR for MCSs observed both in oceanic, monsoon regimes and in continental, break period regimes during the wet seasons of 1987/88 and 1988/89. The midlatitude and tropical continental VPRRs both exhibit maximum reflectivity somewhat above the surface and have a gradual decrease in reflectivity with height above the freezing level. In sharp contrast, the tropical oceanic profile has a maximum reflectivity at the lowest level and a very rapid decrease in reflectivity with height beginning just above the freezing level. The tropical oceanic profile in the Darwin area is almost the same shape as that for two other tropical oceanic regimes, leading to the conclusion that it is characteristic. The absolute values of reflectivity in the 0 to -20 C range are compared with values in the literature thought to represent a threshold for rapid storm electrification leading to lightning, about 40 dBZ at -10 C. The large negative vertical gradient of reflectivity in this temperature range for oceanic storms is hypothesized to be a direct result of the characteristically weaker vertical velocities observed in MCSs over tropical oceans. It is proposed, as a necessary condition for rapid electrification, that a convective cell must have its updraft speed exceed some threshold value. Based upon field program data, a tentative estimate for the magnitude of this threshold is 6-7 m/s for mean speed and 10-12 m/s for peak speed.

  19. The Impact of Nearly Universal Insurance Coverage on Health Care Utilization: Evidence from Medicare.

    PubMed

    Card, David; Dobkin, Carlos; Maestas, Nicole

    2008-12-01

    The onset of Medicare eligibility at age 65 leads to sharp changes in the health insurance coverage of the U.S. population. These changes lead to increases in the use of medical services, with a pattern of gains across socioeconomic groups that varies by type of service. While routine doctor visits increase more for groups that previously lacked insurance, hospital admissions for relatively expensive procedures like bypass surgery and joint replacement increase more for previously insured groups that are more likely to have supplementary coverage after 65, reflecting the relative generosity of their combined insurance package under Medicare.

  1. Preliminary investigation of Large Format Camera photography utility in soil mapping and related agricultural applications

    NASA Technical Reports Server (NTRS)

    Pelletier, R. E.; Hudnall, W. H.

    1987-01-01

    The use of Space Shuttle Large Format Camera (LFC) color, IR/color, and B&W images in large-scale soil mapping is discussed and illustrated with sample photographs from STS 41-G (October 1984). Consideration is given to the characteristics of the film types used; the photographic scales available; geometric and stereoscopic factors; and image interpretation and classification for soil-type mapping (detecting both sharp and gradual boundaries), soil parent material, topographic and hydrologic assessment, natural-resources inventory, crop-type identification, and stress analysis. It is suggested that LFC photography can play an important role, filling the gap between aerial and satellite remote sensing.

  2. Tip-enhanced near-field optical microscopy

    PubMed Central

    Mauser, Nina; Hartschuh, Achim

    2013-01-01

    Tip-enhanced near-field optical microscopy (TENOM) is a scanning probe technique capable of providing a broad range of spectroscopic information on single objects and structured surfaces at nanometer spatial resolution and with highest detection sensitivity. In this review, we first illustrate the physical principle of TENOM that utilizes the antenna function of a sharp probe to efficiently couple light to excitations on nanometer length scales. We then discuss the antenna-induced enhancement of different optical sample responses including Raman scattering, fluorescence, generation of photocurrent and electroluminescence. Different experimental realizations are presented and several recent examples that demonstrate the capabilities of the technique are reviewed. PMID:24100541

  3. A novel fast optical switch based on two cascaded Terahertz Optical Asymmetric Demultiplexers (TOAD).

    PubMed

    Wang, Bing; Baby, Varghese; Tong, Wilson; Xu, Lei; Friedman, Michelle; Runser, Robert; Glesk, Ivan; Prucnal, Paul

    2002-01-14

    A novel optical switch based on cascading two terahertz optical asymmetric demultiplexers (TOADs) is presented. By utilizing the sharp edge of the asymmetric TOAD switching-window profile, two TOAD switching windows are overlapped to produce a narrower aggregate switching window that is not limited by the pulse propagation time in the SOA of the TOAD. Simulations of the cascaded TOAD switching window show relatively constant window amplitude for different window sizes. Experimental results on cascading two TOADs, each with a switching window of 8 ps but with the SOA on opposite sides of the fiber loop, show a minimum switching window of 2.7 ps.
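
    A minimal numerical sketch of the window-overlap idea follows. The sigmoid window shapes, edge widths, and relative offset are assumptions chosen only to illustrate how overlapping the sharp edges of two asymmetric windows yields an aggregate window narrower than either one; it is not a model of the actual TOAD dynamics.

    ```python
    import numpy as np

    # Illustrative sketch: two asymmetric ~8 ps switching windows, each with one
    # sharp and one slow edge, are overlapped so that only the sharp edges bound
    # the aggregate (product) window.  Shapes and offsets are assumptions.

    t = np.linspace(-10.0, 10.0, 2001)           # time axis in ps

    def sigmoid(x, width):
        return 1.0 / (1.0 + np.exp(-x / width))

    # Window 1: sharp leading edge at t = 0 ps, slow trailing edge near t = 8 ps.
    w1 = sigmoid(t - 0.0, 0.3) * sigmoid(8.0 - t, 3.0)
    # Window 2: slow leading edge near t = -5 ps, sharp trailing edge at t = 3 ps
    # (placing the SOA on the opposite side of the loop mirrors the window shape).
    w2 = sigmoid(t + 5.0, 3.0) * sigmoid(3.0 - t, 0.3)

    aggregate = w1 * w2                          # cascaded switching window
    above_half = t[aggregate > 0.5 * aggregate.max()]
    print(f"aggregate window FWHM ~ {above_half[-1] - above_half[0]:.1f} ps")
    ```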

  4. Global-Local Finite Element Analysis of Bonded Single-Lap Joints

    NASA Technical Reports Server (NTRS)

    Kilic, Bahattin; Madenci, Erdogan; Ambur, Damodar R.

    2004-01-01

    Adhesively bonded lap joints involve dissimilar material junctions and sharp changes in geometry, possibly leading to premature failure. Although the finite element method is well suited to model the bonded lap joints, traditional finite elements are incapable of correctly resolving the stress state at junctions of dissimilar materials because of the unbounded nature of the stresses. In order to facilitate the use of bonded lap joints in future structures, this study presents a finite element technique utilizing a global (special) element coupled with traditional elements. The global element includes the singular behavior at the junction of dissimilar materials with or without traction-free surfaces.

  5. A Pitch Extraction Method with High Frequency Resolution for Singing Evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hideyo; Hoguro, Masahiro; Umezaki, Taizo

    This paper proposes a pitch estimation method suitable for singing evaluation that can be incorporated into karaoke machines. Professional singers and musicians have sharp hearing for music and singing voices; they can recognize whether a singer's pitch is slightly off key or in tune. A pitch estimation method with correspondingly high frequency resolution is therefore needed to evaluate singing. This paper proposes a pitch estimation method with high frequency resolution that utilizes the harmonic characteristics of the autocorrelation function. The proposed method can estimate a fundamental frequency in the range 50–1700 Hz with a resolution finer than 3.6 cents at a light processing load.
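
    The paper's exact formulation is not reproduced here; the sketch below only illustrates the generic idea of autocorrelation-based pitch estimation, with parabolic interpolation of the peak lag to push the frequency resolution below the one-sample limit. The window length, search range, and test signal are assumptions.

    ```python
    import numpy as np

    # Hedged sketch of autocorrelation pitch estimation with sub-sample
    # (parabolic) interpolation of the peak lag; a generic illustration,
    # not the paper's exact method.

    def estimate_pitch(frame, fs, f_min=50.0, f_max=1700.0):
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lag_lo = int(fs / f_max)                    # shortest period searched
        lag_hi = min(int(fs / f_min), len(ac) - 2)  # longest period searched
        lag = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
        # Parabolic interpolation around the autocorrelation peak.
        y0, y1, y2 = ac[lag - 1], ac[lag], ac[lag + 1]
        denom = y0 - 2.0 * y1 + y2
        delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
        return fs / (lag + delta)

    # Example: a 440 Hz tone sampled at 16 kHz in a 50 ms frame.
    fs = 16000
    t = np.arange(0, 0.05, 1.0 / fs)
    print(f"{estimate_pitch(np.sin(2 * np.pi * 440.0 * t), fs):.2f} Hz")
    ```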

  6. A reversible transition in liquid Bi under pressure.

    PubMed

    Emuna, M; Matityahu, S; Yahel, E; Makov, G; Greenberg, Y

    2018-01-21

    The electrical resistance of solid and liquid Bi has been measured at high pressures and temperatures using a novel experimental design for high-sensitivity measurements utilizing a "Paris-Edinburgh" toroid large-volume press. An anomalous sharp decrease in resistivity with increasing temperature at constant pressure was observed in the region beyond melting, which implies a possible novel transition in the melt. The proposed transition was observed across a range of pressures, in both heating and cooling cycles of the sample, demonstrating its reversibility. From the measurements it was possible to determine a "phase line" for this transition on the Bi pressure-temperature phase diagram, terminating at the melting curve.

  7. Imaging, cutting, and collecting instrument and method

    DOEpatents

    Tench, Robert J.; Siekhaus, Wigbert J.; Balooch, Mehdi; Balhorn, Rodney L.; Allen, Michael J.

    1995-01-01

    Instrumentation and techniques to image small objects, such as but not limited to individual human chromosomes, with nanometer resolution; to cut off identified parts of such objects; to move and manipulate the cut-off parts on the substrate on which they are being imaged to predetermined locations on the substrate; and to remove the cut-off parts from the substrate. This is accomplished using an atomic force microscope (AFM) and by modification of the conventional cantilever stylus assembly of an AFM, such that plural cantilevers are used with either sharp tips or knife edges thereon. In addition, the invention can be utilized for measuring the hardness of materials.

  8. Plasmon-mediated Enhancement of Rhodamine 6G Spontaneous Emission on Laser-spalled Nanotextures

    NASA Astrophysics Data System (ADS)

    Kuchmizhak, A. A.; Nepomnyashchii, A. V.; Vitrik, O. B.; Kulchin, Yu. N.

    Biosensing characteristics of laser-spalled nanotextures produced under single-pulse irradiation of a 500-nm-thick Ag film surface were assessed by measuring the spontaneous-emission enhancement of overlying Rhodamine 6G (Rh6G) molecules, utilizing a polarization-resolved confocal microspectroscopy technique. Our preliminary study shows for the first time that a single spalled micro-sized crater covered with sub-100 nm sharp tips provides, under certain excitation conditions, up to 40-fold plasmon-mediated enhancement of the spontaneous emission from a 10-nm-thick Rh6G over-layer, indicating the high potential of these easily fabricated structures for routine biosensing tasks.

  9. Synchrotron radiation calibration of the EUVE variable line-spaced diffraction gratings at the NBS SURF II facility

    NASA Technical Reports Server (NTRS)

    Jelinsky, P.; Jelinsky, S. R.; Miller, A.; Vallerga, J.; Malina, R. F.

    1988-01-01

    The Extreme Ultraviolet Explorer (EUVE) has a spectrometer which utilizes variable line-spaced, plane diffraction gratings in the converging beam of a Wolter-Schwarzschild type II mirror. The gratings, microchannel plate detector, and thin film filters have been calibrated with continuum radiation provided by the NBS SURF II facility. These were calibrated in a continuum beam to find edges or other sharp spectral features in the transmission of the filters, quantum efficiency of the microchannel plate detector, and efficiency of the gratings. The details of the calibration procedure and the results of the calibration are presented.

  10. Gamma-ray lines from neutron stars as probes of fundamental physics

    NASA Technical Reports Server (NTRS)

    Brecher, K.

    1978-01-01

    The detection of gamma-ray lines produced at the surface of neutron stars will serve to test both the strong and gravitational interactions under conditions unavailable in terrestrial laboratories. Observation of a single redshifted gamma-ray line, combined with an estimate of the mass of the star, will serve as a strong constraint on allowable equations of state of matter at supernuclear densities. Detection of two redshifted lines arising from different physical processes at the neutron star surface can provide a test of the strong principle of equivalence. Expected fluxes of nuclear gamma-ray lines from accreting neutron stars were calculated, including threshold, radiative transfer, and redshift effects. The most promising probes of neutron star structure are the deuterium formation line and the positron annihilation line. Detection of sharp redshifted gamma-ray lines from X-ray sources such as Cyg X-1 would argue strongly in favor of a neutron star rather than a black hole identification for the object.
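
    For reference, the standard Schwarzschild surface-redshift relation underlying such measurements (not written out in the abstract) is shown below; the numerical example uses assumed fiducial values of M and R.

    ```latex
    % Standard gravitational redshift from the surface of a non-rotating star
    % (assumed context; the abstract does not quote this formula explicitly):
    \[
      1 + z \;=\; \frac{E_{\mathrm{emit}}}{E_{\mathrm{obs}}}
            \;=\; \left(1 - \frac{2GM}{Rc^{2}}\right)^{-1/2}.
    \]
    % For fiducial values M ~ 1.4 solar masses and R ~ 10 km, z ~ 0.3, so the
    % 0.511 MeV positron annihilation line would appear near 0.39 MeV.
    ```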

  11. Low-Energy Elastic Electron Scattering by Atomic Oxygen

    NASA Technical Reports Server (NTRS)

    Zatsarinny, O.; Bartschat, K.; Tayal, S. S.

    2006-01-01

    The B-spline R-matrix method is employed to investigate low-energy elastic electron scattering by atomic oxygen. Flexible non-orthogonal sets of radial functions are used to construct the target description and to represent the scattering functions. A detailed investigation of the dependence of the predicted partial and total cross sections on the scattering model and the accuracy of the target description is presented. The predicted angle-integrated elastic cross sections are in good agreement with experiment, whereas significant discrepancies are found in the angle-differential elastic cross sections near the forward direction. The near-threshold results are found to depend strongly on the treatment of inner-core short-range correlation effects in the target description, as well as on a proper account of the target polarizability. A sharp increase in the elastic cross sections below 1 eV found in some earlier calculations is judged to be an artifact of an unbalanced description of correlation in the N-electron target structure and the (N+1)-electron collision problems.

  12. Morphological filtering and multiresolution fusion for mammographic microcalcification detection

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Chen, Chang W.; Parker, Kevin J.

    1997-04-01

    Mammographic images are often of relatively low contrast and poor sharpness, with non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form, called the local operating transformation, for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas that are considered non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched-filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through the wavelet transform to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region-growing techniques are used at each resolution level, and the parent-child relations between resolution levels are used to make the final detection decision. FROC performance is computed from tests on the Nijmegen database.
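
    A minimal sketch of the background-suppression (prewhitening) step via gray-scale morphology, followed by a simple adaptive threshold, is given below; the white top-hat formulation, structuring-element size, threshold rule, and library calls are assumptions for illustration, not the authors' exact pipeline (which additionally performs multiresolution fusion and region growing).

    ```python
    import numpy as np
    from scipy import ndimage

    # Hedged sketch: suppress slowly varying background ("prewhitening") with a
    # gray-scale morphological opening, then apply a simple adaptive threshold.
    # Structuring-element size and threshold rule are assumptions.

    def prewhiten(image, background_size=15):
        """White top-hat: image minus its gray-scale opening."""
        background = ndimage.grey_opening(image, size=(background_size,) * 2)
        return image - background

    def detect_candidates(image, k=3.0):
        """Mark pixels exceeding mean + k * std of the prewhitened image."""
        residual = prewhiten(image)
        return residual > residual.mean() + k * residual.std()

    # Example on a synthetic image: a smooth ramp background plus one bright spot.
    yy, xx = np.mgrid[0:128, 0:128]
    img = 0.5 * xx.astype(float)           # non-stationary background
    img[64, 64] += 50.0                    # small bright "microcalcification"
    print(detect_candidates(img)[64, 64])  # True
    ```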

  13. Softened Mechanical Properties of Graphene Induced by Electric Field.

    PubMed

    Huang, Peng; Guo, Dan; Xie, Guoxin; Li, Jian

    2017-10-11

    Understanding the mechanical properties of graphene under applied physical fields is highly relevant to the reliability and lifetime of graphene-based nanodevices. In this work, we demonstrate, on the basis of conductive AFM nanoindentation measurements, that the application of an electric field can dramatically soften the mechanical properties of graphene. It has been found that the Young's modulus and fracture strength of graphene nanosheets suspended over holes initially stay almost constant and then exhibit a sharp drop when the normalized electric field strength increases to 0.18 ± 0.03 V/nm. The threshold voltage of graphene nanosheets before the onset of fracture under a fixed applied load increases with thickness. Supported graphene nanosheets can sustain a larger electric field under the same applied load than suspended ones. Excessive local Joule heating caused by the high electric current under the applied load is responsible for the electromechanical failure of graphene. These findings provide a useful guideline for the electromechanical applications of graphene-based nanodevices.

  14. Persistence versus extinction for a class of discrete-time structured population models.

    PubMed

    Jin, Wen; Smith, Hal L; Thieme, Horst R

    2016-03-01

    We provide sharp conditions distinguishing persistence and extinction for a class of discrete-time dynamical systems on the positive cone of an ordered Banach space generated by a map which is the sum of a positive linear contraction A and a nonlinear perturbation G that is compact and differentiable at zero in the direction of the cone. Such maps arise as year-to-year projections of population age, stage, or size-structure distributions in population biology where typically A has to do with survival and individual development and G captures the effects of reproduction. The threshold distinguishing persistence and extinction is the principal eigenvalue of (I − A)⁻¹G'(0) provided by the Krein-Rutman Theorem, and persistence is described in terms of associated eigenfunctionals. Our results significantly extend earlier persistence results of the last two authors which required more restrictive conditions on G. They are illustrated by application of the results to a plant model with a seed bank.
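
    As a concrete illustration of this threshold, the sketch below computes the spectral radius of (I − A)⁻¹G'(0) for a hypothetical two-stage (juvenile/adult) model; the survival matrix A and linearized fecundity matrix G'(0) are invented for illustration only.

    ```python
    import numpy as np

    # Hedged sketch: the persistence/extinction threshold is the spectral radius
    # (principal eigenvalue) of (I - A)^{-1} G'(0).  The matrices below describe
    # a made-up two-stage (juvenile/adult) population and are purely illustrative.

    A = np.array([[0.2, 0.0],     # juvenile survival without maturation
                  [0.3, 0.8]])    # maturation into, and survival of, adults
    Gp0 = np.array([[0.0, 0.6],   # per-capita reproduction by adults (linearized at 0)
                    [0.0, 0.0]])

    R = np.linalg.inv(np.eye(2) - A) @ Gp0
    r = max(abs(np.linalg.eigvals(R)))
    print(f"spectral radius = {r:.3f} -> {'persistence' if r > 1 else 'extinction'}")
    ```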

  15. Statistical mechanics of complex economies

    NASA Astrophysics Data System (ADS)

    Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo

    2017-04-01

    In the pursuit of ever-increasing efficiency and growth, our economies have evolved to remarkable degrees of complexity, with nested production processes feeding each other in order to create products of greater sophistication from less sophisticated ones, down to raw materials. The engine of such an expansion has been competitive markets that, according to general equilibrium theory (GET), achieve efficient allocations under specific conditions. We study large random economies within the GET framework, as templates of complex economies, and we find that a non-trivial phase transition occurs: the economy freezes in a state where all production processes collapse when either the number of primary goods or the number of available technologies falls below a critical threshold. As in other examples of phase transitions in large random systems, this is an unintended consequence of the growth in complexity. Our findings suggest that the Industrial Revolution can be regarded as a sharp transition between different phases, but also imply that well-developed economies can collapse if too many intermediate goods are introduced.

  16. Text extraction via an edge-bounded averaging and a parametric character model

    NASA Astrophysics Data System (ADS)

    Fan, Jian

    2003-01-01

    We present a deterministic text extraction algorithm that relies on three basic assumptions: color/luminance uniformity of the interior region, closed boundaries of sharp edges and the consistency of local contrast. The algorithm is basically independent of the character alphabet, text layout, font size and orientation. The heart of this algorithm is an edge-bounded averaging for the classification of smooth regions that enhances robustness against noise without sacrificing boundary accuracy. We have also developed a verification process to clean up the residue of incoherent segmentation. Our framework provides a symmetric treatment for both regular and inverse text. We have proposed three heuristics for identifying the type of text from a cluster consisting of two types of pixel aggregates. Finally, we have demonstrated the advantages of the proposed algorithm over adaptive thresholding and block-based clustering methods in terms of boundary accuracy, segmentation coherency, and capability to identify inverse text and separate characters from background patches.
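
    As a rough illustration of the edge-bounded averaging idea, the sketch below averages each pixel only over non-edge neighbors, so that smoothing does not cross sharp character boundaries; the Sobel edge detector, window size, and percentile threshold are assumptions and do not reproduce the authors' algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    # Rough illustration of edge-bounded averaging: smooth each pixel using only
    # neighbors that are not edge pixels, so averaging never crosses a sharp
    # boundary.  Edge detector, window size, and threshold are assumptions.

    def edge_bounded_average(gray, window=5, edge_percentile=90):
        gx = ndimage.sobel(gray, axis=1)
        gy = ndimage.sobel(gray, axis=0)
        grad = np.hypot(gx, gy)
        non_edge = (grad < np.percentile(grad, edge_percentile)).astype(float)
        # Windowed mean over non-edge pixels only (uniform filters give sums/counts).
        weighted = ndimage.uniform_filter(gray * non_edge, size=window)
        counts = ndimage.uniform_filter(non_edge, size=window)
        return np.where(counts > 0, weighted / np.maximum(counts, 1e-9), gray)
    ```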

  17. Modification of Fe-B based metallic glasses using swift heavy ions

    NASA Astrophysics Data System (ADS)

    Rodriguez, M. D.; Trautmann, C.; Toulemonde, M.; Afra, B.; Bierschenk, T.; Giulian, R.; Kirby, N.; Kluth, P.

    2012-10-01

    We report on small-angle x-ray scattering (SAXS) measurements of amorphous Fe80B20, Fe85B15, Fe81B13.5Si3.5C2, and Fe40Ni40B20 metallic alloys irradiated with 11.1 MeV/u 132Xe, 152Sm, 197Au, and 8.2 MeV/u 238U ions. SAXS experiments are nondestructive and give evidence for ion track formation, including quantitative information about the track radius. The measurements also indicate a cylindrical track structure with a sharp transition to the undamaged surrounding matrix material. Results are compared with calculations using an inelastic thermal spike model to deduce the critical energy loss for the track formation threshold. The damage recovery of ion tracks produced in Fe80B20 by 11.1 MeV/u 197Au ions was studied by means of isochronal annealing, yielding an activation energy of 0.4 ± 0.1 eV.

  18. Epitaxial VO2 thin-film-based radio-frequency switches with electrical activation

    NASA Astrophysics Data System (ADS)

    Lee, Jaeseong; Lee, Daesu; Cho, Sang June; Seo, Jung-Hun; Liu, Dong; Eom, Chang-Beom; Ma, Zhenqiang

    2017-09-01

    Vanadium dioxide (VO2) is a correlated material exhibiting a sharp insulator-to-metal phase transition (IMT) caused by temperature change and/or bias voltage. We report on the demonstration of electrically triggered radio-frequency (RF) switches based on epitaxial VO2 thin films. The highly epitaxial VO2 film and SnO2 template layer were grown on a (001) TiO2 substrate by pulsed laser deposition (PLD). A resistance change of four orders of magnitude was achieved in the VO2 thin films with a relatively low threshold voltage, as low as 13 V, for the IMT. The VO2 RF switches also showed good high-frequency performance, with insertion losses of -3 dB in the on-state and return losses of -4.3 dB in the off-state over 27 GHz. Furthermore, an intrinsic cutoff frequency of 17.4 THz was estimated for the RF switches. A study of the electrical IMT dynamics revealed a phase transition time of 840 ns.
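
    For context, the intrinsic cutoff frequency of such a switch is commonly estimated from the on-state resistance and off-state capacitance using the standard figure of merit below; this relation and the example values are assumptions for illustration and are not quoted in the abstract.

    ```latex
    % Standard RF-switch figure of merit (assumed here, not stated in the abstract):
    \[
      f_{\mathrm{co}} \;=\; \frac{1}{2\pi\, R_{\mathrm{on}} C_{\mathrm{off}}}.
    \]
    % Purely illustrative numbers: R_on = 1 ohm and C_off = 9 fF would give
    % f_co ~ 17.7 THz, the same order as the 17.4 THz estimate quoted above.
    ```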

  19. Homeostatic enhancement of sensory transduction

    PubMed Central

    Milewski, Andrew R.; Ó Maoiléidigh, Dáibhid; Salvi, Joshua D.; Hudspeth, A. J.

    2017-01-01

    Our sense of hearing boasts exquisite sensitivity, precise frequency discrimination, and a broad dynamic range. Experiments and modeling imply, however, that the auditory system achieves this performance for only a narrow range of parameter values. Small changes in these values could compromise hair cells’ ability to detect stimuli. We propose that, rather than exerting tight control over parameters, the auditory system uses a homeostatic mechanism that increases the robustness of its operation to variation in parameter values. To slowly adjust the response to sinusoidal stimulation, the homeostatic mechanism feeds back a rectified version of the hair bundle’s displacement to its adaptation process. When homeostasis is enforced, the range of parameter values for which the sensitivity, tuning sharpness, and dynamic range exceed specified thresholds can increase by more than an order of magnitude. Signatures in the hair cell’s behavior provide a means to determine through experiment whether such a mechanism operates in the auditory system. Robustness of function through homeostasis may be ensured in any system through mechanisms similar to those that we describe here. PMID:28760949

  20. Substrate Vibrations as Promoters of Chemical Reactivity on Metal Surfaces.

    PubMed

    Campbell, Victoria L; Chen, Nan; Guo, Han; Jackson, Bret; Utz, Arthur L

    2015-12-17

    Studies exploring how vibrational energy (Evib) promotes chemical reactivity most often focus on molecular reagents, leaving the role of substrate atom motion in heterogeneous interfacial chemistry underexplored. This combined theoretical and experimental study of methane dissociation on Ni(111) shows that lattice atom motion modulates the reaction barrier height during each surface atom's vibrational period, which leads to a strong variation in the reaction probability (S0) with surface temperature (Tsurf). State-resolved beam-surface scattering studies at Tsurf = 90 K show a sharp threshold in S0 at translational energy (Etrans) = 42 kJ/mol. When Etrans decreases from 42 kJ/mol to 34 kJ/mol, S0 decreases 1000-fold at Tsurf = 90 K, but only 2-fold at Tsurf = 475 K. Results highlight the mechanism for this effect, provide benchmarks for DFT calculations, and suggest the potential importance of surface atom induced barrier height modulation in heterogeneously catalyzed reactions, particularly on structurally labile nanoscale particles and defect sites.
