Sample records for critical threshold values

  1. Rainfall threshold definition using an entropy decision approach and radar data

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Ridolfi, E.; Russo, F.; Napolitano, F.

    2011-07-01

    Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section. Exceeding these values can produce a critical situation in river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.
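
    The closing performance analysis (correctly issued warnings, false alarms, missed alarms) amounts to a contingency count at the chosen threshold. A minimal sketch follows; the rainfall values, critical-event flags and the 10 mm threshold below are hypothetical placeholders, not data or values from the study.

    ```python
    # Hedged sketch of a threshold-based warning contingency count; all inputs are illustrative.
    def warning_performance(rainfall, critical_event, threshold):
        """Count hits, false alarms and missed alarms for one rainfall duration.

        rainfall       -- rainfall depth per event window
        critical_event -- True if the discharge in the reference cross section became critical
        threshold      -- rainfall threshold above which a warning is issued
        """
        hits = false_alarms = missed = 0
        for depth, critical in zip(rainfall, critical_event):
            warned = depth >= threshold
            if warned and critical:
                hits += 1
            elif warned:
                false_alarms += 1
            elif critical:          # critical discharge occurred but no warning was issued
                missed += 1
        return hits, false_alarms, missed

    # Illustrative data only (not from the Mignone River basin).
    print(warning_performance([12.0, 4.0, 18.5, 7.2], [True, False, True, True], threshold=10.0))
    ```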

  2. An entropy decision approach in flash flood warning: rainfall thresholds definition

    NASA Astrophysics Data System (ADS)

    Montesarchio, V.; Napolitano, F.; Ridolfi, E.

    2009-09-01

    Flash flood events are floods characterised by a very rapid response of the basins to storms, and they often involve loss of life and damage to public and private property. Due to the specific space-time scale of this kind of flood, generally only a short lead time is available for triggering civil protection measures. Threshold values specify the precipitation amount for a given duration that generates a critical discharge in a given cross section. Exceeding these values could produce a critical situation in river sites exposed to alluvial risk, so it is possible to compare the observed or forecasted precipitation directly with critical reference values, without running online real-time forecasting systems. This study is focused on the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept. The study concludes with a system performance analysis, in terms of correctly issued warnings, false alarms and missed alarms.

  3. Spreading dynamics of a SIQRS epidemic model on scale-free networks

    NASA Astrophysics Data System (ADS)

    Li, Tao; Wang, Yuanmei; Guan, Zhi-Hong

    2014-03-01

    In order to investigate the influence of the heterogeneity of the underlying networks and of the quarantine strategy on epidemic spreading, an SIQRS epidemic model on scale-free networks is presented. Using mean-field theory, the spreading dynamics of the virus are analyzed, and the spreading critical threshold and the equilibria are derived. Theoretical results indicate that the critical threshold value depends significantly on the topology of the underlying networks and on the quarantine rate, and that the existence of the equilibria is determined by the threshold value. The stability of the disease-free equilibrium and the permanence of the disease are proved. Numerical simulations confirm the analytical results.
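
    For context, the reason such thresholds depend on network topology is visible in the classical heterogeneous mean-field result for a plain SIS process; this is quoted as background only and is not the SIQRS threshold with quarantine derived in the paper.

    ```latex
    % Classical heterogeneous mean-field epidemic threshold for SIS spreading
    % (no quarantine); background only, not the paper's SIQRS expression.
    \[
      \lambda_c = \frac{\langle k \rangle}{\langle k^{2} \rangle},
      \qquad
      \lambda_c \to 0 \ \text{ as } \ \langle k^{2} \rangle \to \infty
      \ \text{ on scale-free networks with } P(k) \sim k^{-\gamma},\ 2 < \gamma \le 3 .
    \]
    ```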

  4. Critical fluctuations and the rates of interstate switching near the excitation threshold of a quantum parametric oscillator.

    PubMed

    Lin, Z R; Nakamura, Y; Dykman, M I

    2015-08-01

    We study the dynamics of a nonlinear oscillator near the critical point where period-two vibrations are first excited with the increasing amplitude of parametric driving. Above the threshold, quantum fluctuations induce transitions between the period-two states over the quasienergy barrier. We find the effective quantum activation energies for such transitions and their scaling with the difference of the driving amplitude from its critical value. We also find the scaling of the fluctuation correlation time with the quantum noise parameters in the critical region near the threshold. The results are extended to oscillators with nonlinear friction.

  5. Diversity of Rainfall Thresholds for early warning of hydro-geological disasters

    NASA Astrophysics Data System (ADS)

    De Luca, Davide L.; Versace, Pasquale

    2017-06-01

    For early warning of disasters induced by precipitation (such as floods and landslides), different kinds of rainfall thresholds are adopted, which differ from each other on the basis of the adopted hypotheses. In some cases, they represent the occurrence probability of an event (landslide or flood), in other cases the exceedance probability of a critical value for an assigned indicator I (a function of rainfall heights), and in further cases they only indicate the exceedance of a prefixed percentage of a critical value for I, indicated as Icr. For each scheme, it is usual to define three different criticality levels (ordinary, moderate and severe), which are associated with warning levels according to emergency plans. This work briefly discusses different schemes of rainfall thresholds, focusing attention on landslide prediction, with some applications to a real case study in the Calabria region (southern Italy).

  6. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.

  7. Self-Organization on Social Media: Endo-Exo Bursts and Baseline Fluctuations

    PubMed Central

    Oka, Mizuki; Hashimoto, Yasuhiro; Ikegami, Takashi

    2014-01-01

    A salient dynamic property of social media is bursting behavior. In this paper, we study bursting behavior in terms of the temporal relation between a preceding baseline fluctuation and the successive burst response using a frequency time series of 3,000 keywords on Twitter. We found that there is a fluctuation threshold up to which the burst size increases as the fluctuation increases and that above the threshold, there appears a variety of burst sizes. We call this threshold the critical threshold. Investigating this threshold in relation to endogenous bursts and exogenous bursts based on peak ratio and burst size reveals that the bursts below this threshold are endogenously caused and above this threshold, exogenous bursts emerge. Analysis of the 3,000 keywords shows that all the nouns have both endogenous and exogenous origins of bursts and that each keyword has a critical threshold in the baseline fluctuation value to distinguish between the two. Having a threshold for an input value for activating the system implies that Twitter is an excitable medium. These findings are useful for characterizing how excitable a keyword is on Twitter and could be used, for example, to predict the response to particular information on social media. PMID:25329610

  8. Existence of infinitely many stationary solutions of the L2-subcritical and critical NLSE on compact metric graphs

    NASA Astrophysics Data System (ADS)

    Dovetta, Simone

    2018-04-01

    We investigate the existence of stationary solutions for the nonlinear Schrödinger equation on compact metric graphs. In the L2-subcritical setting, we prove the existence of an infinite number of such solutions for every value of the mass. In the critical regime, the existence of infinitely many solutions is established if the mass is lower than a threshold value, while global minimizers of the NLS energy exist if and only if the mass is less than or equal to that threshold. Moreover, the relation between this threshold and the topology of the graph is characterized. The investigation is based on variational techniques and some new versions of Gagliardo-Nirenberg inequalities.

  9. Development of an epiphyte indicator of nutrient enrichment ...

    EPA Pesticide Factsheets

    Metrics of epiphyte load on macrophytes were evaluated for use as quantitative biological indicators for nutrient impacts in estuarine waters, based on review and analysis of the literature on epiphytes and macrophytes, primarily seagrasses, but including some brackish and freshwater rooted macrophyte species. An approach is presented that empirically derives threshold epiphyte loads which are likely to cause specified levels of decrease in macrophyte response metrics such as biomass, shoot density, percent cover, production and growth. Data from 36 studies of 10 macrophyte species were pooled to derive relationships between epiphyte load and the -25% and -50% seagrass response levels, which are proposed as the primary basis for establishment of critical threshold values. Given multiple sources of variability in the response data, threshold ranges based on the range of values falling between the median and the 75th quantile of observations at a given seagrass response level are proposed rather than single, critical point values. Four epiphyte load threshold categories (low, moderate, high, very high) are proposed. Comparison of the values of epiphyte loads associated with 25% and 50% reductions in light to macrophytes suggests that the threshold ranges are realistic both in terms of the principal mechanism of impact to macrophytes and in terms of the magnitude of resultant impacts expressed by the macrophytes. Some variability in response levels was observed among

  10. Estimation of the geochemical threshold and its statistical significance

    USGS Publications Warehouse

    Miesch, A.T.

    1981-01-01

    A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(?? - ??) or ln(?? - ??) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
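
    A minimal sketch of this statistic is given below. The adjustment used here (scaling each gap by the fitted-normal density at its midpoint times the sample size, so that adjusted gaps are comparable across the range) is one plausible reading of the abstract, not a verbatim reimplementation, and the test's critical values are not reproduced.

    ```python
    import numpy as np
    from scipy.stats import norm

    def adjusted_gaps(values):
        """Assumed reading of the adjusted-gap statistic: standardize the (already
        log-transformed) values, sort them, and weight each gap by the expected
        frequency from a fitted normal curve at the gap midpoint."""
        z = (values - values.mean()) / values.std(ddof=1)   # zero mean, unit variance
        z.sort()
        gaps = np.diff(z)
        midpoints = 0.5 * (z[:-1] + z[1:])
        adj = gaps * norm.pdf(midpoints) * len(z)            # expectation ~ 1 under normality
        return midpoints, adj

    # Illustrative use: the midpoint of the largest adjusted gap is the candidate threshold.
    x = np.log(np.random.lognormal(mean=1.0, sigma=0.4, size=200))
    mid, adj = adjusted_gaps(x)
    print("candidate geochemical threshold (standardized units):", mid[np.argmax(adj)])
    ```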

  11. Investigating Over Critical Thresholds of Forest Megafires Danger Conditions in Europe Utilising the ECMWF ERA-Interim Reanalysis

    NASA Astrophysics Data System (ADS)

    Petroliagkis, Thomas I.; Camia, Andrea; Liberta, Giorgio; Durrant, Tracy; Pappenberger, Florian; San-Miguel-Ayanz, Jesus

    2014-05-01

    The European Forest Fire Information System (EFFIS) has been established by the Joint Research Centre (JRC) and the Directorate General for Environment (DG ENV) of the European Commission (EC) to support the services in charge of the protection of forests against fires in the EU and neighbouring countries, and also to provide the EC services and the European Parliament with information on forest fires in Europe. Within its applications, EFFIS provides current and forecast meteorological fire danger maps up to 6 days ahead. Weather plays a key role in affecting wildfire occurrence and behaviour. Meteorological parameters can be used to derive meteorological fire weather indices that provide estimations of fire danger level at a given time over a specified area of interest. In this work, we investigate the suitability of critical thresholds of fire danger to provide an early warning for megafires (fires > 500 ha) over Europe. Past trends of fire danger are analysed by computing daily fire danger from weather data taken from re-analysis fields for a period of 31 years (1980 to 2010). Re-analysis global data sets, coming from the construction of high-quality climate records which combine past observations collected from many different observing and measuring platforms, are capable of describing how fire danger indices have evolved over time at a global scale. The latest and most updated ERA-Interim dataset of the European Centre for Medium-Range Weather Forecasts (ECMWF) was used to extract the meteorological variables needed to compute daily values of the Canadian Fire Weather Index (CFWI) over Europe, with a horizontal resolution of about 75x75 km. Daily time series of CFWI were constructed and analysed over a total of 1,071 European NUTS3 centroids, resulting in a set of percentiles and critical thresholds. Such percentiles could be used as thresholds to help fire services establish a measure of the significance of CFWI outputs as they relate to levels of fire potential, fuel conditions and fire danger. Median percentile values of fire days accumulated over the 31-year period were compared to median values of all days from that period. As expected, the CFWI time series exhibit different values on fire days than on all days. In addition, a percentile analysis was performed in order to determine the behaviour of index values corresponding to fire events falling into the megafire category. This analysis resulted in a set of critical thresholds based on percentiles. By utilising such thresholds, an initial framework for an early warning system has been established. By lowering the value of any of these thresholds, the number of hits could be increased until all extremes were captured (resulting in zero misses). However, in doing so, the number of false alarms tends to increase significantly. Consequently, an optimal trade-off between hits and false alarms has to be established when setting different (critical) CFWI thresholds.

  12. Accuracy of topographic index models at identifying ephemeral gully trajectories on agricultural fields

    NASA Astrophysics Data System (ADS)

    Sheshukov, Aleksey Y.; Sekaluvu, Lawrence; Hutchinson, Stacy L.

    2018-04-01

    Topographic index (TI) models have been widely used to predict trajectories and initiation points of ephemeral gullies (EGs) in agricultural landscapes. Prediction of EGs strongly relies on the selected value of the critical TI threshold, and the accuracy depends on topographic features, agricultural management, and datasets of observed EGs. This study statistically evaluated the predictions by TI models in two paired watersheds in Central Kansas that had different levels of structural disturbances due to implemented conservation practices. Four TI models with sole dependency on topographic factors of slope, contributing area, and planform curvature were used in this study. The observed EGs were obtained by field reconnaissance and through the process of hydrological reconditioning of digital elevation models (DEMs). The Kernel Density Estimation analysis was used to evaluate TI distribution within a 10-m buffer of the observed EG trajectories. The EG occurrence within catchments was analyzed using kappa statistics of the error matrix approach, while the lengths of predicted EGs were compared with the observed dataset using the Nash-Sutcliffe Efficiency (NSE) statistics. The TI frequency analysis produced a bi-modal distribution of topographic indexes, with the pixels within the EG trajectories having a higher peak. The graphs of kappa and NSE versus critical TI threshold showed similar profiles for all four TI models and both watersheds, with the maximum value representing the best comparison with the observed data. The Compound Topographic Index (CTI) model presented the overall best accuracy with NSE of 0.55 and kappa of 0.32. The statistics for the disturbed watershed showed higher best critical TI threshold values than for the undisturbed watershed. Structural conservation practices implemented in the disturbed watershed reduced ephemeral channels in headwater catchments, thus producing less variability in catchments with EGs. The variation in critical thresholds for all TI models suggested that TI models tend to predict EG occurrence and length over a range of thresholds rather than find a single best value.
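
    Both scores used to compare predicted and observed ephemeral gullies are standard statistics; a short sketch of each is given below on hypothetical catchment-level data, not the Kansas datasets from the study.

    ```python
    import numpy as np

    def cohens_kappa(pred, obs):
        """Cohen's kappa for binary EG-occurrence flags (True = gully present)."""
        pred, obs = np.asarray(pred, bool), np.asarray(obs, bool)
        po = np.mean(pred == obs)                              # observed agreement
        pe = (pred.mean() * obs.mean()                         # chance agreement
              + (1 - pred.mean()) * (1 - obs.mean()))
        return (po - pe) / (1 - pe)

    def nse(pred, obs):
        """Nash-Sutcliffe efficiency for predicted vs. observed EG lengths."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical catchments: occurrence flags and gully lengths (m).
    print(cohens_kappa([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
    print(nse([120, 80, 0, 45], [100, 90, 10, 50]))
    ```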

  13. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

    PubMed

    Donoho, David; Jin, Jiashun

    2008-09-30

    In important application fields today-genomics and proteomics are examples-selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
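
    The HC objective is explicit enough to sketch directly. The snippet below illustrates the thresholding rule as stated in the abstract on simulated Z-scores rather than real genomic features; restricting the search to the smaller half of the P-values is an assumption, not necessarily the paper's choice.

    ```python
    import numpy as np
    from scipy.stats import norm

    def hc_threshold(z_scores):
        """Higher-criticism threshold of feature Z-scores, following the abstract:
        maximize (i/p - pi_(i)) / sqrt((i/p) * (1 - i/p)) over the sorted P-values
        and return the |Z| corresponding to the maximizing P-value."""
        z = np.asarray(z_scores, float)
        p = z.size
        pvals = 2 * norm.sf(np.abs(z))               # two-sided P-values
        order = np.argsort(pvals)
        pi_sorted = pvals[order]
        imax = p // 2                                # assumed: search only the smaller P-values
        i = np.arange(1, imax + 1)
        hc = (i / p - pi_sorted[:imax]) / np.sqrt((i / p) * (1 - i / p))
        k = int(np.argmax(hc))
        return float(np.abs(z[order[k]]))

    # Simulated rare/weak setting: a few shifted features among pure noise.
    rng = np.random.default_rng(0)
    z = rng.standard_normal(1000)
    z[:20] += 2.5
    print("HC threshold on |Z|:", hc_threshold(z))
    ```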

  14. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak

    PubMed Central

    Donoho, David; Jin, Jiashun

    2008-01-01

    In important application fields today—genomics and proteomics are examples—selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i)) / √((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365

  15. Reference guide to odor thresholds for hazardous air pollutants listed in the Clean Air Act amendments of 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.

    1992-03-01

    In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of the criteria which are used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.

  16. Utility of repeat testing of critical values: a Q-probes analysis of 86 clinical laboratories.

    PubMed

    Lehman, Christopher M; Howanitz, Peter J; Souers, Rhona; Karcher, Donald S

    2014-06-01

    A common laboratory practice is to repeat critical values before reporting the test results to the clinical care provider. This may be an unnecessary step that delays the reporting of critical test results without adding value to the accuracy of the test result. To determine the proportions of repeated chemistry and hematology critical values that differ significantly from the original value as defined by the participating laboratory, to determine the threshold differences defined by the laboratory as clinically significant, and to determine the additional time required to analyze the repeat test. Participants prospectively reviewed critical test results for 4 laboratory tests: glucose, potassium, white blood cell count, and platelet count. Participants reported the following information: initial and repeated test result; time initial and repeat results were first known to laboratory staff; critical result notification time; if the repeat result was still a critical result; if the repeat result was significantly different from the initial result, as judged by the laboratory professional or policy; significant difference threshold, as defined by the laboratory; the make and model of the instrument used for primary and repeat testing. Routine, repeat analysis of critical values is a common practice. Most laboratories did not formally define a significant difference between repeat results. Repeated results were rarely considered significantly different. Median repeated times were at least 17 to 21 minutes for 10% of laboratories. Twenty percent of laboratories reported at least 1 incident in the last calendar year of delayed result reporting that clinicians indicated had adversely affected patient care. Routine repeat analysis of automated chemistry and hematology critical values is unlikely to be clinically useful and may adversely affect patient care.

  17. A test of critical thresholds and their indicators in a desertification-prone ecosystem: more resilience than we thought

    USGS Publications Warehouse

    Bestelmeyer, Brandon T.; Duniway, Michael C.; James, Darren K.; Burkett, Laura M.; Havstad, Kris M.

    2013-01-01

    Theoretical models predict that drylands can cross critical thresholds, but experimental manipulations to evaluate them are non-existent. We used a long-term (13-year) pulse-perturbation experiment featuring heavy grazing and shrub removal to determine if critical thresholds and their determinants can be demonstrated in Chihuahuan Desert grasslands. We asked if cover values or patch-size metrics could predict vegetation recovery, supporting their use as early-warning indicators. We found that season of grazing, but not the presence of competing shrubs, mediated the severity of grazing impacts on dominant grasses. Recovery occurred at the same rate irrespective of grazing history, suggesting that critical thresholds were not crossed, even at low cover levels. Grass cover, but not patch size metrics, predicted variation in recovery rates. Some transition-prone ecosystems are surprisingly resilient; management of grazing impacts and simple cover measurements can be used to avert undesired transitions and initiate restoration.

  18. Critical dynamics on a large human Open Connectome network

    NASA Astrophysics Data System (ADS)

    Ódor, Géza

    2016-12-01

    Extended numerical simulations of threshold models have been performed on a human brain network with N = 836 733 connected nodes available from the Open Connectome Project. While in the case of simple threshold models a sharp discontinuous phase transition without any critical dynamics arises, variable threshold models exhibit extended power-law scaling regions. This is attributed to the fact that Griffiths effects, stemming from the topological or interaction heterogeneity of the network, can become relevant if the input sensitivity of nodes is equalized. I have studied the effects of link directness, as well as the consequence of inhibitory connections. Nonuniversal power-law avalanche size and time distributions have been found with exponents agreeing with the values obtained in electrode experiments of the human brain. The dynamical critical region occurs in an extended control parameter space without the assumption of self-organized criticality.

  19. Extending the excluded volume for percolation threshold estimates in polydisperse systems: The binary disk system

    DOE PAGES

    Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...

    2017-06-01

    For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
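
    A rough numerical reading of the excluded-area idea for a binary disk mixture is sketched below. The quadratic mixing rule and the critical total excluded area B_c ≈ 4.5 (the known monodisperse-disk invariant) are assumptions made for illustration; they are not the closed-form expression derived in the paper.

    ```python
    import math

    def binary_disk_threshold(r1, r2, x1, B_c=4.5):
        """Estimate the critical number density of a 2D binary disk dispersion by
        holding the number-averaged total excluded area at the (assumed) monodisperse
        invariant B_c. x1 is the number fraction of species 1; the pairwise excluded
        area of two disks with radii ri and rj is pi * (ri + rj)**2."""
        x2 = 1.0 - x1
        mean_excluded = (x1 * x1 * math.pi * (2 * r1) ** 2
                         + 2 * x1 * x2 * math.pi * (r1 + r2) ** 2
                         + x2 * x2 * math.pi * (2 * r2) ** 2)
        return B_c / mean_excluded        # critical number density n_c

    # Illustrative mixture: half small disks (r = 1), half large disks (r = 2).
    print("estimated n_c:", binary_disk_threshold(1.0, 2.0, x1=0.5))
    ```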

  20. Non-equilibrium relaxation in a stochastic lattice Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Chen, Sheng; Täuber, Uwe C.

    2016-04-01

    We employ Monte Carlo simulations to study a stochastic Lotka-Volterra model on a two-dimensional square lattice with periodic boundary conditions. If the (local) prey carrying capacity is finite, there exists an extinction threshold for the predator population that separates a stable active two-species coexistence phase from an inactive state wherein only prey survive. Holding all other rates fixed, we investigate the non-equilibrium relaxation of the predator density in the vicinity of the critical predation rate. As expected, we observe critical slowing-down, i.e., a power law dependence of the relaxation time on the predation rate, and algebraic decay of the predator density at the extinction critical point. The numerically determined critical exponents are in accord with the established values of the directed percolation universality class. Following a sudden predation rate change to its critical value, one finds critical aging for the predator density autocorrelation function that is also governed by universal scaling exponents. This aging scaling signature of the active-to-absorbing state phase transition emerges at significantly earlier times than the stationary critical power laws, and could thus serve as an advanced indicator of the (predator) population’s proximity to its extinction threshold.

  1. Critical levels as applied to ozone for North American forests

    Treesearch

    Robert C. Musselman

    2006-01-01

    The United States and Canada have used concentration-based parameters for air quality standards for ozone effects on forests in North America. The European critical levels method for air quality standards uses an exposure-based parameter, a cumulative ozone concentration index with a threshold cutoff value. The critical levels method has not been used in North America...

  2. Global threshold dynamics of an SIVS model with waning vaccine-induced immunity and nonlinear incidence.

    PubMed

    Yang, Junyuan; Martcheva, Maia; Wang, Lin

    2015-10-01

    Vaccination is the most effective method of preventing the spread of infectious diseases. For many diseases, vaccine-induced immunity is not life long and the duration of immunity is not always fixed. In this paper, we propose an SIVS model taking the waning of vaccine-induced immunity and general nonlinear incidence into consideration. Our analysis shows that the model exhibits global threshold dynamics in the sense that if the basic reproduction number is less than 1, then the disease-free equilibrium is globally asymptotically stable implying the disease dies out; while if the basic reproduction number is larger than 1, then the endemic equilibrium is globally asymptotically stable indicating that the disease persists. This global threshold result indicates that if the vaccination coverage rate is below a critical value, then the disease always persists and only if the vaccination coverage rate is above the critical value, the disease can be eradicated. Copyright © 2015 Elsevier Inc. All rights reserved.
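
    The eradication statement mirrors the textbook critical vaccination coverage shown below, which holds for a perfect, non-waning vaccine in the simplest SIR-type setting; the threshold for the SIVS model with waning immunity and nonlinear incidence derived in the paper is more involved.

    ```latex
    % Classical critical vaccination coverage for a perfect, non-waning vaccine;
    % background only, not the paper's SIVS threshold with waning immunity.
    \[
      p_c \;=\; 1 - \frac{1}{\mathcal{R}_0},
      \qquad \text{eradication requires a coverage } p > p_c .
    \]
    ```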

  3. Extending the excluded volume for percolation threshold estimates in polydisperse systems: The binary disk system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah

    For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.

  4. Resolution of the threshold fracture energy paradox for solid particle erosion

    NASA Astrophysics Data System (ADS)

    Peck, Daniel; Volkov, Grigory; Mishuris, Gennady; Petrov, Yuri

    2016-12-01

    Previous models of a single erosion impact, for a rigid axisymmetric indenter defined by its shape function, have shown that a critical shape parameter exists which determines the behaviour of the threshold fracture energy. However, repeated investigations into this parameter have found no physical explanation for its value. Again utilising the notion of an incubation time prior to fracture, this paper attempts to provide a physical explanation of this phenomenon by introducing a supersonic stage into the model. The final scheme allows the effect of waves along the indenter's contact area to be taken into account. The effects of this physical characteristic of the impact on the threshold fracture energy and the critical shape parameter are investigated and discussed.

  5. A Matter of Millimeters: Defining the Processes for Critical Clearances on Curiosity

    NASA Technical Reports Server (NTRS)

    Florow, Brandon

    2013-01-01

    The Mars Science Laboratory (MSL) mission presents an immense packaging problem in that it takes a rover the size of a car with a sky crane landing system and packs it tightly into a spacecraft. This creates many areas of close and critical clearances. Critical Clearances are defined as hardware-to-hardware or hardware-to-envelope clearances which fall below a pre-established, location-dependent threshold and pose a risk of hardware-to-hardware contact during events such as launch, entry, landing, and operations. Close Clearances, on the other hand, are defined as any clearance value that is chosen to be tracked but is larger than the critical clearance threshold for its region. Close clearances may be tracked for various reasons, including uncertainty in design, large expected dynamic motion, etc.

  6. Standardization of haematology critical results management in adults: an International Council for Standardization in Haematology, ICSH, survey and recommendations.

    PubMed

    Keng, T B; De La Salle, B; Bourner, G; Merino, A; Han, J-Y; Kawai, Y; Peng, M T; McCafferty, R

    2016-10-01

    These recommendations are intended to develop a consensus, from the previously published papers, as to which parameters and what values should be considered critical. A practical guide on the standardization of critical results management in haematology laboratories would be beneficial as part of good laboratory and clinical practice and for use by laboratory-accrediting agencies. A working group with members from Europe, America, Australasia and Asia was formed by the International Council for Standardization in Haematology. A pattern-of-practice survey of 21 questions was distributed in 2014, and the data were collected electronically by Survey Monkey. The mode, or most commonly occurring value, was selected as the threshold for the upper and lower alert limits for critical results reporting. A total of 666 laboratories submitted data to this study and, of these, 499 submitted complete responses. Full blood count critical results alert thresholds, morphology findings that trigger critical result notification, the critical results alert list, the notification process and the maintenance of a critical results management protocol are described. This international survey provided a snapshot of current practice worldwide and identified considerable heterogeneity in critical results management. The recommendations in this study represent a consensus of good laboratory practice. They are intended to encourage the implementation of a standardized critical results management protocol in the laboratory. © 2016 John Wiley & Sons Ltd.

  7. Pressure and cold pain threshold reference values in a large, young adult, pain-free population.

    PubMed

    Waller, Robert; Smith, Anne Julia; O'Sullivan, Peter Bruce; Slater, Helen; Sterling, Michele; McVeigh, Joanne Alexandra; Straker, Leon Melville

    2016-10-01

    Currently there is a lack of large population studies that have investigated pain sensitivity distributions in healthy pain-free people. The aims of this study were: (1) to provide sex-specific reference values of pressure and cold pain thresholds in young pain-free adults; (2) to examine the association of potential correlates of pain sensitivity with pain threshold values. This study investigated sex-specific pressure and cold pain threshold estimates for young pain-free adults aged 21-24 years. A cross-sectional design was utilised using participants (n=617) from the Western Australian Pregnancy Cohort (Raine) Study at the 22-year follow-up. The association of site, sex, height, weight, smoking, health-related quality of life, psychological measures and activity with pain threshold values was examined. Pressure pain threshold (lumbar spine, tibialis anterior, neck and dorsal wrist) and cold pain threshold (dorsal wrist) were assessed using standardised quantitative sensory testing protocols. Reference values for pressure pain threshold (four body sites) stratified by sex and site, and cold pain threshold (dorsal wrist) stratified by sex are provided. Statistically significant, independent correlates of increased pressure pain sensitivity measures were site (neck, dorsal wrist), sex (female), higher waist-hip ratio and poorer mental health. Statistically significant, independent correlates of increased cold pain sensitivity measures were sex (female), poorer mental health and smoking. These data provide the most comprehensive and robust sex-specific reference values for pressure pain threshold specific to four body sites and cold pain threshold at the dorsal wrist for young adults aged 21-24 years. Establishing normative values in this young age group is important given that the transition from adolescence to adulthood is a critical temporal period during which trajectories for persistent pain can be established. These data will provide an important research resource to enable more accurate profiling and interpretation of pain sensitivity in clinical pain disorders in young adults. The robust and comprehensive data can assist interpretation of future clinical pain studies and provide further insight into the complex associations of pain sensitivity that can be used in future research. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  8. Risk Assessment Methodology for Software Supportability (RAMSS): guidelines for Adapting Software Supportability Evaluations

    DTIC Science & Technology

    1986-04-14

    ... development and beyond, evaluation criteria must include quantitative goals (the desired value) and thresholds (the value beyond which the charac

  9. Novel threshold pressure sensors based on nonlinear dynamics of MEMS resonators

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad H.; Alsaleem, Fadi M.; Ouakad, Hassen M.

    2018-06-01

    Triggering an alarm in a car for low air-pressure in the tire or tripping an HVAC compressor if the refrigerant pressure is lower than a threshold value are examples for applications where measuring the amount of pressure is not as important as determining if the pressure has exceeded a threshold value for an action to occur. Unfortunately, current technology still relies on analog pressure sensors to perform this functionality by adding a complex interface (extra circuitry, controllers, and/or decision units). In this paper, we demonstrate two new smart tunable-threshold pressure switch concepts that can reduce the complexity of a threshold pressure sensor. The first concept is based on the nonlinear subharmonic resonance of a straight double cantilever microbeam with a proof mass and the other concept is based on the snap-through bi-stability of a clamped-clamped MEMS shallow arch. In both designs, the sensor operation concept is simple. Any actuation performed at a certain pressure lower than a threshold value will activate a nonlinear dynamic behavior (subharmonic resonance or snap-through bi-stability) yielding a large output that would be interpreted as a logic value of ONE, or ON. Once the pressure exceeds the threshold value, the nonlinear response ceases to exist, yielding a small output that would be interpreted as a logic value of ZERO, or OFF. A lumped, single degree of freedom model for the double cantilever beam, that is validated using experimental data, and a continuous beam model for the arch beam, are used to simulate the operation range of the proposed sensors by identifying the relationship between the excitation signal and the critical cut-off pressure.

  10. Analyzing threshold pressure limitations in microfluidic transistors for self-regulated microfluidic circuits.

    PubMed

    Kim, Sung-Jin; Yokokawa, Ryuji; Takayama, Shuichi

    2012-12-03

    This paper reveals a critical limitation in the electro-hydraulic analogy between a microfluidic membrane-valve (μMV) and an electronic transistor. Unlike typical transistors that have similar on and off threshold voltages, in hydraulic μMVs, the threshold pressures for opening and closing are significantly different and can change, even for the same μMVs depending on overall circuit design and operation conditions. We explain, in particular, how the negative values of the closing threshold pressures significantly constrain operation of even simple hydraulic μMV circuits such as autonomously switching two-valve microfluidic oscillators. These understandings have significant implications in designing self-regulated microfluidic devices.

  11. Percolation of disordered jammed sphere packings

    NASA Astrophysics Data System (ADS)

    Ziff, Robert M.; Torquato, Salvatore

    2017-02-01

    We determine the site and bond percolation thresholds for a system of disordered jammed sphere packings in the maximally random jammed state, generated by the Torquato-Jiao algorithm. For the site threshold, which gives the fraction of conducting versus non-conducting spheres necessary for percolation, we find pc = 0.3116(3), consistent with the 1979 value of Powell, 0.310(5), and identical within errors to the threshold for the simple-cubic lattice, 0.311608, which shares the same average coordination number of 6. In terms of the volume fraction φ, the threshold corresponds to a critical value φc = 0.199. For the bond threshold, which apparently was not measured before, we find pc = 0.2424(3). To find these thresholds, we considered two shape-dependent universal ratios involving the size of the largest cluster, fluctuations in that size, and the second moment of the size distribution; we confirmed the ratios' universality by also studying the simple-cubic lattice with a similar cubic boundary. The results are applicable to many problems including conductivity in random mixtures, glass formation, and drug loading in pharmaceutical tablets.
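
    For readers who want the flavour of such site-threshold estimates, here is a small spanning-cluster check on the simple-cubic lattice (the lattice whose threshold, 0.311608, the packing value matches). The occupation probabilities, lattice size and z-spanning criterion are illustrative choices, not the method of the paper.

    ```python
    import numpy as np
    from scipy.ndimage import label

    def spans_z(p, L=32, seed=0):
        """Occupy sites of an L^3 simple-cubic lattice with probability p and test
        whether any 6-connected cluster spans the lattice along the first axis."""
        rng = np.random.default_rng(seed)
        occupied = rng.random((L, L, L)) < p
        labels, _ = label(occupied)                 # default structure = 6-connectivity
        top, bottom = np.unique(labels[0]), np.unique(labels[-1])
        common = np.intersect1d(top, bottom)
        return bool(np.any(common > 0))             # label 0 is the empty background

    # Around p ~ 0.31 the spanning probability changes rapidly with p.
    for p in (0.25, 0.31, 0.37):
        print(p, spans_z(p))
    ```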

  12. Usage of fMRI for pre-surgical planning in brain tumor and vascular lesion patients: task and statistical threshold effects on language lateralization.

    PubMed

    Nadkarni, Tanvi N; Andreoli, Matthew J; Nair, Veena A; Yin, Peng; Young, Brittany M; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S; Field, Aaron S; Baskaya, Mustafa K; Moritz, Chad H; Meyerand, M Elizabeth; Prabhakaran, Vivek

    2015-01-01

    Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language tasks on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus identifying critical functional regions and computing LIs should be conducted on an individual subject basis, using a continuum of threshold values with different tasks to provide the most accurate information for surgical planning to minimize post-operative language deficits.

  13. Normalizing rainfall/debris-flow thresholds along the U.S. Pacific coast for long-term variations in precipitation climate

    USGS Publications Warehouse

    Wilson, Raymond C.

    1997-01-01

    Broad-scale variations in long-term precipitation climate may influence rainfall/debris-flow threshold values along the U.S. Pacific coast, where both the mean annual precipitation (MAP) and the number of rainfall days (#RDs) are controlled by topography, distance from the coastline, and geographic latitude. Previous authors have proposed that rainfall thresholds are directly proportional to MAP, but this appears to hold only within limited areas (< 1° latitude), where rainfall frequency (#RDs) is nearly constant. MAP-normalized thresholds underestimate the critical rainfall when applied to areas to the south, where the #RDs decrease, and overestimate threshold rainfall when applied to areas to the north, where the #RDs increase. For normalization between climates where both MAP and #RDs vary significantly, thresholds may best be described as multiples of the rainy-day normal, RDN = MAP/#RDs. Using data from several storms that triggered significant debris-flow activity in southern California, the San Francisco Bay region, and the Pacific Northwest, peak 24-hour rainfalls were plotted against RDN values, displaying a linear relationship with a lower bound at about 14 RDN. RDN ratios in this range may provide a threshold for broad-scale regional forecasting of debris-flow activity.
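
    The rainy-day-normal rule is simple enough to state as a worked example; the MAP and rainfall-day figures below are placeholders, not values from the three study regions.

    ```python
    # Hedged worked example of the rainy-day normal (RDN) threshold; inputs are placeholders.
    MAP = 900.0                 # mean annual precipitation, mm
    n_rainfall_days = 75        # number of rainfall days per year (#RDs)

    RDN = MAP / n_rainfall_days          # rainy-day normal, mm per rainfall day
    threshold_24h = 14 * RDN             # lower-bound debris-flow threshold from the abstract

    print(f"RDN = {RDN:.1f} mm, 24-hour threshold ~ {threshold_24h:.0f} mm")
    ```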

  14. Development of water quality thresholds during dredging for the protection of benthic primary producer habitats.

    PubMed

    Sofonia, Jeremy J; Unsworth, Richard K F

    2010-01-01

    Given the potential for adverse effects of ocean dredging on marine organisms, particularly benthic primary producer communities, the management and monitoring of those activities which cause elevated turbidity and sediment loading is critical. In practice, however, this has proven challenging, as the development of water quality threshold values, upon which management responses are based, is subject to a large number of physical and biological parameters that are spatially and temporally specific. As a consequence, monitoring programs to date have taken a wide range of different approaches, most focusing on measures of turbidity reported as nephelometric turbidity units (NTU). This paper presents a potential approach to the determination of water quality thresholds which utilises data gathered through the long-term deployment of in situ water instruments, but suggests a focus on photosynthetically active radiation (PAR) rather than NTU, as it is more relevant biologically and inclusive of other site conditions. A simple mathematical approach to data interpretation is also presented which expresses thresholds not as individual concentration values over specific intervals, but as an equation which may be utilized in numerical modelling.

  15. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    DTIC Science & Technology

    2014-02-01

    installation based on a Euclidean distance allocation and assigned that installation's threshold values. The second approach used a thin-plate spline ... installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the ... the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND

  16. Analyzing threshold pressure limitations in microfluidic transistors for self-regulated microfluidic circuits

    PubMed Central

    Kim, Sung-Jin; Yokokawa, Ryuji; Takayama, Shuichi

    2012-01-01

    This paper reveals a critical limitation in the electro-hydraulic analogy between a microfluidic membrane-valve (μMV) and an electronic transistor. Unlike typical transistors that have similar on and off threshold voltages, in hydraulic μMVs, the threshold pressures for opening and closing are significantly different and can change, even for the same μMVs depending on overall circuit design and operation conditions. We explain, in particular, how the negative values of the closing threshold pressures significantly constrain operation of even simple hydraulic μMV circuits such as autonomously switching two-valve microfluidic oscillators. These understandings have significant implications in designing self-regulated microfluidic devices. PMID:23284181

  17. Ecological thresholds: The key to successful environmental management or an important concept with no practical application?

    USGS Publications Warehouse

    Groffman, P.M.; Baron, Jill S.; Blett, T.; Gold, A.J.; Goodman, I.; Gunderson, L.H.; Levinson, B.M.; Palmer, Margaret A.; Paerl, H.W.; Peterson, G.D.; Poff, N.L.; Rejeski, D.W.; Reynolds, J.F.; Turner, M.G.; Weathers, K.C.; Wiens, J.

    2006-01-01

    An ecological threshold is the point at which there is an abrupt change in an ecosystem quality, property or phenomenon, or where small changes in an environmental driver produce large responses in the ecosystem. Analysis of thresholds is complicated by nonlinear dynamics and by multiple factor controls that operate at diverse spatial and temporal scales. These complexities have challenged the use and utility of threshold concepts in environmental management despite great concern about preventing dramatic state changes in valued ecosystems, the need for determining critical pollutant loads and the ubiquity of other threshold-based environmental problems. In this paper we define the scope of the thresholds concept in ecological science and discuss methods for identifying and investigating thresholds using a variety of examples from terrestrial and aquatic environments, at ecosystem, landscape and regional scales. We end with a discussion of key research needs in this area.

  18. Application of the coherent anomaly method to percolation

    NASA Astrophysics Data System (ADS)

    Takayasu, Misako; Takayasu, Hideki

    1988-03-01

    Applying the coherent anomaly method (CAM) to site percolation problems, we estimate the percolation threshold pc and critical exponents. We obtain pc=0.589, β=0.140, γ=2.426 on the two-dimensional square lattice. These values are in good agreement with the values already known. We also investigate higher-dimensional cases by this method.

  19. Application of the Coherent Anomaly Method to Percolation

    NASA Astrophysics Data System (ADS)

    Takayasu, Misako; Takayasu, Hideki

    Applying the coherent anomaly method (CAM) to site percolation problems, we estimate the percolation threshold pc and critical exponents. We obtain pc = 0.589, β = 0.140, γ = 2.426 on the two-dimensional square lattice. These values are in good agreement with the values already known. We also investigate higher-dimensional cases by this method.

  20. Light - Instead of UV Protection: New Requirements for Skin Cancer Prevention.

    PubMed

    Zastrow, Leonhard; Lademann, Jürgen

    2016-03-01

    The requirements on sunscreens have changed substantially since it was demonstrated some years ago that approximately 50% of the free radicals formed in the skin by solar radiation originate from the visible and infrared regions of the solar spectrum. In addition, a critical radical concentration threshold could be identified. If this concentration, the free radical threshold value (FRTV), is exceeded, sunburn, immunosuppression and skin cancer may develop. Application of sunscreens and lotions protects against sunburn in the UV region of the solar spectrum and is therefore frequently used to extend people's stay in the sun. However, this behaviour can increase the concentration of free radicals formed in the visible and infrared regions of the solar spectrum, so that the critical radical threshold is exceeded and the skin may be damaged. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  1. New method for measuring the laser-induced damage threshold of optical thin film

    NASA Astrophysics Data System (ADS)

    Su, Jun-hong; Wang, Hong; Xi, Ying-xue

    2012-10-01

    The laser-induced damage threshold (LIDT) of a thin film is the maximum laser radiation intensity that the film can withstand; the film is damaged when the irradiation intensity exceeds the LIDT. In this paper, an experimental platform with measurement operator interfaces and control procedures in the Visual Basic (VB) environment is built according to ISO 11254-1. In order to obtain more accurate results than those of manual measurement, the software system allows the hardware devices to be controlled through widgets on the operator interfaces. According to the sample characteristics, critical parameters of the LIDT measurement system, such as spot diameter, damage threshold region, and critical damage pixel number, are set on the man-machine interface, which realizes intelligent measurement of the LIDT. From the experimental data, the LIDT is obtained by automatically fitting the damage curve.
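
    "Fitting the damage curve" commonly refers to a linear extrapolation of damage probability versus fluence down to zero probability, as in ISO 11254-1 style testing; the sketch below shows that extrapolation on made-up data and is not the paper's software.

    ```python
    import numpy as np

    # Hypothetical 1-on-1 test data: laser fluence (J/cm^2) vs. observed damage probability.
    fluence = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
    damage_prob = np.array([0.0, 0.1, 0.35, 0.6, 0.9])

    # Fit only the rising part of the curve and extrapolate to zero damage probability.
    mask = damage_prob > 0
    slope, intercept = np.polyfit(fluence[mask], damage_prob[mask], 1)
    lidt = -intercept / slope            # fluence at which the fitted probability reaches zero

    print(f"estimated LIDT ~ {lidt:.2f} J/cm^2")
    ```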

  2. External validation of a prehospital risk score for critical illness.

    PubMed

    Kievlan, Daniel R; Martin-Gill, Christian; Kahn, Jeremy M; Callaway, Clifton W; Yealy, Donald M; Angus, Derek C; Seymour, Christopher W

    2016-08-11

    Identification of critically ill patients during prehospital care could facilitate early treatment and aid in the regionalization of critical care. Tools to consistently identify those in the field with or at higher risk of developing critical illness do not exist. We sought to validate a prehospital critical illness risk score that uses objective clinical variables in a contemporary cohort of geographically and temporally distinct prehospital encounters. We linked prehospital encounters at 21 emergency medical services (EMS) agencies to inpatient electronic health records at nine hospitals in southwestern Pennsylvania from 2010 to 2012. The primary outcome was critical illness during hospitalization, defined as an intensive care unit stay with delivery of organ support (mechanical ventilation or vasopressor use). We calculated the prehospital risk score using demographics and first vital signs from eligible EMS encounters, and we tested the association between score variables and critical illness using multivariable logistic regression. Discrimination was assessed using the AUROC curve, and calibration was determined by plotting observed versus expected events across score values. Operating characteristics were calculated at score thresholds. Among 42,550 nontrauma, non-cardiac arrest adult EMS patients, 1926 (4.5 %) developed critical illness during hospitalization. We observed moderate discrimination of the prehospital critical illness risk score (AUROC 0.73, 95 % CI 0.72-0.74) and adequate calibration based on observed versus expected plots. At a score threshold of 2, sensitivity was 0.63 (95 % CI 0.61-0.75), specificity was 0.73 (95 % CI 0.72-0.73), negative predictive value was 0.98 (95 % CI 0.98-0.98), and positive predictive value was 0.10 (95 % CI 0.09-0.10). The risk score performance was greater with alternative definitions of critical illness, including in-hospital mortality (AUROC 0.77, 95 % CI 0.7 -0.78). In an external validation cohort, a prehospital risk score using objective clinical data had moderate discrimination for critical illness during hospitalization.
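
    The reported operating characteristics follow from a 2x2 contingency table at the chosen score cut-off; a short sketch on hypothetical encounter data (not the Pennsylvania cohort) is given below.

    ```python
    import numpy as np

    def operating_characteristics(score, outcome, cutoff):
        """Sensitivity, specificity, PPV and NPV of a risk score at a given cut-off.
        score   -- prehospital risk score per encounter
        outcome -- 1 if the patient developed critical illness, else 0
        """
        score, outcome = np.asarray(score), np.asarray(outcome, bool)
        pred = score >= cutoff
        tp = np.sum(pred & outcome)
        fp = np.sum(pred & ~outcome)
        fn = np.sum(~pred & outcome)
        tn = np.sum(~pred & ~outcome)
        return dict(sensitivity=tp / (tp + fn), specificity=tn / (tn + fp),
                    ppv=tp / (tp + fp), npv=tn / (tn + fn))

    # Hypothetical data only.
    scores = [0, 1, 3, 2, 4, 1, 0, 5, 2, 3]
    outcomes = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
    print(operating_characteristics(scores, outcomes, cutoff=2))
    ```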

  3. Electrorotation of novel electroactive polymer composites in uniform DC and AC electric fields

    NASA Astrophysics Data System (ADS)

    Zrinyi, Miklós; Nakano, Masami; Tsujita, Teppei

    2012-06-01

    Novel electroactive polymer composites have been developed that could spin in uniform DC and AC electric fields. The angular displacement as well as rotation of polymer disks around an axis that is perpendicular to the direction of the applied electric field was studied. It was found that the dynamics of the polymer rotor is very complex. Depending on the strength of the static DC field, three regimes have been observed: no rotation occurs below a critical threshold field intensity, oscillatory motion takes place just above this value and continuous rotation can be observed above the critical threshold field intensity. It was also found that low frequency AC fields could also induce angular deformation.

  4. Methods for forewarning of critical condition changes in monitoring civil structures

    DOEpatents

    Abercrombie, Robert K.; Hively, Lee M.

    2013-04-02

    Sensor modules (12) including accelerometers (20) are placed on a physical structure (10), and tri-axial accelerometer data is converted to mechanical power (P) data (41), which is then processed to provide a forewarning (57) of a critical event concerning the physical structure (10). The forewarning is based on a number of occurrences of a composite measure of dissimilarity (C_i) exceeding a forewarning threshold over a defined sampling time, and a forewarning signal (58) is provided to a human observer through a visual, audible or tangible signal. A forewarning of a structural failure can also be provided based on a number of occurrences of (C_i) above a failure value threshold.
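
    The counting logic described in the patent abstract can be sketched roughly as below; the function name, the threshold value and the occurrence count are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def forewarning(c_values, threshold, min_occurrences):
    """Return True if the composite dissimilarity C_i exceeds the
    forewarning threshold at least `min_occurrences` times within the
    sampled window (all names and limits here are illustrative)."""
    exceedances = np.sum(np.asarray(c_values) > threshold)
    return exceedances >= min_occurrences

# Hypothetical sequence of composite dissimilarity values for one window
c_i = [0.8, 1.3, 0.9, 1.7, 1.5, 2.1, 0.7]
if forewarning(c_i, threshold=1.4, min_occurrences=3):
    print("issue forewarning signal")
```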

  5. Using a higher criticism statistic to detect modest effects in a genome-wide study of rheumatoid arthritis

    PubMed Central

    2009-01-01

    In high-dimensional studies such as genome-wide association studies, the correction for multiple testing in order to control total type I error results in decreased power to detect modest effects. We present a new analytical approach based on the higher criticism statistic that allows identification of the presence of modest effects. We apply our method to the genome-wide study of rheumatoid arthritis provided in the Genetic Analysis Workshop 16 Problem 1 data set. There is evidence for unknown bias in this study that could be explained by the presence of undetected modest effects. We compared the asymptotic and empirical thresholds for the higher criticism statistic. Using the asymptotic threshold we detected the presence of modest effects genome-wide. We also detected modest effects using the 90th percentile of the empirical null distribution as a threshold; however, there was no such evidence when the 95th and 99th percentiles were used. While the higher criticism method suggests that there is some evidence for modest effects, interpreting individual single-nucleotide polymorphisms with significant higher criticism statistics is of undetermined value. The goal of higher criticism is to alert the researcher that genetic effects remain to be discovered and to promote the use of more targeted and powerful studies to detect the remaining effects. PMID:20018032
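
    For readers unfamiliar with the statistic, the sketch below computes one common form of the higher criticism statistic (following Donoho and Jin) from a vector of p-values; the toy data and the alpha0 = 0.5 restriction are assumptions for illustration, not the workshop data or necessarily the exact variant used in the paper.

```python
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """Higher criticism statistic (one common form):
    HC = max over the smallest alpha0*n ordered p-values of
         sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
    p = np.clip(np.sort(np.asarray(pvalues, dtype=float)), 1e-12, 1 - 1e-12)
    n = len(p)
    i = np.arange(1, n + 1)
    hc_terms = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    keep = i <= int(alpha0 * n)     # restrict to the smallest p-values
    return np.max(hc_terms[keep])

# Toy example: mostly null p-values plus a small fraction of modest effects
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(size=9900), rng.beta(0.3, 5, size=100)])
print(higher_criticism(pvals))
```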

  6. Synchronization on Erdös-Rényi networks.

    PubMed

    Gong, Baihua; Yang, Lei; Yang, Kongqing

    2005-09-01

    In this Brief Report, by analyzing the spectral properties of the Laplacian matrix of Erdös-Rényi networks, we obtained the critical coupling strength for complete synchronization analytically. In particular, for networks of any size, when the average degree is greater than a threshold and the coupling strength is large enough, the networks can synchronize. Here, the threshold is determined by the value of the maximal Lyapunov exponent of each dynamical unit.
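
    A minimal numerical sketch of the quantities involved is shown below: it builds an Erdös-Rényi graph, computes the Laplacian spectrum, and forms a master-stability-style estimate of the critical coupling from the smallest nonzero eigenvalue; the graph parameters and the placeholder constant alpha_1 are assumptions, not values from the Brief Report.

```python
import numpy as np
import networkx as nx

# Build an Erdos-Renyi graph and inspect its Laplacian spectrum.
n, p = 200, 0.05          # illustrative size and edge probability
G = nx.erdos_renyi_graph(n, p, seed=42)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals = np.sort(np.linalg.eigvalsh(L))

lambda_2 = eigvals[1]      # smallest nonzero eigenvalue (if G is connected)
lambda_n = eigvals[-1]     # largest eigenvalue

# In the master-stability picture, for a coupling function whose stable
# interval starts at alpha_1, the critical coupling scales as
# sigma_c ~ alpha_1 / lambda_2; alpha_1 depends on the node dynamics and
# its maximal Lyapunov exponent, so the value below is only a placeholder.
alpha_1 = 1.0
print("lambda_2 =", lambda_2, "eigenratio =", lambda_n / lambda_2)
print("critical coupling estimate:", alpha_1 / lambda_2)
```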

  7. The use of spatio-temporal correlation to forecast critical transitions

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Bierkens, Marc F. P.

    2010-05-01

    Complex dynamical systems may have critical thresholds at which the system shifts abruptly from one state to another. Such critical transitions have been observed in systems ranging from the human body system to financial markets and the Earth system. Forecasting the timing of critical transitions before they are reached is of paramount importance because critical transitions are associated with a large shift in dynamical regime of the system under consideration. However, it is hard to forecast critical transitions, because the state of the system shows relatively little change before the threshold is reached. Recently, it was shown that increased spatio-temporal autocorrelation and variance can serve as alternative early warning signal for critical transitions. However, thus far these second order statistics have not been used for forecasting in a data assimilation framework. Here we show that the use of spatio-temporal autocorrelation and variance in the state of the system reduces the uncertainty in the predicted timing of critical transitions compared to classical approaches that use the value of the system state only. This is shown by assimilating observed spatio-temporal autocorrelation and variance into a dynamical system model using a Particle Filter. We adapt a well-studied distributed model of a logistically growing resource with a fixed grazing rate. The model describes the transition from an underexploited system with high resource biomass to overexploitation as grazing pressure crosses the critical threshold, which is a fold bifurcation. To represent limited prior information, we use a large variance in the prior probability distributions of model parameters and the system driver (grazing rate). First, we show that the rate of increase in spatio-temporal autocorrelation and variance prior to reaching the critical threshold is relatively consistent across the uncertainty range of the driver and parameter values used. This indicates that an increase in spatio-temporal autocorrelation and variance are consistent predictors of a critical transition, even under the condition of a poorly defined system. Second, we perform data assimilation experiments using an artificial exhaustive data set generated by one realization of the model. To mimic real-world sampling, an observational data set is created from this exhaustive data set. This is done by sampling on a regular spatio-temporal grid, supplemented by sampling locations at a short distance. Spatial and temporal autocorrelation in this observational data set is calculated for different spatial and temporal separation (lag) distances. To assign appropriate weights to observations (here, autocorrelation values and variance) in the Particle Filter, the covariance matrix of the error in these observations is required. This covariance matrix is estimated using Monte Carlo sampling, selecting a different random position of the sampling network relative to the exhaustive data set for each realization. At each update moment in the Particle Filter, observed autocorrelation values are assimilated into the model and the state of the model is updated. Using this approach, it is shown that the use of autocorrelation reduces the uncertainty in the forecasted timing of a critical transition compared to runs without data assimilation. The performance of the use of spatial autocorrelation versus temporal autocorrelation depends on the timing and number of observational data. This study is restricted to a single model only. 
However, it is becoming increasingly clear that spatio-temporal autocorrelation and variance can be used as early warning signals for a large number of systems. Thus, it is expected that spatio-temporal autocorrelation and variance are valuable in data assimilation frameworks in a large number of dynamical systems.
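
    As a rough sketch of the early-warning indicators discussed above, the code below computes lag-1 temporal autocorrelation and variance in a moving window, plus a simple nearest-neighbour spatial autocorrelation for a 2D snapshot; the synthetic data and window length are illustrative assumptions, not the model output used in the study.

```python
import numpy as np

def rolling_ews(series, window):
    """Lag-1 temporal autocorrelation and variance in a moving window,
    two generic early-warning indicators (illustrative helper)."""
    ac, var = [], []
    for t in range(window, len(series) + 1):
        w = series[t - window:t]
        ac.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        var.append(np.var(w))
    return np.array(ac), np.array(var)

def spatial_lag1_autocorr(field):
    """Correlation between each cell and its right/down neighbours in a
    2D snapshot of the system state (a simple Moran-type indicator)."""
    x = field - field.mean()
    num = (x[:, :-1] * x[:, 1:]).mean() + (x[:-1, :] * x[1:, :]).mean()
    return 0.5 * num / x.var()

# Toy usage with synthetic data
rng = np.random.default_rng(2)
ts = np.cumsum(rng.normal(size=500)) * 0.01 + rng.normal(size=500)
snapshot = rng.normal(size=(50, 50))
ac, var = rolling_ews(ts, window=100)
print(ac[-1], var[-1], spatial_lag1_autocorr(snapshot))
```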

  8. An integrated approach to evaluating alternative risk prediction strategies: a case study comparing alternative approaches for preventing invasive fungal disease.

    PubMed

    Sadique, Z; Grieve, R; Harrison, D A; Jit, M; Allen, E; Rowan, K M

    2013-12-01

    This article proposes an integrated approach to the development, validation, and evaluation of new risk prediction models illustrated with the Fungal Infection Risk Evaluation study, which developed risk models to identify non-neutropenic, critically ill adult patients at high risk of invasive fungal disease (IFD). Our decision-analytical model compared alternative strategies for preventing IFD at up to three clinical decision time points (critical care admission, after 24 hours, and end of day 3), followed with antifungal prophylaxis for those judged "high" risk versus "no formal risk assessment." We developed prognostic models to predict the risk of IFD before critical care unit discharge, with data from 35,455 admissions to 70 UK adult, critical care units, and validated the models externally. The decision model was populated with positive predictive values and negative predictive values from the best-fitting risk models. We projected lifetime cost-effectiveness and expected value of partial perfect information for groups of parameters. The risk prediction models performed well in internal and external validation. Risk assessment and prophylaxis at the end of day 3 was the most cost-effective strategy at the 2% and 1% risk threshold. Risk assessment at each time point was the most cost-effective strategy at a 0.5% risk threshold. Expected values of partial perfect information were high for positive predictive values or negative predictive values (£11 million-£13 million) and quality-adjusted life-years (£11 million). It is cost-effective to formally assess the risk of IFD for non-neutropenic, critically ill adult patients. This integrated approach to developing and evaluating risk models is useful for informing clinical practice and future research investment. © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Published by International Society for Pharmacoeconomics and Outcomes Research (ISPOR) All rights reserved.

  9. MEthods of ASsessing blood pressUre: identifying thReshold and target valuEs (MeasureBP): a review & study protocol.

    PubMed

    Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S

    2015-04-01

    Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and the MeasureBP study protocol. These results will lay the evidence-based foundation to resolve uncertainties within blood pressure guidelines, which, in turn, will improve the management of hypertension.

  10. Assessment of expected breeding values for fertility traits of Murrah buffaloes under subtropical climate.

    PubMed

    Dash, Soumya; Chakravarty, A K; Singh, Avtar; Shivahre, Pushp Raj; Upadhyay, Arpan; Sah, Vaishali; Singh, K Mahesh

    2015-03-01

    The aim of the present study was to assess the influence of temperature and humidity prevalent under a subtropical climate on the breeding values for fertility traits, viz. service period (SP), pregnancy rate (PR) and conception rate (CR), of Murrah buffaloes in the National Dairy Research Institute (NDRI) herd. Fertility data on 1379 records of 581 Murrah buffaloes spread over four lactations, together with climatic parameters, viz. dry bulb temperature and relative humidity (RH), spanning 20 years (1993-2012), were collected from NDRI and the Central Soil and Salinity Research Institute, Karnal, India. Monthly average temperature-humidity index (THI) values were estimated. The threshold THI value affecting fertility traits was identified by fixed-effects least-squares model analysis. Three zones (non-heat stress, heat stress and critical heat stress) were defined within a year. The genetic parameters heritability (h²) and repeatability (r) of each fertility trait were estimated. Genetic evaluation of Murrah buffaloes was performed in each zone with respect to their expected breeding values (EBV) for fertility traits. The effect of THI was significant (p<0.001) for all fertility traits, with the threshold THI value identified as 75. Based on THI values, a year was classified into three zones: the non-heat stress zone (THI 56.71-73.21), the heat stress zone (HSZ; THI 75.39-81.60) and the critical HSZ (THI 80.27-81.60). The EBV for SP, PR and CR were estimated as 138.57 days, 0.362 and 69.02% in the non-HSZ, while in the HSZ they were 139.62 days, 0.358 and 68.81%, respectively. In the critical HSZ, the EBV for SP increased to 140.92 days, while those for PR and CR declined to 0.357 and 68.71%. A negative effect of THI on the EBV of fertility traits was observed under the non-HSZ and critical HSZ. Thus, the influence of THI should be adjusted for before estimating the breeding values for fertility traits in Murrah buffaloes.
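
    For illustration, the sketch below computes a temperature-humidity index with one widely used formula and classifies it against the thresholds reported in the abstract; the THI formula itself is an assumption (several variants exist) and may differ from the one used in the study.

```python
def thi(temp_c, rh_fraction):
    """Temperature-humidity index; one widely used formulation
    (THI = 0.8*T + RH*(T - 14.4) + 46.4). Other variants exist."""
    return 0.8 * temp_c + rh_fraction * (temp_c - 14.4) + 46.4

def stress_zone(thi_value, threshold=75.0, critical=80.27):
    """Classify a monthly THI value using the thresholds reported in
    the study (75 for heat stress; about 80.3 for the critical zone)."""
    if thi_value < threshold:
        return "non-heat stress zone"
    return "critical heat stress zone" if thi_value >= critical else "heat stress zone"

print(thi(35.0, 0.60), stress_zone(thi(35.0, 0.60)))
```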

  11. Usage of fMRI for pre-surgical planning in brain tumor and vascular lesion patients: Task and statistical threshold effects on language lateralization

    PubMed Central

    Nadkarni, Tanvi N.; Andreoli, Matthew J.; Nair, Veena A.; Yin, Peng; Young, Brittany M.; Kundu, Bornali; Pankratz, Joshua; Radtke, Andrew; Holdsworth, Ryan; Kuo, John S.; Field, Aaron S.; Baskaya, Mustafa K.; Moritz, Chad H.; Meyerand, M. Elizabeth; Prabhakaran, Vivek

    2014-01-01

    Background and purpose Functional magnetic resonance imaging (fMRI) is a non-invasive pre-surgical tool used to assess localization and lateralization of language function in brain tumor and vascular lesion patients in order to guide neurosurgeons as they devise a surgical approach to treat these lesions. We investigated the effect of varying the statistical thresholds as well as the type of language tasks on functional activation patterns and language lateralization. We hypothesized that language lateralization indices (LIs) would be threshold- and task-dependent. Materials and methods Imaging data were collected from brain tumor patients (n = 67, average age 48 years) and vascular lesion patients (n = 25, average age 43 years) who received pre-operative fMRI scanning. Both patient groups performed expressive (antonym and/or letter-word generation) and receptive (tumor patients performed text-reading; vascular lesion patients performed text-listening) language tasks. A control group (n = 25, average age 45 years) performed the letter-word generation task. Results Brain tumor patients showed left-lateralization during the antonym-word generation and text-reading tasks at high threshold values and bilateral activation during the letter-word generation task, irrespective of the threshold values. Vascular lesion patients showed left-lateralization during the antonym and letter-word generation, and text-listening tasks at high threshold values. Conclusion Our results suggest that the type of task and the applied statistical threshold influence LI and that the threshold effects on LI may be task-specific. Thus identifying critical functional regions and computing LIs should be conducted on an individual subject basis, using a continuum of threshold values with different tasks to provide the most accurate information for surgical planning to minimize post-operative language deficits. PMID:25685705
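
    A minimal sketch of a threshold-dependent lateralization index, LI = (L - R)/(L + R) computed from counts of suprathreshold voxels, is given below; the simulated voxel statistics and this particular LI definition are assumptions for illustration and may differ from the computation used in the study.

```python
import numpy as np

def lateralization_index(left_vals, right_vals, threshold):
    """LI = (L - R) / (L + R), where L and R are the numbers of voxels in
    the left and right hemisphere ROIs whose statistic exceeds `threshold`.
    Positive values indicate left-lateralization."""
    L = np.sum(np.asarray(left_vals) > threshold)
    R = np.sum(np.asarray(right_vals) > threshold)
    return (L - R) / (L + R) if (L + R) > 0 else np.nan

# Hypothetical t-statistics for voxels in homologous language ROIs
rng = np.random.default_rng(3)
left = rng.normal(3.0, 1.5, size=5000)    # assumed stronger left activation
right = rng.normal(2.2, 1.5, size=5000)

for t in (2.0, 3.0, 4.0, 5.0):            # LI as a function of threshold
    print(t, round(lateralization_index(left, right, t), 3))
```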

  12. Phase-locking transition in a chirped superconducting Josephson resonator.

    PubMed

    Naaman, O; Aumentado, J; Friedland, L; Wurtele, J S; Siddiqi, I

    2008-09-12

    We observe a sharp threshold for dynamic phase locking in a high-Q transmission line resonator embedded with a Josephson tunnel junction, and driven with a purely ac, chirped microwave signal. When the drive amplitude is below a critical value, which depends on the chirp rate and is sensitive to the junction critical current I0, the resonator is only excited near its linear resonance frequency. For a larger amplitude, the resonator phase locks to the chirped drive and its amplitude grows until a deterministic maximum is reached. Near threshold, the oscillator evolves smoothly in one of two diverging trajectories, providing a way to discriminate small changes in I0 with a nonswitching detector, with potential applications in quantum state measurement.

  13. Quantitative Observation of Threshold Defect Behavior in Memristive Devices with Operando X-ray Microscopy.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Huajun; Dong, Yongqi; Cherukara, Matthew J.

    Memristive devices are an emerging technology that enables both rich interdisciplinary science and novel device functionalities, such as nonvolatile memories and nanoionics-based synaptic electronics. Recent work has shown that the reproducibility and variability of the devices depend sensitively on the defect structures created during electroforming as well as their continued evolution under dynamic electric fields. However, a fundamental principle guiding the material design of defect structures is still lacking due to the difficulty in understanding dynamic defect behavior under different resistance states. Here, we unravel the existence of threshold behavior by studying model, single-crystal devices: resistive switching requires that the pristine oxygen vacancy concentration reside near a critical value. Theoretical calculations show that the threshold oxygen vacancy concentration lies at the boundary for both electronic and atomic phase transitions. Through operando, multimodal X-ray imaging, we show that field tuning of the local oxygen vacancy concentration below or above the threshold value is responsible for switching between different electrical states. These results provide a general strategy for designing functional defect structures around threshold concentrations to create dynamic, field-controlled phases for memristive devices.

  14. The ventilatory anaerobic threshold is related to, but is lower than, the critical power, but does not explain exercise tolerance at this workrate.

    PubMed

    Okudan, N; Gökbel, H

    2006-03-01

    The aim of the present study was to investigate the relationships between critical power (CP), maximal aerobic power and the anaerobic threshold, and whether exercise time to exhaustion and work at the CP can be used as an index in the determination of endurance. An incremental maximal cycle exercise test was performed on 30 untrained males aged 18-22 years. Lactate analysis was carried out on capillary blood samples every 2 minutes. From the gas exchange parameters, heart rate and lactate values, the ventilatory anaerobic thresholds, heart rate deflection point and onset of blood lactate accumulation were calculated. CP was determined with the linear work-time method using three loads. The subjects exercised until they could no longer maintain a cadence above 24 rpm at their CP, and the exercise time to exhaustion was determined. CP was lower than the power output corresponding to VO2max and higher than the power outputs corresponding to the anaerobic thresholds. CP was correlated with VO2max and the anaerobic threshold. Exercise time to exhaustion and work at CP were not correlated with VO2max or the anaerobic threshold. Because CP correlated with VO2max and the anaerobic threshold, whereas exercise time to exhaustion and work at the CP did not, we conclude that exercise time to exhaustion and work at the CP cannot be used as an index in the determination of endurance.
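
    The linear work-time method mentioned above can be sketched as a simple regression of total work on time to exhaustion, W = CP*t + W'; the trial data below are made up for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical exhaustive trials at three constant power outputs:
# time to exhaustion (s) and total work done (J) for each trial.
t_lim = np.array([150.0, 300.0, 720.0])
power = np.array([300.0, 250.0, 210.0])
work = power * t_lim                      # W_lim = P * t_lim

# Linear work-time model: W_lim = CP * t_lim + W'
cp, w_prime = np.polyfit(t_lim, work, 1)
print(f"critical power ~ {cp:.1f} W, W' ~ {w_prime / 1000:.1f} kJ")
```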

  15. Phase transition of Boolean networks with partially nested canalizing functions

    NASA Astrophysics Data System (ADS)

    Jansen, Kayse; Matache, Mihaela Teodora

    2013-07-01

    We generate the critical condition for the phase transition of a Boolean network governed by partially nested canalizing functions for which a fraction of the inputs are canalizing, while the remaining non-canalizing inputs obey a complementary threshold Boolean function. Past studies have considered the stability of fully or partially nested canalizing functions paired with random choices of the complementary function. In some of those studies conflicting results were found with regard to the presence of chaotic behavior. Moreover, those studies focus mostly on ergodic networks in which initial states are assumed equally likely. We relax that assumption and find the critical condition for the sensitivity of the network under a non-ergodic scenario. We use the proposed mathematical model to determine parameter values for which phase transitions from order to chaos occur. We generate Derrida plots to show that the mathematical model matches the actual network dynamics. The phase transition diagrams indicate that both order and chaos can occur, and that certain parameters induce a larger range of values leading to order versus chaos. The edge-of-chaos curves are identified analytically and numerically. It is shown that the depth of canalization does not cause major dynamical changes once certain thresholds are reached; these thresholds are fairly small in comparison to the connectivity of the nodes.

  16. Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation

    PubMed Central

    Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin

    2012-01-01

    Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721

  17. Interlaminar shear fracture toughness and fatigue thresholds for composite materials

    NASA Technical Reports Server (NTRS)

    Obrien, T. Kevin; Murri, Gretchen B.; Salpekar, Satish A.

    1987-01-01

    Static and cyclic end-notched flexure tests were conducted on a graphite epoxy, a glass epoxy, and a graphite thermoplastic to determine their interlaminar shear fracture toughness and fatigue thresholds for delamination in terms of limiting values of the mode II strain energy release rate, G-II, for delamination growth. The influence of precracking and data reduction schemes is discussed. Finite element analysis indicated that the beam theory calculation for G-II with the transverse shear contribution included was reasonably accurate over the entire range of crack lengths. Cyclic loading significantly reduced the critical G-II for delamination. A threshold value of the maximum cyclic G-II below which no delamination occurred after one million cycles was identified for each material. Also, residual static toughness tests were conducted on glass epoxy specimens that had undergone one million cycles without delamination. A linear mixed-mode delamination criterion was used to characterize the static toughness of several composite materials; however, a total G threshold criterion appears to characterize the fatigue delamination durability of composite materials with a wide range of static toughness.

  18. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

    Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10⁻⁷.

  19. Critical phenomena in the general spherically symmetric Einstein-Yang-Mills system

    NASA Astrophysics Data System (ADS)

    Maliborski, Maciej; Rinne, Oliver

    2018-02-01

    We study critical behavior in gravitational collapse of a general spherically symmetric Yang-Mills field coupled to the Einstein equations. Unlike the magnetic ansatz used in previous numerical work, the general Yang-Mills connection has two degrees of freedom in spherical symmetry. This fact changes the phenomenology of critical collapse dramatically. The magnetic sector features both type I and type II critical collapse, with universal critical solutions. In contrast, in the general system type I disappears and the critical behavior at the threshold between dispersal and black hole formation is always type II. We obtain values of the mass scaling and echoing exponents close to those observed in the magnetic sector, however we find some indications that the critical solution differs from the purely magnetic discretely self-similar attractor and exact self-similarity and universality might be lost. The additional "type III" critical phenomenon in the magnetic sector, where black holes form on both sides of the threshold but the Yang-Mills potential is in different vacuum states and there is a mass gap, also disappears in the general system. We support our dynamical numerical simulations with calculations in linear perturbation theory; for instance, we compute quasi-normal modes of the unstable attractor (the Bartnik-McKinnon soliton) in type I collapse in the magnetic sector.

  20. Research on critical groundwater level under the threshold value of land subsidence in the typical region of Beijing

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Liu, J.-R.; Luo, Y.; Yang, Y.; Tian, F.; Lei, K.-C.

    2015-11-01

    Groundwater in Beijing has been over-exploited for a long time, causing the groundwater level to decline continuously and the land subsidence area to expand, which has constrained sustainable economic and social development. Long-term studies show a good space-time correspondence between groundwater level and land subsidence. To provide a scientific basis for subsequent land subsidence prevention and control, a quantitative relationship between groundwater level and settlement is needed. Multiple linear regression models are set up using long monitoring series of layered water table and settlement data from the Tianzhu monitoring station. The results show that layered settlement is closely related to the water table, water level variation and amplitude, especially the water table. Finally, according to the threshold values in China's land subsidence prevention and control plan (45, 30 and 25 mm), the minimum allowable layered water level in this region when settlement reaches the threshold is calculated to lie between -18.448 and -10.082 m. The results provide a reasonable and workable groundwater level control target for rational adjustment of the exploited groundwater horizons in the future.
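
    A one-variable sketch of the regression-and-inversion idea is shown below; the monitoring values, the single-predictor linear form and the 45 mm target are illustrative assumptions (the study used multiple predictors and layer-specific data).

```python
import numpy as np

# Hypothetical monitoring series for one compressible layer:
# water table level (m, negative below a datum) and annual settlement (mm).
water_table = np.array([-6.0, -8.5, -10.0, -12.5, -15.0, -17.5, -20.0])
settlement = np.array([12.0, 18.0, 24.0, 31.0, 38.0, 46.0, 55.0])

# Simple linear fit settlement = a * water_table + b (the study used
# multi-linear regressions with additional predictors such as level
# variation and amplitude; this is a one-variable sketch).
a, b = np.polyfit(water_table, settlement, 1)

# Invert the fitted relation at a settlement control target, e.g. 45 mm,
# to obtain the corresponding critical groundwater level.
target = 45.0
critical_level = (target - b) / a
print(f"critical water level for {target} mm settlement: {critical_level:.2f} m")
```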

  1. A Critical Analysis of Grain-Size and Yield-Strength Dependence of Near-Threshold Fatigue-Crack Growth in Steels.

    DTIC Science & Technology

    1981-07-15

    strength (σ_ys) or grain size (ℓ) -- as is the case, for example, with a low-carbon ferritic steel -- it is unmistakably clear that for the gamut of...steels examined (15 cases), the transition points do not order on the basis of either σ_ys or ℓ alone. Rather, values of ΔK_T for the gamut of steels...the search for a systematic ordering of near-threshold fatigue crack growth rates that pertains to the whole gamut of steels. SURVEY AND ANALYSIS

  2. Conductivity fluctuations in polymer's networks

    NASA Astrophysics Data System (ADS)

    Samukhin, A. N.; Prigodin, V. N.; Jastrabík, L.

    1998-01-01

    A polymer network is treated as an anisotropic fractal with fractional dimensionality D = 1 + ε close to one. A percolation model on such a fractal is studied. Using the real-space renormalization group approach of Migdal and Kadanoff, we find the threshold value and all the critical exponents of the percolation model to be strongly nonanalytic functions of ε; e.g., the critical exponent of the conductivity is obtained as ε^(-2) exp(-1 - 1/ε). The main part of the finite-size conductivity distribution function at the threshold was found to be universal if expressed in terms of the fluctuating variable which is proportional to a large power of the conductivity, but with an ε-dependent low-conductivity cut-off. Its reduced central moments are of the order of e^(-1/ε) up to a very high order.

  3. Blister formation at subcritical doses in tungsten irradiated by MeV protons

    NASA Astrophysics Data System (ADS)

    Gavish Segev, I.; Yahel, E.; Silverman, I.; Makov, G.

    2017-12-01

    The material response of tungsten to irradiation by MeV protons has been studied experimentally, in particular with respect to bubble and blister formation. Tungsten samples were irradiated by 2.2 MeV protons at the Soreq Applied Research Accelerator Facility (SARAF) to doses of the order of 10¹⁷ protons/cm², which are below the reported critical threshold for blister formation derived from keV range irradiation studies. Large, well-developed blisters are observed indicating that for MeV range protons the critical threshold is at least an order of magnitude lower than the lowest value reported previously. The effects of fluence, flux, and corresponding temperature on the distribution and characteristics of the obtained blisters were studied. FIB cross sections of several blisters exposed their depth and structure.

  4. On the Estimation of the Cost-Effectiveness Threshold: Why, What, How?

    PubMed

    Vallejo-Torres, Laura; García-Lorenzo, Borja; Castilla, Iván; Valcárcel-Nazco, Cristina; García-Pérez, Lidia; Linertová, Renata; Polentinos-Castro, Elena; Serrano-Aguilar, Pedro

    2016-01-01

    Many health care systems claim to incorporate the cost-effectiveness criterion in their investment decisions. Information on the system's willingness to pay per effectiveness unit, normally measured as quality-adjusted life-years (QALYs), however, is not available in most countries. This is partly because of the controversy that remains around the use of a cost-effectiveness threshold, about what the threshold ought to represent, and about the appropriate methodology to arrive at a threshold value. The aim of this article was to identify and critically appraise the conceptual perspectives and methodologies used to date to estimate the cost-effectiveness threshold. We provided an in-depth discussion of different conceptual views and undertook a systematic review of empirical analyses. Identified studies were categorized into the two main conceptual perspectives that argue that the threshold should reflect 1) the value that society places on a QALY and 2) the opportunity cost of investment to the system given budget constraints. These studies showed different underpinning assumptions, strengths, and limitations, which are highlighted and discussed. Furthermore, this review allowed us to compare the cost-effectiveness threshold estimates derived from different types of studies. We found that thresholds based on society's valuation of a QALY are generally larger than thresholds resulting from estimating the opportunity cost to the health care system. This implies that some interventions with positive social net benefits, as informed by individuals' preferences, might not be an appropriate use of resources under fixed budget constraints. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. General relativistic calculations for white dwarfs

    NASA Astrophysics Data System (ADS)

    Mathew, Arun; Nandy, Malay K.

    2017-05-01

    The mass-radius relations for white dwarfs are investigated by solving the Newtonian as well as Tolman-Oppenheimer-Volkoff (TOV) equations for hydrostatic equilibrium assuming the electron gas to be non-interacting. We find that the Newtonian limiting mass of 1.4562 M⊙ is modified to 1.4166 M⊙ in the general relativistic case for ⁴He (and ¹²C) white dwarfs. Using the same general relativistic treatment, the critical mass for ⁵⁶Fe white dwarfs is obtained as 1.2230 M⊙. In addition, departure from the ideal degenerate equation of state (EoS) is accounted for by considering Salpeter's EoS along with the TOV equation, yielding slightly lower values for the critical masses, namely 1.4081 M⊙ for ⁴He, 1.3916 M⊙ for ¹²C and 1.1565 M⊙ for ⁵⁶Fe white dwarfs. We also compare the critical densities for gravitational instability with the neutronization threshold densities to find that ⁴He and ¹²C white dwarfs are stable against neutronization with the critical values of 1.4081 M⊙ and 1.3916 M⊙, respectively. However, the critical masses for ¹⁶O, ²⁰Ne, ²⁴Mg, ²⁸Si, ³²S and ⁵⁶Fe white dwarfs are lower due to neutronization. Corresponding to their central densities for neutronization thresholds, we obtain their maximum stable masses due to neutronization by solving the TOV equation coupled with the Salpeter EoS.

  6. Inhibitory neurons promote robust critical firing dynamics in networks of integrate-and-fire neurons.

    PubMed

    Lu, Zhixin; Squires, Shane; Ott, Edward; Girvan, Michelle

    2016-12-01

    We study the firing dynamics of a discrete-state and discrete-time version of an integrate-and-fire neuronal network model with both excitatory and inhibitory neurons. When the integer-valued state of a neuron exceeds a threshold value, the neuron fires, sends out state-changing signals to its connected neurons, and returns to the resting state. In this model, a continuous phase transition from non-ceaseless firing to ceaseless firing is observed. At criticality, power-law distributions of avalanche size and duration with the previously derived exponents, -3/2 and -2, respectively, are observed. Using a mean-field approach, we show analytically how the critical point depends on model parameters. Our main result is that the combined presence of both inhibitory neurons and integrate-and-fire dynamics greatly enhances the robustness of critical power-law behavior (i.e., there is an increased range of parameters, including both sub- and supercritical values, for which several decades of power-law behavior occurs).
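
    A minimal sketch of a discrete-state, discrete-time integrate-and-fire network with excitatory and inhibitory neurons is given below; the network size, threshold, out-degree and external drive are illustrative assumptions, and the update rule is a simplification rather than the paper's exact model.

```python
import numpy as np

# Minimal sketch of a discrete-state integrate-and-fire network with
# excitatory and inhibitory neurons (parameters chosen to stay subcritical).
rng = np.random.default_rng(4)
N, k, frac_inh, theta = 1000, 10, 0.2, 8           # size, out-degree, inhibitory fraction, threshold
sign = np.where(rng.random(N) < frac_inh, -1, +1)  # +1 excitatory, -1 inhibitory
targets = [rng.choice(N, size=k, replace=False) for _ in range(N)]

state = rng.integers(0, theta, size=N)
for step in range(200):
    state[rng.integers(N)] += 1                    # external drive: one random unit input
    firing = np.where(state >= theta)[0]
    avalanche = 0
    while firing.size:                             # propagate until activity stops
        avalanche += firing.size
        state[firing] = 0                          # firing neurons return to rest
        for i in firing:
            state[targets[i]] += sign[i]           # signed state-changing signals
        state = np.maximum(state, 0)               # states stay non-negative
        firing = np.where(state >= theta)[0]
    if avalanche:
        print(step, avalanche)                     # avalanche size per driven step
```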

  7. Parametric Excitation of Marangoni Instability in a Heated Thin Layer Covered by Insoluble Surfactant

    NASA Astrophysics Data System (ADS)

    Mikishev, Alexander B.; Nepomnyashchy, Alexander A.

    2018-05-01

    The paper presents an analysis of the impact of vertical periodic vibrations on the long-wavelength Marangoni instability in a liquid layer with poorly conducting boundaries in the presence of an insoluble surfactant on the deformable gas-liquid interface. The layer is subject to a uniform transverse temperature gradient. Linear stability analysis is performed in order to find the critical values of the Marangoni number for both monotonic and oscillatory instability modes. Long-wave asymptotic expansions are used. At the leading order, the critical values are independent of the vibration parameters; at the next order of approximation we find that vibration raises the stability thresholds.

  8. Self-organized criticality in a cold plasma

    NASA Astrophysics Data System (ADS)

    Alex, Prince; Carreras, Benjamin Andres; Arumugam, Saravanan; Sinha, Suraj Kumar

    2017-12-01

    We present direct evidence for the existence of self-organized critical behavior in a cold plasma. A multiple anodic double layer structure generated in a double discharge plasma setup shows critical behavior for the anode bias above a threshold value. Analysis of the floating potential fluctuations reveals the existence of long-range time correlations and power-law behavior in the tail of the probability distribution function of the fluctuations. The measured Hurst exponent and the power-law tail in the rank function are a strong indication of the self-organized critical behavior of the system and hence provide a condition under which such complexity arises in a cold plasma.

  9. Regional rainfall thresholds for landslide occurrence using a centenary database

    NASA Astrophysics Data System (ADS)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Quaresma, Ivânia

    2017-04-01

    Rainfall is one of the most important triggering factors for landslide occurrence worldwide. The relation between rainfall and landslide occurrence is complex, and some approaches have focused on the identification of rainfall thresholds, i.e., critical rainfall values that, when exceeded, can initiate landslide activity. In line with these approaches, this work proposes and validates rainfall thresholds for the Lisbon region (Portugal), using a centenary landslide database associated with a centenary daily rainfall database. The main objectives of the work are the following: i) to compute antecedent rainfall thresholds using linear and potential regression; ii) to define lower limit and upper limit rainfall thresholds; iii) to estimate the probability of critical rainfall conditions associated with landslide events; and iv) to assess the threshold performance using receiver operating characteristic (ROC) metrics. In this study we consider the DISASTER database, which lists landslides that caused fatalities, injuries, missing, evacuated or homeless people in Portugal from 1865 to 2010. The DISASTER database was compiled from several Portuguese daily and weekly newspapers. Using the same newspaper sources, the DISASTER database was recently updated to also include landslides that did not cause any human damage, and these were also considered in this study. The daily rainfall data were collected at the Lisboa-Geofísico meteorological station. This station was selected considering the quality and completeness of the rainfall data, with records that started in 1864. The methodology adopted included the computation, for each landslide event, of the cumulative antecedent rainfall for different durations (1 to 90 consecutive days). In a second step, for each combination of rainfall quantity-duration, the return period was estimated using the Gumbel probability distribution. The pair (quantity-duration) with the highest return period was considered as the critical rainfall combination responsible for triggering the landslide event. Only events whose critical rainfall combinations have a return period above 3 years were included. This criterion reduces the likelihood of including events whose triggering factor was other than rainfall. The rainfall quantity-duration threshold for the Lisbon region was first defined using linear and potential regression. Because this threshold allows false negatives (i.e., events below the threshold), lower limit and upper limit rainfall thresholds were also identified. These limits were defined empirically by establishing the quantity-duration combinations below which no landslides were recorded (lower limit) and the quantity-duration combinations above which only landslides were recorded, without any false positives (upper limit). The zone between the lower limit and upper limit rainfall thresholds was analysed using a probabilistic approach, defining the uncertainty of each critical rainfall condition in the triggering of landslides. Finally, the performance of the thresholds obtained in this study was assessed using ROC metrics. This work was supported by the project FORLAND - Hydrogeomorphologic risk in Portugal: driving forces and application for land use planning [grant number PTDC/ATPGEO/1660/2014] funded by the Portuguese Foundation for Science and Technology (FCT), Portugal. Sérgio Cruz Oliveira is a post-doc fellow of the FCT [grant number SFRH/BPD/85827/2012].
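
    A rough sketch of the return-period computation described above is given below: for one event it scans durations, fits a Gumbel distribution to annual maxima of the running rainfall sum, and keeps the duration with the largest return period; the synthetic rainfall record and the simple year slicing are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def critical_combination(daily_rain, event_index, durations=range(1, 91)):
    """For one landslide event, find the antecedent-rainfall duration with
    the largest Gumbel return period (illustrative reduction of the method).

    daily_rain  : 1-D array of daily rainfall for the whole record
    event_index : index of the event day within daily_rain
    """
    best = (None, 0.0, 0.0)
    years = len(daily_rain) // 365
    for d in durations:
        # cumulative rainfall over the d days up to and including the event day
        event_sum = daily_rain[event_index - d + 1:event_index + 1].sum()
        # annual maxima of the d-day running sum, used to fit the Gumbel law
        running = np.convolve(daily_rain, np.ones(d), mode="valid")
        annual_max = [running[y * 365:(y + 1) * 365].max() for y in range(years)]
        loc, scale = stats.gumbel_r.fit(annual_max)
        rp = 1.0 / max(1e-12, 1.0 - stats.gumbel_r.cdf(event_sum, loc, scale))
        if rp > best[1]:
            best = (d, rp, event_sum)
    return best  # (duration in days, return period in years, rainfall amount)

rng = np.random.default_rng(5)
rain = rng.gamma(shape=0.4, scale=12.0, size=365 * 30)   # synthetic 30-year record
print(critical_combination(rain, event_index=365 * 29 + 100))
```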

  10. DO3SE model applicability and O3 flux performance compared to AOT40 for an O3-sensitive tropical tree species (Psidium guajava L. 'Paluma').

    PubMed

    Assis, Pedro I L S; Alonso, Rocío; Meirelles, Sérgio T; Moraes, Regina M

    2015-07-01

    Phytotoxic ozone (O3) levels have been recorded in the Metropolitan Region of São Paulo (MRSP). Flux-based critical levels for O3 through stomata have been adopted for some northern hemisphere species, showing better accuracy than accumulated ozone exposure above a threshold of 40 ppb (AOT40). In Brazil, critical levels for vegetation protection against O3 adverse effects do not exist. The study aimed to investigate the applicability of the O3 deposition model (Deposition of Ozone for Stomatal Exchange (DO3SE)) to an O3-sensitive tropical tree species (Psidium guajava L. 'Paluma') under the MRSP environmental conditions, which are very unstable, and to assess the performance of O3 flux and AOT40 in relation to O3-induced leaf injuries. Stomatal conductance (g_s) parameterization for 'Paluma' was carried out and used to calculate different rate thresholds (from 0 to 5 nmol O3 m⁻² projected leaf area (PLA) s⁻¹) for the phytotoxic ozone dose (POD). The model performance was assessed through the relationship between the measured and modeled g_s. Leaf injuries were analyzed and associated with POD and AOT40. The model performance was satisfactory and significant (R² = 0.56; P < 0.0001; root-mean-square error (RMSE) = 116). As expected, high AOT40 values did not result in high POD values. Although high POD values do not always account for more injuries, POD0 showed better performance than AOT40 and the other rate thresholds for POD. Further investigation is necessary to improve our model and to check whether there is a critical level of ozone above which leaf injuries arise. The conclusion is that the DO3SE model for 'Paluma' is applicable in the MRSP as well as in temperate regions and may contribute to future directives.
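
    For reference, the AOT40 index mentioned above accumulates hourly ozone excesses over 40 ppb during daylight hours; the sketch below shows one way to compute it, with the 50 W m^-2 daylight criterion and the synthetic data as assumptions.

```python
import numpy as np

def aot40(hourly_o3_ppb, hourly_radiation=None, rad_threshold=50.0):
    """AOT40 in ppb*h: sum of (O3 - 40) over hours with O3 > 40 ppb,
    usually restricted to daylight hours (approximated here as hours with
    global radiation above about 50 W m^-2 when a radiation series is given)."""
    o3 = np.asarray(hourly_o3_ppb, dtype=float)
    exceed = np.clip(o3 - 40.0, 0.0, None)
    if hourly_radiation is not None:
        exceed = exceed * (np.asarray(hourly_radiation) > rad_threshold)
    return exceed.sum()

# Toy example: three months of synthetic hourly ozone (ppb)
rng = np.random.default_rng(6)
o3 = np.clip(rng.normal(45, 20, size=24 * 90), 0, None)
print(f"AOT40 ~ {aot40(o3) / 1000:.1f} ppm*h")
```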

  11. Effect of threshold disorder on the quorum percolation model

    NASA Astrophysics Data System (ADS)

    Monceau, Pascal; Renault, Renaud; Métens, Stéphane; Bottani, Samuel

    2016-07-01

    We study the modifications induced in the behavior of the quorum percolation model on neural networks with Gaussian in-degree when an uncorrelated Gaussian variability of the thresholds is taken into account. We derive a mean-field approach and show its relevance by carrying out explicit Monte Carlo simulations. It turns out that such a disorder shifts the position of the percolation transition, impacts the size of the giant cluster, and can even destroy the transition. Moreover, we highlight the occurrence of disorder-independent fixed points above the quorum critical value. The mean-field approach enables us to interpret these effects in terms of activation probability. A finite-size analysis enables us to show that the order parameter is weakly self-averaging with an exponent independent of the threshold disorder. Lastly, we show that the effects of the threshold and connectivity disorders cannot easily be discriminated from the measured averaged physical quantities.

  12. Experimental and theoretical characterization of deep penetration welding threshold induced by 1-μm laser

    NASA Astrophysics Data System (ADS)

    Zou, J. L.; He, Y.; Wu, S. K.; Huang, T.; Xiao, R. S.

    2015-12-01

    The deep penetration welding threshold (DPWT) is the critical value that marks the welding mode transition from thermal conduction to deep penetration. The objective of this research is to clarify the DPWT induced by lasers with a wavelength of 1 μm (1-μm lasers), based on experimental observation and theoretical analysis. The experimental results indicated that the DPWT is better described by the ratio of laser power to laser spot diameter (P/d) than by the laser power density (P/S). The evaporation threshold was smaller than the DPWT, while the jump threshold of the evaporated mass flux at the molten pool surface was consistent with the DPWT. Based on the force balance between the evaporation recoil pressure and the surface tension pressure at the gas-liquid interface of the molten pool, as well as the temperature field, we developed a self-focusing model that further confirmed the experimental results.

  13. Derivation of soil screening thresholds to protect chisel-toothed kangaroo rat from uranium mine waste in northern Arizona

    USGS Publications Warehouse

    Hinck, Jo E.; Linder, Greg L.; Otton, James K.; Finger, Susan E.; Little, Edward E.; Tillitt, Donald E.

    2013-01-01

    Chemical data from soil and weathered waste material samples collected from five uranium mines north of the Grand Canyon (three reclaimed, one mined but not reclaimed, and one never mined) were used in a screening-level risk analysis for the Arizona chisel-toothed kangaroo rat (Dipodomys microps leucotis); risks from radiation exposure were not evaluated. Dietary toxicity reference values were used to estimate soil-screening thresholds presenting risk to kangaroo rats. Sensitivity analyses indicated that body weight critically affected outcomes of exposed-dose calculations; juvenile kangaroo rats were more sensitive to the inorganic constituent toxicities than adult kangaroo rats. Species-specific soil-screening thresholds were derived for arsenic (137 mg/kg), cadmium (16 mg/kg), copper (1,461 mg/kg), lead (1,143 mg/kg), nickel (771 mg/kg), thallium (1.3 mg/kg), uranium (1,513 mg/kg), and zinc (731 mg/kg) using toxicity reference values that incorporate expected chronic field exposures. Inorganic contaminants in soils within and near the mine areas generally posed minimal risk to kangaroo rats. Most exceedances of soil thresholds were for arsenic and thallium and were associated with weathered mine wastes.

  14. A Gompertz population model with Allee effect and fuzzy initial values

    NASA Astrophysics Data System (ADS)

    Amarti, Zenia; Nurkholipah, Nenden Siti; Anggriani, Nursanti; Supriatna, Asep K.

    2018-03-01

    Growth and population dynamics models are important tools used in preparing a good management for society to predict the future of population or species. This has been done by various known methods, one among them is by developing a mathematical model that describes population growth. Models are usually formed into differential equations or systems of differential equations, depending on the complexity of the underlying properties of the population. One example of biological complexity is Allee effect. It is a phenomenon showing a high correlation between very small population size and the mean individual fitness of the population. In this paper the population growth model used is the Gompertz equation model by considering the Allee effect on the population. We explore the properties of the solution to the model numerically using the Runge-Kutta method. Further exploration is done via fuzzy theoretical approach to accommodate uncertainty of the initial values of the model. It is known that an initial value greater than the Allee threshold will cause the solution rises towards carrying capacity asymptotically. However, an initial value smaller than the Allee threshold will cause the solution decreases towards zero asymptotically, which means the population is eventually extinct. Numerical solutions show that modeling uncertain initial value of the critical point A (the Allee threshold) with a crisp initial value could cause the extinction of population of a certain possibilistic degree, depending on the predetermined membership function of the initial value.
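
    A minimal sketch of the crisp (non-fuzzy) version of the model is given below: a Gompertz growth law with one common way of adding an Allee threshold, integrated with classical fourth-order Runge-Kutta; the specific functional form and parameter values are assumptions and may differ from the paper's formulation.

```python
import numpy as np

def gompertz_allee(N, r=0.5, K=100.0, A=20.0):
    """One common way of adding an Allee threshold A to the Gompertz
    growth law dN/dt = r*N*ln(K/N); the exact form in the paper may differ."""
    N = max(N, 1e-12)          # guard against underflow near extinction
    return r * N * np.log(K / N) * (N / A - 1.0)

def rk4(f, N0, dt=0.01, steps=4000):
    """Classical fourth-order Runge-Kutta integration of dN/dt = f(N)."""
    N = N0
    traj = [N]
    for _ in range(steps):
        k1 = f(N)
        k2 = f(N + 0.5 * dt * k1)
        k3 = f(N + 0.5 * dt * k2)
        k4 = f(N + dt * k3)
        N = N + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj.append(N)
    return np.array(traj)

# Initial values just below and just above the Allee threshold A = 20:
# the first trajectory decays towards extinction, the second approaches K.
print(rk4(gompertz_allee, 18.0)[-1], rk4(gompertz_allee, 22.0)[-1])
```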

  15. Mitochondrial threshold effects.

    PubMed Central

    Rossignol, Rodrigue; Faustin, Benjamin; Rocher, Christophe; Malgat, Monique; Mazat, Jean-Pierre; Letellier, Thierry

    2003-01-01

    The study of mitochondrial diseases has revealed dramatic variability in the phenotypic presentation of mitochondrial genetic defects. To attempt to understand this variability, different authors have studied energy metabolism in transmitochondrial cell lines carrying different proportions of various pathogenic mutations in their mitochondrial DNA. The same kinds of experiments have been performed on isolated mitochondria and on tissue biopsies taken from patients with mitochondrial diseases. The results have shown that, in most cases, phenotypic manifestation of the genetic defect occurs only when a threshold level is exceeded, and this phenomenon has been named the 'phenotypic threshold effect'. Subsequently, several authors showed that it was possible to inhibit considerably the activity of a respiratory chain complex, up to a critical value, without affecting the rate of mitochondrial respiration or ATP synthesis. This phenomenon was called the 'biochemical threshold effect'. More recently, quantitative analysis of the effects of various mutations in mitochondrial DNA on the rate of mitochondrial protein synthesis has revealed the existence of a 'translational threshold effect'. In this review these different mitochondrial threshold effects are discussed, along with their molecular bases and the roles that they play in the presentation of mitochondrial diseases. PMID:12467494

  16. Stream amphibians as metrics of critical biological thresholds in the Pacific Northwest, U.S.A.: a response to Kroll et al.

    Treesearch

    H. H. Welsh; G. R. Hodgson

    2009-01-01

    1. Kroll, Hayes & MacCracken (in press) Concerns regarding the use of amphibians as metrics of critical biological thresholds: a comment on Welsh and Hodgson 2008. Freshwater Biology, criticised our paper [Welsh & Hodgson (2008) Amphibians as metrics of critical biological thresholds in forested headwater streams of the...

  17. Single event upset vulnerability of selected 4K and 16K CMOS static RAM's

    NASA Technical Reports Server (NTRS)

    Kolasinski, W. A.; Koga, R.; Blake, J. B.; Brucker, G.; Pandya, P.; Petersen, E.; Price, W.

    1982-01-01

    Upset thresholds for bulk CMOS and CMOS/SOS RAMS were deduced after bombardment of the devices with 140 MeV Kr, 160 MeV Ar, and 33 MeV O beams in a cyclotron. The trials were performed to test prototype devices intended for space applications, to relate feature size to the critical upset charge, and to check the validity of computer simulation models. The tests were run on 4 and 1 K memory cells with 6 transistors, in either hardened or unhardened configurations. The upset cross sections were calculated to determine the critical charge for upset from the soft errors observed in the irradiated cells. Computer simulations of the critical charge were found to deviate from the experimentally observed variation of the critical charge as the square of the feature size. Modeled values of series resistors decoupling the inverter pairs of memory cells showed that above some minimum resistance value a small increase in resistance produces a large increase in the critical charge, which the experimental data showed to be of questionable validity unless the value is made dependent on the maximum allowed read-write time.

  18. The importance of reference materials in doping-control analysis.

    PubMed

    Mackay, Lindsey G; Kazlauskas, Rymantas

    2011-08-01

    Currently a large range of pure substance reference materials are available for calibration of doping-control methods. These materials enable traceability to the International System of Units (SI) for the results generated by World Anti-Doping Agency (WADA)-accredited laboratories. Only a small number of prohibited substances have threshold limits for which quantification is highly important. For these analytes only the highest quality reference materials that are available should be used. Many prohibited substances have no threshold limits and reference materials provide essential identity confirmation. For these reference materials the correct identity is critical and the methods used to assess identity in these cases should be critically evaluated. There is still a lack of certified matrix reference materials to support many aspects of doping analysis. However, in key areas a range of urine matrix materials have been produced for substances with threshold limits, for example 19-norandrosterone and testosterone/epitestosterone (T/E) ratio. These matrix-certified reference materials (CRMs) are an excellent independent means of checking method recovery and bias and will typically be used in method validation and then regularly as quality-control checks. They can be particularly important in the analysis of samples close to threshold limits, in which measurement accuracy becomes critical. Some reference materials for isotope ratio mass spectrometry (IRMS) analysis are available and a matrix material certified for steroid delta values is currently under production. In other new areas, for example the Athlete Biological Passport, peptide hormone testing, designer steroids, and gene doping, reference material needs still need to be thoroughly assessed and prioritised.

  19. On flows of viscoelastic fluids under threshold-slip boundary conditions

    NASA Astrophysics Data System (ADS)

    Baranovskii, E. S.

    2018-03-01

    We investigate a boundary-value problem for the steady isothermal flow of an incompressible viscoelastic fluid of Oldroyd type in a 3D bounded domain with impermeable walls. We use the Fujita threshold-slip boundary condition. This condition states that the fluid can slip along a solid surface when the shear stresses reach a certain critical value; otherwise the slipping velocity is zero. Assuming that the flow domain is not rotationally symmetric, we prove an existence theorem for the corresponding slip problem in the framework of weak solutions. The proof uses methods for solving variational inequalities with pseudo-monotone operators and convex functionals, the method of introduction of auxiliary viscosity, as well as a passage-to-limit procedure based on energy estimates of approximate solutions, Korn’s inequality, and compactness arguments. Also, some properties and estimates of weak solutions are established.

  20. Epidemic thresholds for bipartite networks

    NASA Astrophysics Data System (ADS)

    Hernández, D. G.; Risau-Gusman, S.

    2013-11-01

    It is well known that sexually transmitted diseases (STD) spread across a network of human sexual contacts. This network is most often bipartite, as most STD are transmitted between men and women. Even though network models in epidemiology have quite a long history now, there are few general results about bipartite networks. One of them is the simple dependence, predicted using the mean field approximation, between the epidemic threshold and the average and variance of the degree distribution of the network. Here we show that going beyond this approximation can lead to qualitatively different results that are supported by numerical simulations. One of the new features, that can be relevant for applications, is the existence of a critical value for the infectivity of each population, below which no epidemics can arise, regardless of the value of the infectivity of the other population.

  1. Defining the Molecular Actions of Dietary Fatty Acids in Breast Cancer: Selective Modulation of Peroxisome Proliferator-Activated Receptor Gamma

    DTIC Science & Technology

    2006-12-01

    hypothesis testing (ANOVA) using Microsoft Excel v10.0 at α = 0.05 significance threshold. Following ANOVA, Fisher’s least significant difference (LSD) pair...critical value (α = 0.05) found in the t distribution. If the average absolute difference between any two groups was greater than the LSD critical value...McIntyre, T.M., Pontsler, A.V., Silva, A.R., St Hilaire, A., Xu, Y., Hinshaw, J.C., Zimmerman, G.A., Hama, K., Aoki, J., Arai, H., Prestwich, G.D

  2. Nonequilibrium transition induced by mass media in a model for social influence

    NASA Astrophysics Data System (ADS)

    González-Avella, J. C.; Cosenza, M. G.; Tucci, K.

    2005-12-01

    We study the effect of mass media, modeled as an applied external field, on a social system based on Axelrod’s model for the dissemination of culture. The numerical simulations show that the system undergoes a nonequilibrium phase transition between an ordered phase (homogeneous culture) specified by the mass media and a disordered (culturally fragmented) one. The critical boundary separating these phases is calculated on the parameter space of the system, given by the intensity of the mass media influence and the number of options per cultural attribute. Counterintuitively, mass media can induce cultural diversity when its intensity is above some threshold value. The nature of the phase transition changes from continuous to discontinuous at some critical value of the number of options.

  3. Evaluating critical uncertainty thresholds in a spatial model of forest pest invasion risk

    Treesearch

    Frank H. Koch; Denys Yemshanov; Daniel W. McKenney; William D. Smith

    2009-01-01

    Pest risk maps can provide useful decision support in invasive species management, but most do not adequately consider the uncertainty associated with predicted risk values. This study explores how increased uncertainty in a risk model’s numeric assumptions might affect the resultant risk map. We used a spatial stochastic model, integrating components for...

  4. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.

  5. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems.

    PubMed

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
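
    For readers unfamiliar with the LP relaxation analysed in the two records above, the following is a minimal, self-contained sketch (a hypothetical toy graph solved with scipy.optimize.linprog, not the statistical-mechanics analysis of the paper) of relaxing the minimum vertex cover IP to an LP.

```python
import numpy as np
from scipy.optimize import linprog

# Toy graph: LP relaxation of minimum vertex cover
# minimize sum_i x_i  subject to  x_i + x_j >= 1 for every edge (i, j),
# with 0 <= x_i <= 1 (the IP would additionally force x_i in {0, 1}).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

c = np.ones(n)                      # objective: minimise the cover size
A_ub = np.zeros((len(edges), n))    # -x_i - x_j <= -1  <=>  x_i + x_j >= 1
for row, (i, j) in enumerate(edges):
    A_ub[row, i] = A_ub[row, j] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
print("LP optimum:", res.fun, "relaxed solution:", res.x)
# Half-integral values (0, 1/2, 1) are typical of this relaxation; when the
# LP optimum equals the IP optimum the relaxation is tight, as the abstracts
# report for alpha = 2 below average degree c = e.
```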

  6. Simulated Critical Differences for Speech Reception Thresholds

    ERIC Educational Resources Information Center

    Pedersen, Ellen Raben; Juhl, Peter Møller

    2017-01-01

    Purpose: Critical differences state by how much 2 test results have to differ in order to be significantly different. Critical differences for discrimination scores have been available for several decades, but they do not exist for speech reception thresholds (SRTs). This study presents and discusses how critical differences for SRTs can be…

  7. Resonances and thresholds in the Rydberg-level population of multiply charged ions at solid surfaces

    NASA Astrophysics Data System (ADS)

    Nedeljković, Lj. D.; Nedeljković, N. N.

    1998-12-01

    We present a theoretical study of resonances and thresholds, two specific features of Rydberg-state formation of multiply charged ions (Z=6, 7, and 8) escaping a solid surface at intermediate velocities (v~1 a.u.) in the normal emergence geometry. The resonances are recognized in pronounced maxima of the experimentally observed population curves of Ar VIII ions for resonant values of the principal quantum number n=nres=11 and for the angular momentum quantum numbers l=1 and 2. Absence of optical signals in detectors of beam-foil experiments for n>nthr of S VI and Cl VII ions (with l=0, 1, and 2) and Ar VIII for l=0 is interpreted as a threshold phenomenon. An interplay between resonance and threshold effects is established within the framework of quantum dynamics of the low angular momentum Rydberg-state formation, based on a generalization of Demkov-Ostrovskii's charge-exchange model. In the model proposed, the Ar VIII resonances appear as a consequence of electron tunneling in the very vicinity of the ion-surface potential barrier top and at some critical ion-surface distances Rc. The observed thresholds are explained by means of a decay mechanism of ionic Rydberg states formed dominantly above the Fermi level EF of a solid conduction band. The theoretically predicted resonant and threshold values, nres and nthr of the principal quantum number n, as well as the obtained population probabilities Pnl=Pnl(v,Z), are in sufficiently good agreement with all available experimental findings.

  8. Analysis of Critical Mass in Threshold Model of Diffusion

    NASA Astrophysics Data System (ADS)

    Kim, Jeehong; Hur, Wonchang; Kang, Suk-Ho

    2012-04-01

    Why does diffusion sometimes show cascade phenomena but at other times is impeded? In addressing this question, we considered a threshold model of diffusion, focusing on the formation of a critical mass, which enables diffusion to be self-sustaining. Performing an agent-based simulation, we found that the diffusion model produces only two outcomes: Almost perfect adoption or relatively few adoptions. In order to explain the difference, we considered the various properties of network structures and found that the manner in which thresholds are arrayed over a network is the most critical factor determining the size of a cascade. On the basis of the results, we derived a threshold arrangement method effective for generation of a critical mass and calculated the size required for perfect adoption.
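
    A minimal sketch of the kind of threshold diffusion model described above, assuming a Watts-style deterministic update rule, an Erdős-Rényi graph from networkx, uniform thresholds and a randomly chosen seed set (all illustrative choices, not the paper's exact setup); it reproduces the all-or-nothing flavour of the reported outcomes.

```python
import random
import networkx as nx

def threshold_cascade(g, thresholds, seeds):
    """Deterministic threshold diffusion: a node adopts once the fraction
    of adopting neighbours reaches its threshold (Watts-style sketch)."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in g.nodes:
            if node in adopted:
                continue
            nbrs = list(g.neighbors(node))
            if nbrs and sum(n in adopted for n in nbrs) / len(nbrs) >= thresholds[node]:
                adopted.add(node)
                changed = True
    return adopted

random.seed(1)
g = nx.erdos_renyi_graph(1000, 0.005, seed=1)     # mean degree ~5 (assumption)
thresholds = {v: 0.18 for v in g.nodes}            # uniform thresholds (assumption)
seeds = random.sample(list(g.nodes), 10)           # small initial critical mass
final = threshold_cascade(g, thresholds, seeds)
# depending on parameters and seed placement, the outcome tends to be either
# near-complete adoption or only a handful of adopters
print("final adopters:", len(final))
```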

  9. Anatomy of filamentary threshold switching in amorphous niobium oxide.

    PubMed

    Li, Shuai; Liu, Xinjun; Nandi, Sanjoy Kumar; Elliman, Robert Glen

    2018-06-25

    The threshold switching behaviour of Pt/NbOx/TiN devices is investigated as a function of device area and NbOx film thickness and is shown to reveal important insight into the structure of the self-assembled switching region. The devices exhibit combined selector-memory (1S1R) behavior after an initial voltage-controlled forming process, but exhibit symmetric threshold switching when the RESET and SET currents are kept below a critical value. In this mode, the threshold and hold voltages are independent of the device area and film thickness but the threshold current (power), while independent of device area, decreases with increasing film thickness. These results are shown to be consistent with a structure in which the threshold switching volume is confined, both laterally and vertically, to the region between the residual memory filament and the TiN electrode, and where the memory filament has a core-shell structure comprising a metallic core and a semiconducting shell. The veracity of this structure is demonstrated by comparing experimental results with the predictions of a simple circuit model, and more detailed finite element simulations. These results provide further insight into the structure and operation of NbOx threshold switching devices that have application in emerging memory and neuromorphic computing fields. © 2018 IOP Publishing Ltd.

  10. Rainfall Threshold for Flash Flood Early Warning Based on Rational Equation: A Case Study of Zuojiao Watershed in Yunnan Province

    NASA Astrophysics Data System (ADS)

    Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.

    2017-12-01

    Rainfall thresholds play an important role in flash flood warning. A simple and easy method that uses the Rational Equation to calculate rainfall thresholds was proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained, and the net rainfall was calculated. Three aspects of rainfall losses, i.e. depression storage, vegetation interception, and soil infiltration, were considered. The critical rainfall was the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after accounting for the watershed soil moisture. In order to demonstrate this method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method were close to the rainfall thresholds obtained from CFFSE and were consistent with the observed rainfall during flash flood events. Thus the calculated results are reasonable and the method is effective. This study provides a quick and convenient way for grass-roots staff to calculate rainfall thresholds for flash flood warning and offers technical support for estimating rainfall thresholds.
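
    A hedged sketch of the Rational-Equation step described above: invert Q = C·i·A/3.6 for the critical rainfall intensity, convert it to a net rainfall depth, and add the three loss terms. The unit conventions and the numerical inputs are illustrative assumptions, not values from the Zuojiao case study.

```python
def critical_net_rainfall(q_crit_m3s, runoff_coeff, area_km2, duration_h):
    """Invert the Rational Equation Q = C * i * A / 3.6 (Q in m3/s,
    i in mm/h, A in km2) for the critical rainfall intensity, then
    convert it to a depth over the event duration."""
    i_crit_mmh = 3.6 * q_crit_m3s / (runoff_coeff * area_km2)
    return i_crit_mmh * duration_h          # net rainfall depth, mm

def rainfall_threshold(q_crit_m3s, runoff_coeff, area_km2, duration_h,
                       depression_mm, interception_mm, infiltration_mm):
    """Critical rainfall = net rainfall + losses (depression storage,
    vegetation interception, soil infiltration), as in the abstract."""
    net = critical_net_rainfall(q_crit_m3s, runoff_coeff, area_km2, duration_h)
    return net + depression_mm + interception_mm + infiltration_mm

# hypothetical numbers for a small watershed
print(rainfall_threshold(q_crit_m3s=85.0, runoff_coeff=0.55, area_km2=40.0,
                         duration_h=3.0, depression_mm=3.0,
                         interception_mm=2.0, infiltration_mm=25.0))
```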

  11. Identifying and assessing critical uncertainty thresholds in a forest pest risk model

    Treesearch

    Frank H. Koch; Denys Yemshanov

    2015-01-01

    Pest risk maps can provide helpful decision support for invasive alien species management, but often fail to address adequately the uncertainty associated with their predicted risk values. This chapter explores how increased uncertainty in a risk model’s numeric assumptions (i.e. its principal parameters) might affect the resulting risk map. We used a spatial...

  12. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of its geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides on these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of rainfall-induced landslides. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope equals 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by the field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with the long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling. Finally, the critical rainfall threshold of the slope can be obtained by the coupled analysis of rainfall, infiltration, seepage, and slope stability. The slope located at 50k+650 on Tainan County Road No. 174 was taken as an example; it lies in the Zeng-Wun River watershed in southern Taiwan and is an active landslide triggered by typhoon events. Coordinates for the case study site are 194925, 2567208 (TWD97). The site was selected on the basis of previous reports and geological surveys. According to the Central Weather Bureau, the annual precipitation is about 2,450 mm, the highest monthly value is in August with 630 mm, and the lowest value is in November with 13 mm. The results show that the critical rainfall threshold of the study case is around 640 mm, meaning that a warning should be issued when the accumulated rainfall exceeds 640 mm. Our preliminary results appear to be useful for rainfall-induced landslide hazard assessments. The findings are also a good reference for establishing a landslide early warning system and developing strategies to reduce future losses.
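
    The study couples rainfall, infiltration, seepage and slope stability in GeoStudio; the sketch below is only a much simpler, hypothetical infinite-slope stand-in for the same idea of scanning accumulated rainfall until the factor of safety falls to 1.0. The soil parameters and the assumed linear pore-pressure response are illustrative, not calibrated values from the case study.

```python
import math

def factor_of_safety(c_eff, phi_deg, gamma, depth, beta_deg, pore_pressure):
    """Infinite-slope factor of safety (a textbook simplification, not the
    coupled seepage/stability analysis used in the study).  Units: kPa,
    degrees, kN/m3, m, kPa."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return (c_eff + (normal - pore_pressure) * math.tan(phi)) / driving

def critical_accumulated_rainfall(pressure_per_mm, step_mm=5.0, max_mm=1500.0):
    """Scan accumulated rainfall, assuming (hypothetically) that pore
    pressure rises linearly with it, until FS drops to 1.0."""
    rain = 0.0
    while rain <= max_mm:
        fs = factor_of_safety(c_eff=20.0, phi_deg=28.0, gamma=19.0,
                              depth=6.0, beta_deg=32.0,
                              pore_pressure=pressure_per_mm * rain)
        if fs <= 1.0:
            return rain
        rain += step_mm
    return None

print("critical accumulated rainfall (mm):", critical_accumulated_rainfall(0.04))
```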

  13. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set the thresholds for extreme precipitation in a large basin. Based on the long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but the threshold value is sensitive to the size of the rainfall data series and to the selection of a percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide information on EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of daily precipitation further demonstrates that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
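
    A small sketch of the non-parametric percentile approach discussed above for a single station, with the percentile choice exposed as the arbitrary element the study criticises; the synthetic record and wet-day definition are assumptions.

```python
import numpy as np

def percentile_ept(daily_precip_mm, pct=95.0, wet_day_mm=1.0):
    """Non-parametric percentile threshold for one station: the pct-th
    percentile of wet-day precipitation.  The choice of pct (and the record
    length) drives the threshold, which is exactly the sensitivity the
    study criticises."""
    p = np.asarray(daily_precip_mm, float)
    wet = p[p >= wet_day_mm]
    return np.percentile(wet, pct) if wet.size else np.nan

# synthetic record standing in for one of the 62 stations
rng = np.random.default_rng(7)
record = rng.gamma(shape=0.6, scale=12.0, size=20 * 365)
for pct in (90, 95, 99):
    print(f"{pct}th percentile threshold: {percentile_ept(record, pct):.1f} mm")
```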

  14. Is the Critical Shields Stress for Incipient Sediment Motion Dependent on Bed Slope in Natural Channels? No.

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Jerolmack, D. J.

    2017-12-01

    Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies have thoroughly examined the mechanistic underpinnings behind the observed correlation and produced suitably complex models. These models are difficult to implement for natural rivers using widely available data, and thus others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion; nevertheless, these regressions between slope and the threshold of motion have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the empirical regressions are based on a limited sampling of the phase space of bed-load rivers and (2) the empirical measurements of bankfull and critical shear stresses are paired. The upshot of these problems is that the predictive capacity of the empirical relations is limited to field sites drawn from the same region of the bed-load river phase space, and that the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress and not channel slope. Additionally, using several recent datasets, we highlight the potential pitfalls that one can encounter when using simplistic empirical regressions to predict the threshold of motion, showing that while these concerns could be construed as subtle, the resulting implications can be substantial.
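
    For reference, a short helper for the bankfull Shields stress that the abstract argues organises the variation in thresholds; the depth-slope approximation, grain size and the nominal reference threshold are illustrative assumptions, not values from the compilation used in the study.

```python
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m3), gravity

def shields_stress(depth_m, slope, d50_m):
    """Dimensionless (Shields) stress from the depth-slope product:
    tau* = rho_w * g * h * S / ((rho_s - rho_w) * g * D50)."""
    tau = RHO_W * G * depth_m * slope
    return tau / ((RHO_S - RHO_W) * G * d50_m)

# illustrative bankfull geometry for a gravel-bed reach (hypothetical values)
tau_star_bf = shields_stress(depth_m=1.2, slope=0.004, d50_m=0.045)
tau_star_c = 0.045    # a commonly quoted nominal threshold, used here only for scale
print("bankfull Shields stress:", round(tau_star_bf, 3),
      "ratio to nominal threshold:", round(tau_star_bf / tau_star_c, 2))
```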

  15. Assessing the paradigm of mutually exclusive erosion and deposition of mud, with examples from upper Chesapeake Bay

    USGS Publications Warehouse

    Sanford, L.P.; Halka, J.P.

    1993-01-01

    A paradigm of cohesive sediment transport research is that erosion and deposition are mutually exclusive. Many laboratory studies have shown that there is a velocity/stress threshold below which erosion does not occur and a lower threshold above which deposition does not occur. In contrast, a deposition threshold is not included in standard noncohesive sediment transport models, allowing erosion and deposition to occur simultaneously. Several researchers have also modeled erosion and deposition of mud without a deposition threshold. This distinction can have important implications for suspended sediment transport predictions and for data interpretation. Model-data comparisons based on observations of in situ erosion and deposition of upper Chesapeake Bay mud indicate poor agreement when the sediments are modeled as a single resuspended particle class and mutually exclusive erosion and deposition is assumed. The total resuspended sediment load increases in conjunction with increasing bottom shear stress as anticipated, but deposition is initiated soon after the shear stress begins to decrease and long before the stress falls below the value at which erosion had previously begun. Models assuming no critical stress for deposition, with continuous deposition proportional to the near bottom resuspended sediment concentration, describe the data better. Empirical parameter values estimated from these model fits are similar to other published values for estuarine cohesive sediments, indicating significantly greater erodability for higher water content surface sediments and settling velocities appropriate for large estuarine flocs. The apparent failure of the cohesive paradigm when applied to in situ data does not mean that the concept of a critical stress for deposition is wrong. Two possibilities for explaining the observed discrepancies are that certain aspects of in situ conditions have not been replicated in the laboratory experiments underlying the cohesive paradigm, and that in situ sediment behavior is better described as a sequence of particle classes than as the single particle class modeled here. However, the in situ measurements needed to resolve these questions are very difficult and data generally are not available. For practical modeling purposes, allowing continuous deposition of a single resuspended particle class may often give quite satisfactory results. © 1993.

  16. Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning

    NASA Astrophysics Data System (ADS)

    Linares, R.; Furfaro, R.

    2016-09-01

    This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (pdf). The critic evaluates the policy by calculating the estimated total reward or the value function for the problem. The parameters of the policy action pdf are optimized using gradients with respect to the reward function. Both the critic and the actor are modeled using deep neural networks (multi-layer neural networks). The policy neural network takes the current state as input and outputs probabilities for each possible action. This policy is random, and can be evaluated by sampling random actions using the probabilities determined by the policy neural network's outputs. The critic approximates the total reward using a neural network. The estimated total reward is used to approximate the gradient of the policy network with respect to the network parameters. This approach is used to find the non-myopic optimal policy for tasking optical sensors to estimate SO orbits. The reward function is based on reducing the uncertainty for the overall catalog to below a user specified uncertainty threshold. This work uses a 30 km total position error for the uncertainty threshold. This work provides the RL method with a negative reward as long as any SO has a total position error above the uncertainty threshold. This penalizes policies that take longer to achieve the desired accuracy. A positive reward is provided when all SOs are below the catalog uncertainty threshold. An optimal policy is sought that takes actions to achieve the desired catalog uncertainty in minimum time. This work trains the policy in simulation by letting it task a single sensor to "learn" from its performance. The proposed approach for the SM problem is tested in simulation and good performance is found using the actor-critic policy gradient method.
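
    A compact tabular sketch of the actor-critic policy-gradient loop described above (softmax actor, TD(0) critic, TD error used as the advantage). The paper uses deep neural networks and an orbit-uncertainty reward; the toy chain MDP, learning rates and reward shaping here are purely illustrative assumptions.

```python
import numpy as np

# Tabular one-step actor-critic on a toy 1-D chain MDP.
N_STATES, GOAL = 6, 5
rng = np.random.default_rng(0)
theta = np.zeros((N_STATES, 2))      # actor: action preferences (softmax policy)
values = np.zeros(N_STATES)          # critic: state-value estimates
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.95

def policy(s):
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

for episode in range(2000):
    s = 0
    while s != GOAL:
        probs = policy(s)
        a = rng.choice(2, p=probs)                 # 0 = left, 1 = right
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == GOAL else -0.01       # small step cost (assumption)
        # critic: TD error evaluates the sampled action
        target = r + (0.0 if s_next == GOAL else gamma * values[s_next])
        delta = target - values[s]
        values[s] += alpha_critic * delta
        # actor: grad log pi(a|s) = onehot(a) - probs for a softmax policy
        grad_log_pi = -probs
        grad_log_pi[a] += 1.0
        theta[s] += alpha_actor * delta * grad_log_pi
        s = s_next

print("learned right-action probabilities per state:",
      np.round([policy(s)[1] for s in range(GOAL)], 2))
```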

  17. Hamiltonian mean-field model: effect of temporal perturbation in coupling matrix

    NASA Astrophysics Data System (ADS)

    Bhadra, Nivedita; Patra, Soumen K.

    2018-05-01

    The Hamiltonian mean-field (HMF) model is a system of fully coupled rotators which exhibits a second-order phase transition at some critical energy in its canonical ensemble. We investigate the case where the interaction between the rotors is governed by a time-dependent coupling matrix. Our numerical study reveals a shift in the critical point due to the temporal modulation. The shift in the critical point is shown to be independent of the modulation frequency above some threshold value, whereas the impact of the amplitude of modulation is dominant. In the microcanonical ensemble, the system with constant coupling reaches a quasi-stationary state (QSS) at an energy near the critical point. Our result indicates that the QSS subsists in the presence of such temporal modulation of the coupling parameter.

  18. Ecosystem thresholds, tipping points, and critical transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munson, Seth M.; Reed, Sasha C.; Peñuelas, Josep

    Terrestrial ecosystems in a time of change: thresholds, tipping points, and critical transitions; an organized session at the American Geophysical Union Fall Meeting in New Orleans, Louisiana, USA, December 2017

  19. Effect of a preventive vaccine on the dynamics of HIV transmission

    NASA Astrophysics Data System (ADS)

    Gumel, A. B.; Moghadas, S. M.; Mickens, R. E.

    2004-12-01

    A deterministic mathematical model for the transmission dynamics of HIV infection in the presence of a preventive vaccine is considered. Although the equilibria of the model could not be expressed in closed form, their existence and threshold conditions for their stability are theoretically investigated. It is shown that the disease-free equilibrium is locally-asymptotically stable if the basic reproductive number R<1 (thus, HIV disease can be eradicated from the community) and unstable if R>1 (leading to the persistence of HIV within the community). A robust, positivity-preserving, non-standard finite-difference method is constructed and used to solve the model equations. In addition to showing that the anti-HIV vaccine coverage level and the vaccine-induced protection are critically important in reducing the threshold quantity R, our study predicts the minimum threshold values of vaccine coverage and efficacy levels needed to eradicate HIV from the community.

  20. Critical loads of nitrogen deposition and critical levels of atmospheric ammonia for semi-natural Mediterranean evergreen woodlands

    NASA Astrophysics Data System (ADS)

    Pinho, P.; Theobald, M. R.; Dias, T.; Tang, Y. S.; Cruz, C.; Martins-Loução, M. A.; Máguas, C.; Sutton, M.; Branquinho, C.

    2012-03-01

    Nitrogen (N) has emerged in recent years as a key factor associated with global changes, with impacts on biodiversity, ecosystem functioning and human health. In order to ameliorate the effects of excessive N, safety thresholds such as critical loads (deposition fluxes) and levels (concentrations) can be established. Few studies have assessed these thresholds for semi-natural Mediterranean ecosystems. Our objective was therefore to determine the critical loads of N deposition and long-term critical levels of atmospheric ammonia for semi-natural Mediterranean evergreen woodlands. We have considered changes in epiphytic lichen communities, one of the most sensitive community indicators of excessive N in the atmosphere. Based on a classification of lichen species according to their tolerance to N, we grouped species into response functional groups, which we used as a tool to determine the critical loads and levels. This was done for a Mediterranean climate in evergreen cork-oak woodlands, based on the relation between lichen functional diversity and modelled N deposition for critical loads and measured annual atmospheric ammonia concentrations for critical levels, evaluated downwind from a reduced N source (a cattle barn). Modelling the highly significant relationship between lichen functional groups and annual atmospheric ammonia concentration showed the critical level to be below 1.9 μg m-3, in agreement with recent studies for other ecosystems. Modelling the highly significant relationship between lichen functional groups and N deposition showed that the critical load was lower than 26 kg (N) ha-1 yr-1, which is within the upper range established for other semi-natural ecosystems. Taking into account the high sensitivity of lichen communities to excessive N, these values should aid development of policies to protect Mediterranean woodlands from the initial effects of excessive N.

  1. Characterizing air quality data from complex network perspective.

    PubMed

    Fan, Xinghua; Wang, Li; Xu, Huihui; Li, Shasha; Tian, Lixin

    2016-02-01

    Air quality depends mainly on changes in the emission of pollutants and their precursors. Understanding its characteristics is the key to predicting and controlling air quality. In this study, complex networks were built to analyze topological characteristics of air quality data by the correlation coefficient method. Firstly, PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) indices of eight monitoring sites in Beijing were selected as samples from January 2013 to December 2014. Secondly, the C-C method was applied to determine the structure of phase space. Points in the reconstructed phase space were considered to be nodes of the mapped network. Then, edges were determined by connecting nodes whose correlation was greater than a critical threshold. Three properties of the constructed networks, degree distribution, clustering coefficient, and modularity, were used to determine the optimal value of the critical threshold. Finally, by analyzing and comparing topological properties, we pointed out that similarities and differences in the constructed complex networks revealed influence factors and their different roles in the real air quality system.
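
    A hedged sketch of the pipeline described above: embed a PM2.5 series in phase space, link phase-space points whose correlation exceeds a critical threshold, and inspect the resulting network for several candidate thresholds. The delay-embedding parameters are simply assumed here rather than chosen with the C-C method, and the series is synthetic.

```python
import numpy as np
import networkx as nx

def delay_embed(x, dim, tau):
    """Delay embedding; the paper selects dim and tau with the C-C method,
    here they are simply assumed."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_network(x, dim=4, tau=2, r_crit=0.85):
    vectors = delay_embed(np.asarray(x, float), dim, tau)
    corr = np.corrcoef(vectors)                  # one node per phase-space point
    adj = (np.abs(corr) > r_crit).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.from_numpy_array(adj)

rng = np.random.default_rng(3)
pm25 = 80 + 30 * np.sin(np.arange(400) / 15.0) + rng.normal(0, 10, 400)  # synthetic series
for r_crit in (0.80, 0.90, 0.95):
    g = correlation_network(pm25, r_crit=r_crit)
    print(r_crit, "clustering:", round(nx.average_clustering(g), 3),
          "edges:", g.number_of_edges())
```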

  2. Dynamo threshold detection in the von Kármán sodium experiment.

    PubMed

    Miralles, Sophie; Bonnefoy, Nicolas; Bourgoin, Mickael; Odier, Philippe; Pinton, Jean-François; Plihon, Nicolas; Verhille, Gautier; Boisson, Jean; Daviaud, François; Dubrulle, Bérengère

    2013-07-01

    Predicting dynamo self-generation in liquid metal experiments has been an ongoing question for many years. In contrast to simple dynamical systems for which reliable techniques have been developed, the ability to predict the dynamo capacity of a flow and the estimate of the corresponding critical value of the magnetic Reynolds number (the control parameter of the instability) has been elusive, partly due to the high level of turbulent fluctuations of flows in such experiments (with kinetic Reynolds numbers in excess of 10(6)). We address these issues here, using the von Kármán sodium experiment and studying its response to an externally applied magnetic field. We first show that a dynamo threshold can be estimated from analysis related to critical slowing down and susceptibility divergence, in configurations for which dynamo action is indeed observed. These approaches are then applied to flow configurations that have failed to self-generate magnetic fields within operational limits, and we quantify the dynamo capacity of these configurations.

  3. Three-dimensional profile extraction from CD-SEM image and top/bottom CD measurement by line-edge roughness analysis

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Atsuko; Ohashi, Takeyoshi; Kawasaki, Takahiro; Inoue, Osamu; Kawada, Hiroki

    2013-04-01

    A new method for calculating critical dimensions (CDs) at the top and bottom of three-dimensional (3D) pattern profiles from a critical-dimension scanning electron microscope (CD-SEM) image, called the "T-sigma method", is proposed and evaluated. Without preparing a library or database in advance, T-sigma can estimate a feature of a pattern sidewall. Furthermore, it supplies the optimum edge definition (i.e., threshold level for determining edge position from a CD-SEM signal) to detect the top and bottom of the pattern. This method consists of three steps. First, two components of line-edge roughness (LER), the noise-induced bias (i.e., LER bias) and the unbiased component (i.e., bias-free LER), are calculated with a set threshold level. Second, these components are calculated with various threshold values, and the threshold dependence of these two components, the "T-sigma graph", is obtained. Finally, the optimum threshold values for top and bottom edge detection are given by analysis of the T-sigma graph. T-sigma was applied to CD-SEM images of three kinds of resist-pattern samples. In addition, reference metrology was performed with an atomic force microscope (AFM) and a scanning transmission electron microscope (STEM). The sensitivity of CD measured by T-sigma to the reference CD was higher than or equal to that measured by the conventional edge definition. Regarding the absolute measurement accuracy, T-sigma showed better results than the conventional definition. Furthermore, T-sigma graphs were calculated from CD-SEM images of two kinds of resist samples and compared with corresponding STEM observation results. Both bias-free LER and LER bias increased as the detected edge point moved from the bottom to the top of the pattern in the case that the pattern had a straight sidewall and a round top. On the other hand, they were almost constant in the case that the pattern had a re-entrant profile. T-sigma will thus be able to reveal a re-entrant feature. From these results, it is found that the T-sigma method can provide rough cross-sectional pattern features and achieve quick, easy and accurate measurements of top and bottom CDs.

  4. Determination of the Water Potential Threshold at Which Rice Growth Is Impacted.

    PubMed

    Dos Santos, Caio Luiz; de Borja Reis, André Froes; Mazzafera, Paulo; Favarin, José Laércio

    2018-06-22

    Rice feeds 50% of the world’s population. Flooding is the most common irrigation system used for growing rice, a practice responsible for a large amount of water loss. Climate change may affect water availability in irrigated agriculture, and it will be necessary to develop more sustainable irrigation practices. The aim of this work was to determine, under controlled conditions, the water potential threshold at which plant growth begins to decrease. Two independent greenhouse experiments were conducted during mid-summer and fall, in order to validate the results for high and low evapotranspiration conditions. Rice plants were grown in hydroponics and the water potential was adjusted with polyethylene glycol 6000, varying from −0.04 MPa (control) to −0.19 MPa. Leaf water potential, water use efficiency, leaf area, and root and shoot biomass were evaluated. All assayed parameters decreased as the water potential was decreased. The water potential threshold at which rice growth starts to be negatively affected was between −0.046 and −0.056 MPa, values close to those observed in the field in previous research. The definition of a critical value may help to improve water management in rice cultivation and to maintain productivity.

  5. Effects of programming threshold and maplaw settings on acoustic thresholds and speech discrimination with the MED-EL COMBI 40+ cochlear implant.

    PubMed

    Boyd, Paul J

    2006-12-01

    The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by setting of the lower limit of the output ("Programming threshold" or "PT") to understand better the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation" which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as maplaw is made more linear), which provides some theoretical explanation as to why PT has little effect when using the default maplaw of c = 500. Subjective reports of background threshold stimulation showed that most users could perceive a relatively loud auditory percept, in the absence of microphone input, when PT was set to double the behaviorally measured electrical thresholds (θe), but that this produced little intrusion when microphone input was present. The results of these investigations have direct clinical relevance, showing that setting of PT is indeed relatively unimportant in terms of speech discrimination, but that it is worth ensuring that PT is not set excessively high, as this can produce distracting background stimulation. Indeed, it may even be set to minimum values without deleterious effect.

  6. Diffraction-controlled backscattering threshold and application to Raman gap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rose, Harvey A.; Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87544; Mounaix, Philippe

    2011-04-15

    In most classic analytical models of linear stimulated scatter, light diffraction is omitted a priori. However, modern laser optics typically include a variant of the random phase plate [Y. Kato et al., Phys. Rev. Lett. 53, 1057 (1984)], resulting in diffraction-limited laser intensity fluctuations - or localized speckles - which may result in explosive reflectivity growth as the average laser intensity approaches a critical value [H. A. Rose and D. F. DuBois, Phys. Rev. Lett. 72, 2883 (1994)]. Among the differences between stimulated Raman scatter (SRS) and stimulated Brillouin scatter is that the SRS scattered light diffracts more strongly than the laser light with increasing electron density. This weakens the tendency of the SRS light to closely follow the most amplified paths, diminishing gain. Let G0 be the one-dimensional power gain exponent of the stimulated scatter. In this paper we show that differential diffraction gives rise to an increase of G0 at the SRS physical threshold with increasing electron density, up to a drastic disruption of SRS as the electron density approaches one fourth of its critical value from below. For three-wave interaction lengths not small compared to a speckle length, this is a physically robust Raman gap mechanism.

  7. Assessment of lead pollution in topsoils of a southern Italy area: Analysis of urban and peri-urban environment.

    PubMed

    Guagliardi, Ilaria; Cicchella, Domenico; De Rosa, Rosanna; Buttafuoco, Gabriele

    2015-07-01

    Exposure to lead (Pb) may adversely affect human health. Mapping soil Pb contents is essential to obtain a quantitative estimate of the potential risk of Pb contamination. The main aim of this paper was to determine the soil Pb concentrations in the urban and peri-urban area of Cosenza-Rende, to map their spatial distribution, and to assess the probability that soil Pb concentration exceeds a critical threshold that might cause concern for human health. Samples were collected at 149 locations from residual and non-residual topsoil in gardens, parks, flower-beds, and agricultural fields. The fine earth fraction of the soil samples was analyzed by X-ray Fluorescence spectrometry. Stochastic images generated by sequential Gaussian simulation were jointly combined to calculate the probability of exceeding the critical threshold, which could be used to delineate the potentially risky areas. Results showed areas in which Pb concentration values were higher than the Italian regulatory values. These polluted areas are quite large and could likely create a significant health risk for human beings and vegetation in the near future. The results demonstrated that the proposed approach can be used to study soil contamination, produce geochemical maps, and identify hot-spot areas for soil Pb concentration. Copyright © 2015. Published by Elsevier B.V.
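
    The exceedance-probability step can be illustrated in a few lines: given a stack of simulated realizations of Pb at each location, the per-location probability of exceeding the critical threshold is the fraction of realizations above it. The synthetic lognormal realizations and the 100 mg/kg threshold below are assumptions standing in for the sequential Gaussian simulation output and the Italian regulatory value.

```python
import numpy as np

def exceedance_probability(realizations, threshold):
    """Fraction of simulated realizations above the critical threshold at
    each location (realizations: array of shape [n_sims, n_locations])."""
    return (np.asarray(realizations) > threshold).mean(axis=0)

# synthetic stand-in for sequential Gaussian simulation output
rng = np.random.default_rng(42)
n_sims, n_locations = 200, 149
sims = rng.lognormal(mean=np.log(60.0), sigma=0.6, size=(n_sims, n_locations))

prob = exceedance_probability(sims, threshold=100.0)   # threshold in mg/kg (assumed)
risky = np.where(prob > 0.5)[0]                        # locations flagged as risky
print("locations with P(Pb > threshold) > 0.5:", risky.size, "of", n_locations)
```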

  8. Exploration into Uric and Cardiovascular Disease: Uric Acid Right for heArt Health (URRAH) Project, A Study Protocol for a Retrospective Observational Study.

    PubMed

    Desideri, Giovambattista; Virdis, Agostino; Casiglia, Edoardo; Borghi, Claudio

    2018-06-01

    The relevance of the cardiovascular role played by serum uric acid levels is growing dramatically, especially as a cardiovascular risk factor potentially able to exert either a direct deleterious impact or a synergistic effect with other cardiovascular risk factors. At the present time, the threshold level of serum uric acid able to contribute to cardiovascular risk remains undefined. Indeed, the available epidemiological case studies are not homogeneous, and some preliminary data suggest that the so-called "cardiovascular threshold limit" may differ substantially from the cut-off identified as able to trigger an acute gout attack. In such a scenario, it is necessary to clarify and quantify this threshold value, to insert it into risk stratification algorithm scores and, in turn, to adopt proper prevention and correction strategies. Clarifying the relationship between circulating levels of uric acid and cardio-nephro-metabolic disorders in a broad sample representative of the general population is critical to identifying the threshold value of serum uric acid that best discriminates the increased risk associated with uric acid. The Uric acid Right for heArt Health (URRAH) project has been designed to define, as its primary objective, the level of uricemia above which the independent risk of cardiovascular disease may increase significantly in a general Italian population.

  9. Suppressing epidemic spreading in multiplex networks with social-support

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolong; Wang, Ruijie; Tang, Ming; Cai, Shimin; Stanley, H. Eugene; Braunstein, Lidia A.

    2018-01-01

    Although suppressing the spread of a disease is usually achieved by investing in public resources, in the real world only a small percentage of the population have access to government assistance when there is an outbreak, and most must rely on resources from family or friends. We study the dynamics of disease spreading in social-contact multiplex networks when the recovery of infected nodes depends on resources from healthy neighbors in the social layer. We investigate how degree heterogeneity affects the spreading dynamics. Using theoretical analysis and simulations we find that degree heterogeneity promotes disease spreading. The phase transition of the infected density is hybrid and increases smoothly from zero to a finite small value at the first invasion threshold and then suddenly jumps at the second invasion threshold. We also find a hysteresis loop in the transition of the infected density. We further investigate how an overlap in the edges between two layers affects the spreading dynamics. We find that when the amount of overlap is smaller than a critical value the phase transition is hybrid and there is a hysteresis loop, otherwise the phase transition is continuous and the hysteresis loop vanishes. In addition, the edge overlap allows an epidemic outbreak when the transmission rate is below the first invasion threshold, but suppresses any explosive transition when the transmission rate is above the first invasion threshold.

  10. Critical gravitational collapse with angular momentum. II. Soft equations of state

    NASA Astrophysics Data System (ADS)

    Gundlach, Carsten; Baumgarte, Thomas W.

    2018-03-01

    We study critical phenomena in the collapse of rotating ultrarelativistic perfect fluids, in which the pressure P is related to the total energy density ρ by P =κ ρ , where κ is a constant. We generalize earlier results for radiation fluids with κ =1 /3 to other values of κ , focusing on κ <1 /9 . For 1 /9 <κ ≲0.49 , the critical solution has only one unstable, growing mode, which is spherically symmetric. For supercritical data it controls the black-hole mass, while for subcritical data it controls the maximum density. For κ <1 /9 , an additional axial l =1 mode becomes unstable. This controls either the black-hole angular momentum, or the maximum angular velocity. In theory, the additional unstable l =1 mode changes the nature of the black-hole threshold completely: at sufficiently large initial rotation rates Ω and sufficient fine-tuning of the initial data to the black-hole threshold we expect to observe nontrivial universal scaling functions (familiar from critical phase transitions in thermodynamics) governing the black-hole mass and angular momentum, and, with further fine-tuning, eventually a finite black-hole mass almost everywhere on the threshold. In practice, however, the second unstable mode grows so slowly that we do not observe this breakdown of scaling at the level of fine-tuning we can achieve, nor systematic deviations from the leading-order power-law scalings of the black-hole mass. We do see systematic effects in the black-hole angular momentum, but it is not clear yet if these are due to the predicted nontrivial scaling functions, or to nonlinear effects at sufficiently large initial angular momentum (which we do not account for in our theoretical model).

  11. Critical, sustainable and threshold fluxes for membrane filtration with water industry applications.

    PubMed

    Field, Robert W; Pearce, Graeme K

    2011-05-11

    Critical flux theory evolved as a description of the upper bound in the operating envelope for controlled steady state environments such as cross-flow systems. However, in the application of UF membranes in the water industry, dead-end (direct-flow) designs are used. Direct-flow is a pseudo steady state operation with different fouling characteristics to cross-flow, and thus the critical flux concept has limited applicability. After a review of recent usage of the critical flux theory, an alternative concept for providing design guidelines for direct-flow systems namely that of the threshold flux is introduced. The concept of threshold flux can also be applicable to cross-flow systems. In more general terms the threshold flux can be taken to be the flux that divides a low fouling region from a high fouling region. This may be linked both to the critical flux concept and to the concept of a sustainable flux. The sustainable flux is the one at which a modest degree of fouling occurs, providing a compromise between capital expenditure (which is reduced by using high flux) and operating costs (which are reduced by restricting the fouling rate). Whilst the threshold flux can potentially be linked to physical phenomena alone, the sustainable flux also depends upon economic factors and is thus of a different nature to the critical and threshold fluxes. This distinction will be illustrated using some MBR data. Additionally the utility of the concept of a threshold flux will be illustrated using pilot plant data obtained for UF treatment of four sources of water. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Forecasting Corrosion of Steel in Concrete Introducing Chloride Threshold Dependence on Steel Potential

    NASA Astrophysics Data System (ADS)

    Sanchez, Andrea Nathalie

    Corrosion initiates in reinforced concrete structures exposed to marine environments when the chloride ion concentration at the surface of an embedded steel reinforcing bar exceeds the chloride corrosion threshold (CT) value. The value of CT is generally assumed to be a conservative fixed value ranging from 0.2% to 0.5% of chloride ions by weight of cement. However, extensive experimental investigations confirmed that CT is not a fixed value and that the value of CT depends on many variables. Among those, the potential of passive steel embedded in concrete is a key influential factor on the value of CT and has received little attention in the literature. The phenomenon of a potential-dependent threshold (PDT) permits accounting for corrosion macrocell coupling between active and passive steel assembly components in corrosion forecast models, avoiding overly conservative long-term damage projections and leading to more efficient design. The objectives of this investigation were to 1) expand by a systematic experimental assessment the knowledge and data base on how dependent the chloride threshold is on the potential of the steel embedded in concrete and 2) introduce the chloride threshold dependence on steel potential as an integral part of corrosion-related service life prediction of reinforced concrete structures. Experimental assessments on PDT were found in the literature but for a limited set of conditions. Therefore, experiments were conducted with mortar and concrete specimens exposed to conditions more representative of the field than those previously available. The experimental results confirmed the presence of the PDT effect and provided supporting information to use a value of -550 mV per decade of Cl- for the cathodic prevention slope βCT, a critical quantitative input for implementation in a practical model. A refinement of a previous corrosion initiation-propagation model that incorporated PDT in a partially submerged reinforced concrete column in sea water was developed. Corrosion was assumed to start when the chloride corrosion threshold was reached in an active steel zone of a given size, followed by recalculation of the potential distribution and updating of threshold values over the entire system at each time step. Notably, results of this work indicated that when PDT is ignored, as is the case in present forecasting model practice, the corrosion damage prediction can be overly conservative, which could lead to structural overdesign or misguided future damage management planning. Implementation of PDT in next-generation models is therefore highly desirable. However, developing a mathematical model that forecasts the corrosion damage of an entire marine structure with a fully implemented PDT module can result in excessive computational complexity. Hence, a provisional simplified approach for incorporating the effect of PDT was developed. The approach uses a correction function to be applied to projections that have been computed using the traditional procedures.
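
    One hedged way to express a potential-dependent threshold with a slope of -550 mV per decade of chloride is the log-linear form sketched below; the functional form, the reference threshold and the reference potential are illustrative assumptions, not the calibrated model developed in the dissertation.

```python
def chloride_threshold(potential_v, ct_ref=0.4, e_ref=-0.20, beta_v=-0.55):
    """Potential-dependent chloride threshold (illustrative log-linear form):
    a slope of beta_v volts per decade of chloride is read here as
    log10(CT / ct_ref) = (potential - e_ref) / beta_v.
    ct_ref (% Cl- by weight of cement) and e_ref (V vs. a reference
    electrode) are assumed reference values, not values from the study."""
    return ct_ref * 10 ** ((potential_v - e_ref) / beta_v)

# more cathodic (negative) steel potentials give higher thresholds
for e in (-0.10, -0.20, -0.35, -0.50, -0.75):
    print(f"E = {e:+.2f} V  ->  CT ~ {chloride_threshold(e):.2f} % Cl- by wt of cement")
```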

  13. Asymptotic Laws of Thermovibrational Convection in a Horizontal Fluid Layer

    NASA Astrophysics Data System (ADS)

    Smorodin, B. L.; Myznikova, B. I.; Keller, I. O.

    2017-02-01

    Theoretical study of convective instability is applied to a horizontal layer of incompressible single-component fluid subjected to the uniform steady gravity, longitudinal vibrations of arbitrary frequency and initial temperature difference. The mathematical model of thermovibrational convection has the form of initial boundary value problem for the Oberbeck-Boussinesq system of equations. The problems are solved using different simulation strategies, like the method of averaging, method of multiple scales, Galerkin approach, Wentzel-Kramers-Brillouin method and Floquet technique. The numerical analysis has shown that the effect of vibrations on the stability threshold is complex: vibrations can either stabilize or destabilize the basic state depending on values of the parameters. The influence of the Prandtl number on the instability thresholds is investigated. The asymptotic behaviour of critical values of the parameters is studied in two limiting cases: (i) small amplitude and (ii) low frequency of vibration. In case (i), the instability is due to the influence of thermovibrational mechanism on the classical Rayleigh-Benard convective instability. In case (ii), the nature of the instability is related to the instability of oscillating counter-streams with a cubic profile.

  14. Objective definition of rainfall intensity-duration thresholds for the initiation of post-fire debris flows in southern California

    USGS Publications Warehouse

    Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.

    2012-01-01

    Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
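
    A small sketch of the objective-threshold idea described above: scan candidate intensity thresholds for one duration and keep the one that best balances hits against Type I (false positive) and Type II (false negative) errors. The true-skill score used here is one reasonable objective, not necessarily the study's exact function, and the storm catalogue is synthetic.

```python
import numpy as np

def objective_threshold(intensities, debris_flow, candidates=None):
    """Pick the rainfall-intensity threshold (for one duration, e.g. 15 min)
    that maximises true skill = hit rate - false alarm rate.  'intensities'
    are the rainfall intensities measured at the time of each storm,
    'debris_flow' flags whether a debris flow occurred."""
    x = np.asarray(intensities, float)
    y = np.asarray(debris_flow, bool)
    if candidates is None:
        candidates = np.unique(x)
    best_t, best_skill = None, -np.inf
    for t in candidates:
        predicted = x >= t
        hits = np.sum(predicted & y)
        misses = np.sum(~predicted & y)          # Type II errors
        false_alarms = np.sum(predicted & ~y)    # Type I errors
        correct_neg = np.sum(~predicted & ~y)
        skill = hits / (hits + misses) - false_alarms / (false_alarms + correct_neg)
        if skill > best_skill:
            best_t, best_skill = t, skill
    return best_t, best_skill

# synthetic storm catalogue (intensities in mm/h, flags for debris-flow response)
rng = np.random.default_rng(5)
quiet = rng.gamma(2.0, 6.0, 120)                 # storms without debris flows
trigger = rng.gamma(2.0, 6.0, 40) + 18.0         # storms with debris flows
x = np.concatenate([quiet, trigger])
y = np.concatenate([np.zeros(120, bool), np.ones(40, bool)])
print(objective_threshold(x, y))
```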

  15. Change of wandering pattern with anisotropy in step kinetics

    NASA Astrophysics Data System (ADS)

    Sato, Masahide; Uwaha, Makio

    1999-03-01

    We study the effect of anisotropy in step kinetics on the wandering instability of an isolated step. With the asymmetry of the step kinetics, a straight step becomes unstable for long wavelength fluctuations and wanders when the step velocity exceeds a critical value. Near the threshold of the instability, an isotropic step obeys the Kuramoto-Sivashinsky equation, H_T = -H_XX - H_XXXX + (H_X)^2/2, and shows a chaotic pattern. A step with anisotropic kinetics obeys the Benney equation, H_T = -H_XX - δH_XXX - H_XXXX + (H_X)^2/2, and the wandering pattern changes: when the anisotropy is strong, δ≫1, the step shows a regular pattern.

  16. Multiscaling Edge Effects in an Agent-based Money Emergence Model

    NASA Astrophysics Data System (ADS)

    Oświęcimka, P.; Drożdż, S.; Gębarowski, R.; Górski, A. Z.; Kwapień, J.

    An agent-based computational economic toy model for the emergence of money from initial barter trading, inspired by Menger's postulate that money can spontaneously emerge in a commodity exchange economy, is extensively studied. The model considered, while tractable, is nevertheless significantly complex. It is already able to reveal phenomena that can be interpreted as the emergence and collapse of money, as well as the related competition effects. In particular, it is shown that - as an extra emerging effect - the money lifetimes near the critical threshold value develop multiscaling, which allows one to draw parallels to critical phenomena and, thus, to real financial markets.

  17. Failure modes in electroactive polymer thin films with elastic electrodes

    NASA Astrophysics Data System (ADS)

    De Tommasi, D.; Puglisi, G.; Zurlo, G.

    2014-02-01

    Based on an energy minimization approach, we analyse the elastic deformations of a thin electroactive polymer (EAP) film sandwiched between two elastic electrodes with non-negligible stiffness. We analytically show the existence of a critical value of the electrode voltage at which non-homogeneous solutions bifurcate from the homogeneous equilibrium state, leading to the pull-in phenomenon. This threshold is substantially lower than the limit value proposed in the literature, which considers only homogeneous deformations. We explicitly discuss the influence of geometric and material parameters, together with boundary conditions, on the attainment of the different failure modes observed in EAP devices. In particular, we obtain the optimum values of these parameters leading to the maximum activation performance of the device.

  18. Establishing storm thresholds for the Spanish Gulf of Cádiz coast

    NASA Astrophysics Data System (ADS)

    Del Río, Laura; Plomaritis, Theocharis A.; Benavente, Javier; Valladares, María; Ribera, Pedro

    2012-03-01

    In this study critical thresholds are defined for storm impacts along the Spanish coast of the Gulf of Cádiz. The thresholds correspond to the minimum wave and tide conditions necessary to produce significant morphological changes on beaches and dunes and/or damage to coastal infrastructure or human occupation. Threshold definition was performed by computing theoretical sea-level variations during storms and comparing them with the topography of the study area and the location of infrastructure at a local level. Specifically, the elevations of the berm, the dune foot and the entrance of existing washovers were selected as threshold parameters. The total sea-level variation generated by a storm event was estimated as the sum of the tidal level, the wind-induced setup, the barometric setup and the wave-associated sea-level variation (wave setup and runup), assuming a minimum interaction between the different processes. These components were calculated on the basis of parameterisations for significant wave height (Hs) obtained for the oceanographic and environmental conditions of the Gulf of Cádiz. For this purpose real data and reanalysis time-series (HIPOCAS project) were used. Validation of the obtained results was performed for a range of coastal settings over the study area. The obtained thresholds for beach morphological changes in spring tide conditions range between a significant wave height of 1.5 m and 3.7 m depending on beach characteristics, while those for dune foot erosion are around 3.3 to 3.7 m and for damage to infrastructure around 7.2 m. Under neap tide conditions these values increase on average by 50% over the areas with a large tidal range. Furthermore, records of real damage to coastal infrastructure caused by storms were collected at a regional level from newspapers and other bibliographic sources and compared with the hydrodynamic conditions that caused the damage. These were extracted from the hindcast database of the HIPOCAS project, including parameters such as storm duration, mean and maximum wave height and wave direction. Results show that the duration of the storm is not critical in determining the occurrence of coastal damage in the regional study area. On this basis, the threshold would be defined as a duration ≥30 h, with moderate average wave height (≥3.3 m) and high maximum wave height (≥4.1 m) approaching from the 3rd and 4th quadrants, during mean or spring tide situations. The calculated thresholds constitute snapshots of risk conditions within a certain time framework. Beach and nearshore zones are extremely dynamic, and the characteristics of occupation on the coast also change over time, so critical storm thresholds will change accordingly and will therefore need to be updated.
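
    A simplified sketch of the threshold logic described above: the total sea level is built as the sum of the tidal, wind-setup, barometric and wave components and compared with the morphological elevations. The wave setup/runup coefficients and the elevation values are generic placeholders, not the site-specific HIPOCAS-based parameterisations used in the study.

        def total_sea_level(tide_m, wind_setup_m, barometric_setup_m, hs_m,
                            setup_coef=0.2, runup_coef=0.35):
            """Sum of sea-level components; the wave part is a crude fraction of Hs."""
            wave_component = (setup_coef + runup_coef) * hs_m      # setup + runup proxy
            return tide_m + wind_setup_m + barometric_setup_m + wave_component

        def storm_impact(level_m, berm_m=2.5, dune_foot_m=3.5, infrastructure_m=5.0):
            """Threshold elevations (example values) for the three impact levels."""
            if level_m >= infrastructure_m:
                return "damage to infrastructure"
            if level_m >= dune_foot_m:
                return "dune foot erosion"
            if level_m >= berm_m:
                return "beach morphological change"
            return "no significant impact"

        level = total_sea_level(tide_m=1.8, wind_setup_m=0.3, barometric_setup_m=0.2, hs_m=3.7)
        print(f"{level:.2f} m -> {storm_impact(level)}")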

  19. I. RENAL THRESHOLDS FOR HEMOGLOBIN IN DOGS

    PubMed Central

    Lichty, John A.; Havill, William H.; Whipple, George H.

    1932-01-01

    We use the term "renal threshold for hemoglobin" to indicate the smallest amount of hemoglobin which given intravenously will effect the appearance of recognizable hemoglobin in the urine. The initial renal threshold level for dog hemoglobin is established by the methods employed at an average value of 155 mg. hemoglobin per kilo body weight with maximal values of 210 and minimal of 124. Repeated daily injections of hemoglobin will depress this initial renal threshold level on the average 46 per cent with maximal values of 110 and minimal values of 60 mg. hemoglobin per kilo body weight. This minimal or depression threshold is relatively constant if the injections are continued. Rest periods without injections cause a return of the renal threshold for hemoglobin toward the initial threshold levels—recovery threshold level. Injections of hemoglobin below the initial threshold level but above the minimal or depression threshold will eventually reduce the renal threshold for hemoglobin to its depression threshold level. We believe the depression threshold or minimal renal threshold level due to repeated hemoglobin injections is a little above the glomerular threshold which we assume is the base line threshold for hemoglobin. Our reasons for this belief in the glomerular threshold are given above and in the other papers of this series. PMID:19870016

  20. Anesthesia modifies subthreshold critical slowing down in a stochastic Hodgkin-Huxley-like model with inhibitory synaptic input

    NASA Astrophysics Data System (ADS)

    Bukoski, Alex; Steyn-Ross, D. A.; Pickett, Ashley F.; Steyn-Ross, Moira L.

    2018-06-01

    The dynamics of a stochastic type-I Hodgkin-Huxley-like point neuron model exposed to inhibitory synaptic noise are investigated as a function of distance from spiking threshold and the inhibitory influence of the general anesthetic agent propofol. The model is biologically motivated and includes the effects of intrinsic ion-channel noise via a stochastic differential equation description as well as inhibitory synaptic noise modeled as multiple Poisson-distributed impulse trains with saturating response functions. The effect of propofol on these synapses is incorporated through this drug's principal influence on fast inhibitory neurotransmission mediated by γ -aminobutyric acid (GABA) type-A receptors via reduction of the synaptic response decay rate. As the neuron model approaches spiking threshold from below, we track membrane voltage fluctuation statistics of numerically simulated stochastic trajectories. We find that for a given distance from spiking threshold, increasing the magnitude of anesthetic-induced inhibition is associated with augmented signatures of critical slowing: fluctuation amplitudes and correlation times grow as spectral power is increasingly focused at 0 Hz. Furthermore, as a function of distance from threshold, anesthesia significantly modifies the power-law exponents for variance and correlation time divergences observable in stochastic trajectories. Compared to the inverse square root power-law scaling of these quantities anticipated for the saddle-node bifurcation of type-I neurons in the absence of anesthesia, increasing anesthetic-induced inhibition results in an observable exponent <-0.5 for variance and >-0.5 for correlation time divergences. However, these behaviors eventually break down as distance from threshold goes to zero with both the variance and correlation time converging to common values independent of anesthesia. Compared to the case of no synaptic input, linearization of an approximating multivariate Ornstein-Uhlenbeck model reveals these effects to be the consequence of an additional slow eigenvalue associated with synaptic activity that competes with those of the underlying point neuron in a manner that depends on distance from spiking threshold.
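
    An illustrative surrogate, not the paper's model: an Ornstein-Uhlenbeck process x' = -λx + noise stands in for the subthreshold voltage, with λ ∝ sqrt(ε) as expected near a saddle-node bifurcation at distance ε from threshold. Variance and correlation time then grow as ε^(-1/2), the baseline scaling that anesthesia is reported to modify.

        import numpy as np

        rng = np.random.default_rng(0)
        sigma, dt, nsteps = 1.0, 1e-3, 200_000       # noise strength and step size (assumed)

        def simulate(lam):
            """Euler-Maruyama integration of dx = -lam*x dt + sigma dW."""
            x = np.zeros(nsteps)
            dW = rng.normal(0.0, np.sqrt(dt), nsteps)
            for i in range(1, nsteps):
                x[i] = x[i - 1] - lam * x[i - 1] * dt + sigma * dW[i]
            return x

        for eps in (1.0, 0.1, 0.01):                 # distance from spiking threshold
            lam = np.sqrt(eps)                       # saddle-node scaling of the slow rate
            x = simulate(lam)
            rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]  # lag-1 autocorrelation
            tau = -dt / np.log(rho1)                 # correlation time of the OU process
            print(f"eps={eps:5.2f}  variance={x.var():7.3f}  corr. time={tau:7.2f}")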

  1. Simulation of MAD Cow Disease Propagation

    NASA Astrophysics Data System (ADS)

    Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.

    Computer simulation of the dynamics of BSE disease is presented. Both vertical (to offspring) and horizontal (to neighbor) mechanisms of disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with the initial population randomly distributed on the net. The disease may be introduced either with the initial population or by spontaneous development of BSE in an individual, at a small frequency. The main results show a critical probability of BSE transmission above which the disease is present in the population. This value is sensitive to possible spatial clustering of the population, and it also depends on the mechanism responsible for the disease onset, evolution and propagation. A threshold birth rate below which the population goes extinct is seen. Above this threshold the population is disease-free at equilibrium until another birth rate value is reached at which the disease is present in the population. For typical model parameters used for the simulation, which may correspond to mad cow disease, we are close to the BSE-free case.
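
    A toy sketch in the spirit of the abstract (the parameters and the SIS-like removal step are illustrative, not the paper's rules): horizontal transmission to nearest neighbours on a square lattice plus rare spontaneous onset, scanned over the transmission probability to look for the critical value above which the disease persists.

        import numpy as np

        rng = np.random.default_rng(2)
        L, steps, spontaneous = 100, 300, 1e-4       # lattice size, time steps, spontaneous BSE rate

        def prevalence(p_transmit, p_removal=0.3):
            infected = rng.random((L, L)) < 0.01     # initial infected seed
            for _ in range(steps):
                nbrs = np.zeros((L, L), dtype=int)   # infected nearest neighbours (open boundaries)
                nbrs[1:, :] += infected[:-1, :]
                nbrs[:-1, :] += infected[1:, :]
                nbrs[:, 1:] += infected[:, :-1]
                nbrs[:, :-1] += infected[:, 1:]
                p_inf = 1 - (1 - p_transmit) ** nbrs             # at least one infectious contact
                new = (~infected) & (rng.random((L, L)) < p_inf)
                spont = (~infected) & (rng.random((L, L)) < spontaneous)
                removed = infected & (rng.random((L, L)) < p_removal)   # culling / death
                infected = (infected & ~removed) | new | spont
            return infected.mean()

        for p in (0.02, 0.05, 0.10, 0.20):
            print(f"transmission p = {p:.2f}: prevalence ~ {prevalence(p):.3f}")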

  2. Reactive nitrogen requirements to feed the world in 2050 and potential to mitigate nitrogen pollution.

    PubMed

    Bodirsky, Benjamin Leon; Popp, Alexander; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Rolinski, Susanne; Weindl, Isabelle; Schmitz, Christoph; Müller, Christoph; Bonsch, Markus; Humpenöder, Florian; Biewald, Anne; Stevanovic, Miodrag

    2014-05-13

    Reactive nitrogen (Nr) is an indispensable nutrient for agricultural production and human alimentation. Simultaneously, agriculture is the largest contributor to Nr pollution, causing severe damage to human health and ecosystem services. The trade-off between food availability and Nr pollution can be attenuated by several key mitigation options, including Nr efficiency improvements in crop and animal production systems, food waste reduction in households and lower consumption of Nr-intensive animal products. However, their quantitative mitigation potential remains unclear, especially under the added pressure of population growth and changes in food consumption. Here we show by model simulations that, under baseline conditions, Nr pollution in 2050 can be expected to rise to 102-156% of the 2010 value. Only under ambitious mitigation does pollution possibly decrease, to 36-76% of the 2010 value. Air, water and atmospheric Nr pollution go far beyond critical environmental thresholds without mitigation actions. Even under ambitious mitigation, the risk remains that thresholds are exceeded.

  3. Bubble oscillation and inertial cavitation in viscoelastic fluids.

    PubMed

    Jiménez-Fernández, J; Crespo, A

    2005-08-01

    Non-linear acoustic oscillations of gas bubbles immersed in viscoelastic fluids are theoretically studied. The problem is formulated by considering a constitutive equation of differential type with an interpolated time derivative. With the aid of this rheological model, fluid elasticity, shear-thinning viscosity and extensional viscosity effects may be taken into account. Bubble radius evolution in time is analyzed, and it is found that the amplitude of the bubble oscillations grows drastically as the Deborah number (the ratio between the relaxation time of the fluid and the characteristic time of the flow) increases, so that, even for moderate values of the external pressure amplitude, the behavior may become chaotic. The quantitative influence of the rheological fluid properties on the pressure thresholds for inertial cavitation is investigated. Pressure threshold values in terms of the Deborah number are provided for systems of interest in ultrasonic biomedical applications. It is found that these critical pressure amplitudes are clearly reduced as the Deborah number is increased.

  4. Extended time-to-collision measures for road traffic safety assessment.

    PubMed

    Minderhoud, M M; Bovy, P H

    2001-01-01

    This article describes two new safety indicators based on the time-to-collision notion suitable for comparative road traffic safety analyses. Such safety indicators can be applied in the comparison of a do-nothing case with an adapted situation, e.g. the introduction of intelligent driver support systems. In contrast to the classical time-to-collision value, measured at a cross section, the improved safety indicators use vehicle trajectories collected over a specific time horizon for a certain roadway segment to calculate the overall safety indicator value. Vehicle-specific indicator values as well as safety-critical probabilities can easily be determined from the developed safety measures. Application of the derived safety indicators is demonstrated for the assessment of the potential safety impacts of driver support systems from which it appears that some Autonomous Intelligent Cruise Control (AICC) designs are more safety-critical than the reference case without these systems. It is suggested that the indicator threshold value to be applied in the safety assessment has to be adapted when advanced AICC-systems with safe characteristics are introduced.
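
    A hedged sketch of a trajectory-based extension of time-to-collision: TTC is evaluated at every time step along a leader-follower trajectory pair, and the time spent below a chosen threshold is accumulated. The 3 s threshold and the variable names are illustrative; the paper's exact indicator definitions may differ.

        import numpy as np

        def time_exposed_ttc(x_lead, x_follow, v_lead, v_follow, dt=0.1, ttc_threshold=3.0):
            """Per-step TTC = spacing / closing speed; returns the TTC series and exposure time."""
            gap = np.asarray(x_lead, float) - np.asarray(x_follow, float)
            closing = np.asarray(v_follow, float) - np.asarray(v_lead, float)
            ttc = np.full_like(gap, np.inf)
            approaching = closing > 0
            ttc[approaching] = gap[approaching] / closing[approaching]
            exposure = np.sum(ttc < ttc_threshold) * dt        # seconds in the unsafe regime
            return ttc, exposure

        # toy trajectories sampled at 10 Hz: follower at 25 m/s closes on a leader at 20 m/s
        t = np.arange(0, 10, 0.1)
        ttc, exposure = time_exposed_ttc(x_lead=50 + 20 * t, x_follow=25 * t,
                                         v_lead=np.full_like(t, 20.0),
                                         v_follow=np.full_like(t, 25.0))
        print(f"minimum TTC: {ttc.min():.1f} s, time exposed below threshold: {exposure:.1f} s")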

  5. Critical ratios of beluga whales (Delphinapterus leucas) and masked signal duration.

    PubMed

    Erbe, Christine

    2008-10-01

    This article examines the masking of a complex beluga vocalization by natural and anthropogenic noise. The call consisted of six 150 ms pulses exhibiting spectral peaks between 800 Hz and 8 kHz. Comparing the spectra and spectrograms of the call and noises at detection threshold showed that the animal did not hear the entire call at threshold. It only heard parts of the call in frequency and time. From the masked hearing thresholds in broadband continuous noises, critical ratios were computed. Fletcher critical bands were narrower than either 1/5 or 1/11 of an octave at the low frequencies of the call (<2 kHz), depending on which frequency the animal cued on. From the masked hearing thresholds in intermittent noises, the audible signal duration at detection threshold was computed. The intermittent noises differed in gap length, gap number, and masking, but the total audible signal duration at threshold was the same: 660 ms. This observation supports a multiple-looks model. The two amplitude-modulated noises exhibited weaker masking than the unmodulated noises, hinting at a comodulation masking release.

  6. Sequence of Changes in Maize Responding to Soil Water Deficit and Related Critical Thresholds

    PubMed Central

    Ma, Xueyan; He, Qijin; Zhou, Guangsheng

    2018-01-01

    The sequence of changes in crops responding to soil water deficit and the related critical thresholds are essential for better drought damage classification and drought monitoring indicators. This study aimed to investigate the critical thresholds of maize growth and physiological characteristics responding to changing soil water and to reveal the sequence of changes in maize responding to soil water deficit in both the seedling and jointing stages, based on a 2-year maize field experiment with six initial soil water statuses conducted in 2013 and 2014. Normal distribution tolerance limits were newly adopted to identify critical thresholds of maize growth and physiological characteristics over a wide range of soil water statuses. The results showed that in both stages maize growth characteristics related to plant water status [stem moisture content (SMC) and leaf moisture content (LMC)], leaf gas exchange [net photosynthetic rate (Pn), transpiration rate (Tr), and stomatal conductance (Gs)], and leaf area were sensitive to soil water deficit, while biomass-related characteristics were less sensitive. Under the concurrent weather conditions and agronomic management, the critical soil water thresholds in terms of relative soil moisture of 0–30 cm depth (RSM) for maize SMC, LMC, net Pn, Tr, Gs, and leaf area were 72, 65, 62, 60, 58, and 46%, respectively, in the seedling stage, and 64, 64, 51, 53, 48, and 46%, respectively, in the jointing stage. This indicates that there is a sequence of changes in maize responding to soil water deficit, i.e., the response sequence as soil water deficit intensified was: SMC ≥ LMC > leaf gas exchange > leaf area in both stages. This sequence of changes in maize responding to soil water deficit and the related critical thresholds may be better indicators for damage classification and drought monitoring. PMID:29765381

  7. Harm is all you need? Best interests and disputes about parental decision-making

    PubMed Central

    Birchley, Giles

    2016-01-01

    A growing number of bioethics papers endorse the harm threshold when judging whether to override parental decisions. Among other claims, these papers argue that the harm threshold is easily understood by lay and professional audiences and correctly conforms to societal expectations of parents in regard to their children. English law contains a harm threshold which mediates the use of the best interests test in cases where a child may be removed from her parents. Using Diekema's seminal paper as an example, this paper explores the proposed workings of the harm threshold. I use examples from the practical use of the harm threshold in English law to argue that the harm threshold is an inadequate answer to the indeterminacy of the best interests test. I detail two criticisms: First, the harm standard has evaluative overtones and judges are loath to employ it where parental behaviour is misguided but they wish to treat parents sympathetically. Thus, by focusing only on ‘substandard’ parenting, harm is problematic where the parental attempts to benefit their child are misguided or wrong, such as in disputes about withdrawal of medical treatment. Second, when harm is used in genuine dilemmas, court judgments offer different answers to similar cases. This level of indeterminacy suggests that, in practice, the operation of the harm threshold would be indistinguishable from best interests. Since indeterminacy appears to be the greatest problem in elucidating what is best, bioethicists should concentrate on discovering the values that inform best interests. PMID:26401048

  8. Comparing the locking threshold for rings and chains of oscillators.

    PubMed

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-12-01

    We present a case study of how topology can affect synchronization. Specifically, we consider arrays of phase oscillators coupled in a ring or a chain topology. Each ring is perfectly matched to a chain with the same initial conditions and the same random natural frequencies. The only difference is their boundary conditions: periodic for a ring and open for a chain. For both topologies, stable phase-locked states exist if and only if the spread or "width" of the natural frequencies is smaller than a critical value called the locking threshold (which depends on the boundary conditions and the particular realization of the frequencies). The central question is whether a ring synchronizes more readily than a chain. We show that it usually does, but not always. Rigorous bounds are derived for the ratio between the locking thresholds of a ring and its matched chain, for a variant of the Kuramoto model that also includes a wider family of models.
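
    A minimal sketch (not the authors' code) of the comparison described above: nearest-neighbour Kuramoto oscillators, dθ_i/dt = ω_i + K Σ sin(θ_j − θ_i), integrated with the same frequencies and initial conditions on a ring and on a chain, then tested for frequency locking. The oscillator count, coupling strength and locking tolerance are assumed values.

        import numpy as np

        rng = np.random.default_rng(1)
        n, K, dt, steps = 20, 1.0, 0.01, 100_000
        omega = rng.uniform(-0.4, 0.4, n)
        omega -= omega.mean()                              # work in the rotating frame

        def velocity(theta, periodic):
            drive = np.zeros(n)
            drive[:-1] += np.sin(theta[1:] - theta[:-1])   # coupling to right neighbour
            drive[1:] += np.sin(theta[:-1] - theta[1:])    # coupling to left neighbour
            if periodic:                                   # the one extra edge that closes the ring
                drive[0] += np.sin(theta[-1] - theta[0])
                drive[-1] += np.sin(theta[0] - theta[-1])
            return omega + K * drive

        def locks(periodic):
            theta = np.zeros(n)                            # identical initial conditions
            for _ in range(steps):
                theta += dt * velocity(theta, periodic)    # forward Euler
            return np.ptp(velocity(theta, periodic)) < 1e-3   # all frequencies equal?

        print("ring locks:", locks(True), "| chain locks:", locks(False))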

  9. Enhancement of high dielectric permittivity in CaCu3Ti4O12/RuO2 composites in the vicinity of the percolation threshold

    NASA Astrophysics Data System (ADS)

    Mukherjee, Rupam; Lawes, Gavin; Nadgorny, Boris

    2014-08-01

    We observe a large enhancement in the dielectric permittivity near the percolation threshold in a composite nanoparticle system consisting of metallic RuO2 grains embedded in a CaCu3Ti4O12 (CCTO) matrix and annealed at 1100 °C. To understand the nature of the dielectric response, we prepared CCTO by using standard solid state and sol-gel processes, with the relative permittivity found to be on the order of 10^3-10^4 at 10 kHz. For RuO2/CCTO composites, an increase in the real part of the dielectric permittivity by approximately an order of magnitude is observed in the vicinity of the percolation threshold, with moderate losses at room temperature. The critical exponents of the dielectric permittivity and conductivity of these composites are lower than the universal value (0.8-1). In these composite systems, both Maxwell-Wagner and percolation effects have been found responsible for the enhancement of the dielectric permittivity.

  10. Comparing the locking threshold for rings and chains of oscillators

    NASA Astrophysics Data System (ADS)

    Ottino-Löffler, Bertrand; Strogatz, Steven H.

    2016-12-01

    We present a case study of how topology can affect synchronization. Specifically, we consider arrays of phase oscillators coupled in a ring or a chain topology. Each ring is perfectly matched to a chain with the same initial conditions and the same random natural frequencies. The only difference is their boundary conditions: periodic for a ring and open for a chain. For both topologies, stable phase-locked states exist if and only if the spread or "width" of the natural frequencies is smaller than a critical value called the locking threshold (which depends on the boundary conditions and the particular realization of the frequencies). The central question is whether a ring synchronizes more readily than a chain. We show that it usually does, but not always. Rigorous bounds are derived for the ratio between the locking thresholds of a ring and its matched chain, for a variant of the Kuramoto model that also includes a wider family of models.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelson, Alan S; Gerrish, Philip J

    The constructive creativity of natural selection originates from its paradoxical ability to foster cooperation through competition. Cooperating communities ranging from complex societies to somatic tissue are constantly under attack, however, by non-cooperating mutants or transformants, called 'cheaters'. Structure in these communities promotes the formation of cooperating clusters whose competitive superiority can alone be sufficient to thwart outgrowths of cheaters and thereby maintain cooperation. But we find that when cheaters appear too frequently -- exceeding a threshold mutation or transformation rate -- their scattered outgrowths infiltrate and break up cooperating clusters, resulting in a cascading loss of community integrity, a switch to net positive selection for cheaters, and ultimately in the loss of cooperation. We find that this threshold mutation rate is directly proportional to the fitness support received from each cooperating neighbor minus the individual fitness benefit of cheating. When mutation rate also evolves, this threshold is crossed spontaneously after thousands of generations, at which point cheaters rapidly invade. In a structured community, cooperation can persist only if the mutation rate remains below a critical value.

  12. Percolation Laws of a Fractal Fracture-Pore Double Medium

    NASA Astrophysics Data System (ADS)

    Zhao, Yangsheng; Feng, Zengchao; Lv, Zhaoxing; Zhao, Dong; Liang, Weiguo

    2016-12-01

    The fracture-pore double-porosity medium is one of the most common media in nature, for example, rock mass in strata. Fractures have a more significant effect on fluid flow than pores in a fracture-pore double-porosity medium. Hence, the fracture effect on percolation should be considered when studying percolation phenomena in porous media. In this paper, based on the fractal distribution law of three-dimensional (3D) fracture surfaces and two-dimensional (2D) fracture traces in rock mass, the locations of fracture surfaces or traces are determined using a random function with a uniform distribution. Pores are then superimposed to build a fractal fracture-pore double medium. Numerical experiments were performed to show percolation phenomena in the fracture-pore double medium. The percolation threshold can be determined from three independent variables (porosity n, fracture fractal dimension D, and initial value of the fracture number N0). Once any two of them are fixed, the percolation probability exhibits a critical point as the remaining parameter is varied. When the initial value of the fracture number is greater than zero, the percolation threshold in the fracture-pore medium is much smaller than that in a pore medium. When the fracture number equals zero, the fracture-pore medium degenerates to a pore medium, and both percolation thresholds are the same.
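
    A deliberately simplified sketch of how a percolation threshold can be located numerically: only the pore part of the medium is kept (the fractal fracture network is omitted), pore voxels are placed at random with porosity n, and each realisation is checked for a cluster spanning the sample. The lattice size and porosity scan are illustrative.

        import numpy as np
        from scipy.ndimage import label

        rng = np.random.default_rng(4)
        L = 40                                        # lattice size (illustrative)

        def percolates(porosity):
            pores = rng.random((L, L, L)) < porosity
            labels, _ = label(pores)                  # face-connected clusters of pore voxels
            top = set(np.unique(labels[0])) - {0}
            bottom = set(np.unique(labels[-1])) - {0}
            return bool(top & bottom)                 # does one cluster touch both faces?

        for n in (0.20, 0.25, 0.30, 0.35, 0.40):
            hits = sum(percolates(n) for _ in range(10))
            print(f"porosity {n:.2f}: spanning cluster in {hits}/10 realisations")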

  13. Recent Results on "Approximations to Optimal Alarm Systems for Anomaly Detection"

    NASA Technical Reports Server (NTRS)

    Martin, Rodney Alexander

    2009-01-01

    An optimal alarm system and its approximations may use Kalman filtering for univariate linear dynamic systems driven by Gaussian noise to provide a layer of predictive capability. Predicted Kalman filter future process values and a fixed critical threshold can be used to construct a candidate level-crossing event over a predetermined prediction window. An optimal alarm system can be designed to elicit the fewest false alarms for a fixed detection probability in this particular scenario.
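
    A hedged sketch of the idea (not NASA's implementation): for a scalar Gauss-Markov process tracked by a Kalman filter, propagate the filtered mean and variance over the prediction window, approximate the probability of crossing the fixed critical threshold, and alarm when that probability exceeds a design value. The model coefficients, the critical level and the independence approximation across steps are all assumptions.

        import numpy as np
        from scipy.stats import norm

        a, q = 0.95, 0.1             # state transition and process-noise variance (assumed)
        L_crit, p_design = 3.0, 0.2  # critical threshold and alarm design probability (assumed)

        def alarm(x_hat, P_hat, window=10):
            """x_hat, P_hat: Kalman-filtered state mean and variance at the current step."""
            p_no_cross, m, v = 1.0, x_hat, P_hat
            for _ in range(window):
                m, v = a * m, a * a * v + q                       # d-step-ahead prediction
                p_exceed = 1.0 - norm.cdf(L_crit, loc=m, scale=np.sqrt(v))
                p_no_cross *= 1.0 - p_exceed                      # crude independence approximation
            return (1.0 - p_no_cross) > p_design                  # level-crossing alarm decision

        print(alarm(x_hat=1.5, P_hat=0.3))                        # toy filtered estimate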

  14. Exploring childhood lead exposure through GIS: a review of the recent literature.

    PubMed

    Akkus, Cem; Ozdenerol, Esra

    2014-06-18

    Childhood exposure to lead remains a critical health control problem in the US. Integration of Geographic Information Systems (GIS) into childhood lead exposure studies has significantly enhanced the identification of lead hazards in the environment and the determination of at-risk children. Research indicates that the toxic threshold for lead exposure was updated three times in the last four decades: from 60 to 30 micrograms per deciliter (µg/dL) in 1975, 25 µg/dL in 1985, and 10 µg/dL in 1991. These changes revealed the extent of lead poisoning. By 2012 it was evident that no safe blood lead threshold for the adverse effects of lead on children had been identified, and the Centers for Disease Control and Prevention (CDC) currently uses a reference value of 5 µg/dL. Review of the recent literature on GIS-based studies suggests that numerous environmental risk factors might be critical for lead exposure. New GIS-based studies are used in surveillance data management, risk analysis, lead exposure visualization, and community intervention strategies where geographically-targeted, specific intervention measures are taken.

  15. Exploring Childhood Lead Exposure through GIS: A Review of the Recent Literature

    PubMed Central

    Akkus, Cem; Ozdenerol, Esra

    2014-01-01

    Childhood exposure to lead remains a critical health control problem in the US. Integration of Geographic Information Systems (GIS) into childhood lead exposure studies has significantly enhanced the identification of lead hazards in the environment and the determination of at-risk children. Research indicates that the toxic threshold for lead exposure was updated three times in the last four decades: from 60 to 30 micrograms per deciliter (µg/dL) in 1975, 25 µg/dL in 1985, and 10 µg/dL in 1991. These changes revealed the extent of lead poisoning. By 2012 it was evident that no safe blood lead threshold for the adverse effects of lead on children had been identified, and the Centers for Disease Control and Prevention (CDC) currently uses a reference value of 5 µg/dL. Review of the recent literature on GIS-based studies suggests that numerous environmental risk factors might be critical for lead exposure. New GIS-based studies are used in surveillance data management, risk analysis, lead exposure visualization, and community intervention strategies where geographically-targeted, specific intervention measures are taken. PMID:24945189

  16. STOCHASTICITY AND EFFICIENCY IN SIMPLIFIED MODELS OF CORE-COLLAPSE SUPERNOVA EXPLOSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardall, Christian Y.; Budiardja, Reuben D., E-mail: cardallcy@ornl.gov, E-mail: reubendb@utk.edu

    2015-11-01

    We present an initial report on 160 simulations of a highly simplified model of the post-bounce core-collapse supernova environment in three spatial dimensions (3D). We set different values of a parameter characterizing the impact of nuclear dissociation at the stalled shock in order to regulate the post-shock fluid velocity, thereby determining the relative importance of convection and the stationary accretion shock instability (SASI). While our convection-dominated runs comport with the paradigmatic notion of a “critical neutrino luminosity” for explosion at a given mass accretion rate (albeit with a nontrivial spread in explosion times just above threshold), the outcomes of our SASI-dominated runs are much more stochastic: a sharp threshold critical luminosity is “smeared out” into a rising probability of explosion over a ∼20% range of luminosity. We also find that the SASI-dominated models are able to explode with 3–4 times less efficient neutrino heating, indicating that progenitor properties, and fluid and neutrino microphysics, conducive to the SASI would make the neutrino-driven explosion mechanism more robust.

  17. Stochasticity and efficiency of convection-dominated vs. SASI-dominated supernova explosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-10-22

    We present an initial report on 160 simulations of a highly simplified model of the post-bounce supernova environment in three position space dimensions (3D). We set different values of a parameter characterizing the impact of nuclear dissociation at the stalled shock in order to regulate the post-shock fluid velocity, thereby determining the relative importance of convection and the stationary accretion shock instability (SASI). While our convection-dominated runs comport with the paradigmatic notion of a `critical neutrino luminosity' for explosion at a given mass accretion rate (albeit with a nontrivial spread in explosion times just above threshold), the outcomes of our SASI-dominated runs are more stochastic: a sharp threshold critical luminosity is `smeared out' into a rising probability of explosion over a ∼20% range of luminosity. We also find that the SASI-dominated models are able to explode with 3 to 4 times less efficient neutrino heating, indicating that progenitor properties, and fluid and neutrino microphysics, conducive to the SASI would make the neutrino-driven explosion mechanism more robust.

  18. Threshold of microvascular occlusion: injury size defines the thrombosis scenario.

    PubMed

    Belyaev, Aleksey V; Panteleev, Mikhail A; Ataullakhanov, Fazly I

    2015-07-21

    Damage to the blood vessel triggers formation of a hemostatic plug, which is meant to prevent bleeding, yet the same phenomenon may result in a total blockade of a blood vessel by a thrombus, causing severe medical conditions. Here, we show that the physical interplay between platelet adhesion and hemodynamics in a microchannel manifests in a critical threshold behavior of a growing thrombus. Depending on the size of injury, two distinct dynamic pathways of thrombosis were found: the formation of a nonocclusive plug, if injury length does not exceed the critical value, and the total occlusion of the vessel by the thrombus otherwise. We develop a mathematical model that demonstrates that switching between these regimes occurs as a result of a saddle-node bifurcation. Our study reveals the mechanism of self-regulation of thrombosis in blood microvessels and explains experimentally observed distinctions between thrombi of different physical etiology. This also can be useful for the design of platelet-aggregation-inspired engineering solutions. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  19. 41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...

  20. 41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...

  1. 41 CFR 102-73.40 - What happens if the dollar value of the project exceeds the prospectus threshold?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... dollar value of the project exceeds the prospectus threshold? 102-73.40 Section 102-73.40 Public... § 102-73.40 What happens if the dollar value of the project exceeds the prospectus threshold? Projects... the prospectus threshold. To obtain this approval, the Administrator of General Services will transmit...

  2. Methods for automatic trigger threshold adjustment

    DOEpatents

    Welch, Benjamin J; Partridge, Michael E

    2014-03-18

    Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level and adjusting the threshold values by an offset computed from the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented using time-based or counter-based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
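
    A hedged sketch of the drift-compensation scheme described above (the threshold values, offsets and qualification width are illustrative, not the patented parameters): the quiescent level is re-measured, both thresholds are shifted by the measured drift, and a trigger is only issued after the criterion has been met a required number of consecutive samples.

        class DriftCompensatedTrigger:
            def __init__(self, upper, lower, quiescent0, qual_width=3):
                self.upper0, self.lower0 = upper, lower      # initial threshold values
                self.quiescent0 = quiescent0                 # quiescent level measured at setup
                self.upper, self.lower = upper, lower
                self.qual_width = qual_width
                self._count = 0

            def recalibrate(self, quiescent_now):
                """Shift both thresholds by the measured drift of the quiescent level."""
                offset = quiescent_now - self.quiescent0
                self.upper = self.upper0 + offset
                self.lower = self.lower0 + offset

            def sample(self, value):
                """Return True when a data-recording event should be initiated."""
                if value > self.upper or value < self.lower:
                    self._count += 1                         # trigger criterion met this sample
                else:
                    self._count = 0                          # any quiet sample resets qualification
                return self._count >= self.qual_width

        trig = DriftCompensatedTrigger(upper=1.0, lower=-1.0, quiescent0=0.0)
        trig.recalibrate(quiescent_now=0.2)                  # baseline has drifted upward
        print([trig.sample(v) for v in (0.5, 1.4, 1.5, 1.6)])   # -> [False, False, False, True]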

  3. Reduced critical rotation for resistive-wall mode stabilization in a near-axisymmetric configuration.

    PubMed

    Reimerdes, H; Garofalo, A M; Jackson, G L; Okabayashi, M; Strait, E J; Chu, M S; In, Y; La Haye, R J; Lanctot, M J; Liu, Y Q; Navratil, G A; Solomon, W M; Takahashi, H; Groebner, R J

    2007-02-02

    Recent DIII-D experiments with reduced neutral beam torque and minimum nonaxisymmetric perturbations of the magnetic field show a significant reduction of the toroidal plasma rotation required for the stabilization of the resistive-wall mode (RWM) below the threshold values observed in experiments that apply nonaxisymmetric magnetic fields to slow the plasma rotation. A toroidal rotation frequency of less than 10 krad/s at the q=2 surface (measured with charge exchange recombination spectroscopy using C VI) corresponding to 0.3% of the inverse of the toroidal Alfvén time is sufficient to sustain the plasma pressure above the ideal MHD no-wall stability limit. The low-rotation threshold is found to be consistent with predictions by a kinetic model of RWM damping.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Burghoff, David; Reno, John

    Frequency combs based on quantum cascade lasers (QCLs) are finding promising applications in high-speed broadband spectroscopy in the terahertz regime, where many molecules have their "fingerprints". To form stable combs in QCLs, effective control of group velocity dispersion plays a critical role. The dispersion of the QCL cavity has two main parts: a static part from the material and a dynamic part from the intersubband transitions. Unlike the gain, which is clamped to a fixed value above the lasing threshold, the dispersion associated with the intersubband transitions changes with bias even above threshold, and this reduces the dynamic range of comb formation. Here, by incorporating tunability into the dispersion compensator, we demonstrate a QCL device exhibiting comb operation from I_th to I_max, which greatly expands the operation range of the frequency combs.

  5. Critical thresholds and recovery of Chihuahuan Desert grasslands: Insights from long-term data

    USDA-ARS?s Scientific Manuscript database

    Background/Question/Methods: Desertification and other harmful state transitions in drylands are expected to accelerate with global change. Ecologists are called upon to devise methods to anticipate critical thresholds and promote recovery of desired states. As in other drylands, transitions in sem...

  6. WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, H; Yeung, I; Milosevic, M

    2016-06-15

    Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic) voxels, the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximum-likelihood optimization that does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize for the weights of both distributions, but may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences to threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
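
    A hedged sketch of the surrogate-free, maximum-likelihood alternative: the voxel uptake histogram is modelled as a two-component mixture and the hypoxic fraction is read off the fitted weight of the high-uptake component. A Gaussian mixture and the simulated uptake values stand in for the paper's (unspecified) component distributions and patient data.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        normoxic = rng.normal(1.0, 0.15, 8000)       # simulated uptake of normoxic/necrotic voxels
        hypoxic = rng.normal(1.8, 0.25, 2000)        # simulated hypoxic voxels (true HF = 0.2)
        uptake = np.concatenate([normoxic, hypoxic]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, random_state=0).fit(uptake)
        hypoxic_component = int(np.argmax(gmm.means_.ravel()))   # higher-uptake component
        print(f"estimated hypoxic fraction: {gmm.weights_[hypoxic_component]:.2f}")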

  7. [The new German general threshold limit value for dust--pro and contra the adoption in Austria].

    PubMed

    Godnic-Cvar, Jasminka; Ponocny, Ivo

    2004-01-01

    Since it has been realised that inhalation of inert dust is one of the important confounding variables for the development of chronic bronchitis, the threshold values for occupational exposure to these dusts need to be further decreased. The German Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (MAK-Commission) set a new threshold (MAK-Value) for inert dusts (4 mg/m3 for inhalable dust, 1.5 mg/m3 for respirable dust) in 1997. This value is much lower than the threshold values currently used world-wide. The aim of the present article is to assess the scientific plausibility of the methodology (databases and statistics) used to set these new German MAK-Values, with regard to their adoption in Austria. Although we believe that it is essential to lower the MAK-Value for inert dust in order to prevent the development of chronic bronchitis as a consequence of occupational exposure to inert dusts, the methodology applied by the German MAK-Commission in 1997 to set the new MAK-Values does not justify the reduction of the threshold limit value. A carefully designed study to establish an appropriate scientific basis for setting a new threshold value for inert dusts in the workplace should be carried out. Meanwhile, at least the currently internationally applied threshold values should be adopted in Austria.

  8. Measure synchronization in a Huygens's non-dissipative two-pendulum clocks system

    NASA Astrophysics Data System (ADS)

    Tian, Jing; Chen, ZiChen; Qiu, HaiBo; Xi, XiaoQiang

    2018-01-01

    In this paper, we characterize measure synchronization (MS) in a four-degrees-of-freedom Huygens's two-pendulum clocks system. The two pendulum clocks are connected by a massless spring with stiffness constant k. We find that as the stiffness constant k increases, the coupled pendulum system achieves MS above a threshold value k_c. The energy characteristics of measure synchronization are discussed; it is found that the averaged energy of each pendulum provides an easy way to characterize the MS transition. Furthermore, we discuss the dependence of the critical value for the MS transition on the initial conditions and the characteristic parameters of the system.

  9. How to Assess the Value of Medicines?

    PubMed Central

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066

  10. How to assess the value of medicines?

    PubMed

    Simoens, Steven

    2010-01-01

    This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value.
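
    A minimal worked example of the ICER logic described above; the costs, effects and the willingness-to-pay threshold are illustrative numbers only.

        def icer(cost_new, effect_new, cost_old, effect_old):
            """Incremental cost-effectiveness ratio, e.g. cost per QALY gained."""
            return (cost_new - cost_old) / (effect_new - effect_old)

        threshold = 30_000   # assumed willingness-to-pay per QALY
        ratio = icer(cost_new=52_000, effect_new=6.2, cost_old=40_000, effect_old=5.7)
        decision = "adopt" if ratio <= threshold else "reject"
        print(f"ICER = {ratio:,.0f} per QALY -> {decision}")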

  11. A strategy to minimize the energy offset in carrier injection from excited dyes to inorganic semiconductors for efficient dye-sensitized solar energy conversion.

    PubMed

    Fujisawa, Jun-Ichi; Osawa, Ayumi; Hanaya, Minoru

    2016-08-10

    Photoinduced carrier injection from dyes to inorganic semiconductors is a crucial process in various dye-sensitized solar energy conversions such as photovoltaics and photocatalysis. It has been reported that an energy offset larger than 0.2-0.3 eV (threshold value) is required for efficient electron injection from excited dyes to metal-oxide semiconductors such as titanium dioxide (TiO2). Because the energy offset directly causes loss in the potential of injected electrons, minimizing the energy offset is a crucial issue for efficient solar energy conversion. However, a fundamental understanding of the energy offset, especially the threshold value, has not yet been obtained. In this paper, we report the origin of the threshold value of the energy offset, solving the long-standing questions of why such a large energy offset is necessary for electron injection and which factors govern the threshold value, and suggest a strategy to minimize the threshold value. The threshold value is determined by the sum of two reorganization energies in the one-electron reduction of semiconductors and typically used donor-acceptor (D-A) dyes. In fact, the estimated values (0.21-0.31 eV) for several D-A dyes are in good agreement with the threshold value, supporting our conclusion. In addition, our results reveal that the threshold value can be reduced by enlarging the π-conjugated system of the acceptor moiety in dyes and enhancing its structural rigidity. Furthermore, we extend the analysis to hole injection from excited dyes to semiconductors. In this case, the threshold value is given by the sum of two reorganization energies in the one-electron oxidation of semiconductors and D-A dyes.
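
    In compact form (our notation, not the authors'), the stated thresholds for the two injection directions read:

        \Delta E^{\mathrm{el}}_{\mathrm{offset}} \gtrsim \lambda^{\mathrm{SC}}_{\mathrm{red}} + \lambda^{\mathrm{dye}}_{\mathrm{red}},
        \qquad
        \Delta E^{\mathrm{hole}}_{\mathrm{offset}} \gtrsim \lambda^{\mathrm{SC}}_{\mathrm{ox}} + \lambda^{\mathrm{dye}}_{\mathrm{ox}},

    where the λ terms are the reorganization energies for one-electron reduction (electron injection) or oxidation (hole injection) of the semiconductor and of the D-A dye, respectively.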

  12. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    PubMed

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005), doi:10.1103/PhysRevE.72.046211]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.

  13. Discontinuous non-equilibrium phase transition in a threshold Schloegl model for autocatalysis: Generic two-phase coexistence and metastability

    NASA Astrophysics Data System (ADS)

    Wang, Chi-Jen; Liu, Da-Jiang; Evans, James W.

    2015-04-01

    Threshold versions of Schloegl's model on a lattice, which involve autocatalytic creation and spontaneous annihilation of particles, can provide a simple prototype for discontinuous non-equilibrium phase transitions. These models are equivalent to so-called threshold contact processes. A discontinuous transition between populated and vacuum states can occur when selecting a threshold of N ≥ 2 for the minimum number, N, of neighboring particles enabling autocatalytic creation at an empty site. Fundamental open questions remain given the lack of a thermodynamic framework for the analysis. For a square lattice with N = 2, we show that phase coexistence occurs not at a unique value but over a finite range of the particle annihilation rate (the natural control parameter). This generic two-phase coexistence also persists when the model is perturbed to allow spontaneous particle creation. Such behavior contrasts with both the Gibbs phase rule for thermodynamic systems and previous analyses of this model. We find metastability near the transition corresponding to a non-zero effective line tension, also contrasting with previously suggested critical behavior. Mean-field type analysis, extended to treat spatially heterogeneous states, further elucidates the model behavior.
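
    A toy sketch of the N ≥ 2 threshold contact process on a square lattice with periodic boundaries; a synchronous update with per-step probabilities standing in for the continuous-time rates is used for brevity, so the location of the transition found here is only qualitative.

        import numpy as np

        rng = np.random.default_rng(3)
        L, steps, N_thresh, dt = 128, 500, 2, 0.5    # lattice, time steps, threshold, rate-to-probability factor

        def stationary_density(p_annihilate, rho0=1.0):
            occ = rng.random((L, L)) < rho0          # start from a fully populated state
            for _ in range(steps):
                occ_int = occ.astype(int)
                nbrs = (np.roll(occ_int, 1, 0) + np.roll(occ_int, -1, 0) +
                        np.roll(occ_int, 1, 1) + np.roll(occ_int, -1, 1))    # occupied nearest neighbours
                create = (~occ) & (nbrs >= N_thresh) & (rng.random((L, L)) < dt)
                kill = occ & (rng.random((L, L)) < dt * p_annihilate)        # spontaneous annihilation
                occ = (occ | create) & ~kill
            return occ.mean()

        for p in (0.05, 0.1, 0.2, 0.4, 0.8):
            print(f"annihilation rate p = {p:.2f}: density ~ {stationary_density(p):.3f}")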

  14. Structured decision making as a conceptual framework to identify thresholds for conservation and management

    USGS Publications Warehouse

    Martin, J.; Runge, M.C.; Nichols, J.D.; Lubow, B.C.; Kendall, W.L.

    2009-01-01

    Thresholds and their relevance to conservation have become a major topic of discussion in the ecological literature. Unfortunately, in many cases the lack of a clear conceptual framework for thinking about thresholds may have led to confusion in attempts to apply the concept of thresholds to conservation decisions. Here, we advocate a framework for thinking about thresholds in terms of a structured decision making process. The purpose of this framework is to promote a logical and transparent process for making informed decisions for conservation. Specification of such a framework leads naturally to consideration of definitions and roles of different kinds of thresholds in the process. We distinguish among three categories of thresholds. Ecological thresholds are values of system state variables at which small changes bring about substantial changes in system dynamics. Utility thresholds are components of management objectives (determined by human values) and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. The approach that we present focuses directly on the objectives of management, with an aim to providing decisions that are optimal with respect to those objectives. This approach clearly distinguishes the components of the decision process that are inherently subjective (management objectives, potential management actions) from those that are more objective (system models, estimates of system state). Optimization based on these components then leads to decision matrices specifying optimal actions to be taken at various values of system state variables. Values of state variables separating different actions in such matrices are viewed as decision thresholds. Utility thresholds are included in the objectives component, and ecological thresholds may be embedded in models projecting consequences of management actions. Decision thresholds are determined by the above-listed components of a structured decision process. These components may themselves vary over time, inducing variation in the decision thresholds inherited from them. These dynamic decision thresholds can then be determined using adaptive management. We provide numerical examples (that are based on patch occupancy models) of structured decision processes that include all three kinds of thresholds. © 2009 by the Ecological Society of America.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xi, P. W.; Lawrence Livermore National Laboratory, Livermore, California 94550; Xu, X. Q.

    We demonstrate that the occurrence of Edge-Localized-Mode (ELM) crashes does not depend only on the linear peeling-ballooning threshold, but also relies on nonlinear processes. Wave-wave interaction constrains the growth time of a mode, thus inducing a shift in the criterion for triggering an ELM crash. An ELM crash requires the P-B growth rate to exceed a critical value γ > γ_c, where γ_c is set by 1/τ̄_c, and τ̄_c is the averaged mode phase coherence time.

  16. Evaluating time dynamics of topographic threshold relations for gully initiation

    NASA Astrophysics Data System (ADS)

    Hayas, Antonio; Vanwalleghem, Tom; Poesen, Jean

    2016-04-01

    Gully erosion is one of the most important soil degradation processes at the global scale. However, modelling of gully erosion is still difficult. Despite advances in the modelling of gully headcut rates and incision rates, it remains difficult to predict the location of gully initiation points and trajectories. Different studies have demonstrated that a good method of predicting gully initiation is the use of a slope (S) - area (A) threshold. Such an S-A relation is a simple way of estimating the critical discharges needed to generate a critical shear stress that can incise a particular soil and initiate a gully. As such, the simple S-A threshold will vary if the rainfall-runoff behaviour of the soil changes or if the soil's erodibility changes. Over the past decades, important agronomic changes have produced significant changes in soil use and soil management in SW Spain. It is the objective of this research to evaluate how S-A relations for gully initiation have changed over time and for two different land uses, cereal and olive. Data were collected for a gully network in the Cordoba Province, SW Spain. From photo-interpretation of historical air photos between 1956 and 2013, the gully network and initiation points were derived. In total 10 different time steps are available (1956; 1977; 1984; 1998; 2001; 2004; 2006; 2008; 2010; 2013). Topographical thresholds were extracted by combining the digitized gully network with the DEM. Due to small differences in the alignment of orthophotos and DEM, an optimization technique was developed in GIS to extract the correct S-A value for each point. With the S-A values for each year, their dynamics were evaluated as a function of land use (olive or cereal) and of the following variables in each of the periods considered: soil management; soil cover by weeds, where weed growth was modeled from the daily soil water balance; rainfall intensity; and root cohesion, where root growth was modeled from the daily soil water balance. We found important differences between cereal and olive and significant changes in the S-A relation over time.
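
    A hedged sketch of one common way to extract an S-A threshold from digitized gully initiation points: fit a power law S = a·A^(-b) in log-log space and shift it down to a lower-envelope quantile of the points. The toy data and the quantile choice are illustrative; the procedure actually used in the study may differ.

        import numpy as np

        def sa_threshold(slope, area, quantile=0.05):
            """Return (a, b) of the critical line S = a * A**(-b)."""
            logA = np.log10(np.asarray(area, dtype=float))
            logS = np.log10(np.asarray(slope, dtype=float))
            m, intercept = np.polyfit(logA, logS, 1)                       # central trend logS = m*logA + c
            shift = np.quantile(logS - (m * logA + intercept), quantile)   # drop to the lower envelope
            return 10 ** (intercept + shift), -m

        # toy initiation points: drainage area (ha) and local slope (m/m)
        area = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
        slope = [0.30, 0.22, 0.17, 0.12, 0.09, 0.07, 0.05]
        a, b = sa_threshold(slope, area)
        print(f"S = {a:.3f} * A^(-{b:.2f})")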

  17. Dusty plasma ring model

    NASA Astrophysics Data System (ADS)

    Sheridan, T. E.

    2009-12-01

    A model of a dusty plasma (Yukawa) ring is presented. We consider n identical particles confined in a two-dimensional (2D) annular potential well and interacting through a Debye (i.e. Yukawa or screened Coulomb) potential. Equilibrium configurations are computed versus n, the Debye shielding parameter and the trap radius. When the particle separation exceeds a critical value the particles form a 1D chain with a ring topology. Below the critical separation the zigzag instability gives a 2D configuration. Computed critical separations are shown to agree well with a theoretical prediction for the zigzag threshold. Normal mode spectra for 1D rings are computed and found to be in excellent agreement with the longitudinal and transverse dispersion relations for unbounded straight chains. When the longitudinal and transverse dispersion relations intersect we observe a resonance due to the finite curvature of the ring.

  18. Randomness fault detection system

    NASA Technical Reports Server (NTRS)

    Russell, B. Don (Inventor); Aucoin, B. Michael (Inventor); Benner, Carl L. (Inventor)

    1996-01-01

    A method and apparatus are provided for detecting a fault on a power line carrying a line parameter such as a load current. The apparatus monitors and analyzes the load current to obtain an energy value. The energy value is compared to a threshold value stored in a buffer. If the energy value is greater than the threshold value, a first counter is incremented. If the energy value is greater than a high-value threshold or less than a low-value threshold, a second counter is incremented. If the difference between two subsequent energy values is greater than a constant, a third counter is incremented. A fault signal is issued if the first counter is greater than a counter limit value and either the second counter is greater than a second limit value or the third counter is greater than a third limit value.
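
    A schematic reading of the counter logic described above is sketched below. The numerical thresholds, limits, and the sample energy values are placeholders for illustration, not values from the patent.

    ```python
    class RandomnessFaultDetector:
        """Toy sketch of the described counter logic (illustrative thresholds only)."""

        def __init__(self, threshold, high, low, delta, limit1, limit2, limit3):
            self.threshold, self.high, self.low, self.delta = threshold, high, low, delta
            self.limits = (limit1, limit2, limit3)
            self.counters = [0, 0, 0]
            self.prev_energy = None

        def update(self, energy):
            """Process one energy value derived from the monitored load current."""
            if energy > self.threshold:                    # counter 1: above-threshold events
                self.counters[0] += 1
            if energy > self.high or energy < self.low:    # counter 2: extreme values
                self.counters[1] += 1
            if self.prev_energy is not None and abs(energy - self.prev_energy) > self.delta:
                self.counters[2] += 1                      # counter 3: large jumps between samples
            self.prev_energy = energy
            # fault if counter 1 exceeds its limit AND (counter 2 OR counter 3 exceeds its limit)
            return (self.counters[0] > self.limits[0]
                    and (self.counters[1] > self.limits[1] or self.counters[2] > self.limits[2]))

    detector = RandomnessFaultDetector(threshold=1.0, high=5.0, low=0.1,
                                       delta=2.0, limit1=3, limit2=1, limit3=1)
    for e in [0.5, 1.2, 4.0, 6.5, 1.1, 7.0]:
        if detector.update(e):
            print("fault signal issued")
    ```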

  19. Temporal integration property of stereopsis after higher-order aberration correction

    PubMed Central

    Kang, Jian; Dai, Yun; Zhang, Yudong

    2015-01-01

    Based on a binocular adaptive optics visual simulator, we investigated the effect of higher-order aberration correction on the temporal integration property of stereopsis. Stereo threshold for line stimuli, viewed in 550 nm monochromatic light, was measured as a function of exposure duration, with higher-order aberrations uncorrected, binocularly corrected, or monocularly corrected. Under all optical conditions, the stereo threshold decreased with increasing exposure duration until a steady-state threshold was reached. The critical duration was determined with a quadratic summation model, and the high goodness of fit suggested that this model was reasonable. For normal subjects, the slope for stereo threshold versus exposure duration was about −0.5 on logarithmic coordinates, and the critical duration was about 200 ms. Both the slope and the critical duration were independent of the optical condition of the eye, showing no significant effect of higher-order aberration correction on the temporal integration property of stereopsis. PMID:26601010

  20. Orientational glasses. II. Calculation of critical thresholds in ACN_xMn_{1-x} mixtures

    NASA Astrophysics Data System (ADS)

    Galam, Serge; Depondt, Philippe

    1992-10-01

    Using a simple steric-hindrance-based idea, the critical thresholds which occur in the phase diagram of ACN_xMn_{1-x} mixtures, where A stands for K, Na or Rb while Mn represents Br, Cl or I, are calculated. The cyanide density x is divided into a free-to-reorient part x_r and a frozen-in part x_f. The latter term x_f is calculated from microscopic characteristics of the molecules involved. Two critical thresholds, x_c and x_d, for the disappearance of, respectively, ferroelastic transitions and ferroelastic domains, are obtained. The calculated values are in excellent agreement with available experimental results. Predictions are made for additional mixtures.

  1. A statistical study of current-sheet formation above solar active regions based on selforganized criticality

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M.; Anastasiadis, A.; Toutountzi, A.

    2013-09-01

    We treat flaring solar active regions as physical systems having reached the self-organized critical state. Their evolving magnetic configurations in the low corona may satisfy an instability criterion, related to the exceedance of a specific threshold in the curl of the magnetic field. This imposed instability criterion implies an almost zero resistivity everywhere in the solar corona, except in regions where magnetic-field discontinuities and, hence, local currents reach the critical value. In these areas, current-driven instabilities enhance the resistivity by many orders of magnitude, forming structures which efficiently accelerate charged particles. Simulating the formation of such structures (thought of as current sheets) via a refined SOC cellular-automaton model provides interesting information regarding their statistical properties. It is shown that the current density in such unstable regions follows power-law scaling. Furthermore, the size distribution of the produced current sheets is best fitted by power laws, whereas their formation probability is investigated against the photospheric magnetic configuration (e.g. Polarity Inversion Lines, Plage). The average fractal dimension of the produced current sheets is deduced as a function of the selected critical threshold. The above-mentioned statistical description of intermittent electric field structures can be used by collisional relativistic test particle simulations, aiming to interpret particle acceleration in flaring active regions and in strongly turbulent media in astrophysical plasmas. This work is supported by the Hellenic National Space Weather Research Network (HNSWRN) via the THALIS Programme.

  2. A test of critical thresholds and their indicators in a desertification-prone ecosystem: more resilience than we thought

    USDA-ARS?s Scientific Manuscript database

    Theoretical models predict that dryland ecosystems can cross critical thresholds after which vegetation loss is independent of initial drivers, but experimental data are nonexistent. We used a long-term (13 year) pulse-perturbation experiment featuring heavy grazing and shrub removal to determine i...

  3. The limits of thresholds: silica and the politics of science, 1935 to 1990.

    PubMed Central

    Markowitz, G; Rosner, D

    1995-01-01

    Since the 1930s threshold limit values have been presented as an objectively established measure of US industrial safety. However, there have been important questions raised regarding the adequacy of these thresholds for protecting workers from silicosis. This paper explores the historical debates over silica threshold limit values and the intense political negotiation that accompanied their establishment. In the 1930s and early 1940s, a coalition of business, public health, insurance, and political interests formed in response to a widely perceived "silicosis crisis." Part of the resulting program aimed at containing the crisis was the establishment of threshold limit values. Yet silicosis cases continued to be documented. By the 1960s these cases had become the basis for a number of revisions to the thresholds. In the 1970s, following a National Institute for Occupational Safety and Health recommendation to lower the threshold limit value for silica and to eliminate sand as an abrasive in blasting, industry fought attempts to make the existing values more stringent. This paper traces the process by which threshold limit values became part of a compromise between the health of workers and the economic interests of industry. PMID:7856788

  4. A new edge detection algorithm based on Canny idea

    NASA Astrophysics Data System (ADS)

    Feng, Yingke; Zhang, Jinmin; Wang, Siming

    2017-10-01

    The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. Firstly, median filtering and a filter based on the Euclidean distance method are applied to the image; secondly, the Frei-Chen algorithm is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied block-wise to the gradient amplitude to obtain local threshold values. The average of all computed thresholds is then taken: half of this average is used as the high threshold value, and half of the high threshold value is used as the low threshold value. Experimental results show that this new method can effectively suppress noise disturbance, keep the edge information, and also improve the edge detection accuracy.
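
    The threshold-selection step described above can be sketched roughly as follows: Otsu's method is applied block-wise to the gradient magnitude, the block thresholds are averaged, and the high/low Canny thresholds are derived from that average. The block size, the synthetic test image, and the plain-NumPy Otsu implementation are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def otsu_threshold(values, nbins=256):
        """Plain-NumPy Otsu threshold for a 1D array of non-negative values."""
        hist, edges = np.histogram(values, bins=nbins)
        p = hist.astype(float) / max(hist.sum(), 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                      # cumulative class probability
        m = np.cumsum(p * centers)             # cumulative mean
        mt = m[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
        sigma_b = np.nan_to_num(sigma_b)
        return centers[np.argmax(sigma_b)]     # maximizes between-class variance

    def canny_thresholds(grad_mag, block=64):
        """Average block-wise Otsu thresholds of the gradient magnitude;
        high = average / 2, low = high / 2, following the scheme in the abstract."""
        h, w = grad_mag.shape
        thresholds = []
        for i in range(0, h, block):
            for j in range(0, w, block):
                patch = grad_mag[i:i + block, j:j + block].ravel()
                if patch.size:
                    thresholds.append(otsu_threshold(patch))
        high = float(np.mean(thresholds)) / 2.0
        return high, high / 2.0

    # Example with a synthetic gradient-magnitude image
    grad = np.abs(np.random.default_rng(1).normal(0, 20, size=(256, 256)))
    print(canny_thresholds(grad))
    ```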

  5. Influence of drug load on dissolution behavior of tablets containing a poorly water-soluble drug: estimation of the percolation threshold.

    PubMed

    Wenzel, Tim; Stillhart, Cordula; Kleinebudde, Peter; Szepes, Anikó

    2017-08-01

    Drug load plays an important role in the development of solid dosage forms, since it can significantly influence both processability and final product properties. The percolation threshold of the active pharmaceutical ingredient (API) corresponds to a critical concentration, above which an abrupt change in drug product characteristics can occur. The objective of this study was to identify the percolation threshold of a poorly water-soluble drug with regard to the dissolution behavior from immediate release tablets. The influence of the API particle size on the percolation threshold was also studied. Formulations with increasing drug loads were manufactured via roll compaction using constant process parameters and subsequent tableting. Drug dissolution was investigated in biorelevant medium. The percolation threshold was estimated via a model-dependent and a model-independent method based on the dissolution data. The intragranular concentration of mefenamic acid had a significant effect on granule and tablet characteristics, such as particle size distribution, compactibility and tablet disintegration. Increasing the intragranular drug concentration of the tablets resulted in lower dissolution rates. A percolation threshold of approximately 20% v/v could be determined for both particle sizes of the API, above which an abrupt decrease of the dissolution rate occurred. However, the increasing drug load had a more pronounced effect on the dissolution rate of tablets containing the micronized API, which can be attributed to the high agglomeration tendency of micronized substances during manufacturing steps, such as roll compaction and tableting. Both methods applied for the estimation of the percolation threshold provided comparable values.

  6. Modeling Source Water Threshold Exceedances with Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Rajagopalan, B.; Samson, C.; Summers, R. S.

    2016-12-01

    Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. Furthermore, they also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular levels of organic matter in surface waters, which are of concern. All of these will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant, added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials, such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. The threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as a function of climate and land surface variables, thus enabling the model to capture temporal nonstationarity. We apply this framework to model threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
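
    A minimal sketch of the threshold-exceedance step is given below, assuming a fixed (stationary) threshold and using scipy's generalized Pareto fit; the covariate-dependent (nonstationary) parameters described in the abstract would require a custom likelihood, and the synthetic data stand in for a real TOC or bromide series.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(42)
    toc = rng.gamma(shape=2.0, scale=2.0, size=2000)   # synthetic stand-in for TOC (mg/L)

    threshold = np.quantile(toc, 0.95)                 # choose a high threshold (95th percentile)
    excess = toc[toc > threshold] - threshold          # exceedances above the threshold

    # Fit a Generalized Pareto Distribution to the exceedances (location fixed at 0)
    shape, loc, scale = genpareto.fit(excess, floc=0.0)
    print(f"threshold={threshold:.2f}, GPD shape={shape:.3f}, scale={scale:.3f}")

    # Estimated probability that a new observation exceeds some high level
    level = threshold + 5.0
    p_exceed = (excess.size / toc.size) * genpareto.sf(level - threshold, shape, loc=0.0, scale=scale)
    print(f"P(TOC > {level:.1f}) ≈ {p_exceed:.4f}")
    ```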

  7. Epidemic spreading with activity-driven awareness diffusion on multiplex network.

    PubMed

    Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming

    2016-04-01

    There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy for the prediction of the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explore the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only enhance the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is defined as a susceptible-infected-susceptible model, there exists a critical value where the dynamical process on the awareness layer can control the onset of epidemics; if it is a threshold model, the epidemic threshold exhibits an abrupt transition when the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer follows a threshold model. Given that the threshold model is a widely used model for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time-scales of structural changes in multiplex networks.
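
    For orientation, the sketch below computes the simplest single-layer version of this kind of threshold: the quenched mean-field (MMCA-style) SIS threshold β_c = μ/Λ_max for the contagion layer alone, where Λ_max is the largest eigenvalue of its adjacency matrix. The full multiplex calculation in the paper couples this to the awareness layer; the random network and parameters here are purely illustrative.

    ```python
    import numpy as np

    def epidemic_threshold(adj, mu):
        """Quenched mean-field SIS threshold for a single static layer:
        beta_c = mu / largest eigenvalue of the adjacency matrix.
        The multiplex calculation in the paper rescales this via awareness dynamics."""
        lam_max = np.max(np.real(np.linalg.eigvals(adj)))
        return mu / lam_max

    # Illustrative random (Erdos-Renyi-like) contagion layer
    rng = np.random.default_rng(0)
    n, p = 200, 0.05
    a = (rng.random((n, n)) < p).astype(float)
    a = np.triu(a, 1)
    a = a + a.T                      # symmetric adjacency, no self-loops
    print("estimated beta_c:", epidemic_threshold(a, mu=0.2))
    ```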

  8. Epidemic spreading with activity-driven awareness diffusion on multiplex network

    NASA Astrophysics Data System (ADS)

    Guo, Quantong; Lei, Yanjun; Jiang, Xin; Ma, Yifang; Huo, Guanying; Zheng, Zhiming

    2016-04-01

    There has been growing interest in exploring the interplay between epidemic spreading and human response, since it is natural for people to take various measures when they become aware of epidemics. As a proper way to describe the multiple connections among people in reality, the multiplex network, a set of nodes interacting through multiple sets of edges, has attracted much attention. In this paper, to explore the coupled dynamical processes, a multiplex network with two layers is built. Specifically, the information spreading layer is a time-varying network generated by the activity-driven model, while the contagion layer is a static network. We extend the microscopic Markov chain approach to derive the epidemic threshold of the model. Compared with extensive Monte Carlo simulations, the method shows high accuracy for the prediction of the epidemic threshold. Besides, taking different spreading models of awareness into consideration, we explore the interplay between epidemic spreading and awareness spreading. The results show that awareness spreading can not only enhance the epidemic threshold but also reduce the prevalence of epidemics. When the spreading of awareness is defined as a susceptible-infected-susceptible model, there exists a critical value where the dynamical process on the awareness layer can control the onset of epidemics; if it is a threshold model, the epidemic threshold exhibits an abrupt transition when the local awareness ratio α approaches 0.5. Moreover, we also find that temporal changes in the topology hinder the spread of awareness, which directly affects the epidemic threshold, especially when the awareness layer follows a threshold model. Given that the threshold model is a widely used model for social contagion, this is an important and meaningful result. Our results could also lead to interesting future research about the different time-scales of structural changes in multiplex networks.

  9. Critical phenomena at the threshold of immediate merger in binary black hole systems: The extreme mass ratio case

    NASA Astrophysics Data System (ADS)

    Gundlach, Carsten; Akcay, Sarp; Barack, Leor; Nagar, Alessandro

    2012-10-01

    In numerical simulations of black hole binaries, Pretorius and Khurana [Classical Quantum Gravity 24, S83 (2007), doi:10.1088/0264-9381/24/12/S07] have observed critical behavior at the threshold between scattering and immediate merger. The number of orbits scales as n ≃ -γ ln|p-p*| along any one-parameter family of initial data such that the threshold is at p = p*. Hence, they conjecture that in ultrarelativistic collisions almost all the kinetic energy can be converted into gravitational waves if the impact parameter is fine-tuned to the threshold. As a toy model for the binary, they consider the geodesic motion of a test particle in a Kerr black hole spacetime, where the unstable circular geodesics play the role of critical solutions, and calculate the critical exponent γ. Here, we incorporate radiation reaction into this model using the self-force approximation. The critical solution now evolves adiabatically along a sequence of unstable circular geodesic orbits under the effect of the self-force. We confirm that almost all the initial energy and angular momentum are radiated on the critical solution. Our calculation suggests that, even for infinite initial energy, this happens over a finite number of orbits given by n_∞ ≃ 0.41/η, where η is the (small) mass ratio. We derive expressions for the time spent on the critical solution, number of orbits and radiated energy as functions of the initial energy and impact parameter.
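
    As a small worked illustration of the quoted scaling law, the sketch below generates synthetic (p, n) pairs obeying n ≃ -γ ln|p - p*| and recovers the critical exponent γ by linear regression. The values of γ, p*, and the data are purely illustrative, not taken from the paper.

    ```python
    import numpy as np

    gamma_true, p_star = 0.4, 1.0                         # illustrative values only
    p = p_star + np.logspace(-6, -2, 20)                  # impact parameters approaching the threshold
    n_orbits = -gamma_true * np.log(np.abs(p - p_star))   # critical scaling n ~ -gamma ln|p - p*|

    # Recover the critical exponent gamma by regressing n against -ln|p - p*|
    slope, intercept = np.polyfit(-np.log(np.abs(p - p_star)), n_orbits, 1)
    print(f"recovered gamma ≈ {slope:.3f}")
    ```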

  10. Influence of the helium-pressure on diode-pumped alkali-vapor laser

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Chen, Fei; Xie, Ji-jiang; Zhang, Lai-ming; Li, Dian-jun; Yang, Gui-long; Guo, Jing

    2013-05-01

    The diode-pumped alkali-vapor laser (DPAL) has attracted much attention for its merits, such as high quantum efficiency, excellent beam quality, favorable thermal management, and potential scalability to high power. Based on the rate-equation theory of an end-pumped DPAL, the performance of a DPAL using Cs vapor collisionally broadened by helium is simulated and studied. With increasing helium pressure, the numerical results show that: 1) the absorption linewidth increases while the stimulated absorption cross-section decreases; 2) the threshold pumping power decreases to a minimum and then rolls over to increase linearly; 3) the absorption efficiency initially rises to a maximum, owing to a sufficiently large stimulated absorption cross-section in the far wings of the collisionally broadened D2 (absorption) transition, and then begins to decrease; 4) an optimal helium pressure exists that gives the highest output power, and hence an optimal optical-to-optical efficiency. Furthermore, for laser self-oscillation to occur, there is a critical helium pressure at which the small-signal gain equals the threshold gain.

  11. Suppressing epidemic spreading by risk-averse migration in dynamical networks

    NASA Astrophysics Data System (ADS)

    Yang, Han-Xin; Tang, Ming; Wang, Zhen

    2018-01-01

    In this paper, we study the interplay between individual behaviors and epidemic spreading in a dynamical network. We distribute agents on a square-shaped region with periodic boundary conditions. Every agent is regarded as a node of the network, and a wireless link is established between two agents if their geographical distance is less than a certain radius. At each time step, every agent assesses the epidemic situation and decides whether to stay in or leave its current place. An agent leaves its current place, moving at a given speed, if the number of infected neighbors reaches or exceeds a critical value E. Owing to the movement of agents, the network's structure is dynamical. Interestingly, we find that there exists an optimal value of E leading to the maximum epidemic threshold. This means that epidemic spreading can be effectively controlled by risk-averse migration. Besides, we find that the epidemic threshold increases as the recovery rate increases, decreases as the contact radius increases, and is maximized at an optimal moving speed. Our findings offer a deeper understanding of epidemic spreading in dynamical networks.

  12. Dependence of intravoxel incoherent motion diffusion MR threshold b-value selection for separating perfusion and diffusion compartments and liver fibrosis diagnostic performance.

    PubMed

    Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan

    2018-01-01

    Background: Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose: To explore how the threshold b-value impacts PF (f), D_slow (D), and D_fast (D*) values and their performance for liver fibrosis detection. Material and Methods: Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm^2. Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm^2 were compared. PF, D_slow, and D_fast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results: Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher D_slow and D_fast measurements. The dependence of PF, D_slow, and D_fast on the threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. Threshold b-value = 60 s/mm^2 showed the largest mean distance between healthy-liver and fibrotic-liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), D_slow (D_slow < 1.239 × 10^-3 mm^2/s), and D_fast (D_fast < 20.85 × 10^-3 mm^2/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion: For segmented-unconstrained analysis, the selection of threshold b-value = 60 s/mm^2 improves IVIM differentiation between healthy livers and fibrotic livers.
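
    A rough sketch of a segmented IVIM analysis of the kind described is given below: D_slow is obtained from a log-linear fit to signals at b-values at or above the chosen threshold, the perfusion fraction PF from the extrapolated intercept, and D_fast from a fit of the full curve with the other parameters held fixed. The synthetic signal, normalization to S(0) = 1, and fitting details are illustrative assumptions, not the authors' processing pipeline.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    b = np.array([10, 20, 40, 60, 80, 100, 150, 200, 400, 800], dtype=float)  # s/mm^2
    # Synthetic IVIM signal normalized so that S(b=0) = 1 (illustrative parameter values)
    pf_true, d_slow_true, d_fast_true = 0.12, 1.1e-3, 30e-3
    signal = (1 - pf_true) * np.exp(-b * d_slow_true) + pf_true * np.exp(-b * d_fast_true)

    def segmented_ivim(b, signal, b_threshold=60.0):
        """Segmented IVIM fit: D_slow from a log-linear fit above the threshold b-value,
        PF from the extrapolated intercept, D_fast from the full curve with PF, D_slow fixed."""
        hi = b >= b_threshold
        slope, intercept = np.polyfit(b[hi], np.log(signal[hi]), 1)
        d_slow = -slope
        pf = 1.0 - np.exp(intercept)              # perfusion fraction from the intercept deficit

        def model(b, d_fast):
            return (1 - pf) * np.exp(-b * d_slow) + pf * np.exp(-b * d_fast)

        (d_fast,), _ = curve_fit(model, b, signal, p0=[10e-3])
        return pf, d_slow, d_fast

    print(segmented_ivim(b, signal))
    ```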

  13. Mechanism behind Erosive Bursts In Porous Media.

    PubMed

    Jäger, R; Mendoza, M; Herrmann, H J

    2017-09-22

    Erosion and deposition during flow through porous media can lead to large erosive bursts that manifest as jumps in permeability and pressure loss. Here we reveal that the cause of these bursts is the reopening of clogged pores when the pressure difference between two opposite sites of the pore surpasses a certain threshold. We perform numerical simulations of flow through porous media and compare our predictions to experimental results, recovering, with excellent agreement, the shape and power-law distribution of the pressure-loss jumps and the behavior of the permeability jumps as a function of particle concentration. Furthermore, we find that erosive bursts only occur for pressure-gradient thresholds within the range of two critical values, independent of how the flow is driven. Our findings provide a better understanding of sudden sand production in oil wells and breakthrough in filtration.

  14. Achieving comb formation over the entire lasing range of quantum cascade lasers.

    PubMed

    Yang, Yang; Burghoff, David; Reno, John; Hu, Qing

    2017-10-01

    Frequency combs based on quantum cascade lasers (QCLs) are finding promising applications in high-speed broadband spectroscopy in the terahertz regime, where many molecules have their "fingerprints." To form stable combs in QCLs, an effective control of group velocity dispersion plays a critical role. The dispersion of the QCL cavity has two main parts: a static part from the material and a dynamic part from the intersubband transitions. Unlike the gain, which is clamped to a fixed value above the lasing threshold, dispersion associated with the intersubband transitions changes with bias, even above the threshold, and this reduces the dynamic range of comb formation. Here, by incorporating tunability into the dispersion compensator, we demonstrate a QCL device exhibiting comb operation from I_th to I_max, which greatly expands the operation range of the frequency combs.

  15. Doctoral conceptual thresholds in cellular and molecular biology

    NASA Astrophysics Data System (ADS)

    Feldon, David F.; Rates, Christopher; Sun, Chongning

    2017-12-01

    In the biological sciences, very little is known about the mechanisms by which doctoral students acquire the skills they need to become independent scientists. In the postsecondary biology education literature, identification of specific skills and effective methods for helping students to acquire them are limited to undergraduate education. To establish a foundation from which to investigate the developmental trajectory of biologists' research skills, it is necessary to identify those skills which are integral to doctoral study and distinct from skills acquired earlier in students' educational pathways. In this context, the current study engages the framework of threshold concepts to identify candidate skills that are both obstacles and significant opportunities for developing proficiency in conducting research. Such threshold concepts are typically characterised as transformative, integrative, irreversible, and challenging. The results from interviews and focus groups with current and former doctoral students in cellular and molecular biology suggest two such threshold concepts relevant to their subfield: the first is an ability to effectively engage primary research literature from the biological sciences in a way that is critical without dismissing the value of its contributions. The second is the ability to conceptualise appropriate control conditions necessary to design and interpret the results of experiments in an efficient and effective manner for research in the biological sciences as a discipline. Implications for prioritising and sequencing graduate training experiences are discussed on the basis of the identified thresholds.

  16. The effects of prolonged weightlessness and reduced gravity environments on human survival.

    PubMed

    Taylor, R L

    1993-03-01

    The manned exploration of the solar system and the surfaces of some of the smaller planets and larger satellites requires that we are able to keep the adverse human physiological response to long-term exposure to near-zero and greatly reduced gravity environments within acceptable limits consistent with metabolic function. This paper examines the physiological changes associated with microgravity conditions with particular reference to the weightless demineralization of bone (WDB). It is suggested that many of these changes are the result of physical/mechanical processes and are not primarily a medical problem. There are thus two immediately obvious and workable, if relatively costly, solutions to the problem of weightlessness: the provision of a near-1-g field during prolonged space flights, and/or the development of rapid-transit spacecraft capable of significant acceleration and short flight times. Although these developments could remove or greatly ameliorate the effects of weightlessness during long-distance space flights, there remains a problem relating to the long-term colonization of the surfaces of Mars, the Moon, and other small solar system bodies. It is not yet known whether or not there is a critical threshold value of 'g' below which viable human physiological function cannot be sustained. If such a threshold exists, permanent colonization may only be possible if the threshold value of 'g' is less than that at the surface of the planet on which we wish to settle.

  17. Extrinsic regime shifts drive abrupt changes in regeneration dynamics at upper treeline in the Rocky Mountains, U.S.A.

    PubMed

    Elliott, Grant P

    2012-07-01

    Given the widespread and often dramatic influence of climate change on terrestrial ecosystems, it is increasingly common for abrupt threshold changes to occur, yet explicitly testing for climate and ecological regime shifts is lacking in climatically sensitive upper treeline ecotones. In this study, quantitative evidence based on empirical data is provided to support the key role of extrinsic, climate-induced thresholds in governing the spatial and temporal patterns of tree establishment in these high-elevation environments. Dendroecological techniques were used to reconstruct a 420-year history of regeneration dynamics within upper treeline ecotones along a latitudinal gradient (approximately 44-35 degrees N) in the Rocky Mountains. Correlation analysis was used to assess the possible influence of minimum and maximum temperature indices and cool-season (November-April) precipitation on regional age-structure data. Regime-shift analysis was used to detect thresholds in tree establishment during the entire period of record (1580-2000), temperature variables significantly correlated with establishment during the 20th century, and cool-season precipitation. Tree establishment was significantly correlated with minimum temperature during the spring (March-May) and cool season. Regime-shift analysis identified an abrupt increase in regional tree establishment in 1950 (1950-1954 age class). Coincident with this period was a shift toward reduced cool-season precipitation. The alignment of these climate conditions apparently triggered an abrupt increase in establishment that was unprecedented during the period of record. Two main findings emerge from this research that underscore the critical role of climate in governing regeneration dynamics within upper treeline ecotones. (1) Regional climate variability is capable of exceeding bioclimatic thresholds, thereby initiating synchronous and abrupt changes in the spatial and temporal patterns of tree establishment at broad regional scales. (2) The importance of climate parameters exceeding critical threshold values and triggering a regime shift in tree establishment appears to be contingent on the alignment of favorable temperature and moisture regimes. This research suggests that threshold changes in the climate system can fundamentally alter regeneration dynamics within upper treeline ecotones and, through the use of regime-shift analysis, reveals important climate-vegetation linkages.

  18. Pulsating Hydrodynamic Instability in a Dynamic Model of Liquid-Propellant Combustion

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    1999-01-01

    Hydrodynamic (Landau) instability in combustion is typically associated with the onset of wrinkling of a flame surface, corresponding to the formation of steady cellular structures as the stability threshold is crossed. In the context of liquid-propellant combustion, such instability has recently been shown to occur for critical values of the pressure sensitivity of the burning rate and the disturbance wavenumber, significantly generalizing previous classical results for this problem that assumed a constant normal burning rate. Additionally, however, a pulsating form of hydrodynamic instability has been shown to occur as well, corresponding to the onset of temporal oscillations in the location of the liquid/gas interface. In the present work, we consider the realistic influence of a nonzero temperature sensitivity in the local burning rate on both types of stability thresholds. It is found that for sufficiently small values of this parameter, there exists a stable range of pressure sensitivities for steady, planar burning such that the classical cellular form of hydrodynamic instability and the more recent pulsating form of hydrodynamic instability can each occur as the corresponding stability threshold is crossed. For larger thermal sensitivities, however, the pulsating stability boundary evolves into a C-shaped curve in the disturbance-wavenumber/pressure-sensitivity plane, indicating loss of stability to pulsating perturbations for all sufficiently large disturbance wavelengths. It is thus concluded, based on characteristic parameter values, that an equally likely form of hydrodynamic instability in liquid-propellant combustion is of a nonsteady, long-wave nature, distinct from the steady, cellular form originally predicted by Landau.

  19. Pulsating Hydrodynamic Instability and Thermal Coupling in an Extended Landau/Levich Model of Liquid-Propellant Combustion. 1; Inviscid Analysis

    NASA Technical Reports Server (NTRS)

    Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)

    1999-01-01

    Hydrodynamic (Landau) instability in combustion is typically associated with the onset of wrinkling of a flame surface, corresponding to the formation of steady cellular structures as the stability threshold is crossed. In the context of liquid-propellant combustion, such instability has recently been shown to occur for critical values of the pressure sensitivity of the burning rate and the disturbance wavenumber, significantly generalizing previous classical results for this problem that assumed a constant normal burning rate. Additionally, however, a pulsating form of hydrodynamic instability has been shown to occur as well, corresponding to the onset of temporal oscillations in the location of the liquid/gas interface. In the present work, we consider the realistic influence of a non-zero temperature sensitivity in the local burning rate on both types of stability thresholds. It is found that for sufficiently small values of this parameter, there exists a stable range of pressure sensitivities for steady, planar burning such that the classical cellular form of hydrodynamic instability and the more recent pulsating form of hydrodynamic instability can each occur as the corresponding stability threshold is crossed. For larger thermal sensitivities, however, the pulsating stability boundary evolves into a C-shaped curve in the (disturbance-wavenumber, pressure-sensitivity) plane, indicating loss of stability to pulsating perturbations for all sufficiently large disturbance wavelengths. It is thus concluded, based on characteristic parameter values, that an equally likely form of hydrodynamic instability in liquid-propellant combustion is of a non-steady, long-wave nature, distinct from the steady, cellular form originally predicted by Landau.

  20. Thermodynamic Basis for the Emergence of Genomes during Prebiotic Evolution

    PubMed Central

    Woo, Hyung-June; Vijaya Satya, Ravi; Reifman, Jaques

    2012-01-01

    The RNA world hypothesis views modern organisms as descendants of RNA molecules. The earliest RNA molecules must have been random sequences, from which the first genomes that coded for polymerase ribozymes emerged. The quasispecies theory by Eigen predicts the existence of an error threshold limiting genomic stability during such transitions, but does not address the spontaneity of changes. Following a recent theoretical approach, we applied the quasispecies theory combined with kinetic/thermodynamic descriptions of RNA replication to analyze the collective behavior of RNA replicators based on known experimental kinetics data. We find that, with increasing fidelity (relative rate of base-extension for Watson-Crick versus mismatched base pairs), replications without enzymes, with ribozymes, and with protein-based polymerases are above, near, and below a critical point, respectively. The prebiotic evolution therefore must have crossed this critical region. Over large regions of the phase diagram, fitness increases with increasing fidelity, biasing random drifts in sequence space toward ‘crystallization.’ This region encloses the experimental nonenzymatic fidelity value, favoring evolutions toward polymerase sequences with ever higher fidelity, despite error rates above the error catastrophe threshold. Our work shows that experimentally characterized kinetics and thermodynamics of RNA replication allow us to determine the physicochemical conditions required for the spontaneous crystallization of biological information. Our findings also suggest that among many potential oligomers capable of templated replication, RNAs may have evolved to form prebiotic genomes due to the value of their nonenzymatic fidelity. PMID:22693440

  1. Ventricular beat classifier using fractal number clustering.

    PubMed

    Bakardjian, H

    1992-09-01

    A two-stage ventricular beat 'associative' classification procedure is described. The first stage separates typical beats from extrasystoles on the basis of area and polarity rules. At the second stage, the extrasystoles are classified in self-organised cluster formations of adjacent shape parameter values. This approach avoids the use of threshold values for discrimination between ectopic beats of different shapes, which could be critical in borderline cases. A pattern shape feature conventionally called a 'fractal number', in combination with a polarity attribute, was found to be a good criterion for waveform evaluation. An additional advantage of this pattern classification method is its good computational efficiency, which affords the opportunity to implement it in real-time systems.

  2. Nucleation of Bubbles by Electrons in Liquid Helium-4

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Sirisky, S.; Wei, W.; Seidel, G. M.; Maris, H. J.

    2018-02-01

    We report on experiments in which we study cavitation resulting from electrons in liquid helium. Electrons are introduced into the liquid by a radioactive source. After an electron comes to rest in the liquid, it forces open a small cavity referred to as an electron bubble. To study cavitation, a sound pulse is generated by means of a hemispherical piezoelectric transducer producing a large-amplitude pressure oscillation at the acoustic focus. If an electron is in the vicinity of the focus and the negative-going pressure swing exceeds a critical value, a cavitation bubble is produced which can be detected by light scattering. Two distinct critical pressures P_{el} and P_{rare} have been measured. The first corresponds to cavitation resulting from the application of a reduced pressure to liquid containing an electron which has already formed an electron bubble. The second is the critical pressure needed to lead to cavitation when an electron enters the liquid at a time and place where there is already a reduced pressure. We have measured these two pressures as a function of temperature and consider possible explanations for the difference between them. In addition to these clearly seen cavitation thresholds, there are some cavitation events that have been detected with a threshold that is at an even smaller negative pressure than P_{el} and P_{rare}.

  3. Threshold network of a financial market using the P-value of correlation coefficients

    NASA Astrophysics Data System (ADS)

    Ha, Gyeong-Gyun; Lee, Jae Woo; Nobi, Ashadun

    2015-06-01

    Threshold methods in financial networks are important tools for obtaining information about the financial state of a market. Previously, absolute thresholds on correlation coefficients have been used; however, they bear no relation to the length of the time window. We assign a threshold value that depends on the size of the time window by using the P-value concept from statistics. We construct a threshold network (TN) at the same threshold value for two different time window sizes in the Korean Composite Stock Price Index (KOSPI). We measure network properties, such as the edge density, clustering coefficient, assortativity coefficient, and modularity. We determine that a significant difference exists between the network properties of the two time windows at the same threshold, especially during crises. This implies that the market information depends on the length of the time window used when constructing the TN. We apply the same technique to the Standard and Poor's 500 (S&P500) and observe similar results.
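
    A minimal sketch of the P-value-based thresholding step is shown below: pairwise Pearson correlations and their p-values are computed for a window of returns, and an edge is kept only when the correlation is significant at a chosen level, which automatically ties the effective correlation threshold to the window length. The synthetic return matrix, significance level, and edge-density summary are illustrative; this is not the authors' KOSPI pipeline.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(7)
    n_stocks, window = 30, 120
    returns = rng.normal(size=(window, n_stocks))      # synthetic daily log-returns

    def threshold_network(returns, alpha=0.01):
        """Adjacency matrix keeping edges whose correlation is significant at level alpha."""
        n = returns.shape[1]
        adj = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                r, p = pearsonr(returns[:, i], returns[:, j])
                if p < alpha:
                    adj[i, j] = adj[j, i] = 1
        return adj

    adj = threshold_network(returns)
    edge_density = adj.sum() / (n_stocks * (n_stocks - 1))
    print("edge density:", edge_density)
    ```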

  4. Critical threshold behavior for steady-state internal transport barriers in burning plasmas.

    PubMed

    García, J; Giruzzi, G; Artaud, J F; Basiuk, V; Decker, J; Imbeaux, F; Peysson, Y; Schneider, M

    2008-06-27

    Burning tokamak plasmas with internal transport barriers are investigated by means of integrated modeling simulations. The barrier sustainment in steady state, differently from the barrier formation process, is found to be characterized by a critical behavior, and the critical number of the phase transition is determined. Beyond a power threshold, alignment of self-generated and noninductively driven currents occurs and steady state becomes possible. This concept is applied to simulate a steady-state scenario within the specifications of the International Thermonuclear Experimental Reactor.

  5. Colour perception with changes in levels of illumination

    NASA Astrophysics Data System (ADS)

    Baah, Kwame F.; Green, Phil; Pointer, Michael

    2012-01-01

    The perceived colour of a stimulus depends on the conditions under which it is viewed. For colours employed as an important cue or identifier, such as signage and brand colours, colour reproduction tolerances are critically important. Typically, such stimuli would be judged using a known level of illumination but, in the target environment, the level of illumination used to view the samples may be entirely different. The effect of changes in the viewing condition on the perceptibility and acceptability of small colour differences should be understood when such tolerances, and the associated viewing conditions, are specified. A series of psychophysical experiments was conducted to determine whether changes in illumination level significantly alter acceptability and perceptibility thresholds of uniform colour stimuli. It was found that perceived colour discrimination thresholds varied by up to 2.0 ΔE00. For the perceptual correlate of hue, however, this value could be of significance if the accepted error of colour difference was at the threshold, thereby yielding the possibility of rejection with changes in illumination level. Lightness and chroma, on the other hand, exhibited greater tolerance and were less likely to be rejected with illuminance changes.

  6. Should sensory function after median nerve injury and repair be quantified using two-point discrimination as the critical measure?

    PubMed

    Jerosch-Herold, C

    2000-12-01

    Two-point discrimination (2PD) is widely used for evaluating outcome from peripheral nerve injury and repair. It is the only quantifiable measure used in the British Medical Research Council (MRC) classification that was developed by Highet in 1954. This paper reports the results of a study of 41 patients with complete median nerve lacerations to the wrist or forearm. Two-point discrimination thresholds were assessed together with locognosia (the ability to localise a sensory stimulus on the body's surface), tactile gnosis, and touch threshold. Using the MRC classification, 29 (71%) patients had a result of S2 or below, 11 (27%) were S3, and only one scored S3+. Patients scored much better on the other tests and showed progressive recovery. It remains too difficult for patients to obtain a measurable threshold value on 2PD, and the test therefore lacks responsiveness. The rating of outcome from peripheral nerve repair should not be based solely on 2PD testing and must include other tests of tactile sensibility.

  7. The fragmentation threshold and implications for explosive eruptions

    NASA Astrophysics Data System (ADS)

    Kennedy, B.; Spieler, O.; Kueppers, U.; Scheu, B.; Mueller, S.; Taddeucci, J.; Dingwell, D.

    2003-04-01

    The fragmentation threshold is the minimum pressure differential required to cause a porous volcanic rock to form pyroclasts. This is a critical parameter when considering the shift from effusive to explosive eruptions. We fragmented a variety of natural volcanic rock samples at room temperature (20 °C) and high temperature (850 °C) using a shock tube modified after Aldibirov and Dingwell (1996). This apparatus creates a pressure differential which drives fragmentation. Pressurized gas in the vesicles of the rock suddenly expands, blowing the sample apart. For this reason, the porosity is the primary control on the fragmentation threshold. On a graph of porosity against fragmentation threshold, our results from a variety of natural samples at both low and high temperatures all plot on the same curve and show the threshold increasing steeply at low porosities. A sharp decrease in the fragmentation threshold occurs as porosity increases from 0-15%, while a more gradual decrease is seen from 15-85%. The high-temperature experiments form a curve with less variability than the low-temperature experiments. For this reason, we have chosen to model the high-temperature thresholds. The curve can be roughly predicted by the tensile strength of glass (140 MPa) divided by the porosity. Fractured phenocrysts in the majority of our samples reduce the overall strength of the sample. For this reason, the threshold values can be more accurately predicted by (% matrix × tensile strength) / porosity. At very high porosities the fragmentation threshold varies significantly due to the effect of bubble shape and size distributions on the permeability (Mueller et al., 2003). For example, high thresholds are seen for samples with very high permeabilities, where gas flow reduces the local pressure differential. These results allow us to predict the fragmentation threshold for any volcanic rock for which the porosity and crystal contents are known. During explosive eruptions, the fragmentation threshold may be exceeded in two ways: (1) by building an overpressure within the vesicles above the fragmentation threshold or (2) by unloading and exposing lithostatically pressurised magma to lower pressures. Using this data, we can in principle estimate the height of dome collapse or amount of overpressure necessary to produce an explosive eruption.
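
    A small worked example of the quoted predictor is sketched below. The assumption that porosity is expressed as a percentage, and the particular porosity and matrix values used, are illustrative choices, not measurements from this study.

    ```python
    def fragmentation_threshold(porosity_percent, matrix_fraction, tensile_strength_glass=140.0):
        """Predicted fragmentation threshold (MPa) following the scaling quoted above:
        threshold ≈ matrix fraction × tensile strength of glass / porosity.
        Porosity is taken here in percent (0-100); all inputs are illustrative."""
        return matrix_fraction * tensile_strength_glass / porosity_percent

    # e.g. a hypothetical sample with 40% porosity and a 70% glassy matrix
    print(f"{fragmentation_threshold(porosity_percent=40.0, matrix_fraction=0.70):.2f} MPa")
    ```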

  8. Influence of anisotropy on percolation and jamming of linear k-mers on square lattice with defects

    NASA Astrophysics Data System (ADS)

    Tarasevich, Yu Yu; Laptev, V. V.; Burmistrov, A. S.; Shinyaeva, T. S.

    2015-09-01

    By means of Monte Carlo simulation, we study the layers produced by the random sequential adsorption of linear rigid objects (k-mers, also known as rigid or stiff rods, sticks, or needles) onto a square lattice with defects in the presence of an external field. The value of k varies from 2 to 32. The point defects, randomly and uniformly placed on the substrate, hinder adsorption of the elongated objects. The external field affects the otherwise isotropic deposition of the particles; consequently, the deposited layers are anisotropic. We study the influence of the defect concentration, the length of the objects, and the external field on the percolation threshold and the jamming concentration. Our main findings are: (i) the critical defect concentration, at which percolation never occurs even in the jammed state, decreases for short k-mers (k < 16) and increases for long k-mers (k > 16) as anisotropy increases; (ii) the corresponding critical k-mer concentration decreases with anisotropy growth; (iii) the jamming concentration decreases drastically with growth of k-mer length for any anisotropy; (iv) for short k-mers, the percolation threshold is almost insensitive to the defect concentration for any anisotropy.
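
    A compact sketch of this kind of simulation is given below: random sequential adsorption of k-mers on a defective square lattice with a tunable orientation bias standing in for the external field, using consecutive failed attempts as an approximate jamming criterion. Lattice size, defect concentration, and stopping rule are illustrative assumptions, and the percolation analysis of the paper is omitted.

    ```python
    import numpy as np

    def rsa_kmers(L=64, k=8, defect_conc=0.05, field=0.0, max_failures=5000, seed=0):
        """Random sequential adsorption of linear k-mers on an L x L square lattice
        with point defects and an orientation bias ("external field").
        Returns an approximate jamming coverage (fraction of sites occupied by k-mers)."""
        rng = np.random.default_rng(seed)
        lattice = np.zeros((L, L), dtype=np.int8)      # 0 free, 1 defect, 2 occupied
        defects = rng.choice(L * L, size=int(defect_conc * L * L), replace=False)
        lattice.flat[defects] = 1                      # place point defects

        p_horizontal = 0.5 * (1.0 + field)             # field in [0, 1]: 0 isotropic, 1 fully aligned
        deposited, failures = 0, 0
        while failures < max_failures:                 # stop after many consecutive failed attempts
            i, j = rng.integers(0, L, size=2)
            if rng.random() < p_horizontal:
                rows, cols = np.full(k, i), (j + np.arange(k)) % L   # horizontal, periodic boundaries
            else:
                rows, cols = (i + np.arange(k)) % L, np.full(k, j)   # vertical placement
            if np.all(lattice[rows, cols] == 0):
                lattice[rows, cols] = 2
                deposited += 1
                failures = 0
            else:
                failures += 1
        return deposited * k / (L * L)

    for field in (0.0, 0.5, 1.0):
        print(f"field={field}: jamming coverage ≈ {rsa_kmers(field=field):.3f}")
    ```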

  9. Self-organized sorting limits behavioral variability in swarms

    PubMed Central

    Copenhagen, Katherine; Quint, David A.; Gopinathan, Ajay

    2016-01-01

    Swarming is a phenomenon where collective motion arises from simple local interactions between typically identical individuals. Here, we investigate the effects of variability in behavior among the agents in finite swarms with both alignment and cohesive interactions. We show that swarming is abolished above a critical fraction of non-aligners who do not participate in alignment. In certain regimes, however, swarms above the critical threshold can dynamically reorganize and sort out excess non-aligners to maintain the average fraction close to the critical value. This persists even in swarms with a distribution of alignment interactions, suggesting a simple, robust and efficient mechanism that allows heterogeneously mixed populations to naturally regulate their composition and remain in a collective swarming state or even differentiate among behavioral phenotypes. We show that, for evolving swarms, this self-organized sorting behavior can couple to the evolutionary dynamics leading to new evolutionarily stable equilibrium populations set by the physical swarm parameters. PMID:27550316

  10. Self-organized sorting limits behavioral variability in swarms

    NASA Astrophysics Data System (ADS)

    Copenhagen, Katherine; Quint, David A.; Gopinathan, Ajay

    2016-08-01

    Swarming is a phenomenon where collective motion arises from simple local interactions between typically identical individuals. Here, we investigate the effects of variability in behavior among the agents in finite swarms with both alignment and cohesive interactions. We show that swarming is abolished above a critical fraction of non-aligners who do not participate in alignment. In certain regimes, however, swarms above the critical threshold can dynamically reorganize and sort out excess non-aligners to maintain the average fraction close to the critical value. This persists even in swarms with a distribution of alignment interactions, suggesting a simple, robust and efficient mechanism that allows heterogeneously mixed populations to naturally regulate their composition and remain in a collective swarming state or even differentiate among behavioral phenotypes. We show that, for evolving swarms, this self-organized sorting behavior can couple to the evolutionary dynamics leading to new evolutionarily stable equilibrium populations set by the physical swarm parameters.

  11. Laboratory tests of catastrophic disruption of rotating bodies

    NASA Astrophysics Data System (ADS)

    Morris, A. J. W.; Burchell, M. J.

    2017-11-01

    The results of catastrophic disruption experiments on static and rotating targets are reported. The experiments used cement spheres of diameter 10 cm as the targets. Impacts were by mm sized stainless steel spheres at speeds of between 1 and 7.75 km s^-1. Energy densities (Q) in the targets ranged from 7 to 2613 J kg^-1. The experiments covered both the cratering and catastrophic disruption regimes. For static, i.e. non-rotating targets the critical energy density for disruption (Q*, the value of Q when the largest surviving target fragment has a mass equal to one half of the pre-impact target mass) was Q* = 1447 ± 90 J kg^-1. For rotating targets (median rotation frequency of 3.44 Hz) we found Q* = 987 ± 349 J kg^-1, a reduction of 32% in the mean value. This lower value of Q* for rotating targets was also accompanied by a larger scatter on the data, hence the greater uncertainty. We suggest that in some cases the rotating targets behaved as static targets, i.e. broke up with the same catastrophic disruption threshold, but in other cases the rotation helped the break up causing a lower catastrophic disruption threshold, hence both the lower value of Q* and the larger scatter on the data. The fragment mass distributions after impact were similar in both the static and rotating target experiments with similar slopes.
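
    Since Q* is defined as the energy density at which the largest surviving fragment is half the target mass, it can be estimated by interpolating the largest-fragment mass fraction against Q. The sketch below does this for a hypothetical data set; the numbers are not the paper's measurements.

    ```python
    import numpy as np

    # Hypothetical disruption data: specific energy Q (J/kg) and largest-fragment
    # mass fraction m_L / M_target for a series of impacts (illustrative only).
    q = np.array([200.0, 600.0, 1000.0, 1400.0, 1800.0, 2400.0])
    frac_largest = np.array([0.95, 0.85, 0.70, 0.55, 0.35, 0.15])

    # Interpolate the (decreasing) curve at a mass fraction of 0.5 to estimate Q*.
    q_star = np.interp(0.5, frac_largest[::-1], q[::-1])
    print(f"Q* ≈ {q_star:.0f} J/kg")
    ```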

  12. Phase-space dependent critical gradient behavior of fast-ion transport due to Alfvén eigenmodes

    DOE PAGES

    Collins, C. S.; Heidbrink, W. W.; Podestà, M.; ...

    2017-06-09

    Experiments in the DIII-D tokamak show that many overlapping small-amplitude Alfvén eigenmodes (AEs) cause fast-ion transport to sharply increase above a critical threshold, leading to fast-ion density profile resilience and reduced fusion performance. The threshold is above the AE linear stability limit and varies between diagnostics that are sensitive to different parts of fast-ion phase-space. A comparison with theoretical analysis using the nova and orbit codes shows that, for the neutral particle diagnostic, the threshold corresponds to the onset of stochastic particle orbits due to wave-particle resonances with AEs in the measured region of phase space. We manipulated the bulk fast-ion distribution and instability behavior through variations in beam deposition geometry, and no significant differences in the onset threshold outside of measurement uncertainties were found, in agreement with the theoretical stochastic threshold analysis. Simulations using the 'kick model' produce beam ion density gradients consistent with the empirically measured radial critical gradient and highlight the importance of including the energy and pitch dependence of the fast-ion distribution function in critical gradient models. The addition of electron cyclotron heating changes the types of AEs present in the experiment, comparatively increasing the measured fast-ion density and radial gradient. Our studies provide the basis for understanding how to avoid AE transport that can undesirably redistribute current and cause fast-ion losses, and the measurements are being used to validate AE-induced transport models that use the critical gradient paradigm, giving greater confidence when applied to ITER.

  13. Phase-space dependent critical gradient behavior of fast-ion transport due to Alfvén eigenmodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, C. S.; Heidbrink, W. W.; Podestà, M.

    Experiments in the DIII-D tokamak show that many overlapping small-amplitude Alfvén eigenmodes (AEs) cause fast-ion transport to sharply increase above a critical threshold, leading to fast-ion density profile resilience and reduced fusion performance. The threshold is above the AE linear stability limit and varies between diagnostics that are sensitive to different parts of fast-ion phase-space. A comparison with theoretical analysis using the nova and orbit codes shows that, for the neutral particle diagnostic, the threshold corresponds to the onset of stochastic particle orbits due to wave-particle resonances with AEs in the measured region of phase space. We manipulated the bulk fast-ion distribution and instability behavior through variations in beam deposition geometry, and no significant differences in the onset threshold outside of measurement uncertainties were found, in agreement with the theoretical stochastic threshold analysis. Simulations using the 'kick model' produce beam ion density gradients consistent with the empirically measured radial critical gradient and highlight the importance of including the energy and pitch dependence of the fast-ion distribution function in critical gradient models. The addition of electron cyclotron heating changes the types of AEs present in the experiment, comparatively increasing the measured fast-ion density and radial gradient. Our studies provide the basis for understanding how to avoid AE transport that can undesirably redistribute current and cause fast-ion losses, and the measurements are being used to validate AE-induced transport models that use the critical gradient paradigm, giving greater confidence when applied to ITER.

  14. 30 CFR 71.700 - Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Inhalation hazards; threshold limit values for... SURFACE WORK AREAS OF UNDERGROUND COAL MINES Airborne Contaminants § 71.700 Inhalation hazards; threshold... containing quartz, and asbestos dust) in excess of, on the basis of a time-weighted average, the threshold...

  15. Percolation behavior of polymer/metal composites on modification of filler

    NASA Astrophysics Data System (ADS)

    Panda, M.; Srinivas, V.; Thakur, A. K.

    2014-02-01

    Polymer-metal composites with different fillers, such as nanocrystalline nickel (n-Ni) and core-shell n-Ni with a nickel oxide (NiO) shell [n-Ni@NiO], were prepared under the same processing conditions in a polyvinylidene fluoride matrix. The n-Ni@NiO composites show larger critical exponents (s and s') and a higher percolation threshold (fc ≈ 0.30) than the n-Ni composites (fc ≈ 0.07), together with a comparable effective dielectric constant (ɛeff ≈ 300) and low loss tangent (tan δ ≈ 0.1) at 100 Hz near percolation. The core-shell structure [n-Ni@NiO] also shows a very high ɛeff ≈ 6000 with tan δ ≈ 8 at 40 Hz. The results are explained using the boundary-layer capacitor effect and percolation theory. The difference in fc and in the critical exponents is attributed to the insulating NiO layer, which gives rise to a different degree of continuumness at fc and is explained with the help of the Swiss-cheese model.
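
    For orientation, the exponents s, s' and the threshold fc quoted above are the parameters of the standard percolation power laws for conductor-insulator composites; the relations below give the textbook form only, since the abstract does not reproduce the paper's fitting expressions:

    ```latex
    % Textbook percolation scaling near the threshold f_c (illustrative form only):
    % the effective permittivity diverges with exponent s (s' analogously) and the
    % effective conductivity grows with exponent t above f_c.
    \varepsilon_{\mathrm{eff}} \propto |f - f_c|^{-s}, \qquad
    \sigma_{\mathrm{eff}} \propto (f - f_c)^{t} \quad (f > f_c).
    ```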

  16. Magnetoacoustic Waves and the Kelvin-Helmholtz Instability in a Steady Asymmetric Slab. I: The Effects of Varying Density Ratios

    NASA Astrophysics Data System (ADS)

    Barbulescu, M.; Erdélyi, R.

    2018-06-01

    Recent observations have shown that bulk flow motions in structured solar plasmas, most evidently in coronal mass ejections (CMEs), may lead to the formation of Kelvin-Helmholtz instabilities (KHIs). Analytical models are thus essential in understanding both how the flows affect the propagation of magnetohydrodynamic (MHD) waves, and what the critical flow speed is for the formation of the KHI. We investigate both these aspects in a novel way: in a steady magnetic slab embedded in an asymmetric environment. The exterior of the slab is defined as having different equilibrium values of the background density, pressure, and temperature on either side. A steady flow and constant magnetic field are present in the slab interior. Approximate solutions to the dispersion relation are obtained analytically and classified with respect to mode and speed. General solutions and the KHI thresholds are obtained numerically. It is shown that, generally, both the KHI critical value and the cut-off speeds for magnetoacoustic waves are lowered by the external asymmetry.

  17. Thresholds for conservation and management: structured decision making as a conceptual framework

    USGS Publications Warehouse

    Nichols, James D.; Eaton, Mitchell J.; Martin, Julien; Edited by Guntenspergen, Glenn R.

    2014-01-01

    changes in system dynamics. They are frequently incorporated into ecological models used to project system responses to management actions. Utility thresholds are components of management objectives and are values of state or performance variables at which small changes yield substantial changes in the value of the management outcome. Decision thresholds are values of system state variables at which small changes prompt changes in management actions in order to reach specified management objectives. Decision thresholds are derived from the other components of the decision process. We advocate a structured decision making (SDM) approach within which the following components are identified: objectives (possibly including utility thresholds), potential actions, models (possibly including ecological thresholds), monitoring program, and a solution algorithm (which produces decision thresholds). Adaptive resource management (ARM) is described as a special case of SDM developed for recurrent decision problems that are characterized by uncertainty. We believe that SDM, in general, and ARM, in particular, provide good approaches to conservation and management. Use of SDM and ARM also clarifies the distinct roles of ecological thresholds, utility thresholds, and decision thresholds in informed decision processes.

  18. Risk indicators for water supply systems for a drought Decision Support System in central Tuscany (Italy)

    NASA Astrophysics Data System (ADS)

    Rossi, Giuseppe; Garrote, Luis; Caporali, Enrica

    2010-05-01

    Identifying the occurrence, the extent and the magnitude of a drought can be delicate, requiring detection of depletions of supplies and increases in demand. Drought indices, particularly the meteorological ones, can describe the onset and the persistency of droughts, especially in natural systems. However they have to be used cautiously when applied to water supply systems. They show little correlation with water shortage situations, since water storage, as well as demand fluctuation, play an important role in water resources management. For that reason a more dynamic indicator relating supply and demand is required in order to identify situations when there is risk of water shortages. In water supply systems there is great variability in the natural water resources and also in the demands. These quantities can only be defined probabilistically. This great variability is faced by defining threshold values, expressed in probabilistic terms, that measure the hydrologic state of the system. They can identify specific actions in an operational context at different levels of severity, such as the normal, pre-alert, alert and emergency scenarios. They can simplify the decision-making required during stressful periods and can help mitigate the impacts of drought by clearly defining the conditions requiring actions. The threshold values are defined considering the probability to satisfy a given fraction of the demand in a certain time horizon, and are calibrated through discussion with water managers. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The levels and volumes in the different reservoirs are simulated using 20-30 year time series. The critical situations are assessed month by month in order to evaluate optimal management rules during the year and avoid conditions of total water shortage. The methodology is applied to the urban area Firenze-Prato-Pistoia in central Tuscany, Italy. The catchment of the investigated area has a surface of 1231 km2 and, according to the ISTAT 2001 census, 945,972 inhabitants.
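
    As a rough illustration of the kind of probabilistic threshold indicator described above, the sketch below estimates the probability of meeting a given fraction of demand over a time horizon from synthetic inflow scenarios and maps it onto alert levels; the storage-balance model, names and numbers are illustrative assumptions, not the authors' system model.

    ```python
    # Illustrative sketch (not the authors' model): estimate the probability of
    # satisfying a fraction of demand over a time horizon from synthetic monthly
    # inflow scenarios, and map it onto alert levels, as the abstract describes.
    import numpy as np

    def shortage_risk(storage0, inflows, demand, capacity, fraction=1.0):
        """inflows: array (n_scenarios, n_months); returns P(fraction of demand met every month)."""
        ok = np.ones(inflows.shape[0], dtype=bool)
        storage = np.full(inflows.shape[0], storage0, dtype=float)
        for m in range(inflows.shape[1]):
            storage = np.minimum(storage + inflows[:, m], capacity)
            supplied = np.minimum(storage, demand)
            ok &= supplied >= fraction * demand
            storage -= supplied
        return ok.mean()

    def alert_level(prob, thresholds=(0.95, 0.85, 0.70)):
        # Hypothetical probability thresholds separating normal/pre-alert/alert/emergency.
        labels = ["normal", "pre-alert", "alert", "emergency"]
        for label, t in zip(labels, thresholds):
            if prob >= t:
                return label
        return labels[-1]

    rng = np.random.default_rng(0)
    inflows = rng.gamma(shape=2.0, scale=15.0, size=(5000, 6))  # 6-month horizon, hm^3
    p = shortage_risk(storage0=40.0, inflows=inflows, demand=20.0, capacity=120.0, fraction=0.9)
    print(p, alert_level(p))
    ```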

  19. AREA RADIATION MONITOR

    DOEpatents

    Manning, F.W.; Groothuis, S.E.; Lykins, J.H.; Papke, D.M.

    1962-06-12

    An improved area radiation dose monitor is designed which is adapted to compensate continuously for background radiation below a threshold dose rate and to give warning when the dose integral of the dose rate of an above-threshold radiation excursion exceeds a selected value. This is accomplished by providing means for continuously charging an ionization chamber. The chamber provides a first current proportional to the incident radiation dose rate. Means are provided for generating a second current, including means for nulling out the first current with the second current at all values of the first current corresponding to dose rates below a selected threshold dose rate value. The second current has a maximum value corresponding to that of the first current at the threshold dose rate. The excess of the first current over the second current, which occurs above the threshold, is integrated and an alarm is given at a selected integrated value of the excess corresponding to a selected radiation dose. (AEC)
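
    A minimal digital analogue of the compensation-and-integration scheme described above is sketched below; the function name and parameter values are illustrative, not taken from the patent.

    ```python
    # Minimal digital analogue of the scheme described in the patent abstract:
    # ignore dose rate up to a threshold, integrate only the excess above it,
    # and raise an alarm when that integral exceeds a selected dose. Parameter
    # values here are illustrative, not from the patent.
    def excess_dose_alarm(dose_rates, dt, threshold_rate, alarm_dose):
        """dose_rates: iterable of dose-rate samples; dt: sample interval (hours)."""
        excess_integral = 0.0
        for rate in dose_rates:
            excess = max(0.0, rate - threshold_rate)   # background is "nulled out"
            excess_integral += excess * dt             # integrate only the excess
            if excess_integral >= alarm_dose:
                return True, excess_integral
        return False, excess_integral

    # Example: 1-minute samples, threshold 2 mR/h, alarm at 0.5 mR of excess dose.
    alarm, dose = excess_dose_alarm([1.5, 1.8, 25.0, 30.0, 2.0], dt=1/60,
                                    threshold_rate=2.0, alarm_dose=0.5)
    print(alarm, dose)
    ```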

  20. Critical frontier of the Potts and percolation models on triangular-type and kagome-type lattices. II. Numerical analysis

    NASA Astrophysics Data System (ADS)

    Ding, Chengxiang; Fu, Zhe; Guo, Wenan; Wu, F. Y.

    2010-06-01

    In the preceding paper, one of us (F. Y. Wu) considered the Potts model and bond and site percolation on two general classes of two-dimensional lattices, the triangular-type and kagome-type lattices, and obtained closed-form expressions for the critical frontier with applications to various lattice models. For the triangular-type lattices Wu’s result is exact, and for the kagome-type lattices Wu’s expression is under a homogeneity assumption. The purpose of the present paper is twofold: First, an essential step in Wu’s analysis is the derivation of lattice-dependent constants A, B, C for various lattice models, a process which can be tedious. We present here a derivation of these constants for subnet networks using a computer algorithm. Second, by means of a finite-size scaling analysis based on numerical transfer matrix calculations, we deduce critical properties and critical thresholds of various models and assess the accuracy of the homogeneity assumption. Specifically, we analyze the q-state Potts model and the bond percolation on the 3-12 and kagome-type subnet lattices (n×n):(n×n), n≤4, for which the exact solution is not known. Our numerical determination of critical properties such as conformal anomaly and magnetic correlation length verifies that the universality principle holds. To calibrate the accuracy of the finite-size procedure, we apply the same numerical analysis to models for which the exact critical frontiers are known. The comparison of numerical and exact results shows that our numerical values are correct within errors of our finite-size analysis, which correspond to 7 or 8 significant digits. This in turn implies that the homogeneity assumption determines critical frontiers with an accuracy of 5 decimal places or higher. Finally, we also obtained the exact percolation thresholds for site percolation on kagome-type subnet lattices (1×1):(n×n) for 1≤n≤6.

  1. Critical analysis of forensic cut-offs and legal thresholds: A coherent approach to inference and decision.

    PubMed

    Biedermann, A; Taroni, F; Bozza, S; Augsburger, M; Aitken, C G G

    2018-07-01

    In this paper we critically discuss the definition and use of cut-off values by forensic scientists, for example in forensic toxicology, and point out when and why such values - and ensuing categorical conclusions - are inappropriate concepts for helping recipients of expert information with their questions of interest. Broadly speaking, a cut-off is a particular value of results of analyses of a target substance (e.g., a toxic substance or one of its metabolites in biological sample from a person of interest), defined in a way such as to enable scientists to suggest conclusions regarding the condition of the person of interest. The extent to which cut-offs can be reliably defined and used is not unanimously agreed within the forensic science community, though many practitioners - especially in operational laboratories - rely on cut-offs for reasons such as ease of use and simplicity. In our analysis, we challenge this practice by arguing that choices made for convenience should not be to the detriment of balance and coherence. To illustrate our discussion, we will choose the example of alcohol markers in hair, used widely by forensic toxicologists to reach conclusions regarding the drinking behaviour of individuals. Using real data from one of the co-authors' own work and recommendations of cut-offs published by relevant professional organisations, we will point out in what sense cut-offs are incompatible with current evaluative guidelines (e.g., [31]) and show how to proceed logically without cut-offs by using a standard measure for evidential value. Our conclusions run counter to much current practice, but are inevitable given the inherent definitional and conceptual shortcomings of scientific cut-offs. We will also point out the difference between scientific cut-offs and legal thresholds and argue that the latter - but not the former - are justifiable and can be dealt with in logical evaluative procedures. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Phosphorus saturation and superficial fertilizer application as key parameters to assess the risk of diffuse phosphorus losses from agricultural soils in Brazil.

    PubMed

    Fischer, P; Pöthig, R; Gücker, B; Venohr, M

    2018-07-15

    In Brazil, a steady increase in phosphorus (P) fertilizer application and agricultural intensification has been reported for recent decades. The concomitant P accumulation in soils potentially threatens surface water bodies with eutrophication through diffuse P losses. Here, we demonstrated the applicability of a soil type-independent approach for estimating the degree of P saturation (DPS; a risk parameter of P loss) by a standard method of water-soluble phosphorus (WSP) for two major soil types (Oxisols, Entisols) of the São Francisco catchment in Brazil. Subsequently, soil Mehlich-1P (M1P) levels recommended by Brazilian agricultural institutions were transformed into DPS values. Recommended M1P values for optimal agronomic production corresponded to DPS values below critical thresholds of high risks of P losses (DPS=80%) for major crops of the catchment. Higher risks of reaching critical DPS values due to P accumulation were found for Entisols due to their total sorption capacities being only half those of Oxisols. For complementary information on soil mineralogy and its influence on P sorption and P binding forms, Fourier transformation infrared (FTIR) spectroscopic analyses were executed. FTIR analyses suggested the occurrence of the clay minerals palygorskite and sepiolite in some of the analyzed Entisols and the formation of crandallite as the soil specific P binding form in the investigated Oxisols. Palygorskite and sepiolite can enhance P solubility and hence the risk of P losses. In contrast, the reshaping of superphosphate grains into crandallite may explain the chemical processes leading to previously observed low dissolved P concentrations in surface runoff from Oxisols. To prevent high risk of P losses, we recommend avoiding superficial fertilizer application and establishing environmental thresholds for soil M1P based on DPS. These measures could help to prevent eutrophication of naturally oligotrophic surface waters, and subsequent adverse effects on biodiversity and ecosystem function. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. The Threshold Level--For Schools?

    ERIC Educational Resources Information Center

    Lauerbach, Gerda

    1979-01-01

    Comments on the document "Threshold Level for Modern Language Learning Schools" (J. A. Van Ek, Strasbourg, 1976) and its appropriateness as a description of learning goals for the first years of foreign language teaching. Criticizes particularly the "reduced learning" concept, on which the threshold projects are based. (IFS/WGA)

  4. Two-flash thresholds as a function of comparison stimulus duration.

    DOT National Transportation Integrated Search

    1970-09-01

    The proposal that two-flash thresholds may be used as direct measures of the critical duration (tc) of Bloch's law was tested. Two-flash threshold was found to be an increasing function of comparison stimulus duration for durations of 3 to 22 msec. i...

  5. Harm is all you need? Best interests and disputes about parental decision-making.

    PubMed

    Birchley, Giles

    2016-02-01

    A growing number of bioethics papers endorse the harm threshold when judging whether to override parental decisions. Among other claims, these papers argue that the harm threshold is easily understood by lay and professional audiences and correctly conforms to societal expectations of parents in regard to their children. English law contains a harm threshold which mediates the use of the best interests test in cases where a child may be removed from her parents. Using Diekema's seminal paper as an example, this paper explores the proposed workings of the harm threshold. I use examples from the practical use of the harm threshold in English law to argue that the harm threshold is an inadequate answer to the indeterminacy of the best interests test. I detail two criticisms: First, the harm standard has evaluative overtones and judges are loath to employ it where parental behaviour is misguided but they wish to treat parents sympathetically. Thus, by focusing only on 'substandard' parenting, harm is problematic where the parental attempts to benefit their child are misguided or wrong, such as in disputes about withdrawal of medical treatment. Second, when harm is used in genuine dilemmas, court judgments offer different answers to similar cases. This level of indeterminacy suggests that, in practice, the operation of the harm threshold would be indistinguishable from best interests. Since indeterminacy appears to be the greatest problem in elucidating what is best, bioethicists should concentrate on discovering the values that inform best interests. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. Earth resources data acquisition sensor study

    NASA Technical Reports Server (NTRS)

    Grohse, E. W.

    1975-01-01

    The minimum data collection and data processing requirements are investigated for the development of water monitoring systems, which disregard redundant and irrelevant data and process only those data predictive of the onset of significant pollution events. Two approaches are immediately suggested: (1) adaptation of a presently available ambient air monitoring system developed by TVA, and (2) consideration of an air, water, and radiological monitoring system developed by the Georgia Tech Experiment Station. In order to apply monitoring systems, threshold values and maximum allowable rates of change of critical parameters such as dissolved oxygen and temperature are required.

  7. Buckling of an Elastic Ridge: Competition between Wrinkles and Creases

    NASA Astrophysics Data System (ADS)

    Lestringant, C.; Maurini, C.; Lazarus, A.; Audoly, B.

    2017-04-01

    We investigate the elastic buckling of a triangular prism made of a soft elastomer. A face of the prism is bonded to a stiff slab that imposes an average axial compression. We observe two possible buckling modes which are localized along the free ridge. For ridge angles ϕ below a critical value ϕ⋆ ≈ 90°, experiments reveal an extended sinusoidal mode, while for ϕ above ϕ⋆, we observe a series of creases progressively invading the lateral faces starting from the ridge. A numerical linear stability analysis is set up using the finite-element method and correctly predicts the sinusoidal mode for ϕ ≤ ϕ⋆, as well as the associated critical strain ɛc(ϕ). The experimental transition at ϕ⋆ is found to occur when this critical strain ɛc(ϕ) attains the value ɛc(ϕ⋆)=0.44 corresponding to the threshold of the subcritical surface creasing instability. Previous analyses have focused on elastic crease patterns appearing on planar surfaces, where the role of scale invariance has been emphasized; our analysis of the elastic ridge provides a different perspective, and reveals that scale invariance is not a sufficient condition for localization.

  8. Surfactants and the Rayleigh-Taylor instability of Couette type flows

    NASA Astrophysics Data System (ADS)

    Frenkel, A. L.; Halpern, D.; Schweiger, A. S.

    2011-11-01

    We study the Rayleigh-Taylor instability of slow Couette-type flows in the presence of insoluble surfactants. It is known that with zero gravity, the surfactant makes the flow unstable to longwave disturbances in certain regions of the parameter space, while in other parametric regions it reinforces the flow stability (Frenkel and Halpern 2002). Here, we show that in the latter parametric sectors, and when the (gravity) Bond number Bo is below a certain threshold value, the Rayleigh-Taylor instability is completely stabilized for a finite interval of Ma, the (surfactant) Marangoni number: MaL < Ma < Ma2, and also for MaL

  9. The paradoxical zero reflection at zero energy

    NASA Astrophysics Data System (ADS)

    Ahmed, Zafar; Sharma, Vibhu; Sharma, Mayank; Singhal, Ankush; Kaiwart, Rahul; Priyadarshini, Pallavi

    2017-03-01

    Usually, the reflection probability R(E) of a particle of zero energy incident on a potential which converges to zero asymptotically is found to be 1: R(0)=1. But earlier, a paradoxical phenomenon of zero reflection at zero energy (R(0)=0) has been revealed as a threshold anomaly. Extending the concept of half-bound state (HBS) of 3D, here we show that in 1D when a symmetric (asymmetric) attractive potential well possesses a zero-energy HBS, R(0)=0 (R(0) ≪ 1). This can happen only at some critical values qc of an effective parameter q of the potential well in the limit E → 0+. We demonstrate this critical phenomenon in two simple analytically solvable models: square and exponential wells. However, in numerical calculations, even for these two models R(0)=0 is observed only as extrapolation to zero energy from low energies, close to a precise critical value qc. By numerical investigation of a variety of potential wells, we conclude that for a given potential well (symmetric or asymmetric), we can adjust the effective parameter q to have a low reflection at a low energy.

  10. The formation of continuous opinion dynamics based on a gambling mechanism and its sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Alexandre Wang, Qiuping; Li, Wei; Cai, Xu

    2017-09-01

    The formation of continuous opinion dynamics is investigated based on a virtual gambling mechanism where agents fight for a limited resource. We propose a model with agents holding opinions between -1 and 1. Agents are segregated into two cliques according to the sign of their opinions. Local communication happens only when the opinion distance between corresponding agents is no larger than a pre-defined confidence threshold. Theoretical analysis regarding special cases provides a deep understanding of the roles of both the resource allocation parameter and confidence threshold in the formation of opinion dynamics. For a sparse network, the evolution of opinion dynamics is negligible in the region of low confidence threshold when the mindless agents are absent. Numerical results also imply that, in the presence of economic agents, a high confidence threshold is required for apparent clustering of agents in opinion. Moreover, a consensus state is generated only when the following three conditions are satisfied simultaneously: mindless agents are absent, the resource is concentrated in one clique, and the confidence threshold tends to a critical value (= 1.25 + 2/k_a; k_a > 8/3, where k_a is the average number of friends of individual agents). For a fixed confidence threshold and resource allocation parameter, the most chaotic steady state of the dynamics happens when the fraction of mindless agents is about 0.7. It is also demonstrated that economic agents are more likely to win at gambling, compared to mindless ones. Finally, the importance of the three involved parameters in establishing the uncertainty of model response is quantified in terms of Latin hypercube sampling-based sensitivity analysis.
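
    Taking the critical confidence threshold quoted above at face value, the relation and two example evaluations read:

    ```latex
    % Critical confidence threshold from the abstract (opinions lie in [-1, 1],
    % so opinion distances lie in [0, 2]); the example values of k_a are illustrative.
    \epsilon_c = 1.25 + \frac{2}{k_a}, \qquad k_a > \frac{8}{3},
    \quad\text{e.g. } k_a = 4 \Rightarrow \epsilon_c = 1.75, \;\;
    k_a = 8 \Rightarrow \epsilon_c = 1.5 .
    ```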

  11. Determination of the measurement threshold in gamma-ray spectrometry.

    PubMed

    Korun, M; Vodenik, B; Zorko, B

    2017-03-01

    In gamma-ray spectrometry the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it presents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability of making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of the radon daughters 214Pb and 214Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold the probability of detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak area uncertainty, were determined for 54 peaks where the probability of detection varied between a few percent and about 95% and the relative peak area uncertainty between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located, although the true value of the measurand exceeds the decision threshold. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Initial-state-independent equilibration at the breakdown of the eigenstate thermalization hypothesis

    NASA Astrophysics Data System (ADS)

    Khodja, Abdellah; Schmidtke, Daniel; Gemmer, Jochen

    2016-04-01

    This work aims at understanding the interplay between the eigenstate thermalization hypothesis (ETH), initial state independent equilibration, and quantum chaos in systems that do not have a direct classical counterpart. It is based on numerical investigations of asymmetric Heisenberg spin ladders with varied interaction strengths between the legs, i.e., along the rungs. The relaxation of the energy difference between the legs is investigated. Two different parameters, both intended to quantify the degree of accordance with the ETH, are computed. Both indicate violation of the ETH at large interaction strengths but at different thresholds. Indeed, the energy difference is found not to relax independently of its initial value above some critical interaction strength, which coincides with one of the thresholds. At the same point the level statistics shift from Poisson-type to Wigner-type. Hence, the system may be considered to become integrable again in the strong interaction limit.

  13. Achieving comb formation over the entire lasing range of quantum cascade lasers

    DOE PAGES

    Yang, Yang; Burghoff, David; Reno, John; ...

    2017-01-01

    Frequency combs based on quantum cascade lasers (QCLs) are finding promising applications in high-speed broadband spectroscopy in the terahertz regime, where many molecules have their "fingerprints". To form stable combs in QCLs, an effective control of group velocity dispersion plays a critical role. The dispersion of the QCL cavity has two main parts: a static part from the material and a dynamic part from the intersubband transitions. Unlike the gain, which is clamped to a fixed value above the lasing threshold, dispersion associated with the intersubband transitions changes with bias even above the threshold, and this reduces the dynamic range of comb formation. Here, by incorporating tunability into the dispersion compensator, we demonstrate a QCL device exhibiting comb operation from Ith to Imax, which greatly expands the operation range of the frequency combs.

  14. Methods for improved forewarning of condition changes in monitoring physical processes

    DOEpatents

    Hively, Lee M.

    2013-04-09

    This invention teaches further improvements in methods for forewarning of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves objective determination of a forewarning threshold (U_FW), together with a failure-onset threshold (U_FAIL) corresponding to a normalized value of a composite measure (C) of dissimilarity; and providing a visual or audible indication to a human observer of failure forewarning and/or failure onset. Another improvement relates to symbolization of the data according to the binary numbers representing the slope between adjacent data points. Another improvement relates to adding measures of dissimilarity based on state-to-state dynamical changes of the system. And still another improvement relates to using a Shannon entropy as the measure of condition change in lieu of a connected or unconnected phase space.

  15. Critical thresholds in species' responses to landscape structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    With, K.A.; Crist, T.O.

    1995-12-01

    Critical thresholds are transition ranges across which small changes in spatial pattern produce abrupt shifts in ecological responses. Habitat fragmentation provides a familiar example of a critical threshold. As the landscape becomes dissected into smaller parcels of habitat, landscape connectivity - the functional linkage among habitat patches - may suddenly become disrupted, which may have important consequences for the distribution and persistence of populations. Landscape connectivity depends not only on the abundance and spatial patterning of habitat, but also on the habitat specificity and dispersal abilities of species. Habitat specialists with limited dispersal capabilities presumably have a much lower threshold to habitat fragmentation than highly vagile species, which may perceive the landscape as functionally connected across a greater range of fragmentation severity. To determine where threshold effects in species' responses to landscape structure are likely to occur, a simulation model modified from percolation theory was developed. Our simulations predicted the distributional patterns of populations in different landscape mosaics, which we tested empirically using two grasshopper species (Orthoptera: Acrididae) that occur in the shortgrass prairie of north-central Colorado. The distribution of these two species in this grassland mosaic matched the predictions from our simulations. By providing quantitative predictions of threshold effects, this modelling approach may prove useful in the formulation of conservation strategies and assessment of land-use changes on species' distributional patterns and persistence.
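
    The percolation-style threshold described above can be illustrated with a minimal sketch (not the authors' model): on a random binary habitat map, the probability that habitat spans the landscape collapses abruptly as cover falls; all sizes and cover fractions below are illustrative.

    ```python
    # Illustrative percolation-style sketch (not the authors' model): on a random
    # binary habitat map, check whether habitat spans the landscape, and watch the
    # spanning probability drop abruptly as habitat cover falls.
    import numpy as np
    from scipy import ndimage

    def spans(landscape):
        labels, _ = ndimage.label(landscape)          # 4-neighbour habitat clusters
        top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
        return bool(top & bottom)                     # some cluster touches both edges

    rng = np.random.default_rng(1)
    for p in (0.7, 0.6, 0.55, 0.5):                   # fraction of habitat cells
        hits = sum(spans(rng.random((100, 100)) < p) for _ in range(50))
        print(f"habitat cover {p:.2f}: spanning probability ~ {hits / 50:.2f}")
    ```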

  16. Learning Portals: Analyzing Threshold Concept Theory for LIS Education

    ERIC Educational Resources Information Center

    Tucker, Virginia M.; Weedman, Judith; Bruce, Christine S.; Edwards, Sylvia L.

    2014-01-01

    This paper explores the theoretical framework of threshold concepts and its potential for LIS education. Threshold concepts are key ideas, often troublesome and counterintuitive, that are critical to profound understanding of a domain. Once understood, they allow mastery of significant aspects of the domain, opening up new, previously inaccessible…

  17. A Market Model for Evaluating Technologies That Impact Critical-Material Intensity

    NASA Astrophysics Data System (ADS)

    Iyer, Ananth V.; Vedantam, Aditya

    2016-07-01

    A recent Critical Materials Strategy report highlighted the supply chain risk associated with neodymium and dysprosium, which are used in the manufacturing of neodymium-iron-boron permanent magnets (PM). In response, the Critical Materials Institute is developing innovative strategies to increase and diversify primary production, develop substitutes, reduce material intensity and recycle critical materials. Our goal in this paper is to propose an economic model to quantify the impact of one of these strategies, material intensity reduction. Technologies that reduce material intensity impact the economics of magnet manufacturing in multiple ways because of: (1) the lower quantity of critical material required per unit PM, (2) more efficient use of limited supply, and (3) the potential impact on manufacturing cost. However, the net benefit of these technologies to a magnet manufacturer is an outcome of an internal production decision subject to market demand characteristics, availability and resource constraints. Our contribution in this paper shows how a manufacturer's production economics moves from a region of being supply-constrained, to a region enabling the market optimal production quantity, to a region being constrained by resources other than critical materials, as the critical material intensity changes. Key insights for engineers and material scientists are: (1) material intensity reduction can have a significant market impact, (2) benefits to manufacturers are non-linear in the material intensity reduction, (3) there exists a threshold value for material intensity reduction that can be calculated for any target PM application, and (4) there is value for new intellectual property (IP) when existing manufacturing technology is IP-protected.

  18. Intuitive parameter-free visualization of tumor vascularization using rotating connectivity projections

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Bülow, Thomas; Opfer, Roland; Kabus, Sven; Dharaiya, Ekta

    2008-03-01

    We present an effective and intuitive visualization of the macro-vasculature of a selected nodule or tumor in three-dimensional image data (e.g. CT, MR, US). For the differential diagnosis of nodules the possible distortion of adjacent vessels is one important clinical criterion. Surface renderings of vessel- and tumor-segmentations depend critically on the chosen parameter- and threshold-values for the underlying segmentation. Therefore we use rotating Maximum Intensity Projections (MIPs) of a volume of interest (VOI) around the selected tumor. The MIP does not require specific parameters, and allows much quicker visual inspection in comparison to slicewise navigation, while the rotation gives depth cues to the viewer. Of the vessel network within the VOI, however, not all vessels are connected to the selected tumor, and it is tedious to sort out which adjacent vessels are in fact connected and which are overlaid only by projection. Therefore we suggest a simple transformation of the original image values into connectivity values. In the derived connectedness-image each voxel value corresponds to the lowest image value encountered on the highest possible pathway from the tumor to the voxel. The advantage of the visualization is that no implicit binary decision is made whether a certain vessel is connected to the tumor or not, but rather the degree of connectedness is visualized as the brightness of the vessel. Non-connected structures disappear, feebly connected structures appear faint, and strongly connected structures remain in their original brightness. The visualization does not depend on delicate threshold values. Promising results have been achieved for pulmonary nodules in CT.
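
    The connectedness transform described above amounts to a bottleneck (max-min) path computation from the tumor seed; the sketch below implements it on a 2D array for brevity (the paper works on 3D volumes), with an illustrative image, seed and 4-connectivity.

    ```python
    # Sketch of the connectedness transform described in the abstract: each voxel
    # gets the lowest image value along the "best" path from the seed (tumor),
    # maximized over all paths. 2D and 4-connectivity here for brevity; the paper
    # works on 3D volumes. The image and seed are illustrative.
    import heapq
    import numpy as np

    def connectedness(image, seed):
        conn = np.full(image.shape, -np.inf)
        conn[seed] = image[seed]
        heap = [(-image[seed], seed)]                 # max-heap via negated values
        while heap:
            neg_val, (r, c) = heapq.heappop(heap)
            if -neg_val < conn[r, c]:
                continue                              # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                    cand = min(conn[r, c], image[nr, nc])   # bottleneck along the path
                    if cand > conn[nr, nc]:
                        conn[nr, nc] = cand
                        heapq.heappush(heap, (-cand, (nr, nc)))
        return conn

    img = np.array([[5, 1, 4],
                    [5, 2, 4],
                    [5, 5, 5]], dtype=float)
    print(connectedness(img, (0, 0)))   # weakly connected voxels get low values
    ```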

  19. Midline Shift Threshold Value for Hemiparesis in Chronic Subdural Hematoma.

    PubMed

    Juković, Mirela F; Stojanović, Dejan B

    2015-01-01

    Chronic subdural hematoma (CSDH) has a variety of clinical presentations, with numerous neurological symptoms and signs. Hemiparesis is one of the leading signs that potentially indicates CSDH. The purpose of this study was to determine the threshold (cut-off) value of midsagittal line (MSL) shift after which hemiparesis is likely to appear. The study evaluated 83 patients with 53 unilateral and 30 bilateral CSDHs over a period of three years. The evaluated computed tomography (CT) findings in patients with CSDH were the diameter of the hematoma and the midsagittal line shift, measured on non-contrast CT scans in relation to the occurrence of hemiparesis. Threshold values of MSL shift for both types of CSDH were obtained as maximal (equal) sensitivity and specificity (the intersection of the curves). MSL shift is a good predictor of hemiparesis occurrence (total sample, AUROC 0.75, p=0.0001). Unilateral and bilateral CSDHs had different threshold values of MSL shift for hemiparesis development. Results suggested that in unilateral CSDH the threshold value of MSL shift could be at 10 mm (AUROC=0.65; p=0.07). For bilateral CSDH the threshold level of MSL shift was 4.5 mm (AUROC=0.77; p=0.01). Our study points to the phenomenon that midsagittal line shift can predict hemiparesis occurrence. Hemiparesis in patients with bilateral CSDH was more related to midsagittal line shift than in unilateral CSDH. When the midsagittal line shift exceeds the threshold level, hemiparesis occurs with a certain probability.
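
    A minimal sketch of the cut-off selection described above (the point where sensitivity and specificity are equal in an ROC analysis) is shown below on synthetic data; the numbers are illustrative, not the study's measurements.

    ```python
    # Minimal sketch (synthetic data, not the study's): pick the cut-off where
    # sensitivity and specificity are (nearly) equal, as described in the abstract,
    # from an ROC analysis of midline-shift values against hemiparesis.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(2)
    shift = np.concatenate([rng.normal(6, 3, 40), rng.normal(12, 4, 43)])   # mm, illustrative
    hemiparesis = np.concatenate([np.zeros(40, int), np.ones(43, int)])

    fpr, tpr, thresholds = roc_curve(hemiparesis, shift)
    sens, spec = tpr, 1 - fpr
    cut = thresholds[np.argmin(np.abs(sens - spec))]     # intersection of the two curves
    print(f"AUROC = {roc_auc_score(hemiparesis, shift):.2f}, cut-off ~ {cut:.1f} mm")
    ```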

  20. Experimenting with ecosystem interaction networks in search of threshold potentials in real-world marine ecosystems.

    PubMed

    Thrush, Simon F; Hewitt, Judi E; Parkes, Samantha; Lohrer, Andrew M; Pilditch, Conrad; Woodin, Sarah A; Wethey, David S; Chiantore, Mariachiara; Asnaghi, Valentina; De Juan, Silvia; Kraan, Casper; Rodil, Ivan; Savage, Candida; Van Colen, Carl

    2014-06-01

    Thresholds profoundly affect our understanding and management of ecosystem dynamics, but we have yet to develop practical techniques to assess the risk that thresholds will be crossed. Combining ecological knowledge of critical system interdependencies with a large-scale experiment, we tested for breaks in the ecosystem interaction network to identify threshold potential in real-world ecosystem dynamics. Our experiment with the bivalves Macomona liliana and Austrovenus stutchburyi on marine sandflats in New Zealand demonstrated that reductions in incident sunlight changed the interaction network between sediment biogeochemical fluxes, productivity, and macrofauna. By demonstrating loss of positive feedbacks and changes in the architecture of the network, we provide mechanistic evidence that stressors lead to break points in dynamics, which theory predicts predispose a system to a critical transition.

  1. [The analysis of threshold effect using Empower Stats software].

    PubMed

    Lin, Lin; Chen, Chang-zhong; Yu, Xiao-dan

    2013-11-01

    In many biomedical studies of how a factor influences an outcome variable, the factor has no influence, or a positive effect, only within a certain range; beyond a certain threshold value, the size and/or direction of the effect changes, which is called a threshold effect. Whether there is a threshold effect of a factor (x) on the outcome variable (y) can be examined by fitting a smooth curve and checking for a piecewise linear relationship, and then analyzing the threshold effect with a segmented regression model, a likelihood ratio test (LRT) and bootstrap resampling. The Empower Stats software developed by X & Y Solutions Inc. (USA) has a threshold effect analysis module. The user can either input a threshold value to fit the data with segmentation at that threshold, or let the software determine the optimal threshold automatically and calculate its confidence interval.
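
    A minimal sketch of the segmented-regression step described above is shown below: a two-segment linear model is fitted with a grid search over candidate breakpoints on synthetic data. Empower Stats is not used here, and the LRT and bootstrap steps are omitted; all names and numbers are illustrative.

    ```python
    # Sketch of a threshold-effect analysis in the spirit described: fit a
    # two-segment (piecewise linear) regression, choosing the breakpoint that
    # minimizes the residual sum of squares. Synthetic data only.
    import numpy as np

    def fit_segmented(x, y, candidates):
        best = None
        for k in candidates:
            # Continuous piecewise-linear basis: intercept, x, and (x - k)_+
            X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])
            beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = res[0] if res.size else np.sum((y - X @ beta) ** 2)
            if best is None or sse < best[0]:
                best = (sse, k, beta)
        return best   # (SSE, breakpoint, coefficients)

    rng = np.random.default_rng(3)
    x = np.sort(rng.uniform(0, 10, 200))
    y = np.where(x < 6, 0.1 * x, 0.1 * x + 1.5 * (x - 6)) + rng.normal(0, 0.3, 200)
    sse, k_hat, beta = fit_segmented(x, y, candidates=np.linspace(1, 9, 81))
    print(f"estimated threshold ~ {k_hat:.2f}")   # should be close to the true break at 6
    ```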

  2. Critical rainfall conditions for the initiation of torrential flows. Results from the Rebaixader catchment (Central Pyrenees)

    NASA Astrophysics Data System (ADS)

    Abancó, Clàudia; Hürlimann, Marcel; Moya, José; Berenguer, Marc

    2016-10-01

    Torrential flows like debris flows or debris floods are fast movements formed by a mix of water and different amounts of unsorted solid material. They generally occur in steep torrents and pose high risk in mountainous areas. Rainfall is their most common triggering factor and the analysis of the critical rainfall conditions is a fundamental research task. Due to their wide use in warning systems, rainfall thresholds for the triggering of torrential flows are an important outcome of such analysis and are empirically derived using data from past events. In 2009, a monitoring system was installed in the Rebaixader catchment, Central Pyrenees (Spain). Since then, rainfall data of 25 torrential flows ("TRIG" rainfalls) were recorded, with a 5-min sampling frequency. Another 142 rainfalls that did not trigger torrential flows ("NonTRIG" rainfalls) were also collected and analyzed. The goal of this work was threefold: (i) characterize rainfall episodes in the Rebaixader catchment and compare rainfall data that triggered torrential flows and others that did not; (ii) define and test Intensity-Duration (ID) thresholds using rainfall data measured inside the catchment with different techniques; (iii) analyze how the criterion used for defining the rainfall duration and the spatial variability of rainfall influence the values obtained for the thresholds. The statistical analysis of the rainfall characteristics showed that the parameters that discriminate best between TRIG and NonTRIG rainfalls are the rainfall intensities, the mean rainfall and the total rainfall amount. The antecedent rainfall was not significantly different between TRIG and NonTRIG rainfalls, as can be expected when the source material is very pervious (a sandy glacial soil in the study site). Thresholds were derived from data collected at one rain gauge located inside the catchment. Two different methods were applied to calculate the duration and intensity of rainfall: (i) using the total duration, Dtot, and mean intensity, Imean, of the rainfall event, and (ii) using floating durations, D, and intensities, Ifl, based on the maximum values over floating periods of different duration. The resulting thresholds are considerably different (Imean = 6.20 Dtot^-0.36 and Ifl_90% = 5.49 D^-0.75, respectively), showing a strong dependence on the applied methodology. On the other hand, the definition of the thresholds is affected by several types of uncertainties. Data from both rain gauges and weather radar were used to analyze the uncertainty associated with the spatial variability of the triggering rainfalls. The analysis indicates that the precipitation recorded by the nearby rain gauges can introduce major uncertainties, especially for convective summer storms. Thus, incorporating radar rainfall can significantly improve the accuracy of the measured triggering rainfall. Finally, thresholds were also derived according to three different criteria for the definition of the duration of the triggering rainfall: (i) the duration until the peak intensity, (ii) the duration until the end of the rainfall; and (iii) the duration until the trigger of the torrential flow. An important contribution of this work is the assessment of the threshold relationships obtained using the third definition of duration. Moreover, important differences are observed in the obtained thresholds, showing that ID relationships are significantly dependent on the applied methodology.
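
    Reading the two reported relationships as power laws with negative exponents, a small sketch of how an observed event could be checked against them might look as follows; the event values are illustrative, not from the dataset.

    ```python
    # Check an observed rainfall event against the two Intensity-Duration thresholds
    # reported in the abstract (exponents read as negative powers, I = a * D**b with b < 0).
    # The event values here are illustrative only.
    def exceeds_id_threshold(intensity_mm_h, duration_h, a, b):
        return intensity_mm_h >= a * duration_h ** b

    event = {"duration_h": 2.0, "intensity_mm_h": 8.0}
    total_duration_rule = exceeds_id_threshold(event["intensity_mm_h"], event["duration_h"],
                                               a=6.20, b=-0.36)   # Imean = 6.20 * Dtot^-0.36
    floating_rule = exceeds_id_threshold(event["intensity_mm_h"], event["duration_h"],
                                         a=5.49, b=-0.75)         # Ifl_90% = 5.49 * D^-0.75
    print(total_duration_rule, floating_rule)
    ```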

  3. Threshold concepts: implications for the management of natural resources

    USGS Publications Warehouse

    Guntenspergen, Glenn R.; Gross, John

    2014-01-01

    Threshold concepts can have broad relevance in natural resource management. However, the concept of ecological thresholds has not been widely incorporated or adopted in management goals. This largely stems from the uncertainty revolving around threshold levels and the post hoc analyses that have generally been used to identify them. Natural resource managers need new tools and approaches that will help them assess the existence and detection of conditions that demand management actions. Additional threshold concepts include utility thresholds (which are based on human values about ecological systems) and decision thresholds (which reflect management objectives and values and include ecological knowledge about a system), as well as ecological thresholds. All of these concepts provide a framework for considering the use of threshold concepts in natural resource decision making.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Michael A.; School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland 4072; Dawson, Christopher M.

    The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, and adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems which show that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is two powerful theorems which relate the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.

  5. An evaluation of corn earworm damage and thresholds in soybean

    NASA Astrophysics Data System (ADS)

    Adams, Brian Patrick

    Interactions between corn earworm, Helicoverpa zea (Boddie), and soybean, Glycine max L. (Merrill), were investigated in the Mid-South to evaluate thresholds and damage levels. Field studies were conducted in both indeterminate and determinate modern cultivars to evaluate damage, critical injury levels, and soybean response to simulated corn earworm injury. Field studies were also conducted to evaluate the response of indeterminate cultivars to infestations of corn earworm. Field studies were also conducted to investigate the relationship between pyrethroid insecticide application and corn earworm oviposition in soybean. Results of field studies involving simulated corn earworm damage indicated the need for a dynamic threshold that becomes more conservative as soybean phenology progresses through the reproductive growth stages. This suggested that soybean was more tolerant of fruit loss during the earlier reproductive stages and was able to compensate for fruit loss better during this time than at later growth stages. Results of field studies involving infestations of corn earworm indicated that current thresholds are likely too liberal. As a result, economic injury level tables were constructed based upon a range of crop values and control costs; however, a general action threshold was also recommended for indeterminate soybean in the Mid-South. Field study results investigating the relationship of pyrethroid application and corn earworm oviposition indicated that, even in the presence of an insecticide, corn earworm prefers to oviposit in the upper portion of the canopy, as well as on the leaves as opposed to all other plant parts, consistent with all previous literature.

  6. Quantifying Information Gain from Dynamic Downscaling Experiments

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2015-12-01

    Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will also dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information increase from dynamic downscaling experiments, to better understand and appreciate their potentials and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than what can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark to quantify the information gain from the downscaling experiments, or any other approaches. For a downscaling experiment to show any value, the information has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.

  7. Photoacoustic signals denoising of the glucose aqueous solutions using an improved wavelet threshold method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Xiong, Zhihua

    2016-10-01

    The denoising of photoacoustic signals of glucose is one of the most important steps in the quality identification of fruit, because the real-time photoacoustic signals of glucose are easily interfered with by all kinds of noises. To remove the noises and some useless information, an improved wavelet threshold function was proposed. Compared with the traditional wavelet hard and soft threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect of the denoised photoacoustic signals due to the continuity of the improved wavelet threshold function, and the error between the denoised signals and the original signals can be decreased. To validate the feasibility of the improved wavelet threshold function denoising, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, the standard test signal was used, and three different denoising methods were used and compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and the root-mean-square error (RMSE) values were used to evaluate the performance of the improved wavelet threshold function denoising. The experimental results demonstrate that the SNR value of the improved wavelet threshold function is the largest and the RMSE value is the smallest, which verifies that the improved wavelet threshold function denoising is feasible. Finally, the improved wavelet threshold function denoising was used to remove the noises of the photoacoustic signals of the glucose solutions. The denoising effect is also very good. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value in the field of denoising for photoacoustic signals.
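
    The abstract does not give the exact form of the improved threshold function, so the sketch below uses one common continuous compromise between hard and soft thresholding (zero at the threshold, approaching hard thresholding for large coefficients) together with PyWavelets; it is only illustrative of the general approach, not the authors' function.

    ```python
    # Sketch of wavelet-threshold denoising with a continuous threshold function
    # (one common "improved" form, used here only as an illustration), evaluated
    # with SNR and RMSE on a synthetic test signal.
    import numpy as np
    import pywt

    def improved_threshold(c, thr, a=1.0):
        # Zero at |c| = thr (continuous) and approaching hard thresholding for large |c|.
        shrink = np.sign(c) * (np.abs(c) - thr * np.exp(-a * (np.abs(c) - thr)))
        return np.where(np.abs(c) <= thr, 0.0, shrink)

    def denoise(signal, wavelet="db4", level=4, a=1.0):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate (MAD)
        thr = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold
        coeffs = [coeffs[0]] + [improved_threshold(c, thr, a) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t)
    noisy = clean + 0.3 * np.random.default_rng(4).normal(size=t.size)
    den = denoise(noisy)
    rmse = np.sqrt(np.mean((den - clean) ** 2))
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((den - clean) ** 2))
    print(f"RMSE = {rmse:.3f}, SNR = {snr:.1f} dB")
    ```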

  8. A Threshold Model of Social Support, Adjustment, and Distress after Breast Cancer Treatment

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Armer, Jane M.; Heppner, P. Paul

    2012-01-01

    This study examined a threshold model that proposes that social support exhibits a curvilinear association with adjustment and distress, such that support in excess of a critical threshold level has decreasing incremental benefits. Women diagnosed with a first occurrence of breast cancer (N = 154) completed survey measures of perceived support…

  9. Sepsis and Inflammatory Response Mechanisms: An Activity Stress Model in Humans

    DTIC Science & Technology

    2001-01-31

    anaerobic threshold yielded the reactions most typical of trauma. However, no mode of laboratory exercise induced a sustained and prolonged inflammatory... mobilization... Anaerobic threshold as a marker of the critical... after epinephrine injection. Lymphocyte retention by lymph nodes, however, may contribute to post-injection lymphopenia. Anaerobic Threshold as a

  10. Nonlinear threshold behavior during the loss of Arctic sea ice.

    PubMed

    Eisenman, I; Wettlaufer, J S

    2009-01-06

    In light of the rapid recent retreat of Arctic sea ice, a number of studies have discussed the possibility of a critical threshold (or "tipping point") beyond which the ice-albedo feedback causes the ice cover to melt away in an irreversible process. The focus has typically been centered on the annual minimum (September) ice cover, which is often seen as particularly susceptible to destabilization by the ice-albedo feedback. Here, we examine the central physical processes associated with the transition from ice-covered to ice-free Arctic Ocean conditions. We show that although the ice-albedo feedback promotes the existence of multiple ice-cover states, the stabilizing thermodynamic effects of sea ice mitigate this when the Arctic Ocean is ice covered during a sufficiently large fraction of the year. These results suggest that critical threshold behavior is unlikely during the approach from current perennial sea-ice conditions to seasonally ice-free conditions. In a further warmed climate, however, we find that a critical threshold associated with the sudden loss of the remaining wintertime-only sea ice cover may be likely.

  11. Nonlinear threshold behavior during the loss of Arctic sea ice

    PubMed Central

    Eisenman, I.; Wettlaufer, J. S.

    2009-01-01

    In light of the rapid recent retreat of Arctic sea ice, a number of studies have discussed the possibility of a critical threshold (or “tipping point”) beyond which the ice–albedo feedback causes the ice cover to melt away in an irreversible process. The focus has typically been centered on the annual minimum (September) ice cover, which is often seen as particularly susceptible to destabilization by the ice–albedo feedback. Here, we examine the central physical processes associated with the transition from ice-covered to ice-free Arctic Ocean conditions. We show that although the ice–albedo feedback promotes the existence of multiple ice-cover states, the stabilizing thermodynamic effects of sea ice mitigate this when the Arctic Ocean is ice covered during a sufficiently large fraction of the year. These results suggest that critical threshold behavior is unlikely during the approach from current perennial sea-ice conditions to seasonally ice-free conditions. In a further warmed climate, however, we find that a critical threshold associated with the sudden loss of the remaining wintertime-only sea ice cover may be likely. PMID:19109440

  12. A threshold theory of the humor response.

    PubMed

    Epstein, Robert; Joker, Veronica R

    2007-01-01

    The humor response has long been considered mysterious, and it is given relatively little attention in modern experimental psychology, in spite of the fact that numerous studies suggest that it has substantial benefits for mood and health. Existing theories of humor fail to account for some of the most basic humor phenomena. On most occasions when a humor response occurs, certain verbal or visual stimuli (the "setup" stimuli, which function as an establishing operation) must precede a critical stimulus (such as a "punch line" or the final panel or critical feature of a cartoon), which then occasions a sudden "revelation" or "understanding"; this revelation is often accompanied by the humor response. We suggest that the setup stimuli increase the strength of the revelatory response to a point just below the threshold of awareness and that the critical stimulus, properly designed and timed, edges the revelatory response to a point just above threshold. We also suggest that it is this threshold phenomenon that produces most instances of the humor response. We discuss these issues in the context of some notable humor of Carl Rogers and B. F. Skinner.

  13. The critical size of focal articular cartilage defects is associated with strains in the collagen fibers.

    PubMed

    Heuijerjans, A; Wilson, W; Ito, K; van Donkelaar, C C

    2017-12-01

    The size of a full-thickness focal cartilage defect is accepted to be predictive of its fate, but the size threshold at which treatment is required is unclear. Clarification of the mechanism behind this threshold effect will help determine when treatment is required. The objective was to investigate the effect of defect size on strains in the collagen fibers and the non-fibrillar matrix of surrounding cartilage. These strains may indicate matrix disruption. Tissue deformation into the defect was expected, stretching adjacent superficial collagen fibers, while an osteochondral implant was expected to prevent these deformations. Finite element simulations of cartilage/cartilage contact were performed for intact cartilage, defects 0.5 to 8 mm wide, and an 8 mm implant. Two loading scenarios were simulated: impact (a load increase to 2 MPa in 1 ms) and creep (a constant load of 0.5 MPa for 900 s). A composition-based material model for articular cartilage was employed. Impact loading caused low strain levels for all models. Creep loading increased deviatoric strains and collagen strains in the surrounding cartilage. Deviatoric strains increased gradually with defect size, but the surface area over which collagen fiber strains exceeded failure thresholds increased abruptly for small increases of defect size. This was caused by a narrow distribution of collagen fiber strains resulting from the non-linear stiffness of the fibers. We postulate this might be the mechanism behind the existence of a critical defect size. Filling the defect with an implant reduced deviatoric and collagen fiber strains towards values for intact cartilage. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Ecohydrology and tipping points in semiarid Australian rangelands

    NASA Astrophysics Data System (ADS)

    Saco, P. M.; Azadi, S.; Moreno de las Heras, M.; Willgoose, G. R.

    2017-12-01

    Semiarid landscapes are often characterised by a spatially heterogeneous vegetation cover forming mosaics of densely vegetated patches within bare soil. This patchy vegetation cover, which is linked to the healthy function of these ecosystems, is sensitive to human disturbances that can lead to degradation. Previous work suggests that vegetation loss below a critical value can lead to a sudden decrease in landscape functionality following threshold behaviour. The decrease in vegetation cover is linked to erosion and substantial water losses through increased landscape hydrological connectivity. We study these interactions and the possible existence of tipping points in the Mulga land bioregion by combining remote sensing observations and results from an eco-geomorphologic model to investigate changes in ecosystem connectivity and the existence of threshold behaviour. More than 30 sites were selected along a precipitation gradient spanning approximately 250 to 500 mm annual rainfall. The analysis of vegetation patterns is based on high-resolution remote sensing images (IKONOS, QuickBird, Pleiades) and MODIS NDVI, which, combined with local precipitation data, are used to compute rainfall use efficiency and assess ecosystem function. A critical tipping point associated with the loss of vegetation cover appears at the sites with lower annual precipitation. We found that this tipping-point behaviour weakens at sites with higher rainfall. We use the model to investigate the relation between structural and functional connectivity and the emergence of threshold behaviour for selected plots along this precipitation gradient. Both observations and modelling results suggest that sites with higher rainfall are more resilient to changes in surface connectivity. The implications for ecosystem resilience and land management are discussed.

  15. Effects of epidemic threshold definition on disease spread statistics

    NASA Astrophysics Data System (ADS)

    Lagorio, C.; Migueles, M. V.; Braunstein, L. A.; López, E.; Macri, P. A.

    2009-03-01

    We study the statistical properties of SIR epidemics in random networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size sc. Using percolation theory to calculate the average fractional size of an epidemic, we find that the strength of the spanning link percolation cluster P∞ is an upper bound to the average fractional epidemic size. For small values of sc, P∞ is no longer a good approximation, and the average fractional size has to be computed directly. We find that the choice of sc is generally (but not always) guided by the network structure and the transmissibility T of the disease in question. If the goal is to always obtain P∞ as the average epidemic size, one should choose sc to be the typical size of the largest percolation cluster at the critical percolation threshold for the transmissibility. We also study Q, the probability that an SIR propagation reaches the epidemic mass sc, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice of sc on predictions of average outcome sizes of computer failure epidemics.
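
    The mapping between SIR outbreaks and bond percolation described above can be illustrated with a short simulation. The sketch below is a minimal stand-in, assuming networkx and an Erdős–Rényi graph, and treating each edge as transmitting independently with probability T; the cutoff s_c and all numbers are illustrative, not taken from the study.

```python
import random
import networkx as nx

def outbreak_fractions(G, T, trials=500):
    """Bond-percolation proxy for SIR: each edge transmits with probability T,
    and the outbreak from a random seed is the connected component containing
    the seed in the graph of transmitting edges."""
    N = G.number_of_nodes()
    fractions = []
    for _ in range(trials):
        kept = nx.Graph()
        kept.add_nodes_from(G.nodes)
        kept.add_edges_from(e for e in G.edges if random.random() < T)
        seed = random.choice(list(G.nodes))
        fractions.append(len(nx.node_connected_component(kept, seed)) / N)
    return fractions

def mean_epidemic_size(fractions, s_c):
    """Average fractional size, counting only propagations that reach s_c."""
    epidemics = [s for s in fractions if s >= s_c]
    return sum(epidemics) / len(epidemics) if epidemics else 0.0

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(2000, 2.0 / 1999)   # toy network, mean degree ~2
    sizes = outbreak_fractions(G, T=0.6)
    for s_c in (0.001, 0.01, 0.05):              # the choice of s_c matters
        print(s_c, round(mean_epidemic_size(sizes, s_c), 3))
```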

  16. Influence of fractal substructures of the percolating cluster on transferring processes in macroscopically disordered environments

    NASA Astrophysics Data System (ADS)

    Kolesnikov, B. P.

    2017-11-01

    This work addresses the problem of finding the effective kinetic properties of macroscopically disordered environments (MDE). These properties characterize an MDE as a whole on scales that significantly exceed the size of the macroscopic inhomogeneities. The structure of an MDE is considered as a complex of interpenetrating percolating and finite clusters formed from like components, whose topological characteristics influence the properties of the whole environment. The influence of the percolating cluster's fractal substructures (backbone, skeleton of the backbone, red bonds) on transfer processes during crossover (a structural transition from the fractal to the homogeneous state) is investigated, based on the proposed mathematical approach for finding the effective conductivity of MDEs and on the percolating cluster model. The nature of the change of the critical conductivity index t during crossover, from the value characteristic of the region close to the percolation threshold to the value corresponding to the homogeneous state, is demonstrated. The proposed model describes transfer processes in MDEs with a finite conductivity ratio between the «conductive» and «low conductive» phases above and below the percolation threshold and in the smearing region (an analogue of the smearing region of a second-order phase transition).

  17. Concurrent segregation and erosion effects in medium-energy iron beam patterning of silicon surfaces

    NASA Astrophysics Data System (ADS)

    Redondo-Cubero, A.; Lorenz, K.; Palomares, F. J.; Muñoz, A.; Castro, M.; Muñoz-García, J.; Cuerno, R.; Vázquez, L.

    2018-07-01

    We have bombarded crystalline silicon targets with a 40 keV Fe+ ion beam at different incidence angles. The resulting surfaces have been characterized by atomic force, current-sensing and magnetic force microscopies, scanning electron microscopy, and x-ray photoelectron spectroscopy. We have found that there is a threshold angle smaller than 40° for the formation of ripple patterns, which is definitely lower than those frequently reported for noble gas ion beams. We compare our observations with estimates of the value of the critical angle and of additional basic properties of the patterning process, which are based on a continuum model whose parameters are obtained from binary collision simulations. We have further studied experimentally the ripple structures and measured how the surface slopes change with the ion incidence angle. We explore in particular detail the fluence dependence of the pattern for an incidence angle value (40°) close to the threshold. Initially, rimmed holes appear randomly scattered on the surface, which evolve into large, bug-like structures. Further increasing the ion fluence induces a smooth, rippled background morphology. By means of microscopy techniques, a correlation between the morphology of these structures and their metal content can be unambiguously established.

  18. Chronic Migraine Is Associated With Sustained Elevation of Somatosensory Temporal Discrimination Thresholds.

    PubMed

    Vuralli, Doga; Evren Boran, H; Cengiz, Bulent; Coskun, Ozlem; Bolay, Hayrunnisa

    2016-10-01

    Migraine headache attacks have been shown to be accompanied by significant prolongation of somatosensory temporal discrimination threshold values, supporting signs of disrupted sensory processing in migraine. Chronic migraine is one of the most debilitating and challenging headache disorders with no available biomarker. We aimed to test the diagnostic value of somatosensory temporal discrimination for chronic migraine in this prospective, controlled study. Fifteen chronic migraine patients and 15 healthy controls completed the study. Chronic migraine patients were evaluated twice, during a headache period and a headache-free period. Somatosensory temporal discrimination threshold values were evaluated in both hands. Duration of migraine and chronic migraine, headache intensity, clinical features accompanying headache such as nausea, photophobia, phonophobia and osmophobia, and pressure pain thresholds were also recorded. In the chronic migraine group, somatosensory temporal discrimination threshold values on the headache day (138.8 ± 21.8 ms for the right hand and 141.2 ± 17.4 ms for the left hand) were significantly higher than somatosensory temporal discrimination threshold values on the headache free day (121.5 ± 13.8 ms for the right hand and 122.8 ± 12.6 ms for the left hand, P = .003 and P < .0001, respectively) and somatosensory temporal discrimination thresholds of healthy volunteers (35.4 ± 5.5 ms for the right hand and 36.4 ± 5.4 ms for the left hand, P < .0001 and P < .0001, respectively). Somatosensory temporal discrimination threshold values of chronic migraine patients on the headache free day were significantly prolonged compared to somatosensory temporal discrimination threshold values of the control group (121.5 ± 13.8 ms vs 35.4 ± 5.5 ms for the right hand, P < .0001 and 122.8 ± 12.6 ms vs 36.4 ± 5.4 ms for the left hand, P < .0001). Somatosensory temporal discrimination threshold values of the hand contralateral to the headache lateralization (153.3 ± 13.7 ms) were significantly higher (P < .0001) than those of the ipsilateral hand (118.2 ± 11.9 ms) in chronic migraine patients when headache was lateralized. The headache intensity of chronic migraine patients rated with visual analog score was positively correlated with the contralateral somatosensory temporal discrimination threshold values. Somatosensory temporal discrimination thresholds remain elevated during the headache-free intervals in patients with chronic migraine. By providing evidence for the first time for unremitting disruption of central sensory processing, the somatosensory temporal discrimination test stands out as a promising neurophysiological biomarker for chronic migraine. © 2016 American Headache Society.

  19. CRITICAL ILLUMINATION AND FLICKER FREQUENCY IN RELATED FISHES

    PubMed Central

    Crozier, W. J.; Wolf, E.; Zerrahn-Wolf, Gertrud

    1937-01-01

    Flicker response curves have been obtained at 21.5°C. for three genera of fresh water teleosts: Enneacanthus (sunfish), Xiphophorus (swordtail), Platypoecilius (Platy), by the determination of mean critical intensities for response at fixed flicker frequencies, and for a certain homogeneous group of backcross hybrids of swordtail x Platy (Black Helleri). The curves exhibit marked differences in form and proportions. The same type of analysis is applicable to each, however. A low intensity rod-governed section has added to it a more extensive cone portion. Each part is accurately described by the equation F = Fmax/(1 + e^(-p·log(I/Ii))), where F = flicker frequency, I = associated mean critical intensity, and Ii is the intensity at the inflection point of the sigmoid curve relating F to log I. There is no correlation between quantitative features of the rod and cone portions. Threshold intensities, p, Ii, and Fmax. are separately and independently determined. The hybrid Black Helleri show quantitative agreement with the Xiphophorus parental stock in the values of p for rods and cones, and in the cone Fmax.; the rod Fmax. is very similar to that for the Platy stock; the general level of effective intensities is rather like that of the Platy form. This provides, among other things, a new kind of support for the duplicity doctrine. Various races of Platypoecilius maculatus, and P. variatus, give closely agreeing values of Im at different flicker frequencies; and two species of sunfish also agree. The effect of cross-breeding is thus not a superficial thing. It indicates the possibility of further genetic investigation. The variability of the critical intensity for response to flicker follows the rules previously found to hold for other forms. The variation is the expression of a property of the tested organism. It is shown that, on the assumption of a frequency distribution of receptor element thresholds as a function of log I, with fluctuation in the excitabilities of the marginally excited elements, it is to be expected that the dispersion of critical flicker frequencies in repeated measurements will pass through a maximum as log I is increased, whereas the dispersion of critical intensities will be proportional to Im; and that the proportionality factor in the case of different organisms bears no relation to the form or position of the respective curves relating mean critical intensity to flicker frequency. These deductions agree with the experimental findings. PMID:19873037
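
    As a worked illustration of the sigmoid relation quoted above, the snippet below evaluates and fits F = Fmax/(1 + e^(-p·log(I/Ii))) with scipy; the synthetic data points, noise level, and starting parameters are hypothetical and not the 1937 measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def flicker_curve(log_I, F_max, p, log_Ii):
    """Logistic dependence of critical flicker frequency on log intensity:
    F = F_max / (1 + exp(-p * (log I - log Ii)))."""
    return F_max / (1.0 + np.exp(-p * (log_I - log_Ii)))

np.random.seed(0)
# Hypothetical (log intensity, frequency) readings for one cone branch.
log_I = np.linspace(-2.0, 2.0, 9)
F_obs = flicker_curve(log_I, 45.0, 2.5, 0.0) + np.random.normal(0.0, 0.5, log_I.size)

params, _ = curve_fit(flicker_curve, log_I, F_obs, p0=[40.0, 1.0, 0.0])
F_max_fit, p_fit, log_Ii_fit = params
print(f"F_max = {F_max_fit:.1f}, p = {p_fit:.2f}, log Ii = {log_Ii_fit:.2f}")
```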

  20. Study of blur discrimination for 3D stereo viewing

    NASA Astrophysics Data System (ADS)

    Subedar, Mahesh; Karam, Lina J.

    2014-03-01

    Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination was studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on the blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case where both the eyes will observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as we vary the disparity value. This further indicates that binocular disparity does not affect blur discrimination thresholds and the models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D blur discrimination thresholds. We have presented fitting of the Weber model to the 3D blur discrimination thresholds measured from the subjective experiments.

  1. Climate Change, Population Immunity, and Hyperendemicity in the Transmission Threshold of Dengue

    PubMed Central

    Oki, Mika; Yamamoto, Taro

    2012-01-01

    Background It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R0), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R0 value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist–a state known as hyperendemicity–and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R0. We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. Methods and Findings We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. Conclusions The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas. PMID:23144746

  2. Climate change, population immunity, and hyperendemicity in the transmission threshold of dengue.

    PubMed

    Oki, Mika; Yamamoto, Taro

    2012-01-01

    It has been suggested that the probability of dengue epidemics could increase because of climate change. The probability of epidemics is most commonly evaluated by the basic reproductive number (R(0)), and in mosquito-borne diseases, mosquito density (the number of female mosquitoes per person [MPP]) is the critical determinant of the R(0) value. In dengue-endemic areas, 4 different serotypes of dengue virus coexist (a state known as hyperendemicity) and a certain proportion of the population is immune to one or more of these serotypes. Nevertheless, these factors are not included in the calculation of R(0). We aimed to investigate the effects of temperature change, population immunity, and hyperendemicity on the threshold MPP that triggers an epidemic. We designed a mathematical model of dengue transmission dynamics. An epidemic was defined as a 10% increase in seroprevalence in a year, and the MPP that triggered an epidemic was defined as the threshold MPP. Simulations were conducted in Singapore based on the recorded temperatures from 1980 to 2009. The threshold MPP was estimated with the effect of (1) temperature only; (2) temperature and fluctuation of population immunity; and (3) temperature, fluctuation of immunity, and hyperendemicity. When only the effect of temperature was considered, the threshold MPP was estimated to be 0.53 in the 1980s and 0.46 in the 2000s, a decrease of 13.2%. When the fluctuation of population immunity and hyperendemicity were considered in the model, the threshold MPP decreased by 38.7%, from 0.93 to 0.57, from the 1980s to the 2000s. The threshold MPP was underestimated if population immunity was not considered and overestimated if hyperendemicity was not included in the simulations. In addition to temperature, these factors are particularly important when quantifying the threshold MPP for the purpose of setting goals for vector control in dengue-endemic areas.
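
    For context on how a threshold mosquito density follows from an R0 expression, here is a minimal sketch based on the classical Ross-Macdonald formula rather than the authors' transmission model; all parameter names and values are placeholders chosen only to illustrate the calculation.

```python
import math

def ross_macdonald_R0(m, a, b, c, mu, n, r):
    """Classical Ross-Macdonald basic reproductive number for a mosquito-borne
    infection (a stand-in for the study's model):
    R0 = m * a**2 * b * c * exp(-mu * n) / (mu * r)."""
    return m * a**2 * b * c * math.exp(-mu * n) / (mu * r)

def threshold_mpp(a, b, c, mu, n, r):
    """Mosquito density per person at which R0 = 1."""
    return mu * r / (a**2 * b * c * math.exp(-mu * n))

# Illustrative parameters only: biting rate a, transmission probabilities b and c,
# mosquito mortality mu, extrinsic incubation n (days), human recovery rate r.
m_star = threshold_mpp(a=0.5, b=0.4, c=0.4, mu=0.1, n=10.0, r=1 / 7)
print(round(m_star, 3), round(ross_macdonald_R0(m_star, 0.5, 0.4, 0.4, 0.1, 10.0, 1 / 7), 3))
```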

  3. [Soil Phosphorus Forms and Leaching Risk in a Typically Agricultural Catchment of Hefei Suburban].

    PubMed

    Fan, Hui-hui; Li, Ru-zhong; Pei, Ting-ting; Zhang, Rui-gang

    2016-01-15

    To investigate the soil phosphorus forms and leaching risk in a typical agricultural catchment of the Ershibu River in the Hefei suburbs, Chaohu Lake basin, 132 surface soil samples were collected from the catchment area. The spatial distribution of total phosphorus (TP) and bio-available phosphorus (Bio-P), and the spatial variability of soil available phosphorus (Olsen-P) and easily desorbed phosphorus (CaCl2-P), were analyzed using the Kriging tool of ArcGIS after speciation analysis of soil phosphorus. Moreover, the enrichment level of soil phosphorus was studied, and the phosphorus leaching risk was evaluated by determining the leaching threshold value of soil phosphorus. The results showed that the samples with high contents of TP and Bio-P were mainly located in the upstream reach of the left tributary and on the right side of the local area where the two tributaries converge. The enrichment rates of soil phosphorus forms were ranked as follows: Ca-P (15.01) > OP (4.16) > TP (3.42) > IP (2.94) > Ex-P (2.76) > Fe/Al-P (2.43) > Olsen-P (2.34). The critical value of Olsen-P leaching was 18.388 mg·kg(-1), and the samples with values higher than the threshold accounted for 16.6% of total samples. Generally, the high-risk areas mainly occurred in the upstream reach of the left tributary, the middle reach of the right tributary, and the local area downstream of where the two tributaries converge.

  4. Possible Impact of Incremental Cost-Effectiveness Ratio (ICER) on Decision Making for Cancer Screening in Hong Kong: A Systematic Review.

    PubMed

    Wong, Carlos K H; Lang, Brian H H; Guo, Vivian Y W; Lam, Cindy L K

    2016-12-01

    The aim of this paper was to critically review the literature on the cost effectiveness of cancer screening interventions, and examine the incremental cost-effectiveness ratios (ICERs) that may influence government recommendations on cancer screening strategies and funding for mass implementation in the Hong Kong healthcare system. We conducted a literature review of cost-effectiveness studies in the Hong Kong population related to cancer screening published up to 2015, through a hand search and database search of PubMed, Web of Science, Embase, and OVID Medline. Binary data on the government's decisions were obtained from the Cancer Expert Working Group, Department of Health. Mixed-effect logistic regression analysis was used to examine the impact of ICERs on decision making. Using Youden's index and the area under the receiver operating characteristic curve (AUC), an optimal ICER threshold value for positive decisions was determined. Eight studies reporting 30 cost-effectiveness pairwise comparisons of population-based cancer screening were identified. Most studies reported an ICER for a cancer screening strategy versus a comparator with outcomes in terms of cost per life-years (55.6 %), or cost per quality-adjusted life-years (55.6 %). Among comparisons with a mean ICER of US$102,931 (range 800-715,137), each increase of 1000 in the ICER value was associated with decreased odds (odds ratio 0.990, 0.981-0.999; p = 0.033) of a positive recommendation. An optimal ICER value of US$61,600 per effectiveness unit yielded a high sensitivity of 90 % and specificity of 85 % for a positive recommendation. A lower ICER threshold value of below US$8044 per effectiveness unit was detected for a positive funding decision. Linking published evidence to Government recommendations and practice on cancer screening, ICERs influence decisions on the adoption of health technologies in Hong Kong. The potential ICER threshold for recommendation in Hong Kong may be higher than those of developed countries.
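
    A compact way to reproduce the kind of Youden-index threshold search described above is sketched below, assuming scikit-learn and an entirely hypothetical set of ICER/decision pairs (the real analysis used the published comparisons and recorded government decisions).

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical ICERs (US$ per effectiveness unit) and whether a positive
# screening recommendation was issued (1 = recommended).
icer = np.array([5000, 12000, 30000, 45000, 61000, 75000, 90000, 150000, 400000, 715000])
decision = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

# Lower ICERs favour a positive decision, so score each comparison with -ICER.
fpr, tpr, thresholds = roc_curve(decision, -icer)
youden = tpr - fpr                     # Youden's J statistic at each cutoff
best = thresholds[np.argmax(youden)]
print(f"AUC = {roc_auc_score(decision, -icer):.2f}, "
      f"optimal ICER threshold ~ US${-best:,.0f}")
```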

  5. Effect of solute immobilization on the stability problem within the fractional model in the solute analog of the Horton-Rogers-Lapwood problem.

    PubMed

    Klimenko, Lyudmila S; Maryshev, Boris S

    2017-11-24

    The paper presents a linear stability analysis of the solute analogue of the Horton-Rogers-Lapwood (HRL) problem. The solid nanoparticles are treated as solute within the continuous approach. Therefore, we consider an infinite horizontal porous layer saturated with a mixture (carrier fluid and solute). Solute transport in porous media is very often complicated by solute immobilization on the solid matrix of the porous medium. Solute immobilization (solute sorption) is taken into account within the fractal model of the MIM approach. According to this model, solute in porous media is immobilized for random time intervals whose distribution does not have a finite mean value, in good agreement with some experiments. The solute concentration difference between the layer boundaries is assumed to be constant. We consider two cases of horizontal external filtration flux: constant and time-modulated. For the constant flux, the system of equations that determines the frequency of neutral oscillations and the critical value of the Rayleigh-Darcy number is derived. Neutral curves of the critical parameters as functions of the governing parameters are plotted. Stability maps are obtained numerically in a wide range of parameters of the system. We have found that taking immobilization into account leads to an increase in the critical value of the Rayleigh-Darcy number with an increase in the intensity of the external filtration flux. The case of weak time-dependent external flux is investigated analytically. We have shown that the modulated external flux leads to an increase in the critical value of the Rayleigh-Darcy number and a decrease in the critical wave number. For moderate time-dependent filtration flux, a differential equation with Caputo fractional derivatives has been obtained for the description of the behavior near the convection instability threshold. This equation is analyzed numerically by the Floquet method; the parametric excitation of convection is observed.

  6. Too big or too narrow? Disturbance characteristics determine the functional resilience in virtual microbial ecosystems

    NASA Astrophysics Data System (ADS)

    König, Sara; Firle, Anouk-Letizia; Koehnke, Merlin; Banitz, Thomas; Frank, Karin

    2017-04-01

    In general ecology, there is an ongoing debate about the influence of fragmentation on extinction thresholds. Whether this influence is positive or negative depends on the considered type of fragmentation: whereas habitat fragmentation often has a negative influence on population extinction thresholds, spatially fragmented disturbances are observed to have mostly positive effects on the extinction probability. Besides preventing population extinction, in soil systems ecology we are interested in analyzing how ecosystem functions are maintained despite disturbance events. Here, we analyzed the influence of disturbance size and fragmentation on the functional resilience of a microbial soil ecosystem. As soil is a highly heterogeneous environment exposed to disturbances of different spatial configurations, the identification of critical disturbance characteristics for maintaining its functions is crucial. We used the numerical simulation model eColony, which considers bacterial growth, degradation and dispersal, to analyze the dynamic response of biodegradation, as an example of an important microbial ecosystem service, to disturbance events of different spatial configurations. We systematically varied the size and the degree of fragmentation of the affected area (disturbance pattern). We found that the influence of the disturbance size on functional recovery and biodegradation performance highly depends on the spatial fragmentation of the disturbance. Generally, biodegradation performance decreases with increasing clumpedness and increasing size of the affected area. After spatially correlated disturbance events, biodegradation performance decreases linearly with increasing disturbance size. After spatially fragmented disturbance events, on the other hand, an increase in disturbance size has no influence on the biodegradation performance until a critical disturbance size is reached. If the affected area is bigger than this critical size, the functional performance decreases dramatically. Under recurrent disturbance events, this threshold is shifted to lower disturbance sizes. The more frequently disturbances recur, the lower the critical disturbance size. Our simulation results indicate the importance of spatial characteristics of disturbance events for the functional resilience of microbial ecosystems. Critical values for disturbance size and fragmentation emerge from an interplay between both characteristics. In consequence, a precise definition of the specific disturbance regime is necessary for analysing functional resilience. With this study, we show that we need to consider the influence of fragmentation in terrestrial environments not only on population extinctions but also on the resilience of ecosystem functions. Moreover, spatial disturbance characteristics - which are widely discussed on landscape scale - are an important factor on smaller scales, too.

  7. Modeling of the blood rheology in steady-state shear flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apostolidis, Alex J.; Beris, Antony N., E-mail: beris@udel.edu

    We undertake here a systematic study of the rheology of blood in steady-state shear flows. As blood is a complex fluid, the first question that we try to answer is whether, even in steady-state shear flows, we can model it as a rheologically simple fluid, i.e., we can describe its behavior through a constitutive model that involves only local kinematic quantities. Having answered that question positively, we then probe as to which non-Newtonian model best fits available shear stress vs shear-rate literature data. We show that under physiological conditions blood is typically viscoplastic, i.e., it exhibits a yield stress that acts as a minimum threshold for flow. We further show that the Casson model emerges naturally as the best approximation, at least for low and moderate shear-rates. We then develop systematically a parametric dependence of the rheological parameters entering the Casson model on key physiological quantities, such as the red blood cell volume fraction (hematocrit). For the yield stress, we base our description on its critical, percolation-originated nature. Thus, we first determine onset conditions, i.e., the critical threshold value that the hematocrit has to have in order for yield stress to appear. It is shown that this is a function of the concentration of a key red blood cell binding protein, fibrinogen. Then, we establish a parametric dependence as a function of the fibrinogen and the square of the difference of the hematocrit from its critical onset value. Similarly, we provide an expression for the Casson viscosity, in terms of the hematocrit and the temperature. A successful validation of the proposed formula is performed against additional experimental literature data. The proposed expression is anticipated to be useful not only for steady-state blood flow modeling but also as providing the starting point for transient shear, or more general flow modeling.
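
    To make the constitutive form concrete, the sketch below implements the standard Casson relation together with an illustrative percolation-style yield-stress scaling in the spirit described above; the coefficients, the critical hematocrit, and the units handling are invented placeholders, not the fitted parameters of the paper.

```python
import numpy as np

def casson_stress(gamma_dot, tau_y, mu_c):
    """Casson model: sqrt(tau) = sqrt(tau_y) + sqrt(mu_c * gamma_dot)."""
    return (np.sqrt(tau_y) + np.sqrt(mu_c * gamma_dot)) ** 2

def yield_stress(hct, fibrinogen, hct_c=0.05, k=1.0):
    """Illustrative percolation-type scaling only: yield stress grows with
    fibrinogen and with the square of (hematocrit - critical hematocrit),
    and vanishes below the critical hematocrit. Coefficients are made up."""
    return k * fibrinogen * max(hct - hct_c, 0.0) ** 2

gamma_dot = np.logspace(-1, 2, 50)                     # shear rates, 1/s
tau = casson_stress(gamma_dot, yield_stress(hct=0.45, fibrinogen=3.0), mu_c=0.004)
print(tau[:3])                                         # low-shear-rate stresses
```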

  8. A model to predict the effects of soil structure on denitrification and N2O emission

    NASA Astrophysics Data System (ADS)

    Laudone, G. M.; Matthews, G. P.; Bird, N. R. A.; Whalley, W. R.; Cardenas, L. M.; Gregory, A. S.

    2011-10-01

    A model of the void space of soil is presented, and used for the a priori biophysical simulation of denitrification. The model comprises a single critical percolation channel through a 5 cm stack of four unit cells of a dual-porous void structure. Together, the micro- and macro-porous structures closely replicate the full water retention characteristic of a sandy clay loam soil from the Woburn Experimental Farm operated by Rothamsted Research, UK. Between 1 and 10 micro-porous hot-spot zones of biological activity were positioned at equally spaced distances within 5 cm from the surface, and at either 10 μm or 100 μm from the critical percolation channel. Nitrification and denitrification reactions within the hotspots were assumed to follow Michaelis-Menten kinetics, with estimated values of rate coefficients. Estimates were also made of the threshold values of oxygen concentration below which the anaerobic processes would commence. The pore network was fully saturated following addition of an aqueous 'amendment' of nitrate and glucose which started the reactions, and which mirrored an established laboratory protocol. Diffusion coefficients for Fickian and Crank-Nicolson calculations were taken from the literature, and were corrected for the tortuosity of the micro-porosity. The model was used to show the amount of carbon dioxide, nitrous oxide and molecular nitrogen emerging from the simulated soil with time. Adjustment of the rate coefficient and oxygen threshold concentrations, within the context of a sensitivity analysis, gave emission curves in good agreement with previous experimental measurements. Positioning of the hot-spot zones away from the critical percolation path slowed the increase and decline in emission of the gases. The model and its parameters can now be used for modelling the effect of soil compaction and saturation on the emission of nitrous oxide.
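
    A minimal sketch of the hot-spot reaction kinetics assumed above: Michaelis-Menten consumption gated by an oxygen threshold below which the anaerobic pathway switches on. The rate constants, concentrations, and threshold value here are placeholders, not the calibrated model parameters.

```python
def michaelis_menten(c, v_max, k_m):
    """Michaelis-Menten rate for substrate concentration c."""
    return v_max * c / (k_m + c)

def denitrification_rate(no3, o2, v_max, k_m, o2_threshold):
    """Anaerobic denitrification proceeds only below an oxygen threshold
    (illustrative gate; all parameter values are placeholders)."""
    if o2 >= o2_threshold:
        return 0.0
    return michaelis_menten(no3, v_max, k_m)

# Example: nitrate 2.0, oxygen 0.05 (arbitrary units), threshold 0.1.
print(denitrification_rate(no3=2.0, o2=0.05, v_max=1.0, k_m=0.5, o2_threshold=0.1))
```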

  9. Simulating Flaring Events via an Intelligent Cellular Automata Mechanism

    NASA Astrophysics Data System (ADS)

    Dimitropoulou, M.; Vlahos, L.; Isliker, H.; Georgoulis, M.

    2010-07-01

    We simulate flaring events through a Cellular Automaton (CA) model, in which, for the first time, we use observed vector magnetograms as initial conditions. After non-linear force-free extrapolation of the magnetic field from the vector magnetograms, we identify magnetic discontinuities, using two alternative criteria: (1) the average magnetic field gradient, or (2) the normalized magnetic field curl (i.e. the current). Magnetic discontinuities are identified at the grid-sites where the magnetic field gradient or curl exceeds a specified threshold. We then relax the magnetic discontinuities according to the rules of Lu and Hamilton (1991) or Lu et al. (1993), i.e. we redistribute the magnetic field locally so that the discontinuities disappear. In order to simulate the flaring events, we consider several alternative scenarios with regard to: (1) The threshold above which magnetic discontinuities are identified (applying low, high, and height-dependent threshold values); (2) The driving process that occasionally causes new discontinuities (at randomly chosen grid sites, magnetic field increments are added that are perpendicular (or maybe also parallel) to the existing magnetic field). We address the question of whether the coronal active region magnetic fields can indeed be considered to be in the state of self-organized criticality (SOC).
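
    For readers unfamiliar with the relaxation rule cited above, a scalar sketch of the Lu and Hamilton (1991) threshold-redistribution step is given below; it uses a random scalar field, a fixed threshold, and periodic boundaries purely for illustration, whereas the study drives the automaton with extrapolated vector magnetograms and height-dependent thresholds.

```python
import numpy as np

def lu_hamilton_step(B, z_c=1.0):
    """One relaxation sweep of the scalar Lu & Hamilton (1991) rule: where the
    local 'stress' (field minus the mean of the 4 neighbours) exceeds z_c,
    remove 4/5 of it from the site and give 1/5 to each neighbour.
    Returns the number of relaxed sites (a proxy for released energy)."""
    stress = B - 0.25 * (np.roll(B, 1, 0) + np.roll(B, -1, 0)
                         + np.roll(B, 1, 1) + np.roll(B, -1, 1))
    unstable = np.abs(stress) > z_c
    for i, j in zip(*np.where(unstable)):
        share = 0.2 * stress[i, j]
        B[i, j] -= 4 * share
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            B[(i + di) % B.shape[0], (j + dj) % B.shape[1]] += share
    return int(unstable.sum())

np.random.seed(0)
B = np.random.uniform(-1.0, 1.0, (32, 32))
for _ in range(100):
    B[np.random.randint(32), np.random.randint(32)] += 0.1   # slow random driving
    while lu_hamilton_step(B):
        pass                                                  # relax until stable
```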

  10. Olive response to water availability: yield response functions, soil water content indicators and evaluation of adaptability to climate change

    NASA Astrophysics Data System (ADS)

    Riccardi, Maria; Alfieri, Silvia Maria; Basile, Angelo; Bonfante, Antonello; Menenti, Massimo; Monaco, Eugenia; De Lorenzi, Francesca

    2013-04-01

    Climate evolution, with the foreseen increase of temperature and frequency of drought events during the summer, could cause significant changes in the availability of water resources, especially in the Mediterranean region. European countries need to encourage sustainable agriculture practices, reducing inputs, especially of water, and minimizing any negative impact on crop quantity and quality. Olive is an important crop in the Mediterranean region that has traditionally been cultivated with no irrigation and is known to attain acceptable production under dry farming. Therefore this crop will not compete for the foreseen reduced water resources. However, good quantitative knowledge of the effects of reduced precipitation and water availability on yield is needed. Yield response functions, coupled with indicators of soil water availability, provide a quantitative description of the cultivar-specific behavior in relation to hydrological conditions. Yield response functions of 11 olive cultivars, typical of the Mediterranean environment, were determined using experimental data (unpublished or reported in scientific literature). The yield was expressed as relative yield (Yr); the soil water availability was described by means of different indicators: relative soil water deficit (RSWD), relative evapotranspiration (RED) and transpiration deficit (RTD). Crops can respond nonlinearly to changes in their growing conditions and exhibit threshold responses, so for the yield functions of each olive cultivar both linear regression and threshold-slope models were considered to evaluate the best fit. The level of relative yield attained in rain-fed conditions was identified and defined as the acceptable yield level (Yrrainfed). The value of the indicator (RSWD, RED and RTD) corresponding to Yrrainfed was determined for each cultivar and indicated as the critical value of water availability. The error in the determination of the critical value was estimated. By means of a simulation model of the water flow in the soil-plant-atmosphere system, the indicators of soil water availability were calculated for different soil units in an area of Southern Italy, traditionally cultivated with olive. Simulations were performed for two climate scenarios: reference (1961-90) and future climate (2021-50). The potential of the indicators RSWD, RED and RTD to describe soil water availability was evaluated using simulated and experimental data. The analysis showed that RED values were correlated to RTD. The analysis demonstrated that RTD was more effective than RED in representing crop water availability. RSWD is very well correlated with RTD, and the degree of correlation depends on the period of deficit considered. The probability of adaptation of each cultivar was calculated for both climatic periods by comparing the critical values (and their error distribution) with soil availability indicators. Keywords: Olea europaea, soil water deficit, water availability critical value. The work was carried out within the Italian national project AGROSCENARI funded by the Ministry for Agricultural, Food and Forest Policies (MIPAAF, D.M. 8608/7303/2008).
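
    The threshold-slope fit mentioned above can be illustrated with a small least-squares example; the broken-stick functional form, the synthetic data, and the starting values below are assumptions for illustration only, not the cultivar data used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def threshold_slope(x, x_c, slope):
    """Threshold-slope (broken-stick) response: relative yield stays at 1 up
    to a critical water-deficit value x_c and declines linearly beyond it."""
    return np.where(x <= x_c, 1.0, 1.0 + slope * (x - x_c))

np.random.seed(1)
# Hypothetical (deficit indicator, relative yield) pairs for one cultivar.
x = np.linspace(0.0, 1.0, 12)
y = threshold_slope(x, 0.4, -1.2) + np.random.normal(0.0, 0.03, x.size)

(x_c, slope), _ = curve_fit(threshold_slope, x, y, p0=[0.5, -1.0])
print(f"critical deficit x_c ~ {x_c:.2f}, slope ~ {slope:.2f}")
```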

  11. Is ``No-Threshold'' a ``Non-Concept''?

    NASA Astrophysics Data System (ADS)

    Schaeffer, David J.

    1981-11-01

    A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.

  12. A critical review of the potential impacts of marine seismic surveys on fish & invertebrates.

    PubMed

    Carroll, A G; Przeslawski, R; Duncan, A; Gunning, M; Bruce, B

    2017-01-15

    Marine seismic surveys produce high intensity, low-frequency impulsive sounds at regular intervals, with most sound produced between 10 and 300 Hz. Offshore seismic surveys have long been considered to be disruptive to fisheries, but there are few ecological studies that target commercially important species, particularly invertebrates. This review aims to summarise scientific studies investigating the impacts of low-frequency sound on marine fish and invertebrates, as well as to critically evaluate how such studies may apply to field populations exposed to seismic operations. We focus on marine seismic surveys due to their associated unique sound properties (i.e. acute, low-frequency, mobile source locations), as well as fish and invertebrates due to the commercial value of many species in these groups. The main challenges of seismic impact research are the translation of laboratory results to field populations over a range of sound exposure scenarios and the lack of sound exposure standardisation which hinders the identification of response thresholds. An integrated multidisciplinary approach to manipulative and in situ studies is the most effective way to establish impact thresholds in the context of realistic exposure levels, but if that is not practical the limitations of each approach must be carefully considered. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  13. Determination of Reynolds Shear Stress Level for Hemolysis.

    PubMed

    Jhun, Choon-Sik; Stauffer, Megan A; Reibson, John D; Yeager, Eric E; Newswanger, Raymond K; Taylor, Joshua O; Manning, Keefe B; Weiss, William J; Rosenberg, Gerson

    Reynolds shear stress (RSS) has served as a metric for the effect of turbulence on hemolysis. Forstrom (1969) and Sallam and Hwang (1984) determined the RSS threshold for hemolysis to be 50,000 and 4,000 dyne/cm², respectively, using a turbulent jet. Despite the order of magnitude discrepancy, the threshold by Sallam and Hwang has been frequently cited for hemolytic potential in blood pumps. We recreated a Sallam apparatus (SA) to resolve this discrepancy and provide additional data to be used in developing a more accurate hemolysis model. Hemolysis was measured over a large range of Reynolds numbers (Re) (Re = 1,000-80,000). Washed bovine red blood cells (RBCs) were injected into the free jet of phosphate buffered saline, and hemolysis was quantified using a percent hemolysis, Hp = h (100 - hematocrit [HCT])/Hb, where h (mg/dl) is free hemoglobin and Hb (mg/dl) is total hemoglobin. Reynolds shear stress was calculated using two-dimensional laser Doppler velocimetry. Reynolds shear stress of ≥30,000 dyne/cm² corresponding to Re of ≥60,000 appeared to cause hemolysis (p < 0.05). This RSS is an order of magnitude greater than the RSS threshold that Sallam and Hwang suggested, and it is similar to Forstrom's RSS threshold. This study resolved a long-standing uncertainty regarding the critical values of RSS for hemolysis and may provide a foundation for a more accurate hemolysis model.
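
    The hemolysis index quoted above is a one-line calculation; here it is written out directly from the definition, with made-up sample numbers.

```python
def percent_hemolysis(free_hb_mg_dl, total_hb_mg_dl, hematocrit_pct):
    """Percent hemolysis as defined in the record: Hp = h * (100 - HCT) / Hb,
    with h and Hb in mg/dl and HCT in percent."""
    return free_hb_mg_dl * (100.0 - hematocrit_pct) / total_hb_mg_dl

# Illustrative numbers only.
print(percent_hemolysis(free_hb_mg_dl=25.0, total_hb_mg_dl=14000.0, hematocrit_pct=30.0))
```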

  14. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
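
    A rough sketch of the LoG-plus-local-threshold idea is shown below; the Gaussian-background assumption used to convert the probability of false alarm into a multiple of the local standard deviation, the window size, and every parameter value are assumptions of this sketch rather than the published ATLAS procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter
from scipy.stats import norm

def detect_spots(image, sigma, pfa=1e-3, win=31):
    """Hypothetical LoG-based spot detection with a locally adapted threshold:
    at each pixel the threshold is the local mean of the LoG response plus a
    PFA-derived multiple of the local standard deviation."""
    log = -gaussian_laplace(image.astype(float), sigma)   # bright spots -> positive
    local_mean = uniform_filter(log, win)
    local_sq = uniform_filter(log ** 2, win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    k = norm.isf(pfa)                                     # Gaussian quantile for the PFA
    return log > local_mean + k * local_std

mask = detect_spots(np.random.rand(128, 128), sigma=2.0)
print(mask.sum(), "pixels flagged")
```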

  15. Relationship between slow visual processing and reading speed in people with macular degeneration

    PubMed Central

    Cheong, Allen MY; Legge, Gordon E; Lawrence, Mary G; Cheung, Sing-Hang; Ruff, Mary A

    2007-01-01

    Purpose People with macular degeneration (MD) often read slowly even with adequate magnification to compensate for acuity loss. Oculomotor deficits may affect reading in MD, but cannot fully explain the substantial reduction in reading speed. Central-field loss (CFL) is often a consequence of macular degeneration, necessitating the use of peripheral vision for reading. We hypothesized that slower temporal processing of visual patterns in peripheral vision is a factor contributing to slow reading performance in MD patients. Methods Fifteen subjects with MD, including 12 with CFL, and five age-matched control subjects were recruited. Maximum reading speed and critical print size were measured with RSVP (Rapid Serial Visual Presentation). Temporal processing speed was studied by measuring letter-recognition accuracy for strings of three randomly selected letters centered at fixation for a range of exposure times. Temporal threshold was defined as the exposure time yielding 80% recognition accuracy for the central letter. Results Temporal thresholds for the MD subjects ranged from 159 to 5881 ms, much longer than values for age-matched controls in central vision (13 ms, p<0.01). The mean temporal threshold for the 11 MD subjects who used eccentric fixation (1555.8 ± 1708.4 ms) was much longer than the mean temporal threshold (97.0 ms ± 34.2 ms, p<0.01) for the age-matched controls at 10° in the lower visual field. Individual temporal thresholds accounted for 30% of the variance in reading speed (p<0.05). Conclusion The significant association between increased temporal threshold for letter recognition and reduced reading speed is consistent with the hypothesis that slower visual processing of letter recognition is one of the factors limiting reading speed in MD subjects. PMID:17881032

  16. New stomatal flux-based critical levels for ozone effects on vegetation

    NASA Astrophysics Data System (ADS)

    Mills, Gina; Pleijel, Håkan; Braun, Sabine; Büker, Patrick; Bermejo, Victoria; Calvo, Esperanza; Danielsson, Helena; Emberson, Lisa; Fernández, Ignacio González; Grünhage, Ludger; Harmens, Harry; Hayes, Felicity; Karlsson, Per-Erik; Simpson, David

    2011-09-01

    The critical levels for ozone effects on vegetation have been reviewed and revised by the LRTAP Convention. Eight new or revised critical levels based on the accumulated stomatal flux of ozone (PODY, the Phytotoxic Ozone Dose above a threshold flux of Y nmol m⁻² PLA s⁻¹, where PLA is the projected leaf area) have been agreed. For each receptor, data were combined from experiments conducted under naturally fluctuating environmental conditions in 2-4 countries, resulting in linear dose-response relationships with response variables specific to each receptor (r² = 0.49-0.87, p < 0.001 for all). For crops, critical levels were derived for effects on wheat (grain yield, grain mass, and protein yield), potato (tuber yield) and tomato (fruit yield). For forest trees, critical levels were derived for effects on changes in annual increment in whole tree biomass for beech and birch, and Norway spruce. For (semi-)natural vegetation, the critical level for effects on productive and high conservation value perennial grasslands was based on effects on important component species of the genus Trifolium (clover species). These critical levels can be used to assess protection against the damaging effects of ozone on food security, important ecosystem services provided by forest trees (roundwood production, C sequestration, soil stability and flood prevention) and the vitality of pasture.

  17. Prolonged noise exposure-induced auditory threshold shifts in rats

    PubMed Central

    Chen, Guang-Di; Decker, Brandon; Muthaiah, Vijaya Prakash Krishnan; Sheppard, Adam; Salvi, Richard

    2014-01-01

    Noise-induced hearing loss (NIHL) initially increases with exposure duration, but eventually reaches an asymptotic threshold shift (ATS) once the exposure duration exceeds 18-24 h. Equations for predicting the ATS have been developed for several species, but not for rats, even though this species is extensively used in noise exposure research. To fill this void, we exposed rats to narrowband noise (NBN, 16-20 kHz) for 5 weeks starting at 80 dB SPL in the first week and then increasing the level by 6 dB per week to a final level of 104 dB SPL. Auditory brainstem responses (ABR) were recorded before, during, and following the exposure to determine the amount of hearing loss. The noise-induced threshold shift due to continuous long-term exposure, defined as the compound threshold shift (CTS), within and above 16-20 kHz increased with noise level at the rate of 1.82 dB threshold shift per dB of noise level (NL) above a critical level (C) of 77.2 dB SPL, i.e., CTS = 1.82(NL-77.2). The normalized amplitude of the largest ABR peak measured at 100 dB SPL decreased at the rate of 3.1% per dB of NL above the critical level of 76.9 dB SPL, i.e., %ABR Reduction = 3.1%(NL-76.9). ABR thresholds measured >30 days post-exposure only partially recovered, resulting in a permanent threshold shift of 30-40 dB along with severe hair cell loss in the basal, high-frequency region of the cochlea. In the rat, CTS increases with noise level with a slope similar to that in humans and chinchillas. The critical level (C) in the rat is similar to that of humans, but higher than that of chinchillas. PMID:25219503
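
    The two fitted relations reported above translate directly into code; the clamp at zero below the critical level is an added assumption for readability, since the fits apply only above it.

```python
def compound_threshold_shift(noise_level_db):
    """CTS (dB) from the fitted relation CTS = 1.82 * (NL - 77.2),
    applicable above the critical level of 77.2 dB SPL."""
    return max(1.82 * (noise_level_db - 77.2), 0.0)

def abr_amplitude_reduction(noise_level_db):
    """Percent reduction of the largest ABR peak: 3.1% per dB above 76.9 dB SPL."""
    return max(3.1 * (noise_level_db - 76.9), 0.0)

# Example at the final exposure level used in the study (104 dB SPL).
print(compound_threshold_shift(104), abr_amplitude_reduction(104))
```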

  18. Novel methodologies for spectral classification of exon and intron sequences

    NASA Astrophysics Data System (ADS)

    Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.

    2012-12-01

    Digital processing of a nucleotide sequence requires it to be mapped to a numerical sequence, in which the choice of nucleotide-to-numeric mapping affects how well its biological properties can be preserved and reflected from the nucleotide domain to the numerical domain. Digital spectral analysis of nucleotide sequences reveals a period-3 power spectral value that is more prominent in an exon sequence than in an intron sequence. The success of a period-3 based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them to existing codes to determine an appropriate representation, and to introduce novel thresholding methods for more accurate period-3 based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows: Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers an attractive performance. A windowed 1-sequence numerical representation (with window lengths of 9, 15, and 24 bases) offers a possible speed gain over the non-windowed 4-sequence Voss representation, which increases as sequence length increases. A winner threshold value (chosen from the best among two defined threshold values and one other threshold value) offers top precision for classifying an unknown sequence of specified fixed length. An interpolated winner threshold value applicable to an unknown sequence of arbitrary length can be estimated from the winner threshold values of fixed-length sequences with comparable performance. In general, precision increases as sequence length increases. The study contributes an effective spectral analysis of nucleotide sequences to better reveal embedded properties, and has potential applications in improved genome annotation.
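
    To show what a period-3 measure looks like in practice, here is a minimal version using the 4-sequence Voss indicator mapping discussed in the article (not the K-Quaternary Code I it recommends); the example sequence and the threshold value are arbitrary.

```python
import numpy as np

def period3_power(seq):
    """Period-3 signal strength of a DNA string using the 4-sequence Voss
    (binary indicator) representation: the summed |X_k|^2 at k = N/3 over the
    four nucleotide indicator sequences, normalized by sequence length."""
    N = len(seq)
    k = N // 3                      # period-3 bin (exact when N is a multiple of 3)
    total = 0.0
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in seq.upper()])
        spectrum = np.fft.fft(indicator)
        total += abs(spectrum[k]) ** 2
    return total / N

def classify(seq, threshold):
    """Label a sequence 'exon' if its period-3 power exceeds the threshold;
    choosing that threshold well is the subject of the article."""
    return "exon" if period3_power(seq) > threshold else "intron"

print(classify("ATGGCC" * 30, threshold=5.0))   # illustrative threshold only
```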

  19. Olfactory Threshold of Chlorine in Oxygen.

    DTIC Science & Technology

    1977-09-01

    The odor threshold of chlorine in oxygen was determined. Measurements were conducted in an altitude chamber, which provided an odor-free and noise-free background. Human male volunteers, with no previous olfactory acuity testing experience, served as panelists. Threshold values were affected by time intervals between trials and by age differences. The mean threshold value for 11 subjects was 0.08 ppm, obtained by positive responses to the lowest detectable level of chlorine in oxygen 50% of the time. (Author)

  20. Large exchange-dominated domain wall velocities in antiferromagnetically coupled nanowires

    NASA Astrophysics Data System (ADS)

    Kuteifan, Majd; Lubarda, M. V.; Fu, S.; Chang, R.; Escobar, M. A.; Mangin, S.; Fullerton, E. E.; Lomakin, V.

    2016-04-01

    Magnetic nanowires supporting field- and current-driven domain wall motion are envisioned for methods of information storage and processing. A major obstacle for their practical use is the domain-wall velocity, which is traditionally limited for low fields and currents due to the Walker breakdown occurring when the driving component reaches a critical threshold value. We show through numerical and analytical modeling that the Walker breakdown limit can be extended or completely eliminated in antiferromagnetically coupled magnetic nanowires. These coupled nanowires allow for large domain-wall velocities driven by field and/or current as compared to conventional nanowires.

  1. Temperature-dependent change in the nature of glass fracture under electron bombardment

    NASA Astrophysics Data System (ADS)

    Kravchenko, A. A.

    1991-04-01

    We report the experimental discovery of a temperature-dependent change in the nature of glass fracture under low-energy (<10 keV) electron bombardment. This is shown to depend on the transition from the thermal-shock to the thermal-fluctuation mechanism of fracture at the limiting temperature T1 = (Tg - 150) °C. The high-temperature cleavage fracture of K8 and TF1 glasses was studied and the threshold value of the critical power initiating cleavage fracture was determined (for the glasses studied, Θthr = 50-70 W·sec·cm⁻²).

  2. Optimum reduction of the dynamo threshold by a ferromagnetic layer located in the flow.

    PubMed

    Herault, J; Pétrélis, F

    2014-09-01

    We consider a fluid dynamo model generated by the flow on both sides of a moving layer. The magnetic permeability of the layer is larger than that of the flow. We show that there exists an optimum value of magnetic permeability for which the critical magnetic Reynolds number for dynamo onset is smaller than for a nonmagnetic material and also smaller than for a layer of infinite magnetic permeability. We present a mechanism that provides an explanation for recent experimental results. A similar effect occurs when the electrical conductivity of the layer is large.

  3. Switching dynamics of TaOx-based threshold switching devices

    NASA Astrophysics Data System (ADS)

    Goodwill, Jonathan M.; Gala, Darshil K.; Bain, James A.; Skowronski, Marek

    2018-03-01

    Bi-stable volatile switching devices are being used as access devices in solid-state memory arrays and as the active part of compact oscillators. Such structures exhibit two stable states of resistance and switch between them at a critical value of voltage or current. A typical resistance transient under a constant amplitude voltage pulse starts with a slow decrease followed by a rapid drop and leveling off at a low steady state value. This behavior prompted the interpretation of initial delay and fast transition as due to two different processes. Here, we show that the entire transient including incubation time, transition time, and the final resistance values in TaOx-based switching can be explained by one process, namely, Joule heating with the rapid transition due to the thermal runaway. The time, which is required for the device in the conducting state to relax back to the stable high resistance one, is also consistent with the proposed mechanism.
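
    A lumped-element caricature of the Joule-heating interpretation above is sketched below: a thermally activated resistance in a single thermal balance C·dT/dt = V²/R(T) - (T - T_amb)/R_th, integrated under a constant-voltage pulse. All parameter values are invented, chosen only so that a low voltage settles near ambient while a higher voltage shows an incubation-like warm-up followed by thermal runaway; this is not the device model fitted in the paper.

```python
import numpy as np

def resistance(T, R_off=1e4, R_on=200.0, T_ref=300.0, Ea_over_k=4000.0):
    """Thermally activated resistance with a floor value (placeholder numbers)."""
    return R_on + (R_off - R_on) * np.exp(Ea_over_k * (1.0 / T - 1.0 / T_ref))

def voltage_pulse(V, t_max=2e-5, dt=1e-9, C=1e-10, R_th=2e4, T_amb=300.0):
    """Explicit-Euler integration of C*dT/dt = V**2/R(T) - (T - T_amb)/R_th:
    runaway occurs once Joule heating outpaces conduction to the ambient."""
    T = T_amb
    trace = []
    for _ in range(int(t_max / dt)):
        T += dt * (V**2 / resistance(T) - (T - T_amb) / R_th) / C
        trace.append(T)
    return np.array(trace)

for V in (2.0, 4.0):          # below and above the runaway voltage for these numbers
    print(V, round(float(voltage_pulse(V)[-1]), 1))
```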

  4. Real-Time Mapping alert system; characteristics and capabilities

    USGS Publications Warehouse

    Torres, L.A.; Lambert, S.C.; Liebermann, T.D.

    1995-01-01

    The U.S. Geological Survey has an extensive hydrologic network that records and transmits precipitation, stage, discharge, and other water-related data on a real-time basis to an automated data processing system. Data values are recorded on electronic data collection platforms at field sampling sites. These values are transmitted by means of orbiting satellites to receiving ground stations, and by way of telecommunication lines to a U.S. Geological Survey office where they are processed on a computer system. Data that exceed predefined thresholds are identified as alert values. The current alert status at monitoring sites within a state or region is of critical importance during floods, hurricanes, and other extreme hydrologic events. This report describes the characteristics and capabilities of a series of computer programs for real-time mapping of hydrologic data. The software provides interactive graphics display and query of hydrologic information from the network in a real-time, map-based, menu-driven environment.
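
    As a simple illustration of the threshold-flagging step described above (not the USGS software itself), the snippet below marks incoming real-time values that exceed site-specific alert thresholds; the site names, parameters, and threshold values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    site: str
    parameter: str   # e.g. "stage" or "discharge"
    value: float

# Hypothetical per-site alert thresholds (units follow the parameter).
ALERT_THRESHOLDS = {
    ("SITE_A", "stage"): 3.5,
    ("SITE_A", "discharge"): 1200.0,
    ("SITE_B", "stage"): 2.1,
}

def alert_values(readings):
    """Return the readings whose value exceeds the predefined threshold for
    that site and parameter; readings without a threshold are ignored."""
    return [r for r in readings
            if r.value > ALERT_THRESHOLDS.get((r.site, r.parameter), float("inf"))]

incoming = [Reading("SITE_A", "stage", 3.9), Reading("SITE_B", "stage", 1.8)]
for r in alert_values(incoming):
    print(f"ALERT: {r.site} {r.parameter} = {r.value}")
```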

  5. Pseudo-entanglement evaluated in noninertial frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehri-Dehnavi, Hossein, E-mail: mehri@alice.math.kindai.ac.jp; Research Center for Quantum Computing, Kinki University, 3-4-1 Kowakae, Higashi-Osaka, Osaka 577-8502; Mirza, Behrouz, E-mail: b.mirza@cc.iut.ac.ir

    2011-05-15

    Research Highlights: > We study pseudo-entanglement in noninertial frames. > We examine different measures of entanglement and nonclassical correlation for the state. > We find the threshold for entanglement is changed in noninertial frames. > We also describe the behavior of local unitary classes of states in noninertial frames. - Abstract: We study quantum discord, in addition to entanglement, of bipartite pseudo-entanglement in noninertial frames. It is shown that the entanglement degrades from its maximum value in a stationary frame to a minimum value in an infinite accelerating frame. There is a critical region found in which, for particular cases, entanglement of states vanishes for certain accelerations. The quantum discord of pseudo-entanglement decreases by increasing the acceleration. Also, for a physically inaccessible region, entanglement and nonclassical correlation are evaluated and shown to match the corresponding values of the physically accessible region for an infinite acceleration.

  6. Measurement of the temperature-dependent threshold shear-stress of red blood cell aggregation.

    PubMed

    Lim, Hyun-Jung; Nam, Jeong-Hun; Lee, Yong-Jin; Shin, Sehyun

    2009-09-01

    Red blood cell (RBC) aggregation is becoming an important hemorheological parameter, which typically exhibits temperature dependence. Quite recently, a critical shear-stress was proposed as a new dimensional index to represent the aggregative and disaggregative behaviors of RBCs. The present study investigated the effect of temperature on the critical shear-stress that is required to keep RBC aggregates dispersed. The critical shear-stress was measured at various temperatures (4, 10, 20, 30, and 37 degrees C) using transient microfluidic aggregometry. The critical shear-stress significantly increased as the blood temperature lowered, which accorded with the increase in the low-shear blood viscosity with the lowering of the temperature. Furthermore, the critical shear-stress also showed good agreement with the threshold shear-stress, as measured in a rotational Couette flow. These findings assist in rheologically validating the critical shear-stress, as defined by microfluidic aggregometry.

  7. Phase diagram of the ultrafast photoinduced insulator-metal transition in vanadium dioxide

    NASA Astrophysics Data System (ADS)

    Cocker, T. L.; Titova, L. V.; Fourmaux, S.; Holloway, G.; Bandulet, H.-C.; Brassard, D.; Kieffer, J.-C.; El Khakani, M. A.; Hegmann, F. A.

    2012-04-01

    We use time-resolved terahertz spectroscopy to probe the ultrafast dynamics of the insulator-metal phase transition induced by femtosecond laser pulses in a nanogranular vanadium dioxide (VO2) film. Based on the observed thresholds for characteristic transient terahertz dynamics, a phase diagram of critical pump fluence versus temperature for the insulator-metal phase transition in VO2 is established for the first time over a broad range of temperatures down to 17 K. We find that both Mott and Peierls mechanisms are present in the insulating state and that the photoinduced transition is nonthermal. We propose a critical-threshold model for the ultrafast photoinduced transition based on a critical density of electrons and a critical density of coherently excited phonons necessary for the structural transition to the metallic state. As a result, evidence is found at low temperatures for an intermediate metallic state wherein the Mott state is melted but the Peierls distortion remains intact, consistent with recent theoretical predictions. Finally, the observed terahertz conductivity dynamics above the photoinduced transition threshold reveal nucleation and growth of metallic nanodomains over picosecond time scales.

  8. SU-D-9A-02: Relative Effects of Threshold Choice and Spatial Resolution Modeling On SUV and Volume Quantification in F18-FDG PET Imaging of Anal Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, F; Shandong Cancer Hospital and Insititute, Jinan, Shandong; Bowsher, J

    2014-06-01

    Purpose: PET imaging with F18-FDG is utilized for treatment planning, treatment assessment, and prognosis. A region of interest (ROI) encompassing the tumor may be determined on the PET image, often by a threshold T on the PET standard uptake values (SUVs). Several studies have shown prognostic value for relevant ROI properties including maximum SUV value (SUVmax), metabolic tumor volume (MTV), and total glycolytic activity (TGA). The choice of threshold T may affect mean SUV value (SUVmean), MTV, and TGA. Recently spatial resolution modeling (SRM) has been introduced on many PET systems. SRM may also affect these ROI properties. The purpose of this work is to investigate the relative influence of SRM and threshold choice T on SUVmean, MTV, TGA, and SUVmax. Methods: For 9 anal cancer patients, 18F-FDG PET scans were performed prior to treatment. PET images were reconstructed by 2 iterations of Ordered Subsets Expectation Maximization (OSEM), with and without SRM. ROI contours were generated by 5 different SUV threshold values T: 2.5, 3.0, 30%, 40%, and 50% of SUVmax. Paired-samples t tests were used to compare SUVmean, MTV, and TGA (a) for SRM on versus off and (b) between each pair of threshold values T. SUVmax was also compared for SRM on versus off. Results: For almost all (57/60) comparisons of 2 different threshold values, SUVmean, MTV, and TGA showed statistically significant variation. For comparison of SRM on versus off, there were no statistically significant changes in SUVmax and TGA, but there were statistically significant changes in MTV for T=2.5 and T=3.0 and in SUVmean for all T. Conclusion: The near-universal statistical significance of threshold choice T suggests that, regarding harmonization across sites, threshold choice may be a greater concern than choice of SRM. However, broader study is warranted, e.g. other iterations of OSEM should be considered.

  9. Phase transition in the countdown problem

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Luque, Bartolo

    2012-07-01

    We present a combinatorial decision problem, inspired by the celebrated quiz show called Countdown, that involves the computation of a given target number T from a set of k randomly chosen integers along with a set of arithmetic operations. We find that the probability of winning the game evidences a threshold phenomenon that can be understood in terms of an algorithmic phase transition as a function of the set size k. Numerical simulations show that such probability sharply transitions from zero to one at some critical value of the control parameter, hence separating the algorithm's parameter space in different phases. We also find that the system is maximally efficient close to the critical point. We derive analytical expressions that match the numerical results for finite size and permit us to extrapolate the behavior in the thermodynamic limit.
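
    A minimal Monte Carlo sketch of the kind of experiment described above, under simplifying assumptions (numbers combined with +, -, ×, ÷, each number used at most once, floating-point tolerance instead of the show's exact-arithmetic rules); the function names, sampling ranges and trial counts are illustrative choices, not taken from the paper.

```python
import random
from itertools import combinations

OPS = (
    lambda a, b: a + b,
    lambda a, b: a - b,
    lambda a, b: a * b,
    lambda a, b: a / b if b != 0 else None,
)

def reachable(numbers, target, tol=1e-9):
    """True if `target` can be formed from `numbers` with +, -, *, /,
    each number used at most once (floats + tolerance, a simplification)."""
    if any(abs(x - target) < tol for x in numbers):
        return True
    for i, j in combinations(range(len(numbers)), 2):
        rest = [numbers[m] for m in range(len(numbers)) if m not in (i, j)]
        a, b = numbers[i], numbers[j]
        for op in OPS:
            for x, y in ((a, b), (b, a)):
                val = op(x, y)
                if val is not None and reachable(rest + [val], target, tol):
                    return True
    return False

def win_probability(k, trials=30, max_int=100, target_range=(1, 999)):
    """Monte Carlo estimate of the probability that a random target is reachable
    from k random integers (the quantity whose sharp transition in k is studied)."""
    wins = 0
    for _ in range(trials):
        nums = [random.randint(1, max_int) for _ in range(k)]
        wins += reachable(nums, random.randint(*target_range))
    return wins / trials

if __name__ == "__main__":
    # The exhaustive search grows very quickly with k, so keep k small here.
    for k in range(2, 6):
        print(k, win_probability(k))
```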

  10. Assessment of mechanical properties of human head tissues for trauma modelling.

    PubMed

    Lozano-Mínguez, Estívaliz; Palomar, Marta; Infante-García, Diego; Rupérez, María José; Giner, Eugenio

    2018-05-01

    Many discrepancies are found in the literature regarding the damage and constitutive models for head tissues as well as the values of the constants involved in the constitutive equations. Their proper definition is required for consistent numerical model performance when predicting human head behaviour, and hence skull fracture and brain damage. The objective of this research is to perform a critical review of constitutive models and damage indicators describing human head tissue response under impact loading. A 3D finite element human head model has been generated by using computed tomography images, which has been validated through the comparison to experimental data in the literature. The threshold values of the skull and the scalp that lead to fracture have been analysed. We conclude that (1) compact bone properties are critical in skull fracture, (2) the elastic constants of the cerebrospinal fluid affect the intracranial pressure distribution, and (3) the consideration of brain tissue as a nearly incompressible solid with a high (but not complete) water content offers pressure responses consistent with the experimental data. Copyright © 2018 John Wiley & Sons, Ltd.

  11. Econophysics: Two-phase behaviour of financial markets

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Stanley, H. Eugene

    2003-01-01

    Buying and selling in financial markets is driven by demand, which can be quantified by the imbalance in the number of shares transacted by buyers and sellers over a given time interval. Here we analyse the probability distribution of demand, conditioned on its local noise intensity Σ, and discover the surprising existence of a critical threshold, Σc. For Σ < Σc, the most probable value of demand is roughly zero; we interpret this as an equilibrium phase in which neither buying nor selling predominates. For Σ > Σc, two most probable values emerge that are symmetrical around zero demand, corresponding to excess demand and excess supply; we interpret this as an out-of-equilibrium phase in which the market behaviour is mainly buying for half of the time, and mainly selling for the other half.

  12. Thresholds for the cost-effectiveness of interventions: alternative approaches.

    PubMed

    Marseille, Elliot; Larson, Bruce; Kazi, Dhruv S; Kahn, James G; Rosen, Sydney

    2015-02-01

    Many countries use the cost-effectiveness thresholds recommended by the World Health Organization's Choosing Interventions that are Cost-Effective project (WHO-CHOICE) when evaluating health interventions. This project sets the threshold for cost-effectiveness as the cost of the intervention per disability-adjusted life-year (DALY) averted less than three times the country's annual gross domestic product (GDP) per capita. Highly cost-effective interventions are defined as meeting a threshold per DALY averted of once the annual GDP per capita. We argue that reliance on these thresholds reduces the value of cost-effectiveness analyses and makes such analyses too blunt to be useful for most decision-making in the field of public health. Use of these thresholds has little theoretical justification, skirts the difficult but necessary ranking of the relative values of locally-applicable interventions and omits any consideration of what is truly affordable. The WHO-CHOICE thresholds set such a low bar for cost-effectiveness that very few interventions with evidence of efficacy can be ruled out. The thresholds have little value in assessing the trade-offs that decision-makers must confront. We present alternative approaches for applying cost-effectiveness criteria to choices in the allocation of health-care resources.
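
    For concreteness, a small sketch of the GDP-based decision rule that the authors critique; the function name and the example figures are hypothetical, and this is only the classification step, not a substitute for the local ranking the paper argues for.

```python
def who_choice_class(cost_per_daly_averted, gdp_per_capita):
    """Classify an intervention against the GDP-based WHO-CHOICE thresholds:
    below 1x GDP/capita per DALY averted -> highly cost-effective,
    below 3x GDP/capita                  -> cost-effective,
    otherwise                            -> not cost-effective."""
    if cost_per_daly_averted < gdp_per_capita:
        return "highly cost-effective"
    if cost_per_daly_averted < 3 * gdp_per_capita:
        return "cost-effective"
    return "not cost-effective"

# Hypothetical example: an intervention costing $2,400 per DALY averted
# in a country with a GDP per capita of $1,500.
print(who_choice_class(2400, 1500))   # -> cost-effective
```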

  13. NIS/publications

    Science.gov Websites

    Reaction Q-Values and Thresholds: this tool computes reaction Q-values and thresholds, along with uncertainties and correlations, using 30 energy ranges. Simple tables of reaction uncertainties are also available.

  14. Thresholds of Extinction: Simulation Strategies in Environmental Values Education.

    ERIC Educational Resources Information Center

    Glew, Frank

    1990-01-01

    Describes a simulation exercise for campers and an accompanying curriculum unit--"Thresholds of Extinction"--that addresses the issues of endangered species. Uses this context to illustrate steps in the process of values development: awareness, gathering data, resolution (decision making), responsibility (acting on values), and…

  15. Comparison of edge detection techniques for M7 subtype Leukemic cell in terms of noise filters and threshold value

    NASA Astrophysics Data System (ADS)

    Salam, Afifah Salmi Abdul; Isa, Mohd. Nazrin Md.; Ahmad, Muhammad Imran; Che Ismail, Rizalafande

    2017-11-01

    This paper focuses on studying and identifying various threshold values for two commonly used edge detection techniques, Sobel and Canny edge detection. The idea is to determine which values give accurate results in identifying a particular leukemic cell. In addition, evaluating the suitability of edge detectors is essential, as feature extraction of the cell depends greatly on image segmentation (edge detection). An image of the M7 subtype of Acute Myelocytic Leukemia (AML) is chosen because diagnostic studies of this subtype were found to be lacking. Next, to enhance image quality, noise filters are applied; by comparing images with no filter, a median filter and an averaging filter, useful information can be acquired. Threshold values of 0, 0.25 and 0.5 are evaluated for each detector. The investigation found that, without any filter, Canny with a threshold value of 0.5 yields the best result.
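
    A rough Python sketch of the comparison described above, assuming a Sobel detector with the threshold applied to the maximum-normalized gradient magnitude and scipy.ndimage filters as stand-ins for the paper's noise filters; the random array is only a placeholder for an actual blood-smear image, and the paper's Canny implementation is not reproduced here.

```python
import numpy as np
from scipy.ndimage import sobel, median_filter, uniform_filter

def sobel_edges(image, threshold):
    """Sobel edge map with a threshold in [0, 1] applied to the
    gradient magnitude normalized by its maximum."""
    img = np.asarray(image, dtype=float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    m = mag.max()
    if m > 0:
        mag = mag / m
    return mag > threshold

# Placeholder array standing in for a blood-smear image of an M7 AML cell.
image = np.random.rand(128, 128)
variants = {
    "no filter": image,
    "median":    median_filter(image, size=3),
    "average":   uniform_filter(image, size=3),
}
for name, img in variants.items():
    for t in (0.0, 0.25, 0.5):
        print(f"{name:9s} t={t:4.2f}  edge pixels: {int(sobel_edges(img, t).sum())}")
```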

  16. The stability of color discrimination threshold determined using pseudoisochromatic test plates

    NASA Astrophysics Data System (ADS)

    Zutere, B.; Jurasevska Luse, K.; Livzane, A.

    2014-09-01

    Congenital red-green color vision deficiency is one of the most common genetic disorders. A previously printed set of pseudoisochromatic plates (KAMS test, 2012) was created for individual discrimination threshold determination in cases of mild congenital red-green color vision deficiency using neutral colors (colors confused with gray). The diagnosis of color-blind subjects was performed with the Richmond HRR (4th edition, 2002) test and the Oculus HMC anomaloscope, and the examination was then continued using the KAMS test. Four male subjects aged 20 to 24 years participated in the study; all of them were diagnosed with deuteranomaly. Due to the design of the plates, the threshold of every subject in each trial was defined as the plate total color difference value ΔE at which the stimulus was detected 75% of the time, so the just-noticeable difference (jnd) was calculated in CIE LAB DeltaE (ΔE) units. The authors performed repeated discrimination threshold measurements (5 times) for all four subjects under controlled illumination conditions. Psychophysical data were taken by sampling an observer's performance on a psychophysical task at a number of different stimulus saturation levels. Results show that a total color difference value ΔE threshold exists for each individual tested with the KAMS pseudoisochromatic plates; this threshold value does not change significantly in multiple measurements. Deuteranomal threshold values acquired using the greenish plates of the KAMS test are significantly higher than thresholds acquired using the reddish plates. A strong positive correlation exists between the anomaloscope matching range (MR) and deuteranomal thresholds acquired by the KAMS test (R=0.94) and between the error score in the Richmond HRR test and thresholds acquired by the KAMS test (R=0.81).
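
    A minimal sketch of how a 75%-detection threshold could be read off psychometric data of the kind collected with the KAMS plates, assuming simple linear interpolation between measured points; the ΔE values and detection proportions below are hypothetical, and the study's actual fitting procedure may differ.

```python
import numpy as np

def jnd_threshold(delta_e, proportion_detected, criterion=0.75):
    """ΔE at which the plate is detected `criterion` of the time,
    by linear interpolation of the measured psychometric points."""
    order = np.argsort(delta_e)
    x = np.asarray(delta_e, dtype=float)[order]
    y = np.asarray(proportion_detected, dtype=float)[order]
    return float(np.interp(criterion, y, x))

# Hypothetical psychometric data for one observer (ΔE in CIELAB units).
delta_e = [2, 4, 6, 8, 10, 12]
p_detected = [0.30, 0.45, 0.60, 0.75, 0.90, 0.97]
print(jnd_threshold(delta_e, p_detected))   # -> 8.0
```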

  17. Metabolic Tumor Volume and Total Lesion Glycolysis in Oropharyngeal Cancer Treated With Definitive Radiotherapy: Which Threshold Is the Best Predictor of Local Control?

    PubMed

    Castelli, Joël; Depeursinge, Adrien; de Bari, Berardino; Devillers, Anne; de Crevoisier, Renaud; Bourhis, Jean; Prior, John O

    2017-06-01

    In the context of oropharyngeal cancer treated with definitive radiotherapy, the aim of this retrospective study was to identify the best threshold value to compute metabolic tumor volume (MTV) and/or total lesion glycolysis to predict local-regional control (LRC) and disease-free survival. One hundred twenty patients with a locally advanced oropharyngeal cancer from 2 different institutions treated with definitive radiotherapy underwent FDG PET/CT before treatment. Various MTVs and total lesion glycolysis were defined based on 2 segmentation methods: (i) an absolute threshold of SUV (0-20 g/mL) or (ii) a relative threshold for SUVmax (0%-100%). The parameters' predictive capabilities for disease-free survival and LRC were assessed using the Harrell C-index and Cox regression model. Relative thresholds between 40% and 68% and absolute threshold between 5.5 and 7 had a similar predictive value for LRC (C-index = 0.65 and 0.64, respectively). Metabolic tumor volume had a higher predictive value than gross tumor volume (C-index = 0.61) and SUVmax (C-index = 0.54). Metabolic tumor volume computed with a relative threshold of 51% of SUVmax was the best predictor of disease-free survival (hazard ratio, 1.23 [per 10 mL], P = 0.009) and LRC (hazard ratio: 1.22 [per 10 mL], P = 0.02). The use of different thresholds within a reasonable range (between 5.5 and 7 for an absolute threshold and between 40% and 68% for a relative threshold) seems to have no major impact on the predictive value of MTV. This parameter may be used to identify patient with a high risk of recurrence and who may benefit from treatment intensification.
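
    An illustrative computation of MTV, SUVmean and total lesion glycolysis from a voxelized SUV array under both an absolute and a relative (% of SUVmax) threshold, the two segmentation families compared above; the synthetic VOI and voxel size are placeholders, and a plain cutoff is used rather than the clinical segmentation workflow of the study.

```python
import numpy as np

def mtv_tlg(suv, voxel_volume_ml, threshold, relative=False):
    """MTV (mL), SUVmean and TLG for a voxelized VOI under a plain SUV cutoff.
    `threshold` is an absolute SUV, or a fraction of SUVmax if relative=True."""
    suv = np.asarray(suv, dtype=float)
    cutoff = threshold * suv.max() if relative else threshold
    mask = suv >= cutoff
    mtv = mask.sum() * voxel_volume_ml
    suv_mean = suv[mask].mean() if mask.any() else 0.0
    return mtv, suv_mean, mtv * suv_mean

# Synthetic VOI and an assumed 4x4x4 mm voxel (0.064 mL), for illustration only.
rng = np.random.default_rng(0)
voi = rng.gamma(shape=2.0, scale=2.0, size=(20, 20, 10))
for label, (thr, rel) in {"SUV >= 2.5": (2.5, False), "40% SUVmax": (0.40, True)}.items():
    mtv, suv_mean, tlg = mtv_tlg(voi, 0.064, thr, rel)
    print(f"{label:11s} MTV={mtv:7.2f} mL  SUVmean={suv_mean:5.2f}  TLG={tlg:8.2f}")
```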

  18. Determination of the interfacial rheological properties of a PLA encapsulated contrast agent using in vitro attenuation and scattering

    PubMed Central

    Paul, Shirshendu; Russakow, Daniel; Rodgers, Tyler; Sarkar, Kausik; Cochran, Michael; Wheatley, Margaret

    2013-01-01

    The stabilizing encapsulation of a microbubble based ultrasound contrast agent (UCA) critically affects its acoustic properties. Polymers, which behave differently from commonly used materials—e.g. lipids or proteins—for the monolayer encapsulation, hold potential for better stability and control over encapsulation properties. Air-filled microbubbles coated with Poly (D, L-lactide) (PLA) are characterized here using in vitro acoustic experiments and several models of encapsulation. The interfacial rheological properties of the encapsulation are determined according to each of these models using attenuation of ultrasound through a suspension of these microbubbles. Then the model predictions are compared with scattered nonlinear—sub- and second harmonic—responses. For this microbubble population (average diameter 1.9 μm), the peak in attenuation measurement indicates a weighted average resonance frequency of 2.5–3 MHz, which, in contrast to other encapsulated microbubbles, is lower than the resonance frequency of a free bubble of similar size (diameter 1.9 μm). This apparently contradictory result stems from the extremely low surface dilatational elasticity (around 0.01–0.07 N/m) and the reduced surface tension of the PLA encapsulation as well as the polydispersity of the bubble population. All models considered here are shown to behave similarly even in the nonlinear regime because of the low value of the surface dilatational elasticity. Pressure dependent scattering measurements at two different excitation frequencies (2.25 and 3 MHz) show strongly non-linear behavior with 25–30 dB and 5–20 dB enhancements in fundamental and second-harmonic responses respectively for a concentration of 1.33 μg/mL of suspension. Subharmonic responses are registered above a relatively low generation threshold of 100–150 kPa with up to 20 dB enhancement beyond that pressure. Numerical predictions from all models show good agreement with the experimentally measured fundamental response, but not with the second harmonic response. The characteristic features of subharmonic response and the steady response beyond the threshold are matched well by model predictions. However, prediction of the threshold value depends on property values and the size distribution. The variation in size distribution from sample to sample leads to variation in estimated encapsulation property values—the lowest estimated value of surface dilatational viscosity better predicts the subharmonic threshold. PMID:23643050

  19. Determination of optimum threshold values for EMG time domain features; a multi-dataset investigation

    NASA Astrophysics Data System (ADS)

    Nlandu Kamavuako, Ernest; Scheme, Erik Justin; Englehart, Kevin Brian

    2016-08-01

    Objective. For over two decades, Hudgins' set of time domain features has been extensively applied for classification of hand motions. The calculation of slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. Approach. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected during two separate days with two days in between from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of data during rest. For each day, we quantified CE for R = 0 (CEr0) and minimum error (CEbest). Moreover, a cross day threshold validation was applied where, for example, CE of day two (CEodt) is computed based on the optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data of the other. Main results. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018), and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when using the threshold values optimized per subject from day one and day two respectively, on the cross-days classification, the performance decreased. Significance. We have demonstrated that the threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data and subject driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for practical use of pattern recognition based myoelectric control.
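
    A sketch of threshold-dependent zero-crossing and slope-sign-change counts in the spirit of Hudgins' features, with the threshold set to R times the resting root mean square as in the study's sweep; the EMG signal below is synthetic, and the exact feature definitions used in the paper may differ in detail.

```python
import numpy as np

def zero_crossings(x, thr):
    """Zero-crossing count with an amplitude deadband `thr` (Hudgins-style)."""
    x = np.asarray(x, dtype=float)
    return int(np.sum((x[:-1] * x[1:] < 0) & (np.abs(x[:-1] - x[1:]) >= thr)))

def slope_sign_changes(x, thr):
    """Slope-sign-change count with deadband `thr` (Hudgins-style)."""
    x = np.asarray(x, dtype=float)
    d1 = x[1:-1] - x[:-2]
    d2 = x[1:-1] - x[2:]
    return int(np.sum((d1 * d2 > 0) & ((np.abs(d1) >= thr) | (np.abs(d2) >= thr))))

# Threshold = R times the RMS of a resting baseline, mirroring the R = 0:0.01:4 sweep.
rng = np.random.default_rng(1)
rest = 0.01 * rng.standard_normal(2000)                      # baseline noise
emg = 0.05 * rng.standard_normal(2000) * np.sin(np.linspace(0, 40, 2000))
rest_rms = np.sqrt(np.mean(rest ** 2))
for R in (0.0, 1.0, 2.0, 4.0):
    thr = R * rest_rms
    print(f"R={R:3.1f}  ZC={zero_crossings(emg, thr):4d}  SSC={slope_sign_changes(emg, thr):4d}")
```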

  20. Determination of optimum threshold values for EMG time domain features; a multi-dataset investigation.

    PubMed

    Kamavuako, Ernest Nlandu; Scheme, Erik Justin; Englehart, Kevin Brian

    2016-08-01

    For over two decades, Hudgins' set of time domain features has been extensively applied for classification of hand motions. The calculation of slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected during two separate days with two days in between from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of data during rest. For each day, we quantified CE for R = 0 (CEr0) and minimum error (CEbest). Moreover, a cross day threshold validation was applied where, for example, CE of day two (CEodt) is computed based on the optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data of the other. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018), and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when using the threshold values optimized per subject from day one and day two respectively, on the cross-days classification, the performance decreased. We have demonstrated that the threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data and subject driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for practical use of pattern recognition based myoelectric control.

  1. Identifying optimal threshold statistics for elimination of hookworm using a stochastic simulation model.

    PubMed

    Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M

    2017-06-30

    There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about expected trial outcome and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of the positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. Optimal post-treatment prevalence threshold value for determining elimination is in the range 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. Number of communities in a cluster is a key indicator of probability of elimination and PPV. Extending the time, post-study endpoint, at which the threshold statistic is measured improves PPV value in discriminating between eliminating clusters and those that bounce back. The probability of elimination and PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities for elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.

  2. Forecasting Ionospheric Real-time Scintillation Tool (FIRST)

    NASA Astrophysics Data System (ADS)

    Anderson, D. N.; Redmon, R.; Bullett, T.; Caton, R. G.; Retterer, J. M.

    2009-05-01

    It is well-known that the generation of equatorial, F-region plasma density irregularities via the Generalized Rayleigh-Taylor instability mechanism is critically dependent on the magnitude of the pre-reversal enhancement (PRE) in upward ExB drift velocity after sunset. These plasma density bubbles that are generated after sunset lead to the scintillation of trans-ionospheric radio wave signals that pass through these bubbles, commonly referred to as scintillation activity. Communication and Navigation systems can be severely disrupted by these plasma density irregularities. A measure of scintillation activity is given by the S4 Index, and a network of Air Force, ground-based UHF and L-band receivers measuring the S4 Index is called the SCIntillation Network Decision Aid (SCINDA) network. After sunset, the height-rise with time of the bottom-side of the F-layer reflects the magnitude of the upward ExB drift velocity. The value of the ionospheric parameter h'F (the virtual height of the bottom-side F-layer) at 1930 LT reflects the integrated ExB drift effect on lifting the F-layer to an altitude where the Rayleigh-Taylor (R-T) instability mechanism becomes important. It is found that there exists a threshold in the h'F value at 1930 LT for the onset of scintillation activity, as measured by the S4 Index, in the Peruvian longitude sector. This h'F threshold value is found to decrease with decreasing F10.7 cm fluxes in a linear manner (R = 0.99). To examine this relationship theoretically, we incorporate a suite of first-principle models of the ambient ionosphere (PBMOD) developed at the Air Force Research Lab (AFRL) to investigate R-T growth rates and threshold h'F (1930 LT) values as a function of solar cycle activity. In addition, this paper describes a technique for automatically forecasting, in real-time, the occurrence or non-occurrence of scintillation activity that relies on real-time data from a ground-based ionospheric sounder at or near the geomagnetic equator. We describe how FIRST has been developed into a real-time capability for automatically forecasting scintillation activity that is available on Google Earth to all interested parties.
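
    A toy linear fit of the kind of h'F(1930 LT) threshold versus F10.7 relation reported above; the flux and threshold values below are invented for illustration only and are not the study's data.

```python
import numpy as np

# Invented (F10.7 flux, h'F threshold at 1930 LT) pairs; the study reports the
# threshold decreasing linearly with decreasing F10.7 (R = 0.99).
f107 = np.array([70.0, 90.0, 110.0, 130.0, 150.0])
hpf_threshold_km = np.array([255.0, 270.0, 285.0, 300.0, 315.0])

slope, intercept = np.polyfit(f107, hpf_threshold_km, 1)
r = np.corrcoef(f107, hpf_threshold_km)[0, 1]
print(f"h'F_thr ~ {slope:.2f} * F10.7 + {intercept:.1f} km   (R = {r:.2f})")
```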

  3. Ozone dose-response relationships for spring oilseed rape and broccoli

    NASA Astrophysics Data System (ADS)

    De Bock, Maarten; Op de Beeck, Maarten; De Temmerman, Ludwig; Guisez, Yves; Ceulemans, Reinhart; Vandermeiren, Karine

    2011-03-01

    Tropospheric ozone is an important air pollutant with known detrimental effects for several crops. Ozone effects on seed yield, oil percentage, oil yield and 1000-seed weight were examined for spring oilseed rape (Brassica napus cv. Ability). For broccoli (Brassica oleracea L. cv. Italica cv. Monaco) the effects on fresh marketable weight and total dry weight were studied. Current ozone levels were compared with an increase of 20 and 40 ppb during 8 h per day, over the entire growing season. Oilseed rape seed yield was negatively correlated with ozone dose indices calculated from emergence until harvest. This resulted in an R² of 0.24 and 0.26 (p < 0.001) for the accumulated hourly O₃ exposure over a threshold of 40 ppb (AOT40) and the phytotoxic ozone dose above a threshold of 6 nmol m⁻² s⁻¹ (POD6), respectively. Estimated critical levels, above which 5% yield reduction is expected, were 3.7 ppm·h and 4.4 mmol m⁻², respectively. Our results also confirm that a threshold value of 6 nmol s⁻¹ m⁻² projected leaf area, as recommended for agricultural crops (UNECE, Mills, 2004), can indeed be applied for spring oilseed rape. The reduction of oilseed rape yield showed the highest correlation with the ozone uptake during the vegetative growth stage: when only the first 47 days after emergence were used to calculate POD6, R² values increased up to 0.476, or even 0.545 when the first 23 days were excluded. The highest ozone treatments, corresponding to the future ambient level by 2100 (IPCC, Meehl et al., 2007), led to a reduction of approximately 30% in oilseed rape seed yield in comparison to the current ozone concentrations. Oil percentage was also significantly reduced in response to ozone (p < 0.001). As a consequence, oil yield was even more severely affected by elevated ozone exposure compared to seed yield: critical levels for oil yield dropped to 3.2 ppm·h and 3.9 mmol m⁻². For broccoli the applied ozone doses had no effect on yield.
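
    A simple sketch of the AOT40 index used above (accumulated hourly exceedance over 40 ppb, reported in ppm·h); the synthetic ozone series and the 08:00-20:00 daylight window are assumptions, and the flux-based POD6 index would additionally require modelled stomatal conductance, which is not attempted here.

```python
import numpy as np

def aot40(hourly_o3_ppb, daylight_mask=None):
    """AOT40 in ppm·h: accumulated hourly exceedance above 40 ppb,
    restricted to daylight hours when a mask is given."""
    o3 = np.asarray(hourly_o3_ppb, dtype=float)
    excess = np.clip(o3 - 40.0, 0.0, None)
    if daylight_mask is not None:
        excess = excess[np.asarray(daylight_mask, dtype=bool)]
    return excess.sum() / 1000.0          # ppb·h -> ppm·h

# Hypothetical 120-day growing season of hourly ozone, daylight taken as 08:00-20:00.
rng = np.random.default_rng(2)
hours = 120 * 24
o3 = 35.0 + 15.0 * rng.random(hours)
daylight = (np.arange(hours) % 24 >= 8) & (np.arange(hours) % 24 < 20)
print(f"AOT40 = {aot40(o3, daylight):.2f} ppm·h (compare the ~3.7 ppm·h critical level)")
```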

  4. Ignition criterion for heterogeneous energetic materials based on hotspot size-temperature threshold

    NASA Astrophysics Data System (ADS)

    Barua, A.; Kim, S.; Horie, Y.; Zhou, M.

    2013-02-01

    A criterion for the ignition of granular explosives (GXs) and polymer-bonded explosives (PBXs) under shock and non-shock loading is developed. The formulation is based on integration of a quantification of the distributions of the sizes and locations of hotspots in loading events using a cohesive finite element method (CFEM) developed recently and the characterization by Tarver et al. [C. M. Tarver et al., "Critical conditions for impact- and shock-induced hot spots in solid explosives," J. Phys. Chem. 100, 5794-5799 (1996)] of the critical size-temperature threshold of hotspots required for chemical ignition of solid explosives. The criterion, along with the CFEM capability to quantify the thermal-mechanical behavior of GXs and PBXs, allows the critical impact velocity for ignition, time to ignition, and critical input energy at ignition to be determined as functions of material composition, microstructure, and loading conditions. The applicability of the relation between the critical input energy (E) and impact velocity of James [H. R. James, "An extension to the critical energy criterion used to predict shock initiation thresholds," Propellants, Explos., Pyrotech. 21, 8-13 (1996)] for shock loading is examined, leading to a modified interpretation, which is sensitive to microstructure and loading condition. As an application, numerical studies are undertaken to evaluate the ignition threshold of granular high melting point eXplosive, octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) and HMX/Estane PBX under loading with impact velocities up to 350 m s⁻¹ and strain rates up to 10⁵ s⁻¹. Results show that, for the GX, the time to criticality (tc) is strongly influenced by initial porosity, but is insensitive to grain size. Analyses also lead to a quantification of the differences between the responses of the GXs and PBXs in terms of critical impact velocity for ignition, time to ignition, and critical input energy at ignition. Since the framework permits explicit tracking of the influences of microstructure, loading, and mechanical constraints, the calculations also show the effects of stress wave reflection and confinement condition on the ignition behaviors of GXs and PBXs.

  5. Defining operating rules for mitigation of drought effects on water supply systems

    NASA Astrophysics Data System (ADS)

    Rossi, G.; Caporali, E.; Garrote, L.; Federici, G. V.

    2012-04-01

    Reservoirs play a pivotal role in the regulation and management of water supply systems, especially during drought periods. Optimization of reservoir releases, tied to drought mitigation rules, is particularly required. The hydrologic state of the system is evaluated by defining threshold values expressed in probabilistic terms. Risk deficit curves are used to reduce the ensemble of possible rules for simulation. Threshold values can be linked to specific actions in an operational context at different levels of severity, i.e. normal, pre-alert, alert and emergency scenarios. A simplified model of the water resources system is built to evaluate the threshold values and the management rules. The threshold values are defined considering the probability of satisfying a given fraction of the demand in a certain time horizon, and are validated with a long-term simulation that takes into account the characteristics of the evaluated system. The threshold levels determine curves that define reservoir releases as a function of the existing storage volume. A demand reduction is related to each threshold level. The rules to manage the system in drought conditions, the threshold levels and the reductions are optimized using long-term simulations with different hypothesized states of the system. Synthetic sequences of flows with the same statistical properties as the historical ones are produced to evaluate the system behaviour. The performance of different values of reduction and different threshold curves is evaluated using different objective functions and performance indices. The methodology is applied to the urban area Firenze-Prato-Pistoia in central Tuscany, Central Italy. The considered demand centres are Firenze and Bagno a Ripoli, which, according to the 2001 ISTAT census, have a total of 395,000 inhabitants.
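
    A minimal illustration of threshold-based drought operating rules of the type described above, mapping the storage state to an alert level and a demand reduction; the storage fractions and reduction percentages are placeholders, whereas the paper derives its thresholds from probabilistic supply-reliability criteria and long-term simulation.

```python
def drought_state(storage_fraction, thresholds=(0.6, 0.4, 0.2)):
    """Map current storage (fraction of capacity) to an operating state and a
    demand reduction. The fractions and reductions here are illustrative only."""
    pre_alert, alert, emergency = thresholds
    if storage_fraction > pre_alert:
        return "normal", 0.00
    if storage_fraction > alert:
        return "pre-alert", 0.10
    if storage_fraction > emergency:
        return "alert", 0.25
    return "emergency", 0.50

for s in (0.75, 0.50, 0.30, 0.10):
    state, cut = drought_state(s)
    print(f"storage={s:4.2f} -> {state:9s} demand reduction={cut:.0%}")
```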

  6. Absorption spectrum of a two-level atom in a bad cavity with injected squeezed vacuum

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Swain, S.

    1996-02-01

    We study the absorption spectrum of a coherently driven two-level atom interacting with a resonant cavity mode which is coupled to a broadband squeezed vacuum through its input-output mirror in the bad cavity limit. We study the modification of the two-photon correlation strength of the injected squeezed vacuum inside the cavity, and show that the equations describing probe absorption in the cavity environment are formally identical to those in free space, but with modified parameters describing the squeezed vacuum. The two-photon correlations induced by the squeezed vacuum are always weaker than in free space. We pay particular attention to the spectral behaviour at line centre in the region of intermediate-strength driving intensities, where anomalous spectral features such as hole-burning and dispersive profiles are displayed. These unusual spectral features are very sensitive to the squeezing phase and the Rabi frequency of the driving field. We also derive the threshold value of the Rabi frequency which gives rise to the transparency of the probe beam at the driving frequency. When the Rabi frequency is less than the threshold value, the probe beam is absorbed, whilst the probe beam is amplified (without population inversion under certain conditions) when the Rabi frequency is larger than this threshold. The anomalous spectral features all take place in the vicinity of the critical point dividing the different dynamical regimes, probe absorption and amplification, of the atomic radiation. The physical origin of the strong amplification without population inversion, and the feasibility of observing it, are discussed.

  7. Effects of a Novel Acoustic Transmitter on Swimming Performance and Predator Avoidance of Juvenile Chinook Salmon: Determination of a Size Threshold

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Ricardo W.; Ashton, Neil K.; Brown, Richard S.

    Telemetry studies are used worldwide to investigate the behavior and migration of fishes. The miniaturization of acoustic transmitters enables researchers to tag smaller fish, such as the juvenile life stages of salmon, thus representing a greater proportion of the population of interest. The development of an injectable acoustic transmitter has led to research determining the least invasive and quickest method of tag implantation. Swimming performance and predator avoidance were examined. To quantify critical swimming speed (Ucrit; an index of prolonged swimming performance) and predator avoidance for juvenile Chinook salmon (Oncorhynchus tshawytscha), fish were split into three groups: (1) fish implanted with a dummy injectable acoustic transmitter (IAT treatment), (2) fish implanted with a dummy injectable acoustic transmitter and passive integrated transponder (PIT) tag (IAT+PIT treatment), and (3) an untagged control group. The Ucrits and predator avoidance capability of tagged fish were compared with untagged fish to determine if carrying an IAT adversely affected swimming performance or predator avoidance. Fish implanted with only an IAT had lower Ucrit values than untagged fish and a size threshold at 79 mm fork length was found. Conversely, Ucrit values for fish implanted with an IAT+PIT were not significantly different from untagged controls and no size threshold was found. Predator avoidance testing showed no significant difference for fish implanted with an IAT compared to untagged individuals, nor was there a significant difference for IAT+PIT fish compared to untagged fish.

  8. Critical Mutation Rate Has an Exponential Dependence on Population Size in Haploid and Diploid Populations

    PubMed Central

    Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.

    2013-01-01

    Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
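
    A sketch of fitting a saturating-exponential form to critical-mutation-rate estimates as a function of population size, the relationship the abstract describes; both the functional form and the data points below are assumptions for illustration and are not the authors' values.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(n, u_inf, a, k):
    """Assumed form u_c(N) = u_inf - a*exp(-k*N): the critical mutation rate
    rises with population size N towards an asymptote u_inf."""
    return u_inf - a * np.exp(-k * n)

# Hypothetical (population size, critical mutation rate) pairs standing in for
# simulation output; they are not values from the paper.
N = np.array([10, 25, 50, 100, 250, 500, 1000], dtype=float)
u_c = np.array([0.02, 0.05, 0.08, 0.11, 0.13, 0.14, 0.145])

params, _ = curve_fit(saturating_exp, N, u_c, p0=(0.15, 0.13, 0.01))
u_inf, a, k = params
print(f"u_inf={u_inf:.3f}  a={a:.3f}  k={k:.4f}")
```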

  9. A new threshold of apparent diffusion coefficient values in white matter after successful tissue plasminogen activator treatment for acute brain ischemia.

    PubMed

    Sato, Atsushi; Shimizu, Yusaku; Koyama, Junichi; Hongo, Kazuhiro

    2017-06-01

    Tissue plasminogen activator (tPA) is effective for the treatment of acute brain ischemia, but may trigger fatal brain edema or hemorrhage if the brain ischemia results in a large infarct. Herein, we attempted to predict the extent of infarcts by determining the optimal threshold of ADC values on DWI that predictively distinguishes between infarct and reversible areas, and by reconstructing color-coded images based on this threshold. The study subjects consisted of 36 patients with acute brain ischemia in whom MRA had confirmed reopening of the occluded arteries in a short time (mean: 99 min) after tPA treatment. We measured the apparent diffusion coefficient (ADC) values in several small regions of interest over the white matter within high-intensity areas on the initial diffusion weighted image (DWI); then, by comparing the findings to the follow-up images, we obtained the optimal threshold of ADC values using receiver-operating characteristic analysis. The threshold obtained (583 × 10⁻⁶ mm²/s) was lower than those previously reported; this threshold could distinguish between infarct and reversible areas with considerable accuracy (sensitivity: 0.87, specificity: 0.94). The threshold obtained and the reconstructed images were predictive of the final radiological result of tPA treatment, and this threshold may be helpful in determining the appropriate management of patients with acute brain ischemia. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
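
    A small sketch of choosing an ADC cutoff from labelled regions of interest by a ROC-style search (here maximizing Youden's J); the synthetic ADC values are illustrative only, and the study's actual receiver-operating characteristic procedure may differ in its optimality criterion.

```python
import numpy as np

def best_adc_cutoff(adc_values, became_infarct):
    """ADC cutoff maximizing Youden's J = sensitivity + specificity - 1,
    calling 'predicted infarct' when ADC is at or below the cutoff."""
    adc = np.asarray(adc_values, dtype=float)
    y = np.asarray(became_infarct, dtype=bool)
    best = (np.nan, -np.inf, 0.0, 0.0)
    for cut in np.unique(adc):
        pred = adc <= cut
        sens = np.mean(pred[y])
        spec = np.mean(~pred[~y])
        j = sens + spec - 1.0
        if j > best[1]:
            best = (cut, j, sens, spec)
    return best

# Synthetic ROIs (units of 1e-6 mm^2/s): infarcted tissue drawn with lower ADC.
rng = np.random.default_rng(3)
adc = np.concatenate([rng.normal(520, 60, 80), rng.normal(700, 70, 120)])
label = np.concatenate([np.ones(80, dtype=bool), np.zeros(120, dtype=bool)])
cut, j, sens, spec = best_adc_cutoff(adc, label)
print(f"optimal cutoff ~ {cut:.0f}e-6 mm^2/s  sensitivity={sens:.2f}  specificity={spec:.2f}")
```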

  10. Assessing the nutrient intake of a low-carbohydrate, high-fat (LCHF) diet: a hypothetical case study design

    PubMed Central

    Zinn, Caryn; Rush, Amy; Johnson, Rebecca

    2018-01-01

    Objective: The low-carbohydrate, high-fat (LCHF) diet is becoming increasingly employed in clinical dietetic practice as a means to manage many health-related conditions. Yet, it continues to remain contentious in nutrition circles due to a belief that the diet is devoid of nutrients and concern around its saturated fat content. This work aimed to assess the micronutrient intake of the LCHF diet under two conditions of saturated fat thresholds. Design: In this descriptive study, two LCHF meal plans were designed for two hypothetical cases representing the average Australian male and female weight-stable adult. National documented heights, a body mass index of 22.5 to establish weight and a 1.6 activity factor were used to estimate total energy intake using the Schofield equation. Carbohydrate was limited to <130 g, protein was set at 15%–25% of total energy and fat supplied the remaining calories. One version of the diet aligned with the national saturated fat guideline threshold of <10% of total energy and the other included saturated fat ad libitum. Primary outcomes: The primary outcomes included all micronutrients, which were assessed using FoodWorks dietary analysis software against national Australian/New Zealand nutrient reference value (NRV) thresholds. Results: All of the meal plans exceeded the minimum NRV thresholds, apart from iron in the female meal plans, which achieved 86%–98% of the threshold. Saturated fat intake was logistically unable to be reduced below the 10% threshold for the male plan but exceeded the threshold by 2 g (0.6%). Conclusion: Despite macronutrient proportions not aligning with current national dietary guidelines, a well-planned LCHF meal plan can be considered micronutrient replete. This is an important finding for health professionals, consumers and critics of LCHF nutrition, as it dispels the myth that these diets are suboptimal in their micronutrient supply. As with any diet, for optimal nutrient achievement, meals need to be well formulated. PMID:29439004

  11. 'Outbreak Gold Standard' selection to provide optimized threshold for infectious diseases early-alert based on China Infectious Disease Automated-alert and Response System.

    PubMed

    Wang, Rui-Ping; Jiang, Yong-Gen; Zhao, Gen-Ming; Guo, Xiao-Qin; Michael, Engelgau

    2017-12-01

    The China Infectious Disease Automated-alert and Response System (CIDARS) was successfully implemented and became operational nationwide in 2008. The CIDARS plays an important role in and has been integrated into the routine outbreak monitoring efforts of the Center for Disease Control (CDC) at all levels in China. In the CIDARS, thresholds were determined in the early stage using the "Mean+2SD" method, which has limitations. This study compared the performance of optimized thresholds defined using the "Mean+2SD" method to the performance of 5 novel algorithms in order to select an optimal "Outbreak Gold Standard (OGS)" and corresponding thresholds for outbreak detection. Data for infectious disease were organized by calendar week and year. The "Mean+2SD", C1, C2, moving average (MA), seasonal model (SM), and cumulative sum (CUSUM) algorithms were applied. Outbreak signals for the predicted value (Px) were calculated using a percentile-based moving window. When the outbreak signals generated by an algorithm were in line with a Px-generated outbreak signal for each week, this Px was then defined as the optimized threshold for that algorithm. In this study, six infectious diseases were selected and classified into TYPE A (chickenpox and mumps), TYPE B (influenza and rubella) and TYPE C [hand foot and mouth disease (HFMD) and scarlet fever]. Optimized thresholds for chickenpox (P55), mumps (P50), influenza (P40, P55, and P75), rubella (P45 and P75), HFMD (P65 and P70), and scarlet fever (P75 and P80) were identified. The C1, C2, CUSUM, SM, and MA algorithms were appropriate for TYPE A. All 6 algorithms were appropriate for TYPE B. The C1 and CUSUM algorithms were appropriate for TYPE C. It is critical to incorporate more flexible algorithms as OGS into the CIDARS and to identify the proper OGS and corresponding recommended optimized threshold for different infectious disease types.
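
    Illustrative implementations of the two threshold families compared above: the baseline "Mean+2SD" rule and a percentile-based threshold Px over a moving window of historical weekly counts; the window length and case counts below are hypothetical, and the CIDARS algorithms themselves (C1, C2, MA, SM, CUSUM) are not reproduced here.

```python
import numpy as np

def mean_2sd_threshold(history):
    """Baseline 'Mean+2SD' alert threshold over historical weekly counts."""
    h = np.asarray(history, dtype=float)
    return h.mean() + 2.0 * h.std(ddof=1)

def percentile_threshold(history, p):
    """Percentile-based threshold Px over a moving window of historical counts."""
    return float(np.percentile(np.asarray(history, dtype=float), p))

# Hypothetical 5-week moving window of weekly counts for one disease.
window = [12, 18, 9, 14, 16]
this_week = 31
for name, thr in {"Mean+2SD": mean_2sd_threshold(window),
                  "P50": percentile_threshold(window, 50),
                  "P75": percentile_threshold(window, 75)}.items():
    print(f"{name:8s} threshold={thr:5.1f}  signal={'YES' if this_week > thr else 'no'}")
```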

  12. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3⁴, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, pc(4, 6, 12) = 0.69377849... and pc(3⁴, 6) = 0.43437077..., compared with Parviainen's numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10⁻⁵, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
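
    The closing point, that a bond threshold can be the root in [0, 1] of an integer-coefficient polynomial, can be illustrated with the classic exact case of the triangular lattice, whose bond threshold solves p³ − 3p + 1 = 0; the helper below simply isolates that root numerically and is not the paper's branching-process construction.

```python
import numpy as np

def root_in_unit_interval(coeffs):
    """Real root in [0, 1] of a polynomial (coefficients highest-degree first)."""
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-12].real
    candidates = real[(real >= 0.0) & (real <= 1.0)]
    if candidates.size != 1:
        raise ValueError(f"expected exactly one root in [0, 1], got {candidates}")
    return float(candidates[0])

# Exact classic case: the triangular-lattice bond threshold solves p^3 - 3p + 1 = 0.
pc_tri = root_in_unit_interval([1, 0, -3, 1])
print(pc_tri, 2 * np.sin(np.pi / 18))   # both ~ 0.347296
```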

  13. On the tertiary instability formalism of zonal flows in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Rath, F.; Peeters, A. G.; Buchholz, R.; Grosshauser, S. R.; Seiferling, F.; Weikl, A.

    2018-05-01

    This paper investigates the so-called tertiary instabilities driven by the zonal flow in gyro-kinetic tokamak core turbulence. The Kelvin-Helmholtz instability is first considered within a 2D fluid model and a threshold in the zonal flow wave vector kZF > kZF,c for instability is found. This critical scale is related to the breaking of the rotational symmetry by flux-surfaces, which is incorporated into the modified adiabatic electron response. The stability of undamped Rosenbluth-Hinton zonal flows is then investigated in gyro-kinetic simulations. Absolute instability, in the sense that the threshold zonal flow amplitude tends towards zero, is found above a zonal flow wave vector kZF,c ρi ≈ 1.3 (ρi is the ion thermal Larmor radius), which is comparable to the 2D fluid results. Large scale zonal flows with kZF

  14. Labile glycated haemoglobin and carbamylated haemoglobin are still critical points for HbA1c measurement.

    PubMed

    Desmons, Aurore; Jaisson, Stéphane; Leroy, Nathalie; Gillery, Philippe; Guillard, Emmanuelle

    2017-06-15

    Haemoglobin A1c (HbA1c) is a key analyte for the monitoring of glycemic balance in diabetic patients and is used for diabetes diagnosis in many countries. The potential interference of carbamylated haemoglobin (cHb) and labile glycated haemoglobin (LA1c) on HbA1c assays must remain a matter of vigilance. Such a situation has occurred in our laboratory with a kit replacement on the Bio-Rad Variant™ II testing system, a cation-exchange high performance liquid chromatography (HPLC) system. With this method, LA1c and cHb coeluted in a same peak which may have different consequences on HbA1c values. The influence of increasing LA1c and cHb values on HbA1c results was studied with in vitro glycation and carbamylation of samples. Samples from patients with high and normal blood urea concentrations were assayed by HPLC and immunological assay. We observed that the degree of interference greatly varied depending on the nature of the interfering Hb fractions found under the so-called "LA1c peak". Thus, we have decided to apply a decision tree using "LA1c" thresholds depending on: (i) the retention time, (ii) the shape of the peak, (iii) other analytes, like urea. If the peak recognized as "LA1c" is mainly formed by LA1c, we consider that there is no interference until 4%. If the peak is mainly formed by cHb, we consider an interference threshold equal to 2%. This situation reminds that cHb and LA1c remain critical issues in chromatography-based HbA1c assays and that adapted criteria must be set up for result interpretation.

  15. Recent advances in transfusions in neonates/infants

    PubMed Central

    Goel, Ruchika; Josephson, Cassandra D.

    2018-01-01

    Transfusions of red blood cells (RBCs), platelets, and plasma are critical therapies for infants and neonates (particularly preterm neonates) in the neonatal intensive care unit, who are the most frequently transfused subpopulation across all ages. Although traditionally a significant gap has existed between the blood utilization and the evidence base essential to adequately guide transfusion practices in infants and neonates, pediatric transfusion medicine is evolving from infancy and gradually coming of age. It is entering an exciting era with recognition as an independent discipline, a new and evolving high-quality evidence base for transfusion practices, novel technologies and therapeutics, and national/international collaborative research, educational, and clinical efforts. Triggers and thresholds for red cell transfusion are accumulating evidence with current phase III clinical trials. Ongoing trials and studies of platelet and plasma transfusions in neonates are anticipated to provide high-quality evidence in years to come. This article aims to summarize the most current evidence-based practices regarding blood component therapy in neonates. Data on the use of specific components (RBCs, plasma, and platelets) are provided. We attempt to define thresholds for anemia, thrombocytopenia, and abnormal coagulation profile in neonates to highlight the difficulties in having a specific cutoff value in neonates and preterm infants. Indications for transfusion of specific products, transfusion thresholds, and current practices and guidelines are provided, and possible adverse outcomes and complications are discussed. Finally, the critical research knowledge gaps in these practices as well as ongoing and future research areas are discussed. In an era of personalized medicine, neonatal transfusion decisions guided by a strong evidence base must be the overarching goal, and this underlies all of the strategic initiatives in pediatric and neonatal transfusion research highlighted in this article. PMID:29904575

  16. Increased insolation threshold for runaway greenhouse processes on Earth-like planets

    NASA Astrophysics Data System (ADS)

    Leconte, Jérémy; Forget, Francois; Charnay, Benjamin; Wordsworth, Robin; Pottier, Alizée

    2013-12-01

    The increase in solar luminosity over geological timescales should warm the Earth's climate, increasing water evaporation, which will in turn enhance the atmospheric greenhouse effect. Above a certain critical insolation, this destabilizing greenhouse feedback can `run away' until the oceans have completely evaporated. Through increases in stratospheric humidity, warming may also cause evaporative loss of the oceans to space before the runaway greenhouse state occurs. The critical insolation thresholds for these processes, however, remain uncertain because they have so far been evaluated using one-dimensional models that cannot account for the dynamical and cloud feedback effects that are key stabilizing features of the Earth's climate. Here we use a three-dimensional global climate model to show that the insolation threshold for the runaway greenhouse state to occur is about 375 W m-2, which is significantly higher than previously thought. Our model is specifically developed to quantify the climate response of Earth-like planets to increased insolation in hot and extremely moist atmospheres. In contrast with previous studies, we find that clouds have a destabilizing feedback effect on the long-term warming. However, subsident, unsaturated regions created by the Hadley circulation have a stabilizing effect that is strong enough to shift the runaway greenhouse limit to higher values of insolation than are inferred from one-dimensional models. Furthermore, because of wavelength-dependent radiative effects, the stratosphere remains sufficiently cold and dry to hamper the escape of atmospheric water, even at large fluxes. This has strong implications for the possibility of liquid water existing on Venus early in its history, and extends the size of the habitable zone around other stars.

  17. Increased insolation threshold for runaway greenhouse processes on Earth-like planets.

    PubMed

    Leconte, Jérémy; Forget, Francois; Charnay, Benjamin; Wordsworth, Robin; Pottier, Alizée

    2013-12-12

    The increase in solar luminosity over geological timescales should warm the Earth's climate, increasing water evaporation, which will in turn enhance the atmospheric greenhouse effect. Above a certain critical insolation, this destabilizing greenhouse feedback can 'run away' until the oceans have completely evaporated. Through increases in stratospheric humidity, warming may also cause evaporative loss of the oceans to space before the runaway greenhouse state occurs. The critical insolation thresholds for these processes, however, remain uncertain because they have so far been evaluated using one-dimensional models that cannot account for the dynamical and cloud feedback effects that are key stabilizing features of the Earth's climate. Here we use a three-dimensional global climate model to show that the insolation threshold for the runaway greenhouse state to occur is about 375 W m(-2), which is significantly higher than previously thought. Our model is specifically developed to quantify the climate response of Earth-like planets to increased insolation in hot and extremely moist atmospheres. In contrast with previous studies, we find that clouds have a destabilizing feedback effect on the long-term warming. However, subsident, unsaturated regions created by the Hadley circulation have a stabilizing effect that is strong enough to shift the runaway greenhouse limit to higher values of insolation than are inferred from one-dimensional models. Furthermore, because of wavelength-dependent radiative effects, the stratosphere remains sufficiently cold and dry to hamper the escape of atmospheric water, even at large fluxes. This has strong implications for the possibility of liquid water existing on Venus early in its history, and extends the size of the habitable zone around other stars.

  18. Optimization of the microbial synthesis of dihydroxyacetone from glycerol with Gluconobacter oxydans.

    PubMed

    Hekmat, D; Bauer, R; Fricke, J

    2003-12-01

    An optimized repeated-fed-batch fermentation process for the synthesis of dihydroxyacetone (DHA) from glycerol utilizing Gluconobacter oxydans is presented. Cleaning, sterilization, and inoculation procedures could be reduced significantly compared to the conventional fed-batch process. A stringent requirement was that the product concentration was kept below a critical threshold level at all times in order to avoid irreversible product inhibition of the cells. On the basis of experimentally validated model calculations, a threshold value of about 60 kg x m(-3) DHA was obtained. The innovative bioreactor system consisted of a stirred tank reactor combined with a packed trickle-bed column. In the packed column, active cells could be retained by in situ immobilization on a hydrophilized Ralu-ring carrier material. Within 17 days, the productivity of the process could be increased by 75% to about 2.8 kg x m(-3) h(-1). However, it was observed that the maximum achievable productivity had not been reached yet.

  19. Piezoresistive strain sensing of carbon black /silicone composites above percolation threshold

    NASA Astrophysics Data System (ADS)

    Shang, Shuying; Yue, Yujuan; Wang, Xiaoer

    2016-12-01

    A series of flexible composites with a carbon black (CB) filled silicone rubber matrix was made by an improved process in this work. A low percolation threshold, at a CB mass ratio of 2.99%, was achieved. The piezoresistive behavior of CB/silicone composites above this critical value, with the mass ratio of carbon black to silicone rubber ranging from 0.01 to 0.2, was studied. The piezoresistive behavior differed among composites with different CB contents, but all composites showed excellent repeatability of piezoresistivity under cyclic compression, whether the filler content was low or high. Most interestingly, the plots of gauge factor versus strain for composites with different CB contents collapsed onto a master curve that could be well fitted by a single function. This showed that the gauge factor of the composites was strain-controlled, which is promising for applications.
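
    The gauge factor referred to above is the strain-normalized relative resistance change. As a minimal illustration (not the authors' code; the array names and the example resistance-strain data are hypothetical), it can be computed directly from a compression record as follows:

        # Minimal sketch: gauge factor GF(e) = (dR/R0)/e from a resistance-vs-strain
        # record, so that curves from composites with different CB contents can be
        # overlaid to look for a master curve. Example data are made up.
        import numpy as np

        def gauge_factor(strain, resistance):
            """Return strain values and GF(e) for nonzero strains."""
            r0 = resistance[0]                      # unstrained resistance
            mask = strain > 0                       # avoid division by zero at e = 0
            dr_rel = (resistance[mask] - r0) / r0   # relative resistance change dR/R0
            return strain[mask], dr_rel / strain[mask]

        # Hypothetical compression sweep for one filler content
        strain = np.linspace(0.0, 0.10, 11)                 # 0-10 % compressive strain
        resistance = 1200.0 * (1.0 + 8.0 * strain**1.5)     # made-up piezoresistive response (ohms)
        eps, gf = gauge_factor(strain, resistance)
        print(np.round(gf, 2))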

  20. Investigation of Current Methods to Identify Helicopter Gear Health

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Lewicki, David G.; Le, Dy D.

    2007-01-01

    This paper provides an overview of current vibration methods used to identify the health of helicopter transmission gears. The gears are critical to the transmission system that provides propulsion, lift and maneuvering of the helicopter. This paper reviews techniques used to process vibration data to calculate condition indicators (CIs), guidelines used by the government aviation authorities in developing and certifying the Health and Usage Monitoring System (HUMS), condition and health indicators used in commercial HUMS, and different methods used to set thresholds to detect damage. An initial assessment of a method to set thresholds for vibration-based condition indicators, applied to flight and test rig data by evaluating differences in distributions between comparable transmissions, is also discussed. Gear condition indicator FM4 values are compared on an OH58 helicopter during 14 maneuvers and an OH58 transmission test stand during crack propagation tests. Preliminary results show the distributions between healthy helicopter and rig data are comparable and distributions between healthy and damaged gears show significant differences.

  2. On oscillatory convection with the Cattaneo–Christov hyperbolic heat-flow model

    PubMed Central

    Bissell, J. J.

    2015-01-01

    Adoption of the hyperbolic Cattaneo–Christov heat-flow model in place of the more usual parabolic Fourier law is shown to raise the possibility of oscillatory convection in the classic Bénard problem of a Boussinesq fluid heated from below. By comparing the critical Rayleigh numbers for stationary and oscillatory convection, Rc and RS respectively, oscillatory convection is found to represent the preferred form of instability whenever the Cattaneo number C exceeds a threshold value CT ≥ 8/(27π²) ≈ 0.03. In the case of free boundaries, analytical approaches permit direct treatment of the role played by the Prandtl number P1, which—in contrast to the classical stationary scenario—can impact on oscillatory modes significantly owing to the non-zero frequency of convection. Numerical investigation indicates that the behaviour found analytically for free boundaries applies in a qualitatively similar fashion for fixed boundaries, while the threshold Cattaneo number CT is computed as a function of P1 ∈ [10⁻², 10²] for both boundary regimes. PMID:25792960

  3. A micro-epidemic model for primary dengue infection

    NASA Astrophysics Data System (ADS)

    Mishra, Arti; Gakkhar, Sunita

    2017-06-01

    In this paper, a micro-epidemic non-linear dynamical model has been proposed and analyzed for primary dengue infection. The model incorporates the effects of the T cell immune response as well as the humoral response during the pathogenesis of dengue infection. A time delay has been included to account for the production of antibodies from B cells. The basic reproduction number (R0) has been computed. Three equilibrium states are obtained. The existence and stability conditions for the infection-free and ineffective cellular immune response states have been discussed. The conditions for existence of the endemic state have been obtained. Further, the parametric region where the system exhibits complex behavior is obtained. The threshold value of the time delay that is critical for a change in stability of the endemic state has been computed. A threshold level for the antibody production rate has been obtained, above which the infection will die out even though R0 > 1. The model is in line with the clinical observation that viral load decreases within 7-14 days from the onset of primary infection.

  4. When Is a Sprint a Sprint? A Review of the Analysis of Team-Sport Athlete Activity Profile

    PubMed Central

    Sweeting, Alice J.; Cormack, Stuart J.; Morgan, Stuart; Aughey, Robert J.

    2017-01-01

    The external load of a team-sport athlete can be measured by tracking technologies, including global positioning systems (GPS), local positioning systems (LPS), and vision-based systems. These technologies allow for the calculation of displacement, velocity and acceleration during a match or training session. The accurate quantification of these variables is critical so that meaningful changes in team-sport athlete external load can be detected. High-velocity running, including sprinting, may be important for specific team-sport match activities, including evading an opponent or creating a shot on goal. Maximal accelerations are energetically demanding and frequently occur from a low velocity during team-sport matches. Despite extensive research, conjecture exists regarding the thresholds by which to classify the high velocity and acceleration activity of a team-sport athlete. There is currently no consensus on the definition of a sprint or acceleration effort, even within a single sport. The aim of this narrative review was to examine the varying velocity and acceleration thresholds reported in athlete activity profiling. The purposes of this review were therefore to (1) identify the various thresholds used to classify high-velocity or -intensity running plus accelerations; (2) examine the impact of individualized thresholds on reported team-sport activity profile; (3) evaluate the use of thresholds for court-based team-sports; and (4) discuss potential areas for future research. The presentation of velocity thresholds as a single value, with equivocal qualitative descriptors, is confusing when data lie between two thresholds. In Australian football, sprint efforts have been defined as activity >4.00 or >4.17 m·s−1. Acceleration thresholds differ across the literature, with >1.11, 2.78, 3.00, and 4.00 m·s−2 utilized across a number of sports. It is difficult to compare literature on field-based sports due to inconsistencies in velocity and acceleration thresholds, even within a single sport. Velocity and acceleration thresholds have been determined from physical capacity tests. Limited research exists on the classification of velocity and acceleration data from female team-sport athletes. Alternatively, data mining techniques may be used to report team-sport athlete external load, without the requirement of arbitrary or physiologically defined thresholds. PMID:28676767
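
    As a minimal illustration of how such thresholds are applied in practice (a hedged sketch, not any vendor's algorithm; the 10 Hz sampling rate, the 4.17 m·s−1 cutoff taken from the review, and the example trace are assumptions), sprint efforts can be counted as contiguous runs of velocity samples above a chosen threshold:

        # Count sprint efforts as contiguous runs of samples above a velocity
        # threshold and report the time spent above it. Trace and threshold are
        # illustrative assumptions.
        import numpy as np

        def sprint_efforts(velocity, threshold=4.17, hz=10):
            above = velocity > threshold
            # an effort starts wherever the trace crosses from below to above the threshold
            starts = np.flatnonzero(above & ~np.r_[False, above[:-1]])
            duration_s = above.sum() / hz
            return len(starts), duration_s

        velocity = np.array([2.0, 3.5, 4.5, 5.0, 4.4, 3.9, 2.5, 4.3, 4.8, 3.0])
        n, secs = sprint_efforts(velocity)
        print(n, "efforts,", secs, "s above threshold")   # 2 efforts, 0.5 s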

  5. The evolution of an unsteady translating nonlinear rossby-wave critical layer

    NASA Astrophysics Data System (ADS)

    Haynes, Peter H.; Cowley, Stephen J.

    When a monochromatic Rossby wave is forced on a flow which is slowly varying in time, the location of the critical line, where the phase speed of the wave is equal to that of the flow, also slowly changes. It is shown that this translation can play an important role in the vorticity balance near the critical line. The behavior of the translating critical layer is analyzed for various values of y, a parameter which measures the relative importance of nonlinear advection and translation. First, the vorticity equation in the critical layer is solved numerically in an important special case, where the velocity field in the critical layer is independent of the vorticity distribution and constant in time. The solutions reveal a number of new aspects of the behavior which are introduced by the translation, including the formation of a wake behind the critical layer, and the possibility of "trapping" of fluid particles in the critical layer if y exceeds a threshold value. Viewed in a frame of reference moving with the critical line, the vorticity distribution may tend to a steady state, except in a "vorticity front" far downstream in the wake. If streamlines in the critical layer are open this steady state may be a predominantly inviscid one; if they are closed a steady state is possible only with non-zero dissipation. For both the unsteady and steady flows the translation allows the "logarithmic phase jump" across the critical layer to be non-zero and negative. Hence, even when the viscosity is vanishingly small, the critical layer can act as a strong "absorber" of Eliassen-Palm wave activity. Second, steady-state solutions are obtained numerically for a case when the velocity field in the critical layer is not independent of the vorticity distribution there. The interaction restricts the formation of closed streamlines, and an asymptotic open-streamline solution for large y can be found. The critical layer again acts as an absorber of wave activity, but with decreasing effectiveness as y increases.

  6. Optimal Threshold Determination for Interpreting Semantic Similarity and Particularity: Application to the Comparison of Gene Sets and Metabolic Pathways Using GO and ChEBI

    PubMed Central

    Bettembourg, Charles; Diot, Christian; Dameron, Olivier

    2015-01-01

    Background: The analysis of gene annotations referencing back to Gene Ontology plays an important role in the interpretation of high-throughput experiments results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results: We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extend this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion: We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method on the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
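
    A sketch of the threshold-selection idea described above (not the authors' implementation; the score distributions are synthetic) is to scan candidate cutoffs and keep the one minimizing the sum of the false-positive and false-negative proportions:

        # Given similarity scores for pairs known to be similar and pairs known to
        # be non-similar, choose the cutoff minimizing FN + FP proportions.
        import numpy as np

        def optimal_threshold(similar_scores, nonsimilar_scores):
            candidates = np.unique(np.concatenate([similar_scores, nonsimilar_scores]))
            best_t, best_err = None, np.inf
            for t in candidates:
                fn = np.mean(similar_scores < t)      # similar pairs called non-similar
                fp = np.mean(nonsimilar_scores >= t)  # non-similar pairs called similar
                if fn + fp < best_err:
                    best_t, best_err = t, fn + fp
            return best_t, best_err

        rng = np.random.default_rng(0)
        similar = rng.beta(8, 2, 500)      # hypothetical scores of similar pairs
        nonsimilar = rng.beta(2, 6, 500)   # hypothetical scores of non-similar pairs
        print(optimal_threshold(similar, nonsimilar))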

  7. Statistical properties of effective drought index (EDI) for Seoul, Busan, Daegu, Mokpo in South Korea

    NASA Astrophysics Data System (ADS)

    Park, Jong-Hyeok; Kim, Ki-Beom; Chang, Heon-Young

    2014-08-01

    Time series of drought indices have so far been considered mostly in view of the temporal and spatial distributions of a drought index. Here we investigate the statistical properties of the daily Effective Drought Index (EDI) itself for Seoul, Busan, Daegu, and Mokpo for the 100-year period from 1913 to 2012. We have found that in both dry and wet seasons the distribution of EDI values follows a Gaussian function. In the dry season the Gaussian is characteristically broader than in the wet season. The total number of drought days during the analyzed period is related both to the mean value and, more importantly, to the standard deviation. We have also found that the distribution of the number of occasions on which the EDI values of several consecutive days are all less than a threshold follows an exponential distribution. The slope of the best fit becomes steeper not only as the critical EDI value becomes more negative but also as the number of consecutive days increases. The slope of the exponential distribution also becomes steeper as the number of cities in which EDI is simultaneously less than a critical value increases. Finally, we conclude by pointing out implications of our findings.
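
    The run statistic described above can be illustrated with a short sketch (assumptions: a daily EDI series as a one-dimensional array, a critical value of -1.5, and run lengths in days; the series below is synthetic, not the Korean station data):

        # Count the occasions on which EDI stays below a critical value for at
        # least n_days consecutive days, the quantity whose distribution the
        # abstract describes as exponential.
        import numpy as np

        def runs_below(edi, critical=-1.5, n_days=5):
            below = edi < critical
            count, run = 0, 0
            for flag in below:
                run = run + 1 if flag else 0
                if run == n_days:          # count each qualifying run once
                    count += 1
            return count

        rng = np.random.default_rng(1)
        edi = rng.normal(0.0, 1.0, 36500)          # hypothetical 100-year daily EDI series
        print([runs_below(edi, -1.5, n) for n in (3, 5, 7)])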

  8. Site- and bond-percolation thresholds in K_{n,n}-based lattices: Vulnerability of quantum annealers to random qubit and coupler failures on chimera topologies.

    PubMed

    Melchert, O; Katzgraber, Helmut G; Novotny, M A

    2016-04-01

    We estimate the critical thresholds of bond and site percolation on nonplanar, effectively two-dimensional graphs with chimeralike topology. The building blocks of these graphs are complete and symmetric bipartite subgraphs of size 2n, referred to as K_{n,n} graphs. For the numerical simulations we use an efficient union-find-based algorithm and employ a finite-size scaling analysis to obtain the critical properties for both bond and site percolation. We report the respective percolation thresholds for different sizes of the bipartite subgraph and verify that the associated universality class is that of standard two-dimensional percolation. For the canonical chimera graph used in the D-Wave Systems Inc. quantum annealer (n=4), we discuss device failure in terms of network vulnerability, i.e., we determine the critical fraction of qubits and couplers that can be absent due to random failures prior to losing large-scale connectivity throughout the device.
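
    The union-find ingredient of such estimates can be sketched as follows; for brevity the example uses an L×L square lattice rather than the chimera K_{n,n} graphs studied in the paper, and it simply records the occupied fraction at which a spanning cluster first appears:

        # Occupy sites in random order, merging clusters with union-find, and
        # report the occupied fraction at which a left-to-right spanning cluster
        # first forms. Square lattice only; an illustration, not the paper's code.
        import numpy as np

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def union(parent, a, b):
            ra, rb = find(parent, a), find(parent, b)
            if ra != rb:
                parent[ra] = rb

        def spanning_fraction(L, seed=0):
            rng = np.random.default_rng(seed)
            n = L * L
            left, right = n, n + 1              # virtual nodes for the two boundaries
            parent = list(range(n + 2))
            occupied = np.zeros(n, dtype=bool)
            for k, site in enumerate(rng.permutation(n), start=1):
                occupied[site] = True
                r, c = divmod(int(site), L)
                if c == 0:
                    union(parent, site, left)
                if c == L - 1:
                    union(parent, site, right)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < L and 0 <= cc < L and occupied[rr * L + cc]:
                        union(parent, site, rr * L + cc)
                if find(parent, left) == find(parent, right):
                    return k / n                # occupied fraction at first spanning
            return 1.0

        est = np.mean([spanning_fraction(64, s) for s in range(20)])
        print(round(est, 3))    # should land near the known square-lattice value ~0.593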

  9. Self-Supervised Learning to Visually Detect Terrain Surfaces for Autonomous Robots Operating in Forested Terrain

    DTIC Science & Technology

    2012-01-01

    values of EAFP, EAFN, and EAF, can be compared with three user-defined threshold values, TAFP, TAFN, and TAF . These threshold values determine the update...values were chosen as TAFP = E0AFP + 0.02, TAFN = E0AFN + 0.02, and TAF = E0AF + 0.02). We called the value of 0.02 the margin of error tolerance. In

  10. Performance specifications of critical results management.

    PubMed

    Piva, Elisa; Sciacovelli, Laura; Pelloso, Michela; Plebani, Mario

    2017-07-01

    Formerly defined as "critical values", critical results (CRs) have grown in importance for patient care in recent years. According to George Lundberg's definition, a result becomes "critical" when, exceeding actionable thresholds, it suggests imminent danger for the patient unless appropriate therapy is initiated promptly. As required by the most important accreditation standards, such as ISO 15189 or the Joint Commission standards, a quality reporting system should deliver the correct result to the appropriate clinician in a time-frame that ensures patient safety. From this point of view, medical laboratories should implement a process that assures the most effective and timely communication to the referring physician or care team member. Failure in communication, particularly in this type of situation, continues to be one of the most common factors contributing to the occurrence of adverse events. In the last few decades, Information Technology (IT) in Health Care has become increasingly important. The ability to interface radiology, anatomic pathology or laboratory information systems with electronic medical records is now a real opportunity, offering much safer communication than in the past. Future achievements on performance criteria and quality indicators for the notification of CRs should ensure a comparable examination across different institutions, adding value to clinical laboratories in controlling the post-analytical processes that concern patient safety. Therefore, the novel approach to CRs should combine quality initiatives, IT solutions and a culture that strengthens professional interaction. Copyright © 2017. Published by Elsevier Inc.

  11. Verification of the tumor volume delineation method using a fixed threshold of peak standardized uptake value.

    PubMed

    Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro

    2017-09-01

    We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
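
    A sketch of the delineation and comparison steps (not the authors' implementation; the SUV maps and voxel size below are synthetic placeholders) segments the tumor as voxels above 40% of a reference SUV and computes the volume difference %VD between two reconstructions:

        # Fixed-fraction SUV segmentation and %VD between two reconstructions of
        # the same scan. Arrays and voxel size are hypothetical.
        import numpy as np

        def tumor_volume_ml(suv, reference_suv, voxel_ml, fraction=0.40):
            return np.count_nonzero(suv >= fraction * reference_suv) * voxel_ml

        def percent_vd(volume_test, volume_ref):
            return 100.0 * (volume_test - volume_ref) / volume_ref

        rng = np.random.default_rng(2)
        suv_osem = rng.gamma(2.0, 1.5, (32, 32, 32))                      # hypothetical OSEM SUV map
        suv_psf = suv_osem * rng.normal(1.1, 0.05, suv_osem.shape)        # hypothetical PSF reconstruction
        voxel_ml = 0.064                                                  # 4 mm isotropic voxels

        v_ref = tumor_volume_ml(suv_osem, suv_osem.max(), voxel_ml)
        v_psf = tumor_volume_ml(suv_psf, suv_psf.max(), voxel_ml)
        print(round(percent_vd(v_psf, v_ref), 1))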

  12. Quantifying patterns of change in marine ecosystem response to multiple pressures.

    PubMed

    Large, Scott I; Fay, Gavin; Friedland, Kevin D; Link, Jason S

    2015-01-01

    The ability to understand and ultimately predict ecosystem response to multiple pressures is paramount to successfully implement ecosystem-based management. Threshold shifts and nonlinear patterns in ecosystem responses can be used to determine reference points that identify levels of a pressure that may drastically alter ecosystem status, which can inform management action. However, quantifying ecosystem reference points has proven elusive due in large part to the multi-dimensional nature of both ecosystem pressures and ecosystem responses. We used ecological indicators, synthetic measures of ecosystem status and functioning, to enumerate important ecosystem attributes and to reduce the complexity of the Northeast Shelf Large Marine Ecosystem (NES LME). Random forests were used to quantify the importance of four environmental and four anthropogenic pressure variables to the value of ecological indicators, and to quantify shifts in aggregate ecological indicator response along pressure gradients. Anthropogenic pressure variables were critical defining features and were able to predict an average of 8-13% (up to 25-66% for individual ecological indicators) of the variation in ecological indicator values, whereas environmental pressures were able to predict an average of 1-5% (up to 9-26% for individual ecological indicators) of ecological indicator variation. Each pressure variable predicted the variation of a different suite of ecological indicators, and the shapes of ecological indicator responses along pressure gradients were generally nonlinear. Threshold shifts in ecosystem response to exploitation, the most important pressure variable, occurred when commercial landings were 20 and 60% of total surveyed biomass. Although present, threshold shifts in ecosystem response to environmental pressures were much less important, which suggests that anthropogenic pressures have significantly altered the ecosystem structure and functioning of the NES LME. Gradient response curves provide ecologically informed transformations of pressure variables to explain patterns of ecosystem structure and functioning. By concurrently identifying thresholds for a suite of ecological indicator responses to multiple pressures, we demonstrate that ecosystem reference points can be evaluated and used to support ecosystem-based management.
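
    The random-forest step can be sketched as follows (not the authors' code; the pressure-variable names and data below are synthetic placeholders): fit a forest predicting one ecological indicator from the eight pressure variables and rank the pressures by feature importance:

        # Rank pressure variables by random-forest feature importance for one
        # ecological indicator. Data and variable names are synthetic.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)
        pressures = ["landings", "exploitation", "fishing_effort", "trophic_level_catch",
                     "sst", "amo", "gulf_stream_index", "stratification"]
        X = rng.normal(size=(200, len(pressures)))
        # synthetic indicator driven mostly by the first two (anthropogenic) pressures
        y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

        forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
        for name, imp in sorted(zip(pressures, forest.feature_importances_),
                                key=lambda t: -t[1]):
            print(f"{name:22s} {imp:.3f}")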

  13. To bloom or not to bloom: contrasting responses of cyanobacteria to recent heat waves explained by critical thresholds of abiotic drivers.

    PubMed

    Huber, Veronika; Wagner, Carola; Gerten, Dieter; Adrian, Rita

    2012-05-01

    Past heat waves are considered harbingers of future climate change. In this study, we have evaluated the effects of two recent Central European summer heat waves (2003 and 2006) on cyanobacterial blooms in a eutrophic, shallow lake. While a bloom of cyanobacteria developed in 2006, consistent with our expectations, cyanobacterial biomass surprisingly remained at a record-low during the entire summer of 2003. Critical thresholds of abiotic drivers extracted from the long-term (1993-2007) data set of the studied lake using classification tree analysis (CTA) proved suitable to explain these observations. We found that cyanobacterial blooms were especially favoured in 2006 because thermal stratification was critically intense (Schmidt stability >44 g cm cm(-2)) and long-lasting (>3 weeks). Our results also suggest that some cyanobacterial species (Anabaena sp.) benefitted directly from the stable water column, whereas other species (Planktothrix sp.) took advantage of stratification-induced internal nutrient loading. In 2003, conditions were less favourable for cyanobacteria due to a spell of lower temperatures and stronger winds in mid-summer; as a result, the identified thresholds of thermal stratification were hardly ever reached. Overall, our study shows that extracting critical thresholds of environmental drivers from long-term records is a promising avenue for predicting ecosystem responses to future climate warming. Specifically, our results emphasize that not average temperature increase but changes in short-term meteorological variability will determine whether cyanobacteria will bloom more often in a warmer world.

  14. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    NASA Astrophysics Data System (ADS)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For defect detection, gradients are widely used to highlight and subsequently segment areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be either very small or very large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific gray-level values of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
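
    The abstract does not give the exact adaptation rule, so the sketch below uses a hypothetical mapping from the fraction of strong-gradient pixels to the percentile; it is meant only to illustrate the idea of lowering the percentile when many pixels carry strong gradients, so that both small and large defects survive segmentation:

        # Threshold a gradient image at a percentile that adapts to the fraction
        # of strong-gradient pixels. The adaptation rule and data are illustrative.
        import numpy as np

        def adaptive_percentile_threshold(image, strong_level=20, lo=98.0, hi=99.9):
            gy, gx = np.gradient(image.astype(float))
            grad = np.hypot(gx, gy)
            strong_frac = np.mean(grad > strong_level)          # fraction of strong-gradient pixels
            # more strong-gradient pixels -> lower percentile -> larger segmented area
            percentile = hi - (hi - lo) * min(strong_frac / 0.02, 1.0)
            return grad > np.percentile(grad, percentile)

        rng = np.random.default_rng(4)
        strip = rng.normal(128, 3, (256, 256))
        strip[100:140, 60:200] += 60                            # synthetic blister-like defect
        mask = adaptive_percentile_threshold(strip)
        print(mask.sum(), "pixels flagged as defective")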

  15. The asymmetry of U.S. monetary policy: Evidence from a threshold Taylor rule with time-varying threshold values

    NASA Astrophysics Data System (ADS)

    Zhu, Yanli; Chen, Haiqiang

    2017-05-01

    In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by treating the threshold value as a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two regimes: expansion periods and recession periods. Our findings support the view that U.S. monetary policy operations are asymmetric in these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of the monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate to reflect their view of the health of the general economy.
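
    One compact way to write such a regime-switching rule (the notation is illustrative and not necessarily the authors') is

        \[
        i_t = (1-\rho)\bigl[\alpha_j + \beta_j\,\mathbb{E}_t(\pi_{t+1}-\pi^{*}) + \gamma_j\,\mathbb{E}_t\,y_{t+1}\bigr] + \rho\, i_{t-1} + \varepsilon_t,
        \qquad
        j = \begin{cases} 1, & u_t \le u_t^{*} \\ 2, & u_t > u_t^{*} \end{cases},
        \qquad
        u_t^{*} = c + \phi\, u_{t-1}^{*} + \eta_t,
        \]

    where i_t is the policy rate, π_{t+1} − π* the expected inflation gap, y_{t+1} the expected output gap, u_t the unemployment rate and u_t* the latent autoregressive threshold; regime 1 corresponds to expansion periods and regime 2 to recession periods, with regime-specific response coefficients.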

  16. Evaluation of the stability indices for the thunderstorm forecasting in the region of Belgrade, Serbia

    NASA Astrophysics Data System (ADS)

    Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.

    2015-07-01

    The pre-convective atmosphere over Serbia during the ten-year period (2001-2010) was investigated using radiosonde data from one meteorological station and thunderstorm observations from thirteen SYNOP meteorological stations. Several stability indices were examined in order to verify their ability to forecast thunderstorms. Rank sum scores (RSSs) were used to identify the indices and parameters that can differentiate between thunderstorm and no-thunderstorm events. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. A threshold value test was used to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as optimal. The thresholds were validated in two ways: through a control data set, and by comparing the calculated index thresholds with the values of the indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, followed by SI, KI and TT. The BI had the poorest skill scores.

  17. Discrete diffraction managed solitons: Threshold phenomena and rapid decay for general nonlinearities

    NASA Astrophysics Data System (ADS)

    Choi, Mi-Ran; Hundertmark, Dirk; Lee, Young-Ran

    2017-10-01

    We prove a threshold phenomenon for the existence/non-existence of energy minimizing solitary solutions of the diffraction management equation for strictly positive and zero average diffraction. Our methods allow for a large class of nonlinearities; they are, for example, allowed to change sign, and only the weakest possible condition, local integrability, is imposed on the local diffraction profile. The solutions are found as minimizers of a nonlinear and nonlocal variational problem which is translation invariant. There exists a critical threshold λcr such that minimizers for this variational problem exist if their power is greater than λcr, and no minimizers exist with power less than the critical threshold. We also give simple criteria for the finiteness and strict positivity of the critical threshold. Our proof of existence of minimizers is rather direct and avoids the use of Lions' concentration compactness argument. Furthermore, we give precise quantitative lower bounds on the exponential decay rate of the diffraction management solitons, which confirm the physical heuristic prediction for the asymptotic decay rate. Moreover, for ground state solutions, these bounds give a quantitative lower bound for the divergence of the exponential decay rate in the limit of vanishing average diffraction. For zero average diffraction, we prove quantitative bounds which show that the solitons decay much faster than exponentially. Our results considerably extend and strengthen the results of Hundertmark and Lee [J. Nonlinear Sci. 22, 1-38 (2012) and Commun. Math. Phys. 309(1), 1-21 (2012)].

  18. Comparison between intensity- duration thresholds and cumulative rainfall thresholds for the forecasting of landslide

    NASA Astrophysics Data System (ADS)

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in an area of northern Tuscany of 116 km². The first methodology identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the fewest false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km² where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively unexplored research topic.
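
    The statistical idea behind SIGMA, as summarized above, can be sketched as follows (not the actual model; the rainfall series is synthetic): standardize cumulative rainfall over a chosen duration against the historical record and flag values exceeding a chosen multiple of the standard deviation:

        # Flag rolling cumulative-rainfall values exceeding mean + n_sigma * std.
        # Duration, multiplier, and rainfall series are illustrative.
        import numpy as np

        def sigma_exceedance(daily_rain, duration_days=3, n_sigma=2.0):
            kernel = np.ones(duration_days)
            cum = np.convolve(daily_rain, kernel, mode="valid")   # rolling cumulative rainfall
            mu, sd = cum.mean(), cum.std()
            threshold = mu + n_sigma * sd
            return cum > threshold, threshold                     # exceedance flags and threshold (mm)

        rng = np.random.default_rng(5)
        rain = rng.gamma(shape=0.3, scale=12.0, size=4000)        # hypothetical daily rainfall (mm)
        flags, threshold = sigma_exceedance(rain, 3, 2.0)
        print(round(threshold, 1), "mm;", flags.sum(), "exceedances")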

  19. Energy localization in the phi4 oscillator chain.

    PubMed

    Ponno, A; Ruggiero, J; Drigo, E; De Luca, J

    2006-05-01

    We study energy localization in a finite one-dimensional phi(4) oscillator chain with initial energy in a single oscillator of the chain. We numerically calculate the effective number of degrees of freedom sharing the energy on the lattice as a function of time. We find that for energies smaller than a critical value, energy equipartition among the oscillators is reached in a relatively short time. On the other hand, above the critical energy, a decreasing number of particles sharing the energy is observed. We give an estimate of the effective number of degrees of freedom as a function of the energy. Our results suggest that localization is due to the appearance, above threshold, of a breather-like structure. Analytic arguments are given, based on the averaging theory and the analysis of a discrete nonlinear Schrödinger equation approximating the dynamics, to support and explain the numerical results.

  20. Long-time efficacy of the surface code in the presence of a super-Ohmic environment

    NASA Astrophysics Data System (ADS)

    López-Delgado, D. A.; Novais, E.; Mucciolo, E. R.; Caldeira, A. O.

    2017-06-01

    We study the long-time evolution of a quantum memory coupled to a bosonic environment on which quantum error correction (QEC) is performed using the surface code. The memory's evolution encompasses N QEC cycles, each of them yielding a nonerror syndrome. This assumption makes our analysis independent of the recovery process. We map the expression for the time evolution of the memory onto the partition function of an equivalent statistical-mechanical spin system. In the super-Ohmic dissipation case the long-time evolution of the memory has the same behavior as the time evolution for just one QEC cycle. For this case we find analytical expressions for the critical parameters of the order-disorder phase transition of an equivalent spin system. These critical parameters determine the threshold value for the system-environment coupling below which it is possible to preserve the memory's state.

  1. A critique of the use of indicator-species scores for identifying thresholds in species responses

    USGS Publications Warehouse

    Cuffney, Thomas F.; Qian, Song S.

    2013-01-01

    Identification of ecological thresholds is important both for theoretical and applied ecology. Recently, Baker and King (2010, King and Baker 2010) proposed a method, threshold indicator analysis (TITAN), to calculate species and community thresholds based on indicator species scores adapted from Dufrêne and Legendre (1997). We tested the ability of TITAN to detect thresholds using models with (broken-stick, disjointed broken-stick, dose-response, step-function, Gaussian) and without (linear) definitive thresholds. TITAN accurately and consistently detected thresholds in step-function models, but not in models characterized by abrupt changes in response slopes or response direction. Threshold detection in TITAN was very sensitive to the distribution of 0 values, which caused TITAN to identify thresholds associated with relatively small differences in the distribution of 0 values while ignoring thresholds associated with large changes in abundance. Threshold identification and tests of statistical significance were based on the same data permutations, resulting in inflated estimates of statistical significance. Application of bootstrapping to the split-point problem that underlies TITAN led to underestimates of the confidence intervals of thresholds. Bias in the derivation of the z-scores used to identify TITAN thresholds and skewness in the distribution of data along the gradient produced TITAN thresholds that were much more similar than the actual thresholds. This tendency may account for the synchronicity of thresholds reported in TITAN analyses. The thresholds identified by TITAN represented disparate characteristics of species responses; this, coupled with the inability of TITAN to identify thresholds accurately and consistently, does not support the aggregation of individual species thresholds into a community threshold.

  2. Salicylate-induced changes in auditory thresholds of adolescent and adult rats.

    PubMed

    Brennan, J F; Brown, C A; Jastreboff, P J

    1996-01-01

    Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selective higher frequency values after salicylate injection, and the extent of shifts was related to salicylate dose level.

  3. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the optimal thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values that maximize Otsu's objective function for eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves to be robust and effective, as shown by numerical experimental results including Otsu's objective values and standard deviations.
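
    The objective being maximized can be sketched independently of the optimizer (the search itself, flower pollination with the randomized location modification, is omitted; the histogram below is synthetic): Otsu's between-class variance for a candidate set of thresholds on a grayscale histogram:

        # Otsu's between-class variance for a set of thresholds; any optimizer
        # could call this as its objective function.
        import numpy as np

        def otsu_between_class_variance(hist, thresholds):
            """hist: 256-bin grayscale histogram; thresholds: list of cut points."""
            p = hist / hist.sum()
            levels = np.arange(hist.size)
            mu_total = (p * levels).sum()
            edges = [0, *sorted(thresholds), hist.size]
            variance = 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                w = p[lo:hi].sum()                       # class probability
                if w > 0:
                    mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                    variance += w * (mu - mu_total) ** 2
            return variance

        rng = np.random.default_rng(6)
        pixels = np.concatenate([rng.normal(60, 10, 30000), rng.normal(170, 15, 20000)])
        hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
        print(round(otsu_between_class_variance(hist, [115]), 1))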

  4. Evaluation of different radon guideline values based on characterization of ecological risk and visualization of lung cancer mortality trends in British Columbia, Canada.

    PubMed

    Branion-Calles, Michael C; Nelson, Trisalyn A; Henderson, Sarah B

    2015-11-19

    There is no safe concentration of radon gas, but guideline values provide threshold concentrations that are used to map areas at higher risk. These values vary between different regions, countries, and organizations, which can lead to differential classification of risk. For example, the World Health Organization suggests a 100 Bq m(-3) value, while Health Canada recommends 200 Bq m(-3). Our objective was to describe how different thresholds characterized ecological radon risk and their visual association with lung cancer mortality trends in British Columbia, Canada. Eight threshold values between 50 and 600 Bq m(-3) were identified, and classes of radon vulnerability were defined based on whether the observed 95th percentile radon concentration was above or below each value. A balanced random forest algorithm was used to model vulnerability, and the results were mapped. We compared high vulnerability areas, their estimated populations, and differences in lung cancer mortality trends stratified by smoking prevalence and sex. Classification accuracy improved as the threshold concentrations decreased and the area classified as high vulnerability increased. The majority of the population lived within areas of lower vulnerability regardless of the threshold value. Thresholds as low as 50 Bq m(-3) were associated with higher lung cancer mortality, even in areas with low smoking prevalence. Temporal trends in lung cancer mortality were increasing for women, while decreasing for men. Radon contributes to lung cancer in British Columbia. The results of the study contribute evidence supporting the use of a reference level lower than the current guideline of 200 Bq m(-3) for the province.

  5. Line length dependence of threshold current density and driving force in eutectic SnPb and SnAgCu solder electromigration

    NASA Astrophysics Data System (ADS)

    Yoon, Min-Seung; Ko, Min-Ku; Kim, Bit-Na; Kim, Byung-Joon; Park, Yong-Bae; Joo, Young-Chang

    2008-04-01

    The relationship between the threshold current density and the critical line length in eutectic SnPb and SnAgCu electromigration was examined using solder lines with lengths ranging from 100 to 1000 μm. When the electron wind-force is balanced by the back-stress gradient force, the net flux of electromigration is zero; the current density and line length at which this occurs are defined as the threshold current density and the critical length, respectively. It was found that in SnAgCu electromigration the threshold current density showed good agreement with the 1/L dependence, whereas the threshold current densities of eutectic SnPb deviated from the 1/L dependence. The balance between the electron wind-force and the back-stress gradient force was the main factor determining the threshold product in SnAgCu electromigration. On the other hand, in the case of eutectic SnPb, the chemical driving force contributes as a back-flux force in addition to the back-stress gradient force. This chemical driving force arises from the nonequilibrium Pb concentration inside the Pb-rich phases between the cathode and anode during the electromigration procedure.
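
    The 1/L dependence discussed above is commonly summarized by the Blech condition; the standard form below is general background, not taken from this paper:

        \[
        (jL)_{\mathrm{crit}} = \frac{\Delta\sigma\,\Omega}{Z^{*}e\,\rho},
        \qquad
        j_{\mathrm{th}}(L) = \frac{(jL)_{\mathrm{crit}}}{L} \propto \frac{1}{L},
        \]

    where Δσ is the maximum back-stress difference the line can sustain between cathode and anode, Ω the atomic volume, Z* the effective charge number, e the elementary charge, and ρ the resistivity. The abstract's argument is that for eutectic SnPb an additional chemical (concentration-gradient) driving force adds to the back-stress term, so the measured thresholds deviate from this simple 1/L form.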

  6. Calculation of photoionization cross section near auto-ionizing lines and magnesium photoionization cross section near threshold

    NASA Technical Reports Server (NTRS)

    Moore, E. N.; Altick, P. L.

    1972-01-01

    The research performed is briefly reviewed. A simple method was developed for the calculation of continuum states of atoms when autoionization is present. The method was employed to give the first theoretical cross section for beryllium and magnesium; the results indicate that the values used previously at threshold were sometimes seriously in error. These threshold values have potential applications in astrophysical abundance estimates.

  7. The Montreal Cognitive Assessment and the mini-mental state examination as screening instruments for cognitive impairment: item analyses and threshold scores.

    PubMed

    Damian, Anne M; Jacobson, Sandra A; Hentz, Joseph G; Belden, Christine M; Shill, Holly A; Sabbagh, Marwan N; Caviness, John N; Adler, Charles H

    2011-01-01

    To perform an item analysis of the Montreal Cognitive Assessment (MoCA) versus the Mini-Mental State Examination (MMSE) in the prediction of cognitive impairment, and to examine the characteristics of different MoCA threshold scores. 135 subjects enrolled in a longitudinal clinicopathologic study were administered the MoCA by a single physician and the MMSE by a trained research assistant. Subjects were classified as cognitively impaired or cognitively normal based on independent neuropsychological testing. 89 subjects were found to be cognitively normal, and 46 cognitively impaired (20 with dementia, 26 with mild cognitive impairment). The MoCA was superior in both sensitivity and specificity to the MMSE, although not all MoCA tasks were of equal predictive value. A MoCA threshold score of 26 had a sensitivity of 98% and a specificity of 52% in this population. In a population with a 20% prevalence of cognitive impairment, a threshold of 24 was optimal (negative predictive value 96%, positive predictive value 47%). This analysis suggests the potential for creating an abbreviated MoCA. For screening in primary care, the MoCA threshold of 26 appears optimal. For testing in a memory disorders clinic, a lower threshold has better predictive value. Copyright © 2011 S. Karger AG, Basel.
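
    The predictive-value arithmetic behind such threshold choices is simple to reproduce; the sketch below uses the threshold-26 operating point quoted above (sensitivity 0.98, specificity 0.52) at an assumed 20% prevalence, and it will not reproduce the threshold-24 figures, which correspond to a different operating point:

        # PPV and NPV from sensitivity, specificity, and prevalence.
        def predictive_values(sensitivity, specificity, prevalence):
            tp = sensitivity * prevalence
            fn = (1 - sensitivity) * prevalence
            fp = (1 - specificity) * (1 - prevalence)
            tn = specificity * (1 - prevalence)
            return tp / (tp + fp), tn / (tn + fn)     # PPV, NPV

        ppv, npv = predictive_values(0.98, 0.52, 0.20)
        print(f"PPV={ppv:.2f}, NPV={npv:.2f}")        # PPV approx 0.34, NPV approx 0.99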

  8. Otoliths - Accelerometer and seismometer; Implications in Vestibular Evoked Myogenic Potential (VEMP).

    PubMed

    Grant, Wally; Curthoys, Ian

    2017-09-01

    Vestibular otolithic organs are recognized as transducers of head acceleration, and they function as such up to their corner frequency or undamped natural frequency. It is well recognized that these organs respond to frequencies above their corner frequency, up to the 2-3 kHz range (Curthoys et al., 2016). A mechanics model for the transduction of these organs is developed that predicts the response below the undamped natural frequency as an accelerometer and above that frequency as a seismometer. The model is converted to a transfer function using hair cell bundle deflection. Measured threshold acceleration stimuli are used along with threshold deflections to obtain threshold transfer function values. These are compared to the model-predicted values, both below and above the undamped natural frequency. Threshold deflection values are adjusted to match the model transfer function. The resulting threshold deflection values were well within measured threshold bundle deflection ranges. Vestibular Evoked Myogenic Potential (VEMP) testing today routinely uses stimulus frequencies of 500 and 1000 Hz, and the otoliths have been established incontrovertibly by clinical and neural evidence as the stimulus source. A mechanism for stimulation at these frequencies, above the undamped natural frequency of the otoliths, is presented in which the otoliths operate in a seismometer mode of response for VEMP transduction. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Carrier mobility in mesoscale heterogeneous organic materials: Effects of crystallinity and anisotropy on efficient charge transport

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hajime; Shirasawa, Raku; Nakamoto, Mitsunori; Hattori, Shinnosuke; Tomiya, Shigetaka

    2017-07-01

    Charge transport in the mesoscale bulk heterojunctions (BHJs) of organic photovoltaic devices (OPVs) is studied using multiscale simulations in combination with molecular dynamics, the density functional theory, the molecular-level kinetic Monte Carlo (kMC) method, and the coarse-grained kMC method, which was developed to estimate mesoscale carrier mobility. The effects of the degree of crystallinity and the anisotropy of the conductivity of donors on hole mobility are studied for BHJ structures that consist of crystalline and amorphous pentacene grains that act as donors and amorphous C60 grains that act as acceptors. We find that the hole mobility varies dramatically with the degree of crystallinity of pentacene because it is largely restricted by a low-mobility amorphous region that occurs in the hole transport network. It was also found that the percolation threshold of crystalline pentacene is relatively high at approximately 0.6. This high percolation threshold is attributed to the 2D-like conductivity of crystalline pentacene, and the threshold is greatly improved to a value of approximately 0.3 using 3D-like conductive donors. We propose essential guidelines to show that it is critical to increase the degree of crystallinity and develop 3D conductive donors for efficient hole transport through percolative networks in the BHJs of OPVs.

  10. Phase-resolved analysis of the susceptibility of pinned spiral waves to far-field pacing in a two-dimensional model of excitable media

    PubMed Central

    Bittihn, Philip; Squires, Amgad; Luther, Gisa; Bodenschatz, Eberhard; Krinsky, Valentin; Parlitz, Ulrich; Luther, Stefan

    2010-01-01

    Life-threatening cardiac arrhythmias are associated with the existence of stable and unstable spiral waves. Termination of such complex spatio-temporal patterns by local control is substantially limited by anchoring of spiral waves at natural heterogeneities. Far-field pacing (FFP) is a new local control strategy that has been shown to be capable of unpinning waves from obstacles. In this article, we investigate in detail the FFP unpinning mechanism for a single rotating wave pinned to a heterogeneity. We identify qualitatively different phase regimes of the rotating wave showing that the concept of vulnerability is important but not sufficient to explain the failure of unpinning in all cases. Specifically, we find that a reduced excitation threshold can lead to the failure of unpinning, even inside the vulnerable window. The critical value of the excitation threshold (below which no unpinning is possible) decreases for higher electric field strengths and larger obstacles. In contrast, for a high excitation threshold, the success of unpinning is determined solely by vulnerability, allowing for a convenient estimation of the unpinning success rate. In some cases, we also observe phase resetting in discontinuous phase intervals of the spiral wave. This effect is important for the application of multiple stimuli in experiments. PMID:20368243

  11. Amending the W* Velocity Scale for Surface Layer, Entrainment Zone, and Baroclinic Shear in Mixed Forced/Free Turbulent Convection

    DTIC Science & Technology

    1992-03-30

    transitionally turbulent by nature. Thus, we expect Rh to fluctuate about some critical threshold turbulence value, Rch. Rch is much larger than the 1/4...the EZ shear results from turning the wind into the v direction. So for a mature mid-latitude BL with u* ≈ 0.4 m s⁻¹, f ≈ 10⁻⁴ s⁻¹, zi ≈ 10³ m, θ ≈ 300 K, and...will diminish later as zi becomes large. If we require more accuracy, we can couple eqns. (16, 18, 19, 20, and 26) with the corresponding equations for We and Rh.

  12. Threshold-based insulin-pump interruption for reduction of hypoglycemia.

    PubMed

    Bergenstal, Richard M; Klonoff, David C; Garg, Satish K; Bode, Bruce W; Meredith, Melissa; Slover, Robert H; Ahmann, Andrew J; Welsh, John B; Lee, Scott W; Kaufman, Francine R

    2013-07-18

    The threshold-suspend feature of sensor-augmented insulin pumps is designed to minimize the risk of hypoglycemia by interrupting insulin delivery at a preset sensor glucose value. We evaluated sensor-augmented insulin-pump therapy with and without the threshold-suspend feature in patients with nocturnal hypoglycemia. We randomly assigned patients with type 1 diabetes and documented nocturnal hypoglycemia to receive sensor-augmented insulin-pump therapy with or without the threshold-suspend feature for 3 months. The primary safety outcome was the change in the glycated hemoglobin level. The primary efficacy outcome was the area under the curve (AUC) for nocturnal hypoglycemic events. Two-hour threshold-suspend events were analyzed with respect to subsequent sensor glucose values. A total of 247 patients were randomly assigned to receive sensor-augmented insulin-pump therapy with the threshold-suspend feature (threshold-suspend group, 121 patients) or standard sensor-augmented insulin-pump therapy (control group, 126 patients). The changes in glycated hemoglobin values were similar in the two groups. The mean AUC for nocturnal hypoglycemic events was 37.5% lower in the threshold-suspend group than in the control group (980 ± 1200 mg per deciliter [54.4 ± 66.6 mmol per liter] × minutes vs. 1568 ± 1995 mg per deciliter [87.0 ± 110.7 mmol per liter] × minutes, P<0.001). Nocturnal hypoglycemic events occurred 31.8% less frequently in the threshold-suspend group than in the control group (1.5 ± 1.0 vs. 2.2 ± 1.3 per patient-week, P<0.001). The percentages of nocturnal sensor glucose values of less than 50 mg per deciliter (2.8 mmol per liter), 50 to less than 60 mg per deciliter (3.3 mmol per liter), and 60 to less than 70 mg per deciliter (3.9 mmol per liter) were significantly reduced in the threshold-suspend group (P<0.001 for each range). After 1438 instances at night in which the pump was stopped for 2 hours, the mean sensor glucose value was 92.6 ± 40.7 mg per deciliter (5.1 ± 2.3 mmol per liter). Four patients (all in the control group) had a severe hypoglycemic event; no patients had diabetic ketoacidosis. This study showed that over a 3-month period the use of sensor-augmented insulin-pump therapy with the threshold-suspend feature reduced nocturnal hypoglycemia, without increasing glycated hemoglobin values. (Funded by Medtronic MiniMed; ASPIRE ClinicalTrials.gov number, NCT01497938.).

  13. A maximally selected test of symmetry about zero.

    PubMed

    Laska, Eugene; Meisner, Morris; Wanderling, Joseph

    2012-11-20

    The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value is below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon Signed Rank Test and the Sign Test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon Signed Rank Test for symmetric normal distributions with nonzero medians and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre and post safety measures are obtained, then under the null hypothesis, the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment. The discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was utilized on data from an on-road driving study performed to determine if a hypnotic, a drug used to promote sleep, has next day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
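    A minimal sketch of the statistic as described above, under one reading of the abstract: for each data-driven threshold, observations whose absolute value falls at or below it are discarded, a McNemar-type statistic compares the counts of positive and negative remaining values, and the maximum over thresholds is the test statistic. The function name and the use of plain NumPy are illustrative, and p-values would still require the exact distribution tabulated by the authors:

```python
import numpy as np

def maximally_selected_mcnemar(x):
    """Largest McNemar-type statistic over thresholds defined by the data.

    For each threshold c taken from the sorted absolute values (plus zero),
    keep observations with |x| > c and compute (n_pos - n_neg)^2 / (n_pos + n_neg).
    """
    x = np.asarray(x, dtype=float)
    thresholds = np.concatenate(([0.0], np.sort(np.abs(x))[:-1]))
    best = 0.0
    for c in thresholds:
        kept = x[np.abs(x) > c]
        n_pos = np.sum(kept > 0)
        n_neg = np.sum(kept < 0)
        if n_pos + n_neg == 0:
            continue
        stat = (n_pos - n_neg) ** 2 / (n_pos + n_neg)
        best = max(best, float(stat))
    return best

# Example: pre-post safety differences with a few large positive outliers.
print(maximally_selected_mcnemar([-0.2, 0.1, -0.3, 0.2, 2.5, 3.1, 2.8]))
```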

  14. Desiccation and Mortality Dynamics in Seedlings of Different European Beech (Fagus sylvatica L.) Populations under Extreme Drought Conditions.

    PubMed

    Bolte, Andreas; Czajkowski, Tomasz; Cocozza, Claudia; Tognetti, Roberto; de Miguel, Marina; Pšidová, Eva; Ditmarová, Ĺubica; Dinca, Lucian; Delzon, Sylvain; Cochard, Hervè; Ræbild, Anders; de Luis, Martin; Cvjetkovic, Branislav; Heiri, Caroline; Müller, Jürgen

    2016-01-01

    European beech (Fagus sylvatica L., hereafter beech), one of the major native tree species in Europe, is known to be drought sensitive. Thus, the identification of critical thresholds of drought impact intensity and duration is of high interest for assessing the adaptive potential of European beech to climate change in its native range. In a common garden experiment with one-year-old seedlings originating from central and marginal origins in six European countries (Denmark, Germany, France, Romania, Bosnia-Herzegovina, and Spain), we applied extreme drought stress and observed desiccation and mortality processes among the different populations and related them to plant water status (predawn water potential, ΨPD) and soil hydraulic traits. For the lethal drought assessment, we used a critical threshold of soil water availability that is reached when 50% mortality in seedling populations occurs (LD50SWA). We found significant population differences in LD50SWA (10.5-17.8%), and mortality dynamics that suggest a genetic difference in drought resistance between populations. The LD50SWA values correlate significantly with the mean growing season precipitation at population origins, but not with the geographic margins of beech range. Thus, beech range marginality may be due more to climatic conditions than to geographic range. The outcome of this study suggests that genetic variation has a major influence on the varying adaptive potential of the investigated populations.

  15. Low to high confinement transition theory of finite-beta drift-wave driven shear flow and its comparison with data from DIII-D

    NASA Astrophysics Data System (ADS)

    Guzdar, P. N.; Kleva, R. G.; Groebner, R. J.; Gohil, P.

    2004-03-01

    Shear flow stabilization of edge turbulence in tokamaks has been the accepted paradigm for the improvement in confinement observed in high (H) confinement mode plasmas. Results on the generation of zonal flow and fields in finite β plasmas are presented. This theory yields a criterion for bifurcation from low to high (L-H) confinement mode, proportional to Te/√Ln , where Te is the electron temperature and Ln is the density scale-length at the steepest part of the density gradient. When this parameter exceeds a critical value (mostly determined by the strength of the toroidal magnetic field), the transition occurs. The predicted threshold based on this parameter shows good agreement with edge measurements on discharges undergoing L-H transitions in DIII-D [J. L. Luxon, R. Anderson, F. Batty et al., in Proceedings of the 11th Conference on Plasma Physics and Controlled Fusion Research, 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159]. The observed differences in the transitions with the reversal of the toroidal magnetic field are reconciled in terms of this critical parameter due to the differences in the density gradient scale-lengths in the edge. The theory also provides a possible explanation for lowered threshold power, pellet injection H modes in DIII-D, thereby providing a unified picture of the varied observations on the L-H transition.

  16. Using change-point models to estimate empirical critical loads for nitrogen in mountain ecosystems.

    PubMed

    Roth, Tobias; Kohli, Lukas; Rihm, Beat; Meier, Reto; Achermann, Beat

    2017-01-01

    To protect ecosystems and their services, the critical load concept has been implemented under the framework of the Convention on Long-range Transboundary Air Pollution (UNECE) to develop effects-oriented air pollution abatement strategies. Critical loads are thresholds below which damaging effects on sensitive habitats do not occur according to current knowledge. Here we use change-point models applied in a Bayesian context to overcome some of the difficulties when estimating empirical critical loads for nitrogen (N) from empirical data. We tested the method using simulated data with varying sample sizes, varying effects of confounding variables, and with varying negative effects of N deposition on species richness. The method was applied to the national-scale plant species richness data from mountain hay meadows and (sub)alpine scrubs sites in Switzerland. Seven confounding factors (elevation, inclination, precipitation, calcareous content, and aspect, as well as indicator values for humidity and light) were selected based on earlier studies examining numerous environmental factors to explain Swiss vascular plant diversity. The estimated critical load confirmed the existing empirical critical load of 5-15 kg N ha⁻¹ yr⁻¹ for (sub)alpine scrubs, while for mountain hay meadows the estimated critical load was at the lower end of the current empirical critical load range. Based on these results, we suggest narrowing the critical load range for mountain hay meadows to 10-15 kg N ha⁻¹ yr⁻¹. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Digital audio watermarking using moment-preserving thresholding

    NASA Astrophysics Data System (ADS)

    Choi, DooSeop; Jung, Hae Kyung; Choi, Hyuk; Kim, Taejeong

    2007-09-01

    The Moment-Preserving Thresholding (MPT) technique for digital images has been used in digital image processing for decades, especially in image binarization and image compression. Its main strength is that the binary values that the MPT produces, called representative values, are usually unaffected when the signal being thresholded goes through a signal processing operation. The two representative values in MPT, together with the threshold value, are obtained by solving the system of preservation equations for the first, second, and third moments. Relying on this robustness of the representative values to the various signal processing attacks considered in the watermarking context, this paper proposes a new watermarking scheme for audio signals. The watermark is embedded in the root-sum-square (RSS) of the two representative values of each signal block using the quantization technique. As a result, the RSS values are modified by scaling the signal according to the watermark bit sequence under the constraint of inaudibility relative to the human psycho-acoustic model. We also address and suggest solutions to the problems of synchronization and power scaling attacks. Experimental results show that the proposed scheme maintains high audio quality and robustness to various attacks including MP3 compression, re-sampling, jittering, and DA/AD conversion.
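    For reference, the moment-preservation system mentioned above can be written in the following standard form, where the f_i are the N samples of the block, p_1 and p_2 are the fractions of samples assigned to the two representative values z_1 and z_2, and the threshold is placed so that it splits the block into those fractions (the notation is assumed here, not taken from the paper):

```latex
\[
\begin{aligned}
p_1 + p_2 &= 1,\\
p_1 z_1^{k} + p_2 z_2^{k} &= m_k \equiv \frac{1}{N}\sum_{i=1}^{N} f_i^{k},
\qquad k = 1, 2, 3.
\end{aligned}
\]
```

    Solving these four equations for p_1, p_2, z_1 and z_2, and then thresholding at the p_1-quantile of the block, reproduces the representative values whose robustness the watermarking scheme relies on.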

  18. Observation of Critical-Gradient Behavior in Alfvén-Eigenmode-Induced Fast-Ion Transport.

    PubMed

    Collins, C S; Heidbrink, W W; Austin, M E; Kramer, G J; Pace, D C; Petty, C C; Stagner, L; Van Zeeland, M A; White, R B; Zhu, Y B

    2016-03-04

    Experiments in the DIII-D tokamak show that fast-ion transport suddenly becomes stiff above a critical threshold in the presence of many overlapping small-amplitude Alfvén eigenmodes (AEs). The threshold is phase-space dependent and occurs when particle orbits become stochastic due to resonances with AEs. Above threshold, equilibrium fast-ion density profiles are unchanged despite increased drive, and intermittent fast-ion losses are observed. Fast-ion Dα spectroscopy indicates radially localized transport of the copassing population at radii that correspond to the location of midcore AEs. The observation of stiff fast-ion transport suggests that reduced models can be used to effectively predict alpha profiles, beam ion profiles, and losses to aid in the design of optimized scenarios for future burning plasma devices.

  19. A Financial Market Model Incorporating Herd Behaviour.

    PubMed

    Wray, Christopher M; Bishop, Steven R

    2016-01-01

    Herd behaviour in financial markets is a recurring phenomenon that exacerbates asset price volatility, and is considered a possible contributor to market fragility. While numerous studies investigate herd behaviour in financial markets, it is often considered without reference to the pricing of financial instruments or other market dynamics. Here, a trader interaction model based upon informational cascades in the presence of information thresholds is used to construct a new model of asset price returns that allows for both quiescent and herd-like regimes. Agent interaction is modelled using a stochastic pulse-coupled network, parametrised by information thresholds and a network coupling probability. Agents may possess either one or two information thresholds that, in each case, determine the number of distinct states an agent may occupy before trading takes place. In the case where agents possess two thresholds (labelled as the finite state-space model, corresponding to agents' accumulating information over a bounded state-space), and where coupling strength is maximal, an asymptotic expression for the cascade-size probability is derived and shown to follow a power law when a critical value of network coupling probability is attained. For a range of model parameters, a mixture of negative binomial distributions is used to approximate the cascade-size distribution. This approximation is subsequently used to express the volatility of model price returns in terms of the model parameter which controls the network coupling probability. In the case where agents possess a single pulse-coupling threshold (labelled as the semi-infinite state-space model corresponding to agents' accumulating information over an unbounded state-space), numerical evidence is presented that demonstrates volatility clustering and long-memory patterns in the volatility of asset returns. Finally, output from the model is compared to both the distribution of historical stock returns and the market price of an equity index option.

  20. Effects of passive and active movement on vibrotactile detection thresholds of the Pacinian channel and forward masking.

    PubMed

    Yıldız, Mustafa Z; Toker, İpek; Özkan, Fatma B; Güçlü, Burak

    2015-01-01

    We investigated the gating effect of passive and active movement on the vibrotactile detection thresholds of the Pacinian (P) psychophysical channel and forward masking. Previous work on gating mostly used electrocutaneous stimulation and did not allow focusing on tactile submodalities. Ten healthy adults participated in our study. Passive movement was achieved by swinging a platform, on which the participant's stimulated hand was attached, manually by a trained operator. The root-mean-square value of the movement speed was kept in a narrow range (slow: 10-20 cm/s, fast: 50-60 cm/s). Active movement was performed by the participant him-/herself using the same apparatus. The tactile stimuli consisted of 250-Hz sinusoidal mechanical vibrations, which were generated by a shaker mounted on the movement platform and applied to the middle fingertip. In the forward-masking experiments, a high-level masking stimulus preceded the test stimulus. Each movement condition was tested separately in a two-interval forced-choice detection task. Both passive and active movement caused a robust gating effect, that is, elevation of thresholds, in the fast speed range. Statistically significant change of thresholds was not found in slow movement conditions. Passive movement yielded higher thresholds than those measured during active movement, but this could not be confirmed statistically. On the other hand, the effect of forward masking was approximately constant as the movement condition varied. These results imply that gating depends on both peripheral and central factors in the P channel. Active movement may have some facilitatory role and produce less gating. Additionally, the results support the hypothesis regarding a critical speed for gating, which may be relevant for daily situations involving vibrations transmitted through grasped objects and for manual exploration.

  1. Reassessment of data used in setting exposure limits for hot particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baum, J.W.; Kaurin, D.G.

    1991-05-01

    A critical review and a reassessment of data reviewed in NCRP Report 106 on effects of "hot particles" on the skin of pigs, monkeys, and humans were made. Our analysis of the data of Forbes and Mikhail on effects from activated UC₂ particles, ranging in diameter from 144 μm to 328 μm, led to the formulation of a new model for prediction of both the threshold for acute ulceration and the ulcer diameter. In this model, a dose of 27 Gy at a depth of 1.33 mm in tissue will result in an acute ulcer with a diameter determined by the radius over which this dose (at 1.33-mm depth) extends. Application of the model to the Forbes-Mikhail data yielded a "threshold" (5% probability) of 6 × 10⁹ beta particles from a point source of mixed fission product beta particles on skin, or about 10¹⁰ beta particles from Sr-Y-90, since few of the Sr-90 beta particles reach this depth. The data of Hopewell et al. for their 1 mm Sr-Y-90 exposures were also analyzed with the above model and yielded a predicted threshold of 2 × 10¹⁰ Sr-Y-90 beta particles for a point source on skin. The dosimetry values employed in this latter analysis are 3.3 times higher than those previously reported for this source. An alternate interpretation of the Forbes and Mikhail data, derived from linear plots of the data, is that the threshold depends strongly on particle size, with the smaller particles yielding a much lower threshold and a smaller minimum-size ulcer. Additional animal exposures are planned to distinguish between the above explanations. 17 refs., 3 figs., 3 tabs.

  2. A new function for estimating local rainfall thresholds for landslide triggering

    NASA Astrophysics Data System (ADS)

    Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.

    2009-04-01

    The widely used power law for establishing rainfall thresholds for triggering of landslides was first proposed by N. Caine in 1980. The most up-to-date global thresholds, presented by F. Guzzetti and co-workers in 2008, were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α×D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure their optimum performance, especially when they are used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner, by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine-type threshold only when that reference value is exceeded. Other authors have instead calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1×An^α2)×D^β, where I and D are defined as previously. The expression in parentheses is equivalent to Caine's α parameter; α1, α2 and β are parameters estimated for the threshold, and An is the n-day cumulative antecedent rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are assigned either an nL or yL flag using a randomization procedure. (2) Establish a set of values of ni (e.g. 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each ni value, obtain the antecedent accumulated precipitation in ni days, Ani. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a given value of ni, generate confusion matrices for the N storms at each grid point and compute an evaluation metric parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for the full set of ni values. (7) From the 3D grid corresponding to each ni value, search for the optimum grid point EMPopti (global minimum or maximum of the metric). (8) Search for the optimum value of ni in the space ni vs EMPopti. (9) The threshold is defined by the value of ni obtained in the previous step and the corresponding values of α1, α2 and β.
The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing at least 300 people.
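    A minimal computational sketch of the grid-search calibration outlined in steps (4)-(7), assuming each storm is described by its mean intensity I, duration D, antecedent rainfall An for one fixed n, and an observed triggering flag, and using accuracy as the evaluation metric EMP; all names, grids and the synthetic data are illustrative, not values from the study:

```python
import itertools
import numpy as np

def calibrate_threshold(I, D, An, triggered, alpha1_grid, alpha2_grid, beta_grid):
    """Grid-search alpha1, alpha2, beta of I = (alpha1 * An**alpha2) * D**beta.

    A storm is predicted as landslide-triggering when its observed mean
    intensity lies above the threshold curve. Returns the parameter triple
    maximising accuracy (the EMP used here) and that accuracy.
    """
    best = (None, -1.0)
    for a1, a2, b in itertools.product(alpha1_grid, alpha2_grid, beta_grid):
        I_thr = (a1 * An**a2) * D**b          # threshold intensity per storm
        predicted = I > I_thr                 # above the curve -> predicted triggering
        accuracy = np.mean(predicted == triggered)
        if accuracy > best[1]:
            best = ((a1, a2, b), accuracy)
    return best

# Illustrative use on synthetic storms; in practice this is repeated for each
# candidate antecedent window n_i and the window with the best EMP is kept.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_storms = 200
    D = rng.uniform(1, 72, n_storms)                  # duration, h
    An = rng.uniform(0, 150, n_storms)                # antecedent rainfall, mm
    I = rng.uniform(0.5, 30, n_storms)                # mean intensity, mm/h
    triggered = I > (0.5 * An**0.3) * D**-0.4         # synthetic "observations"
    params, acc = calibrate_threshold(
        I, D, An, triggered,
        alpha1_grid=np.linspace(0.1, 2.0, 20),
        alpha2_grid=np.linspace(0.0, 1.0, 11),
        beta_grid=np.linspace(-1.0, 0.0, 11))
    print(params, acc)
```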

  3. Effects of pulse duration on magnetostimulation thresholds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr; Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800; National Magnetic Resonance Research Center

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be the number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing a homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations. Results: The magnetostimulation limits decreased with increasing pulse duration (T_pulse). For T_pulse < 18 ms, the thresholds were significantly higher than at the longest pulse durations (p < 0.01, paired Wilcoxon signed-rank test). The normalized magnetostimulation threshold (B_Norm) vs duration curves at all three frequencies agreed almost identically, indicating that the observed effect is independent of the operating frequency. At the shortest pulse duration (T_pulse ≈ 2 ms), the thresholds were approximately 24% higher than at the asymptotes. The thresholds decreased to within 4% of their asymptotic values for T_pulse > 20 ms. These trends were well characterized (R² = 0.78) by a stretched exponential function given by B_Norm = 1 + α·exp[−(T_pulse/β)^γ], where the fitted parameters were α = 0.44, β = 4.32, and γ = 0.60. Conclusions: This work shows for the first time that the magnetostimulation thresholds decrease with increasing pulse duration, and that this effect is independent of the operating frequency. Normalized threshold vs duration trends are almost identical for a 20-fold range of frequencies: the thresholds are significantly higher at short pulse durations and settle to within 4% of their asymptotic values for durations longer than 20 ms. These results emphasize the importance of matching the human-subject experiments to the imaging conditions of a particular setup.
Knowing the dependence of the safety limits on all contributing factors is critical for increasing the time-efficiency of imaging systems that utilize time-varying magnetic fields.
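    A small sketch that evaluates the reported stretched-exponential fit using the published parameters (α = 0.44, β = 4.32, γ = 0.60); treating β as being in milliseconds is my assumption, since the abstract works in ms:

```python
import numpy as np

def normalized_threshold(t_pulse_ms, alpha=0.44, beta=4.32, gamma=0.60):
    """Normalized magnetostimulation threshold B_Norm = 1 + alpha*exp(-(T/beta)^gamma)."""
    t = np.asarray(t_pulse_ms, dtype=float)
    return 1.0 + alpha * np.exp(-(t / beta) ** gamma)

# At 2 ms the fit gives roughly a 24% elevation over the asymptote, and by
# about 20 ms it is within a few percent of 1, as described in the record.
print(normalized_threshold([2.0, 20.0, 125.0]))
```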

  4. Extreme summer temperatures in Iberia: health impacts and associated synoptic conditions

    NASA Astrophysics Data System (ADS)

    García-Herrera, R.; Díaz, J.; Trigo, R. M.; Hernández, E.

    2005-02-01

    This paper examines the effect of extreme summer temperatures on daily mortality in two large cities of Iberia: Lisbon (Portugal) and Madrid (Spain). Daily mortality and meteorological variables are analysed using the same methodology based on Box-Jenkins models. Results reveal that in both cases there is a triggering effect on mortality when maximum daily temperature exceeds a given threshold (34°C in Lisbon and 36°C in Madrid). The impact of the most intense heat events is very similar for both cities, with significant mortality values occurring up to 3 days after the temperature threshold has been surpassed. This impact is measured as the percentage increase in mortality associated with a 1°C increase above the threshold temperature. In this respect, Lisbon shows a higher impact, 31%, as compared with Madrid at 21%. The difference can be attributed to demographic and socio-economic factors. Furthermore, the longer life span of Iberian women is critical in explaining why, in both cities, females are more susceptible than males to heat effects, with almost double the mortality impact. The analysis of Sea Level Pressure (SLP), 500 hPa geopotential height and temperature fields reveals that, despite being relatively close to each other, Lisbon and Madrid have relatively different synoptic circulation anomalies associated with their respective extreme summer temperature days. The SLP field reveals higher anomalies for Lisbon, but extending over a smaller area. Extreme values in Madrid seem to require a more westerly location of the Azores High, embracing a greater area over Europe, even if it is not as deep as for Lisbon. The origin of the hot and dry air masses that usually lead to extreme heat days in both cities is located in Northern Africa. However, while Madrid maxima require wind blowing directly from the south, transporting heat from Southern Spain and Northern Africa, Lisbon maxima occur under more easterly conditions, when Northern African air flows over the central Iberian plateau, which had been previously heated.

  5. Ductile Tearing of Thin Aluminum Plates Under Blast Loading. Predictions with Fully Coupled Models and Biaxial Material Response Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo; Gullerud, Arne S.; Haulenbeek, Kimberly K.

    2015-06-01

    The work presented in this report concerns the response and failure of thin 2024-T3 aluminum alloy circular plates subjected to a blast load produced by the detonation of a nearby spherical charge. The plates were fully clamped around the circumference and the explosive charge was located centrally with respect to the plate. The principal objective was to conduct a numerical model validation study by comparing the results of predictions to experimental measurements of plate deformation and failure for charges with masses in the vicinity of the threshold between no tearing and tearing of the plates. Stereo digital image correlation data were acquired for all tests to measure the deflection and strains in the plates. The size of the virtual strain gage in the measurements, however, was relatively large, so the strain measurements have to be interpreted accordingly as lower bounds on the actual strains in the plate and on the severity of the strain gradients. A fully coupled interaction model between the blast and the deflection of the structure was considered. The results of the validation exercise indicated that the model predicted the deflection of the plates reasonably accurately, as well as the distribution of strain on the plate. The estimation of the threshold charge based on a critical value of equivalent plastic strain measured in a bulge test, however, was not accurate, in spite of efforts to determine the failure strain of the aluminum sheet under biaxial stress conditions. Further work is needed to be able to predict plate tearing with some degree of confidence. Given the current technology, at least one test under the actual blast conditions in which the plate tears is needed to calibrate the value of equivalent plastic strain at which failure occurs in the numerical model. Once that has been determined, the question of the explosive mass value at the threshold could be addressed with more confidence.

  6. Dependence of Interfacial Excess on the Threshold Value of the Isoconcentration Surface

    NASA Technical Reports Server (NTRS)

    Yoon, Kevin E.; Noebe, Ronald D.; Hellman, Olof C.; Seidman, David N.

    2004-01-01

    The proximity histogram (or proxigram for short) is used for analyzing data collected by a three-dimensional atom probe microscope. The interfacial excess of Re (2.41 +/- 0.68 atoms/sq nm) is calculated by employing a proxigram in a completely geometrically independent way for gamma/gamma' interfaces in Rene N6, a third-generation single-crystal Ni-based superalloy. A possible dependence of interfacial excess on the variation of the threshold value of an isoconcentration surface is investigated using the data collected for Rene N6 alloy. It is demonstrated that the dependence of the interfacial excess value on the threshold value of the isoconcentration surface is weak.

  7. Economic evaluation and cost-effectiveness thresholds: signals to firms and implications for R & D investment and innovation.

    PubMed

    Vernon, John A; Goldberg, Robert; Golec, Joseph

    2009-01-01

    In this article we describe how reimbursement cost-effectiveness thresholds, per unit of health benefit, whether set explicitly or observed implicitly via historical reimbursement decisions, serve as a signal to firms about the commercial viability of their R&D projects (including candidate products for in-licensing). Traditional finance methods for R&D project valuations, such as net present value analyses (NPV), incorporate information from these payer reimbursement signals to help determine which R&D projects should be continued and which should be terminated (in the case of the latter because they yield an NPV < 0). Because the influence these signals have for firm R&D investment decisions is so significant, we argue that it is important for reimbursement thresholds to reflect the economic value of the unit of health benefit being considered for reimbursement. Thresholds set too low (below the economic value of the health benefit) will result in R&D investment levels that are too low relative to the economic value of R&D (on the margin). Similarly, thresholds set too high (above the economic value of the health benefit) will result in inefficiently high levels of R&D spending. The US in particular, which represents approximately half of the global pharmaceutical market (based on sales), and which seems poised to begin undertaking cost effectiveness in a systematic way, needs to exert caution in setting policies that explicitly or implicitly establish cost-effectiveness reimbursement thresholds for healthcare products and technologies, such as pharmaceuticals.

  8. A fuzzy optimal threshold technique for medical images

    NASA Astrophysics Data System (ADS)

    Thirupathi Kannan, Balaji; Krishnasamy, Krishnaveni; Pradeep Kumar Kenny, S.

    2012-01-01

    A new fuzzy-based thresholding method for medical images, especially cervical cytology images having blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold value and used for segmentation. The proposed technique is tested on various cervical cytology images having blob or mosaic structures, compared with various existing algorithms, and shown to perform better than the existing algorithms.

  9. Low-threshold field emission in planar cathodes with nanocarbon materials

    NASA Astrophysics Data System (ADS)

    Zhigalov, V.; Petukhov, V.; Emelianov, A.; Timoshenkov, V.; Chaplygin, Yu.; Pavlov, A.; Shamanaev, A.

    2016-12-01

    Nanocarbon materials are of great interest as field emission cathodes due to their low threshold voltage. In this work, the current-voltage characteristics of nanocarbon electrodes were studied. Low-threshold emission was found in planar samples where field enhancement is negligible (<10). Electron work function values calculated from Fowler-Nordheim theory are anomalously low (<1 eV) and conflict with the directly measured work function values of the fabricated planar samples (4.1-4.4 eV). The non-applicability of Fowler-Nordheim theory to these nanocarbon materials was thus confirmed. The reasons for low-threshold emission in nanocarbon materials are discussed.
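    For context, the work-function values mentioned above are conventionally extracted from the slope of a Fowler-Nordheim plot; a commonly used simplified form of the relation is given below as background (it is not quoted in the record):

```latex
\[
J = \frac{A\,(\beta E)^{2}}{\phi}\,
\exp\!\left(-\frac{B\,\phi^{3/2}}{\beta E}\right),
\qquad
\ln\frac{J}{E^{2}} = \mathrm{const} - \frac{B\,\phi^{3/2}}{\beta}\,\frac{1}{E},
\]
```

    Here J is the emission current density, E the applied macroscopic field, φ the work function, β the field-enhancement factor, and A ≈ 1.54 × 10^-6 A eV V^-2, B ≈ 6.83 × 10^9 eV^-3/2 V m^-1. With β close to 1, as in the planar samples above, the shallow slopes associated with low-threshold emission translate into the unphysically low apparent work functions reported, which is the basis for questioning the applicability of the theory.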

  10. Polylogarithmic equilibrium treatment of molecular aggregation and critical concentrations.

    PubMed

    Michel, Denis; Ruelle, Philippe

    2017-02-15

    A full equilibrium treatment of molecular aggregation is presented for prototypes of 1D and 3D aggregates, with and without nucleation. By setting aside complex kinetic parameters such as aggregate size-dependent diffusion, the equilibrium treatment allows us to directly predict time-independent quantities such as critical concentrations. The relationships between the macroscopic equilibrium constants for different paths are first established by statistical corrections so as to comply with the detailed balance constraints imposed by nucleation, and the composition of the mixture resulting from homogeneous aggregation is then analyzed using a polylogarithmic function. Several critical concentrations are distinguished: the residual monomer concentration at equilibrium (RMC) and the critical nucleation concentration (CNC), which is the threshold concentration of total subunits necessary for initiating aggregation. When increasing the concentration of total subunits, the RMC converges more strongly to its asymptotic value, the equilibrium constant of depolymerization, for 3D aggregates and in the case of nucleation. The CNC depends only moderately on the number of subunits in the nucleus, but increases sharply with the difference between the equilibrium constants of polymerization and nucleation. As the RMC and CNC can be determined numerically but not analytically, ansatz equations connecting them to thermodynamic parameters are proposed.

  11. Diagnostics of Cold-Sprayed Particle Velocities Approaching Critical Deposition Conditions

    NASA Astrophysics Data System (ADS)

    Mauer, G.; Singh, R.; Rauwald, K.-H.; Schrüfer, S.; Wilson, S.; Vaßen, R.

    2017-10-01

    In cold spraying, the impact particle velocity plays a key role in successful deposition. It is well known that only those particles whose impact velocity exceeds a particular threshold can achieve successful bonding. This critical velocity depends on the thermomechanical properties of the impacting particles at the impact temperature, which in turn depends not only on the gas temperature in the torch but also on the stand-off distance and gas pressure. In the past, some semiempirical approaches have been proposed to estimate particle impact and critical velocities. Beyond that, only a limited number of studies on particle velocity measurements in cold spraying are available. In the present work, particle velocity measurements were performed using a cold spray meter, in which a laser beam illuminates the particles to ensure sufficiently detectable radiant signal intensities. Measurements were carried out for INCONEL® alloy 718-type powders with different particle sizes. These experimental investigations mainly covered subcritical spray parameters for this material, in order to take a closer look at the conditions of initial deposition. The critical velocities were identified by evaluating the deposition efficiencies and correlating them with the measured particle velocity distributions. In addition, the experimental results were compared with values estimated by model calculations.

  12. Bayesian methods for estimating GEBVs of threshold traits

    PubMed Central

    Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q

    2013-01-01

    Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures for the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods in the accuracy of genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work showed that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ is recommended as the method of choice for GS of threshold traits. PMID:23149458

  13. Method and apparatus for analog pulse pile-up rejection

    DOEpatents

    De Geronimo, Gianluigi

    2013-12-31

    A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.
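    A minimal sketch of the accept/reject rule described above, assuming the "adjustable value" is simply the threshold-crossing time plus the configured delay (the abstract only states that it is formed from those two quantities) and that all times are expressed in the same units:

```python
def accept_pulse(threshold_crossing_time: float,
                 peak_occurrence_time: float,
                 delay_value: float) -> bool:
    """Return True (accept) if the peak occurs no later than the adjustable
    value; otherwise the pulse is rejected as piled-up."""
    adjustable_value = threshold_crossing_time + delay_value  # assumed combination
    return peak_occurrence_time <= adjustable_value

# Example: with a 2 us delay window, a peak 1.5 us after the threshold
# crossing is accepted, while one 3 us later is rejected.
print(accept_pulse(0.0, 1.5e-6, 2.0e-6), accept_pulse(0.0, 3.0e-6, 2.0e-6))
```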

  14. Method and apparatus for analog pulse pile-up rejection

    DOEpatents

    De Geronimo, Gianluigi

    2014-11-18

    A method and apparatus for pulse pile-up rejection are disclosed. The apparatus comprises a delay value application constituent configured to receive a threshold-crossing time value, and provide an adjustable value according to a delay value and the threshold-crossing time value; and a comparison constituent configured to receive a peak-occurrence time value and the adjustable value, compare the peak-occurrence time value with the adjustable value, indicate pulse acceptance if the peak-occurrence time value is less than or equal to the adjustable value, and indicate pulse rejection if the peak-occurrence time value is greater than the adjustable value.

  15. Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.

    PubMed

    Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John

    2018-03-01

    Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.

  16. Threshold-based segmentation of fluorescent and chromogenic images of microglia, astrocytes and oligodendrocytes in FIJI.

    PubMed

    Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una

    2018-02-01

    Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail on the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    PubMed

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT scans of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of the mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax, which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty-six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders, 10 had local recurrence, and of 16 partial or no responders, 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). ROC curve analysis revealed a cut-off value of 14.00 mL for MTV and 10.15 for SULmax. Three-year LRFS and DFS rates were significantly lower in patients with MTV ≥ 14.00 mL (p=0.026 and p=0.018, respectively) and SULmax ≥ 10.15 (p=0.017 and p=0.022, respectively). SULmax did not have a significant predictive value for OS, whereas MTV did (p=0.025). Adaptive threshold-based MTV and SULmax could have a role in predicting local control and survival in head and neck cancer patients. Copyright © 2015 Elsevier Inc. All rights reserved.
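    A sketch of how an adaptive-threshold MTV could be computed from a SUV volume, assuming the tumor has already been isolated in a 3-D array of SUV values with a known voxel volume; the candidate threshold levels (10% to 60% of SUVmax in 5% steps) follow the abstract, everything else is illustrative:

```python
import numpy as np

def mtv_at_fraction(suv: np.ndarray, fraction: float, voxel_volume_ml: float) -> float:
    """Metabolic tumor volume (mL): voxels whose SUV is at least fraction*SUVmax."""
    threshold = fraction * suv.max()
    return float(np.count_nonzero(suv >= threshold) * voxel_volume_ml)

def candidate_mtvs(suv: np.ndarray, voxel_volume_ml: float) -> dict:
    """MTV for threshold levels from 10% to 60% of SUVmax in 5% steps."""
    return {round(float(f), 2): mtv_at_fraction(suv, f, voxel_volume_ml)
            for f in np.arange(0.10, 0.601, 0.05)}
```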

  18. The impact of manual threshold selection in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan

    2017-04-01

    Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.

  19. Organization of the Tropical Convective Cloud Population by Humidity and the Critical Transition to Heavy Precipitation

    NASA Astrophysics Data System (ADS)

    Igel, M.

    2015-12-01

    The tropical atmosphere exhibits an abrupt statistical switch between non-raining and heavily raining states as column moisture increases, across a wide range of length scales. Deep convection occurs at values of column humidity above the transition point and induces drying of moist columns. Using a 1 km resolution, large-domain cloud-resolving model run in RCE, we make clear here for the first time how the entire tropical convective cloud population is affected by, and feeds back on, the pickup in heavy precipitation. Shallow convection can act to dry the low levels through weak precipitation or vertical redistribution of moisture, or to moisten toward a transition to deep convection. It is shown that not only can deep convection dehydrate the entire column, it can also dry just the lower layer through intense rain. In the latter case, deep stratiform cloud then forms and dries the upper layer through rain at rates anomalously high for its value of column humidity, until both the total column moisture falls below the critical transition point and the upper levels are cloud free. Thus, all major tropical cloud types are shown to respond strongly to the same critical phase-transition point. This mutual response represents a potentially strong organizational mechanism for convection, and the frequency of, and logical rules determining, physical evolutions between these convective regimes will be discussed. The precise value of the point in total column moisture at which the transition to heavy precipitation occurs is shown to result from two independent thresholds in lower-layer and upper-layer integrated humidity.

  20. Temporal and Spatial Development of dB/dt During Substorms

    NASA Astrophysics Data System (ADS)

    Weygand, J. M.; Chu, X.

    2017-12-01

    Ground induced currents (GICs) due to space weather are a threat to high-voltage power transmission systems. However, knowledge of ground conductivity is the largest source of error in the determination of GICs. A good proxy for GICs is dB/dt, obtained from the Bx and By components of the magnetic field fluctuations. It is known that dB/dt values associated with magnetic storms can reach dangerous levels for power transmission systems. It is also not uncommon for dB/dt values associated with substorms to exceed critical thresholds of 1.5 nT/s [Pulkkinen et al., 2011; 2013] and 5 nT/s [Molinski et al., 2000], yet the temporal and spatial changes of dB/dt associated with substorms, unlike those of storms, are not well understood. Using two-dimensional maps of dB/dt over North America and Greenland derived from the spherical elementary currents [Weygand et al., 2011], we investigate the temporal and spatial change of dB/dt for both a single substorm event and a two-dimensional superposed epoch analysis of many substorms. Both the single event and the statistical analysis show a sudden increase of dB/dt at substorm onset followed by an expansion poleward, westward, and eastward during the expansion phase. This temporal and spatial development of dB/dt resembles the temporal and spatial change of the auroral emissions. Substorm values of dB/dt peak shortly after the auroral onset time and, in at least one event, exceeded 6.5 nT/s for a non-storm-time substorm. In many of our 24 cases the area that exceeds the Pulkkinen et al. [2011; 2013] threshold of 1.5 nT/s extends over several million square kilometers, and after about 30 minutes the dB/dt values fall below the threshold level. These results address one of the goals of the Space Weather Action Plan, which is to establish benchmarks for space weather events and improve modeling and prediction of their impacts on infrastructure.
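    A minimal sketch of the dB/dt proxy described above, assuming regularly sampled horizontal components Bx and By in nT; the 1.5 nT/s level is the Pulkkinen et al. threshold cited in the record:

```python
import numpy as np

def horizontal_dbdt(bx: np.ndarray, by: np.ndarray, dt_s: float = 1.0) -> np.ndarray:
    """|dB/dt| of the horizontal field, in nT/s, from finite differences."""
    dbx = np.diff(bx) / dt_s
    dby = np.diff(by) / dt_s
    return np.hypot(dbx, dby)

def exceeds_threshold(bx, by, dt_s=1.0, threshold_nt_per_s=1.5):
    """Boolean mask of samples whose horizontal dB/dt exceeds the GIC proxy threshold."""
    return horizontal_dbdt(bx, by, dt_s) > threshold_nt_per_s
```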

  1. Robust crop and weed segmentation under uncontrolled outdoor illumination

    USDA-ARS?s Scientific Manuscript database

    A new machine vision for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
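    A sketch of the kind of pipeline the snippet outlines: excess-green conversion, a statistically derived threshold, and a median filter. The ExG = 2g − r − b form and the mean-plus-k-standard-deviations threshold are common choices assumed here, not details taken from the record:

```python
import numpy as np
from scipy.ndimage import median_filter

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Excess-green index ExG = 2g - r - b on chromaticity-normalized channels."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

def segment_vegetation(rgb: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Binary vegetation mask: ExG above mean + k*std, followed by median filtering."""
    exg = excess_green(rgb)
    threshold = exg.mean() + k * exg.std()
    return median_filter(exg > threshold, size=3)
```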

  2. Evaluation of nonesterified fatty acids and beta-hydroxybutyrate in transition dairy cattle in the northeastern United States: Critical thresholds for prediction of clinical diseases.

    PubMed

    Ospina, P A; Nydam, D V; Stokol, T; Overton, T R

    2010-02-01

    The objectives of this study were to 1) establish cow-level critical thresholds for serum concentrations of nonesterified fatty acids (NEFA) and beta-hydroxybutyrate (BHBA) to predict periparturient diseases [displaced abomasa (DA), clinical ketosis (CK), metritis and retained placenta, or any of these three], and 2) investigate the magnitude of the metabolites' association with these diseases within 30 d in milk. In a prospective cohort study of 100 freestall, total mixed ration-fed herds in the northeastern United States, blood samples were collected from approximately 15 prepartum and 15 different postpartum transition animals in each herd, for a total of 2,758 samples. Serum NEFA concentrations were measured in the prepartum group, and both NEFA and BHBA were measured in the postpartum group. The critical thresholds for NEFA or BHBA were evaluated with receiver operating characteristic (ROC) analysis for all diseases in both cohorts. The risk ratios (RR) of a disease outcome given NEFA or BHBA concentrations and other covariates were modeled with multivariable regression techniques, accounting for clustering of cows within herds. The NEFA critical threshold that predicted any of the 3 diseases was 0.29 mEq/L in the prepartum cohort and 0.57 mEq/L in the postpartum cohort. The critical threshold for serum BHBA in the postpartum cohort was 10 mg/dL, which predicted any of the 3 diseases. All RR with NEFA as a predictor of disease were >1.8; however, RR were greatest in animals sampled postpartum (e.g., RR for DA=9.7; 95% CI=4.2 to 22.4). All RR with BHBA as the predictor of disease were >2.3 (e.g., RR for DA=6.9; 95% CI=3.7 to 12.9). Although prepartum NEFA and postpartum BHBA were both significantly associated with development of clinical disease, postpartum serum NEFA concentration was most associated with the risk of developing DA, CK, metritis, or retained placenta during the first 30 d in milk. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
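    A small sketch of deriving a cow-level critical threshold with ROC analysis, assuming higher metabolite values indicate higher disease risk and that both outcome classes are present; maximizing Youden's J is one common criterion and is an assumption here, since the record does not state which criterion was used:

```python
import numpy as np

def roc_optimal_threshold(values: np.ndarray, diseased: np.ndarray) -> float:
    """Return the candidate cutoff that maximizes sensitivity + specificity - 1
    (Youden's J) for a metabolite whose higher values indicate higher risk.
    `diseased` is a boolean array; both classes must be represented."""
    best_threshold, best_j = None, -1.0
    for cutoff in np.unique(values):
        predicted = values >= cutoff
        tp = np.sum(predicted & diseased)
        fn = np.sum(~predicted & diseased)
        tn = np.sum(~predicted & ~diseased)
        fp = np.sum(predicted & ~diseased)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_threshold, best_j = cutoff, j
    return float(best_threshold)
```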

  3. Criticality in finite dynamical networks

    NASA Astrophysics Data System (ADS)

    Rohlf, Thimo; Gulbahce, Natali; Teuscher, Christof

    2007-03-01

    It has been shown analytically and experimentally that both random Boolean and random threshold networks show a transition from ordered to chaotic dynamics at a critical average connectivity Kc in the thermodynamic limit [1]. By looking at the statistical distributions of damage spreading (damage sizes), we go beyond this extensively studied mean-field approximation. We study the scaling properties of damage size distributions as a function of system size N and initial perturbation size d(t=0). We present numerical evidence that another characteristic point, Kd, exists for finite system sizes, where the expectation value of damage spreading in the network is independent of the system size N. Further, the probability of obtaining critical networks is investigated for a given system size and average connectivity k. Our results suggest that, for finite-size dynamical networks, the phase space structure is very complex and may not exhibit a sharp order-disorder transition. Finally, we discuss the implications of our findings for evolutionary processes and learning applied to networks that solve specific computational tasks. [1] Derrida, B. and Pomeau, Y. (1986), Europhys. Lett., 1, 45-49
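    A sketch of a damage-spreading measurement in a random threshold network, under common conventions that are assumptions here (±1 node states, ±1 weights on k·N randomly placed links, parallel updates with sign(0) mapped to −1):

```python
import numpy as np

def damage_spreading(n=200, k=2.0, steps=100, seed=0):
    """Evolve two copies of a random threshold network that differ in one
    flipped node and return the Hamming distance (damage size) after `steps`
    parallel updates of s_i(t+1) = sign(sum_j w_ij s_j(t))."""
    rng = np.random.default_rng(seed)
    w = np.zeros((n, n))
    links = rng.choice(n * n, size=int(k * n), replace=False)
    w.flat[links] = rng.choice([-1.0, 1.0], size=links.size)

    def step(s):
        field = w @ s
        return np.where(field > 0, 1.0, -1.0)   # sign with sign(0) -> -1

    s1 = rng.choice([-1.0, 1.0], size=n)
    s2 = s1.copy()
    s2[0] *= -1                                  # initial damage d(t=0) = 1 node
    for _ in range(steps):
        s1, s2 = step(s1), step(s2)
    return int(np.sum(s1 != s2))

# Averaging over many network realisations, and sweeping k, traces out the
# order-chaos behaviour of the damage size discussed above.
print(np.mean([damage_spreading(seed=s) for s in range(20)]))
```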

  4. High-carrier-density phase in LaTiO3/SrTiO3 superlattices

    NASA Astrophysics Data System (ADS)

    Park, Se Young; Rabe, Karin; Millis, Andrew

    2015-03-01

    We investigate superlattices composed of alternating layers of Mott insulating LaTiO3 and band insulating SrTiO3 from first principles, using the density functional theory plus U (DFT+U) method. For values of U above a critical threshold, we find that melting of the Mott-insulating phase can extend from the interface into the LaTiO3 layer, resulting in a sheet carrier density exceeding the density of 0.5 electrons per in-plane unit cell found in previous studies. The critical U for the melting transition is larger than the critical Coulomb correlation required for the insulating LaTiO3, suggesting the existence of a high sheet carrier density phase in LaTiO3/SrTiO3 superlattices. The effects of in-plane strain and varying layer thickness on the melting transition are discussed. For insulating superlattices, we study the strain and thickness dependence of the polarization and its relation to near-interface local atomic distortions. Support: DOE ER 046169, ONR N00014-11-0666.

  5. Vehicle response-based track geometry assessment using multi-body simulation

    NASA Astrophysics Data System (ADS)

    Kraft, Sönke; Causse, Julien; Coudert, Frédéric

    2018-02-01

    The assessment of the geometry of railway tracks is an indispensable requirement for safe rail traffic. Defects which represent a risk for the safety of the train have to be identified and the necessary measures taken. According to current standards, amplitude thresholds are applied to the track geometry parameters measured by recording cars. This geometry-based assessment has proved its value but suffers from the low correlation between the geometry parameters and the vehicle reactions. Experience shows that some defects leading to critical vehicle reactions are underestimated by this approach. The use of vehicle responses in the track geometry assessment process allows identifying critical defects and improving the maintenance operations. This work presents a vehicle response-based assessment method using multi-body simulation. The choice of the relevant operation conditions and the estimation of the simulation uncertainty are outlined. The defects are identified from exceedances of track geometry and vehicle response parameters. They are then classified using clustering methods and the correlation with vehicle response is analysed. The use of vehicle responses allows the detection of critical defects which are not identified from geometry parameters.

  6. Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.

    PubMed

    Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C

    2014-06-01

    Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, a paper-based data capture software that uses recognition technology to create case report forms (CRFs) with similar functionality to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall, we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables, values that fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates were found to vary widely by type of data field, with the highest rate observed with open text fields. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Critical thresholds of liver function parameters for ketosis prediction in dairy cows using receiver operating characteristic (ROC) analysis.

    PubMed

    Sun, Yuhang; Wang, Bo; Shu, Shi; Zhang, Hongyou; Xu, Chuang; Wu, Ling; Xia, Cheng

    2015-01-01

    Fatty liver syndrome and ketosis are important metabolic disorders in high-producing cows during early lactation, with fatty liver usually preceding ketosis. To date, parameters for early prediction of the risk of ketosis have not been investigated in China. The aim of this study was therefore to determine the predictive value of selected parameters for the risk of ketosis in China. In a descriptive study, 48 control and 32 ketotic Holstein Friesian cows were randomly selected from one farm, with a serum β-hydroxybutyrate (BHBA) concentration of 1.20 mmol/L as the cutoff point. The risk prediction thresholds for ketosis were determined by receiver operating characteristic (ROC) analysis. In line with a high BHBA concentration, blood glucose concentration was significantly lower in ketotic cows compared to control animals (2.77 ± 0.24 versus 3.34 ± 0.03 mmol/L; P = 0.02). Thresholds were more than 0.76 mmol/L for nonesterified fatty acids (NEFA, with 65% sensitivity and 92% specificity), more than 104 U/L for aspartate aminotransferase (AST, 74% and 85%, respectively), less than 140 U/L for cholinesterase (CHE, 75% and 59%, respectively), and more than 3.3 µmol/L for total bilirubin (TBIL, 58% and 83%, respectively). There were significant correlations between BHBA and glucose (R = -4.74) or CHE (R = -0.262), and between BHBA and NEFA (R = 0.520), AST (R = 0.525), TBIL (R = 0.278), and direct bilirubin (DBIL, R = 0.348). AST, CHE, TBIL and NEFA may be useful parameters for risk prediction of ketosis. This study might be of value in addressing novel directions for future research on the connection between ketosis and liver dysfunction.

  8. Critical oxygen levels and metabolic suppression in oceanic oxygen minimum zones.

    PubMed

    Seibel, Brad A

    2011-01-15

    The survival of oceanic organisms in oxygen minimum zones (OMZs) depends on their total oxygen demand and the capacities for oxygen extraction and transport, anaerobic ATP production and metabolic suppression. Anaerobic metabolism and metabolic suppression are required for daytime forays into the most extreme OMZs. Critical oxygen partial pressures are, within a range, evolved to match the minimum oxygen level to which a species is exposed. This fact demands that low oxygen habitats be defined by the biological response to low oxygen rather than by some arbitrary oxygen concentration. A broad comparative analysis of oxygen tolerance facilitates the identification of two oxygen thresholds that may prove useful for policy makers as OMZs expand due to climate change. Between these thresholds, specific physiological adaptations to low oxygen are required of virtually all species. The lower threshold represents a limit to evolved oxygen extraction capacity. Climate change that pushes oxygen concentrations below the lower threshold (~0.8 kPa) will certainly result in a transition from an ecosystem dominated by a diverse midwater fauna to one dominated by diel migrant biota that must return to surface waters at night. Animal physiology and, in particular, the response of animals to expanding hypoxia, is a critical, but understudied, component of biogeochemical cycles and oceanic ecology. Here, I discuss the definition of hypoxia and critical oxygen levels, review adaptations of animals to OMZs and discuss the capacity for, and prevalence of, metabolic suppression as a response to temporary residence in OMZs and the possible consequences of climate change on OMZ ecology.

  9. Exercise Thresholds on Trial: Are They Really Equivalent?

    PubMed

    Caen, Kevin; Vermeire, Kobe; Bourgois, Jan G; Boone, Jan

    2018-06-01

    The interchangeable use of whole-body exercise thresholds and breakpoints (BP) in the local oxygenation response, as measured via near-infrared spectroscopy, has recently been questioned in scientific literature. Therefore, the present study aimed to longitudinally investigate the interrelationship of four commonly used exercise thresholds: critical power (CP), the respiratory compensation point (RCP), and BP in muscle (m[HHb]BP) and brain (c[O2Hb]BP) oxygenation. Nine male participants (21.8 ± 1.2 yr) completed 6 wk of cycling interval training. Before and after this intervention period, subjects performed a ramp incremental exercise protocol to determine RCP, m[HHb]BP, and c[O2Hb]BP and four constant work rate (WR) tests to calculate CP. WR associated with CP, RCP, m[HHB]BP, and c[O2Hb]BP increased by 7.7% ± 4.2%, 13.6% ± 9.0%, 9.8% ± 5.7%, and 11.3% ± 11.1%, respectively. CP was lower (pre: 260 ± 32 W, post: 280 ± 41 W; P < 0.05) than the WR associated with RCP (pre: 281 ± 28 W, post: 318 ± 36 W) and c[O2Hb]BP (pre: 283 ± 36 W, post: 313 ± 32 W) which occurred concomitantly (P = 0.683). M[HHb]BP occurred at the highest WR and differed from all others (pre: 313 ± 23 W, post: 344 ± 32 W; P < 0.05). Training-induced WR differences (ΔWR) did not contrast between thresholds, and initial parameter differences were not affected by the intervention (P = 0.253). Thresholds were partly correlated before (R = 0.67-0.85, P < 0.05) and after (R = 0.83-0.96, P < 0.05) training, but ΔWR values were not associated (P > 0.05). Results of the present study strongly question true equivalence of CP, RCP, m[HHb]BP, and c[O2Hb]BP during ramp incremental exercise. Therefore, these exercise thresholds should not be used interchangeably.

  10. On the marginal instability threshold condition of the aperiodic ordinary mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlickeiser, R.; Yoon, P. H.; School of Space Research, Kyung Hee University, Yongin

    2014-07-15

    The purely growing ordinary (O) mode instability has recently received renewed attention owing to its potential applicability to the solar wind plasma. Here, an analytical marginal instability condition is derived for counter-streaming bi-Maxwellian plasma particle distribution functions. The derived marginal instability condition as a function of the temperature anisotropy and plasma beta agrees remarkably well with the numerically determined instability condition. The existence of a new instability domain of the O-mode at small plasma beta values is confirmed with the leading A ∝ β∥⁻¹ dependence, if the counter-stream parameter Pe exceeds a critical value. At small plasma beta values at large enough counter-stream parameter, the O-mode also operates for temperature anisotropies A = T⊥/T∥ > 1 even larger than unity, as the parallel counter-stream free energy exceeds the perpendicular bi-Maxwellian free energy.

  11. Pre-impact fall detection system using dynamic threshold and 3D bounding box

    NASA Astrophysics Data System (ADS)

    Otanasap, Nuth; Boonbrahm, Poonpong

    2017-02-01

    Fall prevention and detection systems have to overcome many challenges before an efficient system can be developed. Some of the difficult problems are obtrusion, occlusion and overlay in vision-based systems. Other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the viewing direction between the camera and objects or body parts, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold and bounding-box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras to acquire distributed values and to differentiate between normal activities and falls. If the peak value of head velocity is greater than the dynamic threshold value, bounding-box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at right angles addresses the skeleton overlay problem that arises with a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons, based on dynamic thresholds and bounding-box posture analysis, which has not been reported so far.
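
    The two-stage decision described above (peak head velocity against a dynamic threshold, then a bounding-box posture check) could look roughly like the sketch below. The adaptation rule for the threshold and the width-to-height posture test are assumptions; the paper does not give its exact formulas.

```python
from dataclasses import dataclass

@dataclass
class SkeletonFrame:
    head_velocity: float      # vertical speed of the head joint (m/s)
    bbox_width: float         # width of the 3D bounding box around the skeleton (m)
    bbox_height: float        # height of the 3D bounding box (m)

def dynamic_threshold(recent_velocities, k=2.0):
    """Adaptive cut-off: mean + k standard deviations of recent head velocities.
    The exact adaptation rule is an assumption; the paper only states the threshold is dynamic."""
    n = len(recent_velocities)
    mean = sum(recent_velocities) / n
    var = sum((v - mean) ** 2 for v in recent_velocities) / n
    return mean + k * var ** 0.5

def is_fall(frame, recent_velocities, aspect_ratio_cutoff=1.0):
    """Stage 1: peak head velocity exceeds the dynamic threshold.
    Stage 2: bounding-box posture check -- a lying posture is wider than it is tall."""
    if frame.head_velocity <= dynamic_threshold(recent_velocities):
        return False
    return frame.bbox_width / frame.bbox_height > aspect_ratio_cutoff

# Example: a fast-moving head plus a wide, low bounding box counts as a fall
history = [0.3, 0.4, 0.35, 0.5, 0.45]
print(is_fall(SkeletonFrame(head_velocity=2.5, bbox_width=1.4, bbox_height=0.6), history))
```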

  12. The Global Spike: Conserved Dendritic Properties Enable Unique Ca2+ Spike Generation in Low-Threshold Spiking Neurons.

    PubMed

    Connelly, William M; Crunelli, Vincenzo; Errington, Adam C

    2015-11-25

    Low-threshold Ca(2+) spikes (LTS) are an indispensable signaling mechanism for neurons in areas including the cortex, cerebellum, basal ganglia, and thalamus. They have critical physiological roles and have been strongly associated with disorders including epilepsy, Parkinson's disease, and schizophrenia. However, although dendritic T-type Ca(2+) channels have been implicated in LTS generation, because the properties of low-threshold spiking neuron dendrites are unknown, the precise mechanism has remained elusive. Here, combining data from fluorescence-targeted dendritic recordings and Ca(2+) imaging from low-threshold spiking cells in rat brain slices with computational modeling, the cellular mechanism responsible for LTS generation is established. Our data demonstrate that key somatodendritic electrical conduction properties are highly conserved between glutamatergic thalamocortical neurons and GABAergic thalamic reticular nucleus neurons and that these properties are critical for LTS generation. In particular, the efficiency of soma to dendrite voltage transfer is highly asymmetric in low-threshold spiking cells, and in the somatofugal direction, these neurons are particularly electrotonically compact. Our data demonstrate that LTS have remarkably similar amplitudes and occur synchronously throughout the dendritic tree. In fact, these Ca(2+) spikes cannot occur locally in any part of the cell, and hence we reveal that LTS are generated by a unique whole-cell mechanism that means they always occur as spatially global spikes. This all-or-none, global electrical and biochemical signaling mechanism clearly distinguishes LTS from other signals, including backpropagating action potentials and dendritic Ca(2+)/NMDA spikes, and has important consequences for dendritic function in low-threshold spiking neurons. Low-threshold Ca(2+) spikes (LTS) are critical for important physiological processes, including generation of sleep-related oscillations, and are implicated in disorders including epilepsy, Parkinson's disease, and schizophrenia. However, the mechanism underlying LTS generation in neurons, which is thought to involve dendritic T-type Ca(2+) channels, has remained elusive due to a lack of knowledge of the dendritic properties of low-threshold spiking cells. Combining dendritic recordings, two-photon Ca(2+) imaging, and computational modeling, this study reveals that dendritic properties are highly conserved between two prominent low-threshold spiking neurons and that these properties underpin a whole-cell somatodendritic spike generation mechanism that makes the LTS a unique global electrical and biochemical signal in neurons. Copyright © 2015 Connelly et al.

  13. A study on the temperature dependence of the threshold switching characteristics of Ge2Sb2Te5

    NASA Astrophysics Data System (ADS)

    Lee, Suyoun; Jeong, Doo Seok; Jeong, Jeung-hyun; Zhe, Wu; Park, Young-Wook; Ahn, Hyung-Woo; Cheong, Byung-ki

    2010-01-01

    We investigated the temperature dependence of the threshold switching characteristics of a memory-type chalcogenide material, Ge2Sb2Te5. We found that the threshold voltage (Vth) decreased linearly with temperature, implying the existence of a critical conductivity of Ge2Sb2Te5 for its threshold switching. In addition, we investigated the effect of bias voltage and temperature on the delay time (tdel) of the threshold switching of Ge2Sb2Te5 and described the measured relationship by an analytic expression which we derived based on a physical model where thermally activated hopping is a dominant transport mechanism in the material.

  14. The conventional tuning fork as a quantitative tool for vibration threshold.

    PubMed

    Alanazy, Mohammed H; Alfurayh, Nuha A; Almweisheer, Shaza N; Aljafen, Bandar N; Muayqil, Taim

    2018-01-01

    This study was undertaken to describe a method for quantifying vibration when using a conventional tuning fork (CTF) in comparison to a Rydel-Seiffer tuning fork (RSTF) and to provide reference values. Vibration thresholds at index finger and big toe were obtained in 281 participants. Spearman's correlations were performed. Age, weight, and height were analyzed for their covariate effects on vibration threshold. Reference values at the fifth percentile were obtained by quantile regression. The correlation coefficients between CTF and RSTF values at finger/toe were 0.59/0.64 (P = 0.001 for both). Among covariates, only age had a significant effect on vibration threshold. Reference values for CTF at finger/toe for the age groups 20-39 and 40-60 years were 7.4/4.9 and 5.8/4.6 s, respectively. Reference values for RSTF at finger/toe for the age groups 20-39 and 40-60 years were 6.9/5.5 and 6.2/4.7, respectively. CTF provides quantitative values that are as good as those provided by RSTF. Age-stratified reference data are provided. Muscle Nerve 57: 49-53, 2018. © 2017 Wiley Periodicals, Inc.
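
    A minimal sketch of how fifth-percentile reference values can be obtained by quantile regression on age, as the abstract describes, using statsmodels. The variable names and the synthetic data are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fifth-percentile reference values for vibration duration as a function of age,
# via quantile regression (synthetic data; not values from the study).
rng = np.random.default_rng(1)
age = rng.uniform(20, 60, 300)
duration = 9.0 - 0.06 * age + rng.normal(0, 1.0, 300)    # seconds of perceived vibration
df = pd.DataFrame({"age": age, "duration": duration})

model = smf.quantreg("duration ~ age", df).fit(q=0.05)    # 5th-percentile regression line
print(model.params)                                       # intercept and age slope
print(model.predict(pd.DataFrame({"age": [30, 50]})))     # reference cut-offs at ages 30 and 50
```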

  15. Investigation of critical equivalence ratio and chemical speciation in flames of ethylbenzene-ethanol blends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Therrien, Richard J.; Ergut, Ali; Levendis, Yiannis A.

    This work investigates five different one-dimensional, laminar, atmospheric pressure, premixed ethanol/ethylbenzene flames (0%, 25%, 50%, 75% and 90% ethanol by weight) at their soot onset threshold (φ_critical). Liquid ethanol/ethylbenzene mixtures were pre-vaporized in nitrogen, blended with an oxygen-nitrogen mixture and, upon ignition, burned in premixed one-dimensional flames at atmospheric pressure. The flames were controlled so that each was at its visual soot onset threshold, and all had similar temperature profiles (determined by thermocouples). Fixed gases, light volatile hydrocarbons, polycyclic aromatic hydrocarbons (PAH), and oxygenated aromatic hydrocarbons were directly sampled at three locations in each flame. The experimental results were compared with a detailed kinetic model, and the modeling results were used to perform a reaction flux analysis of key species. The critical equivalence ratio was observed to increase in a parabolic fashion as ethanol concentration increased in the fuel mixture. The experimental results showed increasing trends of methane, ethane, and ethylene with increasing concentrations of ethanol in the flames. Carbon monoxide was also seen to increase significantly with the increase of ethanol in the flame, which removes carbon from the PAH and soot formation pathways. The PAH and oxygenated aromatic hydrocarbon values were very similar in the 0%, 25% and 50% ethanol flames, but significantly lower in the 75% and 90% ethanol flames. These results were in general agreement with the model and were reflected by the model soot predictions. The model predicted similar soot profiles for the 0%, 25% and 50% ethanol flames; however, it predicted significantly lower values in the 75% and 90% ethanol flames. The reaction flux analysis revealed benzyl to be a major contributor to single and double ring aromatics (i.e., benzene and naphthalene), which was identified in a similar role in nearly sooting or highly sooting ethylbenzene flames. The presence of this radical was significantly reduced as ethanol concentration was increased in the flames, and this effect, in combination with the lower carbon to oxygen ratios and the enhanced formation of carbon monoxide, is likely what allowed higher equivalence ratios to be reached without forming soot. (author)

  16. Reading for Integration, Identifying Complementary Threshold Concepts: The ACRL "Framework" in Conversation with "Naming What We Know: Threshold Concepts of Writing"

    ERIC Educational Resources Information Center

    Johnson, Brittney; McCracken, I. Moriah

    2016-01-01

    In 2015, threshold concepts formed the foundation of two disciplinary documents: The "ACRL Framework for Information Literacy" (2015) and "Naming What We Know: Threshold Concepts of Writing Studies" (2015). While there is no consensus in the fields about the value of threshold concepts in teaching, reading the six Frames in the…

  17. A threshold method for immunological correlates of protection

    PubMed Central

    2013-01-01

    Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
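
    A rough profile-likelihood sketch of the a:b model described above: a constant infection probability a below a threshold and a different constant probability b at or above it, with the threshold chosen to maximise the binomial likelihood. The candidate-threshold grid and other implementation details are assumptions; the modified likelihood ratio test and bootstrap confidence intervals from the paper are not reproduced here.

```python
import numpy as np

def fit_ab_model(titers, infected):
    """Profile-likelihood fit of the a:b model.

    titers   : assay values (e.g. antibody concentrations), one per subject
    infected : 0/1 infection outcomes
    Returns (threshold, a, b, log_likelihood). Candidate thresholds are taken midway
    between successive observed titers (an implementation choice, not prescribed by the paper).
    """
    titers = np.asarray(titers, float)
    infected = np.asarray(infected, int)
    order = np.argsort(titers)
    t, y = titers[order], infected[order]
    candidates = (t[:-1] + t[1:]) / 2.0

    def loglik(p, k, n):
        # Binomial log-likelihood with a small clip to avoid log(0).
        p = min(max(p, 1e-9), 1 - 1e-9)
        return k * np.log(p) + (n - k) * np.log(1 - p)

    best = (None, None, None, -np.inf)
    for tau in candidates:
        below = t < tau
        n0, k0 = below.sum(), y[below].sum()
        n1, k1 = (~below).sum(), y[~below].sum()
        if n0 == 0 or n1 == 0:
            continue
        a, b = k0 / n0, k1 / n1                          # MLEs of the two plateau probabilities
        ll = loglik(a, k0, n0) + loglik(b, k1, n1)
        if ll > best[3]:
            best = (tau, a, b, ll)
    return best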

  18. Differences in two-point discrimination and sensory threshold in the blind between braille and text reading: a pilot study.

    PubMed

    Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan

    2015-06-01

    [Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds.

  19. Differences in two-point discrimination and sensory threshold in the blind between braille and text reading: a pilot study

    PubMed Central

    Noh, Ji-Woong; Park, Byoung-Sun; Kim, Mee-Young; Lee, Lim-Kyu; Yang, Seung-Min; Lee, Won-Deok; Shin, Yong-Sub; Kang, Ji-Hye; Kim, Ju-Hyun; Lee, Jeong-Uk; Kwak, Taek-Yong; Lee, Tae-Hyun; Kim, Ju-Young; Kim, Junghwan

    2015-01-01

    [Purpose] This study investigated two-point discrimination (TPD) and the electrical sensory threshold of the blind to define the effect of using Braille on the tactile and electrical senses. [Subjects and Methods] Twenty-eight blind participants were divided equally into a text-reading and a Braille-reading group. We measured tactile sensory and electrical thresholds using the TPD method and a transcutaneous electrical nerve stimulator. [Results] The left palm TPD values were significantly different between the groups. The values of the electrical sensory threshold in the left hand, the electrical pain threshold in the left hand, and the electrical pain threshold in the right hand were significantly lower in the Braille group than in the text group. [Conclusion] These findings make it difficult to explain the difference in tactility between groups, excluding both palms. However, our data show that using Braille can enhance development of the sensory median nerve in the blind, particularly in terms of the electrical sensory and pain thresholds. PMID:26180348

  20. Higgs boson gluon-fusion production beyond threshold in N 3LO QCD

    DOE PAGES

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...

    2015-03-18

    In this study, we compute the gluon fusion Higgs boson cross-section at N 3LO through the second term in the threshold expansion. This calculation constitutes a major milestone towards the full N 3LO cross section. Our result has the best formal accuracy in the threshold expansion currently available, and includes contributions from collinear regions besides subleading corrections from soft and hard regions, as well as certain logarithmically enhanced contributions for general kinematics. We use our results to perform a critical appraisal of the validity of the threshold approximation at N 3LO in perturbative QCD.

  1. Health hazards of ultrafine metal and metal oxide powders

    NASA Technical Reports Server (NTRS)

    Boylen, G. W., Jr.; Chamberlin, R. I.; Viles, F. J.

    1969-01-01

    Study reveals that suggested threshold limit values are from two to fifty times lower than current recommended threshold limit values. Proposed safe limits of exposure to the ultrafine dusts are based on known toxic potential of various materials as determined in particle size ranges.

  2. 48 CFR 41.401 - Monthly and annual review.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... values exceeding the simplified acquisition threshold, on an annual basis. Annual reviews of accounts with annual values at or below the simplified acquisition threshold shall be conducted when deemed... services to each facility under the utility's most economical, applicable rate and to examine competitive...

  3. Use of Biotechnological Devices in the Quantification of Psychophysiological Workload of Professional Chess Players.

    PubMed

    Fuentes, Juan P; Villafaina, Santos; Collado-Mateo, Daniel; de la Vega, Ricardo; Gusi, Narcis; Clemente-Suárez, Vicente Javier

    2018-01-19

    Psychophysiological requirements of chess players are poorly understood, and periodization of training is often made without any empirical basis. For this reason, the aim of the present study was to investigate the psychophysiological response and quantify the player's internal load during and after a chess game. The participant was an elite 33-year-old male chess player ranked among the 300 best chess players in the world. Cortical arousal (critical flicker fusion threshold), electroencephalographic activity (theta Fz/alpha Pz ratio) and autonomic modulation (heart rate variability) were analyzed. Data revealed that cortical arousal and the theta Fz/alpha Pz ratio increased and heart rate variability decreased during the chess game. All these changes indicated that internal load increased during the chess game. In addition, pre-activation was detected in the pre-game measure, suggesting that the prefrontal cortex might be preparatorily activated. For these reasons, electroencephalogram, critical flicker fusion threshold and heart rate variability analysis may be highly applicable tools to control and monitor workload in chess players.

  4. Identification and classification of carcinogens: procedures of the Chemical Substances Threshold Limit Value Committee, ACGIH. American Conference of Governmental Industrial Hygienists.

    PubMed Central

    Spirtas, R; Steinberg, M; Wands, R C; Weisburger, E K

    1986-01-01

    The Chemical Substances Threshold Limit Value Committee of the American Conference of Governmental Industrial Hygienists has refined its procedures for evaluating carcinogens. Types of epidemiologic and toxicologic evidence used are reviewed and a discussion is presented on how the Committee evaluates data on carcinogenicity. Although it has not been conclusively determined whether biological thresholds exist for all types of carcinogens, the Committee will continue to develop guidelines for permissible exposures to carcinogens. The Committee will continue to use the safety factor approach to setting Threshold Limit Values for carcinogens, despite its shortcomings. A compilation has been developed for lists of substances considered to be carcinogenic by several scientific groups. The Committee will use this information to help to identify and classify carcinogens for its evaluation. PMID:3752326

  5. Identification and classification of carcinogens: procedures of the Chemical Substances Threshold Limit Value Committee, ACGIH. American Conference of Governmental Industrial Hygienists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spirtas, R.; Steinberg, M.; Wands, R.C.

    1986-10-01

    The Chemical Substances Threshold Limit Value Committee of the American Conference of Governmental Industrial Hygienists has refined its procedures for evaluating carcinogens. Types of epidemiologic and toxicologic evidence used are reviewed and a discussion is presented on how the Committee evaluates data on carcinogenicity. Although it has not been conclusively determined whether biological thresholds exist for all types of carcinogens, the Committee will continue to develop guidelines for permissible exposures to carcinogens. The Committee will continue to use the safety factor approach to setting Threshold Limit Values for carcinogens, despite its shortcomings. A compilation has been developed for lists of substances considered to be carcinogenic by several scientific groups. The Committee will use this information to help to identify and classify carcinogens for its evaluation.

  6. A Two-Biomarker Model Predicts Mortality in the Critically Ill with Sepsis.

    PubMed

    Mikacenic, Carmen; Price, Brenda L; Harju-Baker, Susanna; O'Mahony, D Shane; Robinson-Cohen, Cassianne; Radella, Frank; Hahn, William O; Katz, Ronit; Christiani, David C; Himmelfarb, Jonathan; Liles, W Conrad; Wurfel, Mark M

    2017-10-15

    Improving the prospective identification of patients with systemic inflammatory response syndrome (SIRS) and sepsis at low risk for organ dysfunction and death is a major clinical challenge. To develop and validate a multibiomarker-based prediction model for 28-day mortality in critically ill patients with SIRS and sepsis. A derivation cohort (n = 888) and internal test cohort (n = 278) were taken from a prospective study of critically ill intensive care unit (ICU) patients meeting two of four SIRS criteria at an academic medical center for whom plasma was obtained within 24 hours. The validation cohort (n = 759) was taken from a prospective cohort enrolled at another academic medical center ICU for whom plasma was obtained within 48 hours. We measured concentrations of angiopoietin-1, angiopoietin-2, IL-6, IL-8, soluble tumor necrosis factor receptor-1, soluble vascular cell adhesion molecule-1, granulocyte colony-stimulating factor, and soluble Fas. We identified a two-biomarker model in the derivation cohort that predicted mortality (area under the receiver operator characteristic curve [AUC], 0.79; 95% confidence interval [CI], 0.74-0.83). It performed well in the internal test cohort (AUC, 0.75; 95% CI, 0.65-0.85) and the external validation cohort (AUC, 0.77; 95% CI, 0.72-0.83). We determined a model score threshold demonstrating high negative predictive value (0.95) for death. In addition to a low risk of death, patients below this threshold had shorter ICU length of stay, lower incidence of acute kidney injury, acute respiratory distress syndrome, and need for vasopressors. We have developed a simple, robust biomarker-based model that identifies patients with SIRS/sepsis at low risk for death and organ dysfunction.
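
    The abstract's choice of a model-score threshold with high negative predictive value can be checked with a small helper like the one below; the construction of the two-biomarker score itself is not reproduced, and the names are illustrative.

```python
import numpy as np

def negative_predictive_value(scores, died, threshold):
    """NPV of a mortality-score cut-off: among patients whose model score falls below the
    threshold, the fraction who survived to 28 days. `scores` and `died` (0/1) are
    placeholder arrays; the two-biomarker score construction is not shown here."""
    scores, died = np.asarray(scores, float), np.asarray(died, int)
    below = scores < threshold
    return float((died[below] == 0).mean()) if below.any() else float("nan")

# Illustrative call with made-up numbers:
print(negative_predictive_value([0.1, 0.2, 0.6, 0.8, 0.15], [0, 0, 1, 1, 0], threshold=0.5))
```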

  7. On the thresholds in modeling of high flows via artificial neural networks - A bootstrapping analysis

    NASA Astrophysics Data System (ADS)

    Panagoulia, D.; Trichakis, I.

    2012-04-01

    Considering the growing interest in simulating hydrological phenomena with artificial neural networks (ANNs), it is useful to establish the potential and the limits of these models. In this study, the main objective is to examine how to improve the ability of an ANN model to simulate extreme values of flow by utilizing a priori knowledge of threshold values. A three-layer feedforward ANN was trained using the back-propagation algorithm and the logistic function as the activation function. Using the thresholds, the flow was partitioned into low (x < μ), medium (μ ≤ x ≤ μ + 2σ) and high (x > μ + 2σ) values. The ANN model was trained both on the high-flow partition and on all flow data. The developed methodology was implemented over a mountainous river catchment (the Mesochora catchment in northwestern Greece). The ANN model received as inputs pseudo-precipitation (rain plus melt) and previously observed flow data. After the training was completed, the bootstrapping methodology was applied to calculate the ANN confidence intervals (CIs) for a 95% nominal coverage. The calculated CIs included only the uncertainty that comes from the calibration procedure. The results showed that an ANN model trained specifically for high flows, with a priori knowledge of the thresholds, can simulate these extreme values much better (RMSE is 31.4% lower) than an ANN model trained with all data of the available time series and using a posteriori threshold values. On the other hand, the width of the CIs increases by 54.9%, with a simultaneous increase of 64.4% in the actual coverage for the high flows (a priori partition). The narrower CIs of the high flows trained with all data may be attributed to the smoothing effect produced by the use of the full data sets. Overall, the results suggest that an ANN model trained with a priori knowledge of the threshold values has an increased ability to simulate extreme values compared with an ANN model trained with all the data and a posteriori knowledge of the thresholds.
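
    A compact sketch of the a priori flow partition and the high-flow training described above, with an off-the-shelf logistic-activation MLP standing in for the paper's back-propagation network; the network architecture and the use of scikit-learn are assumptions, and the bootstrap confidence intervals are not shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def partition_flows(flow):
    """Split flows using the a priori thresholds from the paper:
    low (x < mu), medium (mu <= x <= mu + 2*sigma), high (x > mu + 2*sigma)."""
    flow = np.asarray(flow, float)
    mu, sigma = flow.mean(), flow.std()
    labels = np.where(flow < mu, "low",
             np.where(flow > mu + 2 * sigma, "high", "medium"))
    return labels, mu, sigma

def train_high_flow_ann(X, flow):
    """Train a feedforward network only on the high-flow partition.
    X is an array of predictors (pseudo-precipitation and lagged flows);
    the hidden-layer size below is an assumption."""
    flow = np.asarray(flow, float)
    labels, _, _ = partition_flows(flow)
    high = labels == "high"
    model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                         max_iter=5000, random_state=0)
    model.fit(np.asarray(X, float)[high], flow[high])
    return model
```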

  8. Atmospheric deposition and critical loads for nitrogen and metals in Arctic Alaska: Review and current status

    USGS Publications Warehouse

    Linder, Greg L.; Brumbaugh, William G.; Neitlich, Peter; Little, Edward

    2013-01-01

    To protect important resources under their bureau's purview, the United States National Park Service's (NPS) Arctic Network (ARCN) has developed a series of "vital signs" that are to be periodically monitored. One of these vital signs focuses on wet and dry deposition of atmospheric chemicals and, further, the establishment of critical load (CL) values (thresholds for ecological effects based on cumulative depositional loadings) for nitrogen (N), sulfur, and metals. As part of the ARCN terrestrial monitoring programs, samples of the feather moss Hylocomium splendens are being collected and analyzed as a cost-effective means to monitor atmospheric pollutant deposition in this region. Ultimately, moss data combined with refined CL values might be used to help guide future regulation of atmospheric contaminant sources potentially impacting Arctic Alaska. But first, additional long-term studies are needed to determine patterns of contaminant deposition as measured by moss biomonitors and to quantify ecosystem responses at particular loadings/ranges of contaminants within Arctic Alaska. Herein we briefly summarize 1) current regulatory guidance related to CL values, 2) derivation of CL models for N and metals, 3) use of mosses as biomonitors of atmospheric deposition and loadings, 4) preliminary analysis of vulnerabilities and risks associated with CL estimates for N, 5) preliminary analysis of existing data for characterization of CL values for N for interior Alaska, and 6) implications for managers and future research needs.

  9. Setting nutrient thresholds to support an ecological assessment based on nutrient enrichment, potential primary production and undesirable disturbance.

    PubMed

    Devlin, Michelle; Painting, Suzanne; Best, Mike

    2007-01-01

    The EU Water Framework Directive recognises that ecological status is supported by the prevailing physico-chemical conditions in each water body. This paper describes an approach to providing guidance on setting thresholds for nutrients taking account of the biological response to nutrient enrichment evident in different types of water. Indices of pressure, state and impact are used to achieve a robust nutrient (nitrogen) threshold by considering each individual index relative to a defined standard, scale or threshold. These indices include winter nitrogen concentrations relative to a predetermined reference value; the potential of the waterbody to support phytoplankton growth (estimated as primary production); and detection of an undesirable disturbance (measured as dissolved oxygen). Proposed reference values are based on a combination of historical records, offshore (limited human influence) nutrient concentrations, literature values and modelled data. Statistical confidence is based on a number of attributes, including distance of confidence limits away from a reference threshold and how well the model is populated with real data. This evidence based approach ensures that nutrient thresholds are based on knowledge of real and measurable biological responses in transitional and coastal waters.

  10. Potts-model critical manifolds revisited

    DOE PAGES

    Scullard, Christian R.; Jacobsen, Jesper Lykke

    2016-02-11

    We compute the critical polynomials for the q-state Potts model on all Archimedean lattices, using a parallel implementation of the algorithm of Ref. [1] that gives us access to larger sizes than previously possible. The exact polynomials are computed for bases of size 6 × 6 unit cells, and the root in the temperature variable v = e^K − 1 is determined numerically at q = 1 for bases of size 8 × 8. This leads to improved results for bond percolation thresholds, and for the Potts-model critical manifolds in the real (q, v) plane. In the two most favourable cases, we now find the kagome-lattice threshold to eleven digits and that of the (3, 12²) lattice to thirteen. Our critical manifolds reveal many interesting features in the antiferromagnetic region of the Potts model, and determine accurately the extent of the Berker-Kadanoff phase for the lattices studied.

  11. Largely ignored: the impact of the threshold value for a QALY on the importance of a transferability factor.

    PubMed

    Vemer, Pepijn; Rutten-van Mölken, Maureen P M H

    2011-10-01

    Recently, several checklists systematically assessed factors that affect the transferability of cost-effectiveness (CE) studies between jurisdictions. The role of the threshold value for a QALY has been given little consideration in these checklists, even though the importance of a factor as a cause of between country differences in CE depends on this threshold. In this paper, we study the impact of the willingness-to-pay (WTP) per QALY on the importance of transferability factors in the case of smoking cessation support (SCS). We investigated, for several values of the WTP, how differences between six countries affect the incremental net monetary benefit (INMB) of SCS. The investigated factors were demography, smoking prevalence, mortality, epidemiology and costs of smoking-related diseases, resource use and unit costs of SCS, utility weights and discount rates. We found that when the WTP decreased, factors that mainly affect health outcomes became less important and factors that mainly effect costs became more important. With a WTP below 1,000, the factors most responsible for between country differences in INMB were resource use and unit costs of SCS and the costs of smoking-related diseases. Utility values had little impact. At a threshold above 10,000, between country differences were primarily due to different discount rates, utility weights and epidemiology of smoking-related diseases. Costs of smoking-related diseases had little impact. At all thresholds, demography had little impact. We concluded that, when judging the transferability of a CE study, we should consider the between country differences in WTP threshold values.
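
    The dependence on the willingness-to-pay threshold enters through the incremental net monetary benefit, INMB = WTP × ΔQALY − ΔCost; a small numerical illustration (with made-up numbers, not values from the paper) is given below.

```python
def incremental_net_monetary_benefit(delta_qalys, delta_costs, wtp):
    """INMB = WTP * incremental QALYs - incremental costs.
    A higher WTP weights outcome-related transferability factors more heavily,
    a lower WTP weights cost-related factors more heavily."""
    return wtp * delta_qalys - delta_costs

# Illustrative numbers only (not from the paper): the sign of the INMB flips with the WTP.
for wtp in (1_000, 10_000, 50_000):
    print(wtp, incremental_net_monetary_benefit(delta_qalys=0.05, delta_costs=300, wtp=wtp))
```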

  12. Investigation of Adaptive-threshold Approaches for Determining Area-Time Integrals from Satellite Infrared Data to Estimate Convective Rain Volumes

    NASA Technical Reports Server (NTRS)

    Smith, Paul L.; VonderHaar, Thomas H.

    1996-01-01

    The principal goal of this project is to establish relationships that would allow application of area-time integral (ATI) calculations based upon satellite data to estimate rainfall volumes. The research is being carried out as a collaborative effort between the two participating organizations, with the satellite data analysis to determine values for the ATIs being done primarily by the STC-METSAT scientists and the associated radar data analysis to determine the 'ground-truth' rainfall estimates being done primarily at the South Dakota School of Mines and Technology (SDSM&T). Synthesis of the two separate kinds of data and investigation of the resulting rainfall-versus-ATI relationships is then carried out jointly. The research has been pursued using two different approaches, which for convenience can be designated as the 'fixed-threshold approach' and the 'adaptive-threshold approach'. In the former, an attempt is made to determine a single temperature threshold in the satellite infrared data that would yield ATI values for identifiable cloud clusters which are closely related to the corresponding rainfall amounts as determined by radar. Work on the second, or 'adaptive-threshold', approach for determining the satellite ATI values has explored two avenues: (1) one attempt involved choosing IR thresholds to match the satellite ATI values with ones separately calculated from the radar data on a case-by-case basis; and (2) another involved a straightforward screening analysis to determine the (fixed) offset that would lead to the strongest correlation and lowest standard error of estimate in the relationship between the satellite ATI values and the corresponding rainfall volumes.
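
    A schematic version of the screening idea is sketched below: compute satellite ATIs for a grid of candidate IR thresholds and keep the one giving the strongest correlation with radar-derived rainfall volumes. Screening offsets relative to a per-cluster base temperature would follow the same pattern; the pixel area, time step and data layout are assumptions.

```python
import numpy as np

def area_time_integral(pixel_temps_by_time, pixel_area, dt, threshold):
    """ATI for one cloud cluster: at each time step count the satellite pixels whose IR
    temperature is at or below the threshold, convert to area, and integrate over time."""
    cold_area = [np.count_nonzero(np.asarray(temps) <= threshold) * pixel_area
                 for temps in pixel_temps_by_time]
    return float(np.sum(cold_area) * dt)

def screen_fixed_thresholds(clusters, radar_volumes, candidate_thresholds,
                            pixel_area=16.0, dt=0.5):
    """Screening analysis: evaluate a grid of fixed IR thresholds and keep the one that
    maximises the correlation between satellite ATIs and radar rainfall volumes.
    Pixel area (km^2) and time step (h) are illustrative defaults, not values from the study."""
    radar_volumes = np.asarray(radar_volumes, float)
    best_thr, best_r = None, -np.inf
    for thr in candidate_thresholds:
        atis = np.array([area_time_integral(c, pixel_area, dt, thr) for c in clusters])
        r = np.corrcoef(atis, radar_volumes)[0, 1]
        if r > best_r:
            best_thr, best_r = thr, r
    return best_thr, best_r
```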

  13. Conception, fabrication and characterization of a silicon based MEMS inertial switch with a threshold value of 5 g

    NASA Astrophysics Data System (ADS)

    Zhang, Fengtian; Wang, Chao; Yuan, Mingquan; Tang, Bin; Xiong, Zhuang

    2017-12-01

    Most of the MEMS inertial switches developed in recent years are intended for shock and impact sensing with threshold values above 50 g. To meet the requirement of detecting linear acceleration signals at low-g levels, a silicon-based MEMS inertial switch with a threshold value of 5 g was designed, fabricated and characterized. The switch consists of a large proof mass supported by circular spiral springs. An analytical model of the structural stiffness of the proposed switch was derived and verified by finite-element simulation. The structure was fabricated on a customized double-buried-layer silicon-on-insulator wafer and encapsulated by glass wafers. Centrifugal and nanoindentation experiments were performed to measure the threshold value as well as the structural stiffness. The actual threshold values were measured to be 0.1-0.3 g lower than the pre-designed value of 5 g due to the dimension loss during non-contact lithography processing. Concerning the reliability assessment, a series of environmental experiments were conducted and the switches remained operational without excessive errors. However, both the random vibration and the shock tests indicate that the metal particles generated during collision of the contact parts might affect the contact reliability and long-term stability. Based on these findings, a detailed study of switch contact behavior should be included in future research.

  14. Cost-effectiveness thresholds: methods for setting and examples from around the world.

    PubMed

    Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano

    2018-06-01

    Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.

  15. Real-Time Mapping alert system; user's manual

    USGS Publications Warehouse

    Torres, L.A.

    1996-01-01

    The U.S. Geological Survey has an extensive hydrologic network that records and transmits precipitation, stage, discharge, and other water- related data on a real-time basis to an automated data processing system. Data values are recorded on electronic data collection platforms at field monitoring sites. These values are transmitted by means of orbiting satellites to receiving ground stations, and by way of telecommunication lines to a U.S. Geological Survey office where they are processed on a computer system. Data that exceed predefined thresholds are identified as alert values. These alert values can help keep water- resource specialists informed of current hydrologic conditions. The current alert status at monitoring sites is of critical importance during floods, hurricanes, and other extreme hydrologic events where quick analysis of the situation is needed. This manual provides instructions for using the Real-Time Mapping software, a series of computer programs developed by the U.S. Geological Survey for quick analysis of hydrologic conditions, and guides users through a basic interactive session. The software provides interactive graphics display and query of real-time information in a map-based, menu-driven environment.

  16. Threshold Values for Identification of Contamination Predicted by Reduced-Order Models

    DOE PAGES

    Last, George V.; Murray, Christopher J.; Bott, Yi-Ju; ...

    2014-12-31

    The U.S. Department of Energy’s (DOE’s) National Risk Assessment Partnership (NRAP) Project is developing reduced-order models to evaluate potential impacts on underground sources of drinking water (USDWs) if CO2 or brine leaks from deep CO2 storage reservoirs. Threshold values, below which there would be no predicted impacts, were determined for portions of two aquifer systems. These threshold values were calculated using an interwell approach for determining background groundwater concentrations that is an adaptation of methods described in the U.S. Environmental Protection Agency’s Unified Guidance for Statistical Analysis of Groundwater Monitoring Data at RCRA Facilities.

  17. Processing circuitry for single channel radiation detector

    NASA Technical Reports Server (NTRS)

    Holland, Samuel D. (Inventor); Delaune, Paul B. (Inventor); Turner, Kathryn M. (Inventor)

    2009-01-01

    Processing circuitry is provided for a high voltage operated radiation detector. An event detector utilizes a comparator configured to produce an event signal based on a leading edge threshold value. A preferred event detector does not produce another event signal until a trailing edge threshold value is satisfied. The event signal can be utilized for counting the number of particle hits and also for controlling data collection operation for a peak detect circuit and timer. The leading edge threshold value is programmable such that it can be reprogrammed by a remote computer. A digital high voltage control is preferably operable to monitor and adjust high voltage for the detector.
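
    A software analogue of the leading/trailing-edge behaviour described in the patent abstract is sketched below as a simple hysteresis loop; the comparator itself is analog hardware, so this only illustrates the re-arming logic, with made-up threshold values.

```python
def detect_events(samples, leading_threshold, trailing_threshold):
    """Hysteresis event detection: emit an event when a sample crosses the leading-edge
    threshold, then arm again only after the signal falls back to the trailing-edge
    threshold. The software realisation is an illustration, not the patented circuit."""
    events = []
    armed = True
    for i, v in enumerate(samples):
        if armed and v >= leading_threshold:
            events.append(i)          # count a particle hit / trigger peak detect and timer
            armed = False
        elif not armed and v <= trailing_threshold:
            armed = True              # trailing-edge condition satisfied; re-arm
    return events

# Example: the jittery first pulse registers once; the second pulse is a separate event -> [1, 6]
print(detect_events([0.1, 0.6, 0.55, 0.62, 0.2, 0.05, 0.7, 0.1], 0.5, 0.3))
```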

  18. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams

    USGS Publications Warehouse

    Black, R.W.; Moran, P.W.; Frankforter, J.D.

    2011-01-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria. ?? 2010 The Author(s).

  19. Response of algal metrics to nutrients and physical factors and identification of nutrient thresholds in agricultural streams.

    PubMed

    Black, Robert W; Moran, Patrick W; Frankforter, Jill D

    2011-04-01

    Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.

  20. Quality index of radiological devices: results of one year of use.

    PubMed

    Tofani, Alessandro; Imbordino, Patrizia; Lecci, Antonio; Bonannini, Claudia; Del Corona, Alberto; Pizzi, Stefano

    2003-01-01

    The physical quality index (QI) of radiological devices summarises in a single numerical value between 0 and 1 the results of constancy tests. The aim of this paper is to illustrate the results of the use of such an index on all public radiological devices in the Livorno province over one year. The quality index was calculated for 82 radiological devices of a wide range of types by implementing its algorithm in a spreadsheet-based software for the automatic handling of quality control data. The distribution of quality index values was computed together with the associated statistical quantities. This distribution is strongly asymmetrical, with a sharp peak near the highest QI values. The mean quality index values for the different types of device show some inhomogeneity: in particular, mammography and panoramic dental radiography devices show far lower quality than other devices. In addition, our analysis has identified the parameters that most frequently fail the quality tests for each type of device. Finally, we sought some correlation between quality and age of the device, but this was only weakly significant. The quality index proved to be a useful tool providing an overview of the physical condition of radiological devices. By selecting adequate QI threshold values, it also helps to decide whether a given device should be upgraded or replaced. The identification of critical parameters for each type of device may be used to improve the definition of the QI by attributing greater weights to critical parameters, so as to better address the maintenance of radiological devices.

  1. Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

    NASA Astrophysics Data System (ADS)

    Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola

    2018-03-01

    We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
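
    The simpler of the two approaches amounts to gating the rainfall-threshold check with a soil moisture condition. A minimal sketch of that gating logic is given below; the threshold values and function names are placeholders, not those of the operational RSLEWS.

    def landslide_warning(rainfall_mm, rainfall_threshold_mm, soil_moisture, moisture_threshold=0.25):
        # Issue a warning only if the rainfall threshold is exceeded AND the averaged
        # soil moisture of the territorial unit is above a minimum value below which
        # landslides are not expected to occur (placeholder value of 0.25 m3/m3).
        if soil_moisture < moisture_threshold:
            return False  # dry conditions: rainfall threshold not even evaluated
        return rainfall_mm >= rainfall_threshold_mm

    print(landslide_warning(rainfall_mm=90, rainfall_threshold_mm=80, soil_moisture=0.15))  # False
    print(landslide_warning(rainfall_mm=90, rainfall_threshold_mm=80, soil_moisture=0.35))  # True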

  2. Low Oxygen Delivery as a Predictor of Acute Kidney Injury during Cardiopulmonary Bypass.

    PubMed

    Newland, Richard F; Baker, Robert A

    2017-12-01

    Low indexed oxygen delivery (DO2i) during cardiopulmonary bypass (CPB) has been associated with an increase in the likelihood of acute kidney injury (AKI), with critical thresholds for oxygen delivery reported to be 260-270 mL/min/m2. This study aims to explore whether a relationship exists for oxygen delivery during CPB, in which the integral of amount and time below a critical threshold is associated with the incidence of postoperative AKI. The area under the curve (AUC) with DO2i during CPB above or below 270 mL/min/m2 was calculated as a metric of oxygen delivery in 210 patients undergoing CPB. To determine the influence of low oxygen delivery on AKI, a multivariate logistic regression model was developed including AUC < 0, Euroscore II to provide preoperative risk factor adjustment, and incidence of red blood cell transfusion to adjust for the influence of transfusion. Having an AUC < 0 for an oxygen delivery threshold of 270 mL/min/m2 during CPB was an independent predictor of AKI, after adjustment for Euroscore II and transfusion [OR 2.74, CI 1.01-7.41, p = .047]. These results support that a relationship exists for oxygen delivery during CPB, in which the integral of amount and time below a critical threshold is associated with the incidence of postoperative AKI.
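
    The AUC metric used here is a time integral of DO2i relative to the 270 mL/min/m2 threshold. A minimal sketch of such an integral, using trapezoidal integration of the deficit below the threshold, is shown below; the sampling interval, sign convention, and data are assumptions for illustration only.

    import numpy as np

    def do2i_deficit_auc(do2i, dt_min=1.0, threshold=270.0):
        # Integrate the DO2i deficit below the threshold over the bypass run.
        # Units: (mL/min/m2) x min = mL/m2 of cumulative deficit. An "AUC < 0"
        # style metric can be obtained by negating this value.
        # do2i:   array of indexed oxygen delivery samples (mL/min/m2)
        # dt_min: sampling interval in minutes (assumed uniform)
        deficit = np.clip(threshold - np.asarray(do2i, dtype=float), 0.0, None)
        return np.trapz(deficit, dx=dt_min)

    # Illustrative record: DO2i dips below 270 mid-bypass.
    samples = [310, 295, 280, 265, 250, 255, 268, 275, 300]
    print("deficit AUC (mL/m2):", do2i_deficit_auc(samples))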

  3. Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates

    PubMed Central

    Malone, Brian J.

    2017-01-01

    Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
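
    The two-step procedure described above (a pixel-wise gain threshold followed by a cluster mass threshold on contiguous surviving pixels) can be sketched as follows; the threshold values and the use of scipy.ndimage.label to define contiguity are illustrative assumptions, not the authors' exact implementation.

    import numpy as np
    from scipy import ndimage

    def cluster_mass_threshold(sta, gain_thresh, mass_thresh):
        # Keep only clusters of contiguous STRF pixels that (1) individually exceed
        # the gain threshold and (2) collectively exceed the cluster mass threshold
        # (sum of absolute pixel values within the cluster).
        cleaned = np.zeros_like(sta)
        for sign in (+1, -1):  # treat excitatory and inhibitory subfields separately
            mask = (sign * sta) > gain_thresh
            labels, n = ndimage.label(mask)
            for k in range(1, n + 1):
                cluster = labels == k
                if np.abs(sta[cluster]).sum() >= mass_thresh:
                    cleaned[cluster] = sta[cluster]
        return cleaned

    # Illustrative use on a random "STA" (time x frequency bins).
    rng = np.random.default_rng(1)
    sta = rng.normal(0, 1, (40, 60))
    strf = cluster_mass_threshold(sta, gain_thresh=2.0, mass_thresh=10.0)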

  4. Technology Thresholds for Microgravity: Status and Prospects

    NASA Technical Reports Server (NTRS)

    Noever, D. A.

    1996-01-01

    The technological and economic thresholds for microgravity space research are estimated in materials science and biotechnology. In the 1990s, the improvement of materials processing has been identified as a national scientific priority, particularly for stimulating entrepreneurship. The substantial US investment at stake in these critical technologies includes six broad categories: aerospace, transportation, health care, information, energy, and the environment. Microgravity space research addresses key technologies in each area. The viability of selected space-related industries is critically evaluated and a market share philosophy is developed, namely that incremental improvements in a large markets efficiency is a tangible reward from space-based research.

  5. Detection and quantification system for monitoring instruments

    DOEpatents

    Dzenitis, John M [Danville, CA; Hertzog, Claudia K [Houston, TX; Makarewicz, Anthony J [Livermore, CA; Henderer, Bruce D [Livermore, CA; Riot, Vincent J [Oakland, CA

    2008-08-12

    A method of detecting real events by obtaining a set of recent signal results, calculating measures of the noise or variation based on the set of recent signal results, calculating an expected baseline value based on the set of recent signal results, determining sample deviation, calculating an allowable deviation by multiplying the sample deviation by a threshold factor, setting an alarm threshold from the baseline value plus or minus the allowable deviation, and determining whether the signal results exceed the alarm threshold.
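
    The patented method is described only at a high level, but its core logic (a baseline plus a noise-scaled allowable deviation) can be sketched as follows; the window contents and threshold factor are placeholders.

    import statistics

    def exceeds_alarm_threshold(recent_signals, new_value, threshold_factor=3.0):
        # Rough sketch of the detection logic: estimate a baseline and the sample
        # deviation from recent signal results, scale the deviation by a threshold
        # factor, and flag the new value if it falls outside baseline +/- allowance.
        baseline = statistics.mean(recent_signals)       # expected baseline value
        deviation = statistics.stdev(recent_signals)     # measure of noise/variation
        allowable = threshold_factor * deviation         # allowable deviation
        return abs(new_value - baseline) > allowable

    recent = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0]
    print(exceeds_alarm_threshold(recent, 10.4))  # within the alarm band -> False
    print(exceeds_alarm_threshold(recent, 12.5))  # real event candidate -> True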

  6. Influence of taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery.

    PubMed

    Kim, Dae-Young; Seo, Byoung-Do; Choi, Pan-Am

    2014-04-01

    [Purpose] This study was conducted to determine the influence of Taekwondo as security martial arts training on anaerobic threshold, cardiorespiratory fitness, and blood lactate recovery. [Subjects and Methods] Fourteen healthy university students were recruited and divided into an exercise group and a control group (n = 7 in each group). The subjects who participated in the experiment were subjected to an exercise loading test in which anaerobic threshold, value of ventilation, oxygen uptake, maximal oxygen uptake, heart rate, and maximal values of ventilation / heart rate were measured during the exercise, immediately after maximum exercise loading, and at 1, 3, 5, 10, and 15 min of recovery. [Results] At the anaerobic threshold time point, the exercise group showed a significantly longer time to reach anaerobic threshold. The exercise group showed significantly higher values for the time to reach VO2max, maximal values of ventilation, maximal oxygen uptake and maximal values of ventilation / heart rate. Significant changes were observed in the value of ventilation volumes at the 1- and 5-min recovery time points within the exercise group; oxygen uptake and maximal oxygen uptake were significantly different at the 5- and 10-min time points; heart rate was significantly different at the 1- and 3-min time points; and maximal values of ventilation / heart rate was significantly different at the 5-min time point. The exercise group showed significant decreases in blood lactate levels at the 15- and 30-min recovery time points. [Conclusion] The study results revealed that Taekwondo as a security martial arts training increases the maximal oxygen uptake and anaerobic threshold and accelerates an individual's recovery to the normal state of cardiorespiratory fitness and blood lactate level. These results are expected to contribute to the execution of more effective security services in emergencies in which violence can occur.

  7. Decay constants of the charmed tensor mesons at finite temperature

    NASA Astrophysics Data System (ADS)

    Azizi, K.; Sundu, H.; Türkan, A.; Veliev, E. Veli

    2016-01-01

    Investigation of the thermal properties of mesons with higher spin is one of the important problems in hadron physics. At finite temperature, the Lorentz invariance is broken by the choice of a preferred frame of reference and some new operators appear in the Wilson expansion. Taking into account these additional operators, we calculate the thermal two-point correlation function for the D2*(2460) and Ds2*(2573) tensor mesons. In order to perform the numerical analysis, we use the fermionic part of the energy density obtained both from lattice QCD and chiral perturbation theory. We also use the temperature-dependent continuum threshold and show that the values of the decay constants decrease considerably near the critical temperature compared to their values in the vacuum. Our results at zero temperature are in good agreement with the predictions of other nonperturbative models.

  8. Integrating geophysical data for mapping the contamination of industrial sites by polycyclic aromatic hydrocarbons: A geostatistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colin, P.; Nicoletis, S.; Froidevaux, R.

    1996-12-31

    A case study is presented of building a map showing the probability that the concentration in polycyclic aromatic hydrocarbon (PAH) exceeds a critical threshold. This assessment is based on existing PAH sample data (direct information) and on an electrical resistivity survey (indirect information). Simulated annealing is used to build a model of the range of possible values for PAH concentrations and of the bivariate relationship between PAH concentrations and electrical resistivity. The geostatistical technique of simple indicator kriging is then used, together with the probabilistic model, to infer, at each node of a grid, the range of possible values which the PAH concentration can take. The risk map is then extracted from this characterization of the local uncertainty. The difference between this risk map and a traditional iso-concentration map is then discussed in terms of decision-making.

  9. Electrical and optical percolations in PMMA/GNP composite films

    NASA Astrophysics Data System (ADS)

    Arda, Ertan; Mergen, Ömer Bahadır; Pekcan, Önder

    2018-05-01

    Effects of graphene nanoplatelet (GNP) addition on the electrical conductivity and optical absorbance of poly(methyl methacrylate)/graphene nanoplatelet (PMMA/GNP) composite films were studied. Optical absorbance and two point probe resistivity techniques were used to determine the variations of the optical and electrical properties of the composites, respectively. Absorbance intensity, A, and surface resistivity, Rs, of the composite films were monitored as a function of GNP mass fraction (M) at room temperature. Absorbance intensity values of the composites were increased and surface resistivity values were decreased by increasing the content of GNP in the composite. Electrical and optical percolation thresholds of composite films were determined as Mσ = 27.5 wt.% and Mop = 26.6 wt.%, respectively. The conductivity and the optical results were attributed to the classical and site percolation theories, respectively. Optical (βop) and electrical (βσ) critical exponents were calculated as 0.40 and 1.71, respectively.
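
    Percolation thresholds and critical exponents such as those reported above are typically obtained by fitting the conductivity above the threshold to the classical percolation law sigma ~ (M - Mc)^beta. A hedged sketch of such a fit by log-log linear regression is given below; the data are synthetic and this is only one of several fitting strategies in use.

    import numpy as np

    def fit_percolation(mass_fraction, conductivity, mc):
        # Given an assumed percolation threshold mc (wt.%), fit log(sigma) versus
        # log(M - mc) for M > mc; the slope is the critical exponent beta and the
        # intercept gives the prefactor. Returns (beta, prefactor).
        m = np.asarray(mass_fraction, dtype=float)
        s = np.asarray(conductivity, dtype=float)
        above = m > mc
        slope, intercept = np.polyfit(np.log(m[above] - mc), np.log(s[above]), 1)
        return slope, np.exp(intercept)

    # Synthetic data generated with beta = 1.7 and Mc = 27.5 wt.% plus noise.
    rng = np.random.default_rng(2)
    M = np.linspace(28.0, 40.0, 25)
    sigma = 1e-3 * (M - 27.5) ** 1.7 * rng.lognormal(0, 0.05, M.size)
    beta, prefactor = fit_percolation(M, sigma, mc=27.5)
    print(f"estimated beta = {beta:.2f}")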

  10. Dependence of the Onset of the Runaway Greenhouse Effect on the Latitudinal Surface Water Distribution of Earth-Like Planets

    NASA Astrophysics Data System (ADS)

    Kodama, T.; Nitta, A.; Genda, H.; Takao, Y.; O'ishi, R.; Abe-Ouchi, A.; Abe, Y.

    2018-02-01

    Liquid water is one of the most important materials affecting the climate and habitability of a terrestrial planet. Liquid water vaporizes entirely when planets receive insolation above a certain critical value, which is called the runaway greenhouse threshold. This threshold forms the innermost limit of the habitable zone. Here we investigate the effects of the distribution of surface water on the runaway greenhouse threshold for Earth-sized planets using a three-dimensional dynamic atmosphere model. We considered a 1 bar atmosphere whose composition is similar to the current Earth's atmosphere with a zonally uniform distribution of surface water. As previous studies have already shown, we also recognized two climate regimes: the land planet regime, which has dry low-latitude and wet high-latitude regions, and the aqua planet regime, which is globally wet. We showed that each regime is controlled by the width of the Hadley circulation, the amount of surface water, and the planetary topography. We found that the runaway greenhouse threshold varies continuously with the surface water distribution from about 130% (an aqua planet) to 180% (the extreme case of a land planet) of the present insolation at Earth's orbit. Our results indicate that the inner edge of the habitable zone is not a single sharp boundary, but a border whose location varies depending on planetary surface conditions, such as the amount of surface water. Since land planets have wider habitable zones and less cloud cover, land planets would be good targets for future observations investigating planetary habitability.

  11. The temporal dimension of regime shifts: How long can ecosystems operate beyond critical thresholds before transitions become irreversible?

    USDA-ARS?s Scientific Manuscript database

    Background/Question/Methods: Ecosystem thresholds are often identified by observing or inducing slow changes in different driver variables and investigating changes in the asymptotic state of the system, such as the response of lakes to nutrient loading or biome responses to climate change. Yet ma...

  12. Refinement of determination of critical thresholds of stress-strain behaviour by using AE data: potential for evaluation of durability of natural stone

    NASA Astrophysics Data System (ADS)

    Prikryl, Richard; Lokajíček, Tomáš

    2017-04-01

    According to previous studies, evaluation of the stress-strain behaviour (in uniaxial compression) of various rocks appears to be an effective tool for predicting the resistance of natural stone to some physical weathering processes. Precise determination of the critical thresholds, specifically of 'crack initiation' and 'crack damage', is a fundamental issue in this approach. In contrast to the 'crack damage stress/strain threshold', which can be easily read from the deflection point on the volumetric curve, detection of 'crack initiation' is much more difficult. Besides the previously proposed mathematical processing of the axial stress-strain curve, recording of acoustic emission (AE) data and their processing provide a direct measure of various stress/strain thresholds, specifically of 'crack initiation'. This specific parameter is required during the successive computation of energetic parameters (mechanical work) that can be stored by a material without the formation of new defects (microcracks) due to the acting stress. Based on our experimental data, this mechanical work seems to be proportional to the resistance of a material to the formation of mode I (tensile) cracks that are responsible for the destruction of the subsurface below exposed faces of natural stone.

  13. Stability and phase transition of localized modes in Bose–Einstein condensates with both two- and three-body interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Xiao-Dong; Ai, Qing; Zhang, Mei

    We investigate the stability and phase transition of localized modes in Bose–Einstein Condensates (BECs) in an optical lattice with the discrete nonlinear Schrödinger model by considering both two- and three-body interactions. We find that there are three types of localized modes, bright discrete breather (DB), discrete kink (DK), and multi-breather (MUB). Moreover, both two- and three-body on-site repulsive interactions can stabilize DB, while on-site attractive three-body interactions destabilize it. There is a critical value for the three-body interaction with which both DK and MUB become the most stable ones. We give analytically the energy thresholds for the destabilization of localized states and find that they are unstable (stable) when the total energy of the system is higher (lower) than the thresholds. The stability and dynamics characters of DB and MUB are general for extended lattice systems. Our result is useful for the blocking, filtering, and transfer of the norm in nonlinear lattices for BECs with both two- and three-body interactions.

  14. Beverton-Holt discrete pest management models with pulsed chemical control and evolution of pesticide resistance

    NASA Astrophysics Data System (ADS)

    Liang, Juhua; Tang, Sanyi; Cheke, Robert A.

    2016-07-01

    Pest resistance to pesticides is usually managed by switching between different types of pesticides. The optimal switching time, which depends on the dynamics of the pest population and on the evolution of the pesticide resistance, is critical. Here we address how the dynamic complexity of the pest population, the development of resistance and the spraying frequency of pulsed chemical control affect optimal switching strategies given different control aims. To do this, we developed novel discrete pest population growth models with both impulsive chemical control and the evolution of pesticide resistance. Strong and weak threshold conditions which guarantee the extinction of the pest population, based on the threshold values of the analytical formula for the optimal switching time, were derived. Further, we addressed switching strategies in the light of chosen economic injury levels. Moreover, the effects of the complex dynamical behaviour of the pest population on the pesticide switching times were also studied. The pesticide application period, the evolution of pesticide resistance and the dynamic complexity of the pest population may result in complex outbreak patterns, with consequent effects on the pesticide switching strategies.
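
    As a hedged illustration of the model class described above (not the authors' exact formulation), the sketch below iterates a Beverton-Holt growth map with periodic pesticide pulses whose kill efficacy declines as resistance builds, switching pesticides once resistance crosses a chosen threshold. All parameter values and the resistance law are assumptions.

    def simulate_pest(x0=50.0, r=2.0, K=1000.0, kill0=0.8, resistance_rate=0.05,
                      spray_period=3, switch_resistance=0.5, generations=60):
        # Beverton-Holt growth with impulsive chemical control and evolving resistance.
        x, resistance, history = x0, 0.0, []
        for t in range(generations):
            x = r * x / (1.0 + (r - 1.0) * x / K)      # Beverton-Holt update
            if t % spray_period == 0:                  # pulsed spraying
                x *= 1.0 - kill0 * (1.0 - resistance)  # kill reduced by resistance
                resistance = min(1.0, resistance + resistance_rate)
                if resistance > switch_resistance:     # switch to a new pesticide
                    resistance = 0.0
            history.append(x)
        return history

    trajectory = simulate_pest()
    print(f"final pest density = {trajectory[-1]:.1f}")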

  15. Balancing public health and practitioner accountability in cases of medical manslaughter: reconsidering the tests for criminal negligence-related offences in Australia after R v Patel.

    PubMed

    Tuckett, Nikita

    2011-12-01

    In 2010 Dr Jayant Patel was convicted of several offences on the basis of criminal negligence. Following the Queensland Court of Appeal's 2011 endorsement of the trial judge's decision, the case provides a timely opportunity to review prosecutions for medical negligence criminal offences throughout Australia and to critically examine the tests in assessing whether the balance has been correctly struck. The author argues that the thresholds required for prosecutions for criminal negligence for medical manslaughter are problematic and unduly onerous, and do not adequately strike the balance between the utilitarian value in health care and patient safety, on the one hand, and practitioner accountability and deterrence, on the other. This article considers reforms to remedy the imbalance, including a reformulation of the Criminal Code (Qld) and common law thresholds, proposals for the enactment of a separate offence of criminally negligent manslaughter and the utilisation of corporate prosecutions for manslaughter liability to broaden accountability in health care and promote patient safety on a systemic level.

  16. The influence of dynamical change of optical properties on the thermomechanical response and damage threshold of noble metals under femtosecond laser irradiation

    NASA Astrophysics Data System (ADS)

    Tsibidis, George D.

    2018-02-01

    We present a theoretical investigation of the dynamics of the dielectric constant of noble metals following heating with ultrashort pulsed laser beams and the influence of the temporal variation of the associated optical properties on the thermomechanical response of the material. The effect of the electron relaxation time on the optical properties based on the use of a critical point model is thoroughly explored for various pulse duration values (i.e., from 110 fs to 8 ps). The proposed theoretical framework correlates the dynamical change in optical parameters, relaxation processes and induced strains-stresses. Simulations are presented by choosing gold as a test material, and we demonstrate that the consideration of the aforementioned factors leads to significant thermal effect changes compared to results when static parameters are assumed. The proposed model predicts a substantially smaller damage threshold and a large increase of the stress which firstly underlines the significant role of the temporal variation of the optical properties and secondly enhances its importance with respect to the precise determination of laser specifications in material micromachining techniques.

  17. Value of information and pricing new healthcare interventions.

    PubMed

    Willan, Andrew R; Eckermann, Simon

    2012-06-01

    Previous application of value-of-information methods to optimal clinical trial design have predominantly taken a societal decision-making perspective, implicitly assuming that healthcare costs are covered through public expenditure and trial research is funded by government or donation-based philanthropic agencies. In this paper, we consider the interaction between interrelated perspectives of a societal decision maker (e.g. the National Institute for Health and Clinical Excellence [NICE] in the UK) charged with the responsibility for approving new health interventions for reimbursement and the company that holds the patent for a new intervention. We establish optimal decision making from societal and company perspectives, allowing for trade-offs between the value and cost of research and the price of the new intervention. Given the current level of evidence, there exists a maximum (threshold) price acceptable to the decision maker. Submission for approval with prices above this threshold will be refused. Given the current level of evidence and the decision maker's threshold price, there exists a minimum (threshold) price acceptable to the company. If the decision maker's threshold price exceeds the company's, then current evidence is sufficient since any price between the thresholds is acceptable to both. On the other hand, if the decision maker's threshold price is lower than the company's, then no price is acceptable to both and the company's optimal strategy is to commission additional research. The methods are illustrated using a recent example from the literature.

  18. [Clinical experiences with four newly developed, surface modified stimulation electrodes].

    PubMed

    Winter, U J; Fritsch, J; Liebing, J; Höpp, H W; Hilger, H H

    1993-05-01

    Newly developed pacing electrodes with so-called porous surfaces promise a significantly improved post-operative pacing and sensing threshold. We therefore investigated four newly developed leads (ELA-PMCF-860 n = 10; Biotronik-60/4-DNP n = 10, CPI-4010 n = 10, Intermedics-421-03-Biopore n = 6) connected to two different pacing devices (Intermedics NOVA II, Medtronic PASYS) in 36 patients (18 men, 18 women, age: 69.7 +/- 9.8 years) suffering from symptomatic bradycardia. The individual electrode maturation process was investigated by means of repeated measurements of pacing threshold and electrode impedance in the acute, subacute, and chronic phases, as well as of energy consumption and sensing behavior in the chronic phase. However, with the exception of the 4010, the investigated leads showed largely varying values of the pacing threshold, with individual peaks occurring from the second up to the 13th week. All leads had nearly similar chronic pacing thresholds (PMCF 0.13 +/- 0.07; DNP 0.25 +/- 0.18; Biopore 0.15 +/- 0.05; 4010 0.14 +/- 0.05 ms). Impedance measurements revealed higher, but not significantly different, values for the DNP (PMCF 582 +/- 112, DNP 755 +/- 88, Biopore 650 +/- 15, 4010 718 +/- 104 Ohm). Despite differing values for pacing threshold and impedance, the energy consumption in the chronic phase during threshold-adapted but secure stimulation (3 * impulse width at pacing threshold) was comparable.

  19. P value and the theory of hypothesis testing: an explanation for new researchers.

    PubMed

    Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël

    2010-03-01

    In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
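
    The distinction between the two frameworks can be made concrete with a one-sample z-test: the Fisher-style analysis reports a p value, while the Neyman-Pearson analysis fixes a Type I error level in advance and rejects only if the statistic falls in the critical region. The numbers below are illustrative only.

    from math import erf, sqrt

    def one_sample_z(sample_mean, mu0, sigma, n):
        # Two-sided one-sample z-test: returns the z statistic and its p value.
        z = (sample_mean - mu0) / (sigma / sqrt(n))
        p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # 2 * P(Z > |z|)
        return z, p_value

    z, p = one_sample_z(sample_mean=103.0, mu0=100.0, sigma=15.0, n=100)
    alpha, z_crit = 0.05, 1.96                  # Neyman-Pearson: pre-chosen Type I error
    print(f"z = {z:.2f}, p = {p:.3f}")          # Fisher: strength of evidence
    print("reject H0:", abs(z) > z_crit)        # hypothesis testing: critical region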

  20. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.

    PubMed

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.

  1. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    PubMed

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.

  2. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES

    PubMed Central

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    2016-01-01

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics. PMID:26806986
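
    The plug-in idea behind these results can be sketched as follows: form the sample correlation matrix, zero out off-diagonal entries below a threshold of order sqrt(log p / n), and evaluate the functional (here the squared Frobenius norm of the off-diagonal part) on the thresholded estimate. The constant in the threshold and the choice of functional below are illustrative, not the paper's tuning.

    import numpy as np

    def thresholded_frobenius(X, c=2.0):
        # Plug-in estimate of the off-diagonal squared Frobenius norm of a sparse
        # correlation matrix: hard-threshold the sample correlations at
        # c * sqrt(log p / n) and sum the squares of the survivors.
        n, p = X.shape
        R = np.corrcoef(X, rowvar=False)
        tau = c * np.sqrt(np.log(p) / n)     # sparsity-adaptive threshold
        off = R - np.eye(p)
        off[np.abs(off) < tau] = 0.0         # hard thresholding
        return np.sum(off ** 2)

    # Illustrative data: mostly independent coordinates with one correlated pair.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 50))
    X[:, 1] = 0.7 * X[:, 0] + 0.3 * rng.normal(size=500)
    print("estimated off-diagonal squared Frobenius norm:", thresholded_frobenius(X))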

  3. Probing quantum frustrated systems via factorization of the ground state.

    PubMed

    Giampaolo, Salvatore M; Adesso, Gerardo; Illuminati, Fabrizio

    2010-05-21

    The existence of definite orders in frustrated quantum systems is related rigorously to the occurrence of fully factorized ground states below a threshold value of the frustration. Ground-state separability thus provides a natural measure of frustration: strongly frustrated systems are those that cannot accommodate for classical-like solutions. The exact form of the factorized ground states and the critical frustration are determined for various classes of nonexactly solvable spin models with different spatial ranges of the interactions. For weak frustration, the existence of disentangling transitions determines the range of applicability of mean-field descriptions in biological and physical problems such as stochastic gene expression and the stability of long-period modulated structures.

  4. Disease Spreading Model with Partial Isolation

    NASA Astrophysics Data System (ADS)

    Chakraborty, Abhijit; Manna, S. S.

    2013-08-01

    The effect of partial isolation has been studied in disease spreading processes using the framework of susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models. The partial isolation is introduced by imposing a restriction: each infected individual can probabilistically infect up to a maximum number n of his susceptible neighbors, but not all. It has been observed that the critical values of the spreading rates for endemic states are non-zero in both models and decrease as 1/n with n, on all graphs including scale-free graphs. In particular, the SIR model with n = 2 turned out to be a special case, characterized by a new bond percolation threshold on square lattice.

  5. Testing and Performance Analysis of the Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    Soni, Nitin J.

    1996-01-01

    This report provides the test results and performance analysis of the multichannel error correction code decoder (MED) system for a regenerative satellite with asynchronous, frequency-division multiple access (FDMA) uplink channels. It discusses the system performance relative to various critical parameters: the coding length, data pattern, unique word value, unique word threshold, and adjacent-channel interference. Testing was performed under laboratory conditions and used a computer control interface with specifically developed control software to vary these parameters. Needed technologies - the high-speed Bose Chaudhuri-Hocquenghem (BCH) codec from Harris Corporation and the TRW multichannel demultiplexer/demodulator (MCDD) - were fully integrated into the mesh very small aperture terminal (VSAT) onboard processing architecture and were demonstrated.

  6. Coupling with ocean mixed layer leads to intraseasonal variability in tropical deep convection: Evidence from cloud-resolving simulations

    NASA Astrophysics Data System (ADS)

    Anber, Usama; Wang, Shuguang; Sobel, Adam

    2017-03-01

    The effect of coupling a slab ocean mixed layer to atmospheric convection is examined in cloud-resolving model (CRM) simulations in vertically sheared and unsheared environments without Coriolis force, with the large-scale circulation parameterized using the Weak Temperature Gradient (WTG) approximation. Surface fluxes of heat and moisture as well as radiative fluxes are fully interactive, and the vertical profile of domain-averaged horizontal wind is strongly relaxed toward specified profiles with vertical shear that varies from one simulation to the next. Vertical wind shear is found to play a critical role in the simulated behavior. There exists a threshold value of the shear strength above which the coupled system develops regular oscillations between deep convection and dry nonprecipitating states, similar to those found earlier in a much more idealized model which did not consider wind shear. The threshold value of the vertical shear found here varies with the depth of the ocean mixed layer. The time scale of the spontaneously generated oscillations also varies with mixed layer depth, from 10 days with a 1 m deep mixed layer to 50 days with a 10 m deep mixed layer. The results suggest the importance of the interplay between convection organized by vertical wind shear, radiative feedbacks, large-scale dynamics, and ocean mixed layer heat storage in real intraseasonal oscillations.

  7. Subharmonic Oscillations and Chaos in Dynamic Atomic Force Microscopy

    NASA Technical Reports Server (NTRS)

    Cantrell, John H.; Cantrell, Sean A.

    2015-01-01

    The increasing use of dynamic atomic force microscopy (d-AFM) for nanoscale materials characterization calls for a deeper understanding of the cantilever dynamics influencing scan stability, predictability, and image quality. Model development is critical to such understanding. Renormalization of the equations governing d-AFM provides a simple interpretation of cantilever dynamics as a single spring and mass system with frequency dependent cantilever stiffness and damping parameters. The renormalized model is sufficiently robust to predict the experimentally observed splitting of the free-space cantilever resonance into multiple resonances upon cantilever-sample contact. Central to the model is the representation of the cantilever-sample interaction force as a polynomial expansion with coefficients F_ij (i,j = 0, 1, 2) that account for the effective interaction stiffness parameter, the cantilever-to-sample energy transfer, and the amplitude of cantilever oscillation. Application of the Melnikov method to the model equation is shown to predict a homoclinic bifurcation of the Smale horseshoe type leading to a cascade of period doublings with increasing drive displacement amplitude culminating in chaos and loss of image quality. The threshold value of the drive displacement amplitude necessary to initiate subharmonic generation depends on the acoustic drive frequency, the effective damping coefficient, and the nonlinearity of the cantilever-sample interaction force. For parameter values leading to displacement amplitudes below threshold for homoclinic bifurcation other bifurcation scenarios can occur, some of which lead to chaos.

  8. Defect of the well-known (classical) expression for the ionization rate in gas-discharge plasma and its modification

    NASA Astrophysics Data System (ADS)

    Litvinov, I. I.

    2015-11-01

    A critical analysis is given of the well-known expression for the electron-impact ionization rate constant α_i of neutral atoms and ions, derived by linearization of the ionization cross section σ_i(ε) as a function of the electron energy near the threshold I and containing the characteristic factor (I + 2kT). Using the classical Thomson expression for the ionization cross section, it is shown that in addition to the linear slope of σ_i(ε), it is also necessary to take into account the large negative curvature of this function near the threshold. In this case, the second term in parentheses changes its sign, which means that the commonly used expression for α_i is in error by a relative amount of order 4kT/I already at moderate values of the temperature (kT/I ~ 0.1). The source of this error lies in a mathematical mistake in the original approach and is related to the incorrect choice of the sequential orders of terms small in the parameter kT/I. On the basis of a large amount of experimental data and considerations similar to the Gryzinski theory, a universal two-parameter modification of the Thomson formula (as well as the Bethe-Born formula) is proposed and a new simple expression for the ionization rate constant for arbitrary values of kT/I is derived.

  9. AC electrified jets in a flow-focusing device: Jet length scaling

    PubMed Central

    García-Sánchez, Pablo; Alzaga-Gimeno, Javier; Baret, Jean-Christophe

    2016-01-01

    We use a microfluidic flow-focusing device with integrated electrodes for controlling the production of water-in-oil drops. In a previous work, we reported that very long jets can be formed upon application of AC fields. We now study in detail the appearance of the long jets as a function of the electrical parameters, i.e., water conductivity, signal frequency, and voltage amplitude. For intermediate frequencies, we find a threshold voltage above which the jet length rapidly increases. Interestingly, this abrupt transition vanishes for high frequencies of the signal and the jet length grows smoothly with voltage. For frequencies below a threshold value, we previously reported a transition from a well-behaved uniform jet to highly unstable liquid structures in which axisymmetry is lost rather abruptly. These liquid filaments eventually break into droplets of different sizes. In this work, we characterize this transition with a diagram as a function of voltage and liquid conductivity. The electrical response of the long jets was studied via a distributed element circuit model. The model allows us to estimate the electric potential at the tip of the jet revealing that, for any combination of the electrical parameters, the breakup of the jet occurs at a critical value of this potential. We show that this voltage is around 550 V for our device geometry and choice of flow rates. PMID:27375826

  10. AC electrified jets in a flow-focusing device: Jet length scaling.

    PubMed

    Castro-Hernández, Elena; García-Sánchez, Pablo; Alzaga-Gimeno, Javier; Tan, Say Hwa; Baret, Jean-Christophe; Ramos, Antonio

    2016-07-01

    We use a microfluidic flow-focusing device with integrated electrodes for controlling the production of water-in-oil drops. In a previous work, we reported that very long jets can be formed upon application of AC fields. We now study in detail the appearance of the long jets as a function of the electrical parameters, i.e., water conductivity, signal frequency, and voltage amplitude. For intermediate frequencies, we find a threshold voltage above which the jet length rapidly increases. Interestingly, this abrupt transition vanishes for high frequencies of the signal and the jet length grows smoothly with voltage. For frequencies below a threshold value, we previously reported a transition from a well-behaved uniform jet to highly unstable liquid structures in which axisymmetry is lost rather abruptly. These liquid filaments eventually break into droplets of different sizes. In this work, we characterize this transition with a diagram as a function of voltage and liquid conductivity. The electrical response of the long jets was studied via a distributed element circuit model. The model allows us to estimate the electric potential at the tip of the jet revealing that, for any combination of the electrical parameters, the breakup of the jet occurs at a critical value of this potential. We show that this voltage is around 550 V for our device geometry and choice of flow rates.

  11. Mechanisms of Neurofeedback: A Computation-theoretic Approach.

    PubMed

    Davelaar, Eddy J

    2018-05-15

    Neurofeedback training is a form of brain training in which information about a neural measure is fed back to the trainee who is instructed to increase or decrease the value of that particular measure. This paper focuses on electroencephalography (EEG) neurofeedback in which the neural measures of interest are the brain oscillations. To date, the neural mechanisms that underlie successful neurofeedback training are still unexplained. Such an understanding would benefit researchers, funding agencies, clinicians, regulatory bodies, and insurance firms. Based on recent empirical work, an emerging theory couched firmly within computational neuroscience is proposed that advocates a critical role of the striatum in modulating EEG frequencies. The theory is implemented as a computer simulation of peak alpha upregulation, but in principle any frequency band at one or more electrode sites could be addressed. The simulation successfully learns to increase its peak alpha frequency and demonstrates the influence of threshold setting - the threshold that determines whether positive or negative feedback is provided. Analyses of the model suggest that neurofeedback can be likened to a search process that uses importance sampling to estimate the posterior probability distribution over striatal representational space, with each representation being associated with a distribution of values of the target EEG band. The model provides an important proof of concept to address pertinent methodological questions about how to understand and improve EEG neurofeedback success. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Threshold of coexistence and critical behavior of a predator-prey stochastic model in a fractal landscape

    NASA Astrophysics Data System (ADS)

    Argolo, C.; Barros, P.; Tomé, T.; Arashiro, E.; Gleria, Iram; Lyra, M. L.

    2016-08-01

    We investigate a stochastic lattice model describing a predator-prey system in a fractal scale-free landscape, mimicked by the fractal Sierpinski carpet. We determine the threshold of species coexistence, that is, the critical phase boundary related to the transition between an active state, where both species coexist and an absorbing state where one of the species is extinct. We show that the predators must live longer in order to persist in a fractal habitat. We further performed a finite-size scaling analysis in the vicinity of the absorbing-state phase transition to compute a set of stationary and dynamical critical exponents. Our results indicate that the transition belongs to the directed percolation universality class exhibited by the usual contact process model on the same fractal landscape.

  13. Determination of the maximum operating range of hydrodynamic stress in mammalian cell culture.

    PubMed

    Neunstoecklin, Benjamin; Stettler, Matthieu; Solacroup, Thomas; Broly, Hervé; Morbidelli, Massimo; Soos, Miroslav

    2015-01-20

    Application of quality by design (QbD) requires identification of the maximum operating range for parameters affecting the cell culture process. These include hydrodynamic stress, mass transfer or gradients in dissolved oxygen and pH. Since most of these are affected by the impeller design and speed, the main goal of this work was to identify a maximum operating range for hydrodynamic stress, where no variation of cell growth, productivity and product quality can be ensured. Two scale-down models were developed operating under laminar and turbulent conditions, generating repetitive oscillating hydrodynamic stress with maximum stress values ranging from 0.4 to 420 Pa, to compare the effect of the different flow regimes on the cells' behavior. Two manufacturing cell lines (CHO and Sp2/0) used for the synthesis of therapeutic proteins were employed in this study. For both cell lines multiple process outputs were used to determine the threshold values of hydrodynamic stress, such as cell growth, morphology, metabolism and productivity. They were found to be different between the cell lines, with values equal to 32.4 ± 4.4 Pa and 25.2 ± 2.4 Pa for CHO and Sp2/0, respectively. Below the measured thresholds both cell lines do not show any appreciable effect of the hydrodynamic stress on any critical quality attribute, while above, cells responded negatively to the elevated stress. To confirm the applicability of the proposed method, the obtained results were compared with data generated from classical small-scale reactors with a working volume of 3 L. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. On the need for a time- and location-dependent estimation of the NDSI threshold value for reducing existing uncertainties in snow cover maps at different scales

    NASA Astrophysics Data System (ADS)

    Härer, Stefan; Bernhardt, Matthias; Siebers, Matthias; Schulz, Karsten

    2018-05-01

    Knowledge of current snow cover extent is essential for characterizing energy and moisture fluxes at the Earth's surface. The snow-covered area (SCA) is often estimated by using optical satellite information in combination with the normalized-difference snow index (NDSI). The NDSI thereby uses a threshold for the definition if a satellite pixel is assumed to be snow covered or snow free. The spatiotemporal representativeness of the standard threshold of 0.4 is however questionable at the local scale. Here, we use local snow cover maps derived from ground-based photography to continuously calibrate the NDSI threshold values (NDSIthr) of Landsat satellite images at two European mountain sites of the period from 2010 to 2015. The Research Catchment Zugspitzplatt (RCZ, Germany) and Vernagtferner area (VF, Austria) are both located within a single Landsat scene. Nevertheless, the long-term analysis of the NDSIthr demonstrated that the NDSIthr at these sites are not correlated (r = 0.17) and different than the standard threshold of 0.4. For further comparison, a dynamic and locally optimized NDSI threshold was used as well as another locally optimized literature threshold value (0.7). It was shown that large uncertainties in the prediction of the SCA of up to 24.1 % exist in satellite snow cover maps in cases where the standard threshold of 0.4 is used, but a newly developed calibrated quadratic polynomial model which accounts for seasonal threshold dynamics can reduce this error. The model minimizes the SCA uncertainties at the calibration site VF by 50 % in the evaluation period and was also able to improve the results at RCZ in a significant way. Additionally, a scaling experiment shows that the positive effect of a locally adapted threshold diminishes using a pixel size of 500 m or larger, underlining the general applicability of the standard threshold at larger scales.
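
    The NDSI itself is a simple band ratio, NDSI = (green - SWIR) / (green + SWIR), and a pixel is classified as snow covered when the index exceeds the chosen threshold (0.4 in the standard approach). The sketch below shows that classification step with either the fixed standard threshold or a locally calibrated one; the band values are illustrative.

    import numpy as np

    def snow_mask(green, swir, ndsi_threshold=0.4):
        # Classify pixels as snow covered where NDSI = (green - swir) / (green + swir)
        # exceeds the threshold. Passing a locally calibrated threshold (e.g. one
        # derived from ground-based camera maps) replaces the standard value of 0.4.
        green = np.asarray(green, dtype=float)
        swir = np.asarray(swir, dtype=float)
        ndsi = (green - swir) / np.clip(green + swir, 1e-6, None)
        return ndsi > ndsi_threshold

    # Illustrative reflectance values for three pixels.
    green = np.array([0.60, 0.35, 0.20])
    swir = np.array([0.10, 0.25, 0.18])
    print(snow_mask(green, swir))                       # standard threshold 0.4
    print(snow_mask(green, swir, ndsi_threshold=0.55))  # locally calibrated threshold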

  15. Assessing lead thresholds for phytotoxicity and potential dietary toxicity in selected vegetable crops.

    PubMed

    Hong, C L; Jia, Y B; Yang, X E; He, Z L; Stoffella, P J

    2008-04-01

    Lead tolerance and accumulation in shoots and edible parts varied with crop species and soil type. The critical Pb concentrations at 10% yield reduction were 24.71, 28.25, and 0.567 mg kg(-1) for pakchoi, celery, and hot pepper, respectively, under hydroponic conditions, whereas they were 13.1, 3.83, and 0.734 mg kg(-1) when grown in the Inceptisol and 31.7, 30.0, and 0.854 mg kg(-1) in the Alluvial soil, respectively. Based on the threshold of human dietary toxicity for Pb, the critical levels of soil available Pb for pakchoi, celery, and hot pepper were 5.07, 8.06, and 0.48 mg kg(-1) for the Inceptisol, and 1.38, 1.47, and 0.162 mg kg(-1) for the Alluvial soil, respectively. Similarly, the total soil Pb thresholds differed among vegetable species and soil types.

  16. Humans and seasonal climate variability threaten large-bodied coral reef fish with small ranges.

    PubMed

    Mellin, C; Mouillot, D; Kulbicki, M; McClanahan, T R; Vigliola, L; Bradshaw, C J A; Brainard, R E; Chabanet, P; Edgar, G J; Fordham, D A; Friedlander, A M; Parravicini, V; Sequeira, A M M; Stuart-Smith, R D; Wantiez, L; Caley, M J

    2016-02-03

    Coral reefs are among the most species-rich and threatened ecosystems on Earth, yet the extent to which human stressors determine species occurrences, compared with biogeography or environmental conditions, remains largely unknown. With ever-increasing human-mediated disturbances on these ecosystems, an important question is not only how many species can inhabit local communities, but also which biological traits determine species that can persist (or not) above particular disturbance thresholds. Here we show that human pressure and seasonal climate variability are disproportionately and negatively associated with the occurrence of large-bodied and geographically small-ranging fishes within local coral reef communities. These species are 67% less likely to occur where human impact and temperature seasonality exceed critical thresholds, such as in the marine biodiversity hotspot: the Coral Triangle. Our results identify the most sensitive species and critical thresholds of human and climatic stressors, providing opportunity for targeted conservation intervention to prevent local extinctions.

  17. Topological dimension tunes activity patterns in hierarchical modular networks

    NASA Astrophysics Data System (ADS)

    Safari, Ali; Moretti, Paolo; Muñoz, Miguel A.

    2017-11-01

    Connectivity patterns of relevance in neuroscience and systems biology can be encoded in hierarchical modular networks (HMNs). Recent studies highlight the role of hierarchical modular organization in shaping brain activity patterns, providing an excellent substrate to promote both segregation and integration of neural information. Here, we propose an extensive analysis of the critical spreading rate (or ‘epidemic’ threshold)—separating a phase with endemic persistent activity from one in which activity ceases—on diverse HMNs. By employing analytical and computational techniques we determine the nature of such a threshold and scrutinize how it depends on general structural features of the underlying HMN. We critically discuss the extent to which current graph-spectral methods can be applied to predict the onset of spreading in HMNs and, most importantly, we elucidate the role played by the network topological dimension as a relevant and unifying structural parameter, controlling the epidemic threshold.

  18. Enhancement of epidemic spread by noise and stochastic resonance in spatial network models with viral dynamics.

    PubMed

    Tuckwell, H C; Toubiana, L; Vibert, J F

    2000-05-01

    We extend a previous dynamical viral network model to include stochastic effects. The dynamical equations for the viral and immune effector densities within a host population of size n are bilinear, and the noise is white, additive, and Gaussian. The individuals are connected with an n x n transmission matrix, with terms which decay exponentially with distance. In a single individual, for the range of noise parameters considered, it is found that increasing the amplitude of the noise tends to decrease the maximum mean virion level, and slightly accelerate its attainment. Two different spatial dynamical models are employed to ascertain the effects of environmental stochasticity on viral spread. In the first model transmission is unrestricted and there is no threshold within individuals. This model has the advantage that it can be analyzed using a Fokker-Planck approach. The noise is found both to synchronize and uniformize the trajectories of the viral levels across the population of infected individuals, and thus to promote the epidemic spread of the virus. Quantitative measures of the speed of spread and overall amplitude of the epidemic are obtained as functions of the noise and virulence parameters. The mean amplitude increases steadily without threshold effects for a fixed value of the virulence as the noise amplitude sigma is increased, and there is no evidence of a stochastic resonance. However, the speed of transmission, both with respect to its mean and variance, undergoes rapid increases as sigma changes by relatively small amounts. In the second, more realistic, model, there is a threshold for infection and an upper limit to the transmission rate. There may be no spread of infection at all in the absence of noise. With increasing noise level and a low threshold, the mean maximum virion level grows quickly and shows a broad-based stochastic resonance effect. When the threshold within individuals is increased, the mean population virion level increases only slowly as sigma increases, until a critical value is reached at which the mean infection level suddenly increases. Similar results are obtained when the parameters of the model are also randomized across the population. We conclude with a discussion and a description of a diffusion approximation for a model in which stochasticity arises through random contacts rather than fluctuation in ambient virion levels.

  19. Enhancement of the Daytime MODIS Based Aircraft Icing Potential Algorithm Using Mesoscale Model Data

    DTIC Science & Technology

    2006-03-01

    [Front-matter excerpt from the report's lists of figures and tables: Figure 25 and Figure 26 show ROC curves using 3-hour PIREPs and the Alexander Tmap, with symbols plotted at the 0.5 threshold values; Table 4 gives results using T icing potential values from the Alexander Tmap and 3-hour PIREPs.]

  20. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor for use in a controls system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times and producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
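
    The suppressor's behavior is essentially a debounce: a threshold-crossing signal produces a drive output only if it persists longer than the selected suppression time. A minimal software analogue is sketched below; the sampling interval, hold time, and direction flag are placeholders, not values from the patent.

    def suppress_transients(samples, threshold, hold_time_s, dt_s, increasing=True):
        # Return one boolean per sample: True when the measurand has been beyond
        # the threshold (in the selected direction) continuously for longer than
        # the selected suppression time, i.e. a non-transient signal.
        outputs, elapsed = [], 0.0
        for value in samples:
            crossed = value > threshold if increasing else value < threshold
            elapsed = elapsed + dt_s if crossed else 0.0
            outputs.append(elapsed > hold_time_s)
        return outputs

    # A 0.1 s spike is suppressed; a sustained excursion eventually drives the relay.
    signal = [0.0, 1.2, 0.0, 0.0, 1.1, 1.3, 1.2, 1.4, 1.2]
    print(suppress_transients(signal, threshold=1.0, hold_time_s=0.3, dt_s=0.1))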

  1. In-Network Processing of an Iceberg Join Query in Wireless Sensor Networks Based on 2-Way Fragment Semijoins

    PubMed Central

    Kang, Hyunchul

    2015-01-01

    We investigate the in-network processing of an iceberg join query in wireless sensor networks (WSNs). An iceberg join is a special type of join where only those joined tuples whose cardinality exceeds a certain threshold (called iceberg threshold) are qualified for the result. Processing such a join involves the value matching for the join predicate as well as the checking of the cardinality constraint for the iceberg threshold. In the previous scheme, the value matching is carried out as the main task for filtering non-joinable tuples while the iceberg threshold is treated as an additional constraint. We take an alternative approach, meeting the cardinality constraint first and matching values next. In this approach, with a logical fragmentation of the join operand relations on the aggregate counts of the joining attribute values, the optimal sequence of 2-way fragment semijoins is generated, where each fragment semijoin employs a Bloom filter as a synopsis of the joining attribute values. This sequence filters non-joinable tuples in an energy-efficient way in WSNs. Through implementation and a set of detailed experiments, we show that our alternative approach considerably outperforms the previous one. PMID:25774710
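
    An iceberg join in its simplest centralized form, ignoring the in-network fragment semijoins and Bloom filter synopses, keeps only those joining values whose joined cardinality reaches the iceberg threshold. The sketch below illustrates that semantics with the cardinality check performed first; it is not the energy-efficient WSN protocol described in the paper, and the relation contents are invented.

    from collections import Counter

    def iceberg_join(r_tuples, s_tuples, key, iceberg_threshold):
        # Join two lists of dict "tuples" on `key`, keeping only joining values whose
        # number of joined result tuples reaches the iceberg threshold.
        r_counts = Counter(t[key] for t in r_tuples)
        s_counts = Counter(t[key] for t in s_tuples)
        # Cardinality check first: a value qualifies only if |R_v| * |S_v| >= threshold.
        qualified = {v for v in r_counts if v in s_counts
                     and r_counts[v] * s_counts[v] >= iceberg_threshold}
        return [(r, s) for r in r_tuples for s in s_tuples
                if r[key] == s[key] and r[key] in qualified]

    R = [{"sensor": "a", "temp": 20}, {"sensor": "a", "temp": 21}, {"sensor": "b", "temp": 19}]
    S = [{"sensor": "a", "hum": 40}, {"sensor": "a", "hum": 42}, {"sensor": "b", "hum": 55}]
    print(len(iceberg_join(R, S, key="sensor", iceberg_threshold=4)))  # only sensor "a" qualifies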

  2. [Evaluation of signal noise ratio on analysis of clear cell renal cell carcinoma using DWI with multi-b values].

    PubMed

    Ding, Jiule; Xing, Wei; Chen, Jie; Dai, Yongming; Sun, Jun; Li, Dengfa

    2014-01-21

    To explore the influence of the signal-to-noise ratio (SNR) on the analysis of clear cell renal cell carcinoma (CCRCC) using DWI with multi-b values. The images of 17 cases with CCRCC were analyzed, including 17 masses and 9 pure cysts. The signal intensity of the cysts and masses was measured separately on DWI for each b value. The minimal SNR at which the signal curve still manifested as a single exponential line was recorded as the threshold. The SNR of the CCRCC was calculated on DWI for each b value and compared with the threshold by an independent two-sample t-test. The signal on DWI decreased with increasing b factors for both pure cysts and CCRCC. The threshold was 1.29 ± 0.17, and the signal intensity of the cysts on DWI with multi-b values followed a single exponential line when b ≤ 800 s/mm(2). For the CCRCC, the SNR was similar to the threshold when b = 1000 s/mm(2) (t = 0.40, P = 0.69) and lower when b = 1200 s/mm(2) (t = -2.38, P = 0.03). The SNR should be sufficient for quantitative analysis of DWI, and the maximal b value is 1000 s/mm(2) for CCRCC.
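
    The underlying check can be sketched as a mono-exponential fit S(b) = S0·exp(-b·ADC) combined with an SNR test against the reported minimal value of 1.29. The data, noise estimate, and array sizes below are synthetic; only the general bookkeeping is shown.

    import numpy as np
    from scipy.optimize import curve_fit

    b = np.array([0, 200, 400, 600, 800, 1000, 1200], float)     # s/mm^2
    signal = 100.0 * np.exp(-b * 1.8e-3) + np.random.default_rng(1).normal(0, 1.5, b.size)
    noise_sd = 1.5                                               # e.g. from a background ROI

    def mono_exp(b, s0, adc):
        return s0 * np.exp(-b * adc)

    (p_s0, p_adc), _ = curve_fit(mono_exp, b, signal, p0=(100.0, 1e-3))
    snr = signal / noise_sd
    snr_threshold = 1.29        # minimal SNR for mono-exponential behaviour (see abstract)
    usable = b[snr >= snr_threshold]
    print(f"ADC = {p_adc:.2e} mm^2/s, usable b values: {usable}")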

  3. [Correlation of perceptive temperature threshold of oral mucosa and sympathetic skin response].

    PubMed

    Wang, Z G; Dong, T Z; Li, J; Chen, G

    2018-02-09

    Objectives: To explore the critical values of temperature perception at various mucosal sites of the oral cavity and to draw perceptive temperature threshold maps in healthy volunteers, and to observe the interrelationship between subjective cognitive perception and the sympathetic skin response (SSR) under various levels of thermal stimuli. Methods: Forty-two healthy volunteers (recruited from the students of Tianjin Medical University, 16 females and 26 males) were enrolled in the present study. The whole oral mucosa of each subject was divided into multiple partitions according to mucosa type as well as tooth position. A Peltier patch (commodity name) semiconductor chip was placed in the central part of each subarea of the mucosa. The stimulus was increased or decreased in 1 ℃ steps from a baseline temperature of 37 ℃, and warm (WT) and cold (CT) perception thresholds were measured. A topographic temperature map of the oral mucosa was drawn for each subject. Furthermore, the SSR was elicited and recorded at three temperature levels of 50 ℃, 55 ℃ and 60 ℃. Analog tests with the visual analogue scale (VAS) and the McGill scale were also performed. Data were statistically analyzed with analysis of variance and generalized estimating equations. Results: The tip of the tongue was the most sensitive area for both WT [(38.8±2.1) ℃, P< 0.05] and CT [(23.5±4.2) ℃, P< 0.05]. The highest heat threshold of the gingival mucosa was in the left lower posterior teeth area [(49.9±3.7) ℃, P< 0.05], and the highest cold threshold of the gingival mucosa was in the left upper posterior teeth area [(15.9±5.5) ℃, P< 0.05]. The perceptive temperature thresholds increased gradually from the midline toward both the left and right sides, symmetrically and bilaterally. There were no statistically significant differences in temperature perception thresholds between males and females [WT, male (44.8±3.1) ℃, female (44.8±3.2) ℃, OR= 1.100, P= 0.930; CT, male (18.4±4.9) ℃, female (20.8±4.8) ℃, OR= 0.157, P= 0.210]. The SSR amplitudes at the tongue tip and the lower lip increased with rising temperature [tongue tip (4.58±4.04) mv, P< 0.05; lower lip (2.89±3.01) mv, P< 0.05]. However, SSR amplitude values showed no significant differences between males and females [tongue tip, male (2.00±2.16) mv, female (1.89±1.20) mv, P= 0.890; lower lip, male (0.94±0.82) mv, female (0.85±0.68) mv, P= 0.887]. Nevertheless, the amplitude of the SSR and the VAS scores of the subjects showed a similar trend. Conclusions: Temperature perception levels differed among sites of the lip, buccal mucosa, tongue dorsal mucosa and gingival mucosa. SSR amplitude values could reflect the responses of the mouth to thermal stimuli.

  4. A procedure for the reliability improvement of the oblique ionograms automatic scaling algorithm

    NASA Astrophysics Data System (ADS)

    Ippolito, Alessandro; Scotto, Carlo; Sabbagh, Dario; Sgrigna, Vittorio; Maher, Phillip

    2016-05-01

    A procedure based on the combined use of the Oblique Ionogram Automatic Scaling Algorithm (OIASA) and the Autoscala program is presented. Using Martyn's equivalent path theorem, 384 oblique soundings from a high-quality data set have been converted into vertical ionograms and analyzed by the Autoscala program. The ionograms pertain to the radio link between Curtin W.A. (CUR) and Alice Springs N.T. (MTE), Australia, geographical coordinates (17.60°S; 123.82°E) and (23.52°S; 133.68°E), respectively. The critical frequency foF2 values extracted from the converted vertical ionograms by Autoscala were then compared with the foF2 values derived from the maximum usable frequencies (MUFs) provided by OIASA. A quality factor Q for the MUF values autoscaled by OIASA has been identified. Q represents the difference between the foF2 value scaled by Autoscala from the converted vertical ionogram and the foF2 value obtained by applying the secant law to the MUF provided by OIASA. Using the receiver operating characteristic curve, an appropriate threshold level Qt was chosen for Q to improve the performance of OIASA.

  5. Experimental and modeling study of chloride ingress into concrete and reinforcement corrosion initiation

    NASA Astrophysics Data System (ADS)

    Yu, Hui

    The effects of reinforcement and coarse aggregate on chloride ingress into concrete and reinforcement corrosion initiation have been studied with experimental and modeling (finite element method) analyses. Once specimens were fabricated and exposed to a chloride solution, various experimental techniques were employed to determine the effect of reinforcement and coarse aggregate on time-to-corrosion and on chloride ingress and concentration at corrosion locations. Model analyses were performed to verify and explain the experimental results. Based upon the results, it was determined that unexpectedly higher chloride concentrations were present on the top of the rebar trace than to the side at the same depth, and that an inverse concentration gradient (increasing [Cl-] with increasing depth) occurred near the top of rebars. Also, the coarse aggregate volume profile in close proximity to the rebar and the spatial distribution of these aggregates, in conjunction with the physical obstruction afforded by reinforcement to chloride flow, complicate concrete sampling for Cl- intended to define the critical concentration of this species to initiate corrosion. Modeling analyses that considered cover thickness, chloride threshold concentration, reinforcement size and shape, and coarse aggregate type and percolation confirmed the experimental findings. The results, at least in part, account for the relatively wide spread in chloride corrosion threshold values reported in the literature and illustrate that more consistent chloride threshold concentrations can be acquired from mortar or paste specimens than from concrete ones.

  6. Modeling and analysis of sub-surface leakage current in nano-MOSFET under cutoff regime

    NASA Astrophysics Data System (ADS)

    Swami, Yashu; Rai, Sanjeev

    2017-02-01

    High leakage current in nano-meter regimes is becoming a significant portion of power dissipation in nano-MOSFET circuits as threshold voltage, channel length, and gate oxide thickness are scaled down to the nano-meter range. Precise leakage current evaluation and meticulous modeling at nano-meter technology scales are increasingly critical in designing low-power nano-MOSFET circuits. We present a specific compact model for sub-threshold leakage current in bulk-driven nano-MOSFETs. The proposed model is implemented in the latest updated PTM bulk nano-MOSFET model and is found to be in good accord with technology-CAD simulation data. This paper also reviews various intrinsic transistor leakage mechanisms for nano-MOSFETs in weak inversion, such as drain-induced barrier lowering (DIBL), gate-induced drain leakage (GIDL), and gate oxide tunneling (GOT) leakage. The root cause of the sub-surface leakage current is the nano-scale short channel length, which causes source-drain coupling even in the sub-threshold domain and allows carriers to surmount the barrier between source and drain. The enhanced model accounts for the dependence on drain-to-source bias (VDS), gate-to-source bias (VGS), channel length (LG), source/drain junction depth (Xj), bulk doping concentration (NBULK), and operating temperature (Top).

  7. A new characterization of three-dimensional conductivity backbone above and below the percolation threshold

    NASA Astrophysics Data System (ADS)

    Skal, Asya S.

    1996-08-01

    A new definition of the three-dimensional conductivity backbone, obtained from a distribution function of Joule heat and the Hall coefficient, is introduced. The fractal dimension of the conductivity backbone, d_fB = d - g/ν = 2.25, is obtained on both sides of the threshold from a critical exponent of the Hall coefficient, g = 0.6. This allows one to construct, below the threshold, a new order parameter of the metal-conductor transition, the two-component infinite conductivity backbone, and to test the scaling relation proposed by Alexander and Orbach [J. Phys. Rev. Lett. 43, 1982, L625] on both sides of the threshold.

  8. Inhibition by Chondroitin Sulfate E Can Specify Functional Wnt/β-Catenin Signaling Thresholds in NIH3T3 Fibroblasts*

    PubMed Central

    Willis, Catherine M.; Klüppel, Michael

    2012-01-01

    Aberrant activation of the Wnt/β-catenin signaling pathway is frequently associated with human disease, including cancer, and thus represents a key therapeutic target. However, Wnt/β-catenin signaling also plays critical roles in many aspects of normal adult tissue homeostasis. The identification of mechanisms and strategies to selectively inhibit the disease-related functions of Wnt signaling, while preserving normal physiological functions, is in its infancy. Here, we report the identification of exogenous chondroitin sulfate-E (CS-E) as an inhibitor of specific molecular and biological outcomes of Wnt3a signaling in NIH3T3 fibroblasts. We demonstrate that CS-E can decrease Wnt3a signaling through the negative regulation of LRP6 receptor activation. However, this inhibitory effect of CS-E only affected Wnt3a-mediated induction, but not repression, of target gene expression. We went on to identify a critical Wnt3a signaling threshold that differentially affects target gene induction versus repression. This signaling threshold also controlled the effects of Wnt3a on proliferation and serum starvation-induced apoptosis. Limiting Wnt3a signaling to this critical threshold, either by CS-E treatment or by ligand dilution, interfered with Wnt3a-mediated stimulation of proliferation but did not impair Wnt3a-mediated reduction of serum starvation-induced apoptosis. Treatment with pharmacological inhibitors demonstrated that both induction and repression of Wnt3a target genes in NIH3T3 cells require the canonical Wnt/β-catenin signaling cascade. Our data establish the feasibility of selective inhibition of Wnt/β-catenin transcriptional programs and biological outcomes through the exploitation of intrinsic signaling thresholds. PMID:22915582

  9. Desiccation and Mortality Dynamics in Seedlings of Different European Beech (Fagus sylvatica L.) Populations under Extreme Drought Conditions

    PubMed Central

    Bolte, Andreas; Czajkowski, Tomasz; Cocozza, Claudia; Tognetti, Roberto; de Miguel, Marina; Pšidová, Eva; Ditmarová, Ĺubica; Dinca, Lucian; Delzon, Sylvain; Cochard, Hervè; Ræbild, Anders; de Luis, Martin; Cvjetkovic, Branislav; Heiri, Caroline; Müller, Jürgen

    2016-01-01

    European beech (Fagus sylvatica L., hereafter beech), one of the major native tree species in Europe, is known to be drought sensitive. Thus, the identification of critical thresholds of drought impact intensity and duration are of high interest for assessing the adaptive potential of European beech to climate change in its native range. In a common garden experiment with one-year-old seedlings originating from central and marginal origins in six European countries (Denmark, Germany, France, Romania, Bosnia-Herzegovina, and Spain), we applied extreme drought stress and observed desiccation and mortality processes among the different populations and related them to plant water status (predawn water potential, ΨPD) and soil hydraulic traits. For the lethal drought assessment, we used a critical threshold of soil water availability that is reached when 50% mortality in seedling populations occurs (LD50SWA). We found significant population differences in LD50SWA (10.5–17.8%), and mortality dynamics that suggest a genetic difference in drought resistance between populations. The LD50SWA values correlate significantly with the mean growing season precipitation at population origins, but not with the geographic margins of beech range. Thus, beech range marginality may be more due to climatic conditions than to geographic range. The outcome of this study suggests the genetic variation has a major influence on the varying adaptive potential of the investigated populations. PMID:27379105

  10. Estimation of debris flow critical rainfall thresholds by a physically-based model

    NASA Astrophysics Data System (ADS)

    Papa, M. N.; Medina, V.; Ciervo, F.; Bateman, A.

    2012-11-01

    Real-time assessment of debris flow hazard is fundamental for setting up warning systems that can mitigate its risk. A convenient method to assess the possible occurrence of a debris flow is the comparison of measured and forecasted rainfall with rainfall threshold curves (RTC). Empirical derivation of the RTC from the analysis of rainfall characteristics of past events is not possible when the database of observed debris flows is poor or when the environment changes with time. For landslide-triggered debris flows, the above limitations may be overcome through the methodology presented here, based on the derivation of RTC from a physically based model. The critical RTC are derived from mathematical and numerical simulations based on the infinite-slope stability model, in which land instability is governed by the increase in groundwater pressure due to rainfall. The effect of rainfall infiltration on landslide occurrence is modelled through a reduced form of the Richards equation. The simulations are performed in a virtual basin, representative of the studied basin, taking into account the uncertainties linked with the definition of the soil characteristics. A large number of calculations are performed combining different values of the rainfall characteristics (intensity and duration of event rainfall and intensity of antecedent rainfall). For each combination of rainfall characteristics, the percentage of the basin that is unstable is computed. The resulting database is then elaborated to derive the RTC curves. The methodology is implemented and tested on a small basin of the Amalfi Coast (South Italy).
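
    The physically based trigger in such simulations is typically an infinite-slope factor of safety that drops below 1 as rainfall raises the pore pressure at the failure plane. A minimal sketch of that criterion in its standard textbook form is given below; the soil parameters, slope distribution, and pore pressure value are assumptions, not the authors' calibration.

    import numpy as np

    def factor_of_safety(slope_deg, z, pore_pressure,
                         cohesion=5e3, phi_deg=30.0, gamma=18e3):
        """Infinite-slope factor of safety.
        slope_deg: slope angle; z: failure-plane depth (m);
        pore_pressure: groundwater pressure at the plane (Pa);
        cohesion (Pa), phi_deg (friction angle) and gamma (unit weight, N/m^3)
        are illustrative soil properties."""
        beta = np.radians(slope_deg)
        phi = np.radians(phi_deg)
        normal_stress = gamma * z * np.cos(beta) ** 2
        shear_stress = gamma * z * np.sin(beta) * np.cos(beta)
        return (cohesion + (normal_stress - pore_pressure) * np.tan(phi)) / shear_stress

    # unstable fraction of a "virtual basin" of random slopes for one rainfall scenario
    slopes = np.random.default_rng(0).uniform(25, 45, 10_000)
    fs = factor_of_safety(slopes, z=1.5, pore_pressure=8e3)
    print("unstable fraction:", np.mean(fs < 1.0))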

  11. An early warning system for marine storm hazard mitigation

    NASA Astrophysics Data System (ADS)

    Vousdoukas, M. I.; Almeida, L. P.; Pacheco, A.; Ferreira, O.

    2012-04-01

    The present contribution presents efforts towards the development of an operational Early Warning System for storm hazard prediction and mitigation. The system consists of a nested model train of specially calibrated Wave Watch III, SWAN and XBeach models. The numerical simulations provide daily forecasts of the hydrodynamic conditions, morphological change and overtopping risk at the area of interest. The model predictions are processed by a 'translation' module based on site-specific Storm Impact Indicators (SIIs) (Ciavola et al., 2011, Storm impacts along European coastlines. Part 2: lessons learned from the MICORE project, Environmental Science & Policy, Vol 14), and warnings are issued when pre-defined threshold values are exceeded. For the present site the selected SIIs were (i) the maximum wave run-up height during the simulations; and (ii) the dune-foot horizontal retreat at the end of the simulations. Both the SIIs and the pre-defined thresholds were carefully selected on the grounds of existing experience and field data. Four risk levels were considered, each associated with an intervention approach recommended to the responsible coastal protection authority. Regular updating of the topography/bathymetry is critical for the performance of the storm impact forecasting, especially when there are significant morphological changes. The system can be extended to other critical problems, such as the implications of global warming and adaptive management strategies, while the approach presently followed, from model calibration to the early warning system for storm hazard mitigation, can be applied to other sites worldwide with minor adaptations.

  12. A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.

    PubMed

    Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês

    2015-12-01

    Diseases cooccur in individuals more often than expected by chance, which may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes' factors (BFs) do, and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology, published by Wiley Periodicals, Inc.
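
    One common way to approximate a Bayes factor from summary statistics alone is Wakefield's asymptotic formula, which needs only the effect estimate, its standard error, and an assumed prior variance on the effect size. The sketch below uses that formula for two traits at one SNP; the prior standard deviation and the BF threshold are illustrative choices, not the calibrated values from the paper.

    import numpy as np

    def approx_bayes_factor(beta_hat, se, prior_sd=0.2):
        """Asymptotic Bayes factor for association (H1: beta ~ N(0, W)) over the
        null, computed from summary statistics only (Wakefield-style).
        prior_sd is an assumed prior standard deviation on the effect size."""
        V, W = se ** 2, prior_sd ** 2
        z2 = (beta_hat / se) ** 2
        return np.sqrt(V / (V + W)) * np.exp(z2 * W / (2.0 * (V + W)))

    # two traits, one shared SNP: declare overlap if both BFs pass a threshold
    bf_trait1 = approx_bayes_factor(beta_hat=0.09, se=0.02)
    bf_trait2 = approx_bayes_factor(beta_hat=0.06, se=0.025)
    bf_threshold = 1e4     # illustrative; the paper calibrates BF thresholds to P-values
    print(bf_trait1, bf_trait2, bf_trait1 > bf_threshold and bf_trait2 > bf_threshold)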

  13. Lower thresholds for lifetime health effects in mammals from high-LET radiation - Comparison with chronic low-LET radiation.

    PubMed

    Sazykina, Tatiana G; Kryshev, Alexander I

    2016-12-01

    Lower threshold dose rates and confidence limits are quantified for lifetime radiation effects in mammalian animals from internally deposited alpha-emitting radionuclides. Extensive datasets on effects from internal alpha-emitters are compiled from the International Radiobiological Archives. In total, the compiled database includes 257 records, which are analyzed by means of non-parametric order statistics. The generic lower threshold for alpha-emitters in mammalian animals (combined datasets) is 6.6·10⁻⁵ Gy day⁻¹. Thresholds for individual alpha-emitting elements differ considerably: plutonium and americium, 2.0·10⁻⁵ Gy day⁻¹; radium, 2.1·10⁻⁴ Gy day⁻¹. The threshold for chronic low-LET radiation was previously estimated at 1·10⁻³ Gy day⁻¹. For low exposures, the following values of the alpha radiation weighting factor w_R for internally deposited alpha-emitters in mammals are quantified: w_R(α) = 15 as a generic value for the whole group of alpha-emitters; w_R(Pu) = 50 for plutonium; w_R(Am) = 50 for americium; w_R(Ra) = 5 for radium. These values are proposed to serve as radiation weighting factors in calculations of equivalent doses to non-human biota. The lower threshold dose rate for long-lived mammals (dogs) is significantly lower than that for short-lived mammals (mice): 2.7·10⁻⁵ Gy day⁻¹ and 2.0·10⁻⁴ Gy day⁻¹, respectively. The difference in thresholds exactly reflects the relationship between the natural longevity of these two species. A graded scale of severity of lifetime radiation effects in mammals is developed, based on the compiled datasets. Placed on the severity scale, the effects of internal alpha-emitters are situated in zones of considerably lower dose rates than effects of the same severity caused by low-LET radiation. RBE values calculated for effects of equal severity are found to depend on the intensity of chronic exposure: different RBE values are characteristic of low, moderate, and high lifetime exposures (30, 70, and 13, respectively). The results of the study provide a basis for selecting correct values of radiation weighting factors in dose assessment for non-human biota. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    NASA Astrophysics Data System (ADS)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
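
    A much-simplified sketch of the shell idea follows: a thick band around the current front is extracted by morphological dilation and erosion, an optimal threshold is computed from the histogram of that band only (a plain Otsu criterion is used here as a stand-in for the paper's histogram analysis), and propagation would stop once the object-to-background volume ratio inside the shell approaches one. The shell width, the Otsu stand-in, and the synthetic test image are all assumptions.

    import numpy as np
    from scipy import ndimage

    def otsu_threshold(values, nbins=64):
        """Plain-numpy Otsu: threshold maximizing between-class variance."""
        hist, edges = np.histogram(values, bins=nbins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(hist); w1 = w0[-1] - w0
        m0 = np.cumsum(hist * centers)
        mu0 = m0 / np.maximum(w0, 1)
        mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
        between = w0 * w1 * (mu0 - mu1) ** 2
        return centers[np.argmax(between)]

    def shell_threshold(image, front_mask, shell_width=3):
        """Threshold computed only from the 'propagating shell': a thick band
        around the current level-set front."""
        dil = ndimage.binary_dilation(front_mask, iterations=shell_width)
        ero = ndimage.binary_erosion(front_mask, iterations=shell_width)
        shell = dil & ~ero
        t = otsu_threshold(image[shell])
        ratio = (image[shell] > t).sum() / max((image[shell] <= t).sum(), 1)
        return t, ratio          # stop propagating when ratio is close to 1

    # example: synthetic 3-D image with a bright blob plus noise
    img = np.random.default_rng(0).normal(0, 0.1, (40, 40, 40))
    img[10:30, 10:30, 10:30] += 1.0
    mask = img > 0.5                      # pretend this is the current front
    t, ratio = shell_threshold(img, mask)
    print(f"shell threshold {t:.2f}, object/background ratio {ratio:.2f}")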

  15. Mercury demethylation in waterbird livers: Dose-response thresholds and differences among species

    USGS Publications Warehouse

    Eagles-Smith, Collin A.; Ackerman, Joshua T.; Julie, Y.E.E.; Adelsbach, T.L.

    2009-01-01

    We assessed methylmercury (MeHg) demethylation in the livers of adults and chicks of four waterbird species that commonly breed in San Francisco Bay: American avocets, black-necked stilts, Caspian terns, and Forster's terns. In adults (all species combined), we found strong evidence for a threshold model where MeHg demethylation occurred above a hepatic total mercury concentration threshold of 8.51 ± 0.93 μg/g dry weight, and there was a strong decline in %MeHg values as total mercury (THg) concentrations increased above 8.51 μg/g dry weight. Conversely, there was no evidence for a demethylation threshold in chicks, and we found that %MeHg values declined linearly with increasing THg concentrations. For adults, we also found taxonomic differences in the demethylation responses, with avocets and stilts showing a higher demethylation rate than that of terns when concentrations exceeded the threshold, whereas terns had a lower demethylation threshold (7.48 ± 1.48 μg/g dry wt) than that of avocets and stilts (9.91 ± 1.29 μg/g dry wt). Finally, we assessed the role of selenium (Se) in the demethylation process. Selenium concentrations were positively correlated with inorganic Hg in livers of birds above the demethylation threshold but not below. This suggests that Se may act as a binding site for demethylated Hg and may reduce the potential for secondary toxicity. Our findings indicate that waterbirds demethylate mercury in their livers if exposure exceeds a threshold value and suggest that taxonomic differences in demethylation ability may be an important factor in evaluating species-specific risk to MeHg exposure. Further, we provide strong evidence for a threshold of approximately 8.5 μg/g dry weight of THg in the liver where demethylation is initiated. © 2009 SETAC.
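
    The threshold model described above is essentially a hockey-stick (breakpoint) regression: %MeHg is flat below a hepatic THg breakpoint and declines above it. The sketch below fits such a model by nonlinear least squares on synthetic data; the simulated values and starting guesses are assumptions, not the study's data.

    import numpy as np
    from scipy.optimize import curve_fit

    def hockey_stick(thg, plateau, slope, breakpoint):
        """%MeHg is constant below the threshold and declines linearly above it."""
        return plateau + slope * np.maximum(thg - breakpoint, 0.0)

    rng = np.random.default_rng(42)
    thg = rng.uniform(1, 30, 120)                              # hepatic THg, ug/g dw
    pct_mehg = hockey_stick(thg, 95.0, -2.5, 8.5) + rng.normal(0, 4, thg.size)

    p0 = (90.0, -2.0, 10.0)                                    # rough starting guesses
    popt, pcov = curve_fit(hockey_stick, thg, pct_mehg, p0=p0)
    plateau, slope, breakpoint = popt
    se_breakpoint = np.sqrt(np.diag(pcov))[2]
    print(f"estimated demethylation threshold: {breakpoint:.2f} +/- {se_breakpoint:.2f} ug/g")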

  16. Oil-in-Water Emulsion Exhibits Bitterness-Suppressing Effects in a Sensory Threshold Study.

    PubMed

    Torrico, Damir Dennis; Sae-Eaw, Amporn; Sriwattana, Sujinda; Boeneke, Charles; Prinyawiwatkul, Witoon

    2015-06-01

    Little is known about how emulsion characteristics affect saltiness/bitterness perception. Sensory detection and recognition thresholds of NaCl, caffeine, and KCl in aqueous solution compared with oil-in-water emulsion systems were evaluated. For emulsions, NaCl, KCl, or caffeine were dissolved in water + emulsifier and mixed with canola oil (20% by weight). Two emulsions were prepared: emulsion 1 (viscosity = 257 cP) and emulsion 2 (viscosity = 59 cP). The forced-choice ascending concentration series method of limits (ASTM E-679-04) was used to determine detection and/or recognition thresholds at 25 °C. Group best estimate threshold (GBET) geometric means were expressed as g/100 mL. Comparing NaCl with KCl, there were no significant differences in detection GBET values for all systems (0.0197 - 0.0354). For saltiness recognition thresholds, KCl GBET values were higher compared with NaCl GBET (0.0822 - 0.1070 compared with 0.0471 - 0.0501). For NaCl and KCl, emulsion 1 and/or emulsion 2 did not significantly affect the saltiness recognition threshold compared with that of the aqueous solution. However, the bitterness recognition thresholds of caffeine and KCl in solution were significantly lower than in the emulsions (0.0242 - 0.0586 compared with 0.0754 - 0.1025). Gender generally had a marginal effect on threshold values. This study showed that, compared with the aqueous solutions, emulsions did not significantly affect the saltiness recognition threshold of NaCl and KCl, but exhibited bitterness-suppressing effects on KCl and/or caffeine. © 2015 Institute of Food Technologists®

  17. The effect of the stability threshold on time to stabilization and its reliability following a single leg drop jump landing.

    PubMed

    Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H

    2016-02-08

    We aimed to provide insight in how threshold selection affects time to stabilization (TTS) and its reliability to support selection of methods to determine TTS. Eighty-two elite youth soccer players performed six single leg drop jump landings. The TTS was calculated based on four processed signals: raw ground reaction force (GRF) signal (RAW), moving root mean square window (RMS), sequential average (SA) or unbounded third order polynomial fit (TOP). For each trial and processing method a wide range of thresholds was applied. Per threshold, reliability of the TTS was assessed through intra-class correlation coefficients (ICC) for the vertical (V), anteroposterior (AP) and mediolateral (ML) direction of force. Low thresholds resulted in a sharp increase of TTS values and in the percentage of trials in which TTS exceeded trial duration. The TTS and ICC were essentially similar for RAW and RMS in all directions; ICC's were mostly 'insufficient' (<0.4) to 'fair' (0.4-0.6) for the entire range of thresholds. The SA signals resulted in the most stable ICC values across thresholds, being 'substantial' (>0.8) for V, and 'moderate' (0.6-0.8) for AP and ML. The ICC's for TOP were 'substantial' for V, 'moderate' for AP, and 'fair' for ML. The present findings did not reveal an optimal threshold to assess TTS in elite youth soccer players following a single leg drop jump landing. Irrespective of threshold selection, the SA and TOP methods yielded sufficiently reliable TTS values, while for RAW and RMS the reliability was insufficient to differentiate between players. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. An integrated evaluation of some faecal indicator bacteria (FIB) and chemical markers as potential tools for monitoring sewage contamination in subtropical estuaries.

    PubMed

    Cabral, Ana Caroline; Stark, Jonathan S; Kolm, Hedda E; Martins, César C

    2018-04-01

    Sewage input and the relationship between chemical markers (linear alkylbenzenes and coprostanol) and fecal indicator bacteria (FIB, Escherichia coli and enterococci) were evaluated in order to establish threshold values for chemical markers in suspended particulate matter (SPM) as indicators of sewage contamination in two subtropical estuaries in South Atlantic Brazil. Neither chemical marker showed a linear relationship with FIB, due to high spatial microbiological variability; however, microbiological water quality was related to coprostanol values when analyzed by logistic regression, indicating that linear models may not be the best representation of the relationship between both classes of indicators. Logistic regression was performed with all data and separately for two sampling seasons, using 800 and 100 MPN 100 mL⁻¹ of E. coli and enterococci, respectively, as the microbiological limits of sewage contamination. Threshold values of coprostanol varied depending on the FIB and season, ranging between 1.00 and 2.23 μg g⁻¹ SPM. The range of threshold values of coprostanol for SPM is relatively higher and more variable than that suggested in the literature for sediments (0.10-0.50 μg g⁻¹), probably due to the higher concentration of coprostanol in SPM than in sediment. Temperature may affect the relationship between microbiological indicators and coprostanol, since the threshold value of coprostanol found here was similar to that in tropical areas, but lower than those found during winter in temperate areas, reinforcing the idea that threshold values should be calibrated for different climatic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
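
    Deriving a marker threshold from a fitted logistic regression reduces to solving for the concentration at which the modelled exceedance probability reaches 50%, i.e. -intercept/coefficient on the fitted scale. The sketch below does this on synthetic data with scikit-learn; the simulated relationship and the log transform are assumptions, and the real analysis was run per season and per FIB.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    coprostanol = rng.lognormal(mean=0.0, sigma=0.8, size=200)          # ug/g SPM, synthetic
    # synthetic exceedance flag: E. coli > 800 MPN/100 mL more likely at high coprostanol
    p_true = 1.0 / (1.0 + np.exp(-(3.0 * np.log(coprostanol) - 1.0)))
    exceeds_fib = rng.binomial(1, p_true)

    X = np.log(coprostanol).reshape(-1, 1)
    model = LogisticRegression().fit(X, exceeds_fib)
    b0, b1 = model.intercept_[0], model.coef_[0, 0]
    threshold = np.exp(-b0 / b1)          # concentration giving P(exceedance) = 0.5
    print(f"coprostanol threshold ~ {threshold:.2f} ug/g SPM")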

  19. Evaluation of the most suitable threshold value for modelling snow glacier melt through T- index approach: the case study of Forni Glacier (Italian Alps)

    NASA Astrophysics Data System (ADS)

    Senese, Antonella; Maugeri, Maurizio; Vuillermoz, Elisa; Smiraglia, Claudio; Diolaiuti, Guglielmina

    2014-05-01

    Glacier melt occurs whenever the surface temperature is at the melting point (273.15 K) and the net energy budget is positive. These conditions can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). When such a station is not present at the glacier surface, the assessment of actual melting conditions and the evaluation of the melt amount are difficult, and degree-day (also named T-index) models are applied. These approaches require the choice of a correct temperature threshold. In fact, melt does not necessarily occur at daily air temperatures higher than 273.15 K, since it is determined by the energy budget, which in turn is only indirectly affected by air temperature. This is the case in the late spring period, when ablation processes start at the glacier surface, progressively reducing snow thickness. In this study, to detect the most indicative air temperature threshold witnessing melt conditions in the April-June period, we analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS (at 2631 m a.s.l.) on the ablation tongue of the Forni Glacier (Italy) and by a weather station located near the studied glacier (at Bormio, 1225 m a.s.l.). Moreover, we evaluated the glacier energy budget (which gives the actual melt; Senese et al., 2012) and the snow water equivalent values during this time frame. The ablation amount was then estimated both from the surface energy balance (MEB, from supraglacial AWS data) and from the degree-day method (MT-INDEX, in this latter case applying the mean tropospheric lapse rate to the temperature data acquired at Bormio and varying the air temperature threshold), and the results were compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of daily glacier air temperature conditions, and that the major uncertainty in the computation of snow melt from degree-day models is driven by the choice of an appropriate air temperature threshold. To assess the most suitable threshold, we first analyzed hourly MEB values to detect whether ablation occurs and how long it lasts (number of hours per day). The largest part of the melting (97.7%) occurred on days featuring at least 6 melting hours, suggesting that their minimum average daily temperature be considered a suitable threshold (268.1 K). We then ran a simple T-index model applying different threshold values. The threshold that best reproduces snow melting is 268.1 K. In summary, using a threshold 5.0 K lower than the widely applied 273.15 K permits the best reconstruction of glacier melt, in agreement with the findings of van den Broeke et al. (2010) for the Greenland ice sheet. The choice of a 268 K threshold for computing degree-day amounts could therefore probably be generalized and applied not only to Greenland glaciers but also to mid-latitude and Alpine ones. This work was carried out under the umbrella of the SHARE Stelvio Project, funded by the Lombardy Region and managed by FLA and the EvK2-CNR Committee.
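
    The T-index model itself is compact enough to state directly: daily melt equals a degree-day factor times the positive excess of daily air temperature over the chosen threshold. The sketch below compares cumulative melt under the 273.15 K and 268.1 K thresholds on synthetic temperatures; the degree-day factor and the temperature series are illustrative assumptions.

    import numpy as np

    def t_index_melt(daily_temp_k, threshold_k, ddf=4.5):
        """Degree-day melt (mm w.e. per day): ddf * max(T - threshold, 0).
        ddf (mm w.e. K^-1 day^-1) is an illustrative degree-day factor."""
        return ddf * np.maximum(daily_temp_k - threshold_k, 0.0)

    # glacier temperatures could also be reconstructed from a valley station
    # with a mean tropospheric lapse rate, e.g. T_glacier = T_valley - 0.0065 * dz
    rng = np.random.default_rng(3)
    april_june_temps = 270.0 + 4.0 * rng.standard_normal(91)      # K, synthetic

    for threshold in (273.15, 268.1):
        total = t_index_melt(april_june_temps, threshold).sum()
        print(f"threshold {threshold:6.2f} K -> cumulative melt {total:7.1f} mm w.e.")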

  20. Effect of particle surface area on ice active site densities retrieved from droplet freezing spectra

    NASA Astrophysics Data System (ADS)

    Beydoun, Hassan; Polen, Michael; Sullivan, Ryan C.

    2016-10-01

    Heterogeneous ice nucleation remains one of the outstanding problems in cloud physics and atmospheric science. Experimental challenges in properly simulating particle-induced freezing processes under atmospherically relevant conditions have largely contributed to the absence of a well-established parameterization of immersion freezing properties. Here, we formulate an ice active, surface-site-based stochastic model of heterogeneous freezing with the unique feature of invoking a continuum assumption on the ice nucleating activity (contact angle) of an aerosol particle's surface that requires no assumptions about the size or number of active sites. The result is a particle-specific property g that defines a distribution of local ice nucleation rates. Upon integration, this yields a full freezing probability function for an ice nucleating particle. Current cold plate droplet freezing measurements provide a valuable and inexpensive resource for studying the freezing properties of many atmospheric aerosol systems. We apply our g framework to explain the observed dependence of the freezing temperature of droplets in a cold plate on the concentration of the particle species investigated. Normalizing to the total particle mass or surface area present to derive the commonly used ice nuclei active surface (INAS) density (ns) often cannot account for the effects of particle concentration, yet concentration is typically varied to span a wider measurable freezing temperature range. A method based on determining what is denoted an ice nucleating species' specific critical surface area is presented and explains the concentration dependence as a result of increasing the variability in ice nucleating active sites between droplets. By applying this method to experimental droplet freezing data from four different systems, we demonstrate its ability to interpret immersion freezing temperature spectra of droplets containing variable particle concentrations. It is shown that general active site density functions, such as the popular ns parameterization, cannot be reliably extrapolated below this critical surface area threshold to describe freezing curves for lower particle surface area concentrations. Freezing curves obtained below this threshold translate to higher ns values, while the ns values are essentially the same from curves obtained above the critical area threshold; ns should remain the same for a system as concentration is varied. However, we can successfully predict the lower concentration freezing curves, which are more atmospherically relevant, through a process of random sampling from g distributions obtained from high particle concentration data. Our analysis is applied to cold plate freezing measurements of droplets containing variable concentrations of particles from NX illite minerals, MCC cellulose, and commercial Snomax bacterial particles. Parameterizations that can predict the temporal evolution of the frozen fraction of cloud droplets in larger atmospheric models are also derived from this new framework.
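
    For reference, the ice nuclei active surface site density discussed above is conventionally computed from a droplet freezing spectrum as ns(T) = -ln(1 - f(T))/A, where f(T) is the frozen fraction at temperature T and A is the particle surface area per droplet. The sketch below shows only that bookkeeping; the freezing fractions and surface area are invented numbers.

    import numpy as np

    def inas_density(frozen_fraction, area_per_droplet_cm2):
        """ns(T) = -ln(1 - f(T)) / A, the standard surface-area-normalized
        active-site density derived from a droplet freezing spectrum."""
        f = np.clip(np.asarray(frozen_fraction, float), 0.0, 0.999)
        return -np.log(1.0 - f) / area_per_droplet_cm2

    temps = np.array([-10., -12., -14., -16., -18., -20.])     # degrees C
    frozen = np.array([0.02, 0.10, 0.30, 0.60, 0.85, 0.97])    # fraction frozen, synthetic
    area = 2.0e-5                                              # cm^2 of particle surface per droplet
    ns = inas_density(frozen, area)                            # active sites per cm^2
    for T, n in zip(temps, ns):
        print(f"{T:6.1f} C   ns = {n:10.3e} cm^-2")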

  1. CO32- concentration and pCO2 thresholds for calcification and dissolution on the Molokai reef flat, Hawaii

    USGS Publications Warehouse

    Yates, K.K.; Halley, R.B.

    2006-01-01

    The severity of the impact of elevated atmospheric pCO2 on coral reef ecosystems depends, in part, on how seawater pCO2 affects the balance between calcification and dissolution of carbonate sediments. Presently, there are insufficient published data that relate concentrations of pCO2 and CO32- to in situ rates of reef calcification in natural settings to accurately predict the impact of elevated atmospheric pCO2 on calcification and dissolution processes. Rates of net calcification and dissolution, CO32- concentrations, and pCO2 were measured, in situ, on patch reefs, bare sand, and coral rubble on the Molokai reef flat in Hawaii. Rates of calcification ranged from 0.03 to 2.30 mmol CaCO3 m⁻² h⁻¹ and dissolution ranged from -0.05 to -3.3 mmol CaCO3 m⁻² h⁻¹. Calcification and dissolution varied diurnally, with net calcification primarily occurring during the day and net dissolution occurring at night. These data were used to calculate threshold values for pCO2 and CO32- at which rates of calcification and dissolution are equivalent. Results indicate that calcification and dissolution are linearly correlated with both CO32- and pCO2. Threshold pCO2 and CO32- values for individual substrate types showed considerable variation. The average pCO2 threshold value for all substrate types was 654 ± 195 μatm and ranged from 467 to 1003 μatm. The average CO32- threshold value was 152 ± 24 μmol kg⁻¹, ranging from 113 to 184 μmol kg⁻¹. Ambient seawater measurements of pCO2 and CO32- indicate that the CO32- and pCO2 threshold values for all substrate types were both exceeded, simultaneously, 13% of the time at present-day atmospheric pCO2 concentrations. It is predicted that atmospheric pCO2 will exceed the average pCO2 threshold value for calcification and dissolution on the Molokai reef flat by the year 2100.
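
    The threshold values reported above amount to the zero crossings of linear fits of net calcification against pCO2 (or CO32-): the pCO2 at which calcification and dissolution balance. A minimal sketch with synthetic measurements for one substrate type follows.

    import numpy as np

    # synthetic in situ measurements for one substrate type
    pco2 = np.array([350., 420., 500., 600., 700., 820., 950.])        # uatm
    net_calc = np.array([1.8, 1.4, 0.9, 0.4, -0.1, -0.8, -1.5])        # mmol CaCO3 m^-2 h^-1

    slope, intercept = np.polyfit(pco2, net_calc, 1)      # linear fit G = a*pCO2 + b
    pco2_threshold = -intercept / slope                   # pCO2 where calcification = dissolution
    print(f"pCO2 threshold ~ {pco2_threshold:.0f} uatm")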

  2. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    PubMed

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, an image segmentation by selecting threshold values is required, which can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. To compare between visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface (p=0.79). Although visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.

  3. FDI technology spillover and threshold effect of the technology gap: regional differences in the Chinese industrial sector.

    PubMed

    Wang, Hui; Liu, Huifang; Cao, Zhiyong; Wang, Bowen

    2016-01-01

    This paper presents a new perspective: there is a double-threshold effect of the technology gap in the foreign direct investment (FDI) technology spillover process in different regional Chinese industrial sectors. A double-threshold regression model was established to examine the relation between the threshold effect of the technology gap and technology spillover. Based on the provincial panel data of Chinese industrial sectors from 2000 to 2011, the empirical results reveal two threshold values of the technology gap, 1.254 and 2.163, in the industrial sector in eastern China. There are also two threshold values in both the central and western industrial sectors, 1.516, 2.694 and 1.635, 2.714, respectively. The technology spillover is a decreasing function of the technology gap in both the eastern and western industrial sectors, but a concave function of the technology gap in the central industrial sectors. Furthermore, the FDI technology spillover has increased gradually in recent years. Based on the empirical results, suggestions were proposed concerning the introduction of FDI and the improvement of industrial added value in different regions of China.

  4. Aquatic Rational Threshold Value (RTV) Concepts for Army Environmental Impact Assessment.

    DTIC Science & Technology

    1979-07-01

    [Partially recoverable OCR fragment] ...irreversible impacts... in aquatic systems, both the possible cause-effect relationships... examination of the etymology of "rational threshold value"... ...dynamics, aqueous chemistry, toxicology, and aquatic ecology... a driving function... the shading effects of riparian... when man's use...

  5. 30 CFR 71.700 - Inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    30 CFR Section 71.700 (Mineral Resources; Mine Safety and Health Administration, Department of Labor; Coal Mine Safety and Health; Mandatory Health Standards - Surface Coal Mines and...): inhalation hazards; threshold limit values for gases, dust, fumes, mists, and vapors... limit values adopted by the American Conference of Governmental Industrial Hygienists in "Threshold...

  6. A score-statistic approach for determining threshold values in QTL mapping.

    PubMed

    Kao, Chen-Hung; Ho, Hsiang-An

    2012-06-01

    Issues in determining the threshold values of QTL mapping have so far mostly been investigated for the backcross and F2 populations, which have relatively simple genome structures. The investigation of these issues in the progeny populations after F2 (advanced populations), which have relatively more complicated genomes, is generally inadequate. As these advanced populations have been widely implemented in QTL mapping, it is important to address these issues for them in more detail. Owing to the increasing number of meiosis cycles, the genomes of the advanced populations can be very different from the backcross and F2 genomes. Therefore, special devices that consider the specific genome structures present in the advanced populations are required to resolve these issues. By considering the differences in genome structure between populations, we formulate more general score test statistics and Gaussian processes to evaluate their threshold values. In general, we found that, given a significance level and a genome size, threshold values for QTL detection are higher in denser marker maps and in more advanced populations. Simulations were performed to validate our approach.

  7. Economic values under inappropriate normal distribution assumptions.

    PubMed

    Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R

    2012-08-01

    The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal and when data with a normal distribution is subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. In order to evaluate the impacts of skewness, positive and negative excess kurtosis, standard skew normal, Pearson and the raised cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations, the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or consider alternative methods that are less sensitive to non-normality.

  8. Threshold Haemoglobin Levels and the Prognosis of Stable Coronary Disease: Two New Cohorts and a Systematic Review and Meta-Analysis

    PubMed Central

    Shah, Anoop D.; Nicholas, Owen; Timmis, Adam D.; Feder, Gene; Abrams, Keith R.; Chen, Ruoling; Hingorani, Aroon D.; Hemingway, Harry

    2011-01-01

    Background Low haemoglobin concentration has been associated with adverse prognosis in patients with angina and myocardial infarction (MI), but the strength and shape of the association and the presence of any threshold have not been precisely evaluated. Methods and findings A retrospective cohort study was carried out using the UK General Practice Research Database. 20,131 people with a new diagnosis of stable angina and no previous acute coronary syndrome, and 14,171 people with first MI who survived for at least 7 days were followed up for a mean of 3.2 years. Using semi-parametric Cox regression and multiple adjustment, there was evidence of threshold haemoglobin values below which mortality increased in a graded continuous fashion. For men with MI, the threshold value was 13.5 g/dl (95% confidence interval [CI] 13.2–13.9); the 29.5% of patients with haemoglobin below this threshold had an associated hazard ratio for mortality of 2.00 (95% CI 1.76–2.29) compared to those with haemoglobin values in the lowest risk range. Women tended to have lower threshold haemoglobin values (e.g., for MI, 12.8 g/dl; 95% CI 12.1–13.5) but the shape and strength of association did not differ between the genders, nor between patients with angina and MI. We did a systematic review and meta-analysis that identified ten previously published studies, reporting a total of only 1,127 endpoints, but none evaluated thresholds of risk. Conclusions There is an association between low haemoglobin concentration and increased mortality. A large proportion of patients with coronary disease have haemoglobin concentrations below the thresholds of risk defined here. Intervention trials would clarify whether increasing the haemoglobin concentration reduces mortality. Please see later in the article for the Editors' Summary PMID:21655315

  9. Spatially implicit approaches to understand the manipulation of mating success for insect invasion management

    Treesearch

    Takehiko Yamanaka; Andrew M. Liebhold

    2009-01-01

    Recent work indicates that Allee effects (the positive relationship between population size and per capita growth rate) are critical in determining the successful establishment of invading species. Allee effects may create population thresholds, and failure to establish is likely if invading populations fall below these thresholds. There are many mechanisms that may...

  10. Vocabulary Acquisition in L2: Does CALL Really Help?

    ERIC Educational Resources Information Center

    Averianova, Irina

    2015-01-01

    Language competence in various communicative activities in L2 largely depends on the learners' size of vocabulary. The target vocabulary of adult L2 learners should be between 2,000 high frequency words (a critical threshold) and 10,000 word families (for comprehension of university texts). For a TOEIC test, the threshold is estimated to be…

  11. Revising two-point discrimination assessment in normal aging and in patients with polyneuropathies.

    PubMed

    van Nes, S I; Faber, C G; Hamers, R M T P; Harschnitz, O; Bakkers, M; Hermans, M C E; Meijer, R J; van Doorn, P A; Merkies, I S J

    2008-07-01

    To revise the static and dynamic normative values for the two-point discrimination test and to examine its applicability and validity in patients with a polyneuropathy. Two-point discrimination threshold values were assessed in 427 healthy controls and 99 patients mildly affected by a polyneuropathy. The controls were divided into seven age groups ranging from 20-29, 30-39,..., up to 80 years and older; each group consisted of at least 30 men and 30 women. Two-point discrimination examination took place under standardised conditions on the index finger. Correlation studies were performed between the scores obtained and the values derived from the Weinstein Enhanced Sensory Test (WEST) and the arm grade of the Overall Disability SumScore (ODSS) in the patients' group (validity studies). Finally, the sensitivity to detect patients mildly affected by a polyneuropathy was evaluated for static and dynamic assessments. There was a significant age-dependent increase in the two-point discrimination values. No significant gender difference was found. The dynamic threshold values were lower than the static scores. The two-point discrimination values obtained correlated significantly with the arm grade of the ODSS (static values: r = 0.33, p = 0.04; dynamic values: r = 0.37, p = 0.02) and the scores of the WEST in patients (static values: r = 0.58, p = 0.0001; dynamic values: r = 0.55, p = 0.0002). The sensitivity for the static and dynamic threshold values was 28% and 33%, respectively. This study provides age-related normative two-point discrimination threshold values using a two-point discriminator (an aesthesiometer). This easily applicable instrument could be used as part of a more extensive neurological sensory evaluation.

  12. Effects of threshold on single-target detection by using modified amplitude-modulated joint transform correlator

    NASA Astrophysics Data System (ADS)

    Kaewkasi, Pitchaya; Widjaja, Joewono; Uozumi, Jun

    2007-03-01

    Effects of the threshold value on the detection performance of the modified amplitude-modulated joint transform correlator are quantitatively studied using computer simulation. Fingerprint and human face images are used as test scenes in the presence of noise and a contrast difference. Simulation results demonstrate that this correlator improves detection performance for both types of image used, but more so for human face images. Optimal detection of low-contrast human face images obscured by strong noise can be obtained by selecting an appropriate threshold value.

  13. Two-step adaptive management for choosing between two management actions

    USGS Publications Warehouse

    Moore, Alana L.; Walker, Leila; Runge, Michael C.; McDonald-Madden, Eve; McCarthy, Michael A

    2017-01-01

    Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period.

  14. Two-step adaptive management for choosing between two management actions.

    PubMed

    Moore, Alana L; Walker, Leila; Runge, Michael C; McDonald-Madden, Eve; McCarthy, Michael A

    2017-06-01

    Adaptive management is widely advocated to improve environmental management. Derivations of optimal strategies for adaptive management, however, tend to be case specific and time consuming. In contrast, managers might seek relatively simple guidance, such as insight into when a new potential management action should be considered, and how much effort should be expended on trialing such an action. We constructed a two-time-step scenario where a manager is choosing between two possible management actions. The manager has a total budget that can be split between a learning phase and an implementation phase. We use this scenario to investigate when and how much a manager should invest in learning about the management actions available. The optimal investment in learning can be understood intuitively by accounting for the expected value of sample information, the benefits that accrue during learning, the direct costs of learning, and the opportunity costs of learning. We find that the optimal proportion of the budget to spend on learning is characterized by several critical thresholds that mark a jump from spending a large proportion of the budget on learning to spending nothing. For example, as sampling variance increases, it is optimal to spend a larger proportion of the budget on learning, up to a point: if the sampling variance passes a critical threshold, it is no longer beneficial to invest in learning. Similar thresholds are observed as a function of the total budget and the difference in the expected performance of the two actions. We illustrate how this model can be applied using a case study of choosing between alternative rearing diets for hihi, an endangered New Zealand passerine. Although the model presented is a simplified scenario, we believe it is relevant to many management situations. Managers often have relatively short time horizons for management, and might be reluctant to consider further investment in learning and monitoring beyond collecting data from a single time period. © 2017 by the Ecological Society of America.

  15. Auditory Sensitivity and Masking Profiles for the Sea Otter (Enhydra lutris).

    PubMed

    Ghoul, Asila; Reichmuth, Colleen

    2016-01-01

    Sea otters are threatened marine mammals that may be negatively impacted by human-generated coastal noise, yet information about sound reception in this species is surprisingly scarce. We investigated amphibious hearing in sea otters by obtaining the first measurements of absolute sensitivity and critical masking ratios. Auditory thresholds were measured in air and underwater from 0.125 to 40 kHz. Critical ratios derived from aerial masked thresholds from 0.25 to 22.6 kHz were also obtained. These data indicate that although sea otters can detect underwater sounds, their hearing appears to be primarily air adapted and not specialized for detecting signals in background noise.

  16. Epidemic dynamics and endemic states in complex networks

    NASA Astrophysics Data System (ADS)

    Pastor-Satorras, Romualdo; Vespignani, Alessandro

    2001-06-01

    We study, by analytical methods and large-scale simulations, a dynamical model for the spreading of epidemics in complex networks. In networks with exponentially bounded connectivity we recover the usual epidemic behavior, with a threshold defining a critical point below which the infection prevalence is null. By contrast, on a wide range of scale-free networks we observe the absence of an epidemic threshold and of its associated critical behavior. This implies that scale-free networks are prone to the spreading and persistence of infections, whatever spreading rate the epidemic agents might possess. These results can help in understanding computer virus epidemics and other spreading phenomena on communication and social networks.
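
    The vanishing threshold described in this record follows from the heterogeneous mean-field treatment of the SIS model on uncorrelated networks, which can be summarized as:

```latex
% Heterogeneous mean-field SIS threshold on an uncorrelated network: for
% scale-free networks with P(k) ~ k^(-gamma) and 2 < gamma <= 3, <k^2> diverges
% with network size, so the threshold vanishes in the thermodynamic limit.
\lambda_c = \frac{\langle k \rangle}{\langle k^{2} \rangle}
```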

  17. Critical Deposition Condition of CoNiCrAlY Cold Spray Based on Particle Deformation Behavior

    NASA Astrophysics Data System (ADS)

    Ichikawa, Yuji; Ogawa, Kazuhiro

    2017-02-01

    Previous research has demonstrated deposition of MCrAlY coatings via the cold spray process; however, the deposition mechanism of cold spraying has not been clearly explained and has only been described empirically in terms of impact velocity. The purpose of this study was to elucidate the critical deposition condition. Microscale experimental measurements of individual particle deposit dimensions were combined with numerical simulation to investigate particle deformation behavior. Dimensional parameters were determined from scanning electron microscopy analysis of focused ion beam-fabricated cross sections of deposited particles to describe the deposition threshold. From Johnson-Cook finite element method simulation results, there is a direct correlation between the dimensional parameters and the impact velocity. Therefore, the critical velocity can describe the deposition threshold. Moreover, the maximum equivalent plastic strain is also strongly dependent on the impact velocity. Thus, the threshold condition required for particle deposition can instead be represented by the equivalent plastic strain of the particle and substrate. For particle-substrate combinations of similar materials, the substrate is more difficult to deform. Thus, this study establishes that the dominant factor of particle deposition in the cold spray process is the maximum equivalent plastic strain of the substrate, which occurs during impact and deformation.
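
    For context, the Johnson-Cook simulations mentioned here rest on the standard Johnson-Cook flow stress law; the constants A, B, C, n, m and the reference strain rate are material calibration parameters not given in this record:

```latex
% Johnson-Cook flow stress (standard constitutive form used in such FEM impact
% simulations): strain hardening, strain-rate hardening, and thermal softening.
\sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
         \left(1 + C \ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
         \left(1 - T^{*\,m}\right),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}
```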

  18. Threshold behaviors of social dynamics and financial outcomes of Ponzi scheme diffusion in complex networks

    NASA Astrophysics Data System (ADS)

    Fu, Peihua; Zhu, Anding; Ni, He; Zhao, Xin; Li, Xiulin

    2018-01-01

    Ponzi schemes always lead to mass disasters after collapse. It is therefore important to study the critical behaviors of both the social dynamics and the financial outcomes of Ponzi scheme diffusion in complex networks. We develop the potential-investor-divestor-investor (PIDI) model by considering the individual behavior of direct reinvestment. We find that only the spreading rate determines whether an epidemic-like outbreak occurs, while the reinvestment rate determines whether the final state is zero or non-zero, for the social dynamics of both homogeneous and inhomogeneous networks. Financially, we find that there is a critical spreading threshold above which the scheme need not use its own initial capital to take off, i.e. the start-up cost is covered by the rapidly inflowing funds. However, the higher the cost per recruit, the larger the critical spreading threshold and the worse the financial outcomes. Theoretical and simulation results also reveal that schemes take off more easily in inhomogeneous networks. The reinvestment rate does not affect the take-off, but it improves the financial outcome in the early stages and postpones the onset of financial collapse. Some policy suggestions for the regulator, from the perspective of social physics, are proposed at the end of the paper.
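
    As a purely illustrative aid, the sketch below integrates a well-mixed, three-compartment caricature of the recruitment, divestment, and reinvestment dynamics described in this abstract, with simple financial bookkeeping. The compartments, transitions, and parameter values are assumptions patterned on the PIDI description, not the authors' equations, and the network structure that is central to the paper is ignored.

```python
# Assumed parameters and transitions (illustrative, not the paper's PIDI equations)
BETA = 0.4     # spreading (recruitment) rate
DELTA = 0.2    # divestment rate
RHO = 0.3      # reinvestment rate (divestors who invest again)
COST = 0.5     # promotion cost per recruit, paid by the scheme
DEPOSIT = 1.0  # deposit per (re)investment
PAYOUT = 1.2   # promised payout per divestment

def simulate(steps=2000, dt=0.01):
    """Well-mixed caricature: potential investors P, investors I, divestors D,
    plus the scheme's cumulative cash position."""
    p, i, d, cash = 0.99, 0.01, 0.0, 0.0
    for _ in range(steps):
        new_inv = BETA * p * i * dt          # recruitment of potential investors
        new_div = DELTA * i * dt             # investors cashing out
        re_inv = RHO * d * dt                # divestors re-entering
        p -= new_inv
        i += new_inv - new_div + re_inv
        d += new_div - re_inv
        cash += DEPOSIT * (new_inv + re_inv) - PAYOUT * new_div - COST * new_inv
    return p, i, d, cash

print("P, I, D, cash:", tuple(round(x, 3) for x in simulate()))
```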

  19. Effect of a single dose of dextromethorphan on psychomotor performance and working memory capacity.

    PubMed

    Al-Kuraishy, Hayder M; Al-Gareeb, Ali I; Ashor, Ammar Waham

    2012-04-01

    Previous studies show that the prolonged use of dextromethorphan produces cognitive deterioration in humans. The aim of this study was to investigate the effect of a single dose of dextromethorphan on psychomotor performance and working memory capacity. This is a randomized, double-blind, controlled, prospective study. Thirty-six medical students (17 women, 19 men) enrolled in the study; half of them (7 women, 11 men) were given placebo, while the other half (10 women, 8 men) received dextromethorphan. The choice reaction time, critical flicker fusion threshold, and N-back working memory task were measured before and 2 h after taking the drugs. Dextromethorphan produced a significant deterioration in the 3-back working memory task (P<0.05). No significant changes were seen in the choice reaction time components (total, recognition, motor) or the critical flicker fusion threshold (P>0.05). Placebo, on the other hand, produced no significant changes in the choice reaction time, critical flicker fusion threshold, or N-back working memory task (P>0.05). A single dose of dextromethorphan has no effect on attention and arousal but may significantly impair working memory capacity.

  20. Dynamical Interplay between Awareness and Epidemic Spreading in Multiplex Networks

    NASA Astrophysics Data System (ADS)

    Granell, Clara; Gómez, Sergio; Arenas, Alex

    2013-09-01

    We present an analysis of the interrelation between two processes on top of multiplex networks: the spreading of an epidemic, and the diffusion of information awareness that prevents its infection. This scenario is representative of an epidemic process spreading on a network of persistent real contacts, and a cyclic information awareness process diffusing in the network of virtual social contacts between the same individuals. The topology corresponds to a multiplex network in which the two diffusive processes interact and affect each other. The analysis, using a microscopic Markov chain approach, reveals the phase diagram of the incidence of the epidemic and allows us to capture the evolution of the epidemic threshold as a function of the topological structure of the multiplex and the interrelation with the awareness process. Interestingly, the critical point for the onset of the epidemic has a critical value (the metacritical point), defined by the awareness dynamics and the topology of the virtual network, from which the onset increases and the epidemic incidence decreases.

  1. Dynamical interplay between awareness and epidemic spreading in multiplex networks.

    PubMed

    Granell, Clara; Gómez, Sergio; Arenas, Alex

    2013-09-20

    We present an analysis of the interrelation between two processes on top of multiplex networks: the spreading of an epidemic, and the diffusion of information awareness that prevents its infection. This scenario is representative of an epidemic process spreading on a network of persistent real contacts, and a cyclic information awareness process diffusing in the network of virtual social contacts between the same individuals. The topology corresponds to a multiplex network in which the two diffusive processes interact and affect each other. The analysis, using a microscopic Markov chain approach, reveals the phase diagram of the incidence of the epidemic and allows us to capture the evolution of the epidemic threshold as a function of the topological structure of the multiplex and the interrelation with the awareness process. Interestingly, the critical point for the onset of the epidemic has a critical value (the metacritical point), defined by the awareness dynamics and the topology of the virtual network, from which the onset increases and the epidemic incidence decreases.
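
    The microscopic Markov chain approach (MMCA) mentioned here can be illustrated on a single layer. The sketch below iterates a standard discrete-time MMCA for SIS spreading on an adjacency matrix; it omits the awareness layer and the inter-layer coupling that the paper actually analyses, and uses the simpler update without same-step reinfection.

```python
import numpy as np

def mmca_sis(adj, beta, mu, p0=0.1, n_iter=1000, tol=1e-10):
    """Discrete-time microscopic Markov chain (MMCA) iteration for SIS
    spreading on a single network layer.

    adj  : (n, n) adjacency matrix
    beta : infection probability per infected contact and time step
    mu   : recovery probability per time step
    Returns the stationary infection probabilities p_i."""
    p = np.full(adj.shape[0], p0, dtype=float)
    for _ in range(n_iter):
        # q_i = probability that node i is not infected by any neighbour
        q = np.prod(1.0 - beta * adj * p[np.newaxis, :], axis=1)
        p_new = (1.0 - p) * (1.0 - q) + (1.0 - mu) * p
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Example on a small random graph
rng = np.random.default_rng(1)
a = (rng.random((200, 200)) < 0.05).astype(float)
a = np.triu(a, 1)
a = a + a.T
print("mean stationary prevalence:", mmca_sis(a, beta=0.08, mu=0.2).mean())
```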

  2. The social behavior and the evolution of sexually transmitted diseases

    NASA Astrophysics Data System (ADS)

    Gonçalves, Sebastián; Kuperman, Marcelo

    2003-10-01

    We introduce a model for the evolution of sexually transmitted diseases in which social behavior is incorporated as a determinant factor for the further propagation of the infection. The system may be regarded as a society of agents in which, in principle, anyone can sexually interact with anyone else in the population; in this contribution only the homosexual case is analyzed. Different social behaviors are reflected in a distribution of sexual attitudes ranging from the more conservative to the more promiscuous, measured by what we call the promiscuity parameter. In terms of this parameter, we find a critical behavior for the evolution of the disease: there is a threshold below which the epidemic does not occur. We relate this critical value of promiscuity to what epidemiologists call the basic reproductive number, connecting it quantitatively with the other parameters of the model, namely the infectivity and the infective period. We also consider the possibility of subjects being grouped in couples.
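
    The basic reproductive number referred to here is, in its textbook form for a homogeneously mixing population (the paper's exact expression may differ):

```latex
% Textbook basic reproductive number for an STD in a homogeneously mixing
% population: beta is the transmission probability per partnership, c the rate
% of acquiring new partners (the quantity the promiscuity parameter controls),
% and D the mean duration of infectiousness. An epidemic can occur only if
% R_0 exceeds one.
R_0 = \beta\, c\, D
```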

  3. Estimation of Al2O3 critical temperature using a Langmuir probe in laser ablation

    NASA Astrophysics Data System (ADS)

    Yahiaoui, K.; Abdelli-Messaci, S.; Messaoud Aberkane, S.; Kellou, A.

    2016-11-01

    Pulsed laser deposition (PLD) has demonstrated its capacity for growing thin films under moderate laser intensity. When the laser intensity increases, however, the presence of droplets on the thin film limits the efficiency of PLD, so the process requires an optimization study. An experimental study was therefore conducted to correlate the appearance of these droplets with the laser fluence. Understanding the physical mechanisms at work during ablation, and controlling the deposition parameters, allowed a reliable process to be obtained. Our experiment consists in measuring the amount of matter ejected from a polycrystalline alumina target as a function of the fluence of the irradiating KrF laser. Depending on the laser fluence, several ablation regimes were identified. Below a threshold value of 12 J/cm2, ablation was attributed to normal evaporation, desorption and nonthermal processes, whereas above this threshold it was attributed to the phase explosion phenomenon, which is responsible for droplet formation when the surface temperature approaches the critical temperature Ttc. A negatively charged collector was used to collect the positive ions in the plume, and their time-of-flight (TOF) signals were used to estimate the appropriate Ttc for the alumina target. The ion yield, current and kinetic energy were deduced from the TOF signal. Their evolution shows the occurrence of an optical breakdown in the vapor plume, which is well correlated with the onset of the phase explosion. At 10 J/cm2, the ion velocities collected by the probe were compared with those obtained from an optical emission spectroscopy diagnostic and are discussed. To confirm the occurrence of phase explosion through the appearance of droplets, several thin films were grown on Si (100) substrates in vacuum at different laser fluences and characterized by scanning electron microscopy. The results correlate well with those obtained from the mass measurements as a function of laser fluence.

  4. Development of Extended Period Pressure-Dependent Demand Water Distribution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judi, David R.; Mcpherson, Timothy N.

    2015-03-20

    Los Alamos National Laboratory (LANL) has used modeling and simulation of water distribution systems for N-1 contingency analyses to assess criticality of water system assets. Critical components considered in these analyses include pumps, tanks, and supply sources, in addition to critical pipes or aqueducts. A contingency represents the complete removal of the asset from system operation. For each contingency, an extended period simulation (EPS) is run using EPANET. An EPS simulates water system behavior over a time period, typically at least 24 hours. It assesses the ability of a system to respond and recover from asset disruption through distributed storage in tanks throughout the system. Contingencies of concern are identified as those in which some portion of the water system has unmet delivery requirements. A delivery requirement is defined as an aggregation of water demands within a service area, similar to an electric power demand. The metric used to identify areas of unmet delivery requirement in these studies is a pressure threshold of 15 pounds per square inch (psi). This pressure threshold is used because it is below the required pressure for fire protection. Any location in the model with pressure that drops below this threshold at any time during an EPS is considered to have unmet service requirements and is used to determine cascading consequences. The outage area for a contingency is the aggregation of all service areas with a pressure below the threshold at any time during the EPS.
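
    The screening logic described in this record (one EPS per N-1 contingency, flagging any service area whose pressure drops below 15 psi) can be sketched as follows. The helper `run_eps_without` stands in for a hydraulic solver call (for example, an EPANET wrapper); it, the node names, and the service-area mapping in the demo are hypothetical.

```python
def flag_low_pressure_areas(pressure_by_node, service_area_of, threshold_psi=15.0):
    """Given EPS results as {node: iterable of pressures (psi)}, return the set
    of service areas in which any node drops below the threshold at any time."""
    return {
        service_area_of(node)
        for node, series in pressure_by_node.items()
        if min(series) < threshold_psi
    }

def n_minus_1_screening(assets, run_eps_without, service_area_of):
    """Run the EPS once per contingency (complete removal of one asset) and
    collect the outage area of each contingency that leaves unmet service."""
    return {
        asset: areas
        for asset in assets
        if (areas := flag_low_pressure_areas(run_eps_without(asset), service_area_of))
    }

# Tiny fake "solver" so the sketch executes end to end (hypothetical data).
if __name__ == "__main__":
    def fake_eps(asset):
        pressures = {"J1": [62, 58, 55], "J2": [40, 22, 18], "J3": [30, 25, 20]}
        if asset == "pump_A":          # removing the pump starves J2's zone
            pressures["J2"] = [20, 12, 9]
        return pressures

    area = {"J1": "zone_north", "J2": "zone_south", "J3": "zone_south"}.get
    print(n_minus_1_screening(["pump_A", "tank_B"], fake_eps, area))
```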

  5. [Research on the threshold of Chl-a in Lake Taihu based on microcystins].

    PubMed

    Wei, Dai-chun; Su, Jing; Ji, Dan-feng; Fu, Xiao-yong; Wang, Ji; Huo, Shou-liang; Cui, Chi-fei; Tang, Jun; Xi, Bei-dou

    2014-12-01

    Water samples were collected in Lake Taihu from June to October 2013 in order to investigate the threshold of chlorophyll a (Chl-a). The concentrations of three microcystin isomers (MC-LR, MC-RR, MC-YR) were detected by means of solid phase extraction and high performance liquid chromatography-tandem mass spectrometry. The correlations between the various MCs and eutrophication factors, such as total nitrogen (TN), total phosphorus (TP), chlorophyll a and the permanganate index, were analyzed. The threshold of Chl-a was studied based on the relationships between MC-LR, MCs and Chl-a. The results showed that Lake Taihu was severely polluted by MCs and that their spatial distribution could be described as follows: the concentration in Meiliang Bay was the highest, followed by Gonghu Bay and Western Lake, and Lake Center; the least polluted areas were Lake Xuhu and Southern Lake. The concentration of MC-LR was the highest among the three MCs. The correlation analysis indicated that MC-LR, MC-RR, MC-YR and total MCs were strongly positively correlated with the permanganate index, TN, TP and Chl-a (P < 0.01). The threshold value of Chl-a was 12.26 mg·m(-3) according to the standard thresholds of MC-LR and MCs in drinking water. The threshold value of Chl-a in Lake Taihu was very close to the standard in the State of North Carolina, which demonstrated that the threshold value provided in this study is reasonable.

  6. Evaluation of Maryland abutment scour equation through selected threshold velocity methods

    USGS Publications Warehouse

    Benedict, S.T.

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour to the field measurements were compared. Results indicated that performance of the Maryland abutment scour equation was sensitive to the threshold velocity with some threshold velocity methods producing better estimates of predicted scour than did others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation are provided, and limitations are noted.

  7. Bilevel thresholding of sliced image of sludge floc.

    PubMed

    Chu, C P; Lee, D J

    2004-02-15

    This work examined the feasibility of employing various thresholding algorithms to determine the optimal bilevel thresholding value for estimating the geometric parameters of sludge flocs from microtome-sliced images and from confocal laser scanning microscope images. Morphological information extracted from images depends on the bilevel thresholding value. According to the evaluation on luminescence-inverted images and fractal curves (the quadric Koch curve and the Sierpinski carpet), Otsu's method yields more stable performance than other histogram-based algorithms and is chosen to obtain the porosity. The maximum convex perimeter method, however, can probe the shapes and spatial distribution of the pores among the biomass granules in real sludge flocs. A combined algorithm is recommended for probing the sludge floc structure.
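
    Otsu's method, chosen in this record for estimating porosity, selects the threshold that maximizes the between-class variance of the grey-level histogram. A minimal NumPy sketch (run here on synthetic data, not the sludge-floc images) follows:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the bilevel threshold that maximises the between-class variance
    (Otsu's method) for a greyscale image given as a NumPy array."""
    hist, bin_edges = np.histogram(image.ravel(), bins=nbins)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    hist = hist.astype(float)
    w0 = np.cumsum(hist)                  # class-0 weight for each candidate cut
    w1 = w0[-1] - w0                      # class-1 weight
    m0 = np.cumsum(hist * centers)        # unnormalised class-0 mean
    m1 = m0[-1] - m0
    valid = (w0[:-1] > 0) & (w1[:-1] > 0)
    between = np.zeros(nbins - 1)
    between[valid] = (w0[:-1][valid] * w1[:-1][valid]
                      * (m0[:-1][valid] / w0[:-1][valid]
                         - m1[:-1][valid] / w1[:-1][valid]) ** 2)
    return centers[np.argmax(between)]

def porosity(image, threshold):
    """Fraction of pixels classified as pore (here assumed to be the darker class)."""
    return float(np.mean(image < threshold))

# Synthetic bimodal example
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 5000)])
t = otsu_threshold(img)
print(f"Otsu threshold: {t:.1f}, porosity estimate: {porosity(img, t):.2f}")
```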

  8. CHANGES IN THE ANAEROBIC THRESHOLD IN AN ANNUAL CYCLE OF SPORT TRAINING OF YOUNG SOCCER PLAYERS

    PubMed Central

    Andrzejewski, M.; Wieczorek, A.; Barinow-Wojewódzki, A.; Jadczak, Ł.; Adrian, S.; Pietrzak, M.; Wieczorek, S.

    2013-01-01

    The aim of the study was to assess changes in the anaerobic threshold of young soccer players in an annual training cycle. A group of highly trained 15-18 year old players of KKS Lech Poznań were tested. The tests included an annual training macrocycle, and its individual stages resulted from the time structure of the sports training. In order to assess the level of exercise capacities of the players, a field exercise test of increasing intensity was carried out on a soccer pitch. The test made it possible to determine the 4 millimolar lactate threshold (T LA 4 mmol · l-1) on the basis of the lactate concentration in blood [LA], to establish the threshold running speed and the threshold heart rate [HR]. The threshold running speed at the level of the 4 millimolar lactate threshold was established using the two-point form of the equation of a straight line. The obtained indicators of the threshold running speed allowed for precise establishment of effort intensity used in individual training in developing aerobic endurance. In order to test the significance of differences in mean values between four dates of tests, a non-parametric Friedman ANOVA test was used. The significance of differences between consecutive dates of tests was determined using a post-hoc Friedman ANOVA test. The tests showed significant differences in values of selected indicators determined at the anaerobic threshold in various stages of an annual training cycle of young soccer players. The most beneficial changes in terms of the threshold running speed were noted on the fourth date of tests, when the participants had the highest values of 4.01 m · s-1 for older juniors, and 3.80 m · s-1 for younger juniors. This may be indicative of effective application of an individualized programme of training loads and of good preparation of teams for competition in terms of players’ aerobic endurance. PMID:24744480

  9. Changes in the anaerobic threshold in an annual cycle of sport training of young soccer players.

    PubMed

    Sliwowski, R; Andrzejewski, M; Wieczorek, A; Barinow-Wojewódzki, A; Jadczak, L; Adrian, S; Pietrzak, M; Wieczorek, S

    2013-06-01

    The aim of the study was to assess changes in the anaerobic threshold of young soccer players in an annual training cycle. A group of highly trained 15-18 year old players of KKS Lech Poznań were tested. The tests included an annual training macrocycle, and its individual stages resulted from the time structure of the sports training. In order to assess the level of exercise capacities of the players, a field exercise test of increasing intensity was carried out on a soccer pitch. The test made it possible to determine the 4 millimolar lactate threshold (T LA 4 mmol · l(-1)) on the basis of the lactate concentration in blood [LA], to establish the threshold running speed and the threshold heart rate [HR]. The threshold running speed at the level of the 4 millimolar lactate threshold was established using the two-point form of the equation of a straight line. The obtained indicators of the threshold running speed allowed for precise establishment of effort intensity used in individual training in developing aerobic endurance. In order to test the significance of differences in mean values between four dates of tests, a non-parametric Friedman ANOVA test was used. The significance of differences between consecutive dates of tests was determined using a post-hoc Friedman ANOVA test. The tests showed significant differences in values of selected indicators determined at the anaerobic threshold in various stages of an annual training cycle of young soccer players. The most beneficial changes in terms of the threshold running speed were noted on the fourth date of tests, when the participants had the highest values of 4.01 m · s(-1) for older juniors, and 3.80 m · s(-1) for younger juniors. This may be indicative of effective application of an individualized programme of training loads and of good preparation of teams for competition in terms of players' aerobic endurance.
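
    The two-point construction mentioned here amounts to linear interpolation between the two incremental-test measurements that bracket the 4 mmol/l lactate concentration. A minimal sketch with hypothetical test data (not the study's measurements):

```python
def threshold_speed(speeds, lactates, target=4.0):
    """Running speed at a target blood lactate concentration, obtained from the
    two-point form of a straight line through the measurements that bracket the
    target (speeds in m/s, lactates in mmol/l)."""
    for (v1, la1), (v2, la2) in zip(zip(speeds, lactates),
                                    zip(speeds[1:], lactates[1:])):
        if la1 <= target <= la2:
            return v1 + (target - la1) * (v2 - v1) / (la2 - la1)
    raise ValueError("target lactate not bracketed by the measurements")

# Hypothetical incremental-test data
speeds = [3.0, 3.4, 3.8, 4.2]        # m/s
lactates = [1.8, 2.6, 3.9, 5.4]      # mmol/l
print(f"speed at 4 mmol/l: {threshold_speed(speeds, lactates):.2f} m/s")
```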

  10. Low latency counter event indication

    DOEpatents

    Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY

    2008-09-16

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.

  11. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-08-24

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
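
    The counting scheme described in this patent record can be modelled in a few lines: low-order bits in a fast counter with an overflow bit, high-order bits in memory, a threshold register, and an arm bit that causes the interrupt to fire on the next roll-over. The bit widths and threshold below are illustrative, and the control-device polling is simplified to one check per event.

```python
class HybridCounter:
    """Software model of the hybrid counter described above (illustrative only)."""

    def __init__(self, low_bits=12, interrupt_threshold=3):
        self.low_bits = low_bits
        self.low = 0                # hardware counter (lower-order bits)
        self.overflow = False       # overflow bit monitored by the control device
        self.high = 0               # second count value held in the memory array
        self.threshold = interrupt_threshold
        self.armed = False          # "interrupt arm" bit
        self.interrupt = False

    def count_event(self):
        self.low = (self.low + 1) % (1 << self.low_bits)
        if self.low == 0:                 # roll-over of the lower-order bits
            if self.armed:
                self.interrupt = True     # fast interrupt indication
            self.overflow = True
        self._control_poll()

    def _control_poll(self):
        """Control device: on overflow, increment the in-memory high word and
        arm the interrupt once it equals the threshold register."""
        if self.overflow:
            self.overflow = False
            self.high += 1
            if self.high == self.threshold:
                self.armed = True

    @property
    def value(self):
        return (self.high << self.low_bits) | self.low


c = HybridCounter(low_bits=4, interrupt_threshold=2)
for _ in range(3 * 16):        # three roll-overs of a 4-bit low counter
    c.count_event()
print(c.value, c.interrupt)    # interrupt fires on the roll-over after arming
```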

  12. Mapping Critical Loads of Atmospheric Nitrogen Deposition in the Rocky Mountains, USA

    NASA Astrophysics Data System (ADS)

    Nanus, L.; Clow, D. W.; Stephens, V. C.; Saros, J. E.

    2010-12-01

    Atmospheric nitrogen (N) deposition can adversely affect sensitive aquatic ecosystems at high-elevations in the western United States. Critical loads are the amount of deposition of a given pollutant that an ecosystem can receive below which ecological effects are thought not to occur. GIS-based landscape models were used to create maps for high-elevation areas across the Rocky Mountain region showing current atmospheric deposition rates of nitrogen (N), critical loads of N, and exceedances of critical loads of N. Atmospheric N deposition maps for the region were developed at 400 meter resolution using gridded precipitation data and spatially interpolated chemical concentrations in rain and snow. Critical loads maps were developed based on chemical thresholds corresponding to observed ecological effects, and estimated ecosystem sensitivities calculated from basin characteristics. Diatom species assemblages were used as an indicator of ecosystem health to establish critical loads of N. Chemical thresholds (concentrations) were identified for surface waters by using a combination of in-situ growth experiments and observed spatial patterns in surface-water chemistry and diatom species assemblages across an N deposition gradient. Ecosystem sensitivity was estimated using a multiple-linear regression approach in which observed surface water nitrate concentrations at 530 sites were regressed against estimates of inorganic N deposition and basin characteristics (topography, soil type and amount, bedrock geology, vegetation type) to develop predictive models of surface water chemistry. Modeling results indicated that the significant explanatory variables included percent slope, soil permeability, and vegetation type (including barren land, shrub, and grassland) and were used to predict high-elevation surface water nitrate concentrations across the Rocky Mountains. Chemical threshold concentrations were substituted into an inverted form of the model equations and applied to estimate critical loads for each stream reach within a basin, from which critical loads maps were created. Atmospheric N deposition maps were overlaid on the critical loads maps to identify areas in the Rocky Mountain region where critical loads are being exceeded, or where they may do so in the future. This approach may be transferable to other high-elevation areas of the United States and the world.
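
    Schematically, the invert-the-regression step described here can be written as follows; the coefficient and predictor names are placeholders, and the published model's exact functional form may differ:

```latex
% Regress observed surface-water nitrate on N deposition and basin
% characteristics, then substitute the chemical threshold and solve for the
% deposition rate to obtain the critical load; exceedance is the difference
% between current deposition and the critical load.
[\mathrm{NO_3}]_i = \beta_0 + \beta_1\, N_{\mathrm{dep},i} + \sum_k \gamma_k x_{k,i} + \varepsilon_i
\quad\Longrightarrow\quad
\mathrm{CL}_i = \frac{[\mathrm{NO_3}]_{\mathrm{thr}} - \beta_0 - \sum_k \gamma_k x_{k,i}}{\beta_1},
\qquad \mathrm{Exceedance}_i = N_{\mathrm{dep},i} - \mathrm{CL}_i
```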

  13. Great differences in the critical erosion threshold between surface and subsurface sediments: A field investigation of an intertidal mudflat, Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Shi, Benwei; Wang, Ya Ping; Wang, Li Hua; Li, Peng; Gao, Jianhua; Xing, Fei; Chen, Jing Dong

    2018-06-01

    Understanding of bottom sediment erodibility is necessary for the sustainable management and protection of coastlines, and is of great importance for numerical models of sediment dynamics and transport. To investigate the dependence of sediment erodibility on degree of consolidation, we measured turbidity, waves, tidal currents, intratidal bed-level changes, and sediment properties on an exposed macrotidal mudflat during a series of tidal cycles. We estimated the water content of surface sediments (in the uppermost 2 cm of sediment) and sub-surface sediments (at 2 cm below the sediment surface). Bed shear stress values due to currents (τc), waves (τw), and combined current-wave action (τcw) were calculated using a hydrodynamic model. In this study, we estimate the critical shear stress for erosion using two approaches and both of them give similar results. We found that the critical shear stress for erosion (τce) was 0.17-0.18 N/m2 in the uppermost 0-2 cm of sediment and 0.29 N/m2 in sub-surface sediment layers (depth, 2 cm), as determined by time series of τcw values and intratidal bed-level changes, and values of τce, obtained using the water content of bottom sediments, were 0.16 N/m2 in the uppermost 2 cm and 0.28 N/m2 in the sub-surface (depth, 2 cm) sediment. These results indicate that the value of τce for sub-surface sediments (depth, 2 cm) is much greater than that for the uppermost sediments (depth, 0-2 cm), and that the τce value is mainly related to the water content, which is determined by the extent of consolidation. Our results have implications for improving the predictive accuracy of models of sediment transport and morphological evolution, by introducing variable τce values for corresponding sediment layers, and can also provide a mechanistic understanding of bottom sediment erodibility at different sediment depths on intertidal mudflats, as related to differences in the consolidation time.
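
    Given a modelled bed shear stress time series, the layer-specific critical values reported here translate directly into erosion flags; a minimal sketch with hypothetical combined wave-current shear stresses:

```python
import numpy as np

# Hypothetical combined wave-current bed shear stress series over a tidal
# cycle (N/m^2); the critical values are those reported in this record.
tau_cw = np.array([0.05, 0.12, 0.21, 0.34, 0.27, 0.15, 0.08])
tau_ce = {"surface layer (0-2 cm)": 0.17, "sub-surface layer (2 cm depth)": 0.29}

for layer, crit in tau_ce.items():
    exceed = tau_cw > crit
    print(f"{layer}: tau_cw exceeds tau_ce in {exceed.sum()} of {exceed.size} intervals")
```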

  14. Legislating thresholds for drug trafficking: a policy development case study from New South Wales, Australia.

    PubMed

    Hughes, Caitlin Elizabeth; Ritter, Alison; Cowdery, Nicholas

    2014-09-01

    Legal thresholds are used in many parts of the world to define the quantity of illicit drugs over which possession is deemed "trafficking" as opposed to "possession for personal use". There is limited knowledge about why or how such laws were developed. In this study we analyse the policy processes underpinning the introduction and expansion of the drug trafficking legal threshold system in New South Wales (NSW), Australia. A critical legal and historical analysis was undertaken sourcing data from legislation, Parliamentary Hansard debates, government inquiries, police reports and research. A timeline of policy developments was constructed from 1970 until 2013 outlining key steps including threshold introduction (1970), expansion (1985), and wholesale revision (1988). We then critically analysed the drivers of each step and the roles played by formal policy actors, public opinion, research/data and the drug trafficking problem. We find evidence that while justified as a necessary tool for effective law enforcement of drug trafficking, their introduction largely preceded overt police calls for reform or actual increases in drug trafficking. Moreover, while the expansion from one to four thresholds had the intent of differentiating small from large scale traffickers, the quantities employed were based on government assumptions which led to "manifest problems" and the revision in 1988 of over 100 different quantities. Despite the revisions, there has remained no further formal review and new quantities for "legal highs" continue to be added based on assumption and an uncertain evidence-base. The development of legal thresholds for drug trafficking in NSW has been arbitrary and messy. That the arbitrariness persists from 1970 until the present day makes it hard to conclude the thresholds have been well designed. Our narrative provides a platform for future policy reform. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko

    In this study, we compute the gluon fusion Higgs boson cross-section at N3LO through the second term in the threshold expansion. This calculation constitutes a major milestone towards the full N3LO cross section. Our result has the best formal accuracy in the threshold expansion currently available, and includes contributions from collinear regions besides subleading corrections from soft and hard regions, as well as certain logarithmically enhanced contributions for general kinematics. We use our results to perform a critical appraisal of the validity of the threshold approximation at N3LO in perturbative QCD.
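
    Schematically, the threshold expansion referred to here is an expansion of the partonic cross section around the production threshold; "through the second term" means that both the soft-virtual piece and the first subleading power in (1 - z) are included:

```latex
% Schematic structure of the threshold expansion: z -> 1 is the production
% threshold; sigma^SV collects the soft-virtual (delta and plus-distribution)
% terms and sigma^(1) is the first subleading power computed in this work.
z = \frac{m_H^2}{\hat{s}}, \qquad
\hat{\sigma}(z) \;\sim\; \hat{\sigma}^{\mathrm{SV}}(z) \;+\; (1-z)\,\hat{\sigma}^{(1)}(z)
\;+\; \mathcal{O}\!\left((1-z)^2\right)
```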

  16. A Cyfip2-Dependent Excitatory Interneuron Pathway Establishes the Innate Startle Threshold.

    PubMed

    Marsden, Kurt C; Jain, Roshan A; Wolman, Marc A; Echeverry, Fabio A; Nelson, Jessica C; Hayer, Katharina E; Miltenberg, Ben; Pereda, Alberto E; Granato, Michael

    2018-04-17

    Sensory experiences dynamically modify whether animals respond to a given stimulus, but it is unclear how innate behavioral thresholds are established. Here, we identify molecular and circuit-level mechanisms underlying the innate threshold of the zebrafish startle response. From a forward genetic screen, we isolated five mutant lines with reduced innate startle thresholds. Using whole-genome sequencing, we identify the causative mutation for one line to be in the fragile X mental retardation protein (FMRP)-interacting protein cyfip2. We show that cyfip2 acts independently of FMRP and that reactivation of cyfip2 restores the baseline threshold after phenotype onset. Finally, we show that cyfip2 regulates the innate startle threshold by reducing neural activity in a small group of excitatory hindbrain interneurons. Thus, we identify a selective set of genes critical to establishing an innate behavioral threshold and uncover a circuit-level role for cyfip2 in this process. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  17. Sparing of normal urothelium in hexyl-aminolevulinate-mediated photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Vaucher, Laurent; Jichlinski, Patrice; Lange, Norbert; Ritter-Schenk, Celine; van den Bergh, Hubert; Kucera, Pavel

    2005-04-01

    This work determines, using an in vitro porcine urothelium model, the threshold values of different parameters such as photosensitizer concentration, irradiation parameters and the production of reactive oxygen species, in order to control the damage to normal urothelium and spare about 50% of the normal mucosa. For a three-hour HAL incubation time, these threshold values were 0.75 J/cm2 at 75 mW/cm2 or 0.15 J/cm2 at 30 mW/cm2 with blue light, and 0.55 J/cm2 at 30 mW/cm2 with white light. This means that, for identical fluence rates, the threshold value for white light irradiation may be about three times higher than for blue light irradiation.

  18. Dynamics of a network-based SIS epidemic model with nonmonotone incidence rate

    NASA Astrophysics Data System (ADS)

    Li, Chun-Hsien

    2015-06-01

    This paper studies the dynamics of a network-based SIS epidemic model with nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological effect of certain diseases spread in a contact network at high infective levels. We first find a threshold value for the transmission rate. This value completely determines the dynamics of the model and interestingly, the threshold is not dependent on the functional form of the nonlinear incidence rate. Furthermore, if the transmission rate is less than or equal to the threshold value, the disease will die out. Otherwise, it will be permanent. Numerical experiments are given to illustrate the theoretical results. We also consider the effect of the nonlinear incidence on the epidemic dynamics.
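
    A commonly used nonmonotone incidence function of the kind described here (the paper's specific choice may differ) is:

```latex
% Nonmonotone incidence: the force of infection rises with I at low prevalence,
% peaks at I = 1/sqrt(alpha), and then declines, modelling the psychological
% effect of reduced contacts at high infective levels.
g(I) = \frac{\beta I}{1 + \alpha I^{2}}, \qquad \alpha > 0
```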

  19. Is the introduction of another variable to the strength-duration curve necessary in neurostimulation?

    PubMed

    Abejón, David; Rueda, Pablo; del Saz, Javier; Arango, Sara; Monzón, Eva; Gilsanz, Fernando

    2015-04-01

    Neurostimulation is the process and technology derived from the application of electricity with different parameters to activate or inhibit nerve pathways. Pulse width (Pw) is the duration of each electrical impulse and, along with amplitude (I), determines the total energy charge of the stimulation. The aim of the study was to test Pw values to find the most adequate pulse widths in rechargeable systems to obtain the largest coverage of the painful area, the most comfortable paresthesia, and the greatest patient satisfaction. A study of the parameters was performed, varying Pw while maintaining a fixed frequency at 50 Hz. Data on perception threshold (Tp), discomfort threshold (Td), and therapeutic threshold (Tt) were recorded, applying 14 increasing Pw values ranging from 50 µsec to 1000 µsec. Lastly, the behavior of the therapeutic range (TR), the coverage of the painful area, the subjective patient perception of paresthesia, and the degree of patient satisfaction were assessed. The findings after analyzing the different thresholds were as follows: When varying the Pw, the differences obtained at each threshold (Tp, Tt, and Td) were statistically significant (p < 0.05). The differences among the resulting Tp values and among the resulting Tt values were statistically significant when varying Pw from 50 up to 600 µsec (p < 0.05). For Pw levels 600 µsec and up, no differences were observed in these thresholds. In the case of Td, significant differences existed as Pw increased from 50 to 700 µsec (p ≤ 0.05). The coverage increased in a statistically significant way (p < 0.05) from Pw values of 50 µsec to 300 µsec. Good or very good subjective perception was shown at about Pw 300 µsec. The patient paresthesia coverage was introduced as an extra variable in the chronaxie-rheobase curve, allowing the adjustment of Pw values for optimal programming. The coverage of the patient against the current chronaxie-rheobase formula will be represented on three axes; an extra axis (z) will appear, multiplying each combination of Pw value and amplitude by the percentage of coverage corresponding to those values. Using this new comparison of chronaxie-rheobase curve vs. coverage, maximum Pw values will be obtained different from those obtained by classic methods. © 2014 International Neuromodulation Society.
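
    The chronaxie-rheobase curve referred to throughout this record is the classical strength-duration relation (Weiss/Lapicque form); the proposal in the abstract amounts to adding a third axis that weights each (Pw, I) combination by the paresthesia coverage it produces:

```latex
% Classical strength-duration relation: I_rh is the rheobase (threshold current
% for very long pulses) and t_ch the chronaxie (the pulse width at which the
% threshold current is twice the rheobase).
I_{\mathrm{th}}(P_w) \;=\; I_{\mathrm{rh}}\left(1 + \frac{t_{\mathrm{ch}}}{P_w}\right)
```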

  20. Picosecond Electric-Field-Induced Threshold Switching in Phase-Change Materials.

    PubMed

    Zalden, Peter; Shu, Michael J; Chen, Frank; Wu, Xiaoxi; Zhu, Yi; Wen, Haidan; Johnston, Scott; Shen, Zhi-Xun; Landreman, Patrick; Brongersma, Mark; Fong, Scott W; Wong, H-S Philip; Sher, Meng-Ju; Jost, Peter; Kaes, Matthias; Salinga, Martin; von Hoegen, Alexander; Wuttig, Matthias; Lindenberg, Aaron M

    2016-08-05

    Many chalcogenide glasses undergo a breakdown in electronic resistance above a critical field strength. Known as threshold switching, this mechanism enables field-induced crystallization in emerging phase-change memory. Purely electronic as well as crystal nucleation assisted models have been employed to explain the electronic breakdown. Here, picosecond electric pulses are used to excite amorphous Ag4In3Sb67Te26. Field-dependent reversible changes in conductivity and pulse-driven crystallization are observed. The present results show that threshold switching can take place within the electric pulse on subpicosecond time scales, faster than crystals can nucleate. This supports purely electronic models of threshold switching and reveals potential applications as an ultrafast electronic switch.
