2013-01-01
Background Intraoperative detection of 18F-FDG-avid tissue sites during 18F-FDG-directed surgery can be very challenging when utilizing gamma detection probes that rely on a fixed target-to-background (T/B) ratio (ratiometric threshold) for determination of probe positivity. The purpose of our study was to evaluate the counting efficiency and the success rate of in situ intraoperative detection of 18F-FDG-avid tissue sites (using the three-sigma statistical threshold criteria method and the ratiometric threshold criteria method) for three different gamma detection probe systems. Methods Of 58 patients undergoing 18F-FDG-directed surgery for known or suspected malignancy using gamma detection probes, we identified nine 18F-FDG-avid tissue sites (from amongst seven patients) that were seen on same-day preoperative diagnostic PET/CT imaging, and for which each 18F-FDG-avid tissue site underwent attempted in situ intraoperative detection concurrently using three gamma detection probe systems (the K-alpha probe and two commercially available PET-probe systems) and was then surgically excised. Results The mean relative probe counting efficiency ratio was 6.9 (± 4.4, range 2.2–15.4) for the K-alpha probe, as compared to 1.5 (± 0.3, range 1.0–2.1) and 1.0 (± 0, range 1.0–1.0), respectively, for the two commercially available PET-probe systems (P < 0.001). Successful in situ intraoperative detection of 18F-FDG-avid tissue sites was more frequently accomplished with each of the three gamma detection probes tested by using the three-sigma statistical threshold criteria method than by using the ratiometric threshold criteria method, with the three-sigma method being significantly better than the ratiometric method for determining probe positivity for the K-alpha probe (P = 0.05). Conclusions Our results suggest that the improved probe counting efficiency of the K-alpha probe design, used in conjunction with the three-sigma statistical threshold criteria method, can allow for improved detection of 18F-FDG-avid tissue sites when a low in situ T/B ratio is encountered. PMID:23496877
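A minimal sketch of the two probe-positivity rules being compared, assuming Poisson counting statistics for the background measurement; the function names and the example ratio cutoff of 1.5 are illustrative, not values taken from the study:

    import math

    def positive_three_sigma(target_counts, background_counts):
        # Three-sigma statistical threshold: positive when the target count
        # exceeds the mean background by three standard deviations, with
        # sigma estimated as sqrt(mean) under Poisson counting statistics.
        bg_mean = sum(background_counts) / len(background_counts)
        return target_counts > bg_mean + 3.0 * math.sqrt(bg_mean)

    def positive_ratiometric(target_counts, background_counts, ratio=1.5):
        # Ratiometric threshold: positive when the target-to-background (T/B)
        # count ratio exceeds a fixed cutoff.
        bg_mean = sum(background_counts) / len(background_counts)
        return target_counts / bg_mean > ratio

A low in situ T/B ratio defeats the fixed-ratio rule by construction, while the statistical rule can still fire if the probe's counting efficiency is high enough, which is the behavior the study reports for the K-alpha probe.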
Effects of threshold on the topology of gene co-expression networks.
Couto, Cynthia Martins Villar; Comin, César Henrique; Costa, Luciano da Fontoura
2017-09-26
Several developments regarding the analysis of gene co-expression profiles using complex network theory have been reported recently. Such approaches usually start with the construction of an unweighted gene co-expression network, therefore requiring the selection of a suitable threshold defining which pairs of vertices will be connected. We address this important problem by suggesting and comparing five different approaches for threshold selection. Each of the methods considers a respective biologically-motivated criterion for selecting a potentially suitable threshold. A set of 21 microarray experiments from different biological groups was used to investigate the effect of applying the five proposed criteria to several biological situations. For each experiment, we used the Pearson correlation coefficient to measure the relationship between each gene pair, and the resulting weight matrices were thresholded at several values, generating respective adjacency matrices (co-expression networks). Each of the five proposed criteria was then applied in order to select the respective threshold value. The effects of these thresholding approaches on the topology of the resulting networks were compared by using several measurements, and we verified that, depending on the database, the impact on the topological properties can be large. However, one group of databases was found to be similarly affected by most of the considered criteria. Based on such results, it can be suggested that when the generated networks present similar measurements, the thresholding method can be chosen with greater freedom. If the generated networks are markedly different, the thresholding method that better suits the interests of each specific research study represents a reasonable choice.
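A minimal sketch of the network-construction step described above (Pearson correlation followed by thresholding into an unweighted adjacency matrix); taking the absolute value of the correlation is an assumption here, since studies differ on whether negative correlations are kept:

    import numpy as np

    def coexpression_network(expression, threshold):
        # expression: genes x samples matrix of expression values.
        corr = np.corrcoef(expression)           # Pearson correlation per gene pair
        adjacency = (np.abs(corr) >= threshold).astype(int)
        np.fill_diagonal(adjacency, 0)           # no self-loops
        return adjacency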
IMRT QA: Selecting gamma criteria based on error detection sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steers, Jennifer M.; Fraass, Benedick A., E-mail: benedick.fraass@cshs.org
2016-04-01
Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. Conclusions: We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
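A simplified one-dimensional sketch of the global gamma comparison underlying the error-curve analysis; clinical implementations work on 2D/3D dose grids with interpolation, so this is illustrative only, with the parameter defaults (3%/3 mm, 10% low-dose threshold) echoing the criteria discussed above:

    import numpy as np

    def gamma_pass_rate(ref_dose, meas_dose, positions, percent_diff=0.03,
                        dta_mm=3.0, low_dose_threshold=0.10):
        # Global gamma: for each evaluated point above the low-dose threshold,
        # take the minimum over reference points of
        # sqrt((dose difference / dose criterion)^2 + (distance / DTA)^2).
        norm = ref_dose.max()
        evaluated = np.where(meas_dose >= low_dose_threshold * norm)[0]
        gammas = []
        for i in evaluated:
            dose_term = (meas_dose[i] - ref_dose) / (percent_diff * norm)
            dist_term = (positions[i] - positions) / dta_mm
            gammas.append(np.sqrt(dose_term ** 2 + dist_term ** 2).min())
        gammas = np.asarray(gammas)
        return 100.0 * (gammas <= 1.0).mean()

An error curve is then obtained by evaluating this pass rate against calculations carrying induced errors of increasing magnitude and plotting the two against each other.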
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare some image segmentation methods for lungs based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). Five lung images were analyzed in this research, and the results were compared using the performance evaluation parameters determined with MATLAB. A segmentation method is said to have good quality if it has the smallest MSE value and the highest PSNR. The results show that the connected threshold method best met these criteria for four of the sample images, while the threshold level set segmentation was best for one. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
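The two evaluation parameters are standard and easy to state; a minimal sketch (the peak value of 255 assumes 8-bit images, which the abstract does not specify):

    import numpy as np

    def mse(a, b):
        # Mean squared error between two same-sized images.
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def psnr(a, b, peak=255.0):
        # Peak signal-to-noise ratio in dB; higher means closer agreement.
        m = mse(a, b)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)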
NASA Astrophysics Data System (ADS)
Noufal, Manthala Padannayil; Abdullah, Kallikuzhiyil Kochunny; Niyas, Puzhakkal; Subha, Pallimanhayil Abdul Raheem
2017-12-01
Aim: This study evaluates the impact of using different evaluation criteria on gamma pass rates in two commercially available QA methods employed for the verification of VMAT plans using different hypothetical planning target volumes (PTVs) and anatomical regions. Introduction: Volumetric modulated arc therapy (VMAT) is a widely accepted technique to deliver highly conformal treatment in a very efficient manner. As its level of complexity is high in comparison to intensity-modulated radiotherapy (IMRT), the implementation of stringent quality assurance (QA) before treatment delivery is of paramount importance. Material and Methods: Two sets of VMAT plans were generated using Eclipse planning systems, one with five different complex hypothetical three-dimensional PTVs and one including three anatomical regions. The verification of these plans was performed using a MatriXX ionization chamber array embedded inside a MultiCube phantom and a Varian EPID dosimetric system attached to a Clinac iX. The plans were evaluated based on the 3%/3 mm, 2%/2 mm, and 1%/1 mm global gamma criteria and with three low-dose threshold values (0%, 10%, and 20%). Results: The gamma pass rates were above 95% in all VMAT plans when the 3%/3 mm gamma criterion was used and no threshold was applied. In both systems, the pass rates decreased as the criteria became stricter. Higher pass rates were observed when no threshold was applied, and they tended to decrease for the 10% and 20% thresholds. Conclusion: The results confirm the suitability of the equipment used and the validity of the plans. The study also confirmed that the threshold settings greatly affect the gamma pass rates, especially for the stricter gamma criteria.
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One of the proactive approaches in reducing traffic crashes is to identify hazardous traffic conditions that may lead to a traffic crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: after a crash risk evaluation model outputs the probability of a crash occurring given a specific traffic condition, a cut-off point for that posterior probability is needed to separate potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold. The few studies that touch on it do so only when discussing the predictive performance of the models, and they rely on subjective methods to choose the threshold. Subjective methods cannot automatically identify the optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is necessary to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, the mixed logit model was utilized to develop the crash risk evaluation model and further evaluate the crash risk. Cross-entropy, between-class variance, and other theories were employed and investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods with the help of several evaluation criteria. The results indicate that (i) the mixed logit model can obtain a good performance; and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria. This method is well suited to automatically identifying thresholds in crash prediction: it minimizes the cross entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
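One plausible reading of the minimum cross-entropy selection described above, sketched for illustration; the candidate grid and the function names are assumptions, not the paper's implementation:

    import numpy as np

    def cross_entropy_after_binarization(p, threshold):
        # Binarize the predicted crash probabilities at a candidate threshold,
        # then measure the cross entropy between the binary warnings and the
        # original continuous probabilities.
        y = (p >= threshold).astype(float)
        eps = 1e-12
        return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

    def optimal_threshold(p, candidates=np.linspace(0.01, 0.99, 99)):
        # Pick the candidate threshold minimizing the cross entropy.
        return min(candidates, key=lambda t: cross_entropy_after_binarization(p, t))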
49 CFR 80.13 - Threshold criteria.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 80.13 Threshold criteria. (a) To be eligible to receive Federal credit assistance under this part, a project shall meet the following five threshold criteria: (1) The project shall be consistent with the...
49 CFR 80.13 - Threshold criteria.
Code of Federal Regulations, 2014 CFR
2014-10-01
§ 80.13 Threshold criteria. (a) To be eligible to receive Federal credit assistance under this part, a project shall meet the following five threshold criteria: (1) The project shall be consistent with the...
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
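A toy sketch of the scheme described in the abstract (periodic re-measurement of the quiescent level, offset-based threshold adjustment, and a qualification width counter); all names and defaults here are illustrative, not taken from the patent:

    class DriftCompensatedTrigger:
        def __init__(self, offset, qualification_width=3):
            self.offset = offset                    # trigger margin above quiescent level
            self.qualification_width = qualification_width
            self.quiescent = 0.0
            self.count = 0

        def recalibrate(self, quiet_samples):
            # Periodically re-measure the quiescent signal level so the
            # trigger threshold tracks slow drift instead of false-triggering.
            self.quiescent = sum(quiet_samples) / len(quiet_samples)

        def update(self, sample):
            # Require the trigger criterion to be met `qualification_width`
            # consecutive samples before initiating a data recording event.
            if sample > self.quiescent + self.offset:
                self.count += 1
                if self.count >= self.qualification_width:
                    self.count = 0
                    return True
            else:
                self.count = 0
            return False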
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driver, C.J.
1994-05-01
Criteria for determining the quality of river sediment are necessary to ensure that concentrations of contaminants in aquatic systems are within acceptable limits for the protection of aquatic and human life. Such criteria should facilitate decision-making about remediation, handling, and disposal of contaminants. Several approaches to the development of sediment quality criteria (SQC) have been described and include both descriptive and numerical methods. However, no single method measures all impacts at all times to all organisms (U.S. EPA 1992b). The U.S. EPA's interest is primarily in establishing chemically based, numerical SQC that are applicable nation-wide (Shea 1988). Of the approaches proposed for SQC development, only three are being considered for numerical SQC on a national level. These approaches include an Equilibrium Partitioning Approach, a site-specific method using bioassays (the Apparent Effects Threshold Approach), and an approach similar to EPA's water quality criteria (Pavlou and Weston 1984). Although national (or even regional) criteria address a number of political, litigative, and engineering needs, some researchers feel that protection of benthic communities requires site-specific, biologically based criteria (Baudo et al. 1990). This is particularly true for areas where complex mixtures of contaminants are present in sediments. Other scientifically valid and accepted procedures for freshwater SQC include a background concentration approach, methods using field or spiked bioassays, a screening level concentration approach, the Apparent Effects Threshold Approach, the Sediment Quality Triad, the International Joint Commission Sediment Assessment Strategy, and the National Status and Trends Program Approach. The various sediment assessment approaches are evaluated for application to the Hanford Reach and recommendations for Hanford Site sediment quality criteria are discussed.
A. Dennis Lemly
1997-01-01
This paper describes a method for deriving site-specific water quality criteria for selenium using a two-step process: (1) gather information on selenium residues and biological effects at the site and in down-gradient systems and (2) examine criteria based on the degree of bioaccumulation, the relationship between measured residues and threshold concentrations for...
Sub-threshold Post Traumatic Stress Disorder in the WHO World Mental Health Surveys
McLaughlin, Katie A.; Koenen, Karestan C.; Friedman, Matthew J.; Ruscio, Ayelet Meron; Karam, Elie G.; Shahly, Victoria; Stein, Dan J.; Hill, Eric D.; Petukhova, Maria; Alonso, Jordi; Andrade, Laura Helena; Angermeyer, Matthias C.; Borges, Guilherme; de Girolamo, Giovanni; de Graaf, Ron; Demyttenaere, Koen; Florescu, Silvia E.; Mladenova, Maya; Posada-Villa, Jose; Scott, Kate M.; Takeshima, Tadashi; Kessler, Ronald C.
2014-01-01
Background Although only a minority of people exposed to a traumatic event (TE) develops PTSD, symptoms not meeting full PTSD criteria are common and often clinically significant. Individuals with these symptoms have sometimes been characterized as having sub-threshold PTSD, but no consensus exists on the optimal definition of this term. Data from a large cross-national epidemiological survey are used to provide a principled basis for such a definition. Methods The WHO World Mental Health (WMH) Surveys administered fully-structured psychiatric diagnostic interviews to community samples in 13 countries containing assessments of PTSD associated with randomly selected TEs. Focusing on the 23,936 respondents reporting lifetime TE exposure, associations of approximated DSM-5 PTSD symptom profiles with six outcomes (distress-impairment, suicidality, comorbid fear-distress disorders, PTSD symptom duration) were examined to investigate the implications of different sub-threshold definitions. Results Although the highest distress-impairment, suicidality, comorbidity, and symptom duration were consistently observed among the 3.0% of respondents with DSM-5 PTSD, the additional 3.6% of respondents meeting two or three of DSM-5 Criteria B-E also had significantly elevated scores for most outcomes. The proportion of cases with threshold versus sub-threshold PTSD varied depending on TE type, with threshold PTSD more common following interpersonal violence and sub-threshold PTSD more common following events happening to loved ones. Conclusions Sub-threshold DSM-5 PTSD is most usefully defined as meeting two or three of the DSM-5 Criteria B-E. Use of a consistent definition is critical to advance understanding of the prevalence, predictors, and clinical significance of sub-threshold PTSD. PMID:24842116
Young, Brian; King, Jonathan L; Budowle, Bruce; Armogida, Luigi
2017-01-01
Amplicon (targeted) sequencing by massively parallel sequencing (PCR-MPS) is a potential method for use in forensic DNA analyses. In this application, PCR-MPS may supplement or replace other instrumental analysis methods such as capillary electrophoresis and Sanger sequencing for STR and mitochondrial DNA typing, respectively. PCR-MPS also may enable the expansion of forensic DNA analysis methods to include new marker systems such as single nucleotide polymorphisms (SNPs) and insertion/deletions (indels) that currently are assayable using various instrumental analysis methods including microarray and quantitative PCR. Acceptance of PCR-MPS as a forensic method will depend in part upon developing protocols and criteria that define the limitations of a method, including a defensible analytical threshold or method detection limit. This paper describes an approach to establish objective analytical thresholds suitable for multiplexed PCR-MPS methods. A definition is proposed for PCR-MPS method background noise, and an analytical threshold based on background noise is described. PMID:28542338
ERIC Educational Resources Information Center
Solanto, Mary V.; Wasserstein, Jeanette; Marks, David J.; Mitchell, Katherine J.
2012-01-01
Objective: To empirically identify the appropriate symptom threshold for hyperactivity-impulsivity for diagnosis of ADHD in adults. Method: Participants were 88 adults (M [SD] age = 41.69 [11.78] years, 66% female, 16% minority) meeting formal "DSM-IV" criteria for ADHD combined or predominantly inattentive subtypes based on a structured…
49 CFR 80.13 - Threshold criteria.
Code of Federal Regulations, 2013 CFR
2013-10-01
Office of the Secretary of Transportation, CREDIT ASSISTANCE FOR SURFACE TRANSPORTATION PROJECTS. § 80.13 Threshold criteria. (a) To be eligible to receive Federal credit assistance under this part, a project shall meet the following five...
47 CFR 4.7 - Definitions of metrics used to determine the general outage-reporting threshold criteria.
Code of Federal Regulations, 2014 CFR
2014-10-01
FEDERAL COMMUNICATIONS COMMISSION. § 4.7 Definitions of metrics used to determine the general outage-reporting threshold criteria. (a)...
49 CFR 80.13 - Threshold criteria.
Code of Federal Regulations, 2012 CFR
2012-10-01
Office of the Secretary of Transportation, CREDIT ASSISTANCE FOR SURFACE TRANSPORTATION PROJECTS. § 80.13 Threshold criteria. (a) To be eligible to receive Federal credit assistance under this part, a project shall meet the following five...
49 CFR 80.13 - Threshold criteria.
Code of Federal Regulations, 2011 CFR
2011-10-01
Office of the Secretary of Transportation, CREDIT ASSISTANCE FOR SURFACE TRANSPORTATION PROJECTS. § 80.13 Threshold criteria. (a) To be eligible to receive Federal credit assistance under this part, a project shall meet the following five...
Masterson, Elizabeth A.; Sweeney, Marie Haring; Deddens, James A.; Themann, Christa L.; Wall, David K.
2015-01-01
Objective The purpose of this study was to compare the prevalence of workers with National Institute for Occupational Safety and Health significant threshold shifts (NSTS), Occupational Safety and Health Administration standard threshold shifts (OSTS), and with OSTS with age correction (OSTS-A), by industry using North American Industry Classification System codes. Methods 2001-2010 worker audiograms were examined. Prevalence and adjusted prevalence ratios for NSTS were estimated by industry. NSTS, OSTS and OSTS-A prevalences were compared by industry. Results 20% of workers had an NSTS, 14% had an OSTS and 6% had an OSTS-A. For most industries, the OSTS and OSTS-A criteria identified 28-36% and 66-74% fewer workers than the NSTS criteria, respectively. Conclusions Use of NSTS criteria allowing for earlier detection of shifts in hearing is recommended for improved prevention of occupational hearing loss. PMID:24662953
NASA Astrophysics Data System (ADS)
Chung-Wei, Li; Gwo-Hshiung, Tzeng
To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation, the impact-relations map, by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between the criteria for evaluating effects in E-learning programs as examples, we compare the results obtained from the respondents and from our method, and discuss the differences between the impact-relations maps produced by these two approaches.
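For context, a sketch of the standard DEMATEL construction that the threshold is then applied to; the entropy-based selection itself (the maximum mean de-entropy algorithm) is not reproduced here, and the normalization shown is one common convention among several:

    import numpy as np

    def dematel_total_relation(direct):
        # Normalize the direct-influence matrix, then compute the
        # total-relation matrix T = N (I - N)^(-1).
        n = direct / direct.sum(axis=1).max()
        return n @ np.linalg.inv(np.eye(direct.shape[0]) - n)

    def impact_relations_map(total, threshold):
        # Keep only influences above the chosen threshold; the surviving
        # entries define the edges of the impact-relations map.
        return np.where(total >= threshold, total, 0.0)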
Identifying Thresholds for Ecosystem-Based Management
Samhouri, Jameal F.; Levin, Phillip S.; Ainsworth, Cameron H.
2010-01-01
Background One of the greatest obstacles to moving ecosystem-based management (EBM) from concept to practice is the lack of a systematic approach to defining ecosystem-level decision criteria, or reference points that trigger management action. Methodology/Principal Findings To assist resource managers and policymakers in developing EBM decision criteria, we introduce a quantitative, transferable method for identifying utility thresholds. A utility threshold is the level of human-induced pressure (e.g., pollution) at which small changes produce substantial improvements toward the EBM goal of protecting an ecosystem's structural (e.g., diversity) and functional (e.g., resilience) attributes. The analytical approach is based on the detection of nonlinearities in relationships between ecosystem attributes and pressures. We illustrate the method with a hypothetical case study of (1) fishing and (2) nearshore habitat pressure using an empirically-validated marine ecosystem model for British Columbia, Canada, and derive numerical threshold values in terms of the density of two empirically-tractable indicator groups, sablefish and jellyfish. We also describe how to incorporate uncertainty into the estimation of utility thresholds and highlight their value in the context of understanding EBM trade-offs. Conclusions/Significance For any policy scenario, an understanding of utility thresholds provides insight into the amount and type of management intervention required to make significant progress toward improved ecosystem structure and function. The approach outlined in this paper can be applied in the context of single or multiple human-induced pressures, to any marine, freshwater, or terrestrial ecosystem, and should facilitate more effective management. PMID:20126647
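A crude sketch of nonlinearity detection by two-segment regression, one simple way to locate the kind of utility threshold described above; the study's actual estimator may differ, and all names here are illustrative:

    import numpy as np

    def utility_threshold(pressure, attribute):
        # Fit two linear segments at each candidate breakpoint and return the
        # pressure level whose split minimizes total squared error, i.e. the
        # strongest candidate for a nonlinear change point in the
        # pressure-attribute relationship.
        best_split, best_sse = None, np.inf
        for i in range(2, len(pressure) - 2):
            sse = 0.0
            for xs, ys in ((pressure[:i], attribute[:i]),
                           (pressure[i:], attribute[i:])):
                coef = np.polyfit(xs, ys, 1)
                sse += float(((np.polyval(coef, xs) - ys) ** 2).sum())
            if sse < best_sse:
                best_split, best_sse = pressure[i], sse
        return best_split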
Leontaridou, Maria; Urbisch, Daniel; Kolle, Susanne N; Ott, Katharina; Mulliner, Denis S; Gabbert, Silke; Landsiedel, Robert
2017-01-01
Test methods to assess the skin sensitization potential of a substance usually use threshold criteria to dichotomize continuous experimental read-outs into yes/no conclusions. The threshold criteria are prescribed in the respective OECD test guidelines and the conclusion is used for regulatory hazard assessment, i.e., classification and labelling of the substance. We can identify a borderline range (BR) around the classification threshold within which test results are inconclusive due to a test method's biological and technical variability. We quantified BRs in the prediction models of the non-animal test methods DPRA, LuSens and h-CLAT, and of the animal test LLNA, respectively. Depending on the size of the BR, we found that between 6% and 28% of the substances in the sets tested with these methods were considered borderline. When the results of individual non-animal test methods were combined into integrated testing strategies (ITS), borderline test results of individual tests also affected the overall assessment of the skin sensitization potential of the testing strategy. This was analyzed for the 2-out-of-3 ITS: Four out of 40 substances (10%) were considered borderline. Based on our findings we propose expanding the standard binary classification of substances into "positive"/"negative" or "hazardous"/"non-hazardous" by adding a "borderline" or "inconclusive" alert for cases where test results fall within the borderline range.
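The proposed three-way call can be expressed directly; a minimal sketch, with the borderline range passed in explicitly since its width depends on each test method's biological and technical variability:

    def classify_with_borderline(readout, threshold, br_low, br_high):
        # Expand the binary positive/negative call with a "borderline" alert
        # when the read-out falls inside the borderline range (BR) around
        # the classification threshold.
        if br_low <= readout <= br_high:
            return "borderline"
        return "positive" if readout >= threshold else "negative"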
NASA Astrophysics Data System (ADS)
Wormanns, Dag; Klotz, Ernst; Dregger, Uwe; Beyer, Florian; Heindel, Walter
2004-05-01
Lack of angiogenesis virtually excludes malignancy of a pulmonary nodule; assessment with quantitative contrast-enhanced CT (QECT) requires a reliable enhancement measurement technique. The diagnostic performance of different measurement methods in the distinction between malignant and benign nodules was evaluated. QECT (an unenhanced scan and 4 post-contrast scans) was performed in 48 pulmonary nodules (12 malignant, 12 benign, 24 indeterminate). Nodule enhancement was the difference between the highest nodule density at any post-contrast scan and the unenhanced scan. Enhancement was determined with: A) the standard 2D method; B) a 3D method consisting of segmentation, removal of peripheral structures, and density averaging. Enhancement curves were evaluated for their plausibility using a predefined set of criteria. Using a threshold of 20 HU, sensitivity and specificity were 100% and 33% for the 2D method, and 92% and 55% for the 3D method, respectively. One malignant nodule did not show significant enhancement with method B due to adjacent atelectasis which disappeared within the few minutes of the QECT examination. Better discrimination between benign and malignant lesions was achieved with a slightly higher threshold than proposed in the literature. Application of plausibility criteria to the enhancement curves produced fewer plausibility faults with the 3D method. A new 3D method for analysis of QECT scans yielded fewer artefacts and better specificity in the discrimination between benign and malignant pulmonary nodules when using an appropriate enhancement threshold. Nevertheless, QECT results must be interpreted with care.
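The enhancement criterion itself reduces to a simple computation; a sketch, with the 20 HU cutoff taken from the abstract and the function name assumed:

    def nodule_enhancement(unenhanced_hu, postcontrast_hu, threshold_hu=20.0):
        # Enhancement is the highest mean nodule density at any post-contrast
        # scan minus the density on the unenhanced scan; enhancement below the
        # threshold argues against malignancy (absent angiogenesis).
        enhancement = max(postcontrast_hu) - unenhanced_hu
        return enhancement, enhancement >= threshold_hu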
Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.
2014-01-01
Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, its implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366
Denis, Cécile; Fatséas, Mélina; Auriacombe, Marc
2012-04-01
The DSM-5 Substance-Related Disorders Work Group proposed to include Pathological Gambling within the current Substance-Related Disorders section. The objective of the current report was to assess four possible sets of diagnostic criteria for Pathological Gambling. Gamblers (N=161) were defined as either Pathological or Non-Pathological according to four classification methods. (a) Option 1: the current DSM-IV criteria for Pathological Gambling; (b) Option 2: dropping the "Illegal Acts" criterion, while keeping the threshold at 5 required criteria endorsed; (c) Option 3: the proposed DSM-5 approach, i.e., deleting "Illegal Acts" and lowering the threshold of required criteria from 5 to 4; (d) Option 4: to use a set of Pathological Gambling criteria modeled on the DSM-IV Substance Dependence criteria. Cronbach's alpha and eigenvalues were calculated for reliability, Phi, discriminant function analyses, correlations and multivariate regression models were performed for validity and kappa coefficients were calculated for diagnostic consistency of each option. All criteria sets were reliable and valid. Some criteria had higher discriminant properties than others. The proposed DSM-5 criteria in Options 2 and 3 performed well and did not appear to alter the meanings of the diagnoses of Pathological Gambling from DSM-IV. Future work should further explore if Pathological Gambling might be assessed using the same criteria as those used for Substance Use Disorders. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Aggen, S. H.; Neale, M. C.; Røysamb, E.; Reichborn-Kjennerud, T.; Kendler, K. S.
2009-01-01
Background Despite its importance as a paradigmatic personality disorder, little is known about the measurement invariance of the DSM-IV borderline personality disorder (BPD) criteria; that is, whether the criteria assess the disorder equivalently across different groups. Method BPD criteria were evaluated at interview in 2794 young adult Norwegian twins. Analyses, based on item-response modeling, were conducted to test for differential age and sex moderation of the individual BPD criteria characteristics given factor-level covariate effects. Results Confirmatory factor analytic results supported a unidimensional structure for the nine BPD criteria. Compared to males, females had a higher BPD factor mean, larger factor variance, and there was a significant age by sex interaction on the factor mean. Strong differential sex and age by sex interaction effects were found for the 'impulsivity' criterion factor loading and threshold. Impulsivity related to the BPD factor poorly in young females but improved significantly in older females. Males reported more impulsivity compared to females and this difference increased with age. The 'affective instability' threshold was also moderated, with males reporting less than expected. Conclusions The results suggest the DSM-IV BPD 'impulsivity' and 'affective instability' criteria function differentially with respect to age and sex, with impulsivity being especially problematic. If verified, these findings have important implications for the interpretation of prior research with these criteria. These non-invariant age and sex effects may be identifying criteria-level expression features relevant to BPD nosology and etiology. Criterion functioning assessed using modern psychometric methods should be considered in the development of DSM-V. PMID:19400977
Gerstl, Lucia; Schoppe, Nikola; Albers, Lucia; Ertl-Wagner, Birgit; Alperin, Noam; Ehrt, Oliver; Pomschar, Andreas; Landgraf, Mirjam N; Heinen, Florian
2017-11-01
Idiopathic intracranial hypertension (IIH) in children is a rare condition of unknown etiology and various clinical presentations. The primary aim of this study was to evaluate if our pediatric IIH study group fulfilled the revised diagnostic criteria for IIH published in 2013, particularly with regard to clinical presentation and threshold value of an elevated lumbar puncture opening pressure. Additionally we investigated the potential utilization of MR-based and fundoscopic methods of estimating intracranial pressure for improved diagnosis. Clinical data were collected retrospectively from twelve pediatric patients diagnosed with IIH between 2008 and 2012 and revised diagnostic criteria were applied. Comparison with non-invasive methods for measuring intracranial pressure, MRI-based measurement (MR-ICP) and venous ophthalmodynamometry was performed. Only four of the twelve children (33%) fulfilled the revised diagnostic criteria for a definite diagnosis of IIH. Regarding noninvasive methods, MR-ICP (n = 6) showed a significantly higher mean of intracranial pressure compared to a healthy age- and sex-matched control group (p = 0.0043). Venous ophthalmodynamometry (n = 4) showed comparable results to invasive lumbar puncture. The revised diagnostic criteria for IIH may be too strict especially in children without papilledema. MR-ICP and venous ophthalmodynamometry are promising complementary procedures for monitoring disease progression and response to treatment. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
Maulidiani; Rudiyanto; Abas, Faridah; Ismail, Intan Safinar; Lajis, Nordin H
2018-06-01
Optimization process is an important aspect in natural product extractions. Herein, an alternative approach is proposed for the optimization in extraction, namely, the Generalized Likelihood Uncertainty Estimation (GLUE). The approach combines the Latin hypercube sampling, the feasible range of independent variables, the Monte Carlo simulation, and the threshold criteria of response variables. The GLUE method is tested in three different techniques including the ultrasound, the microwave, and the supercritical CO2 assisted extractions utilizing the data from previously published reports. The study found that this method can: provide more information on the combined effects of the independent variables on the response variables in the dotty plots; deal with an unlimited number of independent and response variables; consider combined multiple threshold criteria, which are subjective depending on the target of the investigation for response variables; and provide a range of values with their distribution for the optimization. Copyright © 2018 Elsevier Ltd. All rights reserved.
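A sketch of the ingredients named above (Latin hypercube sampling over feasible ranges, Monte Carlo simulation, threshold criteria on the responses); the function names, and the simplification of treating each criterion as a boolean filter over the simulated responses, are assumptions:

    import numpy as np

    def glue_screen(bounds, simulate, criteria, n=10000, seed=0):
        rng = np.random.default_rng(seed)
        dims = len(bounds)
        # Latin hypercube: one stratified draw per interval in each dimension.
        strata = np.tile(np.arange(n), (dims, 1))
        u = (rng.permuted(strata, axis=1).T + rng.random((n, dims))) / n
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        params = lo + u * (hi - lo)
        # Monte Carlo simulation of the response for every sampled parameter set.
        responses = np.array([simulate(p) for p in params])
        # Keep parameter sets whose responses satisfy every threshold criterion;
        # the spread of the survivors gives the optimization range.
        ok = np.all([criterion(responses) for criterion in criteria], axis=0)
        return params[ok], responses[ok]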
A parametric method for determining the number of signals in narrow-band direction finding
NASA Astrophysics Data System (ADS)
Wu, Qiang; Fuhrmann, Daniel R.
1991-08-01
A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
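For contrast with the parametric approach above, the classic eigenvalue-based MDL criterion of Wax and Kailath can be sketched as follows; note the paper's method replaces these eigenvalues with quantities evaluated from the log-likelihood function:

    import numpy as np

    def mdl_num_signals(eigenvalues, n_snapshots):
        # eigenvalues: sample-covariance eigenvalues, sorted in descending order.
        p = len(eigenvalues)
        scores = []
        for k in range(p):
            tail = eigenvalues[k:]
            geo = np.exp(np.mean(np.log(tail)))   # geometric mean of the tail
            ari = np.mean(tail)                   # arithmetic mean of the tail
            log_lik = -n_snapshots * (p - k) * np.log(geo / ari)
            penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
            scores.append(log_lik + penalty)
        return int(np.argmin(scores))             # estimated number of signals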
Trace, Sara E.; Thornton, Laura M.; Root, Tammy L.; Mazzeo, Suzanne E.; Lichtenstein, Paul; Pedersen, Nancy L.; Bulik, Cynthia M.
2011-01-01
Objective We assessed the impact of reducing the binge eating frequency and duration thresholds on the diagnostic criteria for bulimia nervosa (BN) and binge eating disorder (BED). Method We estimated the lifetime population prevalence of BN and BED in 13,295 female twins from the Swedish Twin study of Adults: Genes and Environment employing a range of frequency and duration thresholds. External validation (risk to co-twin) was used to investigate empirical evidence for an optimal binge eating frequency threshold. Results The lifetime prevalence estimates of BN and BED increased linearly as the frequency criterion decreased. As the required duration increased, the prevalence of BED decreased slightly. Discontinuity in co-twin risk was observed in BN between at least four times per month and at least five times per month. This model could not be fit for BED. Discussion The proposed changes to the DSM-5 binge eating frequency and duration criteria would allow for better detection of binge eating pathology without resulting in a markedly higher lifetime prevalence of BN or BED. PMID:21882218
Using an Outranking Method Supporting the Acquisition of Military Equipment
2009-10-01
selection methodology, taking several criteria into account. We show to what extent the class of PROMETHEE methods presents these features. We ... functions, the indifference and preference thresholds and some other technical parameters. Then we discuss the capabilities of the PROMETHEE methods to ... discuss the interpretation of the results given by these PROMETHEE methods. INTRODUCTION: Outranking methods for multicriteria decision aid belong...
Müller, Dirk; Pulm, Jannis; Gandjour, Afschin
2012-01-01
To compare cost-effectiveness modeling analyses of strategies to prevent osteoporotic and osteopenic fractures either based on fixed thresholds using bone mineral density or based on variable thresholds including bone mineral density and clinical risk factors. A systematic review was performed by using the MEDLINE database and reference lists from previous reviews. On the basis of predefined inclusion/exclusion criteria, we identified relevant studies published since January 2006. Articles included for the review were assessed for their methodological quality and results. The literature search resulted in 24 analyses, 14 of them using a fixed-threshold approach and 10 using a variable-threshold approach. On average, 70% of the criteria for methodological quality were fulfilled, but almost half of the analyses did not include medication adherence in the base case. The results of variable-threshold strategies were more homogeneous and showed more favorable incremental cost-effectiveness ratios compared with those based on a fixed threshold with bone mineral density. For analyses with fixed thresholds, incremental cost-effectiveness ratios varied from €80,000 per quality-adjusted life-year in women aged 55 years to cost saving in women aged 80 years. For analyses with variable thresholds, the range was €47,000 to cost savings. Risk assessment using variable thresholds appears to be more cost-effective than selecting high-risk individuals by fixed thresholds. Although the overall quality of the studies was fairly good, future economic analyses should further improve their methods, particularly in terms of including more fracture types, incorporating medication adherence, and including or discussing unrelated costs during added life-years. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Rider, Lisa G.; Aggarwal, Rohit; Pistorio, Angela; Bayat, Nastaran; Erman, Brian; Feldman, Brian M.; Huber, Adam M.; Cimaz, Rolando; Cuttica, Rubén J.; de Oliveira, Sheila Knupp; Lindsley, Carol B.; Pilkington, Clarissa A.; Punaro, Marilyn; Ravelli, Angelo; Reed, Ann M.; Rouster-Stevens, Kelly; van Royen, Annet; Dressler, Frank; Magalhaes, Claudia Saad; Constantin, Tamás; Davidson, Joyce E.; Magnusson, Bo; Russo, Ricardo; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A.; Miller, Frederick W.; Vencovsky, Jiri; Ruperto, Nicolino
2017-01-01
Objective Develop response criteria for juvenile dermatomyositis (JDM). Methods We analyzed the performance of 312 definitions that used core set measures (CSM) from either the International Myositis Assessment and Clinical Studies Group (IMACS) or the Pediatric Rheumatology International Trials Organization (PRINTO) and were derived from natural history data and a conjoint-analysis survey. They were further validated in the PRINTO trial of prednisone alone compared to prednisone with methotrexate or cyclosporine and the Rituximab in Myositis trial. Experts considered 14 top-performing candidate criteria based on their performance characteristics and clinical face validity using nominal group technique at a consensus conference. Results Consensus was reached for a conjoint analysis–based continuous model with a Total Improvement Score of 0-100, using absolute percent change in CSM with thresholds for minimal (≥30 points), moderate (≥45), and major improvement (≥70). The same criteria were chosen for adult dermatomyositis/polymyositis with differing thresholds for improvement. The sensitivity and specificity were 89% and 91-98% for minimal, 92-94% and 94-99% for moderate, and 91-98% and 85-85% for major improvement, respectively, in JDM patient cohorts using the IMACS and PRINTO CSM. These criteria were validated in the PRINTO trial for differentiating between treatment arms for minimal and moderate improvement (P=0.009–0.057) and in the Rituximab trial for significantly differentiating the physician rating of improvement (P<0.006). Conclusion The response criteria for JDM was a conjoint analysis–based model using a continuous improvement score based on absolute percent change in CSM, with thresholds for minimal, moderate, and major improvement. PMID:28382787
NASA Technical Reports Server (NTRS)
Darzi, Michael; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)
1992-01-01
Methods for detecting and screening cloud contamination from satellite derived visible and infrared data are reviewed in this document. The methods are applicable to past, present, and future polar orbiting satellite radiometers. Such instruments include the Coastal Zone Color Scanner (CZCS), operational from 1978 through 1986; the Advanced Very High Resolution Radiometer (AVHRR); the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), scheduled for launch in August 1993; and the Moderate Resolution Imaging Spectrometer (MODIS). Constant threshold methods are the least demanding computationally, and often provide adequate results. An improvement to these methods is to determine the thresholds dynamically by adjusting them according to the areal and temporal distributions of the surrounding pixels. Spatial coherence methods set thresholds based on the expected spatial variability of the data. Other statistically derived methods and various combinations of basic methods are also reviewed. The complexity of the methods is ultimately limited by the computing resources. Finally, some criteria for evaluating cloud screening methods are discussed.
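A toy illustration of the contrast drawn above between constant and dynamically adjusted thresholds; the channel, cutoff values, and window size are all hypothetical:

    import numpy as np
    from scipy.ndimage import median_filter

    def cloud_mask_constant(brightness_temp, cutoff_k=285.0):
        # Constant-threshold screening: flag pixels colder than a fixed cutoff.
        return brightness_temp < cutoff_k

    def cloud_mask_dynamic(brightness_temp, window=11, delta_k=5.0):
        # Dynamic variant: the cutoff adapts to the surrounding scene via a
        # moving median, approximating threshold adjustment by the areal
        # distribution of neighboring pixels.
        local = median_filter(brightness_temp, size=window)
        return brightness_temp < local - delta_k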
Measurement of the lowest dosage of phenobarbital that can produce drug discrimination in rats
Overton, Donald A.; Stanwood, Gregg D.; Patel, Bhavesh N.; Pragada, Sreenivasa R.; Gordon, M. Kathleen
2009-01-01
Rationale Accurate measurement of the threshold dosage of phenobarbital that can produce drug discrimination (DD) may improve our understanding of the mechanisms and properties of such discrimination. Objectives Compare three methods for determining the threshold dosage for phenobarbital (D) versus no drug (N) DD. Methods Rats learned a D versus N DD in 2-lever operant training chambers. A titration scheme was employed to increase or decrease dosage at the end of each 18-day block of sessions depending on whether the rat had achieved criterion accuracy during the sessions just completed. Three criterion rules were employed, all based on average percent drug lever responses during initial links of the last 6 D and 6 N sessions of a block. The criteria were: D%>66 and N%<33; D%>50 and N%<50; (D%-N%)>33. Two squads of rats were trained, one immediately after the other. Results All rats discriminated drug versus no drug. In most rats, dosage decreased to low levels and then oscillated near the minimum level required to maintain criterion performance. The lowest discriminated dosage significantly differed under the three criterion rules. The squad that was trained 2nd may have benefited by partially duplicating the lever choices of the previous squad. Conclusions The lowest discriminated dosage is influenced by the criterion of discriminative control that is employed, and is higher than the absolute threshold at which discrimination entirely disappears. Threshold estimations closer to absolute threshold can be obtained when criteria are employed that are permissive, and that allow rats to maintain lever preferences. PMID:19082992
Approaches to Identify Exceedances of Water Quality Thresholds Associated with Ocean Conditions
WED scientists have developed a method to help distinguish whether failures to meet water quality criteria are associated with natural coastal upwelling by using the statistical approach of logistic regression. Estuaries along the west coast of the United States periodically ha...
47 CFR 4.9 - Outage reporting requirements-threshold criteria.
Code of Federal Regulations, 2012 CFR
2012-10-01
... submit electronically an Initial Communications Outage Report to the Commission. Not later than thirty days after discovering the outage, the provider shall submit electronically a Final Communications... cannot be obtained through any of the methods described, for whatever reason, then the provider shall...
47 CFR 4.9 - Outage reporting requirements-threshold criteria.
Code of Federal Regulations, 2010 CFR
2010-10-01
... submit electronically an Initial Communications Outage Report to the Commission. Not later than thirty days after discovering the outage, the provider shall submit electronically a Final Communications... cannot be obtained through any of the methods described, for whatever reason, then the provider shall...
47 CFR 4.9 - Outage reporting requirements-threshold criteria.
Code of Federal Regulations, 2011 CFR
2011-10-01
... submit electronically an Initial Communications Outage Report to the Commission. Not later than thirty days after discovering the outage, the provider shall submit electronically a Final Communications... cannot be obtained through any of the methods described, for whatever reason, then the provider shall...
47 CFR 4.9 - Outage reporting requirements-threshold criteria.
Code of Federal Regulations, 2013 CFR
2013-10-01
... submit electronically an Initial Communications Outage Report to the Commission. Not later than thirty days after discovering the outage, the provider shall submit electronically a Final Communications... cannot be obtained through any of the methods described, for whatever reason, then the provider shall...
Frank, T
2001-04-01
The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose was to determine whether repeated high-frequency thresholds obtained with a Sennheiser HDA 200 earphone had intrasubject variability below the ASHA 1994 significant-threshold-shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different by gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all threshold differences were within a clinically acceptable range of ±10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth test sessions and their first were also examined to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift ≥20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies. This study verified that the output of high-frequency audiometers at 0 dB HL using Sennheiser HDA 200 earphones should equal the 1998 interim ISO RETSPLs from 8 to 16 kHz. Further, because the differences between repeated thresholds were well within ±10 dB and had an extremely low false-positive rate relative to the ASHA 1994 criteria for a significant threshold shift due to ototoxicity, a Sennheiser HDA 200 earphone can be used for serial monitoring to determine whether significant high-frequency threshold shifts have occurred in patients receiving potentially ototoxic drug therapy.
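The shift criteria referenced from ASHA 1994, as paraphrased in this abstract, reduce to a simple check; a sketch assuming thresholds in dB HL ordered by test frequency:

```python
def significant_shift(baseline, followup):
    """Flag a significant ototoxic threshold shift: a shift of >=20 dB at
    any single frequency, or a shift of >10 dB at two consecutive test
    frequencies (criteria as stated in the abstract above)."""
    shifts = [f - b for b, f in zip(baseline, followup)]
    any_20 = any(s >= 20 for s in shifts)
    consecutive_10 = any(a > 10 and b > 10 for a, b in zip(shifts, shifts[1:]))
    return any_20 or consecutive_10

# Shifts of [5, 15, 15, 5] dB: no single 20 dB shift, but two consecutive
# frequencies shifted by more than 10 dB, so the criterion fires.
print(significant_shift([10, 10, 15, 20], [15, 25, 30, 25]))  # True
```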
Protein flexibility: coordinate uncertainties and interpretation of structural differences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rashin, Alexander A., E-mail: alexander-rashin@hotmail.com; LH Baker Center for Bioinformatics and Department of Biochemistry, Biophysics and Molecular Biology, 112 Office and Lab Building, Iowa State University, Ames, IA 50011-3020; Rashin, Abraham H. L.
2009-11-01
Criteria for the interpretability of coordinate differences and a new method for identifying rigid-body motions and nonrigid deformations in protein conformational changes are developed and applied to functionally induced and crystallization-induced conformational changes. Valid interpretations of conformational movements in protein structures determined by X-ray crystallography require that the movement magnitudes exceed their uncertainty threshold. Here, it is shown that such thresholds can be obtained from the distance difference matrices (DDMs) of 1014 pairs of independently determined structures of bovine ribonuclease A and sperm whale myoglobin, with no explanations provided for reportedly minor coordinate differences. The smallest magnitudes of reportedly functional motions are just above these thresholds. Uncertainty thresholds can provide objective criteria that distinguish between true conformational changes and apparent ‘noise’, showing that some previous interpretations of protein coordinate changes attributed to external conditions or mutations may be doubtful or erroneous. The use of uncertainty thresholds, DDMs, the newly introduced CDDMs (contact distance difference matrices) and a novel simple rotation algorithm allows a more meaningful classification and description of protein motions, distinguishing between various rigid-fragment motions and nonrigid conformational deformations. It is also shown that half of 75 pairs of identical molecules, each from the same asymmetric crystallographic cell, exhibit coordinate differences that range from just outside the coordinate uncertainty threshold to the full magnitude of large functional movements. Thus, crystallization might often induce protein conformational changes that are comparable to those related to or induced by the protein function.
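A DDM as used here is straightforward to compute; a minimal NumPy sketch for two matched coordinate sets (illustrative only):

```python
import numpy as np

def distance_difference_matrix(coords_a, coords_b):
    """DDM: element-wise difference between the intramolecular distance
    matrices of two conformations of the same protein.
    coords_a, coords_b: (N, 3) arrays of matched atom coordinates."""
    def dist_matrix(c):
        return np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    return dist_matrix(coords_a) - dist_matrix(coords_b)

# Entries with |DDM| below the empirically derived uncertainty threshold
# would be treated as noise rather than genuine conformational change.
a = np.random.rand(5, 3)
b = a + 0.01 * np.random.randn(5, 3)
print(np.abs(distance_difference_matrix(a, b)).max())
```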
Estimation of Effect Thresholds for the Development of Water Quality Criteria
Biological and ecological effect thresholds can be used for determining safe levels of nontraditional stressors. The U.S. EPA Framework for Developing Suspended and Bedded Sediments (SABS) Water Quality Criteria (WQC) [36] uses a risk assessment approach to estimate effect thre...
Automated reconstruction of rainfall events responsible for shallow landslides
NASA Astrophysics Data System (ADS)
Vessia, G.; Parise, M.; Brunetti, M. T.; Peruccacci, S.; Rossi, M.; Vennari, C.; Guzzetti, F.
2014-04-01
Over the last 40 years, many contributions have been devoted to identifying empirical rainfall thresholds (e.g. intensity vs. duration ID, cumulated rainfall vs. duration ED, cumulated rainfall vs. intensity EI) for the initiation of shallow landslides, based on local as well as worldwide inventories. Although different methods to trace the threshold curves have been proposed and discussed in the literature, a systematic study to develop an automated procedure for selecting the rainfall event responsible for the landslide occurrence has rarely been addressed. Nonetheless, objective criteria for estimating the rainfall responsible for the landslide occurrence (effective rainfall) play a prominent role in determining the threshold values. In this paper, two criteria for the identification of effective rainfall events are presented: (1) the first is based on the analysis of the time series of rainfall mean intensity values over one month preceding the landslide occurrence, and (2) the second on the analysis of the trend in the time function of the cumulated mean intensity series calculated from the rainfall records measured by rain gauges. The two criteria have been implemented in an automated procedure written in the R language. A sample of 100 shallow landslides collected in Italy by the CNR-IRPI research group from 2002 to 2012 has been used to calibrate the proposed procedure. The cumulated rainfall E and duration D of the rainfall events that triggered the documented landslides are calculated through the new procedure and are fitted with a power law in the (D,E) diagram. The results are discussed by comparing the (D,E) pairs calculated by the automated procedure with those obtained by the expert method.
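The fitting step reduces to linear least squares in log-log space; a sketch with invented (D,E) pairs, not values from the CNR-IRPI sample:

```python
import numpy as np

D = np.array([6.0, 12.0, 24.0, 48.0, 96.0])   # duration (h), invented
E = np.array([18.0, 26.0, 41.0, 60.0, 95.0])  # cumulated rainfall (mm), invented

# Fit the power law E = a * D^b by least squares on log-transformed data.
b, log_a = np.polyfit(np.log(D), np.log(E), 1)
a = np.exp(log_a)
print(f"E = {a:.2f} * D^{b:.2f}")  # the (D,E) threshold curve
```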
Multi-criteria decision making approaches for quality control of genome-wide association studies.
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-03-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase can minimize the effects of this kind of error. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking these criteria and the experimenter's preferences into account at the same time. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on Multi-Criteria Decision Making theory. We have applied our method to a real dataset composed of 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they rationalize and make explicit the experimenter's choices, thus providing more reproducible results.
How to Assess the Value of Medicines?
Simoens, Steven
2010-01-01
This study aims to discuss approaches to assessing the value of medicines. Economic evaluation assesses value by means of the incremental cost-effectiveness ratio (ICER). Health is maximized by selecting medicines with increasing ICERs until the budget is exhausted. The budget size determines the value of the threshold ICER and vice versa. Alternatively, the threshold value can be inferred from pricing/reimbursement decisions, although such values vary between countries. Threshold values derived from the value-of-life literature depend on the technique used. The World Health Organization has proposed a threshold value tied to the national GDP. As decision makers may wish to consider multiple criteria, variable threshold values and weighted ICERs have been suggested. Other approaches (i.e., replacement approach, program budgeting and marginal analysis) have focused on improving resource allocation, rather than maximizing health subject to a budget constraint. Alternatively, the generalized optimization framework and multi-criteria decision analysis make it possible to consider other criteria in addition to value. PMID:21607066
Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology
Mattsson, Niklas; Mackin, R. Scott; Schöll, Michael; Nosheny, Rachel L.; Tosun, Duygu; Donohue, Michael C.; Aisen, Paul S.; Jagust, William J.; Weiner, Michael W.
2016-01-01
Objective: To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. Methods: In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Results: Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. Conclusions: A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. PMID:27164667
Postmus, Douwe; Tervonen, Tommi; van Valkenhoef, Gert; Hillege, Hans L; Buskens, Erik
2014-09-01
A standard practice in health economic evaluation is to monetize health effects by assuming a certain societal willingness-to-pay per unit of health gain. Although the resulting net monetary benefit (NMB) is easy to compute, the use of a single willingness-to-pay threshold assumes expressibility of the health effects on a single non-monetary scale. To relax this assumption, this article proves that the NMB framework is a special case of the more general stochastic multi-criteria acceptability analysis (SMAA) method. Specifically, as SMAA does not restrict the number of criteria to two and also does not require the marginal rates of substitution to be constant, there are problem instances for which the use of this more general method may result in a better understanding of the trade-offs underlying the reimbursement decision-making problem. This is illustrated by applying both methods in a case study related to infertility treatment.
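A sketch contrasting the two frameworks; all numbers are invented, and the SMAA part simply samples uniform criterion weights and reports first-rank acceptability indices instead of fixing a single willingness-to-pay threshold:

```python
import numpy as np

def nmb(delta_qalys, delta_cost, wtp=30000.0):
    """Net monetary benefit under one fixed willingness-to-pay threshold."""
    return wtp * delta_qalys - delta_cost

print(nmb(0.40, 8000.0))  # 4000.0 > 0: adopt under a 30,000-per-QALY threshold

# SMAA-style relaxation: performance of two alternatives on two criteria
# (QALYs gained, negated cost), each rescaled to 0-1 across alternatives.
perf = np.array([[0.40, -12000.0],
                 [0.25,  -4000.0]])
scaled = (perf - perf.min(0)) / (perf.max(0) - perf.min(0))

rng = np.random.default_rng(0)
wins = np.zeros(len(perf))
for _ in range(10_000):
    w = rng.dirichlet(np.ones(perf.shape[1]))  # uniform weights on the simplex
    wins[np.argmax(scaled @ w)] += 1
print(wins / wins.sum())  # first-rank acceptability index per alternative
```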
Cameron, David; Ubels, Jasper; Norström, Fredrik
2018-01-01
Background: The amount a government should be willing to invest in adopting new medical treatments has long been under debate. With many countries using formal cost-effectiveness (C/E) thresholds when examining potential new treatments and ever-growing medical costs, accurately setting the level of a C/E threshold can be essential for an efficient healthcare system. Objectives: The aim of this systematic review is to describe the prominent approaches to setting a C/E threshold, compile available national-level C/E threshold data and willingness-to-pay (WTP) data, and to discern whether associations exist between these values, gross domestic product (GDP) and health-adjusted life expectancy (HALE). This review further examines current obstacles faced with the presently available data. Methods: A systematic review was performed to collect articles which have studied national C/E thresholds and WTP per quality-adjusted life year (QALY) in the general population. Associations between GDP, HALE, WTP, and C/E thresholds were analyzed with correlations. Results: Seventeen countries were identified from nine unique sources to have formal C/E thresholds within our inclusion criteria. Thirteen countries from nine sources were identified to have WTP per QALY data within our inclusion criteria. Two possible associations were identified: C/E thresholds with HALE (quadratic correlation of 0.63), and C/E thresholds with GDP per capita (polynomial correlation of 0.84). However, these results are based on few observations and therefore firm conclusions cannot be made. Conclusions: Most national C/E thresholds identified in our review fall within the WHO's recommended range of one-to-three times GDP per capita. However, the quality and quantity of data available regarding national average WTP per QALY, opportunity costs, and C/E thresholds is poor in comparison to the importance of adequate investment in healthcare. There is an obvious risk that countries might either over- or underinvest in healthcare if they base their decision-making process on erroneous presumptions or non-evidence-based methodologies. The commonly cited value of USD 100,000 per QALY may have some basis. PMID:29564962
Koning, Sarah H; van Zanden, Jelmer J; Hoogenberg, Klaas; Lutgers, Helen L; Klomp, Alberdina W; Korteweg, Fleurisca J; van Loon, Aren J; Wolffenbuttel, Bruce H R; van den Berg, Paul P
2018-04-01
Detection and management of gestational diabetes mellitus (GDM) are crucial to reduce the risk of pregnancy-related complications for both mother and child. In 2013, the WHO adopted new diagnostic criteria for GDM to improve pregnancy outcomes. However, the evidence supporting these criteria is limited. Consequently, these new criteria have not yet been endorsed in the Netherlands. The aim of this study was to determine the impact of these criteria on the number of GDM diagnoses and pregnancy outcomes. Data were available from 10,642 women who underwent a 75 g OGTT because of risk factors or signs suggestive of GDM. Women were treated if diagnosed with GDM according to the WHO 1999 criteria. Data on pregnancy outcomes were obtained from extensive chart reviews from 4,431 women and were compared between women with normal glucose tolerance (NGT) and women classified into the following groups: (1) GDM according to WHO 1999 criteria; (2) GDM according to WHO 2013 criteria; (3) GDM according to WHO 2013 fasting glucose threshold, but not WHO 1999 criteria; and (4) GDM according to WHO 1999 2 h plasma glucose threshold (2HG), but not WHO 2013 criteria. Applying the new WHO 2013 criteria would have increased the number of diagnoses by 45% (32% vs 22%) in this population of women at higher risk for GDM. In comparison with women with NGT, women classified as having GDM based only on the WHO 2013 threshold for fasting glucose, who were not treated for GDM, were more likely to have been obese (46.1% vs 28.1%, p < 0.001) and hypertensive (3.3% vs 1.2%, p < 0.001) before pregnancy, and to have had higher rates of gestational hypertension (7.8% vs 4.9%, p = 0.003), planned Caesarean section (10.3% vs 6.5%, p = 0.001) and induction of labour (34.8% vs 28.0%, p = 0.001). In addition, their neonates were more likely to have had an Apgar score <7 at 5 min (4.4% vs 2.6%, p = 0.015) and to have been admitted to the Neonatology Department (15.0% vs 11.1%, p = 0.004). The number of large for gestational age (LGA) neonates was not significantly different between the two groups. Women potentially missed owing to the higher 2HG threshold set by WHO 2013 had similar pregnancy outcomes to women with NGT. These women were all treated for GDM with diet and 20.5% received additional insulin. Applying the WHO 2013 criteria will have a major impact on the number of GDM diagnoses. Using the fasting glucose threshold set by WHO 2013 identifies a group of women with an increased risk of adverse outcomes compared with women with NGT. We therefore support the use of a lower fasting glucose threshold in the Dutch national guideline for GDM diagnosis. However, adopting the WHO 2013 criteria with a higher 2HG threshold would exclude women in whom treatment for GDM seems to be effective.
Cunningham-Williams, Renee M.; Grucza, Richard A.; Cottler, Linda B.; Womack, Sharon B.; Books, Samantha J.; Przybeck, Thomas R.; Spitznagel, Edward L.; Cloninger, C. Robert
2006-01-01
Objectives We report the prevalence of and risk and protective factors for DSM-IV sub-threshold gambling (1–4 criteria) and pathological gambling disorder (PGD; 5–10 criteria) in a non-clinical household sample of St. Louis area gamblers. Methods Of the 7689 individuals contacted via Random Digit Dialing, 3292 were screened eligible. Of these, 1142 from households in 6 contiguous regions in Missouri and Illinois consented to participate and were mailed a St. Louis Area Personality, Health, and Lifestyle (SLPHL) Survey. Results Post-stratification weighted data (n = 913) indicate lifetime prevalence rates of 12.4% sub-threshold and 2.5% PGD (conditional prevalence = 21.5% and 4.3% respectively). Risk and protective factors for gambling severity varied in the sample. Conclusions Targeted prevention messages are warranted specifically for gamblers of varying risk for PGD. PMID:15804388
Replenishing data descriptors in a DMA injection FIFO buffer
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Cernohous, Bob R [Rochester, MN; Heidelberger, Philip [Cortlandt Manor, NY; Kumar, Sameer [White Plains, NY; Parker, Jeffrey J [Rochester, MN
2011-10-11
Methods, apparatus, and products are disclosed for replenishing data descriptors in a Direct Memory Access (`DMA`) injection first-in-first-out (`FIFO`) buffer that include: determining, by a messaging module on an origin compute node, whether a number of data descriptors in a DMA injection FIFO buffer exceeds a predetermined threshold, each data descriptor specifying an application message for transmission to a target compute node; queuing, by the messaging module, a plurality of new data descriptors in a pending descriptor queue if the number of the data descriptors in the DMA injection FIFO buffer exceeds the predetermined threshold; establishing, by the messaging module, interrupt criteria that specify when to replenish the injection FIFO buffer with the plurality of new data descriptors in the pending descriptor queue; and injecting, by the messaging module, the plurality of new data descriptors into the injection FIFO buffer in dependence upon the interrupt criteria.
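The disclosed flow can be modelled abstractly; a Python sketch of the threshold check, pending descriptor queue, and replenishment step (the names and the capacity value are ours, not the patent's):

```python
from collections import deque

INJECTION_THRESHOLD = 8       # illustrative FIFO occupancy threshold

injection_fifo = deque()      # descriptors handed to the DMA engine
pending_queue = deque()       # descriptors waiting for FIFO space

def submit(descriptor):
    """Queue a new data descriptor rather than injecting it when the
    injection FIFO already holds more than the threshold allows."""
    if len(injection_fifo) > INJECTION_THRESHOLD:
        pending_queue.append(descriptor)
    else:
        injection_fifo.append(descriptor)

def on_interrupt():
    """Interrupt criteria satisfied (e.g. the FIFO has drained below the
    threshold): replenish the injection FIFO from the pending queue."""
    while pending_queue and len(injection_fifo) <= INJECTION_THRESHOLD:
        injection_fifo.append(pending_queue.popleft())
```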
Cloud Detection of Optical Satellite Images Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Lee, Kuan-Yi; Lin, Chao-Hung
2016-06-01
Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method in cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Moreover, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects; similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. The feature extraction is therefore based on the ACCA algorithm and Fmask. Spatial and temporal information is also important for satellite images; consequently, a co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+), covering agricultural land, snow areas, and islands, are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
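A minimal sketch of the threshold-free classification step (scikit-learn; the six per-pixel features and the random training data stand in for the ACCA/Fmask-derived features described above):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder per-pixel feature vectors (e.g. band reflectances, brightness
# temperature, co-occurrence texture, temporal variance) with labels
# 0 = non-cloud, 1 = cloud, 2 = other.
X_train = np.random.rand(300, 6)
y_train = np.random.randint(0, 3, 300)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)

X_pixels = np.random.rand(10, 6)
print(clf.predict(X_pixels))  # per-pixel class labels, no manual threshold
```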
Delineating riparian zones for entire river networks using geomorphological criteria
NASA Astrophysics Data System (ADS)
Fernández, D.; Barquín, J.; Álvarez-Cabria, M.; Peñas, F. J.
2012-03-01
Riparian zone delineation is a central issue for riparian and river ecosystem management; however, the criteria used to delineate riparian zones are still under debate. The area inundated by a 50-yr flood has been indicated as an optimal hydrological descriptor for riparian areas. This detailed hydrological information is, however, not usually available for entire river corridors, and is only available for populated areas at risk of flooding. One of the requirements for catchment planning is to establish the most appropriate location of zones to conserve or restore riparian buffer strips for whole river networks. This issue could be solved by using geomorphological criteria extracted from Digital Elevation Models. In this work we have explored the adjustment of surfaces developed under two different geomorphological criteria with respect to the flooded area covered by the 50-yr flood, in an attempt to rapidly delineate hydrologically-meaningful riparian zones for entire river networks. The first geomorphological criterion is based on the surface that intersects valley walls at a given number of bankfull depths above the channel (BFDAC), while the second is based on the surface defined by a threshold value indicating the relative cost of moving from the stream up to the valley, accounting for slope and elevation change (path distance). As the relationship between local geomorphology and the 50-yr flood has been suggested to be river-type dependent, we have performed our analyses distinguishing between three river types corresponding to three valley morphologies: open, shallow vee and deep vee valleys (in increasing degree of valley constrainment). Adjustment between the surfaces derived from geomorphological and hydrological criteria has been evaluated using two different methods: one based on exceeding areas (minimum exceeding score) and the other on the similarity among total area values. Both methods have pointed out the same surfaces when looking for those that best match the 50-yr flood. Results have shown that the BFDAC approach obtains a slightly better adjustment than path distance. However, BFDAC requires bankfull depth regional regressions along the considered river network. Results have also confirmed that unconstrained valleys require lower threshold values than constrained valleys when deriving surfaces using geomorphological criteria. Moreover, this study provides: (i) guidance on the selection of the proper geomorphological criterion and associated threshold values, and (ii) an easy calibration framework to evaluate the adjustment with respect to hydrologically-meaningful surfaces.
Classification criteria and probability risk maps: limitations and perspectives.
Saisana, Michaela; Dubois, Gregoire; Chaloulakou, Archontoula; Spyrellis, Nikolas
2004-03-01
Delineation of polluted zones with respect to regulatory standards, accounting at the same time for the uncertainty of the estimated concentrations, relies on classification criteria that can lead to significantly different pollution risk maps, which, in turn, can depend on the regulatory standard itself. This paper reviews four popular classification criteria related to the violation of a probability threshold or a physical threshold, using annual (1996-2000) nitrogen dioxide concentrations from 40 air monitoring stations in Milan. The relative advantages and practical limitations of each criterion are discussed, and it is shown that some of the criteria are more appropriate for the problem at hand and that the choice of the criterion can be supported by the statistical distribution of the data and/or the regulatory standard. Finally, the polluted area is estimated over the different years and concentration thresholds using the appropriate risk maps as an additional source of uncertainty.
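A sketch of two of the reviewed criterion types, assuming a Gaussian estimation error at one location (all values invented); note that the two criteria can disagree for the same site:

```python
from scipy.stats import norm

mu, sigma = 52.0, 10.0   # estimated concentration and its standard deviation
standard = 58.0          # regulatory (physical) threshold
p_crit = 0.2             # probability threshold

p_exceed = norm.sf(standard, loc=mu, scale=sigma)  # P(true value > standard)

print(p_exceed > p_crit)   # probability criterion: polluted (~0.27 > 0.2)
print(mu > standard)       # estimate-vs-standard criterion: not polluted
```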
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2016-02-01
A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping value computation strategies for a Log-Domain Demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ε, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ε, but at giving insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and anatomical districts. © The Author(s) 2014.
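The best-performing strategy reported here is easy to state in code; a sketch comparing the metric minimum over the last three iterations with the minimum over the fourth to sixth most recent:

```python
def should_stop(metric_history, eps=1e-4):
    """Stop when the improvement between the minimum of the three most
    recent metric values and the minimum of the fourth-to-sixth most
    recent values falls below the user-chosen residual threshold eps."""
    if len(metric_history) < 6:
        return False
    recent = min(metric_history[-3:])
    earlier = min(metric_history[-6:-3])
    return (earlier - recent) < eps

# A flattening mean-squared-error trace triggers the stop:
print(should_stop([2.0, 1.9999, 1.99985, 1.99984, 1.99983, 1.99982]))  # True
```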
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-09
...] Agency Information Collection Activities; Proposed Collection; Comment Request; Threshold of Regulation... appropriate, and other forms of information technology. Threshold of Regulation for Substances Used in Food... intended use of a substance in a food-contact article meets the threshold criteria, certain information...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Definitions of metrics used to determine the general outage-reporting threshold criteria. 4.7 Section 4.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL DISRUPTIONS TO COMMUNICATIONS Reporting Requirements for Disruptions to Communications § 4.7 Definitions of metrics used to...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Definitions of metrics used to determine the general outage-reporting threshold criteria. 4.7 Section 4.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL DISRUPTIONS TO COMMUNICATIONS Reporting Requirements for Disruptions to Communications § 4.7 Definitions of metrics used to...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Definitions of metrics used to determine the general outage-reporting threshold criteria. 4.7 Section 4.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL DISRUPTIONS TO COMMUNICATIONS Reporting Requirements for Disruptions to Communications § 4.7 Definitions of metrics used to...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Definitions of metrics used to determine the general outage-reporting threshold criteria. 4.7 Section 4.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL DISRUPTIONS TO COMMUNICATIONS Reporting Requirements for Disruptions to Communications § 4.7 Definitions of metrics used to...
Compton, Wilson M.; Dawson, Deborah A.; Goldstein, Risë B.; Grant, Bridget F.
2013-01-01
Background Ascertaining agreement between DSM-IV and DSM-5 is important to determine the applicability of treatments for DSM-IV conditions to persons diagnosed according to the proposed DSM-5. Methods Data from a nationally representative sample of US adults were used to compare concordance of past-year DSM-IV Opioid, Cannabis, Cocaine and Alcohol Dependence with past-year DSM-5 disorders at thresholds of 3+, 4+, 5+ and 6+ positive DSM-5 criteria among past-year users of opioids (n=264), cannabis (n=1,622), cocaine (n=271) and alcohol (n=23,013). Substance-specific 2×2 tables yielded overall concordance (kappa), sensitivity, specificity, positive predictive values (PPV) and negative predictive values (NPV). Results For DSM-IV Alcohol, Cocaine and Opioid Dependence, optimal concordance occurred when 4+ DSM-5 criteria were endorsed, corresponding to the threshold for moderate DSM-5 Alcohol, Cocaine and Opioid Use Disorders. Maximal concordance of DSM-IV Cannabis Dependence and DSM-5 Cannabis Use Disorder occurred when 6+ criteria were endorsed, corresponding to the threshold for severe DSM-5 Cannabis Use Disorder. At these optimal thresholds, sensitivity, specificity, PPV and NPV generally exceeded 85% (>75% for cannabis). Conclusions Overall, excellent correspondence of DSM-IV Dependence with DSM-5 Substance Use Disorders was documented in this general population sample of alcohol, cannabis, cocaine and opioid users. Applicability of treatments tested for DSM-IV Dependence is supported by these results for those with a DSM-5 Alcohol, Cocaine or Opioid Use Disorder of at least moderate severity or Severe Cannabis Use Disorder. Further research is needed to provide evidence for applicability of treatments for persons with milder substance use disorders. PMID:23642316
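All of the reported statistics derive from substance-specific 2×2 tables; a sketch with invented counts:

```python
def two_by_two_stats(tp, fp, fn, tn):
    """Concordance of a DSM-5 criterion-count threshold against DSM-IV
    Dependence from a 2x2 table: sensitivity, specificity, PPV, NPV,
    and Cohen's kappa."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

print(two_by_two_stats(tp=90, fp=10, fn=8, tn=92))  # illustrative counts only
```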
16 CFR 801.20 - Acquisitions subsequent to exceeding threshold.
Code of Federal Regulations, 2010 CFR
2010-01-01
... threshold. 801.20 Section 801.20 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS... § 801.20 Acquisitions subsequent to exceeding threshold. Acquisitions meeting the criteria of section 7A... may have met or exceeded a notification threshold before the effective date of these rules; or (c) The...
16 CFR 801.20 - Acquisitions subsequent to exceeding threshold.
Code of Federal Regulations, 2011 CFR
2011-01-01
... threshold. 801.20 Section 801.20 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS... § 801.20 Acquisitions subsequent to exceeding threshold. Acquisitions meeting the criteria of section 7A... may have met or exceeded a notification threshold before the effective date of these rules; or (c) The...
16 CFR 801.20 - Acquisitions subsequent to exceeding threshold.
Code of Federal Regulations, 2012 CFR
2012-01-01
... threshold. 801.20 Section 801.20 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS... § 801.20 Acquisitions subsequent to exceeding threshold. Acquisitions meeting the criteria of section 7A... may have met or exceeded a notification threshold before the effective date of these rules; or (c) The...
Mindlis, I; Morales-Raveendran, E; Goodman, E; Xu, K; Vila-Castelar, C; Keller, K; Crawford, G; James, S; Katz, C L; Crowley, L E; de la Hoz, R E; Markowitz, S; Wisnivesky, J P
2017-09-01
Using data from a cohort of World Trade Center (WTC) rescue and recovery workers with asthma, we assessed whether meeting criteria for post-traumatic stress disorder (PTSD), sub-threshold PTSD, and for specific PTSD symptom dimensions are associated with increased asthma morbidity. Participants underwent a Structured Clinical Interview for Diagnostic and Statistical Manual to assess the presence of PTSD following DSM-IV criteria during in-person interviews between December 2013 and April 2015. We defined sub-threshold PTSD as meeting criteria for two of three symptom dimensions: re-experiencing, avoidance, or hyper-arousal. Asthma control, acute asthma-related healthcare utilization, and asthma-related quality of life data were collected using validated scales. Unadjusted and multiple regression analyses were performed to assess the relationship between sub-threshold PTSD and PTSD symptom domains with asthma morbidity measures. Of the 181 WTC workers with asthma recruited into the study, 28% had PTSD and 25% had sub-threshold PTSD. Patients with PTSD showed worse asthma control, higher rates of inpatient healthcare utilization, and poorer asthma quality of life than those with sub-threshold or no PTSD. After adjusting for potential confounders, among patients not meeting the criteria for full PTSD, those presenting symptoms of re-experiencing exhibited poorer quality of life (p = 0.003). Avoidance was associated with increased acute healthcare use (p = 0.05). Sub-threshold PTSD was not associated with asthma morbidity (p > 0.05 for all comparisons). There may be benefit in assessing asthma control in patients with sub-threshold PTSD symptoms as well as those with full PTSD to more effectively identify ongoing asthma symptoms and target management strategies.
Pinchi, Vilma; Pradella, Francesco; Vitale, Giulia; Rugo, Dario; Nieri, Michele; Norelli, Gian-Aristide
2016-01-01
The age threshold of 14 years is relevant in Italy as the minimum age for criminal responsibility. It is of utmost importance to evaluate the diagnostic accuracy of every odontological method for age evaluation, considering the sensitivity, or the ability to identify the true positive cases, and the specificity, or the ability to identify the true negative cases. The research aims to compare the specificity and sensitivity of four commonly adopted methods of dental age estimation - Demirjian, Haavikko, Willems and Cameriere - in a sample of Italian children aged between 11 and 16 years, with an age threshold of 14 years, using receiver operating characteristic curves and the area under the curve (AUC). In addition, new decision criteria are developed to increase the accuracy of the methods. Among the four odontological methods for age estimation adopted in the research, the Cameriere method showed the highest AUC in both female and male cohorts, and a high degree of accuracy at the age threshold of 14 years. To estimate the 14-year age threshold more accurately with the Cameriere method, however, it is suggested - according to the Youden index - that the decision criterion be set at the lower value of 12.928 years for females and 13.258 years for males, yielding a sensitivity of 85% and specificity of 88% in females, and a sensitivity of 77% and specificity of 92% in males. If a specificity level >90% is needed, the cut-off point should be set at 12.959 years (82% sensitivity) for females. © The Author(s) 2015.
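Selecting a decision criterion by the Youden index (J = sensitivity + specificity - 1) is a short computation once the ROC curve is available; a sketch with simulated ages, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
over_14 = rng.integers(0, 2, 200)   # true status relative to the 14-yr threshold
estimated_age = 13.0 + 1.2 * over_14 + rng.normal(0, 0.8, 200)  # simulated estimates

fpr, tpr, cutoffs = roc_curve(over_14, estimated_age)
best = np.argmax(tpr - fpr)         # maximize Youden's J
print(f"cut-off {cutoffs[best]:.3f} yr: "
      f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```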
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cain, W.S.; Shoaf, C.R.; Velasquez, S.F.
1992-03-01
In response to numerous requests for information related to odor thresholds, this document was prepared by the Air Risk Information Support Center in its role in providing technical assistance to State and Local government agencies on risk assessment of air pollutants. A discussion of basic concepts related to olfactory function and the measurement of odor thresholds is presented. A detailed discussion of criteria which are used to evaluate the quality of published odor threshold values is provided. The use of odor threshold information in risk assessment is discussed. The results of a literature search and review of odor threshold information for the chemicals listed as hazardous air pollutants in the Clean Air Act amendments of 1990 are presented. The published odor threshold values are critically evaluated based on the criteria discussed, and the values of acceptable quality are used to determine a geometric mean or best estimate.
Zhao, Na; Qin, Honglei; Sun, Kewen; Ji, Yuanfa
2017-01-01
The frequency-locked detector (FLD) has been widely utilized in the tracking loops of Global Positioning System (GPS) receivers to indicate their locking status, yet the relation between the FLD output and lock status has seldom been discussed, and traditional PLL experience is not suitable for the FLL. In this paper, threshold setting criteria for the frequency-locked detector in GPS receivers are proposed by analyzing the statistical characteristics of the FLD output. The approximate probability distribution of the frequency-locked detector is theoretically derived using a statistical approach, which reveals the relationship between the probabilities of the frequency-locked detector and the carrier-to-noise ratio (C/N0) of the received GPS signal. The relationship among mean-time-to-lose-lock (MTLL), detection threshold and lock probability related to C/N0 can be further discovered by utilizing this probability. Therefore, a theoretical basis for threshold setting criteria in frequency-locked loops for GPS receivers is provided based on mean-time-to-lose-lock analysis. PMID:29207546
Use of multi-criteria decision analysis to identify potentially dangerous glacial lakes.
Kougkoulos, Ioannis; Cook, Simon J; Jomelli, Vincent; Clarke, Leon; Symeonakis, Elias; Dortch, Jason M; Edwards, Laura A; Merad, Myriam
2018-04-15
Glacial Lake Outburst Floods (GLOFs) represent a significant threat in deglaciating environments, necessitating the development of GLOF hazard and risk assessment procedures. Here, we outline a Multi-Criteria Decision Analysis (MCDA) approach that can be used to rapidly identify potentially dangerous lakes in regions without existing tailored GLOF risk assessments, where a range of glacial lake types exist, and where field data are sparse or non-existent. Our MCDA model (1) is desk-based and uses freely and widely available data inputs and software, and (2) allows the relative risk posed by a range of glacial lake types to be assessed simultaneously within any region. A review of the factors that influence GLOF risk, combined with the strict rules of criteria selection inherent to MCDA, has allowed us to identify 13 exhaustive, non-redundant, and consistent risk criteria. We used our MCDA model to assess the risk of 16 extant glacial lakes and 6 lakes that have already generated GLOFs, and found that our results agree well with previous studies. For the first time in GLOF risk assessment, we employed sensitivity analyses to test the strength of our model results and assumptions, and to identify lakes that are sensitive to the criteria and risk thresholds used. A key benefit of the MCDA method is that sensitivity analyses are readily undertaken. Overall, these sensitivity analyses lend support to our model, although we suggest that further work is required to determine the relative importance of assessment criteria, and the thresholds that determine the level of risk for each criterion. As a case study, the tested method was then applied to 25 potentially dangerous lakes in the Bolivian Andes, where GLOF risk is poorly understood; 3 lakes are found to pose 'medium' or 'high' risk, and require further detailed investigation. Copyright © 2017 Elsevier B.V. All rights reserved.
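A stripped-down illustration of the weighted-sum scoring at the heart of such an MCDA model; the criteria, weights, and risk cut-offs below are invented placeholders, not the paper's 13 calibrated criteria. Because the model is this cheap to evaluate, the sensitivity analyses the authors emphasize amount to re-running it under perturbed weights and thresholds:

```python
import numpy as np

# Rows = lakes, columns = normalized risk criteria (0 = benign, 1 = worst),
# e.g. lake-area growth, dam type, slope above the lake.
scores = np.array([[0.8, 0.3, 0.6],
                   [0.2, 0.1, 0.4],
                   [0.9, 0.7, 0.8]])
weights = np.array([0.5, 0.2, 0.3])   # must sum to 1

risk = scores @ weights
for lake, r in zip(["A", "B", "C"], risk):
    level = "high" if r > 0.6 else "medium" if r > 0.4 else "low"
    print(lake, round(float(r), 2), level)
```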
Jeremiah D. Groom; Sherri L. Johnson; Joshua D. Seeds; George G. Ice
2017-01-01
We present the results of a replicated before-after-control-impact study on 33 streams to test the effectiveness of riparian rules for private and State forests at meeting temperature criteria in streams in western Oregon. Many states have established regulatory temperature thresholds, referred to as numeric criteria, to protect cold-water fishes such as salmon and...
Hydro-mechanical mechanism and thresholds of rainfall-induced unsaturated landslides
NASA Astrophysics Data System (ADS)
Yang, Zongji; Lei, Xiaoqin; Huang, Dong; Qiao, Jianping
2017-04-01
The devastating Ms 8 Wenchuan earthquake in 2008 created the greatest number of co-seismic mountain hazards ever recorded in China. However, the dynamics of rainfall-induced remobilization and transport of deposits after a giant earthquake are not fully understood. Moreover, rainfall intensity-duration (I-D) methods are the predominant early-warning indicators of rainfall-induced landslides in post-earthquake regions; they are a convenient and straightforward way to predict the hazards, but such rainfall-based criteria and thresholds are generally empirical and based on statistical analysis, and consequently ignore the failure mechanisms of the landslides. This study examines the mechanism, hydro-mechanical behaviour and thresholds of these unsaturated deposits under the influence of rainfall. To accomplish this, in situ experiments were performed on an instrumented landslide deposit. The field experimental tests were conducted on a natural co-seismic fractured slope to 1) simulate rainfall-induced shallow failures in the depression channels of a debris flow catchment in an earthquake-affected region, 2) explore the mechanisms and transient processes associated with hydro-mechanical parameter variations in response to rainfall infiltration, and 3) identify the hydrologic parameter thresholds and critical criteria of gravitational erosion in areas prone to mass remobilization as a source of debris flows. These experiments provided instrumental evidence that post-earthquake rainfall-induced mass remobilization occurs under unsaturated conditions in response to transient rainfall infiltration, and revealed transient processes and the dominance of preferential flow paths during infiltration. A hydro-mechanical method was adopted for transient hydrologic process modelling and unsaturated slope stability analysis, and the slope failures observed during the experimental tests were reproduced by the model, indicating that the decrease in matric suction and increase in moisture content in response to rainfall infiltration contributed greatly to post-earthquake shallow mass movement. A threshold model for the initiation of mass remobilization is therefore proposed, based on correlations of slope stability with volumetric water content and matric suction. As a complement to rainfall-based early-warning strategies, the proposed water content and suction threshold models, grounded in the infiltration-induced slope failure mechanism, are expected to improve the accuracy of predictions and early warnings of post-earthquake mountain hazards.
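For contrast with the hydro-mechanical thresholds proposed here, the conventional empirical intensity-duration check mentioned above takes one line; the curve parameters below are generic illustrations, not thresholds from this study:

```python
def exceeds_id_threshold(intensity_mm_h, duration_h, alpha=7.7, beta=0.39):
    """Empirical I-D criterion: an event is potentially landslide-triggering
    when its mean intensity exceeds alpha * D^(-beta)."""
    return intensity_mm_h > alpha * duration_h ** (-beta)

# A 24-hour event at 4 mm/h lies above this illustrative threshold curve.
print(exceeds_id_threshold(intensity_mm_h=4.0, duration_h=24.0))  # True
```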
Casadei, Luisa; Fanisio, Francesca; Sorge, Roberto Pietro; Collamarini, Matteo; Piccolo, Eleonora; Piccione, Emilio
2018-07-01
To diagnose polycystic ovary syndrome (PCOS) in young infertile women using different diagnostic criteria. To define serum anti-Müllerian hormone (AMH) cutoff values for PCOS definition. To investigate the correlation between AMH and body mass index (BMI). Retrospective case-control study. A total of 140 infertile women (age 21-35 years) were enrolled. PCOS was defined according to the National Institutes of Health (NIH) criteria, the Rotterdam consensus criteria and the Androgen Excess and PCOS Society (AE-PCOS) criteria. ROC curve analysis was performed to define AMH thresholds for PCOS definition according to the three different diagnostic criteria. The correlation between AMH and BMI was investigated. The prevalence of PCOS under the NIH criteria, the Rotterdam criteria and the AE-PCOS criteria was 27.1, 40 and 29.3%, respectively. The optimal AMH threshold to distinguish NIH PCOS from infertile controls was 5.20 ng/ml (AUC = 0.86, sensitivity 79%, specificity 80%); the best cutoff to detect Rotterdam PCOS was 4.57 ng/ml (AUC = 0.85, sensitivity 78%, specificity 81%); a cutoff of 4.85 ng/ml (AUC = 0.85, sensitivity 80%, specificity 78%) defined PCOS women according to the AE-PCOS criteria. The prevalence of the syndrome became 37.1, 44.3 and 39.2% according to the three criteria, respectively, when using an AMH threshold between 4.57 and 5.20 ng/ml as an alternative to antral follicle count and/or hyperandrogenism. Anti-Müllerian hormone may reconcile the three diagnostic criteria and allow the diagnosis of PCOS in women with mild symptoms. No significant correlation was found between AMH and BMI in PCOS women or controls.
Cao, Xiaofeng; Wang, Jie; Jiang, Dalin; Sun, Jinhua; Huang, Yi; Luan, Shengji
2017-12-13
The establishment of numeric nutrient criteria is essential to aid the control of nutrient pollution and to protect and restore healthy ecological conditions. However, it is necessary to determine whether regional nutrient criteria can be defined in stream ecosystems with a poor ecological status. A database of periphytic diatom samples was collected in July and August of 2011 and 2012. In total, 172 samples were included in the database with matching environmental variables. Here, percentile estimates, nonparametric change-point analysis (nCPA) and Threshold Indicator Taxa ANalysis (TITAN) were applied to detect reference conditions and ecological thresholds along gradients of total nitrogen (TN), total phosphorus (TP) and ammonia nitrogen (NH3-N) for the development of nutrient criteria in the streams of the Lake Dianchi basin. The results highlighted the possibility of establishing regional criteria for nutrient concentrations, which we recommend to be no more than 1.39 mg L⁻¹ for TN, 0.04 mg L⁻¹ for TP and 0.17 mg L⁻¹ for NH3-N to prevent nuisance growths of tolerant taxa, and 0.38 mg L⁻¹ for TN, 0.02 mg L⁻¹ for TP and 0.02 mg L⁻¹ for NH3-N to maintain high quality waters in streams. Additionally, the influence of excessive background nutrient enrichment on the threshold response, and the ecological interaction with other stressors (HQI, etc.) in the nutrient dynamic process, need to be considered when establishing the eventual nutrient criteria, regardless of which technique is applied.
Lee, Janet S; Yang, Jianing; Stockl, Karen M; Lew, Heidi; Solow, Brian K
2016-01-01
General eligibility criteria used by the Centers for Medicare & Medicaid Services (CMS) to identify patients for medication therapy management (MTM) services include having multiple chronic conditions, taking multiple Part D drugs, and being likely to incur annual drug costs that exceed a predetermined threshold. The performance of these criteria in identifying patients in greatest need of MTM services is unknown. Although there are numerous possible versions of MTM identification algorithms that satisfy these criteria, there are limited data that evaluate the performance of MTM services using eligibility thresholds representative of those used by the majority of Part D sponsors. To (a) evaluate the performance of the 2013 CMS MTM eligibility criteria thresholds in identifying Medicare Advantage Prescription Drug (MAPD) plan patients with at least 2 drug therapy problems (DTPs) relative to alternative criteria threshold levels and (b) identify additional patient risk factors significantly associated with the number of DTPs for consideration as potential future MTM eligibility criteria. All patients in the Medicare Advantage Part D population who had pharmacy eligibility as of December 31, 2013, were included in this retrospective cohort study. Study outcomes included 7 different types of DTPs: use of high-risk medications in the elderly, gaps in medication therapy, medication nonadherence, drug-drug interactions, duplicate therapy, drug-disease interactions, and brand-to-generic conversion opportunities. DTPs were identified for each member based on 6 months of most recent pharmacy claims data and 14 months of most recent medical claims data. Risk factors examined in this study included patient demographics and prior health care utilization in the most recent 6 months. Descriptive statistics were used to summarize patient characteristics and to evaluate unadjusted relationships between the average number of DTPs identified per patient and each risk factor. Quartile values identified in the study population for number of diseases, number of drugs, and annual spend were used as potential new criteria thresholds, resulting in 27 new MTM criteria combinations. The performance of each eligibility criterion was evaluated using sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs). Patients identified with at least 2 DTPs were defined as those who would benefit from MTM services and were used as the gold standard. As part of a sensitivity analysis, patients identified with at least 1 DTP were used as the gold standard. Lastly, a multivariable negative binomial regression model was used to evaluate the relationship between each risk factor and the number of identified DTPs per patient while controlling for the patients' number of drugs, number of chronic diseases, and annual drug spend. A total of 2,578,336 patients were included in the study. The sensitivity, specificity, PPV, and NPV of CMS MTM criteria for the 2013 plan year were 15.3%, 95.6%, 51.3%, and 78.8%, respectively. Sensitivity and PPV improved when the drug count threshold increased from 8 to 10, and when the annual drug cost decreased from $3,144 to $2,239 or less. Results were consistent when at least 1 DTP was used as the gold standard. The adjusted rate of DTPs was significantly greater among patients identified with higher drug and disease counts, annual drug spend, and prior ER or outpatient or hospital visits. 
Patients who were male, younger, or white, or who had higher median household incomes, had significantly lower rates of DTPs. The performance of MTM eligibility criteria can be improved by increasing the threshold value for drug count while decreasing the threshold value for annual drug spend. Furthermore, additional risk factors, such as a recent ER or hospital visit, may be considered as potential MTM eligibility criteria.
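The screening-performance arithmetic in the study above reduces to a two-by-two table against the DTP gold standard. The following is a minimal sketch of that computation on synthetic data; the criteria combination, variable names, and simulated distributions are illustrative assumptions, not the study's data:

```python
# Minimal sketch (synthetic data): evaluating a hypothetical MTM eligibility
# rule against a gold standard of patients with >= 2 drug therapy problems.
import numpy as np

def screen_performance(eligible, has_2plus_dtp):
    """Return sensitivity, specificity, PPV, NPV for a boolean screen."""
    tp = np.sum(eligible & has_2plus_dtp)
    fp = np.sum(eligible & ~has_2plus_dtp)
    fn = np.sum(~eligible & has_2plus_dtp)
    tn = np.sum(~eligible & ~has_2plus_dtp)
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn)

# Hypothetical criteria combination: >= 3 chronic diseases, >= 10 drugs,
# and annual drug spend over $2,239 (one of many candidate combinations).
rng = np.random.default_rng(0)
n = 10_000
diseases, drugs = rng.poisson(3, n), rng.poisson(8, n)
spend = rng.gamma(2.0, 1500.0, n)
dtp_count = rng.poisson(0.2 * drugs + 0.1 * diseases)  # toy DTP model
eligible = (diseases >= 3) & (drugs >= 10) & (spend >= 2239)
sens, spec, ppv, npv = screen_performance(eligible, dtp_count >= 2)
print(f"sens={sens:.3f} spec={spec:.3f} ppv={ppv:.3f} npv={npv:.3f}")
```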
Simplified 4-item criteria for polycystic ovary syndrome: A bridge too far?
Indran, Inthrani R; Huang, Zhongwei; Khin, Lay Wai; Chan, Jerry K Y; Viardot-Foucault, Veronique; Yong, Eu Leong
2018-05-30
Although the Rotterdam 2003 polycystic ovary syndrome (PCOS) diagnostic criteria are widely used, the need to consider multiple variables makes them unwieldy in clinical practice. We propose simplified PCOS criteria wherein the diagnosis is made if two of the following three items are present: (i) oligomenorrhoea, (ii) anti-mullerian hormone (AMH) above threshold and/or (iii) hyperandrogenism, defined as either testosterone above threshold and/or the presence of hirsutism. This prospective cross-sectional study consists of healthy women (n = 157) recruited at an annual hospital health screen for staff and volunteers from the university community, and a patient cohort (n = 174) comprising women referred for suspected PCOS. We used the healthy cohort to establish threshold values for serum testosterone, antral follicle counts (AFC), ovarian volume (OV) and AMH. Women from the patient cohort, classified as PCOS by the simplified PCOS criteria, AMH alone and Rotterdam 2003, were compared with respect to prevalence of oligomenorrhoea, hyperandrogenism and metabolic indices. In healthy women, testosterone ≥1.89 nmol/L, AFC ≥22 follicles and OV ≥8.44 mL best predicted oligomenorrhoea and were used as threshold values for the PCOS criteria. An AMH level ≥37.0 pmol/L best predicted polycystic ovarian morphology. AMH alone as a single biomarker demonstrated poor specificity (58.9%) for PCOS compared to Rotterdam 2003. In contrast, there was a 94% overlap in women selected as PCOS by the simplified PCOS criteria and Rotterdam 2003. The population characteristics of these two groups of PCOS women showed no significant mean differences in androgenic, ovarian, AMH and metabolic (BMI, HOMA-IR) variables. Our data support the simplified PCOS criteria with population-specific thresholds for the diagnosis of PCOS. Their replacement of ovarian ultrasound biometry with the highly correlated variable AMH, and their use of testosterone as a single marker of hyperandrogenaemia alongside the key symptoms of oligomenorrhoea and hirsutism, confer significant clinical potential for the diagnosis of PCOS. © 2018 John Wiley & Sons Ltd.
DiFranza, Joseph; Ursprung, W W Sanouri; Lauzon, Béatrice; Bancej, Christina; Wellman, Robert J; Ziedonis, Douglas; Kim, Sun S; Gervais, André; Meltzer, Bruce; McKay, Colleen E; O'Loughlin, Jennifer; Okoli, Chizimuzo T C; Fortuna, Lisa R; Tremblay, Michèle
2010-05-01
The Diagnostic and Statistical Manual diagnostic criteria for nicotine dependence (DSM-ND) are based on the proposition that dependence is a syndrome that can be diagnosed only when a minimum of 3 of the 7 prescribed features are present. The DSM-ND criteria are an accepted research measure, but the validity of these criteria has not been subjected to a systematic evaluation. To systematically review evidence of validity and reliability for the DSM-ND criteria, a literature search was conducted of 16 national and international databases. Each article with original data was independently reviewed by two or more reviewers. In total, 380 potentially relevant articles were examined and 169 were reviewed in depth. The DSM-ND criteria have seen wide use in research settings, but their sensitivity and specificity are well below the accepted standards for clinical applications. Predictive validity is generally poor. The 7 DSM-ND criteria are regarded as having face validity, but no data support a 3-symptom ND diagnostic threshold, or a 4-symptom withdrawal syndrome threshold. The DSM incorrectly states that daily smoking is a prerequisite for withdrawal symptoms. The DSM shows poor to modest concurrence with all other measures of nicotine dependence, smoking behaviors, and biological measures of tobacco use. The data support the DSM-ND criteria as a valid measure of nicotine dependence severity for research applications. However, the data do not support the central premise of a 3-symptom diagnostic threshold, and no data establish that the DSM-ND criteria provide an accurate diagnosis of nicotine dependence. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Defining ADHD symptom persistence in adulthood: optimizing sensitivity and specificity.
Sibley, Margaret H; Swanson, James M; Arnold, L Eugene; Hechtman, Lily T; Owens, Elizabeth B; Stehli, Annamarie; Abikoff, Howard; Hinshaw, Stephen P; Molina, Brooke S G; Mitchell, John T; Jensen, Peter S; Howard, Andrea L; Lakes, Kimberley D; Pelham, William E
2017-06-01
Longitudinal studies of children diagnosed with ADHD report widely ranging ADHD persistence rates in adulthood (5-75%). This study documents how information source (parent vs. self-report), method (rating scale vs. interview), and symptom threshold (DSM vs. norm-based) influence reported ADHD persistence rates in adulthood. Five hundred seventy-nine children were diagnosed with DSM-IV ADHD-Combined Type at baseline (ages 7.0-9.9 years); 289 classmates served as a local normative comparison group (LNCG), 476 and 241 of whom, respectively, were evaluated in adulthood (mean age = 24.7 years). Parent and self-reports of symptoms and impairment on rating scales and structured interviews were used to investigate ADHD persistence in adulthood. Persistence rates were higher when using parent rather than self-reports, structured interviews rather than rating scales (for self-report but not parent report), and a norm-based (NB) threshold of 4 symptoms rather than DSM criteria. Receiver operating characteristic (ROC) analyses revealed that sensitivity and specificity were optimized by combining parent and self-reports on a rating scale and applying a NB threshold. The interview format optimizes young adult self-reporting when parent reports are not available. However, the combination of parent and self-reports from rating scales, using an 'or' rule and a NB threshold, optimized the balance between sensitivity and specificity. With this definition, 60% of the ADHD group demonstrated symptom persistence and 41% met both symptom and impairment criteria in adulthood. © 2016 Association for Child and Adolescent Mental Health.
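As a small illustration of the combination rule described above, the sketch below encodes the 'or' rule with a norm-based threshold of 4 symptoms; the symptom counts are invented examples, not study data:

```python
def persistent(parent_symptoms, self_symptoms, nb_threshold=4):
    """'Or' rule: either informant's rating-scale count at or above the
    norm-based threshold counts as symptom persistence."""
    return parent_symptoms >= nb_threshold or self_symptoms >= nb_threshold

print(persistent(5, 2))   # True: the parent report crosses the NB threshold
print(persistent(3, 3))   # False: neither informant reaches 4 symptoms
```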
van Loo, Hanna M; Schoevers, Robert A; Kendler, Kenneth S; de Jonge, Peter; Romeijn, Jan-Willem
2016-02-01
High rates of psychiatric comorbidity are a subject of debate: to what extent do they depend on classification choices such as diagnostic thresholds? This paper investigates the influence of different thresholds on rates of comorbidity between major depressive disorder (MDD) and generalized anxiety disorder (GAD). Point prevalence of comorbidity between MDD and GAD was measured in 74,092 subjects from the general population (LifeLines) according to Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) criteria. Comorbidity rates were compared for different thresholds by varying the number of necessary criteria from ≥ 1 to all nine symptoms for MDD, and from ≥ 1 to all seven symptoms for GAD. According to DSM thresholds, 0.86% had MDD only, 2.96% GAD only, and 1.14% both MDD and GAD (odds ratio (OR) 42.6). Lower thresholds for MDD led to higher rates of comorbidity (1.44% for ≥ 4 of nine MDD symptoms, OR 34.4), whereas lower thresholds for GAD hardly influenced comorbidity (1.16% for ≥ 3 of seven GAD symptoms, OR 38.8). Specific patterns in the distribution of symptoms within the population explained this finding: 37.3% of subjects with core criteria of MDD and GAD reported subthreshold MDD symptoms, whereas only 7.6% reported subthreshold GAD symptoms. Lower thresholds for MDD increased comorbidity with GAD, but not vice versa, owing to specific symptom patterns in the population. Generally, comorbidity rates result from both empirical symptom distributions and classification choices and cannot be reduced to either of these exclusively. This insight invites further research into the formation of disease concepts that allow for reliable predictions and targeted therapeutic interventions. © 2015 Wiley Periodicals, Inc.
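The threshold-variation exercise above is easy to reproduce in outline. A minimal sketch on synthetic symptom counts (a shared-liability toy model, not LifeLines data) shows how lowering one disorder's symptom threshold changes comorbid prevalence and the odds ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
latent = rng.normal(size=n)                                  # shared liability
mdd_sym = np.clip(rng.poisson(np.exp(0.6 * latent)), 0, 9)   # 0..9 MDD symptoms
gad_sym = np.clip(rng.poisson(np.exp(0.5 * latent)), 0, 7)   # 0..7 GAD symptoms

def comorbidity(mdd_thr, gad_thr):
    mdd, gad = mdd_sym >= mdd_thr, gad_sym >= gad_thr
    a, b = np.sum(mdd & gad), np.sum(mdd & ~gad)
    c, d = np.sum(~mdd & gad), np.sum(~mdd & ~gad)
    return np.mean(mdd & gad), (a * d) / (b * c)  # prevalence, odds ratio

for thr in (5, 4, 3):                             # progressively lower MDD threshold
    prev, oratio = comorbidity(thr, 4)
    print(f"MDD >= {thr} symptoms: comorbid = {prev:.2%}, OR = {oratio:.1f}")
```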
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides well-defined acceptance criteria, which are a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criteria developed following the threshold approach are not only a function of the Comparison Error, E (which is the difference between experiments and simulations) but also take into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate if the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in experiments, simulations, and the threshold or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and of evaluating medical devices. PMID:28594889
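The core of the threshold-based approach is the comparison of the error |S-D| against the margin to the safety threshold |Threshold-S|. A minimal sketch with synthetic values follows; the threshold value, sample sizes, and distributions are illustrative assumptions, not the study's data:

```python
# Minimal sketch of the threshold-based comparison: a one-sided two-sample
# t-test of whether the comparison error |S - D| is small relative to the
# margin |T - S| between simulation and safety threshold. Synthetic data.
import numpy as np
from scipy import stats

T = 600.0                                # illustrative hemolysis threshold (Pa)
rng = np.random.default_rng(2)
S = rng.normal(350.0, 20.0, size=12)     # simulated shear stress at probe points
D = S + rng.normal(0.0, 15.0, size=12)   # inter-laboratory measurements

err = np.abs(S - D)                      # comparison error E
margin = np.abs(T - S)                   # distance of the simulation to threshold

# One-sided Welch t-test: is mean(|S-D|) less than mean(|T-S|)?
t, p = stats.ttest_ind(err, margin, equal_var=False, alternative="less")
print(f"t={t:.2f}, p={p:.4f} -> " +
      ("sufficiently validated for the COU" if p < 0.05 else "not validated"))
```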
Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex
2016-01-01
Objective Assays for classifying HIV infections as ‘recent’ or ‘non-recent’ for incidence surveillance fail to simultaneously achieve large mean durations of ‘recent’ infection (MDRIs) and low ‘false-recent’ rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning thresholds used to dichotomize quantitative measures, is explored. Design The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods MDRIs were estimated using regression, and FRRs as observed ‘recent’ proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in scenarios considered. Conclusions The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561
NASA Astrophysics Data System (ADS)
Wolfs, Cecile J. A.; Brás, Mariana G.; Schyns, Lotte E. J. R.; Nijsten, Sebastiaan M. J. J. G.; van Elmpt, Wouter; Scheib, Stefan G.; Baltes, Christof; Podesta, Mark; Verhaegen, Frank
2017-08-01
The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D95% (ΔD95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that there is not one combination of portal dosimetry method, gamma criteria and gamma fail rate threshold that can detect all simulated anatomical changes. This work shows that it will be more beneficial to relate portal dosimetry and DVH analysis on the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold should be used to optimally detect certain types of anatomical changes.
Wang, Iris Z.; Kumaraswamy, Lalith K.; Podgorsak, Matthew B.
2016-01-01
Background This study reports 1) the sensitivity of the intensity modulated radiation therapy (IMRT) QA method for clinical volumetric modulated arc therapy (VMAT) plans with multi-leaf collimator (MLC) leaf errors that will not trigger an MLC interlock during beam delivery; and 2) the effect of non-beam-hold MLC leaf errors on the quality of VMAT plan dose delivery. Materials and methods Eleven VMAT plans were selected and modified using in-house developed software. For each control point of a VMAT arc, MLC leaves with the highest speed (1.87-1.95 cm/s) were set to move at the maximal allowable speed (2.3 cm/s), which resulted in a leaf position difference of less than 2 mm. The modified plans were considered the ‘standard’ plans, and the original plans were treated as the ‘slowing MLC’ plans, simulating ‘standard’ plans with leaves moving at relatively lower speed. The measurement of each ‘slowing MLC’ plan using MapCHECK®2 was compared with the calculated planar dose of the ‘standard’ plan with respect to absolute dose Van Dyk distance-to-agreement (DTA) comparisons using 3%/3 mm and 2%/2 mm criteria. Results All ‘slowing MLC’ plans passed the 90% pass rate threshold using the 3%/3 mm criteria, while one brain and three anal VMAT cases were below 90% with the 2%/2 mm criteria. For ten out of eleven cases, DVH comparisons between ‘standard’ and ‘slowing MLC’ plans demonstrated minimal dosimetric changes in targets and organs-at-risk. Conclusions For highly modulated VMAT plans, the pass rate threshold (90%) using the 3%/3 mm criteria is not sensitive in detecting MLC leaf errors that will not trigger the MLC leaf interlock. However, the consequential effects of non-beam-hold MLC errors on target and OAR doses are negligible, which supports the reliability of the current patient-specific IMRT quality assurance (QA) method for VMAT plans. PMID:27069458
A novel approach to estimation of the time to biomarker threshold: applications to HIV.
Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc
2016-11-01
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have fewer than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
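The paper derives a closed-form expression for the expected time to threshold; as a rough stand-in, the Monte Carlo sketch below estimates the expected time to two consecutive sub-threshold measurements for a single patient-specific linear trajectory with Gaussian measurement error. All parameter values are invented for illustration:

```python
import numpy as np

def expected_time_to_threshold(b0, b1, sigma, c, visits, n_sim=100_000, rng=None):
    """b0, b1: patient-specific intercept and slope (e.g., on a square-root
    CD4 scale); sigma: residual SD; c: threshold; visits: visit times."""
    rng = rng or np.random.default_rng(3)
    mean = b0 + b1 * visits                          # error-free trajectory
    obs = mean + sigma * rng.standard_normal((n_sim, visits.size))
    below = obs < c
    two_in_a_row = below[:, :-1] & below[:, 1:]      # consecutive low pairs
    hit = two_in_a_row.any(axis=1)
    first = np.argmax(two_in_a_row, axis=1) + 1      # index of the 2nd low count
    times = visits[first[hit]]
    return times.mean(), hit.mean()                  # E[time | reached], P(reached)

visits = np.arange(0.0, 10.5, 0.5)                   # semi-annual visits, in years
t_mean, p_hit = expected_time_to_threshold(25.0, -1.2, 2.0, 14.1, visits)
print(f"expected time = {t_mean:.2f} y (threshold reached in {p_hit:.1%} of runs)")
```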
Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.
Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles
2015-11-01
Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
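A much-simplified sketch of the thresholding stage described above, with scale selection omitted: the LoG-filtered image is thresholded with a cutoff derived from the user-specified PFA and local window statistics. The window size, scale, and PFA below are illustrative, and this is an assumed reading of the method, not the authors' code:

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def detect_spots(img, scale=2.0, pfa=1e-3, win=31):
    """Threshold -LoG(img) with a locally adapted cutoff for a given PFA."""
    log = -ndimage.gaussian_laplace(img.astype(float), sigma=scale)  # spots -> positive
    local_mean = ndimage.uniform_filter(log, size=win)
    local_sq = ndimage.uniform_filter(log ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    k = norm.isf(pfa)                     # Gaussian quantile for the false-alarm rate
    return log > local_mean + k * local_std

rng = np.random.default_rng(4)
img = rng.normal(100.0, 5.0, (256, 256))  # flat noisy background
img[100:104, 100:104] += 40.0             # one synthetic bright spot
print("detected pixels:", int(detect_spots(img).sum()))
```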
A conditional probability analysis (CPA) approach has been developed for identifying biological thresholds of impact for use in the development of geographic-specific water quality criteria for protection of aquatic life. This approach expresses the threshold as the likelihood ...
McKenzie, Elizabeth M.; Balter, Peter A.; Stingo, Francesco C.; Jones, Jimmy; Followill, David S.; Kry, Stephen F.
2014-01-01
Purpose: The authors investigated the performance of several patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (QA) dosimeters in terms of their ability to correctly identify dosimetrically acceptable and unacceptable IMRT patient plans, as determined by an in-house-designed multiple ion chamber phantom used as the gold standard. A further goal was to examine optimal threshold criteria that were consistent and based on the same criteria among the various dosimeters. Methods: The authors used receiver operating characteristic (ROC) curves to determine the sensitivity and specificity of (1) a 2D diode array undergoing anterior irradiation with field-by-field evaluation, (2) a 2D diode array undergoing anterior irradiation with composite evaluation, (3) a 2D diode array using planned irradiation angles with composite evaluation, (4) a helical diode array, (5) radiographic film, and (6) an ion chamber. This was done with a variety of evaluation criteria for a set of 15 dosimetrically unacceptable and 9 acceptable clinical IMRT patient plans, where acceptability was defined on the basis of multiple ion chamber measurements using independent ion chambers and a phantom. The area under the curve (AUC) on the ROC curves was used to compare dosimeter performance across all thresholds. Optimal threshold values were obtained from the ROC curves while incorporating considerations for cost and prevalence of unacceptable plans. Results: Using common clinical acceptance thresholds, most devices performed very poorly in terms of identifying unacceptable plans. Grouping the detector performance based on AUC showed two significantly different groups. The ion chamber, radiographic film, helical diode array, and anterior-delivered composite 2D diode array were in the better-performing group, whereas the anterior-delivered field-by-field and planned gantry angle delivery using the 2D diode array performed less well. Additionally, based on the AUCs, there was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device, and while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. Conclusions: IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used, as common clinical thresholds typically performed very poorly at identifying unacceptable plans. PMID:25471949
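The ROC machinery used in this study is standard. A minimal sketch on synthetic per-plan pass rates (15 unacceptable and 9 acceptable plans, as in the study, but with invented scores) shows the AUC computation and one simple optimal-cutoff choice; note that the study additionally weighted cost and prevalence, which Youden's J ignores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(5)
unacceptable = np.r_[np.ones(15), np.zeros(9)].astype(bool)   # 15 bad, 9 good plans
# Hypothetical "percent of pixels passing gamma" per plan for one device:
pass_rate = np.where(unacceptable,
                     rng.normal(88.0, 5.0, 24), rng.normal(95.0, 3.0, 24))

# A lower pass rate should flag an unacceptable plan, so score with -pass_rate.
auc = roc_auc_score(unacceptable, -pass_rate)
fpr, tpr, thr = roc_curve(unacceptable, -pass_rate)
best = thr[np.argmax(tpr - fpr)]          # Youden's J; ignores cost and prevalence
print(f"AUC = {auc:.2f}; flag plans with pass rate below {-best:.1f}%")
```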
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute percent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (P < 0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute percent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. © 2017, American College of Rheumatology.
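The scoring logic (sum weighted contributions of absolute percent change across the core set measures, then grade against the 20/40/60 thresholds) can be sketched as below. The weights and the linear contribution rule here are invented placeholders; the published criteria assign banded point values per core set measure:

```python
# Invented placeholder weights - NOT the published relative weights.
HYPOTHETICAL_WEIGHTS = {
    "physician_global": 20, "patient_global": 15, "extramuscular_global": 15,
    "muscle_strength": 25, "haq": 15, "muscle_enzymes": 10,
}

def improvement_score(pct_change):
    """pct_change: absolute percent improvement per core set measure.
    Linear contribution capped at 100% - a simplification for illustration."""
    return sum(w * min(max(pct_change[m], 0.0), 100.0) / 100.0
               for m, w in HYPOTHETICAL_WEIGHTS.items())

def grade(score):
    if score >= 60: return "major improvement"
    if score >= 40: return "moderate improvement"
    if score >= 20: return "minimal improvement"
    return "no response"

example = {"physician_global": 50, "patient_global": 30, "extramuscular_global": 20,
           "muscle_strength": 60, "haq": 25, "muscle_enzymes": 10}
score = improvement_score(example)
print(f"total improvement score = {score:.0f} -> {grade(score)}")
```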
Ferrara, A; Weiss, N S; Hedderson, M M; Quesenberry, C P; Selby, J V; Ergas, I J; Peng, T; Escobar, G J; Pettitt, D J; Sacks, D A
2007-02-01
Gestational diabetes mellitus (GDM) is a risk factor for perinatal complications. In several countries, the criteria for the diagnosis of GDM have been in flux, the American Diabetes Association (ADA) thresholds recommended in 2000 being lower than those of the National Diabetes Data Group (NDDG) that had been in use since 1979. We sought to determine the extent to which infants of women meeting only the ADA criteria for GDM are at increased risk of neonatal complications. In a multiethnic cohort of 45,245 women who did not meet the NDDG criteria and were not treated for GDM, we conducted nested case-control studies of three complications of GDM that occurred in their infants: macrosomia (birthweight >4,500 g, n = 494); hypoglycaemia (plasma glucose <2.2 mmol/l, n = 488); and hyperbilirubinaemia (serum bilirubin ≥342 micromol/l (20 mg/dl), n = 578). We compared prenatal glucose levels of the mothers of these infants and the mothers of 884 control infants. Women with GDM by ADA criteria only (two or more glucose values exceeding the threshold) had an increased risk of having an infant with macrosomia (odds ratio (OR) = 3.40, 95% CI = 1.55-7.43), hypoglycaemia (OR = 2.61, 95% CI = 0.99-6.92) or hyperbilirubinaemia (OR = 2.22, 95% CI = 0.98-5.04). Glucose levels 1 h after the 100-g glucose challenge that exceeded the ADA threshold were particularly strongly associated with each complication. These results lend support to the ADA recommendations and highlight the importance of the 1-h glucose measurement in a diagnostic test for GDM.
Lujan, Marla E; Jarrett, Brittany Y; Brooks, Eric D; Reines, Jonathan K; Peppin, Andrew K; Muhn, Narry; Haider, Ehsan; Pierson, Roger A; Chizen, Donna R
2013-05-01
Do the ultrasonographic criteria for polycystic ovaries supported by the 2003 Rotterdam consensus adequately discriminate between the normal and polycystic ovary syndrome (PCOS) condition in light of recent advancements in imaging technology and reliable methods for estimating follicle populations in PCOS? Using newer ultrasound technology and a reliable grid system approach to count follicles, we concluded that a substantially higher threshold of follicle counts throughout the entire ovary (FNPO), 26 versus 12 follicles, is required to distinguish between women with PCOS and healthy women from the general population. The Rotterdam consensus defined the polycystic ovary as having 12 or more follicles measuring between 2 and 9 mm (FNPO) and/or an ovarian volume (OV) >10 cm³. Since their initial proposal in 2003, a heightened prevalence of polycystic ovaries has been described in healthy women with regular menstrual cycles, which has questioned the accuracy of these criteria and marginalized the specificity of polycystic ovaries as a diagnostic criterion for PCOS. A diagnostic test study was performed using cross-sectional data, collected from 2006 to 2011, from 168 women prospectively evaluated by transvaginal ultrasonography. Receiver operating characteristic (ROC) curve analyses were performed to determine the appropriate diagnostic thresholds for: (i) FNPO, (ii) follicle counts in a single cross section (FNPS) and (iii) OV. The levels of intra- and inter-observer reliability when five observers used the proposed criteria on 100 ultrasound cases were also determined. Ninety-eight women diagnosed with PCOS by the National Institutes of Health criteria as having both oligo-amenorrhea and hyperandrogenism, and 70 healthy female volunteers recruited from the general population, were evaluated by transvaginal ultrasonography at the Royal University Hospital within the Department of Obstetrics, Gynecology and Reproductive Sciences, University of Saskatchewan (Saskatoon, SK, Canada) and in the Division of Nutritional Sciences' Human Metabolic Research Unit, Cornell University (Ithaca, NY, USA). Diagnostic potential for PCOS was highest for FNPO (0.969), followed by FNPS (0.880) and OV (0.873), as judged by the area under the ROC curve. An FNPO threshold of 26 follicles had the best compromise between sensitivity (85%) and specificity (94%) when discriminating between controls and PCOS. Similarly, an FNPS threshold of nine follicles had a 69% sensitivity and 90% specificity, and an OV of 10 cm³ had an 81% sensitivity and 84% specificity. Levels of intra-observer reliability were 0.81, 0.80 and 0.86 when assessing FNPO, FNPS and OV, respectively. Inter-observer reliability was 0.71, 0.72 and 0.82, respectively. Thresholds proposed by this study should be limited to use in women aged between 18 and 35 years. Polycystic ovarian morphology has excellent diagnostic potential for detecting PCOS. FNPO had better diagnostic potential and yielded greater diagnostic confidence compared with assessments of FNPS or OV. Whenever possible, images throughout the entire ovary should be collected for the ultrasonographic evaluation of PCOS. This study was funded by Cornell University and fellowship awards from the Saskatchewan Health Research Foundation and Canadian Institutes of Health Research. The authors have no conflict of interests to disclose.
When Less Is More: How Fewer Diagnostic Criteria Can Indicate Greater Severity
ERIC Educational Resources Information Center
Cooper, Luke D.; Balsis, Steve
2009-01-01
For diagnosing many mental disorders, the current "Diagnostic and Statistical Manual of Mental Disorders" ("DSM") system weights each diagnostic criterion equally: each criterion counts the same toward meeting the diagnostic threshold. Research on the diagnostic efficiency of criteria, however, reveals that some diagnostic criteria are more useful…
Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.
2015-01-01
Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
Quantitative traits for the tail suspension test: automation, optimization, and BXD RI mapping.
Lad, Heena V; Liu, Lin; Payá-Cano, José L; Fernandes, Cathy; Schalkwyk, Leonard C
2007-07-01
Immobility in the tail suspension test (TST) is considered a model of despair in a stressful situation, and acute treatment with antidepressants reduces immobility. Inbred strains of mouse exhibit widely differing baseline levels of immobility in the TST and several quantitative trait loci (QTLs) have been nominated. The labor of manual scoring and various scoring criteria make obtaining robust data and comparisons across different laboratories problematic. Several studies have validated strain gauge and video analysis methods by comparison with manual scoring. We set out to find objective criteria for automated scoring parameters that maximize the biological information obtained, using a video tracking system on tapes of tail suspension tests of 24 lines of the BXD recombinant inbred panel and the progenitor strains C57BL/6J and DBA/2J. The maximum genetic effect size is captured using the highest time resolution and a low mobility threshold. Dissecting the trait further by comparing genetic association of multiple measures reveals good evidence for loci involved in immobility on chromosomes 4 and 15. These are best seen when using a high threshold for immobility, despite the overall better heritability at the lower threshold. A second trial of the test has greater duration of immobility and a completely different genetic profile. Frequency of mobility is also an independent phenotype, with a distal chromosome 1 locus.
Elizabeth A. Freeman; Gretchen G. Moisen
2008-01-01
Modelling techniques used in binary classification problems often result in a predicted probability surface, which is then translated into a presence - absence classification map. However, this translation requires a (possibly subjective) choice of threshold above which the variable of interest is predicted to be present. The selection of this threshold value can have...
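Two simple, commonly used criteria for choosing such a threshold are sketched below on synthetic data: matching predicted prevalence to observed prevalence, and equalizing sensitivity and specificity. These are illustrative choices, not necessarily the ones evaluated in this paper:

```python
import numpy as np

rng = np.random.default_rng(6)
present = rng.random(5000) < 0.3                     # true presence (prevalence 30%)
prob = np.clip(0.3 + 0.4 * (present - 0.3) + rng.normal(0.0, 0.2, 5000), 0.0, 1.0)

# Criterion A: predicted prevalence equals observed prevalence.
thr_prev = np.quantile(prob, 1.0 - present.mean())

# Criterion B: sensitivity as close as possible to specificity.
cand = np.linspace(0.01, 0.99, 99)
sens = np.array([(prob[present] >= t).mean() for t in cand])
spec = np.array([(prob[~present] < t).mean() for t in cand])
thr_ss = cand[np.argmin(np.abs(sens - spec))]
print(f"prevalence-matching: {thr_prev:.2f}; sens = spec: {thr_ss:.2f}")
```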
Prevalence of neuroleptic-induced movement disorders in chronic schizophrenia inpatients.
Janno, Sven; Holi, Matti; Tuisku, Katinka; Wahlbeck, Kristian
2004-01-01
Since most of the world's schizophrenia patients are treated with conventional antipsychotics, the authors evaluated various methods for establishing the prevalence of neuroleptic-induced movement disorders in these patients. DSM-IV criteria and established score thresholds on a movement disorder rating scale were used to identify cases of neuroleptic-induced movement disorder in a representative Estonian sample of 99 chronic institutionalized schizophrenia patients, 18-65 years old, treated with conventional neuroleptics (79.8%) or clozapine (20.2%). Neuroleptic-induced movement disorders according to DSM-IV criteria were found in 61.6% of the group: 31.3% had neuroleptic-induced akathisia, 23.2% had neuroleptic-induced parkinsonism, and 32.3% had neuroleptic-induced tardive dyskinesia. Prevalence rates for akathisia and tardive dyskinesia were similar when either DSM-IV criteria or rating scale scores were used, but the prevalence rate for parkinsonism was much lower per DSM-IV criteria than according to rating scale score. Nearly two-thirds of chronic schizophrenia patients suffered from a neuroleptic-induced movement disorder. Globally, extrapyramidal adverse effects still impose a huge burden on the majority of neuroleptic-treated individuals with schizophrenia. The discrepancy between the standard identification methods for neuroleptic-induced movement disorder indicates the need for further research.
Aerospace operations criteria for mercury thresholds
NASA Technical Reports Server (NTRS)
Katz, S.
1979-01-01
The hazards anticipated from a large-scale mercury spill during a possible failure in the preflight and early flight stages of the Space Shuttle were studied. Toxicity thresholds were investigated, as well as other consequences of mercury interacting with the environment. Three sites of mercury spill were investigated: land, water, and atmosphere. A laboratory study of interactions between mercury vapor and ozone in a low pressure, high ultraviolet radiation environment approximated the conditions of a mercury vapor release in the ozone layer region of the stratosphere. Clear evidence of an interaction leading to the destruction of ozone by conversion to oxygen was obtained. The impact of a spill on the Earth's environment and methods of early detection of a developing hazard were of primary concern in the study.
Loziuk, Philip L.; Sederoff, Ronald R.; Chiang, Vincent L.; Muddiman, David C.
2014-01-01
Quantitative mass spectrometry has become central to the field of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation have been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3-year period, suggesting that these values should be assessed as close as possible to the time at which data are collected for quantification. PMID:25154770
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pater, P
Purpose: To analyse the sensitivity of the creation of strand breaks (SB) to the threshold energy (Eth) and thresholding method, and to quantify the impact of clustering conditions on single strand break (SSB) and double strand break (DSB) yields. Methods: Monte Carlo simulations using Geant4-DNA were conducted for electron tracks of 280 eV to 220 keV in a geometrical DNA model composed of nucleosomes of 396 phospho-diester groups (PDGs) each. A strand break was created inside a PDG when the sum of all energy deposits (method 1) or energy transfers (method 2) was higher than Eth, or when at least one interaction deposited (method 3) or transferred (method 4) an energy higher than Eth. SBs were then clustered into SSBs and DSBs using clustering scoring criteria from the literature and compared to our own. Results: The total number of SBs decreases as Eth is increased. In addition, thresholding on the energy transfers (methods 2 and 4) produces a higher SB count than thresholding on energy deposits (methods 1 and 3). Method 2 produces a step-like function and should be avoided when attempting to optimize Eth. When SBs are grouped into damage patterns, clustering conditions can underestimate SSBs by up to 18% and overestimate DSBs by up to 12% compared to our own implementation. Conclusion: We show that two often underreported simulation parameters have a non-negligible effect on overall DNA damage yields. First, more SBs are counted when using energy transfers to the PDG rather than energy deposits. Also, SBs grouped according to different clustering conditions can influence reported SSB and DSB yields by as much as 20%. Careful handling of these parameters is required when trying to compare DNA damage yields from different authors. Research funding from the governments of Canada and Quebec. PP acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290)
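The four thresholding methods, as described in the abstract, differ only in whether per-PDG energies are summed or taken singly, and in whether energy deposits or energy transfers are used. A minimal sketch follows; the data layout, the example energies, and the Eth value are illustrative assumptions:

```python
def strand_break(deposits, transfers, eth, method):
    """Return True if a PDG registers a strand break under the given method.
    `deposits` and `transfers` are per-interaction energies (eV) in one PDG."""
    if method == 1:
        return sum(deposits) > eth                    # summed energy deposits
    if method == 2:
        return sum(transfers) > eth                   # summed energy transfers
    if method == 3:
        return any(e > eth for e in deposits)         # single deposit above Eth
    if method == 4:
        return any(e > eth for e in transfers)        # single transfer above Eth
    raise ValueError(method)

deposits, transfers = [5.1, 9.3, 4.0], [6.0, 11.2, 4.5]   # illustrative eV values
for m in (1, 2, 3, 4):
    print(f"method {m}: SB = {strand_break(deposits, transfers, 17.5, m)}")
```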
El Hanandeh, Ali; El-Zein, Abbas
2010-01-01
A modified version of the multi-criteria decision aid ELECTRE III has been developed to account for uncertainty in criteria weightings and threshold values. The new procedure, called ELECTRE-SS, modifies the exploitation phase in ELECTRE III through a new definition of the pre-order and the introduction of a ranking index (RI). The new approach accommodates cases where incomplete or uncertain preference data are present. The method is applied to the selection of a management strategy for the biodegradable fraction of the municipal solid waste of Sydney. Ten alternatives are compared against 11 criteria. The results show that anaerobic digestion (AD) and composting of paper are less environmentally sound options than recycling. AD is likely to out-perform incineration where a market for heating does not exist. Moreover, landfilling can be a sound alternative when considering overall performance and conditions of uncertainty.
Masterson, Elizabeth A; Sweeney, Marie Haring; Deddens, James A; Themann, Christa L; Wall, David K
2014-04-01
The purpose of this study was to compare the prevalence of workers with National Institute for Occupational Safety and Health significant threshold shifts (NSTS), Occupational Safety and Health Administration standard threshold shifts (OSTS), and OSTS with age correction (OSTS-A), by industry, using North American Industry Classification System codes. From 2001 to 2010, worker audiograms were examined. Prevalence and adjusted prevalence ratios for NSTS were estimated by industry, and NSTS, OSTS, and OSTS-A prevalences were compared by industry. Twenty percent of workers had an NSTS, 14% had an OSTS, and 6% had an OSTS-A. For most industries, the OSTS and OSTS-A criteria identified 28% to 36% and 66% to 74% fewer workers than the NSTS criteria, respectively. Use of the NSTS criteria, which allow for earlier detection of shifts in hearing, is recommended for improved prevention of occupational hearing loss.
Underwater psychophysical audiogram of a young male California sea lion (Zalophus californianus).
Mulsow, Jason; Houser, Dorian S; Finneran, James J
2012-05-01
Auditory evoked potential (AEP) data are commonly obtained in air while sea lions are under gas anesthesia; a procedure that precludes the measurement of underwater hearing sensitivity. This is a substantial limitation considering the importance of underwater hearing data in designing criteria aimed at mitigating the effects of anthropogenic noise exposure. To determine if some aspects of underwater hearing sensitivity can be predicted using rapid aerial AEP methods, this study measured underwater psychophysical thresholds for a young male California sea lion (Zalophus californianus) for which previously published aerial AEP thresholds exist. Underwater thresholds were measured in an aboveground pool at frequencies between 1 and 38 kHz. The underwater audiogram was very similar to those previously published for California sea lions, suggesting that the current and previously obtained psychophysical data are representative for this species. The psychophysical and previously measured AEP audiograms were most similar in terms of high-frequency hearing limit (HFHL), although the underwater HFHL was sharper and occurred at a higher frequency. Aerial AEP methods are useful for predicting reductions in the HFHL that are potentially independent of the testing medium, such as those due to age-related sensorineural hearing loss.
Newall, A T; Jit, M; Hutubessy, R
2014-06-01
The World Health Organization's CHOosing Interventions that are Cost Effective (WHO-CHOICE) thresholds for averting a disability-adjusted life-year of one to three times per capita income have been widely cited and used as a measure of cost effectiveness in evaluations of vaccination for low- and middle-income countries (LMICs). These thresholds were based upon criteria set out by the WHO Commission on Macroeconomics and Health, which reflected the potential economic returns of interventions. The CHOICE project sought to evaluate a variety of health interventions at a subregional level and classify them into broad categories to help assist decision makers, but the utility of the thresholds for within-country decision making for individual interventions (given budgetary constraints) has not been adequately explored. To examine whether the 'WHO-CHOICE thresholds' reflect funding decisions, we examined the results of two recent reviews of cost-effectiveness analyses of human papillomavirus and rotavirus vaccination in LMICs, and we assessed whether the results of these studies were reflected in funding decisions for these vaccination programmes. We found that in many cases, programmes that were deemed cost effective were not subsequently implemented in the country. We consider the implications of this finding, the advantages and disadvantages of alternative methods to estimate thresholds, and how cost perspectives and the funders of healthcare may impact on these choices.
NASA Technical Reports Server (NTRS)
Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.
2015-01-01
The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces and shield the growing crack tip resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used to life a component, the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmental assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations for the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3 and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing clear understanding of appropriate testing requirements. These simulations investigate the effect of K-gradient, the maximum value of stress-intensity factor applied, and material type. A material independent term is developed to guide in the selection of appropriate test conditions for most engineering alloys. With the use of such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, near-threshold data can be acquired in days instead of weeks without having to establish testing criteria through trial and error, and these data can be acquired for most engineering materials, even those that are produced in relatively small product forms.
Trace, Sara E; Thornton, Laura M; Root, Tammy L; Mazzeo, Suzanne E; Lichtenstein, Paul; Pedersen, Nancy L; Bulik, Cynthia M
2012-05-01
We assessed the impact of reducing the binge eating frequency and duration thresholds on the diagnostic criteria for bulimia nervosa (BN) and binge eating disorder (BED). We estimated the lifetime population prevalence of BN and BED in 13,295 female twins from the Swedish Twin study of Adults: Genes and Environment employing a range of frequency and duration thresholds. External validation (risk to cotwin) was used to investigate empirical evidence for an optimal binge eating frequency threshold. The lifetime prevalence estimates of BN and BED increased linearly as the frequency criterion decreased. As the required duration increased, the prevalence of BED decreased slightly. Discontinuity in cotwin risk was observed in BN between at least four times per month and at least five times per month. This model could not be fit for BED. The proposed changes to the DSM-5 binge eating frequency and duration criteria would allow for better detection of binge eating pathology without resulting in a markedly higher lifetime prevalence of BN or BED. Copyright © 2011 Wiley Periodicals, Inc.
Qualitative criteria and thresholds for low noise asphalt mixture design
NASA Astrophysics Data System (ADS)
Vaitkus, A.; Andriejauskas, T.; Gražulytė, J.; Šernas, O.; Vorobjovas, V.; Kleizienė, R.
2018-05-01
Low noise asphalt pavements are a cost-efficient and cost-effective alternative for road traffic noise mitigation compared with noise barriers, façade insulation and other known noise mitigation measures. However, the design of low noise asphalt mixtures strongly depends on the climate and traffic peculiarities of different regions. Severe climate regions face problems related to the short durability of low noise asphalt mixtures, owing to the considerable negative impact of harsh climate conditions (freeze-thaw, large temperature fluctuations, hydrological behaviour, etc.) and traffic (traffic loads, traffic volumes, studded tyres, etc.). Thus, there is a need to find a balance between mechanical and acoustical durability, as well as to ensure adequate pavement skid resistance for road safety purposes. This paper presents an analysis of the qualitative criteria and design parameter thresholds of low noise asphalt mixtures. Different asphalt mixture composition materials (grading, aggregate, binder, additives, etc.) and relevant asphalt layer properties (air void content, texture, evenness, degree of compaction, etc.) were investigated and assessed according to their suitability for durable and effective low noise pavements. The paper concludes with an overview of requirements, qualitative criteria and thresholds for low noise asphalt mixture design for severe climate regions.
Integration of ecological-biological thresholds in conservation decision making.
Mavrommati, Georgia; Bithas, Kostas; Borsuk, Mark E; Howarth, Richard B
2016-12-01
In the Anthropocene, coupled human and natural systems dominate and only a few natural systems remain relatively unaffected by human influence. On the one hand, conservation criteria based on areas of minimal human impact are not relevant to much of the biosphere. On the other hand, conservation criteria based on economic factors are problematic with respect to their ability to arrive at operational indicators of well-being that can be applied in practice over multiple generations. Coupled human and natural systems are subject to economic development which, under current management structures, tends to affect natural systems and cross planetary boundaries. Hence, designing and applying conservation criteria applicable in real-world systems where human and natural systems need to interact and sustainably coexist is essential. By recognizing the criticality of satisfying basic needs as well as the great uncertainty over the needs and preferences of future generations, we sought to incorporate conservation criteria based on minimal human impact into economic evaluation. These criteria require the conservation of environmental conditions such that the opportunity for intergenerational welfare optimization is maintained. Toward this end, we propose the integration of ecological-biological thresholds into decision making and use as an example the planetary-boundaries approach. Both conservation scientists and economists must be involved in defining operational ecological-biological thresholds that can be incorporated into economic thinking and reflect the objectives of conservation, sustainability, and intergenerational welfare optimization. © 2016 Society for Conservation Biology.
Diagnostic criteria for vascular cognitive disorders: a VASCOG statement
Sachdev, Perminder; Kalaria, Raj; O’Brien, John; Skoog, Ingmar; Alladi, Suvarna; Black, Sandra E; Blacker, Deborah; Blazer, Dan; Chen, Christopher; Chui, Helena; Ganguli, Mary; Jellinger, Kurt; Jeste, Dilip V.; Pasquier, Florence; Paulsen, Jane; Prins, Niels; Rockwood, Kenneth; Roman, Gustavo; Scheltens, Philip
2014-01-01
Background Several sets of diagnostic criteria have been published for vascular dementia (VaD) since the 1960s. The continuing ambiguity in VaD definition warrants a critical re-examination. Methods Participants at a special symposium of the International Society for Vascular Behavioral and Cognitive Disorders (VASCOG) in 2009 critiqued the current criteria. They drafted a proposal for a new set of criteria, later reviewed through multiple drafts by the group, including additional experts and the members of the Neurocognitive Disorders Work Group of the DSM-5 Task Force. Results Cognitive disorders of vascular etiology are a heterogeneous group of disorders with diverse pathologies and clinical manifestations, discussed broadly under the rubric of vascular cognitive disorders (VCD). The continuum of vascular cognitive impairment is recognized by the categories of Mild Vascular Cognitive Disorder, and Vascular Dementia or Major Vascular Cognitive Disorder. Diagnostic thresholds are defined. Clinical and neuroimaging criteria are proposed for establishing vascular etiology. Subtypes of VCD are described, and the frequent co-occurrence of Alzheimer’s disease pathology emphasized. Conclusions The proposed criteria for VCD provide a coherent approach to the diagnosis of this diverse group of disorders, with a view to stimulating clinical and pathological validation studies. These criteria can be harmonized with the DSM-5 criteria such that an international consensus on the criteria for VCD may be achieved. PMID:24632990
Dissolved Oxygen Thresholds to Protect Designated Aquatic Life Uses in Estuaries
Most if not all coastal states in the US have established numeric thresholds for dissolved oxygen (DO) to protect aquatic life in estuaries. Some are in the process, or have recently completed, revisions of their criteria based on newer science. Often, a toxicological approach ...
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems (OES) involves selecting technical configurations that, under given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an OES for any purpose is its threshold detection capability, upon which the required functional quality of the device or system depends. The selection criteria and optimization methods must therefore be subordinated to the goal of better detectability. The problem generally reduces to the optimal selection of expected (predetermined) signals under predetermined observation conditions. Thus, the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute per cent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (p<0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute per cent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. Published by the BMJ Publishing Group Limited.
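For concreteness, the scoring logic can be sketched as follows. The relative weights and the per-measure credit ramp are placeholder assumptions (the published criteria derive weights from the conjoint analysis), while the ≥20/≥40/≥60 bands are those reported above.

```python
# Hedged sketch of a conjoint-analysis-style total improvement score (0-100)
# summed over the 6 myositis core set measures. Weights and the linear credit
# ramp are placeholder assumptions, NOT the published values.
CORE_SET_WEIGHTS = {          # hypothetical relative weights, summing to 100
    "physician_global": 20, "patient_global": 15, "extramuscular_global": 15,
    "muscle_strength": 25, "haq": 15, "muscle_enzymes": 10,
}

def improvement_credit(pct_change: float) -> float:
    """Map absolute % improvement in one measure to a 0-1 credit (assumed ramp)."""
    return max(0.0, min(1.0, pct_change / 100.0))

def total_improvement_score(pct_changes: dict) -> float:
    return sum(w * improvement_credit(pct_changes.get(m, 0.0))
               for m, w in CORE_SET_WEIGHTS.items())

def classify(tis: float) -> str:
    if tis >= 60: return "major improvement"
    if tis >= 40: return "moderate improvement"
    if tis >= 20: return "minimal improvement"
    return "no qualifying improvement"

tis = total_improvement_score({"muscle_strength": 80, "physician_global": 50})
print(tis, classify(tis))   # 30.0 -> minimal improvement
```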
Exploring the utility of real-time hydrologic data for landslide early warning
NASA Astrophysics Data System (ADS)
Mirus, B. B.; Smith, J. B.; Becker, R.; Baum, R. L.; Koss, E.
2017-12-01
Early warning systems can provide critical information for operations managers, emergency planners, and the public to help reduce fatalities, injuries, and economic losses due to landsliding. For shallow, rainfall-triggered landslides, early warning systems typically use empirical rainfall thresholds, whereas the actual triggering mechanism involves the non-linear hydrological processes of infiltration, evapotranspiration, and hillslope drainage that are more difficult to quantify. Because hydrologic monitoring has demonstrated that shallow landslides are often preceded by a rise in soil moisture and pore-water pressures, some researchers have developed early warning criteria that attempt to account for these antecedent wetness conditions through relatively simplistic storage metrics or soil-water balance modeling. Here we explore the potential for directly incorporating antecedent wetness into landslide early warning criteria using recent landslide inventories and in-situ hydrologic monitoring near Seattle, WA, and Portland, OR. We use continuous, near-real-time telemetered soil moisture and pore-water pressure data measured within a few landslide-prone hillslopes in combination with measured and forecasted rainfall totals to inform easy-to-interpret landslide initiation thresholds. Objective evaluation using somewhat limited landslide inventories suggests that our new thresholds based on subsurface hydrologic monitoring and rainfall data compare favorably to the capabilities of existing rainfall-only thresholds for the Seattle area, whereas there are no established rainfall thresholds for the Portland area. This preliminary investigation provides a proof-of-concept for the utility of developing landslide early warning criteria in two different geologic settings using real-time subsurface hydrologic measurements from in-situ instrumentation.
Harrison, Linda; Melvin, Ann; Fiscus, Susan; Saidi, Yacine; Nastouli, Eleni; Harper, Lynda; Compagnucci, Alexandra; Babiker, Abdel; McKinney, Ross; Gibb, Diana; Tudor-Williams, Gareth
2015-09-01
The PENPACT-1 trial compared virologic thresholds to determine when to switch to second-line antiretroviral therapy (ART). Using PENPACT-1 data, we aimed to describe HIV-1 drug resistance accumulation on first-line ART by virologic threshold. PENPACT-1 had a 2 × 2 factorial design, randomizing HIV-infected children to start protease inhibitor (PI)-based versus nonnucleoside reverse transcriptase inhibitor (NNRTI)-based ART, and to switch at a 1000 copies/mL versus a 30,000 copies/mL threshold. Switch criteria were not achieving the threshold by week 24, confirmed rebound above the threshold thereafter, or a Centers for Disease Control and Prevention stage C event. Resistance tests were performed on samples ≥1000 copies/mL before switch, after resuppression, and at 4 years/trial end. Sixty-seven children started PI-based ART with the 1000 copies/mL switch threshold (PI-1000), 64 started PI-based ART with the 30,000 copies/mL threshold (PI-30,000), 67 started NNRTI-based ART with the 1000 copies/mL threshold (NNRTI-1000), and 65 started NNRTI-based ART with the 30,000 copies/mL threshold (NNRTI-30,000). Ninety-four (36%) children reached the 1000 copies/mL switch criteria during 5-year follow-up. In the 30,000 copies/mL threshold arms, the median time from the 1000 copies/mL to the 30,000 copies/mL switch criteria was 58 (PI) versus 80 (NNRTI) weeks (P = 0.81). In NNRTI-30,000, more nucleoside reverse transcriptase inhibitor (NRTI) resistance mutations accumulated than in the other groups. NNRTI mutations were selected before switching at 1000 copies/mL (23% NNRTI-1000, 27% NNRTI-30,000). Sixty-two children started abacavir + lamivudine, 166 lamivudine + zidovudine or stavudine, and 35 other NRTIs. The abacavir + lamivudine group acquired the fewest NRTI mutations. Of 60 children switched to second-line ART, 79% in PI-1000, 63% in PI-30,000, 64% in NNRTI-1000, and 100% in NNRTI-30,000 were <400 copies/mL 24 weeks later. Children on first-line NNRTI-based ART who were randomized to switch at a higher virologic threshold developed the most resistance, yet resuppressed on second-line therapy. An abacavir + lamivudine NRTI combination seemed protective against the development of NRTI resistance.
Criteria and Thresholds for U.S. Navy Acoustic and Explosive Effects Analysis
2012-04-01
Table-of-contents fragment; recoverable headings: 2.2 Functional hearing groups; 2.3.1 Development of marine mammal auditory weighting functions; 2.3.2 Navy marine mammal weighting functions; 2.4 Criteria.
15 CFR 400.31 - Manufacturing and processing activity; criteria.
Code of Federal Regulations, 2010 CFR
2010-01-01
... consider the contributory effect zone savings have as an incremental part of cost effectiveness programs... criteria—(1) Threshold factors. It is the policy of the Board to authorize zone activity only when it is... and as components of imported products. (2) Economic factors. After its review of threshold factors...
Green, Malcolm; Lander, Harvey; Snyder, Ashley; Hudson, Paul; Churpek, Matthew; Edelson, Dana
2018-02-01
Traditionally, paper-based observation charts have been used to identify deteriorating patients, with the recent emergence of electronic medical records allowing electronic algorithms to risk-stratify patients and help direct the response to deterioration. We sought to compare the Between the Flags (BTF) calling criteria to the Modified Early Warning Score (MEWS), National Early Warning Score (NEWS) and electronic Cardiac Arrest Risk Triage (eCART) score. We conducted a multicenter retrospective analysis of electronic health record data from all patients admitted to five US hospitals from November 2008 to August 2013; the outcome was cardiac arrest, ICU transfer or death within 24 h of a score. Overall accuracy was highest for eCART, with an AUC of 0.801 (95% CI 0.799-0.802), followed by NEWS, MEWS and BTF respectively (0.718 [0.716-0.720]; 0.698 [0.696-0.700]; 0.663 [0.661-0.664]). BTF criteria had a high-risk (Red Zone) specificity of 95.0% and a moderate-risk (Yellow Zone) specificity of 27.5%, which corresponded to MEWS thresholds of ≥4 and ≥2, NEWS thresholds of ≥5 and ≥2, and eCART thresholds of ≥12 and ≥4, respectively. At those thresholds, eCART caught 22 more adverse events per 10,000 patients than BTF using the moderate-risk criteria and 13 more using the high-risk criteria, while MEWS and NEWS identified the same number or fewer. An electronically generated eCART score was more accurate than commonly used paper-based observation tools for predicting the composite outcome of in-hospital cardiac arrest, ICU transfer and death within 24 h of observation. The outcomes of this analysis lend weight to a move towards an algorithm-based electronic risk identification tool for deteriorating patients, to ensure earlier detection and prevent adverse events in the hospital. Copyright © 2017 Elsevier B.V. All rights reserved.
Griffin, Andrew; Brito, Juan P; Bahl, Manisha; Hoang, Jenny K
2017-04-01
The 2015 American Thyroid Association guidelines acknowledged that "an active surveillance management approach can be considered as an alternative to immediate surgery" in patients with low-risk papillary thyroid carcinoma (PTC). The aim of this study was to determine the proportion of PTC that would meet the criteria for active surveillance and the surgeries and complications that could have been avoided. A total of 681 patients with thyroid cancer who underwent thyroid surgery from 2003 to 2012 were retrospectively reviewed. A decision-making framework for active surveillance was applied to patients with PTC in nodules measuring ≤1.5 cm on ultrasound. Patients were identified as suitable for active surveillance based on imaging and patient characteristics. These patients were reviewed for management and outcomes. PTC was diagnosed based on fine-needle aspiration cytology of Bethesda V or VI in thyroid nodules in 243 patients. Of these, 77 patients had nodules measuring ≤1.5 cm on ultrasound, and 56/243 (23%) patients met the criteria for surveillance: 15/243 (6%) patients met the criteria with a ≤1 cm size threshold, and 41/243 (17%) met the criteria with a 1.1-1.5 cm threshold. Of the 56 patients who met the criteria for active surveillance, 52 underwent total thyroidectomy, and four had a lobectomy. Forty-five (80%) patients had elective central nodal dissection, and 14 had nodal metastases on pathology (all <4 mm). Three patients had permanent complications from surgery, including vocal cord paralysis, hypoparathyroidism, and a chipped tooth from intubation. No patients died or had recurrent disease. Future programs in the United States should consider increasing the size threshold for active surveillance of PTC to 1.5 cm, since this would allow up to one quarter of patients to be eligible instead of only 6% with a 1 cm size threshold. Without an active surveillance program, the majority of patients with low-risk cancers have thyroidectomy and carry a small risk of permanent complications.
An exploratory analysis of Indiana and Illinois biotic ...
EPA recognizes the importance of nutrient criteria in protecting designated uses from eutrophication effects associated with elevated phosphorus and nitrogen in streams and has worked with states over the past 12 years to assist them in developing nutrient criteria. Towards that end, EPA has provided states and tribes with technical guidance to assess nutrient impacts and to develop criteria. EPA published recommendations in 2000 on scientifically defensible empirical approaches for setting numeric criteria. EPA also published eco-regional criteria recommendations in 2000-2001 based on a frequency distribution approach meant to approximate reference condition concentrations. In 2010, EPA elaborated on one of these empirical approaches (i.e., stressor-response relationships) for developing nutrient criteria. The purpose of this report was to conduct exploratory analyses of state datasets from Illinois and Indiana to determine threshold values for nutrients and chlorophyll a that could guide Indiana and Illinois criteria development. Box and whisker plots were used to compare nutrient and chlorophyll a concentrations between Illinois and Indiana. Stressor response analyses, using piece-wise linear regression and change-point analysis (Illinois only), were conducted to determine thresholds of change in relationships between nutrients and biotic assemblages.
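As a concrete illustration of the stressor-response threshold analysis mentioned above, the sketch below fits a continuous two-segment (piecewise linear) regression at each candidate breakpoint and keeps the breakpoint that minimizes the residual sum of squares; the data and variable names are synthetic, not taken from the Illinois or Indiana datasets.

```python
import numpy as np

def piecewise_rss(x, y, bp):
    """RSS of a continuous two-segment linear fit with a break at bp."""
    X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def change_point(x, y, n_grid=100):
    """Grid-search the breakpoint minimizing the piecewise-fit RSS."""
    candidates = np.linspace(np.quantile(x, 0.05), np.quantile(x, 0.95), n_grid)
    return min(candidates, key=lambda bp: piecewise_rss(x, y, bp))

# Synthetic example: a biotic index that declines sharply above a TP threshold
rng = np.random.default_rng(0)
tp = rng.uniform(0.01, 0.4, 200)                        # total phosphorus, mg/L
ibi = 60 - 120 * np.clip(tp - 0.1, 0, None) + rng.normal(0, 3, 200)
print(f"estimated threshold ~ {change_point(tp, ibi):.3f} mg/L TP")
```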
Geneletti, Davide
2010-02-01
This paper presents a method based on the combination of stakeholder analysis and spatial multicriteria evaluation (SMCE) to first design possible sites for an inert landfill, and then rank them according to their suitability. The method was tested for the siting of an inert landfill in the Sarca Plain, located in south-western Trentino, an alpine region in northern Italy. First, stakeholder analysis was conducted to identify a set of criteria to be satisfied by new inert landfill sites. SMCE techniques were then applied to combine the criteria and obtain a suitability map of the study region. Subsequently, the most suitable sites were extracted by also taking into account thresholds based on size and shape. These sites were then compared and ranked according to their visibility, accessibility and dust pollution. All these criteria were assessed through GIS modelling. Sensitivity analyses were performed on the results to assess the stability of the ranking with respect to variations in the input (criterion scores and weights). The study concluded that the three top-ranking sites are located close to each other, in the northernmost sector of the study area. A more general finding was that the use of different criteria in the different stages of the analysis made it possible to better differentiate the suitability of the potential landfill sites.
Ernst, Verena; Bürger, Arne; Hammerle, Florian
2017-11-01
Changes in the DSM-5 eating disorders criteria sought to increase the clarity of the diagnostic categories and to decrease the preponderance of nonspecified eating disorders. The first objective of this study was to analyze how these revisions affect threshold and EDNOS/OSFED eating disorder diagnoses in terms of prevalence, sex ratios, and diagnostic distribution in a student sample. Second, we aimed to compare the impairment levels of participants with a threshold diagnosis, an EDNOS/OSFED diagnosis, and no diagnosis using both DSM-IV and DSM-5. A sample of 1654 7th and 8th grade students completed self-report questionnaires to determine diagnoses and impairment levels in the context of an eating disorder prevention program in nine German secondary schools. Height and weight were measured. The prevalence of threshold disorders increased from .48% (DSM-IV) to 1.15% (DSM-5). EDNOS disorders increased from 2.90 to 6.23% when using the OSFED categories. A higher proportion of girls was found throughout all the diagnostic categories, and the sex ratios remained stable. The effect sizes of DSM-5 group differences regarding impairment levels were equal to or larger than those of the DSM-IV comparisons, ranging from small to medium. We provide an in-depth overview of changes resulting from the revisions of the DSM eating disorder criteria in a German adolescent sample. Despite the overall increase in prevalence estimates, the results suggest that the DSM-5 criteria differentiate participants with threshold disorders and OSFED from those with no diagnosis as well as or even more distinctly than the DSM-IV criteria. © 2017 Wiley Periodicals, Inc.
Proposed Objective Odor Control Test Methodology for Waste Containment
NASA Technical Reports Server (NTRS)
Vos, Gordon
2010-01-01
The Orion Cockpit Working Group has requested that an odor control testing methodology be proposed to evaluate the odor containment effectiveness of waste disposal bags to be flown on the Orion Crew Exploration Vehicle. As a standardized "odor containment" test does not appear to be a matter of record for the project, a new test method is being proposed. This method is based on existing test methods used in industrial hygiene for the evaluation of respirator fit in occupational settings, and takes into consideration peer-reviewed documentation of human odor thresholds for standardized contaminants, industry-standard atmospheric testing methodologies, and established criteria for laboratory analysis. The proposed methodology is quantitative, though it can readily be complemented with a qualitative subjective assessment. Isoamyl acetate (IAA, also known as isopentyl acetate) is commonly used in respirator fit testing, and there are documented methodologies for measuring its airborne concentrations quantitatively. IAA is a clear, colorless liquid with a banana-like odor, a documented human odor detection threshold of 0.025 ppm, and a limit of quantitation of 15 ppb.
Mirus, Benjamin B.; Becker, Rachel E.; Baum, Rex L.; Smith, Joel B.
2018-01-01
Early warning for rainfall-induced shallow landsliding can help reduce fatalities and economic losses. Although these commonly occurring landslides are typically triggered by subsurface hydrological processes, most early warning criteria rely exclusively on empirical rainfall thresholds and other indirect proxies for subsurface wetness. We explore the utility of explicitly accounting for antecedent wetness by integrating real-time subsurface hydrologic measurements into landslide early warning criteria. Our efforts build on previous progress with rainfall thresholds, monitoring, and numerical modeling along the landslide-prone railway corridor between Everett and Seattle, Washington, USA. We propose a modification to previously established recent versus antecedent (RA) cumulative rainfall thresholds by replacing the antecedent 15-day rainfall component with the average saturation observed over the same timeframe. We calculate this antecedent saturation with real-time telemetered measurements from five volumetric water content probes installed in the shallow subsurface within a steep vegetated hillslope. Our hybrid rainfall versus saturation (RS) threshold still relies on the same recent 3-day rainfall component as the existing RA thresholds, to facilitate ready integration with quantitative precipitation forecasts. During the 2015–2017 monitoring period, this RS hybrid approach yields more true positives and fewer false positives and false negatives than the previous RA rainfall-only thresholds. We also demonstrate that alternative hybrid threshold formats could be even more accurate, which suggests that further development and testing during future landslide seasons is needed. The positive results confirm that accounting for antecedent wetness conditions with direct subsurface hydrologic measurements can improve thresholds for alert systems and early warning of rainfall-induced shallow landsliding.
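The RS idea can be stated in a few lines. This is a minimal sketch under assumed numbers: the intercept, slope, and floor of the saturation-dependent rainfall threshold are invented for illustration, not the calibrated values for the monitored corridor.

```python
# Hedged sketch of a hybrid rainfall-versus-saturation (RS) threshold: recent
# 3-day rainfall is compared against a threshold line that depends on the
# 15-day average saturation from in-situ sensors rather than on antecedent
# rainfall. All parameter values below are illustrative assumptions.
def rs_alert(rain_3day_mm: float, sat_15day_avg: float,
             a: float = 80.0, b: float = 90.0) -> bool:
    """Issue an alert when recent rainfall exceeds a saturation-dependent threshold.

    Assumed threshold form: rain_threshold = a - b * sat_15day_avg, so wetter
    antecedent conditions (saturation near 1.0) lower the rainfall needed.
    """
    threshold_mm = max(a - b * sat_15day_avg, 5.0)   # floor keeps threshold positive
    return rain_3day_mm >= threshold_mm

# Dry antecedent conditions: no alert; wet antecedent conditions: alert.
print(rs_alert(40.0, 0.3), rs_alert(40.0, 0.9))
```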
Julian, Laura J.; Gregorich, Steven E.; Tonner, Chris; Yazdany, Jinoos; Trupin, Laura; Criswell, Lindsey A.; Yelin, ED; Katz, Patricia P.
2013-01-01
Objective Identifying persons with systemic lupus erythematosus (SLE) at risk for depression would facilitate the identification and treatment of an important comorbidity conferring additional risk for poor outcomes. The purpose of this study was to determine the utility of a brief screening measure, the Center for Epidemiologic Studies Depression Scale (CES-D), in detecting mood disorders in persons with SLE. Methods This cross-sectional study examined 150 persons with SLE. Screening cut points were empirically derived using threshold selection methods, and receiver operating characteristic curves were estimated. The empirically derived cut points of the CES-D were used as the screening measures and were compared to other commonly used CES-D cut points in addition to other commonly used methods to screen for depression. Diagnoses of major depressive disorder or other mood disorders were determined using a “gold standard” structured clinical interview. Results Of the 150 persons with SLE, 26% of subjects met criteria for any mood disorder and 17% met criteria for major depressive disorder. Optimal threshold estimations suggested a CES-D cut score of 24 and above, which yielded adequate sensitivity and specificity in detecting major depressive disorder (88% and 93%, respectively) and correctly classified 92% of participants. To detect the presence of any mood disorder, a cut score of 20 and above was suggested, yielding sensitivity and specificity of 87% and correctly classifying 87%. Conclusion These results suggest the CES-D may be a useful screening measure to identify patients at risk for depression. PMID:21312347
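One common way to derive such empirical cut points is to scan candidate scores and maximize the Youden index (sensitivity + specificity − 1) against the gold-standard diagnosis. The sketch below uses synthetic scores, so the resulting cut score is illustrative and will not reproduce the CES-D values of 24 and 20 reported above; the study's own threshold selection method may also differ.

```python
import numpy as np

def youden_cutoff(scores, has_disorder):
    """Return the cut score maximizing sensitivity + specificity - 1."""
    scores = np.asarray(scores)
    y = np.asarray(has_disorder, dtype=bool)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t                      # screen positive at/above t
        sens = (pred & y).sum() / y.sum()
        spec = (~pred & ~y).sum() / (~y).sum()
        if sens + spec - 1 > best_j:
            best_t, best_j = t, sens + spec - 1
    return best_t

rng = np.random.default_rng(1)
scores = np.round(np.concatenate([rng.normal(14, 6, 110),   # no mood disorder
                                  rng.normal(28, 6, 40)]))  # mood disorder
truth = np.r_[np.zeros(110, dtype=bool), np.ones(40, dtype=bool)]
print("suggested cut score:", youden_cutoff(scores, truth))
```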
Abboud, Tammam; Schaper, Miriam; Dührsen, Lasse; Schwarz, Cindy; Schmidt, Nils Ole; Westphal, Manfred; Martens, Tobias
2016-10-01
OBJECTIVE Warning criteria for monitoring of motor evoked potentials (MEP) after direct cortical stimulation during surgery for supratentorial tumors have been well described. However, little is known about the value of MEP after transcranial electrical stimulation (TES) in predicting postoperative motor deficit when the threshold level is monitored. The authors aimed to evaluate the feasibility and value of this method in glioma surgery by using a new approach for interpreting changes in threshold level involving contra- and ipsilateral MEP. METHODS Between November 2013 and December 2014, 93 patients underwent TES-MEP monitoring during resection of gliomas located close to central motor pathways but not involving the primary motor cortex. The MEP were elicited by transcranial repetitive anodal train stimulation. Bilateral MEP were continuously evaluated to assess the percentage increase of the threshold level (the minimum voltage needed to evoke a stable motor response from each of the muscles being monitored) from the baseline set before dural opening. An increase in threshold level on the contralateral side (facial, arm, or leg muscles contralateral to the affected hemisphere) of more than 20% beyond the percentage increase on the ipsilateral side (facial, arm, or leg muscles ipsilateral to the affected hemisphere) was considered a significant alteration. Recorded alterations were subsequently correlated with postoperative neurological deterioration and MRI findings. RESULTS TES-MEP could be elicited in all patients, including those with recurrent glioma (31 patients) and preoperative paresis (20 patients). Five of 73 patients without preoperative paresis showed a significant increase in threshold level, and all of them developed new paresis postoperatively (transient in 4 patients and permanent in 1 patient). Eight of 20 patients with preoperative paresis showed a significant increase in threshold level, and all of them developed postoperative neurological deterioration (transient in 4 patients and permanent in 4 patients). In 80 patients no significant change in threshold level was detected, and none of them showed postoperative neurological deterioration. The specificity and sensitivity in this series were estimated at 100%. Postoperative MRI revealed gross-total tumor resection in 56 of 82 patients (68%) in whom complete tumor resection was attainable; territorial ischemia was detected in 4 patients. CONCLUSIONS The novel threshold criterion has made TES-MEP a useful method for predicting postoperative motor deficit in patients who undergo glioma surgery, and has been feasible in patients with preoperative paresis as well as in patients with recurrent glioma. Including contra- and ipsilateral changes in threshold level has led to a high sensitivity and specificity.
Whalley, H C; Kestelman, J N; Rimmington, J E; Kelso, A; Abukmeil, S S; Best, J J; Johnstone, E C; Lawrie, S M
1999-07-30
The Edinburgh High Risk Project is a longitudinal study of brain structure (and function) in subjects at high risk of developing schizophrenia in the next 5-10 years for genetic reasons. In this article we describe the methods of volumetric analysis of structural magnetic resonance images used in the study. We also consider potential sources of error in these methods: the validity of our image analysis techniques; inter- and intra-rater reliability; possible positional variation; and thresholding criteria used in separating brain from cerebro-spinal fluid (CSF). Investigation with a phantom test object (of similar imaging characteristics to the brain) provided evidence for the validity of our image acquisition and analysis techniques. Both inter- and intra-rater reliability were found to be good in whole brain measures but less so for smaller regions. There were no statistically significant differences in positioning across the three study groups (patients with schizophrenia, high risk subjects and normal volunteers). A new technique for thresholding MRI scans longitudinally is described (the 'rescale' method) and compared with our established method (thresholding by eye). Few differences between the two techniques were seen at 3- and 6-month follow-up. These findings demonstrate the validity and reliability of the structural MRI analysis techniques used in the Edinburgh High Risk Project, and highlight methodological issues of general concern in cross-sectional and longitudinal studies of brain structure in healthy control subjects and neuropsychiatric populations.
2013-01-01
Background Insulin resistance has been associated with metabolic and hemodynamic alterations and higher cardiometabolic risk. There is great variability in the threshold homeostasis model assessment of insulin resistance (HOMA-IR) levels used to define insulin resistance. The purpose of this study was to describe the influence of age and gender on the estimation of HOMA-IR optimal cut-off values to identify subjects with higher cardiometabolic risk in a general adult population. Methods The study included 2459 adults (range 20–92 years, 58.4% women) in a random Spanish population sample. As an accurate indicator of cardiometabolic risk, metabolic syndrome (MetS), defined both by International Diabetes Federation criteria and by Adult Treatment Panel III criteria, was used. The effect of age was analyzed separately in individuals with and without diabetes mellitus. ROC regression methodology was used to evaluate the effect of age on HOMA-IR performance in classifying cardiometabolic risk. Results In the Spanish population, the threshold value of HOMA-IR drops from 3.46 using the 90th percentile criterion to 2.05 when the MetS components are taken into account. In non-diabetic women, but not in men, we found a significant non-linear effect of age on the accuracy of HOMA-IR. In non-diabetic men, the cut-off value was 1.85. All values lie between the 70th and 75th percentiles of HOMA-IR levels in the adult Spanish population. Conclusions Considering cardiometabolic risk when establishing the cut-off points of HOMA-IR to define insulin resistance, instead of using a percentile of the population distribution, would increase its clinical utility in identifying those patients in whom the presence of multiple metabolic risk factors imparts increased metabolic and cardiovascular risk. The threshold levels must be modified by age in non-diabetic women. PMID:24131857
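The underlying index is a one-line formula. The sketch below uses the standard HOMA-IR definition (fasting glucose in mmol/L × fasting insulin in uU/mL, divided by 22.5) and, as an example cut-off, the MetS-derived 2.05 value reported above.

```python
# Standard HOMA-IR computation; the 2.05 default cut-off is the MetS-based
# value reported in the abstract and is used here only as an example.
def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def insulin_resistant(glucose_mmol_l: float, insulin_uU_ml: float,
                      cutoff: float = 2.05) -> bool:
    return homa_ir(glucose_mmol_l, insulin_uU_ml) >= cutoff

print(round(homa_ir(5.2, 10.0), 2))   # ~2.31, above the 2.05 cut-off
print(insulin_resistant(5.2, 10.0))   # True
```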
SU-E-T-647: Quality Assurance of VMAT by Gamma Analysis Dependence On Low-Dose Threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, J; Kim, M; Lee, S
2015-06-15
Purpose: The AAPM TG-119 instructed institutions to use a low-dose threshold (LDT) of 10% or a ROI determined by the jaw when they collected gamma analysis QA data of planar dose distributions. Also, based on a survey by Nelms and Simon, more than 70% of institutions use an LDT between 0% and 10% for gamma analysis. However, there are no clinical data to quantitatively demonstrate the impact of the LDT on the gamma index. Therefore, we performed a gamma analysis with LDTs of 0% to 15% according to both global and local normalization and different acceptance criteria: 3%/3 mm, 2%/2 mm, and 1%/1 mm. Methods: A total of 30 treatment plans (10 head and neck, 10 brain, and 10 prostate cancer cases) were randomly selected from the Varian Eclipse TPS, retrospectively. For the gamma analysis, a predicted portal image was acquired through a portal dose calculation algorithm in the Eclipse TPS, and a measured portal image was obtained using a Varian Clinac iX and an EPID. Then, the gamma analysis was performed using the Portal Dosimetry software. Results: For global normalization, the gamma passing rate (%GP) decreased as the LDT increased, and all low-dose thresholds exhibited a %GP above 95% for both the 3%/3 mm and 2%/2 mm criteria. However, for local normalization, the %GP increased as the LDT increased. The gamma passing rate with an LDT of 10% increased by 6.86%, 9.22% and 6.14% compared with 0% in the case of the head and neck, brain and prostate plans for the 3%/3 mm criteria, respectively. Conclusion: Applying the LDT in global normalization does not have a critical impact on judging patient-specific QA results. However, the LDT for local normalization should be carefully selected, because applying the LDT can cause the average %GP to increase rapidly.
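To make the LDT's role concrete, here is a hedged one-dimensional sketch of a globally normalized gamma analysis in which evaluated points below the low-dose threshold are excluded from the passing-rate statistic; real QA software operates on 2-D portal images with interpolated search, so this is illustrative only.

```python
import numpy as np

def gamma_pass_rate(ref, meas, dx_mm=1.0, dd=0.03, dta_mm=3.0, ldt=0.10):
    """1-D global-normalization gamma passing rate with a low-dose threshold."""
    ref, meas = np.asarray(ref, float), np.asarray(meas, float)
    x = np.arange(ref.size) * dx_mm
    norm = ref.max()                                   # global normalization dose
    gammas = []
    for i in np.where(meas >= ldt * norm)[0]:          # LDT mask on evaluated points
        dose_err = (meas[i] - ref) / (dd * norm)
        dist_err = (x[i] - x) / dta_mm
        gammas.append(np.sqrt(dose_err ** 2 + dist_err ** 2).min())
    return 100.0 * (np.asarray(gammas) <= 1.0).mean()

ref = np.exp(-0.5 * ((np.arange(100) - 50) / 12.0) ** 2)   # Gaussian reference profile
meas = 1.02 * ref                                          # 2% global scaling error
print(f"%GP at 3%/3 mm with 10% LDT: {gamma_pass_rate(ref, meas):.1f}")
```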
Leong, Tora; Rehman, Michaela B.; Pastormerlo, Luigi Emilio; Harrell, Frank E.; Coats, Andrew J. S.; Francis, Darrel P.
2014-01-01
Background Clinicians are sometimes advised to make decisions using thresholds in measured variables, derived from prognostic studies. Objectives We studied why there are conflicting apparently-optimal prognostic thresholds, for example in exercise peak oxygen uptake (pVO2), ejection fraction (EF), and Brain Natriuretic Peptide (BNP) in heart failure (HF). Data Sources and Eligibility Criteria Studies testing pVO2, EF or BNP prognostic thresholds in heart failure, published between 1990 and 2010, listed on Pubmed. Methods First, we examined studies testing pVO2, EF or BNP prognostic thresholds. Second, we created repeated simulations of 1500 patients to identify whether an apparently-optimal prognostic threshold indicates step change in risk. Results 33 studies (8946 patients) tested a pVO2 threshold. 18 found it prognostically significant: the actual reported threshold ranged widely (10–18 ml/kg/min) but was overwhelmingly controlled by the individual study population's mean pVO2 (r = 0.86, p<0.00001). In contrast, the 15 negative publications were testing thresholds 199% further from their means (p = 0.0001). Likewise, of 35 EF studies (10220 patients), the thresholds in the 22 positive reports were strongly determined by study means (r = 0.90, p<0.0001). Similarly, in the 19 positives of 20 BNP studies (9725 patients): r = 0.86 (p<0.0001). Second, survival simulations always discovered a “most significant” threshold, even when there was definitely no step change in mortality. With linear increase in risk, the apparently-optimal threshold was always near the sample mean (r = 0.99, p<0.001). Limitations This study cannot report the best threshold for any of these variables; instead it explains how common clinical research procedures routinely produce false thresholds. Key Findings First, shifting (and/or disappearance) of an apparently-optimal prognostic threshold is strongly determined by studies' average pVO2, EF or BNP. Second, apparently-optimal thresholds always appear, even with no step in prognosis. Conclusions Emphatic therapeutic guidance based on thresholds from observational studies may be ill-founded. We should not assume that optimal thresholds, or any thresholds, exist. PMID:24475020
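The second finding is easy to reproduce in miniature. The sketch below simulates a cohort whose mortality risk rises linearly with a biomarker (no step anywhere), scans candidate cut points for the most significant split, and reports the winner, which lands near the sample mean; all numbers are synthetic.

```python
import math
import numpy as np

def two_prop_pvalue(k1, n1, k2, n2):
    """Two-sided two-proportion z-test with pooled variance (stdlib only)."""
    p1, p2, p = k1 / n1, k2 / n2, (k1 + k2) / (n1 + n2)
    z = (p1 - p2) / math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return math.erfc(abs(z) / math.sqrt(2))

rng = np.random.default_rng(42)
biomarker = rng.normal(15.0, 4.0, 1500)                  # pVO2-like variable
p_death = np.clip(0.35 - 0.015 * biomarker, 0.02, 0.98)  # linear risk, no step
died = rng.random(1500) < p_death

best_t, best_p = None, 1.0
for t in np.quantile(biomarker, np.linspace(0.1, 0.9, 81)):
    low = biomarker < t
    p = two_prop_pvalue(died[low].sum(), low.sum(), died[~low].sum(), (~low).sum())
    if p < best_p:
        best_t, best_p = t, p

print(f"sample mean {biomarker.mean():.1f}; 'optimal' threshold {best_t:.1f}")
```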
Statistical Criteria for Setting Thresholds in Medical School Admissions
ERIC Educational Resources Information Center
Albanese, Mark A.; Farrell, Philip; Dottl, Susan
2005-01-01
In 2001, Dr. Jordan Cohen, President of the AAMC, called for medical schools to consider using a Medical College Admission Test (MCAT) threshold to eliminate high-risk applicants from consideration and then to use non-academic qualifications for further consideration. This approach would seem to be consistent with the recent Supreme Court ruling…
Adami, Silvano; Bertoldo, Francesco; Gatti, Davide; Minisola, Giovanni; Rossini, Maurizio; Sinigaglia, Luigi; Varenna, Massimo
2013-09-01
The definition of osteoporosis was based for several years on bone mineral density values, which were used by most guidelines for defining treatment thresholds. The availability of tools for the estimation of fracture risk, such as FRAX™ or its adapted Italian version, DeFRA, is providing a way to grade osteoporosis severity. By applying these new tools, the criteria identified in Italy for treatment reimbursability (e.g., "Nota 79") are confirmed as extremely conservative. The new fracture risk-assessment tools provide continuous risk values that can be used by health authorities (or "payers") for identifying treatment thresholds. FRAX estimates the risk for "major osteoporotic fractures," which are not counted in registered fracture trials. Here, we elaborate an algorithm to convert vertebral and nonvertebral fractures to the "major fractures" of FRAX, and this allows a cost-effectiveness assessment for each drug.
Normal Perceptual Sensitivity Arising From Weakly Reflective Cone Photoreceptors
Bruce, Kady S.; Harmening, Wolf M.; Langston, Bradley R.; Tuten, William S.; Roorda, Austin; Sincich, Lawrence C.
2015-01-01
Purpose To determine the light sensitivity of poorly reflective cones observed in retinas of normal subjects, and to establish a relationship between cone reflectivity and perceptual threshold. Methods Five subjects (four male, one female) with normal vision were imaged longitudinally (7–26 imaging sessions, representing 82–896 days) using adaptive optics scanning laser ophthalmoscopy (AOSLO) to monitor cone reflectance. Ten cones with unusually low reflectivity, as well as 10 normally reflective cones serving as controls, were targeted for perceptual testing. Cone-sized stimuli were delivered to the targeted cones and luminance increment thresholds were quantified. Thresholds were measured three to five times per session for each cone in the 10 pairs, all located 2.2 to 3.3° from the center of gaze. Results Compared with other cones in the same retinal area, three of 10 monitored dark cones were persistently poorly reflective, while seven occasionally manifested normal reflectance. Tested psychophysically, all 10 dark cones had thresholds comparable with those from normally reflecting cones measured concurrently (P = 0.49). The variation observed in dark cone thresholds also matched the wide variation seen in a large population (n = 56 cone pairs, six subjects) of normal cones; in the latter, no correlation was found between cone reflectivity and threshold (P = 0.0502). Conclusions Low cone reflectance cannot be used as a reliable indicator of cone sensitivity to light in normal retinas. To improve assessment of early retinal pathology, other diagnostic criteria should be employed along with imaging and cone-based microperimetry. PMID:26193919
Bào, Yīmíng; Amarasinghe, Gaya K; Basler, Christopher F; Bavari, Sina; Bukreyev, Alexander; Chandran, Kartik; Dolnik, Olga; Dye, John M; Ebihara, Hideki; Formenty, Pierre; Hewson, Roger; Kobinger, Gary P; Leroy, Eric M; Mühlberger, Elke; Netesov, Sergey V; Patterson, Jean L; Paweska, Janusz T; Smither, Sophie J; Takada, Ayato; Towner, Jonathan S; Volchkov, Viktor E; Wahl-Jensen, Victoria; Kuhn, Jens H
2017-05-11
The mononegaviral family Filoviridae has eight members assigned to three genera and seven species. Until now, genus and species demarcation were based on arbitrarily chosen filovirus genome sequence divergence values (≈50% for genera, ≈30% for species) and arbitrarily chosen phenotypic virus or virion characteristics. Here we report filovirus genome sequence-based taxon demarcation criteria using the publicly accessible PAirwise Sequencing Comparison (PASC) tool of the US National Center for Biotechnology Information (Bethesda, MD, USA). Comparison of all available filovirus genomes in GenBank using PASC revealed optimal demarcation at the 55-58% sequence diversity threshold range for genera and at the 23-36% range for species. Because these thresholds do not change the current official filovirus classification, these values are now implemented as filovirus taxon demarcation criteria that may be used on their own for filovirus classification when additional data are absent. A near-complete, coding-complete, or complete filovirus genome sequence will now be required to allow official classification of any novel "filovirus." Classification of filoviruses into existing taxa or determining the need for novel taxa is now straightforward and could even become automated using a presented algorithm/flowchart rooted in RefSeq (type) sequences.
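A toy version of divergence-based demarcation is shown below. The pairwise divergence here is a naive gap-excluded mismatch fraction over a given alignment, and the two cut-offs are illustrative midpoints of the ranges quoted above; PASC itself compares pairwise identities computed across all GenBank filovirus genomes.

```python
# Hedged sketch of sequence-divergence-based taxon demarcation. The cut-offs
# are assumed midpoints of the 55-58% (genus) and 23-36% (species) ranges.
GENUS_CUT, SPECIES_CUT = 0.565, 0.295

def divergence(aln_a: str, aln_b: str) -> float:
    """Fraction of differing positions in two pre-aligned sequences, gaps excluded."""
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != '-' and b != '-']
    return sum(a != b for a, b in pairs) / len(pairs)

def classify(d: float) -> str:
    if d > GENUS_CUT:
        return "different genera"
    if d > SPECIES_CUT:
        return "same genus, different species"
    return "same species"

print(classify(divergence("ATGGCGT-ACGT", "ATGACGTTACGA")))   # same species
```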
Ecosystem Modeling Applied to Nutrient Criteria Development in Rivers
NASA Astrophysics Data System (ADS)
Carleton, James N.; Park, Richard A.; Clough, Jonathan S.
2009-09-01
Threshold concentrations for biological impairment by nutrients are difficult to quantify in lotic systems, yet States and Tribes in the United States are charged with developing water quality criteria to protect these ecosystems from excessive enrichment. The analysis described in this article explores the use of the ecosystem model AQUATOX to investigate impairment thresholds keyed to biological indexes that can be simulated. The indexes selected for this exercise include percentage cyanobacterial biomass of sestonic algae, and benthic chlorophyll a. The calibrated model was used to analyze responses of these indexes to concurrent reductions in phosphorus, nitrogen, and suspended sediment in an enriched upper Midwestern river. Results suggest that the indexes would respond strongly to changes in phosphorus and suspended sediment, and less strongly to changes in nitrogen concentration. Using simulated concurrent reductions in all three water quality constituents, a total phosphorus concentration of 0.1 mg/l was identified as a threshold concentration, and therefore a hypothetical water quality criterion, for prevention of both excessive periphyton growth and sestonic cyanobacterial blooms. This kind of analysis is suggested as a way to evaluate multiple contrasting impacts of hypothetical nutrient and sediment reductions and to define nutrient criteria or target concentrations that balance multiple management objectives concurrently.
Probabilistic peak detection in CE-LIF for STR DNA typing.
Woldegebriel, Michael; van Asten, Arian; Kloosterman, Ate; Vivó-Truyols, Gabriel
2017-07-01
In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at an exhaustive use of the raw electropherogram data from a laser-induced fluorescence multi-CE system. Because the raw data are informative down to the level of individual data points, the conventional threshold-based approaches discard relevant forensic information early in the data analysis pipeline. Our proposed method assigns to each data point a posterior probability reflecting its relevance with respect to the peak detection criteria. Low-intensity peaks generated by a truly existing allele can thus contribute evidential value, rather than being fully discarded and a potential allele drop-out contemplated. This way of working utilizes the information available within each individual data point and thus avoids making early (binary) decisions in the data analysis that can lead to error propagation. The proposed method was tested and compared to the application of a set threshold, as is current practice in forensic STR DNA profiling. The new method was found to yield a significant improvement in the number of alleles identified, regardless of the peak heights and deviation from Gaussian shape. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hirschfeld, Gerrit; Blankenburg, Markus R; Süß, Moritz; Zernikow, Boris
2015-01-01
The assessment of somatosensory function is a cornerstone of research and clinical practice in neurology. Recent initiatives have developed novel protocols for quantitative sensory testing (QST). Application of these methods has led to intriguing findings, such as the presence of lower pain thresholds in healthy children compared to healthy adolescents. In this article, we (re-)introduce the basic concepts of signal detection theory (SDT) as a method to investigate such differences in somatosensory function in detail. SDT describes participants' responses according to two parameters, sensitivity and response bias. Sensitivity refers to individuals' ability to discriminate between painful and non-painful stimulations. Response bias refers to individuals' criterion for giving a "painful" response. We describe how multilevel models can be used to estimate these parameters and to overcome central critiques of these methods. To provide an example, we apply these methods to data from the mechanical pain sensitivity test of the QST protocol. The results show that adolescents are more sensitive to mechanical pain and contradict the idea that younger children simply use more lenient criteria to report pain. Overall, we hope that the wider use of multilevel modeling to describe somatosensory functioning may advance neurology research and practice.
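The two SDT parameters have closed-form point estimates from hit and false-alarm rates. The sketch below shows the standard single-subject formulas, d′ = z(H) − z(FA) and c = −[z(H) + z(FA)]/2, with an assumed log-linear correction for extreme rates; the article itself estimates these within a multilevel model.

```python
from statistics import NormalDist

def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    """Classical SDT point estimates: sensitivity d' and response criterion c."""
    z = NormalDist().inv_cdf
    # Log-linear correction (an assumed choice) avoids infinite z at rates 0 or 1
    h = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(h) - z(fa)
    criterion = -(z(h) + z(fa)) / 2
    return d_prime, criterion

# Similar sensitivity, but a more lenient criterion in the second observer:
print(sdt_parameters(40, 10, 20, 30))
print(sdt_parameters(45, 5, 30, 20))
```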
Comparison of different criteria for diagnosis of gestational diabetes mellitus
Sagili, Haritha; Kamalanathan, Sadishkumar; Sahoo, Jayaprakash; Lakshminarayanan, Subitha; Rani, Reddi; Jayalakshmi, D.; Kumar, K. T. Hari Chandra
2015-01-01
Introduction: The International Association of Diabetes in Pregnancy Study Group (IADPSG) criteria for gestational diabetes mellitus (GDM) have been adopted by most associations across the world, including the American Diabetes Association and the World Health Organization (WHO). We conducted a study comparing the IADPSG and previous WHO criteria and their effects on neonatal birth weight. Methods: The study was carried out in the Obstetrics and Gynaecology Department of a tertiary care institute in South India in collaboration with the Endocrinology Department. One thousand two hundred and thirty-one antenatal cases with at least one risk factor for GDM and a gestational age of more than 24 weeks were included in the study. The two criteria were compared on the basis of 75 g oral glucose tolerance test results. Results: The prevalence of GDM using the IADPSG and previous WHO criteria was 12.6% and 12.4%, respectively. The prevalence of GDM was 9.9% when both criteria had to be satisfied. The two GDM criteria groups did not differ in neonatal birth weight or macrosomia rate. However, there was a significant increase in lower segment cesarean section in the IADPSG criteria group. An elevated fasting plasma glucose alone picked up only one case of GDM in the previous WHO criteria group. Conclusions: A single 2 h plasma glucose is both easy to perform and economical. A revised WHO criterion using a 2 h threshold of ≥140 mg% can be adopted as a one-step screening and diagnostic procedure for GDM in our country. PMID:26693435
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems (OES) involves selecting technical configurations that, under given initial requirements and conditions, are optimal according to certain criteria. The defining characteristic of an OES for any purpose is its threshold detection capability, upon which the required functional quality of the device or system depends. The selection criteria and optimization methods must therefore be subordinated to the goal of better detectability. The problem generally reduces to the optimal selection of expected (predetermined) signals under predetermined observation conditions. Thus, the main purpose of optimizing the system with respect to its detectability is the choice of circuits and components that provide the most effective selection of a target.
Ver Elst, K; Vermeiren, S; Schouwers, S; Callebaut, V; Thomson, W; Weekx, S
2013-12-01
CLSI recommends a minimal citrate tube fill volume of 90%. A validation protocol with clinical and analytical components was set up to determine the tube fill threshold for the international normalized ratio of prothrombin time (PT-INR), activated partial thromboplastin time (aPTT) and fibrinogen. Citrated coagulation samples from 16 healthy donors and eight patients receiving vitamin K antagonists (VKA) were evaluated. Eighty-nine tubes were filled to varying volumes of >50%. Coagulation tests were performed on the ACL TOP 500 CTS(®). A Receiver Operating Characteristic (ROC) plot, with total error (TE) and critical difference (CD) as possible acceptance criteria, was used to determine the fill threshold. ROC analysis was most accurate with CD for PT-INR and TE for aPTT, resulting in thresholds of 63% for PT and 80% for aPTT. By adapted ROC analysis, based on setting the threshold at the point of 100% sensitivity with maximum specificity, CD performed best for PT and TE for aPTT, resulting in thresholds of 73% for PT and 90% for aPTT. For fibrinogen, the method was only valid with the TE criterion, at a 63% fill volume. In our study, we validated minimal citrate tube fill volumes of 73%, 90% and 63% for PT-INR, aPTT and fibrinogen, respectively. © 2013 John Wiley & Sons Ltd.
Texture-based segmentation and analysis of emphysema depicted on CT images
NASA Astrophysics Data System (ADS)
Tan, Jun; Zheng, Bin; Wang, Xingwei; Lederman, Dror; Pu, Jiantao; Sciurba, Frank C.; Gur, David; Leader, J. Ken
2011-03-01
In this study we present a texture-based method for segmentation of emphysema depicted on CT examinations, consisting of two steps. In step 1, fractal-dimension-based texture feature extraction is used to initially detect base regions of emphysema; a threshold is applied to the texture result image to obtain the initial base regions. In step 2, the base regions are evaluated pixel by pixel using a method that considers the variance change incurred by adding a pixel to the base, in an effort to refine the boundary of the base regions. Visual inspection revealed a reasonable segmentation of the emphysema regions. There were strong correlations between lung function (FEV1%, FEV1/FVC, and DLCO%) and the fraction of emphysema computed using the texture-based method: -0.433, -0.629, and -0.527, respectively. The texture-based method produced more homogeneous emphysematous regions compared with simple thresholding, especially for large bullae, which can appear as speckled regions in the threshold approach. In the texture-based method, single isolated pixels may be considered emphysema only if neighboring pixels meet certain criteria, which supports the idea that single isolated pixels may not be sufficient evidence that emphysema is present. One of the strengths of our texture-based approach to emphysema segmentation is that it goes beyond existing approaches that typically extract a single texture feature or groups of texture features and analyze the features individually. We focus on first identifying potential regions of emphysema and then refining the boundary of the detected regions based on texture patterns.
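The fractal-dimension texture feature in step 1 is not fully specified in the abstract; as one plausible reading, the sketch below estimates a box-counting dimension for a binary region, which could serve as such a feature (the original feature definition may differ).

```python
import numpy as np

def box_count_dimension(mask: np.ndarray) -> float:
    """Box-counting dimension of a binary 2-D mask (slope of log N vs log 1/s)."""
    sizes, counts = [2, 4, 8, 16, 32], []
    n = min(mask.shape)
    for s in sizes:
        trimmed = mask[: n - n % s, : n - n % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
speckle = rng.random((128, 128)) < 0.2     # speckled, emphysema-like pattern
print(f"estimated box-counting dimension: {box_count_dimension(speckle):.2f}")
```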
Howes, Oliver D; McCutcheon, Rob; Agid, Ofer; de Bartolomeis, Andrea; van Beveren, Nico J M; Birnbaum, Michael L; Bloomfield, Michael A P; Bressan, Rodrigo A; Buchanan, Robert W; Carpenter, William T; Castle, David J; Citrome, Leslie; Daskalakis, Zafiris J; Davidson, Michael; Drake, Richard J; Dursun, Serdar; Ebdrup, Bjørn H; Elkis, Helio; Falkai, Peter; Fleischacker, W Wolfgang; Gadelha, Ary; Gaughran, Fiona; Glenthøj, Birte Y; Graff-Guerrero, Ariel; Hallak, Jaime E C; Honer, William G; Kennedy, James; Kinon, Bruce J; Lawrie, Stephen M; Lee, Jimmy; Leweke, F Markus; MacCabe, James H; McNabb, Carolyn B; Meltzer, Herbert; Möller, Hans-Jürgen; Nakajima, Shinchiro; Pantelis, Christos; Reis Marques, Tiago; Remington, Gary; Rossell, Susan L; Russell, Bruce R; Siu, Cynthia O; Suzuki, Takefumi; Sommer, Iris E; Taylor, David; Thomas, Neil; Üçok, Alp; Umbricht, Daniel; Walters, James T R; Kane, John; Correll, Christoph U
2017-03-01
Research and clinical translation in schizophrenia are limited by inconsistent definitions of treatment resistance and response. To address this issue, the authors evaluated current approaches and then developed consensus criteria and guidelines. A systematic review of randomized antipsychotic clinical trials in treatment-resistant schizophrenia was performed, and definitions of treatment resistance were extracted. Subsequently, consensus operationalized criteria were developed through 1) a multiphase, mixed-methods approach, 2) identification of key criteria via an online survey, and 3) meetings to achieve consensus. Of 2,808 studies identified, 42 met inclusion criteria. Of these, 21 studies (50%) did not provide operationalized criteria. In the remaining studies, criteria varied considerably, particularly regarding symptom severity, prior treatment duration, and antipsychotic dosage thresholds; only two studies (5%) utilized the same criteria. The consensus group identified minimum and optimal criteria, employing the following principles: 1) current symptoms of a minimum duration and severity determined by a standardized rating scale; 2) moderate or worse functional impairment; 3) prior treatment consisting of at least two different antipsychotic trials, each for a minimum duration and dosage; 4) systematic monitoring of adherence and meeting of minimum adherence criteria; 5) ideally at least one prospective treatment trial; and 6) criteria that clearly separate responsive from treatment-resistant patients. There is considerable variation in current approaches to defining treatment resistance in schizophrenia. The authors present consensus guidelines that operationalize criteria for determining and reporting treatment resistance, adequate treatment, and treatment response, providing a benchmark for research and clinical translation.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring settings. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG using principal component analysis (PCA) is described. After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or error control (EC) criterion, was set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia Database (mitdb) data and with 60 normal and 30 diagnostic ECG data sets from the PTB Diagnostic ECG Database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
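As a rough illustration of the EC branch described above, the sketch below takes a PCA of aligned beats via SVD and keeps the fewest components meeting a PRDN limit; quantization and the modified delta/Huffman stage are omitted, and the 5% limit simply mirrors the EC setting.

```python
import numpy as np

def prdn(x, x_hat):
    """Percentage RMS difference, normalized (mean-removed denominator)."""
    num = np.sum((x - x_hat) ** 2)
    den = np.sum((x - x.mean()) ** 2)
    return 100.0 * np.sqrt(num / den)

def pca_compress(beats, prdn_limit=5.0):
    """beats: (n_beats, beat_len) array of aligned ECG beats."""
    mean = beats.mean(axis=0)
    U, S, Vt = np.linalg.svd(beats - mean, full_matrices=False)
    for k in range(1, len(S) + 1):
        recon = mean + U[:, :k] * S[:k] @ Vt[:k]
        if prdn(beats, recon) <= prdn_limit:
            break
    # store only: the mean beat, k eigenvectors, and k scores per beat
    scores = U[:, :k] * S[:k]
    return mean, Vt[:k], scores
```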
Harford, Thomas C.; Yi, Hsiao-ye; Faden, Vivian B.; Chen, Chiung M.
2015-01-01
Background There is limited information on the validity of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) alcohol use disorders (AUD) symptom criteria among adolescents in the general population. The purpose of the present study is to assess the DSM-IV AUD symptom criteria as reported by adolescent and adult drinkers in a single representative sample of the U.S. population ages 12 years and older. This design avoids potential confounding due to differences in survey methodology when comparing adolescents and adults from different surveys. Methods A total of 133,231 current drinkers (had at least one drink in the past year) ages 12 years and older were drawn from respondents to the 2002–2005 National Surveys on Drug Use and Health. DSM-IV AUD criteria were assessed by questions related to specific symptoms occurring during the past 12 months. Factor analytic (FA) and item response theory (IRT) models were applied to the 11 AUD symptom criteria to assess the probabilities of symptom item endorsements across different values of the underlying trait. Results A one-factor model provided an adequate and parsimonious interpretation for the 11 AUD criteria for the total sample and for each of the gender-age groups. A multiple indicators, multiple causes (MIMIC) model indicated significant item bias for some criteria by gender, age, and race/ethnicity. Symptom criteria for “tolerance,” “time spent,” and “hazardous use” had lower item thresholds (i.e., lower severity) and low item discrimination, and they were well separated from the other symptoms, especially in the two younger age groups (12–17 and 18–25). “Larger amounts,” “cut down,” “withdrawal,” and “legal problems” had higher item thresholds but generally lower item discrimination, and they tended to exhibit greater dispersion at higher AUD severity, particularly in the youngest age group (12–17). Conclusions Findings from the present study do not provide support for the two separate DSM-IV diagnoses of alcohol abuse and dependence among either adolescents or adults. Variations in criteria severity for both abuse and dependence offer support for a dimensional approach to diagnosis, which should be considered in the ongoing development of DSM-V. PMID:19320629
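For orientation on the IRT quantities above, here is a minimal sketch of the two-parameter logistic (2PL) item response function; the discrimination (a) and threshold (b) values are illustrative, not estimates from the survey.

```python
import numpy as np

def p_endorse(theta, a, b):
    """2PL probability of endorsing a symptom criterion.
    theta: latent AUD severity; a: item discrimination; b: item threshold."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(p_endorse(theta, a=0.8, b=-1.0))  # low-threshold item ("tolerance"-like)
print(p_endorse(theta, a=1.0, b=1.5))   # high-threshold item ("withdrawal"-like)
```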
Vestibular Evoked Myogenic Potentials (VEMP) Can Detect Asymptomatic Saccular Hydrops
Lin, Ming-Yee; Timmer, Ferdinand C. A.; Oriel, Brad S.; Zhou, Guangwei; Guinan, John J.; Kujawa, Sharon G.; Herrmann, Barbara S.; Merchant, Saumil N.; Rauch, Steven D.
2009-01-01
Objective The objective of this study was to explore the usefulness of vestibular evoked myogenic potential (VEMP) testing for detecting endolymphatic hydrops, especially in the second ear of patients with unilateral Ménière disease (MD). Methods This study was performed at a tertiary care academic medical center. Part I consisted of postmortem temporal bone specimens from the temporal bone collection of the Massachusetts Eye & Ear Infirmary; part II consisted of consecutive consenting adult patients (n = 82) with case histories meeting American Academy of Otolaryngology–Head and Neck Surgery criteria for unilateral MD. Outcome measures consisted of VEMP thresholds in patients and histologic saccular endolymphatic hydrops in postmortem temporal bones. Results Saccular hydrops was observed in the asymptomatic ear in six of 17 (35%) temporal bones from donors with unilateral MD. Clinic patients with unilateral MD showed elevated mean VEMP thresholds and altered VEMP tuning in their symptomatic ears and, to a lesser degree, in their asymptomatic ears. Specific VEMP frequency and tuning criteria were used to define a “Ménière-like” response. This “Ménière-like” response was seen in 27% of asymptomatic ears of our patients with unilateral MD. Conclusions Bilateral involvement is seen in approximately one third of MD cases. Saccular hydrops appears to precede symptoms in bilateral MD. Changes in VEMP threshold and tuning appear to be sensitive to these structural changes in the saccule. If so, then VEMP may be useful as a detector of asymptomatic saccular hydrops and as a predictor of evolving bilateral MD. PMID:16735912
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Graham
2008-01-15
The evaluation and communication of the significance of environmental effects remains a critical yet poorly understood component of EIA theory and practice. Following a conceptual overview of the generic dimensions of impact significance in EIA, this paper reports upon the findings of an empirical study of recent environmental impact statements that considers the treatment of significance for impacts concerning landscape ('see no evil') and noise ('hear no evil'), focussing specifically upon the evaluation and communication of impact significance ('speak no evil') in UK practice. Particular attention is given to the use of significance criteria and thresholds, including the development of a typology of approaches applied within the context of noise and landscape/visual impacts. Following a broader discussion of issues surrounding the formulation, application and interpretation of significance criteria, conclusions and recommendations relevant to wider EIA practice are suggested.
The Threshold of Toxicologic Concern (TTC) is an approach used for decades in human hazard assessment. A TTC establishes an exposure level for a chemical below which no appreciable risk to human health is expected, based upon a de minimis value for toxicity identified for many ...
Śliwińska-Kowalska, Mariola; Zaborowski, Kamil
2017-01-01
Background: Hearing loss is defined as worsening of hearing acuity and is usually expressed as an increase in the hearing threshold. Tinnitus, defined as “ringing in the ear”, is a common and often disturbing accompaniment of hearing loss. Hearing loss and environmental exposures to noise are increasingly recognized health problems. Objectives: The objective was to assess whether an exposure-response relationship can be established between exposures to non-occupational noise and permanent hearing outcomes such as permanent hearing loss and tinnitus. Methods: Information sources: Computer searches of all accessible medical and other databases (PubMed, Web of Science, Scopus) were performed and complemented with manual searches. The search was not limited to a particular time span, except for the effects of personal listening devices (PLDs). The latter was limited to the years 2008–June 2015, since previous knowledge was summarized by a SCENIHR descriptive systematic review published in 2008. Study eligibility criteria: The inclusion criteria were as follows: the exposure to noise was measured in sound pressure levels (SPLs) and expressed in individual equivalent decibel values (LEX,8h), the studies included both exposed and reference groups, and the outcome was a permanent health effect, i.e., permanent hearing loss assessed with pure-tone audiometry and/or permanent tinnitus assessed with a questionnaire. The eligibility criteria were evaluated by two independent reviewers. Study appraisal and synthesis methods: The risk of bias was assessed for all of the papers using a template for assessment of quality and the risk of bias. The GRADE (grading of recommendations assessment, development, and evaluation) approach was used to assess the overall quality of evidence. Meta-analysis was not possible due to methodological heterogeneity of the included studies and the inadequacy of data. Results: Out of 220 references identified, five studies fulfilled the inclusion criteria. All of them were related to the use of PLDs and comprised a total of 1551 teenagers and young adults. Three studies used hearing loss as the outcome and three used tinnitus. There was a positive correlation between noise level and hearing loss either at standard or extended high frequencies in all three of the studies on hearing loss. In one study, there was also a positive correlation between the duration of PLD use and hearing loss. There was no association between prolonged listening to loud music through PLDs and tinnitus, or the results were contradictory. All of the evidence was of low quality. Limitations: The studies are cross-sectional. No study provides odds ratios of hearing loss by the level of exposure to noise. Conclusions: While using very strict inclusion criteria, there is low-quality GRADE evidence that prolonged listening to loud music through PLDs increases the risk of hearing loss and results in worsening standard frequency audiometric thresholds. However, specific threshold analyses focused on stratifying risk according to clearly defined levels of exposure are missing. Future studies are needed to provide actionable guidance for PLD users. No studies fulfilling the inclusion criteria related to other isolated or combined exposures to environmental noise were identified. PMID:28953238
Cameron, David; Ubels, Jasper; Norström, Fredrik
2018-01-01
The amount a government should be willing to invest in adopting new medical treatments has long been under debate. With many countries using formal cost-effectiveness (C/E) thresholds when examining potential new treatments and ever-growing medical costs, accurately setting the level of a C/E threshold can be essential for an efficient healthcare system. The aim of this systematic review is to describe the prominent approaches to setting a C/E threshold, compile available national-level C/E threshold data and willingness-to-pay (WTP) data, and to discern whether associations exist between these values, gross domestic product (GDP) and health-adjusted life expectancy (HALE). This review further examines current obstacles faced with the presently available data. A systematic review was performed to collect articles which have studied national C/E thresholds and WTP per quality-adjusted life year (QALY) in the general population. Associations between GDP, HALE, WTP, and C/E thresholds were analyzed with correlations. Seventeen countries were identified from nine unique sources to have formal C/E thresholds within our inclusion criteria. Thirteen countries from nine sources were identified to have WTP per QALY data within our inclusion criteria. Two possible associations were identified: C/E thresholds with HALE (quadratic correlation of 0.63), and C/E thresholds with GDP per capita (polynomial correlation of 0.84). However, these results are based on few observations and therefore firm conclusions cannot be made. Most national C/E thresholds identified in our review fall within the WHO's recommended range of one-to-three times GDP per capita. However, the quality and quantity of data available regarding national average WTP per QALY, opportunity costs, and C/E thresholds is poor in comparison to the importance of adequate investment in healthcare. There exists an obvious risk that countries might either over- or underinvest in healthcare if they base their decision-making process on erroneous presumptions or non-evidence-based methodologies. The commonly cited value of US$100,000 per QALY may have some basis.
Angst, Jules; Cui, Lihong; Swendsen, J. Joel; Rothen, S.; Cravchik, Anibal; Kessler, Ronald; Merikangas, Kathleen
2011-01-01
Objectives There is growing clinical and epidemiologic evidence indicating that major mood disorders form a spectrum from Major Depressive Disorder (MDD) to pure mania. The present investigation examined the prevalence and clinical correlates of MDD with sub-threshold bipolarity vs. pure MDD in the National Comorbidity Survey Replication (NCS-R). Methods The NCS-R is a nationally representative face-to-face household survey of the U.S. population conducted between February, 2001 and April, 2003. Lifetime history of mood disorders, symptoms and clinical indicators of severity were collected using version 3.0 of the WHO Composite International Diagnostic Interview, a fully structured lay-administered diagnostic interview. Results Nearly 40% of study participants with a history of major depressive disorder had a history of sub-threshold hypomania. This subgroup had a younger age of disorder onset, more episodes of depression, and higher rates of comorbidity than those without a history of hypomania, and lower levels of clinical severity than those with bipolar II disorder. Conclusions The findings demonstrate heterogeneity of major depressive disorder and support the validity of inclusion of sub-threshold mania in the diagnostic classification. The broadening of criteria for bipolar disorder would have important implications for research and clinical practice. PMID:20713498
Kohli, Preeti; Storck, Kristina A.; Schlosser, Rodney J.
2016-01-01
Differences in testing modalities and cut-points used to define olfactory dysfunction contribute to the wide variability in estimates of the prevalence of olfactory dysfunction in chronic rhinosinusitis (CRS). The aim of this study is to report the prevalence of olfactory impairment using each component of the Sniffin’ Sticks test (threshold, discrimination, identification, and total score) with age-adjusted and ideal cut-points from normative populations. Patients meeting diagnostic criteria for CRS were enrolled from rhinology clinics at a tertiary academic center. Olfaction was assessed using the Sniffin’ Sticks test. The study population consisted of 110 patients. The prevalence of normosmia, hyposmia, and anosmia using the total Sniffin’ Sticks score was 41.8%, 20.0%, and 38.2% using age-appropriate cut-points and 20.9%, 40.9%, and 38.2% using ideal cut-points. Olfactory impairment estimates for each dimension mirrored these findings, with threshold yielding the highest values. Threshold, discrimination, and identification were also found to be significantly correlated with each other (P < 0.001). In addition, computed tomography scores, asthma, allergy, and diabetes were found to be associated with olfactory dysfunction. In conclusion, the prevalence of olfactory dysfunction depends upon the olfactory dimension tested and whether age-adjusted cut-points are used. The method of olfactory testing should be chosen based upon specific clinical and research goals. PMID:27469973
Diamond, Ivan R; Grant, Robert C; Feldman, Brian M; Pencharz, Paul B; Ling, Simon C; Moore, Aideen M; Wales, Paul W
2014-04-01
To investigate how consensus is operationalized in Delphi studies and to explore the role of consensus in determining the results of these studies. Systematic review of a random sample of 100 English-language Delphi studies, from two large multidisciplinary databases [ISI Web of Science (Thomson Reuters, New York, NY) and Scopus (Elsevier, Amsterdam, NL)], published between 2000 and 2009. Ninety-eight of the 100 Delphi studies purported to assess consensus, although a definition for consensus was provided in only 72 of the studies (64 a priori). The most common definition of consensus was percent agreement (25 studies), with 75% being the median threshold used to define consensus. Although the authors concluded in 86 of the studies that consensus was achieved, consensus was specified a priori (with a threshold value) in only 42 of these studies. Achievement of consensus was related to the decision to stop the Delphi study in only 23 studies, with 70 studies terminating after a specified number of rounds. Although consensus generally is felt to be of primary importance to the Delphi process, definitions of consensus vary widely and are poorly reported. Improved criteria for reporting of methods of Delphi studies are required. Copyright © 2014 Elsevier Inc. All rights reserved.
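A minimal sketch of the most common consensus definition found in the review, percent agreement against an a priori threshold (75% was the median); the agreement rule and the Likert cut-off are illustrative assumptions.

```python
def consensus_reached(ratings, agree=lambda r: r >= 4, threshold=0.75):
    """ratings: one item's panel ratings (e.g., a 1-5 Likert scale).
    Consensus if the share of 'agree' ratings meets the a priori threshold."""
    share = sum(1 for r in ratings if agree(r)) / len(ratings)
    return share >= threshold

print(consensus_reached([5, 4, 4, 3, 5, 4, 2, 4]))  # 6/8 = 0.75 -> True
```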
Travelling Policies and Contextual Considerations: On Threshold Criteria
ERIC Educational Resources Information Center
Nir, Adam; Kondakci, Yasar; Emil, Serap
2018-01-01
Educational policy borrowing has become rather common in our globalised world. However, the literature lacks contextual criteria that may be employed by researchers and policy makers to assess the correspondence of a particular policy to the local context of a borrowing system. Based on a secondary analysis of documents and research reports, this…
38 CFR 61.13 - Rating criteria for capital grant applications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... capital grant applications. § 61.13, Pensions, Bonuses, and Veterans' Relief, DEPARTMENT OF... (a) Applicants that meet the threshold requirements in § 61.12 of this part will then be rated using the selection criteria listed in this section. To be eligible for a capital...
2012-01-01
Background The test characteristics of head circumference (HC) measurement percentile criteria for the identification of previously undetected pathology associated with head enlargement in primary care are unknown. Methods Electronic patient records were reviewed to identify children age 3 days to 3 years with new diagnoses of intracranial expansive conditions (IEC) and metabolic and genetic conditions associated with macrocephaly (MGCM). We tested the following HC percentile threshold criteria: ever above the 95th, 97th, or 99.6th percentile and ever crossing 2, 4, or 6 increasing major percentile lines. The Centers for Disease Control and Prevention (CDC) and World Health Organization (WHO) growth curves were used, as well as the primary care network (PCN) curves previously derived from this cohort. Results Among 74,428 subjects, 85 (0.11%) had a new diagnosis of IEC (n = 56) or MGCM (n = 29); of these, 24 received intervention. The 99.6th percentile of the PCN curve was the only threshold with a PPV over 1% (PPV 1.8%); the sensitivity of this threshold was only 15%. Test characteristics for the 95th percentiles were: sensitivity (CDC: 46%; WHO: 55%; PCN: 40%), positive predictive value (PPV: CDC: 0.3%; WHO: 0.3%; PCN: 0.4%), and likelihood ratios positive (LR+: CDC: 2.8; WHO: 2.2; PCN: 3.9). Test characteristics for the 97th percentiles were: sensitivity (CDC: 40%; WHO: 48%; PCN: 34%), PPV (CDC: 0.4%; WHO: 0.3%; PCN: 0.6%), and LR+ (CDC: 3.6; WHO: 2.7; PCN: 5.6). Test characteristics for crossing 2 increasing major percentile lines were: sensitivity (CDC: 60%; WHO: 40%; PCN: 31%), PPV (CDC: 0.2%; WHO: 0.1%; PCN: 0.2%), and LR+ (CDC: 1.3; WHO: 1.1; PCN: 1.5). Conclusions Commonly used HC percentile thresholds had low sensitivity and low positive predictive value for diagnosing new pathology associated with head enlargement in children in a primary care network. PMID:22269214
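The test characteristics above follow from a standard 2x2 table; the sketch below shows the arithmetic with placeholder counts, not the study's data.

```python
def test_characteristics(tp, fp, fn, tn):
    """Sensitivity, PPV and LR+ from a 2x2 table of
    threshold positivity vs. new IEC/MGCM diagnosis."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    lr_pos = sens / (1 - spec)
    return {"sensitivity": sens, "PPV": ppv, "LR+": lr_pos}

# placeholder counts for illustration only
print(test_characteristics(tp=13, fp=700, fn=72, tn=73643))
```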
Ding, Changfeng; Ma, Yibing; Li, Xiaogang; Zhang, Taolin; Wang, Xingxiang
2018-04-01
Cadmium (Cd) is an environmental toxicant with high rates of soil-plant transfer. It is essential to establish accurate soil thresholds for the implementation of soil management practices. This study takes root vegetables as an example to derive soil thresholds for Cd based on the food quality standard as well as on health risk assessment, using species sensitivity distributions (SSD). A soil-type-specific bioconcentration factor (BCF, the ratio of the Cd concentration in the plant to that in the soil), generated from soils with a proper Cd concentration gradient, was calculated and applied in the derivation of soil thresholds instead of a generic BCF value, to minimize uncertainty. The sensitivity variations of twelve root vegetable cultivars in accumulating soil Cd and the empirical soil-plant transfer model were investigated and developed in greenhouse experiments. After normalization, the hazardous concentrations at the fifth percentile of the distribution based on added Cd (HC5add) were calculated from the SSD curves fitted by the Burr Type III distribution. The derived soil thresholds were presented as continuous or scenario criteria depending on the combination of soil pH and organic carbon content. The soil thresholds based on the food quality standard were on average 0.7-fold those based on health risk assessment, and were further validated to be reliable using independent data from a field survey and published articles. The results suggest that deriving soil thresholds for Cd using the SSD method is robust and also applicable to other crops, as well as to other trace elements that have the potential to cause health risk issues. Copyright © 2017 Elsevier B.V. All rights reserved.
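A hedged sketch of the SSD step: fit a Burr Type III distribution (scipy's `burr`) to normalized cultivar sensitivity values and read the 5th percentile as HC5add. The data vector is illustrative, and fixing the location at zero is an assumption.

```python
import numpy as np
from scipy import stats

# illustrative normalized sensitivity values (added-Cd effect levels, mg/kg)
ec_values = np.array([0.8, 1.1, 1.4, 1.6, 2.0, 2.3, 2.9, 3.4, 4.1, 5.0, 6.2, 7.5])

# scipy.stats.burr is the Burr Type III family (shape parameters c, d)
c, d, loc, scale = stats.burr.fit(ec_values, floc=0)
hc5_add = stats.burr.ppf(0.05, c, d, loc=loc, scale=scale)
print(f"HC5_add ~ {hc5_add:.2f} mg/kg added Cd")
```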
Assessment of Maximum Aerobic Capacity and Anaerobic Threshold of Elite Ballet Dancers.
Wyon, Matthew A; Allen, Nick; Cloak, Ross; Beck, Sarah; Davies, Paul; Clarke, Frances
2016-09-01
An athlete's cardiorespiratory profile, maximal aerobic capacity, and anaerobic threshold are affected by training regimen and competition demands. The present study aimed to ascertain whether there are company rank differences in maximal aerobic capacity and anaerobic threshold in elite classical ballet dancers. Seventy-four volunteers (M 34, F 40) were recruited from two full-time professional classical ballet companies. All participants completed a continuous incremental treadmill protocol with a 1-km/hr speed increase at the end of each 1-min stage until termination criteria had been achieved (e.g., voluntary cessation, respiratory exchange ratio >1.15, HR within ±5 bpm of estimated HRmax). Peak VO2 (5-breath smoothed) was recorded and the anaerobic threshold calculated using the ventilatory curve and ventilatory equivalents methods. Statistical analysis reported between-subject effects for gender (F1,67=35.18, p<0.001) and rank (F1,67=8.67, p<0.001); post hoc tests reported soloists (39.5±5.15 mL/kg/min) as having significantly lower VO2 peak than artists (45.9±5.75 mL/kg/min, p<0.001) and principal dancers (48.07±3.24 mL/kg/min, p<0.001). Significant differences in anaerobic threshold were reported for age (F1,67=7.68, p=0.008) and rank (F1,67=3.56, p=0.034); post hoc tests reported artists (75.8±5.45%) as having a significantly lower anaerobic threshold than soloists (80.9±5.71%, p<0.01) and principals (84.1±4.84%, p<0.001). The observed differences in VO2 peak and anaerobic threshold between the ranks in ballet companies are probably due to the different rehearsal and performance demands.
Cormier, Susan M; Zheng, Lei; Hayslip, Gretchen; Flaherty, Colleen M
2018-08-15
The concentration of salts in streams is increasing worldwide, making freshwater a declining resource. Developing thresholds for freshwater with low specific conductivity (SC), a measure of dissolved ions in water, may protect high-quality resources that are refugia for aquatic life and that dilute downstream waters. In this case example, methods are illustrated for estimating protective levels for streams with low SC. The Cascades in the Pacific Northwest of the United States of America were selected for the case study because a geophysical model indicated that the SC of freshwater streams there was likely to be very low. Also, there was an insufficient range in the SC data to accurately derive a criterion using the 2011 US Environmental Protection Agency field-based extirpation concentration distribution method. Instead, background SC and a regression model were used to estimate chronic and acute SC levels that could extirpate 5% of benthic invertebrate genera. Background SC was estimated as the 25th centile (33 μS/cm) of the measured data and used as the independent variable in a least-squares empirical background-to-criteria (B-C) model. Because no comparison could be made with effect levels estimated from a paired SC and biological data set from the Cascades, the lower 50% prediction limit (PL) was identified as an example chronic water quality criterion (97 μS/cm). The maximum exposure threshold was estimated as the 90th centile SC of streams meeting the chronic SC level. The example acute SC level was 190 μS/cm. Because paired aquatic life and SC data are often sparse, the B-C method is useful for developing SC criteria for other systems with limited data. Published by Elsevier B.V.
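One way to sketch the B-C step is an ordinary least squares fit in log space with the lower bound of a prediction interval read off as the criterion; the data arrays are illustrative, and this particular PL construction is an assumption, since prediction-limit definitions vary.

```python
import numpy as np
import statsmodels.api as sm

# illustrative data, not the study's: background SC vs. paired effect levels (µS/cm)
bkg = np.array([20.0, 33.0, 50.0, 80.0, 120.0, 200.0])
xc95 = np.array([70.0, 97.0, 140.0, 210.0, 300.0, 480.0])

X = sm.add_constant(np.log10(bkg))
fit = sm.OLS(np.log10(xc95), X).fit()

x_new = np.column_stack([[1.0], np.log10([33.0])])        # 25th-centile background
frame = fit.get_prediction(x_new).summary_frame(alpha=0.5)
chronic = 10 ** frame["obs_ci_lower"].iloc[0]             # lower 50% PL (one construction)
print(f"example chronic criterion ~ {chronic:.0f} µS/cm")
```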
Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F
2017-08-01
Comparability of topographical data of implant surfaces in the literature is low and their clinical relevance often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and the Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Depending on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the purely statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and seeking proper experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
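Two of the height parameters compared above are easy to state directly; the sketch below computes Sa (arithmetic mean height) and Sz (here taken as maximum peak-to-valley height) from a levelled synthetic height map. The synthetic data and the simplified Sz definition are assumptions for illustration.

```python
import numpy as np

def sa(z):
    """Sa: arithmetic mean of absolute height deviations from the mean plane."""
    zc = z - z.mean()
    return np.abs(zc).mean()

def sz(z):
    """Sz here: maximum peak-to-valley height (max - min)."""
    return z.max() - z.min()

z = np.random.default_rng(0).normal(0.0, 0.5, (256, 256))  # synthetic map, µm
print(f"Sa = {sa(z):.3f} µm, Sz = {sz(z):.3f} µm")
```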
Alabbadi, Ibrahim; Crealey, Grainne; Scott, Michael; Baird, Simon; Trouton, Tom; Mairs, Jill; McElnay, James
2006-01-01
System of Objectified Judgement Analysis (SOJA) is a structured approach to the selection of drugs for formulary inclusion. However, while SOJA is a very important advance in drug selection for formulary purposes, it is hospital-based and can only be applied to one indication at a time. In SOJA, cost has been given a primary role in the selection process, as it has been included as a selection criterion from the start. Cost may therefore drive the selection of a particular drug product at the expense of other basic criteria such as safety or efficacy. The aims of this study were to use a modified SOJA approach in the selection of ACE inhibitors (ACEIs) for use in a joint formulary that bridges primary and secondary care within a health board in Northern Ireland, and to investigate the potential impact of the joint formulary on prescribing costs of ACEIs in that health board. The modified SOJA approach involved four phases in sequence: an evidence-based pharmacotherapeutic evaluation of all available ACEI drug entities, a separate safety/risk assessment analysis of products containing agents that exceeded the pharmacotherapeutic threshold, a budget-impact analysis and, finally, the selection of product lines. A comprehensive literature review and expert panel judgement informed the selection of criteria (and their relative weighting) for the pharmacotherapeutic evaluation. The resultant criteria/scoring system was circulated (in questionnaire format) to prescribers and stakeholders for comment. Based on statistical analysis of the latter survey results, the final scoring system was developed. Drug entities that exceeded the evidence threshold were sequentially entered into the second and third phases of the process. Five drug entities (of the 11 currently available in the UK) exceeded the evidence threshold, and 22 of the 26 submitted product lines containing these drug entities satisfied the safety/risk assessment criteria. Three product lines, each containing a different drug entity, were selected for formulary inclusion after the budget impact analysis was performed. The estimated potential annual cost savings for ACEIs (based on estimated annual usage in defined daily doses) for this particular health board were 42%. The modified SOJA approach has a significant contribution to make in containing the costs of ACEIs. Applying modified SOJA as a practical method for all indications will allow the development of a unified formulary that bridges secondary and primary care.
Marek, Ryan J; Ben-Porath, Yossef S; Ashton, Kathleen; Heinberg, Leslie J
2014-07-01
Binge eating disorder (BED) was recently included in the DSM-5. The prevalence rate for BED using the DSM-IV-TR research criteria tends to be higher in bariatric surgery candidates than in the normative population; however, no studies have examined how many more bariatric surgery candidates will meet the new, less conservative criteria of DSM-5. We explore the change in the current BED prevalence rate in a sample of bariatric surgery candidates. Data were obtained for 1,283 bariatric surgery candidates. 84 men and 213 women were diagnosed with current BED using DSM-IV-TR research criteria. A semi-structured interview, the binge eating scale (BES), and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) were given to every patient as part of standard procedures mandated by the facility. An additional 3.43% (p < .001) of bariatric surgery candidates met the diagnostic threshold for BED when using DSM-5 criteria. These individuals were demographically similar and produced similar MMPI-2-RF and BES scores when compared with patients who met DSM-IV-TR criteria for BED. Thus, the current investigation indicates that individuals meeting BED criteria based on DSM-5 are similar to those meeting the more conservative diagnostic threshold outlined in DSM-IV-TR in a sample of bariatric surgery candidates. © 2014 Wiley Periodicals, Inc.
Arous, Edward J; Simons, Jessica P; Flahive, Julie M; Beck, Adam W; Stone, David H; Hoel, Andrew W; Messina, Louis M; Schanzer, Andres
2015-10-01
Carotid endarterectomy (CEA) for asymptomatic carotid artery stenosis is among the most common procedures performed in the United States. However, consensus is lacking regarding optimal preoperative imaging, carotid duplex ultrasound criteria, and ultimately, the threshold for surgery. We sought to characterize national variation in preoperative imaging, carotid duplex ultrasound criteria, and threshold for surgery for asymptomatic CEA. The Society for Vascular Surgery Vascular Quality Initiative (VQI) database was used to identify all CEA procedures performed for asymptomatic carotid artery stenosis between 2003 and 2014. VQI currently captures 100% of CEA procedures performed at >300 centers by >2000 physicians nationwide. Three analyses were performed to quantify the variation in (1) preoperative imaging, (2) carotid duplex ultrasound criteria, and (3) threshold for surgery. Of 35,695 CEA procedures in 33,488 patients, the study cohort was limited to 19,610 CEA procedures (55%) performed for asymptomatic disease. The preoperative imaging modality used before CEA varied widely, with 57% of patients receiving a single preoperative imaging study (duplex ultrasound imaging, 46%; computed tomography angiography, 7.5%; magnetic resonance angiography, 2.0%; cerebral angiography, 1.3%) and 43% of patients receiving multiple preoperative imaging studies. Of the 16,452 asymptomatic patients (89%) who underwent preoperative duplex ultrasound imaging, there was significant variability between centers in the degree of stenosis (50%-69%, 70%-79%, 80%-99%) designated for a given peak systolic velocity, end diastolic velocity, and internal carotid artery-to-common carotid artery ratio. Although 68% of CEA procedures in asymptomatic patients were performed for an 80% to 99% stenosis, 26% were performed for a 70% to 79% stenosis, and 4.1% were performed for a 50% to 69% stenosis. At the surgeon level, the percentage of CEA procedures performed for a <80% asymptomatic carotid artery stenosis ranges from 0% to 100%. Similarly, at the center level, the percentage of CEA procedures performed for a <80% asymptomatic carotid artery stenosis ranges from 0% to 100%. Despite CEA being an extremely common procedure, there is widespread variation in the three primary determinants (preoperative imaging, carotid duplex ultrasound criteria, and threshold for surgery) of whether CEA is performed for asymptomatic carotid stenosis. Standardizing the approach to care for asymptomatic carotid artery stenosis will mitigate the significant downstream effects of this variation on health care costs. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Yang, Xujun; Li, Chuandong; Song, Qiankun; Chen, Jiyang; Huang, Junjian
2018-05-04
This paper addresses the stability and synchronization problems of fractional-order quaternion-valued neural networks (FQVNNs) with linear threshold neurons. On account of the non-commutativity of quaternion multiplication resulting from the Hamilton rules, the FQVNN models are separated into four real-valued neural network (RVNN) models. Consequently, the dynamic analysis of FQVNNs can be realized by investigating the real-valued ones. Based on the method of M-matrices, the existence and uniqueness of the equilibrium point of the FQVNNs are obtained without detailed proof. Afterwards, several sufficient criteria ensuring the global Mittag-Leffler stability of the unique equilibrium point of the FQVNNs are derived by applying the Lyapunov direct method, the theory of fractional differential equations, the theory of matrix eigenvalues, and some inequality techniques. Meanwhile, global Mittag-Leffler synchronization for drive-response models of the addressed FQVNNs is investigated explicitly. Finally, simulation examples are designed to verify the feasibility and availability of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
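For orientation, the standard definitions behind the stability notion above can be sketched as follows (in the usual Li-Chen-Podlubny formulation; the notation is assumed, not the paper's own):

```latex
% One-parameter Mittag-Leffler function
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}, \qquad \alpha > 0.
% Mittag-Leffler stability: solutions obey
\|x(t)\| \le \left[ m\big(x(t_0)\big)\, E_\alpha\big(-\lambda (t - t_0)^\alpha\big) \right]^{b},
\qquad \lambda > 0,\ b > 0,
% with m(0) = 0 and m(x) \ge 0 locally Lipschitz.
```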
Flethøj, Mette; Kanters, Jørgen K; Pedersen, Philip J; Haugaard, Maria M; Carstensen, Helena; Olsen, Lisbeth H; Buhl, Rikke
2016-11-28
Although premature beats are a matter of concern in horses, the interpretation of equine ECG recordings is complicated by a lack of standardized analysis criteria and a limited knowledge of the normal beat-to-beat variation of equine cardiac rhythm. The purpose of this study was to determine the appropriate threshold levels of maximum acceptable deviation of RR intervals in equine ECG analysis, and to evaluate a novel two-step timing algorithm by quantifying the frequency of arrhythmias in a cohort of healthy adult endurance horses. Beat-to-beat variation differed considerably with heart rate (HR), and an adaptable model consisting of three different HR ranges with separate threshold levels of maximum acceptable RR deviation was consequently defined. For resting HRs <60 beats/min (bpm) the threshold level of RR deviation was set at 20%, for HRs in the intermediate range between 60 and 100 bpm the threshold was 10%, and for exercising HRs >100 bpm, the threshold level was 4%. Supraventricular premature beats represented the most prevalent arrhythmia category with varying frequencies in seven horses at rest (median 7, range 2-86) and six horses during exercise (median 2, range 1-24). Beat-to-beat variation of equine cardiac rhythm varies according to HR, and threshold levels in equine ECG analysis should be adjusted accordingly. Standardization of the analysis criteria will enable comparisons of studies and follow-up examinations of patients. A small number of supraventricular premature beats appears to be a normal finding in endurance horses. Further studies are required to validate the findings and determine the clinical significance of premature beats in horses.
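A simplified stand-in for the proposed criterion (not the paper's two-step timing algorithm itself) can be written directly from the three HR bands above; the local-mean window length is an illustrative assumption.

```python
import numpy as np

def rr_threshold(hr_bpm):
    """Maximum acceptable RR deviation by heart rate band."""
    if hr_bpm < 60:
        return 0.20
    if hr_bpm <= 100:
        return 0.10
    return 0.04

def flag_premature(rr_ms, window=5):
    """rr_ms: RR intervals in milliseconds; returns indices of flagged beats."""
    flagged = []
    for i in range(len(rr_ms)):
        lo = max(0, i - window)
        if lo == i:
            continue
        local_mean = np.mean(rr_ms[lo:i])
        hr = 60000.0 / local_mean
        if abs(rr_ms[i] - local_mean) / local_mean > rr_threshold(hr):
            flagged.append(i)
    return flagged

print(flag_premature([1200, 1210, 1190, 800, 1205, 1195]))  # -> [3]
```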
NASA Astrophysics Data System (ADS)
Choi, Mi-Ran; Hundertmark, Dirk; Lee, Young-Ran
2017-10-01
We prove a threshold phenomenon for the existence/non-existence of energy minimizing solitary solutions of the diffraction management equation for strictly positive and zero average diffraction. Our methods allow for a large class of nonlinearities; they are, for example, allowed to change sign. On the local diffraction profile we impose the weakest possible condition: it only has to be locally integrable. The solutions are found as minimizers of a nonlinear and nonlocal variational problem which is translation invariant. There exists a critical threshold λcr such that minimizers for this variational problem exist if their power is bigger than λcr, and no minimizers exist with power less than the critical threshold. We also give simple criteria for the finiteness and strict positivity of the critical threshold. Our proof of existence of minimizers is rather direct and avoids the use of Lions' concentration compactness argument. Furthermore, we give precise quantitative lower bounds on the exponential decay rate of the diffraction management solitons, which confirm the physical heuristic prediction for the asymptotic decay rate. Moreover, for ground state solutions, these bounds give a quantitative lower bound for the divergence of the exponential decay rate in the limit of vanishing average diffraction. For zero average diffraction, we prove quantitative bounds which show that the solitons decay much faster than exponentially. Our results considerably extend and strengthen the results of Hundertmark and Lee [J. Nonlinear Sci. 22, 1-38 (2012) and Commun. Math. Phys. 309(1), 1-21 (2012)].
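Schematically, and with notation assumed rather than taken from the paper, the threshold statement reads:

```latex
% H(u): the nonlocal averaged energy; lambda: the fixed power
E_\lambda = \inf\big\{\, H(u) \;:\; \|u\|_{L^2}^2 = \lambda \,\big\},
\qquad
\text{minimizers exist for } \lambda > \lambda_{\mathrm{cr}},
\quad \text{none exist for } \lambda < \lambda_{\mathrm{cr}}.
```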
Ensrud, Kristine E; Taylor, Brent C; Peters, Katherine W; Gourlay, Margaret L; Donaldson, Meghan G; Leslie, William D; Blackwell, Terri L; Fink, Howard A; Orwoll, Eric S; Schousboe, John
2014-07-03
To quantify incremental effects of applying different criteria to identify men who are candidates for drug treatment to prevent fracture and to examine the extent to which fracture probabilities vary across distinct categories of men defined by these criteria. Cross-sectional and longitudinal analysis of a prospective cohort study. Multicenter Osteoporotic Fractures in Men (MrOS) study in the United States. 5880 untreated community-dwelling men aged 65 years or over classified into four distinct groups: osteoporosis by World Health Organization criteria alone; osteoporosis by National Osteoporosis Foundation (NOF) but not WHO criteria; no osteoporosis but at high fracture risk (at or above NOF-derived FRAX intervention thresholds recommended for the US); and no osteoporosis and at low fracture risk (below NOF-derived FRAX intervention thresholds recommended for the US). Proportion of men identified for drug treatment; predicted 10 year probabilities of hip and major osteoporotic fracture calculated using the FRAX algorithm with femoral neck bone mineral density; observed 10 year probabilities for confirmed incident hip and major osteoporotic (hip, clinical vertebral, wrist, or humerus) fracture events calculated using cumulative incidence estimation, accounting for the competing risk of mortality. 130 (2.2%) men were identified as having osteoporosis by using the WHO definition, and an additional 422 were identified by applying the NOF definition (total osteoporosis prevalence 9.4%). Application of NOF-derived FRAX intervention thresholds led to 936 (15.9%) additional men without osteoporosis being identified as at high fracture risk, raising the total prevalence of men potentially eligible for drug treatment to 25.3%. Observed 10 year hip fracture probabilities were 20.6% for men with osteoporosis by WHO criteria alone, 6.8% for men with osteoporosis by NOF (but not WHO) criteria, 6.4% for men without osteoporosis but classified as at high fracture risk, and 1.5% for men without osteoporosis and classified as at low fracture risk. A similar pattern was noted in observed fracture probabilities for major osteoporotic fracture. Among men with osteoporosis by WHO criteria, observed fracture probabilities were greater than FRAX predicted probabilities (20.6% v 9.5% for hip fracture and 30.0% v 17.4% for major osteoporotic fracture). Choice of definition of osteoporosis and use of NOF-derived FRAX intervention thresholds have major effects on the proportion of older men identified as warranting drug treatment to prevent fracture. Among men identified with osteoporosis by WHO criteria, who comprised 2% of the study population, actual observed fracture probabilities during 10 years of follow-up were highest and exceeded FRAX predicted fracture probabilities. On the basis of findings from randomized trials in women, these men are most likely to benefit from treatment. Expanding indications for treatment beyond this small group has uncertain value owing to lower observed fracture probabilities and uncertain benefits of treatment among men not selected on the basis of WHO criteria. © Ensrud et al 2014.
Environmental Assessment: West Coast Basing of C-17 Aircraft
2003-06-01
Emissions for criteria pollutants will not be regionally significant by United States Environmental Protection Agency standards, will not exceed de minimis thresholds, and a Conformity Determination would not be required. MTRs: emissions from C-17 operations on the ...
A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.
Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing
2018-04-10
To improve the accuracy and reliability of micropipetting, a method for micropipetting fault detection and calibration is proposed that combines dynamic pressure monitoring during the pipetting process with quantitative, image-based identification of the pipetted volume. First, a normalized pressure model for the pipetting process was established from the kinematic model of the pipetting operation, and the pressure model was corrected experimentally. The pressure and its first derivative are monitored in real time during pipetting, and a segmented double-threshold method is used as the fault evaluation criterion; the pressure sensor data are processed by Kalman filtering, which improves the accuracy of fault diagnosis. When a fault occurs, an image of the pipette tip is collected by a camera, the boundary of the liquid region is extracted by a background contrast method, and the liquid volume in the tip is obtained from the geometric characteristics of the tip. The pipetting deviation is fed back to the automatic pipetting module and corrected. Titration tests show that combining the segmented pipetting kinematic model with the double-threshold pressure monitoring method can effectively detect and classify pipetting faults in real time. This closed-loop adjustment of the pipetted volume can effectively improve the accuracy and reliability of the pipetting system.
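A hedged sketch of the fault-evaluation idea: a scalar Kalman filter smooths the pressure trace, and segment-specific double thresholds on the filtered pressure and its first derivative flag faults. The noise covariances and limit values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2):
    """Scalar constant-state Kalman filter over pressure samples z."""
    x, p = z[0], 1.0
    out = []
    for meas in z:
        p += q                       # predict
        k = p / (p + r)              # Kalman gain
        x += k * (meas - x)          # update
        p *= (1 - k)
        out.append(x)
    return np.array(out)

def out_of_limits(value, lims):
    return value < lims[0] or value > lims[1]

def fault_flags(pressure, dt, p_lims=(-5.0, 5.0), dp_lims=(-50.0, 50.0)):
    """Flag samples whose pressure or pressure derivative leaves its band."""
    smooth = kalman_smooth(np.asarray(pressure, dtype=float))
    dp = np.gradient(smooth, dt)
    return [out_of_limits(v, p_lims) or out_of_limits(d, dp_lims)
            for v, d in zip(smooth, dp)]
```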
Development of a precipitation-area curve for warning criteria of short-duration flash flood
NASA Astrophysics Data System (ADS)
Bae, Deg-Hyo; Lee, Moon-Hwan; Moon, Sung-Keun
2018-01-01
This paper presents quantitative criteria for flash flood warning that can be used to rapidly assess flash flood occurrence based only on rainfall estimates. This study was conducted for 200 small mountainous sub-catchments of the Han River basin in South Korea, which has recently suffered many flash flood events. The quantitative criteria are calculated based on flash flood guidance (FFG), which is defined as the depth of rainfall of a given duration required to cause frequent flooding (1-2-year return period) at the outlet of a small stream basin and is estimated using threshold runoff (TR) and antecedent soil moisture conditions in all sub-basins. The soil moisture conditions were estimated during the flooding season, i.e., July, August and September, over 7 years (2002-2009) using the Sejong University Rainfall Runoff (SURR) model. A ROC (receiver operating characteristic) analysis was used to obtain optimum rainfall values, and a generalized precipitation-area (P-A) curve was developed for flash flood warning thresholds. The threshold function was derived as a P-A curve because the short-duration precipitation threshold is more closely related to basin area than to any other variable. Based on the P-A curve, generalized flash flood warning thresholds can be suggested as rainfall rates of 42, 32 and 20 mm h-1 for sub-basins with areas of 22-40, 40-100 and > 100 km2, respectively. The proposed P-A curve was validated against observed flash flood events in different sub-basins; flash flood occurrences were captured for 9 out of 12 events. This result can be used instead of FFG to identify brief flash floods (duration less than 1 h), and it can provide warning information to decision-makers or citizens that is relatively simple, clear and immediate.
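Since the P-A curve reduces to three area bands, the warning rule can be sketched directly; the band edges and rates are those reported above, while the handling of basins below 22 km2 is an assumption of this sketch.

```python
def flash_flood_warning(area_km2, rain_mm_per_h):
    """Return True if the 1-h rainfall rate exceeds the P-A curve threshold."""
    if 22 <= area_km2 < 40:
        threshold = 42.0
    elif 40 <= area_km2 < 100:
        threshold = 32.0
    elif area_km2 >= 100:
        threshold = 20.0
    else:
        raise ValueError("P-A curve not defined below 22 km2 in this sketch")
    return rain_mm_per_h >= threshold

print(flash_flood_warning(55.0, 35.0))  # True: 35 mm/h exceeds the 32 mm/h band
```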
Smith, Michael T.; Wickwire, Emerson M.; Grace, Edward G.; Edwards, Robert R.; Buenaver, Luis F.; Peterson, Stephen; Klick, Brendan; Haythornthwaite, Jennifer A.
2009-01-01
Study Objectives: We characterized sleep disorder rates in temporomandibular joint disorder (TMD) and evaluated possible associations between sleep disorders and laboratory measures of pain sensitivity. Design: Research diagnostic examinations were conducted, followed by two consecutive overnight polysomnographic studies with morning and evening assessments of pain threshold. Setting: Orofacial pain clinic and inpatient sleep research facility. Participants: Fifty-three patients meeting research diagnostic criteria for myofascial TMD. Interventions: N/A. Measurements and Results: We determined sleep disorder diagnostic rates and conducted algometric measures of pressure pain threshold on the masseter and forearm. Heat pain threshold was measured on the forearm. 75% met self-report criteria for sleep bruxism, but only 17% met PSG criteria for active sleep bruxism. Two or more sleep disorders were diagnosed in 43% of patients. Insomnia disorder (36%) and sleep apnea (28.4%) demonstrated the highest frequencies. Primary insomnia (PI) (26%) comprised the largest subcategory of insomnia. Even after controlling for multiple potential confounds, PI was associated with reduced mechanical and thermal pain thresholds at all sites (P < 0.05). Conversely, the respiratory disturbance index was associated with increased mechanical pain thresholds on the forearm (P < 0.05). Conclusions: High rates of PI and sleep apnea highlight the need to refer TMD patients complaining of sleep disturbance for polysomnographic evaluation. The association of PI and hyperalgesia at a non-orofacial site suggests that PI may be linked with central sensitivity and could play an etiologic role in idiopathic pain disorders. The association between sleep disordered breathing and hypoalgesia requires further study and may provide novel insight into the complex interactions between sleep and pain-regulatory processes. Citation: Smith MT; Wickwire EM; Grace EG; Edwards RR; Buenaver LF; Peterson S; Klick B; Haythornthwaite JA. Sleep disorders and their association with laboratory pain sensitivity in temporomandibular joint disorder. SLEEP 2009;32(6):779–790. PMID:19544755
Kanis, John A; Harvey, Nicholas C; Cooper, Cyrus; Johansson, Helena; Odén, Anders; McCloskey, Eugene V
2016-01-01
In most assessment guidelines, treatment for osteoporosis is recommended in individuals with prior fragility fractures, especially fractures at the spine and hip. However, for those without prior fractures, intervention thresholds can be derived using different methods. The aim of this report was to undertake a systematic review of the available information on the use of FRAX® in assessment guidelines, in particular the setting of thresholds and their validation. We identified 120 guidelines or academic papers that incorporated FRAX, of which 38 provided no clear statement on how the derived fracture probabilities are to be used in decision-making in clinical practice. The remainder recommended a fixed intervention threshold (n=58), most commonly as a component of more complex guidance (e.g. bone mineral density (BMD) thresholds), or an age-dependent threshold (n=22). Two guidelines have adopted both age-dependent and fixed thresholds. Fixed probability thresholds have ranged from 4% to 20% for a major fracture and 1.3% to 5% for hip fracture. More than one half (39) of the 58 publications identified utilized a threshold probability of 20% for a major osteoporotic fracture, many of which also mention a hip fracture probability of 3% as an alternative intervention threshold. In nearly all instances, no rationale is provided other than that this was the threshold used by the National Osteoporosis Foundation of the US. Where undertaken, fixed probability thresholds have been determined from tests of discrimination (Hong Kong), health economic assessment (US, Switzerland), to match the prevalence of osteoporosis (China) or to align with pre-existing guidelines or reimbursement criteria (Japan, Poland). Age-dependent intervention thresholds, first developed by the National Osteoporosis Guideline Group (NOGG), are based on the rationale that if a woman with a prior fragility fracture is eligible for treatment, then, at any given age, a man or woman with the same fracture probability but in the absence of a previous fracture (i.e. at the ‘fracture threshold’) should also be eligible. Under current NOGG guidelines, based on age-dependent probability thresholds, inequalities in access to therapy arise especially at older ages (≥ 70 years) depending on the presence or absence of a prior fracture. An alternative threshold using a hybrid model reduces this disparity. The use of FRAX (fixed or age-dependent thresholds) as the gateway to assessment identifies individuals at high risk more effectively than the use of BMD. However, the setting of intervention thresholds needs to be country-specific. PMID:27465509
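The two threshold families discussed above can be contrasted in a small sketch; the fixed pair (20% major, 3% hip) follows the NOF-derived values cited, while the age-dependent curve is a made-up placeholder, not the NOGG table.

```python
def treat_fixed(p_major, p_hip):
    """Fixed thresholds: 20% major osteoporotic or 3% hip (10-year FRAX probability)."""
    return p_major >= 20.0 or p_hip >= 3.0

def treat_age_dependent(p_major, age, curve=lambda a: 0.3 * a):
    """NOGG-style logic: the threshold equals the fracture probability of a
    same-aged person with a prior fragility fracture (placeholder curve)."""
    return p_major >= curve(age)

print(treat_fixed(12.0, 3.5), treat_age_dependent(12.0, 55))
```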
Stoller, Oliver; Schindelholz, Matthias; Hunt, Kenneth J.
2016-01-01
Background Neurological impairments can limit the implementation of conventional cardiopulmonary exercise testing (CPET) and cardiovascular training strategies. A promising approach to provoke cardiovascular stress while facilitating task-specific exercise in people with disabilities is feedback-controlled robot-assisted end-effector-based stair climbing (RASC). The aim of this study was to evaluate the feasibility, reliability, and repeatability of augmented RASC-based CPET in able-bodied subjects, with a view towards future research and applications in neurologically impaired populations. Methods Twenty able-bodied subjects performed a familiarisation session and 2 consecutive incremental CPETs using augmented RASC. Outcome measures focussed on standard cardiopulmonary performance parameters and on accuracy of work rate tracking (RMSEP, root mean square error). Criteria for feasibility were cardiopulmonary responsiveness and technical implementation. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), standard error of the measurement (SEM), and minimal detectable change (MDC). Mean differences, limits of agreement, and coefficients of variation (CoV) were estimated to assess repeatability. Results All criteria for feasibility were achieved. Mean V′O2peak was 106±9% of predicted V′O2max and mean HRpeak was 99±3% of predicted HRmax. 95% of the subjects achieved at least 1 criterion for V′O2max, and the detection of the sub-maximal ventilatory thresholds was successful (ventilatory anaerobic threshold 100%, respiratory compensation point 90% of the subjects). Excellent reliability was found for peak cardiopulmonary outcome measures (ICC ≥ 0.890, SEM ≤ 0.60%, MDC ≤ 1.67%). Repeatability for the primary outcomes was good (CoV ≤ 0.12). Conclusions RASC-based CPET with feedback-guided exercise intensity demonstrated comparable or higher peak cardiopulmonary performance variables relative to predicted values, achieved the criteria for V′O2max, and allowed determination of sub-maximal ventilatory thresholds. The reliability and repeatability were found to be high. There is potential for augmented RASC to be used for exercise testing and prescription in populations with neurological impairments who would benefit from repetitive task-specific training. PMID:26849137
NASA Astrophysics Data System (ADS)
Dalla Libera, Nico; Fabbri, Paolo; Mason, Leonardo; Piccinini, Leonardo; Pola, Marco
2017-04-01
Arsenic contamination affects shallow groundwater bodies worldwide. Current knowledge of the origin of arsenic in groundwater indicates that most dissolved arsenic occurs naturally, through the dissolution of As-bearing minerals and ores. Several studies on the shallow aquifers of both the regional Venetian Plain (NE Italy) and the local Drainage Basin to the Venice Lagoon (DBVL) show locally high arsenic concentrations related to peculiar geochemical conditions that drive arsenic mobilization. The uncertainty of the spatial distribution of arsenic complicates both the evaluation of the processes involved in arsenic mobilization and stakeholders' decisions about environmental management. Regarding the latter aspect, the present study addresses the problem of defining the Natural Background Level (NBL) as the threshold discriminating natural contamination from anthropogenic pollution. The EU's Directive 2006/118/EC suggests procedures and criteria to set up water quality standards that guarantee a healthy status and reverse any contamination trends. In addition, the EU's BRIDGE project proposes criteria, based on the 90th percentile of the contaminant concentration dataset, to estimate the NBL. Nevertheless, these methods provide only a single statistical NBL for the whole area, without considering the spatial variation of the contaminant concentration. We therefore reinforce the NBL concept using a geostatistical approach, which provides detailed information about the distribution of arsenic concentrations and unveils zones with concentrations above the Italian drinking water standard (IDWS = 10 µg/liter). Once the spatial information about the arsenic distribution is obtained, the 90th percentile method can be applied to estimate local NBLs for every zone with arsenic above the IDWS. The indicator kriging method was chosen because it estimates the spatial distribution of the probability of exceeding pre-defined thresholds; this approach is widely used in the literature for similar environmental problems. To test the validity of the procedure, we used the dataset from the "A.Li.Na" project (funded by the Regional Environmental Agency), which defined regional NBLs of As, Fe, Mn and NH4+ in the DBVL's groundwater. First, we defined two thresholds corresponding to the IDWS and to the median of the data above the IDWS. These values were chosen based on the statistical structure of the dataset and the quality criteria of the GWD 2006/118/EC. Subsequently, we evaluated the spatial distribution of the probability of exceeding the defined thresholds using indicator kriging. The results highlight several zones with high exceedance probability, ranging from 75% to 95%, with respect to both the IDWS and the median value. Considering the geological setting of the DBVL, these probability values correspond with the occurrence of both organic matter and reducing conditions. In conclusion, the spatial prediction of the exceedance probability can be useful for defining the areas in which to estimate local NBLs, enhancing the NBL definition procedure. The resulting NBL estimates are more realistic because they consider the spatial distribution of the studied contaminant, distinguishing areas with naturally high concentrations from polluted ones.
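A minimal sketch of the core geostatistical step, assuming Python with the pykrige package: concentrations are indicator-transformed against the IDWS, and ordinary kriging of the indicator then approximates the exceedance-probability surface. Coordinates, concentrations, grid, and variogram model are all hypothetical; the A.Li.Na dataset is not reproduced here.

```python
# Hedged sketch: indicator kriging of arsenic exceedance probability.
# All observations and model choices below are illustrative assumptions.
import numpy as np
from pykrige.ok import OrdinaryKriging

IDWS = 10.0  # Italian drinking water standard, µg/liter

# Hypothetical wells: x, y in km, As concentration in µg/liter
x = np.array([0.5, 1.2, 2.0, 2.8, 3.5, 4.1])
y = np.array([0.7, 2.1, 1.0, 3.2, 0.4, 2.6])
As = np.array([4.0, 12.0, 25.0, 8.0, 40.0, 6.0])

# Indicator transform: 1 where the threshold is exceeded, 0 otherwise
indicator = (As > IDWS).astype(float)

# Ordinary kriging of the indicator approximates the exceedance
# probability at unsampled locations
ok = OrdinaryKriging(x, y, indicator, variogram_model="spherical")
gridx = np.linspace(0, 5, 50)
gridy = np.linspace(0, 4, 40)
prob, variance = ok.execute("grid", gridx, gridy)

# Zones with high P(As > IDWS) (e.g. > 0.75) would be candidates
# for local NBL estimation via the 90th percentile method
high_risk = prob > 0.75
print(f"High-exceedance grid cells: {high_risk.sum()}")
```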
Harst, Lorenz; Timpel, Patrick; Otto, Lena; Wollschlaeger, Bastian; Richter, Peggy; Schlieter, Hannes
2018-01-01
This paper presents an approach for evaluating completed telemedicine projects using qualitative methods. Telemedicine applications are said to improve the performance of health care systems. While there are countless telemedicine projects, the vast majority never crosses the threshold from testing to implementation and diffusion. Projects were collected from German project databases in the area of telemedicine following systematically developed criteria. In a testing phase, ten projects were subjected to a qualitative content analysis to identify limitations, need for further research, and lessons learned. Using Mayring's method of inductive category development, six categories of possible future research were derived. Thus, the proposed method is an important contribution to diffusion and translation research regarding telemedicine, as it is applicable to a systematic search of databases.
Dilatancy Criteria for Salt Cavern Design: A Comparison Between Stress- and Strain-Based Approaches
NASA Astrophysics Data System (ADS)
Labaune, P.; Rouabhi, A.; Tijani, M.; Blanco-Martín, L.; You, T.
2018-02-01
This paper presents a new approach for salt cavern design, based on the use of the onset of dilatancy as a design threshold. In the proposed approach, a rheological model that includes dilatancy at the constitutive level is developed, and a strain-based dilatancy criterion is defined. Compared to classical design methods, which consist of simulating cavern behavior through creep laws (fitted on long-term tests) and then using a criterion (derived from short-term tests or experience) to determine the stability of the excavation, the proposed approach is consistent with both short- and long-term conditions. The new strain-based dilatancy criterion is compared to a stress-based dilatancy criterion through numerical simulations of salt caverns under cyclic loading conditions. The dilatancy zones predicted by the strain-based criterion are larger than those predicted by the stress-based criterion, which is conservative yet constructive for design purposes.
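To make the stress-based side of the comparison concrete, here is a hedged sketch that evaluates a generic linear dilatancy boundary of the form √J2 ≥ a·I1 for a given stress state. The coefficient a and the stress values are illustrative placeholders, not the calibrated criteria from the paper; real boundaries are fitted to laboratory tests on salt.

```python
# Hedged sketch: checking a generic linear stress-based dilatancy boundary,
# sqrt(J2) >= a * I1, compression taken positive. `a` is illustrative.
import numpy as np

def dilatant(sigma: np.ndarray, a: float = 0.27) -> bool:
    """Return True if the stress state lies in the dilatant domain."""
    i1 = np.trace(sigma)                    # first stress invariant
    s = sigma - i1 / 3.0 * np.eye(3)        # deviatoric stress tensor
    j2 = 0.5 * np.tensordot(s, s)           # second deviatoric invariant
    return np.sqrt(j2) >= a * i1

# Illustrative principal stress state near a cavern wall, MPa
sigma = np.diag([18.0, 10.0, 8.0])
print("dilatancy predicted:", dilatant(sigma))
```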
Accelerating rates of cognitive decline and imaging markers associated with β-amyloid pathology.
Insel, Philip S; Mattsson, Niklas; Mackin, R Scott; Schöll, Michael; Nosheny, Rachel L; Tosun, Duygu; Donohue, Michael C; Aisen, Paul S; Jagust, William J; Weiner, Michael W
2016-05-17
To estimate points along the spectrum of β-amyloid pathology at which rates of change of several measures of neuronal injury and cognitive decline begin to accelerate. In 460 patients with mild cognitive impairment (MCI), we estimated the points at which rates of florbetapir PET, fluorodeoxyglucose (FDG) PET, MRI, and cognitive and functional decline begin to accelerate with respect to baseline CSF Aβ42. Points of initial acceleration in rates of decline were estimated using mixed-effects regression. Rates of neuronal injury and cognitive and even functional decline accelerate substantially before the conventional threshold for amyloid positivity, with rates of florbetapir PET and FDG PET accelerating early. Temporal lobe atrophy rates also accelerate prior to the threshold, but not before the acceleration of cognitive and functional decline. A considerable proportion of patients with MCI would not meet inclusion criteria for a trial using the current threshold for amyloid positivity, even though, on average, they are experiencing cognitive/functional decline associated with prethreshold levels of CSF Aβ42. Future trials in early Alzheimer disease might consider revising the criteria regarding β-amyloid thresholds to include the range of amyloid associated with the first signs of accelerating rates of decline. © 2016 American Academy of Neurology.
Dawson, Deborah A; Saha, Tulshi D; Grant, Bridget F
2010-02-01
The relative severity of the 11 DSM-IV alcohol use disorder (AUD) criteria is represented by their severity threshold scores, item response theory (IRT) model parameters inversely proportional to their prevalence. These scores can be used to create a continuous severity measure comprising the total number of criteria endorsed, each weighted by its relative severity. This paper assesses the validity of the severity ranking of the 11 criteria and the overall severity score with respect to known AUD correlates, including alcohol consumption, psychological functioning, family history, antisociality, and early initiation of drinking, in a representative population sample of U.S. past-year drinkers (n=26,946). The unadjusted mean values for all validating measures increased steadily with the severity threshold score, except that legal problems, the criterion with the highest score, was associated with lower values than expected. After adjusting for the total number of criteria endorsed, this direct relationship was no longer evident. The overall severity score was no more highly correlated with the validating measures than a simple count of criteria endorsed, nor did the two measures yield different risk curves. This reflects both within-criterion variation in severity and the fact that the number of criteria endorsed and their severity are so highly correlated that severity is essentially redundant. Attempts to formulate a scalar measure of AUD will do as well by relying on simple counts of criteria or symptom items as by using scales weighted by IRT measures of severity. Published by Elsevier Ireland Ltd.
De Cloedt, Lise; Emeriaud, Guillaume; Lefebvre, Émilie; Kleiber, Niina; Robitaille, Nancy; Jarlot, Christine; Lacroix, Jacques; Gauvin, France
2018-04-01
The incidence of transfusion-associated circulatory overload (TACO) is not well known in children, especially in pediatric intensive care unit (PICU) patients. All consecutive patients admitted over 1 year to the PICU of CHU Sainte-Justine were included after they received their first red blood cell transfusion. TACO was diagnosed using the criteria of the International Society of Blood Transfusion, with two different ways of defining abnormal values: 1) using normal pediatric values published in the Nelson Textbook of Pediatrics and 2) using the patient as his or her own control and comparing pre- and posttransfusion values with either a 10% or a 20% difference threshold. We monitored for TACO up to 24 hours posttransfusion. A total of 136 patients were included. Using the "normal pediatric values" definition, we diagnosed 63, 88, and 104 patients with TACO at 6, 12, and 24 hours posttransfusion, respectively. Using the "10% threshold" definition, we detected 4, 15, and 27 TACO cases in the same periods, respectively; using the "20% threshold" definition, the number of TACO cases was 2, 6, and 17, respectively. Chest radiograph was the most frequent missing item, especially at 6 and 12 hours posttransfusion. Overall, the incidence of TACO varied from 1.5% to 76% depending on the definition. A more operational definition of TACO is needed in PICU patients. Using a threshold-based definition may be preferable, but more studies are needed to confirm the best threshold. © 2018 AABB.
Validity of Simpson-Angus Scale (SAS) in a naturalistic schizophrenia population
Janno, Sven; Holi, Matti M; Tuisku, Katinka; Wahlbeck, Kristian
2005-01-01
Background Simpson-Angus Scale (SAS) is an established instrument for neuroleptic-induced parkinsonism (NIP), but its statistical properties have been studied insufficiently. Some shortcomings concerning its content have been suggested as well. According to a recent report, the widely used SAS mean score cut-off value of 0.3 for NIP detection may be too low. Our aim was to evaluate SAS against DSM-IV diagnostic criteria for NIP and objective motor assessment (actometry). Methods Ninety-nine chronic institutionalised schizophrenia patients were evaluated during the same interview by standardised actometric recording and SAS. The diagnosis of NIP was based on DSM-IV criteria. Internal consistency measured by Cronbach's α, convergence to actometry and the capacity for NIP case detection were assessed. Results Cronbach's α for the scale was 0.79. SAS discriminated between DSM-IV NIP and non-NIP patients. The actometric findings did not correlate with SAS. ROC analysis yielded a good case detection power for the SAS mean score. The optimal threshold value of the SAS mean score was between 0.65 and 0.95, i.e. clearly higher than the previously suggested threshold value. Conclusion We conclude that SAS seems a reliable and valid instrument. The previously commonly used cut-off mean score of 0.3 has been too low, resulting in low specificity, and we suggest a new cut-off value of 0.65, whereby specificity could be doubled without losing sensitivity. PMID:15774006
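A minimal sketch of the cut-off selection step, assuming simulated SAS-like scores and Youden's J as the optimality criterion (the abstract does not state which ROC criterion was used):

```python
# Hedged sketch: choosing an optimal rating-scale cut-off via ROC analysis.
# Scores are simulated; Youden's J is one common, illustrative criterion.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Simulated SAS mean scores: non-NIP patients cluster low, NIP patients higher
non_nip = rng.normal(0.4, 0.25, 60).clip(0)
nip = rng.normal(1.0, 0.35, 39).clip(0)
scores = np.concatenate([non_nip, nip])
y_true = np.concatenate([np.zeros(60), np.ones(39)])

fpr, tpr, thresholds = roc_curve(y_true, scores)
youden_j = tpr - fpr
best = np.argmax(youden_j)
print(f"AUC = {roc_auc_score(y_true, scores):.2f}")
print(f"Optimal cut-off ≈ {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```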
Definitions and factors associated with subthreshold depressive conditions: a systematic review
2012-01-01
Background Subthreshold depressive disorders (minor and subthreshold depression) have been defined in a wide range of forms, varying in the number of symptoms and the duration required. Disability associated with these conditions has also been reported. Our aim was to review the different definitions and to determine factors associated with these conditions in order to clarify the nosological implications of these disorders. Methods A Medline search was conducted of the published literature between January 2001 and September 2011. Bibliographies of the retrieved papers were also analysed. Results There is wide heterogeneity in the definition and diagnostic criteria of minor and subthreshold depression. Minor depression was defined according to DSM-IV criteria. Regarding subthreshold depression, also called subclinical depression or subsyndromal symptomatic depression, between 2 and 5 depressive symptoms were required for the diagnosis, and a minimum duration of 2 weeks. Significant impairment associated with subthreshold depressive conditions, as well as comorbidity with other mental disorders, has been described. Conclusions Depression as a disorder is better explained as a spectrum rather than as a collection of discrete categories. Minor and subthreshold depression are common conditions, and patients falling below the diagnostic threshold experience significant difficulties in functioning and a negative impact on their quality of life. Current diagnostic systems need to reexamine the thresholds for depressive disorders and distinguish them from ordinary feelings of sadness. PMID:23110575
A Computational Approach to Finding Novel Targets for Existing Drugs
Li, Yvonne Y.; An, Jianghong; Jones, Steven J. M.
2011-01-01
Repositioning existing drugs for new therapeutic uses is an efficient approach to drug discovery. We have developed a computational drug repositioning pipeline to perform large-scale molecular docking of small molecule drugs against protein drug targets, in order to map the drug-target interaction space and find novel interactions. Our method emphasizes removing false positive interaction predictions using criteria from known interaction docking, consensus scoring, and specificity. In all, our database contains 252 human protein drug targets that we classify as reliable-for-docking as well as 4621 approved and experimental small molecule drugs from DrugBank. These were cross-docked, then filtered through stringent scoring criteria to select top drug-target interactions. In particular, we used MAPK14 and the kinase inhibitor BIM-8 as examples where our stringent thresholds enriched the predicted drug-target interactions with known interactions up to 20 times compared to standard score thresholds. We validated nilotinib as a potent MAPK14 inhibitor in vitro (IC50 40 nM), suggesting a potential use for this drug in treating inflammatory diseases. The published literature indicated experimental evidence for 31 of the top predicted interactions, highlighting the promising nature of our approach. Novel interactions discovered may lead to a drug being repositioned as a therapeutic treatment for its off-target's associated disease, to added insight into the drug's mechanism of action, and to added insight into the drug's side effects. PMID:21909252
Lim, Kheng Choon; Wang, Vivian W; Siddiqui, Fahad J; Shi, Luming; Chan, Edwin S Y; Oh, Hong Choon; Tan, Say Beng; Chow, Pierce K H
2015-01-01
Both liver resection (LR) and cadaveric liver transplantation (CLT) are potentially curative treatments for patients with hepatocellular carcinoma (HCC) within the Milan criteria and with adequate liver function. Adopting either as a first-line therapy carries major cost and resource implications. The objective of this study was to estimate the relative cost-effectiveness of LR against CLT for patients with HCC within the Milan criteria using a decision analytic model. A Markov cohort model was developed to simulate a cohort of patients aged 55 years with HCC within the Milan criteria and Child-Pugh A/B cirrhosis, undergoing LR or CLT, and followed up over their remaining life expectancy. Analysis was performed in different geographical cost settings: the USA, Switzerland and Singapore. Transition probabilities were obtained from systematic literature reviews, supplemented by databases from Singapore and the Organ Procurement and Transplantation Network (USA). Utility and cost data were obtained from open sources. LR produced 3.9 quality-adjusted life years (QALYs) while CLT had an additional 1.4 QALYs. The incremental cost-effectiveness ratio (ICER) of CLT versus LR ranged from $111,821/QALY in Singapore to $156,300/QALY in Switzerland, and was above thresholds for cost-effectiveness in all three countries. Sensitivity analysis revealed that CLT-related 5-year cumulative survival, one-time cost of CLT, and post-LR 5-year cumulative recurrence rates were the most sensitive parameters in all cost scenarios. ICERs were reduced below threshold when CLT-related 5-year cumulative survival exceeded 84.9% and 87.6% in Singapore and the USA, respectively. For Switzerland, the ICER remained above the cost-effectiveness threshold regardless of the variations. In patients with HCC within the Milan criteria and Child-Pugh A/B cirrhosis, LR is more cost-effective than CLT across three different costing scenarios: the USA, Switzerland, Singapore. © 2014 by the American Association for the Study of Liver Diseases.
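The decision rule at the heart of such an analysis reduces to comparing the ICER with a willingness-to-pay threshold. A hedged sketch, taking the 1.4-QALY gain from the abstract but with hypothetical cost figures and threshold:

```python
# Hedged sketch: incremental cost-effectiveness ratio (ICER) of CLT vs. LR.
# The QALY figures come from the abstract; costs and the willingness-to-pay
# (WTP) threshold are illustrative assumptions, not the study's inputs.
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """ICER = incremental cost per incremental QALY."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

qaly_lr, qaly_clt = 3.9, 3.9 + 1.4           # from the abstract
cost_lr, cost_clt = 60_000, 220_000          # hypothetical lifetime costs, USD

ratio = icer(cost_clt, cost_lr, qaly_clt, qaly_lr)
wtp = 100_000                                # illustrative threshold, USD/QALY
verdict = "cost-effective" if ratio <= wtp else "not cost-effective"
print(f"ICER = ${ratio:,.0f}/QALY -> CLT {verdict} at WTP ${wtp:,}/QALY")
```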
Fragrances Categorized According to Relative Human Skin Sensitization Potency
Api, Anne Marie; Parakhia, Rahul; O'Brien, Devin; Basketter, David A.
2017-01-01
Background The development of non-animal alternatives for skin sensitization potency prediction is dependent upon the availability of a sufficient dataset whose human potency is well characterized. Previously, the establishment of basic categorization criteria for 6 defined potency categories allowed 131 substances to be allocated into them entirely on the basis of human information. Objectives To supplement the original dataset with an extended range of fragrance substances. Methods A more fully described version of the original criteria was used to assess 89 fragrance chemicals, allowing their allocation into one of the 6 potency categories. Results None of the fragrance substances were assigned to the most potent group, category 1, whereas 11 were category 2, 22 were category 3, 37 were category 4, and 19 were category 5. Although none were identified as non-sensitizing, it should be noted that substances in category 5 also do not pass the threshold for regulatory classification. Conclusions The combined datasets of >200 substances placed into potency categories solely on the basis of human data provide an essential resource for the elaboration and evaluation of predictive non-animal methods. PMID:28691948
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
McKenzie, Elizabeth M; Balter, Peter A; Stingo, Francesco C; Jones, Jimmy; Followill, David S; Kry, Stephen F
2014-12-01
The authors investigated the performance of several patient-specific intensity-modulated radiation therapy (IMRT) quality assurance (QA) dosimeters in terms of their ability to correctly identify dosimetrically acceptable and unacceptable IMRT patient plans, as determined by an in-house-designed multiple ion chamber phantom used as the gold standard. A further goal was to examine optimal threshold criteria that were consistent and based on the same criteria among the various dosimeters. The authors used receiver operating characteristic (ROC) curves to determine the sensitivity and specificity of (1) a 2D diode array undergoing anterior irradiation with field-by-field evaluation, (2) a 2D diode array undergoing anterior irradiation with composite evaluation, (3) a 2D diode array using planned irradiation angles with composite evaluation, (4) a helical diode array, (5) radiographic film, and (6) an ion chamber. This was done with a variety of evaluation criteria for a set of 15 dosimetrically unacceptable and 9 acceptable clinical IMRT patient plans, where acceptability was defined on the basis of multiple ion chamber measurements using independent ion chambers and a phantom. The area under the curve (AUC) on the ROC curves was used to compare dosimeter performance across all thresholds. Optimal threshold values were obtained from the ROC curves while incorporating considerations for cost and prevalence of unacceptable plans. Using common clinical acceptance thresholds, most devices performed very poorly in terms of identifying unacceptable plans. Grouping the detector performance based on AUC showed two significantly different groups. The ion chamber, radiographic film, helical diode array, and anterior-delivered composite 2D diode array were in the better-performing group, whereas the anterior-delivered field-by-field and planned gantry angle delivery using the 2D diode array performed less well. Additionally, based on the AUCs, there was no significant difference in the performance of any device between gamma criteria of 2%/2 mm, 3%/3 mm, and 5%/3 mm. Finally, optimal cutoffs (e.g., percent of pixels passing gamma) were determined for each device; while clinical practice commonly uses a threshold of 90% of pixels passing for most cases, these results showed variability in the optimal cutoff among devices. IMRT QA devices have differences in their ability to accurately detect dosimetrically acceptable and unacceptable plans. Field-by-field analysis with a MapCheck device and use of the MapCheck with a MapPhan phantom while delivering at planned rotational gantry angles resulted in a significantly poorer ability to accurately sort acceptable and unacceptable plans compared with the other techniques examined. Patient-specific IMRT QA techniques in general should be thoroughly evaluated for their ability to correctly differentiate acceptable and unacceptable plans. Additionally, optimal agreement thresholds should be identified and used, as the common clinical thresholds typically worked very poorly at identifying unacceptable plans.
Optimizing Functional Network Representation of Multivariate Time Series
Zanin, Massimiliano; Sousa, Pedro; Papo, David; Bajo, Ricardo; García-Prieto, Juan; Pozo, Francisco del; Menasalvas, Ernestina; Boccaletti, Stefano
2012-01-01
By combining complex network theory and data mining techniques, we provide objective criteria for optimization of the functional network representation of generic multivariate time series. In particular, we propose a method for the principled selection of the threshold value for functional network reconstruction from raw data, and for proper identification of the network's indicators that unveil the most discriminative information on the system for classification purposes. We illustrate our method by analysing networks of functional brain activity of healthy subjects, and patients suffering from Mild Cognitive Impairment, an intermediate stage between the expected cognitive decline of normal aging and the more pronounced decline of dementia. We discuss extensions of the scope of the proposed methodology to network engineering purposes, and to other data mining tasks. PMID:22953051
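A minimal sketch of the thresholding step being optimized, under assumed data: scan candidate thresholds over a correlation matrix, build the unweighted network at each, and track simple topological indicators. The connectivity-based stopping rule here is an illustrative choice, not the paper's criterion.

```python
# Hedged sketch: threshold scan for functional network reconstruction.
# Data are simulated; the selection rule shown is one simple possibility.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 30))        # 200 samples x 30 channels/sensors
corr = np.abs(np.corrcoef(data.T))       # |Pearson correlation| matrix
np.fill_diagonal(corr, 0.0)

for tau in np.arange(0.05, 0.50, 0.05):
    adj = (corr >= tau).astype(int)      # unweighted adjacency at threshold tau
    g = nx.from_numpy_array(adj)
    density = nx.density(g)
    connected = nx.is_connected(g)
    print(f"tau={tau:.2f}  density={density:.3f}  connected={connected}")
    if not connected:
        # one possible rule: keep the largest threshold preserving connectivity
        break
```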
Schumm, Phillip; Scoglio, Caterina; Zhang, Qian; Balcan, Duygu
2015-02-21
Through the characterization of a metapopulation cattle disease model on a directed network having source, transit, and sink nodes, we derive two global epidemic invasion thresholds. The first threshold defines the conditions necessary for an epidemic to successfully spread at the global scale. The second threshold defines the criteria that permit an epidemic to move out of the giant strongly connected component and to invade the populations of the sink nodes. As each sink node represents a final waypoint for cattle before slaughter, the existence of an epidemic among the sink nodes is a serious threat to food security. We find that the relationship between these two thresholds depends on the relative proportions of transit and sink nodes in the system and the distributions of the in-degrees of both node types. These analytic results are verified through numerical realizations of the metapopulation cattle model. Published by Elsevier Ltd.
Image quality, threshold contrast and mean glandular dose in CR mammography
NASA Astrophysics Data System (ADS)
Jakubiak, R. R.; Gamba, H. R.; Neves, E. B.; Peixoto, J. E.
2013-09-01
In many countries, computed radiography (CR) systems represent the majority of equipment used in digital mammography. This study presents a method for optimizing image quality and dose in CR mammography of patients with breast thicknesses between 45 and 75 mm. Initially, clinical images of 67 patients (group 1) were analyzed by three experienced radiologists, who reported on anatomical structures, noise and contrast in low and high pixel value areas, and image sharpness and contrast. The exposure parameters (kV, mAs and target/filter combination) used in the examinations of these patients were reproduced to determine the contrast-to-noise ratio (CNR) and mean glandular dose (MGD). The parameters were also used to radiograph a CDMAM (version 3.4) phantom (Artinis Medical Systems, The Netherlands) for image threshold contrast evaluation. After that, different breast thicknesses were simulated with polymethylmethacrylate layers and various sets of exposure parameters were used in order to determine optimal radiographic parameters. For each simulated breast thickness, optimal beam quality was defined as giving a target CNR that reached the threshold contrast of CDMAM images at an acceptable MGD. These results were used for adjustments of the automatic exposure control (AEC) by the maintenance team. Using the optimized exposure parameters, clinical images of 63 patients (group 2) were evaluated as described above. Threshold contrast, CNR and MGD for these exposure parameters were also determined. Results showed that the proposed optimization method was effective for all breast thicknesses studied in phantoms. The best result was found for breasts of 75 mm. While in group 1 there was no detection of the 0.1 mm critical diameter detail with threshold contrast below 23%, after the optimization, detection occurred in 47.6% of the images. There was also an average MGD reduction of 7.5%. The clinical image quality criteria were met in 91.7% of cases for all breast thicknesses evaluated in both patient groups. Finally, this study also concluded that the use of an AEC based on a constant dose to the detector may make it difficult for CR systems to operate under optimal conditions. More studies must be performed so that the compatibility between systems and optimization methodologies, as well as this optimization method itself, can be evaluated. Most methods are developed on phantoms, so comparative studies including clinical images must be developed.
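As a point of reference for the CNR targets mentioned above, a minimal sketch using one common definition, CNR = (mean_signal − mean_background)/SD_background, on hypothetical ROI pixel data (the paper does not spell out its exact definition):

```python
# Hedged sketch: contrast-to-noise ratio from two regions of interest (ROI).
# Pixel values are simulated; the CNR definition is one common convention.
import numpy as np

rng = np.random.default_rng(2)
signal_roi = rng.normal(520.0, 12.0, size=(40, 40))      # pixels in the detail
background_roi = rng.normal(480.0, 12.0, size=(40, 40))  # nearby background

cnr = (signal_roi.mean() - background_roi.mean()) / background_roi.std()
print(f"CNR = {cnr:.2f}")
```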
Polte, Christian L; Gao, Sinsia A; Johnsson, Åse A; Lagerstrand, Kerstin M; Bech-Hanssen, Odd
2017-06-15
Grading of chronic aortic regurgitation (AR) and mitral regurgitation (MR) by cardiovascular magnetic resonance (CMR) is currently based on thresholds, which are neither modality nor quantification method specific. Accordingly, this study sought to identify CMR-specific and quantification method-specific thresholds for regurgitant volumes (RVols), RVol indexes, and regurgitant fractions (RFs), which denote severe chronic AR or MR with an indication for surgery. The study comprised patients with moderate and severe chronic AR (n = 38) and MR (n = 40). Echocardiography and CMR were performed at baseline and in all operated AR/MR patients (n = 23/25) 10 ± 1 months after surgery. CMR quantification of AR used a direct method (aortic flow) and an indirect method (left ventricular stroke volume [LVSV] - pulmonary stroke volume [PuSV]); MR was quantified with 2 indirect methods (LVSV - aortic forward flow [AoFF]; mitral inflow [MiIF] - AoFF). All operated patients had severe regurgitation and benefited from surgery, indicated by a significant postsurgical reduction in end-diastolic volume index and improvement or relief of symptoms. The discriminatory ability between moderate and severe AR was strong for RVol >40 ml, RVol index >20 ml/m², and RF >30% (direct method) and RVol >62 ml, RVol index >31 ml/m², and RF >36% (LVSV-PuSV) with a negative likelihood ratio ≤ 0.2. In MR, the discriminatory ability was very strong for RVol >64 ml, RVol index >32 ml/m², and RF >41% (LVSV-AoFF) and RVol >40 ml, RVol index >20 ml/m², and RF >30% (MiIF-AoFF) with a negative likelihood ratio < 0.1. In conclusion, CMR grading of chronic AR and MR should be based on modality-specific and quantification method-specific thresholds, as they differ largely from recognized guideline criteria, to assure appropriate clinical decision-making and timing of surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
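A hedged sketch of the indirect LVSV − AoFF quantification for MR, graded against the thresholds reported in the abstract; the input volumes are illustrative:

```python
# Hedged sketch: indirect MR quantification (LVSV - AoFF) and grading
# against the abstract's LVSV-AoFF thresholds. Inputs are illustrative.
def mr_indirect(lvsv_ml: float, aoff_ml: float, bsa_m2: float):
    rvol = lvsv_ml - aoff_ml                 # regurgitant volume, ml
    rvol_index = rvol / bsa_m2               # indexed to body surface area
    rf = 100.0 * rvol / lvsv_ml              # regurgitant fraction, %
    severe = rvol > 64 or rvol_index > 32 or rf > 41
    return rvol, rvol_index, rf, severe

rvol, rvi, rf, severe = mr_indirect(lvsv_ml=130.0, aoff_ml=60.0, bsa_m2=1.9)
print(f"RVol={rvol:.0f} ml, RVol index={rvi:.0f} ml/m², RF={rf:.0f}%, severe={severe}")
```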
Brownstone, Lisa M.; Bardone-Cone, Anna M.; Fitzsimmons-Craft, Ellen E.; Printz, Katherine S.; Le Grange, Daniel; Mitchell, James E.; Crow, Scott J.; Peterson, Carol B.; Crosby, Ross D.; Klein, Marjorie H.; Wonderlich, Stephen A.; Joiner, Thomas E.
2013-01-01
Objective The current study explored the clinical meaningfulness of distinguishing subjective (SBE) from objective binge eating (OBE) among individuals with threshold/subthreshold bulimia nervosa (BN). We examined relations between OBEs and SBEs and eating disorder symptoms, negative affect, and personality dimensions using both a group comparison and a continuous approach. Method Participants were 204 adult females meeting criteria for threshold/subthreshold BN who completed questionnaires related to disordered eating, affect, and personality. Results Group comparisons indicated that SBE and OBE groups did not significantly differ on eating disorder pathology or negative affect, but did differ on two personality dimensions (cognitive distortion and attentional impulsivity). Using the continuous approach, we found that frequencies of SBEs (not OBEs) accounted for unique variance in weight/shape concern, diuretic use frequency, depressive symptoms, anxiety, social avoidance, insecure attachment, and cognitive distortion. Discussion SBEs in the context of BN may indicate broader areas of psychopathology. PMID:23109272
NASA Astrophysics Data System (ADS)
Feng, Yanchun; Lei, Deqing; Hu, Changqin
We created a rapid detection procedure for identifying herbal medicines illegally adulterated with synthetic drugs using near infrared spectroscopy. This procedure includes a reverse correlation coefficient method (RCCM) and a comparison of characteristic peaks. Moreover, we made improvements to the RCCM based on new strategies for threshold setting. Any tested herbal medicine must meet two criteria to be identified as adulterated by our procedure. First, the correlation coefficient between the tested sample and the reference must be greater than the RCCM threshold. Next, the NIR spectrum of the tested sample must contain the same characteristic peaks as the reference. In this study, four pure synthetic anti-diabetic drugs (i.e., metformin, gliclazide, glibenclamide and glimepiride), 174 batches of laboratory samples and 127 batches of herbal anti-diabetic medicines were used to construct and validate the procedure. The accuracy of this procedure was greater than 80%. Our data suggest that this protocol is a rapid screening tool for identifying synthetic drug adulterants in herbal medicines on the market.
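A minimal sketch of the two-criterion screen, with synthetic spectra: a correlation check against the pure-drug reference followed by a characteristic-peak check. The threshold value, peak position, and spectra are illustrative assumptions:

```python
# Hedged sketch: two-step adulterant screen — correlation with a reference
# spectrum, then a characteristic-peak check. All inputs are illustrative.
import numpy as np

def adulterated(sample, reference, threshold, peak_idx):
    # Criterion 1: correlation with the reference above the RCCM threshold
    r = np.corrcoef(sample, reference)[0, 1]
    if r <= threshold:
        return False
    # Criterion 2: the sample shows the reference's characteristic peaks
    # (here approximated as being higher than flanking points)
    for i in peak_idx:
        if not (sample[i] > sample[i - 5] and sample[i] > sample[i + 5]):
            return False
    return True

wavenumbers = np.linspace(4000, 10000, 600)
reference = np.exp(-((wavenumbers - 5200) / 80) ** 2)   # synthetic drug band
rng = np.random.default_rng(3)
sample = 0.6 * reference + 0.01 * rng.normal(size=600)  # spiked "herbal" sample
print(adulterated(sample, reference, threshold=0.9, peak_idx=[120]))
```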
Treatment of osteoporosis in men
Kaufman, J.-M.; Reginster, J.-Y.; Boonen, S.; Brandi, M. L.; Cooper, C.; Dere, W.; Devogelaer, J.-P.; Diez-Perez, A.; Kanis, J. A.; McCloskey, E.; Mitlak, B.; Orwoll, E.; Ringe, J.D.; Weryha, G.; Rizzoli, R.
2013-01-01
Summary Aspects of osteoporosis in men, such as screening and identification strategies, definitions of diagnosis and intervention thresholds, and treatment options (both approved and in the pipeline) are discussed. Introduction Awareness of osteoporosis in men is improving, although it remains under-diagnosed and under-treated. A European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis (ESCEO) workshop was convened to discuss osteoporosis in men and to provide a report by a panel of experts (the authors). Methods A debate with an expert panel on preselected topics was conducted. Results and Conclusions Although additional fracture data are needed to endorse the clinical care of osteoporosis in men, consensus views were reached on diagnostic criteria and intervention thresholds. Empirical data in men display similarities with data acquired in women, despite pathophysiological differences, which may not be clinically relevant. Men should receive treatment at a similar 10-year fracture probability as in women. The design of mixed studies may reduce the lag between comparable treatments for osteoporosis in women becoming available in men. PMID:23201268
2D modeling based comprehensive analysis of short channel effects in DMG strained VSTB FET
NASA Astrophysics Data System (ADS)
Saha, Priyanka; Banerjee, Pritha; Sarkar, Subir Kumar
2018-06-01
The paper aims to develop a two-dimensional analytical model of the proposed dual material (DM) Vertical Super Thin Body (VSTB) strained Field Effect Transistor (FET), with focus on its short channel behaviour in the nanometer regime. The electrostatic potential across the gate/channel and dielectric wall/channel interfaces is derived by solving the 2D Poisson's equation with the parabolic approximation method, applying appropriate boundary conditions. The threshold voltage is then calculated using the criterion of minimum surface potential, considering both the gate-side and dielectric-wall-side potentials. The performance of the present structure is analyzed in terms of potential, electric field, threshold voltage characteristics and subthreshold behaviour by varying various device parameters and applied biases. The effect of strain in the channel is further explored to establish the superiority of the proposed device over its conventional VSTB FET counterpart. All analytical results are compared with Silvaco ATLAS device simulation data to substantiate the accuracy of the derived model.
New developments in supra-threshold perimetry.
Henson, David B; Artes, Paul H
2002-09-01
To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.
Himes Boor, Gina K
2014-02-01
For species listed under the U.S. Endangered Species Act (ESA), the U.S. Fish and Wildlife Service and National Marine Fisheries Service are tasked with writing recovery plans that include "objective, measurable criteria" that define when a species is no longer at risk of extinction, but neither the act itself nor agency guidelines provide an explicit definition of objective, measurable criteria. Past reviews of recovery plans, including one published in 2012, show that many criteria lack quantitative metrics with clear biological rationale and are not meeting the measurable and objective mandate. I reviewed how objective, measurable criteria have been defined implicitly and explicitly in peer-reviewed literature, the ESA, other U.S. statutes, and legal decisions. Based on a synthesis of these sources, I propose the following 6 standards be used as minimum requirements for objective, measurable criteria: contain a quantitative threshold with calculable units, stipulate a timeframe over which they must be met, explicitly define the spatial extent or population to which they apply, specify a sampling procedure that includes sample size, specify a statistical significance level, and include justification by providing scientific evidence that the criteria define a species whose extinction risk has been reduced to the desired level. To meet these 6 standards, I suggest that recovery plans be explicitly guided by and organized around a population viability modeling framework even if data or agency resources are too limited to complete a viability model. When data and resources are available, recovery criteria can be developed from the population viability model results, but when data and resources are insufficient for model implementation, extinction risk thresholds can be used as criteria. A recovery-planning approach centered on viability modeling will also yield appropriately focused data-acquisition and monitoring plans and will facilitate a seamless transition from recovery planning to delisting. © 2013 Society for Conservation Biology.
Fereshtehnejad, Seyed-Mohammad; Montplaisir, Jacques Y; Pelletier, Amelie; Gagnon, Jean-François; Berg, Daniela; Postuma, Ronald B
2017-06-01
Recently, the International Parkinson and Movement Disorder Society introduced the prodromal criteria for PD. Our study aimed to examine the diagnostic accuracy of the criteria as well as the independence of prodromal markers in predicting conversion to PD or dementia with Lewy bodies. This prospective cohort study was performed on 121 individuals with rapid eye movement sleep behavior disorder who were followed annually for 1 to 12 years. Using data from a comprehensive panel of prodromal markers, the likelihood ratio and post-test probability of the criteria were calculated at baseline and during each follow-up visit. Forty-eight (39.7%) individuals with rapid eye movement sleep behavior disorder converted to PD/dementia with Lewy bodies. The prodromal criteria had 81.3% sensitivity and 67.9% specificity for conversion to PD/dementia with Lewy bodies at 4-year follow-up. One year before conversion, sensitivity was 100%. The criteria predicted dementia with Lewy bodies with even higher accuracy than PD without dementia at onset. Those who met the threshold of the prodromal criteria at baseline had significantly more rapid conversion to a neurodegenerative state (4.8 vs. 9.1 years; P < 0.001). Pair-wise combinations of different prodromal markers showed that markers were independent of one another. The prodromal criteria are a promising tool for predicting the incidence of PD/dementia with Lewy bodies and conversion time in a rapid eye movement sleep behavior disorder cohort, with high sensitivity and specificity over long follow-up. Prodromal markers influence the overall likelihood ratio independently, allowing them to be reliably multiplied. Defining additional markers with high likelihood ratios, further longitudinal assessment, and testing of thresholds in different target populations will improve the criteria. © 2017 International Parkinson and Movement Disorder Society.
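A hedged sketch of the multiplicative combination of independent markers that the abstract describes: pre-test odds are multiplied by each marker's likelihood ratio and converted back to a probability. The prior and LR values below are illustrative, not taken from the study data:

```python
# Hedged sketch: post-test probability from independent prodromal markers.
# Prior probability and likelihood ratios (LRs) are illustrative placeholders.
def posttest_probability(prior: float, lrs: list) -> float:
    odds = prior / (1.0 - prior)          # pre-test odds
    for lr in lrs:
        odds *= lr                        # independent markers multiply
    return odds / (1.0 + odds)            # back to a probability

prior = 0.02                              # e.g. an age-based prior, illustrative
lrs = [130.0, 2.3, 0.88]                  # e.g. RBD-like LR, hyposmia, one negative
print(f"Post-test probability = {posttest_probability(prior, lrs):.2f}")
```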
Bi-criteria travelling salesman subtour problem with time threshold
NASA Astrophysics Data System (ADS)
Kumar Thenepalle, Jayanth; Singamsetty, Purusotham
2018-03-01
This paper deals with the bi-criteria travelling salesman subtour problem with time threshold (BTSSP-T), which belongs to the family of travelling salesman problems (TSP) and is NP-hard in the strong sense. The problem arises in several application domains, mainly in routing and scheduling contexts. Here, the model focuses on two criteria: total travel distance and gains attained. The BTSSP-T aims to determine a subtour that starts and ends at the same city and visits a subset of cities at a minimum travel distance with maximum gains, such that the time spent on the tour does not exceed the predefined time threshold. A zero-one integer-programming formulation is adopted for this model with all practical constraints, and it admits a finite set of feasible solutions (one for each tour). Two algorithms, namely the Lexi-Search Algorithm (LSA) and the Tabu Search (TS) algorithm, have been developed to solve the BTSSP-T. The proposed LSA implicitly enumerates the feasible patterns and provides an efficient solution with backtracking, whereas the TS, a metaheuristic, gives a good approximate solution. A numerical example is demonstrated in order to illustrate the search mechanism of the LSA. Numerical experiments were carried out in the MATLAB environment on the different benchmark instances available in the TSPLIB domain as well as on randomly generated test instances. The experimental results show that the proposed LSA works better than the TS algorithm in terms of solution quality and that, computationally, both LSA and TS are competitive.
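To make the model concrete, a brute-force sketch on a toy instance: enumerate all subtours through subsets of cities, keep those within the time threshold, and rank by a lexicographic (distance, −gain) objective. Everything here (distances, times, gains, the threshold, and the scalarization of the two criteria) is an illustrative assumption; the paper's LSA and TS algorithms are far more efficient.

```python
# Hedged sketch: exhaustive enumeration of BTSSP-T subtours on a toy
# instance. City 0 is the fixed start/end; all data are illustrative.
from itertools import combinations, permutations

dist = [[0, 4, 6, 5], [4, 0, 3, 7], [6, 3, 0, 2], [5, 7, 2, 0]]
time = [[0, 2, 3, 2], [2, 0, 1, 3], [3, 1, 0, 1], [2, 3, 1, 0]]
gain = [0, 8, 5, 9]          # gain collected at each visited city
T = 7                        # time threshold

best = None                  # (distance, gain, tour), lexicographic objective
for r in range(1, 4):
    for subset in combinations([1, 2, 3], r):
        for order in permutations(subset):
            tour = (0,) + order + (0,)
            d = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            t = sum(time[a][b] for a, b in zip(tour, tour[1:]))
            g = sum(gain[c] for c in order)
            if t <= T and (best is None or (d, -g) < (best[0], -best[1])):
                best = (d, g, tour)

print(f"best subtour {best[2]}: distance={best[0]}, gain={best[1]}")
```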
Strahm, E; Emery, C; Saugy, M; Dvorak, J; Saudan, C
2009-01-01
Background and objectives: The determination of the carbon isotope ratio in androgen metabolites has been previously shown to be a reliable, direct method to detect testosterone misuse in the context of antidoping testing. Here, the variability in the 13C/12C ratios in urinary steroids in a widely heterogeneous cohort of professional soccer players residing in different countries (Argentina, Italy, Japan, South Africa, Switzerland and Uganda) is examined. Methods: Carbon isotope ratios of selected androgens in urine specimens were determined using gas chromatography/combustion/isotope ratio mass spectrometry (GC-C-IRMS). Results: Urinary steroids in Italian and Swiss populations were found to be enriched in 13C relative to other groups, reflecting higher consumption of C3 plants in these two countries. Importantly, detection criteria based on the difference in the carbon isotope ratio of androsterone and pregnanediol for each population were found to be well below the established threshold value for positive cases. Conclusions: The results obtained with the tested diet groups highlight the importance of adapting the criteria if one wishes to increase the sensitivity of exogenous testosterone detection. In addition, confirmatory tests might be rendered more efficient by combining isotope ratio mass spectrometry with refined interpretation criteria for positivity and subject-based profiling of steroids. PMID:19549614
Thresholds for the cost-effectiveness of interventions: alternative approaches.
Marseille, Elliot; Larson, Bruce; Kazi, Dhruv S; Kahn, James G; Rosen, Sydney
2015-02-01
Many countries use the cost-effectiveness thresholds recommended by the World Health Organization's Choosing Interventions that are Cost-Effective project (WHO-CHOICE) when evaluating health interventions. This project sets the threshold for cost-effectiveness at a cost per disability-adjusted life-year (DALY) averted of less than three times the country's annual gross domestic product (GDP) per capita. Highly cost-effective interventions are defined as meeting a threshold per DALY averted of one times the annual GDP per capita. We argue that reliance on these thresholds reduces the value of cost-effectiveness analyses and makes such analyses too blunt to be useful for most decision-making in the field of public health. Use of these thresholds has little theoretical justification, skirts the difficult but necessary ranking of the relative values of locally-applicable interventions and omits any consideration of what is truly affordable. The WHO-CHOICE thresholds set such a low bar for cost-effectiveness that very few interventions with evidence of efficacy can be ruled out. The thresholds have little value in assessing the trade-offs that decision-makers must confront. We present alternative approaches for applying cost-effectiveness criteria to choices in the allocation of health-care resources.
Marin-Oyaga, Victor A; Salavati, Ali; Houshmand, Sina; Pasha, Ahmed Khurshid; Gharavi, Mohammad; Saboury, Babak; Basu, Sandip; Torigian, Drew A; Alavi, Abass
2015-01-01
Treatment of malignant pleural mesothelioma (MPM) remains very challenging. Assessment of response to treatment is necessary for modifying treatment and using new drugs. Global disease assessment (GDA), implementing image processing methods to extract more information out of positron emission tomography (PET) images, may provide reliable information. In this study we show the feasibility of this method of semi-quantification in patients with mesothelioma, and compare it with the conventional methods. We also present a review of the literature about this topic. Nineteen subjects with histologically proven MPM who had undergone fluorine-18-fluorodeoxyglucose PET/computed tomography ((18)F-FDG PET/CT) before and after treatment were included in this study. An adaptive contrast-oriented thresholding algorithm was used for image analysis and semi-quantification. Metabolic tumor volume (MTV), maximum and mean standardized uptake values (SUVmax, SUVmean) and total lesion glycolysis (TLG) were calculated for each region of interest. The global tumor glycolysis (GTG) was obtained by summing all TLG values. Treatment response was assessed by the European Organisation for Research and Treatment of Cancer (EORTC) criteria and the changes in GTG. Agreement between the global disease assessment and the conventional method was also determined. In patients with progressive disease based on EORTC criteria, GTG showed an increase of 150.7, but in patients with stable disease or partial response, GTG showed a decrease of 433.1. The SUVmax of patients before treatment was 5.95 (SD: 2.93) and after treatment it increased to 6.38 (SD: 3.19). Overall concordance of the conventional method with the GDA method was 57%. Concordance for progression of disease based on the conventional method was 44%, for stable disease 85%, and for partial response 33%; discordance was 55%, 14% and 66%, respectively. The adaptive contrast-oriented thresholding algorithm is a promising method to quantify whole tumor glycolysis in patients with mesothelioma. It allows assessment of the total metabolic lesion volume, lesion glycolysis, SUVmax, tumor SUVmean and GTG for this particular tumor. We were also able to demonstrate the potential use of this technique in monitoring treatment response. More studies comparing this technique with conventional and other global disease assessment methods are needed in order to clarify its role in the assessment of treatment response and prognosis of these patients.
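A minimal sketch of the semi-quantification arithmetic: per-lesion TLG = MTV × SUVmean, summed into GTG. Lesion values are hypothetical, and the adaptive contrast-oriented segmentation itself is not reproduced:

```python
# Hedged sketch: total lesion glycolysis (TLG) per lesion and global tumor
# glycolysis (GTG) as the sum over lesions. Lesion values are illustrative.
lesions = [
    {"mtv_ml": 120.0, "suv_mean": 3.1},
    {"mtv_ml": 45.0,  "suv_mean": 4.6},
    {"mtv_ml": 210.0, "suv_mean": 2.4},
]

tlg = [l["mtv_ml"] * l["suv_mean"] for l in lesions]   # TLG = MTV x SUVmean
gtg = sum(tlg)                                          # GTG = sum of all TLG
print(f"Per-lesion TLG: {[round(t, 1) for t in tlg]}")
print(f"GTG = {gtg:.1f}")
```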
Spirometry in 3-5-year-old children with asthma.
Nève, Véronique; Edmé, Jean-Louis; Devos, Patrick; Deschildre, Antoine; Thumerelle, Caroline; Santos, Clarisse; Methlin, Catherine-Marie; Matran, Murielle; Matran, Régis
2006-08-01
Spirometry with incentive games was applied to 207 2-5-year-old preschool children (PSC) with asthma in order to refine the quality-control criteria proposed by Aurora et al. (Am J Respir Crit Care Med 2004;169:1152-1159). The data set in our study was much larger compared to that in Aurora et al. (Am J Respir Crit Care Med 2004;169:1152-1159), where 42 children with cystic fibrosis and 37 healthy controls were studied. At least two acceptable maneuvers were obtained in 178 (86%) children. Data were focused on 3-5-year-old children (n = 171). The proportion of children achieving a larger number of thresholds for each quality-control criterion (backward-extrapolated volume (Vbe), Vbe in percent of forced vital capacity (FVC, Vbe/FVC), time-to-peak expiratory flow (time-to-PEF), and difference (Delta) between the two FVCs (DeltaFVC), forced expiratory volume in 1 sec (DeltaFEV(1)), and forced expiratory volume in 0.5 sec (DeltaFEV(0.5)) from the two "best" curves) was calculated, and cumulative plots were obtained. The optimal threshold was determined for all ages by the derivative function of the rate-of-success-threshold curves, close to the inflexion point. The following thresholds were defined for acceptability: Vbe
Detection of Operator Performance Breakdown as an Automation Triggering Mechanism
NASA Technical Reports Server (NTRS)
Yoo, Hyo-Sang; Lee, Paul U.; Landry, Steven J.
2015-01-01
Performance breakdown (PB) has been anecdotally described as a state where the human operator "loses control of context" and "cannot maintain required task performance." Preventing such a decline in performance is critical to assure the safety and reliability of human-integrated systems, and therefore PB could be useful as a point at which automation can be applied to support human performance. However, PB has never been scientifically defined or empirically demonstrated. Moreover, there is no validated objective way of detecting such a state or the transition to that state. The purpose of this work is: 1) to empirically demonstrate a PB state, and 2) to develop an objective way of detecting such a state. This paper defines PB and proposes an objective method for its detection. A human-in-the-loop study was conducted: 1) to demonstrate PB by increasing workload until the subject reported being in a state of PB, 2) to identify possible parameters of a detection method for objectively identifying the subjectively-reported PB point, and 3) to determine if the parameters are idiosyncratic to an individual/context or are more generally applicable. In the experiment, fifteen participants were asked to manage three concurrent tasks (one primary and two secondary) for 18 minutes. The difficulty of the primary task was manipulated over time to induce PB while the difficulty of the secondary tasks remained static. The participants' task performance data were collected. Three hypotheses were constructed: 1) increasing workload will induce subjectively-identified PB, 2) there exist criteria that identify the threshold parameters that best match the subjectively-identified PB point, and 3) the criteria for choosing the threshold parameters are consistent across individuals. The results show that increasing workload can induce subjectively-identified PB, although this might not be generalizable: only 12 of 15 participants declared PB. The PB detection method based on signal detection analysis was applied to the performance data, and the results showed that PB can be identified using the method, particularly when the values of the parameters for the detection method were calibrated individually.
Personally Modifiable Risk Factors Associated with Pediatric Hearing Loss: A Systematic Review
Vasconcellos, Adam P.; Kyle, Meghann E.; Gilani, Sapideh; Shin, Jennifer J.
2015-01-01
Background Pediatric hearing loss is an increasingly recognized problem with significant implications. Increasing our quantitative understanding of potentially modifiable environmental risk factors for hearing loss may form the foundation for prevention and screening programs. Objective To determine whether specific threshold exposure levels of personally modifiable risk factors for hearing loss have been defined, with the overarching goal of providing actionable guidance for the prevention of pediatric hearing loss. Data Sources A systematic review was performed. Computerized searches of PubMed, EMBASE, and the Cochrane Library were completed and supplemented with manual searches. Review Methods Inclusion/exclusion criteria were designed to determine specific threshold values of personally modifiable risk factors on hearing loss in the pediatric population. Searches and data extraction were performed by independent reviewers. Results There were 38 criterion-meeting studies, including a total of 50,651 subjects. Threshold noise exposures significantly associated with hearing loss in youth included: (1) more than 4 hours per week or more than 5 years of personal headphone usage, (2) more than 4 visits per month to a discotheque, and (3) working on a mechanized farm. Quantified tobacco levels of concern included any level of in utero smoke exposure as well as secondhand exposure sufficient to elevate serum cotinine. Conclusions Specific thresholds analyses are limited. Future studies would ideally focus on stratifying risk according to clearly defined levels of exposure, in order to provide actionable guidance for children and families. PMID:24671457
NASA Astrophysics Data System (ADS)
Touati, Sarah; Naylor, Mark; Main, Ian
2016-02-01
The recent spate of mega-earthquakes since 2004 has led to speculation of an underlying change in the global 'background' rate of large events. At a regional scale, detecting changes in background rate is also an important practical problem for operational forecasting and risk calculation, for example due to volcanic processes, seismicity induced by fluid injection or withdrawal, or due to redistribution of Coulomb stress after natural large events. Here we examine the general problem of detecting changes in background rate in earthquake catalogues with and without correlated events, for the first time using the Bayes factor as a discriminant for models of varying complexity. First we use synthetic Poisson (purely random) and Epidemic-Type Aftershock Sequence (ETAS) models (which also allow for earthquake triggering) to test the effectiveness of many standard methods of addressing this question. These fall into two classes: those that evaluate the relative likelihood of different models, for example using Information Criteria or the Bayes factor; and those that evaluate the probability of the observations (including extreme events or clusters of events) under a single null hypothesis, for example by applying the Kolmogorov-Smirnov and 'runs' tests, and a variety of Z-score tests. The results demonstrate that the effectiveness of these tests varies widely. Information Criteria worked at least as well as the more computationally expensive Bayes factor method, and the Kolmogorov-Smirnov and runs tests proved to be relatively ineffective at reliably detecting a change point. We then apply the methods tested to events at different thresholds above magnitude M ≥ 7 in the global earthquake catalogue since 1918, after first declustering the catalogue. This is most effectively done by removing likely correlated events using a much lower magnitude threshold (M ≥ 5), where triggering is much more obvious. We find no strong evidence that the background rate of large events worldwide has increased in recent years.
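A hedged sketch of one of the model-comparison tools evaluated: an Information Criterion (here AIC) comparing a constant-rate Poisson model against a single-change-point model on simulated annual counts. Rates, counts, and the choice of AIC are illustrative:

```python
# Hedged sketch: AIC comparison of a constant-rate Poisson model vs. a
# single change-point model. Event counts are simulated, not catalogue data.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
# Simulated annual event counts: rate 5/yr for 50 yr, then 8/yr for 50 yr
counts = np.concatenate([rng.poisson(5.0, 50), rng.poisson(8.0, 50)])

def loglik(c):
    """Log-likelihood of counts under their maximum-likelihood Poisson rate."""
    return poisson.logpmf(c, c.mean()).sum()

# Model 1: constant rate (1 free parameter)
aic1 = 2 * 1 - 2 * loglik(counts)

# Model 2: one change point (2 rates + change time = 3 free parameters)
best_ll = max(loglik(counts[:k]) + loglik(counts[k:]) for k in range(5, 95))
aic2 = 2 * 3 - 2 * best_ll

print(f"AIC constant rate: {aic1:.1f} | AIC change point: {aic2:.1f}")
print("change-point model preferred" if aic2 < aic1 else "constant rate preferred")
```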
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.
2008-01-01
In previous work, the ballistic impact resistance of triaxial braided carbon/epoxy composites made with large flat tows (12k and 24k) was examined by impacting 2 × 2 × 0.125" composite panels with gelatin projectiles. Several high strength, intermediate modulus carbon fibers were used in combination with both untoughened and toughened matrix materials. A wide range of penetration thresholds were measured for the various fiber/matrix combinations. However, there was no clear relationship between the penetration threshold and the properties of the constituents. During some of these experiments high speed cameras were used to view the failure process, and full-field strain measurements were made to determine the strain at the onset of failure. However, these experiments provided only limited insight into the microscopic failure processes responsible for the wide range of impact resistance observed. In order to investigate potential microscopic failure processes in more detail, quasi-static tests were performed in tension, compression, and shear. Full-field strain measurement techniques were used to identify local regions of high strain resulting from microscopic failures. Microscopic failure events near the specimen surface, such as splitting of fiber bundles in surface plies, were easily identified. Subsurface damage, such as fiber fracture or fiber bundle splitting, could be identified by its effect on in-plane surface strains. Subsurface delamination could be detected as an out-of-plane deflection at the surface. Using these data, failure criteria could be established at the fiber tow level for use in analysis. An analytical formulation was developed to allow the microscopic failure criteria to be used in place of macroscopic properties as input to simulations performed using the commercial explicit finite element code, LS-DYNA. The test methods developed to investigate microscopic failure will be presented along with methods for determining local failure criteria that can be used in analysis. Results of simulations performed using LS-DYNA will be presented to illustrate the capabilities and limitations for simulating failure during quasi-static deformation and during ballistic impact of large unit cell size triaxial braid composites.
Vogel, M N; Schmücker, S; Maksimovic, O; Hartmann, J; Claussen, C D; Horger, M
2012-01-01
Objectives This study compares tumour response assessment by automated CT volumetry and standard manual measurements regarding the impact on treatment decisions and patient outcome. Methods 58 consecutive patients with 203 pulmonary metastases undergoing baseline and follow-up multirow detector CT (MDCT) under chemotherapy were assessed for response to chemotherapy. Tumour burden of pulmonary target lesions was quantified in three ways: (1) following response evaluation criteria in solid tumours (RECIST); (2) following the volume equivalents of RECIST (i.e. with a threshold of −65/+73%); and (3) using calculated limits for stable disease (SD). For volumetry, calculated limits had been set at ±38% prior to the study by repeated quantification of nodules scanned twice. Results were compared using non-weighted κ-values and were evaluated for their impact on treatment decisions and patient outcome. Results In 15 (17%) of the 58 patients, the results of response assessment were inconsistent with 1 of the 3 methods, which would have had an impact on treatment decisions in 8 (13%). Patient outcome regarding therapy response could be verified in 5 (33%) of the 15 patients with inconsistent measurement results and was consistent with both RECIST and volumetry in 1, with calculated limits in 3 and with none in 1. Diagnosis as to the overall response was consistent with RECIST in six patients, with volumetry in six and with calculated limits in eight cases. There is an impact of different methods for therapy response assessment on treatment decisions. Conclusion A reduction of threshold for SD to ±30–40% of volume change seems reasonable when using volumetry. PMID:22745205
Radiological Determination of Postoperative Cervical Fusion: A Systematic Review.
Rhee, John M; Chapman, Jens R; Norvell, Daniel C; Smith, Justin; Sherry, Ned A; Riew, K Daniel
2015-07-01
Systematic review. To determine the best criteria for radiological determination of postoperative subaxial cervical fusion to be applied to current clinical practice and ongoing future research assessing fusion, to standardize assessment and improve comparability. Despite availability of multiple imaging modalities and criteria, there remains no method of determining cervical fusion with absolute certainty, nor clear consensus on specific criteria to be applied. A systematic search in MEDLINE/Cochrane Collaboration Library (through March 2014) was performed. Included studies assessed C2 to C7 via anterior or posterior approach, at 12 weeks or more postoperative, with any graft or implant. Overall body of evidence with respect to 6 posited key questions was determined using Grading of Recommendations Assessment, Development and Evaluation and Agency for Healthcare Research and Quality precepts. Of plain radiographical modalities, there is moderate evidence that the interspinous process motion method (<1 mm) is more accurate than the Cobb angle method for assessing anterior cervical fusion. Of the advanced imaging modalities, there is moderate evidence that computed tomography (CT) is more accurate and reliable than magnetic resonance imaging in assessing anterior cervical fusion. There is insufficient evidence regarding the optimal modality and criteria for assessing posterior cervical fusions and insufficient evidence to support a single time point after surgery as being optimal for determining fusion, although some evidence suggests that the reliability of radiography and CT improves with increasing time postoperatively. We recommend using less than 1 mm of interspinous process motion as the initial criterion for determining anterior cervical arthrodesis for both clinical and research applications. If further imaging is needed because of indeterminate radiographical evaluation, we recommend CT, which has relatively high accuracy and reliability, but due to greater radiation exposure and cost, it is not routinely suggested. We recommend that plain radiographs also be the initial method of determining posterior cervical fusion but suggest a lower threshold for obtaining CT scans because dynamic radiographs may not be as useful if spinous processes have been removed by laminectomy. Level of evidence: 1.
A comparison of DSM-III-R and ICD-10 personality disorder criteria in an out-patient population.
Sara, G; Raven, P; Mann, A
1996-01-01
This study reports the results of a comparison of DSM-III-R and ICD-10 personality disorder criteria by application of both sets of criteria to the same group of patients. Despite the clinical relevance of these disorders and the need for reliable diagnostic criteria, such a comparison has not previously been reported. DSM-III-R and ICD-10 have converged in their classification of personality disorders, but some important differences between the two systems remain. Personality disorder diagnoses from both systems were obtained in 52 out-patients, using the Standardized Assessment of Personality (SAP), a brief, informant-based interview which yields diagnoses in both DSM-III-R and ICD-10. For individual personality disorder diagnoses, agreement between systems was limited. Thirty-four subjects received a personality disorder diagnosis that had an equivalent form in both systems, but only 10 subjects (29%) received the same primary diagnosis in each system. There was a difference in rate of diagnosis, with ICD-10 making significantly more personality disorder diagnoses. The lower diagnostic threshold of the ICD-10 contributed most of this effect. Further modifications to the personality disorder category in the ICD-10 Diagnostic Criteria for Research (DCR) and DSM-IV have been considered. The omission in DSM-IV of three categories unique to that system and the raising of the threshold in ICD-10 DCR do seem to have been helpful in promoting convergence.
Martin, Thomas J.; Grigg, Amanda; Kim, Susy A.; Ririe, Douglas G.; Eisenach, James C.
2014-01-01
Background The 5 choice serial reaction time task (5CSRTT) is commonly used to assess attention in rodents. We sought to develop a variant of the 5CSRTT that would speed training to objective success criteria, and to test whether this variant could determine attention capability in each subject. New Method Fisher 344 rats were trained to perform a variant of the 5CSRTT in which the duration of visual cue presentation (cue duration) was titrated between trials based upon performance. The cue duration was decreased when the subject made a correct response, or increased with incorrect responses or omissions. Additionally, test day challenges were provided consisting of lengthening the intertrial interval and inclusion of a visual distracting stimulus. Results Rats readily titrated the cue duration to less than 1 sec in 25 training sessions or less (mean ± SEM, 22.9 ± 0.7), and the median cue duration (MCD) was calculated as a measure of attention threshold. Increasing the intertrial interval increased premature responses, decreased the number of trials completed, and increased the MCD. Decreasing the intertrial interval and time allotted for consuming the food reward demonstrated that a minimum of 3.5 sec is required for rats to consume two food pellets and successfully attend to the next trial. Visual distraction in the form of a 3 Hz flashing light increased the MCD and both premature and time out responses. Comparison with existing method The titration variant of the 5CSRTT is a useful method that dynamically measures attention threshold across a wide range of subject performance, and significantly decreases the time required for training. Task challenges produce similar effects in the titration method as reported for the classical procedure. Conclusions The titration 5CSRTT method is an efficient training procedure for assessing attention and can be utilized to assess the limit in performance ability across subjects and various schedule manipulations. PMID:25528113
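A minimal simulation sketch of the titration rule described above (shorten the cue after a correct response, lengthen it after an incorrect response or omission, then take the median cue duration as the attention-threshold estimate); the psychometric function, step size, and trial counts below are assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_correct(cue_dur, threshold=0.8, slope=4.0, guess=0.2):
    """Assumed psychometric function: probability of a correct response
    as a function of cue duration (s). Not from the paper."""
    return guess + (1 - guess) / (1 + np.exp(-slope * (cue_dur - threshold)))

def run_session(n_trials=100, start=5.0, step=0.1, floor=0.1, ceiling=10.0):
    cue = start
    history = []
    for _ in range(n_trials):
        correct = rng.random() < p_correct(cue)
        history.append(cue)
        # Titration rule: shorten the cue after a correct response,
        # lengthen it after an incorrect response or omission.
        cue = max(floor, cue - step) if correct else min(ceiling, cue + step)
    return np.array(history)

cues = run_session()
mcd = np.median(cues[-50:])  # median cue duration over the stable tail
print(f"median cue duration (attention threshold estimate): {mcd:.2f} s")
```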
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-09
...: allopathic and osteopathic medicine; pharmacy; dentistry; and behavioral or mental health. Individual schools... = 1.0 percent. ``Other'' COE graduation rate eligibility threshold = 14.1 percent. DENTISTRY (Doctors...
Measures of Groundwater Drought from the Long-term Monitoring Data in Korea
NASA Astrophysics Data System (ADS)
Chung, E.; Park, J.; Woo, N. C.
2017-12-01
Recently, drought has increased in severity and frequency in Korea along with climate change. There are several criteria for issuing drought alerts, for instance, based on the number of no-rainfall days, the amount of stream discharge, and the water levels of reservoirs. However, farmers who depend on groundwater still suffer when preparing for drought, especially in spring. As no-rainfall days continue, groundwater exploitation increases, the water table declines, stream discharge decreases, and the effects of drought become serious. Thus, a drought index based on groundwater levels is needed for preparedness against drought disasters. Palmer et al. (1965, USGS) proposed a method to set the threshold for the decline of the groundwater level in five stages based on daily water-level data over the last 30 years. In this study, following Peters et al. (2003), the threshold groundwater level was estimated using daily water-level data at five sites in Korea with significant drought experience. Water levels and precipitation data were obtained from the national groundwater monitoring wells and the automatic weather stations, respectively, for the 10 years from 2005 to 2014. From the water-level changes, the threshold was calculated such that the drought criterion (c), the ratio of the deficit below the threshold to the deficit below the average, equals 0.3. As a result, monthly drought days were high in 2009 and 2011 in Uiryeong, and from 2005 to 2008 in Boeun. The validity of the approach and the threshold can be evaluated by comparing calculated monthly drought days with droughts recorded in the past. Through groundwater drought research, it is expected that not only surface-water but also groundwater resource management can be implemented more efficiently to overcome drought disasters.
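A minimal sketch of the deficit-ratio criterion described above: the threshold level is chosen so that the deficit below the threshold equals c = 0.3 times the deficit below the average. The bisection search and the synthetic daily water levels are illustrative assumptions.

```python
import numpy as np

def deficit_below(levels, ref):
    """Cumulative deficit of water level below a reference value."""
    return np.sum(np.maximum(ref - levels, 0.0))

def drought_threshold(levels, c=0.3):
    """Find the level T (below the mean) such that
    deficit_below(T) / deficit_below(mean) = c, by bisection."""
    mean = levels.mean()
    target = c * deficit_below(levels, mean)
    lo, hi = levels.min(), mean  # deficit grows monotonically with ref
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if deficit_below(levels, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic 10 years of daily water levels (m above datum).
rng = np.random.default_rng(2)
t = np.arange(3650)
levels = 10 + 0.5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.2, t.size)

T = drought_threshold(levels, c=0.3)
drought_days = int(np.sum(levels < T))
print(f"threshold = {T:.2f} m, drought days = {drought_days}")
```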
Towards a Delamination Fatigue Methodology for Composite Materials
NASA Technical Reports Server (NTRS)
OBrien, Thomas K.
2007-01-01
A methodology that accounts for both delamination onset and growth in composite structural components is proposed for improved fatigue life prediction to reduce life cycle costs and improve accept/reject criteria for manufacturing flaws. The benefits of using a Delamination Onset Threshold (DOT) approach in combination with a Modified Damage Tolerance (MDT) approach are highlighted. The use of this combined approach to establish accept/reject criteria, requiring less conservative initial manufacturing flaw sizes, is illustrated.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not competent enough to fit the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, which fits the facts that rail images are unimodal and that the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms the other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.
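The abstract does not spell out MWOC's object-correlation term, so the sketch below shows only the generic structure shared by MWOC and the comparison methods: scan all gray levels and keep the threshold maximizing a plug-in criterion. Otsu's between-class variance stands in as the criterion here; substituting a weighted object-correlation term would give the MWOC variant.

```python
import numpy as np

def best_threshold(image, criterion):
    """Scan all gray levels and return the threshold maximizing `criterion`,
    the generic structure shared by Otsu, MWOC, valley-emphasis, etc."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    scores = [criterion(p, t) for t in range(1, 255)]
    return 1 + int(np.argmax(scores))

def otsu_criterion(p, t):
    """Between-class variance (Otsu), used here as an example criterion."""
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return -np.inf
    levels = np.arange(p.size)
    mu0 = (levels[:t] * p[:t]).sum() / w0
    mu1 = (levels[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

# Synthetic unimodal "rail" image with a small dark defect region.
rng = np.random.default_rng(3)
img = rng.normal(170, 12, (128, 128)).clip(0, 255)
img[60:68, 60:68] -= 90  # defect: small and dark, as in rail images
print("selected threshold:", best_threshold(img.clip(0, 255), otsu_criterion))
```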
Al-Badriyeh, Daoud; Fahey, Michael; Alabbadi, Ibrahim; Al-Khal, Abdullatif; Zaidan, Manal
2015-12-01
Statin selection for the largest hospital formulary in Qatar is not systematic, not comparative, and does not consider the multi-indication nature of statins. There are no reports in the literature of multi-indication-based comparative scoring models of statins or of statin selection criteria weights that are based primarily on local clinicians' preferences and experiences. This study sought to comparatively evaluate statins for first-line therapy in Qatar, and to quantify the economic impact of this. An evidence-based, multi-indication, multi-criteria pharmacotherapeutic model was developed for the scoring of statins from the perspective of the main health care provider in Qatar. The literature and an expert panel informed the selection criteria of statins. Relative weighting of selection criteria was based on the input of the relevant local clinician population. Statins were comparatively scored based on literature evidence, with those exceeding a defined scoring threshold being recommended for use. The scoring model was successfully developed with a 95% CI and a 5% margin of error. Selection criteria comprised 28 subcriteria under the following main criteria: clinical efficacy, best published evidence and experience, adverse effects, drug interactions, dosing time, and fixed-dose combination availability. Outcome measures for multiple indications were related to effects on LDL cholesterol, HDL cholesterol, triglycerides, total cholesterol, and C-reactive protein. Atorvastatin, pravastatin, and rosuvastatin exceeded the defined pharmacotherapeutic thresholds. Atorvastatin and pravastatin were recommended for first-line use and rosuvastatin as a nonformulary alternative. It was estimated that this would produce a 17.6% cost savings in statin expenditure. Sensitivity analyses confirmed the robustness of the evaluation's outcomes against input uncertainties. Incorporating a comparative evaluation of statins in Qatari practice based on a locally developed, transparent, multi-indication, multi-criteria scoring model has the potential to considerably reduce expenditures on statins. Atorvastatin and pravastatin should be the first-line statin therapies at the main Qatari health care provider, with rosuvastatin as an alternative. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
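A minimal sketch of the kind of weighted multi-criteria scoring described above; the weights, sub-scores, and cutoff below are invented for illustration and are not the study's values.

```python
# Illustrative weighted multi-criteria scoring of formulary candidates;
# all numbers below are hypothetical, not the study's data.
criteria_weights = {
    "clinical_efficacy": 0.35, "published_evidence": 0.20,
    "adverse_effects": 0.15, "drug_interactions": 0.15,
    "dosing_time": 0.10, "fixed_dose_combo": 0.05,
}
# Sub-scores on a 0-10 scale, one row per statin (hypothetical numbers).
statin_scores = {
    "atorvastatin": [9, 9, 7, 6, 8, 9],
    "pravastatin":  [7, 8, 9, 9, 8, 3],
    "rosuvastatin": [9, 7, 7, 7, 8, 2],
}
threshold = 7.0  # minimum weighted score for a first-line recommendation

weights = list(criteria_weights.values())
for statin, scores in statin_scores.items():
    total = sum(w * s for w, s in zip(weights, scores))
    verdict = "first-line" if total >= threshold else "non-formulary"
    print(f"{statin:13s} score={total:.2f} -> {verdict}")
```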
NASA Astrophysics Data System (ADS)
Marchetti, E.; Ripepe, M.; Ulivieri, G.; Kogelnig, A.
2015-11-01
Avalanche risk management is strongly related to the ability to identify and report the occurrence of snow avalanches in a timely manner. Infrasound has been applied to avalanche research and monitoring for the last 20 years, but it has never turned into an operational tool for identifying clear signals related to avalanches. We present here a method, based on the analysis of infrasound signals recorded by a small-aperture array in Ischgl (Austria), which provides a significant improvement to overcome this limit. The method is based on array-derived wave parameters, such as back azimuth and apparent velocity, and defines threshold criteria for automatic avalanche identification by considering avalanches as a moving source of infrasound. We validate the efficiency of the automatic infrasound detection against continuous Doppler radar observations and show how the velocity of a snow avalanche in any given path around the array can be efficiently derived. Our results indicate that a proper infrasound array analysis allows a robust, real-time, remote detection of snow avalanches that is able to provide the number and the time of occurrence of snow avalanches occurring all around the array, which represent key information for a proper validation of avalanche forecast models and risk management in a given area.
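A minimal sketch of the array-derived wave parameters mentioned above (back azimuth and apparent velocity), obtained by a plane-wave least-squares fit to inter-sensor delays; the array geometry and delays are synthetic, and in practice the delays come from waveform cross-correlation.

```python
import numpy as np

def plane_wave_fit(coords, delays):
    """Least-squares slowness vector s from inter-sensor delays,
    assuming a plane wave: delay_i = s . (r_i - r_0)."""
    dr = coords[1:] - coords[0]           # offsets relative to sensor 0
    s, *_ = np.linalg.lstsq(dr, delays, rcond=None)
    app_vel = 1.0 / np.linalg.norm(s)     # apparent velocity (m/s)
    back_az = np.degrees(np.arctan2(-s[0], -s[1])) % 360  # clockwise from N
    return back_az, app_vel

# Small-aperture array (m, x=East/y=North) and delays (s) for a wave
# arriving from azimuth 60 deg at 340 m/s.
coords = np.array([[0, 0], [100, 0], [0, 100], [70, 70]], float)
az_true = np.radians(60.0)
s_true = -np.array([np.sin(az_true), np.cos(az_true)]) / 340.0
delays = (coords[1:] - coords[0]) @ s_true

back_az, app_vel = plane_wave_fit(coords, delays)
print(f"back azimuth = {back_az:.1f} deg, apparent velocity = {app_vel:.0f} m/s")
# An avalanche shows up as a back azimuth that drifts smoothly in time:
# the moving-source signature the detection criteria exploit.
```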
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic-signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
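The paper's hierarchical threshold function is not reproduced here; the sketch below shows the standard Stein-unbiased-risk (SURE) selection of a soft threshold for a single coefficient layer, which is the building block such layer-wise selection rests on. Unit noise variance is assumed.

```python
import numpy as np

def sure_soft(coeffs, t):
    """Stein's unbiased risk estimate for soft thresholding at t,
    assuming unit noise variance."""
    a = np.abs(coeffs)
    return (coeffs.size
            - 2.0 * np.sum(a <= t)
            + np.sum(np.minimum(a, t) ** 2))

def sure_threshold(coeffs):
    """Pick the threshold minimizing the SURE risk; applied layer by
    layer, this is the spirit of a hierarchical (level-dependent)
    threshold selection."""
    candidates = np.sort(np.abs(coeffs))
    risks = [sure_soft(coeffs, t) for t in candidates]
    return candidates[int(np.argmin(risks))]

rng = np.random.default_rng(4)
# Sparse "signal" coefficients plus unit-variance noise, one layer.
clean = np.zeros(512); clean[:20] = rng.uniform(3, 6, 20)
noisy = clean + rng.normal(0, 1, 512)

t = sure_threshold(noisy)
denoised = np.sign(noisy) * np.maximum(np.abs(noisy) - t, 0.0)
print(f"SURE threshold = {t:.3f}")
```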
Tang, Jing; Zheng, Jianbin; Wang, Yang; Yu, Lie; Zhan, Enqi; Song, Qiuzhi
2018-02-06
This paper presents a novel methodology for detecting the gait phase of human walking on level ground. The previous threshold method (TM) sets a threshold to divide the ground contact forces (GCFs) into on-ground and off-ground states. However, the previous methods for gait phase detection demonstrate no adaptability to different people and different walking speeds. Therefore, this paper presents a self-tuning triple threshold algorithm (STTTA) that calculates adjustable thresholds to adapt to human walking. Two force sensitive resistors (FSRs) were placed on the ball and heel to measure GCFs. Three thresholds (i.e., high-threshold, middle-threshold and low-threshold) were used to search out the maximum and minimum GCFs for the self-adjustment of the thresholds. The high-threshold was the main threshold used to divide the GCFs into on-ground and off-ground statuses. Then, the gait phases were obtained through the gait phase detection algorithm (GPDA), which provides the rules that determine the calculations for the STTTA. Finally, the reliability of the STTTA is determined by comparing its results with those of the Mariani method (referenced as the timing analysis module, TAM) and the Lopez-Meyer method. Experimental results show that the proposed method can be used to detect gait phases in real time and obtains high reliability when compared with the previous methods in the literature. In addition, the proposed method exhibits strong adaptability to different wearers walking at different walking speeds.
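A minimal sketch of the self-tuning idea: the on/off-ground threshold is continually re-derived from the running maximum and minimum of the GCF signal. The fraction and window length are assumptions, and a sliding-window max/min stands in for the low/middle-threshold search described in the paper.

```python
import numpy as np

def self_tuning_contact(gcf, frac_high=0.5, window=200):
    """On/off-ground detection with the main (high) threshold re-derived
    from the running max/min of the ground contact force (GCF).
    In the paper, the low and middle thresholds guide the max/min search;
    here a sliding-window max/min stands in for that search. frac_high
    is an illustrative fraction, not the paper's tuned value."""
    on_ground = np.zeros(gcf.size, dtype=bool)
    for i in range(gcf.size):
        seg = gcf[max(0, i - window):i + 1]
        lo, hi = seg.min(), seg.max()
        high = lo + frac_high * max(hi - lo, 1e-9)  # adaptive threshold
        on_ground[i] = gcf[i] > high
    return on_ground

# Synthetic FSR signal: periodic stance (high force) and swing (near zero).
t = np.linspace(0, 5, 1000)
gcf = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0) * 400  # Newtons
phase = self_tuning_contact(gcf)
print("stance fraction:", phase.mean().round(2))
```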
75 FR 76729 - Market Access Agreement
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-09
... falls below pre-established financial performance thresholds. The draft amendment (MAA Amendment) is... System banks and the Funding Corporation that establishes certain financial performance criteria. Under... Agreement). FOR FURTHER INFORMATION CONTACT: Chris Wilson, Financial Analyst, Office of Regulatory Policy...
The human, primate and rabbit ultraviolet action spectra
NASA Technical Reports Server (NTRS)
Pitts, D. G.; Gibbons, W. D.
1972-01-01
A 5000 watt xenon-mercury high pressure lamp was used to produce a continuous ultraviolet spectrum. Human and animal exposures were made to establish the photokeratitis threshold and abiotic action spectrum. The lower limit of the abiotic action spectrum was 220 nm while the upper limit was 310 nm. The radiant exposure threshold at 270 nm was 0.005 watts/sq cm for the rabbit, 0.004 watts/sq cm for the primate, and 0.004 watts/sq cm for the human. The rabbit curve was bi-peaked with minima at 220 nm, 240 nm and 270 nm. The primate curve was tri-peaked with minima at 220 nm, 240 nm and 270 nm. The human data showed a rather shallow curve with a minimum at 270 nm. Formulas and calculations are given to predict minimum exposure times for ocular damage to man in outer space, to establish valid safety criteria, and to establish protective design criteria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukada, Junichi, E-mail: fukada@rad.med.keio.ac.jp; Shigematsu, Naoyuki; Takeuchi, Hiroya
Purpose: We investigated clinical and treatment-related factors as predictors of symptomatic pericardial effusion in esophageal cancer patients after concurrent chemoradiation therapy. Methods and Materials: We reviewed 214 consecutive primary esophageal cancer patients treated with concurrent chemoradiation therapy between 2001 and 2010 in our institute. Pericardial effusion was detected on follow-up computed tomography. Symptomatic effusion was defined as effusion ≥grade 3 according to Common Terminology Criteria for Adverse Events v4.0 criteria. Percent volume irradiated with 5 to 65 Gy (V5-V65) and mean dose to the pericardium were evaluated employing dose-volume histograms. To evaluate dosimetry for patients treated with two-dimensional planning in the earlier period (2001-2005), computed tomography data at diagnosis were transferred to a treatment planning system to reconstruct three-dimensional plans without modification. Optimal dosimetric thresholds for symptomatic pericardial effusion were calculated by receiver operating characteristic curves. Associated clinical and treatment-related risk factors for symptomatic pericardial effusion were detected by univariate and multivariate analyses. Results: The median follow-up was 29 (range, 6-121) months for the 167 eligible patients. Symptomatic pericardial effusion was observed in 14 (8.4%) patients. Dosimetric analyses revealed that average values of V30 to V45 for the pericardium and mean pericardial doses were significantly higher in patients with symptomatic pericardial effusion than in those with asymptomatic pericardial effusion (P<.05). Pericardial V5 to V55 and mean pericardial doses were significantly higher in patients with symptomatic pericardial effusion than in those without pericardial effusion (P<.001). A mean pericardial dose of 36.5 Gy and V45 of 58% were selected as optimal cutoff values for predicting symptomatic pericardial effusion. Multivariate analysis identified mean pericardial dose as the strongest risk factor for symptomatic pericardial effusion. Conclusions: Dose-volume thresholds for the pericardium facilitate predicting symptomatic pericardial effusion. Mean pericardial dose was selected based not only on the optimal dose-volume threshold but also on being the most significant risk factor for symptomatic pericardial effusion.
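A minimal sketch of selecting a dose cutoff from an ROC curve, here via the Youden index on synthetic dose distributions; the study's actual cutoff-selection rule may differ, and the numbers below are not study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
# Synthetic mean pericardial doses (Gy) for patients with and without
# symptomatic pericardial effusion (illustrative distributions only).
dose_no_pe = rng.normal(28, 6, 150)
dose_pe = rng.normal(40, 5, 14)

y = np.r_[np.zeros(dose_no_pe.size), np.ones(dose_pe.size)]
scores = np.r_[dose_no_pe, dose_pe]

fpr, tpr, thresholds = roc_curve(y, scores)
youden = tpr - fpr                       # Youden's J at each candidate cutoff
best = int(np.argmax(youden))
print(f"optimal dose cutoff ~ {thresholds[best]:.1f} Gy "
      f"(sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f})")
```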
Meta‐analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
Deeks, J.J.; Martin, E.C.; Riley, R.D.
2017-01-01
Introduction For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta‐analysis at each threshold. A standard meta‐analysis (no imputation [NI]) ignores such missing data. A single imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method that performs multiple imputation of the missing threshold results using discrete combinations (MIDC). Methods The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations which lie between the results for 2 known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI, and MIDC approaches via simulation. Results Both imputation methods outperform the NI method in simulations. There was generally little difference in the SI and MIDC methods, but the latter was noticeably better in terms of estimating the between‐study variances and generally gave better coverage, due to slightly larger standard errors of pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary receiver operating characteristic curve. Simulations demonstrate the imputation methods rely on an equal threshold spacing assumption. A real example is presented. Conclusions The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta‐analysis of test accuracy studies. PMID:29052347
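The MIDC imputation step itself is study-specific, but the pooling step uses standard Rubin's rules; a minimal sketch, with invented per-imputation estimates:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m imputed-analysis results: point estimate, plus total
    variance combining within- (W) and between-imputation (B) parts."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = estimates.size
    q_bar = estimates.mean()                 # pooled estimate
    w = variances.mean()                     # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    total_var = w + (1 + 1 / m) * b
    return q_bar, total_var

# Example: pooling a logit-sensitivity at one threshold over m = 5
# imputations (hypothetical numbers).
est = [1.10, 1.18, 1.05, 1.22, 1.12]
var = [0.020, 0.022, 0.019, 0.024, 0.021]
q, v = rubins_rules(est, var)
print(f"pooled estimate = {q:.3f}, standard error = {np.sqrt(v):.3f}")
```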
Király, Orsolya; Sleczka, Pawel; Pontes, Halley M; Urbán, Róbert; Griffiths, Mark D; Demetrovics, Zsolt
2017-01-01
The inclusion of Internet Gaming Disorder (IGD) in the DSM-5 (Section 3) has given rise to much scholarly debate regarding the proposed criteria and their operationalization. The present study's aim was threefold: to (i) develop and validate a brief psychometric instrument (Ten-Item Internet Gaming Disorder Test; IGDT-10) to assess IGD using the definitions suggested in DSM-5, (ii) contribute to the ongoing debate regarding the usefulness and validity of each of the nine IGD criteria (using Item Response Theory [IRT]), and (iii) investigate the cut-off threshold suggested in the DSM-5. An online sample of 4887 gamers (age range 14-64 years, mean age 22.2 years [SD=6.4], 92.5% male) was collected through Facebook and a gaming-related website with the cooperation of a popular Hungarian gaming magazine. A shopping voucher worth approximately 300 Euros was raffled among participants as a lottery incentive to boost participation. Confirmatory factor analysis and a structural regression model were used to test the psychometric properties of the IGDT-10, and IRT analysis was conducted to test the measurement performance of the nine IGD criteria. Finally, Latent Class Analysis along with sensitivity and specificity analysis were used to investigate the cut-off threshold proposed in the DSM-5. Analysis supported the IGDT-10's validity, reliability, and suitability for use in future research. Findings of the IRT analysis suggest IGD is manifested through a different set of symptoms depending on the severity of the disorder. More specifically, "continuation", "preoccupation", "negative consequences" and "escape" were associated with lower severity of IGD, while "tolerance", "loss of control", "giving up other activities" and "deception" were associated with more severe levels. "Preoccupation" and "escape" provided very little information for the estimation of IGD severity. Finally, the threshold suggested in the DSM-5 appeared to be supported by our statistical analyses. The IGDT-10 is a valid and reliable instrument to assess IGD as proposed in the DSM-5. Apparently the nine criteria do not explain IGD in the same way, suggesting that additional studies are needed to assess the characteristics and intricacies of each criterion and how they account for IGD. Copyright © 2015 Elsevier Ltd. All rights reserved.
Brightbill, Robin A.; Limbeck, Robert; Silldorff, Erik; Eggleston, Heather L.
2011-01-01
The Delaware River Basin Commission is charged with establishing water-quality objectives for the tidal and non-tidal portions of the Delaware River, which include developing nutrient standards that are scientifically defensible. The U.S. Geological Survey, in cooperation with the Delaware River Basin Commission and the Academy of Natural Sciences, studied the effects of nutrient enrichment in the upper, middle, and lower sections of the non-tidal Delaware River. Algal samples were collected from the natural habitat using rock scrapes and from the artificial nutrient enrichment samplers, Matlock periphytometers. The knowledge gained from this study is to be used in helping determine appropriate nutrient criteria for the Delaware River in the oligotrophic, mesotrophic, and eutrophic sections of the river and is a first step toward gathering data that can be used in selecting nutrient effect levels or criteria thresholds for aquatic-life use protection. This report describes the methods for data collection and presents the data collected as part of this study.
NASA Astrophysics Data System (ADS)
Bijl, Piet
2016-10-01
When acquiring a new imaging system for which operational task performance is a critical factor for success, it is necessary to specify minimum acceptance requirements that need to be met, using a sensor performance model and/or performance tests. Currently, there exists a variety of models and tests of different origins (defense, security, road safety, optometry), and they all make different predictions. This study reviews a number of frequently used methods and shows the effects that small changes in procedure or threshold criteria can have on the outcome of a test. For example, a system may meet the acceptance requirements but not satisfy the needs of the operational task, or the choice of test may determine the rank order of candidate sensors. The goal of the paper is to make people aware of the pitfalls associated with the acquisition process, by i) illustrating potential tricks to have a system accepted that is actually not suited for the operational task, and ii) providing tips to avoid this unwanted situation.
Albanese, Mark A; Farrell, Philip; Dottl, Susan L
2005-01-01
Using Medical College Admission Test-grade point average (MCAT-GPA) scores as a threshold has the potential to address issues raised in recent Supreme Court cases, but it introduces complicated methodological issues for medical school admissions. To assess various statistical indexes to determine optimally discriminating thresholds for MCAT-GPA scores. Entering classes from 1992 through 1998 (N = 752) were used to develop guidelines for cut scores that optimize discrimination between students who pass and do not pass the United States Medical Licensing Examination (USMLE) Step 1 on the first attempt. Risk differences, odds ratios, sensitivity, and specificity discriminated best for setting thresholds. Compensatory versus noncompensatory procedures both accounted for 54% of Step 1 failures but demanded different performance requirements (noncompensatory: MCAT biological sciences = 8, physical sciences = 7, verbal reasoning = 7, sum of scores = 22; compensatory: MCAT total = 24). Rational and defensible intellectual achievement thresholds that are likely to comply with recent Supreme Court decisions can be set from MCAT scores and GPAs.
Wang, Xinchen; Tucker, Nathan R; Rizki, Gizem; Mills, Robert; Krijger, Peter HL; de Wit, Elzo; Subramanian, Vidya; Bartell, Eric; Nguyen, Xinh-Xinh; Ye, Jiangchuan; Leyton-Mange, Jordan; Dolmatova, Elena V; van der Harst, Pim; de Laat, Wouter; Ellinor, Patrick T; Newton-Cheh, Christopher; Milan, David J; Kellis, Manolis; Boyer, Laurie A
2016-01-01
Genetic variants identified by genome-wide association studies explain only a modest proportion of heritability, suggesting that meaningful associations lie 'hidden' below current thresholds. Here, we integrate information from association studies with epigenomic maps to demonstrate that enhancers significantly overlap known loci associated with the cardiac QT interval and QRS duration. We apply functional criteria to identify loci associated with QT interval that do not meet genome-wide significance and are missed by existing studies. We demonstrate that these 'sub-threshold' signals represent novel loci, and that epigenomic maps are effective at discriminating true biological signals from noise. We experimentally validate the molecular, gene-regulatory, cellular and organismal phenotypes of these sub-threshold loci, demonstrating that most sub-threshold loci have regulatory consequences and that genetic perturbation of nearby genes causes cardiac phenotypes in mouse. Our work provides a general approach for improving the detection of novel loci associated with complex human traits. DOI: http://dx.doi.org/10.7554/eLife.10557.001 PMID:27162171
Ugaz, Ana G; Boyd, C. Trenton; Croft, Vicki F; Carrigan, Esther E; Anderson, Katherine M
2010-01-01
Objective: This paper presents the methods and results of a study designed to produce the third edition of the “Basic List of Veterinary Medical Serials,” which was established by the Veterinary Medical Libraries Section in 1976 and last updated in 1986. Methods: A set of 238 titles were evaluated using a decision matrix in order to systematically assign points for both objective and subjective criteria and determine an overall score for each journal. Criteria included: coverage in four major indexes, scholarly impact rank as tracked in two sources, identification as a recommended journal in preparing for specialty board examinations, and a veterinary librarian survey rating. Results: Of the 238 titles considered, a minimum scoring threshold determined the 123 (52%) journals that constituted the final list. The 36 subject categories represented on the list include general and specialty disciplines in veterinary medicine. A ranked list of journals and a list by subject category were produced. Conclusion: Serials appearing on the third edition of the “Basic List of Veterinary Medical Serials” met expanded objective measures of quality and impact as well as subjective perceptions of value by both librarians and veterinary practitioners. PMID:20936066
Trends in Medicare Part D Medication Therapy Management Eligibility Criteria
Wang, Junling; Shih, Ya-Chen Tina; Qin, Yolanda; Young, Theo; Thomas, Zachary; Spivey, Christina A.; Solomon, David K.; Chisholm-Burns, Marie
2015-01-01
Background To increase the enrollment rate of medication therapy management (MTM) programs in Medicare Part D plans, the US Centers for Medicare & Medicaid Services (CMS) lowered the allowable eligibility thresholds based on the number of chronic diseases and Part D drugs for Medicare Part D plans for 2010 and after. However, an increase in MTM enrollment rates has not been realized. Objectives To describe trends in MTM eligibility thresholds used by Medicare Part D plans and to identify patterns that may hinder enrollment in MTM programs. Methods This study analyzed data extracted from the Medicare Part D MTM Programs Fact Sheets (2008–2014). The annual percentages of utilizing each threshold value of the number of chronic diseases and Part D drugs, as well as other aspects of MTM enrollment practices, were analyzed among Medicare MTM programs that were established by Medicare Part D plans. Results For 2010 and after, increased proportions of Medicare Part D plans set their eligibility thresholds at the maximum numbers allowable. For example, in 2008, 48.7% of Medicare Part D plans (n = 347 of 712) opened MTM enrollment to Medicare beneficiaries with only 2 chronic disease states (specific diseases varied between plans), whereas the other half restricted enrollment to patients with a minimum of 3 to 5 chronic disease states. After 2010, only approximately 20% of plans opened their MTM enrollment to patients with 2 chronic disease states, with the remaining 80% restricting enrollment to patients with 3 or more chronic diseases. Conclusion The policy change by CMS for 2010 and after is associated with increased proportions of plans setting their MTM eligibility thresholds at the maximum numbers allowable. Changes to the eligibility thresholds by Medicare Part D plans might have acted as a barrier to increased MTM enrollment. Thus, CMS may need to identify alternative strategies to increase MTM enrollment in Medicare plans. PMID:26380030
Evaluating Alerting and Guidance Performance of a UAS Detect-And-Avoid System
NASA Technical Reports Server (NTRS)
Lee, Seung Man; Park, Chunki; Thipphavong, David P.; Isaacson, Douglas R.; Santiago, Confesor
2016-01-01
A key challenge to the routine, safe operation of unmanned aircraft systems (UAS) is the development of detect-and-avoid (DAA) systems to aid the UAS pilot in remaining "well clear" of nearby aircraft. The goal of this study is to investigate the effect of alerting criteria and pilot response delay on the safety and performance of UAS DAA systems in the context of routine civil UAS operations in the National Airspace System (NAS). A NAS-wide fast-time simulation study was conducted to assess UAS DAA system performance with a large number of encounters and a broad set of DAA alerting and guidance system parameters. Three attributes of the DAA system were controlled as independent variables in the study to conduct trade-off analyses: UAS trajectory prediction method (dead-reckoning vs. intent-based), alerting time threshold (related to predicted time to loss of well clear, or LoWC), and alerting distance threshold (related to predicted Horizontal Miss Distance, or HMD). A set of metrics, such as the percentage of true positive, false positive, and missed alerts, based on signal detection theory and analysis methods utilizing Receiver Operating Characteristic (ROC) curves were proposed to evaluate the safety and performance of DAA alerting and guidance systems and aid development of DAA system performance standards. The effect of pilot response delay on the performance of DAA systems was evaluated using a DAA alerting and guidance model and a pilot model developed to support this study. A total of 18 fast-time simulations were conducted with nine different DAA alerting threshold settings and two different trajectory prediction methods, using recorded radar traffic from current Visual Flight Rules (VFR) operations, supplemented with DAA-equipped UAS traffic based on mission profiles modeling future UAS operations. Results indicate DAA alerting distance threshold has a greater effect on DAA system performance than DAA alerting time threshold or ownship trajectory prediction method. Further analysis of the alert lead time (time in advance of predicted loss of well clear at which a DAA alert is first issued) indicated a strong positive correlation between alert lead time and DAA system performance (i.e. the ability of the UAS pilot to maneuver the unmanned aircraft to remain well clear). While bigger distance thresholds had beneficial effects on alert lead time and missed alert rate, they also generated a higher rate of false alerts. In the design and development of DAA alerting and guidance systems, therefore, the positive and negative effects of false alerts and missed alerts should be carefully considered to achieve acceptable alerting system performance by balancing false and missed alerts. The results and methodology presented in this study are expected to help stakeholders, policymakers and standards committees define the appropriate setting of DAA system parameter thresholds for UAS that ensure safety while minimizing operational impacts to the NAS and equipage requirements for its users before DAA operational performance standards can be finalized.
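A minimal sketch of the signal-detection bookkeeping described above: sweep an alerting distance threshold and tally true positives, false alerts, and missed alerts against the ground truth. The encounter numbers and the noise model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic encounters: true horizontal miss distance (HMD, ft) and
# whether a loss of well clear (LoWC) actually occurs (illustrative).
n = 5000
true_hmd = rng.uniform(0, 12000, n)
lowc = true_hmd < 4000                        # "truth": LoWC if HMD < 4000 ft
pred_hmd = true_hmd + rng.normal(0, 1500, n)  # noisy onboard prediction

for hmd_thresh in (3000, 4000, 6000, 8000):
    alert = pred_hmd < hmd_thresh             # DAA alerts below the threshold
    tp = np.sum(alert & lowc)
    fp = np.sum(alert & ~lowc)
    missed = np.sum(~alert & lowc)
    print(f"thresh={hmd_thresh:5d} ft  P(TP)={tp / lowc.sum():.2f}  "
          f"P(FA)={fp / (~lowc).sum():.2f}  missed={missed / lowc.sum():.2f}")
# Sweeping the threshold traces the ROC trade-off between missed and
# false alerts discussed above.
```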
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting are imperative owing to huge image sets. Problems caused by drifting image acquisition conditions, background noise and high variation in colony features across experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
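AutoCellSeg itself is MATLAB-based; as a rough Python analogue of the multi-threshold-plus-watershed idea, the sketch below thresholds a synthetic plate image and splits touching objects with a marker-based watershed (plain Otsu stands in for AutoCellSeg's multi-thresholding).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

rng = np.random.default_rng(7)
# Synthetic plate image: two touching bright "colonies" on dark background.
img = np.zeros((120, 120))
yy, xx = np.mgrid[:120, :120]
img += np.exp(-((yy - 55) ** 2 + (xx - 50) ** 2) / 200)
img += np.exp(-((yy - 65) ** 2 + (xx - 72) ** 2) / 200)
img += rng.normal(0, 0.03, img.shape)

# Step 1: global threshold to obtain a foreground mask.
mask = img > threshold_otsu(img)

# Step 2: watershed on the distance transform to split touching objects.
distance = ndi.distance_transform_edt(mask)
peaks = peak_local_max(distance, min_distance=10, labels=mask)
markers = np.zeros_like(img, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=mask)
print("objects found:", labels.max())
```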
Quinn, Molly M; Kao, Chia-Ning; Ahmad, Asima; Lenhart, Nikolaus; Shinkai, Kanade; Cedars, Marcelle I; Huddleston, Heather G
2016-10-01
To characterize the population of patients excluded from a diagnosis of polycystic ovary syndrome (PCOS) when follicle number criteria are increased to 25 per ovary as suggested by the Androgen Excess and Polycystic Ovary Syndrome Society's recent task force. Cross-sectional study. Tertiary academic center. A total of 259 women with PCOS according to Rotterdam criteria who were systematically examined from 2007 to 2015, with 1,100 ovulatory women participating in the Ovarian Aging (OVA) Study as controls. Anthropometric measurements, serum testing, ultrasonic imaging, and comprehensive dermatologic exams. Body mass index (BMI), waist to hip ratio (WHR), serum cholesterol, fasting glucose and insulin, follicle count per ovary, biochemical hyperandrogenemia, and hirsutism. Forty-seven of 259 women meeting the Rotterdam criteria (18.1%) were excluded from a diagnosis of PCOS when the follicle number criteria was increased to 25. These women had clinical evidence of hyperandrogenism (68.1%) and biochemical hyperandrogenemia (44.7%), although fewer reported oligoanovulation (26.8%). The excluded women had elevated total cholesterol, fasting insulin, and homeostatic model of insulin resistance (HOMA-IR) when compared with controls despite controlling for age and BMI. The women excluded from the PCOS diagnosis by raising the threshold of follicle number per ovary to ≥25 continue to show evidence of metabolic risk. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Yazdanpanah, Yazdan; Wolf, Lindsey L; Anglaret, Xavier; Gabillard, Delphine; Walensky, Rochelle P; Moh, Raoul; Danel, Christine; Sloan, Caroline E; Losina, Elena; Freedberg, Kenneth A
2010-01-01
International trials have shown that CD4+ T-cell-guided structured treatment interruptions (STI) of antiretroviral therapy (ART) lead to worse outcomes than continuous treatment. We simulated continuous ART and STI strategies with higher CD4+ T-cell interruption/reintroduction thresholds than those assessed in actual trials. Using a model of HIV, we simulated cohorts of African adults with different baseline CD4+ T-cell counts (≤200; 201-350; and 351-500 cells/μl). We varied ART initiation criteria (immediate; CD4+ T-cell count <350 cells/μl or ≥350 cells/μl with severe HIV-related disease; and CD4+ T-cell count <200 cells/μl or ≥200 cells/μl with severe HIV-related disease), and ART interruption/reintroduction thresholds (350/250; 500/350; and 700/500 cells/μl). First-line therapy was non-nucleoside reverse transcriptase inhibitor (NNRTI)-based and second-line therapy was protease inhibitor (PI)-based. STI generally reduced life expectancy compared with continuous ART. Life expectancy increased with earlier ART initiation and higher interruption/reintroduction thresholds. STI reduced life expectancy by 48-69 and 11-30 months compared with continuous ART when interruption/reintroduction thresholds were 350/250 and 500/350 cells/μl, depending on ART initiation criteria. When patients interrupted/reintroduced ART at 700/500 cells/μl, life expectancies ranged from 2 months lower to 1 month higher than continuous ART. STI-related life expectancy increased with decreased risk of virological resistance after ART interruptions. STI with NNRTI-based regimens was almost always less effective than continuous treatment, regardless of interruption/reintroduction thresholds. The risks associated with STI decrease only if patients start ART earlier, interrupt/reintroduce treatment at very high CD4+ T-cell thresholds (700/500 cells/μl) and use first-line medications with higher resistance barriers, such as PIs.
Wetzel, Hermann
2006-01-01
In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.
On the mixing time of geographical threshold graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan
In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
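A minimal sketch of sampling a GTG under one common form of the connection rule, edge u~v iff (w_u + w_v) ≥ θ·d(u,v)^α (variants exist, e.g. multiplicative weights); networkx also ships a geographical_threshold_graph generator with its own parameterization.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)

def geographical_threshold_graph(n, theta, alpha=2.0):
    """Sample a GTG: nodes uniform in the unit square, i.i.d. exponential
    weights, and edge u~v iff (w_u + w_v) >= theta * d(u, v)**alpha."""
    pos = rng.uniform(size=(n, 2))
    w = rng.exponential(1.0, n)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for u in range(n):
        for v in range(u + 1, n):
            d = np.linalg.norm(pos[u] - pos[v])
            if w[u] + w[v] >= theta * d ** alpha:
                g.add_edge(u, v)
    return g

g = geographical_threshold_graph(300, theta=80.0)
print("connected:", nx.is_connected(g),
      "| mean degree:", round(2 * g.number_of_edges() / g.number_of_nodes(), 1))
```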
NASA Astrophysics Data System (ADS)
Buono, D.; Nocerino, G.; Solimeno, S.; Porzio, A.
2014-07-01
Entanglement, one of the most intriguing aspects of quantum mechanics, manifests itself in different features of quantum states. For this reason different criteria can be used for verifying entanglement. In this paper we review some of the entanglement criteria cast for continuous-variable states and link them to peculiar aspects of the original debate on the famous Einstein-Podolsky-Rosen (EPR) paradox. We also provide a useful expression for evaluating Bell-type non-locality on Gaussian states. We also present the experimental measurement of a particular realization of the Bell operator over continuous-variable entangled states produced by sub-threshold type-II optical parametric oscillators (OPOs).
Challenges for operational forecasting and early warning of rainfall induced landslides
NASA Astrophysics Data System (ADS)
Guzzetti, Fausto
2017-04-01
In many areas of the world, landslides occur every year, claiming lives and producing severe economic and environmental damage. Many of the landslides with human or economic consequences are the result of intense or prolonged rainfall. For this reason, in many areas the timely forecast of rainfall-induced landslides is of both scientific interest and social relevance. In recent years, there has been a mounting interest and an increasing demand for operational landslide forecasting, and for associated landslide early warning systems. Despite the relevance of the problem, and the increasing interest and demand, only a few systems have been designed, and are currently operated. Inspection of the - limited - literature on operational landslide forecasting, and on the associated early warning systems, reveals that common criteria and standards for the design, the implementation, the operation, and the evaluation of the performance of the systems are lacking. This limits the possibility of comparing and evaluating the systems critically, identifying their inherent strengths and weaknesses, and improving their performance. Lack of common criteria and of established standards can also limit the credibility of the systems, and consequently their usefulness and potential practical impact. Landslides are very diversified phenomena, and the information and the modelling tools used to attempt landslide forecasting vary largely, depending on the type and size of the landslides, the extent of the geographical area considered, the timeframe of the forecasts, and the scope of the predictions. Consequently, systems for landslide forecasting and early warning can be designed and implemented at several different geographical scales, from the local (site or slope specific) to the regional, or even national scale. The talk focuses on regional to national scale landslide forecasting systems, and specifically on operational systems based on empirical rainfall threshold models. Building on the experience gained in designing, implementing, and operating national and regional landslide forecasting systems in Italy, and on a preliminary review of the existing literature on regional landslide early warning systems, the talk discusses concepts, limitations and challenges inherent to the design of reliable forecasting and early warning systems for rainfall-triggered landslides, the evaluation of the performance of the systems, and problems related to the use of the forecasts and the issuing of landslide warnings. Several of the typical elements of an operational landslide forecasting system are considered, including: (i) the rainfall and landslide information used to establish the threshold models, (ii) the methods and tools used to define the empirical rainfall thresholds, and their associated uncertainty, (iii) the quality (e.g., the temporal and spatial resolution) of the rainfall information used for operational forecasting, including rain gauge and radar measurements, satellite estimates, and quantitative weather forecasts, (iv) the ancillary information used to prepare the forecasts, including e.g., the terrain subdivisions and the landslide susceptibility zonations, (v) the criteria used to transform the forecasts into landslide warnings and the methods used to communicate the warnings, and (vi) the criteria and strategies adopted to evaluate the performance of the systems, and to define minimum or optimal performance levels.
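As one concrete example of the empirical rainfall threshold models mentioned above, Caine-type intensity-duration thresholds take the power-law form I = α·D^β. The sketch below fits such a threshold as a lower envelope (a 5th-percentile shift of a log-log regression) to a synthetic catalogue of triggering rainfall events; the data and the envelope quantile are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic rainfall events that triggered landslides: duration D (h)
# and mean intensity I (mm/h). Real catalogues pair rain gauge, radar,
# or satellite rainfall with landslide occurrence reports.
D = rng.uniform(1, 200, 400)
I = 15 * D ** -0.4 * rng.lognormal(0.4, 0.35, 400)

# Fit I = alpha * D**beta as a lower envelope: least squares in log-log
# space, then shift the intercept down to the 5th percentile of the
# residuals, a common way to define the curve below which few events
# trigger landslides.
logD, logI = np.log10(D), np.log10(I)
beta, intercept = np.polyfit(logD, logI, 1)
resid = logI - (beta * logD + intercept)
alpha = 10 ** (intercept + np.percentile(resid, 5))

print(f"threshold: I = {alpha:.2f} * D^{beta:.2f}  (I in mm/h, D in h)")
# Operationally, measured or forecast rainfall crossing this curve
# raises the warning level for the corresponding alert zone.
```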
Krastinova, Evguenia; Seng, Remonie; Yeni, Patrick; Viard, Jean-Paul; Vittecoq, Daniel; Lascoux-Combe, Caroline; Fourn, Erwan; Pahlavan, Golriz; Delfraissy, Jean François; Meyer, Laurence
2013-01-01
Objective Guidelines for initiating HIV treatment are regularly revised. We explored how physicians in France have applied these evolving guidelines for ART initiation over the last decade in two different situations: chronic (CHI) and primary HIV-1 infection (PHI), since specific recommendations for PHI are also provided in France. Methods Data came from the ANRS PRIMO (1267 patients enrolled during PHI in 1996–2010) and COPANA (800 subjects enrolled at HIV diagnosis in 2004–2008) cohorts. We defined as guidelines-inconsistent, during PHI and CHI, patients meeting criteria for ART initiation and not treated in the following month and during the next 6 months, respectively. Results ART initiation during PHI dramatically decreased from 91% of patients in 1996–99 to 22% in 2007 and increased to 60% in 2010, following changes in recommendations. In 2007, however, after the CD4 count threshold was raised to 350 cells/mm3 in 2006, only 55% of the patients with CD4≤350 were treated, rising to 66% in 2008. During CHI, ART was more frequently initiated in patients who met the criteria at entry (96%) than during follow-up: 83% when the recommendation to treat was 200 and 73% when it was 350 cells/mm3. Independent risk factors for not being treated during CHI despite meeting the criteria were lower viral load, lower educational level, and poorer living conditions. Conclusion HIV ART initiation guidelines are largely followed by practitioners in France. What can still be improved, however, is the time to treatment initiation once CD4 cell counts reach the treatment threshold. Risk factors for lack of timely treatment highlight the need to understand better how patients' living conditions and physicians' perceptions influence the decision to initiate treatment. PMID:23936509
Societal-level Risk Factors Associated with Pediatric Hearing Loss: A Systematic Review
Vasconcellos, Adam P.; Colello, Stephanie; Kyle, Meghann E.; Shin, Jennifer J.
2015-01-01
Objective To determine if the current body of evidence describes specific threshold values of concern for modifiable societal-level risk factors for pediatric hearing loss, with the overarching goal of providing actionable guidance for the prevention and screening of audiological deficits in children. Data Sources Three related systematic reviews were performed. Computerized PubMed, Embase, and Cochrane Library searches were performed from inception through October 2013 and were supplemented with manual searches. Review Methods Inclusion/exclusion criteria were designed to determine specific threshold values of societal-level risk factors on hearing loss in the pediatric population. Searches and data extraction were performed by independent reviewers. Results There were 20 criterion-meeting studies with 29,128 participants. Infants less than 2 standard deviations below standardized weight, length, or body mass index were at increased risk. Specific nutritional deficiencies related to iodine and thiamine may also increase risk, although data are limited and threshold values of concern have not been quantified. Blood lead levels above 10 μg/dL were significantly associated with pediatric sensorineural loss, and mixed findings were noted for other heavy metals. Hearing loss was also more prevalent among children of socioeconomically disadvantaged families, as measured by a poverty income ratio less than 0.3 to 1, higher deprivation category status, and head of household employment as a manual laborer. Conclusions Increasing our understanding of specific thresholds of risk associated with causative factors forms the foundation for preventive and targeted screening programs as well as future research endeavors. PMID:24671458
NASA Astrophysics Data System (ADS)
MacDonald, Garrick Richard
To limit biodiversity loss caused by human activity, conservation planning must protect biodiversity while considering socio-economic cost criteria. This research aimed to determine the effects of socio-economic criteria and spatial configurations on the development of conservation area networks (CANs) for three species with different distribution patterns, while simultaneously attempting to address the uncertainty and sensitivity of CANs produced by ConsNet. The socio-economic factors and spatial criteria included the cost of land, population density, agricultural output value, area, average cluster area, number of clusters, shape, and perimeter. Three sensitive mammal species with different distribution patterns were selected: the Bobcat, the Ringtail, and a custom-created mammal distribution. Forty problems, with the corresponding number of CANs, were formulated and computed by running each predicted-presence species model with and without the four different socio-economic threshold groups at two different resolutions. Thirty-two percent less area was conserved after considering multiple socio-economic constraints and spatial configurations, in comparison to CANs that did not consider them. Without socio-economic costs, ConsNet's ALL_CELLS heuristic solution was the highest-ranking CAN. After considering multiple socio-economic costs, the top-ranking CAN was no longer the ALL_CELLS heuristic solution but a spatially different meta-heuristic solution. The effects of multiple constraints and objectives on the design of CANs with different distribution patterns did not vary significantly across the criteria. The CANs produced by ConsNet demonstrated some uncertainty surrounding particular criteria, but not substantial uncertainty across all criteria used to rank the CANs. Similarly, the range of socio-economic criteria thresholds did not have a substantial impact. ConsNet was well suited to the research project; however, it did exhibit a few limitations. Both the advantages and disadvantages of ConsNet should be considered before using it for future conservation planning projects. The research project is an example of a large data scenario undertaken with a multiple criteria decision analysis (MCDA) approach.
Do Shale Pore Throats Have a Threshold Diameter for Oil Storage?
Zou, Caineng; Jin, Xu; Zhu, Rukai; Gong, Guangming; Sun, Liang; Dai, Jinxing; Meng, Depeng; Wang, Xiaoqi; Li, Jianming; Wu, Songtao; Liu, Xiaodan; Wu, Juntao; Jiang, Lei
2015-01-01
In this work, a nanoporous template with a controllable channel diameter was used to simulate the oil storage ability of shale pore throats. On the basis of the wetting behaviours at the nanoscale solid-liquid interfaces, the seepage of oil in nano-channels of different diameters was examined to accurately and systematically determine the effect of the pore diameter on the oil storage capacity. The results indicated that the lower threshold for oil storage was a pore throat of 20 nm, under certain conditions. This proposed pore size threshold provides novel, evidence-based criteria for estimating the geological reserves, recoverable reserves and economically recoverable reserves of shale oil. This new understanding of shale oil processes could revolutionize the related industries. PMID:26314637
Methods of Muscle Activation Onset Timing Recorded During Spinal Manipulation.
Currie, Stuart J; Myers, Casey A; Krishnamurthy, Ashok; Enebo, Brian A; Davidson, Bradley S
2016-05-01
The purpose of this study was to determine electromyographic threshold parameters that most reliably characterize the muscular response to spinal manipulation and to compare 2 methods that detect muscle activity onset delay: the double-threshold method and the cross-correlation method. Surface and indwelling electromyography were recorded during lumbar side-lying manipulations in 17 asymptomatic participants. Muscle activity onset delays in relation to the thrusting force were compared across methods and muscles using a generalized linear model. The threshold combinations that resulted in the fewest detection failures were the "8 SD-0 milliseconds" threshold (detection failures = 8) and the "8 SD-10 milliseconds" threshold (detection failures = 9). The average muscle activity onset delay for the double-threshold method across all participants was 149 ± 152 milliseconds for the multifidus and 252 ± 204 milliseconds for the erector spinae. The average onset delay for the cross-correlation method was 26 ± 101 milliseconds for the multifidus and 67 ± 116 milliseconds for the erector spinae. There were no statistical interactions, and a main effect of method demonstrated that the delays were higher when using the double-threshold method compared with cross-correlation. The threshold parameters that best characterized activity onset delays were an 8-SD amplitude and a 10-millisecond duration threshold. The double-threshold method correlated well with visual supervision of muscle activity. The cross-correlation method provides several advantages in signal processing; however, supervision was required for some results, negating this advantage. These results help standardize methods when recording neuromuscular responses to spinal manipulation and improve comparisons within and across investigations. Copyright © 2016 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
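For readers who want to experiment with the double-threshold rule described above, the following Python sketch applies it to a rectified, lightly smoothed EMG envelope. It is a minimal illustration rather than the authors' processing pipeline: the sampling rate, baseline window, and 5 ms smoothing step are assumptions.

```python
import numpy as np

def onset_delay_ms(emg, thrust_idx, fs=2000.0, n_sd=8.0, min_dur_ms=10.0,
                   baseline=slice(0, 2000)):
    """Double-threshold onset detection: onset is the first point after
    the thrust where the smoothed, rectified EMG stays above
    mean + n_sd * SD of a quiet baseline for at least min_dur_ms."""
    rect = np.abs(emg - emg[baseline].mean())
    win = max(int(0.005 * fs), 1)                  # 5 ms smoothing window
    env = np.convolve(rect, np.ones(win) / win, mode="same")
    thresh = env[baseline].mean() + n_sd * env[baseline].std()
    min_len = int(min_dur_ms / 1000.0 * fs)
    above = env > thresh
    for i in range(thrust_idx, len(above) - min_len):
        if above[i:i + min_len].all():
            return (i - thrust_idx) / fs * 1000.0  # onset delay in ms
    return None                                     # detection failure

# Synthetic check: a burst beginning 150 ms after the thrust
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.01, 10000)
start = 4000 + int(0.150 * 2000)
sig[start:start + 800] += rng.normal(0.0, 0.2, 800)
print(onset_delay_ms(sig, thrust_idx=4000))        # close to 150 ms
```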
Bayesian methods for estimating GEBVs of threshold traits
Wang, C-L; Ding, X-D; Wang, J-Y; Liu, J-F; Fu, W-X; Zhang, Z; Yin, Z-J; Zhang, Q
2013-01-01
Estimation of genomic breeding values is the key step in genomic selection (GS). Many methods have been proposed for continuous traits, but methods for threshold traits are still scarce. Here we introduced the threshold model into the framework of GS; specifically, we extended the three Bayesian methods BayesA, BayesB and BayesCπ on the basis of the threshold model for estimating genomic breeding values of threshold traits, and the extended methods are correspondingly termed BayesTA, BayesTB and BayesTCπ. Computing procedures for the three BayesT methods using a Markov chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the benefit of the presented methods for the accuracy of genomic estimated breeding values (GEBVs) for threshold traits. Factors affecting the performance of the three BayesT methods were addressed. As expected, the three BayesT methods generally performed better than the corresponding normal Bayesian methods, in particular when the number of phenotypic categories was small. In the standard scenario (number of categories=2, incidence=30%, number of quantitative trait loci=50, h2=0.3), the accuracies were improved by 30.4, 2.4, and 5.7 percentage points, respectively. In most scenarios, BayesTB and BayesTCπ generated similar accuracies and both performed better than BayesTA. In conclusion, our work showed that the threshold model fits well for predicting GEBVs of threshold traits, and BayesTCπ should be the method of choice for GS of threshold traits. PMID:23149458
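The BayesT samplers themselves are not reproduced in the abstract, but their core ingredient, augmenting a probit liability variable inside a Gibbs sampler, can be sketched. The code below is a simplified ridge-prior stand-in (the actual BayesTA/TB/TCπ methods place heavier-tailed or mixture priors on marker effects); all names and defaults are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def threshold_model_gibbs(X, y, n_iter=2000, burn=500, tau2=0.01, seed=1):
    """Gibbs sampler for a probit liability model y_i = 1{l_i > 0},
    l = X b + e, e ~ N(0, 1), b_j ~ N(0, tau2) (simplified prior)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    b = np.zeros(p)
    post_cov = np.linalg.inv(X.T @ X + np.eye(p) / tau2)
    chol = np.linalg.cholesky(post_cov)
    b_sum = np.zeros(p)
    for it in range(n_iter):
        # 1) sample latent liabilities from truncated normals
        mu = X @ b
        lo = np.where(y == 1, -mu, -np.inf)   # l > 0 when y = 1
        hi = np.where(y == 1, np.inf, -mu)    # l < 0 when y = 0
        liab = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # 2) sample marker effects from their Gaussian full conditional
        mean = post_cov @ (X.T @ liab)
        b = mean + chol @ rng.standard_normal(p)
        if it >= burn:
            b_sum += b
    return b_sum / (n_iter - burn)

# Toy check: GEBVs for new genotypes would be X_new @ b_hat
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
b_true = rng.normal(0, 0.2, 50)
y = (X @ b_true + rng.standard_normal(200) > 0).astype(int)
b_hat = threshold_model_gibbs(X, y, n_iter=500, burn=100)
print(np.corrcoef(X @ b_hat, X @ b_true)[0, 1])
```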
Tornado risks and design windspeeds for the Oak Ridge Plant Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1975-08-01
The effects of tornadoes and other extreme winds should be considered in establishing design criteria for structures to resist wind loads. Design standards that are incorporated in building codes do not normally include the effects of tornadoes in their wind load criteria. Some tornado risk models ignore the presence of nontornadic extreme winds. The purpose of this study is to determine the probability of tornadic and straight winds exceeding a threshold value in the geographical region surrounding the Oak Ridge, Tennessee plant site.
Wavelet-based adaptive thresholding method for image segmentation
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl
2001-05-01
A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
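A rough reconstruction of the idea, assuming PyWavelets and illustrative parameter values: attenuate the detail coefficients, reconstruct a smooth "high-frequency-reduced" surface, and threshold the image against it. This is a sketch of the general approach, not the authors' exact implementation.

```python
import numpy as np
import pywt

def wavelet_threshold_surface(img, wavelet="db2", level=3, atten=0.1,
                              offset=0.0):
    """Build a local threshold function by attenuating wavelet detail
    coefficients, then segment by comparing the image against it."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    coeffs = [coeffs[0]] + [tuple(atten * d for d in det)
                            for det in coeffs[1:]]
    surface = pywt.waverec2(coeffs, wavelet)
    surface = surface[:img.shape[0], :img.shape[1]]  # crop any padding
    return img > surface + offset, surface

# Toy example: a bright blob on a sloping background
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
img = 50 * x + 100 * np.exp(-((x - .5) ** 2 + (y - .5) ** 2) / .005)
mask, surf = wavelet_threshold_surface(img, atten=0.2, offset=5.0)
print(mask.sum())
```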
Dexter, Franklin; Epstein, Richard H; Ledolter, Johannes; Dasovich, Susan M; Herman, Jay H; Maga, Joni M; Schwenk, Eric S
2018-05-01
Hospitals review allogeneic red blood cell (RBC) transfusions for appropriateness. Audit criteria have been published that apply to 5 common procedures. We expanded on this work to study the management decision of selecting which cases involving transfusion of at least 1 RBC unit to audit (review) among all surgical procedures, including those previously studied. This retrospective, observational study included 400,000 cases among 1891 different procedures over an 11-year period. There were 12,616 cases with RBC transfusion. We studied the proportions of cases that would be audited based on criteria of nadir hemoglobin (Hb) greater than the hospital's selected transfusion threshold, or absent Hb or missing estimated blood loss (EBL) among procedures with median EBL <500 mL. This threshold EBL was selected because it is approximately the volume removed during the donation of a single unit of whole blood at a blood bank. Missing EBL is important to the audit decision for cases in which the procedures' median EBL is <500 mL because, without an indication of the extent of bleeding, there are insufficient data to assume that there was sufficient blood loss to justify the transfusion. Most cases (>50%) that would be audited and most cases (>50%) with transfusion were among procedures with median EBL <500 mL (P < .0001). Among cases with transfusion and nadir Hb >9 g/dL, the procedure's median EBL was <500 mL for 3.0 times more cases than for procedures having a median EBL ≥500 mL. A greater percentage of cases would be recommended for audit based on missing values for Hb and/or EBL than based on exceeding the Hb threshold among cases of procedures with median EBL ≥500 mL (P < .0001). There were 3.7 times as many cases with transfusion that had missing values for Hb and/or EBL than had a nadir Hb >9 g/dL and median EBL for the procedure ≥500 mL. An automated process to select cases for audit of intraoperative transfusion of RBC needs to consider the median EBL of the procedure, whether the nadir Hb is below the hospital's Hb transfusion threshold for surgical cases, and the absence of either a Hb or entry of the EBL for the case. This conclusion applies to all surgical cases and procedures.
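The audit-selection rule described above lends itself to a straightforward query. A minimal pandas sketch, with hypothetical column names, might look as follows; the 9 g/dL cutoff mirrors the example in the abstract and is not a recommendation.

```python
import pandas as pd

def select_cases_for_audit(cases: pd.DataFrame, hb_threshold=9.0):
    """Flag RBC-transfusion cases for audit. Hypothetical columns:
    'procedure', 'nadir_hb' (g/dL), 'ebl_ml', 'rbc_units'."""
    df = cases[cases["rbc_units"] >= 1].copy()
    # median EBL per procedure, computed over cases with a recorded EBL
    med_ebl = df.groupby("procedure")["ebl_ml"].median()
    df["proc_median_ebl"] = df["procedure"].map(med_ebl)
    low_ebl_proc = df["proc_median_ebl"] < 500
    over_threshold = df["nadir_hb"] > hb_threshold
    missing_data = df["nadir_hb"].isna() | df["ebl_ml"].isna()
    df["audit"] = over_threshold | (low_ebl_proc & missing_data)
    return df

example = pd.DataFrame({
    "procedure": ["lami", "lami", "hip", "hip", "hip"],
    "nadir_hb": [10.2, None, 7.1, 9.5, 8.0],
    "ebl_ml":   [200, None, 900, 700, 800],
    "rbc_units": [1, 2, 1, 1, 0],
})
print(select_cases_for_audit(example)[["procedure", "audit"]])
```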
Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene
2013-03-25
The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allows the selection of method-image pairs for which a given method is most efficient under various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the results of the White, Bernsen and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated overall. However, the best results are achieved for the monochromatic image in which intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity in the cases of the Bernsen and White methods is 1, with sensitivities of 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects, but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when a new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017. PMID:23531405
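Two of the compared methods, Niblack and Sauvola, are available in scikit-image, so a minimal sketch of local thresholding on a monochromatic ("brown") channel is easy to assemble. The window size and k value are illustrative, and the synthetic image merely stands in for a real deconvolved DAB map.

```python
import numpy as np
from skimage.filters import threshold_sauvola, threshold_niblack

def segment_nuclei(channel, window=25, k=0.2):
    """Adaptive (local) thresholding of a monochromatic channel.
    Dark nuclei on a bright background lie below the local threshold
    surface. Niblack tends to be noisier on empty background regions."""
    t_sauvola = threshold_sauvola(channel, window_size=window, k=k)
    t_niblack = threshold_niblack(channel, window_size=window, k=k)
    return channel < t_sauvola, channel < t_niblack

# Synthetic stand-in for a DAB ('brown') intensity image
rng = np.random.default_rng(3)
img = rng.normal(0.8, 0.05, (256, 256))
img[100:112, 100:112] = 0.3          # a dark immunopositive 'nucleus'
sauvola_mask, niblack_mask = segment_nuclei(img)
print(sauvola_mask.sum(), niblack_mask.sum())
```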
Excitonic lasing in solution-processed subwavelength nanosphere assemblies
Appavoo, Kannatassen; Liu, Xiaoze; Menon, Vinod; ...
2016-02-03
Lasing in solution-processed nanomaterials has gained significant interest because of the potential for low-cost integrated photonic devices. Still, a key challenge is to utilize a comprehensive knowledge of the system's spectral and temporal dynamics to design low-threshold lasing devices. Here, we demonstrate intrinsic lasing (without external cavity) at low threshold in an ultrathin film of coupled, highly crystalline nanospheres with overall thickness on the order of ~λ/4. The cavity-free geometry consists of ~35 nm zinc oxide nanospheres that collectively localize the in-plane emissive light fields while minimizing scattering losses, resulting in excitonic lasing with fluence thresholds at least an order of magnitude lower than previous UV-blue random and quantum-dot lasers (<75 μJ/cm2). Fluence-dependent effects, as quantified by subpicosecond transient spectroscopy, highlight the role of phonon-mediated processes in excitonic lasing. Subpicosecond evolution of distinct lasing modes, together with three-dimensional electromagnetic simulations, indicates a random lasing process, in violation of the commonly cited criteria of strong scattering from individual nanostructures and an optically thick sample. Subsequently, an electron-hole plasma mechanism is observed with increased fluence. Furthermore, these results suggest that coupled nanostructures with high crystallinity, fabricated by low-cost solution-processing methods, can function as viable building blocks for high-performance optoelectronic devices.
Peters, Amy T.; Shankman, Stewart A.; Deckersbach, Thilo; West, Amy E.
2015-01-01
Background The aim of this study is to assess predictors of first-episode major depression in a community-based sample of adults with and without sub-threshold depression. Method Data were from Waves 1 & 2 of the National Epidemiological Survey on Alcohol and Related Conditions (NESARC). Participants meeting criteria for a sub-threshold depressive episode (sMDE; n = 3,901) reported lifetime depressed mood/loss of interest lasting at least two weeks and at least two of the seven other DSM-IV symptoms of MDD. Predictors of MDE 3 years later were compared in those with and without (n = 31,022) sMDE. Results Being female, history of alcohol or substance use, and child abuse increased the odds of developing MDD to a greater degree in individuals without sMDE relative to those with sMDE. Among those with sMDE and additional risk factors (low education, substance use), younger age was associated with marginally increased risk of MDD. Conclusion Several demographic risk factors may help identify individuals at risk for developing MDD in individuals who have not experienced an sMDE who may be candidates for early intervention. Future work should assess whether preventative interventions targeting substance/alcohol use and child abuse could reduce the risk of depression. PMID:26343831
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract targets from complex backgrounds more quickly and accurately, and to further improve the detection of defects, a method of dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization is proposed. First, the method of single-threshold selection based on Arimoto entropy was extended to dual-threshold selection in order to separate the target from the background more accurately. Then, the intermediate variables in the formulae for Arimoto entropy dual-threshold selection were calculated by recursion to eliminate redundant computation and reduce the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm was improved with a chaotic sequence based on the tent map, so that the fast search for two optimal thresholds was achieved and the search was markedly accelerated. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately, with superior segmentation quality. It proves to be a fast and effective method for image segmentation.
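A brute-force sketch of the dual-threshold criterion is given below, assuming one common parameterization of the Arimoto entropy (written here so that it tends to the Shannon entropy as α approaches 1) and a Kapur-style sum of within-class entropies. The paper's recursive intermediate variables and chaotic bee-colony search only accelerate what this exhaustive search computes directly.

```python
import numpy as np

def arimoto_entropy(p, alpha=0.7):
    """One common form of the Arimoto entropy of order alpha != 1;
    it reduces to the Shannon entropy in the limit alpha -> 1."""
    p = p[p > 0]
    return alpha / (alpha - 1.0) * (np.sum(p ** (1.0 / alpha)) ** alpha - 1.0)

def dual_threshold_arimoto(img, alpha=0.7, levels=256):
    """Exhaustively pick (t1, t2) maximizing the summed within-class
    Arimoto entropies of the three gray-level classes."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    prob = hist / hist.sum()
    best, best_t = -np.inf, (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            score = 0.0
            for cls in (prob[:t1], prob[t1:t2], prob[t2:]):
                w = cls.sum()
                if w > 0:
                    score += arimoto_entropy(cls / w, alpha)
            if score > best:
                best, best_t = score, (t1, t2)
    return best_t

# Synthetic three-class gray-level data (takes a few seconds to run)
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(40, 6, 2000), rng.normal(120, 8, 2000),
                      rng.normal(200, 8, 2000)]).clip(0, 255)
print(dual_threshold_arimoto(img, alpha=0.7))
```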
Development and evaluation of an influenza pandemic intensive care unit triage protocol.
Cheung, Winston; Myburgh, John; Seppelt, Ian M; Parr, Michael J; Blackwell, Nikki; Demonte, Shannon; Gandhi, Kalpesh; Hoyling, Larissa; Nair, Priya; Passer, Melissa; Reynolds, Claire; Saunders, Nicholas M; Saxena, Manoj K; Thanakrishnan, Govindasamy
2012-09-01
To develop an influenza pandemic ICU triage (iPIT) protocol that excludes patients with the highest and lowest predicted mortality rates, and to determine the increase in ICU bed availability that would result. Post-hoc analysis of a study evaluating two triage protocols, designed to determine which patients should be excluded from access to ICU resources during an influenza pandemic. ICU mortality rates were determined for the individual triage criteria in the protocols and included criteria based on the Sequential Organ Failure Assessment (SOFA) score. Criteria resulting in mortality rates outside the 25th and 75th percentiles were used as exclusion criteria in a new iPIT-1 protocol. The SOFA threshold component was modified further and reported as iPIT-2 and iPIT-3. Increase in ICU bed availability. The 25th and 75th percentiles for ICU mortality were 8.3% and 35.2%, respectively. Applying the iPIT-1 protocol resulted in an increase in ICU bed availability at admission of 71.7% ± 0.6%. Decreasing the lower SOFA score exclusion criteria to ≤6 (iPIT-2) and ≤4 (iPIT-3) resulted in increases in ICU bed availability at admission of 66.9% ± 0.6% and 59.4% ± 0.7%, respectively (P < 0.001). The iPIT protocol excludes patients with the lowest and highest ICU mortality, and provides increases in ICU bed availability. Adjusting the lower SOFA score exclusion limit provides a method of escalation or de-escalation to cope with demand.
Generalized Lawson Criteria for Inertial Confinement Fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipton, Robert E.
2015-08-27
The Lawson Criterion was proposed by John D. Lawson in 1955 as a general measure of the conditions necessary for a magnetic fusion device to reach thermonuclear ignition. Over the years, similar ignition criteria have been proposed which would be suitable for Inertial Confinement Fusion (ICF) designs. This paper will compare and contrast several ICF ignition criteria based on Lawson's original ideas. Both analytical and numerical results will be presented which will demonstrate that although the various criteria differ in some details, they are closely related and perform similarly as ignition criteria. A simple approximation will also be presented which allows the inference of each ignition parameter directly from the measured data taken on most shots fired at the National Ignition Facility (NIF) with a minimum reliance on computer simulations. Evidence will be presented which indicates that the experimentally inferred ignition parameters on the best NIF shots are very close to the ignition threshold.
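For reference, Lawson's original magnetic-confinement form of the criterion for D-T fuel (with alpha-particle energy E_α = 3.5 MeV) can be written as below; the ICF variants compared in this report replace the density-confinement-time product with areal-density-based parameters inferred from implosion data.

```latex
n\,\tau_E \;\ge\; \frac{12\,k_B T}{E_\alpha\,\langle\sigma v\rangle},
\qquad\text{often quoted near its optimum as}\qquad
n\,T\,\tau_E \;\gtrsim\; 3\times 10^{21}\ \mathrm{keV\,s\,m^{-3}}.
```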
McDonald, Scott D; Thompson, NiVonne L; Stratton, Kelcey J; Calhoun, Patrick S
2014-03-01
Self-report questionnaires are frequently used to identify PTSD among U.S. military personnel and Veterans. Two common scoring methods used to classify PTSD include: (1) a cut score threshold and (2) endorsement of PTSD symptoms meeting DSM-IV-TR symptom cluster criteria (SCM). A third method requiring a cut score in addition to SCM has been proposed, but has received little study. The current study examined the diagnostic accuracy of three scoring methods for the Davidson Trauma Scale (DTS) among 804 Afghanistan and Iraq war-era military Service Members and Veterans. Data were weighted to approximate the prevalence of PTSD and other Axis I disorders in VA primary care. As expected, adding a cut score criterion to SCM improved specificity and positive predictive power. However, a cut score of 68-72 provided optimal diagnostic accuracy. The utility of the DTS, the role of baseline prevalence, and recommendations for future research are discussed. Published by Elsevier Ltd.
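The three scoring rules compared in the study can be expressed compactly. In the sketch below the DTS item-to-cluster mapping and the endorsement cutoff are hypothetical placeholders, included only to make the logic concrete; the DSM-IV symptom-cluster minimums (at least 1 B, 3 C, and 2 D symptoms) are standard.

```python
CLUSTERS = {  # hypothetical DTS item-to-DSM-IV-cluster mapping
    "B": [1, 2, 3, 4],                  # re-experiencing
    "C": [5, 6, 7, 8, 9, 10],           # avoidance/numbing
    "D": [11, 12, 13, 14, 15, 16, 17],  # hyperarousal
}
REQUIRED = {"B": 1, "C": 3, "D": 2}

def classify(item_scores, cut=68, endorsed_at=2, method="cut+scm"):
    """item_scores: dict item -> combined frequency+severity score.
    'cut' uses the total score; 'scm' counts endorsed symptoms per
    DSM-IV cluster; 'cut+scm' requires both. The endorsement cutoff
    is an assumption for illustration."""
    total = sum(item_scores.values())
    meets_cut = total >= cut
    meets_scm = all(
        sum(item_scores.get(i, 0) >= endorsed_at for i in items)
        >= REQUIRED[c]
        for c, items in CLUSTERS.items()
    )
    return {"cut": meets_cut, "scm": meets_scm,
            "cut+scm": meets_cut and meets_scm}[method]

print(classify({i: 4 for i in range(1, 18)}))  # True
```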
McBurnie, Mary Ann; Vollmer, William M.; Gudmundsson, Gunnar; Welte, Tobias; Nizankowska-Mogilnicka, Ewa; Studnicka, Michael; Bateman, Eric; Anto, Josep M.; Burney, Peter; Mannino, David M.; Buist, Sonia A.
2011-01-01
Background: Never smokers comprise a substantial proportion of patients with COPD. Their characteristics and possible risk factors in this population are not yet well defined. Methods: We analyzed data from 14 countries that participated in the international, population-based Burden of Obstructive Lung Disease (BOLD) study. Participants were aged ≥ 40 years and completed postbronchodilator spirometry testing plus questionnaires about respiratory symptoms, health status, and exposure to COPD risk factors. A diagnosis of COPD was based on the postbronchodilator FEV1/FVC ratio, according to current GOLD (Global Initiative for Obstructive Lung Disease) guidelines. In addition to this, the lower limit of normal (LLN) was evaluated as an alternative threshold for the FEV1/FVC ratio. Results: Among 4,291 never smokers, 6.6% met criteria for mild (GOLD stage I) COPD, and 5.6% met criteria for moderate to very severe (GOLD stage II+) COPD. Although never smokers were less likely to have COPD and had less severe COPD than ever smokers, never smokers nonetheless comprised 23.3% (240/1,031) of those classified with GOLD stage II+ COPD. This proportion was similar, 20.5% (171/832), even when the LLN was used as a threshold for the FEV1/FVC ratio. Predictors of COPD in never smokers include age, education, occupational exposure, childhood respiratory diseases, and BMI alterations. Conclusion: This multicenter international study confirms previous evidence that never smokers comprise a substantial proportion of individuals with COPD. Our data suggest that, in addition to increased age, a prior diagnosis of asthma and, among women, lower education levels are associated with an increased risk for COPD among never smokers. PMID:20884729
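The GOLD spirometric classification used above reduces to a fixed-ratio rule plus FEV1-based staging. A minimal sketch follows; the LLN alternative mentioned in the abstract would replace the fixed 0.70 cutoff with a population-derived lower limit of normal.

```python
def gold_stage(fev1_fvc_post_bd, fev1_pct_predicted):
    """Fixed-ratio GOLD classification: COPD requires a post-
    bronchodilator FEV1/FVC < 0.70; severity follows FEV1 % predicted
    (I >= 80, II 50-79, III 30-49, IV < 30)."""
    if fev1_fvc_post_bd >= 0.70:
        return None  # no airflow obstruction by the fixed-ratio criterion
    if fev1_pct_predicted >= 80:
        return "GOLD I (mild)"
    if fev1_pct_predicted >= 50:
        return "GOLD II (moderate)"
    if fev1_pct_predicted >= 30:
        return "GOLD III (severe)"
    return "GOLD IV (very severe)"

print(gold_stage(0.62, 65))  # GOLD II (moderate)
```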
NASA Astrophysics Data System (ADS)
Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo
We propose a method for calculating the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method were examined by simultaneous measurement with an expired-gas analyzer. The experimental results showed a correlation between the quasi ventilation thresholds calculated by the proposed method and the ventilation thresholds calculated by the expired-gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.
Evaluation of thresholding techniques for segmenting scaffold images in tissue engineering
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Yaszemski, Michael J.; Robb, Richard A.
2004-05-01
Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phase is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
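Two of the evaluation criteria named above have simple closed forms on binary masks. The sketch below follows the usual definitions of misclassification error and relative foreground area error (an assumption; the paper may weight or normalize them differently).

```python
import numpy as np

def misclassification_error(gt, seg):
    """ME = 1 - (|B_gt & B_seg| + |F_gt & F_seg|) / (|B_gt| + |F_gt|),
    i.e. one minus pixel-wise agreement with the reference mask."""
    gt, seg = gt.astype(bool), seg.astype(bool)
    agree = np.sum(~gt & ~seg) + np.sum(gt & seg)
    return 1.0 - agree / gt.size

def relative_foreground_area_error(gt, seg):
    """RAE compares foreground areas regardless of overlap location."""
    a_gt, a_seg = gt.sum(), seg.sum()
    if a_seg <= a_gt:
        return (a_gt - a_seg) / a_gt
    return (a_seg - a_gt) / a_seg

gt = np.zeros((64, 64), bool); gt[16:48, 16:48] = True
seg = np.zeros_like(gt);       seg[18:48, 16:46] = True
print(misclassification_error(gt, seg),
      relative_foreground_area_error(gt, seg))
```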
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
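The empirical optimization described, choosing the threshold whose exceedance coverage correlates best with an area-average moment, can be reproduced on synthetic lognormal "snapshots" in a few lines. All parameters below are illustrative, not the GATE values.

```python
import numpy as np

def optimal_threshold(snapshots, thresholds, moment=1):
    """Return the threshold whose fractional coverage of rain rates
    above it correlates best, across snapshots, with the area-average
    rain-rate moment."""
    moments = np.array([np.mean(s ** moment) for s in snapshots])
    best_t, best_r = None, -1.0
    for t in thresholds:
        coverage = np.array([np.mean(s > t) for s in snapshots])
        r = abs(np.corrcoef(coverage, moments)[0, 1])
        if r > best_r:
            best_t, best_r = t, r
    return best_t, best_r

# Synthetic lognormal rain-rate snapshots with varying parameters
rng = np.random.default_rng(7)
snaps = [rng.lognormal(mean=rng.normal(0.2, 0.3), sigma=1.2, size=10000)
         for _ in range(60)]
print(optimal_threshold(snaps, thresholds=np.arange(0.5, 40, 0.5)))
```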
Thermal detection thresholds in 5-year-old preterm born children; IQ does matter.
de Graaf, Joke; Valkenburg, Abraham J; Tibboel, Dick; van Dijk, Monique
2012-07-01
Experiencing pain at newborn age may have consequences on one's somatosensory perception later in life. Children's perception of cold and warm stimuli may be determined with the Thermal Sensory Analyzer (TSA) device by two different methods. This pilot study in 5-year-old children born preterm aimed at establishing whether the TSA method of limits, which is dependent on reaction time, and the method of levels, which is independent of reaction time, would yield different cold and warm detection thresholds. The second aim was to establish possible associations between intellectual ability and the detection thresholds obtained with either method. A convenience sample was drawn from the participants in an ongoing 5-year follow-up study of a randomized controlled trial on effects of morphine during mechanical ventilation. Thresholds were assessed using both methods and statistically compared. Possible associations between the child's intelligence quotient (IQ) and threshold levels were analyzed. The method of levels yielded more sensitive thresholds than did the method of limits, i.e. mean (SD) cold detection thresholds: 30.3 (1.4) versus 28.4 (1.7) (Cohen's d=1.2, P=0.001) and warm detection thresholds: 33.9 (1.9) versus 35.6 (2.1) (Cohen's d=0.8, P=0.04). IQ was statistically significantly associated only with the detection thresholds obtained with the method of limits (cold: r=0.64, warm: r=-0.52). The TSA method of levels is to be preferred over the method of limits in 5-year-old preterm born children, as it establishes more sensitive detection thresholds and is independent of IQ. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sakai, Shinobu; Adachi, Reiko; Akiyama, Hiroshi; Teshima, Reiko
2013-06-19
A labeling system for food allergenic ingredients was established in Japan in April 2002. To monitor the labeling, the Japanese government announced official methods for detecting allergens in processed foods in November 2002. The official methods consist of quantitative screening tests using enzyme-linked immunosorbent assays (ELISAs) and qualitative confirmation tests using Western blotting or polymerase chain reactions (PCR). In addition, the Japanese government designated 10 μg protein/g food (the corresponding allergenic ingredient soluble protein weight/food weight), determined by ELISA, as the labeling threshold. To standardize the official methods, the criteria for the validation protocol were described in the official guidelines. This paper, which was presented at the Advances in Food Allergen Detection Symposium, ACS National Meeting and Expo, San Diego, CA, Spring 2012, describes the validation protocol outlined in the official Japanese guidelines, the results of interlaboratory studies for the quantitative detection method (ELISA for crustacean proteins) and the qualitative detection method (PCR for shrimp and crab DNAs), and the reliability of the detection methods.
Jafri, Nazia F; Newitt, David C; Kornak, John; Esserman, Laura J; Joe, Bonnie N; Hylton, Nola M
2014-08-01
To evaluate optimal contrast kinetics thresholds for measuring functional tumor volume (FTV) by breast magnetic resonance imaging (MRI) for the assessment of recurrence-free survival (RFS). In this Institutional Review Board (IRB)-approved retrospective study of 64 patients (ages 29-72, median age 48.6) undergoing neoadjuvant chemotherapy (NACT) for breast cancer, all patients underwent breast MRI before (MRI1) and after (MRI4) chemotherapy. Tumor was defined as voxels meeting thresholds for early percent enhancement (PEthresh) and early-to-late signal enhancement ratio (SERthresh), and FTV (PEthresh, SERthresh) was computed by summing all voxels meeting threshold criteria and minimum connectivity requirements. Ranges of PEthresh from 50% to 220% and SERthresh from 0.0 to 2.0 were evaluated. A Cox proportional hazards model determined associations between change in FTV over treatment and RFS at different PE and SER thresholds. The plot of hazard ratios for change in FTV from MRI1 to MRI4 showed a broad peak, with the maximum hazard ratio and highest significance occurring at a PE threshold of 70% and an SER threshold of 1.0 (hazard ratio = 8.71, 95% confidence interval 2.86-25.5, P < 0.00015), indicating optimal model fit. Enhancement thresholds affect the ability of MRI tumor volume to predict RFS. The value is robust over a wide range of thresholds, supporting the use of FTV as a biomarker. © 2013 Wiley Periodicals, Inc.
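A voxel-level sketch of the FTV computation, assuming three time points (pre-contrast, early and late post-contrast) and a hypothetical minimum component size standing in for the connectivity requirement:

```python
import numpy as np
from scipy import ndimage

def functional_tumor_volume(s0, s1, s2, voxel_ml, pe_thresh=70.0,
                            ser_thresh=1.0, min_voxels=5):
    """FTV from pre (s0), early (s1) and late (s2) post-contrast signal.
    PE = 100 * (s1 - s0) / s0;  SER = (s1 - s0) / (s2 - s0)."""
    eps = 1e-9
    pe = 100.0 * (s1 - s0) / (s0 + eps)
    ser = (s1 - s0) / (s2 - s0 + eps)
    mask = (pe >= pe_thresh) & (ser >= ser_thresh)
    # connectivity requirement: drop small connected components
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_voxels))
    return keep.sum() * voxel_ml

# Toy volume: a 4x4x4 enhancing region with washout kinetics
s0 = np.full((20, 20, 20), 100.0)
s1, s2 = s0.copy(), s0.copy()
s1[8:12, 8:12, 8:12] = 200.0
s2[8:12, 8:12, 8:12] = 140.0
print(functional_tumor_volume(s0, s1, s2, voxel_ml=0.001))  # 0.064 mL
```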
How much is enough? Examining frequency criteria for NSSI disorder in adolescent inpatients.
Muehlenkamp, Jennifer J; Brausch, Amy M; Washburn, Jason J
2017-06-01
To empirically evaluate the diagnostic relevance of the proposed Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; APA, 2013) Criterion-A frequency threshold for nonsuicidal self-injury (NSSI) disorder. Archival, de-identified, self-reported clinical assessment data from 746 adolescent psychiatric patients (mean age = 14.97; 88% female; 76% White) were used. The sample was randomly split into 2 unique samples for data analyses. Measures included assessments of NSSI, proposed DSM-5 NSSI-disorder criteria, psychopathology, dysfunction, distress, functional impairment, and suicidality. Discriminant-function analyses run with Sample A identified a significant differentiation of groups based on a frequency of NSSI at 25 or more days in the past year, Λ = .814, χ2(54) = 72.59, p < .05, canonical R2 = .36. This cutoff was replicated in the second sample. All patients were coded into 1 of 3 empirically derived NSSI-frequency cutoff groups: high (≥25 days), moderate (5-24 days), and low (1-4 days), and compared. The high-NSSI group scored higher on most NSSI features, including DSM-5-proposed Criterion-B and -C symptoms, depression, psychotic symptoms, substance abuse, borderline personality-disorder features, suicidal ideation, and suicide plans, than the moderate- and low-NSSI groups, who did not differ from each other on many of the variables. The currently proposed DSM-5 Criterion-A frequency threshold for NSSI disorder lacks validity and clinical utility. The field needs to consider raising the frequency threshold to ensure that a meaningful and valid set of diagnostic criteria is established, and to avoid overpathologizing individuals who infrequently engage in NSSI. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Using Statistical and Machine Learning Methods to Evaluate the Prognostic Accuracy of SIRS and qSOFA
Liu, Tieming; Shepherd, Scott; Paiva, William
2018-01-01
Objectives The objective of this study was to compare the performance of two popularly used early sepsis diagnostic criteria, systemic inflammatory response syndrome (SIRS) and quick Sepsis-related Organ Failure Assessment (qSOFA), using statistical and machine learning approaches. Methods This retrospective study examined patient visits to the Emergency Department (ED) with sepsis-related diagnoses. The outcome was 28-day in-hospital mortality. Using odds ratios (OR) and modeling methods (decision tree [DT], multivariate logistic regression [LR], and naïve Bayes [NB]), the relationships between diagnostic criteria and mortality were examined. Results Of 132,704 eligible patient visits, 14% died within 28 days of ED admission. The association of qSOFA ≥2 with mortality (OR = 3.06; 95% confidence interval [CI], 2.96–3.17) was greater than the association of SIRS ≥2 with mortality (OR = 1.22; 95% CI, 1.18–1.26). The area under the ROC curve for qSOFA (AUROC = 0.70) was significantly greater than for SIRS (AUROC = 0.63). For qSOFA, the sensitivity and specificity were DT = 0.39, LR = 0.64, NB = 0.62 and DT = 0.89, LR = 0.63, NB = 0.66, respectively. For SIRS, the sensitivity and specificity were DT = 0.46, LR = 0.62, NB = 0.62 and DT = 0.70, LR = 0.59, NB = 0.58, respectively. Conclusions The evidence suggests that qSOFA is a better diagnostic criterion than SIRS. The low sensitivity of qSOFA can be improved by carefully selecting the threshold used to translate predicted probabilities into labels. These findings can guide healthcare providers in selecting risk-stratification measures for patients presenting to an ED with sepsis. PMID:29770247
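The two bedside scores themselves are simple tallies. A sketch using their standard published definitions follows; the study's modeling step would then tune the probability cutoff mentioned in the conclusions.

```python
def qsofa(rr, sbp, gcs):
    """quick SOFA: 1 point each for RR >= 22/min, SBP <= 100 mmHg,
    altered mentation (GCS < 15); positive screen at >= 2."""
    return (rr >= 22) + (sbp <= 100) + (gcs < 15)

def sirs(temp_c, hr, rr, paco2, wbc, bands_pct=0.0):
    """SIRS: temperature >38 or <36 C; HR >90; RR >20 or PaCO2 <32 mmHg;
    WBC >12 or <4 (x10^3/uL) or >10% bands; positive at >= 2."""
    return ((temp_c > 38 or temp_c < 36) + (hr > 90)
            + (rr > 20 or paco2 < 32)
            + (wbc > 12 or wbc < 4 or bands_pct > 10))

print(qsofa(rr=24, sbp=95, gcs=15) >= 2)   # True
print(sirs(37.2, 95, 22, 40, 9) >= 2)      # True
```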
SU-F-T-264: VMAT QA with 2D Radiation Measuring Equipment Attached to Gantry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fung, A
2016-06-15
Purpose: To introduce a method of VMAT QA with a 2D measuring device. The 2D device is attached to the gantry throughout the measurement, which eliminates error caused by the angular dependence of the radiation detectors. Methods: A 2D radiation measuring device was attached to the gantry of a linear accelerator, with the center of the detector plane at the isocenter. For each patient plan, two verification plans were created for QA purposes. One was like an ordinary VMAT plan, to be used for radiation delivery. The other was a plan with the gantry angle fixed at zero, representing the dose distribution as seen by the rotating 2D device. Points above a 10% dose threshold were analyzed. Data are within tolerance if they fit the 3 mm or 3% dose gamma criteria. For each patient, the plan passed when 95% of all the points in the 2D matrix met the gamma criteria. The following statistics were calculated: number of patient plans passed, percentage of all points passed, and average percentage difference over all points. Results: VMAT QA was performed for patients during one year in our department, and the results were analyzed. All irradiation was with a 6 MV photon beam. For each plan, calculated and measured doses were compared. After collecting one year's results, with 81 patient plans analyzed, all (100%) of the plans passed the gamma criteria. Of the points analyzed from all plans, 98.8% passed. Conclusion: This method of attaching a 2D measuring device to the linac gantry proves to be an accurate way to perform VMAT QA. It is simple to use and low cost, and it eliminates the problem of directional dependence.
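A brute-force version of the 3%/3 mm gamma analysis behind the pass/fail decision can be written directly from its definition. This is a simplified global-gamma sketch (no sub-grid interpolation), with illustrative defaults, not the QA software actually used.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm=1.0, dd_pct=3.0, dta_mm=3.0,
                    low_dose_cut=0.10, search_mm=6.0):
    """Global 2D gamma: for each measured point above the dose
    threshold, gamma^2 is the minimum over nearby reference points of
    (dr/DTA)^2 + (dD/DD)^2; a point passes when gamma <= 1."""
    dd = dd_pct / 100.0 * ref.max()
    r = int(round(search_mm / spacing_mm))
    ny, nx = ref.shape
    passed = total = 0
    for i in range(ny):
        for j in range(nx):
            if meas[i, j] < low_dose_cut * ref.max():
                continue
            total += 1
            g2_min = np.inf
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dist2 = (di ** 2 + dj ** 2) * spacing_mm ** 2
                        dose2 = (meas[i, j] - ref[ii, jj]) ** 2
                        g2_min = min(g2_min,
                                     dist2 / dta_mm ** 2 + dose2 / dd ** 2)
            passed += g2_min <= 1.0
    return 100.0 * passed / max(total, 1)

# Toy check: a 1 mm shift plus 2% scaling should still largely pass
y, x = np.mgrid[:40, :40]
ref = np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 60.0)
meas = np.roll(ref, 1, axis=1) * 1.02
print(gamma_pass_rate(ref, meas))
```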
Strahm, E; Emery, C; Saugy, M; Dvorak, J; Saudan, C
2009-12-01
The determination of the carbon isotope ratio in androgen metabolites has been previously shown to be a reliable, direct method to detect testosterone misuse in the context of antidoping testing. Here, the variability in the 13C/12C ratios in urinary steroids in a widely heterogeneous cohort of professional soccer players residing in different countries (Argentina, Italy, Japan, South Africa, Switzerland and Uganda) is examined. Carbon isotope ratios of selected androgens in urine specimens were determined using gas chromatography/combustion/isotope ratio mass spectrometry (GC-C-IRMS). Urinary steroids in Italian and Swiss populations were found to be enriched in 13C relative to other groups, reflecting higher consumption of C3 plants in these two countries. Importantly, detection criteria based on the difference in the carbon isotope ratio of androsterone and pregnanediol for each population were found to be well below the established threshold value for positive cases. The results obtained with the tested diet groups highlight the importance of adapting the criteria if one wishes to increase the sensitivity of exogenous testosterone detection. In addition, confirmatory tests might be rendered more efficient by combining isotope ratio mass spectrometry with refined interpretation criteria for positivity and subject-based profiling of steroids.
Liu, Yang; Hoppe, Brenda O; Convertino, Matteo
2018-04-10
Emergency risk communication (ERC) programs that activate when the ambient temperature is expected to cross certain extreme thresholds are widely used to manage relevant public health risks. In practice, however, the effectiveness of these thresholds has rarely been examined. The goal of this study is to test if the activation criteria based on extreme temperature thresholds, both cold and heat, capture elevated health risks for all-cause and cause-specific mortality and morbidity in the Minneapolis-St. Paul Metropolitan Area. A distributed lag nonlinear model (DLNM) combined with a quasi-Poisson generalized linear model is used to derive the exposure-response functions between daily maximum heat index and mortality (1998-2014) and morbidity (emergency department visits; 2007-2014). Specific causes considered include cardiovascular, respiratory, renal diseases, and diabetes. Six extreme temperature thresholds, corresponding to 1st-3rd and 97th-99th percentiles of local exposure history, are examined. All six extreme temperature thresholds capture significantly increased relative risks for all-cause mortality and morbidity. However, the cause-specific analyses reveal heterogeneity. Extreme cold thresholds capture increased mortality and morbidity risks for cardiovascular and respiratory diseases and extreme heat thresholds for renal disease. Percentile-based extreme temperature thresholds are appropriate for initiating ERC targeting the general population. Tailoring ERC by specific causes may protect some but not all individuals with health conditions exacerbated by hazardous ambient temperature exposure. © 2018 Society for Risk Analysis.
Justus, B.G.; Mize, Scott V.; Kroes, Daniel; Wallace, James E.
2012-01-01
Dissolved oxygen (DO) concentrations in lowland streams are naturally lower than those in upland streams; however, in some regions where monitoring data are lacking, DO criteria originally established for upland streams have been applied to lowland streams. This study investigated the DO concentrations at which fish and invertebrate assemblages at 35 sites located on lowland streams in southwestern Louisiana began to demonstrate biological thresholds. Average threshold values for taxa richness, diversity and abundance metrics were 2.6 and 2.3 mg/L for the invertebrate and fish assemblages, respectively. These thresholds are approximately twice the DO concentration that some native fish species are capable of tolerating and are comparable with DO criteria that have been recently applied to some coastal streams in Louisiana and Texas. DO minima >2.5 mg/L were favoured for all but extremely tolerant taxa. Extremely tolerant taxa had respiratory adaptations that gave them a competitive advantage, and their success when DO minima were <2 mg/L could be related more to reductions in competition or predation than to DO concentration directly. DO generally had an inverse relation to the amount of agriculture in the buffer area; however, DO concentrations at sites with both low and high amounts of agriculture (including three least-disturbed sites) declined to <2.5 mg/L. Thus, although DO fell below a concentration that was identified as an approximate biological threshold, sources of this condition were sometimes natural (allochthonous material) and had little relation to anthropogenic activity.
2012-01-01
Background A standardized definition of remission criteria in schizophrenia was proposed by the International group of NC Andreasen in 2005 (low symptom threshold for the eight core Positive and Negative Syndrome Scale (PANSS) symptoms for at least 6 consecutive months). Methods A cross-sectional study of remission rate, using a 6-month follow-up to assess symptomatic stability, was conducted in two healthcare districts (first and second) of an outpatient psychiatric service in Moscow. The key inclusion criteria were outpatients with an International Classification of Diseases, 10th edition (ICD-10) diagnosis of schizophrenia or schizoaffective disorder. Remission was assessed using modern criteria (severity and time criteria), PANSS and Global Assessment of Functioning (GAF). Patients who were stable but did not satisfied the symptomatic criteria were included in a further 1-year observational study, with the first group (first district) receiving risperidone (long-acting, injectable) (RLAI) and the second group (second district) continuing to receiving routine treatment. Symptoms were assessed with PANSS, social functioning with the personal and social performance scale, compliance with rating of medication influences scale, and extrapyramidal side effects with the Simpson-Angus scale. Results Only 64 (31.5%) of 203 outpatients met the criteria for symptomatic remission in the cross-sectional study, but at the end of the 6-month follow-up period, 158 (77.8%) were stable (irrespective of remission status). Among these only 53 (26.1%) patients fulfilled the remission criteria. The observational study had 42 stable patients in the RLAI group and 35 in the routine treatment group: 19.0% in the RLAI group and 5.7% in the control group met remission criteria after 12 months of therapy. Furthermore, reduction of PANSS total and subscale scores, as well as improvement in social functioning, was more significant in the first group. Conclusions Only around one-quarter of our outpatient schizophrenic population met full remission criteria. Use of RLAI gave a better remission rate than achieved in standard care with routine treatment. Criteria for remission should take into account clinical course and functioning to support clinical care. PMID:22221826
Zhao, Changsen; Yang, Shengtian; Liu, Junguo; Liu, Changming; Hao, Fanghua; Wang, Zhonggen; Zhang, Huitong; Song, Jinxi; Mitrovic, Simon M; Lim, Richard P
2018-05-15
The survival of aquatic biota in stream ecosystems depends on both water quantity and quality, and is particularly susceptible to degraded water quality in regulated rivers. Maintenance of environmental flows (e-flows) for aquatic biota with optimum water quantity and quality is essential for sustainable ecosystem services, especially in developing regions with insufficient stream monitoring of hydrology, water quality and aquatic biota. Few e-flow methods are available that closely link aquatic biota tolerances to pollutant concentrations in a simple and practical manner. In this paper a new method was proposed to assess e-flows that aimed to satisfy the requirements of aquatic biota for both the quantity and quality of the streamflow by linking fish tolerances to water quality criteria, or the allowable concentration of pollutants. For better operation of water projects and control of pollutants discharged into streams, this paper presented two coefficients for streamflow adjustment and pollutant control. Assessment of e-flows in the Wei River, the largest tributary of the Yellow River, shows that streamflow in dry seasons failed to meet e-flow requirements. Pollutant influx exerted a large pressure on the aquatic ecosystem, with pollutant concentrations much higher than that of the fish tolerance thresholds. We found that both flow velocity and water temperature exerted great influences on the pollutant degradation rate. Flow velocity had a much greater influence on pollutant degradation than did the standard deviation of flow velocity. This study provides new methods to closely link the tolerance of aquatic biota to water quality criteria for e-flow assessment. The recommended coefficients for streamflow adjustment and pollutant control, to dynamically regulate streamflow and control pollutant discharge, are helpful for river management and ecosystems rehabilitation. The relatively low data requirement also makes the method easy to use efficiently in developing regions, and thus this study has significant implications for managing flows in polluted and regulated rivers worldwide. Copyright © 2018. Published by Elsevier Ltd.
Rapid bacteriological screening of cosmetic raw materials by using bioluminescence.
Nielsen, P; Van Dellen, E
1989-01-01
Incoming cosmetic raw materials are routinely tested for microbial content. Standard plate count methods require up to 72 h. A rapid, sensitive, and inexpensive raw material screening method was developed that detects the presence of bacteria by means of ATP (bioluminescence). With a 24-h broth enrichment, the minimum bacterial ATP detection threshold of 1 cfu/g sample can be achieved using purified firefly luciferin-luciferase and an ATP-releasing reagent. By using this rapid screen, microbiologically free material may be released for production within 24 h, while contaminated material undergoes further quantitative and identification testing. For a raw material to be validated for this method, it must be evaluated for (1) a potential nonmicrobial light-contributing reaction resulting in a false positive, (2) degradation of the ATP giving a false negative, and (3) confirmation that the raw material has not overwhelmed the buffering capacity of the enrichment broth. The key criteria for a rapid screen were the sensitivity to detect less than one colony-forming unit per gram of product, the speed to do this within 24 h, and cost efficiency. Bioluminescence meets these criteria. With an enrichment step, it can detect less than one cfu/g sample. After the enrichment step, analysis time per sample is approximately 2 min and the cost for materials and reagents is less than one dollar per sample.
Lao, Xiang Qian; Yu, Ignatius Tak Sun; Au, Dennis Kin Kwok; Chiu, Yuk Lan; Wong, Claudie Chiu Yi; Wong, Tze Wai
2013-01-01
Background Noise-induced hearing loss (NIHL) is a major concern in the non-manufacturing industries. This study aimed to investigate the occupational noise exposure and the NIHL among Chinese restaurant workers and entertainment employees working in the service industry in Hong Kong. Methods This cross-sectional survey involved a total of 1,670 participants. Among them, 937 were randomly selected from the workers of Chinese restaurants and 733 were selected from workers in three entertainment sectors: radio and television stations; cultural performance halls or auditoria of the Leisure and Cultural Services Department (LCSD); and karaoke bars. Noise exposure levels were measured in the sampled restaurants and entertainment sectors. Each participant received an audiometric screening test. Those who were found to have abnormalities were required to take another diagnostic test in the health center. The “Klockhoff digit” method was used to classify NIHL in the present study. Results The main source of noise inside restaurants was the stoves. The mean hearing thresholds showed a typical dip at 3 to 6 KHz and a substantial proportion (23.7%) of the workers fulfilled the criteria for presumptive NIHL. For entertainment sectors, employees in radio and television stations generally had higher exposure levels than those in the halls or auditoria of the LCSD and karaoke bars. The mean hearing thresholds showed a typical dip at 6 KHz and a substantial proportion of the employees fulfilled the criteria for presumptive NIHL (38.6%, 95%CI: 35.1–42.1%). Being male, older, and having longer service and daily alcohol consumption were associated with noise-induced hearing impairment both in restaurant workers and entertainment employees. Conclusion Excessive noise exposure is common in the Chinese restaurant and entertainment industries and a substantial proportion of restaurant workers and entertainment employees suffer from NIHL. Comprehensive hearing conservation programs should be introduced to the service industry in Hong Kong. PMID:23976950
Harford, Thomas C; Yi, Hsiao-ye; Faden, Vivian B; Chen, Chiung M
2009-05-01
There is limited information on the validity of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) alcohol use disorders (AUD) symptom criteria among adolescents in the general population. The purpose of this study is to assess the DSM-IV AUD symptom criteria as reported by adolescent and adult drinkers in a single representative sample of the U.S. population aged 12 years and older. This design avoids potential confounding due to differences in survey methodology when comparing adolescents and adults from different surveys. A total of 133,231 current drinkers (had at least 1 drink in the past year) aged 12 years and older were drawn from respondents to the 2002 to 2005 National Surveys on Drug Use and Health. DSM-IV AUD criteria were assessed by questions related to specific symptoms occurring during the past 12 months. Factor analytic and item response theory models were applied to the 11 AUD symptom criteria to assess the probabilities of symptom item endorsements across different values of the underlying trait. A 1-factor model provided an adequate and parsimonious interpretation for the 11 AUD criteria for the total sample and for each of the gender-age groups. The MIMIC model exhibited significant indication of item bias among some criteria by gender, age, and race/ethnicity. Symptom criteria for "tolerance," "time spent," and "hazardous use" had lower item thresholds (i.e., lower severity) and low item discrimination, and they were well separated from the other symptoms, especially in the 2 younger age groups (12 to 17 and 18 to 25). "Larger amounts," "cut down," "withdrawal," and "legal problems" had higher item thresholds but generally lower item discrimination, and they tend to exhibit greater dispersion at higher AUD severity, particularly in the youngest age group (12 to 17). Findings from the present study do not provide support for the 2 separate DSM-IV diagnoses of alcohol abuse and dependence among either adolescents or adults. Variations in criteria severity for both abuse and dependence offer support for a dimensional approach to diagnosis, which should be considered in the ongoing development of DSM-V.
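The item-threshold and item-discrimination language above maps onto the two-parameter logistic (2PL) item response function; a minimal sketch follows, with purely illustrative parameter values.

```python
import numpy as np

def p_endorse(theta, a, b):
    """2PL item response function: probability of endorsing a symptom
    criterion given latent AUD severity theta, item discrimination a,
    and item threshold (severity) b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(p_endorse(theta, a=1.8, b=-0.5))  # low-threshold item, e.g. 'tolerance'
print(p_endorse(theta, a=0.9, b=1.5))   # high-threshold item, e.g. 'withdrawal'
```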
NASA Astrophysics Data System (ADS)
Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir
2016-03-01
The U-statistic method is one of the most important structural methods to separate anomaly from background. It considers the location of samples and carries out the statistical analysis of the data without judging from a geochemical point of view, trying to separate subpopulations and determine anomalous areas. In the present study, to use the U-statistic method in a three-dimensional (3D) setting, U-statistics are applied to the grade of two ideal test examples, taking the sample Z values (elevation) into account. This is the first time that the method has been applied under 3D conditions. To evaluate the performance of the 3D U-statistic method and to compare it with a non-structural method, threshold assessment based on the median and standard deviation (the MSD method) was applied to the same two test examples. Results show that the samples indicated as anomalous by the U-statistic method are more regular and show less dispersion than those indicated by the MSD method, so that denser clusters of anomalous samples can be delineated as promising zones. Moreover, results show that at a threshold of U = 0, the total misclassification error of the U-statistic method is much smaller than that of the x̄ + n × s criterion. Finally, a 3D model of the two test examples separating anomaly from background using the 3D U-statistic method is provided. The source code of a software program, developed in the MATLAB programming language to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all geochemical varieties and can be used in similar exploration projects.
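As a point of comparison, the non-structural MSD criterion is simple enough to state in a few lines of code. The sketch below is a minimal Python illustration with synthetic grades standing in for real geochemical data; the multiplier n = 2 is an illustrative choice, not a value from the study.

```python
import numpy as np

def msd_threshold(grades, n=2.0):
    """Anomaly threshold as mean + n * standard deviation (the x̄ + n × s criterion)."""
    return grades.mean() + n * grades.std(ddof=1)

# Illustrative data: a background population with a small anomalous subpopulation.
rng = np.random.default_rng(0)
grades = np.concatenate([rng.normal(1.0, 0.3, 500),   # background grades
                         rng.normal(3.5, 0.5, 20)])   # anomalous grades

t = msd_threshold(grades, n=2.0)
anomalous = grades > t
print(f"threshold = {t:.2f}, flagged {anomalous.sum()} of {grades.size} samples")
```

Unlike the U-statistic, this criterion ignores sample locations entirely, which is exactly why the spatially structured method yields more regular, less dispersed anomalies.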
Modeling of cw OIL energy performance based on similarity criteria
NASA Astrophysics Data System (ADS)
Mezhenin, Andrey V.; Pichugin, Sergey Y.; Azyazov, Valeriy N.
2012-01-01
A simplified two-level generation model predicts that power extraction from a cw oxygen-iodine laser (OIL) with a stable resonator depends on three similarity criteria. Criterion τd is the ratio of the residence time of the active medium in the resonator to the O2(1Δ) reduction time at infinitely large intraresonator intensity. Criterion Π is the ratio of small-signal gain to threshold gain. Criterion Λ is the ratio of relaxation to excitation rates for the electronically excited iodine atoms I(2P1/2). Effective power extraction from a cw OIL is achieved when the values of the similarity criteria lie in the intervals τd = 5–8, Π = 3–8, and Λ ≤ 0.01.
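Since the three criteria are plain ratios, checking whether an operating point falls inside the efficient window is straightforward once the underlying quantities are known. In the sketch below, all input values are hypothetical placeholders for the quantities named in the abstract; only the ratio definitions and the target intervals come from the model.

```python
def oil_similarity_criteria(t_res, t_red_inf, g0, g_th, r_relax, r_excite):
    """Compute the three similarity criteria of the two-level OIL model."""
    tau_d = t_res / t_red_inf   # residence time / O2(1Delta) reduction time
    Pi = g0 / g_th              # small-signal gain / threshold gain
    Lam = r_relax / r_excite    # I(2P1/2) relaxation rate / excitation rate
    efficient = 5 <= tau_d <= 8 and 3 <= Pi <= 8 and Lam <= 0.01
    return tau_d, Pi, Lam, efficient

# Hypothetical operating point (all values invented for illustration).
print(oil_similarity_criteria(2.0e-3, 3.0e-4, 0.6, 0.1, 1.0e4, 2.0e6))
```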
Nagarajan, Mahesh B; Huber, Markus B; Schlossbauer, Thomas; Leinsinger, Gerda; Krol, Andrzej; Wismüller, Axel
2013-10-01
Characterizing breast lesions as benign or malignant is especially difficult for small lesions: they do not exhibit the typical characteristics of malignancy and are harder to segment because their margins are harder to visualize. Previous attempts at using dynamic or morphologic criteria to classify small lesions (mean lesion diameter of about 1 cm) have not yielded satisfactory results. The goal of this work was to improve classification performance in such small, diagnostically challenging lesions while concurrently eliminating the need for precise lesion segmentation. To this end, we introduce a method for topological characterization of lesion enhancement patterns over time. Three Minkowski Functionals were extracted from all five post-contrast images of sixty annotated lesions on dynamic breast MRI exams. For each Minkowski Functional, topological features extracted from each post-contrast image of the lesions were combined into a high-dimensional texture feature vector. These feature vectors were classified in a machine learning task with support vector regression. For comparison, conventional Haralick texture features derived from gray-level co-occurrence matrices (GLCM) were also used. A new method for extracting thresholded GLCM features was also introduced and investigated here. The best classification performance was observed with the Minkowski Functionals area and perimeter, thresholded GLCM features f8 and f9, and conventional GLCM features f4 and f6. However, both Minkowski Functionals and thresholded GLCM achieved such results without lesion segmentation, while the performance of GLCM features deteriorated significantly when lesions were not segmented (p < 0.05). This suggests that such advanced spatio-temporal characterization can improve the classification performance achieved in such small lesions, while simultaneously eliminating the need for precise segmentation.
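For readers unfamiliar with GLCM texture features, the sketch below computes a few standard ones with scikit-image on a random stand-in patch. The `threshold` option is one plausible reading of "thresholded GLCM" (clipping sub-threshold gray values before building the matrix), not necessarily the paper's recipe, and the specific Haralick statistics f4, f6, f8 and f9 cited above are not directly exposed by `graycoprops`.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(img, levels=32, threshold=None):
    """GLCM texture features for one (post-contrast) image patch."""
    img = img.astype(float)
    if threshold is not None:
        img = np.where(img >= threshold, img, 0.0)   # assumed thresholding variant
    # Quantize to a small number of gray levels, as is usual for GLCMs.
    q = np.digitize(img, np.linspace(img.min(), img.max() + 1e-9, levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=levels,
                        symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

patch = np.random.default_rng(1).integers(0, 255, (32, 32))  # stand-in for a lesion ROI
print(glcm_texture_features(patch))
```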
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
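A bootstrap convergence check of the kind described can be sketched compactly: resample the model evaluations, re-estimate the sensitivity indices, and track the width of the resulting confidence intervals. Everything below (the toy r² estimator, the synthetic model, the width criterion) is illustrative rather than the study's actual procedure.

```python
import numpy as np

def bootstrap_ci_width(estimate_indices, X, Y, n_boot=200, alpha=0.05, seed=0):
    """Bootstrap the sensitivity indices; the max confidence-interval width
    across parameters is one possible convergence statistic."""
    rng = np.random.default_rng(seed)
    n = len(Y)
    boot = np.array([estimate_indices(X[idx], Y[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))])
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return (hi - lo).max()

def r2_indices(X, Y):
    """Toy sensitivity estimator: squared correlation of each input with the output."""
    Xc, Yc = X - X.mean(0), Y - Y.mean()
    r = (Xc * Yc[:, None]).sum(0) / np.sqrt((Xc**2).sum(0) * (Yc**2).sum())
    return r**2

rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))
Y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)   # X[:, 2] is insensitive
print("max 95% CI width:", bootstrap_ci_width(r2_indices, X, Y))
```

If the width is still above a tolerance, the sample size is increased and the check repeated; ranking and screening can be tested for convergence the same way and, as the study found, usually converge earlier.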
Flagging threshold optimization for manual blood smear review in primary care laboratory.
Bihl, Pierre-Adrien
2018-04-01
Manual blood smear review is required when an anomaly detected by the automated hematology analyzer triggers a flag. Our aim in this study was to optimize these flagging thresholds for manual slide review in order to limit workload while safeguarding clinical care by introducing no additional false negatives. The flagging causes of 4,373 samples run on an ADVIA 2120i were investigated by manual slide review. A set of six user adjustments is proposed. By implementing all of our recommendations, the false-positive rate falls from 81.8% to 58.6%, while the PPV increases from 18.2% to 23.7%. Hence, the use of such optimized thresholds enables us to maximize efficiency without compromising clinical care, but each laboratory should establish its own criteria to take local distinctive features into consideration.
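The quantities being traded off here are ordinary 2×2-table metrics. The sketch below computes them from hypothetical counts; note that the abstract's false-positive rate appears to be defined against flagged samples rather than against truly normal ones, so the conventional definitions used here will not reproduce its figures exactly.

```python
def flag_review_metrics(tp, fp, tn, fn):
    """Standard flagging metrics from a 2x2 review table."""
    return {
        "false_positive_rate": fp / (fp + tn),  # flags among truly normal samples
        "ppv": tp / (tp + fp),                  # flags that truly needed review
        "sensitivity": tp / (tp + fn),          # must stay at 1.0 (no false negatives)
    }

# Hypothetical counts for illustration only (totalling 4,373 samples).
print(flag_review_metrics(tp=180, fp=600, tn=3593, fn=0))
```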
The prevalence of cognitive distortion in depressed adolescents.
Marton, P; Kutcher, S
1995-01-01
This study examined the prevalence of cognitive distortion in depressed adolescents. Ninety-four consecutive depressed adolescent psychiatric outpatients were administered the Beck Depression Inventory, the Dysfunctional Attitude Scale, the Interpersonal Dependency Inventory and the Maudsley Personality Inventory. Depressed patients who scored above a threshold for cognitive distortion were compared to those who fell below the threshold. Of the depressed patients, 47.4% were found to meet the severity criteria for cognitive distortion, while the remaining 52.6% were found to be below the severity threshold. Cognitive distortion was associated with more severe symptoms of depression, lack of social self confidence and greater introversion. These results do not support the hypothesis that cognitive distortion is universal in clinical depression. However, they do suggest that cognitive distortion is associated with more severe depression. PMID:7865499
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
O'Connor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to bypass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, the cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
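The core moves — locate the exponential phase by linear regression on log fluorescence, then read Ct for both amplicons off their fitted lines at a shared threshold — can be illustrated compactly. The sketch below is a simplified stand-in for Q-Anal (fixed window width, synthetic curves, no error propagation), not the published algorithm.

```python
import numpy as np

def exponential_window(cycles, fluor, width=5):
    """Slide a window over log-fluorescence and keep the best linear fit
    with positive slope -- a crude version of iterative regression."""
    logf = np.log10(fluor)
    best = None
    for i in range(len(cycles) - width + 1):
        x, y = cycles[i:i + width], logf[i:i + width]
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        sse = float(resid @ resid)
        if slope > 0 and (best is None or sse < best[0]):
            best = (sse, slope, intercept)
    return best

def ct_at_threshold(fit, log_threshold):
    _, slope, intercept = fit
    return (log_threshold - intercept) / slope   # cycle where the fit crosses threshold

# Synthetic amplification curves (efficiency ~0.95) for target and reference.
cycles = np.arange(1, 41, dtype=float)
make = lambda c0: 1e-3 * (1.95 ** np.minimum(cycles - c0, 25 - c0)).clip(min=1e-4)
target, reference = make(0.0), make(3.0)

ft, fr = exponential_window(cycles, target), exponential_window(cycles, reference)
thr = -1.0   # log10 fluorescence threshold inside both exponential phases
print(f"delta-Ct = {ct_at_threshold(fr, thr) - ct_at_threshold(ft, thr):.2f} cycles")
```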
Unger, E R; Lin, J-M S; Tian, H; Gurbaxani, B M; Boneva, R S; Jones, J F
2016-01-01
Multiple case definitions are in use to identify chronic fatigue syndrome (CFS). Even when using the same definition, the methods used to apply definitional criteria may affect results. The Centers for Disease Control and Prevention (CDC) conducted two population-based studies estimating CFS prevalence using the 1994 case definition; one relied on direct questions for criteria of fatigue, functional impairment and symptoms (1997 Wichita; Method 1), and the other used subscale score thresholds of standardized questionnaires for criteria (2004 Georgia; Method 2). Compared with previous reports, the 2004 CFS prevalence estimate was higher, raising the question of whether the change in how the case definition was operationalized affected the prevalence estimate and illness characteristics. The follow-up of the Georgia cohort allowed direct comparison of both methods of applying the 1994 case definition. Of 1961 participants (53% of eligible) who completed the detailed telephone interview, 919 (47%) were eligible for and 751 (81%) underwent clinical evaluation including medical/psychiatric evaluations. Data from the 499 individuals with complete data and without exclusionary conditions were available for this analysis. A total of 86 participants were classified as CFS by one or both methods; 44 cases were identified by both methods, 15 only by Method 1, and 27 only by Method 2 (kappa 0.63; 95% confidence interval [CI]: 0.53, 0.73; concordance 91.59%). The CFS group identified by both methods were more fatigued, had worse functioning, and had more symptoms than those identified by only one method. Moderate to severe depression was noted in only one individual who was classified as CFS by both methods. When comparing the CFS groups identified by only one method, those only identified by Method 2 were either similar to or more severely affected in fatigue, function, and symptoms than those only identified by Method 1. The two methods demonstrated substantial concordance. While Method 2 classified more participants as CFS, there was no indication that they were less severely ill or more depressed. The classification differences do not fully explain the prevalence increase noted in the 2004 Georgia study. Use of standardized instruments for the major CFS domains provides advantages for disease stratification and comparing CFS patients to other illnesses.
Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars
NASA Astrophysics Data System (ADS)
Ruml, Mirjana; Vuković, Ana; Milatović, Dragan
2010-07-01
The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and to select the optimal thresholds for a greater number of apricot (Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series was used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observations were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD, and two methods for calculating daily mean air temperatures, were tested to emphasize the differences that can arise from different interpretations of the basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and the “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and of methods (5) and (6), which generally performed worst, are discussed in the paper.
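Method (7) amounts to a small search: for each candidate base temperature, derive the mean GDD requirement from the observed dates, predict each year's date from that requirement, and keep the base with the smallest RMSE. The sketch below implements that loop on synthetic temperatures; the simple max(T − base, 0) daily GDD is just one of the equation variants the study compares.

```python
import numpy as np

def gdd(tmean, base):
    """Daily growing degree-days in the simplest variant: max(Tmean - base, 0)."""
    return np.maximum(tmean - base, 0.0)

def rmse_for_base(temps_by_year, observed_doy, base):
    cum = [np.cumsum(gdd(t, base)) for t in temps_by_year]
    req = np.mean([c[d - 1] for c, d in zip(cum, observed_doy)])   # mean GDD at event
    pred = [int(np.searchsorted(c, req)) + 1 for c in cum]         # first day reaching it
    return np.sqrt(np.mean((np.array(pred) - np.array(observed_doy)) ** 2))

# Synthetic 10-year series of daily mean temperatures (illustrative only).
rng = np.random.default_rng(2)
temps = [10 + 12 * np.sin(np.pi * (np.arange(1, 366) - 90) / 365) + rng.normal(0, 2, 365)
         for _ in range(10)]
bloom = [95 + int(rng.normal(0, 3)) for _ in range(10)]   # observed full-bloom DOY

bases = np.arange(-6.0, 7.0, 0.5)
best = min(bases, key=lambda b: rmse_for_base(temps, bloom, b))
print(f"optimal lower threshold ≈ {best:.1f} °C")
```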
Ho, Sirikit; Lukacs, Zoltan; Hoffmann, Georg F; Lindner, Martin; Wetter, Thomas
2007-07-01
In newborn screening with tandem mass spectrometry, multiple intermediary metabolites are quantified in a single analytical run for the diagnosis of fatty-acid oxidation disorders, organic acidurias, and aminoacidurias. Published diagnostic criteria for these disorders normally incorporate a primary metabolic marker combined with secondary markers, often analyte ratios, for which the markers have been chosen to reflect metabolic pathway deviations. We applied a procedure to extract new markers and diagnostic criteria for newborn screening to the data of newborns with confirmed medium-chain acyl-CoA dehydrogenase deficiency (MCADD) and a control group from the newborn screening program, Heidelberg, Germany. We validated the results with external data of the screening center in Hamburg, Germany. We extracted new markers by performing a systematic search for analyte combinations (features) with high discriminatory performance for MCADD. To select feature thresholds, we applied automated procedures to separate controls and cases on the basis of the feature values. Finally, we built classifiers from these new markers to serve as diagnostic criteria in screening for MCADD. On the basis of χ2 scores, we identified approximately 800 of >628,000 new analyte combinations with superior discriminatory performance compared with the best published combinations. Classifiers built with the new features achieved diagnostic sensitivities and specificities approaching 100%. Feature construction methods provide ways to disclose information hidden in the set of measured analytes. Other diagnostic tasks based on high-dimensional metabolic data might also profit from this approach.
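The feature-construction idea — enumerate analyte combinations and rank them by a χ2 score against confirmed cases — can be sketched with scikit-learn. The data, analyte names, and the restriction to pairwise ratios below are all illustrative assumptions; the study searched a far larger combination space.

```python
import numpy as np
from itertools import combinations
from sklearn.feature_selection import chi2

# Illustrative screening data: rows = newborns, columns = acylcarnitine analytes.
rng = np.random.default_rng(3)
names = ["C6", "C8", "C10", "C10:1", "C2"]
X = rng.lognormal(0.0, 0.4, size=(300, 5))
y = np.zeros(300, dtype=int)
y[:15] = 1            # confirmed MCADD cases
X[:15, 1] *= 8        # cases show elevated C8, the classical MCADD marker

# Construct candidate features as pairwise ratios and score them with chi2.
feats, labels = [], []
for i, j in combinations(range(5), 2):
    feats.append(X[:, i] / X[:, j])
    labels.append(f"{names[i]}/{names[j]}")
scores, _ = chi2(np.column_stack(feats), y)
for s, l in sorted(zip(scores, labels), reverse=True)[:3]:
    print(f"{l}: chi2 = {s:.1f}")
```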
NASA Astrophysics Data System (ADS)
Dionne, J. P.; Levine, J.; Makris, A.
2018-01-01
To design the next generation of blast mitigation helmets that offer increasing levels of protection against explosive devices, manufacturers must be able to rely on appropriate test methodologies and human surrogates that will differentiate the performance level of various helmet solutions and ensure user safety. Ideally, such test methodologies and associated injury thresholds should be based on widely accepted injury criteria relevant within the context of blast. Unfortunately, even though significant research has taken place over the last decade in the area of blast neurotrauma, there currently exists no agreement in terms of injury mechanisms for blast-induced traumatic brain injury. In absence of such widely accepted test methods and injury criteria, the current study presents a specific blast test methodology focusing on explosive ordnance disposal protective equipment, involving the readily available Hybrid III mannequin, initially developed for the automotive industry. The unlikely applicability of the associated brain injury criteria (based on both linear and rotational head acceleration) is discussed in the context of blast. Test results encompassing a large number of blast configurations and personal protective equipment are presented, emphasizing the possibility to develop useful correlations between blast parameters, such as the scaled distance, and mannequin engineering measurements (head acceleration). Suggestions are put forward for a practical standardized blast testing methodology taking into account limitations in the applicability of acceleration-based injury criteria as well as the inherent variability in blast testing results.
ten Haaf, Kevin; Tammemägi, Martin C.; Han, Summer S.; Kong, Chung Yin; Plevritis, Sylvia K.; de Koning, Harry J.; Steyerberg, Ewout W.
2017-01-01
Background Selection of candidates for lung cancer screening based on individual risk has been proposed as an alternative to criteria based on age and cumulative smoking exposure (pack-years). Nine previously established risk models were assessed for their ability to identify those most likely to develop or die from lung cancer. All models considered age and various aspects of smoking exposure (smoking status, smoking duration, cigarettes per day, pack-years smoked, time since smoking cessation) as risk predictors. In addition, some models considered factors such as gender, race, ethnicity, education, body mass index, chronic obstructive pulmonary disease, emphysema, personal history of cancer, personal history of pneumonia, and family history of lung cancer. Methods and findings Retrospective analyses were performed on 53,452 National Lung Screening Trial (NLST) participants (1,925 lung cancer cases and 884 lung cancer deaths) and 80,672 Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial (PLCO) ever-smoking participants (1,463 lung cancer cases and 915 lung cancer deaths). Six-year lung cancer incidence and mortality risk predictions were assessed for (1) calibration (graphically) by comparing the agreement between the predicted and the observed risks, (2) discrimination (area under the receiver operating characteristic curve [AUC]) between individuals with and without lung cancer (death), and (3) clinical usefulness (net benefit in decision curve analysis) by identifying risk thresholds at which applying risk-based eligibility would improve lung cancer screening efficacy. To further assess performance, risk model sensitivities and specificities in the PLCO were compared to those based on the NLST eligibility criteria. Calibration was satisfactory, but discrimination ranged widely (AUCs from 0.61 to 0.81). The models outperformed the NLST eligibility criteria over a substantial range of risk thresholds in decision curve analysis, with a higher sensitivity for all models and a slightly higher specificity for some models. The PLCOm2012, Bach, and Two-Stage Clonal Expansion incidence models had the best overall performance, with AUCs >0.68 in the NLST and >0.77 in the PLCO. These three models had the highest sensitivity and specificity for predicting 6-y lung cancer incidence in the PLCO chest radiography arm, with sensitivities >79.8% and specificities >62.3%. In contrast, the NLST eligibility criteria yielded a sensitivity of 71.4% and a specificity of 62.2%. Limitations of this study include the lack of identification of optimal risk thresholds, as this requires additional information on the long-term benefits (e.g., life-years gained and mortality reduction) and harms (e.g., overdiagnosis) of risk-based screening strategies using these models. In addition, information on some predictor variables included in the risk prediction models was not available. Conclusions Selection of individuals for lung cancer screening using individual risk is superior to selection criteria based on age and pack-years alone. The benefits, harms, and feasibility of implementing lung cancer screening policies based on risk prediction models should be assessed and compared with those of current recommendations. PMID:28376113
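The net benefit used in the decision curve analysis above has a simple closed form: at risk threshold pt, screening everyone above pt yields TP/N − FP/N × pt/(1 − pt). A minimal sketch on simulated, well-calibrated risks (all numbers invented):

```python
import numpy as np

def net_benefit(risk, outcome, pt):
    """Decision-curve net benefit of screening everyone with risk >= pt."""
    treat = risk >= pt
    n = len(outcome)
    tp = np.sum(treat & (outcome == 1))
    fp = np.sum(treat & (outcome == 0))
    return tp / n - fp / n * pt / (1 - pt)

rng = np.random.default_rng(4)
risk = rng.beta(1.2, 40, 10_000)      # skewed low, as in screening cohorts
outcome = rng.binomial(1, risk)       # perfectly calibrated toy model
for pt in (0.01, 0.02, 0.05):
    print(f"pt = {pt:.2f}: net benefit = {net_benefit(risk, outcome, pt):.4f}")
```

Comparing such curves across models and against a fixed eligibility rule is exactly how the range of thresholds at which risk-based selection wins is identified.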
Haumann, Sabine; Hohmann, Volker; Meis, Markus; Herzke, Tobias; Lenarz, Thomas; Büchner, Andreas
2012-01-01
Owing to technological progress and a growing body of clinical experience, indication criteria for cochlear implants (CI) are being extended to less severe hearing impairments. It is, therefore, worth reconsidering these indication criteria by introducing novel testing procedures. The diagnostic evidence collected will be evaluated. The investigation includes postlingually deafened adults seeking a CI. Prior to surgery, speech perception tests [Freiburg Speech Test and Oldenburg sentence (OLSA) test] were performed unaided and aided using the Oldenburg Master Hearing Aid (MHA) system. Linguistic skills were assessed with the visual Text Reception Threshold (TRT) test, and general state of health, socio-economic status (SES) and subjective hearing were evaluated through questionnaires. After surgery, the speech tests were repeated aided with a CI. To date, 97 complete data sets are available for evaluation. Statistical analyses showed significant correlations between postsurgical speech reception threshold (SRT) measured with the adaptive OLSA test and pre-surgical data such as the TRT test (r=−0.29), SES (r=−0.22) and (if available) aided SRT (r=0.53). The results suggest that new measures and setups such as the TRT test, SES and speech perception with the MHA provide valuable extra information regarding indication for CI. PMID:26557327
Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.
Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari
2014-07-01
[Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
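The double product is systolic blood pressure × heart rate, and its breakpoint is located where the workload–double-product relation changes slope. The sketch below finds that point by exhaustive two-segment least squares on synthetic data; the study's exact determination procedure may differ.

```python
import numpy as np

def find_breakpoint(workload, dp):
    """Fit two lines with a common split point; return the workload at which
    the total squared error is smallest."""
    best = (np.inf, None)
    for k in range(3, len(workload) - 3):          # at least 3 points per segment
        err = 0.0
        for x, y in ((workload[:k], dp[:k]), (workload[k:], dp[k:])):
            coef = np.polyfit(x, y, 1)
            r = y - np.polyval(coef, x)
            err += float(r @ r)
        if err < best[0]:
            best = (err, workload[k])
    return best[1]

# Synthetic incremental test: the double product rises faster above ~120 W.
w = np.arange(20, 220, 10, dtype=float)
dp = np.where(w < 120, 80 * w + 8000, 200 * w - 6400)
dp = dp + np.random.default_rng(5).normal(0, 300, len(w))
print("estimated breakpoint ≈", find_breakpoint(w, dp), "W")
```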
Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.
Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W
2017-08-01
Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques used simple statistics and/or rejected anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and can be considered simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. Features thus found are then used for anomaly detection by applying quartile thresholding, which rejects complete breaths if one of their features is out of range. The thresholds are determined by both saliency and performance metrics rather than the qualitative assumptions of previous works. Feature ranking indicates that our new landmark features are among the highest-scoring candidates across saliency criteria, regardless of age. F1-scores, receiver operating characteristic analysis, and the variability of the mean resistance metric show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant, and our removal is objective and comparable to the manual method. This work is a critical step toward automating quality control for the forced oscillation technique.
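Quartile thresholding of per-breath features is the easiest part to show in code: a breath is rejected when any feature leaves the [Q1 − k·IQR, Q3 + k·IQR] band. In the sketch below the two features, k = 1 and the data are all illustrative; the paper's learned landmark features are not reproduced.

```python
import numpy as np

def iqr_keep(features, k=1.0):
    """Keep a breath only if every feature lies within [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(features, [25, 75], axis=0)
    iqr = q3 - q1
    ok = (features >= q1 - k * iqr) & (features <= q3 + k * iqr)
    return ok.all(axis=1)

# Toy per-breath features: [mean resistance, tidal volume]; two artefact breaths.
rng = np.random.default_rng(6)
feats = rng.normal([4.0, 0.5], [0.3, 0.05], size=(50, 2))
feats[10] = [9.0, 0.5]     # swallow-like resistance spike
feats[20] = [4.0, 0.05]    # leak-like low volume
keep = iqr_keep(feats)
print(f"kept {keep.sum()} of {len(keep)} breaths; rejected: {np.where(~keep)[0]}")
```

Rejecting whole breaths rather than single data points is what preserves the balance between inspiratory and expiratory contributions.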
Predicting Geriatric Falls Following an Episode of Emergency Department Care: A Systematic Review
Carpenter, Christopher R.; Avidan, Michael S.; Wildes, Tanya; Stark, Susan; Fowler, Susan A.; Lo, Alexander X.
2015-01-01
Background Falls are the leading cause of traumatic mortality in geriatric adults. Despite recent multispecialty guideline recommendations that advocate for proactive fall prevention protocols in the emergency department (ED), the ability of risk factors or risk stratification instruments to identify subsets of geriatric patients at increased risk for short-term falls is largely unexplored. Objectives This was a systematic review and meta-analysis of ED-based history, physical examination, and fall risk stratification instruments with the primary objective of providing a quantitative estimate for each risk factor’s accuracy to predict future falls. A secondary objective was to quantify ED fall risk assessment test and treatment thresholds using derived estimates of sensitivity and specificity. Methods A medical librarian and two emergency physicians (EPs) conducted a medical literature search of PUBMED, EMBASE, CINAHL, CENTRAL, DARE, the Cochrane Registry, and Clinical Trials. Unpublished research was located by a hand search of emergency medicine (EM) research abstracts from national meetings. Inclusion criteria for original studies included ED-based assessment of pre-ED or post-ED fall risk in patients 65 years and older with sufficient detail to reproduce contingency tables for meta-analysis. Original study authors were contacted for additional details when necessary. The Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS-2) was used to assess individual study quality for those studies that met inclusion criteria. When more than one qualitatively similar study assessed the same risk factor for falls at the same interval following an ED evaluation, meta-analysis was performed using Meta-DiSc software. The primary outcomes were sensitivity, specificity, and likelihood ratios for fall risk factors or risk stratification instruments. Secondary outcomes included estimates of test and treatment thresholds using the Pauker method based on accuracy, screening risk, and the projected benefits or harms of fall prevention interventions in the ED. Results A total of 608 unique and potentially relevant studies were identified, but only three met our inclusion criteria. Two studies that included 660 patients assessed 29 risk factors and two risk stratification instruments for falls in geriatric patients in the 6 months following an ED evaluation, while one study of 107 patients assessed the risk of falls in the preceding 12 months. A self-report of depression was associated with the highest positive likelihood ratio (LR) of 6.55 (95% confidence interval [CI] = 1.41 to 30.48). Six fall predictors were identified in more than one study (past falls, living alone, use of walking aid, depression, cognitive deficit, and more than six medications) and meta-analysis was performed for these risk factors. One screening instrument was sufficiently accurate to identify a subset of geriatric ED patients at low risk for falls with a negative LR of 0.11 (95% CI = 0.06 to 0.20). The test threshold was 6.6% and the treatment threshold was 27.5%. Conclusions This study demonstrates the paucity of evidence in the literature regarding ED-based screening for risk of future falls among older adults. The screening tools and individual characteristics identified in this study provide an evidentiary basis on which to develop screening protocols for geriatric adults in the ED to reduce fall risk. PMID:25293956
Model selection for clustering of pharmacokinetic responses.
Guerra, Rui P; Carvalho, Alexandra M; Mateus, Paulo
2018-08-01
Pharmacokinetics comprises the study of drug absorption, distribution, metabolism and excretion over time. Clinical pharmacokinetics, focusing on therapeutic management, offers important insights towards personalised medicine through the study of efficacy and toxicity of drug therapies. This study is hampered by subjects' high variability in blood drug concentration when starting a therapy with the same drug dosage. Clustering of pharmacokinetic responses has been addressed recently as a way to stratify subjects and provide different drug doses for each stratum. This clustering method, however, is not able to determine the correct number of clusters automatically, relying on a user-defined parameter for collapsing clusters that are closer than a given heuristic threshold. We aim to use information-theoretical approaches to address parameter-free model selection. We propose two model selection criteria for clustering pharmacokinetic responses, founded on the Minimum Description Length and on the Normalised Maximum Likelihood. Experimental results show the ability of the model selection schemes to unveil the correct number of clusters underlying the mixture of pharmacokinetic responses. In this work we were able to devise two model selection criteria to determine the number of clusters in a mixture of pharmacokinetic curves, advancing over previous works. A cost-efficient parallel implementation in Java of the proposed method is publicly available for the community. Copyright © 2018 Elsevier B.V. All rights reserved.
An integrative perspective of the anaerobic threshold.
Sales, Marcelo Magalhães; Sousa, Caio Victor; da Silva Aguiar, Samuel; Knechtle, Beat; Nikolaidis, Pantelis Theodoros; Alves, Polissandro Mortoza; Simões, Herbert Gustavo
2017-12-14
The concept of the anaerobic threshold (AT) was introduced during the 1960s. Since then, several methods to identify the AT have been studied and suggested as novel 'thresholds' named after the variable used for detection (i.e. lactate threshold, ventilatory threshold, glucose threshold). These different techniques have created some confusion about how the parameter should be named: after the general concept (anaerobic threshold) or after the physiological measure used (i.e. lactate, ventilation). On the other hand, the modernization of scientific methods and apparatus to detect the AT, as well as the body of literature formed over the past decades, could provide a more cohesive understanding of the AT and the multiple physiological systems involved. Thus, the purpose of this review was to provide an integrative perspective of the methods to determine the AT. Copyright © 2017 Elsevier Inc. All rights reserved.
Schmuziger, Nicolas; Probst, Rudolf; Smurzynski, Jacek
2004-04-01
The purposes of the study were: (1) To evaluate the intrasession test-retest reliability of pure-tone thresholds measured in the 0.5-16 kHz frequency range for a group of otologically healthy subjects using Sennheiser HDA 200 circumaural and Etymotic Research ER-2 insert earphones and (2) to compare the data with existing criteria of significant threshold shifts related to ototoxicity and noise-induced hearing loss. Auditory thresholds in the frequency range from 0.5 to 6 kHz and in the extended high-frequency range from 8 to 16 kHz were measured in one ear of 138 otologically healthy subjects (77 women, 61 men; mean age, 24.4 yr; range, 12-51 yr) using HDA 200 and ER-2 earphones. For each subject, measurements of thresholds were obtained twice for both transducers during the same test session. For analysis, the extended high-frequency range from 8 to 16 kHz was subdivided into 8 to 12.5 and 14 to 16 kHz ranges. Data for each frequency and frequency range were analyzed separately. There were no significant differences in repeatability for the two transducer types for all frequency ranges. The intrasession variability increased slightly, but significantly, as frequency increased with the greatest amount of variability in the 14 to 16 kHz range. Analyzing each individual frequency, variability was increased particularly at 16 kHz. At each individual frequency and for both transducer types, intrasession test-retest repeatability from 0.5 to 6 kHz and 8 to 16 kHz was within 10 dB for >99% and >94% of measurements, respectively. The results indicated a false-positive rate of <3% in reference to the criteria for cochleotoxicity for both transducer types. In reference to the Occupational Safety and Health Administration Standard Threshold Shift criteria for noise-induced hazards, the results showed a minor false-positive rate of <1% for the HDA 200. Repeatability was similar for both transducer types. Intrasession test-retest repeatability from 0.5 to 12.5 kHz at each individual frequency including the frequency range susceptible to noise-induced hearing loss was excellent for both transducers. Repeatability was slightly, but significantly poorer in the frequency range from 14 to 16 kHz compared with the frequency ranges from 0.5 to 6 or 8 to 12.5 kHz. Measurements in the extended high-frequency range from 8 to 14 kHz, but not up to 16 kHz, may be recommended for monitoring purposes.
24 CFR 1003.301 - Selection process.
Code of Federal Regulations, 2010 CFR
2010-04-01
Application and Selection Process § 1003.301 Selection process. (a) Threshold requirement. An applicant that... establish weights for the selection criteria, will specify the maximum points available, and will describe...
9th Annual CMMI Technology Conference and User Group-Tuesday
2009-11-19
...defines consistent criteria for evaluating and quantifying risk likelihood and severity in the Risk Management Plan. Step 4: Project defines thresholds for each risk category. Step 5: Project defines bounds on the...
Estimating Historical Nitrogen Loading Rates to Great Bay Estuary, NH USA
The state of New Hampshire is developing nutrient criteria for the Great Bay Estuary (GBE). Threshold values were proposed for total nitrogen concentration, chlorophyll-a, and light attenuation to be protective of aquatic life uses related to hypoxia and seagrass habitat. A previ...
Towards bioavailability-based soil criteria: Past, present and future perspectives
USDA-ARS?s Scientific Manuscript database
Bioavailability has been used as a key indicator in chemical risk assessment, yet it is a poorly quantified risk factor. Worldwide, the framework used to assess potentially contaminated sites is similar and the decisions are based on threshold contaminant concentration. The uncertainty in the defin...
The urine output definition of acute kidney injury is too liberal
2013-01-01
Introduction The urine output criterion of 0.5 ml/kg/hour for 6 hours for acute kidney injury (AKI) has not been prospectively validated. Urine output criteria for AKI (AKIUO) as predictors of in-hospital mortality or dialysis need were compared. Methods All admissions to a general ICU were prospectively screened for 12 months and hourly urine output analysed in collection intervals between 1 and 12 hours. Prediction of the composite of mortality or dialysis by urine output was analysed in increments of 0.1 ml/kg/hour from 0.1 to 1 ml/kg/hour and the optimal threshold for each collection interval determined. AKICr was defined as an increase in plasma creatinine ≥26.5 μmol/l within 48 hours or ≥50% from baseline. Results Of 725 admissions, 72% had either AKICr or AKIUO or both. AKIUO alone (33.7%) was more frequent than AKICr alone (11.0%) (P <0.0001). A 6-hour urine output collection threshold of 0.3 ml/kg/hour was associated with a stepped increase in in-hospital mortality or dialysis (from 10% in patients above to 30% in patients below 0.3 ml/kg/hour). Hazard ratios for in-hospital mortality and 1-year mortality were 2.25 (1.40 to 3.61) and 2.15 (1.47 to 3.15), respectively, after adjustment for age, body weight, severity of illness, fluid balance, and vasopressor use. In contrast, after adjustment AKIUO was not associated with in-hospital mortality or 1-year mortality. The optimal urine output threshold was linearly related to the duration of urine collection (r2 = 0.93). Conclusions A 6-hour urine output threshold of 0.3 ml/kg/hour was the threshold most strongly associated with mortality and dialysis, and was independently predictive of both hospital mortality and 1-year mortality. This suggests that the current AKI urine output definition is too liberally defined. Shorter urine collection intervals may be used to define AKI using lower urine output thresholds. PMID:23787055
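Applying a urine output criterion is just a sliding-window average over hourly measurements. A minimal sketch, with this study's 0.3 ml/kg/h over 6 h cut-off alongside the conventional 0.5 ml/kg/h; the patient data are invented:

```python
import numpy as np

def aki_by_urine_output(hourly_uo_ml, weight_kg, window_h=6, threshold=0.3):
    """Flag AKI when mean urine output over any window_h-hour block falls
    below threshold ml/kg/h."""
    uo = np.asarray(hourly_uo_ml, dtype=float) / weight_kg
    means = np.convolve(uo, np.ones(window_h) / window_h, mode="valid")
    return bool((means < threshold).any())

# 24 h of hourly urine output (ml) for an 80 kg patient, including a 6 h spell
# at ~0.44 ml/kg/h: oliguric under the 0.5 criterion but not under 0.3.
uo = [60] * 10 + [35] * 6 + [55] * 8
print("AKI by 0.3 ml/kg/h x 6 h:", aki_by_urine_output(uo, 80))
print("AKI by 0.5 ml/kg/h x 6 h:", aki_by_urine_output(uo, 80, threshold=0.5))
```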
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syh, J; Ding, X; Syh, J
2015-06-15
Purpose: An approved proton pencil beam scanning (PBS) treatment plan may not be deliverable because some beam spots carry extremely low monitor units (MU). A hybrid plan combining the efficiency of higher per-spot MU with the efficacy of fewer energy layers was searched for and optimized. The range of MU threshold settings was investigated, and plan quality was evaluated by target dose conformity. Methods: Certain limitations and requirements need to be checked and tested before a nominal proton PBS treatment plan can be delivered; the plan must meet machine characterization and record-and-verify specifications before the beams can be delivered. A minimal MU-per-spot threshold, e.g. 0.02, was set to filter out low-count spots, and the plan was re-computed. Further MU threshold increments were tested in sequence without sacrificing plan quality. The number of energy layers was also altered as low-count layers were eliminated. Results: The minimal MU/spot threshold, the spot spacing in each energy layer, the total number of energy layers, and the MU weighting of beam spots in each beam were evaluated. Plan optimization traded off increased spot MU (efficiency) against fewer energy layers to deliver (efficacy). A 5% weighting limit of total monitor units per beam was feasible. Sparse spreading of beam spots was acceptable as long as target dose conformity stayed within the 3% criterion. Conclusion: Each spot size is equivalent to the relative dose in the beam delivery system, and the energy layer is associated with the depth of the targeted tumor. This work is crucial for maintaining the best possible plan quality; keeping the integrity of all intrinsic elements, such as spot size, spot number, layer number, and the weighting of spots in each layer, is important in this study.
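The filtering step described — dropping spots below a deliverable-MU floor before recomputing — can be mimicked crudely as below. Real treatment-planning systems re-optimize the surviving spots rather than rescaling them, and the MU values here are purely illustrative.

```python
import numpy as np

def filter_low_mu_spots(mu, min_mu=0.02, renormalize=True):
    """Drop beam spots below a deliverable-MU threshold and, optionally,
    rescale the rest so the total MU is preserved (a crude stand-in for
    the re-computation step; not a dosimetric re-optimization)."""
    keep = mu >= min_mu
    out = mu[keep]
    if renormalize:
        out = out * (mu.sum() / out.sum())
    return out, keep

spots = np.array([0.005, 0.018, 0.06, 0.25, 1.4])   # hypothetical spot MUs
filtered, kept = filter_low_mu_spots(spots)
print(kept, filtered.round(3))
```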
Sullivan, Edith V.; Brumback, Ty; Tapert, Susan F.; Fama, Rosemary; Prouty, Devin; Brown, Sandra A.; Cummins, Kevin; Thompson, Wesley K.; Colrain, Ian M.; Baker, Fiona C.; De Bellis, Michael D.; Hooper, Stephen R.; Clark, Duncan B.; Chung, Tammy; Nagel, Bonnie J.; Nichols, B. Nolan; Rohlfing, Torsten; Chu, Weiwei; Pohl, Kilian M.; Pfefferbaum, Adolf
2015-01-01
Objective To investigate development of cognitive and motor functions in healthy adolescents and to explore whether hazardous drinking affects the normal developmental course of those functions. Method Participants were 831 adolescents recruited across five United States sites of the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA): 692 met criteria for no/low alcohol exposure, and 139 exceeded drinking thresholds. Cross-sectional, baseline data were collected with computerized and traditional neuropsychological tests assessing eight functional domains expressed as composite scores. General additive modeling evaluated factors potentially modulating performance (age, sex, ethnicity, socioeconomic status, and pubertal developmental stage). Results Older no/low-drinking participants achieved better scores than younger ones on five Accuracy composites (General Ability, Abstraction, Attention, Emotion, and Balance). Speeded responses for Attention, Motor Speed, and General Ability were sensitive to age and pubertal development. The exceeds-threshold group (accounting for age, sex, and other demographic factors) performed significantly below the no/low-drinking group on Balance accuracy and on General Ability, Attention, Episodic Memory, Emotion, and Motor speed scores and showed evidence for faster speed at the expense of accuracy. Delay Discounting performance was consistent with poor impulse control in the younger no/low drinkers and in exceeds-threshold drinkers regardless of age. Conclusions Higher achievement with older age and pubertal stage in General Ability, Abstraction, Attention, Emotion, and Balance suggests continued functional development through adolescence, possibly supported by concurrently maturing frontal, limbic, and cerebellar brain systems. Whether low scores by the exceeds-threshold group resulted from drinking or from other pre-existing factors requires longitudinal study. PMID:26752122
Dusing, Reginald W.; Peng, Warner; Lai, Sue-Min; Grado, Gordon L.; Holzbeierlein, Jeffrey M.; Thrasher, J. Brantley; Hill, Jacqueline; Van Veldhuizen, Peter J.
2014-01-01
Purpose The aim of this study was to identify which patient characteristics are associated with the highest likelihood of positive findings on 11C-acetate PET/computed tomography attenuation correction (CTAC) (PET/CTAC) scans when imaging for recurrent prostate cancer. Methods From 2007 to 2011, 250 11C-acetate PET/CTAC scans were performed at a single institution on patients with prostate cancer recurrence after surgery, brachytherapy, or external beam radiation. Of these patients, 120 met our inclusion criteria. Logistic regression analysis was used to examine the relationship between the predictability of positive findings and patients’ characteristics, such as prostate-specific antigen (PSA) level at the time of scan, PSA kinetics, Gleason score, staging, and type of treatment before scan. Results In total, 68.3% of the 120 11C-acetate PET/CTAC scans were positive. The percentage of positive scans correlated positively with PSA at the time of scanning and with PSA velocity (PSAV). The putative sensitivity and specificity were 86.6% and 65.8%, respectively, when a PSA level greater than 1.24 ng/mL was used as the threshold for scanning. The putative sensitivity and specificity were 74% and 75%, respectively, when a PSAV greater than 1.32 ng/mL/y was used as the threshold. No significant associations were found between scan positivity and age, PSA doubling time, Gleason score, staging, or type of treatment before scanning. Conclusions This retrospective study suggests that threshold models of PSA greater than 1.24 ng/mL or PSAV greater than 1.32 ng/mL per year are independent predictors of positive findings in 11C-acetate PET/CTAC imaging of recurrent prostate cancer. PMID:25036021
Rethinking the Clinically Based Thresholds of TransCelerate BioPharma for Risk-Based Monitoring.
Zink, Richard C; Dmitrienko, Anastasia; Dmitrienko, Alex
2018-01-01
The quality of data from clinical trials has received a great deal of attention in recent years. Of central importance is the need to protect the well-being of study participants and maintain the integrity of final analysis results. However, traditional approaches to assess data quality have come under increased scrutiny as providing little benefit for the substantial cost. Numerous regulatory guidance documents and industry position papers have described risk-based approaches to identify quality and safety issues. In particular, the position paper of TransCelerate BioPharma recommends defining risk thresholds to assess safety and quality risks based on past clinical experience. This exercise can be extremely time-consuming, and the resulting thresholds may only be relevant to a particular therapeutic area, patient or clinical site population. In addition, predefined thresholds cannot account for safety or quality issues where the underlying rate of observing a particular problem may change over the course of a clinical trial, and often do not consider varying patient exposure. In this manuscript, we appropriate rules commonly utilized for funnel plots to define a traffic-light system for risk indicators based on statistical criteria that consider the duration of patient follow-up. Further, we describe how these methods can be adapted to assess changing risk over time. Finally, we illustrate numerous graphical approaches to summarize and communicate risk, and discuss hybrid clinical-statistical approaches to allow for the assessment of risk at sites with low patient enrollment. We illustrate the aforementioned methodologies for a clinical trial in patients with schizophrenia. Funnel plots are a flexible graphical technique that can form the basis for a risk-based strategy to assess data integrity, while considering site sample size, patient exposure, and changing risk across time.
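A funnel-plot traffic light reduces to comparing each site's event rate against control limits that widen as site size shrinks. The sketch below uses simple normal-approximation binomial limits and invented site counts; the paper's exact limit rules, exposure adjustment and time-varying extensions are not reproduced.

```python
import numpy as np

def funnel_limits(p_overall, n, z=1.96):
    """Approximate 95% funnel-plot control limits for a site-level proportion."""
    se = np.sqrt(p_overall * (1 - p_overall) / n)
    return p_overall - z * se, p_overall + z * se

# Hypothetical site-level adverse event counts: (events, patients).
sites = {"A": (4, 60), "B": (18, 55), "C": (2, 8)}
p0 = sum(e for e, _ in sites.values()) / sum(n for _, n in sites.values())
for name, (e, n) in sites.items():
    lo, hi = funnel_limits(p0, n)
    flag = "green" if lo <= e / n <= hi else "red"
    print(f"site {name}: rate {e/n:.2f}, limits ({lo:.2f}, {hi:.2f}) -> {flag}")
```

The widening limits are what keep a small site like C from being flagged on only two events, which is one way to address the low-enrollment problem discussed above.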
Rainfall Threshold Assessment Corresponding to the Maximum Allowable Turbidity for Source Water.
Fan, Shu-Kai S; Kuan, Wen-Hui; Fan, Chihhao; Chen, Chiu-Yang
2016-12-01
This study aims to assess the upstream rainfall thresholds corresponding to the maximum allowable turbidity of source water, using monitoring data and artificial neural network computation. The Taipei Water Source Domain was selected as the study area, and the upstream rainfall records were collected for statistical analysis. Using analysis of variance (ANOVA), the cumulative rainfall records of one-day Ping-lin, two-day Ping-lin, two-day Tong-hou, one-day Guie-shan, and one-day Tai-ping (rainfall in the previous 24 or 48 hours at the named weather stations) were found to be the five most significant parameters for downstream turbidity development. An artificial neural network model was constructed to predict the downstream turbidity in the area investigated. The observed and model-calculated turbidity data were applied to assess the rainfall thresholds in the studied area. By setting preselected turbidity criteria, the upstream rainfall thresholds for these statistically determined rain gauge stations were calculated.
Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.
Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela
Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves unsolved the reliability of those methods in setting TMS doses. The present work aims at fulfilling this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene threshold according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
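Under MOCS, threshold estimation reduces to fitting a psychometric function to the proportion of trials with a reported phosphene at each fixed intensity, then reading off the 50% point. A minimal sketch with invented data (intensities in % of maximum stimulator output):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: probability of reporting a phosphene."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# MOCS-style data: 10 stimuli per intensity, number of 'seen' reports.
intensity = np.array([40, 45, 50, 55, 60, 65, 70], dtype=float)
seen = np.array([0, 1, 3, 5, 8, 10, 10], dtype=float)

(x0, k), _ = curve_fit(logistic, intensity, seen / 10.0, p0=[55.0, 0.3])
print(f"phosphene threshold (50% point) ≈ {x0:.1f}% MSO, slope k = {k:.2f}")
```

MOBS and REPT instead adapt the tested intensity trial by trial, which is faster but, as the comparison above suggests for MOBS, can be less stable.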
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve the edges of the image in the segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, and the edge energy function of an image is approximated by discretizing the integral. An optimal thresholding method that maximizes the edge energy function is then given. Several experimental results are presented and compared with Otsu's method.
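The contrast between the two criteria is easy to demonstrate: Otsu maximizes the between-class variance of the gray-level histogram, while the edge-preserving criterion scores each candidate threshold by how much image gradient its binary boundary captures. Both are sketched below; the discrete edge-energy score is a plausible simplification of the paper's line-integral formulation, not its exact discretization.

```python
import numpy as np

def otsu_threshold(img):
    """Classical Otsu: pick the gray level maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 weight at each level
    mu = np.cumsum(p * centers)             # class-0 unnormalized mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]

def edge_energy_threshold(img, candidates):
    """Score each threshold by the gradient magnitude along its binary boundary."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)

    def energy(t):
        b = img >= t
        h = b[:, 1:] != b[:, :-1]           # horizontal class changes
        v = b[1:, :] != b[:-1, :]           # vertical class changes
        return grad[:, 1:][h].sum() + grad[1:, :][v].sum()

    return max(candidates, key=energy)

# Synthetic image: a dark and a bright region with noise.
rng = np.random.default_rng(8)
img = np.where(np.arange(128)[None, :] < 64, 60.0, 180.0) + rng.normal(0, 15, (128, 128))
print("Otsu threshold:       ", round(float(otsu_threshold(img)), 1))
print("edge-energy threshold:", round(float(edge_energy_threshold(img, np.linspace(40, 200, 81))), 1))
```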
Ari, Timucin; Ari, Nilgun
2013-01-01
Early detection of occlusal caries in children is challenging for dentists because of the morphology of pits and fissures. The aim of this study was to compare in vitro the diagnostic performance of low-powered magnification with a light-emitting diode headlight (LPMLED) using ICDAS-II criteria and an AC Impedance Spectroscopy (ACIS) device on the occlusal surfaces of primary molars. The occlusal surfaces of 18 extracted primary molars were examined blindly by two examiners. The teeth were sectioned and examined under light microscopy using Downer's histological criteria as the gold standard. Good to excellent inter- and intra-examiner reproducibility and higher sensitivity, specificity, and AUC values were achieved by LPMLED at the D1 threshold. The relationship between histology and LPMLED was also statistically significant. In conclusion, visual aids have the potential to improve the performance of early caries detection and clinical diagnostics in children. Despite its potential, the ACIS device should be considered an adjunct method for detecting caries on primary teeth.
Herrmann, Henning; Nolde, Jürgen; Berger, Svend; Heise, Susanne
2016-02-01
Rare earth elements (REE) used to be taken as tracers of geological origin for fluvial transport. Nowadays their increased applications in innovative environmental-friendly technology (e.g. in catalysts, superconductors, lasers, batteries) and medical applications (e.g. MRI contrast agent) lead to man-made, elevated levels in the environment. So far, no regulatory thresholds for REE concentrations and emissions to the environment have been set because information on risks from REE is scarce. However, evidence gathers that REE have to be acknowledged as new, emerging contaminants with manifold ways of entry into the environment, e.g. through waste water from hospitals or through industrial effluents. This paper reviews existing information on bioaccumulation and ecotoxicity of lanthanum in the aquatic environment. Lanthanum is of specific interest as one of the major lanthanides in industrial effluents. This review focuses on the freshwater and the marine environment, and tackles the water column and sediments. From these data, methods to derive quality criteria for sediment and water are discussed and preliminary suggestions are made. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Saulsberry, Regor L.; Lucero, Ralph; Nichols, Charles T.; Wentzel, Daniel J.
2010-01-01
ASTM-based ILH methods were found to give a reproducible, quantitative estimate of the stress threshold at which significant accumulated damage began to occur: (a) FR events are low energy (<2 V(exp 20 microsec)); (b) FR events occur close to the observed failure locus; (c) FR events consist of more than 30% fiber breakage (>300 kHz); (d) FR events show a consistent hierarchy of cooperative damage for composite tow, and for the COPV tested, regardless of applied load. Application of ILH or related stress profiles could lead to robust pass/fail acceptance criteria based on the FR. Initial application of FR and FFT analysis of AE data acquired on COPVs is promising.
NASA Astrophysics Data System (ADS)
Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng
2018-04-01
This paper presents a new method for wood defect detection that addresses the over-segmentation problem of local threshold segmentation methods by effectively combining visual saliency with local thresholding. Firstly, defect areas are coarsely located by computing their global visual saliency with the spectral residual method. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is applied around the coarsely located areas to position and segment the wood surface defects precisely. Lastly, mathematical morphology is used to process the binary images after segmentation, reducing noise and small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains good segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding and plain threshold segmentation.
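The coarse-localization stage is the spectral residual saliency map of Hou and Zhang: subtract a smoothed copy of the log-amplitude spectrum, keep the phase, and transform back, so that globally repetitive structure (grain) is suppressed and defects stand out. A compact sketch on a synthetic 'board' follows; the mean + 2σ coarse threshold at the end is an assumption, and the paper instead refines these regions with Otsu segmentation and morphology.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img):
    """Spectral residual saliency: remove the smoothed log-amplitude spectrum,
    keep the phase, and measure what stands out after the inverse transform."""
    f = np.fft.fft2(img.astype(float))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=2.5)

# Toy 'wood board': periodic grain plus a dark knot-like blob.
y, x = np.mgrid[0:128, 0:128]
board = 160 + 10 * np.sin(x / 6.0)
board[(y - 60) ** 2 + (x - 80) ** 2 < 50] = 60.0
sal = spectral_residual_saliency(board)
coarse = sal > sal.mean() + 2 * sal.std()   # coarse defect localization (assumed rule)
print("salient pixels:", int(coarse.sum()))
```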
Mulsow, Jason; Finneran, James J; Houser, Dorian S
2011-04-01
Although electrophysiological methods of measuring the hearing sensitivity of pinnipeds are not yet as refined as those for dolphins and porpoises, they appear to be a promising supplement to traditional psychophysical procedures. In order to further standardize electrophysiological methods with pinnipeds, a within-subject comparison of psychophysical and auditory steady-state response (ASSR) measures of aerial hearing sensitivity was conducted with a 1.5-yr-old California sea lion. The psychophysical audiogram was similar to those previously reported for otariids, with a U-shape, and thresholds near 10 dB re 20 μPa at 8 and 16 kHz. ASSR thresholds measured using both single and multiple simultaneous amplitude-modulated tones closely reproduced the psychophysical audiogram, although the mean ASSR thresholds were elevated relative to psychophysical thresholds. Differences between psychophysical and ASSR thresholds were greatest at the low- and high-frequency ends of the audiogram. Thresholds measured using the multiple ASSR method were not different from those measured using the single ASSR method. The multiple ASSR method was more rapid than the single ASSR method, and allowed for threshold measurements at seven frequencies in less than 20 min. The multiple ASSR method may be especially advantageous for hearing sensitivity measurements with otariid subjects that are untrained for psychophysical procedures.
Ahn, Ilyoung; Kim, Tae-Sung; Jung, Eun-Sun; Yi, Jung-Sun; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Jung, Mi-Sook; Jeon, Eun-Young; Yeo, Kyeong-Uk; Jo, Ji-Hoon; Park, Jung-Eun; Kim, Chang-Yul; Park, Yeong-Chul; Seong, Won-Keun; Lee, Ai-Young; Chun, Young Jin; Jeong, Tae Cheon; Jeung, Eui Bae; Lim, Kyung-Min; Bae, SeungJin; Sohn, Soojung; Heo, Yong
2016-10-01
Local lymph node assay: 5-bromo-2-deoxyuridine-flow cytometry method (LLNA: BrdU-FCM) is a modified non-radioisotopic technique with the additional advantages of accommodating multiple endpoints with the introduction of FCM, and refinement and reduction of animal use by using a sophisticated prescreening scheme. Reliability and accuracy of the LLNA: BrdU-FCM was determined according to OECD Test Guideline (TG) No. 429 (Skin Sensitization: Local Lymph Node Assay) performance standards (PS), with the participation of four laboratories. Transferability was demonstrated through successfully producing stimulation index (SI) values for 25% hexyl cinnamic aldehyde (HCA) consistently greater than 3, a predetermined threshold, by all participating laboratories. Within- and between-laboratory reproducibility was shown using HCA and 2,4-dinitrochlorobenzene, in which EC2.7 values (the estimated concentrations eliciting an SI of 2.7, the threshold for LLNA: BrdU-FCM) fell consistently within the acceptance ranges, 0.025-0.1% and 5-20%, respectively. Predictive capacity was tested using the final protocol version 1.3 for the 18 reference chemicals listed in OECD TG 429, of which results showed 84.6% sensitivity, 100% specificity, and 88.9% accuracy compared with the original LLNA. The data presented are considered to meet the performance criteria for the PS, and its predictive capacity was also sufficiently validated. Copyright © 2016 Elsevier Inc. All rights reserved.
Extended charge banking model of dual path shocks for implantable cardioverter defibrillators
Dosdall, Derek J; Sweeney, James D
2008-01-01
Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving them. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criterion to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
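As a rough illustration of the model structure described here (each small heart section an independent RC circuit, driven through a field-gradient weight, with a critical-mass success criterion), consider the sketch below. The membrane time constant, weight vector, excitation threshold, and 90% critical-mass fraction are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rc_response(weights, v0, tau_shock, tau_m=3e-3, dt=1e-5, duration=10e-3):
    """Euler-integrate dVm/dt = (w*E(t) - Vm)/tau_m for each heart section,
    where E(t) = v0*exp(-t/tau_shock) is a decaying capacitor-discharge shock."""
    t = np.arange(0.0, duration, dt)
    e = v0 * np.exp(-t / tau_shock)
    vm = np.zeros((len(weights), len(t)))
    for i in range(1, len(t)):
        vm[:, i] = vm[:, i-1] + dt * (weights * e[i-1] - vm[:, i-1]) / tau_m
    return vm.max(axis=1)   # peak membrane response per section

def shock_succeeds(weights, v0, tau_shock, vm_crit=1.0, critical_mass=0.90):
    """Critical-mass criterion: the shock is predicted to defibrillate if
    enough sections are driven past the excitation threshold vm_crit."""
    peaks = rc_response(np.asarray(weights, float), v0, tau_shock)
    return (peaks >= vm_crit).mean() >= critical_mass

# example: 40 sections with heterogeneous field weights; dual path shocks could
# be modeled by applying a second weight vector in a successive phase
w = np.linspace(0.5, 2.0, 40)
print(shock_succeeds(w, v0=1.5, tau_shock=5e-3))
```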
Novel high/low solubility classification methods for new molecular entities.
Dave, Rutwij A; Morris, Marilyn E
2016-09-10
This research describes a rapid solubility classification approach that could be used in the discovery and development of new molecular entities. Compounds (N=635) were divided into two groups based on information available in the literature: high solubility (BDDCS/BCS 1/3) and low solubility (BDDCS/BCS 2/4). We established decision rules for determining solubility classes using measured log solubility in molar units (MLogSM) or measured solubility (MSol) in mg/mL units. ROC curve analysis was applied to determine statistically significant threshold values of MSol and MLogSM. Results indicated that NMEs with MLogSM > -3.05 or MSol > 0.30 mg/mL will have ≥85% probability of being highly soluble and new molecular entities with MLogSM ≤ -3.05 or MSol ≤ 0.30 mg/mL will have ≥85% probability of being poorly soluble. When comparing solubility classification using the threshold values of MLogSM or MSol with BDDCS, we were able to correctly classify 85% of compounds. We also evaluated solubility classification of an independent set of 108 orally administered drugs using MSol (0.3 mg/mL), and our method correctly classified 81% and 95% of compounds into high and low solubility classes, respectively. The high/low solubility classification using MLogSM or MSol is novel and independent of traditionally used dose number criteria. Copyright © 2016 Elsevier B.V. All rights reserved.
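The reported decision rule reduces to a one-line classifier. A minimal sketch using the thresholds quoted in the abstract (the function name is ours):

```python
def solubility_class(mlogsm=None, msol=None):
    """High/low solubility per the reported thresholds:
    MLogSM > -3.05 (molar log solubility) or MSol > 0.30 mg/mL -> 'high';
    stated to carry >= 85% probability of correct classification."""
    if mlogsm is not None:
        return "high" if mlogsm > -3.05 else "low"
    if msol is not None:
        return "high" if msol > 0.30 else "low"
    raise ValueError("provide MLogSM or MSol")

print(solubility_class(msol=1.2))    # -> high
print(solubility_class(mlogsm=-4.0)) # -> low
```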
Solano, Gabriela; Gómez, Aarón; León, Guillermo
2015-10-01
Snake antivenoms are parenterally administered; therefore, endotoxin content must be strictly controlled. Following international guidance for calculating endotoxin limits, it was determined that antivenom doses of 20 mL and 120 mL should not exceed 17.5 Endotoxin Units per milliliter (EU/mL) and 2.9 EU/mL, respectively. The rabbit pyrogen test (RPT) has been used to evaluate endotoxin contamination in antivenoms, but some laboratories have recently implemented the LAL assay. We compared the capability of both tests to evaluate endotoxin contamination in antivenoms and found that both methods can detect all endotoxin concentrations within the range of the antivenom specifications. The acceptance criteria of RPT and LAL must be harmonized by calculating the endotoxin limit as the quotient of the threshold pyrogenic dose and the therapeutic dose, and the dose administered to rabbits as the quotient of the threshold pyrogenic dose and the endotoxin limit. Since endotoxins from different Gram-negative bacteria exert different pyrogenicity, if contamination occurred, antivenom batches that induce pyrogenic reactions may be found in spite of passing LAL specifications. Although the LAL assay can be used to assess endotoxin content throughout the antivenom manufacturing process, we recommend that the release of final products be based on the results of both methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
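The proposed harmonization is simple arithmetic. The sketch below assumes a threshold pyrogenic dose of 350 EU (5 EU/kg for a 70-kg patient), a conventional value that reproduces the stated limits of 17.5 and 2.9 EU/mL for 20-mL and 120-mL doses; treat that constant as an assumption rather than a figure from the paper.

```python
def endotoxin_limit(threshold_pyrogenic_dose_eu, therapeutic_dose_ml):
    """Endotoxin limit (EU/mL) = threshold pyrogenic dose / therapeutic dose."""
    return threshold_pyrogenic_dose_eu / therapeutic_dose_ml

def rabbit_dose_ml(threshold_pyrogenic_dose_eu, endotoxin_limit_eu_per_ml):
    """Rabbit dose (mL) = threshold pyrogenic dose / endotoxin limit."""
    return threshold_pyrogenic_dose_eu / endotoxin_limit_eu_per_ml

K_EU = 350.0  # assumed threshold pyrogenic dose: 5 EU/kg x 70 kg patient

print(endotoxin_limit(K_EU, 20.0))   # 17.5 EU/mL for a 20 mL dose
print(endotoxin_limit(K_EU, 120.0))  # ~2.9 EU/mL for a 120 mL dose
```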
Black, Robert W; Moran, Patrick W; Frankforter, Jill D
2011-04-01
Many streams within the United States are impaired due to nutrient enrichment, particularly in agricultural settings. The present study examines the response of benthic algal communities in agricultural and minimally disturbed sites from across the western United States to a suite of environmental factors, including nutrients, collected at multiple scales. The first objective was to identify the relative importance of nutrients, habitat and watershed features, and macroinvertebrate trophic structure to explain algal metrics derived from deposition and erosion habitats. The second objective was to determine if thresholds in total nitrogen (TN) and total phosphorus (TP) related to algal metrics could be identified and how these thresholds varied across metrics and habitats. Nutrient concentrations within the agricultural areas were elevated and greater than published threshold values. All algal metrics examined responded to nutrients as hypothesized. Although nutrients typically were the most important variables in explaining the variation in each of the algal metrics, environmental factors operating at multiple scales also were important. Calculated thresholds for TN or TP based on the algal metrics generated from samples collected from erosion and deposition habitats were not significantly different. Little variability in threshold values for each metric for TN and TP was observed. The consistency of the threshold values measured across multiple metrics and habitats suggest that the thresholds identified in this study are ecologically relevant. Additional work to characterize the relationship between algal metrics, physical and chemical features, and nuisance algal growth would be of benefit to the development of nutrient thresholds and criteria.
Threshold and subthreshold Generalized Anxiety Disorder (GAD) and suicide ideation.
Gilmour, Heather
2016-11-16
Subthreshold Generalized Anxiety Disorder (GAD) has been reported to be at least as prevalent as threshold GAD and of comparable clinical significance. It is not clear whether GAD is uniquely associated with the risk of suicide, or whether psychiatric comorbidity drives the association. Data from the 2012 Canadian Community Health Survey-Mental Health were used to estimate the prevalence of threshold and subthreshold GAD in the household population aged 15 or older. As well, the relationship between GAD and suicide ideation was studied. Multivariate logistic regression was used in a sample of 24,785 people to identify significant associations, while adjusting for the confounding effects of sociodemographic factors and other mental disorders. In 2012, an estimated 722,000 Canadians aged 15 or older (2.6%) met the criteria for threshold GAD; an additional 2.3% (655,000) had subthreshold GAD. For people with threshold GAD, past 12-month suicide ideation was more prevalent among men than women (32.0% versus 21.2%, respectively). In multivariate models that controlled for sociodemographic factors, the odds of past 12-month suicide ideation among people with either past 12-month threshold or subthreshold GAD were significantly higher than the odds for those without GAD. When psychiatric comorbidity was also controlled for, associations between threshold and subthreshold GAD and suicide ideation were attenuated but remained significant. Threshold and subthreshold GAD affect similar percentages of the Canadian household population. This study adds to the literature that has identified an independent association between threshold GAD and suicide ideation, and demonstrates that an association is also apparent for subthreshold GAD.
[Analysis and experimental verification of sensitivity and SNR of laser warning receiver].
Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue
2009-01-01
To counter the increasingly serious threat from hostile lasers in modern warfare, research on laser warning technology and systems is urgent; sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on signal statistical detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. Firstly, the probability distributions of the laser signal and the receiver noise were analyzed. Secondly, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing a detection probability factor and a false alarm rate factor; then, mathematical expressions for the sensitivity and SNR were deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed. The theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in performance analysis of LWRs.
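For a Gaussian-noise threshold detector, the Neyman-Pearson construction described here fixes the threshold from the false-alarm rate and then reads off the detection probability. The generic textbook form is sketched below; this is not the paper's exact signal-current equation, and all function names are ours.

```python
from scipy.stats import norm

def np_threshold(sigma_noise, pfa):
    """Detection threshold for zero-mean Gaussian noise at false-alarm rate pfa."""
    return sigma_noise * norm.ppf(1.0 - pfa)

def detection_probability(signal_mean, sigma_noise, pfa):
    """Pd for a constant signal in Gaussian noise at the Neyman-Pearson threshold."""
    return 1.0 - norm.cdf((np_threshold(sigma_noise, pfa) - signal_mean) / sigma_noise)

def min_detectable_signal(sigma_noise, pfa, pd):
    """Sensitivity: smallest signal level meeting both the Pfa and Pd requirements."""
    return sigma_noise * (norm.ppf(1.0 - pfa) + norm.ppf(pd))

# e.g. required signal (in units of noise sigma) for Pfa = 1e-6 and Pd = 0.99:
print(min_detectable_signal(1.0, 1e-6, 0.99))
```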
Neural networks and fault probability evaluation for diagnosis issues.
Kourd, Yahia; Lefebvre, Dimitri; Guersi, Noureddine
2014-01-01
This paper presents a new FDI technique for fault detection and isolation in unknown nonlinear systems. The objective of the research is to construct and analyze residuals by means of artificial intelligence and probabilistic methods. Artificial neural networks are first used for modeling: neural network models are designed to learn the fault-free and faulty behaviors of the considered systems. Once the residuals are generated, they are evaluated using probabilistic criteria to determine the most likely fault among a set of candidate faults. The study also includes a comparison between the contributions of these tools and their limitations, particularly through the establishment of quantitative indicators to assess their performance. Based on the computation of a confidence factor, the proposed method is suitable for evaluating the reliability of the FDI decision. The approach is applied to detect and isolate 19 fault candidates in the DAMADICS benchmark. The results obtained with the proposed scheme are compared with those obtained using a usual thresholding method.
Sinex, Donal G.
2013-01-01
Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio Sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly-perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. Information reported here about the dependence of speech intelligibility on frequency and level parameters has relevance for the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids. PMID:23556604
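An ideal binary TF mask of the kind studied above keeps only the cells whose local SNR exceeds the local criterion (LC). The sketch below uses an STFT grid rather than the octave-spaced bands of the study, so the resolution parameters are assumptions; it is "ideal" because it requires the premixed speech and noise separately.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_ideal_binary_mask(speech, noise, fs, lc_db=0.0, nperseg=512):
    """Build the ideal binary mask from separately known speech and noise,
    then apply it to the mixture and resynthesize."""
    _, _, S = stft(speech, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    local_snr_db = 20.0 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
    mask = local_snr_db > lc_db            # keep cells above the local criterion
    _, _, M = stft(speech + noise, fs, nperseg=nperseg)
    _, out = istft(M * mask, fs, nperseg=nperseg)
    return out, mask
```

Sweeping lc_db downward corresponds to the abstract's observation that speech reception thresholds improve by about 1 dB per dB decrease in the local threshold criterion.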
Establishing endangered species recovery criteria using predictive simulation modeling
McGowan, Conor P.; Catlin, Daniel H.; Shaffer, Terry L.; Gratto-Trevor, Cheri L.; Aron, Carol
2014-01-01
Listing a species under the Endangered Species Act (ESA) and developing a recovery plan requires U.S. Fish and Wildlife Service to establish specific and measurable criteria for delisting. Generally, species are listed because they face (or are perceived to face) elevated risk of extinction due to issues such as habitat loss, invasive species, or other factors. Recovery plans identify recovery criteria that reduce extinction risk to an acceptable level. It logically follows that the recovery criteria, the defined conditions for removing a species from ESA protections, need to be closely related to extinction risk. Extinction probability is a population parameter estimated with a model that uses current demographic information to project the population into the future over a number of replicates, calculating the proportion of replicated populations that go extinct. We simulated extinction probabilities of piping plovers in the Great Plains and estimated the relationship between extinction probability and various demographic parameters. We tested the fit of regression models linking initial abundance, productivity, or population growth rate to extinction risk, and then, using the regression parameter estimates, determined the conditions required to reduce extinction probability to some pre-defined acceptable threshold. Binomial regression models with mean population growth rate and the natural log of initial abundance were the best predictors of extinction probability 50 years into the future. For example, based on our regression models, an initial abundance of approximately 2400 females with an expected mean population growth rate of 1.0 will limit extinction risk for piping plovers in the Great Plains to less than 0.048. Our method provides a straightforward way of developing specific and measurable recovery criteria linked directly to the core issue of extinction risk. Published by Elsevier Ltd.
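A stripped-down version of this workflow, with placeholder demographics rather than the piping plover estimates: project replicate populations under a stochastic growth rate, estimate the fraction crossing a quasi-extinction floor within 50 years, then scan initial abundances for the smallest one meeting a pre-defined acceptable risk.

```python
import numpy as np

rng = np.random.default_rng(1)

def extinction_probability(n0, mean_log_lambda=0.0, sd_log_lambda=0.15,
                           years=50, reps=2000, quasi_ext=10.0):
    """Fraction of replicate trajectories falling below a quasi-extinction
    floor within the projection horizon (stochastic log-growth model)."""
    extinct = 0
    for _ in range(reps):
        n = float(n0)
        for _ in range(years):
            n *= np.exp(rng.normal(mean_log_lambda, sd_log_lambda))
            if n < quasi_ext:
                extinct += 1
                break
    return extinct / reps

def abundance_meeting_risk(max_risk=0.048, grid=(100, 300, 900, 2400, 7000)):
    """Smallest initial abundance on the grid whose 50-year extinction
    probability is at or below the acceptable threshold."""
    for n0 in grid:
        if extinction_probability(n0) <= max_risk:
            return n0
    return None
```

The study's regression step (binomial models linking log initial abundance and growth rate to extinction risk) would be fit to the simulated grid rather than scanned as done here.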
Marcu, Orly; Dodson, Emma-Joy; Alam, Nawsad; Sperber, Michal; Kozakov, Dima; Lensink, Marc F; Schueler-Furman, Ora
2017-03-01
CAPRI rounds 28 and 29 included, for the first time, peptide-receptor targets of three different systems, reflecting increased appreciation of the importance of peptide-protein interactions. The CAPRI rounds allowed us to objectively assess the performance of Rosetta FlexPepDock, one of the first protocols to explicitly include peptide flexibility in docking, accounting for peptide conformational changes upon binding. We discuss here successes and challenges in modeling these targets: we obtain top-performing, high-resolution models of the peptide motif for cases with known binding sites but there is a need for better modeling of flanking regions, as well as better selection criteria, in particular for unknown binding sites. These rounds have also provided us the opportunity to reassess the success criteria, to better reflect the quality of a peptide-protein complex model. Using all models submitted to CAPRI, we analyze the correlation between current classification criteria and the ability to retrieve critical interface features, such as hydrogen bonds and hotspots. We find that loosening the backbone (and ligand) RMSD threshold, together with a restriction on the side chain RMSD measure, allows us to improve the selection of high-accuracy models. We also suggest a new measure to assess interface hydrogen bond recovery, which is not assessed by the current CAPRI criteria. Finally, we find that surprisingly much can be learned from rather inaccurate models about binding hotspots, suggesting that the current status of peptide-protein docking methods, as reflected by the submitted CAPRI models, can already have a significant impact on our understanding of protein interactions. Proteins 2017; 85:445-462. © 2016 Wiley Periodicals, Inc.
Problematic Internet Use: Perceptions of Addiction Counsellors
ERIC Educational Resources Information Center
Acier, Didier; Kern, Laurence
2011-01-01
Despite a growing number of publications on problematic Internet use (PIU), there is no consensus on the nature of the phenomenon, its constituent criteria, and its clinical threshold. This qualitative study examines the perceptions of addiction counsellors who have managed individuals with PIU in Quebec (Canada). Four focus groups were conducted…
1986-04-14
[OCR residue of an acquisition life-cycle chart: concept definition / development & test / operation and maintenance; track projected programs; review critical issues; prepare inputs to PMO.] …development and beyond, evaluation criteria must include quantitative goals (the desired value) and thresholds (the value beyond which the charac…
Integrated Strategic Planning and Analysis Network Increment 4 (ISPAN Inc 4)
2016-03-01
[Acronym glossary residue omitted.] ISPAN Inc 4 will achieve Full Deployment Decision (FDD) completion criteria when: 1) the system meets all the KPP thresholds as verified through an Initial Operational Test and…
Sustainability of Reef Ecosystem Services under Expanded Water Quality Standards in St. Croix, USVI
Under the U.S. Clean Water Act, States and Territories are to establish water quality criteria to protect designated uses, such as fishable or swimmable water resources. However, establishment of chemical and physical thresholds does not necessarily ensure protection of the biot...
Foreshock search over a long duration using a method of setting appropriate criteria
NASA Astrophysics Data System (ADS)
Toyomoto, Y.; Kawakata, H.; Hirano, S.; Doi, I.
2016-12-01
Recently, small foreshocks have been detected using cross-correlation techniques (e.g., Bouchon et al., 2011), in which a foreshock is identified when the cross-correlation coefficient (CC) exceeds a certain threshold. For some shallow intraplate earthquakes, foreshocks whose hypocenters were estimated to be adjacent to the main shock hypocenter were detected from several tens of minutes before the main shock occurrence (Doi and Kawakata, 2012; 2013). At least two problems remain in the cross-correlation techniques employed. First, previous studies on foreshocks used data whose durations are at most a month (Kato et al., 2013); this is insufficient to check whether such events occurred only before the main shock. Second, CC is used as the detection criterion without considering the validity of the threshold. In this study, we search for foreshocks of an M 5.4 earthquake in central Nagano prefecture, Japan, on June 30, 2011, using as a template the vertical-component waveform recorded at station N.MWDH (Hi-net) from one of the cataloged foreshocks (M 1). We calculate CC between the template and continuous waveforms of the same component at the same station for the two years before the main shock, and try to overcome the problems mentioned above. We find that the histogram of CC is well modeled by a normal distribution, similar to previous studies on tremors (e.g., Ohta and Ide, 2008). According to this model, the expected number of misdetections is less than 1 when CC > 0.63; we therefore regard a waveform as a foreshock when CC > 0.63. As a result, over the two years foreshocks are detected only within the thirteen hours immediately before the main shock. By setting an appropriate threshold, we conclude that foreshocks just before the main shock occurrence are not stationary events. Acknowledgments: We use continuous waveform records of the NIED high-sensitivity seismograph network in Japan (Hi-net) and the JMA unified hypocenter catalogs. This work is supported by MEXT of Japan, under its Earthquake and Volcano Hazards Observation and Research Program.
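The thresholding logic—fit a normal distribution to the bulk of the CC histogram, then take the smallest CC at which the expected number of false detections over the whole search falls below one—can be written compactly. A sketch under that normality assumption (the template-matching scan itself is omitted; names are ours):

```python
import numpy as np
from scipy.stats import norm

def cc_detection_threshold(cc, expected_false=1.0):
    """Smallest threshold c such that n * P(CC > c) < expected_false,
    assuming the noise-only CC histogram is approximately normal."""
    cc = np.asarray(cc, float)
    mu, sigma, n = cc.mean(), cc.std(ddof=1), cc.size
    return mu + sigma * norm.ppf(1.0 - expected_false / n)

# usage: cc holds the correlation of the template against every candidate
# window over the two-year record
# detections = np.flatnonzero(cc > cc_detection_threshold(cc))
```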
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
Modeling the Interactions Between Multiple Crack Closure Mechanisms at Threshold
NASA Technical Reports Server (NTRS)
Newman, John A.; Riddell, William T.; Piascik, Robert S.
2003-01-01
A fatigue crack closure model is developed that includes interactions between the three closure mechanisms most likely to occur at threshold: plasticity, roughness, and oxide. This model, herein referred to as the CROP model (for Closure, Roughness, Oxide, and Plasticity), also includes the effects of out-of-plane cracking and multi-axial loading. These features make the CROP closure model uniquely suited for, but not limited to, threshold applications. Rough cracks are idealized here as two-dimensional sawtooths, whose geometry induces mixed-mode crack-tip stresses. Continuum mechanics and crack-tip dislocation concepts are combined to relate crack-face displacements to crack-tip loads. Geometric criteria are used to determine closure loads from crack-face displacements. Finite element results, used to verify model predictions, provide critical information about the locations where crack closure occurs.
Richter, Trevor; Nestler-Parr, Sandra; Babela, Robert; Khan, Zeba M; Tesoro, Theresa; Molsen, Elizabeth; Hughes, Dyfrig A
2015-09-01
At present, there is no universal definition of rare disease. To provide an overview of rare disease definitions currently used globally. We systematically searched for definitions related to rare disease from organizations in 32 international jurisdictions. Descriptive statistics of definitions were generated and prevalence thresholds were calculated. We identified 296 definitions from 1109 organizations. The terms "rare disease(s)" and "orphan drug(s)" were used most frequently (38% and 27% of the definitions, respectively). Qualitative descriptors such as "life-threatening" were used infrequently. A prevalence threshold was specified in at least one definition in 88% of the jurisdictions. The average prevalence threshold across organizations within individual jurisdictions ranged from 5 to 76 cases/100,000 people. Most jurisdictions (66%) had an average prevalence threshold between 40 and 50 cases/100,000 people, with a global average of 40 cases/100,000 people. Prevalence thresholds used by different organizations within individual jurisdictions varied substantially. Across jurisdictions, umbrella patient organizations had the highest (most liberal) average prevalence threshold (47 cases/100,000 people), whereas private payers had the lowest threshold (18 cases/100,000 people). Despite variation in the terminology and prevalence thresholds used to define rare diseases among different jurisdictions and organizations, the terms "rare disease" and "orphan drug" are used most widely and the average prevalence threshold is between 40 and 50 cases/100,000 people. These findings highlight the existing diversity among definitions of rare diseases, but suggest that any attempts to harmonize rare disease definitions should focus on standardizing objective criteria such as prevalence thresholds and avoid qualitative descriptors. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Sex Differences in Antiretroviral Therapy Initiation in Pediatric HIV Infection
Swordy, Alice; Mori, Luisa; Laker, Leana; Muenchhoff, Maximilian; Matthews, Philippa C.; Tudor-Williams, Gareth; Lavandier, Nora; van Zyl, Anriette; Hurst, Jacob; Walker, Bruce D.; Ndung’u, Thumbi; Prendergast, Andrew; Goulder, Philip; Jooste, Pieter
2015-01-01
The incidence and severity of infections in childhood is typically greater in males. The basis for these observed sex differences is not well understood, and potentially may facilitate novel approaches to reducing disease from a range of conditions. We here investigated sex differences in HIV-infected children in relation to antiretroviral therapy (ART) initiation and post-treatment outcome. In a South African cohort of 2,101 HIV-infected children, we observed that absolute CD4+ count and CD4% were significantly higher in ART-naïve female, compared to age-matched male, HIV-infected children. Absolute CD4 count and CD4% were also significantly higher in HIV-uninfected female versus male neonates. We next showed that significantly more male than female children were initiated on ART (47% female); and children not meeting criteria to start ART by >5yrs were more frequently female (59%; p<0.001). Among ART-treated children, immune reconstitution of CD4 T-cells was more rapid and more complete in female children, even after adjustment for pre-ART absolute CD4 count or CD4% (p=0.011, p=0.030, respectively). However, while ART was initiated as a result of meeting CD4 criteria less often in females (45%), ART initiation as a result of clinical disease in children whose CD4 counts were above treatment thresholds occurred more often in females (57%, p<0.001). The main sex difference in morbidity observed in children initiating ART above CD4 thresholds, above that of TB disease, was as a result of wasting and stunting observed in females with above-threshold CD4 counts (p=0.002). These findings suggest the possibility that optimal treatment of HIV-infected children might incorporate differential CD4 treatment thresholds for ART initiation according to sex. PMID:26151555
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, C G; Mathews, S
2006-09-07
Current regulatory schemes use generic or industrial-sector-specific benchmarks to evaluate the quality of industrial stormwater discharges. While benchmarks can be a useful tool for facility stormwater managers in evaluating the quality of stormwater runoff, benchmarks typically do not take into account site-specific conditions, such as soil chemistry, atmospheric deposition, seasonal changes in water source, and upstream land use. Failing to account for these factors may lead to unnecessary costs to trace a source of natural variation, or to potentially missing a significant local water quality problem. Site-specific water quality thresholds, established upon statistical evaluation of historic data, take these factors into account; they are a better tool for the direct evaluation of runoff quality and a more cost-effective trigger to investigate anomalous results. Lawrence Livermore National Laboratory (LLNL), a federal facility, established stormwater monitoring programs to comply with the requirements of the industrial stormwater permit and Department of Energy orders, which require evaluation of the impact of effluent discharges on the environment. LLNL recognized the need for a tool to evaluate and manage stormwater quality that would allow analysts to identify trends in stormwater quality and recognize anomalous results so that trace-back and corrective actions could be initiated. LLNL created the site-specific water quality threshold tool to better understand the nature of the stormwater influent and effluent, to establish a technical basis for determining when facility operations might be impacting the quality of stormwater discharges, and to provide "action levels" to initiate follow-up of analytical results. The threshold criteria were based on a statistical analysis of the historic stormwater monitoring data and a review of relevant water quality objectives.
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
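The automatic procedure can be approximated as: for each candidate threshold, fit a generalized Pareto distribution (GPD) to the exceedances and accept the lowest threshold whose Anderson-Darling statistic is acceptable. The cutoff value below is a placeholder (the paper uses the proper A-D goodness-of-fit test with appropriate critical values), and the uncertainty quantification described in the abstract would come from bootstrapping this loop.

```python
import numpy as np
from scipy.stats import genpareto

def anderson_darling(exceedances, shape, scale):
    """A-D statistic of the exceedances against a fitted GPD (loc fixed at 0)."""
    x = np.sort(exceedances)
    n = x.size
    F = np.clip(genpareto.cdf(x, shape, loc=0, scale=scale), 1e-12, 1 - 1e-12)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(F) + np.log(1 - F[::-1])))

def select_pot_threshold(data, candidates, ad_cutoff=1.0, min_exc=30):
    """Lowest candidate threshold whose GPD fit passes the (assumed) cutoff."""
    data = np.asarray(data, float)
    for u in np.sort(candidates):
        exc = data[data > u] - u
        if exc.size < min_exc:       # too few peaks left to fit reliably
            break
        shape, _, scale = genpareto.fit(exc, floc=0)
        if anderson_darling(exc, shape, scale) < ad_cutoff:
            return u, shape, scale
    return None
```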
Systemic inflammatory response syndrome criteria in defining severe sepsis.
Kaukonen, Kirsi-Maija; Bailey, Michael; Pilcher, David; Cooper, D Jamie; Bellomo, Rinaldo
2015-04-23
The consensus definition of severe sepsis requires suspected or proven infection, organ failure, and signs that meet two or more criteria for the systemic inflammatory response syndrome (SIRS). We aimed to test the sensitivity, face validity, and construct validity of this approach. We studied data from patients from 172 intensive care units in Australia and New Zealand from 2000 through 2013. We identified patients with infection and organ failure and categorized them according to whether they had signs meeting two or more SIRS criteria (SIRS-positive severe sepsis) or less than two SIRS criteria (SIRS-negative severe sepsis). We compared their characteristics and outcomes and assessed them for the presence of a step increase in the risk of death at a threshold of two SIRS criteria. Of 1,171,797 patients, a total of 109,663 had infection and organ failure. Among these, 96,385 patients (87.9%) had SIRS-positive severe sepsis and 13,278 (12.1%) had SIRS-negative severe sepsis. Over a period of 14 years, these groups had similar characteristics and changes in mortality (SIRS-positive group: from 36.1% [829 of 2296 patients] to 18.3% [2037 of 11,119], P<0.001; SIRS-negative group: from 27.7% [100 of 361] to 9.3% [122 of 1315], P<0.001). Moreover, this pattern remained similar after adjustment for baseline characteristics (odds ratio in the SIRS-positive group, 0.96; 95% confidence interval [CI], 0.96 to 0.97; odds ratio in the SIRS-negative group, 0.96; 95% CI, 0.94 to 0.98; P=0.12 for between-group difference). In the adjusted analysis, mortality increased linearly with each additional SIRS criterion (odds ratio for each additional criterion, 1.13; 95% CI, 1.11 to 1.15; P<0.001) without any transitional increase in risk at a threshold of two SIRS criteria. The need for two or more SIRS criteria to define severe sepsis excluded one in eight otherwise similar patients with infection, organ failure, and substantial mortality and failed to define a transition point in the risk of death. (Funded by the Australian and New Zealand Intensive Care Research Centre.).
Jauk, Emanuel; Benedek, Mathias; Dunst, Beate; Neubauer, Aljoscha C.
2013-01-01
The relationship between intelligence and creativity has been subject to empirical research for decades. Nevertheless, there is yet no consensus on how these constructs are related. One of the most prominent notions concerning the interplay between intelligence and creativity is the threshold hypothesis, which assumes that above-average intelligence represents a necessary condition for high-level creativity. While earlier research mostly supported the threshold hypothesis, it has come under fire in recent investigations. The threshold hypothesis is commonly investigated by splitting a sample at a given threshold (e.g., at 120 IQ points) and estimating separate correlations for lower and upper IQ ranges. However, there is no compelling reason why the threshold should be fixed at an IQ of 120, and to date, no attempts have been made to detect the threshold empirically. Therefore, this study examined the relationship between intelligence and different indicators of creative potential and of creative achievement by means of segmented regression analysis in a sample of 297 participants. Segmented regression allows for the detection of a threshold in continuous data by means of iterative computational algorithms. We found thresholds only for measures of creative potential but not for creative achievement. For the former the thresholds varied as a function of criteria: When investigating a liberal criterion of ideational originality (i.e., two original ideas), a threshold was detected at around 100 IQ points. In contrast, a threshold of 120 IQ points emerged when the criterion was more demanding (i.e., many original ideas). Moreover, an IQ of around 85 IQ points was found to form the threshold for a purely quantitative measure of creative potential (i.e., ideational fluency). These results confirm the threshold hypothesis for qualitative indicators of creative potential and may explain some of the observed discrepancies in previous research. In addition, we obtained evidence that once the intelligence threshold is met, personality factors become more predictive for creativity. On the contrary, no threshold was found for creative achievement, i.e. creative achievement benefits from higher intelligence even at fairly high levels of intellectual ability. PMID:23825884
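Segmented (broken-stick) regression of the kind used in this study can be approximated by a grid search over candidate breakpoints, fitting a continuous two-slope model at each and keeping the breakpoint that minimizes residual error; the study's iterative algorithm and its inference steps are not reproduced in this sketch.

```python
import numpy as np

def segmented_regression(x, y, grid=None):
    """Fit y = b0 + b1*x + b2*max(x - bp, 0) for each candidate breakpoint bp;
    return the SSE, breakpoint, and coefficients of the best fit."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if grid is None:
        grid = np.quantile(x, np.linspace(0.1, 0.9, 81))
    best = (np.inf, None, None)
    for bp in grid:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - bp, 0.0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ beta) ** 2))
        if sse < best[0]:
            best = (sse, bp, beta)
    return best  # best[1] is the detected threshold; beta[2] is the slope change
```

Applied to IQ versus an originality score, a detected threshold (e.g., near 100 or 120 IQ points) would appear as the breakpoint of the best fit, with a nonzero slope change above it.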
Masoner, Jason R.; Haggard, Brian E.; Rea, Alan
2002-01-01
The U.S. Environmental Protection Agency has developed nutrient criteria using ecoregions to manage and protect rivers and streams in the United States. Individual states and tribes are encouraged by the U.S. Environmental Protection Agency to modify or improve upon the ecoregion approach. The Oklahoma Water Resources Board uses a dichotomous process that stratifies streams using environmental characteristics such as stream order and stream slope. This process is called the Use Support Assessment Protocols, subchapter 15. The Use Support Assessment Protocols can be used to identify streams threatened by excessive amounts of nutrients, dependent upon a beneficial use designation for each stream. The Use Support Assessment Protocols, subchapter 15, uses nutrient and environmental-characteristic thresholds developed from a study conducted in the Netherlands, but the Oklahoma Water Resources Board wants to modify the thresholds to reflect hydrologic and ecological conditions relevant to Oklahoma streams and rivers. Environmental characteristics thought to affect impairment by nutrient concentrations in Oklahoma streams and rivers were determined for 798 water-quality sites in Oklahoma. Nutrient, chlorophyll, water-properties, and location data were retrieved from the U.S. Environmental Protection Agency STORET database, including data from the U.S. Geological Survey, Oklahoma Conservation Commission, and Oklahoma Water Resources Board. Drainage-basin area, stream order, stream slope, and land-use proportions were determined for each site using a Geographic Information System. The methods, procedures, and data sets used to determine the environmental characteristics are described.
Koyama, Kazuya; Mitsumoto, Takuya; Shiraishi, Takahiro; Tsuda, Keisuke; Nishiyama, Atsushi; Inoue, Kazumasa; Yoshikawa, Kyosan; Hatano, Kazuo; Kubota, Kazuo; Fukushi, Masahiro
2017-09-01
We aimed to determine the difference in tumor volume associated with the reconstruction model in positron-emission tomography (PET). To reduce the influence of the reconstruction model, we suggested a method to measure the tumor volume using the relative threshold method with a fixed threshold based on the peak standardized uptake value (SUVpeak). The efficacy of our method was verified using 18F-2-fluoro-2-deoxy-D-glucose PET/computed tomography images of 20 patients with lung cancer. The tumor volume was determined using the relative threshold method with a fixed threshold based on the SUVpeak. The PET data were reconstructed using the ordered-subset expectation maximization (OSEM) model, the OSEM + time-of-flight (TOF) model, and the OSEM + TOF + point-spread function (PSF) model. The volume differences associated with the reconstruction algorithm (%VD) were compared. For comparison, the tumor volume was measured using the relative threshold method based on the maximum SUV (SUVmax). For the OSEM and TOF models, the mean %VD values were -0.06 ± 8.07 and -2.04 ± 4.23% for the fixed 40% threshold according to the SUVmax and the SUVpeak, respectively. The effect of our method in this case seemed to be minor. For the OSEM and PSF models, the mean %VD values were -20.41 ± 14.47 and -13.87 ± 6.59% for the fixed 40% threshold according to the SUVmax and SUVpeak, respectively. Our new method enabled the measurement of tumor volume with a fixed threshold and reduced the influence of the changes in tumor volume associated with the reconstruction model.
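The proposed measurement reduces to: compute SUVpeak (conventionally the maximum mean uptake over a roughly 1-cm³ neighborhood) and segment all voxels above a fixed fraction of it. In this sketch the 1-cm³ sphere is approximated by a cube, isotropic voxels are assumed, and the 40% fraction is taken from the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def suv_peak(suv, voxel_mm):
    """Max of the local mean over an ~1 cm^3 cubic neighborhood (~10 mm edge)."""
    side = max(1, int(round(10.0 / voxel_mm)))
    return uniform_filter(suv, size=side).max()

def tumor_volume_ml(suv, voxel_mm, fraction=0.40):
    """Volume (mL) of voxels at or above fraction * SUVpeak."""
    thr = fraction * suv_peak(suv, voxel_mm)
    return (suv >= thr).sum() * (voxel_mm ** 3) / 1000.0
```

Because SUVpeak is a neighborhood average, it is less sensitive than SUVmax to the edge sharpening introduced by PSF reconstruction, which is consistent with the smaller %VD reported for the fixed SUVpeak threshold.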
NASA Astrophysics Data System (ADS)
Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde
2006-03-01
European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems, a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting its output is not defined. This study evaluates methods of determining threshold contrast from the program output and compares them to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrasts determined by humans and the automated methods.
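Method (B)-style psychometric curve fitting can be illustrated as follows: fit a psychometric function to the proportion of correctly scored discs versus contrast, then invert it at a criterion proportion. The 0.25 guess rate (a 4-AFC scoring convention for CDMAM) and the 0.625 criterion are conventional assumptions, not necessarily the exact choices in this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

GUESS = 0.25  # assumed chance rate for 4-alternative forced-choice scoring

def psychometric(contrast, c50, slope):
    """Logistic in log-contrast, rising from the guess rate to 1."""
    z = slope * (np.log(contrast) - np.log(c50))
    return GUESS + (1.0 - GUESS) / (1.0 + np.exp(-z))

def threshold_contrast(contrasts, p_correct, p_criterion=0.625):
    """Fit the psychometric curve, then invert it at the criterion proportion."""
    (c50, slope), _ = curve_fit(psychometric, contrasts, p_correct,
                                p0=[np.median(contrasts), 2.0], maxfev=10000)
    r = (1.0 - GUESS) / (p_criterion - GUESS) - 1.0
    return np.exp(np.log(c50) - np.log(r) / slope)
```

At p_criterion = 0.625 (halfway between chance and certainty) the inversion returns c50 itself, which is a convenient sanity check on the fit.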
Setting Priorities in Global Child Health Research Investments: Addressing Values of Stakeholders
Kapiriri, Lydia; Tomlinson, Mark; Gibson, Jennifer; Chopra, Mickey; El Arifeen, Shams; Black, Robert E.; Rudan, Igor
2007-01-01
Aim To identify the main groups of stakeholders in the process of health research priority setting and propose strategies for addressing their systems of values. Methods In three separate exercises that took place between March and June 2006 we interviewed three different groups of stakeholders: 1) members of the global research priority setting network; 2) a diverse group of national-level stakeholders from South Africa; and 3) participants at a conference on international child health held in Washington, DC, USA. Each group was administered a different version of the questionnaire, in which they were asked to assign weights (and, where applicable, minimum required thresholds) to criteria that were a priori defined as relevant to health research priority setting by the consultants of the Child Health and Nutrition Research Initiative (CHNRI). Results At the global level, the wide and diverse group of respondents placed the greatest importance (weight) on the criterion of maximum potential for disease burden reduction, while the most stringent threshold was placed on the criterion of answerability in an ethical way. Among the stakeholders' representatives attending the international conference, the criterion of deliverability, answerability, and sustainability of health research results was proposed as the most important one. At the national level in South Africa, the greatest weight was placed on the criterion addressing the predicted impact on equity of the proposed health research. Conclusions Involving a large group of stakeholders when setting priorities in health research investments is important because criteria of relevance to scientists and technical experts, whose knowledge and technical expertise are usually central to the process, may not be appropriate to specific contexts or in accordance with the views and values of those who invest in health research, those who benefit from it, or wider society as a whole. PMID:17948948
Panday, Seema; Kathard, Harsha; Pillay, Mershen; Wilson, Wayne
2018-03-29
The purpose of this study was to consider the value of adding first-language speaker ratings to the process of validating word recordings for use in a new speech reception threshold (SRT) test in audiology. Previous studies had identified 28 word recordings as being suitable for use in a new SRT test. These word recordings had been shown to satisfy the linguistic criteria of familiarity, phonetic dissimilarity and tone, and the psychometric criterion of homogeneity of audibility. Objectives: The aim of the study was to consider the value of adding first-language speakers' ratings when validating word recordings for a new SRT test. Method: A single-observation, cross-sectional design was used to collect and analyse quantitative data. Eleven first-language isiZulu speakers, purposively selected, were asked to rate each of the word recordings for pitch, clarity, naturalness, speech rate and quality on a 5-point Likert scale. Percent agreement and the Friedman test were used for analysis. Results: More than 20% of the 11 participants rated three of the word recordings below 'strongly agree' in the category of pitch or tone, and one word recording below 'strongly agree' in the categories of pitch or tone, clarity or articulation, and naturalness or dialect. Conclusion: The first-language speaker ratings proved to be a valuable addition to the process of selecting word recordings for use in a new SRT test. In particular, these ratings identified potentially problematic word recordings in the new SRT test that had been missed by the previously and more commonly used linguistic and psychometric selection criteria.
Kanazawa, Yoshie; Nakao, Toshiyuki; Murai, Seizo; Okada, Tomonari; Matsumoto, Hiroshi
2017-07-01
The International Society of Renal Nutrition and Metabolism (ISRNM) has proposed diagnostic criteria for protein-energy wasting (PEW). We studied Japanese haemodialysis (HD) patients to verify the diagnostic method, especially with respect to the body mass index (BMI) criterion, as well as the prevalence of PEW and its association with mortality. Japanese patients receiving maintenance HD at three outpatient clinics in Tokyo (n = 210) were enrolled and prospectively followed up for 3 years. PEW was diagnosed at baseline according to the four categories (serum chemistry, body mass, muscle mass and dietary intake) recommended by the ISRNM. For the body mass category, we selected BMI and set three thresholds, <18.5, <20.0 and <23.0 kg/m², as the diagnostic criterion. Patients who satisfied at least three of the four categories were diagnosed with PEW. PEW, when the BMI threshold among the diagnostic criteria was defined as <18.5 kg/m², was recognized as an independent risk factor for mortality. However, PEW was not recognized as a risk factor when the BMI diagnostic criterion was set at <20.0 or <23.0 kg/m². Overall, 14.8% of the patients had PEW. The survival rate of PEW patients was significantly lower than that of non-PEW patients (log rank, P < 0.001). The diagnostic algorithm for PEW proposed by an expert panel of the ISRNM is strongly associated with mortality. However, given differences in body size in Japan, we suggest revising the BMI criterion from <23.0 kg/m² to <18.5 kg/m². © 2016 Asian Pacific Society of Nephrology.
Detection and Use of Load and Gage Output Repeats of Wind Tunnel Strain-Gage Balance Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2017-01-01
Criteria are discussed that may be used for the detection of load and gage output repeats of wind tunnel strain-gage balance data. First, empirical thresholds are introduced that help determine if the loads or electrical outputs of a pair of balance calibration or check load data points match. A threshold of 0.01 percent of the load capacity is suggested for the identification of matching loads. Similarly, a threshold of 0.1 microV/V is recommended for the identification of matching electrical outputs. Two examples for the use of load and output repeats are discussed to illustrate benefits of the implementation of a repeat point detection algorithm in a balance data analysis software package. The first example uses the suggested load threshold to identify repeat data points that may be used to compute pure errors of the balance loads. This type of analysis may reveal hidden data quality issues that could potentially be avoided by making calibration process improvements. The second example uses the electrical output threshold for the identification of balance fouling. Data from the calibration of a six-component force balance is used to illustrate the calculation of the pure error of the balance loads.
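A repeat-point detector built on the two suggested thresholds might be organized as in the sketch below; the array layout and the pooled-standard-deviation pure-error estimate are our assumptions about how such a software feature could look, not the paper's implementation.

```python
import numpy as np

LOAD_TOL_FRAC = 0.0001   # 0.01 percent of capacity, per load component
OUTPUT_TOL = 0.1         # microV/V, for matching electrical outputs

def repeat_groups(loads, capacities):
    """Group points whose loads agree within 0.01 % of capacity on every
    component. loads: (n_points, n_components); capacities: (n_components,)."""
    loads = np.asarray(loads, float)
    capacities = np.asarray(capacities, float)
    groups, used = [], np.zeros(len(loads), bool)
    for i in range(len(loads)):
        if used[i]:
            continue
        close = np.all(np.abs(loads - loads[i]) <= LOAD_TOL_FRAC * capacities,
                       axis=1)
        idx = np.flatnonzero(close & ~used)
        if idx.size > 1:
            groups.append(idx)
        used[idx] = True
    return groups

def pure_error(values, groups):
    """Pooled standard deviation over repeat groups (one column per component)."""
    values = np.asarray(values, float)
    ss, dof = 0.0, 0
    for idx in groups:
        g = values[idx]
        ss += float(((g - g.mean(axis=0)) ** 2).sum())
        dof += (g.shape[0] - 1) * g.shape[1]
    return np.sqrt(ss / dof) if dof else np.nan

# the fouling check in the second example would flag pairs of points whose
# outputs match within OUTPUT_TOL while their applied loads differ.
```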
Diagnostic criteria for vascular cognitive disorders: a VASCOG statement.
Sachdev, Perminder; Kalaria, Raj; O'Brien, John; Skoog, Ingmar; Alladi, Suvarna; Black, Sandra E; Blacker, Deborah; Blazer, Dan G; Chen, Christopher; Chui, Helena; Ganguli, Mary; Jellinger, Kurt; Jeste, Dilip V; Pasquier, Florence; Paulsen, Jane; Prins, Niels; Rockwood, Kenneth; Roman, Gustavo; Scheltens, Philip
2014-01-01
Several sets of diagnostic criteria have been published for vascular dementia since the 1960s. The continuing ambiguity in vascular dementia definition warrants a critical reexamination. Participants at a special symposium of the International Society for Vascular Behavioral and Cognitive Disorders (VASCOG) in 2009 critiqued the current criteria. They drafted a proposal for a new set of criteria, later reviewed through multiple drafts by the group, including additional experts and the members of the Neurocognitive Disorders Work Group of the fifth revision of Diagnostic and Statistical Manual (DSM-5) Task Force. Cognitive disorders of vascular etiology are a heterogeneous group of disorders with diverse pathologies and clinical manifestations, discussed broadly under the rubric of vascular cognitive disorders (VCD). The continuum of vascular cognitive impairment is recognized by the categories of Mild Vascular Cognitive Disorder, and Vascular Dementia or Major Vascular Cognitive Disorder. Diagnostic thresholds are defined. Clinical and neuroimaging criteria are proposed for establishing vascular etiology. Subtypes of VCD are described, and the frequent co-occurrence of Alzheimer disease pathology emphasized. The proposed criteria for VCD provide a coherent approach to the diagnosis of this diverse group of disorders, with a view to stimulating clinical and pathologic validation studies. These criteria can be harmonized with the DSM-5 criteria such that an international consensus on the criteria for VCD may be achieved.
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from those belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization; instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis groups. The dissimilarity (or distance between classes) was established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton-density (PD)-weighted scans from two healthy subjects and two patients with multiple sclerosis. For each image and each threshold, the number of white pixels (the area of white objects in the binary image) was determined; these pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
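The feature used here is simply the white-pixel count of the binarized image as a function of threshold; hierarchical (dendrogram) clustering on those counts then indicates how well a threshold separates healthy from multiple sclerosis scans. A schematic version, with the threshold grid and linkage choice as assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def white_counts(images, thresholds):
    """One row per image: white-pixel count after binarizing at each threshold."""
    return np.array([[(img > t).sum() for t in thresholds] for img in images],
                    dtype=float)

def cluster_images(images, thresholds, n_groups=2):
    """Agglomerative (dendrogram-based) clustering on white-area profiles;
    counts could be standardized first if scales differ strongly."""
    feats = white_counts(images, thresholds)
    Z = linkage(feats, method="average")
    return fcluster(Z, t=n_groups, criterion="maxclust")

# sweeping a single threshold instead (e.g. T = 30 for T2w or T = 80 for PD)
# and clustering on that one count should reproduce the reported separation.
```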
Nimdet, Khachapon; Chaiyakunapruk, Nathorn; Vichansavakul, Kittaya; Ngorsuraches, Surachat
2015-01-01
Background A number of studies have been conducted to estimate willingness to pay (WTP) per quality-adjusted life year (QALY) in patients or the general population for various diseases. However, there has not been any systematic review summarizing the relationship between WTP per QALY and the cost-effectiveness (CE) threshold based on the World Health Organization (WHO) recommendation. Objective To systematically review the WTP-per-QALY literature, to compare WTP per QALY with the CE threshold recommended by WHO, and to determine potential influencing factors. Methods We searched MEDLINE, EMBASE, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Centre for Reviews and Dissemination (CRD), and EconLit from inception through 15 July 2014. To be included, studies had to estimate WTP per QALY for health-related issues using a stated preference method. Two investigators independently reviewed each abstract, completed full-text reviews, and extracted information for included studies. We compared WTP per QALY to GDP per capita, and analyzed and summarized potential influencing factors. Results Out of 3,914 articles found, 14 studies were included. Most studies (92.85%) used the contingent valuation method, while only one study used discrete choice experiments. Sample size varied from 104 to 21,896 persons. The ratio between WTP per QALY and GDP per capita varied widely from 0.05 to 5.40, depending on scenario outcomes (e.g., whether the scenario extended/saved life or improved quality of life), severity of hypothetical scenarios, duration of scenario, and source of funding. The average ratio of WTP per QALY to GDP per capita for extending or saving life (2.03) was significantly higher than the average for improving quality of life (0.59), with a mean difference of 1.43 (95% CI, 1.06 to 1.81). Conclusion This systematic review provides an overview of all studies estimating WTP per QALY. The variation in the ratio of WTP per QALY to GDP per capita, which depended on several factors, may prompt discussions on the CE threshold policy. Our work provides a foundation for defining the future direction of decision criteria for an evidence-informed decision-making system. PMID:25855971
MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, L; Nyflot, M; Ford, E
2016-06-15
Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, most of which thresholding discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitudes of 17 intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features to classify images as with or without errors, using 180 gamma images. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
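As a hedged illustration of the feature-extraction step this abstract describes (not the authors' code), the sketch below computes an intensity-histogram kurtosis and a simple size-zone statistic from a 2-D gamma map; the zone threshold and the large-zone-emphasis form are illustrative assumptions.

```python
# Sketch: two radiomic feature families from a 2-D gamma map.
import numpy as np
from scipy.stats import kurtosis
from scipy.ndimage import label

def gamma_features(gamma_map, zone_threshold=1.0):
    """Return (histogram kurtosis, a large-zone-emphasis-style metric).

    zone_threshold: gamma value above which pixels count as 'failing';
    connected clusters of such pixels form the size zones (assumed value).
    """
    # Intensity-histogram feature: kurtosis of the gamma distribution.
    k = kurtosis(gamma_map.ravel())

    # Size-zone feature: label connected clusters of high-gamma pixels
    # (e.g., under a mispositioned MLC leaf) and weight zones by size.
    labeled, n_zones = label(gamma_map > zone_threshold)
    if n_zones == 0:
        return k, 0.0
    sizes = np.bincount(labeled.ravel())[1:].astype(float)  # skip background
    return k, float(np.mean(sizes ** 2))
```

Features of this kind could then feed any linear classifier for the error/no-error decision, as in the abstract.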
Influenza surveillance in Europe: establishing epidemic thresholds by the Moving Epidemic Method
Vega, Tomás; Lozano, Jose Eugenio; Meerhoff, Tamara; Snacken, René; Mott, Joshua; Ortiz de Lejarazu, Raul; Nunes, Baltazar
2012-01-01
Please cite this paper as: Vega et al. (2012) Influenza surveillance in Europe: establishing epidemic thresholds by the moving epidemic method. Influenza and Other Respiratory Viruses 7(4), 546-558. Background Timely influenza surveillance is important to monitor influenza epidemics. Objectives (i) To calculate the epidemic threshold for influenza-like illness (ILI) and acute respiratory infections (ARI) in 19 countries, as well as the thresholds for different levels of intensity. (ii) To evaluate the performance of these thresholds. Methods The moving epidemic method (MEM) was developed to determine the baseline influenza activity and an epidemic threshold. False alerts, detection lags and timeliness of the detection of epidemics were calculated. The performance was evaluated using a cross-validation procedure. Results The overall sensitivity of the MEM threshold was 71.8% and the specificity was 95.5%. The median timeliness was 1 week (range: 0-4.5). Conclusions The method produced a robust and specific signal to detect influenza epidemics. The good balance between the sensitivity and specificity of the epidemic threshold in detecting seasonal epidemics and avoiding false alerts has advantages for public health purposes. This method may serve as a standard to define the start of the annual influenza epidemic in countries in Europe. PMID:22897919
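As a rough sketch of the MEM idea, assuming one common formulation (take the highest pre-epidemic weekly rates from each historical season and set the threshold as a one-sided upper confidence limit around their mean; the parameters below are illustrative, not the paper's configuration):

```python
# Hedged sketch of a moving-epidemic-method style threshold.
import numpy as np
from scipy import stats

def mem_threshold(seasons, n_per_season=1, confidence=0.95):
    """seasons: list of 1-D arrays of weekly pre-epidemic ILI/ARI rates."""
    # Pool the n highest pre-epidemic values from each past season.
    values = np.concatenate([np.sort(s)[-n_per_season:] for s in seasons])
    m, sd, n = values.mean(), values.std(ddof=1), len(values)
    t = stats.t.ppf(confidence, df=n - 1)
    # One-sided upper confidence limit = epidemic threshold.
    return m + t * sd / np.sqrt(n)
```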
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed-threshold methods. It was found that the 1D location accuracy of the new methods was within the range of <1-7.1% of the monitored region, compared to 2.7% for the AIC method and a range of 1.8-9.4% for the conventional fixed-threshold method at different threshold levels.
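For reference, the established AIC onset picker that the new methods are benchmarked against can be sketched as follows (a minimal version of the standard two-segment formulation, not the paper's code): the onset is the sample index that minimises the AIC of a noise-segment/signal-segment split.

```python
# Minimal AIC onset picker for an AE waveform.
import numpy as np

def aic_onset(x):
    """Return the index of the estimated signal onset in waveform x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(x[:k])   # variance of the assumed noise segment
        v2 = np.var(x[k:])   # variance of the assumed signal segment
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))
```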
Evaluation of Maryland abutment scour equation through selected threshold velocity methods
Benedict, S.T.
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland State Highway Administration, used field measurements of scour to evaluate the sensitivity of the Maryland abutment scour equation to the critical (or threshold) velocity variable. Four selected methods for estimating threshold velocity were applied to the Maryland abutment scour equation, and the predicted scour was compared to the field measurements. Results indicated that the performance of the Maryland abutment scour equation was sensitive to the threshold velocity, with some threshold velocity methods producing better estimates of predicted scour than others. In addition, results indicated that regional stream characteristics can affect the performance of the Maryland abutment scour equation, with moderate-gradient streams performing differently from low-gradient streams. On the basis of the findings of the investigation, guidance for selecting threshold velocity methods for application to the Maryland abutment scour equation is provided, and limitations are noted.
Nimdet, Khachapon; Chaiyakunapruk, Nathorn; Vichansavakul, Kittaya; Ngorsuraches, Surachat
2015-01-01
A number of studies have been conducted to estimate willingness to pay (WTP) per quality-adjusted life year (QALY) in patients or the general population for various diseases. However, there has not been any systematic review summarizing the relationship between WTP per QALY and the cost-effectiveness (CE) threshold based on the World Health Organization (WHO) recommendation. To systematically review the willingness-to-pay per quality-adjusted-life-year (WTP per QALY) literature, to compare WTP per QALY with the CE threshold recommended by WHO, and to determine potential influencing factors, we searched MEDLINE, EMBASE, PsycINFO, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Centre for Reviews and Dissemination (CRD), and EconLit from inception through 15 July 2014. To be included, studies had to estimate WTP per QALY in health-related issues using a stated preference method. Two investigators independently reviewed each abstract, completed full-text reviews, and extracted information for included studies. We compared WTP per QALY to GDP per capita, and analyzed and summarized potential influencing factors. Out of 3,914 articles found, 14 studies were included. Most studies (92.85%) used the contingent valuation method, while only one study used discrete choice experiments. Sample size varied from 104 to 21,896 persons. The ratio between WTP per QALY and GDP per capita varied widely from 0.05 to 5.40, depending on scenario outcomes (e.g., whether it extended/saved life or improved quality of life), severity of hypothetical scenarios, duration of scenario, and source of funding. The average ratio of WTP per QALY to GDP per capita for extending or saving life (2.03) was significantly higher than the average for improving quality of life (0.59), with a mean difference of 1.43 (95% CI, 1.06 to 1.81). This systematic review provides an overview of all studies estimating WTP per QALY. The variation in the ratio of WTP per QALY to GDP per capita, which depended on several factors, may prompt discussions on the CE threshold policy. Our work provides a foundation for defining the future direction of decision criteria for an evidence-informed decision-making system.
NASA Astrophysics Data System (ADS)
Villagrasa, Carmen; Meylan, Sylvain; Gonon, Geraldine; Gruel, Gaëtan; Giesen, Ulrich; Bueno, Marta; Rabus, Hans
2017-09-01
In this work we present results obtained in the frame of the BioQuaRT project. The objective of the study was the correlation between the number of radiation-induced double strand breaks (DSB) of the DNA molecule and the probability of detecting nuclear foci after targeted microbeam irradiation of cells with protons and alpha particles of different LET. The former were obtained by simulation with new methods integrated into Geant4-DNA that permit calculating the number of DSB induced in a DNA target model by direct and indirect radiation effects. Particular focus was placed in this work on evaluating the influence of different criteria applied to the simulated results for predicting the formation of a direct SSB. Indeed, these criteria have an important impact on the predicted number of DSB per particle track and its dependence on LET. Among the criteria tested in this work, the criterion that a direct radiation interaction leads to a strand break if the cumulative energy deposited in the backbone part of one nucleotide exceeds a threshold of 17.5 eV gives the best agreement with the relative LET dependence of the number of radiation-induced foci. Further calculations and experimental data are nevertheless needed in order to fix the simulation parameters and to help interpret the biological experimental data observed by immunofluorescence in terms of DSB complexity.
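A hedged sketch of scoring breaks under the energy criterion described above: an SSB is scored when the cumulative backbone energy of one nucleotide exceeds 17.5 eV, and opposite-strand SSBs within a base-pair separation form a DSB. The 10-bp pairing distance and the data layout are illustrative assumptions, not taken from the paper.

```python
# Sketch: SSB/DSB scoring from per-nucleotide backbone energy depositions.
from collections import defaultdict

def count_dsb(depositions, e_thresh=17.5, max_sep=10):
    """depositions: iterable of (strand, nucleotide_index, energy_eV),
    with strand in {0, 1}. Returns the number of DSBs scored."""
    cum = defaultdict(float)
    for strand, idx, e in depositions:
        cum[(strand, idx)] += e  # accumulate energy per nucleotide backbone

    # SSB wherever cumulative backbone energy exceeds the threshold.
    ssb = {s: sorted(i for (st, i) in cum if st == s and cum[(st, i)] > e_thresh)
           for s in (0, 1)}

    # Greedily pair opposite-strand SSBs within max_sep base pairs.
    dsb, used = 0, set()
    for i in ssb[0]:
        for j in ssb[1]:
            if j not in used and abs(i - j) <= max_sep:
                dsb += 1
                used.add(j)
                break
    return dsb
```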
Dimensionality effects in chalcogenide-based devices
NASA Astrophysics Data System (ADS)
Kostylev, S. A.
2013-06-01
The multiplicity of fundamental bulk effects with small characteristic dimensions and short times and diversity of their combinations attracts a lot of researcher and industrialist attention in nanoelectronics and photonics to chalcogenide materials. Experimental data presented on dimensional effects of electrical chalcogenide switching (threshold voltage and threshold current dependence on device area and the film thickness), and in phase-change memory (switching, programming and read parameters), are analyzed from the point of view of choice of low dimensional materials with S-NDC and participation of electrical instabilities - high current density filaments. New ways of improving parameters of phase-change devices are proposed together with new criteria of material choice.
Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding
NASA Astrophysics Data System (ADS)
Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry
2014-07-01
The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensemble sizes, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: the hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding (petroleum reservoir) cases whose levels of heterogeneity/nonlinearity differ. It should be noted that, besides the adaptive thresholding, standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding of the forecast covariance yields more reliable performance than thresholding of the Kalman gain. Among the thresholding functions, SCAD is more robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding, and that it should be performed judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
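A minimal sketch of three of the elementwise thresholding functions named above (hard, soft, and SCAD; the lasso variant is omitted), applied to a forecast covariance matrix. Preserving the diagonal variances is an assumption about usage, so the thresholded matrix stays usable in the EnKF update.

```python
# Sketch: elementwise thresholding rules for covariance regularization.
import numpy as np

def hard(x, t):
    return np.where(np.abs(x) > t, x, 0.0)

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def scad(x, t, a=3.7):
    # Fan-Li SCAD thresholding: soft for |x| <= 2t, linear blend for
    # 2t < |x| <= a*t, identity for |x| > a*t.
    ax = np.abs(x)
    out = soft(x, t)
    mid = (ax > 2 * t) & (ax <= a * t)
    out = np.where(mid, ((a - 1) * x - np.sign(x) * a * t) / (a - 2), out)
    return np.where(ax > a * t, x, out)

def threshold_covariance(P, t, func=soft):
    Pt = func(P, t)
    np.fill_diagonal(Pt, np.diag(P))  # keep the variances untouched
    return Pt
```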
Accuracy of cancellous bone volume fraction measured by micro-CT scanning.
Ding, M; Odgaard, A; Hvid, I
1999-03-01
Volume fraction, the single most important parameter in describing trabecular microstructure, can easily be calculated from three-dimensional reconstructions of micro-CT images. This study sought to quantify the accuracy of this measurement. One hundred and sixty human cancellous bone specimens which covered a large range of volume fraction (9.8-39.8%) were produced. The specimens were micro-CT scanned, and the volume fraction based on Archimedes' principle was determined as a reference. After scanning, all micro-CT data were segmented using individual thresholds determined by the scanner supplied algorithm (method I). A significant deviation of volume fraction from method I was found: both the y-intercept and the slope of the regression line were significantly different from those of the Archimedes-based volume fraction (p < 0.001). New individual thresholds were determined based on a calibration of volume fraction to the Archimedes-based volume fractions (method II). The mean thresholds of the two methods were applied to segment 20 randomly selected specimens. The results showed that volume fraction using the mean threshold of method I was underestimated by 4% (p = 0.001), whereas the mean threshold of method II yielded accurate values. The precision of the measurement was excellent. Our data show that care must be taken when applying thresholds in generating 3-D data, and that a fixed threshold may be used to obtain reliable volume fraction data. This fixed threshold may be determined from the Archimedes-based volume fraction of a subgroup of specimens. The threshold may vary between different materials, and so it should be determined whenever a study series is performed.
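As a hedged sketch of the calibration behind "method II" described above (choosing, per specimen, the grey-level threshold whose segmented volume fraction reproduces the Archimedes-based reference), one can read the threshold off the grey-value distribution directly:

```python
# Sketch: calibrate a segmentation threshold to a reference volume fraction.
import numpy as np

def calibrated_threshold(volume, vf_reference):
    """volume: 3-D array of micro-CT grey values.
    vf_reference: Archimedes-based bone volume fraction (0-1).

    The threshold at the (1 - vf) quantile leaves exactly the reference
    fraction of voxels above it."""
    return np.quantile(volume.ravel(), 1.0 - vf_reference)
```

The mean of such per-specimen thresholds over a calibration subgroup could then serve as the fixed threshold the abstract recommends.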
Choosing front-of-package food labelling nutritional criteria: how smart were 'Smart Choices'?
Roberto, Christina A; Bragg, Marie A; Livingston, Kara A; Harris, Jennifer L; Thompson, Jackie M; Seamans, Marissa J; Brownell, Kelly D
2012-02-01
The 'Smart Choices' programme was an industry-driven, front-of-package (FOP) nutritional labelling system introduced in the USA in August 2009, ostensibly to help consumers select healthier options during food shopping. Its nutritional criteria were developed by members of the food industry in collaboration with nutrition and public health experts and government officials. The aim of the present study was to test the extent to which products labelled as 'Smart Choices' could be classified as healthy choices on the basis of the Nutrient Profile Model (NPM), a non-industry-developed, validated nutritional standard. A total of 100 packaged products that qualified for a 'Smart Choices' designation were sampled from eight food and beverage categories. All products were evaluated using the NPM method. In all, 64 % of the products deemed 'Smart Choices' did not meet the NPM standard for a healthy product. Within each 'Smart Choices' category, 0 % of condiments, 8·70 % of fats and oils, 15·63 % of cereals and 31·58 % of snacks and sweets met NPM thresholds. All sampled soups, beverages, desserts and grains deemed 'Smart Choices' were considered healthy according to the NPM standard. The 'Smart Choices' programme is an example of industries' attempts at self-regulation. More than 60 % of foods that received the 'Smart Choices' label did not meet standard nutritional criteria for a 'healthy' food choice, suggesting that industries' involvement in designing labelling systems should be scrutinized. The NPM system may be a good option as the basis for establishing FOP labelling criteria, although more comparisons with other systems are needed.
NASA Astrophysics Data System (ADS)
Oh, Kyonghwan; Kwon, Oh-Kyong
2012-03-01
A threshold-voltage-shift compensation and suppression method for active matrix organic light-emitting diode (AMOLED) displays fabricated using a hydrogenated amorphous silicon thin-film transistor (TFT) backplane is proposed. The proposed method compensates for the threshold voltage variation of TFTs due to different threshold voltage shifts during emission time and extends the lifetime of the AMOLED panel. Measurement results show that the error range of emission current is from -1.1 to +1.7% when the threshold voltage of TFTs varies from 1.2 to 3.0 V.
Reliability of the method of levels for determining cutaneous temperature sensitivity
NASA Astrophysics Data System (ADS)
Jakovljević, Miroljub; Mekjavić, Igor B.
2012-09-01
Determination of thermal thresholds is used clinically for evaluation of peripheral nervous system function. The aim of this study was to evaluate the reliability of the method of levels performed with a new, low-cost device for determining cutaneous temperature sensitivity. Nineteen male subjects were included in the study. Thermal thresholds were tested on the right side at the volar surface of the mid-forearm, the lateral surface of the mid-upper arm and the front of the mid-thigh. Thermal testing was carried out by the method of levels with an initial temperature step of 2°C. Variability of thermal thresholds was expressed by means of the ratio between the second and the first testing, the coefficient of variation (CV), the coefficient of repeatability (CR), the intraclass correlation coefficient (ICC), the mean difference between sessions (S1-S2diff), the standard error of measurement (SEM) and the minimally detectable change (MDC). There were no statistically significant changes between sessions for warm or cold thresholds, or between warm and cold thresholds. Within-subject CVs were acceptable. The CR estimates for warm thresholds ranged from 0.74°C to 1.06°C and from 0.67°C to 1.07°C for cold thresholds. The ICC values for intra-rater reliability ranged from 0.41 to 0.72 for warm thresholds and from 0.67 to 0.84 for cold thresholds. S1-S2diff ranged from -0.15°C to 0.07°C for warm thresholds, and from -0.08°C to 0.07°C for cold thresholds. SEM ranged from 0.26°C to 0.38°C for warm thresholds, and from 0.23°C to 0.38°C for cold thresholds. Estimated MDC values were between 0.60°C and 0.88°C for warm thresholds, and between 0.53°C and 0.88°C for cold thresholds. The method of levels for determining cutaneous temperature sensitivity has acceptable reliability.
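The last two statistics quoted follow standard relations (SEM = SD·sqrt(1 − ICC), MDC95 = 1.96·sqrt(2)·SEM); a minimal sketch, assuming a two-session test-retest design with the ICC supplied from a separate analysis:

```python
# Sketch: standard error of measurement and minimal detectable change.
import numpy as np

def sem_mdc(session1, session2, icc):
    """session1, session2: arrays of thresholds from the two sessions;
    icc: intraclass correlation coefficient (computed elsewhere)."""
    scores = np.concatenate([session1, session2])
    sd = np.std(scores, ddof=1)
    sem = sd * np.sqrt(1.0 - icc)
    mdc95 = 1.96 * np.sqrt(2.0) * sem
    return sem, mdc95
```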
Screening for increased cardiometabolic risk in primary care: a systematic review
den Engelsen, Corine; Koekkoek, Paula S; Godefrooij, Merijn B; Spigt, Mark G; Rutten, Guy E
2014-01-01
Background Many programmes to detect and prevent cardiovascular disease (CVD) have been performed, but the optimal strategy is not yet clear. Aim To present a systematic review of cardiometabolic screening programmes performed among apparently healthy people (not yet known to have CVD, diabetes, or cardiometabolic risk factors) and mixed populations (apparently healthy people and people diagnosed with risk factor or disease) to define the optimal screening strategy. Design and setting Systematic review of studies performed in primary care in Western countries. Method MEDLINE, Embase, and CINAHL databases were searched for studies screening for increased cardiometabolic risk. Exclusion criteria were studies designed to assess prevalence of risk factors without follow-up or treatment; without involving a GP; when fewer than two risk factors were considered as the primary outcome; and studies constrained to ethnic minorities. Results The search strategy yielded 11 445 hits; 26 met the inclusion criteria. Five studies (1995-2012) were conducted in apparently healthy populations: three used a stepwise method. Response rates varied from 24% to 79%. Twenty-one studies (1967-2012) were performed in mixed populations; one used a stepwise method. Response rates varied from 50% to 75%. Prevalence rates could not be compared because of heterogeneity in the thresholds used and in the eligible populations. Observed time trends were a shift from mixed to apparently healthy populations, increasing use of risk scores, and increasing use of stepwise screening methods. Conclusion The optimal screening strategy in primary care is likely stepwise, in apparently healthy people, with the use of risk scores. Increasing public awareness and actively involving GPs might facilitate screening efficiency and uptake. PMID:25267047
Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.
Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang
2016-10-10
In underwater range-gated imaging (URGI), enhancement of low-brightness, low-contrast images is critical for human observation. Traditional histogram equalization over-enhances such images, with the result that details are lost. To suppress over-enhancement, a lower-upper-threshold correlation method is proposed for self-adaptive enhancement in underwater range-gated imaging, based on double-plateau histogram equalization. The lower threshold determines image details and suppresses over-enhancement; it is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, proof-of-concept experiments were performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global images and of regions of interest. The evaluation results demonstrate that the proposed method adaptively selects proper upper and lower thresholds under different conditions. The proposed method contributes to URGI through effective image enhancement for human viewing.
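A hedged sketch of double-plateau histogram equalization with correlated thresholds in the spirit of this method; the peak search and the lower-threshold rule below are illustrative assumptions, not the paper's exact formulas.

```python
# Sketch: double-plateau histogram equalization for a uint8 image.
import numpy as np

def correlated_thresholds(img, levels=256):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    # Stand-in for the real-time local-maximum search: smooth the
    # histogram and average its local peaks to set the upper plateau.
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    inner = smooth[1:-1]
    peaks = inner[(inner > smooth[:-2]) & (inner > smooth[2:])]
    t_up = peaks.mean() if peaks.size else smooth.max()
    # Correlate the lower threshold with the upper threshold and the
    # number of nonzero bins (illustrative rule, not the paper's).
    t_low = t_up * np.count_nonzero(hist) / (4.0 * levels)
    return t_up, t_low

def double_plateau_equalize(img, t_up, t_low, levels=256):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    clipped = hist.astype(float)
    clipped[clipped > t_up] = t_up                       # compress background
    clipped[(clipped > 0) & (clipped < t_low)] = t_low   # protect details
    cdf = np.cumsum(clipped)
    lut = np.round((levels - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]
```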
The Role of Parametric Assumptions in Adaptive Bayesian Estimation
ERIC Educational Resources Information Center
Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.
2004-01-01
Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…
77 FR 43561 - Proposed Eligibility Criteria for Bound Printed Matter Parcels
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-25
... physical density threshold for individual mailpieces. DATES: Comments on this advance notice are due.... Code, require that each class of mail or type of mail service bear the direct and indirect costs... a 98.8% cost coverage. Greater efficiency in the packaging of BPM parcels will provide for more...
Future Shock--Education 1984: The Economists' Viewpoint.
ERIC Educational Resources Information Center
Hanlon, J. William
Education, like other institutions of our society, is susceptible to "future shock", the inadequate preparation for a radically different future. Our nation is on the threshold of an age of scarcity, and the impact on education will be the accelerated demands for educators to justify their use of resources based on impersonal objective criteria.…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-20
... Sources: quarterly reporting, indicator tracking, quarterly tables, surveys of MCC staff, GAO audits, and impact evaluations. Similarly, the Board may consider a ... reporting, documentation of changes in timing or scope of a threshold program in implementation, a survey of...
ERIC Educational Resources Information Center
Stockard, Jean; Wood, Timothy W.
2017-01-01
Most evaluators have embraced the goal of evidence-based practice (EBP). Yet, many have criticized EBP review systems that prioritize randomized control trials and use various criteria to limit the studies examined. They suggest this could produce policy recommendations based on small, unrepresentative segments of the literature and recommend a…
Stress/strain changes and triggered seismicity at The Geysers, California
Gomberg, J.; Davis, S.
1996-01-01
The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedance of some constant threshold but also the requirements for generating instabilities.
Stress/strain changes and triggered seismicity at The Geysers, California
NASA Astrophysics Data System (ADS)
Gomberg, Joan; Davis, Scott
1996-01-01
The principal results of this study of remotely triggered seismicity in The Geysers geothermal field are the demonstration that triggering (initiation of earthquake failure) depends on a critical strain threshold and that the threshold level increases with decreasing frequency, or, equivalently, depends on strain rate. This threshold function derives from (1) analyses of dynamic strains associated with surface waves of the triggering earthquakes, (2) statistically measured aftershock zone dimensions, and (3) analytic functional representations of strains associated with power production and tides. The threshold is also consistent with triggering by static strain changes and implies that both static and dynamic strains may cause aftershocks. The observation that triggered seismicity probably occurs in addition to background activity also provides an important constraint on the triggering process. Assuming the physical processes underlying earthquake nucleation to be the same, Gomberg [this issue] discusses seismicity triggered by the MW 7.3 Landers earthquake, its constraints on the variability of triggering thresholds with site, and the implications of time delays between triggering and triggered earthquakes. Our results enable us to reject the hypothesis that dynamic strains simply nudge prestressed faults over a Coulomb failure threshold sooner than they would have otherwise. We interpret the rate-dependent triggering threshold as evidence of several competing processes with different time constants, the faster one(s) facilitating failure and the other(s) inhibiting it. Such competition is a common feature of theories of slip instability. All these results, not surprisingly, imply that to understand earthquake triggering one must consider not only simple failure criteria requiring exceedance of some constant threshold but also the requirements for generating instabilities.
Marshall, John W; Dahlstrom, Dean B; Powley, Kramer D
2011-06-01
To satisfy the Criminal Code of Canada's definition of a firearm, a barreled weapon must be capable of causing serious bodily injury or death to a person. Canadian courts have accepted the forensically established criteria of "penetration or rupture of an eye" as serious bodily injury. The minimal velocity of nonconventional ammunition required to penetrate the eye including airsoft projectiles has yet to be established. To establish minimal threshold requirements for eye penetration, empirical tests were conducted using a variety of airsoft projectiles. Using the data obtained from these tests, and previous research using "air gun" projectiles, an "energy density" parameter was calculated for the minimum penetration threshold of an eye. Airsoft guns capable of achieving velocities in excess of 99 m/s (325 ft/s) using conventional 6-mm airsoft ammunition will satisfy the forensically established criteria of "serious bodily injury." The energy density parameter for typical 6-mm plastic airsoft projectiles is 4.3 to 4.8 J/cm². This calculation also encompasses 4.5-mm steel BBs.
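A worked check of the quoted energy-density range, assuming a typical 0.25 g plastic BB (the abstract gives only the 6-mm calibre and the 99 m/s threshold velocity; the mass is an assumption):

```python
# Sketch: energy density of a 6-mm airsoft BB at the threshold velocity.
import math

m = 0.25e-3          # kg, assumed BB mass (not stated in the abstract)
v = 99.0             # m/s, threshold velocity from the abstract
r_cm = 0.6 / 2       # projectile radius in cm (6 mm calibre)

ke = 0.5 * m * v**2              # kinetic energy, ~1.23 J
area = math.pi * r_cm**2         # cross-sectional area, ~0.283 cm^2
print(round(ke / area, 2))       # ~4.33 J/cm^2, within the quoted 4.3-4.8 range
```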
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a third error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other applications where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective action. For the detection of defects, the use of gradients is very popular for highlighting, and subsequently segmenting, areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and very large, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo-defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above specific grey levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
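A hedged sketch of the idea: the percentile used to threshold the gradient image is adapted to how many gradient pixels exceed a reference grey level, so that large defects lower the percentile and remain fully segmented. The adaptation rule and constants below are illustrative assumptions, not the authors' formula.

```python
# Sketch: global adaptive percentile thresholding of a gradient image.
import numpy as np
from scipy import ndimage

def adaptive_percentile_segment(img, ref_level=40,
                                p_min=98.0, p_max=99.9, scale=5000.0):
    # Gradient magnitude (Sobel) of the grey-level image.
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    grad = np.hypot(gx, gy)

    # Adapt the percentile: many strong-gradient pixels (a large defect)
    # push the percentile down so the whole region is captured.
    n_strong = np.count_nonzero(grad > ref_level)
    p = np.clip(p_max - n_strong / scale, p_min, p_max)

    return grad > np.percentile(grad, p)
```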
Should criteria be specified for depressive disorder not otherwise specified?
Zimmerman, Mark; Martinez, Jennifer H; Dalrymple, Kristy; Chelminski, Iwona; Young, Diane
2013-05-01
Many patients have clinically significant symptoms of depression that do not meet the DSM-IV diagnostic thresholds for major depressive disorder (MDD) or dysthymic disorder. DSM-IV does not specify criteria for depressive disorder not otherwise specified (DDNOS). While it is not surprising that research on subthreshold depression has used diverse criteria, some consensus has emerged to define minor depression analogous to MDD, though requiring fewer than the 5 symptoms required to diagnose MDD. In the present report from the Rhode Island Methods to Improve Diagnostic Assessment and Services (MIDAS) project, we examined how many patients diagnosed with DDNOS met the DSM-IV proposed research criteria for minor depression, and we compared the demographic and clinical profiles of patients diagnosed with DDNOS who did and did not meet the criteria for minor depression Three thousand four hundred psychiatric patients presenting to the Rhode Island Hospital outpatient practice were evaluated with semi-structured diagnostic interviews for DSM-IV Axis I and Axis II disorders and measures of psychosocial morbidity. More than 6% of the 3400 patients were diagnosed with DDNOS (n=227). Only a minority of the patients with DDNOS met the criteria for minor depression (39.8%). There was no difference between patients with "subthreshold" depression who did and did not meet the DSM-IV research criteria for minor depression in demographic characteristics, the prevalence of comorbid Axis I or Axis II disorders, history of major depressive disorder, and family history of depression. The present study was conducted in a single outpatient practice in which the majority of patients were white, female, and had health insurance. Although the study was limited to a single site, a strength of the recruitment procedure was that the sample was not selected for participation in a treatment study, and exclusion and inclusion criteria did not reduce the representativeness of the patient groups. While we examined a number of validators, we did not systematically record the treatment the patients received and the outcome of treatment. Amongst psychiatric outpatients with clinically significant depression not meeting criteria for MDD or dysthymic disorder, there was little difference between patients who did and did not meet the DSM-IV research criteria for minor depressive disorder. Copyright © 2012 Elsevier B.V. All rights reserved.
A new edge detection algorithm based on Canny idea
NASA Astrophysics Data System (ADS)
Feng, Yingke; Zhang, Jinmin; Wang, Siming
2017-10-01
The traditional Canny algorithm has poor threshold self-adaptability and is sensitive to noise. In order to overcome these drawbacks, this paper proposes a new edge detection method based on the Canny algorithm. First, median filtering and a filter based on Euclidean distance are applied to the image; second, the Frei-Chen operator is used to calculate the gradient amplitude; finally, the Otsu algorithm is applied to local regions of the gradient amplitude to obtain threshold values, which are then averaged: half of the average serves as the high threshold value, and half of the high threshold value serves as the low threshold value. Experimental results show that this new method can effectively suppress noise disturbance, keep the edge information, and also improve the edge detection accuracy.
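A minimal sketch of the threshold-selection stage described here, with Sobel standing in for the Frei-Chen operator, so this is an illustration of the idea rather than a reimplementation:

```python
# Sketch: deriving Canny thresholds from block-wise Otsu values.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def canny_thresholds(img, block=64):
    smoothed = ndimage.median_filter(img, size=3)
    gx = ndimage.sobel(smoothed.astype(float), axis=0)
    gy = ndimage.sobel(smoothed.astype(float), axis=1)
    grad = np.hypot(gx, gy)

    # Otsu threshold on each block of the gradient-amplitude image.
    ts = []
    for i in range(0, grad.shape[0], block):
        for j in range(0, grad.shape[1], block):
            patch = grad[i:i + block, j:j + block]
            if patch.size and patch.max() > patch.min():
                ts.append(threshold_otsu(patch))

    high = np.mean(ts) / 2.0   # half of the average of the block thresholds
    low = high / 2.0           # half of the high threshold
    return high, low
```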
New pediatric vision screener, part II: electronics, software, signal processing and validation.
Gramatikov, Boris I; Irsch, Kristina; Wu, Yi-Kai; Guyton, David L
2016-02-04
We have developed an improved pediatric vision screener (PVS) that can reliably detect central fixation, eye alignment and focus. The instrument identifies risk factors for amblyopia, namely eye misalignment and defocus. The device uses the birefringence of the human fovea (the most sensitive part of the retina). The optics have been reported in more detail previously. The present article focuses on the electronics and the analysis algorithms used. The objective of this study was to optimize the analog design, data acquisition, noise suppression techniques, the classification algorithms and the decision making thresholds, as well as to validate the performance of the research instrument on an initial group of young test subjects-18 patients with known vision abnormalities (eight male and 10 female), ages 4-25 (only one above 18) and 19 controls with proven lack of vision issues. Four statistical methods were used to derive decision making thresholds that would best separate patients with abnormalities from controls. Sensitivity and specificity were calculated for each method, and the most suitable one was selected. Both the central fixation and the focus detection criteria worked robustly and allowed reliable separation between normal test subjects and symptomatic subjects. The sensitivity of the instrument was 100 % for both central fixation and focus detection. The specificity was 100 % for central fixation and 89.5 % for focus detection. The overall sensitivity was 100 % and the overall specificity was 94.7 %. Despite the relatively small initial sample size, we believe that the PVS instrument design, the analysis methods employed, and the device as a whole, will prove valuable for mass screening of children.
NASA Astrophysics Data System (ADS)
Stevenson, R. Jan
Frameworks for solving environmental problems have been presented over the past 40 years by many organizations and disciplines, often with a strong focus on their own discipline. This paper describes a modification of an existing framework that can be better applied to manage environmental problems. Human well-being, environmental policy, human activities, stressors (contaminants and habitat alterations), and ecosystem services are highlighted as five elements of the coupled human and natural system in the proposed framework. Thresholds in relationships among elements in coupled human and natural systems are key attributes of couplings because of their use in development of environmental criteria by facilitating stakeholder consensus and preventing catastrophic changes. Propagation of thresholds through coupled human and natural systems is hypothesized to be a significant driver of policy development. The application of the framework is related to managing eutrophication and algal bloom problems.
Liu, Chengyu; Liu, Changchun; Shao, Peng; Li, Liping; Sun, Xin; Wang, Xinpei; Liu, Feng
2011-02-01
Approximate entropy (ApEn) is widely accepted as a complexity measure of the heart rate variability (HRV) signal, but selecting the criteria for the threshold value r is controversial. This paper aims to verify whether Chon's method of forecasting the r(max) is an appropriate one for the HRV signal. The standard limb lead ECG signals of 120 subjects were recorded for 10 min in a supine position. The subjects were divided into two groups: the heart failure (22 females and 38 males, median age 62.4 ± 12.6) and healthy control group (33 females and 27 males, median age 51.5 ± 16.9). Three types of ApEn were calculated: the ApEn(0.2) using the recommended constant r = 0.2, the ApEn(chon) using Chon's method and the ApEn(max) using the true r(max). A Wilcoxon rank sum test showed that the ApEn(0.2) (p = 0.267) and the ApEn(max) (p = 0.813) had no statistical differences between the two groups, while the ApEn(chon) (p = 0.040) had. We generated a synthetic database to study the effect of two influential factors (the signal length N and the ratio of short- and long-term variability sd(1)/sd(2)) on the empirical formula in Chon's method (Chon et al 2009 IEEE Eng. Med. Biol. Mag. 28 18-23). The results showed that the empirical formula proposed by Chon et al is a good method for analyzing the random signal, but not an appropriate tool for analyzing nonlinear signals, such as the logistic or HRV signals.
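For reference, a minimal implementation of ApEn(m, r) with the threshold r expressed as a fraction of the signal's standard deviation (r = 0.2·SD reproduces the fixed choice behind ApEn(0.2)); the O(N²) pairwise computation is acceptable for short HRV records:

```python
# Sketch: approximate entropy with a fractional threshold r.
import numpy as np

def apen(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_frac * np.std(x)

    def phi(mm):
        # All length-mm template vectors of the series.
        templ = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)  # match fractions (self-matches included)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```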
'Nuisance Dust' - a Case for Recalibration?
NASA Astrophysics Data System (ADS)
Datson, Hugh; Marker, Brian
2013-04-01
This paper considers the case for a review and recalibration of limit values and acceptability criteria for 'nuisance dust', a widely encountered but poorly defined and regulated aspect of particulate matter pollution. Specific dust fractions such as PM10 and asbestiforms are well characterised and have limit values enshrined in legislation. National, and international, limit values for acceptable concentrations of PM10 and other fractions of particulate matter have been defined and agreed. In the United Kingdom (UK), these apply to both public and workplace exposures. By contrast, there is no standard definition or universal criteria against which acceptable levels for 'nuisance dust' can be assessed. This has implications for land-use planning and resource utilisation. Without meaningful limit values, inappropriate development might take place too near to residential dwellings, or land containing economically important mineral resources may be effectively sterilised. Furthermore, the expression 'nuisance dust' is unhelpful in that 'nuisance' has a specific meaning in environmental law whilst 'nuisance dust' is often taken to mean 'generally visible particulate matter'. As such, it is associated with the social and broader environmental impacts of particulate matter. PM10 concentrations are usually expressed as a mass concentration over time. These can be determined using a range of techniques. While results from different instruments are generally comparable, data obtained from alternative methods for measuring 'nuisance dust' are rarely interchangeable. In the UK, many of the methods typically used are derived from approaches developed under the HMIP (Her Majesty's Inspectorate of Pollution) regime in the 1960s onwards. Typical methods for 'nuisance dust' sampling focus on measurement of dust mass (from the weight of dust collected in an open container over time) or dust soiling (from loss of reflectance and/or obscuration of a surface discoloured by dust over time). 'Custom and practice' acceptance criteria for dust samples obtained by mass or soiling techniques have been developed and are widely applied even though they were not necessarily calibrated thoroughly and have not been reviewed recently. Furthermore, as sampling techniques have evolved, criteria developed for one method have been adapted for another. Criteria and limit values have sometimes been based on an insufficient knowledge of sampler characteristics. Ideally, limit values should be calibrated for the locality to take differences in dust density and visibility into account. Work is needed on the definition of criteria and limit values, and sampling practices for coarse dust fractions, followed by discussion of good practices for securing effective monitoring that is proportionate and fit for purpose. With social changes and the evolution of environmental controls since the 1960s, the public perception of 'nuisance dust' has changed and needs to be addressed by reviewing existing thresholds in relation to the range of monitoring devices currently in use.
Threshold selection for classification of MR brain images by clustering method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moldovanu, Simona; Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi; Obreja, Cristian
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool for separating objects from the background and, further, for classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton-density (PD)-weighted scans from two healthy subjects and two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) was determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups: healthy subjects and patients with multiple sclerosis.
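A hedged sketch of the pipeline this abstract describes: binarize each scan at a candidate threshold, count white pixels, and check whether a two-cluster cut of the dendrogram separates the healthy and multiple-sclerosis groups (the linkage choice is an assumption):

```python
# Sketch: threshold sweep with dendrogram-based cluster separation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def white_pixel_counts(images, thr):
    return np.array([np.count_nonzero(img > thr) for img in images])

def separates_groups(images, labels, thr):
    """labels: 0 for healthy, 1 for MS. True if the two-cluster cut of
    the dendrogram reproduces the group labels at threshold thr."""
    labels = np.asarray(labels)
    counts = white_pixel_counts(images, thr).reshape(-1, 1).astype(float)
    z = linkage(counts, method="average")
    assigned = fcluster(z, t=2, criterion="maxclust") - 1
    match = np.mean(assigned == labels)
    return max(match, 1.0 - match) == 1.0  # perfect separation either way
```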
Color Vision Losses in Autism Spectrum Disorders
Zachi, Elaine C.; Costa, Thiago L.; Barboni, Mirella T. S.; Costa, Marcelo F.; Bonci, Daniela M. O.; Ventura, Dora F.
2017-01-01
Autism spectrum disorders (ASDs) are neurodevelopmental conditions characterized by impairments in social/communication abilities and restricted behaviors. The present study aims to examine color vision discrimination in ASD children and adolescents without intellectual disability. The participants were also subdivided in order to compare the color vision thresholds of autistic participants and those who met diagnostic criteria for Asperger Syndrome (AS). Nine subjects with autism, 11 participants with AS and 36 typically developing children and adolescents participated in the study. Color vision was assessed by the Cambridge Color Test (CCT). The Trivector protocol was administered to determine color discrimination thresholds along the protan, deutan, and tritan color confusion lines. Data from ASD participants were compared to tolerance limits for 90% of the population with 90% probability, obtained from control thresholds. Of the 20 ASD individuals examined, 6 (30%) showed color vision losses. Elevated color discrimination thresholds were found in 3/9 participants with autism and in 3/11 AS participants. Diffuse and tritan deficits were found. Mechanisms for chromatic losses may be either at the retinal level and/or reflect reduced cortical integration. PMID:28713324
Interlaminar shear fracture toughness and fatigue thresholds for composite materials
NASA Technical Reports Server (NTRS)
Obrien, T. Kevin; Murri, Gretchen B.; Salpekar, Satish A.
1987-01-01
Static and cyclic end-notched flexure tests were conducted on a graphite/epoxy, a glass/epoxy, and a graphite thermoplastic to determine their interlaminar shear fracture toughness and fatigue thresholds for delamination in terms of limiting values of the mode II strain energy release rate, G-II, for delamination growth. The influence of precracking and data-reduction schemes is discussed. Finite element analysis indicated that the beam theory calculation for G-II, with the transverse shear contribution included, was reasonably accurate over the entire range of crack lengths. Cyclic loading significantly reduced the critical G-II for delamination. A threshold value of the maximum cyclic G-II below which no delamination occurred after one million cycles was identified for each material. Also, residual static toughness tests were conducted on glass/epoxy specimens that had undergone one million cycles without delamination. A linear mixed-mode delamination criterion was used to characterize the static toughness of several composite materials; however, a total-G threshold criterion appears to characterize the fatigue delamination durability of composite materials with a wide range of static toughnesses.
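For reference, a commonly quoted beam-theory expression for G-II in the end-notched flexure (ENF) test is shown below, without the transverse-shear correction this work adds (P: applied load, C: compliance, a: crack length, b: specimen width, 2L: support span):

```latex
G_{II} = \frac{9\,P^{2}\,C\,a^{2}}{2\,b\,\left(2L^{3} + 3a^{3}\right)}
```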
D'Armini, Andrea M; Ghofrani, Hossein-Ardeschir; Kim, Nick H; Mayer, Eckhard; Morsolini, Marco; Pulido-Zamudio, Tomás; Simonneau, Gerald; Wilkins, Martin R; Curram, John; Davie, Neil; Hoeper, Marius M
2015-03-01
In the Chronic Thromboembolic Pulmonary Hypertension Soluble Guanylate Cyclase - Stimulator Trial 1 (CHEST-1) study, riociguat improved 6-minute walking distance (6MWD) vs placebo in patients with inoperable chronic thromboembolic pulmonary hypertension or persistent/recurrent pulmonary hypertension after pulmonary endarterectomy. In this study, the proportion of patients who achieved responder thresholds that correlate with improved outcome in patients with pulmonary arterial hypertension was determined at baseline and at the end of CHEST-1. Patients received placebo or riociguat individually adjusted up to 2.5 mg 3 times a day for 16 weeks. Response criteria were defined as follows: 6MWD increase ≥40 m, 6MWD ≥380 m, cardiac index ≥2.5 liters/min/m(2), pulmonary vascular resistance <500 dyn∙sec∙cm(-5), mixed venous oxygen saturation ≥65%, World Health Organization functional class I/II, N-terminal pro-brain natriuretic peptide <1,800 pg/ml, and right atrial pressure <8 mm Hg. Riociguat increased the proportion of patients with 6MWD ≥380 m, World Health Organization functional class I/II, and pulmonary vascular resistance <500 dyn∙sec∙cm(-5) from 37%, 34%, and 25% at baseline to 58%, 57%, and 50% at Week 16, whereas there was little change in placebo-treated patients (6MWD ≥380 m, 43% vs 44%; World Health Organization functional class I/II, 29% vs 38%; pulmonary vascular resistance <500 dyn∙sec∙cm(-5), 27% vs 26%). Similar changes were observed for thresholds for cardiac index, mixed venous oxygen saturation, N-terminal pro-brain natriuretic peptide, and right atrial pressure. In this exploratory analysis, riociguat increased the proportion of patients with inoperable chronic thromboembolic pulmonary hypertension or persistent/recurrent pulmonary hypertension after pulmonary endarterectomy achieving criteria defining a positive response to therapy. Copyright © 2015 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.
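A minimal sketch encoding the responder thresholds listed above as checks on a patient record; the field names are assumptions for illustration, not taken from the study database.

```python
# Sketch: per-criterion responder status for one patient record.
def responder_status(p):
    """p: dict of post-treatment measurements for one patient."""
    return {
        "6MWD increase >= 40 m":   p["delta_6mwd_m"] >= 40,
        "6MWD >= 380 m":           p["six_mwd_m"] >= 380,
        "cardiac index >= 2.5":    p["cardiac_index"] >= 2.5,  # L/min/m^2
        "PVR < 500 dyn.s.cm^-5":   p["pvr"] < 500,
        "SvO2 >= 65%":             p["svo2_pct"] >= 65,
        "WHO FC I/II":             p["who_fc"] <= 2,
        "NT-proBNP < 1800 pg/ml":  p["nt_probnp"] < 1800,
        "RAP < 8 mmHg":            p["rap_mmhg"] < 8,
    }
```

The study reports the proportion of patients meeting each criterion separately, so the per-criterion dictionary mirrors that analysis rather than collapsing to a single responder flag.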
Gayoso-Diz, Pilar; Otero-González, Alfonso; Rodriguez-Alvarez, María Xosé; Gude, Francisco; García, Fernando; De Francisco, Angel; Quintela, Arturo González
2013-10-16
Insulin resistance has been associated with metabolic and hemodynamic alterations and higher cardiometabolic risk. There is great variability in the threshold homeostasis model assessment of insulin resistance (HOMA-IR) levels used to define insulin resistance. The purpose of this study was to describe the influence of age and gender on the estimation of HOMA-IR optimal cut-off values to identify subjects with higher cardiometabolic risk in a general adult population. It included 2459 adults (range 20-92 years, 58.4% women) in a random Spanish population sample. As an accurate indicator of cardiometabolic risk, Metabolic Syndrome (MetS), defined both by International Diabetes Federation criteria and by Adult Treatment Panel III criteria, was used. The effect of age was analyzed separately in individuals with and without diabetes mellitus. ROC regression methodology was used to evaluate the effect of age on HOMA-IR performance in classifying cardiometabolic risk. In the Spanish population the threshold value of HOMA-IR drops from 3.46, using the 90th percentile criterion, to 2.05 when the MetS components are taken into account. In non-diabetic women, but not in men, we found a significant non-linear effect of age on the accuracy of HOMA-IR. In non-diabetic men, the cut-off value was 1.85. All values lie between the 70th and 75th percentiles of HOMA-IR levels in the adult Spanish population. Considering cardiometabolic risk when establishing the cut-off points of HOMA-IR to define insulin resistance, instead of using a percentile of the population distribution, would increase its clinical utility in identifying those patients in whom the presence of multiple metabolic risk factors imparts increased metabolic and cardiovascular risk. The threshold levels must be modified by age in non-diabetic women.
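A hedged sketch: HOMA-IR from the standard formula, with an outcome-anchored cut-off chosen by the Youden index against MetS status. This is one common way to derive such a threshold; the paper's ROC regression with age and gender effects is not reproduced here.

```python
# Sketch: HOMA-IR and a Youden-index cut-off against MetS status.
import numpy as np

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    # Standard HOMA-IR formula.
    return glucose_mmol_l * insulin_uU_ml / 22.5

def youden_cutoff(scores, has_mets):
    """scores: HOMA-IR values; has_mets: boolean MetS status per subject."""
    scores = np.asarray(scores)
    has_mets = np.asarray(has_mets, dtype=bool)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        sens = np.mean(scores[has_mets] >= t)
        spec = np.mean(scores[~has_mets] < t)
        j = sens + spec - 1.0  # Youden index
        if j > best_j:
            best_t, best_j = t, j
    return best_t
```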
Recursive feature selection with significant variables of support vectors.
Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh
2012-01-01
The development of DNA microarrays allows researchers to screen thousands of genes simultaneously and helps determine high- and low-expression genes in normal and diseased tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold for selecting genes. However, the parameter setting may not be compatible with the selected classification algorithm. In this paper, we propose a new gene selection method (SVM-t) based on the use of t-statistics embedded in a support vector machine. We compared its performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM, and capable of attaining good classification performance when the variations of informative and noninformative genes differ. In the analysis of the two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
Psychophysics with children: Investigating the effects of attentional lapses on threshold estimates.
Manning, Catherine; Jones, Pete R; Dekker, Tessa M; Pellicano, Elizabeth
2018-03-26
When assessing the perceptual abilities of children, researchers tend to use psychophysical techniques designed for use with adults. However, children's poorer attentiveness might bias the threshold estimates obtained by these methods. Here, we obtained speed discrimination threshold estimates in 6- to 7-year-old children in UK Key Stage 1 (KS1), 7- to 9-year-old children in Key Stage 2 (KS2), and adults using three psychophysical procedures: QUEST, a 1-up 2-down Levitt staircase, and Method of Constant Stimuli (MCS). We estimated inattentiveness using responses to "easy" catch trials. As expected, children had higher threshold estimates and made more errors on catch trials than adults. Lower threshold estimates were obtained from psychometric functions fit to the data in the QUEST condition than the MCS and Levitt staircases, and the threshold estimates obtained when fitting a psychometric function to the QUEST data were also lower than when using the QUEST mode. This suggests that threshold estimates cannot be compared directly across methods. Differences between the procedures did not vary significantly with age group. Simulations indicated that inattentiveness biased threshold estimates particularly when threshold estimates were computed as the QUEST mode or the average of staircase reversals. In contrast, thresholds estimated by post-hoc psychometric function fitting were less biased by attentional lapses. Our results suggest that some psychophysical methods are more robust to attentiveness, which has important implications for assessing the perception of children and clinical groups.
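A minimal sketch of the post-hoc psychometric-function fit that the simulations found least biased by lapses: a cumulative Gaussian with the guess rate fixed by the task (0.5 assumed here for a two-alternative design) and a free lapse-rate parameter, fitted by maximum likelihood.

```python
# Sketch: maximum-likelihood psychometric fit with a lapse parameter.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(x, n_correct, n_total, guess=0.5):
    """x: stimulus levels; n_correct/n_total: binomial counts per level."""
    def nll(params):
        mu, sigma, lapse = params
        if sigma <= 0 or not (0 <= lapse < 0.5):
            return np.inf
        p = guess + (1 - guess - lapse) * norm.cdf(x, mu, sigma)
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(n_correct * np.log(p) +
                       (n_total - n_correct) * np.log(1 - p))

    res = minimize(nll, x0=[np.median(x), np.std(x), 0.02],
                   method="Nelder-Mead")
    return res.x  # threshold (mu), slope (sigma), lapse rate
```

Allowing the lapse rate to absorb errors on easy trials is what keeps the threshold estimate from being inflated by inattentiveness, consistent with the study's simulation results.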
Livingston, Kara A.; Chung, Mei; Sawicki, Caleigh M.; Lyle, Barbara J.; Wang, Ding Ding; Roberts, Susan B.; McKeown, Nicola M.
2016-01-01
Background Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criteria that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Objectives (1) Develop a database listing studies testing fiber and physiological health outcomes identified by experts at the Ninth Vahouny Conference; (2) Use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers to evaluate evidence linking specific fibers with physiological health outcomes, and identify missing information. Methods To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications then were reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. Conclusions A fiber database was created. This resource will reduce the unnecessary replication of effort in conducting systematic reviews by serving as both a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which this data can be extracted and updated. PMID:27348733
Prediction of Fracture Initiation in Hot Compression of Burn-Resistant Ti-35V-15Cr-0.3Si-0.1C Alloy
NASA Astrophysics Data System (ADS)
Zhang, Saifei; Zeng, Weidong; Zhou, Dadi; Lai, Yunjin
2015-11-01
An important concern in hot working of metals is whether the desired deformation can be accomplished without fracture of the material. This paper builds a model to predict fracture initiation in hot compression of a burn-resistant beta-stabilized titanium alloy, Ti-35V-15Cr-0.3Si-0.1C, using a combined approach of upsetting experiments, theoretical failure criteria and finite element (FE) simulation techniques. A series of isothermal compression experiments on cylindrical specimens was first conducted over the temperature range 900-1150 °C and strain rate range 0.01-10 s-1 to obtain fracture samples and primary reduction data. On this basis, eight commonly used theoretical failure criteria were compared; the Oh criterion was selected and coded into a subroutine. FE simulation of the upsetting experiments on cylindrical specimens was then performed to determine the fracture threshold values of the Oh criterion. By correlating the threshold values with the deformation parameters (temperature and strain rate, or the Zener-Hollomon parameter), a new fracture prediction model based on the Oh criterion was established. The new model shows an exponential decay relationship between threshold values and the Zener-Hollomon parameter (Z), and the relative error of the model is less than 15%. This model was then applied successfully to the cogging of Ti-35V-15Cr-0.3Si-0.1C billet.
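The Oh criterion accumulates damage as the integral of the ratio of maximum principal stress to effective stress over equivalent plastic strain; the sketch below uses placeholder stress histories, where an FE code would supply the real ones at each integration point:

```python
# Hedged sketch of the Oh failure criterion: C = integral of
# (sigma_max / sigma_eff) d(eps_bar). Stress histories are assumptions.
import numpy as np

def oh_damage(eps_bar, sigma_max, sigma_eff):
    return np.trapz(sigma_max / sigma_eff, eps_bar)

eps_bar   = np.linspace(0.0, 0.6, 200)          # equivalent plastic strain
sigma_eff = 180 * (eps_bar + 0.02) ** 0.1       # flow stress (MPa), assumed
sigma_max = 0.7 * sigma_eff                     # assumed stress-state ratio
print(f"Oh damage value C = {oh_damage(eps_bar, sigma_max, sigma_eff):.3f}")
# Fracture is predicted where C exceeds the threshold calibrated from the
# upsetting tests, e.g. as a function of the Zener-Hollomon parameter Z.
```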
Petry, Nancy M.; Blanco, Carlos; Jin, Chelsea; Grant, Bridget F.
2015-01-01
The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) eliminates the committing illegal acts criterion and reduces the threshold for a diagnosis of gambling disorder to four of nine criteria. This study compared the DSM-5 “4 of 9” classification system to the “5 of 10” DSM-IV system, as well as other permutations (i.e., just lowering the threshold to four criteria or just eliminating the illegal acts criterion) in 43,093 respondents to the National Epidemiological Survey of Alcohol and Related Conditions. Subgroups were analyzed to ascertain whether the changes would differentially impact diagnoses based on gender, age or race/ethnicity. In the full sample and each subpopulation, prevalence rates were higher when the DSM-5 classification system was employed relative to the DSM-IV system, but the hit rate between the two systems ranged from 99.80% to 99.96%. Across all gender, age and racial/ethnic subgroups, specificity was greater than 99% when the DSM-5 system was employed relative to the DSM-IV system, and sensitivity was 100%. Results from this study suggest that eliminating the illegal acts criterion has little impact on diagnosis of gambling disorder, but lowering the threshold for diagnosis does increase the base rate in the general population and each subgroup, even though overall rates remain low and sensitivity and specificity are high. PMID:24588275
Can quantitative sensory testing predict responses to analgesic treatment?
Grosen, K; Fischer, I W D; Olesen, A E; Drewes, A M
2013-10-01
The role of quantitative sensory testing (QST) in predicting analgesic effect in humans has scarcely been investigated. This updated review assesses the effectiveness of QST in predicting analgesic effects in healthy volunteers, surgical patients and patients with chronic pain. A systematic review of English-language, peer-reviewed articles was conducted using PubMed and Embase (1980-2013). Additional studies were identified by chain searching. Search terms included 'quantitative sensory testing', 'sensory testing' and 'analgesics'. Studies on the relationship between QST and response to analgesic treatment in human adults were included. Appraisal of the methodological quality of the included studies was based on evaluative criteria for prognostic studies. Fourteen studies (including 720 individuals) met the inclusion criteria. Significant correlations were observed between responses to analgesics and several QST parameters including (1) heat pain threshold in experimental human pain, (2) electrical and heat pain thresholds, pressure pain tolerance and suprathreshold heat pain in surgical patients, and (3) electrical and heat pain threshold and conditioned pain modulation in patients with chronic pain. Heterogeneity among studies was observed, especially with regard to the application of QST and the type and use of analgesics. Although promising, the current evidence is not sufficiently robust to recommend the use of any specific QST parameter in predicting analgesic response. Future studies should focus on a range of different experimental pain modalities rather than a single static pain stimulation paradigm. © 2013 European Federation of International Association for the Study of Pain Chapters.
Vestibular evoked myogenic potentials (VEMP) can detect asymptomatic saccular hydrops.
Lin, Ming-Yee; Timmer, Ferdinand C A; Oriel, Brad S; Zhou, Guangwei; Guinan, John J; Kujawa, Sharon G; Herrmann, Barbara S; Merchant, Saumil N; Rauch, Steven D
2006-06-01
The objective of this study was to explore the usefulness of vestibular evoked myogenic potential (VEMP) testing for detecting endolymphatic hydrops, especially in the second ear of patients with unilateral Ménière disease (MD). This study was performed at a tertiary care academic medical center. Part I consisted of postmortem temporal bone specimens from the temporal bone collection of the Massachusetts Eye & Ear Infirmary; part II consisted of consecutive consenting adult patients (n = 82) with unilateral MD according to American Academy of Otolaryngology-Head and Neck Surgery criteria. Outcome measures consisted of VEMP thresholds in patients and histologic saccular endolymphatic hydrops in postmortem temporal bones. Saccular hydrops was observed in the asymptomatic ear in six of 17 (35%) temporal bones from donors with unilateral MD. Clinic patients with unilateral MD showed elevated mean VEMP thresholds and altered VEMP tuning in their symptomatic ears and, to a lesser degree, in their asymptomatic ears. Specific VEMP frequency and tuning criteria were used to define a "Ménière-like" response. This "Ménière-like" response was seen in 27% of asymptomatic ears of our patients with unilateral MD. Bilateral involvement is seen in approximately one third of MD cases. Saccular hydrops appears to precede symptoms in bilateral MD. Changes in VEMP threshold and tuning appear to be sensitive to these structural changes in the saccule. If so, then VEMP may be useful as a detector of asymptomatic saccular hydrops and as a predictor of evolving bilateral MD.
de Kleijn, Jasper L; van Kalmthout, Ludwike W M; van der Vossen, Martijn J B; Vonck, Bernard M D; Topsakal, Vedat; Bruijnzeel, Hanneke
2018-05-24
Although current guidelines recommend cochlear implantation only for children with profound hearing impairment (HI) (>90 decibel [dB] hearing level [HL]), studies show that children with severe hearing impairment (>70-90 dB HL) could also benefit from cochlear implantation. To perform a systematic review to identify audiologic thresholds (in dB HL) that could serve as an audiologic candidacy criterion for pediatric cochlear implantation using 4 domains of speech and language development as independent outcome measures (speech production, speech perception, receptive language, and auditory performance). PubMed and Embase databases were searched up to June 28, 2017, to identify studies comparing speech and language development between children who were profoundly deaf using cochlear implants and children with severe hearing loss using hearing aids, because no studies are available directly comparing children with severe HI in both groups. If cochlear implant users with profound HI score better on speech and language tests than those with severe HI who use hearing aids, this outcome could support adjusting cochlear implantation candidacy criteria to lower audiologic thresholds. Literature search, screening, and article selection were performed using a predefined strategy. Article screening was executed independently by 4 authors in 2 pairs; consensus on article inclusion was reached by discussion between these 4 authors. This study is reported according to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement. Title and abstract screening of 2822 articles resulted in selection of 130 articles for full-text review. Twenty-one studies were selected for critical appraisal, resulting in selection of 10 articles for data extraction. Two studies formulated audiologic thresholds (in dB HLs) at which children could qualify for cochlear implantation: (1) at 4-frequency pure-tone average (PTA) thresholds of 80 dB HL or greater based on speech perception and auditory performance subtests and (2) at PTA thresholds of 88 and 96 dB HL based on a speech perception subtest. In 8 of the 18 outcome measures, children with profound HI using cochlear implants performed similarly to children with severe HI using hearing aids. Better performance of cochlear implant users was shown with a picture-naming test and a speech perception in noise test. Owing to large heterogeneity in study population and selected tests, it was not possible to conduct a meta-analysis. Studies indicate that lower audiologic thresholds (≥80 dB HL) than are advised in current national and manufacturer guidelines would be appropriate as audiologic candidacy criteria for pediatric cochlear implantation.
Cost-effectiveness thresholds: methods for setting and examples from around the world.
Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano
2018-06-01
Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect followed by a complementary search of references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. An explicit threshold has never been formally adopted in most places. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by research of funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.
A study of the threshold method utilizing raingage data
NASA Technical Reports Server (NTRS)
Short, David A.; Wolff, David B.; Rosenfeld, Daniel; Atlas, David
1993-01-01
The threshold method for estimation of area-average rain rate relies on determination of the fractional area where rain rate exceeds a preset level of intensity. Previous studies have shown that the optimal threshold level depends on the climatological rain-rate distribution (RRD). It has also been noted, however, that the climatological RRD may be composed of an aggregate of distributions, one for each of several distinctly different synoptic conditions, each having its own optimal threshold. In this study, the impact of RRD variations on the threshold method is shown in an analysis of 1-min rain-rate data from a network of tipping-bucket gauges in Darwin, Australia. Data are analyzed for two distinct regimes: the premonsoon environment, having isolated intense thunderstorms, and the active monsoon rains, having organized convective cell clusters that generate large areas of stratiform rain. It is found that a threshold of 10 mm/h results in the same threshold coefficient for both regimes, suggesting an alternative definition of the optimal threshold as that which is least sensitive to distribution variations. The observed behavior of the threshold coefficient is well simulated by assuming lognormal distributions with different scale parameters and the same shape parameter.
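A toy illustration of the threshold method under an assumed lognormal rain-rate distribution: calibrate the threshold coefficient S(tau) on one climatological sample, then estimate the areal mean of another regime from its exceedance fraction alone:

```python
# Sketch: areal mean rain rate ~ S(tau) * F(tau), with F(tau) the fraction
# of gauges above the threshold tau. Distribution parameters are invented.
import numpy as np

rng = np.random.default_rng(2)
tau = 10.0                                      # threshold (mm/h)

climate = rng.lognormal(mean=1.0, sigma=1.2, size=200_000)
S = climate.mean() / np.mean(climate > tau)     # threshold coefficient S(tau)

event = rng.lognormal(mean=1.4, sigma=1.2, size=50_000)   # a wetter regime
estimate = S * np.mean(event > tau)             # threshold-method estimate
print(f"estimated mean = {estimate:.2f} mm/h, true mean = {event.mean():.2f}")
# The estimate degrades as the regime departs from the calibration
# climatology unless tau is chosen to minimize that sensitivity.
```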
Mizumura, Sunao; Nishikawa, Kazuhiro; Murata, Akihiro; Yoshimura, Kosei; Ishii, Nobutomo; Kokubo, Tadashi; Morooka, Miyako; Kajiyama, Akiko; Terahara, Atsuro
2018-05-01
In Japan, the Southampton method for dopamine transporter (DAT) SPECT is widely used to quantitatively evaluate striatal radioactivity. The specific binding ratio (SBR) is the ratio of specific to non-specific binding observed after placing pentagonal striatal voxels of interest (VOIs) as references. Although the method can reduce the partial volume effect, the SBR may fluctuate due to the presence of low-count areas of cerebrospinal fluid (CSF), caused by brain atrophy, in the striatal VOIs. We examined the effect of excluding low-count voxels on SBR measurement. We retrospectively reviewed DAT imaging of 36 patients with parkinsonian syndromes performed after injection of 123I-FP-CIT. SPECT data were reconstructed using three conditions. We defined the CSF area in each SPECT image after segmenting the brain tissues. A merged image of gray and white matter images was constructed from each patient's magnetic resonance imaging (MRI) to create an idealized brain image that excluded the CSF fraction (MRI-mask method). We calculated the SBR and asymmetric index (AI) with the MRI-mask method for each reconstruction condition. We then calculated the mean and standard deviation (SD) of voxel RI counts in the reference VOI without the striatal VOIs in each image, and determined the SBR by excluding the low-count pixels (threshold method) using five thresholds: mean-0.0SD, mean-0.5SD, mean-1.0SD, mean-1.5SD, and mean-2.0SD. We also calculated the AIs from the SBRs measured using the threshold method. We examined the correlation among the SBRs of the threshold method, between the uncorrected SBRs and the SBRs of the MRI-mask method, and between the uncorrected AIs and the AIs of the MRI-mask method. The intraclass correlation coefficient indicated an extremely high correlation among the SBRs and among the AIs of the MRI-mask and threshold methods at thresholds between mean-2.0SD and mean-1.0SD, regardless of the reconstruction correction. The differences among the SBRs and the AIs of the two methods were smallest at thresholds between mean-2.0SD and mean-1.0SD. The SBR calculated using the threshold method was highly correlated with the MRI-SBR. These results suggest that the CSF correction of the threshold method is effective for the calculation of idealized SBR and AI values.
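A hedged sketch of the count-threshold step: reference-VOI voxels below mean - k*SD are treated as CSF and excluded before computing the ratio. The SBR expression here is a simplification for illustration, not the full Southampton implementation:

```python
# Sketch of low-count voxel exclusion before SBR calculation.
import numpy as np

def sbr_with_threshold(striatal_counts, reference_counts, k=1.5):
    ref = np.asarray(reference_counts, dtype=float)
    keep = ref >= ref.mean() - k * ref.std()      # drop low-count (CSF) voxels
    return striatal_counts.mean() / ref[keep].mean() - 1.0

rng = np.random.default_rng(3)
striatum  = rng.normal(500, 40, 2000)             # simulated striatal VOI
brain     = rng.normal(100, 10, 50_000)           # non-specific background
csf       = rng.normal(30, 8, 5_000)              # atrophy-related CSF voxels
reference = np.concatenate([brain, csf])
print(f"uncorrected SBR = {striatum.mean() / reference.mean() - 1:.2f}")
print(f"threshold SBR   = {sbr_with_threshold(striatum, reference):.2f}")
```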
NASA Astrophysics Data System (ADS)
Choi, Woo Young; Woo, Dong-Soo; Choi, Byung Yong; Lee, Jong Duk; Park, Byung-Gook
2004-04-01
We proposed a stable algorithm for extracting the threshold voltage with the transconductance change method by optimizing the node interval. With the algorithm, noise-free gm2 (=dgm/dVGS) profiles can be extracted within one-percent error, which leads to a more physically meaningful threshold voltage calculation by the transconductance change method. The extracted threshold voltage corresponds to the gate-to-source voltage at which the surface potential is within kT/q of φs = 2φf + VSB. Our algorithm makes the transconductance change method more practical by overcoming its noise problem. This threshold voltage extraction algorithm accurately yields the threshold roll-off behavior of nanoscale metal-oxide-semiconductor field-effect transistors (MOSFETs) and makes it possible to calculate the surface potential φs at any other point on the drain-to-source current (IDS) versus gate-to-source voltage (VGS) curve. It will provide a useful analysis tool in the field of device modeling, simulation and characterization.
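A sketch of the transconductance-change extraction on a synthetic I-V curve: Vth is read off where d(gm)/dVGS peaks. The smooth device model and the node interval h below are assumptions for illustration:

```python
# Sketch: gm' (transconductance change) threshold-voltage extraction.
import numpy as np

vgs = np.linspace(0.0, 1.5, 301)                 # gate sweep (V)
vth_true, ss = 0.6, 0.08
ids = 1e-4 * ss * np.log1p(np.exp((vgs - vth_true) / ss))  # smooth I-V model

h = 5                                            # node interval (samples)
gm  = (ids[2*h:] - ids[:-2*h]) / (vgs[2*h:] - vgs[:-2*h])  # central difference
v_g = vgs[h:-h]
gm2 = np.gradient(gm, v_g)                       # d(gm)/dVGS profile
print(f"extracted Vth = {v_g[np.argmax(gm2)]:.3f} V (true {vth_true} V)")
# A wider node interval h suppresses measurement noise in gm2 at the cost
# of smoothing the peak; the paper optimizes this trade-off explicitly.
```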
Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.
2003-01-01
A method was developed for defining thresholds for vibration-based algorithms that minimizes false alarms while maintaining sensitivity to gear damage. This analysis focused on two vibration-based gear damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig. The thresholds were defined based on damage progression during tests with damage. The thresholds' false-alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight to correctly classify the transmission operation as normal.
NASA Astrophysics Data System (ADS)
Zhong, Keyuan; Zheng, Fenli; Xu, Ximeng; Qin, Chao
2018-06-01
Different precipitation phases (rain, snow or sleet) differ greatly in their hydrological and erosional processes. Therefore, accurate discrimination of the precipitation phase is highly important when researching hydrologic processes and climate change at high latitudes and in mountainous regions. The objective of this study was to identify suitable temperature thresholds for discriminating the precipitation phase in the Songhua River Basin (SRB) based on 20 years of daily precipitation data collected from 60 meteorological stations located in and around the basin. Two methods, the air temperature method (AT method) and the wet bulb temperature method (WBT method), were used to discriminate the precipitation phase. Thirteen temperature thresholds were used to discriminate snowfall in the SRB: air temperatures from 0 to 5.5 °C at intervals of 0.5 °C, and the wet bulb temperature (WBT). Three evaluation indices, the error percentage of discriminated snowfall days (Ep), the relative error of discriminated snowfall (Re) and the determination coefficient (R2), were applied to assess the discrimination accuracy. The results showed that 2.5 °C was the optimum threshold temperature for discriminating snowfall at the scale of the entire basin. Due to differences in landscape conditions, the optimum threshold varied by station, ranging from 1.5 to 4.0 °C; 19, 17 and 18 stations had optimal thresholds of 2.5 °C, 3.0 °C and 3.5 °C respectively, together accounting for 90% of all stations. Compared with using a single basin-wide temperature threshold, using the optimum threshold at each station gave more accurate snowfall estimates for the basin. In addition, snowfall was underestimated when the threshold was the WBT or an air temperature below 2.5 °C, whereas snowfall was overestimated at most stations when the threshold exceeded 4.0 °C. The results of this study provide information for climate change research and hydrological process simulations in the SRB, as well as reference information for discriminating precipitation phase in other regions.
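A simulated sketch of this kind of threshold evaluation, assuming a logistic relation between air temperature and the observed phase (the study used station observations and also reports R2, omitted here):

```python
# Sketch: score candidate temperature thresholds with Ep and Re analogues.
import numpy as np

rng = np.random.default_rng(4)
temp = rng.normal(1.5, 3.0, 5000)                    # daily mean air temp (C)
precip = rng.gamma(2.0, 3.0, 5000)                   # daily precip (mm)
obs_snow = rng.random(5000) < 1 / (1 + np.exp(temp)) # "observed" phase, assumed

for t_c in [1.5, 2.5, 3.5]:
    pred_snow = temp <= t_c                          # AT-method classification
    ep = np.mean(pred_snow != obs_snow) * 100        # misclassified days (%)
    re = (precip[pred_snow].sum() - precip[obs_snow].sum()) \
         / precip[obs_snow].sum() * 100              # snowfall amount error (%)
    print(f"T_c={t_c:.1f}C  Ep={ep:5.1f}%  Re={re:+6.1f}%")
```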
Dynamic pattern recognition by means of threshold nets.
A method is expounded for the recognition of visual patterns. A circuit diagram of a device is described which is based on a multilayer threshold structure synthesized in accordance with the proposed method. Coded signals received each time an image is displayed are transmitted to the threshold circuit which distinguishes the signs, and from there to the layers of threshold resolving elements. The image at each layer is made to correspond…
NASA Astrophysics Data System (ADS)
Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.
2017-12-01
Rainfall thresholds play an important role in flash flood warning. A simple method for calculating a rainfall threshold from the Rational Equation was proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three components of rainfall loss were considered: depression storage, vegetation interception, and soil infiltration. The critical rainfall is the sum of the net rainfall and the rainfall losses. The rainfall threshold was then estimated from the critical rainfall after accounting for watershed soil moisture. To demonstrate the method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method closely approximated those obtained from CFFSE and were consistent with the rainfall observed during flash flood events; the calculated results are therefore reasonable and the method is effective. This study provides a quick and convenient way for grassroots staff to calculate rainfall thresholds for flash flood warning and offers technical support for rainfall threshold estimation.
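A hedged sketch of the inversion step, assuming the common metric form of the Rational Equation, Q = 0.278 i A, applied to net rainfall; the coefficients and losses are illustrative, not the study's values for the Zuojiao watershed:

```python
# Sketch: invert the Rational Equation at the critical flow, then add back
# estimated losses to obtain a critical rainfall threshold.
def rainfall_threshold(q_critical_m3s, area_km2, losses_mm):
    # Q (m^3/s) = 0.278 * i_net (mm/h) * A (km^2), so:
    i_net = q_critical_m3s / (0.278 * area_km2)   # net rainfall rate (mm/h)
    return i_net + losses_mm                      # critical rainfall (mm in 1 h)

# e.g. a critical flow of 85 m^3/s for a 120 km^2 watershed with 11.5 mm of
# combined interception + depression + infiltration losses (all invented):
print(rainfall_threshold(85, 120, 11.5))
```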
A threshold method for immunological correlates of protection
2013-01-01
Background Immunological correlates of protection are biological markers such as disease-specific antibodies which correlate with protection against disease and which are measurable with immunological assays. It is common in vaccine research and in setting immunization policy to rely on threshold values for the correlate where the accepted threshold differentiates between individuals who are considered to be protected against disease and those who are susceptible. Examples where thresholds are used include development of a new generation 13-valent pneumococcal conjugate vaccine which was required in clinical trials to meet accepted thresholds for the older 7-valent vaccine, and public health decision making on vaccination policy based on long-term maintenance of protective thresholds for Hepatitis A, rubella, measles, Japanese encephalitis and others. Despite widespread use of such thresholds in vaccine policy and research, few statistical approaches have been formally developed which specifically incorporate a threshold parameter in order to estimate the value of the protective threshold from data. Methods We propose a 3-parameter statistical model called the a:b model which incorporates parameters for a threshold and constant but different infection probabilities below and above the threshold estimated using profile likelihood or least squares methods. Evaluation of the estimated threshold can be performed by a significance test for the existence of a threshold using a modified likelihood ratio test which follows a chi-squared distribution with 3 degrees of freedom, and confidence intervals for the threshold can be obtained by bootstrapping. The model also permits assessment of relative risk of infection in patients achieving the threshold or not. Goodness-of-fit of the a:b model may be assessed using the Hosmer-Lemeshow approach. The model is applied to 15 datasets from published clinical trials on pertussis, respiratory syncytial virus and varicella. Results Highly significant thresholds with p-values less than 0.01 were found for 13 of the 15 datasets. Considerable variability was seen in the widths of confidence intervals. Relative risks indicated around 70% or better protection in 11 datasets and relevance of the estimated threshold to imply strong protection. Goodness-of-fit was generally acceptable. Conclusions The a:b model offers a formal statistical method of estimation of thresholds differentiating susceptible from protected individuals which has previously depended on putative statements based on visual inspection of data. PMID:23448322
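A sketch of the a:b idea on simulated data: infection probability is a constant a below the threshold and b above it, and the threshold is estimated by profiling the binomial log-likelihood over a grid (the paper's modified likelihood-ratio test and bootstrap intervals are omitted):

```python
# Sketch: profile-likelihood estimate of the protective threshold.
import numpy as np

rng = np.random.default_rng(5)
titer = rng.lognormal(2.0, 1.0, 400)               # simulated assay values
true_t, a, b = 10.0, 0.40, 0.08
infected = rng.random(400) < np.where(titer < true_t, a, b)

def loglik(t):
    ll = 0.0
    for grp in (infected[titer < t], infected[titer >= t]):
        if len(grp):
            p = np.clip(grp.mean(), 1e-9, 1 - 1e-9)   # binomial MLE per side
            ll += grp.sum() * np.log(p) + (len(grp) - grp.sum()) * np.log(1 - p)
    return ll

grid = np.quantile(titer, np.linspace(0.05, 0.95, 181))
t_hat = grid[np.argmax([loglik(t) for t in grid])]
print(f"estimated threshold = {t_hat:.1f} (true {true_t})")
```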
Acceptance criteria for urban dispersion model evaluation
NASA Astrophysics Data System (ADS)
Hanna, Steven; Chang, Joseph
2012-05-01
The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude <0.67 (i.e., the relative mean bias is less than a factor of 2); the normalized mean-square error should be <6 (i.e., the random scatter is less than about 2.4 times the mean); and the fraction of predictions that are within a factor of two of the observations (FAC2) should be >0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be <0.50, when the threshold is three times the instrument's limit of quantification (LOQ). An overall criterion is then applied that the total set of acceptance criteria should be satisfied in at least half of the field experiments. These acceptance criteria are applied to evaluations of the US Department of Defense's Joint Effects Model (JEM) with tracer data from US urban field experiments in Salt Lake City (U2000), Oklahoma City (JU2003), and Manhattan (MSG05 and MID05). JEM includes the SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance criteria are satisfied at over half of the field experiments.
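The four measures and the urban acceptance criteria quoted above are straightforward to compute for paired observed/predicted concentrations; the arrays below are placeholders, and note that NAD is intended for the spatially paired case with an LOQ-based threshold:

```python
# FB, NMSE, FAC2 and NAD with the paper's urban acceptance limits.
import numpy as np

def evaluate(c_obs, c_pred):
    co, cp = np.asarray(c_obs, float), np.asarray(c_pred, float)
    fb   = (co.mean() - cp.mean()) / (0.5 * (co.mean() + cp.mean()))
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())
    fac2 = np.mean((cp >= 0.5 * co) & (cp <= 2.0 * co))
    nad  = np.sum(np.abs(co - cp)) / np.sum(co + cp)
    ok = (abs(fb) < 0.67) and (nmse < 6) and (fac2 > 0.3) and (nad < 0.50)
    return fb, nmse, fac2, nad, ok

obs  = np.array([12.0, 3.4, 48.0, 7.1, 0.9])    # e.g. arc-max tracer conc.
pred = np.array([8.5, 5.0, 30.0, 9.8, 0.4])
print(evaluate(obs, pred))
```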
Farrar, Diane; Fairley, Lesley; Santorelli, Gillian; Tuffnell, Derek; Sheldon, Trevor A; Wright, John; van Overveld, Lydia; Lawlor, Debbie A
2015-01-01
Summary Background Diagnosis of gestational diabetes predicts risk of infants who are large for gestational age (LGA) and with high adiposity, which in turn is intended to predict a future risk of obesity in the offspring. South Asian women have higher risk of gestational diabetes, lower risk of LGA, and on average give birth to infants with greater adiposity than do white European women. Whether the same diagnostic criteria for gestational diabetes should apply to both groups of women is unclear. We aimed to assess the association between maternal glucose and adverse perinatal outcomes to ascertain whether thresholds used to diagnose gestational diabetes should differ between south Asian and white British women. We also aimed to assess whether ethnic origin affected prevalence of gestational diabetes irrespective of criteria used. Methods We used data (including results of a 26–28 week gestation oral glucose tolerance test) of women from the Born in Bradford study, a prospective study that recruited women attending the antenatal clinic at the Bradford Royal Infirmary, UK, between 2007 and 2011 and who intended to give birth to their infant in that hospital. We studied the association between fasting and 2 h post-load glucose and three primary outcomes (LGA [defined as birthweight >90th percentile for gestational age], high infant adiposity [sum of skinfolds >90th percentile for gestational age], and caesarean section). We calculated adjusted odds ratios (ORs) and their 95% confidence intervals (CIs) for a 1 SD increase in fasting and post-load glucose. We established fasting and post-load glucose thresholds that equated to an OR of 1·75 for LGA and high infant adiposity in each group of women to identify ethnic-specific criteria for diagnosis of gestational diabetes. Findings Of 13 773 pregnancies, 3420 were excluded from analyses. Of 10 353 eligible pregnancies, 4088 women were white British, 5408 were south Asian, and 857 were of other ethnic origin. The adjusted ORs of LGA per 1 SD fasting glucose were 1·22 (95% CI 1·08–1·38) in white British women and 1·43 (1·23–1·67) in south Asian women (p for interaction with ethnicity=0·39). Results for high infant adiposity were 1·35 (1·23–1·49) and 1·35 (1·18–1·54; p for interaction with ethnicity=0·98), and for caesarean section they were 1·06 (0·97–1·16) and 1·11 (1·02–1·20; p for interaction with ethnicity=0·47). Associations between post-load glucose and the three primary outcomes were weaker than for fasting glucose. A fasting glucose concentration of 5·4 mmol/L or a 2 h post-load level of 7·5 mmol/L identified white British women with 75% or higher relative risk of LGA or high infant adiposity; in south Asian women, the cutoffs were 5·2 mmol/L or 7·2 mmol/L; in the whole cohort, the cutoffs were 5·3 mmol/L or 7·5 mmol/L. The prevalence of gestational diabetes in our cohort ranged from 1·2% to 8·7% in white British women and 4% to 24% in south Asian women using six different criteria. Compared with the application of our whole-cohort criteria, use of our ethnic-specific criteria increased the prevalence of gestational diabetes in south Asian women from 17·4% (95% CI 16·4–18·4) to 24·2% (23·1–25·3). Interpretation Our data support the use of lower fasting and post-load glucose thresholds to diagnose gestational diabetes in south Asian than white British women.
They also suggest that diagnostic criteria for gestational diabetes recommended by UK NICE might underestimate the prevalence of gestational diabetes compared with our criteria or those recommended by the International Association of Diabetes and Pregnancy Study Groups and WHO, especially in south Asian women. Funding The National Institute for Health Research. PMID:26355010
Lesmes, Luis A.; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A.; Albright, Thomas D.
2015-01-01
Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold—the signal intensity corresponding to a pre-defined sensitivity level (d′ = 1)—in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focus stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks—(1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection—the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes less) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10–0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS), and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods. PMID:26300798
Methods for SBS Threshold Reduction
1994-01-30
We have investigated methods for reducing the threshold for stimulated Brillouin scattering (SBS) using a frequency-narrowed Cr,Tm,Ho:YAG laser operating at 2.12 micrometers. The SBS medium was carbon disulfide. Single-focus SBS and threshold reduction by using two foci, a loop, and a ring have…
The absolute threshold of cone vision
Koenig, Darran; Hofer, Heidi
2013-01-01
We report measurements of the absolute threshold of cone vision, which has been previously underestimated due to sub-optimal conditions or overly strict subjective response criteria. We avoided these limitations by using optimized stimuli and experimental conditions while having subjects respond within a rating scale framework. Small (1′ fwhm), brief (34 msec), monochromatic (550 nm) stimuli were foveally presented at multiple intensities in dark-adapted retina for 5 subjects. For comparison, 4 subjects underwent similar testing with rod-optimized stimuli. Cone absolute threshold, that is, the minimum light energy for which subjects were just able to detect a visual stimulus with any response criterion, was 203 ± 38 photons at the cornea, ∼0.47 log units lower than previously reported. Two-alternative forced-choice measurements in a subset of subjects yielded consistent results. Cone thresholds were less responsive to criterion changes than rod thresholds, suggesting a limit to the stimulus information recoverable from the cone mosaic in addition to the limit imposed by Poisson noise. Results were consistent with expectations for detection in the face of stimulus uncertainty. We discuss implications of these findings for modeling the first stages of human cone vision and interpreting psychophysical data acquired with adaptive optics at the spatial scale of the receptor mosaic. PMID:21270115
Soil quality standards and guidelines for forest sustainability in northwestern North America
Deborah Page-Dumroese; Martin Jurgensen; William Elliot; Thomas Rice; John Nesser; Thomas Collins; Robert Meurisse
2000-01-01
Soil quality standards and guidelines of the USDA Forest Service were some of the first in the world to be developed to evaluate changes in forest soil productivity and sustainability after harvesting and site preparation. International and national development of criteria and indicators for maintenance of soil productivity make it imperative to have adequate threshold...
32 CFR 3.8 - DoD access to records policy.
Code of Federal Regulations, 2013 CFR
2013-07-01
... to use an IPA: the business unit's name, address and the expected value of its award. When the clause... business unit that will perform the OT agreement, or a subawardee, meets the criteria for an audit pursuant... paragraph (c) of this section. The value establishing the threshold is the total value of the agreement...
32 CFR 3.8 - DoD access to records policy.
Code of Federal Regulations, 2014 CFR
2014-07-01
... to use an IPA: the business unit's name, address and the expected value of its award. When the clause... business unit that will perform the OT agreement, or a subawardee, meets the criteria for an audit pursuant... paragraph (c) of this section. The value establishing the threshold is the total value of the agreement...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-12
... schools. (3) A multi-year financial and operating model for the organization, a demonstrated commitment of... school model and to expand the number of high-quality charter schools available to students across the... percent threshold in this priority is consistent with the average percentage of students in large urban...
21 CFR 1313.22 - Contents of export declaration.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Contents of export declaration. 1313.22 Section... EXPORTATION OF LIST I AND LIST II CHEMICALS Exportation of Listed Chemicals § 1313.22 Contents of export... quantitative threshold criteria established in § 1310.04(f) of this chapter may be exported if that chemical is...
47 CFR 4.9 - Outage reporting requirements-threshold criteria.
Code of Federal Regulations, 2014 CFR
2014-10-01
... call load data for the same day(s) of the week and the same time(s) of day as the outage, and for a... where, for whatever reason, real-time and historic carried call load data are unavailable to the provider, even after a detailed investigation, the provider must determine the carried call load based on...
Ryder, Mark I.; Yao, Tzy-Jyun; Russell, Jonathan S.; Moscicki, Anna-Barbara; Shiboski, Caroline H.
2016-01-01
Aims To compare the prevalence and severity of periodontal diseases between 180 perinatally HIV-infected (PHIV) and 118 perinatally HIV-exposed and uninfected (PHEU) youth in a cross-sectional study conducted at 11 clinical sites in the United States and Puerto Rico from the Adolescent Master Protocol (AMP) study of the Pediatric HIV/AIDS Cohort Study (PHACS) network. Methods Several analyses were conducted, employing the current CDC/AAP classification for periodontitis and incorporating a definition of gingivitis based on a bleeding-on-probing threshold, along with analyses based on more detailed whole-mouth, intraoral regional, site-based, and tooth-based criteria of bleeding on probing, plaque levels, pocket depths and clinical attachment levels. Results After adjusting for plaque control habits, and behavioral and sociodemographic factors, there were no significant differences in periodontal diseases between the PHIV and PHEU youth using any of these criteria. For PHIV youth, there was no significant association between parameters of periodontal disease and current HIV status. Conclusions While no significant differences in periodontal parameters were noted between the PHIV and PHEU youth, the influence of antiretroviral therapy on periodontal disease merits further exploration in this cohort in a longitudinal study. PMID:27801947
Ugaz, Ana G; Boyd, C Trenton; Croft, Vicki F; Carrigan, Esther E; Anderson, Katherine M
2010-10-01
This paper presents the methods and results of a study designed to produce the third edition of the "Basic List of Veterinary Medical Serials," which was established by the Veterinary Medical Libraries Section in 1976 and last updated in 1986. A set of 238 titles were evaluated using a decision matrix in order to systematically assign points for both objective and subjective criteria and determine an overall score for each journal. Criteria included: coverage in four major indexes, scholarly impact rank as tracked in two sources, identification as a recommended journal in preparing for specialty board examinations, and a veterinary librarian survey rating. Of the 238 titles considered, a minimum scoring threshold determined the 123 (52%) journals that constituted the final list. The 36 subject categories represented on the list include general and specialty disciplines in veterinary medicine. A ranked list of journals and a list by subject category were produced. Serials appearing on the third edition of the "Basic List of Veterinary Medical Serials" met expanded objective measures of quality and impact as well as subjective perceptions of value by both librarians and veterinary practitioners.
Evaluating biomarkers for prognostic enrichment of clinical trials.
Kerr, Kathleen F; Roth, Jeremy; Zhu, Kehao; Thiessen-Philbrook, Heather; Meisner, Allison; Wilson, Francis Perry; Coca, Steven; Parikh, Chirag R
2017-12-01
A potential use of biomarkers is to assist in prognostic enrichment of clinical trials, where only patients at relatively higher risk for an outcome of interest are eligible for the trial. We investigated methods for evaluating biomarkers for prognostic enrichment. We identified five key considerations when evaluating a biomarker and a screening threshold for prognostic enrichment: (1) clinical trial sample size, (2) calendar time to enroll the trial, (3) total patient screening costs and total per-patient trial costs, (4) generalizability of trial results, and (5) ethical evaluation of trial eligibility criteria. Items (1)-(3) are amenable to quantitative analysis. We developed the Biomarker Prognostic Enrichment Tool for evaluating biomarkers for prognostic enrichment at varying levels of screening stringency. We demonstrate that both modestly prognostic and strongly prognostic biomarkers can improve trial metrics using Biomarker Prognostic Enrichment Tool. Biomarker Prognostic Enrichment Tool is available as a webtool at http://prognosticenrichment.com and as a package for the R statistical computing platform. In some clinical settings, even biomarkers with modest prognostic performance can be useful for prognostic enrichment. In addition to the quantitative analysis provided by Biomarker Prognostic Enrichment Tool, investigators must consider the generalizability of trial results and evaluate the ethics of trial eligibility criteria.
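Illustrative arithmetic behind consideration (1): enrolling only biomarker-positive patients raises the event rate, which shrinks the trial but inflates the number screened. The standard two-proportion sample-size formula is used and all rates are hypothetical; this is not the BioPET code:

```python
# Sketch: trial size vs. screening burden under prognostic enrichment.
import math

def two_prop_n(p0, rrr, z_alpha=1.96, z_power=0.84):
    p1 = p0 * (1 - rrr)                       # treated-arm event rate
    pbar = (p0 + p1) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_power * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return math.ceil(num / (p0 - p1) ** 2)    # per-arm sample size

rrr = 0.30                                    # assumed relative risk reduction
# (fraction passing screen, enriched control-arm event rate) -- invented:
for frac_in, enriched_rate in [(1.00, 0.10), (0.50, 0.16), (0.25, 0.24)]:
    n = two_prop_n(enriched_rate, rrr)
    screened = math.ceil(2 * n / frac_in)
    print(f"enroll top {frac_in:4.0%}: n/arm={n:5d}, screened={screened}")
```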
Anti-collusion forensics of multimedia fingerprinting using orthogonal modulation.
Wang, Z Jane; Wu, Min; Zhao, Hong Vicky; Trappe, Wade; Liu, K J Ray
2005-06-01
Digital fingerprinting is a method for protecting digital data in which fingerprints that are embedded in multimedia are capable of identifying unauthorized use of digital content. A powerful attack that can be employed to reduce this tracing capability is collusion, where several users combine their copies of the same content to attenuate/remove the original fingerprints. In this paper, we study the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints and orthogonal modulation. We introduce the maximum detector and the thresholding detector for colluder identification. We then analyze the collusion resistance of a system to the averaging collusion attack for the performance criteria represented by the probability of a false negative and the probability of a false positive. Lower and upper bounds for the maximum number of colluders K(max) are derived. We then show that the detectors are robust to different collusion attacks. We further study different sets of performance criteria, and our results indicate that attacks based on a few dozen independent copies can confound such a fingerprinting system. We also propose a likelihood-based approach to estimate the number of colluders. Finally, we demonstrate the performance for detecting colluders through experiments using real images.
TAP score: torsion angle propensity normalization applied to local protein structure evaluation
Tosatto, Silvio CE; Battistutta, Roberto
2007-01-01
Background Experimentally determined protein structures may contain errors and require validation. Conformational criteria based on the Ramachandran plot are mainly used to distinguish between distorted and adequately refined models. While the readily available criteria are sufficient to detect totally wrong structures, establishing the more subtle differences between plausible structures remains more challenging. Results A new criterion, called the TAP score, measuring local sequence-to-structure fitness based on torsion angle propensities normalized against the global minimum and maximum, is introduced. It is shown to be more accurate than previous methods at estimating the validity of a protein model in terms of commonly used experimental quality parameters on two test sets representing the full PDB database and a subset of obsolete PDB structures. Highly selective TAP thresholds are derived to recognize over 90% of the top experimental structures in the absence of experimental information. Both a web server and an executable version of the TAP score are available online. Conclusion A novel procedure for energy normalization (TAP) has significantly improved the possibility to recognize the best experimental structures. It will allow the user to more reliably isolate problematic structures in the context of automated experimental structure determination. PMID:17504537
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on the Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes: foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
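A compact sketch of the iteration just described, with a numpy-only Otsu; array handling is simplified relative to the paper:

```python
# Sketch: iterative triclass thresholding built on Otsu's method.
import numpy as np

def otsu(values, bins=256):
    hist, edges = np.histogram(values, bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist); w1 = w0[-1] - w0
    m0 = np.cumsum(hist * mids)
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    return mids[np.argmax(w0 * w1 * (mu0 - mu1) ** 2)]  # max between-class var

def triclass_segment(img, eps=1e-3):
    flat = img.ravel().astype(float)
    idx = np.arange(flat.size)          # indices of the current TBD region
    fg = np.zeros(flat.size, bool)
    t_prev = np.inf
    while idx.size > 1:
        vals = flat[idx]
        t = otsu(vals)
        mu_bg = vals[vals <= t].mean()  # mean of tentative background class
        mu_fg = vals[vals > t].mean()   # mean of tentative foreground class
        fg[idx[vals > mu_fg]] = True    # pixels above upper mean: foreground
        if abs(t - t_prev) < eps:
            fg[idx[vals > t]] = True    # final binary split of last TBD band
            break
        t_prev = t
        idx = idx[(vals >= mu_bg) & (vals <= mu_fg)]  # smaller TBD band
    return fg.reshape(img.shape)

rng = np.random.default_rng(7)
img = np.concatenate([rng.normal(60, 12, 4000), rng.normal(150, 25, 1000)])
mask = triclass_segment(img.reshape(50, 100))
print(mask.mean())                      # fraction labeled foreground
```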
Roman, Sabine; Pandolfino, John E; Chen, Joan; Boris, Lubomyr; Luger, Daniel; Kahrilas, Peter J
2013-01-01
Background & Aims This study aimed to refine the criteria for esophageal hypercontractility in high-resolution esophageal pressure topography (EPT) and examine the clinical context in which it occurs. Subjects & Methods 72 control subjects were used to define the threshold for hypercontractility as a distal contractile integral (DCI) greater than observed in normals. 2,000 consecutive EPT studies were reviewed to find patients exceeding this threshold. Concomitant EPT and clinical variables were explored. Results The greatest DCI value observed in any swallow among the control subjects was 7,732 mmHg-s-cm; the threshold for hypercontractility was established as a swallow with DCI >8,000 mmHg-s-cm. 44 patients were identified with a median maximal DCI of 11,077 mmHg-s-cm, all with normal contractile propagation and normal distal contractile latency, thereby excluding achalasia and distal esophageal spasm. Hypercontractility was associated with multipeaked contractions in 82% of instances, leading to the name Jackhammer Esophagus. Dysphagia was the dominant symptom, although subsets of patients had hypercontractility in the context of EGJ outflow obstruction, reflux disease, or as an apparent primary motility disorder. Conclusion We describe an extreme phenotype of hypercontractility characterized in EPT by the occurrence of at least a single contraction with DCI > 8,000 mmHg-s-cm, a value not encountered in control subjects. This phenomenon, branded Jackhammer Esophagus, was usually accompanied by dysphagia and occurred both in association with other esophageal pathology (EGJ outflow obstruction, reflux disease) and as an isolated motility disturbance. Further studies are required to define the pathophysiology and treatment of this disorder. PMID:21931377
Lin, Daniel W; Crawford, E David; Keane, Thomas; Evans, Brent; Reid, Julia; Rajamani, Saradha; Brown, Krystal; Gutin, Alexander; Tward, Jonathan; Scardino, Peter; Brawer, Michael; Stone, Steven; Cuzick, Jack
2018-06-01
A combined clinical cell-cycle risk (CCR) score that incorporates prognostic molecular and clinical information has been recently developed and validated to improve prostate cancer mortality (PCM) risk stratification over clinical features alone. As clinical features are currently used to select men for active surveillance (AS), we developed and validated a CCR score threshold to improve the identification of men with low-risk disease who are appropriate for AS. The score threshold was selected based on the 90th percentile of CCR scores among men who might typically be considered for AS based on NCCN low/favorable-intermediate risk criteria (CCR = 0.8). The threshold was validated using 10-year PCM in an unselected, conservatively managed cohort and in the subset of the same cohort after excluding men with high-risk features. The clinical effect was evaluated in a contemporary clinical cohort. In the unselected validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.7%, and the threshold significantly dichotomized low- and high-risk disease (P = 1.2 × 10⁻⁵). After excluding high-risk men from the validation cohort, men with CCR scores below the threshold had a predicted mean 10-year PCM of 2.3%, and the threshold significantly dichotomized low- and high-risk disease (P = 0.020). There were no prostate cancer-specific deaths in men with CCR scores below the threshold in either analysis. The proportion of men in the clinical testing cohort identified as candidates for AS was substantially higher using the threshold (68.8%) compared to clinicopathologic features alone (42.6%), while mean 10-year predicted PCM risks remained essentially identical (1.9% vs. 2.0%, respectively). The CCR score threshold appropriately dichotomized patients into low- and high-risk groups for 10-year PCM, and may enable more appropriate selection of patients for AS. Copyright © 2018 Elsevier Inc. All rights reserved.
Djulbegovic, Benjamin; van den Ende, Jef; Hamm, Robert M; Mayrhofer, Thomas; Hozo, Iztok; Pauker, Stephen G
2015-05-01
The threshold model represents an important advance in the field of medical decision-making. It is a linchpin between evidence (which exists on the continuum of credibility) and decision-making (which is a categorical exercise - we decide to act or not act). The threshold concept is closely related to the question of rational decision-making. When should the physician act, that is order a diagnostic test, or prescribe treatment? The threshold model embodies the decision theoretic rationality that says the most rational decision is to prescribe treatment when the expected treatment benefit outweighs its expected harms. However, the well-documented large variation in the way physicians order diagnostic tests or decide to administer treatments is consistent with a notion that physicians' individual action thresholds vary. We present a narrative review summarizing the existing literature on physicians' use of a threshold strategy for decision-making. We found that the observed variation in decision action thresholds is partially due to the way people integrate benefits and harms. That is, explanation of variation in clinical practice can be reduced to a consideration of thresholds. Limited evidence suggests that non-expected utility threshold (non-EUT) models, such as regret-based and dual-processing models, may explain current medical practice better. However, inclusion of costs and recognition of risk attitudes towards uncertain treatment effects and comorbidities may improve the explanatory and predictive value of the EUT-based threshold models. The decision when to act is closely related to the question of rational choice. We conclude that the medical community has not yet fully defined criteria for rational clinical decision-making. The traditional notion of rationality rooted in EUT may need to be supplemented by reflective rationality, which strives to integrate all aspects of medical practice - medical, humanistic and socio-economic - within a coherent reasoning system. © 2015 Stichting European Society for Clinical Investigation Journal Foundation.
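The classic EUT-based treatment threshold the review builds on (Pauker-Kassirer) can be stated in one line: treat when the probability of disease exceeds H/(H+B), where B is the net benefit of treating the diseased and H the net harm of treating the non-diseased. The benefit and harm values below are illustrative:

```python
# Sketch of the Pauker-Kassirer treatment threshold; utilities are invented.
def treatment_threshold(benefit, harm):
    # treat when P(disease) > H / (H + B)
    return harm / (harm + benefit)

print(treatment_threshold(benefit=0.30, harm=0.05))  # ~0.14: treat above 14%
```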
NASA Technical Reports Server (NTRS)
Mayes, W. H.; Stephens, D. G.; Holmes, H. K.; Lewis, R. B.; Holliday, B. G.; Ward, D. W.; Deloach, R.; Cawthorn, J. M.; Finley, T. D.; Lynch, J. W.
1978-01-01
Outdoor and indoor noise levels resulting from aircraft flyovers and certain nonaircraft events were recorded, as were the associated vibration levels in the walls, windows, and floors at building test sites. In addition, limited subjective tests were conducted to examine the human detection and annoyance thresholds for building vibration and rattle caused by aircraft noise. Representative peak levels of aircraft noise-induced building vibrations are reported and comparisons are made with structural damage criteria and with vibration levels induced by common domestic events. In addition, results of a pilot study are reported which indicate the human detection threshold for noise-induced floor vibrations.
A comparison of earthquake backprojection imaging methods for dense local arrays
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.
2018-03-01
Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
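A minimal sketch of the kurtosis pre-processing and delay-and-stack step, with invented traces and integer sample delays standing in for travel times from a velocity model:

```python
# Sketch: kurtosis characteristic functions, shifted and stacked for one
# trial source location. Geometry and data are placeholders.
import numpy as np
from scipy.stats import kurtosis

def kurtosis_cf(trace, win=50):
    # sliding-window kurtosis: impulsive onsets produce sharp positive peaks
    out = np.zeros(len(trace))
    for i in range(win, len(trace)):
        out[i] = kurtosis(trace[i - win:i])
    return np.maximum(out, 0.0)

def backproject(cfs, delays_samples):
    # shift each station's characteristic function by its predicted delay
    # for the trial location, then stack
    n = min(len(cf) - d for cf, d in zip(cfs, delays_samples))
    return sum(cf[d:d + n] for cf, d in zip(cfs, delays_samples)) / len(cfs)

rng = np.random.default_rng(8)
traces = [rng.normal(size=2000) for _ in range(5)]
for tr, t0 in zip(traces, [400, 430, 455, 480, 520]):
    tr[t0:t0 + 30] += rng.normal(0, 6, 30)        # impulsive arrival
cfs = [kurtosis_cf(tr) for tr in traces]
stack = backproject(cfs, delays_samples=[0, 30, 55, 80, 120])
print(int(np.argmax(stack)))                      # peaks near the onset (~400)
```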
Quantifying ecological thresholds from response surfaces
Heather E. Lintz; Bruce McCune; Andrew N. Gray; Katherine A. McCulloh
2011-01-01
Ecological thresholds are abrupt changes of ecological state. While an ecological threshold is a widely accepted concept, most empirical methods detect them in time or across geographic space. Although useful, these approaches do not quantify the direct drivers of threshold response. Causal understanding of thresholds detected empirically requires their investigation...
Threshold-adaptive canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection[1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges can be separated from the background. Usually, two static values chosen from developer experience are used as the thresholds[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is derived to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
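The cross-zero interpolation function itself is not reproduced here. For comparison, one widely used heuristic sets both Canny thresholds automatically around the image median; the sigma constant is an assumed tuning parameter and the input image is synthetic.

```python
# Not the paper's cross-zero method: a common median-based heuristic for
# choosing the two Canny thresholds automatically, shown for comparison.
import cv2
import numpy as np

def auto_canny(image, sigma=0.33):        # sigma: assumed tuning constant
    v = float(np.median(image))           # an image statistic drives thresholds
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
edges = auto_canny(img)
print(edges.shape, edges.max())
```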
A unified framework for evaluating the risk of re-identification of text de-identification tools.
Scaiano, Martin; Middleton, Grant; Arbuckle, Luk; Kolhatkar, Varada; Peyton, Liam; Dowling, Moira; Gipson, Debbie S; El Emam, Khaled
2016-10-01
It has become regular practice to de-identify unstructured medical text for use in research using automatic methods, the goal of which is to remove patient identifying information to minimize re-identification risk. The metrics commonly used to determine if these systems are performing well do not accurately reflect the risk of a patient being re-identified. We therefore developed a framework for measuring the risk of re-identification associated with textual data releases. We apply the proposed evaluation framework to a data set from the University of Michigan Medical School. Our risk assessment results are then compared with those that would be obtained using a typical contemporary micro-average evaluation of recall in order to illustrate the difference between the proposed evaluation framework and the current baseline method. We demonstrate how this framework compares against common measures of the re-identification risk associated with an automated text de-identification process. For the probability of re-identification using our evaluation framework we obtained a mean value for direct identifiers of 0.0074 and a mean value for quasi-identifiers of 0.0022. The 95% confidence interval for these estimates were below the relevant thresholds. The threshold for direct identifier risk was based on previously used approaches in the literature. The threshold for quasi-identifiers was determined based on the context of the data release following commonly used de-identification criteria for structured data. Our framework attempts to correct for poorly distributed evaluation corpora, accounts for the data release context, and avoids the often optimistic assumptions that are made using the more traditional evaluation approach. It therefore provides a more realistic estimate of the true probability of re-identification. This framework should be used as a basis for computing re-identification risk in order to more realistically evaluate future text de-identification tools. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Assessment of the Focal Hepatic Lesions Using Diffusion Tensor Magnetic Resonance Imaging
Oussous, Siham Ait; Boujraf, Saïd; Kamaoui, Imane
2016-01-01
The goal is to assess the efficiency of the diffusion magnetic resonance imaging (dMRI) method in characterizing focal hepatic lesions (FHLs). Twenty-eight FHL patients were studied in the Radiology and Clinical Imaging Department of our University Hospital using a 1.5 Tesla MRI system between January 2010 and June 2011. Patients underwent hepatic MRI consisting of dynamic T1- and T2-weighted imaging. The dMRI was performed with b-values of 200 s/mm2 and 600 s/mm2. Forty-two lesions measuring more than 1 cm were studied, including the variation of the signal according to the b-value and the apparent diffusion coefficient (ADC). The diagnostic imaging reference was based on standard MRI techniques for typical lesions and on histology after surgical biopsy for atypical lesions. Thirty-eight lesions were assessed, including 13 benign lesions (1 focal nodular hyperplasia, 8 angiomas, and 4 cysts) and 25 malignant lesions (11 hepatocellular carcinomas, 9 hepatic metastases, 1 cholangiocarcinoma, and 4 lymphomas). dMRI of soft lesions demonstrated a higher ADC of 2.26 ± 0.75 × 10−3 mm2/s, whereas solid lesions showed a lower ADC of 1.19 ± 0.33 × 10−3 mm2/s, a significant difference (P = 0.05). Discrete collections of values were noticed. These results were correlated with standard MRI and histological findings. A sensitivity of 93% and a specificity of 84% were found in diagnosing malignant tumors with an ADC threshold of 1.6 × 10−3 mm2/s. dMRI is an important method for characterizing FHLs; however, it should not be used as the sole criterion of hepatic lesion malignancy. MRI, clinical, and biological data must be correlated. A significant difference was found between benign and solid malignant lesions, but without clear-cut threshold ADC values; hence, it is difficult to confirm an ADC threshold for differentiating the lesion classes. PMID:27186537
Harvey, Philip D; Jacobson, William; Zhong, Wei; Nomikos, George G; Cronquist Christensen, Michael; Kurre Olsen, Christina; Merikle, Elizabeth
2017-04-15
This article reports an evaluation of the psychometric properties and clinically important difference (CID) threshold of the UCSD Performance-Based Skills Assessment (UPSA) in major depressive disorder (MDD), using data from a large-scale study of the effects of vortioxetine on cognitive functioning and functional capacity in MDD patients. Adults with moderate-to-severe recurrent MDD and self-reported cognitive dysfunction were randomized to 8 weeks of double-blind treatment with vortioxetine 10/20 mg QD (flexible), duloxetine 60 mg QD, or placebo. Pearson correlation coefficients were calculated between UPSA composite score and demographic/disease characteristics at baseline to examine construct validity. Two methods (distribution-based and anchor-based) were used to establish a CID threshold. A total of 602 patients were randomized; 528 comprised the full analysis set. For the entire sample, mean UPSA composite scores were 77.8 at baseline and 83.9 at week 8 (mean change, +6.1). As hypothesized, at baseline, the UPSA composite score correlated with cognitive functioning (Digit Symbol Substitution Test: r=0.36, P<0.001) and workplace productivity (Work Limitations Questionnaire: r=-0.17, P=0.008), but not depressive symptoms (Montgomery-Åsberg Depression Rating Scale: r=0.02, P=0.707) or subjective cognitive dysfunction (Perceived Deficits Questionnaire: r=-0.02, P=0.698). Two versions of the UPSA were used and no inclusion/exclusion criteria were based on the UPSA. These results support the construct validity of the UPSA for assessing functional capacity independent of mood symptoms. The estimated CID for changes in UPSA scores was quite consistent at +6.4 points and +6.7 points based on distribution-based and anchor-based methods, respectively. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
MO-DE-207A-12: Toward Patient-Specific 4DCT Reconstruction Using Adaptive Velocity Binning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, E.D.; Glide-Hurst, C.; Wayne State University, Detroit, MI
2016-06-15
Purpose: While 4DCT provides organ/tumor motion information, it often samples data over 10–20 breathing cycles. For patients presenting with compromised pulmonary function, breathing patterns can change over the acquisition time, potentially leading to tumor delineation discrepancies. This work introduces a novel adaptive velocity-modulated binning (AVB) 4DCT algorithm that modulates the reconstruction based on the respiratory waveform, yielding a patient-specific 4DCT solution. Methods: AVB was implemented in a research reconstruction configuration. After filtering the respiratory waveform, the algorithm examines neighboring data to a phase reconstruction point and the temporal gate is widened until the difference between the reconstruction point and waveform exceeds a threshold value, defined as percent difference between maximum/minimum waveform amplitude. The algorithm only impacts reconstruction if the gate width exceeds a set minimum temporal width required for accurate reconstruction. A sensitivity experiment of threshold values (0.5, 1, 5, 10, and 12%) was conducted to examine the interplay between threshold, signal to noise ratio (SNR), and image sharpness for phantom and several patient 4DCT cases using ten-phase reconstructions. Individual phase reconstructions were examined. Subtraction images and regions of interest were compared to quantify changes in SNR. Results: AVB increased signal in reconstructed 4DCT slices for respiratory waveforms that met the prescribed criteria. For the end-exhale phases, where the respiratory velocity is low, patient data revealed a threshold of 0.5% demonstrated increased SNR in the AVB reconstructions. For intermediate breathing phases, threshold values were required to be >10% to notice appreciable changes in CT intensity with AVB. AVB reconstructions exhibited appreciably higher SNR and reduced noise in regions of interest that were photon deprived such as the liver. Conclusion: We demonstrated that patient-specific velocity-based 4DCT reconstruction is feasible. Image noise was reduced with AVB, suggesting potential applications for low-dose acquisitions and to improve 4DCT reconstruction for irregular breathing patients. The submitting institution holds research agreements with Philips Healthcare.
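A rough sketch of the gate-widening logic described above, assuming the gate grows on each side of the phase reconstruction point until the waveform departs from the reference amplitude by more than the threshold percentage of the peak-to-peak range; this is an illustration, not the vendor's research reconstruction code.

```python
# Sketch of adaptive gate widening on a respiratory waveform (illustrative).
import numpy as np

def adaptive_gate(waveform, idx, threshold_pct, min_width):
    span = waveform.max() - waveform.min()          # peak-to-peak amplitude
    tol = threshold_pct / 100.0 * span
    lo = hi = idx
    while lo > 0 and abs(waveform[lo - 1] - waveform[idx]) < tol:
        lo -= 1
    while hi < len(waveform) - 1 and abs(waveform[hi + 1] - waveform[idx]) < tol:
        hi += 1
    width = hi - lo + 1
    # the gate only modifies reconstruction if wider than the required minimum
    return (lo, hi) if width >= min_width else None

t = np.linspace(0, 10, 1000)
resp = np.sin(2 * np.pi * 0.25 * t)                 # synthetic breathing trace
print(adaptive_gate(resp, idx=500, threshold_pct=5, min_width=10))
```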
Dealing With Uncertainty When Assessing Fish Passage Through Culvert Road Crossings
NASA Astrophysics Data System (ADS)
Anderson, Gregory B.; Freeman, Mary C.; Freeman, Byron J.; Straight, Carrie A.; Hagler, Megan M.; Peterson, James T.
2012-09-01
Assessing the passage of aquatic organisms through culvert road crossings has become increasingly common in efforts to restore stream habitat. Several federal and state agencies and local stakeholders have adopted assessment approaches based on literature-derived criteria for culvert impassability. However, criteria differ and are typically specific to larger-bodied fishes. In an analysis to prioritize culverts for remediation to benefit imperiled, small-bodied fishes in the Upper Coosa River system in the southeastern United States, we assessed the sensitivity of prioritization to the use of differing but plausible criteria for culvert impassability. Using measurements at 256 road crossings, we assessed culvert impassability using four alternative criteria sets represented in Bayesian belief networks. Two criteria sets scored culverts as either passable or impassable based on alternative thresholds of culvert characteristics (outlet elevation, baseflow water velocity). Two additional criteria sets incorporated uncertainty concerning ability of small-bodied fishes to pass through culverts and estimated a probability of culvert impassability. To prioritize culverts for remediation, we combined estimated culvert impassability with culvert position in the stream network relative to other barriers to compute prospective gain in connected stream habitat for the target fish species. Although four culverts ranked highly for remediation regardless of which criteria were used to assess impassability, other culverts differed widely in priority depending on criteria. Our results emphasize the value of explicitly incorporating uncertainty into criteria underlying remediation decisions. Comparing outcomes among alternative, plausible criteria may also help to identify research most needed to narrow management uncertainty.
Reducing noise component on medical images
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana
2018-04-01
Medical visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. The paper considers an approach that allows image filtering while preserving object borders. The algorithm proposed in this paper is based on sequential data processing. At the first stage, local areas are determined; for this purpose a threshold processing method, as well as the classical ICI algorithm, is applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order square difference. To preserve the boundaries of objects, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, X-ray, and microbiological studies are shown. The test images show the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
An automated detection for axonal boutons in vivo two-photon imaging of mouse
NASA Astrophysics Data System (ADS)
Li, Weifu; Zhang, Dandan; Xie, Qiwei; Chen, Xi; Han, Hua
2017-02-01
Activity-dependent changes in the synaptic connections of the brain are tightly related to learning and memory. Previous studies have shown that essentially all new synaptic contacts were made by adding new partners to existing synaptic elements. To further explore synaptic dynamics in specific pathways, concurrent imaging of pre- and postsynaptic structures in identified connections is required. Consequently, considerable attention has been paid to the automated detection of axonal boutons. Unlike most previous methods, which were proposed for in vitro data, this paper considers the more practical case of in vivo neuron images, which can provide real-time information and direct observation of the dynamics of a disease process in mouse. We present an automated approach for detecting axonal boutons that starts by deconvolving the original images, then thresholds the enhanced images, and finally retains the regions fulfilling a series of criteria. Experimental results on in vivo two-photon imaging of mouse demonstrate the effectiveness of our proposed method.
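A condensed sketch of the deconvolve-threshold-filter pipeline; the Richardson-Lucy deconvolution, the Otsu threshold, the assumed PSF and the area limits are stand-ins chosen for illustration, since the abstract does not specify the exact operators or criteria.

```python
# Sketch: deconvolve the image, threshold it, keep regions meeting criteria.
import numpy as np
from skimage import measure
from skimage.filters import threshold_otsu
from skimage.restoration import richardson_lucy

def detect_boutons(image, psf, min_area=5, max_area=200):
    deconv = richardson_lucy(image, psf, num_iter=10)    # enhance puncta
    mask = deconv > threshold_otsu(deconv)               # threshold enhanced image
    labels = measure.label(mask)
    regions = [r for r in measure.regionprops(labels, intensity_image=deconv)
               if min_area <= r.area <= max_area]        # size criteria (assumed)
    return [r.centroid for r in regions]

rng = np.random.default_rng(1)
img = rng.random((128, 128))
psf = np.ones((5, 5)) / 25.0                             # assumed flat PSF
print(len(detect_boutons(img, psf)))
```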
Detection of hail signatures from single-polarization C-band radar reflectivity
NASA Astrophysics Data System (ADS)
Kunz, Michael; Kugel, Petra I. S.
2015-02-01
Five different criteria that estimate hail signatures from single-polarization radar data are statistically evaluated over a 15-year period by categorical verification against loss data provided by a building insurance company. The criteria consider different levels or thresholds of radar reflectivity, some of them complemented by estimates of the 0 °C level or cloud top temperature. Applied to reflectivity data from a single C-band radar in southwest Germany, it is found that all criteria are able to reproduce most of the past damage-causing hail events. However, the criteria substantially overestimate hail occurrence by up to 80%, mainly due to the verification process using damage data. Best results in terms of highest Heidke Skill Score HSS or Critical Success Index CSI are obtained for the Hail Detection Algorithm (HDA) and the Probability of Severe Hail (POSH). Radar-derived hail probability shows a high spatial variability with a maximum on the lee side of the Black Forest mountains and a minimum in the broad Rhine valley.
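Both skill scores are standard contingency-table statistics: with hits a, false alarms b, misses c and correct negatives d, CSI = a/(a+b+c) and HSS = 2(ad-bc)/[(a+c)(c+d)+(a+b)(b+d)]. The counts in the example are hypothetical.

```python
# Standard categorical verification scores from a 2x2 contingency table
# (a = hits, b = false alarms, c = misses, d = correct negatives).
def csi(a, b, c, d):
    return a / (a + b + c)                 # Critical Success Index

def hss(a, b, c, d):
    num = 2.0 * (a * d - b * c)            # Heidke Skill Score
    den = (a + c) * (c + d) + (a + b) * (b + d)
    return num / den

# Hypothetical counts for illustration only:
print(csi(40, 30, 10, 920), hss(40, 30, 10, 920))
```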
[Substance-related and addictive disorders in the DSM-5].
Thomasius, Rainer; Sack, Peter-Michael; Strittmatter, Esther; Kaess, Michael
2014-03-01
This paper concerns the revised classification of Substance-Related and Addictive Disorders in the fifth edition of the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM-5). In DSM-5, substance use disorders are diagnosed on a continuum of severity specified by explicit operationalized diagnostic criteria. "Gambling disorder" is the only behavioral addiction added to the DSM. Furthermore, preliminary criteria for "Caffeine Use Disorder" and "Internet Gaming Disorder" have now been defined in the manual. Adopting the DSM-5 criteria catalogue within the German treatment system for children and adolescents with substance use disorders or at risk for developing substance use disorders would be of great significance. Since the diagnostic threshold is lower, more patients would be eligible for treatment. Thus, early intervention in the area of substance use disorders should be strengthened, a development that appears to be highly desirable from the perspective of child and adolescent psychiatry. The current Section III diagnoses, with their now comprehensive diagnostic criteria, facilitate more internationally compatible research.
NASA Technical Reports Server (NTRS)
Clevenson, S. A.; Leatherwood, J. D.; Hollenbaugh, D. D.
1983-01-01
The results of physical measurements of the interior noise and vibration obtained within eight operational military helicopters are presented. The data were extensively analyzed and are presented in the following forms: noise and vibration spectra, overall root-mean-square acceleration levels in three linear axes, peak accelerations at dominant blade passage frequencies, acceleration exceedance data, and overall and A-weighted sound pressure levels. Peak acceleration levels were compared to the ISO 1-hr reduced comfort and fatigue decreased proficiency boundaries and the NASA discomfort criteria. The A-weighted noise levels were compared to the NASA annoyance criteria, and the overall noise spectra were compared to MIL-STD-1294 ("Acoustical Noise Limits in Helicopters"). Specific vibration components at blade passage frequencies for several aircraft exceeded both the ISO reduced comfort boundary and the NASA passenger discomfort criteria. The A-weighted noise levels, corrected for SPH-4 helmet attenuation characteristics, exceeded the NASA annoyance threshold for several aircraft.
Barkley, R A
2009-02-27
A number of problems have been identified through research and clinical practice with the current DSM-IV criteria for the diagnosis of attention deficit hyperactivity disorder (ADHD). This paper reviews some of these issues along with possible solutions for consideration in the construction of the criteria for DSM-V. Issues related to the length of symptom lists and how best to conceptualize the neuropsychological constructs they represent, differing developmental thresholds for diagnosis for adults vs. children and teens, the criterion for age of onset, problems related to the current approach to subtyping, and the development of new items for the adult stage of the disorder are discussed along with other issues pertinent to the continuing effort to test and revise the DSM criteria for ADHD as a function of ongoing empirical research. The present paper has briefly raised a number of issues that require some attention by the various workgroups charged with creating the DSM-V diagnostic criteria for ADHD.
The dimensionality of DSM5 alcohol use disorder in Puerto Rico.
Caetano, Raul; Vaeth, Patrice A C; Santiago, Katyana; Canino, Glorisa
2016-11-01
Test the dimensionality and measurement properties of lifetime DSM-5 AUD criteria in a sample of adults from the metropolitan area of San Juan, Puerto Rico. Cross-sectional study with survey data collected in 2013-2014. General population. Random household sample of the adult population 18 to 64 years of age in San Juan, Puerto Rico (N=1510; lifetime drinkers N=1107). DSM-5 alcohol use disorder (2 or more criteria present in 12 months). Lifetime reports of AUD criteria were consistent with a one-dimensional model. Scalar measurement invariance was observed across gender, but measurement parameters for tolerance varied across age, with younger ages showing a lower threshold and steeper loading. Results provide support for a unidimensional DSM-5 AUD construct in a sample from a Latin American country. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gender differences in developmental dyscalculia depend on diagnostic criteria.
Devine, Amy; Soltész, Fruzsina; Nobes, Alison; Goswami, Usha; Szűcs, Dénes
2013-10-01
Developmental dyscalculia (DD) is a learning difficulty specific to mathematics learning. The prevalence of DD may be equivalent to that of dyslexia, posing an important challenge for effective educational provision. Nevertheless, there is no agreed definition of DD and there are controversies surrounding cutoff decisions, specificity and gender differences. In the current study, 1004 British primary school children completed mathematics and reading assessments. The prevalence of DD and gender ratio were estimated in this sample using different criteria. When using absolute thresholds, the prevalence of DD was the same for both genders regardless of the cutoff criteria applied, however gender differences emerged when using a mathematics-reading discrepancy definition. Correlations between mathematics performance and the control measures selected to identify a specific learning difficulty affect both prevalence estimates and whether a gender difference is in fact identified. Educational implications are discussed.
Determination of Tsunami Warning Criteria for Current Velocity
NASA Astrophysics Data System (ADS)
Chen, R.; Wang, D.
2015-12-01
Present tsunami warning issuance largely depends on an event's predicted wave height and inundation depth. Specifically, a warning is issued if the on-shore wave height is greater than 1 m. This project examines whether any consideration should be given to current velocity. We apply the idea of force balance to determine theoretical minimum velocity thresholds for injuring people and damaging property as a function of wave height. Results show that even at a water depth of less than 1 m, a current velocity of 2 m/s is enough to pose a threat to humans and cause potential damage to cars and houses. Next, we employ a 1-dimensional shallow water model to simulate tsunamis with various amplitudes and an assumed wavelength of 250 km. This allows for the profiling of current velocity and wave height behavior as the tsunamis reach shore. We compare this data against our theoretical thresholds to see if any real-world scenarios would be dangerous to people and property. We conclude that for such tsunamis, the present warning criteria are effective at protecting people against larger events with amplitude greater than ~0.3 m. However, for events with amplitude less than ~0.2 m, it is possible to have waves less than 1 m with current velocity high enough to endanger humans. Thus, the inclusion of current velocity data would help make the present tsunami warning criteria more robust and efficient, especially for smaller tsunami events.
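The abstract does not state the force balance explicitly; a plausible minimal version equates hydrodynamic drag on a standing person with friction at the feet, so the critical velocity falls as depth (and hence buoyancy) increases. All body, drag and friction parameters below are hypothetical.

```python
# Hedged sketch of a sliding-stability force balance for a person in flow:
# the current is dangerous when drag exceeds friction at the feet.
RHO = 1000.0     # water density, kg/m^3
G = 9.81         # gravity, m/s^2

def critical_velocity(depth, mass=75.0, width=0.4, mu=0.5, cd=1.1):
    """Minimum current speed (m/s) at which drag exceeds friction (sketch)."""
    submerged_volume = depth * width * 0.2        # crude body approximation
    friction = mu * max(mass * G - RHO * G * submerged_volume, 0.0)
    area = depth * width                          # frontal area facing the flow
    # drag = 0.5 * RHO * cd * area * u**2; solve drag == friction for u
    return (2.0 * friction / (RHO * cd * area)) ** 0.5

for h in (0.3, 0.6, 0.9):
    print(h, round(critical_velocity(h), 2))      # ~2 m/s near 0.3 m depth
```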
Rest and treatment/rehabilitation following sport-related concussion: a systematic review.
Schneider, Kathryn J; Leddy, John J; Guskiewicz, Kevin M; Seifert, Tad; McCrea, Michael; Silverberg, Noah D; Feddermann-Demont, Nina; Iverson, Grant L; Hayden, Alix; Makdissi, Michael
2017-06-01
The objective of this systematic review was to evaluate the evidence regarding rest and active treatment/rehabilitation following sport-related concussion (SRC). Systematic review. MEDLINE (OVID), CINAHL (EbscoHost), PsycInfo (OVID), Cochrane Central Register of Controlled Trials (OVID), SPORTDiscus (EbscoHost), EMBASE (OVID) and ProQuest Dissertations and Theses Global (ProQuest) were searched systematically. Studies were included if they met the following criteria: (1) original research; (2) reported SRC as the diagnosis; and (3) evaluated the effect of rest or active treatment/rehabilitation. Review articles were excluded. Twenty-eight studies met the inclusion criteria (9 regarding the effects of rest and 19 evaluating active treatment). The methodological quality of the literature was limited; only five randomised controlled trials (RCTs) met the eligibility criteria. Those RCTs included rest, cervical and vestibular rehabilitation, subsymptom-threshold aerobic exercise and multifaceted collaborative care. A brief period (24-48 hours) of cognitive and physical rest is appropriate for most patients. Following this, patients should be encouraged to gradually increase activity. The exact amount and duration of rest are not yet well defined and require further investigation. The data support interventions including cervical and vestibular rehabilitation and multifaceted collaborative care. Closely monitored subsymptom-threshold, submaximal exercise may be of benefit. PROSPERO 2016:CRD42016039570. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required, which can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
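The algorithm behind the scanner software's "Automatic Threshold Tool" is not stated in the abstract; Otsu's method is used below as a representative automatic thresholding technique, with an assumed voxel size and the assumption that the canal appears as low-density voxels.

```python
# Representative automatic segmentation of a microCT stack (Otsu stand-in).
import numpy as np
from skimage.filters import threshold_otsu

def canal_volume(stack, voxel_mm=0.02):           # voxel size: assumed
    t = threshold_otsu(stack)                     # operator-independent threshold
    mask = stack < t                              # canal = low density (assumed)
    return mask.sum() * voxel_mm ** 3             # volume in mm^3

rng = np.random.default_rng(2)
stack = rng.normal(100, 10, (50, 64, 64))
stack[:, 20:30, 20:30] -= 60                      # synthetic low-density canal
print(round(canal_volume(stack), 3))
```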
NASA Astrophysics Data System (ADS)
Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.
2004-05-01
Many grey-level thresholding methods based on histograms or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis of the images take little account of the morphology of the objects of interest, which can sometimes provide important indications for finding the optimum threshold, especially for organisms with special texture morphologies such as vasculature or neural networks in medical imaging. In this paper, we propose a novel method for thresholding fluorescent vasculature image series recorded from a Confocal Scanning Laser Microscope. After extracting the basic orientation of the vessel slices inside a sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of the regions near the one of interest, both in the x-y and optical directions, are then referenced to obtain the final thresholds of the region, which makes the whole stack of images more continuous. The resulting images are characterized by suppression of both noise and non-interest tissues conglutinated to vessels, with improved vessel connectivity and edge definition. The value of the method for thresholding fluorescence images of biological objects is demonstrated by a comparison of the results of 3D vascular reconstruction.
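A simplified sketch of the region-wise idea: estimate a threshold per sub-region, then average each region's threshold with its neighbours so the result varies smoothly across the image. Otsu stands in for the paper's profile-based initial estimate, and the 3x3 neighbour average is an assumption.

```python
# Region-wise thresholds with neighbour smoothing (simplified illustration).
import numpy as np
from skimage.filters import threshold_otsu

def regional_thresholds(image, grid=(4, 4)):
    h, w = image.shape
    gh, gw = grid
    t = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            block = image[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            t[i, j] = threshold_otsu(block)       # initial per-region threshold
    return t

def smooth_thresholds(t):
    padded = np.pad(t, 1, mode="edge")
    out = np.zeros_like(t)
    for i in range(t.shape[0]):
        for j in range(t.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()   # reference neighbours
    return out

rng = np.random.default_rng(3)
print(smooth_thresholds(regional_thresholds(rng.random((128, 128)))).round(2))
```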
Analyses of Fatigue Crack Growth and Closure Near Threshold Conditions for Large-Crack Behavior
NASA Technical Reports Server (NTRS)
Newman, J. C., Jr.
1999-01-01
A plasticity-induced crack-closure model was used to study fatigue crack growth and closure in thin 2024-T3 aluminum alloy under constant-R and constant-K_max threshold testing procedures. Two methods of calculating crack-opening stresses were compared. One method was based on contact-K analyses and the other on crack-opening-displacement (COD) analyses. These methods gave nearly identical results under constant-amplitude loading, but under threshold simulations the contact-K analyses gave lower opening stresses than the contact-COD method. Crack-growth predictions tend to support the use of contact-K analyses. Crack-growth simulations showed that remote closure can cause a rapid rise in opening stresses in the near-threshold regime for low constraint and high applied stress levels. Under low applied stress levels and high constraint, a rise in opening stresses was not observed near threshold conditions, but crack-tip-opening displacements (CTOD) were of the order of measured oxide thicknesses in the 2024 alloy under constant-R simulations. In contrast, under constant-K_max testing the CTOD near threshold conditions were an order of magnitude larger than measured oxide thicknesses. Residual-plastic deformations under both constant-R and constant-K_max threshold simulations were several times larger than the expected oxide thicknesses. Thus, residual-plastic deformations, in addition to oxide and roughness, play an integral part in threshold development.
Basel, Türker; Lütkenhöner, Bernd
2013-01-01
Nearly half a century ago, administration of glycerol was shown to temporarily improve the threshold of hearing in patients with suspected Menière's disease (glycerol test). Although a positive test result provides strong evidence of Menière's disease, the test has not gained widespread acceptance. A probable reason is that there is no consensus as to the definition of positive. Moreover, a negative test result is of little diagnostic value because Menière's disease cannot be excluded. By reanalyzing archived data, the authors sought to understand the test in light of signal detection theory. Moreover, they explored the possibility of estimating the probability of a positive test result from the pretest audiogram. The study is based on audiograms from 347 patients (356 ears) who underwent a glycerol test to corroborate a suspected diagnosis of Menière's disease. Subsequent to an initial pure-tone audiogram, glycerol (1.2 mL/kg body weight) was orally administered; follow-up audiograms were obtained after 1, 2, 3, and 4 hr. Transcription of the audiograms into a computer-readable form made them available for automated reanalysis. Averaged difference audiograms provided detailed insight into the frequency dependence and the temporal dynamics of the glycerol-induced threshold reduction. The strongest threshold reduction was observed 4 hr after glycerol intake, although nearly the same effect was already found after 3 hr. Strong overall threshold reductions were associated with a pronounced maximum at approximately 1000 Hz; weaker effects were associated with a plateau between 125 and 1000 Hz and a rapid decrease toward higher frequencies. To date, criteria suggested for a positive test result vastly differ in both sensitivity (with regard to the detection of a threshold reduction) and specificity (1 minus false-positive rate). Here, a criterion based on the aggregate threshold reduction in adjacent audiometric frequencies is suggested. This approach does not only seem to be more robust but also permits to freely adjust the false-positive rate. A positive test result is particularly likely when the mean low-frequency hearing loss is approximately 60 dB and the mean high-frequency hearing loss does not exceed 50 dB. If the pretest audiogram does not render a positive test result unlikely, a state-of-the-art implementation of the glycerol test is a competitive method for corroborating a suspected diagnosis of Menière's disease.
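A minimal sketch of an aggregate-reduction criterion of the kind suggested: sum the glycerol-induced threshold improvements over runs of adjacent audiometric frequencies and call the test positive when the aggregate exceeds a criterion value. The 15 dB criterion and the three-frequency window are hypothetical, not the values derived in the study.

```python
# Aggregate threshold reduction over adjacent audiometric frequencies (sketch).
import numpy as np

def aggregate_positive(pre_db, post_db, n_adjacent=3, criterion_db=15.0):
    reduction = np.asarray(pre_db) - np.asarray(post_db)  # positive = improvement
    sums = np.convolve(reduction, np.ones(n_adjacent), mode="valid")
    return bool(sums.max() >= criterion_db)

pre = [60, 55, 50, 45, 30, 20]    # dB HL at 125-4000 Hz (hypothetical)
post = [50, 45, 42, 40, 29, 20]   # 4 h after glycerol intake (hypothetical)
print(aggregate_positive(pre, post))    # True: 10+10+8 = 28 dB >= 15 dB
```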
Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C
2016-09-01
The DSM-5 was published in 2013 and it included two substantive revisions for gambling disorder (GD). These changes are the reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of the elimination of the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.
Carroll, Carlos; Fredrickson, Richard J; Lacy, Robert C
2014-02-01
Restoring connectivity between fragmented populations is an important tool for alleviating genetic threats to endangered species. Yet recovery plans typically lack quantitative criteria for ensuring such population connectivity. We demonstrate how models that integrate habitat, genetic, and demographic data can be used to develop connectivity criteria for the endangered Mexican wolf (Canis lupus baileyi), which is currently being restored to the wild from a captive population descended from 7 founders. We used population viability analysis that incorporated pedigree data to evaluate the relation between connectivity and persistence for a restored Mexican wolf metapopulation of 3 populations of equal size. Decreasing dispersal rates greatly increased extinction risk for small populations (<150-200), especially as dispersal rates dropped below 0.5 genetically effective migrants per generation. We compared observed migration rates in the Northern Rocky Mountains (NRM) wolf metapopulation to 2 habitat-based effective distance metrics, least-cost and resistance distance. We then used effective distance between potential primary core populations in a restored Mexican wolf metapopulation to evaluate potential dispersal rates. Although potential connectivity was lower in the Mexican wolf versus the NRM wolf metapopulation, a connectivity rate of >0.5 genetically effective migrants per generation may be achievable via natural dispersal under current landscape conditions. When sufficient data are available, these methods allow planners to move beyond general aspirational connectivity goals or rules of thumb to develop objective and measurable connectivity criteria that more effectively support species recovery. The shift from simple connectivity rules of thumb to species-specific analyses parallels the previous shift from general minimum-viable-population thresholds to detailed viability modeling in endangered species recovery planning. © 2013 Society for Conservation Biology.
A Chronic Fatigue Syndrome (CFS) severity score based on case designation criteria
Baraniuk, James N; Adewuyi, Oluwatoyin; Merck, Samantha Jean; Ali, Mushtaq; Ravindran, Murugan K; Timbol, Christian R; Rayhan, Rakib; Zheng, Yin; Le, Uyenphuong; Esteitie, Rania; Petrie, Kristina N
2013-01-01
Background: Chronic Fatigue Syndrome case designation criteria are scored as physicians' subjective, nominal interpretations of patient fatigue, pain (headaches, myalgia, arthralgia, sore throat and lymph nodes), cognitive dysfunction, sleep and exertional exhaustion. Methods: Subjects self-reported symptoms using an anchored ordinal scale of 0 (no symptom), 1 (trivial complaints), 2 (mild), 3 (moderate), and 4 (severe). Fatigue of 3 or 4 distinguished "Fatigued" from "Not Fatigued" subjects. The sum of the 8 (Sum8) ancillary criteria was tested as a proxy for fatigue. All subjects had history and physical examinations to exclude medical fatigue, and ensure categorization as healthy or CFS subjects. Results: Fatigued subjects were divided into CFS with ≥4 symptoms or Chronic Idiopathic Fatigue (CIF) with ≤3 symptoms. ROC of Sum8 for CFS and Not Fatigued subjects generated a threshold of 14 (specificity=0.934; sensitivity=0.928). CFS (n=256) and CIF (n=55) criteria were refined to include Sum8≥14 and ≤13, respectively. Not Fatigued subjects had highly skewed Sum8 responses. Healthy Controls (HC; n=269) were defined by fatigue≤2 and Sum8≤13. Those with Sum8≥14 were defined as CFS-Like With Insufficient Fatigue Syndrome (CFSLWIFS; n=20). Sum8 and Fatigue were highly correlated (R2=0.977; Cronbach's alpha=0.924) indicating an intimate relationship between symptom constructs. Cluster analysis suggested 4 clades each in CFS and HC. Translational utility was inferred from the clustering of proteomics from cerebrospinal fluid. Conclusions: Plotting Fatigue severity versus Sum8 produced an internally consistent classifying system. This is a necessary step for translating symptom profiles into fatigue phenotypes and their pathophysiological mechanisms. PMID:23390566
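The abstract reports the ROC-derived Sum8 cutoff of 14 but not the selection rule; maximizing Youden's J (sensitivity + specificity - 1) is one common choice, shown here on synthetic scores.

```python
# Picking an ROC cut point by maximizing Youden's J (illustrative rule).
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, score):
    fpr, tpr, thresholds = roc_curve(y_true, score)
    return thresholds[np.argmax(tpr - fpr)]     # J = sensitivity + specificity - 1

rng = np.random.default_rng(4)
y = np.r_[np.ones(100), np.zeros(100)]          # e.g. CFS vs Not Fatigued
sum8 = np.r_[rng.normal(20, 5, 100), rng.normal(8, 5, 100)]   # synthetic Sum8
print(youden_threshold(y, sum8))
```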
Rapid Response Team composition, resourcing and calling criteria in Australia.
Jones, Daryl; Drennan, Kelly; Hart, Graeme K; Bellomo, Rinaldo; Web, Steven A R
2012-05-01
Rapid Response Teams (RRTs) have been introduced into at least 60% of Intensive Care Unit (ICU)-equipped Australian hospitals to review deteriorating ward patients. Most studies have assessed their impact on patient outcome, and less information exists on team composition or aspects of their calling criteria. We obtained information on team composition, resourcing and details of activation criteria from 39 of 108 (36.1%) RRT-equipped Australian hospitals. We found that all 39 teams operated 24/7, but only 10 (25.6%) had received additional funding for the service. Although 38/39 teams were physician-led medical emergency teams, in 7 (17.9%) sites the most senior member would be unlikely to have advanced airway skills. Three quarters of calling criteria were structured into "ABCD", and approximately 40% included cardiac and/or respiratory arrest as a calling criterion. Thresholds for calling criteria varied widely (particularly for respiratory rate and heart rate), as did the wording of the worried/concerned criterion. There was also wide variation in the number and nature of additional activation criteria. Our findings imply the likelihood of significant practice variation in relation to RRT composition, staff skill set and activation criteria between hospitals. We recommend improved resourcing of RRTs, training of the team members, and consideration of improved standardisation of calling criteria across institutions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted only using the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison to conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
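A sketch of the fixed-slope approach: fit location and slope to pooled group data, then refit only the location for each judge with the group slope held constant. The logistic form and the synthetic data are assumptions, not the article's exact sigmoid.

```python
# Fixed-slope psychometric fitting: shared slope, per-judge intercept.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, loc, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - loc)))

conc = np.log10([1, 2, 4, 8, 16, 32])                      # log concentrations
rng = np.random.default_rng(5)
group_p = logistic(conc, 0.6, 4.0) + rng.normal(0, 0.03, conc.size)

(group_loc, group_slope), _ = curve_fit(logistic, conc, group_p, p0=[0.5, 2.0])

def fit_judge(judge_p):
    f = lambda x, loc: logistic(x, loc, group_slope)       # slope held fixed
    (loc,), _ = curve_fit(f, conc, judge_p, p0=[group_loc])
    return loc                                             # judge threshold

judge_p = logistic(conc, 0.9, 4.0) + rng.normal(0, 0.05, conc.size)
print(round(fit_judge(judge_p), 2))
```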
High Resolution Manometry Correlates of Ineffective Esophageal Motility
Xiao, Yinglian; Kahrilas, Peter J.; Kwasny, Mary J.; Roman, Sabine; Lin, Zhiyue; Nicodème, Frédéric; Lu, Chang; Pandolfino, John E.
2013-01-01
Background There are currently no criteria for ineffective esophageal motility (IEM) and ineffective swallow (IES) in High Resolution Manometry (HRM) and Esophageal Pressure Topography (EPT). Our aims were to utilize HRM metrics to define IEM within the Chicago Classification and to determine the distal contractile integral (DCI) threshold for IES. Methods The EPT of 150 patients with either dysphagia or reflux symptoms were reviewed for breaks >2 cm in the proximal, middle and distal esophagus in the 20 mmHg isobaric contour (IBC). Peristaltic function in EPT was defined by the Chicago Classification; the corresponding conventional line tracings (CLT) were reviewed separately for IEM and IES. Generalized linear mixed models were used to find thresholds for DCI corresponding to traditionally determined IES and failed swallows. An external validation sample was used to confirm these thresholds. Results In terms of swallow subtypes, IES in CLT were a mixture of normal, weak and failed peristalsis in EPT. A DCI of 450 mmHg-s-cm was determined to be optimal in predicting IES. In the validation sample, the threshold of 450 mmHg-s-cm showed strong agreement with CLT determination of IES (positive percent agreement 83%, negative percent agreement 90%). Thirty-three among 42 IEM patients in CLT had large peristaltic breaks, small peristaltic breaks or 'frequent failed peristalsis' in EPT; 87.2% (34/39) of patients classified as normal in CLT had proximal IBC breaks in EPT. The patient-level diagnostic agreement between CLT and EPT was good (78.6% positive percent agreement, 63.9% negative percent agreement), with negative agreement increasing to 92.0% if proximal breaks were excluded. Conclusions The manometric correlate of IEM in EPT is a mixture of failed swallows and IBC breaks in the middle/distal troughs. A DCI value <450 mmHg-s-cm can be utilized to predict IES previously defined in CLT. IEM can be defined by >5 swallows with weak/failed peristalsis or with a DCI <450 mmHg-s-cm. PMID:22929758
Edwards, Rachael M; Godwin, J David; Hippe, Dan S; Kicska, Gregory
2016-01-01
It is known that atelectasis demonstrates greater contrast enhancement than pneumonia on computed tomography (CT). However, the effectiveness of using a Hounsfield unit (HU) threshold to distinguish pneumonia from atelectasis has never been shown. The objective of the study is to demonstrate that an HU threshold can be quantitatively used to effectively distinguish pneumonia from atelectasis. Retrospectively identified CT pulmonary angiogram examinations that did not show pulmonary embolism but contained nonaerated lungs were classified as atelectasis or pneumonia based on established clinical criteria. The HU attenuation was measured in these nonaerated lungs. Receiver operating characteristic (ROC) analysis was performed to determine the area under the ROC curve, sensitivity, and specificity of using the attenuation to distinguish pneumonia from atelectasis. Sixty-eight nonaerated lungs were measured in 55 patients. The mean (SD) enhancement was 62 (18) HU in pneumonia and 119 (24) HU in atelectasis (P < 0.001). A threshold of 92 HU diagnosed pneumonia with 97% sensitivity (confidence interval [CI], 80%-99%) and 85% specificity (CI, 70%-93%). Accuracy, measured as area under the ROC curve, was 0.97 (CI, 0.89-0.99). We have established that a threshold HU value can be used to confidently distinguish pneumonia from atelectasis with our standard CT pulmonary angiogram imaging protocol and patient population. This suggests that a similar threshold HU value may be determined for other scanning protocols, and application of this threshold may facilitate a more confident diagnosis of pneumonia and thus speed treatment.
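Given the reported group means and standard deviations, the behaviour of a candidate cutoff can be sanity-checked on synthetic data (classifying pneumonia when mean enhancement falls below the threshold); this mimics, but does not reproduce, the study's ROC analysis.

```python
# Sensitivity/specificity of a candidate HU cutoff on synthetic enhancement
# values drawn to mimic the reported distributions (62±18 vs 119±24 HU).
import numpy as np

rng = np.random.default_rng(6)
pneumonia = rng.normal(62, 18, 34)
atelectasis = rng.normal(119, 24, 34)

def sens_spec(threshold_hu):
    sens = (pneumonia < threshold_hu).mean()      # pneumonia correctly flagged
    spec = (atelectasis >= threshold_hu).mean()   # atelectasis correctly excluded
    return sens, spec

print(sens_spec(92.0))
```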
Diagnostic performance of BMI percentiles to identify adolescents with metabolic syndrome.
Laurson, Kelly R; Welk, Gregory J; Eisenmann, Joey C
2014-02-01
To compare the diagnostic performance of the Centers for Disease Control and Prevention (CDC) and FITNESSGRAM (FGram) BMI standards for quantifying metabolic risk in youth. Adolescents in the NHANES (n = 3385) were measured for anthropometric variables and metabolic risk factors. BMI percentiles were calculated, and youth were categorized by weight status (using CDC and FGram thresholds). Participants were also categorized by presence or absence of metabolic syndrome. The CDC and FGram standards were compared by prevalence of metabolic abnormalities, various diagnostic criteria, and odds of metabolic syndrome. Receiver operating characteristic curves were also created to identify optimal BMI percentiles to detect metabolic syndrome. The prevalence of metabolic syndrome in obese youth was 19% to 35%, compared with <2% in the normal-weight groups. The odds of metabolic syndrome for obese boys and girls were 46 to 67 and 19 to 22 times greater, respectively, than for normal-weight youth. The receiver operating characteristic analyses identified optimal thresholds similar to the CDC standards for boys and the FGram standards for girls. Overall, BMI thresholds were more strongly associated with metabolic syndrome in boys than in girls. Both the CDC and FGram standards are predictive of metabolic syndrome. The diagnostic utility of the CDC thresholds outperformed the FGram values for boys, whereas FGram standards were slightly better thresholds for girls. The use of a common set of thresholds for school and clinical applications would provide advantages for public health and clinical research and practice.
Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David
2003-06-01
To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss) and, for each condition, the number of test locations classified as defective (suprathreshold strategies) and with pattern deviation probability less than 5% (full-threshold strategy), was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss. Both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss. The conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
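Assuming independent responses with per-stimulus seeing probability p, the multisampling rule (three seen or three missed) classifies a location as normal with a probability given by a short negative-binomial sum: the third "seen" must arrive after k = 0, 1 or 2 misses.

```python
# P(location classified "normal") under the three-seen-or-three-missed rule,
# assuming independent responses with single-stimulus seeing probability p.
from math import comb

def p_classified_normal(p):
    # the 3rd "seen" response occurs after exactly k misses, k in {0, 1, 2}
    return sum(comb(2 + k, k) * p**3 * (1 - p)**k for k in range(3))

for p in (0.95, 0.5, 0.05):
    print(p, round(p_classified_normal(p), 4))   # ~0.9988, 0.5, ~0.0012
```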
NASA Astrophysics Data System (ADS)
Rossi, M.; Luciani, S.; Valigi, D.; Kirschbaum, D.; Brunetti, M. T.; Peruccacci, S.; Guzzetti, F.
2017-05-01
Models for forecasting rainfall-induced landslides are mostly based on the identification of empirical rainfall thresholds obtained exploiting rain gauge data. Despite their increased availability, satellite rainfall estimates are scarcely used for this purpose. Satellite data should be useful in ungauged and remote areas, or should provide a significant spatial and temporal reference in gauged areas. In this paper, the analysis of the reliability of rainfall thresholds based on rainfall remote sensed and rain gauge data for the prediction of landslide occurrence is carried out. To date, the estimation of the uncertainty associated with the empirical rainfall thresholds is mostly based on a bootstrap resampling of the rainfall duration and the cumulated event rainfall pairs (D,E) characterizing rainfall events responsible for past failures. This estimation does not consider the measurement uncertainty associated with D and E. In the paper, we propose (i) a new automated procedure to reconstruct ED conditions responsible for the landslide triggering and their uncertainties, and (ii) three new methods to identify rainfall threshold for the possible landslide occurrence, exploiting rain gauge and satellite data. In particular, the proposed methods are based on Least Square (LS), Quantile Regression (QR) and Nonlinear Least Square (NLS) statistical approaches. We applied the new procedure and methods to define empirical rainfall thresholds and their associated uncertainties in the Umbria region (central Italy) using both rain-gauge measurements and satellite estimates. We finally validated the thresholds and tested the effectiveness of the different threshold definition methods with independent landslide information. The NLS method among the others performed better in calculating thresholds in the full range of rainfall durations. We found that the thresholds obtained from satellite data are lower than those obtained from rain gauge measurements. This is in agreement with the literature, where satellite rainfall data underestimate the "ground" rainfall registered by rain gauges.
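Of the methods named above, LS and QR are simple to illustrate on synthetic duration-rainfall pairs in log-log space, where the power-law threshold E = alpha * D^beta becomes linear. The 5% quantile for the lower envelope is an assumed choice, not necessarily the paper's.

```python
# Fitting ED rainfall thresholds in log-log space: OLS for the central trend,
# quantile regression for a lower-envelope threshold. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
D = 10 ** rng.uniform(0, 2.5, 200)                   # durations (h)
E = 2.0 * D ** 0.5 * 10 ** rng.normal(0, 0.2, 200)   # cumulated rainfall (mm)

X = sm.add_constant(np.log10(D))
y = np.log10(E)

ls = sm.OLS(y, X).fit()                              # least squares (central)
qr = sm.QuantReg(y, X).fit(q=0.05)                   # 5% quantile (threshold)

for name, res in (("LS", ls), ("QR 5%", qr)):
    alpha, beta = 10 ** res.params[0], res.params[1]
    print(name, round(alpha, 2), round(beta, 2))     # E = alpha * D**beta
```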
Wong, Tien Y; Liew, Gerald; Tapp, Robyn J; Schmidt, Maria Inês; Wang, Jie Jin; Mitchell, Paul; Klein, Ronald; Klein, Barbara E K; Zimmet, Paul; Shaw, Jonathan
2008-03-01
The WHO and American Diabetes Association criteria for diagnosing diabetes mellitus assume the presence of a glycaemic threshold with high sensitivity for identifying retinopathy. However, this assumption is based on data from three previous studies that had important limitations in detecting retinopathy. We aimed to provide updated data on the relation between fasting plasma glucose (FPG) and retinopathy, and to assess the diagnostic accuracy of current FPG thresholds in identifying both prevalent and incident retinopathy. We examined data from three cross-sectional adult populations: the Blue Mountains Eye Study (BMES, Australia, n=3162), the Australian Diabetes, Obesity and Lifestyle Study (AusDiab, Australia, n=2182), and the Multi-Ethnic Study of Atherosclerosis (MESA, USA, n=6079). Retinopathy was diagnosed from multiple retinal photographs of each eye and graded according to the modified Airlie House classification system. Plasma glucose concentrations were measured from fasting venous blood samples. The overall prevalence of retinopathy was 11.5% in BMES (95% CI 10.4-12.6%), 9.6% in AusDiab (8.4-10.9%), and 15.8% in MESA (14.9-16.7%). However, we found inconsistent evidence of a uniform glycaemic threshold for prevalent and incident retinopathy, with analyses suggesting a continuous relation. The widely used diabetes FPG cutoff of 7.0 mmol/L or higher had sensitivity of less than 40% (range 14.8-39.1%) for detecting retinopathy, with specificity between 80.8% and 95.8%. The area under the receiver operating characteristic curve for FPG and retinopathy was low, ranging from 0.56 to 0.61. We saw no evidence of a clear and consistent glycaemic threshold for the presence or incidence of retinopathy across different populations. The current FPG cutoff of 7.0 mmol/L used to diagnose diabetes did not accurately identify people with and without retinopathy. These findings suggest that the criteria for diagnosing diabetes may need reassessment.
Bettembourg, Charles; Diot, Christian; Dameron, Olivier
2015-01-01
Background The analysis of gene annotations referencing back to the Gene Ontology plays an important role in the interpretation of high-throughput experiment results. This analysis typically involves semantic similarity and particularity measures that quantify the importance of the Gene Ontology annotations. However, there is currently no sound method supporting the interpretation of the similarity and particularity values in order to determine whether two genes are similar or whether one gene has some significant particular function. Interpretation is frequently based either on an implicit threshold, or on an arbitrary one (typically 0.5). Here we investigate a method for determining thresholds supporting the interpretation of the results of a semantic comparison. Results We propose a method for determining the optimal similarity threshold by minimizing the proportions of false-positive and false-negative similarity matches. We compared the distributions of the similarity values of pairs of similar genes and pairs of non-similar genes. These comparisons were performed separately for all three branches of the Gene Ontology. In all situations, we found overlap between the similar and the non-similar distributions, indicating that some similar genes had a similarity value lower than the similarity value of some non-similar genes. We then extended this method to the semantic particularity measure and to a similarity measure applied to the ChEBI ontology. Thresholds were evaluated over the whole HomoloGene database. For each group of homologous genes, we computed all the similarity and particularity values between pairs of genes. Finally, we focused on the PPAR multigene family to show that the similarity and particularity patterns obtained with our thresholds were better at discriminating orthologs and paralogs than those obtained using default thresholds. Conclusion We developed a method for determining optimal semantic similarity and particularity thresholds. We applied this method to the GO and ChEBI ontologies. Qualitative analysis using the thresholds on the PPAR multigene family yielded biologically-relevant patterns. PMID:26230274
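The core idea, choosing the threshold that minimizes the combined false-positive and false-negative proportions over the two score distributions, can be sketched as follows. This is an illustration rather than the authors' implementation; synthetic beta-distributed scores stand in for real semantic-similarity values.

```python
import numpy as np

rng = np.random.default_rng(1)
sim_scores = rng.beta(6, 2, 1000)      # pairs known to be similar
nonsim_scores = rng.beta(2, 5, 1000)   # pairs known to be non-similar

def error_rate(threshold):
    fn = np.mean(sim_scores < threshold)      # similar pairs missed
    fp = np.mean(nonsim_scores >= threshold)  # non-similar pairs kept
    return fn + fp

# Sweep candidate thresholds and keep the one with the lowest FP + FN.
candidates = np.linspace(0, 1, 1001)
best = min(candidates, key=error_rate)
print(f"optimal threshold ~ {best:.3f} (vs. the arbitrary 0.5)")
```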
Sadeghi-Tehran, Pouria; Virlet, Nicolas; Sabermanesh, Kasra; Hawkesford, Malcolm J
2017-01-01
Accurately segmenting vegetation from the background within digital images is both a fundamental and a challenging task in phenotyping. The performance of traditional methods is satisfactory in homogeneous environments; however, performance decreases when they are applied to images acquired in dynamic field environments. In this paper, a multi-feature learning method is proposed to quantify vegetation growth in outdoor field conditions. The introduced technique is compared with the state-of-the-art and other learning methods on digital images. All methods are compared and evaluated under different environmental conditions against the following criteria: (1) comparison with ground-truth images, (2) variation along a day with changes in ambient illumination, (3) comparison with manual measurements, and (4) an estimation of performance along the full life cycle of a wheat canopy. The method described is capable of coping with the environmental challenges faced in field conditions, with high levels of adaptiveness and without the need to adjust a threshold for each digital image. The proposed method is also an ideal candidate for processing a time series of phenotypic information acquired in the field throughout crop growth. Moreover, the introduced method is not limited to growth measurements and can be applied to other tasks such as identifying weeds, diseases, stress, etc.
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.
2013-01-01
Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose 2 decision mechanisms that can produce the slope…
Sociocultural Experiences of Bulimic and Non-Bulimic Adolescents in a School-Based Chinese Sample
ERIC Educational Resources Information Center
Jackson, Todd; Chen, Hong
2010-01-01
From a large school-based sample (N = 3,084), 49 Mainland Chinese adolescents (31 girls, 18 boys) who endorsed all DSM-IV criteria for bulimia nervosa (BN) or sub-threshold BN and 49 matched controls (31 girls, 18 boys) completed measures of demographics and sociocultural experiences related to body image. Compared to less symptomatic peers, those…
Psychophysical Criteria for Visual Simulation Systems.
1980-05-01
definitive data were found to establish detection thresholds; therefore, this is one area where a psychophysical study was recommended. Differential size... The specific functional relationships needing quantification were the following: 1. The effect of Horizontal Aniseikonia on Target Detection and... Transition Technique 6. The Effects of Scene Complexity and Separation on the Detection of Scene Misalignment 7. Absolute Brightness Levels in...
Munderi, Paula; Kityo, Cissy; Reid, Andrew; Katabira, Elly; Goodall, Ruth L.; Grosskurth, Heiner; Mugyenyi, Peter; Hakim, James; Gibb, Diana M.
2013-01-01
Background In low-income countries, viral load (VL) monitoring of antiretroviral therapy (ART) is rarely available in the public sector for HIV-infected adults or children. Using clinical failure alone to identify first-line ART failure and trigger regimen switch may result in unnecessary use of costly second-line therapy. Our objective was to identify CD4 threshold values to confirm clinically-determined ART failure when VL is unavailable. Methods 3316 HIV-infected Ugandan/Zimbabwean adults were randomised to first-line ART with Clinically-Driven (CDM, CD4s measured but blinded) or routine Laboratory and Clinical Monitoring (LCM, 12-weekly CD4s) in the DART trial. CD4 at switch and ART failure criteria (new/recurrent WHO 4, single/multiple WHO 3 event; LCM: CD4<100 cells/mm3) were reviewed in 361 LCM and 314 CDM participants who switched over a median of 5 years of follow-up. Retrospective VLs were available in 368 (55%) participants. Results Overall, 265/361 (73%) LCM participants failed with CD4<100 cells/mm3; only 7 (2%) switched with CD4≥250 cells/mm3, four switches triggered by WHO events. Without CD4 monitoring, 207/314 (66%) CDM participants failed with WHO 4 events, and 77 (25%)/30 (10%) with single/multiple WHO 3 events. Failure/switching with single WHO 3 events was more likely with CD4≥250 cells/mm3 (28/77; 36%) (p = 0.0002). CD4 monitoring reduced switching with viral suppression: 23/187 (12%) LCM versus 49/181 (27%) CDM had VL<400 copies/ml at failure/switch (p<0.0001). Amongst CDM participants with CD4<250 cells/mm3 only 11/133 (8%) had VL<400 copies/ml, compared with 38/48 (79%) with CD4≥250 cells/mm3 (p<0.0001). Conclusion Multiple, but not single, WHO 3 events predicted first-line ART failure. A CD4 threshold 'tiebreaker' of ≥250 cells/mm3 for clinically-monitored patients failing first-line could identify ∼80% with VL<400 copies/ml, who are unlikely to benefit from second-line. Targeting CD4s to single WHO stage 3 'clinical failures' would particularly avoid premature, costly switch to second-line ART. PMID:23437399
Crystal growth for high-efficiency silicon solar cells workshop: Summary
NASA Technical Reports Server (NTRS)
Dumas, K. A.
1985-01-01
The state of the art in the growth of silicon crystals for high-efficiency solar cells is reviewed, sheet requirements are defined, and future areas of research are identified. Silicon sheet material characteristics that limit cell efficiencies and yields are described, as well as the criteria for the ideal sheet-growth method. The device engineers' wish list to the materials engineer included: silicon sheet with a long minority carrier lifetime that is uniform throughout the sheet and does not change during processing; and sheet material that stays flat throughout device processing, has uniformly good mechanical strength, and is low cost. Impurities in silicon solar cells degrade cell performance by reducing diffusion length and degrading junctions. The impurity behavior, degradation mechanisms, and variations in degradation threshold with diffusion length for silicon solar cells are described.
Agrawal, Arpana; Lynskey, Michael T
2007-05-11
Previous research has noted that a unidimensional latent construct underlies criteria for cannabis abuse and dependence. However, no study to date has explored whether gender contributes to heterogeneity in the latent abuse and dependence construct and, furthermore, whether after accounting for differences in the mean scores of abuse and dependence across genders, there is any evidence for heterogeneity in the individual abuse and dependence criteria. The present study utilizes data on criteria for cannabis abuse and dependence from a large, nationally representative sample (National Epidemiologic Survey on Alcohol and Related Conditions) of 8172 lifetime cannabis users to investigate whether gender contributes to heterogeneity in the underlying construct of cannabis abuse and dependence, and in each individual criterion as well. Analyses, all of which were conducted in Mplus, included factor analysis, as well as MIMIC and multiple-group models for an examination of dimensionality and gender heterogeneity, respectively. Results favor a unidimensional construct for cannabis abuse/dependence, as seen in prior research. We also identify two abuse (legal and hazard) and two dependence (quit and problems) criteria that show significant gender heterogeneity, with the abuse criteria exhibiting higher thresholds in women and the dependence criteria higher thresholds in men. We conclude that the criteria that serve as indicators of DSM-IV cannabis abuse and dependence do not function identically in men and women and that certain criteria (e.g. hazardous use) require further refinement.
Chirico, Nicola; Gramatica, Paola
2011-09-26
The main utility of QSAR models is their ability to predict activities/properties for new chemicals, and this external prediction ability is evaluated by means of various validation criteria. As a measure for such evaluation the OECD guidelines have proposed the predictive squared correlation coefficient Q(2)(F1) (Shi et al.). However, other validation criteria have been proposed by other authors: the Golbraikh-Tropsha method, r(2)(m) (Roy), Q(2)(F2) (Schüürmann et al.), Q(2)(F3) (Consonni et al.). In QSAR studies these measures are usually in accordance, though this is not always the case, thus doubts can arise when contradictory results are obtained. It is likely that none of the aforementioned criteria is the best in every situation, so a comparative study using simulated data sets is proposed here, using threshold values suggested by the proponents or those widely used in QSAR modeling. In addition, a different and simple external validation measure, the concordance correlation coefficient (CCC), is proposed and compared with other criteria. Huge data sets were used to study the general behavior of validation measures, and the concordance correlation coefficient was shown to be the most restrictive. On using simulated data sets of a more realistic size, it was found that CCC was broadly in agreement, about 96% of the time, with other validation measures in accepting models as predictive, and in almost all the examples it was the most precautionary. The proposed concordance correlation coefficient also works well on real data sets, where it seems to be more stable, and helps in making decisions when the validation measures are in conflict. Since it is conceptually simple, and given its stability and restrictiveness, we propose the concordance correlation coefficient as a complementary, or alternative, more prudent measure of a QSAR model to be externally predictive.
Evidence of hearing loss in a “normally-hearing” college-student population
Le Prell, C. G.; Hensley, B.N.; Campbell, K. C. M.; Hall, J. W.; Guire, K.
2011-01-01
We report pure-tone hearing threshold findings in 56 college students. All subjects reported normal hearing during telephone interviews, yet not all subjects had normal sensitivity as defined by well-accepted criteria. At one or more test frequencies (0.25–8 kHz), 7% of ears had thresholds ≥25 dB HL and 12% had thresholds ≥20 dB HL. The proportion of ears with abnormal findings decreased when three-frequency pure-tone-averages were used. Low-frequency PTA hearing loss was detected in 2.7% of ears and high-frequency PTA hearing loss was detected in 7.1% of ears; however, there was little evidence for “notched” audiograms. There was a statistically reliable relationship in which personal music player use was correlated with decreased hearing status in male subjects. Routine screening and education regarding hearing loss risk factors are critical as college students do not always self-identify early changes in hearing. Large-scale systematic investigations of college students’ hearing status appear to be warranted; the current sample size was not adequate to precisely measure potential contributions of different sound sources to the elevated thresholds measured in some subjects. PMID:21288064
Probing the Cosmic Gamma-Ray Burst Rate with Trigger Simulations of the Swift Burst Alert Telescope
NASA Technical Reports Server (NTRS)
Lien, Amy; Sakamoto, Takanori; Gehrels, Neil; Palmer, David M.; Barthelmy, Scott D.; Graziani, Carlo; Cannizzo, John K.
2013-01-01
The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae and stellar evolution. Additionally, the GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs are observed by Swift, it remains difficult to determine the intrinsic GRB rate due to the complex trigger algorithm of Swift. Current studies of the GRB rate usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike previously flown GRB instruments, Swift has over 500 trigger criteria based on photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we develop a program that is capable of simulating all the rate trigger criteria and mimicking the image threshold. Our simulations show that adopting the complex trigger algorithm of Swift increases the detection rate of dim bursts. As a result, our simulations suggest bursts need to be dimmer than previously expected to avoid over-producing the number of detections and to match the Swift observations. Moreover, our results indicate that these dim bursts are more likely to be high-redshift events than low-luminosity GRBs. This would imply an even higher cosmic GRB rate at large redshifts than previous expectations based on star-formation rate measurements, unless other factors, such as luminosity evolution, are taken into account. The GRB rate from our best result gives a total number of 4568 (+825/-1429) GRBs per year that are beamed toward us in the whole universe.
Diagnostic depressive symptoms of the mixed bipolar episode.
Cassidy, F; Ahearn, E; Murry, E; Forest, K; Carroll, B J
2000-03-01
There is not yet consensus on the best diagnostic definition of mixed bipolar episodes. Many have suggested the DSM-III-R/-IV definition is too rigid. We propose alternative criteria using data from a large patient cohort. We evaluated 237 manic in-patients using DSM-III-R criteria and the Scale for Manic States (SMS). A bimodally distributed factor of dysphoric mood has been reported from the SMS data. We used both the factor and the DSM-III-R classifications to identify candidate depressive symptoms and then developed three candidate depressive symptom sets. Using ROC analysis we determined the optimal threshold number of symptoms in each set and compared the three ROC solutions. The optimal solution was tested against the DSM-III-R classification for cross-validation. The optimal ROC solution was a set derived from both the DSM-III-R and the SMS, and the optimal threshold for diagnosis was two or more symptoms. Applying this set iteratively to the DSM-III-R classification produced the identical ROC solution. The prevalence of mixed episodes in the cohort was 13.9% by DSM-III-R, 20.2% by the dysphoria factor and 27.4% by the new ROC solution. A diagnostic set of six dysphoric symptoms (depressed mood, anhedonia, guilt, suicide, fatigue and anxiety), with a threshold of two symptoms, is proposed for a mixed episode. This new definition has a foundation in clinical data, in the proven diagnostic performance of the qualifying symptoms, and in ROC validation against two previous definitions that each have face validity.
An improved PRoPHET routing protocol in delay tolerant network.
Han, Seung Deok; Chung, Yun Won
2015-01-01
In a delay tolerant network (DTN), an end-to-end path is not guaranteed and packets are delivered from a source node to a destination node via store-carry-forward based routing. In a DTN, a source node or an intermediate node stores packets in a buffer and carries them while it moves around. These packets are forwarded to other nodes based on predefined criteria and are finally delivered to a destination node via multiple hops. In this paper, we improve the dissemination speed of the PRoPHET (Probabilistic Routing Protocol using History of Encounters and Transitivity) protocol by employing the epidemic protocol for disseminating a message m whenever the forwarding counter and hop counter values are smaller than or equal to the threshold values, as sketched below. The performance of the proposed protocol was analyzed in terms of delivery probability, average delay, and overhead ratio. Numerical results show that the proposed protocol can improve the delivery probability, average delay, and overhead ratio of the PRoPHET protocol by appropriately selecting the threshold forwarding counter and threshold hop counter values.
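A minimal sketch of that hybrid forwarding rule follows; the class and threshold names are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Message:
    forward_count: int   # times this node has handed the message on
    hop_count: int       # hops the message has traversed so far

F_THRESH = 2   # assumed threshold forwarding counter
H_THRESH = 2   # assumed threshold hop counter

def should_forward(msg: Message, p_self: float, p_peer: float) -> bool:
    """Decide whether to hand msg to an encountered peer."""
    if msg.forward_count <= F_THRESH and msg.hop_count <= H_THRESH:
        return True                 # epidemic phase: always copy
    return p_peer > p_self          # PRoPHET phase: better carrier only
```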
Eagle-eyed visual acuity: an experimental investigation of enhanced perception in autism.
Ashwin, Emma; Ashwin, Chris; Rhydderch, Danielle; Howells, Jessica; Baron-Cohen, Simon
2009-01-01
Anecdotal accounts of sensory hypersensitivity in individuals with autism spectrum conditions (ASC) have been noted since the first reports of the condition. Over time, empirical evidence has supported the notion that those with ASC have superior visual abilities compared with control subjects. However, it remains unclear whether these abilities are specifically the result of differences in sensory thresholds (low-level processing) rather than higher-level cognitive processes. This study investigates visual thresholds in n = 15 individuals with ASC and n = 15 individuals without ASC, using a standardized optometric test, the Freiburg Visual Acuity and Contrast Test, to investigate basic low-level visual acuity. Individuals with ASC have significantly better visual acuity (20:7) compared with control subjects (20:13), an acuity so superior that it lies in the region reported for birds of prey. The results of this study suggest that inclusion of sensory hypersensitivity in the diagnostic criteria for ASC may be warranted and that basic standardized tests of sensory thresholds may inform causal theories of ASC.
Generation of synthetic flood hydrographs by hydrological donors (SHYDONHY method)
NASA Astrophysics Data System (ADS)
Paquet, Emmanuel
2017-04-01
For the design of hydraulic infrastructures such as dams, a design hydrograph is required in most cases. Some of its features (e.g. peak value, duration, volume) corresponding to a given return period are computed with a wide range of methods: historical records, mono- or multivariate statistical analysis, stochastic simulation, etc. Various methods have then been proposed to construct design hydrographs having such characteristics, ranging from the traditional unit hydrograph to statistical methods (Yue et al., 2002). A new method to build design hydrographs (or more generally synthetic hydrographs) is introduced here, named SHYDONHY, a French acronym for "Synthèse d'HYdrogrammes par DONneurs HYdrologiques". It is based on an extensive database of 100,000 flood hydrographs recorded at hourly time step at 1300 gauging stations in France and Switzerland, covering a wide range of catchment sizes and climatologies. For each station, an average of two hydrographs per year of record has been selected by a peak-over-threshold (POT) method with independence criteria (Lang et al., 1999). This sampling ensures that only hydrographs of intense floods are gathered in the dataset. For a given catchment, where few or no hydrographs are available at the outlet, a sub-set of 10 "donor stations" is selected within the complete dataset, considering several criteria: proximity, size, mean annual values and regimes for both total runoff and POT-selected floods. This sub-set of stations (and their corresponding flood hydrographs) makes it possible to: • estimate a characteristic duration of flood hydrographs (e.g. the duration for which the discharge is above 50% of the peak value); • for a given duration (e.g. one day), estimate the average peak-to-volume ratio of floods; • for a given duration and peak-to-volume ratio, generate a synthetic reference hydrograph by combining appropriate hydrographs of the sub-set; • for a given daily discharge sequence, whether observed or generated for extreme flood estimation, generate a suitable synthetic hydrograph, also by combining selected hydrographs of the sub-set. The reliability of the method is assessed by performing a jackknife validation on the whole dataset of stations, in particular by reconstructing the hydrograph of the biggest flood of each station and comparing it to the actual one. Some applications are presented, e.g. the coupling of SHYDONHY with the SCHADEX method (Paquet et al., 2013) for the stochastic simulation of extreme reservoir levels in dams. References: Lang, M., Ouarda, T. B. M. J., & Bobée, B. (1999). Towards operational guidelines for over-threshold modeling. Journal of Hydrology, 225(3), 103-117. Paquet, E., Garavaglia, F., Garçon, R., & Gailhard, J. (2013). The SCHADEX method: A semi-continuous rainfall-runoff simulation for extreme flood estimation. Journal of Hydrology, 495, 23-37. Yue, S., Ouarda, T. B., Bobée, B., Legendre, P., & Bruneau, P. (2002). Approach for describing statistical properties of flood hydrograph. Journal of Hydrologic Engineering, 7(2), 147-153.
Jing, X; Cimino, J J
2014-01-01
Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information, by applying it to large single and paired data sets taken from patient and bibliographic databases. Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modification [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The setting and selection of thresholds, such as thresholds for node counts, class counts, ratio values, p values (for paired 'diff' data sets), and percentiles of selected class count thresholds, are demonstrated in detail in the case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for threshold selection. The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with the most patients in a research hospital in 2011, 3) a profile of publications on "heavily represented topics" in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication pioglitazone. Our filtering method reduces large graphs to a manageable size by removing relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical terminology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
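As a toy illustration of node-count thresholding in this spirit (not the authors' software; the codes and counts are fabricated for the example), a large hierarchy-coded graph can be filtered like this:

```python
import networkx as nx

# A tiny hierarchy of ICD9-CM-style codes with synthetic usage counts.
g = nx.DiGraph()
g.add_edges_from([("250", "250.0"), ("250", "250.1"), ("401", "401.9")])
counts = {"250": 120, "250.0": 90, "250.1": 4, "401": 60, "401.9": 55}

NODE_COUNT_THRESHOLD = 10  # assumed threshold value

# Keep only nodes whose usage count meets the threshold.
kept = [n for n in g if counts.get(n, 0) >= NODE_COUNT_THRESHOLD]
summary = g.subgraph(kept)
print(sorted(summary.nodes))  # ['250', '250.0', '401', '401.9']
```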
NASA Astrophysics Data System (ADS)
Liang, J.; Liu, D.
2017-12-01
Emergency responses to floods require timely information on water extents, which can be produced by satellite-based remote sensing. As SAR images can be acquired under adverse illumination and weather conditions, they are particularly suitable for delineating water extent during a flood event. Thresholding SAR imagery is one of the most widely used approaches to delineate water extent. However, most studies apply only one threshold to separate water and dry land, without considering the complexity and variability of the different dry-land surface types in an image. This paper proposes a new thresholding method for SAR imagery to delineate water from different land cover types. A probability distribution of SAR backscatter intensity is fitted for each land cover type, including water, before a flood event, and the intersection between two distributions is regarded as the threshold to classify the two. To extract water, a set of thresholds is applied to several pairs of land cover types (e.g. water and urban, or water and forest). The resulting water subsets are merged to form the overall water extent for the SAR image acquired during or after the flooding. Experiments show that this land-cover-based thresholding approach outperformed traditional single thresholding by about 5% to 15%. This method has great application potential given the broad acceptance of thresholding-based methods and the availability of land cover data, especially for heterogeneous regions.
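One way to realize the per-class-pair thresholding described above, assuming Gaussian backscatter models with illustrative parameters, is to take the analytic intersection of two fitted densities:

```python
import numpy as np

def gaussian_intersection(mu1, s1, mu2, s2):
    """Solve N(mu1, s1) == N(mu2, s2); return the root between the means."""
    a = 1 / (2 * s1**2) - 1 / (2 * s2**2)
    b = mu2 / s2**2 - mu1 / s1**2
    c = mu1**2 / (2 * s1**2) - mu2**2 / (2 * s2**2) - np.log(s2 / s1)
    roots = np.roots([a, b, c])
    lo, hi = sorted((mu1, mu2))
    return next(r for r in roots if lo <= r <= hi)

# Backscatter (dB): water is typically darker than forest; parameters
# here are assumed, not fitted from real SAR data.
t = gaussian_intersection(mu1=-18.0, s1=2.0, mu2=-9.0, s2=3.0)
print(f"water/forest threshold ~ {t:.1f} dB")  # pixels below -> water
```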
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determining threshold values for the signatures included in an assay are described. Each signature enables detection of a target. The methods determine a probability density function for negative samples and a corresponding false positive rate curve. A false positive criterion is established, and the threshold for a signature is determined as the point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results, together with a method for determining a desired limit of detection for a signature in an assay, are also described.
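Under a simple parametric assumption (normally distributed negative-sample signals; all numbers are illustrative), the described thresholding logic reduces to a few lines:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
negatives = rng.normal(loc=100.0, scale=15.0, size=5000)  # negative signals

FP_CRITERION = 0.001  # accept 0.1% false positives

mu, sigma = negatives.mean(), negatives.std(ddof=1)
# FPR(t) = P(negative signal > t) under the fitted normal model;
# the threshold is the point where FPR(t) equals the criterion.
threshold = stats.norm.ppf(1 - FP_CRITERION, loc=mu, scale=sigma)
print(f"call positive above {threshold:.1f} (FPR = {FP_CRITERION:.1%})")
```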
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and has impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling was found to be inadequate for uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kyung-Min; Min Kim, Chul; Moon Jeong, Tae, E-mail: jeongtm@gist.ac.kr
A computational method based on a first-principles multiscale simulation has been used for calculating the optical response and the ablation threshold of an optical material irradiated with an ultrashort intense laser pulse. The method employs Maxwell's equations to describe laser pulse propagation and time-dependent density functional theory to describe the generation of conduction band electrons in an optical medium. Optical properties, such as reflectance and absorption, were investigated for laser intensities in the range 10^10 W/cm^2 to 2 × 10^15 W/cm^2, based on the theory of generation and spatial distribution of the conduction band electrons. The method was applied to investigate the changes in the optical reflectance of α-quartz bulk, half-wavelength thin-film, and quarter-wavelength thin-film samples and to estimate their ablation thresholds. Despite the adiabatic local density approximation used in calculating the exchange-correlation potential, the reflectance and the ablation threshold obtained from our method agree well with previous theoretical and experimental results. The method can be applied to estimate the ablation thresholds for optical materials in general. The ablation threshold data can be used to design ultra-broadband high-damage-threshold coating structures.
Blick, Gary
2013-03-01
To determine the incidence of hypogonadism in men with human immunodeficiency virus (HIV)/acquired immunodeficiency virus (AIDS), the most useful serum testosterone measurement and threshold for diagnosing hypogonadism, and the comparative efficacy of 2 testosterone replacement therapy (TRT) 1% gels (AndroGel® [Abbott Laboratories] and Testim® [Auxilium Pharmaceuticals, Inc.]). This was a 2-stage observational study. In stage 1, patient records from 2 medical practices specializing in HIV/AIDS were reviewed. Eligible patients were aged ≥ 18 years; had HIV-seropositive status confirmed by enzyme-linked immunosorbent assay and western blot test or HIV-1 viremia confirmed by HIV-1 RNA polymerase chain reaction; and had prior baseline testosterone assessments for hypogonadism (ie, presence of signs/symptoms of hypogonadism as well as total testosterone [TT] and free testosterone [FT] level measurements). Stage 2 included the evaluation of patients from stage 1 who were treated with 5 to 10 g/day of TRT. The stage 2 inclusion criteria were a diagnosis of low testosterone (defined as TT level < 300 ng/dL and/or FT level < 50 pg/mL, as per The Endocrine Society guidelines and presence/absence of hypogonadal signs and symptoms); ≥ 12 months of evaluable sign and symptom assessments and TT/FT level measurements while on TRT with either Testim® or AndroGel®; and ≥ 4 weeks on initial TRT if the initial TRT was switched or discontinued. Four hundred one of 422 patients met the stage 1 inclusion criteria and 167 of 401 patients (AndroGel®, n = 92; Testim®, n = 75) met the stage 2 inclusion criteria. Total testosterone level < 300 ng/dL alone identified 24% (94 of 390) of patients as hypogonadal, but failed to diagnose an additional 111 patients (67.7%) with FT levels < 100 pg/mL and hypogonadal symptoms. Through month 12, AndroGel® increased mean TT levels by +42.8% and FT levels by +66.9%, compared with +178.7% (P = 0.017) and +191% (P = 0.039), respectively, for Testim®. Patients treated with Testim® showed significantly greater improvements in libido, sexual performance, nighttime energy, focus/concentration, and abdominal girth, and trends for greater improvement in fatigue and erectile dysfunction than patients treated with AndroGel®. No patients discontinued therapy due to adverse events. The most useful serum testosterone measurement and threshold for diagnosing hypogonadism in men with HIV/AIDS was FT level < 100 pg/mL, which identified 64% of men as hypogonadal with the presence of ≥ 1 hypogonadal symptom. This is above currently accepted thresholds. Criteria using TT level < 300 ng/dL and FT level < 50 pg/mL only diagnosed 24% and 19% of patients, respectively, as having hypogonadism. Testim® was more effective than AndroGel® in increasing TT and FT levels and improving hypogonadal symptoms.
Determination of ReQuest-based symptom thresholds to define symptom relief in GERD clinical studies.
Stanghellini, Vincenzo; Armstrong, David; Mönnikes, Hubert; Berghöfer, Peter; Gatz, Gudrun; Bardhan, Karna Dev
2007-01-01
The growing importance of symptom assessment is evident from the numerous clinical studies on gastroesophageal reflux disease (GERD) assessing treatment-induced symptom relief. However, to date, the a priori selection of criteria defining symptom relief has been arbitrary. The present study was designed to prospectively identify GERD symptom thresholds for the broad spectrum of GERD-related symptoms assessed by the validated reflux questionnaire (ReQuest) and its subscales, ReQuest-GI (gastrointestinal symptoms) and ReQuest-WSO (general well-being, sleep disturbances, other complaints), in individuals without evidence of GERD. In this 4-day evaluation in Germany, 385 individuals without evidence of GERD were included. On the first day, participants completed the ReQuest, the Gastrointestinal Symptom Rating Scale, and the Psychological General Well-Being scale. On the other days, participants filled in the ReQuest only. GERD symptom thresholds were calculated for ReQuest and its subscales, based on the respective 90th percentiles. GERD symptom thresholds were 3.37 for ReQuest, 0.95 for ReQuest-GI, and 2.46 for ReQuest-WSO. Even individuals without evidence of GERD may experience some mild symptoms that are commonly ascribed to GERD. GERD symptom thresholds derived in this study can be used to define the global symptom relief in patients with GERD. Copyright 2007 S. Karger AG, Basel.
Yang, Chihae; Barlow, Susan M; Muldoon Jacobs, Kristi L; Vitcheva, Vessela; Boobis, Alan R; Felter, Susan P; Arvidson, Kirk B; Keller, Detlef; Cronin, Mark T D; Enoch, Steven; Worth, Andrew; Hollnagel, Heli M
2017-11-01
A new dataset of cosmetics-related chemicals for the Threshold of Toxicological Concern (TTC) approach has been compiled, comprising 552 chemicals with 219, 40, and 293 chemicals in Cramer Classes I, II, and III, respectively. Data were integrated and curated to create a database of No-/Lowest-Observed-Adverse-Effect Level (NOAEL/LOAEL) values, from which the final COSMOS TTC dataset was developed. Criteria for study inclusion and NOAEL decisions were defined, and rigorous quality control was performed for study details and assignment of Cramer classes. From the final COSMOS TTC dataset, human exposure thresholds of 42 and 7.9 μg/kg-bw/day were derived for Cramer Classes I and III, respectively. The size of Cramer Class II was insufficient for derivation of a TTC value. The COSMOS TTC dataset was then federated with the dataset of Munro and colleagues, previously published in 1996, after updating the latter using the quality control processes for this project. This federated dataset expands the chemical space and provides more robust thresholds. The 966 substances in the federated database comprise 245, 49 and 672 chemicals in Cramer Classes I, II and III, respectively. The corresponding TTC values of 46, 6.2 and 2.3 μg/kg-bw/day are broadly similar to those of the original Munro dataset. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Patel, Kushang V; Longo, Dan L; Ershler, William B; Yu, Binbing; Semba, Richard D; Ferrucci, Luigi; Guralnik, Jack M
2009-05-01
Mildly low haemoglobin concentration is associated with increased mortality in older adults. However, this relationship has not been well characterized in racial/ethnic minorities. Therefore, this study determined the haemoglobin threshold below which risk of death is significantly increased in older non-Hispanic whites, non-Hispanic blacks, and Mexican Americans. Data on 4089 participants of the 1988-94 US National Health and Nutrition Examination Survey who were ≥65 years of age were analyzed, with mortality follow-up through December 31, 2000. Mean haemoglobin in non-Hispanic whites (n = 2686) and Mexican Americans (n = 663) was 140 g/l, while in non-Hispanic blacks (n = 740) the mean was 10 g/l lower. A total of 1944 (47.5%) participants died. Among non-Hispanic whites and Mexican Americans, age- and sex-adjusted models showed that the haemoglobin thresholds below which mortality risk was significantly increased were 4 and 2 g/l, respectively, above the World Health Organization (WHO) cut-off points for anaemia. In contrast, the threshold for non-Hispanic blacks was 7 g/l below the WHO criteria. Similar threshold effects were observed when analyzing haemoglobin in categories and adjusting for multiple confounders. In conclusion, the haemoglobin threshold below which mortality rises significantly is a full 10 g/l (1 g/dl) lower in non-Hispanic blacks than in non-Hispanic whites and Mexican Americans.
Chleborad, Alan F.; Baum, Rex L.; Godt, Jonathan W.
2006-01-01
Empirical rainfall thresholds and related information form a basis for forecasting landslides in the Seattle area. A formula for a cumulative rainfall threshold (CT), P3 = 3.5 - 0.67P15, defined by rainfall amounts (in inches) during the last 3 days (72 hours), P3, and the previous 15 days (360 hours), P15, was developed from analysis of historical data for 91 landslides that occurred as part of 3-day events of three or more landslides between 1933 and 1997. Comparison with historical records for 577 landslides (including some used in developing the CT) indicates that the CT captures more than 90 percent of historical landslide events of three or more landslides in 1-day and 3-day periods recorded from 1978 to 2003. However, the probability of landslide occurrence on a day when the CT is exceeded at any single rain gage (8.4 percent) is low, and additional criteria are needed to confidently forecast landslide occurrence. Exceedance of a rainfall intensity-duration threshold, I = 3.257D^-1.13, for intensity, I (inches per hour), and duration, D (hours), corresponds to a higher probability of landslide occurrence (42 percent at any 3 rain gages or 65 percent at any 10 rain gages), but it predicts fewer landslides. Both thresholds must be used in tandem to forecast landslide occurrence in Seattle.
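A small sketch of applying the two thresholds in tandem (rainfall in inches, duration in hours; the example inputs are placeholders, not observed data):

```python
def ct_exceeded(p3: float, p15: float) -> bool:
    """Cumulative threshold: last-3-day rain vs. prior-15-day rain."""
    return p3 >= 3.5 - 0.67 * p15

def id_exceeded(intensity: float, duration: float) -> bool:
    """Intensity-duration threshold I = 3.257 * D**-1.13."""
    return intensity >= 3.257 * duration**-1.13

# Flag landslide-favorable conditions only when both thresholds are
# exceeded, as the abstract recommends.
alert = ct_exceeded(p3=2.8, p15=2.0) and id_exceeded(0.35, 12.0)
print("heightened landslide potential" if alert else "below thresholds")
```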
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation for a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different signal-to-noise ratio (SNR) conditions. In addition, two optimum threshold selection methods are introduced: TmCoG (using m% of the maximum intensity of the spot as the threshold) and TkCoG (using μn + kσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise). First, their impact on the detection error under various SNR conditions is simulated to determine how to choose the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
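A minimal sketch of the TmCoG variant follows, assuming the common convention of subtracting the threshold before weighting; the spot and noise are synthetic:

```python
import numpy as np

def tm_cog(img: np.ndarray, m: float = 20.0):
    """CoG after removing pixels below m% of the peak intensity."""
    t = img.max() * m / 100.0
    w = np.where(img > t, img - t, 0.0)   # subtract threshold, clip at 0
    yy, xx = np.indices(img.shape)
    total = w.sum()
    return (xx * w).sum() / total, (yy * w).sum() / total

# Noisy Gaussian spot centred at (12.3, 8.7) on a 32x32 detector.
yy, xx = np.indices((32, 32))
spot = np.exp(-((xx - 12.3)**2 + (yy - 8.7)**2) / (2 * 2.0**2))
img = spot + np.random.default_rng(3).normal(0, 0.02, spot.shape)
print(tm_cog(img, m=20))   # close to (12.3, 8.7)
```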
A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events
NASA Astrophysics Data System (ADS)
Kholodovsky, V.
2017-12-01
Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often used independently to model those phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.
Gender differences in developmental dyscalculia depend on diagnostic criteria
Devine, Amy; Soltész, Fruzsina; Nobes, Alison; Goswami, Usha; Szűcs, Dénes
2013-01-01
Developmental dyscalculia (DD) is a learning difficulty specific to mathematics learning. The prevalence of DD may be equivalent to that of dyslexia, posing an important challenge for effective educational provision. Nevertheless, there is no agreed definition of DD, and there are controversies surrounding cutoff decisions, specificity and gender differences. In the current study, 1004 British primary school children completed mathematics and reading assessments. The prevalence of DD and the gender ratio were estimated in this sample using different criteria. When using absolute thresholds, the prevalence of DD was the same for both genders regardless of the cutoff criteria applied; however, gender differences emerged when using a mathematics-reading discrepancy definition. Correlations between mathematics performance and the control measures selected to identify a specific learning difficulty affect both prevalence estimates and whether a gender difference is in fact identified. Educational implications are discussed. PMID:27667904
Girodon, François; Bonicelli, Gilles; Schaeffer, Céline; Mounier, Morgane; Carillo, Serge; Lafon, Ingrid; Carli, Paule Marie; Janoray, Inès; Ferrant, Emmanuelle; Maynadié, Marc
2009-01-01
To observe the effect of the new World Health Organization (WHO) criteria on the incidence of myeloproliferative neoplasms, we performed a retrospective study of a population-based registry in the Côte d’Or area, France, from 1980 to 2007. A total of 524 myeloproliferative neoplasms were registered for the 1980–2007 period, including 135 polycythemia vera, 308 essential thrombocythemia and 81 idiopathic myelofibroses. No change in the incidence of either polycythemia vera or idiopathic myelofibrosis was observed for the 2005–2007 period, compared to 1980–2004. On the contrary, a pronounced increase in the incidence of essential thrombocythemia was noted after 2005, mainly due to the use of JAK2 mutation screening and a lower threshold of platelet count. Our study confirms the relevance of the new WHO diagnostic criteria in allowing earlier diagnosis of essential thrombocythemia. PMID:19377078
NASA Astrophysics Data System (ADS)
Katzensteiner, H.; Bell, R.; Petschko, H.; Glade, T.
2012-04-01
The prediction and forecasting of widespread landsliding for a given triggering event is an open research question. Numerous studies have tried to link spatial rainfall and landslide distributions. This study focuses on analysing the relationship between intensive precipitation and rainfall-triggered shallow landslides in the year 2009 in Lower Austria. Landslide distributions were obtained from the building ground register, which is maintained by the Geological Survey of Lower Austria and contains detailed information on landslides registered through damage reports. Spatially distributed rainfall estimates were extracted from the INCA (Integrated Nowcasting through Comprehensive Analysis) precipitation analysis, a combination of station data interpolation and radar data at a spatial resolution of 1 km developed by the Central Institute for Meteorology and Geodynamics (ZAMG), Vienna, Austria. The importance of the data source is shown by comparing rainfall data based on reference gauges, spatial interpolation and the INCA analysis for a certain storm period. INCA precipitation data can detect precipitating cells that do not hit a station but might trigger a landslide, which is an advantage over the use of reference stations for the definition of rainfall thresholds. Empirical thresholds at the regional scale were determined based on rainfall intensity and duration in the year 2009 and landslide information. These thresholds depend on the criteria that separate landslide-triggering from non-triggering precipitation events, and different approaches for defining thresholds alter the shape of the threshold as well. A preliminary threshold I = 8.8263D^(-0.672) for extreme rainfall events in summer in Lower Austria was defined. A verification of the threshold with similar events of other years, as well as further analyses based on a larger landslide database, is in progress.
Generalizability of Evidence-Based Assessment Recommendations for Pediatric Bipolar Disorder
Jenkins, Melissa M.; Youngstrom, Eric A.; Youngstrom, Jennifer Kogos; Feeny, Norah C.; Findling, Robert L.
2013-01-01
Bipolar disorder is frequently clinically diagnosed in youths who do not actually satisfy DSM-IV criteria, yet cases that would satisfy full DSM-IV criteria are often undetected clinically. Evidence-based assessment methods that incorporate Bayesian reasoning have demonstrated improved diagnostic accuracy and consistency; however, their clinical utility is largely unexplored. The present study examines the effectiveness of promising evidence-based decision-making compared to the clinical gold standard. Participants were 562 youth, ages 5-17 and predominantly African American, drawn from a community mental health clinic. Research diagnoses combined a semi-structured interview with the youths' psychiatric, developmental, and family mental health histories. Independent Bayesian estimates, which relied on published risk estimates from other samples, discriminated bipolar diagnoses (area under the curve = .75, p < .00005). The Bayesian estimates and clinicians' confidence ratings correlated at rs = .30. Agreement about an evidence-based assessment "threshold model" (wait/assess/treat) had kappa = .24, p < .05. No potential moderators of agreement between the Bayesian estimates and confidence ratings, including type of bipolar illness, were significant. Bayesian risk estimates were highly correlated with logistic regression estimates using optimal sample weights, r = .81, p < .0005. Clinical and Bayesian approaches agree in terms of overall concordance and deciding the next clinical action, even when Bayesian predictions are based on published estimates from clinically and demographically different samples. Evidence-based assessment methods may be useful in settings that cannot routinely employ gold standard assessments, and they may help decrease rates of overdiagnosis while promoting earlier identification of true cases. PMID:22004538
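The Bayesian updating step that underlies such estimates is simple to express; the following sketch uses illustrative numbers, not values from the study:

```python
def posterior(prior: float, lr: float) -> float:
    """Update a prior probability with a likelihood ratio via odds."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

def next_action(p: float, assess_at: float = 0.10, treat_at: float = 0.50):
    """Wait/assess/treat logic with assumed threshold values."""
    if p < assess_at:
        return "wait"
    return "assess further" if p < treat_at else "treat"

# e.g., a clinic base rate of 6% and an elevated screening score with
# a published diagnostic likelihood ratio of 7 (both assumed here).
p = posterior(prior=0.06, lr=7.0)
print(f"posterior ~ {p:.2f} -> {next_action(p)}")
```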
Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2017-02-01
Axles are important parts of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR); therefore, the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in ultrasonic axle defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods using real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
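For context, a standard soft-thresholding baseline (universal threshold with PyWavelets), not the paper's novel two-variable function, looks like this on a synthetic A-scan:

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
echo = np.exp(-((t - 0.6) / 0.01)**2) * np.sin(2 * np.pi * 200 * t)
signal = echo + rng.normal(0, 0.2, t.size)            # noisy A-scan

coeffs = pywt.wavedec(signal, "db6", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
thresh = sigma * np.sqrt(2 * np.log(signal.size))     # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db6")
```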
Alava, Juan José; Ross, Peter S; Lachmuth, Cara; Ford, John K B; Hickie, Brendan E; Gobas, Frank A P C
2012-11-20
The development of an area-based polychlorinated biphenyl (PCB) food-web bioaccumulation model enabled a critical evaluation of the efficacy of sediment quality criteria and prey tissue residue guidelines in protecting fish-eating resident killer whales of British Columbia and adjacent waters. Model-predicted and observed PCB concentrations in resident killer whales and Chinook salmon were in good agreement, supporting the model's application for risk assessment and criteria development. Model application shows that PCB concentrations in the sediments from the resident killer whale's Critical Habitats and entire foraging range leads to PCB concentrations in most killer whales that exceed PCB toxicity threshold concentrations reported for marine mammals. Results further indicate that current PCB sediment quality and prey tissue residue criteria for fish-eating wildlife are not protective of killer whales and are not appropriate for assessing risks of PCB-contaminated sediments to high trophic level biota. We present a novel methodology for deriving sediment quality criteria and tissue residue guidelines that protect biota of high trophic levels under various PCB management scenarios. PCB concentrations in sediments and in prey that are deemed protective of resident killer whale health are much lower than current criteria values, underscoring the extreme vulnerability of high trophic level marine mammals to persistent and bioaccumulative contaminants.
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia
2018-04-01
This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes a rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall and duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and power-law regression perform well in ROC metrics. However, the intermediate thresholds, based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold, are much more informative, as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
Predicting geriatric falls following an episode of emergency department care: a systematic review.
Carpenter, Christopher R; Avidan, Michael S; Wildes, Tanya; Stark, Susan; Fowler, Susan A; Lo, Alexander X
2014-10-01
Falls are the leading cause of traumatic mortality in geriatric adults. Despite recent multispecialty guideline recommendations that advocate for proactive fall prevention protocols in the emergency department (ED), the ability of risk factors or risk stratification instruments to identify subsets of geriatric patients at increased risk for short-term falls is largely unexplored. This was a systematic review and meta-analysis of ED-based history, physical examination, and fall risk stratification instruments with the primary objective of providing a quantitative estimate of each risk factor's accuracy in predicting future falls. A secondary objective was to quantify ED fall risk assessment test and treatment thresholds using derived estimates of sensitivity and specificity. A medical librarian and two emergency physicians (EPs) conducted a medical literature search of PUBMED, EMBASE, CINAHL, CENTRAL, DARE, the Cochrane Registry, and Clinical Trials. Unpublished research was located by a hand search of emergency medicine (EM) research abstracts from national meetings. Inclusion criteria for original studies included ED-based assessment of pre-ED or post-ED fall risk in patients 65 years and older with sufficient detail to reproduce contingency tables for meta-analysis. Original study authors were contacted for additional details when necessary. The Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) was used to assess individual study quality for the studies that met inclusion criteria. When more than one qualitatively similar study assessed the same risk factor for falls at the same interval following an ED evaluation, meta-analysis was performed using Meta-DiSc software. The primary outcomes were sensitivity, specificity, and likelihood ratios for fall risk factors or risk stratification instruments. Secondary outcomes included estimates of test and treatment thresholds using the Pauker method based on accuracy, screening risk, and the projected benefits or harms of fall prevention interventions in the ED. A total of 608 unique and potentially relevant studies were identified, but only three met our inclusion criteria. Two studies that included 660 patients assessed 29 risk factors and two risk stratification instruments for falls in geriatric patients in the 6 months following an ED evaluation, while one study of 107 patients assessed the risk of falls in the preceding 12 months. A self-report of depression was associated with the highest positive likelihood ratio (LR) of 6.55 (95% confidence interval [CI] = 1.41 to 30.48). Six fall predictors were identified in more than one study (past falls, living alone, use of a walking aid, depression, cognitive deficit, and more than six medications), and meta-analysis was performed for these risk factors. One screening instrument was sufficiently accurate to identify a subset of geriatric ED patients at low risk for falls, with a negative LR of 0.11 (95% CI = 0.06 to 0.20). The test threshold was 6.6% and the treatment threshold was 27.5%. This study demonstrates the paucity of evidence in the literature regarding ED-based screening for risk of future falls among older adults. The screening tools and individual characteristics identified in this study provide an evidentiary basis on which to develop screening protocols for geriatric adults in the ED to reduce fall risk. © 2014 by the Society for Academic Emergency Medicine.
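Two of the quantities reported above follow from textbook formulas: a likelihood ratio updates fall risk through pretest-to-posttest odds, and the Pauker-style decision threshold is the harm/(harm + benefit) ratio. The sketch below uses illustrative numbers, not the study's data.

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Bayes update via odds: posttest odds = pretest odds x LR."""
    odds = pretest_p / (1.0 - pretest_p)
    post_odds = odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

def decision_threshold(harm, benefit):
    """Classic threshold probability p* = harm / (harm + benefit)."""
    return harm / (harm + benefit)

# e.g. a 20% baseline 6-month fall risk and a negative LR of 0.11
# yield roughly a 2.7% posttest risk, identifying a low-risk subset.
print(posttest_probability(0.20, 0.11))
```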
Genetic variance of tolerance and the toxicant threshold model.
Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki
2012-04-01
A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of a cladoceran species. To analyze the genetic variance of tolerance when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model of quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated model (the toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration, and that individual tolerance, characterized by the individual's threshold value, is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated as 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results imply that the species examined has the potential to acquire tolerance to this substance through evolutionary change. Copyright © 2012 SETAC.
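For the ANOVA route, the broad-sense heritability of clonal (isofemale) lines reduces to the intraclass correlation from a one-way ANOVA, since the between-line variance component then captures the total genetic variance. The sketch below assumes a balanced design and tolerance values already expressed on the underlying (probit) scale; it is not the authors' maximum likelihood implementation.

```python
import numpy as np

def broad_sense_h2(lines):
    """lines: list of equal-length arrays, one array of tolerance
    values per isofemale (clonal) line. Returns Vb / (Vb + Vw)."""
    k, n = len(lines), len(lines[0])
    grand_mean = np.concatenate(lines).mean()
    ms_between = n * sum((g.mean() - grand_mean) ** 2 for g in lines) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in lines) / (k * (n - 1))
    var_between = max((ms_between - ms_within) / n, 0.0)  # variance component
    return var_between / (var_between + ms_within)
```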
Lin, Guoping; Candela, Y; Tillement, O; Cai, Zhiping; Lefèvre-Seguin, V; Hare, J
2012-12-15
A method based on thermal bistability for ultralow-threshold microlaser optimization is demonstrated. When sweeping the pump laser frequency across a pump resonance, the dynamic thermal bistability slows down the power variation. The resulting line-shape modification enables real-time monitoring of the laser characteristic. We demonstrate this method on a functionalized microsphere exhibiting a submicrowatt laser threshold. The approach is validated by comparing the results with a step-by-step recording under quasi-static thermal conditions.
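For context, the quantity being optimized is the usual x-intercept of the above-threshold branch of the laser input-output curve; a minimal extraction sketch follows, assuming arrays of pump power and laser output. It illustrates conventional threshold fitting, not the bistability-based line-shape analysis itself.

```python
import numpy as np

def laser_threshold(pump_power, laser_output, fit_from):
    """Fit the above-threshold branch (pump >= fit_from) linearly and
    return its x-intercept, i.e. the estimated pump threshold."""
    mask = pump_power >= fit_from
    slope, intercept = np.polyfit(pump_power[mask], laser_output[mask], 1)
    return -intercept / slope
```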