Baseline predictability of daily east Asian summer monsoon circulation indices
NASA Astrophysics Data System (ADS)
Ai, Shucong; Chen, Quanliang; Li, Jianping; Ding, Ruiqiang; Zhong, Quanjia
2017-05-01
The nonlinear local Lyapunov exponent (NLLE) method is adopted to quantitatively determine the predictability limit of East Asian summer monsoon (EASM) intensity indices on a synoptic timescale. The predictability limit of EASM indices varies widely according to the definitions of the indices. EASM indices defined by zonal wind shear have a limit of around 7 days, which is higher than the predictability limit of EASM indices defined by sea level pressure (SLP) difference and meridional wind shear (about 5 days). The initial error of EASM indices defined by SLP difference and meridional wind shear shows faster growth than that of indices defined by zonal wind shear. Furthermore, the indices defined by zonal wind shear appear to fluctuate at lower frequencies, whereas the indices defined by SLP difference and meridional wind shear generally fluctuate at higher frequencies. This result may explain why the daily variability of the EASM indices defined by zonal wind shear tends to be more predictable than that of indices defined by SLP difference and meridional wind shear. Analysis of the temporal correlation coefficient (TCC) skill for EASM indices obtained from observations and from NCEP's Global Ensemble Forecasting System (GEFS) historical weather forecast dataset shows that GEFS has higher forecast skill for the EASM indices defined by zonal wind shear than for indices defined by SLP difference and meridional wind shear. The predictability limit estimated by the NLLE method is shorter than that in GEFS. In addition, the June-September average TCC skill for different daily EASM indices shows significant interannual variations from 1985 to 2015 in GEFS. However, the TCC for different types of EASM indices does not show coherent interannual fluctuations.
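As a rough illustration of the analog-pair idea behind NLLE-style predictability estimates, the sketch below standardizes a daily index, finds pairs of initially close states, averages their logarithmic error growth with lead time, and reports the lead at which that growth reaches 95% of its saturation value. Everything here (the analog threshold, the saturation criterion, the synthetic red-noise index) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

def nlle_predictability_limit(x, eps=0.05, max_lead=30, sat_frac=0.95):
    """Estimate a predictability limit from mean log error growth of analog pairs."""
    x = (x - x.mean()) / x.std()                 # standardize the index
    n = len(x) - max_lead
    growth = [[] for _ in range(max_lead + 1)]   # log error growth per lead time
    for i in range(n):
        for j in range(i + 10, n):               # skip near-in-time neighbors
            d0 = abs(x[i] - x[j])
            if 0 < d0 < eps:                     # "analog" pair: initially close states
                for k in range(max_lead + 1):
                    growth[k].append(np.log(abs(x[i + k] - x[j + k]) + 1e-12) - np.log(d0))
    mean_growth = np.array([np.mean(g) for g in growth])
    target = sat_frac * mean_growth[-1]          # assume the error saturates by max_lead
    return int(np.argmax(mean_growth >= target))

rng = np.random.default_rng(0)                   # toy red-noise "monsoon index"
idx = np.zeros(2000)
for t in range(1, idx.size):
    idx[t] = 0.8 * idx[t - 1] + rng.normal()
print(f"estimated predictability limit: ~{nlle_predictability_limit(idx)} days")
```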
Holistic Life Prediction Methodology
2003-02-01
Engineering is a profession based in science, but in the face of limited data or resources, the application of... the process (see Table 1). HLPM uses continuum mechanics but defines limits of applicability; it is material and process specific. HLPM defines...
Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S
2013-09-01
Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
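A minimal numeric reading of the TEM idea, under made-up thermal limits and site temperatures: compute the margin between CTmax and LTmax and compare it with the size of environmental fluctuations to flag where a lethal-limit model is likely to fail.

```python
import numpy as np

def tem_failure_risk(ct_max, lt_max, env_temps):
    """TEM = gap between energetic performance failure (CTmax) and the upper
    lethal limit (LTmax). Per the abstract, a TEM that is large relative to
    environmental fluctuations implies the model will likely fail in new locales."""
    tem = lt_max - ct_max
    env_fluct = np.percentile(env_temps, 97.5) - np.percentile(env_temps, 50)
    verdict = "model failure likely" if tem > env_fluct else "model likely robust"
    return tem, env_fluct, verdict

rng = np.random.default_rng(1)
site_temps = rng.normal(24, 3, 1000)    # hypothetical summer temperatures (deg C)
print(tem_failure_risk(ct_max=28.0, lt_max=35.0, env_temps=site_temps))
```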
ERIC Educational Resources Information Center
Glover, Rebecca J.; Natesan, Prathiba; Wang, Jie; Rohr, Danielle; McAfee-Etheridge, Lauri; Booker, Dana D.; Bishop, James; Lee, David; Kildare, Cory; Wu, Minwei
2014-01-01
Explorations of relationships between Haidt's Moral Foundations Questionnaire (MFQ) and indices of moral decision-making assessed by the Defining Issues Test have been limited to correlational analyses. This study used Harm, Fairness, Ingroup, Authority and Purity to predict overall moral judgment and individual Defining Issues Test-2 (DIT-2)…
IDENTIFICATION AND PREDICTION OF FISH ASSEMBLAGES IN STREAMS OF THE MID-ATLANTIC HIGHLANDS, USA
Managing aquatic resources requires meaningful assessment endpoints on which to base decisions. In freshwater streams, assessment endpoints are often defined as fish communities. Given limited resources available for environmental monitoring, having a means of predicting fish a...
Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.
2012-01-01
Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
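The objective threshold selection described above can be illustrated with a small grid search: score every candidate intensity threshold by a skill metric that rewards hits and penalizes both false alarms (Type I) and missed events (Type II). The sketch below uses the true skill statistic as one such objective; the authors' exact objective function and data are not reproduced here.

```python
import numpy as np

def objective_id_threshold(intensity, debris_flow, candidates=None):
    """Pick a rainfall-intensity threshold by maximizing the true skill
    statistic (hit rate minus false-alarm rate), one simple way to balance
    Type I and Type II errors."""
    intensity = np.asarray(intensity, float)
    debris_flow = np.asarray(debris_flow, bool)
    if candidates is None:
        candidates = np.unique(intensity)
    best_thr, best_tss = None, -np.inf
    for thr in candidates:
        pred = intensity >= thr
        hits = np.sum(pred & debris_flow)
        misses = np.sum(~pred & debris_flow)          # Type II errors
        false_alarms = np.sum(pred & ~debris_flow)    # Type I errors
        correct_neg = np.sum(~pred & ~debris_flow)
        hit_rate = hits / max(hits + misses, 1)
        far = false_alarms / max(false_alarms + correct_neg, 1)
        tss = hit_rate - far
        if tss > best_tss:
            best_thr, best_tss = thr, tss
    return best_thr, best_tss

# toy 15-min peak intensities (mm/h) and observed debris-flow response
i15 = np.array([10, 18, 25, 32, 40, 12, 28, 35, 50, 22])
flow = np.array([0, 0, 0, 1, 1, 0, 1, 1, 1, 0], bool)
print(objective_id_threshold(i15, flow))   # picks 28 mm/h on this toy data
```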
NASA Astrophysics Data System (ADS)
Banabic, D.; Vos, M.; Paraianu, L.; Jurco, P.
2007-05-01
The experimental research on the formability of metal sheets has shown that there is a significant dispersion of the limit strains in an area delimited by two curves: a lower curve (LFLC) and an upper one (UFLC). The region between the two curves defines the so-called Forming Limit Band (FLB). So far, this forming band has only been determined experimentally. In this paper the authors suggest a method to predict the Forming Limit Band. The proposed method is illustrated for the AA6111-T43 aluminium alloy.
Defining sarcopenia in terms of incident adverse outcomes.
Woo, Jean; Leung, Jason; Morley, J E
2015-03-01
The objectives of this study were to compare the performance of different diagnoses of sarcopenia using European Working Group on Sarcopenia in Older People, International Working Group on Sarcopenia, and US Foundation for the National Institutes of Health (FNIH) criteria, and the screening tool SARC-F, against the Asian Working Group for Sarcopenia consensus panel definitions, in predicting physical limitation, slow walking speed, repeated chair stand performance, days of hospital stay and mortality at follow-up. Longitudinal study. Community survey in Hong Kong. Participants were 4000 men and women 65 years and older living in the community. Data collected included questionnaire information regarding activities of daily living, physical functioning limitations and the constituent questions of SARC-F, as well as body mass index (BMI), grip strength (GS), walking speed, and appendicular skeletal muscle mass (ASM). FNIH criteria, consensus panel definitions, and the screening tool SARC-F all have similar AUC values in predicting incident physical limitation and physical performance measures at 4 years, walking speed at 7 years, days of hospital stay at 7 years, and mortality at 10 years. None of the definitions predicted increase in physical limitation at 4 years or mortality at 10 years in women, and none predicted all the adverse outcomes. The highest AUC values were observed for walking speed at 4 and 7 years. When applied to a Chinese elderly population, criteria used for diagnosis of sarcopenia derived from European, Asian, and international consensus panels, from US cutoff values defined from incident physical limitation, and the SARC-F screening tool all have similar performance in predicting incident physical limitation and mortality. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Brooker, Simon; Beasley, Michael; Ndinaromtan, Montanan; Madjiouroum, Ester Mobele; Baboguel, Marie; Djenguinabe, Elie; Hay, Simon I.; Bundy, Don A. P.
2002-01-01
OBJECTIVE: To design and implement a rapid and valid epidemiological assessment of helminths among schoolchildren in Chad using ecological zones defined by remote sensing satellite sensor data and to investigate the environmental limits of helminth distribution. METHODS: Remote sensing proxy environmental data were used to define seven ecological zones in Chad. These were combined with population data in a geographical information system (GIS) in order to define a sampling protocol. On this basis, 20 schools were surveyed. Multilevel analysis, by means of generalized estimating equations to account for clustering at the school level, was used to investigate the relationship between infection patterns and key environmental variables. FINDINGS: In a sample of 1023 schoolchildren, 22.5% were infected with Schistosoma haematobium and 32.7% with hookworm. None were infected with Ascaris lumbricoides or Trichuris trichiura. The prevalence of S. haematobium and hookworm showed marked geographical heterogeneity and the observed patterns showed a close association with the defined ecological zones and significant relationships with environmental variables. These results contribute towards defining the thermal limits of geohelminth species. Predictions of infection prevalence were made for each school surveyed with the aid of models previously developed for Cameroon. These models correctly predicted that A. lumbricoides and T. trichiura would not occur in Chad but the predictions for S. haematobium were less reliable at the school level. CONCLUSION: GIS and remote sensing can play an important part in the rapid planning of helminth control programmes where little information on disease burden is available. Remote sensing prediction models can indicate patterns of geohelminth infection but can only identify potential areas of high risk for S. haematobium. PMID:12471398
2010-11-01
...defined herein as terrain whose surface deformation due to a single vehicle traversing the surface is negligible, such as paved roads (both asphalt... ground vehicle reliability predictions. Current application of this work is limited to the analysis of U.S. Highways, comprised of both asphalt and... highways that are consistent between asphalt and concrete roads; (b) the principal terrain characteristics are defined with analytic basis vectors.
Pragmatic perspective on aerobic scope: peaking, plummeting, pejus and apportioning.
Farrell, A P
2016-01-01
A major challenge for fish biologists in the 21st century is to predict the biotic effects of global climate change. With marked changes in biogeographic distribution already in evidence for a variety of aquatic animals, mechanistic explanations for these shifts are being sought, ones that can then be used as a foundation for predictive models of future climatic scenarios. One mechanistic explanation for the thermal performance of fishes that has gained some traction is the oxygen- and capacity-limited thermal tolerance (OCLTT) hypothesis, which suggests that an aquatic organism's capacity to supply oxygen to tissues becomes limited when body temperature reaches extremes. Central to this hypothesis is an optimum temperature for absolute aerobic scope (AAS, loosely defined as the capacity to deliver oxygen to tissues beyond a basic need). On either side of this peak for AAS are pejus temperatures that define when AAS falls off and thereby reduces an animal's absolute capacity for activity. This article provides a brief perspective on the potential uses and limitations of some of the key physiological indicators related to aerobic scope in fishes. The intent is that practitioners who attempt predictive ecological applications can better recognize limitations and make better use of the OCLTT hypothesis and its underlying physiology. © 2015 The Fisheries Society of the British Isles.
NASA Astrophysics Data System (ADS)
Pegion, K.; DelSole, T. M.; Becker, E.; Cicerone, T.
2016-12-01
Predictability represents the upper limit of prediction skill if we had an infinite member ensemble and a perfect model. It is an intrinsic limit of the climate system associated with the chaotic nature of the atmosphere. Producing a forecast system that can make predictions very near to this limit is the ultimate goal of forecast system development. Estimates of predictability together with calculations of current prediction skill are often used to define the gaps in our prediction capabilities on subseasonal to seasonal timescales and to inform the scientific issues that must be addressed to build the next forecast system. Quantification of the predictability is also important for providing a scientific basis for relaying to stakeholders what kind of climate information can be provided to inform decision-making and what kind of information is not possible given the intrinsic predictability of the climate system. One challenge with predictability estimates is that different prediction systems can give different estimates of the upper limit of skill. How do we know which estimate of predictability is most representative of the true predictability of the climate system? Previous studies have used the spread-error relationship and the autocorrelation to evaluate the fidelity of the signal and noise estimates. Using a multi-model ensemble prediction system, we can quantify whether these metrics accurately indicate an individual model's ability to properly estimate the signal, noise, and predictability. We use this information to identify the best estimates of predictability for 2-meter temperature, precipitation, and sea surface temperature from the North American Multi-model Ensemble and compare with current skill to indicate the regions with potential for improving skill.
Revealing the Earth's mantle from the tallest mountains using the Jinping Neutrino Experiment.
Šrámek, Ondřej; Roskovec, Bedřich; Wipperfurth, Scott A; Xi, Yufei; McDonough, William F
2016-09-09
The Earth's engine is driven by unknown proportions of primordial energy and heat produced in radioactive decay. Unfortunately, competing models of Earth's composition reveal an order of magnitude uncertainty in the amount of radiogenic power driving mantle dynamics. Recent measurements of the Earth's flux of geoneutrinos, electron antineutrinos from terrestrial natural radioactivity, reveal the amount of uranium and thorium in the Earth and set limits on the residual proportion of primordial energy. Comparison of the flux measured at large underground neutrino experiments with geologically informed predictions of geoneutrino emission from the crust provides the critical test needed to define the mantle's radiogenic power. Measurement at an oceanic location, distant from nuclear reactors and continental crust, would best reveal the mantle flux; however, no such experiment is anticipated. We predict the geoneutrino flux at the site of the Jinping Neutrino Experiment (Sichuan, China). Within 8 years, the combination of existing data and measurements from soon-to-come experiments, including Jinping, will exclude end-member models at the 1σ level, define the mantle's radiogenic contribution to the surface heat loss, set limits on the composition of the silicate Earth, and provide significant parameter bounds for models defining the mode of mantle convection.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize agreement with theoretical calculations and experimental data simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data that would be totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity" formulated with only the peak positions within a small range of the X-ray diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information on diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
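A schematic of the cost-function structure described above, under loud assumptions: a Lennard-Jones pair potential stands in for the interatomic potential, and "crystallinity" is approximated as the fraction of target spacings matched by interatomic distances in the candidate structure. The paper's actual crystallinity is built from X-ray diffraction peak positions; this sketch only reproduces the weighted-sum skeleton.

```python
import numpy as np

def lj_energy(pos, eps=1.0, sigma=1.0):
    """Lennard-Jones total energy: a placeholder for the interatomic potential."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e

def crystallinity(pos, target_spacings, tol=0.05):
    """Toy stand-in for the paper's crystallinity: fraction of observed
    spacings matched by some interatomic distance in the candidate."""
    dists = [np.linalg.norm(pos[i] - pos[j])
             for i in range(len(pos)) for j in range(i + 1, len(pos))]
    matched = sum(any(abs(d - t) < tol for d in dists) for t in target_spacings)
    return matched / len(target_spacings)

def cost(pos, target_spacings, w=0.5):
    """Weighted sum of potential energy and an experimental-data penalty:
    the structure of the cost function described in the abstract."""
    return (1 - w) * lj_energy(pos) + w * (1.0 - crystallinity(pos, target_spacings))

pos = np.array([[0, 0, 0], [1.1, 0, 0], [0, 1.1, 0], [1.1, 1.1, 0.0]])
print(cost(pos, target_spacings=[1.1, 1.56]))
```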
Orbital Signature Analyzer (OSA): A spacecraft health/safety monitoring and analysis tool
NASA Technical Reports Server (NTRS)
Weaver, Steven; Degeorges, Charles; Bush, Joy; Shendock, Robert; Mandl, Daniel
1993-01-01
Fixed or static limit sensing is employed in control centers to ensure that spacecraft parameters remain within a nominal range. However, many critical parameters, such as power system telemetry, are time-varying and, as such, their 'nominal' range is necessarily time-varying as well. Predicted data, manual limits checking, and widened limit-checking ranges are often employed in an attempt to monitor these parameters without generating excessive limits violations. Generating predicted data and manual limits checking are both resource intensive, while broadening limit ranges for time-varying parameters is clearly inadequate to detect all but catastrophic problems. OSA provides a low-cost solution by using analytically selected data as a reference upon which to base its limits. These limits are always defined relative to the time-varying reference data, rather than as fixed upper and lower limits. In effect, OSA provides individual limits tailored to each value throughout all the data. A side benefit of using relative limits is that they automatically adjust to new reference data. In addition, OSA provides a wealth of analytical by-products in its execution.
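A minimal sketch of relative limit sensing as the abstract describes it: per-sample limits that track a time-varying reference rather than fixed upper and lower bounds. The constant tolerance band and the synthetic telemetry are illustrative assumptions, not OSA's actual limit model.

```python
import numpy as np

def relative_limit_check(telemetry, reference, tol):
    """Flag samples outside limits defined relative to time-varying reference
    data, so each value gets its own tailored limit pair."""
    telemetry = np.asarray(telemetry, float)
    reference = np.asarray(reference, float)
    upper = reference + tol        # per-sample limits track the reference
    lower = reference - tol
    return np.flatnonzero((telemetry > upper) | (telemetry < lower))

# toy battery-voltage-like signal: nominal orbit profile vs. live data with a dip
t = np.linspace(0, 2 * np.pi, 200)
ref = 28.0 + 1.5 * np.sin(t)                     # time-varying nominal profile
live = ref + np.random.default_rng(2).normal(0, 0.05, t.size)
live[120:125] -= 1.0                             # simulated anomaly
print(relative_limit_check(live, ref, tol=0.3))  # flags samples ~120-124
```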
On the Gause predator-prey model with a refuge: a fresh look at the history.
Křivan, Vlastimil
2011-04-07
This article re-analyses a prey-predator model with a refuge introduced by one of the founders of population ecology, Gause, and his co-workers to explain discrepancies between their observations and predictions of the Lotka-Volterra prey-predator model. They replaced the linear functional response used by Lotka and Volterra with a saturating functional response with a discontinuity at a critical prey density. At densities below this critical density, prey were effectively in a refuge, while at higher densities they were available to predators. Thus, their functional response was of the Holling type III. They analyzed this model and predicted the existence of a limit cycle in predator-prey dynamics. In this article I show that their model is ill posed, because trajectories are not well defined. Using the Filippov method, I define and analyze solutions of the Gause model. I show that depending on parameter values, there are three possibilities: (1) trajectories converge to a limit cycle, as predicted by Gause, (2) trajectories converge to an equilibrium, or (3) the prey population escapes predator control and grows to infinity. Copyright © 2011 Elsevier Ltd. All rights reserved.
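To make the model structure concrete, here is a sketch of a Gause-type system with a refuge: predation switches on only above a critical prey density and is saturated (constant per predator) there. Parameters are invented; note that a plain ODE integrator only approximates behavior at the discontinuity, which is exactly where the article's Filippov treatment is needed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not Gause's): prey growth r, saturated intake c,
# conversion efficiency e, predator mortality m, refuge threshold x_c.
r, c, e, m, x_c = 1.0, 1.2, 0.6, 0.4, 0.5

def rhs(t, z):
    x, y = z
    predation = c if x > x_c else 0.0   # discontinuous, saturating response
    return [r * x - predation * y,      # prey: growth minus predation
            e * predation * y - m * y]  # predators: conversion minus mortality

sol = solve_ivp(rhs, (0, 60), [1.0, 0.5], max_step=0.05)
print("final state:", sol.y[:, -1])     # cycle, equilibrium, or prey escape,
                                        # depending on parameter values
```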
Advances in modeling trait-based plant community assembly.
Laughlin, Daniel C; Laughlin, David E
2013-10-01
In this review, we examine two new trait-based models of community assembly that predict the relative abundance of species from a regional species pool. The models use fundamentally different mathematical approaches and the predictions can differ considerably. Maxent obtains the most even probability distribution subject to community-weighted mean trait constraints. Traitspace predicts low probabilities for any species whose trait distribution does not pass through the environmental filter. Neither model maximizes functional diversity because of the emphasis on environmental filtering over limiting similarity. Traitspace can test for the effects of limiting similarity by explicitly incorporating intraspecific trait variation. The range of solutions in both models could be used to define the range of natural variability of community composition in restoration projects. Copyright © 2013 Elsevier Ltd. All rights reserved.
Interspecies Correlation Estimation (ICE) models predict supplemental toxicity data for SSDs
Species sensitivity distributions (SSD) require a large number of toxicity values for a diversity of taxa to define a hazard level protective of multiple species. For most chemicals, measured toxicity data are limited to a few standard test species that are unlikely to adequately...
Krishnamurthy, Dilip; Sumaria, Vaidish; Viswanathan, Venkatasubramanian
2018-02-01
Density functional theory (DFT) calculations are being routinely used to identify new material candidates that approach activity near fundamental limits imposed by thermodynamics or scaling relations. DFT calculations are associated with inherent uncertainty, which limits the ability to delineate materials (distinguishability) that possess high activity. Development of error-estimation capabilities in DFT has enabled uncertainty propagation through activity-prediction models. In this work, we demonstrate an approach to propagating uncertainty through thermodynamic activity models, leading to a probability distribution of the computed activity and thereby its expectation value. A new metric, prediction efficiency, is defined, which provides a quantitative measure of the ability to distinguish the activity of materials and can be used to identify the optimal descriptor(s) ΔG_opt. We demonstrate the framework for four important electrochemical reactions: hydrogen evolution, chlorine evolution, oxygen reduction and oxygen evolution. Future studies could utilize expected activity and prediction efficiency to significantly improve the prediction accuracy of highly active material candidates.
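A Monte Carlo sketch of the propagation step: sample the descriptor from a DFT error distribution and push each sample through an activity relation, giving a distribution, and hence an expectation value, of activity. The volcano shape and the numbers are assumptions for illustration, not the paper's models.

```python
import numpy as np

def expected_activity(dg_mean, dg_sigma, n_samples=100_000, seed=0):
    """Propagate DFT descriptor uncertainty through a volcano-type activity
    relation by Monte Carlo sampling; return the activity's mean and spread.
    The generic |dG - dG_opt| volcano is an assumption for illustration."""
    rng = np.random.default_rng(seed)
    dg = rng.normal(dg_mean, dg_sigma, n_samples)   # sampled descriptor values
    dg_opt = 0.0                                    # descriptor at the volcano apex
    activity = -np.abs(dg - dg_opt)                 # less negative = more active
    return activity.mean(), activity.std()

# two candidate materials: close mean descriptors, same DFT uncertainty
print(expected_activity(0.10, 0.20))    # material A
print(expected_activity(0.15, 0.20))    # material B: barely distinguishable
```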
What is Bottom-Up and What is Top-Down in Predictive Coding?
Rauss, Karsten; Pourtois, Gilles
2013-01-01
Everyone knows what bottom-up is, and how it is different from top-down. At least one is tempted to think so, given that both terms are ubiquitously used, but only rarely defined in the psychology and neuroscience literature. In this review, we highlight the problems and limitations of our current understanding of bottom-up and top-down processes, and we propose a reformulation of this distinction in terms of predictive coding. PMID:23730295
Defining a predictive model of developmental toxicity from in vitro and high-throughput screening (HTS) assays can be limited by the availability of developmental defects data. ToxRefDB (www.epa.gov/ncct/todrefdb) was built from animal studies on data-rich environmental chemicals...
Density and nest survival of golden-cheeked warblers: Spatial scale matters
Jennifer L. Reidy; Frank R., III Thompson; Lisa O' Donnell
2017-01-01
Conservation and management plans often rely on indicators such as species occupancy or density to define habitat quality, ignoring factors that influence reproductive success, and potentially limiting conservation achievements. We examined relationships between predicted density and nest survival with environmental features at multiple spatial scales for the golden-...
D'Ambrosio, Antonio; Heiser, Willem J
2016-09-01
Preference rankings usually depend on the characteristics of both the individuals judging a set of objects and the objects being judged. This topic has been handled in the literature with log-linear representations of the generalized Bradley-Terry model and, recently, with distance-based tree models for rankings. A limitation of these approaches is that they only work with full rankings or with a pre-specified pattern governing the presence of ties, and/or they are based on quite strict distributional assumptions. To overcome these limitations, we propose a new prediction tree method for ranking data that is totally distribution-free. It combines Kemeny's axiomatic approach to define a unique distance between rankings with the CART approach to find a stable prediction tree. Furthermore, our method is not limited by any particular design of the pattern of ties. The method is evaluated in an extensive full-factorial Monte Carlo study with a new simulation design.
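For concreteness, the Kemeny distance that the proposed tree method builds on can be written in a few lines: it sums, over all object pairs, the disagreement between the two rankings, counting a tie against a strict preference as half of a full reversal. The tree-growing itself is not sketched here.

```python
from itertools import combinations

def kemeny_distance(rank_a, rank_b):
    """Kemeny distance between two rankings. rank_x[i] is the rank of object i
    (ties allowed as equal ranks). Each pair contributes |a - b|, where a and b
    are the pairwise orderings (-1, 0, +1): a reversal adds 2, a tie vs. a
    strict preference adds 1."""
    d = 0
    for i, j in combinations(range(len(rank_a)), 2):
        a = (rank_a[i] > rank_a[j]) - (rank_a[i] < rank_a[j])
        b = (rank_b[i] > rank_b[j]) - (rank_b[i] < rank_b[j])
        d += abs(a - b)
    return d

print(kemeny_distance([1, 2, 3, 4], [1, 2, 4, 3]))   # one reversed pair -> 2
print(kemeny_distance([1, 2, 2, 3], [1, 2, 3, 4]))   # one tie broken -> 1
```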
Validation of Model-Based Prognostics for Pneumatic Valves in a Demonstration Testbed
2014-10-02
...predict end of life (EOL) and remaining useful life (RUL). The approach still follows the general estimation-prediction framework developed in the... atmosphere, with linearly increasing leak area: k_A2leak = C_leak (16). We define valve end of life (EOL) through open/close time limits of the valves, as in... represents end of life (EOL), and Δk_E represents remaining useful life (RUL). For valves, timing requirements are provided that define the maximum...
Asian Pentecostalism: A Religion Whose Only Limit Is the Sky
ERIC Educational Resources Information Center
Ma, Wonsuk
2004-01-01
The study surveys the current growth of Pentecostal Christianity, as defined broadly, in Asia, particularly in comparison with Latin America and Africa, predicting that the future growth is expected to be exponential. In a brief historical survey, the continent is divided into four categories depending on the beginning and development of…
Fundamental Algorithms of the Goddard Battery Model
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1985-01-01
The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict Nickel Cadmium (NiCd) performance in a Low Earth Orbit (LEO) cycling regime. The model proper is currently still in development, but the inherent, fundamental algorithms (or methodologies) of the model are defined. At present, the model is closely dependent on empirical data, and the data base currently used is of questionable accuracy. Even so, very good correlations have been determined between model predictions and actual cycling data. A more accurate and encompassing data base has been generated to serve dual functions: to show the limitations of the current data base, and to be embedded in the model proper for more accurate predictions. The fundamental algorithms of the model, and the present data base and its limitations, are described, and a brief preliminary analysis of the new data base and its verification of the model's methodology are presented.
NASA Astrophysics Data System (ADS)
Ogden, F. L.
2017-12-01
High-performance computing and the widespread availability of geospatial physiographic and forcing datasets have enabled consideration of flood impact predictions with longer lead times and more detailed spatial descriptions. We are now considering multi-hour flash flood forecast lead times at the subdivision level in so-called hydroblind regions away from the National Hydrography network. However, the computational demands of such models are high, necessitating a nested simulation approach. Research on hyper-resolution hydrologic modeling over the past three decades has illustrated some fundamental limits on predictability that are simultaneously related to runoff generation mechanism(s), antecedent conditions, rates and total amounts of precipitation, discretization of the model domain, and complexity or completeness of the model formulation. This latter point is an acknowledgement that, in some ways, hydrologic understanding in key areas related to land use, land cover, tillage practices, seasonality, and biological effects has some glaring deficiencies. This presentation reviews what is known about the interacting effects of precipitation amount, model spatial discretization, antecedent conditions, physiographic characteristics and model formulation completeness for runoff predictions. These interactions define a region in multidimensional forcing, parameter and process space where there are in some cases clear limits on predictability, and in other cases diminished uncertainty.
NASA Astrophysics Data System (ADS)
Judt, Falko
2017-04-01
A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days when contaminating the baroclinic zones. After 16 days, the globally averaged error saturates—suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
Addition of multiple limiting resources reduces grassland diversity.
Harpole, W Stanley; Sullivan, Lauren L; Lind, Eric M; Firn, Jennifer; Adler, Peter B; Borer, Elizabeth T; Chase, Jonathan; Fay, Philip A; Hautier, Yann; Hillebrand, Helmut; MacDougall, Andrew S; Seabloom, Eric W; Williams, Ryan; Bakker, Jonathan D; Cadotte, Marc W; Chaneton, Enrique J; Chu, Chengjin; Cleland, Elsa E; D'Antonio, Carla; Davies, Kendi F; Gruner, Daniel S; Hagenah, Nicole; Kirkman, Kevin; Knops, Johannes M H; La Pierre, Kimberly J; McCulley, Rebecca L; Moore, Joslin L; Morgan, John W; Prober, Suzanne M; Risch, Anita C; Schuetz, Martin; Stevens, Carly J; Wragg, Peter D
2016-09-01
Niche dimensionality provides a general theoretical explanation for biodiversity: more niches, defined by more limiting factors, allow for more ways that species can coexist. Because plant species compete for the same set of limiting resources, theory predicts that addition of a limiting resource eliminates potential trade-offs, reducing the number of species that can coexist. Multiple nutrient limitation of plant production is common and therefore fertilization may reduce diversity by reducing the number or dimensionality of belowground limiting factors. At the same time, nutrient addition, by increasing biomass, should ultimately shift competition from belowground nutrients towards a one-dimensional competitive trade-off for light. Here we show that plant species diversity decreased when a greater number of limiting nutrients were added across 45 grassland sites from a multi-continent experimental network. The number of added nutrients predicted diversity loss, even after controlling for effects of plant biomass, and even where biomass production was not nutrient-limited. We found that elevated resource supply reduced niche dimensionality and diversity and increased both productivity and compositional turnover. Our results point to the importance of understanding dimensionality in ecological systems that are undergoing diversity loss in response to multiple global change factors.
Seyssel, Kevin; Suter, Michel; Pattou, François; Caiazzo, Robert; Verkindt, Helene; Raverdy, Violeta; Jolivet, Mathieu; Disse, Emmanuel; Robert, Maud; Giusti, Vittorio
2018-06-19
Different factors, such as age, gender and preoperative weight, but also the patient's motivation, are known to impact outcomes after Roux-en-Y gastric bypass (RYGBP). Weight loss prediction is helpful to define realistic expectations and maintain motivation during follow-up, but also to select good candidates for surgery and limit failures. Therefore, developing a realistic predictive tool appears worthwhile. A Swiss cohort (n = 444) of patients who underwent RYGBP was used, with multiple linear regression models, to predict weight loss up to 60 months after surgery from age, height, gender and weight at baseline. We then applied our model to two French cohorts and compared the predicted weight with the weight actually reached. Accuracy of our model was assessed using the root mean square error (RMSE). Mean weight loss was 43.6 ± 13.0 and 40.8 ± 15.4 kg at 12 and 60 months, respectively. The model was reliable in predicting weight loss (0.37 < R² < 0.48), with RMSE between 5.0 and 12.2 kg. High preoperative weight and young age were positively correlated with weight loss, as was male gender. Correlations between predicted weight and real weight were highly significant in both validation cohorts (R ≥ 0.7 and P < 0.01), and RMSE increased throughout follow-up between 6.2 and 15.4 kg. Our statistical model to predict weight loss outcomes after RYGBP seems accurate. It could be a valuable tool to define realistic weight loss expectations and to improve patient selection and outcomes during follow-up. Further research is needed to demonstrate the value of this model in improving patients' motivation and results and limiting failures.
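A workflow sketch of the modeling approach on synthetic data: one ordinary least-squares model per follow-up time, with age, height, gender and baseline weight as predictors, evaluated by R² and RMSE. The coefficients and data below are invented and do not reproduce the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 444
age = rng.uniform(20, 65, n)
height = rng.normal(1.68, 0.09, n)
male = rng.integers(0, 2, n)
weight0 = rng.normal(120, 20, n)
# hypothetical 12-month weight: heavier, younger, male patients lose more
weight12 = weight0 - (0.30 * weight0 - 0.15 * age + 4 * male + rng.normal(0, 6, n))

X = np.column_stack([age, height, male, weight0])
model = LinearRegression().fit(X, weight12)
pred = model.predict(X)
rmse = mean_squared_error(weight12, pred) ** 0.5
print(f"R^2 = {model.score(X, weight12):.2f}, RMSE = {rmse:.1f} kg")
```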
ERIC Educational Resources Information Center
Sacks, Vanessa Harbin; Moore, Kristin Anderson; Terzian, Mary A.; Constance, Nicole
2014-01-01
Schools take different approaches to creating and fostering a healthy and safe environment for youth. Varied approaches include setting limits for acceptable behavior, defining the consequences for breaking school rules, and the provision of services to address problem behaviors. One important issue that schools have to address is substance use…
Religiousness and Infidelity: Attendance, but not Faith and Prayer, Predict Marital Fidelity
ERIC Educational Resources Information Center
Atkins, David C.; Kessel, Deborah E.
2008-01-01
High religiousness has been consistently linked with a decreased likelihood of past infidelity but has been solely defined by religious service attendance, a limited assessment of a complex facet of life. The current study developed nine religiousness subscales using items from the 1998 General Social Survey to more fully explore the association…
Self-imposed length limits in recreational fisheries
Chizinski, Christopher J.; Martin, Dustin R.; Hurley, Keith L.; Pope, Kevin L.
2014-01-01
A primary motivating factor in the decision to harvest a fish among consumptive-orientated anglers is the size of the fish. There is likely a cost-benefit trade-off for harvest of individual fish that is size and species dependent, which should produce a logistic-type response of fish fate (release or harvest) as a function of fish size and species. We define the self-imposed length limit as the length at which a captured fish had a 50% probability of being harvested, which was selected because it marks the length of the fish where the probability of harvest becomes greater than the probability of release. We assessed the influences of fish size, catch per unit effort, size distribution of caught fish, and creel limit on the self-imposed length limits for bluegill Lepomis macrochirus, channel catfish Ictalurus punctatus, black crappie Pomoxis nigromaculatus and white crappie Pomoxis annularis combined, white bass Morone chrysops, and yellow perch Perca flavescens at six lakes in Nebraska, USA. As we predicted, the probability of harvest increased with increasing size for all species harvested, which supported the concept of a size-dependent trade-off in costs and benefits of harvesting individual fish. It was also clear that probability of harvest was not simply defined by fish length, but rather was likely influenced to various degrees by interactions between species, catch rate, size distribution, creel-limit regulation and fish size. A greater understanding of harvest decisions within the context of the perceived likelihood that a creel limit will be realized by a given angler party, which is a function of fish availability, harvest regulation and angler skill and orientation, is needed to predict the influence that anglers have on fish communities and to allow managers to sustainably manage exploited fish populations in recreational fisheries.
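The self-imposed length limit has a convenient closed form once a logistic model is fitted: harvest probability is 0.5 exactly where the linear predictor crosses zero. A sketch on synthetic catch data (not the Nebraska creel data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
length_mm = rng.uniform(100, 350, 500)              # lengths of caught fish
true_l50 = 200.0                                    # "true" 50% harvest length
p_harvest = 1 / (1 + np.exp(-0.05 * (length_mm - true_l50)))
harvested = rng.random(500) < p_harvest             # simulated release/harvest fate

model = LogisticRegression().fit(length_mm.reshape(-1, 1), harvested)
b0, b1 = model.intercept_[0], model.coef_[0, 0]
l50 = -b0 / b1                                      # logit = 0  <=>  P(harvest) = 0.5
print(f"estimated self-imposed length limit: {l50:.0f} mm")
```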
Chng, Tze Wei; Lee, Jonathan Y H; Lee, C Soon; Li, HuiHua; Tan, Min-Han; Tan, Puay Hoon
2016-12-01
To validate the utility of the Singapore nomogram for outcome prediction in breast phyllodes tumours. Histological parameters, surgical margin status and clinical follow-up data of 34 women diagnosed with phyllodes tumours were analysed. Biostatistical modelling was performed, and the concordance between predicted and observed survival was calculated. Women with a high nomogram score had an increased risk of developing relapse, which was predicted using the parameters defined by the Singapore nomogram. The Singapore nomogram is useful in predicting outcome in breast phyllodes tumours when applied to an Australian cohort of 34 women. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Hessel, Ellen V S; Staal, Yvonne C M; Piersma, Aldert H
2018-03-13
Developmental neurotoxicity entails one of the most complex areas in toxicology. Animal studies provide only limited information as to human relevance. A multitude of alternative models have been developed over the years, providing insights into mechanisms of action. We give an overview of fundamental processes in neural tube formation, brain development and neural specification, aiming at illustrating complexity rather than comprehensiveness. We also give a flavor of the wealth of alternative methods in this area. Given the impressive progress in mechanistic knowledge of human biology and toxicology, the time is right for a conceptual approach for designing testing strategies that cover the integral mechanistic landscape of developmental neurotoxicity. The ontology approach provides a framework for defining this landscape, upon which an integral in silico model for predicting toxicity can be built. It subsequently directs the selection of in vitro assays for rate-limiting events in the biological network, to feed parameter tuning in the model, leading to prediction of the toxicological outcome. Validation of such models requires primary attention to coverage of the biological domain, rather than classical predictive value of individual tests. Proofs of concept for such an approach are already available. The challenge is in mining modern biology, toxicology and chemical information to feed intelligent designs, which will define testing strategies for neurodevelopmental toxicity testing. Copyright © 2018 Elsevier Inc. All rights reserved.
Villarreal, Miguel L.; van Riper, Charles; Petrakis, Roy E.
2013-01-01
Riparian vegetation provides important wildlife habitat in the Southwestern United States, but limited distributions and spatial complexity often lead to inaccurate representation in maps used to guide conservation. We test the use of data conflation and aggregation on multiple vegetation/land-cover maps to improve the accuracy of habitat models for the threatened western yellow-billed cuckoo (Coccyzus americanus occidentalis). We used species observations (n = 479) from a state-wide survey to develop habitat models from 1) three vegetation/land-cover maps produced at different geographic scales ranging from state to national, and 2) new aggregate maps defined by the spatial agreement of cover types, which were defined as high (agreement = all data sets), moderate (agreement ≥ 2), and low (no agreement required). Model accuracies, predicted habitat locations, and total area of predicted habitat varied considerably, illustrating the effects of input data quality on habitat predictions and the resulting potential impacts on conservation planning. Habitat models based on aggregated and conflated data were more accurate and had higher model sensitivity than those based on the original vegetation/land-cover maps, but this accuracy came at the cost of reduced geographic extent of predicted habitat. Using the highest-performing models, we assessed cuckoo habitat preference and distribution in Arizona and found that major watersheds containing high-probability habitat are fragmented by a wide swath of low-probability habitat. Focus on riparian restoration in these areas could provide more breeding habitat for the threatened cuckoo, offset potential future habitat losses in adjacent watersheds, and increase regional connectivity for other threatened vertebrates that also use riparian corridors.
Srinivas, Nuggehally R
2016-01-01
In the present age of polypharmacy, limited sampling strategies become important to verify whether drug levels are within prescribed threshold limits from efficacy and safety considerations. The need to establish reliable models based on a single timed concentration to predict exposure is important from cost and time perspectives. A simple unweighted linear regression model was developed to describe the relationship between Cmax and AUC for fexofenadine, losartan, EXP3174, itraconazole and hydroxyitraconazole. The fold difference, defined as the quotient of the observed and predicted AUC values, was evaluated along with statistical comparison of the predicted versus observed values. The correlation between Cmax and AUC was well established for all five drugs, with a correlation coefficient (r) ranging from 0.9130 to 0.9997. The majority of predicted values for all five drugs (77%) were contained within a narrow boundary of 0.75- to 1.5-fold difference. The r values for observed versus predicted AUC were 0.9653 (n = 145), 0.8342 (n = 76), 0.9524 (n = 88), 0.9339 (n = 89) and 0.9452 (n = 66) for fexofenadine, losartan, EXP3174, itraconazole and hydroxyitraconazole, respectively. Cmax versus AUC relationships were established for all drugs and were amenable to a limited sampling strategy for AUC prediction. However, fexofenadine, EXP3174 and hydroxyitraconazole may be most suitable for AUC prediction from a single timed concentration, as judged by the various criteria applied in this study.
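The fold-difference evaluation is straightforward to reproduce in outline: regress AUC on Cmax, predict AUC from the single concentration, and count predictions falling within the 0.75- to 1.5-fold window. Data below are synthetic; the drug-specific regressions are not reproduced.

```python
import numpy as np

def fold_differences(cmax, auc):
    """Limited-sampling sketch: unweighted linear regression of AUC on Cmax,
    per-subject fold difference (observed / predicted AUC), and the fraction
    within the 0.75- to 1.5-fold acceptance window."""
    slope, intercept = np.polyfit(cmax, auc, 1)
    predicted = slope * cmax + intercept
    fold = auc / predicted
    within = np.mean((fold >= 0.75) & (fold <= 1.5))
    return fold, within

# synthetic Cmax (ng/mL) and AUC (ng*h/mL) pairs for illustration only
rng = np.random.default_rng(5)
cmax = rng.uniform(100, 600, 80)
auc = 8.0 * cmax + rng.normal(0, 150, 80)
fold, within = fold_differences(cmax, auc)
print(f"{100 * within:.0f}% of predictions within 0.75- to 1.5-fold")
```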
Search for Production of Single Top Quarks Via tcg and tug Flavor-Changing-Neutral-Current Couplings
NASA Astrophysics Data System (ADS)
Abazov, V. M.; Abbott, B.; Abolins, M.; Acharya, B. S.; Adams, M.; Adams, T.; Aguilo, E.; Ahn, S. H.; Ahsan, M.; Alexeev, G. D.; Alkhazov, G.; Alton, A.; Alverson, G.; Alves, G. A.; Anastasoaie, M.; Ancu, L. S.; Andeen, T.; Anderson, S.; Andrieu, B.; Anzelc, M. S.; Arnoud, Y.; Arov, M.; Askew, A.; Åsman, B.; Assis Jesus, A. C. S.; Atramentov, O.; Autermann, C.; Avila, C.; Ay, C.; Badaud, F.; Baden, A.; Bagby, L.; Baldin, B.; Bandurin, D. V.; Banerjee, P.; Banerjee, S.; Barberis, E.; Barfuss, A.-F.; Bargassa, P.; Baringer, P.; Barnes, C.; Barreto, J.; Bartlett, J. F.; Bassler, U.; Bauer, D.; Beale, S.; Bean, A.; Begalli, M.; Begel, M.; Belanger-Champagne, C.; Bellantoni, L.; Bellavance, A.; Benitez, J. A.; Beri, S. B.; Bernardi, G.; Bernhard, R.; Berntzon, L.; Bertram, I.; Besançon, M.; Beuselinck, R.; Bezzubov, V. A.; Bhat, P. C.; Bhatnagar, V.; Binder, M.; Biscarat, C.; Blackler, I.; Blazey, G.; Blekman, F.; Blessing, S.; Bloch, D.; Bloom, K.; Boehnlein, A.; Boline, D.; Bolton, T. A.; Boos, E. E.; Borissov, G.; Bos, K.; Bose, T.; Brandt, A.; Brock, R.; Brooijmans, G.; Bross, A.; Brown, D.; Buchanan, N. J.; Buchholz, D.; Buehler, M.; Buescher, V.; Bunichev, V.; Burdin, S.; Burke, S.; Burnett, T. H.; Busato, E.; Buszello, C. P.; Butler, J. M.; Calfayan, P.; Calvet, S.; Cammin, J.; Caron, S.; Carvalho, W.; Casey, B. C. K.; Cason, N. M.; Castilla-Valdez, H.; Chakrabarti, S.; Chakraborty, D.; Chan, K.; Chan, K. M.; Chandra, A.; Charles, F.; Cheu, E.; Chevallier, F.; Cho, D. K.; Choi, S.; Choudhary, B.; Christofek, L.; Christoudias, T.; Claes, D.; Clément, B.; Clément, C.; Coadou, Y.; Cooke, M.; Cooper, W. E.; Corcoran, M.; Couderc, F.; Cousinou, M.-C.; Cox, B.; Crépé-Renaudin, S.; Cutts, D.; Ćwiok, M.; da Motta, H.; Das, A.; Davies, B.; Davies, G.; de, K.; de Jong, P.; de Jong, S. J.; de La Cruz-Burelo, E.; de Oliveira Martins, C.; Degenhardt, J. D.; Déliot, F.; Demarteau, M.; Demina, R.; Denisov, D.; Denisov, S. P.; Desai, S.; Diehl, H. T.; Diesburg, M.; Doidge, M.; Dominguez, A.; Dong, H.; Dudko, L. V.; Duflot, L.; Dugad, S. R.; Duggan, D.; Duperrin, A.; Dyer, J.; Dyshkant, A.; Eads, M.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Enari, Y.; Eno, S.; Ermolov, P.; Evans, H.; Evdokimov, A.; Evdokimov, V. N.; Ferapontov, A. V.; Ferbel, T.; Fiedler, F.; Filthaut, F.; Fisher, W.; Fisk, H. E.; Ford, M.; Fortner, M.; Fox, H.; Fu, S.; Fuess, S.; Gadfort, T.; Galea, C. F.; Gallas, E.; Galyaev, E.; Garcia, C.; Garcia-Bellido, A.; Gavrilov, V.; Gay, P.; Geist, W.; Gelé, D.; Gerber, C. E.; Gershtein, Y.; Gillberg, D.; Ginther, G.; Gollub, N.; Gómez, B.; Goussiou, A.; Grannis, P. D.; Greenlee, H.; Greenwood, Z. D.; Gregores, E. M.; Grenier, G.; Gris, Ph.; Grivaz, J.-F.; Grohsjean, A.; Grünendahl, S.; Grünewald, M. W.; Guo, F.; Guo, J.; Gutierrez, G.; Gutierrez, P.; Haas, A.; Hadley, N. J.; Haefner, P.; Hagopian, S.; Haley, J.; Hall, I.; Hall, R. E.; Han, L.; Hanagaki, K.; Hansson, P.; Harder, K.; Harel, A.; Harrington, R.; Hauptman, J. M.; Hauser, R.; Hays, J.; Hebbeker, T.; Hedin, D.; Hegeman, J. G.; Heinmiller, J. M.; Heinson, A. P.; Heintz, U.; Hensel, C.; Herner, K.; Hesketh, G.; Hildreth, M. D.; Hirosky, R.; Hobbs, J. D.; Hoeneisen, B.; Hoeth, H.; Hohlfeld, M.; Hong, S. J.; Hooper, R.; Houben, P.; Hu, Y.; Hubacek, Z.; Hynek, V.; Iashvili, I.; Illingworth, R.; Ito, A. S.; Jabeen, S.; Jaffré, M.; Jain, S.; Jakobs, K.; Jarvis, C.; Jenkins, A.; Jesik, R.; Johns, K.; Johnson, C.; Johnson, M.; Jonckheere, A.; Jonsson, P.; Juste, A.; Käfer, D.; Kahn, S.; Kajfasz, E.; Kalinin, A. M.; Kalk, J. M.; Kalk, J. 
R.; Kappler, S.; Karmanov, D.; Kasper, J.; Kasper, P.; Katsanos, I.; Kau, D.; Kaur, R.; Kehoe, R.; Kermiche, S.; Khalatyan, N.; Khanov, A.; Kharchilava, A.; Kharzheev, Y. M.; Khatidze, D.; Kim, H.; Kim, T. J.; Kirby, M. H.; Klima, B.; Kohli, J. M.; Konrath, J.-P.; Kopal, M.; Korablev, V. M.; Kotcher, J.; Kothari, B.; Koubarovsky, A.; Kozelov, A. V.; Krop, D.; Kryemadhi, A.; Kuhl, T.; Kumar, A.; Kunori, S.; Kupco, A.; Kurča, T.; Kvita, J.; Lam, D.; Lammers, S.; Landsberg, G.; Lazoflores, J.; Lebrun, P.; Lee, W. M.; Leflat, A.; Lehner, F.; Lesne, V.; Leveque, J.; Lewis, P.; Li, J.; Li, L.; Li, Q. Z.; Lietti, S. M.; Lima, J. G. R.; Lincoln, D.; Linnemann, J.; Lipaev, V. V.; Lipton, R.; Liu, Z.; Lobo, L.; Lobodenko, A.; Lokajicek, M.; Lounis, A.; Love, P.; Lubatti, H. J.; Lynker, M.; Lyon, A. L.; Maciel, A. K. A.; Madaras, R. J.; Mättig, P.; Magass, C.; Magerkurth, A.; Makovec, N.; Mal, P. K.; Malbouisson, H. B.; Malik, S.; Malyshev, V. L.; Mao, H. S.; Maravin, Y.; Martin, B.; McCarthy, R.; Melnitchouk, A.; Mendes, A.; Mendoza, L.; Mercadante, P. G.; Merkin, M.; Merritt, K. W.; Meyer, A.; Meyer, J.; Michaut, M.; Miettinen, H.; Millet, T.; Mitrevski, J.; Molina, J.; Mommsen, R. K.; Mondal, N. K.; Monk, J.; Moore, R. W.; Moulik, T.; Muanza, G. S.; Mulders, M.; Mulhearn, M.; Mundal, O.; Mundim, L.; Nagy, E.; Naimuddin, M.; Narain, M.; Naumann, N. A.; Neal, H. A.; Negret, J. P.; Neustroev, P.; Nilsen, H.; Noeding, C.; Nomerotski, A.; Novaes, S. F.; Nunnemann, T.; O'Dell, V.; O'Neil, D. C.; Obrant, G.; Ochando, C.; Oguri, V.; Oliveira, N.; Onoprienko, D.; Oshima, N.; Osta, J.; Otec, R.; Otero Y Garzón, G. J.; Owen, M.; Padley, P.; Pangilinan, M.; Parashar, N.; Park, S.-J.; Park, S. K.; Parsons, J.; Partridge, R.; Parua, N.; Patwa, A.; Pawloski, G.; Perea, P. M.; Perfilov, M.; Peters, K.; Peters, Y.; Pétroff, P.; Petteni, M.; Piegaia, R.; Piper, J.; Pleier, M.-A.; Podesta-Lerma, P. L. M.; Podstavkov, V. M.; Pogorelov, Y.; Pol, M.-E.; Pompoš, A.; Pope, B. G.; Popov, A. V.; Potter, C.; Prado da Silva, W. L.; Prosper, H. B.; Protopopescu, S.; Qian, J.; Quadt, A.; Quinn, B.; Rangel, M. S.; Rani, K. J.; Ranjan, K.; Ratoff, P. N.; Renkel, P.; Reucroft, S.; Rijssenbeek, M.; Ripp-Baudot, I.; Rizatdinova, F.; Robinson, S.; Rodrigues, R. F.; Royon, C.; Rubinov, P.; Ruchti, R.; Sajot, G.; Sánchez-Hernández, A.; Sanders, M. P.; Santoro, A.; Savage, G.; Sawyer, L.; Scanlon, T.; Schaile, D.; Schamberger, R. D.; Scheglov, Y.; Schellman, H.; Schieferdecker, P.; Schmitt, C.; Schwanenberger, C.; Schwartzman, A.; Schwienhorst, R.; Sekaric, J.; Sengupta, S.; Severini, H.; Shabalina, E.; Shamim, M.; Shary, V.; Shchukin, A. A.; Shivpuri, R. K.; Shpakov, D.; Siccardi, V.; Sidwell, R. A.; Simak, V.; Sirotenko, V.; Skubic, P.; Slattery, P.; Smirnov, D.; Smith, R. P.; Snow, G. R.; Snow, J.; Snyder, S.; Söldner-Rembold, S.; Sonnenschein, L.; Sopczak, A.; Sosebee, M.; Soustruznik, K.; Souza, M.; Spurlock, B.; Stark, J.; Steele, J.; Stolin, V.; Stone, A.; Stoyanova, D. A.; Strandberg, J.; Strandberg, S.; Strang, M. A.; Strauss, M.; Ströhmer, R.; Strom, D.; Strovink, M.; Stutte, L.; Sumowidagdo, S.; Svoisky, P.; Sznajder, A.; Talby, M.; Tamburello, P.; Taylor, W.; Telford, P.; Temple, J.; Tiller, B.; Tissandier, F.; Titov, M.; Tokmenin, V. V.; Tomoto, M.; Toole, T.; Torchiani, I.; Trefzger, T.; Trincaz-Duvoid, S.; Tsybychev, D.; Tuchming, B.; Tully, C.; Tuts, P. M.; Unalan, R.; Uvarov, L.; Uvarov, S.; Uzunyan, S.; Vachon, B.; van den Berg, P. J.; van Eijk, B.; van Kooten, R.; van Leeuwen, W. M.; Varelas, N.; Varnes, E. 
W.; Vartapetian, A.; Vasilyev, I. A.; Vaupel, M.; Verdier, P.; Vertogradov, L. S.; Verzocchi, M.; Villeneuve-Seguier, F.; Vint, P.; Vlimant, J.-R.; von Toerne, E.; Voutilainen, M.; Vreeswijk, M.; Wahl, H. D.; Wang, L.; Wang, M. H. L. S.; Warchol, J.; Watts, G.; Wayne, M.; Weber, G.; Weber, M.; Weerts, H.; Wenger, A.; Wermes, N.; Wetstein, M.; White, A.; Wicke, D.; Wilson, G. W.; Wimpenny, S. J.; Wobisch, M.; Wood, D. R.; Wyatt, T. R.; Xie, Y.; Yacoob, S.; Yamada, R.; Yan, M.; Yasuda, T.; Yatsunenko, Y. A.; Yip, K.; Yoo, H. D.; Youn, S. W.; Yu, C.; Yu, J.; Yurkewicz, A.; Zatserklyaniy, A.; Zeitnitz, C.; Zhang, D.; Zhao, T.; Zhou, B.; Zhu, J.; Zielinski, M.; Zieminska, D.; Zieminski, A.; Zutshi, V.; Zverev, E. G.
2007-11-01
We search for the production of single top quarks via flavor-changing-neutral-current couplings of a gluon to the top quark and a charm (c) or up (u) quark. We analyze 230 pb⁻¹ of lepton+jets data from pp̄ collisions at a center of mass energy of 1.96 TeV collected by the D0 detector at the Fermilab Tevatron Collider. We observe no significant deviation from standard model predictions, and hence set upper limits on the anomalous coupling parameters κ_gc/Λ and κ_gu/Λ, where κ_g defines the strength of the tcg and tug couplings, and Λ defines the scale of new physics. The limits at 95% C.L. are κ_gc/Λ < 0.15 TeV⁻¹ and κ_gu/Λ < 0.037 TeV⁻¹.
ERIC Educational Resources Information Center
Han, Jinjoo; O'Connor, Erin E.; McCormick, Meghan P.; McClowry, Sandee G.
2017-01-01
Research Findings: Home-based involvement--defined as the actions parents take to promote children's learning outside of school--is often the most efficient way for low-income parents to be involved with their children's education. However, there is limited research examining the factors predicting home-based involvement at kindergarten entry for…
Genotype-specific relationships among phosphorus use, growth and abundance in Daphnia pulicaria
Chowdhury, Priyanka Roy; Baker, Kristina D.; Weider, Lawrence J.; Jeyasingh, Punidan D.
2017-01-01
The framework of ecological stoichiometry uses the elemental composition of species to make predictions about growth and competitive ability under defined elemental supply conditions. Although intraspecific differences in stoichiometry have been observed, we have yet to understand the mechanisms generating and maintaining such variation. We used variation in phosphorus (P) content within a Daphnia species to test the extent to which %P can explain variation in growth and competition. Further, we measured ³³P kinetics (acquisition, assimilation, incorporation and retention) to understand the extent to which such variables improved predictions. Genotypes showed significant variation in P content, ³³P kinetics and growth rate. P content alone was a poor predictor of growth rate and competitive ability. While most genotypes exhibited the typical growth penalty under P limitation, a few varied little in growth between P diets. These observations indicate that some genotypes can maintain growth under P-limited conditions by altering P use, suggesting that decomposing the P content of an individual into physiological components of P kinetics will improve stoichiometric models. More generally, attention to the interplay between nutrient content and nutrient use is required to make inferences regarding the success of genotypes under defined conditions of nutrient supply. PMID:29308224
Formability prediction for AHSS materials using damage models
NASA Astrophysics Data System (ADS)
Amaral, R.; Santos, Abel D.; José, César de Sá; Miranda, Sara
2017-05-01
Advanced high strength steels (AHSS) are seeing increased use, mostly due to lightweight design in the automobile industry and strict regulations on safety and greenhouse gas emissions. However, these materials, characterized by a high strength-to-weight ratio, high stiffness and strong work hardening at early stages of plastic deformation, pose many challenges to the sheet metal industry, chiefly their low formability and their different behaviour compared with traditional steels. This can make it a demanding task both to obtain a successful component and to predict material behaviour and fracture limits by numerical simulation. Although numerical prediction of critical strains in sheet metal forming processes is still very often based on classic forming limit diagrams, alternative approaches can use damage models, which rely on stress states to predict failure during the forming process and can be classified as empirical, physics-based and phenomenological. In the present paper a comparative analysis of different ductile damage models is carried out in order to numerically evaluate two isotropic coupled damage models, those proposed by Johnson-Cook and by Gurson-Tvergaard-Needleman (GTN), corresponding respectively to the first two groups of the previous classification. Finite element analysis is used with these damage mechanics approaches, and the obtained results are compared with experimental Nakajima tests, making it possible to evaluate and validate the ability of the previously defined approaches to predict damage and formability limits.
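As a concrete reference for the first of the two coupled models compared above: the Johnson-Cook fracture criterion accumulates damage as the ratio of each equivalent plastic strain increment to a failure strain that depends on stress triaxiality, strain rate and temperature. The sketch below is illustrative only; the D1-D5 coefficients and the loading path are placeholders, not values from the paper.

```python
import math

def jc_failure_strain(triax, rate, T_hom, D1, D2, D3, D4, D5, ref_rate=1.0):
    """Johnson-Cook equivalent plastic strain at failure as a function of
    stress triaxiality, plastic strain rate and homologous temperature."""
    return ((D1 + D2 * math.exp(D3 * triax))
            * (1.0 + D4 * math.log(max(rate / ref_rate, 1e-12)))
            * (1.0 + D5 * T_hom))

def accumulate_damage(history, **jc_coeffs):
    """Sum plastic strain increments weighted by the current failure
    strain; failure is flagged once the indicator D reaches 1."""
    D = 0.0
    for d_eps, triax, rate, T_hom in history:
        D += d_eps / jc_failure_strain(triax, rate, T_hom, **jc_coeffs)
    return D

# hypothetical coefficients and a toy proportional loading path
coeffs = dict(D1=0.05, D2=3.44, D3=-2.12, D4=0.002, D5=0.61)
path = [(0.01, 1.0 / 3.0, 1.0, 0.0)] * 30   # 30 increments at uniaxial triaxiality
print(accumulate_damage(path, **coeffs))
```

In the coupled setting studied in the paper, the damage variable also feeds back into the constitutive response rather than merely flagging failure.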
Towards cleaner combustion engines through groundbreaking detailed chemical kinetic models
Battin-Leclerc, Frédérique; Blurock, Edward; Bounaceur, Roda; Fournet, René; Glaude, Pierre-Alexandre; Herbinet, Olivier; Sirjean, Baptiste; Warth, V.
2013-01-01
In the context of limiting the environmental impact of transportation, this paper reviews new directions being followed in the development of more predictive and more accurate detailed chemical kinetic models for the combustion of fuels. In the first part, the performance of current models, especially in terms of the prediction of pollutant formation, is evaluated. In the subsequent parts, recent methods and ways to improve these models are described. Emphasis is given to the development of detailed models based on elementary reactions, to the production of the related thermochemical and kinetic parameters, and to the experimental techniques available to produce the data necessary to evaluate model predictions under well-defined conditions. PMID:21597604
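For context, the kinetic parameters mentioned above are conventionally expressed per elementary reaction in the modified Arrhenius form (a standard convention, not specific to this review):

\[
k(T) = A\,T^{n}\exp\!\left(-\frac{E_a}{RT}\right),
\]

where A is the pre-exponential factor, n the temperature exponent, E_a the activation energy and R the gas constant; a detailed mechanism assigns such a triplet to each elementary reaction.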
Philpott, H; Nandurkar, S; Royce, S G; Thien, F; Gibson, P R
2016-08-01
The use of allergy tests to guide dietary treatment for eosinophilic oesophagitis (EoE) is controversial and data are limited. Aeroallergen sensitisation patterns and food triggers have been defined in Northern Hemisphere cohorts only. To determine if allergy tests that are routinely available can predict food triggers in adult patients with EoE. To define the food triggers and aeroallergen sensitisation patterns in a novel Southern Hemisphere (Australian) cohort of patients. Consecutive patients with EoE who elected to undergo dietary therapy were prospectively assessed, demographic details and atopic characteristics recorded, and allergy tests, comprising skin-prick and skin-patch tests, serum allergen-specific IgE, basophil activation test and serum food-specific IgG, were performed. Patients underwent a six-food elimination diet with a structured algorithm that included endoscopic and histological examination of the oesophagus a minimum of 2 weeks after each challenge. Response was defined as <15 eosinophils per HPF. Foods defined as triggers were considered as gold standard and were compared with those identified by allergy testing. No allergy test could accurately predict actual food triggers. Concordance among skin-prick and serum allergen-specific IgE was high for aeroallergens only. Among seasonal aeroallergens, rye-grass sensitisation was predominant. Food triggers were commonly wheat, milk and egg, alone or in combination. None of the currently-available allergy tests predicts food triggers for EoE. Exclusion-rechallenge methodology with oesophageal histological assessment remains the only effective investigation. The same food triggers were identified in this southern hemisphere cohort as previously described. © 2016 John Wiley & Sons Ltd.
[ProteoCat: a tool for planning of proteomic experiments].
Skvortsov, V S; Alekseychuk, N N; Khudyakov, D V; Mikurova, A V; Rybina, A V; Novikova, S E; Tikhonova, O V
2015-01-01
ProteoCat is a computer program designed to help researchers plan large-scale proteomic experiments. The central part of the program is a hydrolysis-simulation subprogram that supports four proteases (trypsin, Lys-C, and the endoproteinases Asp-N and Glu-C). For the peptides obtained after virtual hydrolysis, or loaded from a data file, a number of properties important in mass-spectrometric experiments can be calculated or predicted. The data can be analyzed or filtered to reduce the set of peptides. The program uses new, improved modifications of our methods for predicting pI and the probability of peptide detection; pI can also be predicted with a number of popular pKa scales proposed by other investigators. The algorithm for prediction of peptide retention time is similar to the algorithm used in the program SSRCalc. ProteoCat can estimate the coverage of the amino acid sequences of proteins under defined limitations on peptide detection, as well as the possibility of assembly of peptide fragments with a user-defined size of "sticky" ends. The program has a graphical user interface, is written in Java, and is available at http://www.ibmc.msk.ru/LPCIT/ProteoCat.
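As an illustration of what a virtual hydrolysis of this kind involves, the following sketch applies simplified textbook cleavage rules for the four supported proteases; the actual ProteoCat rules and options (for example, its handling of missed cleavages) may differ.

```python
import re

def digest(sequence, protease="trypsin", missed_cleavages=0):
    """Virtual proteolysis in the spirit of ProteoCat's hydrolysis
    subprogram, using simplified textbook cleavage rules:
      trypsin: after K or R, but not before P
      lys-c:   after K
      asp-n:   before D
      glu-c:   after E
    """
    rules = {
        "trypsin": r"(?<=[KR])(?!P)",
        "lys-c":   r"(?<=K)",
        "asp-n":   r"(?=D)",
        "glu-c":   r"(?<=E)",
    }
    pieces = [p for p in re.split(rules[protease], sequence) if p]
    peptides = list(pieces)
    for n in range(1, missed_cleavages + 1):   # rejoin neighbours
        peptides += ["".join(pieces[i:i + n + 1])
                     for i in range(len(pieces) - n)]
    return peptides

print(digest("MKWVTFISLLFLFSSAYSRGVFRR", "trypsin", missed_cleavages=1))
```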
Seaton, Sarah E; Manktelow, Bradley N
2012-07-16
Emphasis is increasingly being placed on the monitoring of clinical outcomes for health care providers. Funnel plots have become an increasingly popular graphical methodology used to identify potential outliers. It is assumed that a provider only displaying expected random variation (i.e. 'in-control') will fall outside a control limit with a known probability. In reality, the discrete count nature of these data, and the differing methods, can lead to true probabilities quite different from the nominal value. This paper investigates the true probability of an 'in control' provider falling outside control limits for the Standardised Mortality Ratio (SMR). The true probabilities of an 'in control' provider falling outside control limits for the SMR were calculated and compared for three commonly used limits: Wald confidence interval; 'exact' confidence interval; probability-based prediction interval. The probability of falling above the upper limit, or below the lower limit, often varied greatly from the nominal value. This was particularly apparent when there were a small number of expected events: for expected events ≤ 50 the median probability of an 'in-control' provider falling above the upper 95% limit was 0.0301 (Wald), 0.0121 ('exact'), 0.0201 (prediction). It is important to understand the properties and probability of being identified as an outlier by each of these different methods to aid the correct identification of poorly performing health care providers. The limits obtained using probability-based prediction limits have the most intuitive interpretation and their properties can be defined a priori. Funnel plot control limits for the SMR should not be based on confidence intervals.
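To make the nature of the calculation concrete: for an 'in control' provider the observed count O can be modeled as Poisson with mean E, so the true probability of exceeding a nominal 95% limit follows directly. Below is a minimal sketch (Python with SciPy) for the Wald limit only; the 'exact' and prediction-interval variants are analogous.

```python
from math import floor, sqrt
from scipy.stats import poisson

def p_above_wald_limit(expected, z=1.96):
    """Exact probability that an 'in control' provider (O ~ Poisson(E),
    SMR = O/E) falls above the Wald upper control limit 1 + z*sqrt(1/E).
    Nominally this should be about 0.025 for z = 1.96."""
    upper_smr = 1.0 + z * sqrt(1.0 / expected)    # Wald: Var(SMR) ~ 1/E
    return poisson.sf(floor(upper_smr * expected), expected)  # P(O/E > limit)

for E in (10, 25, 50, 100, 500):
    print(E, round(p_above_wald_limit(E), 4))
```

For E = 50 this gives roughly 0.03 rather than the nominal 0.025, consistent with the order of the discrepancies reported above.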
NASA Technical Reports Server (NTRS)
West, Jeff; Yang, H. Q.; Brodnick, Jacob; Sansone, Marco; Westra, Douglas
2016-01-01
The Miles equation has long been used to predict slosh damping due to ring baffles in liquid propellant tanks. The original work by Miles identifies limits to its range of application. Recent evaluations of the Space Launch System revealed that the Core Stage baffle designs violate the limits of applicability of the Miles equation. This paper describes the work conducted by NASA/MSFC to develop methods to predict slosh damping from ring baffles under conditions for which the Miles equation is not applicable. For asymptotically small slosh amplitudes, or conversely large baffle widths, an asymptotic expression for slosh damping was developed and calibrated using historical experimental sub-scale slosh damping data. For the parameter space that lies between the region of applicability of the asymptotic expression and that of the Miles equation, Computational Fluid Dynamics simulations of slosh damping were used to develop an expression for slosh damping. The combined multi-regime slosh prediction methodology is shown to be smooth at regime boundaries and consistent with both sub-scale experimental slosh damping data and the results of validated Computational Fluid Dynamics predictions of slosh damping due to ring baffles.
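The abstract does not spell out the blending rule, but one generic way to keep a multi-regime prediction smooth at regime boundaries is to weight the regime-specific correlations with a smooth switching function. The sketch below is illustrative only, with placeholder correlations and a hypothetical logistic switch; it is not the paper's actual methodology.

```python
import math

def blended_damping(x, damping_asymptotic, damping_miles,
                    x_switch=1.0, width=0.1):
    """Blend two regime-specific damping correlations with a logistic
    weight so the combined prediction varies smoothly across the regime
    boundary. `x` is the regime parameter (e.g. slosh amplitude relative
    to baffle width); x_switch and width are tuning choices."""
    w = 1.0 / (1.0 + math.exp(-(x - x_switch) / width))
    return (1.0 - w) * damping_asymptotic(x) + w * damping_miles(x)
```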
NASA Astrophysics Data System (ADS)
Luthfiani, T. A.; Sinaga, P.; Samsudin, A.
2018-05-01
Our review found that there has been limited research on Predict-Observe-Explain approaches that use a writing process with a conceptual change text strategy. This study aims to develop a learning model, Predict-Observe-Explain-Apply-Writing (POEAW), which is able to enhance students' level of understanding. The research method utilized the 4D model (Defining, Designing, Developing and Disseminating), formally limited here to the Developing stage. Four experts judged the learning component (syntax, lesson plan, teaching material and student worksheet) and the matter component (learning quality and content). The expert validity test scores averaged 87% for the learning component and 89% for the matter component, which means that POEAW is valid and can be tested in classroom learning. This research produced a POEAW learning model with five main steps: Predict, Observe, Explain, Apply and Write. In sum, we have developed an initial version of POEAW for enhancing K-11 students' understanding of impulse and momentum.
Adaptive estimation of a time-varying phase with a power-law spectrum via continuous squeezed states
NASA Astrophysics Data System (ADS)
Dinani, Hossein T.; Berry, Dominic W.
2017-06-01
When measuring a time-varying phase, the standard quantum limit and Heisenberg limit as usually defined, for a constant phase, do not apply. If the phase has Gaussian statistics and a power-law spectrum 1/|ω|^p with p > 1, then the generalized standard quantum limit and Heisenberg limit have recently been found to have scalings of 1/N^{(p-1)/p} and 1/N^{2(p-1)/(p+1)}, respectively, where N is the mean photon flux. We show that this Heisenberg scaling can be achieved via adaptive measurements on squeezed states. We predict the experimental parameters analytically, and test them with numerical simulations. Previous work had considered the special case of p = 2.
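Restating the quoted scalings in display form,

\[
\delta\phi_{\mathrm{SQL}} \propto N^{-(p-1)/p}, \qquad
\delta\phi_{\mathrm{HL}} \propto N^{-2(p-1)/(p+1)},
\]

so for the special case p = 2 mentioned above, the generalized standard quantum limit scales as N^{-1/2} and the Heisenberg limit as N^{-2/3}.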
Freezing in stripe states for kinetic Ising models: a comparative study of three dynamics
NASA Astrophysics Data System (ADS)
Godrèche, Claude; Pleimling, Michel
2018-04-01
We present a comparative study of the fate of an Ising ferromagnet on the square lattice with periodic boundary conditions evolving under three different zero-temperature dynamics. The first one is Glauber dynamics, the two other dynamics correspond to two limits of the directed Ising model, defined by rules that break the full symmetry of the former, yet sharing the same Boltzmann-Gibbs distribution at stationarity. In one of these limits the directed Ising model is reversible, in the other one it is irreversible. For the kinetic Ising-Glauber model, several recent studies have demonstrated the role of critical percolation to predict the probabilities for the system to reach the ground state or to fall in a metastable state. We investigate to what extent the predictions coming from critical percolation still apply to the two other dynamics.
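For readers who want to reproduce the baseline case, zero-temperature Glauber dynamics for the square-lattice ferromagnet is short to implement: a randomly chosen spin flips if that lowers the energy, flips with probability 1/2 if the energy is unchanged, and never flips otherwise. A minimal sketch follows (Python/NumPy; the two directed-Ising variants are not shown).

```python
import numpy as np

def glauber_zero_T(L=64, sweeps=200, seed=0):
    """Zero-temperature Glauber dynamics for the square-lattice Ising
    ferromagnet with periodic boundaries: a randomly picked spin flips
    if that lowers the energy, flips with probability 1/2 if the energy
    is unchanged, and never flips if the energy would increase."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        h = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
             + s[i, (j + 1) % L] + s[i, (j - 1) % L])
        dE = 2 * s[i, j] * h
        if dE < 0 or (dE == 0 and rng.random() < 0.5):
            s[i, j] = -s[i, j]
    return s  # ground state, or a frozen stripe/metastable configuration
```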
Assessing predictability of a hydrological stochastic-dynamical system
NASA Astrophysics Data System (ADS)
Gelfan, Alexander
2014-05-01
The water cycle includes processes with different memories, which creates potential for predictability of a hydrological system based on separating its long- and short-memory components and conditioning long-term prediction on the slower-evolving components (similar to approaches in climate prediction). In the face of the Panta Rhei IAHS Decade questions, it is important to find a conceptual approach to classify hydrological system components with respect to their predictability, define predictable/unpredictable patterns, extend lead time, and improve the reliability of hydrological predictions based on the predictable patterns. Representation of hydrological systems as dynamical systems subjected to the effect of noise (stochastic-dynamical systems) provides a possible tool for such conceptualization. A method has been proposed for assessing the predictability of a hydrological system arising from its sensitivity to both initial and boundary conditions. Predictability is defined through a procedure of convergence of a pre-assigned probabilistic measure (e.g., variance) of the system state to a stable value. The time interval of the convergence, that is, the time interval during which the system loses memory of its initial state, defines the limit of the system's predictability. The proposed method was applied to assess the predictability of soil moisture dynamics at the Nizhnedevitskaya experimental station (51.516N; 38.383E) located in the agricultural zone of central European Russia. A stochastic-dynamical model combining a deterministic one-dimensional model of the hydrothermal regime of soil with a stochastic model of meteorological inputs was developed. The deterministic model describes processes of coupled heat and moisture transfer through unfrozen/frozen soil and accounts for the influence of phase changes on water flow. The stochastic model produces time series of daily meteorological variables (precipitation, air temperature and humidity) whose statistical properties are similar to those of the corresponding series of actual data measured at the station. Beginning from the initial conditions and forced by Monte-Carlo-generated synthetic meteorological series, the model simulated diverging trajectories of soil moisture characteristics (water content of the soil column, moisture of different soil layers, etc.). The limit of predictability of a specific characteristic was determined by the time of stabilization of the variance of that characteristic between the trajectories as they move away from the initial state. Numerical experiments were carried out with the stochastic-dynamical model to analyze the sensitivity of the soil moisture predictability assessments to uncertainty in the initial conditions, and to determine the effects of the soil hydraulic properties and of soil freezing on the predictability. In particular, it was found that soil water content predictability is sensitive to errors in the initial conditions and strongly depends on the hydraulic properties of soil under both unfrozen and frozen conditions. Even if the initial conditions are "well-established", the assessed predictability of the water content of unfrozen soil does not exceed 30-40 days, while for frozen conditions it may be as long as 3-4 months. The latter creates an opportunity for using the autumn soil water content as a predictor of spring snowmelt runoff in the region under consideration.
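The variance-stabilization criterion described above is straightforward to operationalize once an ensemble of Monte-Carlo trajectories is available. The sketch below is one simple reading of it; the tolerance and the way the saturation level is estimated are choices of this sketch, not of the paper.

```python
import numpy as np

def predictability_limit(trajectories, times, tol=0.05):
    """Estimate the predictability limit as the first time after which
    the ensemble variance stays within `tol` (relative) of its
    saturation level for the rest of the record.

    trajectories : array (n_members, n_times), e.g. soil water content
                   simulated under Monte-Carlo meteorological forcing,
                   all members starting from the same initial state.
    """
    var = trajectories.var(axis=0)           # spread across members
    var_sat = var[-len(var) // 4:].mean()    # late-time saturation level
    settled = np.abs(var - var_sat) <= tol * var_sat
    for k in range(len(times)):
        if settled[k:].all():
            return times[k]
    return times[-1]
```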
Sumner, Jennifer A.; Mineka, Susan; McAdams, Dan P.
2012-01-01
Reduced autobiographical memory specificity (AMS) is an important cognitive marker in depression that is typically measured with the Autobiographical Memory Test (AMT; Williams & Broadbent, 1986). The AMT is widely used, but the overreliance on a single methodology for assessing AMS is a limitation in the field. The current study investigated memory narratives as an alternative measure of AMS in an undergraduate student sample selected for being high or low on a measure of depressive symptoms (N = 55). We employed a multi-method design to compare narrative- and AMT-based measures of AMS. Participants generated personally significant self-defining memory narratives, and also completed two versions of the AMT (with and without instructions to retrieve specific memories). Greater AMS in self-defining memory narratives correlated with greater AMS in performance on both versions of the AMT in the full sample, and the patterns of relationships between the different AMS measures were generally similar in low and high dysphoric participants. Furthermore, AMS in self-defining memory narratives was prospectively associated with depressive symptom levels. Specifically, greater AMS in self-defining memory narratives predicted fewer depressive symptoms at a 10-week follow-up over and above baseline symptom levels. Implications for future research and clinical applications are discussed. PMID:23240988
Sumner, Jennifer A; Mineka, Susan; McAdams, Dan P
2013-01-01
Reduced autobiographical memory specificity (AMS) is an important cognitive marker in depression that is typically measured with the Autobiographical Memory Test (AMT; Williams & Broadbent, 1986). The AMT is widely used, but the over-reliance on a single methodology for assessing AMS is a limitation in the field. The current study investigated memory narratives as an alternative measure of AMS in an undergraduate student sample selected for being high or low on a measure of depressive symptoms (N=55). We employed a multi-method design to compare narrative- and AMT-based measures of AMS. Participants generated personally significant self-defining memory narratives, and also completed two versions of the AMT (with and without instructions to retrieve specific memories). Greater AMS in self-defining memory narratives correlated with greater AMS in performance on both versions of the AMT in the full sample, and the patterns of relationships between the different AMS measures were generally similar in low and high dysphoric participants. Furthermore, AMS in self-defining memory narratives was prospectively associated with depressive symptom levels. Specifically, greater AMS in self-defining memory narratives predicted fewer depressive symptoms at a 10-week follow-up over and above baseline symptom levels. Implications for future research and clinical applications are discussed.
Abazov, V M; Abbott, B; Abolins, M; Acharya, B S; Adams, M; Adams, T; Aguilo, E; Ahn, S H; Ahsan, M; Alexeev, G D; Alkhazov, G; Alton, A; Alverson, G; Alves, G A; Anastasoaie, M; Ancu, L S; Andeen, T; Anderson, S; Andrieu, B; Anzelc, M S; Arnoud, Y; Arov, M; Askew, A; Asman, B; Assis Jesus, A C S; Atramentov, O; Autermann, C; Avila, C; Ay, C; Badaud, F; Baden, A; Bagby, L; Baldin, B; Bandurin, D V; Banerjee, P; Banerjee, S; Barberis, E; Barfuss, A-F; Bargassa, P; Baringer, P; Barnes, C; Barreto, J; Bartlett, J F; Bassler, U; Bauer, D; Beale, S; Bean, A; Begalli, M; Begel, M; Belanger-Champagne, C; Bellantoni, L; Bellavance, A; Benitez, J A; Beri, S B; Bernardi, G; Bernhard, R; Berntzon, L; Bertram, I; Besançon, M; Beuselinck, R; Bezzubov, V A; Bhat, P C; Bhatnagar, V; Binder, M; Biscarat, C; Blackler, I; Blazey, G; Blekman, F; Blessing, S; Bloch, D; Bloom, K; Boehnlein, A; Boline, D; Bolton, T A; Boos, E E; Borissov, G; Bos, K; Bose, T; Brandt, A; Brock, R; Brooijmans, G; Bross, A; Brown, D; Buchanan, N J; Buchholz, D; Buehler, M; Buescher, V; Bunichev, V; Burdin, S; Burke, S; Burnett, T H; Busato, E; Buszello, C P; Butler, J M; Calfayan, P; Calvet, S; Cammin, J; Caron, S; Carvalho, W; Casey, B C K; Cason, N M; Castilla-Valdez, H; Chakrabarti, S; Chakraborty, D; Chan, K; Chan, K M; Chandra, A; Charles, F; Cheu, E; Chevallier, F; Cho, D K; Choi, S; Choudhary, B; Christofek, L; Christoudias, T; Claes, D; Clément, B; Clément, C; Coadou, Y; Cooke, M; Cooper, W E; Corcoran, M; Couderc, F; Cousinou, M-C; Cox, B; Crépé-Renaudin, S; Cutts, D; Cwiok, M; da Motta, H; Das, A; Davies, B; Davies, G; De, K; de Jong, P; de Jong, S J; De La Cruz-Burelo, E; De Oliveira Martins, C; Degenhardt, J D; Déliot, F; Demarteau, M; Demina, R; Denisov, D; Denisov, S P; Desai, S; Diehl, H T; Diesburg, M; Doidge, M; Dominguez, A; Dong, H; Dudko, L V; Duflot, L; Dugad, S R; Duggan, D; Duperrin, A; Dyer, J; Dyshkant, A; Eads, M; Edmunds, D; Ellison, J; Elvira, V D; Enari, Y; Eno, S; Ermolov, P; Evans, H; Evdokimov, A; Evdokimov, V N; Ferapontov, A V; Ferbel, T; Fiedler, F; Filthaut, F; Fisher, W; Fisk, H E; Ford, M; Fortner, M; Fox, H; Fu, S; Fuess, S; Gadfort, T; Galea, C F; Gallas, E; Galyaev, E; Garcia, C; Garcia-Bellido, A; Gavrilov, V; Gay, P; Geist, W; Gelé, D; Gerber, C E; Gershtein, Y; Gillberg, D; Ginther, G; Gollub, N; Gómez, B; Goussiou, A; Grannis, P D; Greenlee, H; Greenwood, Z D; Gregores, E M; Grenier, G; Gris, Ph; Grivaz, J-F; Grohsjean, A; Grünendahl, S; Grünewald, M W; Guo, F; Guo, J; Gutierrez, G; Gutierrez, P; Haas, A; Hadley, N J; Haefner, P; Hagopian, S; Haley, J; Hall, I; Hall, R E; Han, L; Hanagaki, K; Hansson, P; Harder, K; Harel, A; Harrington, R; Hauptman, J M; Hauser, R; Hays, J; Hebbeker, T; Hedin, D; Hegeman, J G; Heinmiller, J M; Heinson, A P; Heintz, U; Hensel, C; Herner, K; Hesketh, G; Hildreth, M D; Hirosky, R; Hobbs, J D; Hoeneisen, B; Hoeth, H; Hohlfeld, M; Hong, S J; Hooper, R; Houben, P; Hu, Y; Hubacek, Z; Hynek, V; Iashvili, I; Illingworth, R; Ito, A S; Jabeen, S; Jaffré, M; Jain, S; Jakobs, K; Jarvis, C; Jenkins, A; Jesik, R; Johns, K; Johnson, C; Johnson, M; Jonckheere, A; Jonsson, P; Juste, A; Käfer, D; Kahn, S; Kajfasz, E; Kalinin, A M; Kalk, J M; Kalk, J R; Kappler, S; Karmanov, D; Kasper, J; Kasper, P; Katsanos, I; Kau, D; Kaur, R; Kehoe, R; Kermiche, S; Khalatyan, N; Khanov, A; Kharchilava, A; Kharzheev, Y M; Khatidze, D; Kim, H; Kim, T J; Kirby, M H; Klima, B; Kohli, J M; Konrath, J-P; Kopal, M; Korablev, V M; Kotcher, J; Kothari, B; Koubarovsky, A; Kozelov, A V; Krop, D; 
Kryemadhi, A; Kuhl, T; Kumar, A; Kunori, S; Kupco, A; Kurca, T; Kvita, J; Lam, D; Lammers, S; Landsberg, G; Lazoflores, J; Lebrun, P; Lee, W M; Leflat, A; Lehner, F; Lesne, V; Leveque, J; Lewis, P; Li, J; Li, L; Li, Q Z; Lietti, S M; Lima, J G R; Lincoln, D; Linnemann, J; Lipaev, V V; Lipton, R; Liu, Z; Lobo, L; Lobodenko, A; Lokajicek, M; Lounis, A; Love, P; Lubatti, H J; Lynker, M; Lyon, A L; Maciel, A K A; Madaras, R J; Mättig, P; Magass, C; Magerkurth, A; Makovec, N; Mal, P K; Malbouisson, H B; Malik, S; Malyshev, V L; Mao, H S; Maravin, Y; Martin, B; McCarthy, R; Melnitchouk, A; Mendes, A; Mendoza, L; Mercadante, P G; Merkin, M; Merritt, K W; Meyer, A; Meyer, J; Michaut, M; Miettinen, H; Millet, T; Mitrevski, J; Molina, J; Mommsen, R K; Mondal, N K; Monk, J; Moore, R W; Moulik, T; Muanza, G S; Mulders, M; Mulhearn, M; Mundal, O; Mundim, L; Nagy, E; Naimuddin, M; Narain, M; Naumann, N A; Neal, H A; Negret, J P; Neustroev, P; Nilsen, H; Noeding, C; Nomerotski, A; Novaes, S F; Nunnemann, T; O'Dell, V; O'Neil, D C; Obrant, G; Ochando, C; Oguri, V; Oliveira, N; Onoprienko, D; Oshima, N; Osta, J; Otec, R; Otero Y Garzón, G J; Owen, M; Padley, P; Pangilinan, M; Parashar, N; Park, S-J; Park, S K; Parsons, J; Partridge, R; Parua, N; Patwa, A; Pawloski, G; Perea, P M; Perfilov, M; Peters, K; Peters, Y; Pétroff, P; Petteni, M; Piegaia, R; Piper, J; Pleier, M-A; Podesta-Lerma, P L M; Podstavkov, V M; Pogorelov, Y; Pol, M-E; Pompos, A; Pope, B G; Popov, A V; Potter, C; Prado da Silva, W L; Prosper, H B; Protopopescu, S; Qian, J; Quadt, A; Quinn, B; Rangel, M S; Rani, K J; Ranjan, K; Ratoff, P N; Renkel, P; Reucroft, S; Rijssenbeek, M; Ripp-Baudot, I; Rizatdinova, F; Robinson, S; Rodrigues, R F; Royon, C; Rubinov, P; Ruchti, R; Sajot, G; Sánchez-Hernández, A; Sanders, M P; Santoro, A; Savage, G; Sawyer, L; Scanlon, T; Schaile, D; Schamberger, R D; Scheglov, Y; Schellman, H; Schieferdecker, P; Schmitt, C; Schwanenberger, C; Schwartzman, A; Schwienhorst, R; Sekaric, J; Sengupta, S; Severini, H; Shabalina, E; Shamim, M; Shary, V; Shchukin, A A; Shivpuri, R K; Shpakov, D; Siccardi, V; Sidwell, R A; Simak, V; Sirotenko, V; Skubic, P; Slattery, P; Smirnov, D; Smith, R P; Snow, G R; Snow, J; Snyder, S; Söldner-Rembold, S; Sonnenschein, L; Sopczak, A; Sosebee, M; Soustruznik, K; Souza, M; Spurlock, B; Stark, J; Steele, J; Stolin, V; Stone, A; Stoyanova, D A; Strandberg, J; Strandberg, S; Strang, M A; Strauss, M; Ströhmer, R; Strom, D; Strovink, M; Stutte, L; Sumowidagdo, S; Svoisky, P; Sznajder, A; Talby, M; Tamburello, P; Taylor, W; Telford, P; Temple, J; Tiller, B; Tissandier, F; Titov, M; Tokmenin, V V; Tomoto, M; Toole, T; Torchiani, I; Trefzger, T; Trincaz-Duvoid, S; Tsybychev, D; Tuchming, B; Tully, C; Tuts, P M; Unalan, R; Uvarov, L; Uvarov, S; Uzunyan, S; Vachon, B; van den Berg, P J; van Eijk, B; Van Kooten, R; van Leeuwen, W M; Varelas, N; Varnes, E W; Vartapetian, A; Vasilyev, I A; Vaupel, M; Verdier, P; Vertogradov, L S; Verzocchi, M; Villeneuve-Seguier, F; Vint, P; Vlimant, J-R; Von Toerne, E; Voutilainen, M; Vreeswijk, M; Wahl, H D; Wang, L; Wang, M H L S; Warchol, J; Watts, G; Wayne, M; Weber, G; Weber, M; Weerts, H; Wenger, A; Wermes, N; Wetstein, M; White, A; Wicke, D; Wilson, G W; Wimpenny, S J; Wobisch, M; Wood, D R; Wyatt, T R; Xie, Y; Yacoob, S; Yamada, R; Yan, M; Yasuda, T; Yatsunenko, Y A; Yip, K; Yoo, H D; Youn, S W; Yu, C; Yu, J; Yurkewicz, A; Zatserklyaniy, A; Zeitnitz, C; Zhang, D; Zhao, T; Zhou, B; Zhu, J; Zielinski, M; Zieminska, D; Zieminski, A; Zutshi, V; Zverev, E G
2007-11-09
We search for the production of single top quarks via flavor-changing neutral-current couplings of a gluon to the top quark and a charm (c) or up (u) quark. We analyze 230 pb⁻¹ of lepton+jets data from pp̄ collisions at a center-of-mass energy of 1.96 TeV collected by the D0 detector at the Fermilab Tevatron Collider. We observe no significant deviation from standard model predictions, and hence set upper limits on the anomalous coupling parameters κ_gc/Λ and κ_gu/Λ, where the κ_g coefficients define the strength of the tcg and tug couplings, and Λ defines the scale of new physics. The limits at 95% C.L. are κ_gc/Λ < 0.15 TeV⁻¹ and κ_gu/Λ < 0.037 TeV⁻¹.
Farm elders define health as the ability to work.
Reed, Deborah B; Rayens, Mary Kay; Conley, Christina K; Westneat, Susan; Adkins, Sarah M
2012-08-01
Thirty percent of America's 2.2 million farms are operated by individuals older than 65 years. This study examined how older farmers define health and determined whether demographic characteristics, farm work, and physical and mental health status predict health definition. Data were collected via telephone and mailed surveys during the baseline wave of data collection in a longitudinal study of family farmers residing in two southern states (n=1,288). Nearly 42% defined health as the "ability to work" compared to a physical health-related definition. Predictors of defining health as the ability to work included being White, performing more farm tasks in the past week, taking prescription medications daily, and having minimal health-related limitations to farm work. Health behaviors are centered on the individual's perception of health. Understanding the defining attributes of health can support better approaches to health care and health promotion, particularly among rural subcultures such as farmers, whose identity is rooted in their work. Copyright 2012, SLACK Incorporated.
Majumder, Biswanath; Baraneedharan, Ulaganathan; Thiyagarajan, Saravanan; Radhakrishnan, Padhma; Narasimhan, Harikrishna; Dhandapani, Muthu; Brijwani, Nilesh; Pinto, Dency D; Prasath, Arun; Shanthappa, Basavaraja U; Thayakumar, Allen; Surendran, Rajagopalan; Babu, Govind K; Shenoy, Ashok M; Kuriakose, Moni A; Bergthold, Guillaume; Horowitz, Peleg; Loda, Massimo; Beroukhim, Rameen; Agarwal, Shivani; Sengupta, Shiladitya; Sundaram, Mallikarjun; Majumder, Pradip K
2015-02-27
Predicting clinical response to anticancer drugs remains a major challenge in cancer treatment. Emerging reports indicate that the tumour microenvironment and heterogeneity can limit the predictive power of current biomarker-guided strategies for chemotherapy. Here we report the engineering of personalized tumour ecosystems that contextually conserve the tumour heterogeneity, and phenocopy the tumour microenvironment using tumour explants maintained in defined tumour grade-matched matrix support and autologous patient serum. The functional response of tumour ecosystems, engineered from 109 patients, to anticancer drugs, together with the corresponding clinical outcomes, is used to train a machine learning algorithm; the learned model is then applied to predict the clinical response in an independent validation group of 55 patients, where we achieve 100% sensitivity in predictions while keeping specificity in a desired high range. The tumour ecosystem and algorithm, together termed the CANScript technology, can emerge as a powerful platform for enabling personalized medicine.
Kuroki, Kenji; Nogami, Akihiko; Igarashi, Miyako; Masuda, Keita; Kowase, Shinya; Kurosaki, Kenji; Komatsu, Yuki; Naruse, Yoshihisa; Machino, Takeshi; Yamasaki, Hiro; Xu, Dongzhu; Murakoshi, Nobuyuki; Sekiguchi, Yukio; Aonuma, Kazutaka
2018-04-01
Several conducting channels of ventricular tachycardia (VT) can be identified using voltage limit adjustment (VLA) of substrate mapping. However, the sensitivity or specificity to predict a VT isthmus is not high by using VLA alone. This study aimed to evaluate the efficacy of the combined use of VLA and fast-Fourier transform analysis to predict VT isthmuses. VLA and fast-Fourier transform analyses of local ventricular bipolar electrograms during sinus rhythm were performed in 9 postinfarction patients who underwent catheter ablation for a total of 13 monomorphic VTs. Relatively higher voltage areas on an electroanatomical map were defined as high voltage channels (HVCs), and relatively higher fast-Fourier transform areas were defined as high-frequency channels (HFCs). HVCs were classified into full or partial HVCs (the entire or >30% of HVC can be detectable, respectively). Twelve full HVCs were identified in 7 of 9 patients. HFCs were located on 7 of 12 full HVCs. Five VT isthmuses (71%) were included in the 7 full HVC+/HFC+ sites, whereas no VT isthmus was found in the 5 full HVC+/HFC- sites. HFCs were identical to 9 of 16 partial HVCs. Eight VT isthmuses (89%) were included in the 9 partial HVC+/HFC+ sites, whereas no VT isthmus was found in the 7 partial HVC+/HFC- sites. All HVC+/HFC+ sites predicted VT isthmus with a sensitivity of 100% and a specificity of 80%. Combined use of VLA and fast-Fourier transform analysis may be a useful method to detect VT isthmuses. © 2018 American Heart Association, Inc.
NASA Technical Reports Server (NTRS)
Schmidt, R. C.; Patankar, S. V.
1991-01-01
The capability of two k-epsilon low-Reynolds number (LRN) turbulence models, those of Jones and Launder (1972) and Lam and Bremhorst (1981), to predict transition in external boundary-layer flows subject to free-stream turbulence is analyzed. Both models correctly predict the basic qualitative aspects of boundary-layer transition with free stream turbulence, but for calculations started at low values of certain defined Reynolds numbers, the transition is generally predicted at unrealistically early locations. Also, the methods predict transition lengths significantly shorter than those found experimentally. An approach to overcoming these deficiencies without abandoning the basic LRN k-epsilon framework is developed. This approach limits the production term in the turbulent kinetic energy equation and is based on a simple stability criterion. It is correlated to the free-stream turbulence value. The modification is shown to improve the qualitative and quantitative characteristics of the transition predictions.
The kinetic study of hydrogen bacteria and methanotrophs in pure and defined mixed cultures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arora, D.K.
The kinetics of pure and mixed cultures of Alcaligenes eutrophus H16 and Methylobacterium organophilum CRL 26 under double-substrate-limited conditions were studied. In pure-culture growth kinetics, a non-interactive model was found to fit the experimental data best. The yield of biomass on the limiting substrate was found to vary with the dilution rate. This variation in biomass yield may be attributed to changes in metabolic pathways resulting from a shift in the limiting substrates. Both species exhibited wall growth in the chemostat under dark conditions; under illuminated conditions, however, wall growth was significantly reduced. Poly-β-hydroxybutyric acid was synthesized by both species under ammonia- and oxygen-limiting conditions. The feed gas mixture was optimized to achieve, for the first time, steady-state coexistence of these two species in a chemostat. In mixed cultures, species-specific biomass assays were based on selective growth on particular compounds: sarcosine and D-arabinose were selected for the hydrogen bacteria and the methylotrophs, respectively. The kinetic parameters estimated from pure cultures were used to predict the growth kinetics of these species in defined mixed cultures.
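As a pointer to what the 'non-interactive' double-substrate model means: growth proceeds at the rate set by the more limiting of two Monod terms (law of the minimum), in contrast to the interactive form in which the terms multiply. A minimal sketch, with parameter values to be supplied from chemostat fits (the numbers below are illustrative only):

```python
def monod(mu_max, s, k):
    """Single-substrate Monod term."""
    return mu_max * s / (k + s)

def growth_rate_noninteractive(mu_max, s1, k1, s2, k2):
    """Non-interactive double-substrate model: the specific growth rate
    is set by the more limiting substrate (law of the minimum), unlike
    the interactive form in which the two Monod terms multiply."""
    return min(monod(mu_max, s1, k1), monod(mu_max, s2, k2))

# illustrative numbers only; real values come from chemostat fits
print(growth_rate_noninteractive(mu_max=0.4, s1=0.05, k1=0.02, s2=1.0, k2=0.1))
```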
Rolland, Yves; Dupuy, Charlotte; Abellan Van Kan, Gabor; Cesari, Matteo; Vellas, Bruno; Faruch, Marie; Dray, Cedric; de Souto Barreto, Philipe
2017-10-01
Screening for sarcopenia in daily practice can be challenging. Our objective was to explore whether the SARC-F questionnaire is a valid screening tool for sarcopenia (defined by the Foundation for the National Institutes of Health [FNIH] criteria). Moreover, we evaluated the physical performance of older women according to the SARC-F questionnaire. Cross-sectional study. Data from the Toulouse and Lyon EPIDémiologie de l'OStéoporose study (EPIDOS) on 3025 women living in the community (mean age: 80.5 ± 3.9 years), without a previous history of hip fracture, were assessed. The SARC-F self-report questionnaire score ranges from 0 to 10: a score ≥4 defines sarcopenia. The FNIH criteria use handgrip strength (GS) and appendicular lean mass (ALM; assessed by DXA) divided by body mass index (BMI) to define sarcopenia. Outcome measures were the following performance-based tests: knee-extension strength, 6-m gait speed, and a repeated chair-stand test. The associations of sarcopenia with the performance-based tests were examined using bootstrap multiple linear-regression models; adjusted R² determined the percentage of variation in each outcome explained by the model. Prevalence of sarcopenia was 16.7% (n = 504) according to the SARC-F questionnaire and 1.8% (n = 49) using the FNIH criteria. Sensitivity and specificity of the SARC-F to diagnose sarcopenia (defined by FNIH criteria) were 34% and 85%, respectively. Sarcopenic women defined by SARC-F had significantly lower physical performance than nonsarcopenic women. The SARC-F improved the ability to predict poor physical performance. The validity of the SARC-F questionnaire to screen for sarcopenia, when compared with the FNIH criteria, was limited. However, sarcopenia defined by the SARC-F questionnaire substantially improved the prediction of poor physical performance from patients' clinical characteristics. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Shuttle TPS thermal performance and analysis methodology
NASA Technical Reports Server (NTRS)
Neuenschwander, W. E.; Mcbride, D. U.; Armour, G. A.
1983-01-01
Thermal performance of the thermal protection system was approximately as predicted. The only extensive anomalies were filler bar scorching and over-predictions in the high-Δp gap heating regions of the orbiter. A technique to predict filler bar scorching has been developed that can aid in defining a solution. Improvement in high-Δp gap heating methodology is still under study. Minor anomalies were also examined for improvements in modeling techniques and prediction capabilities. These include improved definition of low-Δp gap heating, an analytical model for inner mode line convection heat transfer, better modeling of structure, and inclusion of sneak heating. The limited number of problems related to penetration items that presented themselves during orbital flight tests were resolved expeditiously, and designs were changed and proved successful within the time frame of that program.
Steric interactions determine side-chain conformations in protein cores.
Caballero, D; Virrueta, A; O'Hern, C S; Regan, L
2016-09-01
We investigate the role of steric interactions in defining side-chain conformations in protein cores. Previously, we explored the strengths and limitations of hard-sphere dipeptide models in defining sterically allowed side-chain conformations and recapitulating key features of the side-chain dihedral angle distributions observed in high-resolution protein structures. Here, we show that modeling residues in the context of a particular protein environment, with both intra- and inter-residue steric interactions, is sufficient to specify which of the allowed side-chain conformations is adopted. This model predicts 97% of the side-chain conformations of Leu, Ile, Val, Phe, Tyr, Trp and Thr core residues to within 20°. Although the hard-sphere dipeptide model predicts the observed side-chain dihedral angle distributions for both Thr and Ser, the model including the protein environment predicts side-chain conformations to within 20° for only 60% of core Ser residues. Thus, this approach can identify the amino acids for which hard-sphere interactions alone are sufficient and those for which additional interactions are necessary to accurately predict side-chain conformations in protein cores. We also show that our approach can predict alternate side-chain conformations of core residues, which are supported by the observed electron density. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
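The core test in a hard-sphere model of this kind is simply whether any pair of atoms overlaps; a conformation is sterically allowed when no pair does. A minimal sketch follows (Python/NumPy; the atomic radii are calibration inputs not given here):

```python
import numpy as np

def count_clashes(coords, radii, tol=0.0):
    """Hard-sphere steric check: two atoms clash when their separation is
    smaller than the sum of their radii (minus an optional tolerance).
    A side-chain conformation is sterically allowed when it produces no
    clashes with the rest of the structure."""
    coords = np.asarray(coords, dtype=float)
    n = len(coords)
    clashes = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) < radii[i] + radii[j] - tol:
                clashes += 1
    return clashes
```

Scanning the side-chain dihedral angles and keeping only clash-free conformations is what yields the allowed set referred to above.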
Three approaches to define desired soil organic matter contents.
Sparling, G; Parfitt, R L; Hewitt, A E; Schipper, L A
2003-01-01
Soil organic C is often suggested as an indicator of soil quality, but desirable targets are rarely specified. We tested three approaches to define maximum and lowest desirable soil C contents for four New Zealand soil orders. Approach 1 used the New Zealand National Soils Database (NSD). The maximum C content was defined as the median value of long-term pastures, and the lower quartile defined the lowest desirable soil C content. Approach 2 used the CENTURY model to predict maximum C contents of long-term pasture. Lowest desirable content was defined by the level that still allowed recovery to 80% of the maximum C content over 25 yr. Approach 3 used an expert panel to define desirable C contents based on production and environmental criteria. Median C contents (0-20 cm) for the Recent, Granular, Melanic, and Allophanic orders were 72, 88, 98, 132 Mg ha(-1), and similar to contents predicted by the CENTURY model (78, 93, 102, and 134 Mg ha(-1), respectively). Lower quartile values (54, 78, 73, and 103 Mg ha(-1), respectively) were similar to the lowest desirable C contents calculated by CENTURY (55, 54, 67, and 104 Mg ha(-1), respectively). Expert opinion was that C contents could be depleted below these values with tolerable effects on production but less so for the environment. The CENTURY model is our preferred approach for setting soil organic C targets, but the model needs calibrating for other soils and land uses. The statistical and expert opinion approaches are less defensible in setting lower limits for desirable C contents.
Cogswell, Rebecca; Kobashigawa, Erin; McGlothlin, Dana; Shaw, Robin; De Marco, Teresa
2012-11-01
The Registry to Evaluate Early and Long-Term Pulmonary Arterial Hypertension (PAH) Disease Management (REVEAL) model was designed to predict 1-year survival in patients with PAH. Multivariate prediction models need to be evaluated in cohorts distinct from the derivation set to determine external validity. In addition, limited data exist on the utility of this model in the prediction of long-term survival. REVEAL model performance was assessed to predict 1-year and 5-year outcomes, defined as survival or composite survival or freedom from lung transplant, in 140 patients with PAH. The validation cohort had a higher proportion of human immunodeficiency virus infection (7.9% vs 1.9%, p < 0.0001), methamphetamine use (19.3% vs 4.9%, p < 0.0001), and portal hypertension-associated PAH (16.4% vs 5.1%, p < 0.0001) compared with the development cohort. The C-index of the model to predict survival was 0.765 at 1 year and 0.712 at 5 years of follow-up. The C-index of the model to predict composite survival or freedom from lung transplant was 0.805 and 0.724 at 1 and 5 years of follow-up, respectively. Prediction by the model, however, was weakest among patients with intermediate-risk predicted survival. The REVEAL model had adequate discrimination to predict 1-year survival in this small but clinically distinct validation cohort. Although the model also had predictive ability out to 5 years, prediction was limited among patients of intermediate risk, suggesting our prediction methods can still be improved. Copyright © 2012. Published by Elsevier Inc.
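For readers unfamiliar with the discrimination statistic quoted above: Harrell's concordance index is the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who fails earlier. A minimal sketch of the standard pairwise definition (illustrative; production implementations handle ties and censoring more carefully):

```python
def c_index(risk, time, event):
    """Harrell's concordance index. A pair (i, j) is comparable when
    patient i has an observed event before patient j's follow-up time;
    the pair is concordant when the model gives i the higher risk.
    0.5 is chance-level discrimination, 1.0 is perfect ordering."""
    num = den = 0.0
    for i in range(len(risk)):
        for j in range(len(risk)):
            if event[i] and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# toy illustration with made-up numbers
print(c_index(risk=[0.9, 0.4, 0.7], time=[1.0, 5.0, 2.0], event=[1, 0, 1]))
```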
Theoretical prediction of Grüneisen parameter for SiO₂·TiO₂ bulk metallic glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Chandra K.; Pandey, Brijesh K., E-mail: bkpmmmec11@gmail.com; Pandey, Anjani K.
2016-05-23
The Grüneisen parameter (γ) is very important in setting the limits for the prediction of thermoelastic properties of bulk metallic glasses. It can be defined in terms of microscopic or macroscopic parameters of the material: the former is based on the vibrational frequencies of atoms in the material, while the latter is closely related to its thermodynamic properties. Different formulations and equations of state have been used by pioneering researchers in this field to predict the true value of the Grüneisen parameter for BMGs, but for SiO₂·TiO₂ very little information has been available until now. In the present work we have tested the validity of two different isothermal EOSs, the Poirier-Tarantola EOS and the usual Tait EOS, for predicting the true value of the Grüneisen parameter for SiO₂·TiO₂ as a function of compression. Considering the thermodynamic limitations imposed by the material constraints and analyzing the obtained results, it is concluded that the Poirier-Tarantola EOS gives better numerical values of the Grüneisen parameter (γ) for the SiO₂·TiO₂ BMG.
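For reference, the two senses of γ referred to above have standard textbook definitions (not specific to this paper): a microscopic, mode-level parameter built on vibrational frequencies, and a macroscopic thermodynamic one,

\[
\gamma_i = -\frac{\partial \ln \omega_i}{\partial \ln V}, \qquad
\gamma = \frac{\alpha K_T V}{C_V},
\]

where ω_i is a vibrational mode frequency, V the volume, α the volume thermal expansion coefficient, K_T the isothermal bulk modulus and C_V the isochoric heat capacity.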
Multi-objective optimization for model predictive control.
Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry
2007-06-01
This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics have to outweigh those assigned for control objectives. Controlled variables (CVs) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated variables (MVs) can also have assigned targets, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all of the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.
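One way to picture the three-level priority scheme is as a single scalar objective in which constraint slacks carry the largest penalties, economic terms outweigh control terms, and control error is weighted least. The sketch below is illustrative only; the weights, quadratic forms and function names are assumptions of this sketch, not the industrial implementation described in the paper.

```python
import numpy as np

def mpc_objective(u, cv_pred, cv_target, cv_lo, cv_hi, econ_cost=None,
                  w_constraint=1e6, w_econ=1e2, w_control=1.0):
    """Scalar objective mimicking the three-level priority scheme:
    slack penalties for violated CV limits dominate, economic terms
    outweigh control terms, and control error is weighted least."""
    slack = np.maximum(cv_pred - cv_hi, 0.0) + np.maximum(cv_lo - cv_pred, 0.0)
    J = w_constraint * np.sum(slack ** 2)                 # priority 1: limits
    if econ_cost is not None:
        J += w_econ * econ_cost(u)                        # priority 2: economics
    J += w_control * np.sum((cv_pred - cv_target) ** 2)   # priority 3: control
    return J
```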
Kaiser, W; Faber, T S; Findeis, M
1996-01-01
The authors developed a computer program that detects myocardial infarction (MI) and left ventricular hypertrophy (LVH) in two steps: (1) by extracting parameter values from a 10-second, 12-lead electrocardiogram, and (2) by classifying the extracted parameter values with rule sets. Every disease has its dedicated set of rules; hence, there are separate rule sets for anterior MI, inferior MI, and LVH. If at least one rule is satisfied, the disease is said to be detected. The computer program develops these rule sets automatically. A database (learning set) of healthy subjects and patients with MI, LVH, and mixed MI+LVH was used. After defining the rule type, initial limits, and expected quality of the rules (positive predictive value, minimum number of patients), the program creates a set of rules by varying the limits. The general rule type is: disease = (lim_1l < p_1 ≤ lim_1u) AND (lim_2l < p_2 ≤ lim_2u) AND … AND (lim_nl < p_n ≤ lim_nu). When defining the rule types, only parameters (p_1 … p_n) that are known clinical electrocardiographic criteria (amplitudes [mV] of Q, R, and T waves and ST-segment; duration [ms] of Q wave; frontal angle [degrees]) were used. This allowed the learned rule sets to be submitted to an independent investigator for medical verification, and explanatory texts to be created alongside the rules; these advantages are not offered by the neurons of a neural network. The learned rules were checked against a test set, with the following results: MI: sensitivity 76.2%, positive predictive value 98.6%; LVH: sensitivity 72.3%, positive predictive value 90.9%. The specificity ratings are better than 98% for MI and better than 90% for LVH.
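The learned rule sets have a very simple runtime form: a disease is reported as soon as one rule's interval conditions are all met. A minimal sketch (the parameter names and limits below are hypothetical, not the learned limits from the paper):

```python
def rule_satisfied(params, rule):
    """A rule is a list of (parameter name, lower, upper) limits; it is
    satisfied when every parameter lies in (lower, upper]."""
    return all(lo < params[name] <= hi for name, lo, hi in rule)

def detect(params, rule_set):
    """The disease is reported if at least one rule is satisfied."""
    return any(rule_satisfied(params, rule) for rule in rule_set)

# hypothetical rule set, NOT the learned limits from the paper
anterior_mi_rules = [
    [("Q_amplitude_mV", -5.0, -0.1), ("R_amplitude_mV", 0.0, 0.3)],
]
print(detect({"Q_amplitude_mV": -0.4, "R_amplitude_mV": 0.2}, anterior_mi_rules))
```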
Elskens, Marc; Vloeberghs, Daniel; Van Elsen, Liesbeth; Baeyens, Willy; Goeyens, Leo
2012-09-15
For reasons of food safety, packaging and food contact materials must be submitted to migration tests. Testing of silicone moulds is often very laborious, since three replicate tests are required to decide about their compliancy. This paper presents a general modelling framework to predict the sample's compliance or non-compliance using results of the first two migration tests. It compares the outcomes of models with multiple continuous predictors with a class of models involving latent and dummy variables. The model's prediction ability was tested using cross and external validations, i.e. model revalidation each time a new measurement set became available. At the overall migration limit of 10 mg dm⁻², the relative uncertainty on a prediction was estimated to be ~10%. Taking the default values for α and β equal to 0.05, the maximum value that can be predicted for sample compliance was therefore 7 mg dm⁻². Beyond this limit the risk for false compliant results increases significantly, and a third migration test should be performed. The result of this latter test defines the sample's compliance or non-compliance. Propositions for compliancy control inspired by the current dioxin control strategy are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Auyeung, Tung Wai; Lee, Jenny Shun Wah; Leung, Jason; Kwok, Timothy; Woo, Jean
2013-08-01
Conventionally, sarcopenia is defined by muscle mass and physical performance. We hypothesized that the disability caused by sarcopenia and sarcopenic obesity was related to the amount of adiposity or body weight bearing on a unit of muscle mass, or the adiposity-to-muscle ratio. We therefore examined whether this ratio could predict physical limitation by secondary analysis of the data in our previous study. We recruited 3,153 community-dwelling adults aged >65 years and their body composition was measured by dual-energy X-ray absorptiometry. Assessment of physical limitation was undertaken 4 years later. The relationship between baseline adiposity-to-muscle ratio and incident physical limitation was examined by logistic regression. In men, the adiposity-to-muscle ratios, namely total body fat to lower-limb muscle mass, total body fat to fat-free mass (FFM), and body weight to FFM, were predictive of physical limitation before and after adjustment for the covariates: age, Mini-Mental State Examination score, Geriatric Depression Scale score >8, and the diagnosis of chronic obstructive pulmonary disease, diabetes mellitus, hypertension, heart disease, and stroke (all p values < 0.001), when the total body fat to lower-limb muscle mass ratio was greater than or equal to 0.75. In women, throughout the entire range of that ratio, all three adiposity-to-muscle ratios were associated with physical limitation 4 years later, both before and after adjustment for the same set of covariates (all p values < 0.05). Sarcopenia and sarcopenic obesity as measured by the body weight or adiposity bearing on a unit of muscle mass (the adiposity-to-muscle ratio) could predict incident or worsening physical limitation in older women across the entire range of the total body fat to lower-limb muscle mass ratio, and in older men when this ratio was equal to or greater than 0.75.
Angulo, Javier C; Andrés, Guillermo; Ashour, Nadia; Sánchez-Chapado, Manuel; López, Jose I; Ropero, Santiago
2016-03-01
Detection of DNA hypermethylation has emerged as a novel molecular biomarker for prostate cancer diagnosis and evaluation of prognosis. We sought to define whether a hypermethylation profile of patients with prostate cancer on androgen deprivation would predict castrate resistant prostate cancer. Genome-wide methylation analysis was performed using a methylation cancer panel in 10 normal prostates and 45 tumor samples from patients placed on androgen deprivation who were followed until castrate resistant disease developed. Castrate resistant disease was defined according to EAU (European Association of Urology) guideline criteria. Two pathologists reviewed the Gleason score, Ki-67 index and neuroendocrine differentiation. Hierarchical clustering analysis was performed and relationships with outcome were investigated by Cox regression and log rank analysis. We found 61 genes that were significantly hypermethylated in greater than 20% of tumors analyzed. Three clusters of patients were characterized by a DNA methylation profile, including 1 at risk for earlier castrate resistant disease (log rank p = 0.019) and specific mortality (log rank p = 0.002). Hypermethylation of ETV1 (HR 3.75) and ZNF215 (HR 2.89) predicted disease progression despite androgen deprivation. Hypermethylation of IRAK3 (HR 13.72), ZNF215 (HR 4.81) and SEPT9 (HR 7.64) were independent markers of prognosis. Prostate specific antigen greater than 25 ng/ml, Gleason pattern 5, Ki-67 index greater than 12% and metastasis at diagnosis also predicted a negative response to androgen deprivation. Study limitations included the retrospective design and limited number of cases. Epigenetic silencing of the mentioned genes could be novel molecular markers for the prognosis of advanced prostate cancer. It might predict castrate resistance during hormone deprivation and, thus, disease specific mortality. Gene hypermethylation is associated with disease progression in patients who receive hormone therapy. It could serve as a marker of the treatment response. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Poststroke Fatigue: Who Is at Risk for an Increase in Fatigue?
van Eijsden, Hanna Maria; van de Port, Ingrid Gerrie Lambert; Visser-Meily, Johanna Maria August; Kwakkel, Gert
2012-01-01
Background. Several studies have examined determinants related to poststroke fatigue. However, it is unclear which determinants can predict an increase in poststroke fatigue over time. Aim. This prospective cohort study aimed to identify determinants which predict an increase in poststroke fatigue. Methods. A total of 250 patients with stroke were examined at inpatient rehabilitation discharge (T0) and 24 weeks later (T1). Fatigue was measured using the Fatigue Severity Scale (FSS). An increase in poststroke fatigue was defined as an increase in the FSS score beyond the 95% limits of the standard error of measurement of the FSS (i.e., 1.41 points) between T0 and T1. Candidate determinants included personal factors, stroke characteristics, physical, cognitive, and emotional functions, and activities and participation, and were assessed at T0. Factors predicting an increase in fatigue were identified using forward multivariate logistic regression analysis. Results. The only independent predictor of an increase in poststroke fatigue was FSS (OR 0.50; 0.38–0.64, P < 0.001). The model including FSS at baseline correctly predicted 7.9% of the patients who showed increased fatigue at T1. Conclusion. The prognostic model to predict an increase in fatigue after stroke has limited predictive value, but baseline fatigue is the most important independent predictor. Overall, fatigue levels remained stable over time. PMID:22028989
Pharmacogenetics of schizophrenia.
Reynolds, Gavin P; Templeman, Lucy A; Godlewska, Beata R
2006-08-01
There is substantial unexplained interindividual variability in the drug treatment of schizophrenia. A substantial proportion of patients respond inadequately to antipsychotic drugs, and many experience limiting side effects. As genetic factors are likely to contribute to this variability, the pharmacogenetics of schizophrenia has attracted substantial effort. The approaches have mainly been limited to association studies of polymorphisms in candidate genes, which have been indicated by the pharmacology of antipsychotic drugs. Although some advances have been made, particularly in understanding the pharmacogenetics of some limiting side effects, genetic prediction of symptom response remains elusive. Nevertheless, with improvements in defining the response phenotype in carefully assessed and homogeneous subject groups, the near future is likely to see the identification of genetic predictors of outcome that may inform the choice of pharmacotherapy.
Numerical modeling of the Indo-Australian intraplate deformation
NASA Astrophysics Data System (ADS)
Brandon, Vincent; Royer, Jean-Yves
2014-05-01
The Indo-Australian plate is perhaps the best example of wide intraplate deformation within an oceanic plate. The deformation is expressed by an unusual level of intraplate seismicity, including magnitude Mw > 8 events, large-scale folding and deep faulting of the oceanic lithosphere, and reactivation of extinct fracture zones. The deformation pattern and kinematic data inversions suggest that the Indo-Australian plate can be viewed as a composite plate made of three rigid component plates - India, Capricorn, Australia - separated by wide and diffuse boundaries undergoing either extensional or compressional deformation. We tested this model using the SHELLS numerical code (Kong & Bird, 1995). The Indo-Australian plate is modeled by a mesh of 5281 spherical triangular finite elements. Mesh edges parallel the major extinct fracture zones so that they can be reactivated by reducing their friction coefficients. The strength of the plate is defined by the age of the lithosphere and the seafloor topography. Model boundary conditions are defined only by the plate velocities predicted by the rotation vectors between the rigid components of the Indo-Australian plate and their neighboring plates. Since the mesh limits all belong to rigid plates with fully defined Euler vectors, no conditions are imposed on the location, extent and limits of the diffuse deforming zones. Using MORVEL plate velocities (DeMets et al., 2010), the predicted deformation patterns are very consistent with those observed. Pre-existing structures of the lithosphere play an important role in the intraplate deformation and its distribution. The Chagos Bank focuses most of the extensional deformation between the Indian and Capricorn plates. Agreement between models and observations improves when fossil fracture zones are weakened relative to the surrounding crust; however, only limited sections of FZs accommodate deformation. The reactivation of the Eocene FZs in the Central Indian Basin (CIB) and Wharton Basin (WB) explains the drastic change in deformation style between these basins across the Ninetyeast Ridge. The highest slip rates along the WB FZs are predicted where two major strike-slip earthquakes occurred in April 2012 (Mw = 8.6 and 8.2). The best model is obtained when adding a local heat-flow anomaly in the center of the CIB (a proxy for weakened lithospheric strength), consistent with evidence of mantle serpentinization in the CIB, where deep seismic profiles image a series of N-S dipping thrust faults reaching Moho depths. The rates of extension or shortening inferred from the predicted strain rates are consistent with previous estimates based on different approaches. This finite element modeling confirms that oceanic lithosphere, like continental lithosphere, can slowly deform over very broad areas (> 1000 x 1000 km).
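The boundary conditions mentioned above reduce to evaluating rigid-plate surface velocities from rotation (Euler) vectors, v = omega x r. A self-contained sketch with a placeholder pole, not the actual MORVEL India-Capricorn vector:

```python
# Sketch: surface velocity of a rigid plate from an Euler vector, v = w x r.
# The pole location and rotation rate below are placeholders for illustration.
import numpy as np

R_EARTH = 6371e3  # m

def surface_velocity(lat_p, lon_p, rate_deg_myr, lat, lon):
    """East/north velocity (mm/yr) at (lat, lon) for rotation about a pole
    at (lat_p, lon_p) with angular rate given in deg/Myr."""
    w = np.deg2rad(rate_deg_myr) / 3.15576e13          # rad/s
    def unit(latd, lond):
        la, lo = np.deg2rad([latd, lond])
        return np.array([np.cos(la)*np.cos(lo), np.cos(la)*np.sin(lo), np.sin(la)])
    omega = w * unit(lat_p, lon_p)
    r = R_EARTH * unit(lat, lon)
    v = np.cross(omega, r)                             # m/s, Cartesian
    la, lo = np.deg2rad([lat, lon])                    # local east/north axes
    e_east = np.array([-np.sin(lo), np.cos(lo), 0.0])
    e_north = np.array([-np.sin(la)*np.cos(lo), -np.sin(la)*np.sin(lo), np.cos(la)])
    to_mm_yr = 1e3 * 3.15576e7
    return v @ e_east * to_mm_yr, v @ e_north * to_mm_yr

print(surface_velocity(20.0, 30.0, 0.3, -10.0, 80.0))  # hypothetical pole
```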
Kohonen and counterpropagation neural networks applied for mapping and interpretation of IR spectra.
Novic, Marjana
2008-01-01
The principles of the learning strategies of Kohonen and counterpropagation neural networks are introduced, and the advantages of unsupervised learning are discussed. The self-organizing maps produced by both methods are suitable for a wide range of applications. Here, we present an example of Kohonen and counterpropagation neural networks used for mapping, interpretation, and simulation of infrared (IR) spectra. The artificial neural network models were trained to predict structural fragments of an unknown compound from its infrared spectrum. The training set contained over 3,200 IR spectra of diverse compounds of known chemical structure. The structure-spectra relationship was captured by the counterpropagation neural network, which assigned structural fragments to individual compounds within certain probability limits, assessed from the predictions for test compounds. The counterpropagation neural network model for prediction of fragments of chemical structure is reversible, which means that, within the structural domain spanned by the training data set, it can also be used to simulate the IR spectrum of a chemical defined by a set of structural fragments.
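For readers unfamiliar with the method, a minimal numpy sketch of Kohonen training follows; the counterpropagation variant would add an output layer of structural-fragment vectors updated alongside the weights. Spectra, map size, and schedules here are placeholders.

```python
# Minimal Kohonen self-organizing map sketch (numpy only), of the kind used
# to map IR spectra onto a 2-D grid; the inputs are random stand-ins for
# binned spectra.
import numpy as np

rng = np.random.default_rng(1)
n_spectra, n_bins = 200, 64
X = rng.random((n_spectra, n_bins))           # placeholder "IR spectra"

grid = 10                                     # 10x10 map
W = rng.random((grid, grid, n_bins))          # weight (codebook) vectors
coords = np.dstack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"))

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                    # decaying learning rate
    sigma = max(grid / 2 * (1 - epoch / 20), 1.0)  # shrinking neighborhood
    for x in X:
        d = np.linalg.norm(W - x, axis=2)          # distance to every node
        bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        # Gaussian neighborhood around the BMU on the map grid
        g = np.exp(-np.sum((coords - np.array(bmu))**2, axis=2) / (2 * sigma**2))
        W += lr * g[..., None] * (x - W)           # pull neighbors toward x

# After training, each spectrum maps to the cell of its best-matching unit:
d = np.linalg.norm(W - X[0], axis=2)
print(np.unravel_index(np.argmin(d), d.shape))
```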
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzo, Davinia B.; Blackburn, Mark R.
2018-03-30
As systems become more complex, systems engineers rely on experts to inform decisions. There are few experts and limited data in many complex new technologies. This challenges systems engineers as they strive to plan activities such as qualification in an environment where technical constraints are coupled with the traditional cost, risk, and schedule constraints. Bayesian network (BN) models provide a framework to aid systems engineers in planning qualification efforts with complex constraints by harnessing expert knowledge and incorporating technical factors. By quantifying causal factors, a BN model can provide data about the risk of implementing a decision supplemented with information on driving factors. This allows a systems engineer to make informed decisions and examine “what-if” scenarios. This paper discusses a novel process developed to define a BN model structure based primarily on expert knowledge supplemented with extremely limited data (25 data sets or less). The model was developed to aid qualification decisions—specifically to predict the suitability of six degrees of freedom (6DOF) vibration testing for qualification. The process defined the model structure with expert knowledge in an unbiased manner. Finally, validation during the process execution and of the model provided evidence the process may be an effective tool in harnessing expert knowledge for a BN model.
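A toy illustration of the BN idea, with invented conditional probabilities rather than the paper's elicited ones: two parent nodes (expert judgment and sparse test data) drive the probability that 6DOF testing is suitable, and conditioning on evidence is done by brute-force enumeration.

```python
# Toy Bayesian-network sketch (illustrative numbers, not the paper's model):
# parents E (expert judgment favorable) and D (limited data supportive)
# drive S (6DOF vibration testing suitable). Inference by enumeration.
from itertools import product

p_E = {1: 0.7, 0: 0.3}                 # prior: expert deems approach sound
p_D = {1: 0.4, 0: 0.6}                 # prior: sparse data are supportive
p_S = {(1, 1): 0.9, (1, 0): 0.6,       # P(S=1 | E, D), expert-elicited CPT
       (0, 1): 0.5, (0, 0): 0.1}

def joint(e, d, s):
    ps1 = p_S[(e, d)]
    return p_E[e] * p_D[d] * (ps1 if s == 1 else 1 - ps1)

# P(S=1 | D=1): condition on supportive data, marginalize over E
num = sum(joint(e, 1, 1) for e in (0, 1))
den = sum(joint(e, 1, s) for e, s in product((0, 1), repeat=2))
print(num / den)   # = 0.78 with these illustrative numbers
```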
Eye Dominance Predicts fMRI Signals in Human Retinotopic Cortex
Mendola, Janine D.; Conner, Ian P.
2009-01-01
There have been many attempts to define eye dominance in normal subjects, but limited consensus exists, and relevant physiological data is scarce. In this study, we consider two different behavioral methods for assignment of eye dominance, and how well they predict fMRI signals evoked by monocular stimulation. Sighting eye dominance was assessed with two standard tests, the Porta Test, and a ‘hole in hand’ variation of the Miles Test. Acuity dominance was tested with a standard eye chart and with a computerized test of grating acuity. We found limited agreement between the sighting and acuity methods for assigning dominance in our individual subjects. We then compared the fMRI response generated by dominant eye stimulation to that generated by non-dominant eye, according to both methods, in 7 normal subjects. The stimulus consisted of a high contrast hemifield stimulus alternating with no stimulus in a blocked paradigm. In separate scans, we used standard techniques to label the borders of visual areas V1, V2, V3, VP, V4, V3A, and MT. These regions of interest (ROIs) were used to analyze each visual area separately. We found that percent change in fMRI BOLD signal was stronger for the dominant eye as defined by the acuity method, and this effect was significant for areas located in the ventral occipital territory (V1v, V2v, VP, V4). In contrast, assigning dominance based on sighting produced no significant interocular BOLD differences. We conclude that interocular BOLD differences in normal subjects exist, and may be predicted by acuity measures. PMID:17194544
Dual role of starvation signaling in promoting growth and recovery
Leshkowitz, Dena; Barkai, Naama
2017-01-01
Growing cells are subject to cycles of nutrient depletion and repletion. A shortage of nutrients activates a starvation program that promotes growth in limiting conditions. To examine whether nutrient-deprived cells prepare also for their subsequent recovery, we followed the transcription program activated in budding yeast transferred to low-phosphate media and defined its contribution to cell growth during phosphate limitation and upon recovery. An initial transcription wave was induced by moderate phosphate depletion that did not affect cell growth. A second transcription wave followed when phosphate became growth limiting. The starvation program contributed to growth only in the second, growth-limiting phase. Notably, the early response, activated at moderate depletion, promoted recovery from starvation by increasing phosphate influx upon transfer to rich medium. Our results suggest that cells subject to nutrient depletion prepare not only for growth in the limiting conditions but also for their predicted recovery once nutrients are replenished. PMID:29236696
Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A
2011-01-01
To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.
Prediction, scenarios and insight: The uses of an end-to-end model
NASA Astrophysics Data System (ADS)
Steele, John H.
2012-09-01
A major function of ecosystem models is to provide extrapolations from observed data in terms of predictions, scenarios, or insight. These models can be at various levels of taxonomic resolution, such as total community production, abundance of functional groups, or species composition, depending on the data input as drivers. A 40-year dynamic simulation of end-to-end processes in the Georges Bank food web is used to illustrate the input/output relations and the insights gained at the three levels of food web aggregation. The focus is on the intermediate level and the longer term changes in three functional fish guilds - planktivores, benthivores and piscivores - in terms of three ecosystem-based metrics - nutrient input, relative productivity of plankton and benthos, and food intake by juvenile fish. These simulations can describe the long term constraints imposed on guild structure and productivity by energy fluxes over the 40 years but cannot explain concurrent switches in abundance of individual species within guilds. Comparing time series data for individual species with model output provides insights; but including the data in the model would confer only limited extra information. The advantages and limitations of the three levels of model resolution in relation to ecosystem-based management are: (1) the correlations between primary production and total yield of fish imply a “bottom-up” constraint on end-to-end energy flow through the food web that can provide predictions of such yields; (2) functionally defined metrics - nutrient input, relative productivity of plankton and benthos, and food intake by juvenile fish - represent bottom-up, mid-level and top-down forcing of the food web, and model scenarios using these metrics can demonstrate constraints on the productivity of the functionally defined guilds within the limits set by (1); (3) comparisons of guild simulations with time series of fish species provide insight into the switches in species dominance that accompany changes in guild productivity and can illuminate the top-down aspects of regime shifts.
NASA Astrophysics Data System (ADS)
Obreschkow, D.; Klöckner, H.-R.; Heywood, I.; Levrier, F.; Rawlings, S.
2009-10-01
We present a sky simulation of the atomic H I emission line and the first 10 ¹²C¹⁶O rotational emission lines of molecular gas in galaxies beyond the Milky Way. The simulated sky field has a comoving diameter of 500 h⁻¹ Mpc; hence, the actual field of view depends on the (user-defined) maximal redshift z_max; e.g., for z_max = 10, the field of view yields ~4 × 4 deg². For all galaxies, we estimate the line fluxes, line profiles, and angular sizes of the H I and CO emission lines. The galaxy sample is complete for galaxies with cold hydrogen masses above 10⁸ M_sun. This sky simulation builds on a semi-analytic model of the cosmic evolution of galaxies in a Λ cold dark matter (ΛCDM) cosmology. The evolving CDM distribution was adopted from the Millennium Simulation, an N-body CDM simulation in a cubic box with a side length of 500 h⁻¹ Mpc. This side length limits the coherence scale of our sky simulation: it is long enough to allow the extraction of the baryon acoustic oscillations in the galaxy power spectrum, yet the position and amplitude of the first acoustic peak will be imperfectly defined. This sky simulation is a tangible aid to the design and operation of future telescopes, such as the Square Kilometre Array, Large Millimeter Telescope, and Atacama Large Millimeter/Submillimeter Array. The results presented in this paper have been restricted to a graphical representation of the simulated sky and fundamental dN/dz analyses for peak flux density limited and total flux limited surveys of H I and CO. A key prediction is that H I will be harder to detect at redshifts z ≳ 2 than predicted by a no-evolution model. The future verification or falsification of this prediction will allow us to qualify the semi-analytic models.
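The quoted ~4 × 4 deg² field of view can be checked back-of-envelope as the angle subtended by the 500 h⁻¹ Mpc box at the comoving distance of z_max = 10; a sketch using astropy with Millennium-like cosmological parameters (assumed here, not taken from the paper):

```python
# Back-of-envelope check of the quoted field of view (a sketch assuming a
# flat LCDM cosmology; parameters are Millennium-like, not the paper's
# exact values): angle subtended by a 500/h Mpc box at z_max = 10.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=73, Om0=0.25)
h = cosmo.H0.value / 100.0
box = 500.0 / h                            # comoving box side in Mpc

d_c = cosmo.comoving_distance(10.0).value  # comoving distance to z = 10, Mpc
fov_deg = np.degrees(box / d_c)
print(f"{fov_deg:.1f} deg on a side")      # roughly 4 deg, as quoted
```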
A computer program for thermal radiation from gaseous rocket exhaust plumes (GASRAD)
NASA Technical Reports Server (NTRS)
Reardon, J. E.; Lee, Y. C.
1979-01-01
A computer code is presented for predicting incident thermal radiation from defined plume gas properties in either axisymmetric or cylindrical coordinate systems. The radiation model is a statistical band model for an exponential line strength distribution with Lorentz/Doppler line shapes for 5 gaseous species (H2O, CO2, CO, HCl and HF) and an approximate (non-scattering) treatment of carbon particles. The Curtis-Godson approximation is used for inhomogeneous gases, but a subroutine is available for using Young's intuitive derivative method for H2O with a Lorentz line shape and an exponentially-tailed-inverse line strength distribution. The geometry model provides integration over a hemisphere with up to 6 individually oriented identical axisymmetric plumes or a single 3-D plume. Shading surfaces may be used in any of 7 shapes, and a conical limit may be defined for the plume to set individual line-of-sight limits. Intermediate coordinate systems may be specified to simplify input of plumes and shading surfaces.
Machine Learning for Flood Prediction in Google Earth Engine
NASA Astrophysics Data System (ADS)
Kuhn, C.; Tellman, B.; Max, S. A.; Schwarz, B.
2015-12-01
With the increasing availability of high-resolution satellite imagery, dynamic flood mapping in near real time is becoming a reachable goal for decision-makers. This talk describes a newly developed framework for predicting biophysical flood vulnerability using public data, cloud computing and machine learning. Our objective is to define an approach to flood inundation modeling using statistical learning methods deployed in a cloud-based computing platform. Traditionally, static flood extent maps grounded in physically based hydrologic models can require hours of human expertise to construct at significant financial cost. In addition, desktop modeling software and limited local server storage can impose restraints on the size and resolution of input datasets. Data-driven, cloud-based processing holds promise for predictive watershed modeling at a wide range of spatio-temporal scales. However, these benefits come with constraints. In particular, parallel computing limits a modeler's ability to simulate the flow of water across a landscape, rendering traditional routing algorithms unusable in this platform. Our project pushes these limits by testing the performance of two machine learning algorithms, Support Vector Machine (SVM) and Random Forests, at predicting flood extent. Constructed in Google Earth Engine, the model mines a suite of publicly available satellite imagery layers to use as algorithm inputs. Results are cross-validated using MODIS-based flood maps created using the Dartmouth Flood Observatory detection algorithm. Model uncertainty highlights the difficulty of deploying unbalanced training data sets based on rare extreme events.
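A scikit-learn stand-in for the classification step (the actual work used classifiers inside Google Earth Engine; the features and labels below are synthetic):

```python
# Sketch of the learning setup: pixel features from public layers predict
# MODIS-derived flood/no-flood labels. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_pixels = 5000
# hypothetical columns: elevation, slope, distance-to-river, antecedent rain
X = rng.random((n_pixels, 4))
# toy rule: low, flat pixels near rivers with heavy rain flood more often
risk = 1.5 - X[:, 0] - X[:, 1] - X[:, 2] + X[:, 3]
y = (risk + rng.normal(0, 0.3, n_pixels) > 0.75).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```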
Carlson, Ross; Srienc, Friedrich
2004-04-20
We have previously shown that the metabolism for most efficient cell growth can be realized by a combination of two types of elementary modes. One mode produces biomass while the second mode generates only energy. The identity of the four most efficient biomass and energy pathway pairs changes, depending on the degree of oxygen limitation. The identification of such pathway pairs for different growth conditions offers a pathway-based explanation of maintenance energy generation. For a given growth rate, experimental aerobic glucose consumption rates can be used to estimate the contribution of each pathway type to the overall metabolic flux pattern. All metabolic fluxes are then completely determined by the stoichiometries of involved pathways defining all nutrient consumption and metabolite secretion rates. We present here equations that permit computation of network fluxes on the basis of unique pathways for the case of optimal, glucose-limited Escherichia coli growth under varying levels of oxygen stress. Predicted glucose and oxygen uptake rates and some metabolite secretion rates are in remarkable agreement with experimental observations supporting the validity of the presented approach. The entire most efficient, steady-state, metabolic rate structure is explicitly defined by the developed equations without need for additional computer simulations. The approach should be generally useful for analyzing and interpreting genomic data by predicting concise, pathway-based metabolic rate structures.
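Once the two mode stoichiometries are fixed, the flux computation reduces to a small linear solve. A sketch with invented stoichiometric coefficients, not the paper's E. coli values:

```python
# Sketch of the two-mode decomposition (hypothetical stoichiometries, not
# the paper's): mode 1 makes biomass, mode 2 only energy. Given an observed
# growth rate and glucose uptake, solve for the two mode weights; all other
# fluxes then follow from the mode stoichiometries.
import numpy as np

# per unit of mode activity: [biomass produced, glucose consumed]
mode_biomass = np.array([1.0, 10.0])   # makes biomass, uses 10 glucose
mode_energy  = np.array([0.0,  1.0])   # burns glucose for maintenance only

mu = 0.4            # observed growth rate (1/h)
q_glc = 5.5         # observed glucose uptake (mmol/gDW/h)

# Solve [biomass; glucose] = w1*mode1 + w2*mode2 for the weights w1, w2
A = np.column_stack([mode_biomass, mode_energy])
w = np.linalg.solve(A, np.array([mu, q_glc]))
print(w)            # w[0]: growth-mode flux, w[1]: maintenance-energy flux
```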
Review of Factors, Methods, and Outcome Definition in Designing Opioid Abuse Predictive Models.
Alzeer, Abdullah H; Jones, Josette; Bair, Matthew J
2018-05-01
Several opioid risk assessment tools are available to prescribers to evaluate opioid analgesic abuse among patients with chronic pain. The objectives of this study are to 1) identify variables available in the literature to predict opioid abuse; 2) explore and compare methods (population, database, and analysis) used to develop statistical models that predict opioid abuse; and 3) understand how outcomes were defined in each statistical model predicting opioid abuse. The OVID database was searched for this study. The search was limited to articles written in English and published from January 1990 to April 2016. This search generated 1,409 articles. Only seven studies, comprising nine models, met our inclusion-exclusion criteria. Across the nine models we identified 75 distinct variables. Three studies used administrative claims data, and four studies used electronic health record data. The majority, four out of seven articles (six out of nine models), were primarily dependent on the presence or absence of opioid abuse or dependence (ICD-9 diagnosis code) to define opioid abuse. However, two articles used a predefined list of opioid-related aberrant behaviors. We identified variables used to predict opioid abuse from electronic health records and administrative data. Medication variables are the most recurrent variables in the articles reviewed (33 variables). Age and gender are the most consistent demographic variables in predicting opioid abuse. Overall, there is similarity in the sampling method and inclusion/exclusion criteria (age, number of prescriptions, follow-up period, and data analysis methods). Future research that utilizes unstructured data may increase the accuracy of opioid abuse models.
Shibata, Yohei; Ishii, Hideki; Suzuki, Susumu; Tanaka, Akihito; Tatami, Yosuke; Harata, Shingo; Ota, Tomoyuki; Shimbo, Yusaku; Takayama, Yohei; Kunimura, Ayako; Hirayama, Kenshi; Harada, Kazuhiro; Osugi, Naohiro; Murohara, Toyoaki
2017-05-01
Previous studies have shown that aortic valve calcification (AVC) was associated with cardiovascular events and mortality. On the other hand, periprocedural myocardial injury (PMI) in percutaneous coronary intervention (PCI) is a well-known predictor of subsequent mortality and poor clinical outcomes. The purpose of the study was to assess the hypothesis that the presence of AVC could predict PMI in PCI. This study included 370 patients treated with PCI for stable angina pectoris. AVC was defined as bright echoes >1 mm on one or more cusps of the aortic valve on ultrasound cardiography (UCG). PMI was defined as an increase in high-sensitivity troponin T level of >5 times the upper normal limit (>0.070 ng/ml) at 24 hours after PCI. AVC was detected in 45.9% of the patients (n=170). The incidence of PMI was significantly higher in the patients with AVC than in those without AVC (43.5% vs 21.0%, p<0.001). The presence of AVC independently predicted PMI after adjusting for other significant variables (odds ratio 2.26, 95% confidence interval 1.37-3.74, p=0.002). Other predictors were male sex, age, estimated glomerular filtration rate, and total stent length. Furthermore, adding AVC to the established risk factors significantly improved the area under the receiver operating characteristic curve of the PMI prediction model, from 0.68 to 0.72 (p=0.025). The presence of AVC detected on UCG could predict the incidence of PMI.
Velstra, Inge-Marie; Bolliger, Marc; Krebs, Jörg; Rietman, Johan S; Curt, Armin
2016-05-01
To determine which single or combined upper limb muscles, as defined by the International Standards for the Neurological Classification of Spinal Cord Injury (ISNCSCI) upper extremity motor score (UEMS) and the Graded Redefined Assessment of Strength, Sensibility, and Prehension (GRASSP), best predict upper limb function and independence in activities of daily living (ADLs), and to assess the predictive value of qualitative grasp movements (QlG) on upper limb function in individuals with acute tetraplegia. As part of a Europe-wide, prospective, longitudinal, multicenter study, ISNCSCI, GRASSP, and Spinal Cord Independence Measure (SCIM III) scores were recorded at 1 and 6 months after SCI. For prediction of upper limb function and ADLs, a logistic regression model and unbiased recursive partitioning conditional inference tree (URP-CTREE) were used. Logistic regression and URP-CTREE revealed that a combination of ISNCSCI and GRASSP muscles (to a maximum of 4) demonstrated the best prediction (specificity and sensitivity ranged from 81.8% to 96.0%) of upper limb function and identified homogeneous outcome cohorts at 6 months. The URP-CTREE model with the QlG predictors for upper limb function showed similar results. Prediction of upper limb function can be achieved through a combination of defined, specific upper limb muscles assessed in the ISNCSCI and GRASSP. A combination of a limited number of proximal and distal muscles, along with an assessment of grasping movements, can be applied to clinical decision making for rehabilitation interventions and clinical trials.
Research Domain Criteria as Psychiatric Nosology.
Akram, Faisal; Giordano, James
2017-10-01
Diagnostic classification systems in psychiatry have continued to rely on clinical phenomenology, despite limitations inherent in that approach. In view of these limitations and recent progress in neuroscience, the National Institute of Mental Health (NIMH) has initiated the Research Domain Criteria (RDoC) project to develop a more neuroscientifically based system of characterizing and classifying psychiatric disorders. The RDoC initiative aims to transform psychiatry into an integrative science of psychopathology in which mental illnesses will be defined as involving putative dysfunctions in neural nodes and networks. However, conceptual, methodological, neuroethical, and social issues inherent in and/or derived from the use of RDoC need to be addressed before any attempt is made to implement their use in clinical psychiatry. This article describes current progress in RDoC; defines key technical, neuroethical, and social issues generated by RDoC adoption and use; and posits key questions that must be addressed and resolved if RDoC are to be employed for psychiatric diagnoses and therapeutics. Specifically, we posit that objectivization of complex mental phenomena may raise ethical questions about autonomy, the value of subjective experience, what constitutes normality, what constitutes a disorder, and what represents a treatment, enablement, and/or enhancement. Ethical issues may also arise from the (mis)use of biomarkers and phenotypes in predicting and treating mental disorders, and what such definitions, predictions, and interventions portend for concepts and views of sickness, criminality, professional competency, and social functioning. Given these issues, we offer that a preparatory neuroethical framework is required to define and guide the ways in which RDoC-oriented research can-and arguably should-be utilized in clinical psychiatry, and perhaps more broadly, in the social sphere.
A parametric approach to irregular fatigue prediction
NASA Technical Reports Server (NTRS)
Erismann, T. H.
1972-01-01
A parametric approach to irregular fatigue prediction is presented. The method proposed consists of two parts: empirical determination of certain characteristics of a material by means of a relatively small number of well-defined standard tests, and arithmetical application of the results obtained to arbitrary loading histories. The following groups of parameters are thus taken into account: (1) the variations of the mean stress, (2) the interaction of these variations and the superposed oscillating stresses, (3) the spectrum of the oscillating-stress amplitudes, and (4) the sequence of the oscillating-stress amplitudes. It is pointed out that only experimental verification can throw sufficient light upon the possibilities and limitations of this (or any other) prediction method.
How I treat acute graft-versus-host disease of the gastrointestinal tract and the liver.
McDonald, George B
2016-03-24
Treatment of acute graft-versus-host disease (GVHD) has evolved from a one-size-fits-all approach to a more nuanced strategy based on predicted outcomes. Lower and time-limited doses of immune suppression for patients predicted to have low-risk GVHD are safe and effective. In more severe GVHD, prolonged exposure to immunosuppressive therapies, failure to achieve tolerance, and inadequate clinical responses are the proximate causes of GVHD-related deaths. This article presents acute GVHD-related scenarios representing, respectively, certainty of diagnosis, multiple causes of symptoms, jaundice, an initial therapy algorithm, secondary therapy, and defining futility of treatment.
Where and When do Species Interactions Set Range Limits?
Louthan, Allison M; Doak, Daniel F; Angert, Amy L
2015-12-01
A long-standing theory, originating with Darwin, suggests that abiotic forces set species range limits at high latitude, high elevation, and other abiotically 'stressful' areas, while species interactions set range limits in apparently more benign regions. This theory is of considerable importance for both basic and applied ecology, and while it is often assumed to be a ubiquitous pattern, it has not been clearly defined or broadly tested. We review tests of this idea and dissect how the strength of species interactions must vary across stress gradients to generate the predicted pattern. We conclude by suggesting approaches to better test this theory, which will deepen our understanding of the forces that determine species ranges and govern responses to climate change.
Predictability of the Lagrangian Motion in the Upper Ocean
NASA Astrophysics Data System (ADS)
Piterbarg, L. I.; Griffa, A.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.
2001-12-01
The complex non-linear dynamics of the upper ocean leads to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on the knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and for modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated and is a generalization of the well known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters, enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and the Adriatic Sea showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale from the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. Preliminary results, based on simulated as well as real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem: where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
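A single-particle version of the random flight model is easy to write down; the study's multi-particle generalization adds spatially correlated turbulent velocities between particles. The parameters below are illustrative, not estimates from drifter data:

```python
# Minimal "random flight" sketch for one particle: an Ornstein-Uhlenbeck
# turbulent velocity riding on a mean current. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
T = 2.0 * 86400          # Lagrangian decorrelation time (2 days, in s)
sigma = 0.15             # turbulent velocity standard deviation (m/s)
U = 0.05                 # mean zonal current (m/s)
dt = 3600.0              # time step (1 h)
n_steps = 24 * 14        # two weeks

x = np.zeros(n_steps + 1)
u = sigma * rng.normal()
for k in range(n_steps):
    x[k + 1] = x[k] + (U + u) * dt
    u += -u * dt / T + sigma * np.sqrt(2 * dt / T) * rng.normal()

print(f"displacement after 14 days: {x[-1] / 1e3:.1f} km")
```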
Recknagel, Friedrich; Orr, Philip T; Bartkow, Michael; Swanepoel, Annelie; Cao, Hongqing
2017-11-01
An early warning scheme is proposed that runs ensembles of inferential models for predicting cyanobacterial population dynamics and cyanotoxin concentrations in drinking water reservoirs on a diel basis, driven by in situ sonde water quality data. When the 10- to 30-day-ahead predicted concentrations of cyanobacteria cells or cyanotoxins exceed pre-defined limit values, an early warning automatically activates an action plan considering in-lake control, e.g. intermittent mixing, and ad hoc water treatment in the water works, respectively. Case studies of the sub-tropical Lake Wivenhoe (Australia) and the Mediterranean Vaal Reservoir (South Africa) demonstrate that ensembles of inferential models developed by the hybrid evolutionary algorithm HEA are capable of forecasting cyanobacteria and cyanotoxins up to 30 days ahead using data collected in situ. The resulting models for Dolichospermum circinale displayed validity for up to 10 days ahead, whilst concentrations of Cylindrospermopsis raciborskii and microcystins were successfully predicted up to 30 days ahead. Implementing the proposed scheme for drinking water reservoirs enhances current water quality monitoring practices by solely utilising in situ monitoring data, in addition to cyanobacteria and cyanotoxin measurements. Access to routinely measured cyanotoxin data allows for the development of models that predict cyanotoxin concentrations explicitly and thereby avoid inadvertently modelling and predicting non-toxic cyanobacterial strains.
Auria, Richard; Boileau, Céline; Davidson, Sylvain; Casalot, Laurence; Christen, Pierre; Liebgott, Pierre Pol; Combet-Blanc, Yannick
2016-01-01
Thermotoga maritima is a hyperthermophilic bacterium known to produce hydrogen from a large variety of substrates. The aim of the present study is to propose a mathematical model incorporating kinetics of growth, consumption of substrates, product formation, and inhibition by hydrogen in order to predict hydrogen production under defined culture conditions. Our mathematical model, incorporating data concerning growth, substrates, and products, was developed to predict hydrogen production from batch fermentations of the hyperthermophilic bacterium T. maritima. It includes the inhibition by hydrogen and the liquid-to-gas mass transfer of H2, CO2, and H2S. Most kinetic parameters of the model were obtained from batch experiments without any fitting. The mathematical model is adequate for glucose, yeast extract, and thiosulfate concentrations ranging from 2.5 to 20 mmol/L, 0.2-0.5 g/L, or 0.01-0.06 mmol/L, respectively, corresponding to one of these compounds being the growth-limiting factor of T. maritima. When glucose, yeast extract, and thiosulfate concentrations are all higher than these ranges, the model overestimates all the variables. In the window of the model's validity, predictions show that the combination of both variables (an increase in limiting factor concentration and in inlet gas stream) leads to up to a twofold increase of the maximum H2-specific productivity with the lowest inhibition. A mathematical model predicting H2 production in T. maritima was successfully designed and confirmed in this study. However, it also shows the limits of validity of such mathematical models: their applicability must take into account the range of conditions over which the parameters were established.
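A toy batch model in this spirit, with illustrative functional forms and parameters (the authors' calibrated model differs in detail), combining Monod growth, a dissolved-hydrogen inhibition term, and liquid-to-gas stripping:

```python
# Toy batch fermentation sketch (illustrative forms and parameters, not the
# authors' calibrated model): Monod growth on glucose with dissolved-H2
# inhibition and first-order liquid-to-gas transfer of H2.
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks = 0.3, 0.5        # 1/h, mmol/L
Y_xs, Y_hx = 0.05, 30.0      # biomass/glucose and H2/biomass yields
H_crit = 2.0                 # dissolved H2 (mmol/L) that halts growth
kla, H_star = 1.0, 0.0       # transfer coeff. (1/h), H2 at gas equilibrium

def rhs(t, y):
    X, S, H = y                           # biomass, glucose, dissolved H2
    mu = mu_max * S / (Ks + S) * max(1 - H / H_crit, 0.0)
    dX = mu * X
    dS = -dX / Y_xs
    dH = Y_hx * dX - kla * (H - H_star)   # production minus stripping
    return [dX, dS, dH]

sol = solve_ivp(rhs, (0, 24), [0.01, 10.0, 0.0])
print(sol.y[:, -1])   # final biomass, glucose, dissolved H2
```

Raising kla (a stronger inlet gas stream) lowers dissolved H2 and relieves the inhibition term, which is the qualitative effect the study quantifies.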
The shape parameter and its modification for defining coastal profiles
NASA Astrophysics Data System (ADS)
Türker, Umut; Kabdaşli, M. Sedat
2009-03-01
The shape parameter is important for the theoretical description of sandy coastal profiles. This parameter has previously been defined as a function of the sediment-settling velocity. However, the settling velocity cannot be characterized over a wide range of sediment grains, which, in turn, limits the range over which the shape parameter can be calculated. This paper provides a simpler and faster analytical equation to describe the shape parameter. The validity of the equation has been tested and compared with previously estimated values given in both graphical and tabular forms. The results of this study indicate that the analytical solution of the shape parameter improves the usability of the profile description relative to graphical solutions, predicting better results both in the surf zone and offshore.
Bazzini, Ariel A; Johnstone, Timothy G; Christiano, Romain; Mackowiak, Sebastian D; Obermayer, Benedikt; Fleming, Elizabeth S; Vejnar, Charles E; Lee, Miler T; Rajewsky, Nikolaus; Walther, Tobias C; Giraldez, Antonio J
2014-01-01
Identification of the coding elements in the genome is a fundamental step to understanding the building blocks of living systems. Short peptides (< 100 aa) have emerged as important regulators of development and physiology, but their identification has been limited by their size. We have leveraged the periodicity of ribosome movement on the mRNA to define actively translated ORFs by ribosome footprinting. This approach identifies several hundred translated small ORFs in zebrafish and human. Computational prediction of small ORFs from codon conservation patterns corroborates and extends these findings and identifies conserved sequences in zebrafish and human, suggesting functional peptide products (micropeptides). These results identify micropeptide-encoding genes in vertebrates, providing an entry point to define their function in vivo. PMID:24705786
Second look at the spread of epidemics on networks
NASA Astrophysics Data System (ADS)
Kenah, Eben; Robins, James M.
2007-09-01
In an important paper, Newman [Phys. Rev. E66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.
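The role of the infectious-period distribution can be seen in a small hand-rolled simulation: on one contact network, compare final outbreak sizes from an SIR model with exponentially distributed infectious periods against bond percolation with the matching marginal transmissibility. Network size and rates below are arbitrary:

```python
# Sketch (hand-rolled, no epidemiology library): outbreak sizes on the same
# network under (a) SIR with exponential infectious periods and (b) bond
# percolation using the matching *marginal* transmission probability.
import numpy as np

rng = np.random.default_rng(7)
N, k_half = 2000, 2                  # nodes; ~4 neighbors per node on average
beta, gamma = 0.06, 0.2              # per-edge transmission / recovery rates

adj = [[] for _ in range(N)]         # random (Erdos-Renyi-like) neighbor lists
for i in range(N):
    for j in rng.choice(N, size=rng.poisson(k_half), replace=False):
        if i != j and j not in adj[i]:
            adj[i].append(j); adj[j].append(i)

def outbreak(fixed_T=None):
    """Final size of one outbreak from a random seed."""
    s = np.ones(N, bool); queue = [rng.integers(N)]; s[queue[0]] = False
    size = 1
    while queue:
        i = queue.pop()
        T = fixed_T if fixed_T is not None else rng.exponential(1 / gamma)
        p = 1 - np.exp(-beta * T)            # transmit prob. over period T
        for j in adj[i]:
            if s[j] and rng.random() < p:
                s[j] = False; queue.append(j); size += 1
    return size

# marginal transmissibility for T ~ Exp(gamma): E[1 - e^(-beta*T)]
p_marg = beta / (beta + gamma)
sir = [outbreak() for _ in range(300)]
perc = [outbreak(fixed_T=-np.log(1 - p_marg) / beta) for _ in range(300)]
print(np.mean(sir), np.mean(perc))
```

With these rates the system is below the epidemic threshold, where the paper shows the two mean outbreak sizes agree even though the full size distributions differ.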
Summer drought predictability over Europe: empirical versus dynamical forecasts
NASA Astrophysics Data System (ADS)
Turco, Marco; Ceglar, Andrej; Prodhomme, Chloé; Soret, Albert; Toreti, Andrea; Doblas-Reyes Francisco, J.
2017-08-01
Seasonal climate forecasts could be an important planning tool for farmers, government and insurance companies that can lead to better and timely management of seasonal climate risks. However, seasonal climate forecasts are often under-used, because potential users are not well aware of the capabilities and limitations of these products. This study aims at assessing the merits and caveats of a statistical empirical method, the ensemble streamflow prediction system (ESP, an ensemble based on reordering historical data), and an operational dynamical forecast system, the European Centre for Medium-Range Weather Forecasts System 4 (S4), in predicting summer drought in Europe. Droughts are defined using the Standardized Precipitation Evapotranspiration Index for the month of August integrated over 6 months. Both systems show useful and mostly comparable deterministic skill. We argue that this source of predictability is mostly attributable to the observed initial conditions. S4 shows higher skill only in its ability to probabilistically identify drought occurrence. Thus, currently, both approaches provide useful information, and ESP represents a computationally fast alternative to dynamical prediction applications for drought prediction.
New Modelling of Localized Necking in Sheet Metal Stretching
NASA Astrophysics Data System (ADS)
Bressan, José Divo
2011-01-01
Present work examines a new mathematical model to predict the onset of localized necking in industrial sheet metal forming processes such as biaxial stretching. Sheet metal formability is usually assessed experimentally by testing, such as the Nakajima test, to obtain the Forming Limit Curve, FLC, which is an essential material parameter for numerical simulations by FEM. The Forming Limit Diagram or "Forming Principal Strain Map" (FPSM) shows the experimental FLC, which is the plot of the principal true strains in the sheet metal surface, ε1 and ε2, occurring at critical points obtained in laboratory formability tests or in the fabrication process. Two types of undesirable rupture mechanisms can occur in sheet metal forming products: localized necking and shear-induced fracture. Therefore, two kinds of limit strain curves can be plotted: the local necking limit curve FLC-N and the shear fracture limit curve FLC-S. Localized necking is theoretically anticipated to initiate at a thickness defect f_in = h_ib/h_ia, the ratio of the initial sheet thickness h_ib inside a groove to the initial thickness h_ia outside it, but only at the instability point of maximum load. The inception of grooving on the sheet surface evolves from the instability point to localized necking and final rupture during further sheet metal straining. The work-hardening law for a strain- and strain-rate-hardening material is defined by the effective stress σ̄ = σ0(1 + βε̄)^n (dε̄/dt)^M. The average experimental hardening law curve from tensile tests at 0°, 45° and 90°, assuming isotropic plasticity, was used to analyze the plasticity behavior during biaxial stretching of sheet metals. Theoretically predicted curves of local necking limits are plotted in the positive quadrant of the FPSM for different defect values f_in and plasticity parameters. Limit strains are obtained from software developed by the author. Some experimental forming limit curves obtained for IF steel sheets are compared with the theoretically predicted curves: the correlation is good.
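The instability point of maximum load mentioned above can be located numerically from the hardening law; in uniaxial tension with the rate term held constant, the Considère condition dσ̄/dε̄ = σ̄ applies. A sketch with placeholder parameters:

```python
# Sketch: locating the maximum-load (instability) point preceding localized
# necking, using sigma = s0*(1 + b*e)^n at fixed strain rate (the rate term
# is then a constant factor). Parameters are illustrative placeholders.
import numpy as np
from scipy.optimize import brentq

s0, b, n = 500.0, 5.0, 0.25     # MPa, hardening parameters

def sigma(e):                   # effective stress
    return s0 * (1 + b * e) ** n

def dsigma(e):                  # hardening rate d(sigma)/d(epsilon)
    return s0 * n * b * (1 + b * e) ** (n - 1)

# Considere-type condition for uniaxial tension: d(sigma)/d(eps) = sigma
e_crit = brentq(lambda e: dsigma(e) - sigma(e), 1e-6, 2.0)
print(e_crit)                   # closed form: n - 1/b = 0.05 here
```

For this law the condition solves in closed form to ε̄ = n - 1/β, which the root-finder reproduces.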
Data mining for signals in spontaneous reporting databases: proceed with caution.
Stephenson, Wendy P; Hauben, Manfred
2007-04-01
To provide commentary and points of caution to consider before incorporating data mining as a routine component of any Pharmacovigilance program, and to stimulate further research aimed at better defining the predictive value of these new tools as well as their incremental value as an adjunct to traditional methods of post-marketing surveillance. Commentary includes review of current data mining methodologies employed and their limitations, caveats to consider in the use of spontaneous reporting databases and caution against over-confidence in the results of data mining. Future research should focus on more clearly delineating the limitations of the various quantitative approaches as well as the incremental value that they bring to traditional methods of pharmacovigilance.
Bridges, Kristie Grove; Jarrett, Traci; Thorpe, Anthony; Baus, Adam; Cochran, Jill
2015-01-01
Background Studies have suggested that triglyceride to HDL-cholesterol ratio (TRG/HDL) is a surrogate marker of insulin resistance (IR), but information regarding its use in pediatric patients is limited. Objective This study investigated the ability of TRG/HDL ratio to assess IR in obese and overweight children. Subjects The sample consisted of de-identified electronic medical records of patients aged 10–17 years (n = 223). Materials and methods Logistic regression was performed using TRG/HDL ratio as a predictor of hyperinsulinemia or IR defined using homeostasis model assessment score. Results TRG/HDL ratio had limited ability to predict hyperinsulinemia (AUROC 0.71) or IR (AUROC 0.72). Although females had higher insulin levels, male patients were significantly more likely to have hypertriglyceridemia and impaired fasting glucose. Conclusions TRG/HDL ratio was not adequate for predicting IR in this population. Gender differences in the development of obesity-related metabolic abnormalities may impact the choice of screening studies in pediatric patients. PMID:26352085
Limit Cycle Analysis Applied to the Oscillations of Decelerating Blunt-Body Entry Vehicles
NASA Technical Reports Server (NTRS)
Schoenenberger, Mark; Queen, Eric M.
2008-01-01
Many blunt-body entry vehicles have nonlinear dynamic stability characteristics that produce self-limiting oscillations in flight. Several different test techniques can be used to extract dynamic aerodynamic coefficients to predict this oscillatory behavior for planetary entry mission design and analysis. Most of these test techniques impose boundary conditions that alter the oscillatory behavior from that seen in flight. Three sets of test conditions, representing three commonly used test techniques, are presented to highlight these effects. Analytical solutions to the constant-coefficient planar equations-of-motion for each case are developed to show how the same blunt body behaves differently depending on the imposed test conditions. The energy equation is applied to further illustrate the governing dynamics. Then, the mean value theorem is applied to the energy rate equation to find the effective damping for an example blunt body with nonlinear, self-limiting dynamic characteristics. This approach is used to predict constant-energy oscillatory behavior and the equilibrium oscillation amplitudes for the various test conditions. These predictions are verified with planar simulations. The analysis presented provides an overview of dynamic stability test techniques and illustrates the effects of dynamic stability, static aerodynamics and test conditions on observed dynamic motions. It is proposed that these effects may be leveraged to develop new test techniques and refine test matrices in future tests to better define the nonlinear functional forms of blunt body dynamic stability curves.
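A minimal constant-coefficient planar illustration of this self-limiting behavior, using a generic quadratic damping form rather than any particular vehicle's aerodynamics: damping that is destabilizing below a crossover angle and stabilizing above it drives a small disturbance to a finite equilibrium amplitude. For the form below, the zero-net-energy-per-cycle argument predicts an equilibrium amplitude of twice the crossover angle, about 20 degrees here.

```python
# Sketch: theta'' + c(theta)*theta' + w^2*theta = 0 with a damping
# coefficient that changes sign at a crossover angle (illustrative
# constants, not a real vehicle's aerodynamics).
import numpy as np
from scipy.integrate import solve_ivp

w = 2.0                                    # pitch frequency (rad/s)
theta_x = np.radians(10.0)                 # crossover angle: c < 0 below it

def c(theta):                              # nonlinear damping coefficient
    return 2.0 * (theta**2 - theta_x**2)

def rhs(t, y):
    th, thd = y
    return [thd, -c(th) * thd - w**2 * th]

sol = solve_ivp(rhs, (0, 300), [np.radians(1.0), 0.0], max_step=0.05)
amp = np.degrees(np.abs(sol.y[0][-2000:]).max())
print(f"equilibrium oscillation amplitude ~ {amp:.1f} deg")  # ~2x crossover
```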
Hydraulic limits preceding mortality in a piñon-juniper woodland under experimental drought.
Plaut, Jennifer A; Yepez, Enrico A; Hill, Judson; Pangle, Robert; Sperry, John S; Pockman, William T; McDowell, Nate G
2012-09-01
Drought-related tree mortality occurs globally and may increase in the future, but we lack sufficient mechanistic understanding to accurately predict it. Here we present the first field assessment of the physiological mechanisms leading to mortality in an ecosystem-scale rainfall manipulation of a piñon-juniper (Pinus edulis-Juniperus monosperma) woodland. We measured transpiration (E) and modelled the transpiration rate initiating hydraulic failure (E(crit)). We predicted that isohydric piñon would experience mortality after prolonged periods of severely limited gas exchange as required to avoid hydraulic failure; anisohydric juniper would also avoid hydraulic failure, but sustain gas exchange due to its greater cavitation resistance. After 1 year of treatment, 67% of droughted mature piñon died with concomitant infestation by bark beetles (Ips confusus) and bluestain fungus (Ophiostoma spp.); no mortality occurred in juniper or in control piñon. As predicted, both species avoided hydraulic failure, but safety margins from E(crit) were much smaller in piñon, especially droughted piñon, which also experienced chronically low hydraulic conductance. The defining characteristic of trees that died was a 7 month period of near-zero gas exchange, versus 2 months for surviving piñon. Hydraulic limits to gas exchange, not hydraulic failure per se, promoted drought-related mortality in piñon pine.
Micrometeoroid and Orbital Debris Threat Assessment: Mars Sample Return Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Hyde, James L.; Bjorkman, Michael D.; Hoffman, Kevin D.; Lear, Dana M.; Prior, Thomas G.
2011-01-01
This report provides results of a Micrometeoroid and Orbital Debris (MMOD) risk assessment of the Mars Sample Return Earth Entry Vehicle (MSR EEV). The assessment was performed using standard risk assessment methodology illustrated in Figure 1-1. Central to the process is the Bumper risk assessment code (Figure 1-2), which calculates the critical penetration risk based on geometry, shielding configurations and flight parameters. The assessment process begins by building a finite element model (FEM) of the spacecraft, which defines the size and shape of the spacecraft as well as the locations of the various shielding configurations. This model is built using the NX I-deas software package from Siemens PLM Software. The FEM is constructed using triangular and quadrilateral elements that define the outer shell of the spacecraft. Bumper-II uses the model file to determine the geometry of the spacecraft for the analysis. The next step of the process is to identify the ballistic limit characteristics for the various shield types. These ballistic limits define the critical size particle that will penetrate a shield at a given impact angle and impact velocity. When the finite element model is built, each individual element is assigned a property identifier (PID) to act as an index for its shielding properties. Using the ballistic limit equations (BLEs) built into the Bumper-II code, the shield characteristics are defined for each and every PID in the model. The final stage of the analysis is to determine the probability of no penetration (PNP) on the spacecraft. This is done using the micrometeoroid and orbital debris environment definitions that are built into the Bumper-II code. These engineering models take into account orbit inclination, altitude, attitude and analysis date in order to predict an impacting particle flux on the spacecraft. Using the geometry and shielding characteristics previously defined for the spacecraft and combining that information with the environment model calculations, the Bumper-II code calculates a probability of no penetration for the spacecraft.
Maier, Holger; Döhr, Stefanie; Grote, Korbinian; O'Keeffe, Sean; Werner, Thomas; Hrabé de Angelis, Martin; Schneider, Ralf
2005-07-01
The LitMiner software is a literature data-mining tool that facilitates the identification of major gene regulation key players related to a user-defined field of interest in PubMed abstracts. The prediction of gene-regulatory relationships is based on co-occurrence analysis of key terms within the abstracts. LitMiner predicts relationships between key terms from the biomedical domain in four categories (genes, chemical compounds, diseases and tissues). Owing to the limitations (no direction, unverified automatic prediction) of the co-occurrence approach, the primary data in the LitMiner database represent postulated basic gene-gene relationships. The usefulness of the LitMiner system has been demonstrated recently in a study that reconstructed disease-related regulatory networks by promoter modelling that was initiated by a LitMiner generated primary gene list. To overcome the limitations and to verify and improve the data, we developed WikiGene, a Wiki-based curation tool that allows revision of the data by expert users over the Internet. LitMiner (http://andromeda.gsf.de/litminer) and WikiGene (http://andromeda.gsf.de/wiki) can be used unrestricted with any Internet browser.
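A toy version of the co-occurrence scoring that underlies this kind of prediction (illustrative only; LitMiner's actual term extraction and scoring are more involved): count pair occurrences across abstracts and compare against an independence baseline.

```python
# Toy co-occurrence sketch (not the LitMiner implementation): flag key-term
# pairs that appear together in abstracts more often than independence
# would suggest. Term sets below are invented.
from itertools import combinations
from collections import Counter

abstracts = [
    {"TP53", "apoptosis", "breast cancer"},   # one key-term set per abstract
    {"TP53", "MDM2", "apoptosis"},
    {"MDM2", "liver"},
    {"TP53", "MDM2"},
]
n = len(abstracts)
single = Counter(t for a in abstracts for t in a)
pair = Counter(p for a in abstracts for p in combinations(sorted(a), 2))

for (t1, t2), k in pair.items():
    expected = single[t1] * single[t2] / n    # independence baseline
    if k > expected:
        print(f"{t1} -- {t2}: observed {k}, expected {expected:.1f}")
```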
Tuite, Ashleigh R; Tien, Joseph; Eisenberg, Marisa; Earn, David J D; Ma, Junling; Fisman, David N
2011-05-03
Haiti is in the midst of a cholera epidemic. Surveillance data for formulating models of the epidemic are limited, but such models can aid understanding of epidemic processes and help define control strategies. To predict, by using a mathematical model, the sequence and timing of regional cholera epidemics in Haiti and explore the potential effects of disease-control strategies. Compartmental mathematical model allowing person-to-person and waterborne transmission of cholera. Within- and between-region epidemic spread was modeled, with the latter dependent on population sizes and distance between regional centroids (a "gravity" model). Haiti, 2010 to 2011. Haitian hospitalization data, 2009 census data, literature-derived parameter values, and model calibration. Dates of epidemic onset and hospitalizations. The plausible range for cholera's basic reproductive number (R(0), defined as the number of secondary cases per primary case in a susceptible population without intervention) was 2.06 to 2.78. The order and timing of regional cholera outbreaks predicted by the gravity model were closely correlated with empirical observations. Analysis of changes in disease dynamics over time suggests that public health interventions have substantially affected this epidemic. A limited vaccine supply provided late in the epidemic was projected to have a modest effect. Assumptions were simplified, which was necessary for modeling. Projections are based on the initial dynamics of the epidemic, which may change. Despite limited surveillance data from the cholera epidemic in Haiti, a model simulating between-region disease transmission according to population and distance closely reproduces reported disease patterns. This model is a tool that planners, policymakers, and medical personnel seeking to manage the epidemic could use immediately.
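A sketch of the gravity coupling itself, with hypothetical populations, distances, and exponents (the paper calibrates its own constants against hospitalization data): between-region transmission pressure scales with the product of population sizes over squared distance.

```python
# Sketch of a "gravity" coupling for between-region spread (hypothetical
# populations, distances, exponent, and coupling constant):
# pressure on region i from region j ~ pop_i * pop_j / distance^2.
import numpy as np

pop = np.array([2.0e6, 0.8e6, 0.5e6, 0.3e6])    # regional populations
dist = np.array([[0, 60, 120, 200],             # centroid distances (km)
                 [60, 0, 90, 150],
                 [120, 90, 0, 80],
                 [200, 150, 80, 0]], float)

theta = 1e-9                                    # overall coupling constant
with np.errstate(divide="ignore"):
    grav = theta * np.outer(pop, pop) / dist**2
np.fill_diagonal(grav, 0.0)                     # no self-coupling term

# import pressure on each region from infectives I elsewhere
I = np.array([1000, 0, 0, 0])                   # epidemic seeded in region 0
print(grav @ (I / pop))                         # per-capita import pressure
```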
Symmetry limit theory for cantilever beam-columns subjected to cyclic reversed bending
NASA Astrophysics Data System (ADS)
Uetani, K.; Nakamura, Tsuneyoshi
The behavior of a linear strain-hardening cantilever beam-column subjected to completely reversed plastic bending under a new idealized program, with constant axial compression, consists of three stages: a sequence of symmetric steady states, a subsequent sequence of asymmetric steady states, and a divergent behavior involving unbounded growth of an anti-symmetric deflection mode. A new concept, the "symmetry limit," is introduced here as the smallest critical value of the tip-deflection amplitude at which transition from a symmetric steady state to an asymmetric steady state can occur in the response of a beam-column. A new theory is presented for predicting the symmetry limits. Although this transition phenomenon is phenomenologically and conceptually different from the branching phenomenon on an equilibrium path, it is shown that a symmetry limit may theoretically be regarded as a branching point on a newly defined "steady-state path." The symmetry limit theory and the fundamental hypotheses are verified through numerical analysis of hysteretic responses of discretized beam-column models.
Agrawal, Swastik; Sharma, Surendra Kumar; Sreenivas, Vishnubhatla; Lakshmy, Ramakrishnan; Mishra, Hemant K
2012-09-01
Syndrome Z is the occurrence of metabolic syndrome (MS) with obstructive sleep apnea. Knowledge of its risk factors is useful to screen patients requiring further evaluation for syndrome Z. Consecutive patients referred from the sleep clinic and undergoing polysomnography in the Sleep Laboratory of AIIMS Hospital, New Delhi, were screened between June 2008 and May 2010, and 227 patients were recruited. Anthropometry, body composition analysis, blood pressure, fasting blood sugar, and lipid profile were measured. MS was defined using the National Cholesterol Education Program (Adult Treatment Panel III) criteria, with Asian cutoff values for abdominal obesity. Prevalence of MS and syndrome Z was 74% and 65%, respectively. Age, percent body fat, excessive daytime sleepiness (EDS), and ΔSaO(2) (defined as the difference between baseline and minimum SaO(2) during polysomnography) were independently associated with syndrome Z. Using a cutoff of 15% for level of desaturation, the stepped predictive score using these risk factors had sensitivity, specificity, positive predictive value, and negative predictive value of 75%, 73%, 84%, and 61%, respectively, for the diagnosis of syndrome Z. It correctly characterized the presence of syndrome Z 75% of the time and obviated the need for detailed evaluation in 42% of the screened subjects. A large proportion of patients presenting to sleep clinics have MS and syndrome Z. Age, percent body fat, EDS, and ΔSaO(2) are independent risk factors for syndrome Z. A stepped predictive score using these parameters is cost-effective and useful in diagnosing syndrome Z in resource-limited settings.
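The reported screening properties follow from a standard 2x2 table. A minimal sketch of how sensitivity, specificity, PPV, and NPV are computed from true/false positive and negative counts (counts are illustrative):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 table of true/false
    positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```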
Pathak, Amit
2018-04-12
Motile cells sense the stiffness of their extracellular matrix (ECM) through adhesions and respond by modulating the generated forces, which in turn lead to varying mechanosensitive migration phenotypes. Through modeling and experiments, cell migration speed is known to vary with matrix stiffness in a biphasic manner, with optimal motility at an intermediate stiffness. Here, we present a two-dimensional cell model defined by nodes and elements, integrated with subcellular modeling components corresponding to mechanotransductive adhesion formation, force generation, protrusions and node displacement. On 2D matrices, our calculations reproduce the classic biphasic dependence of migration speed on matrix stiffness and predict that cell types with higher force-generating ability do not slow down on very stiff matrices, thus disabling the biphasic response. We also predict that cell types defined by lower number of total receptors require stiffer matrices for optimal motility, which also limits the biphasic response. For a cell type with robust biphasic migration on 2D surface, simulations in channel-like confined environments of varying width and height predict faster migration in more confined matrices. Simulations performed in shallower channels predict that the biphasic mechanosensitive cell migration response is more robust on 2D micro-patterns as compared to the channel-like 3D confinement. Thus, variations in the dimensionality of matrix confinement alters the way migratory cells sense and respond to the matrix stiffness. Our calculations reveal new phenotypes of stiffness- and topography-sensitive cell migration that critically depend on both cell-intrinsic and matrix properties. These predictions may inform our understanding of various mechanosensitive modes of cell motility that could enable tumor invasion through topographically heterogeneous microenvironments. © 2018 IOP Publishing Ltd.
Donahue, D A; Kaufman, L E; Avalos, J; Simion, F A; Cerven, D R
2011-03-01
The Chorioallantoic Membrane Vascular Assay (CAMVA) and Bovine Corneal Opacity and Permeability (BCOP) test are widely used to predict ocular irritation potential for consumer-use products. These in vitro assays do not require live animals, produce reliable predictive data for defined applicability domains compared to the Draize rabbit eye test, and are rapid and inexpensive. Data from 304 CAMVA and/or BCOP studies (319 formulations) were surveyed to determine the feasibility of predicting ocular irritation potential for various formulations. Hair shampoos, skin cleansers, and ethanol-based hair styling sprays were repeatedly predicted to be ocular irritants (accuracy rate=0.90-1.00), with skin cleanser and hair shampoo irritation largely dependent on surfactant species and concentration. Conversely, skin lotions/moisturizers and hair styling gels/lotions were repeatedly predicted to be non-irritants (accuracy rate=0.92 and 0.82, respectively). For hair shampoos, ethanol-based hair stylers, skin cleansers, and skin lotions/moisturizers, future ocular irritation testing (i.e., CAMVA/BCOP) can be nearly eliminated if new formulations are systematically compared to those previously tested using a defined decision tree. For other tested product categories, new formulations should continue to be evaluated in CAMVA/BCOP for ocular irritation potential because either the historical data exhibit significant variability (hair conditioners and mousses) or the historical sample size is too small to permit definitive conclusions (deodorants, make-up removers, massage oils, facial masks, body sprays, and other hair styling products). All decision tree conclusions should be made within a conservative weight-of-evidence context, considering the reported limitations of the BCOP test for alcohols, ketones, and solids. Copyright © 2010 Elsevier Ltd. All rights reserved.
Upper Stage Tank Thermodynamic Modeling Using SINDA/FLUINT
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Campbell, D. Michael; Chase, Sukhdeep; Piquero, Jorge; Fortenberry, Cindy; Li, Xiaoyi; Grob, Lisa
2006-01-01
Modeling to predict the condition of cryogenic propellants in an upper stage of a launch vehicle is necessary for mission planning and successful execution. Traditionally, this effort was performed using custom, in-house proprietary codes, limiting accessibility and application. Phenomena responsible for influencing the thermodynamic state of the propellant have been characterized as distinct events whose sequence defines a mission. These events include thermal stratification, passive thermal control roll (rotation), slosh, and engine firing. This paper demonstrates the use of an off-the-shelf, commercially available thermal/fluid-network code to predict the thermodynamic state of propellant during the coast phase between engine firings, i.e., the first three of the events identified above. Results of this effort will also be presented.
J-2X Turbopump Cavitation Diagnostics
NASA Technical Reports Server (NTRS)
Santi, I. Michael; Butas, John P.; Tyler, Thomas R., Jr.; Aguilar, Robert; Sowers, T. Shane
2010-01-01
The J-2X is the upper stage engine currently being designed by Pratt & Whitney Rocketdyne (PWR) for the Ares I Crew Launch Vehicle (CLV). Propellant supply requirements for the J-2X are defined by the Ares Upper Stage to J-2X Interface Control Document (ICD). Supply conditions outside ICD defined start or run boxes can induce turbopump cavitation leading to interruption of J-2X propellant flow during hot fire operation. In severe cases, cavitation can lead to uncontained engine failure with the potential to cause a vehicle catastrophic event. Turbopump and engine system performance models supported by system design information and test data are required to predict existence, severity, and consequences of a cavitation event. A cavitation model for each of the J-2X fuel and oxidizer turbopumps was developed using data from pump water flow test facilities at Pratt & Whitney Rocketdyne (PWR) and Marshall Space Flight Center (MSFC) together with data from Powerpack 1A testing at Stennis Space Center (SSC) and from heritage systems. These component models were implemented within the PWR J-2X Real Time Model (RTM) to provide a foundation for predicting system level effects following turbopump cavitation. The RTM serves as a general failure simulation platform supporting estimation of J-2X redline system effectiveness. A study to compare cavitation induced conditions with component level structural limit thresholds throughout the engine was performed using the RTM. Results provided insight into system level turbopump cavitation effects and redline system effectiveness in preventing structural limit violations. A need to better understand structural limits and redline system failure mitigation potential in the event of fuel side cavitation was indicated. This paper examines study results, efforts to mature J-2X turbopump cavitation models and structural limits, and issues with engine redline detection of cavitation and the use of vehicle-side abort triggers to augment the engine redline system.
da Silva, Richardson Augusto Rosendo; Costa, Romanniny Hévillyn Silva; Nelson, Ana Raquel Cortês; Duarte, Fernando Hiago da Silva; Prado, Nanete Caroline da Costa; Rodrigues, Eduardo Henrique Fagundes
2016-01-01
Abstract Objective: to identify the predictive factors for the nursing diagnoses in people living with Acquired Immune Deficiency Syndrome. Method: a cross-sectional study, undertaken with 113 people living with AIDS. The data were collected using an interview script and physical examination. Logistic regression was used for the data analysis, considering a level of significance of 10%. Results: the predictive factors identified were: for the nursing diagnosis of knowledge deficit-inadequate following of instructions and verbalization of the problem; for the nursing diagnosis of failure to adhere - years of study, behavior indicative of failure to adhere, participation in the treatment and forgetfulness; for the nursing diagnosis of sexual dysfunction - family income, reduced frequency of sexual practice, perceived deficit in sexual desire, perceived limitations imposed by the disease and altered body function. Conclusion: the predictive factors for these nursing diagnoses involved sociodemographic and clinical characteristics, defining characteristics, and related factors, which must be taken into consideration during the assistance provided by the nurse. PMID:27384466
Automated adaptive inference of phenomenological dynamical models.
Daniels, Bryan C; Nemenman, Ilya
2015-08-21
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
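The key idea, adapting model complexity to the volume of available data, can be illustrated with a much simpler stand-in than the authors' dynamical-model framework: select a polynomial order by the Bayesian information criterion, so richer models are admitted only as data accumulate. A hedged sketch:

```python
import numpy as np

def pick_model_order(x, y, max_order=8):
    """Choose a polynomial order by BIC, so extra complexity is
    admitted only when the data support it. A stand-in for the idea
    only; the paper builds adaptive dynamical models, not polynomials."""
    n = len(x)
    best_order, best_bic = 0, np.inf
    for k in range(max_order + 1):
        coeffs = np.polyfit(x, y, k)
        resid = y - np.polyval(coeffs, x)
        sigma2 = max(np.mean(resid ** 2), 1e-12)  # guard against exact fits
        bic = n * np.log(sigma2) + (k + 1) * np.log(n)
        if bic < best_bic:
            best_order, best_bic = k, bic
    return best_order
```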
Defining a Cancer Dependency Map.
Tsherniak, Aviad; Vazquez, Francisca; Montgomery, Phil G; Weir, Barbara A; Kryukov, Gregory; Cowley, Glenn S; Gill, Stanley; Harrington, William F; Pantel, Sasha; Krill-Burger, John M; Meyers, Robin M; Ali, Levi; Goodale, Amy; Lee, Yenarae; Jiang, Guozhi; Hsiao, Jessica; Gerath, William F J; Howell, Sara; Merkel, Erin; Ghandi, Mahmoud; Garraway, Levi A; Root, David E; Golub, Todd R; Boehm, Jesse S; Hahn, William C
2017-07-27
Most human epithelial tumors harbor numerous alterations, making it difficult to predict which genes are required for tumor survival. To systematically identify cancer dependencies, we analyzed 501 genome-scale loss-of-function screens performed in diverse human cancer cell lines. We developed DEMETER, an analytical framework that segregates on- from off-target effects of RNAi. 769 genes were differentially required in subsets of these cell lines at a threshold of six SDs from the mean. We found predictive models for 426 dependencies (55%) by nonlinear regression modeling considering 66,646 molecular features. Many dependencies fall into a limited number of classes, and unexpectedly, in 82% of models, the top biomarkers were expression based. We demonstrated the basis behind one such predictive model linking hypermethylation of the UBB ubiquitin gene to a dependency on UBC. Together, these observations provide a foundation for a cancer dependency map that facilitates the prioritization of therapeutic targets. Copyright © 2017 Elsevier Inc. All rights reserved.
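The 6-SD rule for calling a differential dependency can be sketched directly. This illustrates the thresholding step only, not the DEMETER pipeline:

```python
import numpy as np

def differentially_required(scores, n_sd=6.0):
    """Flag cell lines whose dependency score for a gene lies more than
    n_sd standard deviations from the mean across lines, echoing the
    6-SD cutoff described above."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std()
    return np.abs(z) > n_sd
```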
NASA Technical Reports Server (NTRS)
Yechout, T. R.; Braman, K. B.
1984-01-01
The development, implementation, and flight test evaluation of a performance modeling technique are described; the technique required only a limited amount of quasi-steady-state flight test data to predict the overall one-g performance characteristics of an aircraft. The concept definition phase of the program included development of: (1) the relationships for defining aerodynamic characteristics from quasi-steady-state maneuvers; (2) a simplified in-flight thrust and airflow prediction technique; (3) a flight test maneuvering sequence which efficiently provided definition of baseline aerodynamic and engine characteristics, including power effects on lift and drag; and (4) the algorithms necessary for cruise and flight trajectory predictions. Implementation of the concept included design of the overall flight test data flow, definition of instrumentation system and ground test requirements, development and verification of all applicable software, and consolidation of the overall requirements in a flight test plan.
Gravitational redshift of galaxies in clusters as predicted by general relativity.
Wojtak, Radosław; Hansen, Steen H; Hjorth, Jens
2011-09-28
The theoretical framework of cosmology is mainly defined by gravity, of which general relativity is the current model. Recent tests of general relativity within the Lambda Cold Dark Matter (ΛCDM) model have found a concordance between predictions and the observations of the growth rate and clustering of the cosmic web. General relativity has not hitherto been tested on cosmological scales independently of the assumptions of the ΛCDM model. Here we report an observation of the gravitational redshift of light coming from galaxies in clusters at the 99 per cent confidence level, based on archival data. Our measurement agrees with the predictions of general relativity and its modification created to explain cosmic acceleration without the need for dark energy (the f(R) theory), but is inconsistent with alternative models designed to avoid the presence of dark matter. © 2011 Macmillan Publishers Limited. All rights reserved
NASA Technical Reports Server (NTRS)
Lowell, C. E.; Deadmore, D. J.; Santoro, G. J.; Kohl, F. J.
1981-01-01
The effects of trace metal impurities in coal-derived liquids on deposition, high temperature corrosion and fouling were examined. Alloys were burner rig tested from 800 to 1100 C and corrosion was evaluated as a function of potential impurities. Actual and doped fuel tests were used to define an empirical life prediction equation. An evaluation of inhibitors to reduce or eliminate accelerated corrosion was made. Barium and strontium were found to limit attack. Intermittent application of the inhibitors or silicon additions were found to be effective techniques for controlling deposition without losing the inhibitor benefits. A computer program was used to predict the dew points and compositions of deposits. These predictions were confirmed in deposition tests. The potential for such deposits to plug cooling holes of turbine airfoils was evaluated. Tests indicated that, while a potential problem exists, it strongly depended on minor impurity variations.
Quantifying Information Gain from Dynamic Downscaling Experiments
NASA Astrophysics Data System (ADS)
Tian, Y.; Peters-Lidard, C. D.
2015-12-01
Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified in the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will also dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information increase from dynamic downscaling experiments, to better understand and appreciate their potentials and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than what can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse-resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark to quantify the information gain from the downscaling experiments, or any other approaches. For a downscaling experiment to show any value, its information content has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.
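One simple reading of such an information threshold uses Shannon entropy: a downscaling run adds value only if it reduces uncertainty about the fine-scale field beyond what the existing sources already provide. A minimal sketch under that assumption (the authors' exact functional is not reproduced here):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(p_before, p_after):
    """Reduction in uncertainty about the fine-scale field; a run adds
    value only if this gain exceeds the threshold set by the already
    available information sources."""
    return entropy_bits(p_before) - entropy_bits(p_after)
```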
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lafontaine Rivera, Jimmy G.; Theisen, Matthew K.; Chen, Po-Wei
The product formation yield (product formed per unit substrate consumed) is often the most important performance indicator in metabolic engineering. Until now, the actual yield could not be predicted, but it could be bounded by its maximum theoretical value. The maximum theoretical yield is calculated by considering the stoichiometry of the pathways and cofactor regeneration involved. Here we found that in many cases, dynamic stability becomes an issue when excessive pathway flux is drawn to a product. This constraint reduces the yield and renders the maximum theoretical yield too loose to be predictive. We propose a more realistic quantity, defined as the kinetically accessible yield (KAY), to predict the maximum accessible yield for a given flux alteration. KAY is either determined by the point of instability, beyond which steady states become unstable and disappear, or a local maximum before becoming unstable. Thus, KAY is the maximum flux that can be redirected for a given metabolic engineering strategy without losing stability. Strictly speaking, calculation of KAY requires complete kinetic information. With limited or no kinetic information, an Ensemble Modeling strategy can be used to determine a range of likely values for KAY, including an average prediction. We first apply the KAY concept to a toy model to demonstrate the principle of kinetic limitations on yield. We then applied the approach to a full-scale E. coli model (193 reactions, 153 metabolites) and successfully predicted isobutanol production: the calculated KAY values are consistent with experimental data for three genotypes previously published.
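Conceptually, KAY can be found by scanning the flux-redirection level and keeping the largest yield whose steady state is still locally stable, i.e., every eigenvalue of the Jacobian has negative real part. A hedged sketch, with `yield_of` and `jacobian_at` as hypothetical callables supplied by a kinetic model:

```python
import numpy as np

def max_stable_yield(yield_of, jacobian_at, alphas):
    """Scan a flux-redirection parameter alpha and keep the largest
    yield whose steady state stays locally stable (every Jacobian
    eigenvalue has negative real part). yield_of and jacobian_at are
    hypothetical model-supplied callables."""
    best = 0.0
    for a in alphas:
        J = jacobian_at(a)
        if np.all(np.linalg.eigvals(J).real < 0):
            best = max(best, yield_of(a))
    return best
```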
van Dijk, Wouter; Tan, Wan; Li, Pei; Guo, Best; Li, Summer; Benedetti, Andrea; Bourbeau, Jean
2015-01-01
The way in which spirometry is interpreted can lead to misdiagnosis of chronic obstructive pulmonary disease (COPD), resulting in inappropriate treatment. We compared the clinical relevance of 2 criteria for defining a low ratio of forced expiratory volume in 1 second to forced vital capacity (FEV1/FVC): the fixed ratio and the lower limit of normal. We analyzed data from the cross-sectional phase of the population-based Canadian Cohort of Obstructive Lung Disease (CanCOLD) study. We determined associations of the spirometric criteria for airflow limitation with patient-reported adverse outcomes, including respiratory symptoms, disability, health status, exacerbations, and cardiovascular disease. Sensitivity analyses were used to explore the impact of age and severity of airflow limitation on these associations. We analyzed data from 4,882 patients aged 40 years and older. The prevalence of airflow limitation was 17% by fixed ratio and 11% by lower limit of normal. Patients classified as having airflow limitation by fixed ratio only had generally small, nonsignificant increases in the odds of adverse outcomes. Patients having airflow limitation based on both fixed ratio and lower limit of normal had larger, significant increases in odds. But the strongest associations were seen for patients who had airflow limitation by both fixed ratio and lower limit of normal and also had a low FEV1, defined as less than 80% of the predicted value. Our results suggest that use of the fixed ratio alone may lead to misdiagnosis of COPD. A diagnosis established by both a low FEV1/FVC (according to fixed ratio and/or lower limit of normal) and a low FEV1 is strongly associated with clinical outcomes. Guidelines should be reconsidered to require both spirometry abnormalities so as to reduce overdiagnosis of COPD. © 2015 Annals of Family Medicine, Inc.
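The two interface definitions compared in the study reduce to simple inequalities on the spirometry values. A minimal sketch (the LLN itself comes from reference equations not reproduced here):

```python
def classify_airflow_limitation(fev1_l, fvc_l, lln_ratio, fev1_pct_pred):
    """Apply the two interface criteria plus the low-FEV1 condition:
    fixed ratio (FEV1/FVC < 0.70), lower limit of normal
    (FEV1/FVC < LLN), and FEV1 < 80% predicted."""
    ratio = fev1_l / fvc_l
    return {
        "fixed_ratio": ratio < 0.70,
        "below_lln": ratio < lln_ratio,
        "low_fev1": fev1_pct_pred < 80.0,
    }
```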
Lamb, John R.; Zhang, Chunsheng; Xie, Tao; Wang, Kai; Zhang, Bin; Hao, Ke; Chudin, Eugene; Fraser, Hunter B.; Millstein, Joshua; Ferguson, Mark; Suver, Christine; Ivanovska, Irena; Scott, Martin; Philippar, Ulrike; Bansal, Dimple; Zhang, Zhan; Burchard, Julja; Smith, Ryan; Greenawalt, Danielle; Cleary, Michele; Derry, Jonathan; Loboda, Andrey; Watters, James; Poon, Ronnie T. P.; Fan, Sheung T.; Yeung, Chun; Lee, Nikki P. Y.; Guinney, Justin; Molony, Cliona; Emilsson, Valur; Buser-Doepner, Carolyn; Zhu, Jun; Friend, Stephen; Mao, Mao; Shaw, Peter M.; Dai, Hongyue; Luk, John M.; Schadt, Eric E.
2011-01-01
Background In hepatocellular carcinoma (HCC) genes predictive of survival have been found in both adjacent normal (AN) and tumor (TU) tissues. The relationships between these two sets of predictive genes and the general process of tumorigenesis and disease progression remains unclear. Methodology/Principal Findings Here we have investigated HCC tumorigenesis by comparing gene expression, DNA copy number variation and survival using ∼250 AN and TU samples representing, respectively, the pre-cancer state, and the result of tumorigenesis. Genes that participate in tumorigenesis were defined using a gene-gene correlation meta-analysis procedure that compared AN versus TU tissues. Genes predictive of survival in AN (AN-survival genes) were found to be enriched in the differential gene-gene correlation gene set indicating that they directly participate in the process of tumorigenesis. Additionally the AN-survival genes were mostly not predictive after tumorigenesis in TU tissue and this transition was associated with and could largely be explained by the effect of somatic DNA copy number variation (sCNV) in cis and in trans. The data was consistent with the variance of AN-survival genes being rate-limiting steps in tumorigenesis and this was confirmed using a treatment that promotes HCC tumorigenesis that selectively altered AN-survival genes and genes differentially correlated between AN and TU. Conclusions/Significance This suggests that the process of tumor evolution involves rate-limiting steps related to the background from which the tumor evolved where these were frequently predictive of clinical outcome. Additionally treatments that alter the likelihood of tumorigenesis occurring may act by altering AN-survival genes, suggesting that the process can be manipulated. Further sCNV explains a substantial fraction of tumor specific expression and may therefore be a causal driver of tumor evolution in HCC and perhaps many solid tumor types. PMID:21750698
CHEMICAL PRIORITIZATION FOR DEVELOPMENTAL ...
Defining a predictive model of developmental toxicity from in vitro and high-throughput screening (HTS) assays can be limited by the availability of developmental defects data. ToxRefDB (www.epa.gov/ncct/todrefdb) was built from animal studies on data-rich environmental chemicals, and has been used as an anchor for predictive modeling of ToxCast™ data. Scaling to thousands of untested chemicals requires another approach. ToxPlorer™ was developed as a tool to query and extract specific facts about defined biological entities from the open scientific literature and to coherently synthesize relevant knowledge about relationships, pathways and processes in toxicity. Here, we investigated the specific application of ToxPlorer to weighting HTS assay targets for relevance to developmental defects as defined in the literature. First, we systematically analyzed 88,193 PubMed abstracts selected by bulk query using harmonized terminology for 862 developmental endpoints (www.devtox.net) and 364,334 dictionary term entities in our VT-KB (virtual tissues knowledgebase). We specifically focused on entities corresponding to genes/proteins mapped across >500 ToxCast HTS assays. The 88,193 devtox abstracts mentioned 244 gene/protein entities in an aggregated total of ~8,000 occurrences. Each of the 244 assays was scored and weighted by the number of devtox articles and relevance to developmental processes. This score was used as a feature for chemical prioritization by Toxic
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method for setting up the drying regime, based on the theory of moisture migration during drying, lies in the fact that it relies on a large number of isothermal experiments. In addition, each isothermal experiment requires the use of different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper inputs as well as the output of the "black box" which will be used in the Box-Wilkinson orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model represents the time interval between any two chosen characteristic points presented on the Deff versus t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the region bounded by the lower and upper limiting values.
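The final predictive equation described here is a response surface in the three drying-air factors. A generic least-squares sketch of fitting such a second-order surface (a stand-in, not the paper's fitted coefficients):

```python
import numpy as np

def fit_response_surface(X, t):
    """Least-squares fit of a second-order response surface
    t = b0 + sum(b_i x_i) + sum(b_ij x_i x_j) in the three factors
    (temperature, humidity, velocity). X has shape (n_runs, 3)."""
    X = np.asarray(X, dtype=float)
    x1, x2, x3 = X.T
    A = np.column_stack([np.ones_like(x1), x1, x2, x3,
                         x1 * x2, x1 * x3, x2 * x3,
                         x1 ** 2, x2 ** 2, x3 ** 2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(t, dtype=float), rcond=None)
    return coeffs
```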
NASA Astrophysics Data System (ADS)
Diak, Bradley James
Forming limit predictions that incorporate crystal plasticity models still cannot adequately predict the deformation performance of polycrystalline materials. The reason for the limitation in predictive power is that the constitutive equations used to connect to the atomic scale assume an affine deformation, which does not have a physical basis but gives general trends. This study was undertaken to better elucidate the microplastic process and how it manifests itself phenomenologically. In this endeavour, the strain rate sensitivity of the flow stress was identified as one parameter that greatly affects the forming limit. Hence, an attempt was made to properly define and measure the strain rate sensitivity according to the dictates of thermodynamics. The thermodynamics of systems can delineate the evolution of the state of a material if the state variables can be characterized and measured. Inevitably, these variables must be determined at constant structure. Using the theory of thermally activated flow, where the movement of dislocations past obstacles is the rate controlling step, the mechanical testing techniques have been designed to statistically assess the dynamic evolution of the microstructure by controlling the temperature, T, and strain rate, ε̇, and measuring the stress, σ, mean slip distance, λ, and mean slip velocity, λ̇, to define σ = f(λ, λ̇, T). The apparent activation volume, which characterizes the obstacle resistance of strain centres, is determined at constant structure by applying the strain rate change technique. Strain rate sensitivity data are compared to the Cottrell-Stokes relation, and the Haasen plot is used to separate the different contributions to the flow stress. Using these precise measurements at interrupted segments of strain, the evolution of a microstructure during plastic flow can be monitored. By this examination of different rate controlling obstacles, the microstructural parameters which correlate to formability were assessed. Detailed experimental evidence is given for different aluminum alloys containing mainly fast or slow diffusing solute species, transition precipitates, dispersed particles, and/or dislocation debris. These systems of Al-Fe, Al-Cr, Al-Cu, Al-Mg, and Al-Mg-Si all displayed unique dislocation-defect interactions which could be elucidated by the current theory of thermally activated flow.
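In the theory of thermally activated flow, the apparent activation volume measured by a strain-rate change test at constant structure follows the standard relation V* = kT·Δln(ε̇)/Δσ. A minimal sketch of that relation (illustrative of the standard formula, not the thesis's full procedure):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def apparent_activation_volume(rate1, rate2, delta_sigma, temperature):
    """Apparent activation volume from a strain-rate change at constant
    structure: V* = kT * ln(rate2 / rate1) / delta_sigma, with the
    stress change in Pa and temperature in K (result in m^3)."""
    return K_B * temperature * np.log(rate2 / rate1) / delta_sigma
```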
Defining the Lower Limit of a "Critical Bone Defect" in Open Diaphyseal Tibial Fractures.
Haines, Nikkole M; Lack, William D; Seymour, Rachel B; Bosse, Michael J
2016-05-01
Objectives: To determine healing outcomes of open diaphyseal tibial shaft fractures treated with reamed intramedullary nailing (IMN) with a bone gap of 10-50 mm on ≥50% of the cortical circumference and to better define a "critical bone defect" based on healing outcome. Design: Retrospective cohort study. Setting: Level-1 trauma center. Patients: Forty patients, age 18-65, with open diaphyseal tibial fractures with a bone gap of 10-50 mm on ≥50% of the circumference, as measured on standard anteroposterior and lateral postoperative radiographs, treated with IMN. Intervention: IMN of an open diaphyseal tibial fracture with a bone gap. Main Outcome Measurements: Healing outcome, union or nonunion. Results: Forty patients were analyzed. Twenty-one (52.5%) went on to nonunion and nineteen (47.5%) achieved union. Radiographic apparent bone gap (RABG) and infection were the only 2 covariates predicting nonunion outcome (P = 0.046 for infection). The RABG was determined by measuring the bone gap on each cortex and averaging over 4 cortices. Fractures achieving union had a RABG of 12 ± 1 mm versus 20 ± 2 mm in those going on to nonunion (P < 0.01). This remained significant when patients with infection were removed. Receiver operating characteristic analysis demonstrated that RABG was predictive of outcome (area under the curve of 0.79). A RABG of 25 mm was the statistically optimal threshold for prediction of healing outcome. Conclusions: Patients with open diaphyseal tibial fractures treated with IMN and a <25 mm RABG have a reasonable probability of achieving union without additional intervention, whereas those with larger gaps have a higher probability of nonunion. Research investigating interventions for RABGs should use a predictive threshold for defining a critical bone defect that is associated with greater than 50% risk of nonunion without supplementary treatment. Level of Evidence: Prognostic Level III. See Instructions for Authors for a complete description of levels of evidence.
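The RABG and its threshold are straightforward to compute once the four cortical gaps are measured. A minimal sketch (the 25 mm default reflects the reported optimal threshold):

```python
import numpy as np

def radiographic_apparent_bone_gap(cortical_gaps_mm, threshold_mm=25.0):
    """RABG: mean of the gaps measured on the four cortices (two per
    radiographic view). Gaps at or above the threshold carried a high
    predicted risk of nonunion in this cohort."""
    rabg = float(np.mean(cortical_gaps_mm))
    return rabg, rabg >= threshold_mm
```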
NASA transmission research and its probable effects on helicopter transmission design
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.; Coy, J. J.; Townsend, D. P.
1983-01-01
Transmissions studied for application to helicopters in addition to the more conventional geared transmissions include hybrid (traction/gear), bearingless planetary, and split torque transmissions. Research is being performed to establish the validity of analysis and computer codes developed to predict the performance, efficiency, life, and reliability of these transmissions. Results of this research should provide the transmission designer with analytical tools to design for minimum weight and noise with maximum life and efficiency. In addition, the advantages and limitations of drive systems as well as the more conventional systems will be defined.
NASA transmission research and its probable effects on helicopter transmission design
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.; Coy, J. J.; Townsend, D. P.
1984-01-01
Transmissions studied for application to helicopters in addition to the more conventional geared transmissions include hybrid (traction/gear), bearingless planetary, and split torque transmissions. Research is being performed to establish the validity of analysis and computer codes developed to predict the performance, efficiency, life, and reliability of these transmissions. Results of this research should provide the transmission designer with analytical tools to design for minimum weight and noise with maximum life and efficiency. In addition, the advantages and limitations of drive systems as well as the more conventional systems will be defined.
Distribution of indoor radon concentrations in Pennsylvania, 1990-2007
Gross, Eliza L.
2013-01-01
Median indoor radon concentrations aggregated according to geologic units and hydrogeologic settings are useful for drawing general conclusions about the occurrence of indoor radon in specific geologic units and hydrogeologic settings, but the associated data and maps have limitations. The aggregated indoor radon data have testing and spatial accuracy limitations due to lack of available information regarding testing conditions and the imprecision of geocoded test locations. In addition, the associated data describing geologic units and hydrogeologic settings have spatial and interpretation accuracy limitations, which are a result of using statewide data to define conditions at test locations and geologic data that represent a broad interpretation of geologic units across the State. As a result, indoor air radon concentration distributions are not proposed for use in predicting individual concentrations at specific sites nor for use as a decision-making tool for property owners to decide whether to test for indoor radon concentrations at specific property locations.
How Structure Defines Affinity in Protein-Protein Interactions
Erijman, Ariel; Rosenthal, Eran; Shifman, Julia M.
2014-01-01
Protein-protein interactions (PPI) in nature are conveyed by a multitude of binding modes involving various surfaces, secondary structure elements and intermolecular interactions. This diversity results in PPI binding affinities that span more than nine orders of magnitude. Several early studies attempted to correlate PPI binding affinities to various structure-derived features with limited success. The growing number of high-resolution structures, the appearance of more precise methods for measuring binding affinities and the development of new computational algorithms enable more thorough investigations in this direction. Here, we use a large dataset of PPI structures with documented binding affinities to calculate a number of structure-based features that could potentially define binding energetics. We explore how well each calculated biophysical feature alone correlates with binding affinity and determine the features that could be used to distinguish between high-, medium- and low-affinity PPIs. Furthermore, we test how various combinations of features could be applied to predict binding affinity and observe a slow improvement in correlation as more features are incorporated into the equation. In addition, we observe a considerable improvement in predictions if we exclude from our analysis low-resolution and NMR structures, revealing the importance of capturing exact intermolecular interactions in our calculations. Our analysis should facilitate prediction of new interactions on the genome scale, better characterization of signaling networks and design of novel binding partners for various target proteins. PMID:25329579
Dynamics of Droplet Extinction in Slow Convective Flows
NASA Technical Reports Server (NTRS)
Nayagam, V.; Haggard, J. B., Jr.; Williams, F. A.
1999-01-01
The classical model for droplet combustion predicts that the square of the droplet diameter decreases linearly with time. It also predicts that a droplet of any size will burn to completion over a period of time. However, it has been known for some time that under certain conditions flames surrounding a droplet, in a quiescent environment, can extinguish because of insufficient residence time for the chemistry to proceed to completion. This type of extinction, which occurs for smaller droplets, has been studied extensively in the past. Large droplets, on the other hand, exhibit a different type of extinction, where excessive radiative heat loss from the flame zone leads to extinction. This mode of "radiative extinction" was theoretically predicted for droplet burning by Chao et al. and was observed in recent space experiments in a quiescent environment. Thus far, the fundamental flammability limit prescribed by radiative extinction of liquid droplets has been measured only under quiescent environmental conditions. In many space platforms, however, ventilation systems produce small convective flows, and understanding of the influences of this convection on the extinction process will help better define the radiative extinction flammability boundaries. Boundaries defined by experiments and captured using theoretical models could provide enhanced fire safety margins in space exploration. The planned investigation of convective effects will also help in interpretations of burning-rate data obtained during free-floated droplet combustion experiments with small residual velocities.
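The classical d-squared law cited here gives a closed-form burn-out time: with d(t)² = d0² − Kt, an isolated droplet of initial diameter d0 burns to completion at t = d0²/K. A one-line sketch (units are illustrative; radiative extinction ends burning earlier for large droplets):

```python
def d2_law_burnout_time(d0_mm, k_mm2_per_s):
    """Classical d-squared law: d(t)^2 = d0^2 - K*t, so an isolated
    droplet burns out at t = d0^2 / K, where K is the burning-rate
    constant. Radiative extinction can end burning before this time."""
    return d0_mm ** 2 / k_mm2_per_s
```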
Li, Xiaohong; Blount, Patricia L; Vaughan, Thomas L; Reid, Brian J
2011-02-01
Aside from primary prevention, early detection remains the most effective way to decrease mortality associated with the majority of solid cancers. Previous cancer screening models are largely based on classification of at-risk populations into three conceptually defined groups (normal, cancer without symptoms, and cancer with symptoms). Unfortunately, this approach has achieved limited successes in reducing cancer mortality. With advances in molecular biology and genomic technologies, many candidate somatic genetic and epigenetic "biomarkers" have been identified as potential predictors of cancer risk. However, none have yet been validated as robust predictors of progression to cancer or shown to reduce cancer mortality. In this Perspective, we first define the necessary and sufficient conditions for precise prediction of future cancer development and early cancer detection within a simple physical model framework. We then evaluate cancer risk prediction and early detection from a dynamic clonal evolution point of view, examining the implications of dynamic clonal evolution of biomarkers and the application of clonal evolution for cancer risk management in clinical practice. Finally, we propose a framework to guide future collaborative research between mathematical modelers and biomarker researchers to design studies to investigate and model dynamic clonal evolution. This approach will allow optimization of available resources for cancer control and intervention timing based on molecular biomarkers in predicting cancer among various risk subsets that dynamically evolve over time.
Mind the Gap: Bridging economic and naturalistic risk-taking with cognitive neuroscience
Schonberg, Tom; Fox, Craig R.; Poldrack, Russell A.
2010-01-01
Economists define risk in terms of variability of possible outcomes whereas clinicians and laypeople generally view risk as exposure to possible loss or harm. Neuroeconomic studies using relatively simple behavioral tasks have identified a network of brain regions that respond to economic risk, but these studies have had limited success predicting naturalistic risk-taking. In contrast, more complex behavioral tasks developed by clinicians (e.g., Balloon Analogue Risk Task and Iowa Gambling Task) correlate with naturalistic risk-taking but resist decomposition into distinct cognitive constructs. We propose that to bridge this gap and better understand neural substrates of naturalistic risk-taking, new tasks are needed that: (1) are decomposable into basic cognitive/economic constructs; (2) predict naturalistic risk-taking; and (3) engender dynamic, affective engagement. PMID:21130018
Methods to determine the growth domain in a multidimensional environmental space.
Le Marc, Yvan; Pin, Carmen; Baranyi, József
2005-04-15
Data from a database on microbial responses to the food environment (ComBase, see www.combase.cc) were used to study the boundary of growth of several pathogens (Aeromonas hydrophila, Escherichia coli, Listeria monocytogenes, Yersinia enterocolitica). Two methods were used to evaluate the growth/no growth interface. The first is an application of the Minimum Convex Polyhedron (MCP) introduced by Baranyi et al. [Baranyi, J., Ross, T., McMeekin, T., Roberts, T.A., 1996. The effect of parameterisation on the performance of empirical models used in Predictive Microbiology. Food Microbiol. 13, 83-91.]. The second method applies logistic regression to define the boundary of growth. The combination of these two different techniques can be a useful tool to handle the problem of extrapolation of predictive models at the growth limits.
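The second method can be sketched as ordinary logistic regression on environmental factors, with the fitted P = 0.5 contour taken as the growth/no-growth interface. A minimal gradient-ascent sketch, assuming a design matrix of environmental variables (illustrative, not the authors' fitted model):

```python
import numpy as np

def fit_growth_boundary(X, grew, lr=0.1, iters=5000):
    """Plain logistic regression P(growth | environment), fitted by
    gradient ascent on the log-likelihood; the P = 0.5 contour of the
    fitted model defines the growth/no-growth interface. X holds
    environmental factors (e.g., temperature, pH, water activity)."""
    X1 = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    y = np.asarray(grew, dtype=float)  # 1 = growth observed, 0 = no growth
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w += lr * X1.T @ (y - p) / len(y)
    return w
```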
Numerical and Experimental Validation of a New Damage Initiation Criterion
NASA Astrophysics Data System (ADS)
Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.
2017-09-01
Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. Seven of the nine fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are discussed briefly.
Optimal firing rate estimation
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
2001-01-01
We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise timing and average rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit in the order of 1 bit of stimulus-related information per spike.
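The Gaussian smoothing filter evaluated here is a kernel estimate of the instantaneous firing rate. A minimal sketch; per the result above, the bandwidth would be chosen so the filter is roughly matched to the mean firing rate:

```python
import numpy as np

def smoothed_rate(spike_times, t_grid, bandwidth):
    """Gaussian-kernel estimate of instantaneous firing rate
    (spikes per unit time) evaluated at each point of t_grid."""
    dt = np.asarray(t_grid)[:, None] - np.asarray(spike_times)[None, :]
    kernel = np.exp(-0.5 * (dt / bandwidth) ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernel.sum(axis=1)
```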
Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.
Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal
2015-08-28
We report a new limitation on the ability of physical systems to perform computation-one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system-such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.
Learning and cognitive styles in web-based learning: theory, evidence, and application.
Cook, David A
2005-03-01
Cognitive and learning styles (CLS) have long been investigated as a basis to adapt instruction and enhance learning. Web-based learning (WBL) can reach large, heterogeneous audiences, and adaptation to CLS may increase its effectiveness. Adaptation is only useful if some learners (with a defined trait) do better with one method and other learners (with a complementary trait) do better with another method (aptitude-treatment interaction). A comprehensive search of health professions education literature found 12 articles on CLS in computer-assisted learning and WBL. Because so few reports were found, research from non-medical education was also included. Among all the reports, four CLS predominated. Each CLS construct was used to predict relationships between CLS and WBL. Evidence was then reviewed to support or refute these predictions. The wholist-analytic construct shows consistent aptitude-treatment interactions consonant with predictions (wholists need structure, a broad-before-deep approach, and social interaction, while analytics need less structure and a deep-before-broad approach). Limited evidence for the active-reflective construct suggests aptitude-treatment interaction, with active learners doing better with interactive learning and reflective learners doing better with methods to promote reflection. As predicted, no consistent interaction between the concrete-abstract construct and computer format was found, but one study suggests that there is interaction with instructional method. Contrary to predictions, no interaction was found for the verbal-imager construct. Teachers developing WBL activities should consider assessing and adapting to accommodate learners defined by the wholist-analytic and active-reflective constructs. Other adaptations should be considered experimental. Further WBL research could clarify the feasibility and effectiveness of assessing and adapting to CLS.
Design prediction for long term stress rupture service of composite pressure vessels
NASA Technical Reports Server (NTRS)
Robinson, Ernest Y.
1992-01-01
Extensive stress rupture studies on glass composites and Kevlar composites were conducted by the Lawrence Radiation Laboratory beginning in the late 1960's and extending to about 8 years in some cases. Some of the data from these studies published over the years were incomplete or were tainted by spurious failures, such as grip slippage. Updated data sets were defined for both fiberglass and Kevlar composite strand test specimens. These updated data are analyzed in this report by a convenient form of the bivariate Weibull distribution, to establish a consistent set of design prediction charts that may be used as a conservative basis for predicting the stress rupture life of composite pressure vessels. The updated glass composite data exhibit an invariant Weibull modulus with lifetime. The data are analyzed in terms of homologous service load (referenced to the observed median strength). The equations relating life, homologous load, and probability are given, and corresponding design prediction charts are presented. A similar approach is taken for Kevlar composites, where the updated strand data do show a turndown tendency at long life accompanied by a corresponding change (increase) of the Weibull modulus. The turndown characteristic is not present in stress rupture test data of Kevlar pressure vessels. A modification of the stress rupture equations is presented to incorporate a latent, but limited, strength drop, and design prediction charts are presented that incorporate such behavior. The methods presented utilize Cartesian plots of the probability distributions (which are a more natural display for the design engineer), based on median normalized data that are independent of statistical parameters and are readily defined for any set of test data.
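One slice of such a design chart follows from inverting a two-parameter Weibull survivor function: given a Weibull modulus and a characteristic life at a fixed homologous load, the design life at a target reliability is closed-form. A hedged sketch (the bivariate load-life coupling in the report enters through the scale parameter, which is not modeled here):

```python
import numpy as np

def weibull_reliability(t, scale_life, modulus):
    """Two-parameter Weibull survival probability at time t."""
    return np.exp(-(t / scale_life) ** modulus)

def life_at_reliability(reliability, scale_life, modulus):
    """Invert the survivor function to get the design life at a target
    survival probability; load dependence would enter through
    scale_life in a bivariate load-life model."""
    return scale_life * (-np.log(reliability)) ** (1.0 / modulus)
```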
Can terrestrial diversity be predicted from soil morphology?
NASA Astrophysics Data System (ADS)
Fournier, Bertrand; Guenat, Claire; Mitchell, Edward
2010-05-01
Restoration ecology is a young discipline and, as a consequence, many concepts and methods are not yet mature. A good example of this is the case of floodplains, which have been intensively embanked, dammed or otherwise engineered in industrialized countries, but are now increasingly being restored, often at high cost. There is, however, much confusion over the goals of floodplain restoration projects and the methods, criteria, and indicators to assess their success. Nature practitioners are interested in knowing how many and which variables are needed for efficient monitoring and/or assessment. Although many restoration success assessment methods have been developed to meet this need, most indicators currently used are complicated and expensive or provide only spatially or temporally limited information on these complex systems. Perhaps as a result, no standard method has yet been defined and post-restoration monitoring is not systematically done. Optimizing indicators would help improve the credibility of restoration projects and would thus help to convince stakeholders and managers to support monitoring programs. As a result, defining the predictive power of restoration success indicators, as well as selecting the most pertinent variables among the ones currently used, is of major importance for a sustainable and adaptive management of our river ecosystems. Soil characteristics determine key functions (e.g. decomposition) and ecosystem structure (e.g. vegetation) in terrestrial ecosystems. They therefore have a high potential information value that is, however, generally not considered in floodplain restoration assessment. In order to explore this potential, we recently developed a new synthetic indicator based on soil morphology for the evaluation of river restoration success. Following Hutchinson's ecological niche concept, we hypothesised that terrestrial biodiversity can be predicted based on soil characteristics, but that these characteristics do not perform equivalently for all taxonomic groups. In this study, we explored the potential of soil morphology as a proxy for biodiversity. We used results of previous research aimed at developing soil-morphology-based indicators for floodplain restoration assessment, as well as surveys of vegetation, bacteria, earthworms, and terrestrial arthropods from the same site (River Thur, CCES project RECORD: http://www.swiss-experiment.ch/index.php/Record:Home), to analyse the relationships among soil morphology and biodiversity variables and to assess the efficiency of this river widening. Furthermore, we defined the best performing predictive soil variables for each taxon. Soil morphology indicators performed well in predicting terrestrial arthropod richness, supporting the idea that this relatively simple indicator may represent a useful tool for the rapid assessment of floodplain restoration success. However, the indicators performed variably for the other taxa, highlighting the method's limitations and giving clues for future improvements. We conclude by discussing the potential of soil morphology in conservation biology and its possible applications for nature practitioners.
The PD-1 pathway as a therapeutic target to overcome immune escape mechanisms in cancer.
Henick, Brian S; Herbst, Roy S; Goldberg, Sarah B
2014-12-01
Immunotherapy is emerging as a powerful approach in cancer treatment. Preclinical data predicted the antineoplastic effects seen in clinical trials of programmed death-1 (PD-1) pathway inhibitors, as well as their observed toxicities. The results of early clinical trials are extraordinarily promising in several cancer types and have shaped the direction of ongoing and future studies. This review describes the biological rationale for targeting the PD-1 pathway with monoclonal antibodies for the treatment of cancer as a context for examining the results of early clinical trials. It also surveys the landscape of ongoing clinical trials and discusses their anticipated strengths and limitations. PD-1 pathway inhibition represents a new frontier in cancer immunotherapy, which shows clear evidence of activity in various tumor types including NSCLC and melanoma. Ongoing and upcoming trials will examine optimal combinations of these agents, which should further define their role across tumor types. Current limitations include the absence of a reliable companion diagnostic to predict likely responders, as well as lack of data in early-stage cancer when treatment has the potential to increase cure rates.
Exact calculation of distributions on integers, with application to sequence alignment.
Newberg, Lee A; Lawrence, Charles E
2009-01-01
Computational biology is replete with high-dimensional discrete prediction and inference problems. Dynamic programming recursions can be applied to several of the most important of these, including sequence alignment, RNA secondary-structure prediction, phylogenetic inference, and motif finding. In these problems, attention is frequently focused on some scalar quantity of interest, a score, such as an alignment score or the free energy of an RNA secondary structure. In many cases, score is naturally defined on integers, such as a count of the number of pairing differences between two sequence alignments, or else an integer score has been adopted for computational reasons, such as in the test of significance of motif scores. The probability distribution of the score under an appropriate probabilistic model is of interest, such as in tests of significance of motif scores, or in calculation of Bayesian confidence limits around an alignment. Here we present three algorithms for calculating the exact distribution of a score of this type; then, in the context of pairwise local sequence alignments, we apply the approach so as to find the alignment score distribution and Bayesian confidence limits.
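A minimal sketch of the kind of exact computation described above, assuming i.i.d. per-position integer scores (the authors' alignment recursions are richer than this): the exact distribution of a total score can be built by repeated convolution, after which tail probabilities and confidence limits are exact rather than asymptotic.

```python
# Exact distribution of an integer-valued total score by dynamic-programming
# convolution. Per-position score model is a toy assumption, not the paper's.
from collections import defaultdict

def exact_score_distribution(per_position, length):
    """per_position: dict {integer score: probability} for one position.
    Returns dict {total score: exact probability} over `length` positions."""
    dist = {0: 1.0}
    for _ in range(length):
        nxt = defaultdict(float)
        for total, p in dist.items():
            for s, q in per_position.items():
                nxt[total + s] += p * q   # convolve one more position
        dist = dict(nxt)
    return dist

# Example: match (+1, prob 0.25) vs mismatch (-1, prob 0.75) over 10 positions
dist = exact_score_distribution({+1: 0.25, -1: 0.75}, 10)
tail = sum(p for s, p in dist.items() if s >= 2)   # exact P(score >= 2)
```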
Brockmeyer, Matthias; Schmitt, Cornelia; Haupert, Alexander; Kohn, Dieter; Lorbach, Olaf
2017-12-01
The reliable diagnosis of partial-thickness tears of the rotator cuff is still elusive in clinical practice. Therefore, the purpose of the study was to determine the diagnostic accuracy of MR imaging and clinical tests for detecting partial-thickness tears of the rotator cuff, as well as of the combination of these parameters. 334 consecutive shoulder arthroscopies for rotator cuff pathologies performed between 2010 and 2012 were analyzed retrospectively for the findings of common clinical signs of rotator cuff lesions and preoperative MR imaging. These were compared with the intraoperative arthroscopic findings as the "gold standard". The reports of the MR imaging were evaluated with regard to the integrity of the rotator cuff. The Ellman classification was used to define partial-thickness tears of the rotator cuff in accordance with the arthroscopic findings. Descriptive statistics, sensitivity, specificity, and positive and negative predictive values were calculated. MR imaging showed 80 partial-thickness and 70 full-thickness tears of the rotator cuff. The arthroscopic examination confirmed 64 partial-thickness tears, of which 52 needed debridement or refixation of the rotator cuff. Sensitivity for MR imaging to identify partial-thickness tears was 51.6%, specificity 77.2%, positive predictive value 41.3% and negative predictive value 83.7%. For the Jobe test, sensitivity was 64.1%, specificity 43.2%, positive predictive value 25.9% and negative predictive value 79.5%. Sensitivity for the impingement sign was 76.7%, specificity 46.6%, positive predictive value 30.8% and negative predictive value 86.5%. For the combination of MR imaging, Jobe test and impingement sign, sensitivity was 46.9%, specificity 85.4%, positive predictive value 50% and negative predictive value 83.8%. The diagnostic accuracy of MR imaging and clinical tests (Jobe test and impingement sign) alone is limited for detecting partial-thickness tears of the rotator cuff. Additionally, the combination of MR imaging and clinical tests does not improve diagnostic accuracy. Level II, Diagnostic study.
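The four diagnostic metrics quoted above all follow from a 2x2 confusion matrix against the arthroscopic gold standard. A minimal sketch with hypothetical counts (not the study's raw tabulation):

```python
# Standard diagnostic-accuracy metrics from true/false positives/negatives.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for illustration only
print(diagnostic_metrics(tp=33, fp=47, fn=31, tn=159))
```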
Can We Predict Patient Wait Time?
Pianykh, Oleg S; Rosenthal, Daniel I
2015-10-01
The importance of patient wait-time management and predictability can hardly be overestimated: For most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen as derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work. Moreover, these models proved to be both accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on the monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
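A minimal sketch of the line-size idea, assuming an ordinary-least-squares fit on synthetic queue data (the paper's exact model form and chosen parameters are not reproduced here):

```python
# Regress wait time on the number of patients ahead in the queue.
import numpy as np

rng = np.random.default_rng(0)
line_size = rng.integers(0, 20, size=200)                # patients ahead in queue
wait_min = 5 + 4.2 * line_size + rng.normal(0, 5, 200)   # synthetic wait times [min]

# Fit wait = a + b * line_size
b, a = np.polyfit(line_size, wait_min, 1)
predicted = a + b * 7   # expected wait with 7 patients ahead
```

The appeal of such a model is that the queue length is always available in the information system, so the prediction can be refreshed and displayed in real time at negligible computational cost.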
Horobin, R W; Stockert, J C; Rashid-Doubell, F
2015-05-01
We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes that enable selective staining also is provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.
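As an illustration of how such decision rules operate in parameter space, here is a minimal sketch with hypothetical threshold values and rule boundaries (these are not Horobin's published rule values):

```python
# QSAR-style decision rule: assign a probe to a membrane target from simple
# structure parameters such as lipophilicity (log P) and electric charge Z.
# Thresholds below are invented for illustration.
def predict_target(log_p, charge):
    if charge > 0 and 0 < log_p < 5:
        return "mitochondria"            # cationic, moderately lipophilic
    if charge == 0 and log_p > 5:
        return "generic biomembranes"    # highly lipophilic neutral probe
    return "outside modeled parameter space"

print(predict_target(log_p=3.2, charge=+1))
```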
NASA Astrophysics Data System (ADS)
Chakraborty, Prodyut R.; Hiremath, Kirankumar R.; Sharma, Manvendra
2017-02-01
The evaporation rate of water is strongly influenced by an energy barrier arising from molecular collisions and by heat transfer limitations. The evaporation coefficient, defined as the ratio of the experimentally measured evaporation rate to the maximum possible theoretical limit, has reported values that conflict over three orders of magnitude. In the present work, a semi-analytical transient heat diffusion model of droplet evaporation is developed that accounts for the change in droplet size due to evaporation from its surface when the droplet is injected into vacuum. The effect of droplet size reduction due to evaporation on the cooling rate is found to be negligible. However, the evaporation coefficient is found to approach the theoretical limit of unity when the droplet radius is smaller than the mean free path of vapor molecules at the droplet surface, contrary to reported theoretical predictions. The evaporation coefficient reduces rapidly when the droplet radius is larger than the mean free path of the evaporating molecules, confirming the molecular collision barrier to the evaporation rate. The trend of evaporation coefficient with droplet size predicted by the proposed model will facilitate obtaining a functional relation between evaporation coefficient and droplet size, and can be used for benchmarking the interaction between multiple droplets during evaporation in vacuum.
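For context, the theoretical maximum rate in the definition above is conventionally the Hertz-Knudsen kinetic-theory flux; a minimal sketch assuming that standard relation (this is an assumption here, not the paper's model):

```python
# Hertz-Knudsen maximum evaporation flux and the evaporation coefficient.
import math

def hertz_knudsen_max_flux(p_sat, T, m):
    """Maximum kinetic-theory evaporation mass flux [kg m^-2 s^-1].
    p_sat: saturation vapor pressure [Pa]; T: surface temperature [K];
    m: molecular mass [kg]."""
    k_B = 1.380649e-23   # Boltzmann constant [J/K]
    return p_sat * math.sqrt(m / (2.0 * math.pi * k_B * T))

m_h2o = 18.015e-3 / 6.02214076e23          # mass of one water molecule [kg]
j_max = hertz_knudsen_max_flux(p_sat=991.0, T=280.0, m=m_h2o)
j_measured = 0.3 * j_max                   # hypothetical measurement
evap_coeff = j_measured / j_max            # <= 1 by definition
```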
Guadagno, Carmela R; Ewers, Brent E; Speckman, Heather N; Aston, Timothy Llewellyn; Huhn, Bridger J; DeVore, Stanley B; Ladwig, Joshua T; Strawn, Rachel N; Weinig, Cynthia
2017-09-01
Climate models predict widespread increases in both drought intensity and duration in the next decades. Although water deficiency is a significant determinant of plant survival, limited understanding of plant responses to extreme drought impedes forecasts of both forest and crop productivity under increasing aridity. Drought induces a suite of physiological responses; however, we lack an accurate mechanistic description of plant response to lethal drought that would improve predictive understanding of mortality under altered climate conditions. Here, proxies for leaf cellular damage, chlorophyll a fluorescence, and electrolyte leakage were directly associated with failure to recover from drought upon rewatering in Brassica rapa (genotype R500) and thus define the exact timing of drought-induced death. We validated our results using a second genotype (imb211) that differs substantially in life history traits. Our study demonstrates that whereas changes in carbon dynamics and water transport are critical indicators of drought stress, they can be unrelated to visible metrics of mortality, i.e. lack of meristematic activity and regrowth. In contrast, membrane failure at the cellular scale is the most proximate cause of death. This hypothesis was corroborated in two gymnosperms (Picea engelmannii and Pinus contorta) that experienced lethal water stress in the field and in laboratory conditions. We suggest that measurement of chlorophyll a fluorescence can be used to operationally define plant death arising from drought, and improved plant characterization can enhance surface model predictions of drought mortality and its consequences to ecosystem services at a global scale. © 2017 American Society of Plant Biologists. All Rights Reserved.
Kinetically accessible yield (KAY) for redirection of metabolism to produce exo-metabolites
Lafontaine Rivera, Jimmy G.; Theisen, Matthew K.; Chen, Po-Wei; ...
2017-04-05
The product formation yield (product formed per unit substrate consumed) is often the most important performance indicator in metabolic engineering. Until now, the actual yield cannot be predicted, but it can be bounded by its maximum theoretical value. The maximum theoretical yield is calculated by considering the stoichiometry of the pathways and cofactor regeneration involved. Here we find that in many cases, dynamic stability becomes an issue when excessive pathway flux is drawn to a product. This constraint reduces the yield and renders the maximum theoretical yield too loose to be predictive. We propose a more realistic quantity, defined as the kinetically accessible yield (KAY), to predict the maximum accessible yield for a given flux alteration. KAY is either determined by the point of instability, beyond which steady states become unstable and disappear, or a local maximum before becoming unstable. Thus, KAY is the maximum flux that can be redirected for a given metabolic engineering strategy without losing stability. Strictly speaking, calculation of KAY requires complete kinetic information. With limited or no kinetic information, an Ensemble Modeling strategy can be used to determine a range of likely values for KAY, including an average prediction. We first apply the KAY concept to a toy model to demonstrate the principle of kinetic limitations on yield. We then used a full-scale E. coli model (193 reactions, 153 metabolites), and the approach was successful in predicting isobutanol production in E. coli: the calculated KAY values are consistent with previously published experimental data for three genotypes.
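A toy sketch of the KAY idea under stated assumptions (my own one-variable kinetics, not the paper's E. coli model): sweep the flux redirected to product, keep only redirections whose steady state still exists, and record the best accessible yield. In this construction the steady states disappear at a saddle-node, which plays the role of the "point of instability" described above.

```python
# Toy KAY scan: autocatalytic, saturable uptake minus maintenance minus an
# engineered product drain k*x. For large k the steady states vanish.
import numpy as np

def f(x, k):
    return 2.0 * x / (1.0 + x) - 0.1 - k * x   # dx/dt for intermediate x

x_grid = np.linspace(1e-4, 50.0, 100000)
kay = 0.0
for k in np.linspace(0.05, 2.0, 400):
    fx = f(x_grid, k)
    sign_changes = np.where(np.diff(np.sign(fx)) != 0)[0]
    if sign_changes.size == 0:
        break                                   # steady states vanished (saddle-node)
    x_stable = x_grid[sign_changes[-1]]         # upper root: f goes + to -, stable
    uptake = 2.0 * x_stable / (1.0 + x_stable)
    kay = max(kay, k * x_stable / uptake)       # product flux per substrate flux
```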
Oberg, T
2007-01-01
The vapour pressure is the most important property of an anthropogenic organic compound in determining its partitioning between the atmosphere and the other environmental media. The enthalpy of vaporisation quantifies the temperature dependence of the vapour pressure and its value around 298 K is needed for environmental modelling. The enthalpy of vaporisation can be determined by different experimental methods, but estimation methods are needed to extend the current database and several approaches are available from the literature. However, these methods have limitations, such as a need for other experimental results as input data, a limited applicability domain, a lack of domain definition, and a lack of predictive validation. Here we have attempted to develop a quantitative structure-property relationship (QSPR) that has general applicability and is thoroughly validated. Enthalpies of vaporisation at 298 K were collected from the literature for 1835 pure compounds. The three-dimensional (3D) structures were optimised and each compound was described by a set of computationally derived descriptors. The compounds were randomly assigned into a calibration set and a prediction set. Partial least squares regression (PLSR) was used to estimate a low-dimensional QSPR model with 12 latent variables. The predictive performance of this model, within the domain of application, was estimated at n = 560, q²ext = 0.968 and s = 0.028 (log transformed values). The QSPR model was subsequently applied to a database of 100,000+ structures, after a similar 3D optimisation and descriptor generation. Reliable predictions can be reported for compounds within the previously defined applicability domain.
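A minimal sketch of the modeling step, assuming scikit-learn and synthetic descriptors (not the authors' descriptor set or data): fit a PLSR model with 12 latent variables on a calibration set and score its external predictive performance on the held-out prediction set.

```python
# PLSR-based QSPR sketch with a random calibration/prediction split.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1835, 200))                                 # synthetic descriptor matrix
y = X[:, :12] @ rng.normal(size=12) + rng.normal(0, 0.1, 1835)   # synthetic log-scale target

X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=12)   # 12 latent variables, as above
pls.fit(X_cal, y_cal)
q2_ext = pls.score(X_pred, y_pred)     # external predictive R^2
```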
Katz, Matthew L.; Viney, Tim J.; Nikolic, Konstantin
2016-01-01
Sensory stimuli are encoded by diverse kinds of neurons but the identities of the recorded neurons that are studied are often unknown. We explored in detail the firing patterns of eight previously defined genetically-identified retinal ganglion cell (RGC) types from a single transgenic mouse line. We first introduce a new technique of deriving receptive field vectors (RFVs) which utilises a modified form of mutual information (“Quadratic Mutual Information”). We analysed the firing patterns of RGCs during presentation of short duration (~10 second) complex visual scenes (natural movies). We probed the high dimensional space formed by the visual input for a much smaller dimensional subspace of RFVs that give the most information about the response of each cell. The new technique is very efficient and fast and the derivation of novel types of RFVs formed by the natural scene visual input was possible even with limited numbers of spikes per cell. This approach enabled us to estimate the 'visual memory' of each cell type and the corresponding receptive field area by calculating Mutual Information as a function of the number of frames and radius. Finally, we made predictions of biologically relevant functions based on the RFVs of each cell type. RGC class analysis was complemented with results for the cells’ response to simple visual input in the form of black and white spot stimulation, and their classification on several key physiological metrics. Thus RFVs lead to predictions of biological roles based on limited data and facilitate analysis of sensory-evoked spiking data from defined cell types. PMID:26845435
NASA Astrophysics Data System (ADS)
Bruening, J. M.; Tran, T. J.; Bunn, A. G.; Salzer, M. W.; Weiss, S. B.
2015-12-01
Great Basin bristlecone pine (Pinus longaeva) is a valuable paleoclimate resource due to the climatic sensitivity of its annually-resolved rings. Recent work has shown that low growing season temperatures limit tree growth at the upper treeline ecotone. The presence of precisely dated remnant wood above modern treeline shows that this ecotone shifts at centennial timescales; in some areas during the Holocene climatic optimum treeline was 100 m higher than at present. A recent model from Paulsen and Körner (2014, doi:10.1007/s00035-014-0124-0) predicts global potential treeline position as a function of climate. The model uses three parameters necessary to sustain a temperature-limited treeline: a growing season longer than 94 days, defined by all days with a mean temperature >0.9 °C, and a mean temperature of 6.4 °C across the entire growing season. While maintaining impressive global accuracy in treeline prediction, these parameters are not specific to the semi-arid Great Basin bristlecone pine treelines in Nevada. In this study, we used 49 temperature sensors arrayed across approximately one square kilometer of complex terrain at treeline on Mount Washington to model temperatures using topographic indices. Results show relatively accurate prediction throughout the growing season (e.g., July average daily temperatures were modeled with an R² of 0.80 and an RMSE of 0.29 °C). The modeled temperatures enabled calibration of a regional treeline model, yielding different parameters needed to predict potential treeline than the global model. Preliminary results indicate that modern bristlecone pine treeline on and around Mount Washington occurs in areas with a longer growing season length (~160 days, defined by all days with a mean temperature >0.9 °C) and a warmer seasonal mean temperature (~9 °C) than the global average. This work will provide a baseline data set on treeline position in the Snake Range derived only from parameters physiologically relevant to demography, and may assist in understanding climate refugia for this species.
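A minimal sketch of testing the global treeline criteria quoted above against a year of daily mean temperatures (synthetic data; the thresholds come from the abstract):

```python
# Does a site's temperature record support a temperature-limited treeline?
import numpy as np

def supports_treeline(daily_mean_t, t_day=0.9, min_days=94, min_mean=6.4):
    """True if daily means [C] meet the global growing-season criteria."""
    growing = daily_mean_t[daily_mean_t > t_day]   # growing-season days
    return growing.size >= min_days and growing.mean() >= min_mean

# One synthetic year of daily means for illustration
rng = np.random.default_rng(1)
doy = np.arange(365)
t = 1.0 + 8.0 * np.sin(2 * np.pi * (doy - 80) / 365) + rng.normal(0, 2, 365)
print(supports_treeline(t))
# The regional calibration above would instead use ~160 days and ~9 C.
```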
Public understanding of cyclone warning in India: Can wind be predicted?
Dash, Biswanath
2015-11-01
In spite of meteorological warnings, many human lives are lost to cyclones every year, mainly because vulnerable populations were not evacuated to a safe shelter on time as recommended. This raises several questions, most prominently: what explains people's behaviour in the face of such danger from a cyclonic storm? How do people view meteorological advisories issued for cyclones, and what role do these advisories play in defining the threat? What shapes public response in such situations? This article, based on an ethnographic study carried out in the coastal state of Odisha, India, argues that the local public, recognising the inherent limitations of meteorological warnings, falls back on its own system of observation and forecasting. Not only are the contents of cyclone warnings understood; their limitations are accommodated and explained. © The Author(s) 2014.
Boggia, Lorenzo; Pignata, Giuseppe; Sgorbini, Barbara; Colombo, Maria Laura; Marengo, Arianna; Casale, Manuela; Nicola, Silvana; Bicchi, Carlo; Rubiolo, Patrizia
2017-04-05
Artemisia umbelliformis, commonly known as "white génépi", is characterized by a volatile fraction rich in α- and β-thujones, two monoterpenoids; under European Union (EU) regulations these are limited to 35 mg/L in Artemisia-based beverages because of their recognized activity on the human central nervous system. This study reports the results of an investigation to define the geographical origin and thujone content of individual plants of A. umbelliformis from different geographical sites, cultivated experimentally at a single site, and to predict the thujone content in the resulting liqueurs through their volatile fraction. Headspace solid phase microextraction (HS-SPME) combined with gas chromatography-mass spectrometry (GC-MS) and non-separative HS-SPME-MS were used as analytical platforms to create a database suitable for chemometric description and prediction through linear discriminant analysis (LDA). HS-SPME-MS was applied to shorten analysis time. With both approaches, a diagnostic prediction of (i) plant geographical origin and (ii) thujone content of plant-related liqueurs could be made.
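A minimal sketch of the chemometric step, assuming scikit-learn and synthetic volatile-fraction features (not the paper's HS-SPME database): an LDA classifier predicting geographical origin from the measured feature vectors, with cross-validated accuracy as the headline number.

```python
# LDA classification of plant origin from volatile-fraction features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sites, n_per_site, n_features = 4, 30, 25   # hypothetical design
X = np.vstack([rng.normal(loc=i, size=(n_per_site, n_features))
               for i in range(n_sites)])
y = np.repeat(np.arange(n_sites), n_per_site)  # geographical origin labels

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())  # origin-prediction accuracy
```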
FutureTox II: In vitro Data and In Silico Models for Predictive Toxicology
Knudsen, Thomas B.; Keller, Douglas A.; Sander, Miriam; Carney, Edward W.; Doerrer, Nancy G.; Eaton, David L.; Fitzpatrick, Suzanne Compton; Hastings, Kenneth L.; Mendrick, Donna L.; Tice, Raymond R.; Watkins, Paul B.; Whelan, Maurice
2015-01-01
FutureTox II, a Society of Toxicology Contemporary Concepts in Toxicology workshop, was held in January, 2014. The meeting goals were to review and discuss the state of the science in toxicology in the context of implementing the NRC 21st century vision of predicting in vivo responses from in vitro and in silico data, and to define the goals for the future. Presentations and discussions were held on priority concerns such as predicting and modeling of metabolism, cell growth and differentiation, effects on sensitive subpopulations, and integrating data into risk assessment. Emerging trends in technologies such as stem cell-derived human cells, 3D organotypic culture models, mathematical modeling of cellular processes and morphogenesis, adverse outcome pathway development, and high-content imaging of in vivo systems were discussed. Although advances in moving towards an in vitro/in silico based risk assessment paradigm were apparent, knowledge gaps in these areas and limitations of technologies were identified. Specific recommendations were made for future directions and research needs in the areas of hepatotoxicity, cancer prediction, developmental toxicity, and regulatory toxicology. PMID:25628403
Early prediction of thiopurine-induced hepatotoxicity in inflammatory bowel disease.
Wong, D R; Coenen, M J H; Derijks, L J J; Vermeulen, S H; van Marrewijk, C J; Klungel, O H; Scheffer, H; Franke, B; Guchelaar, H-J; de Jong, D J; Engels, L G J B; Verbeek, A L M; Hooymans, P M
2017-02-01
Hepatotoxicity, gastrointestinal complaints and general malaise are common limiting adverse reactions of azathioprine and mercaptopurine in IBD patients, often related to high steady-state 6-methylmercaptopurine ribonucleotide (6-MMPR) metabolite concentrations. To determine the predictive value of 6-MMPR concentrations 1 week after treatment initiation (T1) for the development of these adverse reactions, especially hepatotoxicity, during the first 20 weeks of treatment. The cohort study consisted of the first 270 IBD patients starting thiopurine treatment as part of the Dutch randomised-controlled trial evaluating pre-treatment thiopurine S-methyltransferase genotype testing (ClinicalTrials.gov NCT00521950). Blood samples for metabolite assessment were collected at T1. Hepatotoxicity was defined by alanine aminotransferase elevations >2 times the upper normal limit or a ratio of alanine aminotransferase/alkaline phosphatase ≥5. Forty-seven patients (17%) presented hepatotoxicity during the first 20 weeks of thiopurine treatment. A T1 6-MMPR threshold of 3615 pmol/8×10⁸ erythrocytes was defined. Analysis of patients on stable thiopurine dose (n = 174) showed that those exceeding the 6-MMPR threshold were at increased risk of hepatotoxicity: OR = 3.8 (95% CI: 1.8-8.0). Age, male gender and BMI were significant determinants. A predictive algorithm was developed based on these determinants and the 6-MMPR threshold to assess hepatotoxicity risk [AUC = 0.83 (95% CI: 0.75-0.91)]. 6-MMPR concentrations above the threshold also correlated with gastrointestinal complaints: OR = 2.4 (95% CI: 1.4-4.3), and general malaise: OR = 2.0 (95% CI: 1.1-3.7). In more than 80% of patients, thiopurine-induced hepatotoxicity could be explained by elevated T1 6-MMPR concentrations and the independent risk factors age, gender and BMI, allowing personalised thiopurine treatment in IBD to prevent early failure. © 2016 John Wiley & Sons Ltd.
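A minimal sketch of how such a predictive algorithm can be built and scored, assuming scikit-learn and a synthetic cohort (the trial's actual coefficients are not reproduced here):

```python
# Logistic hepatotoxicity model: 6-MMPR threshold flag plus age, sex, BMI,
# scored by ROC AUC as in the abstract. All numbers below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 270
threshold_flag = rng.integers(0, 2, n)   # 1 if T1 6-MMPR above the threshold
age = rng.normal(45, 15, n)
male = rng.integers(0, 2, n)
bmi = rng.normal(25, 4, n)

# Synthetic outcomes loosely shaped like the reported risk factors
logit = -2.5 + 1.3 * threshold_flag + 0.02 * (age - 45) + 0.5 * male + 0.05 * (bmi - 25)
hepatotoxicity = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([threshold_flag, age, male, bmi])
model = LogisticRegression(max_iter=1000).fit(X, hepatotoxicity)
auc = roc_auc_score(hepatotoxicity, model.predict_proba(X)[:, 1])
```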
Linear array ultrasonography to stage rectal neoplasias suitable for local treatment.
Ravizza, Davide; Tamayo, Darina; Fiori, Giancarla; Trovato, Cristina; De Roberto, Giuseppe; de Leone, Annalisa; Crosta, Cristiano
2011-08-01
Because of the many therapeutic options available, reliable staging is crucial for rectal neoplasia management. Adenomas and cancers limited to the submucosa without lymph node involvement may be treated locally. The aim of this study was to evaluate the diagnostic accuracy of endorectal ultrasonography in the staging of neoplasias suitable for local treatment. We considered all patients who underwent endorectal ultrasonography between 2001 and 2010. The study population consisted of 92 patients with 92 neoplasias (68 adenocarcinomas and 24 adenomas). A 5 and 7.5 MHz linear array echoendoscope was used. The postoperative histopathologic result was compared with the preoperative staging defined by endorectal ultrasonography. Adenomas and cancers limited to the submucosa were considered together (pT0-1). The sensitivity, specificity, overall accuracy rate, positive predictive value, and negative predictive value of endorectal ultrasonography for pT0-1 were 86%, 95.6%, 91.3%, 94.9% and 88.7%, respectively. Those for nodal involvement were 45.4%, 95.5%, 83%, 76.9% and 84%, with 3 false positive results and 12 false negatives. For combined pT0-1 and pN0, endorectal ultrasonography showed an 87.5% sensitivity, 95.9% specificity, 92% overall accuracy rate, 94.9% positive predictive value and 90.2% negative predictive value. Endorectal linear array ultrasonography is a reliable tool to detect rectal neoplasias suitable for local treatment. Copyright © 2011 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.
2010-08-01
In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.
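A minimal sketch of the underlying idea under stated assumptions (my construction, not the PICASSO/MODTRAN4 implementation): precompute TOA radiance for the few available model atmospheres, then interpolate to an arbitrary average scene temperature so each trade-study case avoids a fresh radiative transfer run.

```python
# Interpolate band-integrated TOA radiance over model-atmosphere temperature.
import numpy as np
from scipy.interpolate import interp1d

# Hypothetical: mean surface temperatures [K] of six model atmospheres and
# band-integrated TOA radiances [W m^-2 sr^-1] from prior radiative transfer runs
t_model = np.array([257.0, 272.0, 281.0, 288.0, 294.0, 300.0])
l_toa = np.array([2.1, 3.0, 3.6, 4.3, 4.9, 5.8])

l_of_t = interp1d(t_model, l_toa, kind="cubic")
print(l_of_t(285.5))   # radiance at an intermediate average scene temperature
```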
Fukuda, Takayuki; Takayama, Kazuo; Hirata, Mitsuhi; Liu, Yu-Jung; Yanagihara, Kana; Suga, Mika; Mizuguchi, Hiroyuki; Furue, Miho K
2017-03-15
Limited growth potential, narrow ranges of sources, and batch-to-batch differences in variability and function make primary hepatocytes problematic for predicting drug-induced hepatotoxicity during drug development. Human pluripotent stem cell (hPSC)-derived hepatocyte-like cells generated in vitro are expected to serve as a tool for predicting drug-induced hepatotoxicity. Several studies have already reported efficient methods for differentiating hPSCs into hepatocyte-like cells; however, the differentiation process is time-consuming, labor-intensive, costly, and unstable. To address this problem, expansion culture of hPSC-derived hepatic progenitor cells, including hepatic stem cells and hepatoblasts, which can self-renew and differentiate into hepatocytes, would be valuable as a source of hepatocytes. However, the mechanisms of expansion of hPSC-derived hepatic progenitor cells are not yet fully understood. In this study, to isolate hPSC-derived hepatic progenitor cells, we developed serum-free, growth-factor-defined culture conditions using defined components. Our culture conditions made it possible to isolate and grow hPSC-derived hepatic progenitor cells that could differentiate into hepatocyte-like cells through hepatoblast-like cells. We confirmed that the hepatocyte-like cells prepared by our methods increased gene expression of cytochrome P450 enzymes upon exposure to rifampicin, phenobarbital, or omeprazole. The isolation and expansion of hPSC-derived hepatic progenitor cells in defined culture conditions should have advantages for detecting accurate effects of exogenous factors on hepatic lineage differentiation, understanding mechanisms underlying the self-renewal ability of hepatic progenitor cells, and stably supplying functional hepatic cells. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delmau, L.H.; Haverlock, T.J.; Sloop, F.V., Jr.
This report presents the work that followed the CSSX model development completed in FY2002. The developed cesium and potassium extraction model was based on extraction data obtained from simple aqueous media. It was tested to ensure the validity of the prediction for cesium extraction from actual waste. Compositions of the actual tank waste were obtained from Savannah River Site personnel and were used to prepare defined simulants and to predict cesium distribution ratios using the model. It was therefore possible to compare the cesium distribution ratios obtained from the actual waste, the simulant, and the predicted values. It was determined that the predicted values agree with the measured values for the simulants. Predicted values also agreed, with three exceptions, with measured values for the tank wastes. Discrepancies were attributed in part to the uncertainty in the cation/anion balance in the actual waste composition, but likely more so to the uncertainty in the potassium concentration in the waste, given the demonstrated large competing effect of this metal on cesium extraction. It was demonstrated that the upper limit for the potassium concentration in the feed ought not to exceed 0.05 M in order to maintain suitable cesium distribution ratios.
Lontis, Eugen R; Lund, Morten E; Christensen, Henrik V; Bentsen, Bo; Gaihede, Michael; Caltenco, Hector A; Andreasen Struijk, Lotte N S
2010-01-01
Typing performance of a full alphabet keyboard and a joystick-type mouse (with on-screen keyboard) provided by a wireless integrated tongue control system (TCS) has been investigated. Speed and accuracy were measured in the form of a throughput defined as true correct words per minute [cwpm]. Training character sequences were typed in a dedicated interface that provided visual feedback of activated sensors, a map of the associated alphabet, and the task character. Testing sentences were typed in Word, with limited visual feedback, using non-predictive typing (map of characters in alphabetic order associated to sensors) and predictive typing (LetterWise) for the TCS keyboard, and non-predictive typing for the TCS mouse. Two subjects participated for four and three consecutive days, respectively, with two sessions per day. Maximal throughputs of 2.94 and 2.46 [cwpm] (predictive typing) and 2.06 and 1.68 [cwpm] (non-predictive typing) were obtained with the TCS keyboard by subjects 1 and 2, respectively. Maximal throughputs of 2.09 and 1.71 [cwpm] were obtained with the TCS mouse by subjects 1 and 2, respectively. The same experimental protocol has been planned for a larger number of subjects.
Çolak, Yunus; Marott, Jacob Louis; Vestbo, Jørgen; Lange, Peter
2015-02-01
The prevalence of obesity has increased during the last decades and varies from 10-20% in most European countries to approximately 32% in the United States. However, data on how obesity affects the presence of airflow limitation (AFL), defined as a reduced ratio between forced expiratory volume in 1 second (FEV1) and forced vital capacity (FVC), are scarce. Data were derived from the third examination of the Copenhagen City Heart Study from 1991 until 1994 (n = 10,135). We examined the impact of different adiposity markers (weight, body mass index (BMI), waist circumference, waist-hip ratio, and abdominal height) on AFL. AFL was defined in four ways: FEV1/FVC ratio < 0.70, FEV1/FVC ratio < lower limit of normal (LLN), FEV1/FVC ratio < 0.70 including at least one respiratory symptom, and FEV1/FVC ratio < LLN and FEV1% of predicted < LLN. All adiposity markers were positively and significantly associated with FEV1/FVC independent of age, sex, height, smoking status, and cumulative tobacco consumption. Among all adiposity markers, BMI was the strongest predictor of FEV1/FVC. FEV1/FVC increased by 0.04 in men and 0.03 in women as BMI increased by 10 units (kg·m⁻²). Consequently, a diagnosis of AFL was significantly less likely in subjects with BMI ≥ 25 kg·m⁻² (odds ratios 0.63 or less) than in subjects with BMI between 18.5 and 24.9 kg·m⁻² when AFL was defined as FEV1/FVC < 0.70. High BMI reduces the probability of AFL. Ultimately, this may result in under-diagnosis and under-treatment of COPD among individuals with overweight and obesity.
Taylor, Steven; McKay, Dean; Crowe, Katherine B.; Abramowitz, Jonathan S.; Conelea, Christine A.; Calamari, John E.; Sica, Claudio
2014-01-01
Contemporary models of obsessive-compulsive disorder emphasize the importance of harm avoidance (HA) and related dysfunctional beliefs as motivators of obsessive-compulsive (OC) symptoms. Recently, there has been a resurgence of interest in Janet’s (1908) concept of incompleteness (INC) as another potentially important motivator. Contemporary investigators define INC as the sense that one’s actions, intentions, or experiences have not been properly achieved. Janet defined INC more broadly to include alexithymia, depersonalization, derealization, and impaired psychological mindedness. We conducted two studies to address four issues: (a) the clinical correlates of INC; (b) whether INC and HA are distinguishable constructs; (c) whether INC predicts OC symptoms after controlling for HA; and (d) the relative merits of broad versus narrow conceptualizations of INC. Study 1 was a meta-analysis of the clinical correlates of narrowly defined INC (16 studies, N=5,940). INC was correlated with all types of OC symptoms, and was more strongly correlated with OC symptoms than with general distress. Study 2 (N=534 nonclinical participants) showed that: (a) INC and HA were strongly correlated but factor analytically distinguishable; (b) INC statistically predicted all types of OC symptoms even after controlling for HA; and (c) narrow INC was most strongly correlated with OC symptoms whereas broad INC was most strongly correlated with general distress. Although the findings are limited by being correlational in nature, they support the hypothesis that INC, especially in its narrow form, is a motivator of OC symptoms. PMID:24491200
Musical Competence is Predicted by Music Training, Cognitive Abilities, and Personality.
Swaminathan, Swathi; Schellenberg, E Glenn
2018-06-15
Individuals differ in musical competence, which we defined as the ability to perceive, remember, and discriminate sequences of tones or beats. We asked whether such differences could be explained by variables other than music training, including socioeconomic status (SES), short-term memory, general cognitive ability, and personality. In a sample of undergraduates, musical competence had positive simple associations with duration of music training, SES, short-term memory, general cognitive ability, and openness-to-experience. When these predictors were considered jointly, musical competence had positive partial associations with music training, general cognitive ability, and openness. Nevertheless, moderation analyses revealed that the partial association between musical competence and music training was evident only among participants who scored below the mean on our measure of general cognitive ability. Moreover, general cognitive ability and openness had indirect associations with musical competence by predicting music training, which in turn predicted musical competence. Musical competence appears to be the result of multiple factors, including but not limited to music training.
Enhanced Predictive Handover for Fast Proxy Mobile IPv6
NASA Astrophysics Data System (ADS)
Jeon, Seil; Kang, Namhi; Kim, Younghan
Proxy Mobile IPv6 (PMIPv6) has been proposed to overcome the limitations of host-based mobility management in IPv6 networks. However, packet losses during handover remain a problem. To address this issue, several schemes have been developed; they can be classified into two approaches: predictive and reactive handover. Both approaches commonly use a bi-directional tunnel between mobile access gateways (MAGs). In predictive schemes especially, mobility support for a mobile node (MN) is triggered by simplified link signal strength. Thereafter, the MN sends a handover notification to its serving MAG, which is then able to initiate packet forwarding. Therefore, if the MN moves toward an unexpected MAG that does not have a pre-established tunnel with the serving MAG, packet losses may occur. In this paper, we define this problem as Early Packet Forwarding (EPF). As a solution, we propose an enhanced PMIPv6 scheme using two-phase tunnel control based on the IEEE 802.21 Media Independent Handover (MIH).
Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction
NASA Astrophysics Data System (ADS)
Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc
2018-02-01
Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and used it to predict the time to clinical onset of subjects carrying a genetic mutation.
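A minimal sketch of compositional kernel search, assuming scikit-learn and scoring by log marginal likelihood as a stand-in for the BIC-plus-explained-variance energy described above (this is not the authors' CKL variant):

```python
# Greedy search over sums and products of base kernels for a GP regressor.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, DotProduct

def base_kernels():
    return [RBF(), RationalQuadratic(), DotProduct()]

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (40, 1))                    # e.g. years since baseline
y = np.sin(X[:, 0]) + 0.1 * X[:, 0] + rng.normal(0, 0.1, 40)

best_kernel, best_score = None, -np.inf
for k1 in base_kernels():
    for k2 in base_kernels():
        for kernel in (k1 + k2, k1 * k2):          # compositional candidates
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            gp.fit(X, y)
            score = gp.log_marginal_likelihood_value_
            if score > best_score:
                best_kernel, best_score = kernel, score
```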
Forecasting the spatial transmission of influenza in the United States.
Pei, Sen; Kandula, Sasikiran; Yang, Wan; Shaman, Jeffrey
2018-03-13
Recurrent outbreaks of seasonal and pandemic influenza create a need for forecasts of the geographic spread of this pathogen. Although it is well established that the spatial progression of infection is largely attributable to human mobility, difficulty obtaining real-time information on human movement has limited its incorporation into existing infectious disease forecasting techniques. In this study, we develop and validate an ensemble forecast system for predicting the spatiotemporal spread of influenza that uses readily accessible human mobility data and a metapopulation model. In retrospective state-level forecasts for 35 US states, the system accurately predicts local influenza outbreak onset (i.e., spatial spread, defined as the week that local incidence increases above a baseline threshold) up to 6 wk in advance of this event. In addition, the metapopulation prediction system forecasts influenza outbreak onset, peak timing, and peak intensity more accurately than isolated location-specific forecasts. The proposed framework could be applied to emergent respiratory viruses and, with appropriate modifications, other infectious diseases.
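A minimal sketch of the core ingredient, a mobility-coupled SIR metapopulation model (my construction with synthetic flows, not the paper's ensemble adjustment system), including the onset definition quoted above:

```python
# SIR metapopulation model: commuting flows couple local epidemics.
import numpy as np

n = 5                                    # locations
rng = np.random.default_rng(0)
M = rng.random((n, n)) * 0.02            # mobility: fraction moving j -> i
np.fill_diagonal(M, 0.0)

N = np.full(n, 1e6)                      # population per location
I = np.zeros(n); I[0] = 10.0             # seed the outbreak in one location
S = N - I
beta, gamma = 0.5, 1.0 / 3.0             # transmission and recovery rates
baseline = 100.0                         # onset threshold [new cases/day]

onset = {}
for day in range(1, 151):
    new_inf = beta * S * I / N
    S = S - new_inf
    I = I + new_inf - gamma * I
    I = I + M @ I - M.sum(axis=0) * I    # mobility mixes infecteds between locations
    for loc in range(n):
        if loc not in onset and new_inf[loc] > baseline:
            onset[loc] = day             # local outbreak onset
```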
Models for the indices of thermal comfort
Adrian, Streinu-Cercel; Sergiu, Costoiu; Maria, Mârza; Anca, Streinu-Cercel; Monica, Mârza
2008-01-01
The current paper proposes the analysis and extended formulation required for establishing decisions in the management of the national medical system from the point of view of quality and efficiency, including: conceiving models for the indices of thermal comfort, defining the predicted mean vote "PMV" (on the thermal sensation scale), defining the metabolism "M", modelling heat transfer between the human body and the environment, defining the predicted percent of dissatisfied people "PPD", and defining all indices of thermal comfort. PMID:20108461
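A minimal sketch of one of these indices, assuming the standard ISO 7730 relation between PMV and PPD (the abstract itself gives no formulas):

```python
# Predicted percent dissatisfied (PPD) as a function of predicted mean vote.
import math

def ppd_from_pmv(pmv):
    """PPD [%] from PMV on the 7-point thermal sensation scale (-3..+3)."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(ppd_from_pmv(0.0))   # ~5% dissatisfied even at neutral sensation
print(ppd_from_pmv(1.0))   # ~26% at "slightly warm"
```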
A dynamic model for predicting growth in zinc-deficient stunted infants given supplemental zinc.
Wastney, Meryl E; McDonald, Christine M; King, Janet C
2018-05-01
Zinc deficiency limits infant growth and increases susceptibility to infections, which further compromises growth. Zinc supplementation improves the growth of zinc-deficient stunted infants, but the amount, frequency, and duration of zinc supplementation required to restore growth in an individual child is unknown. A dynamic model of zinc metabolism that predicts changes in weight and length of zinc-deficient, stunted infants with dietary zinc would be useful to define effective zinc supplementation regimens. The aims of this study were to develop a dynamic model for zinc metabolism in stunted, zinc-deficient infants and to use that model to predict the growth response when those infants are given zinc supplements. A model of zinc metabolism was developed using data on zinc kinetics, tissue zinc, and growth requirements for healthy 9-mo-old infants. The kinetic model was converted to a dynamic model by replacing the rate constants for zinc absorption and excretion with functions for these processes that change with zinc intake. Predictions of the dynamic model, parameterized for zinc-deficient, stunted infants, were compared with the results of 5 published zinc intervention trials. The model was then used to predict the results for zinc supplementation regimes that varied in the amount, frequency, and duration of zinc dosing. Model predictions agreed with published changes in plasma zinc after zinc supplementation. Predictions of weight and length agreed with 2 studies, but overpredicted values from a third study in which other nutrient deficiencies may have been growth limiting; the model predicted that zinc absorption was impaired in that study. The model suggests that frequent, smaller doses (5-10 mg Zn/d) are more effective for increasing growth in stunted, zinc-deficient 9-mo-old infants than are larger, less-frequent doses. The dose amount affects the duration of dosing necessary to restore and maintain plasma zinc concentration and growth.
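A toy sketch of why dose size and frequency matter in such a model (my construction, not the published kinetic model): if fractional absorption saturates with dose, two small doses deliver more absorbed zinc per day than one large dose of the same total.

```python
# Toy compartmental view of an exchangeable zinc pool with saturable absorption.
def absorbed(dose_mg):
    # Bigger single doses are absorbed less efficiently (saturable uptake)
    return 3.5 * dose_mg / (4.0 + dose_mg)

def simulate(dose_mg, doses_per_day, days=30, k_loss=0.8, pool0=2.0):
    pool = pool0                             # exchangeable zinc pool, toy units
    for _ in range(days):
        pool += doses_per_day * absorbed(dose_mg)
        pool -= k_loss * (pool - pool0)      # excretion back toward baseline
    return pool

print(simulate(dose_mg=5.0, doses_per_day=2))    # frequent small doses
print(simulate(dose_mg=10.0, doses_per_day=1))   # one larger daily dose
```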
Information thermodynamics of near-equilibrium computation
NASA Astrophysics Data System (ADS)
Prokopenko, Mikhail; Einav, Itai
2015-06-01
In studying fundamental physical limits and properties of computational processes, one is faced with the challenges of interpreting primitive information-processing functions through well-defined information-theoretic as well as thermodynamic quantities. In particular, transfer entropy, characterizing the function of computational transmission and its predictability, is known to peak near critical regimes. We focus on a thermodynamic interpretation of transfer entropy aiming to explain the underlying critical behavior by associating information flows intrinsic to computational transmission with particular physical fluxes. Specifically, in isothermal systems near thermodynamic equilibrium, the gradient of the average transfer entropy is shown to be dynamically related to Fisher information and the curvature of system's entropy. This relationship explicitly connects the predictability, sensitivity, and uncertainty of computational processes intrinsic to complex systems and allows us to consider thermodynamic interpretations of several important extreme cases and trade-offs.
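A minimal sketch of the quantity under discussion, a plug-in transfer entropy estimate for binary time series (this is not the paper's thermodynamic derivation):

```python
# Transfer entropy T_{Y->X} = I(X_{t+1}; Y_t | X_t), estimated by counting.
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of T_{Y->X} in bits for discrete series x, y."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    n = sum(triples.values())
    p3 = {k: v / n for k, v in triples.items()}
    def marginal(idx):
        out = Counter()
        for k, v in p3.items():
            out[tuple(k[i] for i in idx)] += v
        return out
    p_xx, p_xy, p_x = marginal((0, 1)), marginal((1, 2)), marginal((1,))
    return sum(p * np.log2(p * p_x[(x0,)] / (p_xx[(x1, x0)] * p_xy[(x0, y0)]))
               for (x1, x0, y0), p in p3.items())

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10000)
x = np.roll(y, 1)                # x copies y with one step of lag
print(transfer_entropy(x, y))    # ~1 bit: y fully predicts x's next state
```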
Mind the gap: bridging economic and naturalistic risk-taking with cognitive neuroscience.
Schonberg, Tom; Fox, Craig R; Poldrack, Russell A
2011-01-01
Economists define risk in terms of the variability of possible outcomes, whereas clinicians and laypeople generally view risk as exposure to possible loss or harm. Neuroeconomic studies using relatively simple behavioral tasks have identified a network of brain regions that respond to economic risk, but these studies have had limited success predicting naturalistic risk-taking. By contrast, more complex behavioral tasks developed by clinicians (e.g. Balloon Analogue Risk Task and Iowa Gambling Task) correlate with naturalistic risk-taking but resist decomposition into distinct cognitive constructs. We propose here that to bridge this gap and better understand neural substrates of naturalistic risk-taking, new tasks are needed that: are decomposable into basic cognitive and/or economic constructs; predict naturalistic risk-taking; and engender dynamic, affective engagement. Copyright © 2010 Elsevier Ltd. All rights reserved.
Predicting Dengue Fever Outbreaks in French Guiana Using Climate Indicators.
Adde, Antoine; Roucou, Pascal; Mangeas, Morgan; Ardillon, Vanessa; Desenclos, Jean-Claude; Rousset, Dominique; Girod, Romain; Briolant, Sébastien; Quenel, Philippe; Flamand, Claude
2016-04-01
Dengue fever epidemic dynamics are driven by complex interactions between hosts, vectors and viruses. Associations between climate and dengue have been studied around the world, but the results have shown that the impact of the climate can vary widely from one study site to another. In French Guiana, climate-based models are not available to assist in developing an early warning system. This study aims to evaluate the potential of using oceanic and atmospheric conditions to help predict dengue fever outbreaks in French Guiana. Lagged correlations and composite analyses were performed to identify the climatic conditions that characterized a typical epidemic year and to define the best indices for predicting dengue fever outbreaks during the period 1991-2013. A logistic regression was then performed to build a forecast model. We demonstrate that a model based on summer Equatorial Pacific Ocean sea surface temperatures and Azores High sea-level pressure had predictive value and was able to predict 80% of the outbreaks while incorrectly predicting only 15% of the non-epidemic years. Predictions for 2014-2015 were consistent with the observed non-epidemic conditions, and an outbreak in early 2016 was predicted. These findings indicate that outbreak resurgence can be modeled using a simple combination of climate indicators. This might be useful for anticipating public health actions to mitigate the effects of major outbreaks, particularly in areas where resources are limited and medical infrastructures are generally insufficient.
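A minimal sketch of the forecasting step, assuming scikit-learn and synthetic climate indices (not the study's data), scored by the hit and false-alarm rates quoted above:

```python
# Logistic classification of epidemic vs non-epidemic years from two indices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
years = 23
sst = rng.normal(size=years)    # summer equatorial Pacific SST index
slp = rng.normal(size=years)    # Azores High sea-level pressure index
epidemic = (0.9 * sst - 0.7 * slp + rng.normal(0, 0.5, years)) > 0  # synthetic labels

X = np.column_stack([sst, slp])
model = LogisticRegression().fit(X, epidemic)
pred = model.predict(X)
hit_rate = (pred & epidemic).sum() / epidemic.sum()          # cf. 80% above
false_alarm = (pred & ~epidemic).sum() / (~epidemic).sum()   # cf. 15% above
```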
Tang, Zhanghong; Wang, Qun; Ji, Zhijiang; Shi, Meiwu; Hou, Guoyan; Tan, Danjun; Wang, Pengqi; Qiu, Xianbo
2014-12-01
With increasing city size, high-power electromagnetic radiation devices such as high-power medium-wave (MW) and short-wave (SW) antennas have inevitably been getting closer and closer to buildings, which has worsened indoor electromagnetic radiation pollution. To avoid such radiation exceeding the exposure limits set by national standards, it is necessary to predict and survey the electromagnetic radiation from MW and SW antennas before constructing buildings. In this paper, a modified prediction method for far-field electromagnetic radiation is proposed and successfully applied to predict the electromagnetic environment of an area close to a group of typical high-power MW and SW antennas. Unlike the simplified prediction method defined in the Radiation Protection Management Guidelines (HJ/T 10.3-1996) currently in use, the new method makes use of more information, such as the antennas' patterns, to predict the electromagnetic environment. It therefore improves the prediction accuracy significantly by resolving the field at different directions. At the end of this article, a comparison between the prediction data and the measured results is given to demonstrate the effectiveness of the proposed new method. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
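For orientation, a minimal sketch assuming the textbook far-field relation (not the paper's modified method): the direction-dependent antenna gain is exactly the extra information a pattern-aware approach exploits.

```python
# Far-field power density from transmit power and direction-dependent gain.
import numpy as np

def power_density(p_watts, gain_dbi, r_m):
    """Far-field power density S [W/m^2] at range r along a given bearing."""
    g = 10 ** (gain_dbi / 10.0)
    return p_watts * g / (4.0 * np.pi * r_m**2)

# Hypothetical 100 kW MW transmitter with a toy directional gain pattern
bearings = np.linspace(0, 180, 7)
pattern_dbi = 9.0 * np.cos(np.radians(bearings)) ** 2
for theta, g in zip(bearings, pattern_dbi):
    print(theta, power_density(1e5, g, r_m=500.0))
```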
Method for early detection of cooling-loss events
Bermudez, Sergio A.; Hamann, Hendrik; Marianno, Fernando J.
2015-06-30
A method of detecting cooling-loss events early is provided. The method includes defining a relative humidity limit and a change threshold for a given space; measuring relative humidity in the given space; determining, with a processing unit, whether the measured relative humidity is within the defined relative humidity limit; generating a warning in the event the measured relative humidity is outside the defined relative humidity limit; determining whether a change in the measured relative humidity is less than the defined change threshold for the given space; and generating an alarm in the event the change is greater than the defined change threshold.
Method for early detection of cooling-loss events
Bermudez, Sergio A.; Hamann, Hendrik F.; Marianno, Fernando J.
2015-12-22
A method of detecting cooling-loss events early is provided. The method includes defining a relative humidity limit and a change threshold for a given space; measuring relative humidity in the given space; determining, with a processing unit, whether the measured relative humidity is within the defined relative humidity limit; generating a warning in the event the measured relative humidity is outside the defined relative humidity limit; determining whether a change in the measured relative humidity is less than the defined change threshold for the given space; and generating an alarm in the event the change is greater than the defined change threshold.
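A minimal sketch reconstructed from the claim language above (the limit and threshold values are hypothetical):

```python
# Warning when relative humidity leaves the defined limit; alarm when its
# change additionally exceeds the defined change threshold.
def check_cooling_loss(rh_now, rh_prev, rh_limit=(35.0, 60.0), change_threshold=5.0):
    """Returns (warning, alarm) for one relative-humidity reading [%]."""
    warning = not (rh_limit[0] <= rh_now <= rh_limit[1])
    alarm = warning and abs(rh_now - rh_prev) > change_threshold
    return warning, alarm

print(check_cooling_loss(rh_now=67.0, rh_prev=55.0))   # (True, True)
```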
NASA Lewis Steady-State Heat Pipe Code Architecture
NASA Technical Reports Server (NTRS)
Mi, Ye; Tower, Leonard K.
2013-01-01
NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of the operating temperature and the operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options, as well as user-defined options, can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operation in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained, and flowcharts of the key subroutines are given.
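A minimal sketch of the kind of wick calculation such a code performs, assuming textbook Darcy and capillary-limit relations (not LERCHP's internals; all numbers are illustrative):

```python
# Darcy pressure drop in the wick and the resulting capillary operating limit.
import math

def wick_pressure_drop(q_watts, mu_l, rho_l, h_fg, k_perm, a_wick, l_eff):
    """Darcy liquid pressure drop [Pa] for heat load q_watts."""
    m_dot = q_watts / h_fg                   # liquid mass flow [kg/s]
    return mu_l * l_eff * m_dot / (rho_l * k_perm * a_wick)

def capillary_limit(sigma, r_pore, **wick):
    """Max heat load [W] where capillary pumping balances Darcy losses."""
    dp_cap = 2.0 * sigma / r_pore            # maximum capillary pressure
    dp_per_watt = wick_pressure_drop(1.0, **wick)
    return dp_cap / dp_per_watt

# Illustrative water heat pipe numbers (hypothetical)
q_max = capillary_limit(sigma=0.059, r_pore=50e-6, mu_l=2.8e-4, rho_l=958.0,
                        h_fg=2.26e6, k_perm=1e-10, a_wick=1e-5, l_eff=0.3)
```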
Interferometric measurements of a dendritic growth front solutal diffusion layer
NASA Technical Reports Server (NTRS)
Hopkins, John A.; Mccay, T. D.; Mccay, Mary H.
1991-01-01
An experimental study was undertaken to measure solutal distributions in the diffusion layer produced during the vertical directional solidification (VDS) of an ammonium chloride - water (NH4Cl-H2O) solution. Interferometry was used to obtain concentration measurements in the 1-2 millimeter region defining the diffusion layer. These measurements were fitted to an exponential form to extract the characteristic diffusion parameter for various times after the start of solidification. The diffusion parameters are within the limits predicted by steady state theory and suggest that the effective solutal diffusivity is increasing as solidification progresses.
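A minimal sketch of the fitting step, assuming SciPy and synthetic interferometric data (the experiment's exact functional form is not reproduced here): fit an exponential concentration profile near the interface and extract the characteristic diffusion-layer parameter.

```python
# Fit C(z) = C_inf + dC * exp(-z / delta) to near-interface concentration data.
import numpy as np
from scipy.optimize import curve_fit

def profile(z, c_inf, dc, delta):
    return c_inf + dc * np.exp(-z / delta)

z = np.linspace(0, 2.0, 40)                     # mm above the interface
rng = np.random.default_rng(0)
c = profile(z, 0.70, 0.05, 0.4) + rng.normal(0, 0.002, z.size)

p_opt, _ = curve_fit(profile, z, c, p0=(0.7, 0.05, 0.5))
c_inf, dc, delta = p_opt                        # delta: diffusion parameter
```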
Flight test summary of modified fuel systems
NASA Technical Reports Server (NTRS)
Barrett, B. G.
1976-01-01
Two different aircraft designs, each with two modified fuel control systems, were evaluated. Each aircraft was evaluated in a given series of defined ground and flight conditions while quantitative and qualitative observations were made. During this program, some ten flights were completed, and a total of about 13 hours of engine run time was accumulated by the two airplanes. The results of these evaluations were analyzed, with emphasis on operational and safety aspects. Ground tests of the engine alone were not able to predict acceptable limiting lean mixture settings for the flight envelopes of the Cessna Models 150 and T337.
Kerry, Matthew J; Embretson, Susan E
2017-01-01
Future time perspective (FTP) is defined as "perceptions of the future as being limited or open-ended" (Lang and Carstensen, 2002; p. 125). The construct figures prominently in both workplace and retirement domains, but the age predictions compete: workplace research predicts decreasing FTP with age, whereas retirement scholars predict increasing FTP with age. For the first time, these competing predictions are pitted against each other in an experimental manipulation of subjective life expectancy (SLE). A sample of N = 207 older adults (age 45-60) working full-time (>30 h/week) were randomly assigned to SLE questions framed as either 'Live-to' or 'Die-by' to evaluate the competing predictions for FTP. Results indicate general support for decreasing FTP with age, indicated by independent-sample t-tests showing lower FTP in the 'Die-by' framing condition. Further general linear model analyses were conducted to test for interaction effects of retirement planning with the experimental framings on FTP and intended retirement. While retirement planning buffered FTP's decrease, simple effects also revealed that retirement planning increased intentions for sooner retirement, whereas lack of planning increased intentions for later retirement. Discussion centers on the practical implications of our findings and on validity evidence for future empirical research on FTP in both workplace and retirement domains.
Pyrolysis Model Development for a Multilayer Floor Covering
McKinnon, Mark B.; Stoliarov, Stanislav I.
2015-01-01
Comprehensive pyrolysis models that are integral to computational fire codes have improved significantly over the past decade as the demand for improved predictive capabilities has increased. High fidelity pyrolysis models may improve the design of engineered materials for better fire response, the design of the built environment, and may be used in forensic investigations of fire events. A major limitation to widespread use of comprehensive pyrolysis models is the large number of parameters required to fully define a material and the lack of effective methodologies for measurement of these parameters, especially for complex materials. The work presented here details a methodology used to characterize the pyrolysis of a low-pile carpet tile, an engineered composite material that is common in commercial and institutional occupancies. The studied material includes three distinct layers of varying composition and physical structure. The methodology utilized a comprehensive pyrolysis model (ThermaKin) to conduct inverse analyses on data collected through several experimental techniques. Each layer of the composite was individually parameterized to identify its contribution to the overall response of the composite. The set of properties measured to define the carpet composite were validated against mass loss rate curves collected at conditions outside the range of calibration conditions to demonstrate the predictive capabilities of the model. The mean error between the predicted curve and the mean experimental mass loss rate curve was calculated as approximately 20% on average for heat fluxes ranging from 30 to 70 kW·m−2, which is within the mean experimental uncertainty. PMID:28793556
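A minimal sketch of the kind of curve such a calibration targets, assuming a standard single-step Arrhenius decomposition with hypothetical parameters (not the ThermaKin parameterization of the carpet layers):

```python
# Mass loss rate of a single layer at constant temperature under first-order
# Arrhenius decomposition.
import numpy as np

R = 8.314                  # gas constant [J mol^-1 K^-1]
A, E = 1e12, 1.8e5         # hypothetical pre-exponential [1/s] and activation energy [J/mol]

def mass_loss(T, m0=1.0, t_end=600.0, dt=0.1):
    k = A * np.exp(-E / (R * T))   # Arrhenius rate constant at temperature T [K]
    t = np.arange(0.0, t_end, dt)
    m = m0 * np.exp(-k * t)        # remaining mass (first-order decomposition)
    mlr = k * m                    # mass loss rate, the calibration target
    return t, mlr

t, mlr = mass_loss(T=650.0)
```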
Parry, S; Denehy, L; Berney, S; Browning, L
2014-03-01
(1) To determine the ability of the Melbourne risk prediction tool to predict a pulmonary complication as defined by the Melbourne Group Scale in a medically defined high-risk upper abdominal surgery population during the postoperative period; (2) to identify the incidence of postoperative pulmonary complications; and (3) to examine the risk factors for postoperative pulmonary complications in this high-risk population. Observational cohort study. Tertiary Australian referral centre. 50 individuals who underwent medically defined high-risk upper abdominal surgery. Presence of postoperative pulmonary complications was screened daily for seven days using the Melbourne Group Scale (Version 2). Postoperative pulmonary risk prediction was calculated according to the Melbourne risk prediction tool. (1) Melbourne risk prediction tool; and (2) the incidence of postoperative pulmonary complications. Sixty-six percent (33/50) underwent hepatobiliary or upper gastrointestinal surgery. Mean (SD) anaesthetic duration was 377.8 (165.5) minutes. The risk prediction tool classified 84% (42/50) as high risk. Overall postoperative pulmonary complication incidence was 42% (21/50). The tool was 91% sensitive and 21% specific with a 50% chance of correct classification. This is the first study to externally validate the Melbourne risk prediction tool in an independent medically defined high-risk population. A higher incidence of postoperative pulmonary complications was observed than previously reported. Results demonstrated poor validity of the tool in a population already defined medically as high risk and when applied postoperatively. This observational study has identified several important points to consider in future trials. Copyright © 2013 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Simulation of Avifauna Distributions Using Remote Sensing
NASA Technical Reports Server (NTRS)
Smith, James A.
2004-01-01
Remote sensing has proved a fruitful tool for understanding the distribution and functioning of plant communities at multiple scales and for understanding their coupling to bioclimatic and anthropogenic factors. But a similar approach to understanding the distribution and abundance of bird species, as well as many other animal organisms, is lacking. The increasing need for such understanding is evident with the recent examples of threats to human health via avian vector transmission and the increasing emphasis on global conservation biology. From experimental observations we know that species richness tends to track biological or environmental gradients. In this paper, we explore the fundamental idea that the thermal and water-relation environments of birds, as estimated from satellite data and biophysical models, can define the constraints on their occurrences and richness. We develop individual bird energy budget models and use these models to define the climate space niche of birds. Using satellite data assimilation products to drive our models, we disperse a distribution of virtual or actual bird species across the landscape in accordance with the limits expressed by their climate space niche. Here, we focus on the North American summer breeding season and give two examples to illustrate our approach. The first is a tundra-loving bird, e.g. corresponding to the Calidris genus, and the second, Myiarchus, corresponds to arid or semi-arid regions. We define these birds in terms of their basic physiology and morphological characteristics, and construct avian energetics simulations to predict their allowable metabolic ranges and climate space limits.
From Plant Hydraulics to Ecohydrology: a Case Study of Water Limitation in Aspen Forests
NASA Astrophysics Data System (ADS)
Sperry, J.; Venturas, M.; Love, D.; Anderegg, W.; Mackay, D. S.
2017-12-01
How dry must it get to threaten a standing forest? We answered this question for aspen stands in Utah with a model that predicts tree gas exchange and water status by optimizing photosynthetic gain vs. hydraulic risk from xylem cavitation. The model was parameterized for 10 aspen stands from various elevations and mountain ranges in the state of Utah, USA. The 2016 growing season was simulated from site-specific micrometeorological data under shallow (0.5 m) vs. deep (2 m) root depth scenarios starting at field capacity. The model predicted a water-limiting threshold for each stand, defined as the minimum water input required to maximize stand gas exchange. All but one stand was estimated to be near or above its threshold in 2016. In the majority of stands, spring soil moisture and summer rain fell far short of supplying the threshold requirement. Without additional water, these stands would suffer over 70% loss of tree hydraulic conductance and high mortality risk. These more water-demanding stands were predicted to rely on groundwater for 60-95% of their threshold supply. Groundwater dependence suggests a greater sensitivity to winter precipitation than to growing season conditions. All but the sparsest aspen stands would experience significant mortality risk from a 50% reduction in groundwater input. The aspen test case suggests a wider utility for linking plant hydraulics and hydrology.
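A minimal sketch of the gain-versus-risk optimization idea: choose the stomatal conductance that maximizes normalized carbon gain minus hydraulic risk. The saturating gain curve and cubic risk curve below are invented placeholders; the actual model derives these from photosynthesis and xylem vulnerability functions.

```python
import numpy as np

def optimal_conductance(g_grid, assimilation, hydraulic_risk):
    """Return the conductance maximizing normalized gain minus risk."""
    profit = assimilation / assimilation.max() - hydraulic_risk
    return g_grid[np.argmax(profit)]

g = np.linspace(0.001, 0.4, 400)      # stomatal conductance (mol m-2 s-1)
A = 20.0 * g / (g + 0.05)             # saturating carbon gain (hypothetical)
risk = (g / 0.4) ** 3                 # cavitation risk rising with water use (hypothetical)
print(optimal_conductance(g, A, risk))
```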
Equilibrium limit of thermal conduction and boundary scattering in nanostructures.
Haskins, Justin B; Kınacı, Alper; Sevik, Cem; Çağın, Tahir
2014-06-28
Determining the lattice thermal conductivity (κ) of nanostructures is especially challenging in that, aside from the phonon-phonon scattering present in large systems, the scattering of phonons from the system boundary greatly influences heat transport, particularly when system length (L) is less than the average phonon mean free path (MFP). One possible route to modeling κ in these systems is through molecular dynamics (MD) simulations, inherently including both phonon-phonon and phonon-boundary scattering effects in the classical limit. Here, we compare current MD methods for computing κ in nanostructures with both L ⩽ MFP and L ≫ MFP, referred to as mean free path constrained (cMFP) and unconstrained (uMFP), respectively. Using a (10,0) CNT (carbon nanotube) as a benchmark case, we find that while the uMFP limit of κ is well-defined through the use of equilibrium MD and the time-correlation formalism, the standard equilibrium procedure for κ is not appropriate for the treatment of the cMFP limit because of the large influence of boundary scattering. To address this issue, we define an appropriate equilibrium procedure for cMFP systems that, through comparison to high-fidelity non-equilibrium methods, is shown to be the low thermal gradient limit to non-equilibrium results. Further, as a means of predicting κ in systems having L ≫ MFP from cMFP results, we employ an extrapolation procedure based on the phenomenological, boundary scattering inclusive expression of Callaway [Phys. Rev. 113, 1046 (1959)]. Using κ from systems with L ⩽ 3 μm in the extrapolation, we find that the equilibrium uMFP κ of a (10,0) CNT can be predicted within 5%. The equilibrium procedure is then applied to a variety of carbon-based nanostructures, such as graphene flakes (GF), graphene nanoribbons (GNRs), CNTs, and icosahedral fullerenes, to determine the influence of size and environment (suspended versus supported) on κ. Concerning the GF and GNR systems, we find that the supported samples yield consistently lower values of κ and that the phonon-boundary scattering remains dominant at large lengths, with L = 0.4 μm structures exhibiting a third of the periodic result. We finally characterize the effect of shape in CNTs and fullerenes on κ, showing the angular components of conductivity in CNTs and icosahedral fullerenes are similar for a given circumference.
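As an illustration of the finite-size extrapolation step, a sketch assuming a simplified Matthiessen-type form, 1/κ(L) = 1/κ∞ + a/L, fitted to hypothetical MD data points; the paper itself uses the full Callaway expression rather than this linearized stand-in.

```python
import numpy as np

# Hypothetical finite-size MD results for a nanostructure.
L = np.array([0.1, 0.4, 1.0, 3.0])          # system length (um)
k = np.array([600., 1400., 2200., 2900.])   # conductivity (W/m/K)

# Linear fit of 1/k against 1/L; the intercept gives the L >> MFP limit.
slope, intercept = np.polyfit(1.0 / L, 1.0 / k, 1)
k_inf = 1.0 / intercept
print(f"extrapolated bulk-limit kappa ~ {k_inf:.0f} W/m/K")
```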
Wafer hot spot identification through advanced photomask characterization techniques: part 2
NASA Astrophysics Data System (ADS)
Choi, Yohan; Green, Michael; Cho, Young; Ham, Young; Lin, Howard; Lan, Andy; Yang, Richer; Lung, Mike
2017-03-01
Historically, 1D metrics such as Mean to Target (MTT) and CD Uniformity (CDU) have been adequate for mask end users to evaluate and predict the mask impact on the wafer process. However, the wafer lithographer's process margin is shrinking at advanced nodes to a point that classical mask CD metrics are no longer adequate to gauge the mask contribution to wafer process error. For example, wafer CDU error at advanced nodes is impacted by mask factors such as 3-dimensional (3D) effects and mask pattern fidelity on sub-resolution assist features (SRAFs) used in Optical Proximity Correction (OPC) models of ever-increasing complexity. To overcome the limitation of 1D metrics, there are numerous on-going industry efforts to better define wafer-predictive metrics through both standard mask metrology and aerial CD methods. Even with these improvements, the industry continues to struggle to define useful correlative metrics that link the mask to final device performance. In part 1 of this work, we utilized advanced mask pattern characterization techniques to extract potential hot spots on the mask and link them, theoretically, to issues with final wafer performance. In this paper, part 2, we complete the work by verifying these techniques at wafer level. The test vehicle (TV) that was used for hot spot detection on the mask in part 1 will be used to expose wafers. The results will be used to verify the mask-level predictions. Finally, wafer performance with predicted and verified mask/wafer condition will be shown as the result of advanced mask characterization. The goal is to maximize mask end user yield through mask-wafer technology harmonization. This harmonization will provide the necessary feedback to determine optimum design, mask specifications, and mask-making conditions for optimal wafer process margin.
Wijburg, Martijn T; Witte, Birgit I; Vennegoor, Anke; Roosendaal, Stefan D; Sanchez, Esther; Liu, Yaou; Martins Jarnalo, Carine O; Uitdehaag, Bernard Mj; Barkhof, Frederik; Killestein, Joep; Wattjes, Mike P
2016-10-01
Differentiation between progressive multifocal leukoencephalopathy (PML) and new multiple sclerosis (MS) lesions on brain MRI during natalizumab pharmacovigilance in the absence of clinical signs and symptoms is challenging but is of substantial clinical relevance. We aim to define MRI characteristics that can aid in this differentiation. Reference and follow-up brain MRIs of natalizumab-treated patients with MS with asymptomatic PML (n=21), or asymptomatic new MS lesions (n=20) were evaluated with respect to characteristics of newly detected lesions by four blinded raters. We tested the association with PML for each characteristic and constructed a multivariable prediction model which we analysed using a receiver operating characteristic (ROC) curve. Presence of punctate T2 lesions, cortical grey matter involvement, juxtacortical white matter involvement, ill-defined and mixed lesion borders towards both grey and white matter, lesion size of >3 cm, and contrast enhancement were all associated with PML. Focal lesion appearance and periventricular localisation were associated with new MS lesions. In the multivariable model, punctate T2 lesions and cortical grey matter involvement predict for PML, while focal lesion appearance and periventricular localisation predict for new MS lesions (area under the curve: 0.988, 95% CI 0.977 to 1.0, sensitivity: 100%, specificity: 80.6%). The MRI characteristics of asymptomatic natalizumab-associated PML lesions proved to differ from new MS lesions. This led to a prediction model with a high discriminating power. Careful assessment of the presence of punctate T2 lesions, cortical grey matter involvement, focal lesion appearance and periventricular localisation allows for an early diagnosis of PML. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
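A minimal sketch of the kind of multivariable prediction model described above: a logistic regression on binary lesion characteristics, scored by ROC AUC. The feature list follows the abstract, but the data below are random placeholders, not study data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Columns: punctate T2, cortical GM involvement, focal appearance, periventricular
X = rng.integers(0, 2, size=(41, 4))
y = rng.integers(0, 2, size=41)        # 1 = PML, 0 = new MS lesion (placeholder labels)

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent AUC = {auc:.3f}")     # the paper reports 0.988 on its own cohort
```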
Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands.
Deligianni, Fani; Centeno, Maria; Carmichael, David W; Clayden, Jonathan D
2014-01-01
Whole brain functional connectomes hold promise for understanding human brain activity across a range of cognitive, developmental and pathological states. So-called resting-state (rs) functional MRI studies have contributed to the brain being considered at a macroscopic scale as a set of interacting regions. Interactions are defined as correlation-based signal measurements driven by blood oxygenation level dependent (BOLD) contrast. Understanding the neurophysiological basis of these measurements is important in conveying useful information about brain function. Local coupling between BOLD fMRI and neurophysiological measurements is relatively well defined, with evidence that gamma (range) frequency EEG signals are the closest correlate of BOLD fMRI changes during cognitive processing. However, it is less clear how whole-brain network interactions relate during rest, where lower frequency signals have been suggested to play a key role. Simultaneous EEG-fMRI offers the opportunity to observe brain network dynamics with high spatio-temporal resolution. We utilize these measurements to compare the connectomes derived from rs-fMRI and EEG band limited power (BLP). Merging this multi-modal information requires the development of an appropriate statistical framework. We relate the covariance matrices of the Hilbert envelope of the source localized EEG signal across bands to the covariance matrices derived from rs-fMRI by means of statistical prediction based on sparse Canonical Correlation Analysis (sCCA). Subsequently, we identify the most prominent connections that contribute to this relationship. We compare whole-brain functional connectomes based on their geodesic distance to reliably estimate the performance of the prediction. The performance of predicting fMRI from EEG connectomes is considerably better than predicting EEG from fMRI across all bands, whereas the connectomes derived in low frequency EEG bands best resemble rs-fMRI connectivity.
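One standard choice for a geodesic distance between two connectome covariance matrices is the affine-invariant Riemannian metric, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F; the sketch below assumes that choice (the paper defines its own metric), computing it via the generalized eigenvalues of (B, A).

```python
import numpy as np
from scipy.linalg import eigvalsh

def spd_geodesic_distance(A, B):
    """Affine-invariant Riemannian distance between two symmetric
    positive-definite covariance matrices A and B."""
    lam = eigvalsh(B, A)                 # generalized eigenvalues of A^{-1} B
    return np.sqrt(np.sum(np.log(lam) ** 2))
```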
Rényi entropy, abundance distribution, and the equivalence of ensembles.
Mora, Thierry; Walczak, Aleksandra M
2016-05-01
Distributions of abundances or frequencies play an important role in many fields of science, from biology to sociology, as does the Rényi entropy, which measures the diversity of a statistical ensemble. We derive a mathematical relation between the abundance distribution and the Rényi entropy, by analogy with the equivalence of ensembles in thermodynamics. The abundance distribution is mapped onto the density of states, and the Rényi entropy to the free energy. The two quantities are related in the thermodynamic limit by a Legendre transform, by virtue of the equivalence between the micro-canonical and canonical ensembles. In this limit, we show how the Rényi entropy can be constructed geometrically from rank-frequency plots. This mapping predicts that non-concave regions of the rank-frequency curve should result in kinks in the Rényi entropy as a function of its order. We illustrate our results on simple examples, and emphasize the limitations of the equivalence of ensembles when a thermodynamic limit is not well defined. Our results help choose reliable diversity measures based on the experimental accuracy of the abundance distributions in particular frequency ranges.
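For reference, the Rényi entropy of order β of a normalized abundance distribution p is H_β = (1/(1−β)) ln Σ_i p_i^β, with the Shannon entropy recovered in the limit β → 1. A direct implementation:

```python
import numpy as np

def renyi_entropy(p, beta):
    """Renyi entropy of order beta for an abundance distribution p;
    the beta = 1 (Shannon) limit is handled separately."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    if np.isclose(beta, 1.0):
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** beta)) / (1.0 - beta)
```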
A critical assessment of Mus musculus gene function prediction using integrated genomic evidence
Peña-Castillo, Lourdes; Tasan, Murat; Myers, Chad L; Lee, Hyunju; Joshi, Trupti; Zhang, Chao; Guan, Yuanfang; Leone, Michele; Pagnani, Andrea; Kim, Wan Kyu; Krumpelman, Chase; Tian, Weidong; Obozinski, Guillaume; Qi, Yanjun; Mostafavi, Sara; Lin, Guan Ning; Berriz, Gabriel F; Gibbons, Francis D; Lanckriet, Gert; Qiu, Jian; Grant, Charles; Barutcuoglu, Zafer; Hill, David P; Warde-Farley, David; Grouios, Chris; Ray, Debajyoti; Blake, Judith A; Deng, Minghua; Jordan, Michael I; Noble, William S; Morris, Quaid; Klein-Seetharaman, Judith; Bar-Joseph, Ziv; Chen, Ting; Sun, Fengzhu; Troyanskaya, Olga G; Marcotte, Edward M; Xu, Dong; Hughes, Timothy R; Roth, Frederick P
2008-01-01
Background: Several years after sequencing the human genome and the mouse genome, much remains to be discovered about the functions of most human and mouse genes. Computational prediction of gene function promises to help focus limited experimental resources on the most likely hypotheses. Several algorithms using diverse genomic data have been applied to this task in model organisms; however, the performance of such approaches in mammals has not yet been evaluated. Results: In this study, a standardized collection of mouse functional genomic data was assembled; nine bioinformatics teams used this data set to independently train classifiers and generate predictions of function, as defined by Gene Ontology (GO) terms, for 21,603 mouse genes; and the best performing submissions were combined in a single set of predictions. We identified strengths and weaknesses of current functional genomic data sets and compared the performance of function prediction algorithms. This analysis inferred functions for 76% of mouse genes, including 5,000 currently uncharacterized genes. At a recall rate of 20%, a unified set of predictions averaged 41% precision, with 26% of GO terms achieving a precision better than 90%. Conclusion: We performed a systematic evaluation of diverse, independently developed computational approaches for predicting gene function from heterogeneous data sources in mammals. The results show that currently available data for mammals allows predictions with both breadth and accuracy. Importantly, many highly novel predictions emerge for the 38% of mouse genes that remain uncharacterized. PMID:18613946
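The headline evaluation above, precision at a fixed recall of 20%, can be computed from any classifier's ranked predictions; a minimal sketch using scikit-learn (not the study's evaluation code):

```python
from sklearn.metrics import precision_recall_curve

def precision_at_recall(y_true, scores, target_recall=0.20):
    """Best precision achievable at or above a target recall, matching the
    style of evaluation quoted above (41% precision at 20% recall)."""
    precision, recall, _ = precision_recall_curve(y_true, scores)
    return precision[recall >= target_recall].max()
```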
Washington, Chad W; Derdeyn, Colin P; Dacey, Ralph G; Dhar, Rajat; Zipfel, Gregory J
2014-08-01
Studies using the Nationwide Inpatient Sample (NIS), a large ICD-9-based (International Classification of Diseases, Ninth Revision) administrative database, to analyze aneurysmal subarachnoid hemorrhage (SAH) have been limited by an inability to control for SAH severity and the use of unverified outcome measures. To address these limitations, the authors developed and validated a surrogate marker for SAH severity, the NIS-SAH Severity Score (NIS-SSS; akin to Hunt and Hess [HH] grade), and a dichotomous measure of SAH outcome, the NIS-SAH Outcome Measure (NIS-SOM; akin to modified Rankin Scale [mRS] score). Three separate and distinct patient cohorts were used to define and then validate the NIS-SSS and NIS-SOM. A cohort (n = 148,958, the "model population") derived from the 1998-2009 NIS was used for developing the NIS-SSS and NIS-SOM models. Diagnoses most likely reflective of SAH severity were entered into a regression model predicting poor outcome; model coefficients of significant factors were used to generate the NIS-SSS. Nationwide Inpatient Sample codes most likely to reflect a poor outcome (for example, discharge disposition, tracheostomy) were used to create the NIS-SOM. Data from 716 patients with SAH (the "validation population") treated at the authors' institution were used to validate the NIS-SSS and NIS-SOM against HH grade and mRS score, respectively. Lastly, 147,395 patients (the "assessment population") from the 1998-2009 NIS, independent of the model population, were used to assess performance of the NIS-SSS in predicting outcome. The ability of the NIS-SSS to predict outcome was compared with other common measures of disease severity (All Patient Refined Diagnosis Related Group [APR-DRG], All Payer Severity-adjusted DRG [APS-DRG], and DRG). The NIS-SSS significantly correlated with HH grade, and there was no statistical difference between the abilities of the NIS-SSS and HH grade to predict mRS-based outcomes. As compared with the APR-DRG, APS-DRG, and DRG, the NIS-SSS was more accurate in predicting SAH outcome (area under the curve [AUC] = 0.69, 0.71, 0.71, and 0.79, respectively). A strong correlation between NIS-SOM and mRS was found, with an agreement and kappa statistic of 85% and 0.63, respectively, when poor outcome was defined by an mRS score > 2, and 95% and 0.84 when poor outcome was defined by an mRS score > 3. Data in this study indicate that in the analysis of NIS data sets, the NIS-SSS is a valid measure of SAH severity that outperforms previous measures of disease severity, and that the NIS-SOM is a valid measure of SAH outcome. It is critically important that outcomes research in SAH using administrative data sets incorporate the NIS-SSS and NIS-SOM to adjust for neurology-specific disease severity.
'Nothing of chemistry disappears in biology': the Top 30 damage-prone endogenous metabolites.
Lerma-Ortiz, Claudia; Jeffryes, James G; Cooper, Arthur J L; Niehaus, Thomas D; Thamm, Antje M K; Frelin, Océane; Aunins, Thomas; Fiehn, Oliver; de Crécy-Lagard, Valérie; Henry, Christopher S; Hanson, Andrew D
2016-06-15
Many common metabolites are intrinsically unstable and reactive, and hence prone to chemical (i.e. non-enzymatic) damage in vivo. Although this fact is widely recognized, the purely chemical side-reactions of metabolic intermediates can be surprisingly hard to track down in the literature and are often treated in an unprioritized case-by-case way. Moreover, spontaneous chemical side-reactions tend to be overshadowed today by side-reactions mediated by promiscuous ('sloppy') enzymes, even though chemical damage to metabolites may be even more prevalent than damage from enzyme sloppiness, has similar outcomes, and is held in check by similar biochemical repair or pre-emption mechanisms. To address these limitations and imbalances, here we draw together and systematically integrate information from the (bio)chemical literature, from cheminformatics, and from genome-scale metabolic models to objectively define a 'Top 30' list of damage-prone metabolites. A foundational part of this process was to derive general reaction rules for the damage chemistries involved. The criteria for a 'Top 30' metabolite included predicted chemical reactivity, essentiality, and occurrence in diverse organisms. We also explain how the damage chemistry reaction rules ('operators') are implemented in the Chemical-Damage-MINE (CD-MINE) database (minedatabase.mcs.anl.gov/#/top30) to provide a predictive tool for many additional potential metabolite damage products. Lastly, we illustrate how defining a 'Top 30' list can drive genomics-enabled discovery of the enzymes of previously unrecognized damage-control systems, and how applying chemical damage reaction rules can help identify previously unknown peaks in metabolomics profiles. © 2016 The Author(s). Published by Portland Press Limited on behalf of the Biochemical Society.
Human Immunity and the Design of Multi-Component, Single Target Vaccines
Saul, Allan; Fay, Michael P.
2007-01-01
Background Inclusion of multiple immunogens to target a single organism is a strategy being pursued for many experimental vaccines, especially where it is difficult to generate a strongly protective response from a single immunogen. Although there are many human vaccines that contain multiple defined immunogens, in almost every case each component targets a different pathogen. As a consequence, there is little practical experience for deciding where the increased complexity of vaccines with multiple defined immunogens targeting a single pathogen will be justifiable. Methodology/Principal Findings A mathematical model, with immunogenicity parameters derived from a database of human responses to established vaccines, was used to predict the increase in the efficacy and the proportion of the population protected resulting from addition of further immunogens. The gains depended on the relative protection and the range of responses in the population to each immunogen, and also on the correlation of the responses between immunogens. In most scenarios modeled, the gain in overall efficacy obtained by adding more immunogens was comparable to gains obtained from a single immunogen through the use of better formulations or adjuvants. Multi-component single target vaccines were more effective at decreasing the proportion of poor responders than at increasing the overall efficacy of the vaccine in a population. Conclusions/Significance Inclusion of a limited number of antigens in a vaccine aimed at targeting a single organism will increase efficacy, but the gains are relatively modest, and for a practical vaccine there are constraints that are likely to limit multi-component single target vaccines to a small number of key antigens. The model predicts that this type of vaccine will be most useful where the critical issue is the reduction in the proportion of poor responders. PMID:17786221
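A Monte Carlo sketch of the modeling idea: draw correlated log-normal responses to k immunogens and count an individual as protected when the combined response exceeds a threshold. Every parameter value below (correlation, threshold, additive-protection assumption) is illustrative rather than taken from the paper's immunogenicity database.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n, rho = 3, 100_000, 0.5
# Equicorrelated covariance on the log scale: unit variances, correlation rho.
cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)
log_titers = rng.multivariate_normal(np.zeros(k), cov, size=n)
combined = np.exp(log_titers).sum(axis=1)   # additive-protection assumption
efficacy = np.mean(combined > 4.0)          # hypothetical protective threshold
print(f"protected fraction with {k} immunogens: {efficacy:.2f}")
```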
Development of a real-time system for ITER first wall heat load control
NASA Astrophysics Data System (ADS)
Anand, Himank; de Vries, Peter; Gribov, Yuri; Pitts, Richard; Snipes, Joseph; Zabeo, Luca
2017-10-01
The steady state heat flux on the ITER first wall (FW) panels is limited by the heat removal capacity of the water cooling system. In the case of off-normal events (e.g. plasma displacement during H-L transitions), the heat loads are predicted to exceed the design limits (2-4.7 MW/m2). Intense heat loads are predicted on the FW even well before the burning plasma phase. Thus, a real-time (RT) FW heat load control system is mandatory from early plasma operation of the ITER tokamak. A heat load estimator based on the RT equilibrium reconstruction has been developed for the plasma control system (PCS). A scheme estimating the energy state for prescribed gaps, defined as the distances between the last closed flux surface (LCFS)/separatrix and the FW, is presented. The RT energy state is determined by the product of a weighted function of gap distance and the power crossing the plasma boundary. In addition, a heat load estimator assuming a simplified FW geometry and a parallel heat transport model in the scrape-off layer (SOL), benchmarked against a full 3-D magnetic field line tracer, is also presented.
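A sketch of the energy-state product described above: a weighting function of the gap distances multiplied by the power crossing the plasma boundary. The exponential weight and the decay length are assumptions chosen for illustration (shorter gaps weigh more, in the spirit of SOL heat-flux decay), not the PCS algorithm itself.

```python
import numpy as np

def energy_state(gaps_m, p_sol_mw, lambda_m=0.05, weights=None):
    """Per-gap energy-state estimate: power crossing the boundary times an
    assumed exponential weight of the LCFS/separatrix-to-wall gap distance."""
    gaps = np.asarray(gaps_m, dtype=float)
    w = np.exp(-gaps / lambda_m)          # nearer wall -> larger weight
    if weights is not None:
        w = w * np.asarray(weights)       # optional per-gap calibration factors
    return p_sol_mw * w
```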
Effects of High-Density Impacts on Shielding Capability
NASA Technical Reports Server (NTRS)
Christiansen, Eric L.; Lear, Dana M.
2014-01-01
Spacecraft are shielded from micrometeoroids and orbital debris (MMOD) impacts to meet requirements for crew safety and/or mission success. In the past, orbital debris particles have been considered to be composed entirely of aluminum (medium-density material) for the purposes of MMOD shielding design and verification. Meteoroids have been considered to be low-density porous materials, with an average density of 1 g/cu cm. Recently, NASA released a new orbital debris environment model, referred to as ORDEM 3.0, that indicates orbital debris contains a substantial fraction of high-density material for which steel is used in MMOD risk assessments [Ref.1]. Similarly, an update to the meteoroid environment model is also under consideration to include a high-density component of that environment. This paper provides results of hypervelocity impact tests and hydrocode simulations on typical spacecraft MMOD shields using steel projectiles. It was found that previous ballistic limit equations (BLEs) that define the protection capability of the MMOD shields did not predict the results from the steel impact tests and hydrocode simulations (typically, the predictions from these equations were too optimistic). The ballistic limit equations required updates to more accurately represent shield protection capability from the range of densities in the orbital debris environment. Ballistic limit equations were derived from the results of the work and are provided in the paper.
Ballistic Limit Equation for Single Wall Titanium
NASA Technical Reports Server (NTRS)
Ratliff, J. M.; Christiansen, Eric L.; Bryant, C.
2009-01-01
Hypervelocity impact tests and hydrocode simulations were used to determine the ballistic limit equation (BLE) for perforation of a titanium wall, as a function of wall thickness. Two titanium alloys were considered, and separate BLEs were derived for each. Tested wall thicknesses ranged from 0.5 mm to 2.0 mm. The single-wall damage equation of Cour-Palais [ref. 1] was used to analyze the Ti wall's shielding effectiveness. It was concluded that the Cour-Palais single-wall equation produced a non-conservative prediction of the ballistic limit for the Ti shield. The inaccurate prediction was not a particularly surprising result; the Cour-Palais single-wall BLE contains shield material properties as parameters, but it was formulated only from tests of different aluminum alloys. Single-wall Ti shield tests were run (thicknesses of 2.0 mm, 1.5 mm, 1.0 mm, and 0.5 mm) on Ti 15-3-3-3 material custom cut from rod stock. Hypervelocity impact (HVI) tests were used to establish the failure threshold empirically, using the additional constraint that the damage scales with impact energy, as was indicated by hydrocode simulations. The criterion for shield failure was defined as no detached spall from the shield back surface during HVI. Based on the test results, which confirmed an approximately energy-dependent shield effectiveness, the Cour-Palais equation was modified.
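A sketch of fitting a generic power-law single-wall BLE, d_crit = A·t^α·v^β, to hypervelocity test results by least squares in log space. This generic form is a simplified stand-in for the modified Cour-Palais equation (which carries specific material-property terms), and the data values are hypothetical.

```python
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0])          # wall thickness (mm), hypothetical
v = np.array([6.8, 7.0, 6.9, 7.1])          # impact velocity (km/s), hypothetical
d_crit = np.array([0.30, 0.55, 0.78, 1.00]) # threshold projectile diameter (mm), hypothetical

# Linear least squares on ln(d_crit) = ln(A) + alpha*ln(t) + beta*ln(v).
X = np.column_stack([np.ones_like(t), np.log(t), np.log(v)])
coef, *_ = np.linalg.lstsq(X, np.log(d_crit), rcond=None)
A, alpha, beta = np.exp(coef[0]), coef[1], coef[2]
print(f"d_crit ~ {A:.2f} * t^{alpha:.2f} * v^{beta:.2f}")
```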
Abramov, Yuriy A
2015-06-01
The main purpose of this study is to define the major limiting factor in the accuracy of quantitative structure-property relationship (QSPR) models of the thermodynamic intrinsic aqueous solubility of drug-like compounds. To do this, the thermodynamic intrinsic aqueous solubility was indirectly "measured" from the contributions of the solid state, ΔGfus, and non-solid state, ΔGmix, properties, which are estimated by the corresponding QSPR models. The QSPR models of the ΔGfus and ΔGmix properties were built based on a set of drug-like compounds with available accurate measurements of fusion and thermodynamic solubility properties. For consistency, the ΔGfus and ΔGmix models were developed using similar algorithms and descriptor sets, and validated against similar test compounds. Analysis of the relative performances of these two QSPR models clearly demonstrates that the solid state contribution is the limiting factor in the accuracy and predictive power of QSPR models of thermodynamic intrinsic solubility. The performed analysis outlines the necessity of developing new descriptor sets for an accurate description of the long-range order (periodicity) phenomenon in the crystalline state. The proposed approach to the analysis of limitations, and the suggestions for improvement of QSPR-type models, may be generalized to other applications in the pharmaceutical industry.
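The decomposition implies that once the two contributions are predicted, the intrinsic solubility follows from ln S = −(ΔGfus + ΔGmix)/(RT). A schematic implementation, with the reference states and units treated as assumptions rather than the paper's exact conventions:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def intrinsic_solubility(dG_fus, dG_mix, T=298.15):
    """Intrinsic solubility reconstructed from the two modeled free-energy
    contributions (J/mol), assuming ln S = -(dG_fus + dG_mix) / (R T)."""
    return np.exp(-(dG_fus + dG_mix) / (R * T))
```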
NASA Astrophysics Data System (ADS)
Pizzuto, J. E.; Skalak, K.; Karwan, D. L.
2017-12-01
Transport of suspended sediment and sediment-borne constituents (here termed fluvial particles) through large river systems can be significantly influenced by episodic storage in floodplains and other alluvial deposits. Geomorphologists quantify the importance of storage using sediment budgets, but these data alone are insufficient to determine how storage influences the routing of fluvial particles through river corridors across large spatial scales. For steady state systems, models that combine sediment budget data with "waiting time distributions" (to define how long deposited particles remain stored until being remobilized) and velocities during transport events can provide useful predictions. Limited field data suggest that waiting time distributions are well represented by power laws, extending from <1 to >10⁴ years, while the probability of storage defined by sediment budgets varies from 0.1 km⁻¹ for small drainage basins to 0.001 km⁻¹ for the world's largest watersheds. Timescales of particle delivery from large watersheds are determined by storage rather than by transport processes, with most particles requiring 10²-10⁴ years to reach the basin outlet. These predictions suggest that erosional "signals" induced by climate change, tectonics, or anthropogenic activity will be transformed by storage before delivery to the outlets of large watersheds. In particular, best management practices (BMPs) implemented in upland source areas, designed to reduce the loading of fluvial particles to estuarine receiving waters, will not achieve their intended benefits for centuries (or longer). For transient systems, waiting time distributions cannot be constant, but will vary as portions of transient sediment "pulses" enter and are later released from storage. The delivery of sediment pulses under transient conditions can be predicted by adopting the hypothesis that the probability of erosion of stored particles will decrease with increasing "age" (where age is defined as the elapsed time since deposition). Then, waiting time and age distributions for stored particles become predictions based on the architecture of alluvial storage and the tendency for erosional processes to preferentially remove younger deposits, improving assessment of watershed BMPs and other important applications.
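A Monte Carlo sketch of the steady-state routing model described above, combining a per-kilometer storage probability with power-law (Pareto) waiting times. The storage probability follows the range quoted in the text; the transport velocity, minimum waiting time, and tail exponent are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def delivery_time(reach_km=1000.0, p_store_per_km=0.01, v_km_per_yr=100.0,
                  tail=1.5, t_min_yr=1.0):
    """Travel time of one particle: exponential distances between storage
    events, Pareto-distributed waiting times while stored."""
    t, x = 0.0, 0.0
    while x < reach_km:
        step = min(rng.exponential(1.0 / p_store_per_km), reach_km - x)
        t += step / v_km_per_yr                     # time in active transport
        x += step
        if x < reach_km:                            # particle enters storage
            t += t_min_yr * (1 + rng.pareto(tail))  # power-law waiting time
    return t

times = np.array([delivery_time() for _ in range(2000)])
print(f"median delivery time ~ {np.median(times):.0f} yr")
```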
Prowess - A Software Model for the Ooty Wide Field Array
NASA Astrophysics Data System (ADS)
Marthi, Visweshwar Ram
2017-03-01
One of the scientific objectives of the Ooty Wide Field Array (OWFA) is to observe the redshifted H i emission from z ~ 3.35. Although predictions spell out optimistic outcomes in reasonable integration times, these studies were based purely on analytical assumptions, without accounting for limiting systematics. A software model for OWFA has been developed with a view to understanding the instrument-induced systematics. This model has been implemented through a suite of programs, together called Prowess, which has been conceived in the dual role of an emulator and observatory data analysis software. The programming philosophy followed in building Prowess enables a general user to define their own set of functions and add new functionality. This paper describes a co-ordinate system suitable for OWFA in which the baselines are defined. The foregrounds are simulated from their angular power spectra. The visibilities are then computed from the foregrounds. These visibilities are then used for further processing, such as calibration and power spectrum estimation. The package allows for rich visualization features in multiple output formats in an interactive fashion, giving the user an intuitive feel for the data. Prowess has been extensively used for numerical predictions of the foregrounds for the OWFA H i experiment.
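A minimal flat-sky sketch of the emulation chain described above: draw a Gaussian random foreground sky from an assumed angular power spectrum, then evaluate visibilities by a direct Fourier sum over the sky. Prowess implements the full OWFA geometry and pipeline, so everything here (grid size, spectrum shape, baselines) is schematic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
lm = np.linspace(-0.05, 0.05, n)                # direction cosines (small field)
ell = np.fft.fftfreq(n, d=lm[1] - lm[0]) * 2 * np.pi
LX, LY = np.meshgrid(ell, ell)
cl = 1.0 / (np.hypot(LX, LY) + 50.0) ** 2.5     # assumed power-law angular spectrum

# Gaussian random sky with (approximately) the assumed power spectrum.
sky = np.fft.ifft2(np.sqrt(cl) * np.fft.fft2(rng.standard_normal((n, n)))).real

def visibility(u, v):
    """Direct Fourier sum V(u, v) = sum_lm I(l, m) exp(-2*pi*i*(u*l + v*m))."""
    L, M = np.meshgrid(lm, lm)
    return np.sum(sky * np.exp(-2j * np.pi * (u * L + v * M)))

print(visibility(u=10.0, v=0.0))
```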
Mechanistic materials modeling for nuclear fuel performance
Tonks, Michael R.; Andersson, David; Phillpot, Simon R.; ...
2017-03-15
Fuel performance codes are critical tools for the design, certification, and safety analysis of nuclear reactors. However, their ability to predict fuel behavior under abnormal conditions is severely limited by their considerable reliance on empirical materials models correlated to burn-up (a measure of the number of fission events that have occurred, but not a unique measure of the history of the material). In this paper, we propose a different paradigm for fuel performance codes to employ mechanistic materials models that are based on the current state of the evolving microstructure rather than burn-up. In this approach, a series of state variables are stored at material points and define the current state of the microstructure. The evolution of these state variables is defined by mechanistic models that are functions of fuel conditions and other state variables. The material properties of the fuel and cladding are determined from microstructure/property relationships that are functions of the state variables and the current fuel conditions. Multiscale modeling and simulation is being used in conjunction with experimental data to inform the development of these models. Finally, this mechanistic, microstructure-based approach has the potential to provide a more predictive fuel performance capability, but will require a team of researchers to complete the required development and to validate the approach.
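A schematic of the proposed paradigm: a state variable evolving at a material point by a mechanistic rate law that depends on local conditions, with a property computed from the state rather than from burn-up. Both the rate law and the property closure below are invented placeholders, not models from any fuel performance code.

```python
import numpy as np

def evolve_state(S, T, fission_rate, dt):
    """One explicit time step for a single state variable S[0], e.g. a gas
    bubble density: production by fission minus thermally activated loss.
    All coefficients are hypothetical."""
    dS0 = 1e-3 * fission_rate - 1e2 * np.exp(-5000.0 / T) * S[0]
    return np.array([S[0] + dS0 * dt])

def thermal_conductivity(S, T):
    """Property as a closure over the microstructure state (hypothetical)."""
    return 5.0 / (1.0 + 0.5 * S[0]) * (300.0 / T) ** 0.5

S = np.array([0.0])
for _ in range(1000):
    S = evolve_state(S, T=1200.0, fission_rate=1.0, dt=1.0)
print(thermal_conductivity(S, T=1200.0))
```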
Zones of life in the subsurface of hydrothermal vents: A synthesis
NASA Astrophysics Data System (ADS)
Larson, B. I.; Houghton, J.; Meile, C. D.
2011-12-01
Subsurface microbial communities in Mid-ocean Ridge (MOR) hydrothermal systems host a wide array of unique metabolic strategies, but the spatial distribution of biogeochemical transformations is poorly constrained. Here we present an approach that reexamines chemical measurements from diffuse fluids with models of convective transport to delineate likely reaction zones. Chemical data have been compiled from bare basalt surfaces at a wide array of mid-ocean ridge systems, including 9°N, East Pacific Rise, Axial Seamount, Juan de Fuca, and Lucky Strike, Mid-Atlantic Ridge. Co-sampled end-member fluid from Ty (EPR) was used to constrain reaction path models that define diffuse fluid compositions as a function of temperature. The degree of mixing between hot vent fluid (350 deg. C) and seawater (2 deg. C) governs fluid temperature; in the models, Fe-oxide mineral precipitation is suppressed and aqueous redox reactions are prevented from equilibrating, consistent with sluggish kinetics. Quartz and pyrite are predicted to precipitate, consistent with field observations. Most reported samples of diffuse fluids from EPR and Axial Seamount fall along the same predicted mixing line only when pyrite precipitation is suppressed, but Lucky Strike fluids do not follow the same trend. The predicted fluid composition as a function of temperature is then used to calculate the free energy available to autotrophic microorganisms for a variety of catabolic strategies in the subsurface. Finally, the relationship between temperature and free energy is combined with modeled temperature fields (Lowell et al., 2007, Geochem. Geophys. Geosyst.) over a 500 m x 500 m region extending downward from the seafloor and outward from the high temperature focused hydrothermal flow to define areas that are energetically most favorable for a given metabolic process as well as below the upper temperature limit for life (~120 deg. C). In this way, we can expand the relevance of geochemical model predictions of bioenergetics by predicting functionally defined 'Zones of Life' and placing them spatially within the boundary of the 120 deg. C isotherm, estimating the extent of the subsurface biosphere beneath mid-ocean ridge hydrothermal systems. Preliminary results indicate that methanogenesis yields the most energy per kg of vent fluid, consistent with the elevated CH4(aq) seen at all three sites, but may be constrained by temperatures too hot for microbial life, while available energy from the oxidation of Fe(II) peaks near regions of the crust that are more hospitable.
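A two-endmember mixing sketch of the kind used to parameterize such diffuse-fluid calculations: temperature and a conservative species concentration as linear functions of the vent-fluid fraction, with the fraction at the ~120 deg. C limit for life located on the mixing line. Endmember temperatures follow the abstract; the concentrations and the conservative (non-reacting) treatment are simplifying assumptions.

```python
import numpy as np

f = np.linspace(0.0, 1.0, 101)        # fraction of 350 deg. C endmember fluid
T = f * 350.0 + (1.0 - f) * 2.0       # mixed-fluid temperature (deg. C)
c_vent, c_sw = 10.0, 0.0              # hypothetical endmember concentrations (mmol/kg)
c = f * c_vent + (1.0 - f) * c_sw     # conservative mixing of the species

i = np.searchsorted(T, 120.0)         # first grid point at or above 120 deg. C
print(f"~120 deg. C reached at vent-fluid fraction f = {f[i]:.2f}")
```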
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Weili; Department of Radiation Oncology, the Fourth Affiliated Hospital, China Medical University, Shenyang; Xu, Yaping
2013-08-01
Purpose: This study aimed to compare lung dose–volume histogram (DVH) parameters, such as mean lung dose (MLD) and the lung volume receiving ≥20 Gy (V20), for commonly used definitions of normal lung in terms of tumor/target subtraction, and to determine to what extent they differ in predicting radiation pneumonitis (RP). Methods and Materials: One hundred lung cancer patients treated with definitive radiation therapy were assessed. The gross tumor volume (GTV) and clinical planning target volume (PTVc) were defined by the treating physician and dosimetrist. For this study, the clinical target volume (CTV) was defined as the GTV with an 8-mm uniform expansion, and the PTV was defined as the CTV with an 8-mm uniform expansion. Lung DVHs were generated with exclusion of targets: (1) GTV (DVH_G); (2) CTV (DVH_C); (3) PTV (DVH_P); and (4) PTVc (DVH_Pc). The lung DVHs, V20s, and MLDs from each of the 4 methods were compared, as was their significance in predicting radiation pneumonitis of grade 2 or greater (RP2). Results: There are significant differences in dosimetric parameters among the various definition methods (all P<.05). The mean and maximum differences in V20 are 4.4% and 12.6% (95% confidence interval, 3.6%-5.1%), respectively. The mean and maximum differences in MLD are 3.3 Gy and 7.5 Gy (95% confidence interval, 1.7-4.8 Gy), respectively. MLDs of all methods are highly correlated with each other and significantly correlated with clinical RP2, although V20s are not. For RP2 prediction, on the receiver operating characteristic curve, MLD from DVH_G (MLD_G) has a greater area under the curve than MLD from DVH_C (MLD_C) or DVH_P (MLD_P). Limiting RP2 to 30%, the threshold is 22.4, 20.6, and 18.8 Gy for MLD_G, MLD_C, and MLD_P, respectively. Conclusions: The differences in MLD and V20 from various lung definitions are significant. MLD from the GTV exclusion method may be more accurate in predicting clinically significant radiation pneumonitis.
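A minimal sketch of computing MLD and V20 for "normal lung" under any of the four target-exclusion definitions compared above, given voxel arrays of dose and boolean masks; the uniform-voxel-volume assumption is an illustration-level simplification.

```python
import numpy as np

def lung_dvh_metrics(dose_gy, lung_mask, target_mask):
    """Mean lung dose (MLD, Gy) and V20 (%) for normal lung, defined as the
    lung with a target volume (GTV, CTV, PTV, or PTVc) excluded. Assumes
    uniform voxel volumes."""
    normal_lung = lung_mask & ~target_mask
    d = dose_gy[normal_lung]
    mld = d.mean()
    v20 = 100.0 * np.mean(d >= 20.0)   # % of normal-lung volume receiving >= 20 Gy
    return mld, v20
```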
Gureckis, Todd M.; Love, Bradley C.
2009-01-01
We evaluate two broad classes of cognitive mechanisms that might support the learning of sequential patterns. According to the first, learning is based on the gradual accumulation of direct associations between events based on simple conditioning principles. The other view describes learning as the process of inducing the transformational structure that defines the material. Each of these learning mechanisms predict differences in the rate of acquisition for differently organized sequences. Across a set of empirical studies, we compare the predictions of each class of model with the behavior of human subjects. We find that learning mechanisms based on transformations of an internal state, such as recurrent network architectures (e.g., Elman, 1990), have difficulty accounting for the pattern of human results relative to a simpler (but more limited) learning mechanism based on learning direct associations. Our results suggest new constraints on the cognitive mechanisms supporting sequential learning behavior. PMID:20396653
Continuous quantum measurement and the quantum to classical transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Tanmoy; Habib, Salman; Jacobs, Kurt
2003-04-01
While ultimately they are described by quantum mechanics, macroscopic mechanical systems are nevertheless observed to follow the trajectories predicted by classical mechanics. Hence, in the regime defining macroscopic physics, the trajectories of the correct classical motion must emerge from quantum mechanics, a process referred to as the quantum to classical transition. Extending previous work [Bhattacharya, Habib, and Jacobs, Phys. Rev. Lett. 85, 4852 (2000)], here we elucidate this transition in some detail, showing that once the measurement processes that affect all macroscopic systems are taken into account, quantum mechanics indeed predicts the emergence of classical motion. We derive inequalities that describe the parameter regime in which classical motion is obtained, and provide numerical examples. We also demonstrate two further important properties of the classical limit: first, that multiple observers all agree on the motion of an object, and second, that classical statistical inference may be used to correctly track the classical motion.
RACE-SPECIFIC TRANSITION PATTERNS AMONG ALCOHOL USE CLASSES IN ADOLESCENT GIRLS
Dauber, Sarah E.; Paulson, James F.; Leiferman, Jenn A.
2010-01-01
We used data from the National Longitudinal Study of Adolescent Health to examine transitions among alcohol use classes in 2225 White and African American (AA) adolescent girls, and race differences in predictors of transition into and out of problematic drinking classes. Latent class analysis confirmed four classes for White girls and three for AA girls, defined in a previous study. Latent transition analysis revealed more stable abstainers and decreasing alcohol use among AA girls, and more increasing alcohol use among White girls, though stable abstainers were the largest group among both races. Increasing use was predicted by delinquency, academic misbehavior, substance use, and peer support for White girls, and by older age and delinquency for AA girls. Decreasing use was predicted by older age and depressive symptoms for White girls, and by family relationship quality and substance use for AA girls. Study limitations and implications of the findings are discussed. PMID:20708254
A Robust Compositional Architecture for Autonomous Systems
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara
2006-01-01
Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing the robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.
Da, Yang
2015-12-18
The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements, such as all genes of the genome, can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation, based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the limiting number of effects is 2q - 1 for additive effects and the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. The genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and the genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly uses haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. The multi-allelic genetic partition fills a theoretical gap by providing general formulations for partitioning multi-allelic genotypic values, and it provides a haplotype method based on the quantitative genetics model towards the utilization of functional and structural genomic information for genomic prediction and estimation.
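A sketch of the multi-allelic coding: with h haplotype 'alleles', additive values are parameterized by h - 1 effects, so an additive model matrix can count copies of each non-reference haplotype per individual, and a genomic relationship matrix follows from it. The centering and scaling below are one common convention, not necessarily the paper's.

```python
import numpy as np

def additive_model_matrix(hap_pairs, h):
    """Count copies (0/1/2) of each of the h - 1 non-reference haplotypes;
    haplotype h - 1 (the last one) serves as the reference."""
    n = len(hap_pairs)
    Z = np.zeros((n, h - 1))
    for i, (a1, a2) in enumerate(hap_pairs):
        for a in (a1, a2):
            if a < h - 1:
                Z[i, a] += 1.0
    return Z

hap_pairs = [(0, 1), (1, 1), (0, 2), (2, 2)]  # haplotype pairs for 4 individuals
Z = additive_model_matrix(hap_pairs, h=3)
Zc = Z - Z.mean(axis=0)                       # column-center
G_add = Zc @ Zc.T / Z.shape[1]                # genomic additive relationship matrix
```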
The practice of prediction: What can ecologists learn from applied, ecology-related fields?
Pennekamp, Frank; Adamson, Matthew; Petchey, Owen L; Poggiale, Jean-Christophe; Aguiar, Maira; Kooi, Bob W.; Botkin, Daniel B.; DeAngelis, Donald L.
2017-01-01
The pervasive influence of human-induced global environmental change affects biodiversity across the globe, and there is great uncertainty as to how the biosphere will react on short and longer time scales. To adapt to what the future holds and to manage the impacts of global change, scientists need to predict the expected effects with some confidence and communicate these predictions to policy makers. However, recent reviews found that we currently lack a clear understanding of how predictable ecology is, with views ranging from mostly unpredictable to potentially predictable, at least over short time frames. In applied, ecology-related fields, however, predictions are more commonly formulated and reported, as well as evaluated in hindsight, potentially allowing one to define baselines of predictive proficiency in these fields. We searched the literature for representative case studies in these fields and collected information about modeling approaches, target variables of prediction, predictive proficiency achieved, and the availability of data to parameterize predictive models. We find that some fields such as epidemiology achieve high predictive proficiency, but even in the more predictive fields proficiency is evaluated in different ways. Both phenomenological and mechanistic approaches are used in most fields, but differences are often small, with no clear superiority of one approach over the other. Data availability is limiting in most fields, with long-term studies being rare and detailed data for parameterizing mechanistic models in short supply. We suggest that ecologists adopt a more rigorous approach to reporting and assessing predictive proficiency, and embrace the challenges of real-world decision making to strengthen the practice of prediction in ecology.
Prasad, M; Chinnaswamy, G; Arora, B; Vora, T; Hawaldar, R; Banavali, S
2014-01-01
Risk stratification of patients with febrile neutropenia (FN) into those at "High Risk" and "Low Risk" of developing complications helps in making decisions regarding optimal treatment, such as whether to treat with oral or intravenous antibiotics, whether to treat as inpatient or outpatient, and how long to treat. Risk predictors obtained from Western studies of pediatric FN are unlikely to be relevant to low- and middle-income countries (LMICs). Our study aimed to identify clinical and laboratory parameters predictive of poor outcomes in children with chemotherapy-induced FN in an LMIC. Two hundred and fifty consecutive episodes of chemotherapy-induced FN in pediatric (<15 years) patients were analyzed prospectively. Adverse outcomes were defined, as per the SPOG 2003 FN study, as serious medical complications (SMC) due to infection, microbiologically defined infection, and radiologically defined pneumonia (RDP). Variables found to be significant for adverse outcome (P < 0.05) on univariate analysis were selected for multivariate analysis. Five factors were found to independently predict adverse outcome: (a) previously documented infection in the past 6 months, (b) presence of a significant focus of infection, (c) absolute phagocyte count <100/mm³, (d) peak temperature more than 39°C in this episode of FN, and (e) fever lasting more than 5 days during this episode of FN. Identifying risk factors for adverse outcome in pediatric FN that are objective and applicable across LMICs would contribute to developing guidelines for the management of FN in resource-limited settings.
Accurate Prediction of Drug-Induced Liver Injury Using Stem Cell-Derived Populations
Szkolnicka, Dagmara; Farnworth, Sarah L.; Lucendo-Villarin, Baltasar; Storck, Christopher; Zhou, Wenli; Iredale, John P.; Flint, Oliver
2014-01-01
Despite major progress in the knowledge and management of human liver injury, there are millions of people suffering from chronic liver disease. Currently, the only cure for end-stage liver disease is orthotopic liver transplantation; however, this approach is severely limited by the supply of donor organs. Alternative approaches to restoring liver function have therefore been pursued, including the use of somatic and stem cell populations. Although such approaches are essential in developing scalable treatments, there is also an imperative to develop predictive human systems that more effectively study and/or prevent the onset of liver disease and decompensated organ function. We used a renewable human stem cell resource, from defined genetic backgrounds, and drove these cells through developmental intermediates to yield highly active, drug-inducible, and predictive human hepatocyte populations. Most importantly, stem cell-derived hepatocytes displayed equivalence to primary adult hepatocytes following incubation with known hepatotoxins. In summary, we have developed a serum-free, scalable, and shippable cell-based model that faithfully predicts the potential for human liver injury. Such a resource has direct application in human modeling and, in the future, could play an important role in developing renewable cell-based therapies. PMID:24375539
Predicting structured metadata from unstructured metadata.
Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier
2016-01-01
Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data, or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving the quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation (LDA) model to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms are most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority vote baseline. While some accuracy is lost through the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets, and it has implications for researchers interested in biomedical metadata curation and metadata prediction. © The Author(s) 2016. Published by Oxford University Press.
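A minimal version of the TF-IDF branch of the framework, with one classifier per structured metadata element; the texts, labels, and choice of classifier below are placeholders, not the study's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy free-text metadata and the structured term to predict (hypothetical).
texts = ["total RNA from mouse liver", "ChIP-seq input human K562 cells"]
labels = ["organism:mouse", "organism:human"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["RNA extracted from murine hepatocytes"]))
```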
Collins, G S; Altman, D G
2012-07-10
Early identification of colorectal cancer is an unresolved challenge, and the predictive value of single symptoms is limited. We evaluated the performance of the QCancer (Colorectal) prediction model for predicting the absolute risk of colorectal cancer in an independent UK cohort of patients from general practice records. A total of 2.1 million patients registered with a general practice surgery between 01 January 2000 and 30 June 2008, aged 30-84 years (3.7 million person-years), with 3712 colorectal cancer cases, were included in the analysis. Colorectal cancer was defined as an incident diagnosis of colorectal cancer during the 2 years after study entry. This independent, external validation of the QCancer (Colorectal) prediction model demonstrated good performance on a large cohort of general practice patients. QCancer (Colorectal) had very good discrimination, with an area under the ROC curve of 0.92 (women) and 0.91 (men), and explained 68% (women) and 66% (men) of the variation. QCancer (Colorectal) was well calibrated across all tenths of risk and over all age ranges, with predicted risks closely matching observed risks. QCancer (Colorectal) appears to be a useful tool for identifying undiagnosed cases of colorectal cancer in primary care in the United Kingdom.
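The two performance criteria reported above, discrimination (AUROC) and calibration across tenths of predicted risk, can be reproduced with standard tooling; a sketch on simulated data in which outcomes are generated at the predicted risks, so good calibration holds by construction:

```python
# External-validation metrics: AUROC plus calibration by risk decile.
# The cohort here is simulated, not the QCancer validation data.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
risk = rng.beta(1, 500, 200_000)   # predicted 2-year risks (rare outcome)
y = rng.binomial(1, risk)          # outcomes drawn at the predicted risk

print("AUROC:", round(roc_auc_score(y, risk), 3))
obs, pred = calibration_curve(y, risk, n_bins=10, strategy="quantile")
for o, p in zip(obs, pred):
    print(f"mean predicted {p:.4f}  observed {o:.4f}")
```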
NASA Astrophysics Data System (ADS)
Dilbone, Elizabeth K.
Methods for spectrally based bathymetric mapping of rivers have mainly been developed and tested on clear-flowing, gravel-bedded channels, with limited application to turbid, sand-bedded rivers. Using hyperspectral images of the Niobrara River, Nebraska, and field-surveyed depth data, this study evaluated three methods of retrieving depth from remotely sensed data in a dynamic, sand-bedded channel. The first, regression-based approach paired in situ depth measurements and image pixel values to predict depth via Optimal Band Ratio Analysis (OBRA). The second approach used ground-based reflectance measurements to calibrate an OBRA relationship; for this approach, CASI images were atmospherically corrected to units of apparent surface reflectance using an empirical line calibration. For the final technique, we used Image-to-Depth Quantile Transformation (IDQT) to predict depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image-derived variable. OBRA yielded the lowest overall depth retrieval error (0.0047 m) and the highest observed-versus-predicted R2 (0.81). Although misalignment between field and image data was not problematic to OBRA's performance in this study, such issues present potential limitations to standard regression-based approaches like OBRA in dynamic, sand-bedded rivers. Field spectroscopy-based maps exhibited a slight shallow bias (0.0652 m) but provided reliable depth estimates for most of the study reach. IDQT had a strong deep bias but still provided informative relative depth maps that portrayed general patterns of shallow and deep areas of the channel. The over-prediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the CDF of depth. While each of the techniques tested in this study demonstrated the potential to provide accurate depth estimates in sand-bedded rivers, each method was also subject to certain constraints and limitations.
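The IDQT step described above amounts to quantile mapping between two empirical distributions; a minimal sketch, assuming the image-derived variable increases monotonically with depth and the field depths are an unbiased sample:

```python
# IDQT idea: assign each pixel the depth at its quantile in the depth CDF.
# All inputs are synthetic placeholders.
import numpy as np

def idqt(image_values, depth_samples):
    """Map each pixel's quantile in the image-variable CDF onto the depth CDF."""
    order = image_values.argsort().argsort()          # rank of each pixel
    quantiles = (order + 0.5) / image_values.size     # quantile in (0, 1)
    return np.quantile(depth_samples, quantiles)      # depth at that quantile

rng = np.random.default_rng(1)
band_ratio = rng.normal(0.0, 1.0, 10_000)   # image-derived variable, per pixel
field_depths = rng.gamma(2.0, 0.4, 500)     # surveyed depths (m), unbiased sample
depth_map = idqt(band_ratio, field_depths)
print(depth_map.min(), depth_map.max())
```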
Lacoste Jeanson, Alizé; Dupej, Ján; Villa, Chiara; Brůžek, Jaroslav
2017-01-01
Background Estimating volumes and masses of total body components is important for the study and treatment monitoring of nutrition and nutrition-related disorders, cancer, joint replacement, energy-expenditure and exercise physiology. While several equations have been offered for estimating total body components from MRI slices, no reliable and tested method exists for CT scans. For the first time, body composition data was derived from 41 high-resolution whole-body CT scans. From these data, we defined equations for estimating volumes and masses of total body AT and LT from corresponding tissue areas measured in selected CT scan slices. Methods We present a new semi-automatic approach to defining the density cutoff between adipose tissue (AT) and lean tissue (LT) in such material. An intra-class correlation coefficient (ICC) was used to validate the method. The equations for estimating the whole-body composition volume and mass from areas measured in selected slices were modeled with ordinary least squares (OLS) linear regressions and support vector machine regression (SVMR). Results and Discussion The best predictive equation for total body AT volume was based on the AT area of a single slice located between the 4th and 5th lumbar vertebrae (L4-L5) and produced lower prediction errors (|PE| = 1.86 liters, %PE = 8.77) than previous equations also based on CT scans. The LT area of the mid-thigh provided the lowest prediction errors (|PE| = 2.52 liters, %PE = 7.08) for estimating whole-body LT volume. We also present equations to predict total body AT and LT masses from a slice located at L4-L5 that resulted in reduced error compared with the previously published equations based on CT scans. The multislice SVMR predictor gave the theoretical upper limit for prediction precision of volumes and cross-validated the results. PMID:28533960
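A sketch of the single-slice regression setup described above, comparing OLS with an RBF support vector regressor under cross-validation; the slice areas and volumes below are synthetic stand-ins, not the study's data:

```python
# Predict whole-body tissue volume from a single-slice tissue area,
# comparing OLS with SVR (SVMR). Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(0)
slice_area = rng.uniform(100, 400, 41).reshape(-1, 1)         # cm^2 at L4-L5
volume = 0.08 * slice_area.ravel() + rng.normal(0, 1.5, 41)   # liters, synthetic

for name, model in [("OLS", LinearRegression()),
                    ("SVMR", SVR(kernel="rbf", C=100.0, epsilon=0.5))]:
    pred = cross_val_predict(model, slice_area, volume, cv=5)
    pe = np.abs(pred - volume)
    print(f"{name}: |PE| = {pe.mean():.2f} L, %PE = {100 * (pe / volume).mean():.2f}")
```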
Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J; Wurtele, Eve Syrkin
2013-04-01
Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data. We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolisms that generate the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software.
NASA Astrophysics Data System (ADS)
Godfrey, L. E. H.; Morganti, R.; Brienza, M.
2017-10-01
The purpose of this work is two-fold: (1) to quantify the occurrence of ultrasteep-spectrum remnant Fanaroff-Riley type II (FRII) radio galaxies in a 74 MHz flux-limited sample, and (2) to perform Monte Carlo simulations of the population of active and remnant FRII radio galaxies to confront models of remnant lobe evolution, and to provide guidance for further investigation of remnant radio galaxies. We find that fewer than 2 per cent of FRII radio galaxies with S_{74 MHz} > 1.5 Jy are candidate ultrasteep-spectrum remnants, where we define ultrasteep spectrum as a 74-1400 MHz spectral index α_{74 MHz}^{1400 MHz} > 1.2. Our Monte Carlo simulations demonstrate that models involving Sedov-like expansion in the remnant phase, resulting in rapid adiabatic energy losses, are consistent with this upper limit, and predict the existence of nearly twice as many remnants with normal (not ultrasteep) spectra in the observed frequency range as there are ultrasteep-spectrum remnants. This model also predicts an ultrasteep remnant fraction approaching 10 per cent at redshifts z < 0.5. Importantly, this model implies the lobes remain overpressured with respect to the ambient medium well after their active lifetime, in contrast with existing observational evidence that many FRII radio galaxy lobes reach pressure equilibrium with the external medium whilst still in the active phase. The predicted age distribution of remnants is a steeply decreasing function of age. In other words, young remnants are expected to be much more common than old remnants in flux-limited samples. For this reason, incorporating higher-frequency data (≳5 GHz) will be of great benefit to future studies of the remnant population.
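A toy Monte Carlo in the spirit of the population simulation described above: draw source ages, fade and steepen remnants after switch-off, apply the flux limit, and count ultrasteep remnants. Every distribution and constant below is invented for illustration only:

```python
# Toy population Monte Carlo for remnant radio galaxies in a flux-limited
# sample. All distributions and constants are invented placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
t_on = 50.0                                  # active lifetime, Myr (assumed)
age = rng.uniform(0, 300, n)                 # Myr since source birth
s74 = rng.pareto(1.5, n) + 1.0               # active-phase 74 MHz flux, Jy
alpha = np.full(n, 0.8)                      # injection spectral index

remnant = age > t_on
t_off = np.where(remnant, age - t_on, 0.0)
# Rapid adiabatic fading and spectral steepening after switch-off (toy)
s74 = s74 * np.exp(-t_off / 30.0)
alpha = alpha + 0.02 * t_off

detected = s74 > 1.5                         # flux limit of the sample
f_ultra = np.mean(remnant[detected] & (alpha[detected] > 1.2))
print(f"ultrasteep remnant fraction in flux-limited sample: {f_ultra:.3f}")
```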
Clinical characterization of children with resistant airflow obstruction, a multicenter study.
Krishnan, Sankaran; Dozor, Allen J; Bacharier, Leonard; Lang, Jason E; Irvin, Charles G; Kaminsky, David; Farber, Harold J; Gerald, Lynn; Brown, Mark; Holbrook, Janet T; Wise, Robert A; Ryu, Julie; Bose, Sonali; Yasin, Razan; Saams, Joy; Henderson, Robert J; Teague, William G
2018-05-17
To characterize a cohort of children with airflow limitation resistant to bronchodilator (BD) therapy. Pulmonary function tests performed in children 6-17 years of age at 15 centers in a clinical research consortium were screened for resistant airflow limitation, defined as a post-BD FEV1 and/or FEV1/FVC less than the lower limit of normal. Demographic and clinical data were analyzed for associations with pulmonary function. 582 children were identified. Median age was 13 years (IQR: 11, 16); 60% were male, 62% Caucasian, and 28% African-American; 19% were obese, 32% were born prematurely, and 21% were exposed to secondhand smoke. Pulmonary diagnoses included asthma (93%), prior significant pneumonia (28%), and bronchiectasis (5%). 65% reported allergic rhinitis and 11% chronic sinusitis. Subjects without a history of asthma had significantly lower post-BD FEV1 % predicted (p = 0.008), as did subjects without allergic rhinitis (p = 0.003). Children with allergic rhinitis, male sex, obesity, and Black race had better post-BD pulmonary function. After age 11 years, pulmonary function was lower in children without a history of allergic rhinitis than in those with one. The most prevalent diagnosis in children with BD-resistant airflow limitation is asthma. Allergic rhinitis and premature birth are common co-morbidities. Children without a history of asthma, as well as those with asthma but no allergic rhinitis, had lower pulmonary function. Children with BD-resistant airflow limitation may represent a sub-group of children with persistent obstruction and high risk for life-long airway disease.
Association of serum bicarbonate with incident functional limitation in older adults.
Yenchek, Robert; Ix, Joachim H; Rifkin, Dena E; Shlipak, Michael G; Sarnak, Mark J; Garcia, Melissa; Patel, Kushang V; Satterfield, Suzanne; Harris, Tamara B; Newman, Anne B; Fried, Linda F
2014-12-05
Cross-sectional studies have found that low serum bicarbonate is associated with slower gait speed. Whether bicarbonate levels independently predict the development of functional limitation has not been previously studied. Whether bicarbonate was associated with incident persistent lower-extremity functional limitation, and whether the relationship differed in individuals with and without CKD, was assessed in participants in the Health, Aging, and Body Composition study, a prospective study of well-functioning older individuals. Functional limitation was defined as difficulty in walking 0.25 miles or up 10 stairs on two consecutive reports 6 months apart in the same activity (stairs or walking). Kidney function was measured using eGFR by the Chronic Kidney Disease Epidemiology Collaboration creatinine equation, and CKD was defined as an eGFR <60 ml/min per 1.73 m(2). Serum bicarbonate was measured using arterialized venous blood gas. Cox proportional hazards analysis was used to assess the association of bicarbonate (<23, 23-25.9, and ≥26 mEq/L) with functional limitation. Mixed-model linear regression was performed to assess the association of serum bicarbonate with change in gait speed over time. Of 1544 participants, 412 developed incident persistent functional limitation events over a median 4.4 years (interquartile range, 3.1 to 4.5). Compared with ≥26 mEq/L, lower serum bicarbonate was associated with functional limitation. After adjustment for demographics, CKD, diabetes, body mass index, smoking, diuretic use, and gait speed, lower serum bicarbonate remained significantly associated with functional limitation (hazard ratio, 1.35; 95% confidence interval, 1.08 to 1.68 and hazard ratio, 1.58; 95% confidence interval, 1.12 to 2.22 for bicarbonate levels of 23 to 25.9 and <23 mEq/L, respectively). There was no significant interaction of bicarbonate with CKD. In addition, bicarbonate was not significantly associated with change in gait speed. Lower serum bicarbonate was associated with greater risk of incident, persistent functional limitation. This association was present in individuals with and without CKD. Copyright © 2014 by the American Society of Nephrology.
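A sketch of the Cox proportional-hazards analysis described above using the lifelines package; the input file and column names are hypothetical placeholders:

```python
# Cox model with categorical bicarbonate (reference: >=26 mEq/L) and the
# adjustment covariates named above. Dataframe and columns are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("habc_bicarbonate.csv")  # hypothetical cohort extract
df["bicarb_23_259"] = df["bicarbonate"].between(23, 25.9).astype(int)
df["bicarb_lt_23"] = (df["bicarbonate"] < 23).astype(int)

covars = ["bicarb_23_259", "bicarb_lt_23", "age", "ckd", "diabetes",
          "bmi", "smoking", "diuretic_use", "gait_speed"]
cph = CoxPHFitter()
cph.fit(df[covars + ["followup_years", "limitation"]],
        duration_col="followup_years", event_col="limitation")
cph.print_summary()  # hazard ratios with 95% CIs
```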
Life Extending Control. [Mechanical fatigue in reusable rocket engines]
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control is defined. Life is defined in terms of mechanical fatigue life. A brief description is given of the current approach to life prediction using a local, cyclic, stress-strain approach for a critical system component. An alternative approach to life prediction based on a continuous functional relationship to component performance is proposed. Based on cyclic life prediction, an approach to life extending control, called the Life Management Approach, is proposed. A second approach, also based on cyclic life prediction, called the implicit approach, is presented. Assuming the existence of the alternative functional life prediction approach, two additional concepts for Life Extending Control are presented.
Material electronic quality specifications for polycrystalline silicon wafers
NASA Astrophysics Data System (ADS)
Kalejs, J. P.
1994-06-01
As the use of polycrystalline silicon wafers has expanded in the photovoltaic industry, the need grows for monitoring and qualification techniques for as-grown material that can be used to optimize crystal growth and help predict solar cell performance. Particular needs are for quantitative measures, over full wafer areas, of the effects of lifetime-limiting defects and of the lifetime upgrading that takes place during solar cell processing. We review here the approaches being pursued in programs underway to develop material quality specifications for thin, as-grown Edge-defined Film-fed Growth (EFG) polycrystalline silicon wafers. These studies involve collaborations between Mobil Solar, NREL, and university-based laboratories.
Malietzis, G; Monzon, L; Hand, J; Wasan, H; Leen, E; Abel, M; Muhammad, A; Abel, P
2013-01-01
High-intensity focused ultrasound (HIFU) is a rapidly maturing technology with diverse clinical applications. In the field of oncology, the use of HIFU to non-invasively cause tissue necrosis in a defined target, a technique known as focused ultrasound surgery (FUS), has considerable potential for tumour ablation. In this article, we outline the development and underlying principles of HIFU, review its limitations and the commercially available equipment for FUS, then summarise some of the recent technological advances and experimental clinical trials that we predict will have a positive impact on extending the role of FUS in cancer therapy. PMID:23403455
A Clinical Approach to the Diagnosis of Acid-Base Disorders
Bear, Robert A.
1986-01-01
The ability to diagnose and manage acid-base disorders rapidly and effectively is essential to the care of critically ill patients. This article presents an approach to the diagnosis of pure and mixed acid-base disorders, metabolic or respiratory. The approach is based on the law of mass action as it applies to the bicarbonate buffer system (the Henderson equation), on diagnostic sub-classifications of the causes of metabolic acidosis and metabolic alkalosis, and on knowledge of the well-defined and predictable compensatory responses that attempt to limit the change in pH in each of the primary acid-base disorders. PMID:21267134
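The Henderson equation referenced above relates hydrogen ion concentration to the two measurable quantities of the bicarbonate buffer system; a quick worked consistency check on a hypothetical blood gas:

```python
# Henderson equation for the bicarbonate buffer system:
#   [H+] (nEq/L) ~= 24 * PaCO2 (mmHg) / [HCO3-] (mEq/L)
# Values below are hypothetical (compensated metabolic acidosis).
import math

paco2 = 30.0   # mmHg
hco3 = 15.0    # mEq/L
h_conc = 24.0 * paco2 / hco3           # nEq/L
ph = -math.log10(h_conc * 1e-9)        # convert nEq/L to Eq/L before log
print(f"[H+] = {h_conc:.0f} nEq/L, pH = {ph:.2f}")  # 48 nEq/L, pH ~7.32
```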
[Breakthrough cancer pain in the elderly].
Cabezón-Gutiérrez, Luis; Viloria-Jiménez, María Aurora; Pérez-Cajaraville, Juan; Álamo-González, Cecilio; López-Trigo, José Antonio; Gil-Gregorio, Pedro
Breakthrough pain is defined as an acute exacerbation of pain with rapid onset, short duration, and moderate or high intensity, occurring spontaneously or in connection with a predictable or unpredictable event despite stabilised and controlled baseline pain. However, there are doubts about the definition, terminology, epidemiology, and assessment of breakthrough pain, with no clear answers or consensus, especially in the elderly population. This non-systematic review summarises the most important aspects of breakthrough pain in the elderly, based on the limited publications available for this population group. Copyright © 2016 SEGG. Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Rammig, A.; Fleischer, K.; Lapola, D.; Holm, J.; Hoosbeek, M.
2017-12-01
Increasing atmospheric CO2 concentration is assumed to have a stimulating effect ("CO2 fertilization effect") on forest growth and resilience. However, empirical evidence for the existence and strength of such a tropical CO2 fertilization effect is scarce, and this gap is a major impediment to constraining the uncertainties in Earth System Model projections. The implications of the tropical CO2 effect are far-reaching, as it strongly influences the global carbon and water cycles, and hence future global climate. In the scope of the Amazon Free Air CO2 Enrichment (FACE) experiment, we addressed these uncertainties by assessing the CO2 fertilization effect at ecosystem scale. AmazonFACE is the first FACE experiment in an old-growth, highly diverse tropical rainforest. Here, we present a priori hypotheses for the experiment derived from a set of 12 ecosystem models. The model simulations identified key uncertainties in our understanding of limiting processes and yielded model-based hypotheses of expected ecosystem responses to elevated CO2 that can be tested directly during the experiment. Ambient model simulations compared satisfactorily with in-situ measurements of ecosystem carbon fluxes, as well as carbon, nitrogen, and phosphorus stocks. Models consistently predicted an increase in photosynthesis with elevated CO2, which declined over time due to developing limitations. The conversion of enhanced photosynthesis into biomass, and hence ecosystem carbon sequestration, varied strongly among the models due to different assumptions about nutrient limitation. Models with flexible allocation schemes consistently predicted increased investment in belowground structures to alleviate nutrient limitation, in turn accelerating turnover rates of soil organic matter. The models diverged in their predictions of carbon accumulation after 10 years of elevated CO2, mainly due to contrasting assumptions in their phosphorus-cycle representations. These differences define the expected response ratio to elevated CO2 at the AmazonFACE site and identify priorities for experimental work and model development.
Prevalence and magnitude of groundwater use by vegetation: a global stable isotope meta-analysis
Evaristo, Jaivime; McDonnell, Jeffrey J.
2017-01-01
The role of groundwater as a resource in sustaining terrestrial vegetation is widely recognized, but the global prevalence and magnitude of groundwater use by vegetation are unknown. Here we perform a meta-analysis of plant xylem water stable isotope (δ2H and δ18O, n = 7367) information from 138 published papers, representing 251 genera and 414 species of angiosperms (n = 376) and gymnosperms (n = 38). We show that the prevalence of groundwater use by vegetation (defined as the number of samples, out of a universe of plant samples, reported to have a groundwater contribution to xylem water) is 37% (95% confidence interval, 28-46%). This is across 162 sites and 12 terrestrial biomes (89% of heterogeneity explained; Q-value = 1235; P < 0.0001). However, the magnitude of the groundwater source contribution to the xylem water mixture (defined as the proportion of groundwater contribution in xylem water) is limited to 23% (95% CI, 20-26%; 95% prediction interval, 3-77%). Spatial analysis shows that the magnitude of groundwater source contribution increases with aridity. Our results suggest that while groundwater influence is globally prevalent, its proportional contribution to total terrestrial transpiration is limited. PMID:28281644
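Pooling a prevalence across heterogeneous sites, as above, is typically done with a random-effects model; a minimal DerSimonian-Laird sketch on invented per-site counts, with a logit transform for the proportions:

```python
# Random-effects (DerSimonian-Laird) pooled proportion with a 95% CI.
# Per-site counts below are invented placeholders, not the study's data.
import numpy as np

k_success = np.array([12, 30, 8, 45, 20])   # samples with groundwater signal
n_total = np.array([40, 75, 30, 90, 70])    # samples per site

p = k_success / n_total
theta = np.log(p / (1 - p))                          # logit of each proportion
var = 1.0 / k_success + 1.0 / (n_total - k_success)  # approximate logit variance

w = 1.0 / var                                        # fixed-effects weights
theta_fixed = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fixed) ** 2)           # heterogeneity statistic
tau2 = max(0.0, (Q - (len(p) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (var + tau2)                            # random-effects weights
theta_re = np.sum(w_re * theta) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
inv = lambda x: 1.0 / (1.0 + np.exp(-x))             # back-transform to proportion
lo, hi = theta_re - 1.96 * se, theta_re + 1.96 * se
print(f"pooled prevalence {inv(theta_re):.2f} (95% CI {inv(lo):.2f}-{inv(hi):.2f})")
```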
Predicting neuroblastoma using developmental signals and a logic-based model.
Kasemeier-Kulesa, Jennifer C; Schnell, Santiago; Woolley, Thomas; Spengler, Jennifer A; Morrison, Jason A; McKinney, Mary C; Pushel, Irina; Wolfe, Lauren A; Kulesa, Paul M
2018-07-01
Genomic information from human patient samples of pediatric neuroblastoma and known outcomes has led to specific gene lists put forward as high risk for disease progression. However, the reliance on gene expression correlations rather than mechanistic insight has shown limited potential and suggests a critical need for molecular network models that better predict neuroblastoma progression. In this study, we construct and simulate a molecular network of developmental genes and downstream signals in a 6-gene-input logic model that predicts a favorable/unfavorable outcome based on the outcomes of four cell states: differentiation, proliferation, apoptosis, and angiogenesis. We simulate the mis-expression of the tyrosine receptor kinases trkA and trkB, two prognostic indicators of neuroblastoma, and find differences in the number and probability distribution of steady-state outcomes. We validate the mechanistic model assumptions using RNA-seq of the SH-SY5Y human neuroblastoma cell line to define the input states and confirm the predicted outcome with antibody staining. Lastly, we apply input gene signatures from 77 published human patient samples and show that our model makes more accurate disease-outcome predictions for early-stage disease than any current neuroblastoma gene list. These findings highlight the predictive strength of a logic-based model built on developmental genes and offer a better understanding of the molecular network interactions during neuroblastoma disease progression. Copyright © 2018. Published by Elsevier B.V.
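A toy sketch of a logic-based model of this kind: enumerate all binary input states, evaluate Boolean rules for the four cell states, and classify each outcome. The rules and gene names below (beyond trkA/trkB) are invented placeholders, not the published network:

```python
# Enumerate input states of a small Boolean logic model and classify the
# outcome of each. Rules and the g3-g6 input names are hypothetical.
from itertools import product

INPUTS = ["trkA", "trkB", "g3", "g4", "g5", "g6"]  # 6-gene input layer

def cell_states(s):
    differentiation = s["trkA"] and not s["trkB"]
    proliferation = s["trkB"] or s["g3"]
    apoptosis = s["trkA"] and not s["g4"]
    angiogenesis = s["trkB"] and s["g5"]
    return differentiation, proliferation, apoptosis, angiogenesis

favorable = 0
for bits in product([0, 1], repeat=len(INPUTS)):
    s = dict(zip(INPUTS, bits))
    diff, prolif, apop, angio = cell_states(s)
    # Favorable if the network settles toward differentiation/apoptosis
    # rather than proliferation/angiogenesis (toy classification rule).
    favorable += (diff + apop) > (prolif + angio)
print(f"{favorable}/{2**len(INPUTS)} input states predicted favorable")
```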
Kerry, Matthew J.; Embretson, Susan E.
2018-01-01
Future time perspective (FTP) is defined as "perceptions of the future as being limited or open-ended" (Lang and Carstensen, 2002; p. 125). The construct figures prominently in both the workplace and retirement domains, but the age predictions compete: workplace research predicts decreasing FTP age-change, whereas retirement scholars predict increasing FTP age-change. For the first time, these competing predictions are pitted against each other in an experimental manipulation of subjective life expectancy (SLE). A sample of N = 207 older adults (age 45-60) working full-time (>30 h/week) were randomly assigned to SLE questions framed as either 'Live-to' or 'Die-by' to evaluate the competing predictions for FTP. Results generally support decreasing age-change in FTP, indicated by independent-sample t-tests showing lower FTP in the 'Die-by' framing condition. Further general linear model analyses tested for interaction effects of retirement planning with the experimental framings on FTP and intended retirement: while retirement planning buffered FTP's decrease, simple-effects analyses also revealed that retirement planning increased intentions for sooner retirement, whereas lack of planning increased intentions for later retirement. Discussion centers on the practical implications of our findings and on consequences for validity evidence in future empirical research on FTP in both the workplace and retirement domains. PMID:29375435
Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded, applicable methodology and by the wide spatiotemporal distribution of allergic reactions. Valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and affected individuals to take appropriate preemptive measures. In the present report we collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement that indicates live green vegetation in a given geographic area, together with a set of meteorological data, to develop a model capable of describing and predicting the frequency of severe allergic reactions. Our analysis retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute to the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions.
NASA Astrophysics Data System (ADS)
Held, Christian; Liewald, Mathias; Schleich, Ralf; Sindel, Manfred
2010-06-01
The use of lightweight materials offers substantial strength and weight advantages in car body design. Unfortunately, such sheet materials are more susceptible to wrinkling, springback, and fracture during press-shop operations. Forming Limit Diagrams (FLDs) are mainly used to characterize the formability of sheet material for deep-drawing processes in the automotive industry. However, new investigations at the Institute for Metal Forming Technology have shown that high-strength steel sheet and aluminum alloys exhibit increased formability when bending loads are superposed on stretching loads. Likewise, superposing shear on in-plane uniaxial or biaxial tension changes formability because of the material's crystallographic texture. Such mixed stress and strain conditions, including bending and shearing effects, can occur in deep-drawing of complex car body parts as well as in subsequent forming operations like flanging. These changes in formability cannot be described by the conventional FLC; failure criteria for such strain conditions are therefore missing from numerical simulation codes. To provide a suitable failure criterion that is easy to implement in FEA, a new semi-empirical model has been developed that accounts for the effects of bending and shearing on sheet-metal formability. This failure criterion combines the so-called cFLC (combined Forming Limit Curve), which accounts for superposed bending loads, with the SFLC (Shear Forming Limit Curve), which captures the effect of shearing on sheet-metal formability.
Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data
NASA Astrophysics Data System (ADS)
Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher
2017-05-01
Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16-day Landsat or 1- to 2-day Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models, and (3) nonlinear Support Vector Regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that Support Vector Regression with a radial basis function kernel produced the lowest error rates. Sources of error are discussed and defined. Further research is being conducted to train deep learning models to predict TOA thermal radiance.
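A sketch of the best-performing approach named above, an RBF-kernel support vector regression from reanalysis variables to TOA radiance; the inputs below are synthetic placeholders for MERRA-2 columns and the MODIS reference:

```python
# RBF-kernel SVR mapping reanalysis variables to TOA radiance.
# Inputs and target are synthetic stand-ins, not MERRA-2/MODIS data.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))   # e.g., surface temp, water vapor, ozone, ...
y = 80 + 5 * X[:, 0] - 2 * X[:, 1] + np.sin(X[:, 2]) + rng.normal(0, 0.5, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"test RMSE: {rmse:.3f} (synthetic units)")
```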
Dennerline, D.E.; Van Den Avyle, M.J.
2000-01-01
Striped bass Morone saxatilis and hybrid bass M. saxatilis x M. chrysops have been stocked to establish fisheries in many US reservoirs, but success has been limited by a poor understanding of the relations between prey biomass and predator growth and survival. To define the sizes of prey that are morphologically available, we developed predictive relationships between predator length, mouth dimensions, and expected maximum prey size; predictions were then validated using published data on sizes of clupeid prey (Dorosoma spp.) in five US reservoirs. Further, we compared the biomass of prey considered available to predators using two forms of a length-based consumption model: a previously published AP/P ratio and a revised model based on our results. Predictions of maximum prey size using predator gape width were consistent with observed prey sizes in US reservoirs. Length of consumed Dorosoma was significantly, but weakly, correlated with predator length in four of the five reservoirs (r2 = 0.006-0.336, P < 0.05), even where large prey (>150 mm TL) were abundant. (C) 2000 Elsevier Science B.V.
Trolle, Thomas; McMurtrey, Curtis P; Sidney, John; Bardet, Wilfried; Osborn, Sean C; Kaever, Thomas; Sette, Alessandro; Hildebrand, William H; Nielsen, Morten; Peters, Bjoern
2016-02-15
HLA class I-binding predictions are widely used to identify candidate peptide targets of human CD8(+) T cell responses. Many such approaches focus exclusively on a limited range of peptide lengths, typically 9 aa and sometimes 9-10 aa, despite multiple examples of dominant epitopes of other lengths. In this study, we examined whether epitope predictions can be improved by incorporating the natural length distribution of HLA class I ligands. We found that, although different HLA alleles have diverse length-binding preferences, the length profiles of ligands that are naturally presented by these alleles are much more homogeneous. We hypothesized that this is due to a defined length profile of peptides available for HLA binding in the endoplasmic reticulum. Based on this, we created a model of HLA allele-specific ligand length profiles and demonstrate how this model, in combination with HLA-binding predictions, greatly improves comprehensive identification of CD8(+) T cell epitopes. Copyright © 2016 by The American Association of Immunologists, Inc.
Gorman, Julian; Pearson, Diane; Whitehead, Peter
2008-01-01
Information on the distribution and relative abundance of species is integral to sustainable management, especially if they are to be harvested for subsistence or commerce. In northern Australia, natural landscapes are vast, centers of population few, access is difficult, and Aboriginal resource centers and communities have limited funds and infrastructure. Consequently, defining distribution and relative abundance by comprehensive ground survey is difficult and expensive. This highlights the need for simple, cheap, automated methodologies to predict the distribution of species in use, or having potential for use, in commercial enterprise. The technique applied here uses a Geographic Information System (GIS) to make predictions of probability of occurrence using an inductive modeling technique based on Bayes' theorem. The study area is in the Maningrida region, central Arnhem Land, in the Northern Territory, Australia. The species examined, Cycas arnhemica and Brachychiton diversifolius, are currently being 'wild harvested' in commercial trials, involving sale as decorative plants and use as carving wood, respectively. This study involved limited and relatively simple ground surveys requiring approximately 7 days of effort for each species. Overall model performance was evaluated using Cohen's kappa statistic. The predictive ability of the model was classified as moderate for C. arnhemica and fair for B. diversifolius. The difference in model performance can be attributed to the distribution patterns of these species: C. arnhemica tends to occur in a clumped distribution due to relatively short-distance dispersal of its large seeds and vegetative growth from long-lived rhizomes, while B. diversifolius seeds are smaller and more widely dispersed across the landscape. The output predicts trends in species distribution that are consistent with independent on-site sampling for each species and therefore should prove useful in gauging the extent of resource availability. However, some caution is needed, as the models tend to over-predict presence, which is a function of the distribution patterns and of other landscape variables, such as fire history, that were not included in the model due to limited data availability.
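An inductive Bayes presence model of the kind described above updates a prior presence probability with class-conditional frequencies of map-layer values; a minimal sketch with invented likelihoods and an assumed conditional-independence (naive Bayes) structure:

```python
# P(presence | layers) via Bayes' theorem, assuming layer values are
# conditionally independent given presence. All numbers are invented.
PRIOR = 0.2  # assumed baseline presence rate

# P(layer value | presence) and P(layer value | absence), per survey data
LIKELIHOODS = {
    "soil":  {"sandy": (0.6, 0.3), "loamy": (0.4, 0.7)},
    "veget": {"woodland": (0.7, 0.4), "grass": (0.3, 0.6)},
}

def presence_probability(cell: dict) -> float:
    num = PRIOR          # running P(presence) * product of likelihoods
    den = 1.0 - PRIOR    # running P(absence) * product of likelihoods
    for layer, value in cell.items():
        p_pres, p_abs = LIKELIHOODS[layer][value]
        num *= p_pres
        den *= p_abs
    return num / (num + den)

print(presence_probability({"soil": "sandy", "veget": "woodland"}))  # ~0.47
```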
Speakman, John R; Król, Elzbieta
2010-07-01
1. The role of energy in ecological processes has hitherto been considered primarily from the standpoint that energy supply is limited. That is, traditional resource-based ecological and evolutionary theories and the recent 'metabolic theory of ecology' (MTE) all assume that energetic constraints operate on the supply side of the energy balance equation. 2. For endothermic animals, we provide evidence suggesting that an upper boundary on total energy expenditure is imposed by the maximal capacity to dissipate body heat and therefore avoid the detrimental consequences of hyperthermia--the heat dissipation limit (HDL) theory. We contend that the HDL is a major constraint operating on the expenditure side of the energy balance equation, and that processes that generate heat compete and trade off within a total boundary defined by heat dissipation capacity, rather than competing for limited energy supply. 3. The HDL theory predicts that daily energy expenditure should scale in relation to body mass (M_b) with an exponent of about 0.63. This contrasts with the prediction of the MTE of an exponent of 0.75. 4. We compiled empirical data on field metabolic rate (FMR) measured by the doubly-labelled water method, and found that FMR scales with M_b with exponents of 0.647 in mammals and 0.658 in birds, not significantly different from the HDL prediction (P > 0.05) but lower than predicted by the MTE (P < 0.001). The same statistical result was obtained using phylogenetically independent contrasts analysis. Quantitative predictions of the model matched the empirical data for both mammals and birds. There was no indication of curvature in the relationship between log_e FMR and log_e M_b. 5. Together, these data provide strong support for the HDL theory and allow us to reject the MTE, at least when applied to endothermic animals. 6. The HDL theory provides a novel conceptual framework that demands a reframing of our views of the interplay between energy and the environment in endothermic animals, and provides many new interpretations of ecological and evolutionary phenomena.
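The scaling test above reduces to fitting a slope on log-log axes and comparing it with the two predicted exponents; a sketch on synthetic data standing in for the compiled FMR dataset:

```python
# Fit log(FMR) against log(body mass) and test the slope against the HDL
# (0.63) and MTE (0.75) predictions. Data are synthetic stand-ins.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
log_mass = rng.uniform(1, 12, 300)                          # ln body mass
log_fmr = 2.0 + 0.65 * log_mass + rng.normal(0, 0.4, 300)   # ln FMR

fit = linregress(log_mass, log_fmr)
ci = 1.96 * fit.stderr
print(f"exponent = {fit.slope:.3f} +/- {ci:.3f}")
for name, b in [("HDL", 0.63), ("MTE", 0.75)]:
    inside = abs(fit.slope - b) <= ci
    print(f"{name} prediction {b}: {'consistent' if inside else 'rejected'}")
```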
Index theorem for the flat Andreev bound states at a dirty surface of a nodal superconductor
NASA Astrophysics Data System (ADS)
Ikegaya, Satoshi; Asano, Yasuhiro
2018-03-01
We discuss the stability of flat-band Andreev bound states appearing at a surface of a nodal unconventional superconductor. In the clean limit, the existence of the surface bound states is topologically characterized by a momentum-dependent topological invariant: a one-dimensional winding number defined in the restricted Brillouin zone. Such a topological invariant is therefore ill-defined in the presence of the potential disorder that is inevitable in experiments. By paying attention to the chiral symmetry of the Hamiltonian, we provide an alternative topological index N_ZES that predicts the number of Andreev bound states at a dirty surface of an unconventional superconductor. Moreover, we demonstrate that the zero-bias differential conductance in a normal-metal/unconventional-superconductor junction is quantized at (4e^2/h)|N_ZES| in the limit of strong impurity scattering in the normal metal.
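As a generic numerical illustration of the kind of momentum-space winding number referenced above (not the paper's nodal-superconductor calculation), the sketch below computes the winding of the off-diagonal block q(k) for the chiral-symmetric SSH chain:

```python
# 1D winding number for a chiral-symmetric two-band model (SSH chain).
# Hopping values are arbitrary; this illustrates the invariant only.
import numpy as np

def winding_number(t1: float, t2: float, nk: int = 2001) -> int:
    """Winding of q(k) = t1 + t2*exp(-ik) around the origin over the BZ."""
    k = np.linspace(-np.pi, np.pi, nk)
    q = t1 + t2 * np.exp(-1j * k)
    phase = np.unwrap(np.angle(q))
    return int(round((phase[0] - phase[-1]) / (2 * np.pi)))

print(winding_number(0.5, 1.0))  # topological phase: winding 1
print(winding_number(1.0, 0.5))  # trivial phase: winding 0
```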
Prediction and Stability of Reading Problems in Middle Childhood
ERIC Educational Resources Information Center
Ritchey, Kristen D.; Silverman, Rebecca D.; Schatschneider, Christopher; Speece, Deborah L.
2015-01-01
The longitudinal prediction of reading problems from fourth grade to sixth grade was investigated with a sample of 173 students. Reading problems at the end of sixth grade were defined by significantly below-average performance (≤ 15th percentile) on reading factors defining word reading, fluency, and reading comprehension. Sixth grade poor reader…
Imposing constraints on parameter values of a conceptual hydrological model using baseflow response
NASA Astrophysics Data System (ADS)
Dunn, S. M.
Calibration of conceptual hydrological models is frequently limited by a lack of data about the area being studied. The result is that a broad range of parameter values can be identified that give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low-flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets, and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency, and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than for the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainties with data and model error if inadequate likelihood measures are defined.
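Likelihood measures of the kind described above are often built from Nash-Sutcliffe efficiencies computed separately on total flow and on the baseflow component; a minimal sketch on placeholder series, with one simple (assumed) way of combining the two criteria:

```python
# Nash-Sutcliffe efficiency on total flow and on baseflow, combined into a
# single likelihood criterion. Series are hypothetical placeholders.
import numpy as np

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
q_obs = rng.gamma(2.0, 1.0, 365)              # observed total flow
q_sim = q_obs + rng.normal(0, 0.3, 365)       # simulated total flow
bf_obs = 0.4 * q_obs                          # baseflow separated from obs (toy)
bf_sim = 0.4 * q_sim + rng.normal(0, 0.1, 365)

e_total = nse(q_obs, q_sim)
e_base = nse(bf_obs, bf_sim)
combined = min(e_total, e_base)               # one simple combined criterion
print(f"NSE total={e_total:.2f}, baseflow={e_base:.2f}, combined={combined:.2f}")
```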
Predicting plant vulnerability to drought in biodiverse regions using functional traits.
Skelton, Robert Paul; West, Adam G; Dawson, Todd E
2015-05-05
Attempts to understand mechanisms underlying plant mortality during drought have led to the emergence of a hydraulic framework describing distinct hydraulic strategies among coexisting species. This framework distinguishes species that rapidly decrease stomatal conductance (gs), thereby maintaining high water potential (Px; isohydric), from those species that maintain relatively high gs at low Px, thereby maintaining carbon assimilation, albeit at the cost of loss of hydraulic conductivity (anisohydric). This framework is yet to be tested in biodiverse communities, potentially due to a lack of standardized reference values upon which hydraulic strategies can be defined. We developed a system of quantifying hydraulic strategy using indices from vulnerability curves and stomatal dehydration response curves and tested it in a speciose community from South Africa's Cape Floristic Region. Degree of stomatal regulation over cavitation was defined as the margin between Px at stomatal closure (Pg12) and Px at 50% loss of conductivity. To assess relationships between hydraulic strategy and mortality mechanisms, we developed proxies for carbon limitation and hydraulic failure using time since Pg12 and loss of conductivity at minimum seasonal Px, respectively. Our approach captured continuous variation along an isohydry/anisohydry axis and showed that this variation was linearly related to xylem safety margin. Degree of isohydry/anisohydry was associated with contrasting predictions for mortality during drought. Merging stomatal regulation strategies that represent an index of water use behavior with xylem vulnerability facilitates a more comprehensive framework with which to characterize plant response to drought, thus opening up an avenue for predicting the response of diverse communities to future droughts.
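The indices defined above can be extracted by fitting a sigmoidal vulnerability curve to obtain P50 and reading Pg12 off the stomatal response curve; a sketch with synthetic curve data and an assumed Pg12 value:

```python
# Fit a sigmoidal xylem vulnerability curve, estimate P50, and compute the
# safety margin Pg12 - P50. Data points and Pg12 are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def plc(P, a, P50):
    """Percent loss of conductivity as a sigmoid of water potential P (MPa)."""
    return 100.0 / (1.0 + np.exp(a * (P - P50)))

P = np.array([-0.5, -1.0, -1.5, -2.0, -2.5, -3.0, -3.5, -4.0])
loss = np.array([2.0, 5.0, 12.0, 30.0, 55.0, 78.0, 92.0, 97.0])

(a, P50), _ = curve_fit(plc, P, loss, p0=(2.0, -2.5))
Pg12 = -1.8  # water potential at stomatal closure, from a gs curve (assumed)
print(f"P50 = {P50:.2f} MPa, safety margin Pg12 - P50 = {Pg12 - P50:.2f} MPa")
```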
Verdeli, Helen; Wickramaratne, Priya; Warner, Virginia; Mancini, Anthony; Weissman, Myrna
2014-01-01
Understanding differences in factors leading to positive outcomes in high-risk and low-risk offspring has important implications for preventive interventions. We identified variables predicting positive outcomes in a cohort of 235 offspring from 76 families in which one, both, or neither parent had major depressive disorder. Positive outcomes were termed resilient in offspring of depressed parents, and competent in offspring of non-depressed parents, and defined by two separate criteria: absence of psychiatric diagnosis and consistently high functioning at 2, 10, and 20 years follow-up. In offspring of depressed parents, easier temperament and higher self-esteem were associated with greater odds of resilient outcome defined by absence of diagnosis. Lower maternal overprotection, greater offspring self-esteem, and higher IQ were associated with greater odds of resilient outcome defined by consistently high functioning. Multivariate analysis indicated that resilient outcome defined by absence of diagnosis was best predicted by offspring self-esteem; resilient outcome defined by functioning was best predicted by maternal overprotection and self-esteem. Among offspring of non-depressed parents, greater family cohesion, easier temperament and higher self-esteem were associated with greater odds of offspring competent outcome defined by absence of diagnosis. Higher maternal affection and greater offspring self-esteem were associated with greater odds of competent outcome, defined by consistently high functioning. Multivariate analysis for each criterion indicated that competent outcome was best predicted by offspring self-esteem. As the most robust predictor of positive outcomes in offspring of depressed and non-depressed parents, self-esteem is an important target for youth preventive interventions. PMID:25374449
Porous Silicon Gradient Refractive Index Micro-Optics.
Krueger, Neil A; Holsteen, Aaron L; Kang, Seung-Kyun; Ocier, Christian R; Zhou, Weijun; Mensing, Glennys; Rogers, John A; Brongersma, Mark L; Braun, Paul V
2016-12-14
The emergence and growth of transformation optics over the past decade has revitalized interest in how a gradient refractive index (GRIN) can be used to control light propagation. Two-dimensional demonstrations with lithographically defined silicon (Si) have displayed the power of GRIN optics and also represent a promising opportunity for integrating compact optical elements within Si photonic integrated circuits. Here, we demonstrate the fabrication of three-dimensional Si-based GRIN micro-optics through the shape-defined formation of porous Si (PSi). Conventional microfabrication creates Si square microcolumns (SMCs) that can be electrochemically etched into PSi elements with nanoscale porosity along the shape-defined etching pathway, which imparts the geometry with structural birefringence. Free-space characterization of the transmitted intensity distribution through a homogeneously etched PSi SMC exhibits polarization splitting behavior resembling that of dielectric metasurfaces that require considerably more laborious fabrication. Coupled birefringence/GRIN effects are studied by way of PSi SMCs etched with a linear (increasing from edge to center) GRIN profile. The transmitted intensity distribution shows polarization-selective focusing behavior with one polarization focused to a diffraction-limited spot and the orthogonal polarization focused into two laterally displaced foci. Optical thickness-based analysis readily predicts the experimentally observed phenomena, which strongly match finite-element electromagnetic simulations.
Martin, James A.; Anderson, Donald D.; Goetz, Jessica E.; Fredericks, Douglas; Pedersen, Douglas R.; Ayati, Bruce P.; Marsh, J. Lawrence; Buckwalter, Joseph A.
2016-01-01
Two categories of joint overloading cause post-traumatic osteoarthritis (PTOA): single acute traumatic loads/impactions and repetitive overloading due to incongruity/instability. We developed and refined three classes of complementary models to define relationships between joint overloading and progressive cartilage loss across the spectrum of acute injuries and chronic joint abnormalities: explant and whole joint models that allow probing of cellular responses to mechanical injury and contact stresses, animal models that enable study of PTOA pathways in living joints and pre-clinical testing of treatments, and patient-specific computational models that define the overloading that causes OA in humans. We coordinated methodologies across models so that results from each informed the others, maximizing the benefit of this complementary approach. We are incorporating results from these investigations into biomathematical models to provide predictions of PTOA risk and guide treatment. Each approach has limitations, but each provides opportunities to elucidate PTOA pathogenesis. Taken together, they help define levels of joint overloading that cause cartilage destruction, show that both forms of overloading can act through the same biologic pathways, and create a framework for initiating clinical interventions that decrease PTOA risk. PMID:27509320
Defining a genetic ideotype for crop improvement.
Trethowan, Richard M
2014-01-01
While plant breeders traditionally base selection on phenotype, the development of genetic ideotypes can help focus the selection process. This chapter provides a road map for the establishment of a refined genetic ideotype. The first step is an accurate definition of the target environment including the underlying constraints, their probability of occurrence, and impact on phenotype. Once the environmental constraints are established, the wealth of information on plant physiological responses to stresses, known gene information, and knowledge of genotype × environment and gene × environment interaction help refine the target ideotype and form a basis for cross prediction. Once a genetic ideotype is defined, the challenge remains to build the ideotype in a plant breeding program. A number of strategies, including marker-assisted recurrent selection and genomic selection, can be used; these also provide valuable information for optimizing the genetic ideotype. However, the informatics required to underpin the realization of the genetic ideotype then becomes crucial. The reduced cost of genotyping and the need to combine pedigree, phenotypic, and genetic data in a structured way for analysis and interpretation often become the rate-limiting steps, thus reducing genetic gain. Systems for managing these data and an example of ideotype construction for a defined environment type are discussed.
Cheng, Ryan R.; Uzawa, Takanori; Plaxco, Kevin W.; Makarov, Dmitrii E.
2010-01-01
The problem of determining the rate of end-to-end collisions for polymer chains has attracted the attention of theorists and experimentalists for more than three decades. The typical theoretical approach to this problem has focused on the case where a collision is defined as any instantaneous fluctuation that brings the chain ends to within a specific capture distance. In this paper, we study the more experimentally relevant case, where the end-to-end collision dynamics are probed by measuring the excited state lifetime of a fluorophore (or other lumiphore) attached to one chain end and quenched by a quencher group attached to the other end. In this regime, a “contact” is defined not by the chain ends' approach to within some sharp cutoff but, instead, typically by an exponentially distance-dependent quenching process. Previous theoretical models predict that, if quenching is sufficiently rapid, a diffusion-controlled limit is attained, where such measurements report on the probe-independent, intrinsic end-to-end collision rate. In contrast, our theoretical considerations, simulations, and an analysis of experimental measurements of loop closure rates in single-stranded DNA molecules all indicate that no such limit exists, and that the measured effective collision rate has a nontrivial, fractional power-law dependence on both the intrinsic quenching rate of the fluorophore and the solvent viscosity. We propose a simple scaling formula describing the effective loop closure rate and its dependence on the viscosity, chain length, and properties of the probes. Previous theoretical results are limiting cases of this more general formula. PMID:19780594
NASA Astrophysics Data System (ADS)
Zhang, Ying; Moges, Semu; Block, Paul
2018-01-01
Prediction of seasonal precipitation can provide actionable information to guide management of various sectoral activities. For instance, it is often translated into hydrological forecasts for better water resources management. However, many studies assume homogeneity in precipitation across an entire study region, which may prove ineffective for operational and local-level decisions, particularly for locations with high spatial variability. This study proposes advancing local-level seasonal precipitation predictions by first conditioning on regional-level predictions, as defined through objective cluster analysis, for western Ethiopia. To our knowledge, this is the first study predicting seasonal precipitation at high resolution in this region, where lives and livelihoods are vulnerable to precipitation variability given the high reliance on rain-fed agriculture and limited water resources infrastructure. The combination of objective cluster analysis, spatially high-resolution prediction of seasonal precipitation, and a modeling structure spanning statistical and dynamical approaches makes clear advances in prediction skill and resolution, as compared with previous studies. The statistical model improves versus the non-clustered case or dynamical models for a number of specific clusters in northwestern Ethiopia, with clusters having regional average correlation and ranked probability skill score (RPSS) values of up to 0.5 and 33 %, respectively. The general skill (after bias correction) of the two best-performing dynamical models over the entire study region is superior to that of the statistical models, although the dynamical models issue predictions at a lower resolution and the raw predictions require bias correction to guarantee comparable skills.
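For reference, the ranked probability skill score (RPSS) quoted above is a standard metric for categorical (e.g., tercile) forecasts. A minimal Python sketch of its computation against a climatological baseline follows; array names are hypothetical, and the paper's exact verification setup may differ.

```python
import numpy as np

def rps(cum_forecast, cum_obs):
    """Ranked probability score: squared error of cumulative category probabilities."""
    return np.sum((cum_forecast - cum_obs) ** 2)

def rpss(forecast_probs, obs_category, n_categories=3):
    """RPSS relative to a climatological forecast of equal category odds.

    forecast_probs: (n_years, n_categories) predicted category probabilities.
    obs_category:   (n_years,) index of the observed category each year.
    """
    clim = np.full(n_categories, 1.0 / n_categories)
    rps_f = rps_c = 0.0
    for probs, obs in zip(forecast_probs, obs_category):
        cum_obs = np.cumsum(np.eye(n_categories)[obs])
        rps_f += rps(np.cumsum(probs), cum_obs)
        rps_c += rps(np.cumsum(clim), cum_obs)
    return 1.0 - rps_f / rps_c  # 1 = perfect, 0 = no better than climatology
```

An RPSS of 33 %, as reported for the best clusters, means the forecast's ranked probability error is one third smaller than climatology's.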
Raboud, J M; Rae, S; Montaner, J S
2000-08-15
To determine the ability of intermediate plasma viral load (pVL) measurements to predict virologic outcome at 52 weeks of follow-up in clinical trials of antiretroviral therapy. Individual patient data from three clinical trials (INCAS, AVANTI-2 and AVANTI-3) were combined into a single database. Virologic success was defined to be plasma viral load (pVL) <500 copies/ml at week 52. The sensitivity and specificity of intermediate pVL measurements below the limit of detection, 100, 500, 1000, and 5000 copies/ml to predict virologic success were calculated. The sensitivity, specificity, and positive and negative predictive values of a pVL measurement <1000 copies/ml at week 16 to predict virologic outcome at week 52 were 74%, 74%, 48%, and 90%, respectively, for patients on double therapy. For patients on triple therapy, the sensitivity, specificity, and positive and negative predictive values of a pVL measurement <50 copies/ml at week 16 to predict virologic outcome were 68%, 68%, 80%, and 47%, respectively. For patients receiving double therapy, a poor virologic result at an intermediate week of follow-up is a strong indicator of virologic failure at 52 weeks whereas intermediate virologic success is no guarantee of success at 1 year. For patients on triple therapy, disappointing intermediate results do not preclude virologic success at 1 year and intermediate successes are more likely to be sustained.
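The sensitivity, specificity, and predictive values reported here all derive from a 2×2 cross-tabulation of the intermediate test result against the week-52 outcome; a minimal sketch of the standard formulas (function and argument names are ours):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table quantities: tp = intermediate success & week-52 success,
    fp = intermediate success & week-52 failure, etc."""
    sensitivity = tp / (tp + fn)   # P(intermediate success | week-52 success)
    specificity = tn / (tn + fp)   # P(intermediate failure | week-52 failure)
    ppv = tp / (tp + fp)           # P(week-52 success | intermediate success)
    npv = tn / (tn + fn)           # P(week-52 failure | intermediate failure)
    return sensitivity, specificity, ppv, npv
```

Note that PPV and NPV depend on the prevalence of virologic success in each arm, which is why the double- and triple-therapy groups show opposite patterns despite similar sensitivity and specificity.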
Fischer, Axel W.; Bordignon, Enrica; Bleicken, Stephanie; García-Sáez, Ana J.; Jeschke, Gunnar; Meiler, Jens
2016-01-01
Structure determination remains a challenge for many biologically important proteins. In particular, proteins that adopt multiple conformations often evade crystallization in all biologically relevant states. Although computational de novo protein folding approaches often sample biologically relevant conformations, the selection of the most accurate model for different functional states remains a formidable challenge, in particular, for proteins with more than about 150 residues. Electron paramagnetic resonance (EPR) spectroscopy can obtain limited structural information for proteins in well-defined biological states and thereby assist in selecting biologically relevant conformations. The present study demonstrates that de novo folding methods are able to accurately sample the folds of 192-residue long soluble monomeric Bcl-2-associated X protein (BAX). The tertiary structures of the monomeric and homodimeric forms of BAX were predicted using the primary structure as well as 25 and 11 EPR distance restraints, respectively. The predicted models were subsequently compared to respective NMR/X-ray structures of BAX. EPR restraints improve the protein-size normalized root-mean-square-deviation (RMSD100) of the most accurate models with respect to the NMR/crystal structure from 5.9 Å to 3.9 Å and from 5.7 Å to 3.3 Å, respectively. Additionally, the model discrimination is improved, which is demonstrated by an improvement of the enrichment from 5% to 15% and from 13% to 21%, respectively. PMID:27129417
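RMSD100 is a protein-size-normalized RMSD. Assuming the commonly used Carugo-Pongor normalization (the abstract does not spell out the exact form, so treat this as an assumption), a sketch:

```python
import math

def rmsd100(rmsd, n_residues):
    """Protein-size-normalized RMSD, assumed Carugo-Pongor form:
    RMSD100 = RMSD / (1 + ln(sqrt(N / 100)))."""
    return rmsd / (1.0 + math.log(math.sqrt(n_residues / 100.0)))

print(rmsd100(6.0, 192))  # a 6.0 Å raw RMSD for a 192-residue protein -> ~4.5 Å
```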
Hoogendoorn, Ayla; Gnanadesigan, Muthukaruppan; Zahnd, Guillaume; van Ditzhuijzen, Nienke S; Schuurbiers, Johan C H; van Soest, Gijs; Regar, Evelyn; Wentzel, Jolanda J
2016-10-01
The aim of this study was to investigate the relationship between the plaque free wall (PFW) measured by optical coherence tomography (OCT) and the plaque burden (PB) measured by intravascular ultrasound (IVUS). We hypothesize that measurement of the PFW could help to estimate the PB, thereby overcoming the limited ability of OCT to visualize the external elastic membrane in the presence of plaque. This could enable selection of the optimal stent-landing zone by OCT, which is traditionally defined by IVUS as a region with a PB < 40 %. PB (IVUS) and PFW angle (OCT and IVUS) were measured in 18 matched IVUS and OCT pullbacks acquired in the same coronary artery. We determined the relationship between OCT-measured PFW (PFWOCT) and IVUS PB (PBIVUS) by non-linear regression analysis. An ROC-curve analysis was used to determine the optimal cut-off value of PFW angle for the detection of PB < 40 %. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were calculated. There is a significant correlation between PFWOCT and PBIVUS (r² = 0.59). The optimal cut-off value of the PFWOCT for the prediction of a PBIVUS < 40 % is ≥220° with a PPV of 78 % and an NPV of 84 %. This study shows that PFWOCT can be considered as a surrogate marker for PBIVUS, which is currently a common criterion to select an optimal stent-landing zone.
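The optimal cut-off from an ROC analysis is typically found by sweeping candidate thresholds; a hedged sketch using Youden's J statistic (the study may have used a different criterion, and all names here are ours):

```python
import numpy as np

def best_cutoff(pfw_angle, pb_lt_40):
    """Sweep candidate PFW-angle cutoffs; return the one maximizing
    Youden's J = sensitivity + specificity - 1."""
    pfw_angle = np.asarray(pfw_angle, dtype=float)
    pb_lt_40 = np.asarray(pb_lt_40, dtype=bool)   # True where IVUS PB < 40 %
    best_j, best_c = -1.0, None
    for c in np.unique(pfw_angle):
        pred = pfw_angle >= c                     # test positive: PFW >= cutoff
        sens = (pred & pb_lt_40).sum() / pb_lt_40.sum()
        spec = (~pred & ~pb_lt_40).sum() / (~pb_lt_40).sum()
        if sens + spec - 1.0 > best_j:
            best_j, best_c = sens + spec - 1.0, c
    return best_c, best_j
```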
A systematic review of quantitative burn wound microbiology in the management of burns patients.
Halstead, Fenella D; Lee, Kwang Chear; Kwei, Johnny; Dretzke, Janine; Oppenheim, Beryl A; Moiemen, Naiem S
2018-02-01
The early diagnosis of infection or sepsis in burns is important for patient care. Globally, a large number of burn centres advocate quantitative cultures of wound biopsies for patient management, since there is assumed to be a direct link between the bioburden of a burn wound and the risk of microbial invasion. Given the conflicting study findings in this area, a systematic review was warranted. Bibliographic databases were searched with no language restrictions to August 2015. Study selection, data extraction and risk of bias assessment were performed in duplicate using pre-defined criteria. Substantial heterogeneity precluded quantitative synthesis, and findings were described narratively, sub-grouped by clinical question. Twenty-six laboratory and/or clinical studies were included. Substantial heterogeneity hampered comparisons across studies and interpretation of findings. Limited evidence suggests that (i) more than one quantitative microbiology sample is required to obtain reliable estimates of bacterial load; (ii) biopsies are more sensitive than swabs in diagnosing or predicting sepsis; (iii) high bacterial loads may predict worse clinical outcomes, and (iv) both quantitative and semi-quantitative culture reports need to be interpreted with caution and in the context of other clinical risk factors. The evidence base for the utility and reliability of quantitative microbiology for diagnosing or predicting clinical outcomes in burns patients is limited and often poorly reported. Consequently future research is warranted. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Wang, Jian-jun; Li, Hong-bing; Kinnunen, Leena; Hu, Gang; Järvinen, Tiina M; Miettinen, Maija E; Yuan, Shenyuan; Tuomilehto, Jaakko
2007-05-01
We evaluate the ability of the metabolic syndrome (MetS), defined by five definitions, to predict both incident CHD and diabetes combined, diabetes alone, and CHD alone in a Chinese population. The screening survey for type 2 diabetes was conducted in 1994. A follow-up study of 541 high-risk non-diabetic individuals who were free of CHD at baseline was carried out in 1999 in the Beijing area. The MetS was defined by the World Health Organization (WHO), European Group for the Study of Insulin Resistance (EGIR), American College of Endocrinology (ACE), the International Diabetes Federation (IDF), and the National Cholesterol Education Program and the American Heart Association (AHA) (updated NCEP) criteria. From a multiple logistic regression adjusting for age, sex, education, occupation, smoking, family history of diabetes, and total cholesterol, the relative risk of the ACE-defined MetS for incident diabetes alone (67 cases) was 2.29 (95% CI, 1.20-4.34). The MetS defined by the five definitions was associated with a 1.8-3.9 times increased risk for both incident CHD and diabetes combined (59 cases), and with a 1.9-3.0 times increased risk for total incident diabetes (126 cases). None of the five definitions predicted either incident CHD alone (177 cases) or total incident CHD (236 cases). In conclusion, the MetS defined by the current definitions appears to be more effective at predicting incident diabetes than incident CHD.
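Adjusted associations like the odds ratio of 2.29 above come from multiple logistic regression; below is a hedged Python/statsmodels sketch on synthetic data, with hypothetical column names and a subset of the abstract's covariates (the original analysis was not necessarily done this way):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the follow-up cohort.
rng = np.random.default_rng(0)
n = 541
cohort = pd.DataFrame({
    "mets_ace": rng.integers(0, 2, n),            # ACE-defined MetS at baseline
    "age": rng.normal(55, 8, n),
    "sex": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "family_history": rng.integers(0, 2, n),
    "total_cholesterol": rng.normal(5.2, 1.0, n),
})
logit_p = -3.0 + 0.8 * cohort["mets_ace"] + 0.02 * (cohort["age"] - 55)
cohort["incident_diabetes"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit(
    "incident_diabetes ~ mets_ace + age + sex + smoking"
    " + family_history + total_cholesterol",
    data=cohort,
).fit(disp=0)
print(np.exp(fit.params["mets_ace"]))          # adjusted odds ratio for MetS
print(np.exp(fit.conf_int().loc["mets_ace"]))  # its 95% CI
```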
Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan
2015-10-01
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies. © The Author(s) 2015.
The Molecular and Cellular Basis of Bitter Taste in Drosophila
Weiss, Linnea A.; Dahanukar, Anupama; Kwon, Jae Young; Banerjee, Diya; Carlson, John R.
2011-01-01
The extent of diversity among bitter-sensing neurons is a fundamental issue in the field of taste. Data are limited and conflicting as to whether bitter neurons are broadly tuned and uniform, resulting in indiscriminate avoidance of bitter stimuli, or diverse, allowing a more discerning evaluation of food sources. We provide a systematic analysis of how bitter taste is encoded by the major taste organ of the Drosophila head, the labellum. Each of 16 bitter compounds is tested physiologically against all 31 bitter neurons, revealing responses that are diverse in magnitude and dynamics. Four functional classes of bitter neurons are defined. Four corresponding classes are defined through expression analysis of all 68 Gr taste receptors. A receptor-to-neuron-to-tastant map is constructed. Misexpression of one receptor confers bitter responses as predicted by the map. These results reveal a degree of complexity that greatly expands the capacity of the system to encode bitter taste. PMID:21262465
Predicting post-vaccination autoimmunity: who might be at risk?
Soriano, Alessandra; Nesher, Gideon; Shoenfeld, Yehuda
2015-02-01
Vaccinations have been used as an essential tool in the fight against infectious diseases, and succeeded in improving public health. However, adverse effects, including autoimmune conditions may occur following vaccinations (autoimmune/inflammatory syndrome induced by adjuvants--ASIA syndrome). It has been postulated that autoimmunity could be triggered or enhanced by the vaccine immunogen contents, as well as by adjuvants, which are used to increase the immune reaction to the immunogen. Fortunately, vaccination-related ASIA is uncommon. Yet, by defining individuals at risk we may further limit the number of individuals developing post-vaccination ASIA. In this perspective we defined four groups of individuals who might be susceptible to develop vaccination-induced ASIA: patients with prior post-vaccination autoimmune phenomena, patients with a medical history of autoimmunity, patients with a history of allergic reactions, and individuals who are prone to develop autoimmunity (having a family history of autoimmune diseases; asymptomatic carriers of autoantibodies; carrying certain genetic profiles, etc.). Copyright © 2014 Elsevier Ltd. All rights reserved.
Stress and efficiency studies in EFG
NASA Technical Reports Server (NTRS)
1986-01-01
The goals of this program were: (1) to define minimum stress configurations for silicon sheet growth at high speeds; (2) to quantify the electrical activity of dislocations and their limits on minority carrier diffusion length in deformed silicon; and (3) to study reasons for degradation of lifetime with increases in doping level in edge-defined film-fed growth (EFG) materials. A finite element model was developed for calculating residual stress with plastic deformation. A finite element model relating EFG control variables to the temperature field of the sheet was verified, permitting prediction of profiles and stresses encountered in EFG systems. A residual stress measurement technique was developed for finite-size EFG material blanks using shadow Moiré interferometry. Transient creep response of silicon was investigated in the temperature range between 800 and 1400 °C, in strain and strain-rate regimes of interest in stress analysis of sheet growth. Quantitative relationships were established between minority carrier diffusion length and dislocation densities using Electron Beam Induced Current (EBIC) measurement in FZ silicon deformed in four-point bending tests.
Multi-scale, Hierarchically Nested Young Stellar Structures in LEGUS Galaxies
NASA Astrophysics Data System (ADS)
Thilker, David A.; LEGUS Team
2017-01-01
The study of star formation in galaxies has predominantly been limited to either young stellar clusters and HII regions, or much larger kpc-scale morphological features such as spiral arms. The HST Legacy ExtraGalactic UV Survey (LEGUS) provides a rare opportunity to link these scales in a diverse sample of nearby galaxies and obtain a more comprehensive understanding of their co-evolution for comparison against model predictions. We have utilized LEGUS stellar photometry to identify young, resolved stellar populations belonging to several age bins and then defined nested hierarchical structures as traced by these subsamples of stars. Analogous hierarchical structures were also defined using LEGUS catalogs of unresolved young stellar clusters. We will present our emerging results concerning the physical properties (e.g. area, star counts, stellar mass, star formation rate, ISM characteristics), occupancy statistics (e.g. clusters per substructure versus age and scale, parent/child demographics) and relation to overall galaxy morphology/mass for these building blocks of hierarchical star-forming structure.
Ratigan, Amanda; Kritz-Silverstein, Donna; Barrett-Connor, Elizabeth
2016-07-01
This study examines the cross-sectional associations of cognitive and physical function with life satisfaction in middle-class, community-dwelling adults aged 60 and older. Participants were 632 women and 410 men who had cognitive function tests (CFT) and physical function tasks (PFT) assessed at a clinic visit between 1988 and 1992, and who responded in 1992 to a mailed survey that included life satisfaction measures. Cognitive impairment was defined as ≤24 on MMSE, ≥132 on Trails B, ≤12 on Category Fluency, ≤13 on Buschke long-term recall, and ≤7 on Heaton immediate recall. Physical impairment was defined as participants' self-reported difficulty (yes/no) in performing 10 physical functions. Multiple linear regression examined associations between life satisfaction and impairment on ≥1 CFT or difficulty with ≥1 PFT. Life satisfaction was measured with the Satisfaction with Life Scale (SWLS; range: 0-26) and the Life Satisfaction Index-Z (LSI-Z; range: 5-35). Participants' average age was 73.4 years (range=60-94). Categorically defined cognitive impairment was present in 40% of men and 47% of women. Additionally, 30% of men and 43% of women reported difficulty performing any PFT. Adjusting for age and impairment on ≥1 CFT, difficulty performing ≥1 PFT was associated with lower LSI-Z and SWLS scores in men (β=-1.73, -1.26, respectively, p<0.05) and women (β=-1.79, -1.93, respectively, p<0.01). However, impairment on ≥1 CFT was not associated with LSI-Z or SWLS score after adjusting for age and difficulty with ≥1 PFT. Limited cognitive function was more common than limited physical function; however, limited physical function was more predictive of lower life satisfaction. Interventions to increase or maintain mobility among older adults may improve overall life satisfaction. Copyright © 2016. Published by Elsevier Ireland Ltd.
Phase-Amplitude Response Functions for Transient-State Stimuli
2013-01-01
The phase response curve (PRC) is a powerful tool to study the effect of a perturbation on the phase of an oscillator, assuming that all the dynamics can be explained by the phase variable. However, factors like the rate of convergence to the oscillator, strong forcing or high stimulation frequency may invalidate the above assumption and raise the question of how the phase varies away from an attractor. The concept of isochrons turns out to be crucial to answer this question; from it, we have built up Phase Response Functions (PRF) and, in the present paper, we complete the extension of advancement functions to the transient states by defining the Amplitude Response Function (ARF) to control changes in the transversal variables. Based on the knowledge of both the PRF and the ARF, we study the case of a pulse-train stimulus, and compare the predictions given by the PRC approach (a 1D map) to those given by the PRF-ARF approach (a 2D map); we observe differences of up to two orders of magnitude in favor of the 2D predictions, especially when the stimulation frequency is high or the strength of the stimulus is large. We also explore the role of hyperbolicity of the limit cycle as well as geometric aspects of the isochrons. Summing up, we aim to clarify the contribution of transient effects to the phase response and to show the limits of the phase-reduction approach, so as to avoid incorrect predictions in synchronization problems. Abbreviations: PRC, phase response curve (phase resetting curve); PRF, phase response function; ARF, amplitude response function. PMID:23945295
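To make the 1D-versus-2D map comparison concrete, here is a toy pulse-train iteration; the response-curve shapes and the contraction rate below are invented for illustration and are not those of the paper:

```python
import numpy as np

def prc(theta):             # phase response on the limit cycle (toy shape)
    return 0.2 * np.sin(2 * np.pi * theta)

def prf(theta, sigma):      # phase response off the cycle (transient state)
    return prc(theta) * (1.0 + sigma)

def arf(theta, sigma):      # amplitude (transversal) response
    return 0.1 * np.cos(2 * np.pi * theta)

T = 0.3                     # inter-pulse interval, in units of the period
lam = 0.5                   # decay of the transversal variable between pulses
theta1 = theta2 = sigma = 0.0
for _ in range(100):
    theta1 = (theta1 + T + prc(theta1)) % 1.0          # 1D PRC map: phase only
    theta2 = (theta2 + T + prf(theta2, sigma)) % 1.0   # 2D map: phase...
    sigma = lam * (sigma + arf(theta2, sigma))         # ...plus amplitude
print(theta1, theta2, sigma)
```

With lam → 0 (fast relaxation back to the cycle) the 2D map collapses onto the 1D PRC map; the regime of interest here is precisely when relaxation between pulses is incomplete.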
Lattice animals in diffusion limited binary colloidal system
NASA Astrophysics Data System (ADS)
Shireen, Zakiya; Babu, Sujin B.
2017-08-01
In soft matter systems, controlling the structure of amorphous materials has been a key challenge. In this work, we model irreversible diffusion-limited cluster aggregation of binary colloids, which serves as a model for chemical gels. Irreversible aggregation of binary colloidal particles leads to the formation of a percolating cluster of one species or of both species; the latter are also called bigels. Before the percolating cluster forms, the system develops a self-similar structure defined by a fractal dimension. For a one-component system at very small volume fraction, the clusters are far apart from each other and the system has a fractal dimension of 1.8. In contrast, for the binary system we observe lattice animals, which have a fractal dimension of 2 irrespective of the volume fraction. When the clusters start interpenetrating, we observe a fractal dimension of 2.5, the same as in the one-component case. We are also able to predict the formation of bigels using a simple inequality relation, and we show that the growth of clusters follows the kinetic equations introduced by Smoluchowski for diffusion-limited cluster aggregation. The chemical distance of a cluster in the flocculation regime follows the same scaling law as predicted for lattice animals. Further, irreversible binary aggregation falls within the universality class of percolation theory.
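Fractal dimensions such as those quoted above are commonly estimated from the mass-radius scaling N ~ Rg^d_f; a minimal sketch of the standard log-log fit (the paper's own measurement procedure may differ):

```python
import numpy as np

def fractal_dimension(cluster_masses, gyration_radii):
    """Fit N ~ Rg**d_f on log-log axes; the slope is the fractal dimension."""
    logN = np.log(np.asarray(cluster_masses, dtype=float))
    logR = np.log(np.asarray(gyration_radii, dtype=float))
    d_f, _ = np.polyfit(logR, logN, 1)
    return d_f

# Expected values from the abstract: lattice animals ~2, dilute one-component
# clusters ~1.8, interpenetrating clusters ~2.5.
```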
Oettel, M
2004-04-01
We analyze the depletion interaction between two hard colloids in a hard-sphere solvent and pay special attention to the limit of large size ratio between colloids and solvent particles which is governed by the well-known Derjaguin approximation. For separations between the colloids of less than the diameter of the solvent particles (defining the depletion region), the solvent structure between the colloids can be analyzed in terms of an effective two-dimensional gas. Thereby we find that the Derjaguin limit is approached more slowly than previously thought. This analysis is in good agreement with simulation data which are available for a moderate size ratio of 10. Small discrepancies in results from density functional theory (DFT) at this size ratio become amplified for larger size ratios. Therefore we have improved upon previous DFT techniques by imposing test-particle consistency which connects DFT to integral equations. However, the improved results show no convergence towards the Derjaguin limit and thus we conclude that this implementation of DFT together with previous ones which rely on test-particle insertion become unreliable in predicting the force in the depletion region for size ratios larger than 10.
Predictors of short-term outcome to exercise and manual therapy for people with hip osteoarthritis.
French, Helen P; Galvin, Rose; Cusack, Tara; McCarthy, Geraldine M
2014-01-01
Physical therapy for hip osteoarthritis (OA) has shown short-term effects but limited long-term benefit. There has been limited research, with inconsistent results, in identifying prognostic factors associated with a positive response to physical therapy. The purpose of this study was to identify potential predictors of response to physical therapy (exercise therapy [ET] with or without adjunctive manual therapy [MT]) for hip OA based on baseline patient-specific and clinical characteristics. A prognostic study was conducted. Secondary analysis of data from a multicenter randomized controlled trial (RCT) (N=131) that evaluated the effectiveness of ET and ET+MT for hip OA was undertaken. Treatment response was defined using OMERACT/OARSI responder criteria. Ten baseline measures were used as predictor variables. Regression analyses were undertaken to identify predictors of outcome. Discriminative ability (sensitivity, specificity, and likelihood ratios) of significant variables was calculated. The RCT results showed no significant difference in most outcomes between ET and ET+MT at 9 and 18 weeks posttreatment. Forty-six patients were classified as responders at 9 weeks, and 36 patients were classified as responders at 18 weeks. Four baseline variables were predictive of a positive outcome at 9 weeks: male sex, pain with activity (<6/10), Western Ontario and McMaster Universities Osteoarthritis Index physical function subscale score (<34/68), and psychological health (Hospital Anxiety and Depression Scale score <9/42). No predictor variables were identified at the 18-week follow-up. Prognostic accuracy was fair for all 4 variables (sensitivity=0.5-0.58, specificity=0.57-0.72, likelihood ratios=1.25-1.77), indicating fair discriminative ability at predicting treatment response. The short-term follow-up limits the interpretation of results, and the low number of identified responders may have resulted in possible overfitting of the predictor model. The authors were unable to identify baseline variables in patients with hip OA that indicate those most likely to respond to treatment due to low discriminative ability. Further validation studies are needed to definitively define the best predictors of response to physical therapy in people with hip OA.
Multi-level multi-task learning for modeling cross-scale interactions in nested geospatial data
Yuan, Shuai; Zhou, Jiayu; Tan, Pang-Ning; Fergus, Emi; Wagner, Tyler; Sorrano, Patricia
2017-01-01
Predictive modeling of nested geospatial data is a challenging problem as the models must take into account potential interactions among variables defined at different spatial scales. These cross-scale interactions, as they are commonly known, are particularly important to understand relationships among ecological properties at macroscales. In this paper, we present a novel, multi-level multi-task learning framework for modeling nested geospatial data in the lake ecology domain. Specifically, we consider region-specific models to predict lake water quality from multi-scaled factors. Our framework enables distinct models to be developed for each region using both its local and regional information. The framework also allows information to be shared among the region-specific models through their common set of latent factors. Such information sharing helps to create more robust models especially for regions with limited or no training data. In addition, the framework can automatically determine cross-scale interactions between the regional variables and the local variables that are nested within them. Our experimental results show that the proposed framework outperforms all the baseline methods in at least 64% of the regions for 3 out of 4 lake water quality datasets evaluated in this study. Furthermore, the latent factors can be clustered to obtain a new set of regions that is more aligned with the response variables than the original regions that were defined a priori from the ecology domain.
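One simple way to encode cross-scale interactions is to augment each region's design matrix with products of local and regional features. The sketch below fits independent per-region ridge models on such features; it is illustrative only, since the paper's framework additionally couples regions through shared latent factors, which this sketch omits:

```python
import numpy as np
from sklearn.linear_model import Ridge

def region_models(X_local, x_regional, y, regions, n_regions):
    """Per-region models on local features, regional features, and their
    products (the cross-scale interaction terms).

    X_local:    (n_lakes, p) lake-level predictors
    x_regional: (n_regions, q) region-level predictors
    y:          (n_lakes,) water-quality response
    regions:    (n_lakes,) region index of each lake
    """
    models = []
    for r in range(n_regions):
        idx = regions == r
        Xl = X_local[idx]
        xr = np.tile(x_regional[r], (idx.sum(), 1))
        cross = np.einsum("ip,iq->ipq", Xl, xr).reshape(idx.sum(), -1)
        models.append(Ridge(alpha=1.0).fit(np.hstack([Xl, xr, cross]), y[idx]))
    return models
```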
NASA Astrophysics Data System (ADS)
Powalka, Mathieu; Lançon, Ariane; Puzia, Thomas H.; Peng, Eric W.; Liu, Chengze; Muñoz, Roberto P.; Blakeslee, John P.; Côté, Patrick; Ferrarese, Laura; Roediger, Joel; Sánchez-Janssen, Rúben; Zhang, Hongxin; Durrell, Patrick R.; Cuillandre, Jean-Charles; Duc, Pierre-Alain; Guhathakurta, Puragra; Gwyn, S. D. J.; Hudelot, Patrick; Mei, Simona; Toloba, Elisa
2016-11-01
The central region of the Virgo Cluster of galaxies contains thousands of globular clusters (GCs), an order of magnitude more than the number of clusters found in the Local Group. Relics of early star formation epochs in the universe, these GCs also provide ideal targets to test our understanding of the spectral energy distributions (SEDs) of old stellar populations. Based on photometric data from the Next Generation Virgo Cluster Survey (NGVS) and its near-infrared counterpart NGVS-IR, we select a robust sample of ≈ 2000 GCs with excellent photometry and that span the full range of colors present in the Virgo core. The selection exploits the well-defined locus of GCs in the uiK diagram and the fact that the GCs are marginally resolved in the images. We show that the GCs define a narrow sequence in five-dimensional color space, with limited but real dispersion around the mean sequence. The comparison of these SEDs with the predictions of 11 widely used population synthesis models highlights differences between the models and also shows that no single model adequately matches the data in all colors. We discuss possible causes for some of these discrepancies. Forthcoming papers of this series will examine how best to estimate photometric metallicities in this context, and compare the Virgo GC colors with those in other environments.
NASA Technical Reports Server (NTRS)
Bowles, Roland L.; Buck, Bill K.
2009-01-01
The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides for one means, but not the only means, by which an applicant can demonstrate compliance to the FAA directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risks to aircraft, passengers, and crew; and were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this defined methodology for calculating the probability of missed and false hazard indications taking into account the effect of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.
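Under a noise-limited, Gaussian-error assumption, the probabilities of missed and false hazard indications for a fixed alert threshold reduce to tail integrals; an illustrative sketch follows (the report's end-to-end method models the full radar observable chain, which this simple picture does not):

```python
from scipy.stats import norm

def missed_and_false_rates(mu_hazard, mu_benign, sigma, threshold):
    """If the radar's hazard estimate is Gaussian about the true level with
    standard deviation sigma, then for a fixed alert threshold:
      P(missed) = P(estimate < threshold | hazard present)
      P(false)  = P(estimate >= threshold | no hazard)"""
    p_missed = norm.cdf(threshold, loc=mu_hazard, scale=sigma)
    p_false = norm.sf(threshold, loc=mu_benign, scale=sigma)
    return p_missed, p_false

# e.g. hazard metric centered at 0.4, benign at 0.1, noise 0.1, alert at 0.3
print(missed_and_false_rates(0.4, 0.1, 0.1, 0.3))
```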
Invited commentary: the incremental value of customization in defining abnormal fetal growth status.
Zhang, Jun; Sun, Kun
2013-10-15
Reference tools based on birth weight percentiles at a given gestational week have long been used to define fetuses or infants that are small or large for their gestational ages. However, important deficiencies of the birth weight reference are being increasingly recognized. Overwhelming evidence indicates that an ultrasonography-based fetal weight reference should be used to classify fetal and newborn sizes during pregnancy and at birth, respectively. Questions have been raised as to whether further adjustments for race/ethnicity, parity, sex, and maternal height and weight are helpful to improve the accuracy of the classification. In this issue of the Journal, Carberry et al. (Am J Epidemiol. 2013;178(8):1301-1308) show that adjustment for race/ethnicity is useful, but that additional fine tuning for other factors (i.e., full customization) in the classification may not further improve the ability to predict infant morbidity, mortality, and other fetal growth indicators. Thus, the theoretical advantage of full customization may have limited incremental value for pediatric outcomes, particularly in term births. Literature on the prediction of short-term maternal outcomes and very long-term outcomes (adult diseases) is too scarce to draw any conclusions. Given that each additional variable being incorporated in the classification scheme increases complexity and costs in practice, the clinical utility of full customization in obstetric practice requires further testing.
Saillard, Colombe; Crocchiolo, Roberto; Furst, Sabine; El-Cheikh, Jean; Castagna, Luca; Signori, Alessio; Oudin, Claire; Faucher, Catherine; Lemarie, Claude; Chabannon, Christian; Granata, Angela; Blaise, Didier
2014-05-01
In 2005, the National Institutes of Health (NIH) proposed standard criteria for diagnosis, organ scoring and global assessment of chronic graft-versus-host disease (cGvHD) severity. We retrospectively reclassified cGvHD with NIH criteria in a monocentric cohort of 130 consecutive adult patients with hematological malignancies who presented with cGvHD after receiving an allo-hematopoietic stem cell transplant (HSCT) with a fludarabine-busulfan-antithymocyte globulin (ATG) conditioning regimen, among 313 consecutive HSCT recipients. We compared the NIH and Seattle classifications to correlate severity and outcome. The effective follow-up range was 2-120 months. Forty-four percent developed Seattle-defined cGvHD (22% limited, 78% extensive forms). Using NIH criteria, there were 23%, 40% and 37% mild, moderate and severe forms, respectively, and 58%, 32% and 8% classic cGvHD, late acute GvHD and overlap syndrome. Five-year overall survival was 55% (49-61), and cumulative incidences of non-relapse mortality (NRM) and relapse/progression at 2 years were 19% (14-23) and 19% (14-24). NIH mild and moderate forms were associated with better survival compared to severe cGvHD (hazard ratio [HR] = 3.28, 95% confidence interval [CI]: 1.38-7.82, p = 0.007), due to higher NRM among patients with severe cGvHD (HR = 3.04, 95% CI: 1.05-8.78, p = 0.04) but comparable relapse risk (p = NS). In conclusion, the NIH classification appears to be more accurate in predicting outcome, mostly through the reclassification of formerly defined extensive forms into NIH-defined moderate or severe forms.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
Thalhammer, O; Coradello, H; Pollak, A; Scheibenreiter, S; Simbruner, G
1976-01-01
An easily applicable score to predict the risk of prematurity (Tab. I) (defined by weight) is examined prospectively (scoring during the 6th month of pregnancy) in 431 and retrospectively (obtained after delivery) in 1183 pregnancies. In the prospective study (Tab. II), 71.4% of all pregnancies resulting in babies below 2501 g exceed the proposed 50-point risk probability limit, whereas only 18.7% of pregnancies with babies of more than 3000 g do so. Excluding pregnancies with 20 or more risk points and excellent prenatal care (8 or more consultations) - which should change the outcome of risk pregnancies - the percentages are 77.8% and 12.2%, respectively (Tab. III). Pregnancies resulting in babies with birth weight of 2501-2750 g exceeded the limit in 38.9%, and those with babies of 2751-3000 g in 20.7%. If 60 risk points are used as the limit, the percentages from more than 3000 g down to less than 2501 g would be 8.2%, 6.9%, 33.3% and 66.7%. In the retrospective study (Tab. V), 14.7% of all pregnancies with babies above 3000 g exceeded the 50-risk-point limit, compared with 57.2% of those with babies below 2501 g. Excluding pregnancies with 20 or more risk points and excellent prenatal care, the percentages are 7.6% and 59.4%, respectively. In the retrospective study, the influence of the quality of prenatal care, measured by the number of consultations (3-4; 5-7; 8 or more), is clearly demonstrable: pregnancies with more than 50 risk points resulted in babies below 2501 g in 80.7%, 57.1% and 19.8% of cases, depending on the quality of care. Pregnancies with 31-50 risk points did so in 47.2%, 20.4% and 11.8%. In 334 women the score could be applied twice, in the 6th month and at delivery. Comparing both scores, it was found that only 1.8% of these women exceeded the 50-risk-point limit by events occurring after the 6th-month scoring (Tab. IV). The score, simple enough to be applied by nurses and midwives, seems able to select 77.8% of pregnancies resulting in babies below 2501 g already during the 6th month of pregnancy, i.e. early enough for preventive measures to be taken that decrease the frequency of underweight births by three quarters.
Predictable patterns of the May-June rainfall anomaly over East Asia
NASA Astrophysics Data System (ADS)
Xing, Wen; Wang, Bin; Yim, So-Young; Ha, Kyung-Ja
2017-02-01
During early summer (May-June, MJ), the East Asian (EA) subtropical front is a defining feature of the Asian monsoon, producing the most prominent precipitation band in the global subtropics. Here we show that dynamical prediction of early summer EA (20°N-45°N, 100°E-130°E) rainfall made by four coupled climate models' ensemble hindcast (1979-2010) yields only moderate skill and cannot be used to estimate predictability. The present study uses an alternative, empirical orthogonal function (EOF)-based physical-empirical (P-E) model approach to predict the rainfall anomaly pattern and estimate its potential predictability. The first three leading modes are physically meaningful and can be attributed, respectively, to (a) the interaction between the anomalous western North Pacific subtropical high and the underlying Indo-Pacific warm ocean, (b) the forcing associated with the North Pacific sea surface temperature (SST) anomaly, and (c) the development of equatorial central Pacific SST anomalies. A suite of P-E models is established to forecast the first three leading principal components. All predictors are available 0 months ahead of May, so the prediction is termed a 0-month-lead prediction. The cross-validated hindcast results demonstrate that these modes may be predicted with significant temporal correlation skills (0.48-0.72). Using the predicted principal components and the corresponding EOF patterns, the total MJ rainfall anomaly was hindcast for the period 1979-2015. The time-mean pattern correlation coefficient (PCC) score reaches 0.38, which is significantly higher than the dynamical models' multimodel ensemble skill (0.21). The estimated potential maximum attainable PCC is around 0.65, suggesting that the dynamical prediction models may have considerable room for improvement. Limitations and future work are discussed.
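The reconstruction step described above (predicted principal components times the corresponding EOF patterns) and the PCC verification metric are both simple linear-algebra operations; a minimal numpy sketch with hypothetical array shapes:

```python
import numpy as np

def reconstruct_anomaly(predicted_pcs, eofs):
    """Rainfall anomaly A(t, x) = sum_k pc_k(t) * eof_k(x).
    Shapes: predicted_pcs (n_years, K), eofs (K, n_gridpoints)."""
    return predicted_pcs @ eofs

def pcc(map_a, map_b):
    """Pattern correlation coefficient between two anomaly maps."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
```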
Goldschmidt, Andrea B.
2017-01-01
Background: Binge eating is a marker of weight gain and obesity, and a hallmark feature of eating disorders. Yet, its component constructs—overeating and loss of control (LOC) while eating—are poorly understood and difficult to measure. Objective: To critically review the human literature concerning the validity of LOC and overeating across the age and weight spectrum. Data sources: English-language articles addressing the face, convergent, discriminant, and predictive validity of LOC and overeating were included. Results: LOC and overeating appear to have adequate face validity. Emerging evidence supports the convergent and predictive validity of the LOC construct, given its unique cross-sectional and prospective associations with numerous anthropometric, psychosocial, and eating behavior-related factors. Overeating may be best conceptualized as a marker of excess weight status. Limitations: Binge eating constructs, particularly in the context of subjectively large episodes, are challenging to measure reliably. Few studies addressed overeating in the absence of LOC, thereby limiting conclusions about the validity of the overeating construct independent of LOC. Additional studies addressing the discriminant validity of both constructs are warranted. Discussion: Suggestions for future weight-related research and for appropriately defining binge eating in the eating disorders diagnostic scheme are presented. PMID:28165655
Day, Troy
2012-01-01
The process of evolutionary diversification unfolds in a vast genotypic space of potential outcomes. During the past century, there have been remarkable advances in the development of theory for this diversification, and the theory's success rests, in part, on the scope of its applicability. A great deal of this theory focuses on a relatively small subset of the space of potential genotypes, chosen largely based on historical or contemporary patterns, and then predicts the evolutionary dynamics within this pre-defined set. To what extent can such an approach be pushed to a broader perspective that accounts for the potential open-endedness of evolutionary diversification? There have been a number of significant theoretical developments along these lines but the question of how far such theory can be pushed has not been addressed. Here a theorem is proven demonstrating that, because of the digital nature of inheritance, there are inherent limits on the kinds of questions that can be answered using such an approach. In particular, even in extremely simple evolutionary systems, a complete theory accounting for the potential open-endedness of evolution is unattainable unless evolution is progressive. The theorem is closely related to Gödel's incompleteness theorem, and to the halting problem from computability theory. PMID:21849390
Heating of trapped ultracold atoms by collapse dynamics
NASA Astrophysics Data System (ADS)
Laloë, Franck; Mullin, William J.; Pearle, Philip
2014-11-01
The continuous spontaneous localization (CSL) theory alters the Schrödinger equation. It describes wave-function collapse as a dynamical process instead of an ill-defined postulate, thereby providing macroscopic uniqueness and solving the so-called measurement problem of standard quantum theory. CSL contains a parameter λ giving the collapse rate of an isolated nucleon in a superposition of two spatially separated states and, more generally, characterizing the collapse time for any physical situation. CSL is experimentally testable, since it predicts some behavior different from that predicted by standard quantum theory. One example is the narrowing of wave functions, which results in energy imparted to particles. Here we consider energy given to trapped ultracold atoms. Since these are the coldest samples under experimental investigation, it is worth inquiring how they are affected by the CSL heating mechanism. We examine the CSL heating of a Bose-Einstein condensate (BEC) in contact with its thermal cloud. Of course, other mechanisms also provide heat and also particle loss. From varied data on optically trapped cesium BECs, we present an energy audit for known heating and loss mechanisms. The result provides an upper limit on CSL heating and thereby an upper limit on the parameter λ. We obtain λ ≲ 1(±1) × 10⁻⁷ s⁻¹.
Hemodynamic variables predict outcome of emergency thoracotomy in the pediatric trauma population.
Wyrick, Deidre L; Dassinger, Melvin S; Bozeman, Andrew P; Porter, Austin; Maxson, R Todd
2014-09-01
Limited data exist regarding indications for resuscitative emergency thoracotomy (ETR) in the pediatric population. We attempt to define the presenting hemodynamic parameters that predict survival for pediatric patients undergoing ETR. We reviewed all pediatric patients (age <18 years) entered into the National Trauma Data Bank from 2007 to 2010 who underwent ETR within one hour of ED arrival. Mechanism of injury and hemodynamics were analyzed using chi-square and Wilcoxon tests. 316 children (70 blunt, 240 penetrating) underwent ETR; 31% (98/316) survived to discharge. Less than 5% of patients survived when presenting SBP was ≤50 mmHg or heart rate was ≤70 bpm. For blunt injuries there were no survivors with a pulse ≤80 bpm or SBP ≤60 mmHg. When survivors were compared to nonsurvivors, blood pressure, pulse, and injury type were statistically significant when treated as independent variables and in a logistic regression model. When ETR was performed for SBP ≤50 mmHg or for heart rate ≤70 bpm, less than 5% of patients survived. There were no survivors of blunt trauma when SBP was ≤60 mmHg or pulse was ≤80 bpm. This review suggests that ETR may have limited benefit in these patients. Copyright © 2014 Elsevier Inc. All rights reserved.
Voltammetric Thin-Layer Ionophore-Based Films: Part 2. Semi-Empirical Treatment.
Yuan, Dajing; Cuartero, Maria; Crespo, Gaston A; Bakker, Eric
2017-01-03
This work reports on a semiempirical treatment that allows one to rationalize and predict experimental conditions for thin-layer ionophore-based films with cation-exchange capacity read out with cyclic voltammetry. The transition between diffusional mass transport and thin-layer regime is described with a parameter (α), which depends on membrane composition, diffusion coefficient, scan rate, and electrode rotating speed. Once the thin-layer regime is fulfilled (α = 1), the membrane behaves in some analogy to a potentiometric sensor with a second discrimination variable (the applied potential) that allows one to operate such electrodes in a multianalyte detection mode owing to the variable applied ion-transfer potentials. The limit of detection of this regime is defined with a second parameter (β = 2) and is chosen in analogy to the definition of the detection limit for potentiometric sensors provided by the IUPAC. The analytical equations were validated through the simulation of the respective cyclic voltammograms under the same experimental conditions. While simulations of high complexity and better accuracy satisfactorily reproduced the experimental voltammograms during the forward and backward potential sweeps (companion paper 1), the semiempirical treatment here, while less accurate, is of low complexity and allows one to quite easily predict relevant experimental conditions for this emergent methodology.
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.; Scheiman, David A.
2005-01-01
This paper documents testing and analyses to quantify International Space Station (ISS) Solar Array Wing (SAW) string electrical performance under highly off-nominal, low-temperature-low-intensity (LILT) operating conditions with nonsolar light sources. This work is relevant for assessing feasibility and risks associated with a Sequential Shunt Unit (SSU) remove and replace (R&R) Extravehicular Activity (EVA). During eclipse, SAW strings can be energized by moonlight, EVA suit helmet lights or video camera lights. To quantify SAW performance under these off-nominal conditions, solar cell performance testing was performed using full moon, solar simulator and Video Camera Luminaire (VCL) light sources. Test conditions included 25 to 110 C temperatures and 1- to 0.0001-Sun illumination intensities. Electrical performance data and calculated eclipse lighting intensities were combined to predict SAW current-voltage output for comparison with electrical hazard thresholds. Worst case predictions show there is no connector pin molten metal hazard but crew shock hazard limits are exceeded due to VCL illumination. Assessment uncertainties and limitations are discussed along with operational solutions to mitigate SAW electrical hazards from VCL illumination. Results from a preliminary assessment of SAW arcing are also discussed. The authors recommend further analyses once SSU, R&R, and EVA procedures are better defined.
Alignment limit of the NMSSM Higgs sector
Carena, Marcela; Haber, Howard E.; Low, Ian; ...
2016-02-17
The Next-to-Minimal Supersymmetric extension of the Standard Model (NMSSM) with a Higgs boson of mass 125 GeV can be compatible with stop masses of order of the electroweak scale, thereby reducing the degree of fine-tuning necessary to achieve electroweak symmetry breaking. Moreover, in an attractive region of the NMSSM parameter space, corresponding to the "alignment limit" in which one of the neutral Higgs fields lies approximately in the same direction in field space as the doublet Higgs vacuum expectation value, the observed Higgs boson is predicted to have Standard-Model-like properties. We derive analytical expressions for the alignment conditions and show that they point toward a more natural region of parameter space for electroweak symmetry breaking, while allowing for perturbativity of the theory up to the Planck scale. Additionally, the alignment limit in the NMSSM leads to a well-defined spectrum in the Higgs and Higgsino sectors, and yields a rich and interesting Higgs boson phenomenology that can be tested at the LHC. Here, we discuss the most promising channels for discovery and present several benchmark points for further study.
Double-Regge exchange limit for the γp → K⁺K⁻p reaction
Shi, M.; Danilkin, I. V.; Fernández-Ramírez, C.; ...
2015-02-01
We apply the generalized Veneziano model (B₅ model) in the double-Regge exchange limit to the γp → K⁺K⁻p reaction. Four different cases defined by the possible combinations of the signature factors of leading Regge exchanges ((K*, a₂/f₂), (K*, ρ/ω), (K*₂, a₂/f₂), and (K*₂, ρ/ω)) have been simulated through the Monte Carlo method. Suitable event candidates for the double-Regge exchange high-energy limit were selected employing Van Hove plots as a better alternative to kinematical cuts in the K⁺K⁻p Dalitz plot. In this way we predict and analyze the double-Regge contribution to the K⁺K⁻p Dalitz plot, which constitutes one of the major backgrounds in the search for strangeonia, hybrids and exotics using the γp → K⁺K⁻p reaction. We expect that data currently under analysis, and that to come in the future, will allow verification of the double-Regge behavior and a better assessment of this component of the amplitude.
Micrometeoroid and Orbital Debris (MMOD) Shield Ballistic Limit Analysis Program
NASA Technical Reports Server (NTRS)
Ryan, Shannon
2013-01-01
This software implements penetration limit equations for common micrometeoroid and orbital debris (MMOD) shield configurations, windows, and thermal protection systems. Allowable MMOD risk is formulated in terms of the probability of no penetration (PNP) of the spacecraft pressure hull. Calculating the risk requires spacecraft geometry models, mission profiles, debris environment models, and penetration limit equations for the installed shielding configurations. Risk assessment software such as NASA's BUMPERII is used to calculate mission PNP; however, such tools are unsuitable for use in shield design and preliminary analysis studies. This software defines a single equation for the design and performance evaluation of common MMOD shielding configurations, windows, and thermal protection systems, along with a description of their validity range and guidelines for their application. Recommendations are based on preliminary reviews of fundamental assumptions and on accuracy in predicting experimental impact test results. The software is programmed in Visual Basic for Applications for installation as a simple add-in for Microsoft Excel. The user is directed to a graphical user interface (GUI) that gathers user inputs and provides solutions directly in Microsoft Excel workbooks.
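Since the abstract frames allowable risk through PNP, a compact illustration helps: under the standard Poisson assumption used in MMOD risk work, PNP is the probability of zero penetrating impacts over the mission. The sketch below uses hypothetical flux, area, and duration values (none come from the text); it is a minimal illustration, not the BUMPERII or add-in implementation.

```python
import math

def probability_of_no_penetration(flux_per_m2_yr: float,
                                  exposed_area_m2: float,
                                  mission_years: float) -> float:
    """PNP under the usual Poisson assumption: if N is the expected
    number of penetrating impacts over the mission, PNP = exp(-N)."""
    n_expected = flux_per_m2_yr * exposed_area_m2 * mission_years
    return math.exp(-n_expected)

# Hypothetical numbers: 1e-5 penetrating impacts per m^2 per year,
# 100 m^2 of shielded surface, 10-year mission.
print(f"PNP = {probability_of_no_penetration(1e-5, 100.0, 10.0):.4f}")
```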
Therapeutic drug monitoring in pregnancy.
Matsui, Doreen M
2012-10-01
Therapeutic drug monitoring (TDM) is commonly recommended to optimize drug dosing regimens of various medications. It has been proposed to guide therapy in pregnant women, in whom physiological changes may lead to altered pharmacokinetics, making the appropriate drug dosage difficult to predict. Ideally, TDM may play a role in enhancing the effectiveness of treatment while minimizing toxicity to both the mother and the fetus. Monitoring of drug levels may also be helpful in assessing adherence to prescribed therapy in selected cases. Limitations exist, as therapeutic ranges have been defined for only a limited number of drugs and are based on data obtained in nonpregnant patients. TDM has been suggested for anticonvulsants, antidepressants, and antiretroviral drugs, based on pharmacokinetic studies that have shown reduced drug concentrations during pregnancy. However, there is only relatively limited (and sometimes inconsistent) information regarding the clinical impact of these pharmacokinetic changes during pregnancy and the effect of subsequent dose adjustments. Further studies are required to determine whether implementation of TDM during pregnancy improves outcomes and is associated with any benefit beyond that achieved by clinical judgment alone. The cost effectiveness of TDM programs during pregnancy also remains to be examined.
Hey, Christiane; Lange, Benjamin P; Eberle, Silvia; Zaretsky, Yevgen; Sader, Robert; Stöver, Timo; Wagenblast, Jens
2013-09-01
Patients with head and neck cancer (HNC) are at high risk for oropharyngeal dysphagia (OD) following surgical therapy. Early identification of OD can improve outcomes and reduce economic burden. This study aimed to evaluate the validity of a water screening test with increasing volumes, administered postsurgically to patients with HNC (N=80), for the early identification of OD in general, and to determine whether further instrumental diagnostics are needed to investigate the presence of aspiration and to establish the limitations of oral intake as defined by fiberoptic endoscopic evaluation of swallowing. OD in general was identified in 65%, with aspiration in 49%, silent aspiration in 21%, and limitations of oral intake in 56%. Despite good sensitivity (100% for aspiration and 97.8% for limitations of oral intake), the water screening test did not satisfactorily predict either of these reference criteria due to its low positive likelihood ratio (aspiration = 2.6; limitations of oral intake = 3.1). However, it is an accurate tool for the early identification of OD in general, with a sensitivity of 96.2% and a positive likelihood ratio of 5.4 in patients after surgery for HNC.
Local backbone structure prediction of proteins
De Brevern, Alexandre G.; Benros, Cristina; Gautier, Romain; Valadié, Hélène; Hazout, Serge; Etchebest, Catherine
2004-01-01
Summary A statistical analysis of the PDB structures has led us to define a new set of small 3D structural prototypes called Protein Blocks (PBs). This structural alphabet includes 16 PBs, each defined by the (φ, Ψ) dihedral angles of 5 consecutive residues. The amino acid distributions observed in sequence windows encompassing these PBs are used to predict, by a Bayesian approach, the local 3D structure of proteins from the sole knowledge of their sequences. LocPred is a software tool that allows users to submit a protein sequence and performs a prediction in terms of PBs. The prediction results are given both textually and graphically. PMID:15724288
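The Bayesian step described here can be made concrete with a toy scorer: choose the PB that maximizes the prior times the per-position amino-acid likelihoods over the sequence window. The tables below are invented placeholders (two blocks, a 5-residue window), not the published LocPred frequencies; this is a minimal sketch of the approach, assuming independence across window positions.

```python
import math

# Toy illustration (not the published LocPred tables): two protein blocks,
# a 5-residue window, and per-position amino-acid likelihoods.
PRIORS = {"PB_a": 0.6, "PB_b": 0.4}
LIKELIHOODS = {
    "PB_a": [{"A": 0.5, "G": 0.5}] * 5,
    "PB_b": [{"A": 0.2, "G": 0.8}] * 5,
}

def predict_pb(window: str) -> str:
    """Pick the block maximizing log P(PB) + sum_i log P(aa_i | PB)."""
    best_pb, best_score = None, -math.inf
    for pb, prior in PRIORS.items():
        score = math.log(prior)
        for pos, aa in enumerate(window):
            score += math.log(LIKELIHOODS[pb][pos].get(aa, 1e-6))
        if score > best_score:
            best_pb, best_score = pb, score
    return best_pb

print(predict_pb("AGGGA"))
```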
ERIC Educational Resources Information Center
Bauer, Jack J.; McAdams, Dan P.
2010-01-01
We examine (a) the normative course of eudaimonic well-being in emerging adulthood and (b) whether people's narratives of major life goals might prospectively predict eudaimonic growth 3 years later. We define eudaimonic growth as longitudinal increases in eudaimonic well-being, which we define as the combination of psychosocial maturity and…
Tan, Y M; Flynn, M R
2000-10-01
The transfer efficiency of a spray-painting gun is defined as the amount of coating applied to the workpiece divided by the amount sprayed. Characterizing this transfer process allows for accurate estimation of the overspray generation rate, which is important for determining a spray painter's exposure to airborne contaminants. This study presents an experimental evaluation of a mathematical model for predicting the transfer efficiency of a high volume-low pressure spray gun. The effects of gun-to-surface distance and nozzle pressure on the agreement between the transfer efficiency measurement and prediction were examined. Wind tunnel studies and non-volatile vacuum pump oil in place of commercial paint were used to determine transfer efficiency at nine gun-to-surface distances and four nozzle pressure levels. The mathematical model successfully predicts transfer efficiency within the uncertainty limits. The least squares regression between measured and predicted transfer efficiency has a slope of 0.83 and an intercept of 0.12 (R2 = 0.98). Two correction factors were determined to improve the mathematical model. At higher nozzle pressure settings, 6.5 psig and 5.5 psig, the correction factor is a function of both gun-to-surface distance and nozzle pressure level. At lower nozzle pressures, 4 psig and 2.75 psig, gun-to-surface distance slightly influences the correction factor, while nozzle pressure has no discernible effect.
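The reported calibration (slope 0.83, intercept 0.12, R2 = 0.98) amounts to an ordinary least-squares fit of measured against predicted transfer efficiency, which can then serve as a correction. The sketch below uses made-up paired values to show the mechanics; the real fit would use the wind-tunnel measurements.

```python
import numpy as np

# Hypothetical paired data: model-predicted vs measured transfer efficiency.
predicted = np.array([0.55, 0.62, 0.70, 0.78, 0.85, 0.90])
measured  = np.array([0.57, 0.63, 0.69, 0.77, 0.83, 0.87])

# Regress measured on predicted (the paper reports slope 0.83, intercept 0.12).
slope, intercept = np.polyfit(predicted, measured, 1)
r2 = np.corrcoef(predicted, measured)[0, 1] ** 2
print(f"measured ~ {slope:.2f} * predicted + {intercept:.2f}  (R2 = {r2:.3f})")

def corrected_te(te_model: float) -> float:
    """Calibrated prediction: apply the fitted linear correction."""
    return slope * te_model + intercept
```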
Britto, Ingrid Schwach Werneck; Sananes, Nicolas; Olutoye, Oluyinka O; Cass, Darrell L; Sangi-Haghpeykar, Haleh; Lee, Timothy C; Cassady, Christopher I; Mehollin-Ray, Amy; Welty, Stephen; Fernandes, Caraciolo; Belfort, Michael A; Lee, Wesley; Ruano, Rodrigo
2015-10-01
The purpose of this study was to evaluate the impact of standardization of the lung-to-head ratio measurements in isolated congenital diaphragmatic hernia on prediction of neonatal outcomes and reproducibility. We conducted a retrospective cohort study of 77 cases of isolated congenital diaphragmatic hernia managed in a single center between 2004 and 2012. We compared lung-to-head ratio measurements that were performed prospectively in our institution without standardization to standardized measurements performed according to a defined protocol. The standardized lung-to-head ratio measurements were statistically more accurate than the nonstandardized measurements for predicting neonatal mortality (area under the receiver operating characteristic curve, 0.85 versus 0.732; P = .003). After standardization, there were no statistical differences in accuracy between measurements regardless of whether we considered observed-to-expected values (P > .05). Standardization of the lung-to-head ratio did not improve prediction of the need for extracorporeal membrane oxygenation (P > .05). Both intraoperator and interoperator reproducibility were good for the standardized lung-to-head ratio (intraclass correlation coefficient, 0.98 [95% confidence interval, 0.97-0.99]; bias, 0.02 [limits of agreement, -0.11 to +0.15], respectively). Standardization of lung-to-head ratio measurements improves prediction of neonatal outcomes. Further studies are needed to confirm these results and to assess the utility of standardization of other prognostic parameters.
NASA Astrophysics Data System (ADS)
Farmann, Alexander; Sauer, Dirk Uwe
2016-10-01
This study provides an overview of available techniques for on-board State-of-Available-Power (SoAP) prediction of lithium-ion batteries (LIBs) in electric vehicles. Different approaches dealing with the on-board estimation of battery State-of-Charge (SoC) or State-of-Health (SoH) have been discussed extensively in past research. However, the topic of SoAP prediction has not yet been explored comprehensively. Predicting the maximum power that can be applied to the battery by discharging or charging it during acceleration, regenerative braking, and gradient climbing is one of the most challenging tasks of battery management systems. In large lithium-ion battery packs, factors such as temperature distribution and cell-to-cell deviations in actual battery impedance or capacity, whether in the initial or aged state, make efficient and reliable methods for battery state estimation essential. The available battery power is limited by the safe operating area (SOA), where the SOA is defined by battery temperature, current, voltage, and SoC. Accurate SoAP prediction allows the energy management system to regulate the power flow of the vehicle more precisely, optimize battery performance, and improve battery lifetime accordingly. To this end, scientific and technical literature sources are studied and available approaches are reviewed.
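To make the SOA idea concrete, consider a minimal sketch: with a simple open-circuit-voltage plus internal-resistance cell model, peak discharge power follows from whichever SOA constraint binds first, the current ceiling or the voltage floor. The pack values below are hypothetical, and real SoAP predictors also account for temperature, SoC, and dynamic effects.

```python
def max_discharge_power(ocv: float, r_int: float,
                        v_min: float, i_max: float) -> float:
    """Peak discharge power for a simple OCV + internal-resistance model,
    constrained by the SOA voltage floor and current ceiling."""
    # Current at which the terminal voltage would hit the SOA floor.
    i_at_vmin = (ocv - v_min) / r_int
    i_limit = min(i_max, i_at_vmin)
    v_terminal = ocv - r_int * i_limit
    return v_terminal * i_limit

# Hypothetical pack-level values: 350 V OCV, 0.1 ohm, 270 V floor, 400 A limit.
print(f"SoAP ~ {max_discharge_power(350.0, 0.1, 270.0, 400.0) / 1e3:.1f} kW")
```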
He, Steven Y; McCulloch, Charles E; Boscardin, W John; Chren, Mary-Margaret; Linos, Eleni; Arron, Sarah T
2014-10-01
Fitzpatrick skin phototype (FSPT) is the most common method used to assess sunburn risk and is an independent predictor of skin cancer risk. Because of a conventional assumption that FSPT is predictable from pigmentary phenotypes, physicians frequently estimate FSPT based on patient appearance. We sought to determine the degree to which self-reported race and pigmentary phenotypes are predictive of FSPT in a large, ethnically diverse population. A cross-sectional survey collected responses from 3386 individuals regarding self-reported FSPT, pigmentary phenotypes, race, age, and sex. Univariate and multivariate logistic regression analyses were performed to determine variables that significantly predict FSPT. Race, sex, skin color, eye color, and hair color are significant but weak independent predictors of FSPT (P<.0001). A multivariate model constructed using all independent predictors of FSPT accurately predicted FSPT only to within 1 point on the Fitzpatrick scale, with 92% accuracy at that tolerance (weighted kappa statistic 0.53). Our study was enriched for responses from ethnic minorities and does not fully represent the demographics of the US population. Patient self-reported race and pigmentary phenotypes are inaccurate predictors of sun sensitivity as defined by FSPT. There are limitations to using patient-reported race and appearance in predicting individual sunburn risk. Copyright © 2014 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
Kinetic bottlenecks to chemical exchange rates for deep-sea animals - Part 1: Oxygen
NASA Astrophysics Data System (ADS)
Hofmann, A. F.; Peltzer, E. T.; Brewer, P. G.
2012-10-01
Ocean warming will reduce dissolved oxygen concentrations, which can pose challenges to marine life. Oxygen limits are traditionally reported simply as static concentration thresholds with no temperature, pressure or flow rate dependency. Here we treat the oceanic oxygen supply potential for heterotrophic consumption as a dynamic molecular exchange problem analogous to familiar gas exchange processes at the sea surface. A combination of the purely physico-chemical oceanic properties of temperature, hydrostatic pressure, and oxygen concentration defines the ability of the ocean to supply oxygen to any given animal. This general oceanic oxygen supply potential is modulated by animal-specific properties, such as the diffusive boundary layer thickness, to define and limit maximal oxygen supply rates. Here we combine all these properties into formal, mechanistic equations defining novel oceanic properties that subsume various relevant classical oceanographic parameters to better visualize, map, comprehend, and predict the impact of ocean deoxygenation on aerobic life. By explicitly including temperature and hydrostatic pressure in our quantities, ocean regions ranging from the cold deep sea to warm coastal seas can be compared. We define purely physico-chemical quantities to describe the oceanic oxygen supply potential, but also quantities that contain organism-specific properties and, in a generalized way, describe broader concepts and dependencies. We apply these novel quantities to example oceanic profiles around the world and find that the temperature and pressure dependencies of diffusion and partial pressure create zones of greatest physical constriction on oxygen supply, typically at around 1000 m depth, which coincide with oxygen concentration minimum zones. In these zones, which comprise the bulk of the world ocean, ocean warming and deoxygenation have a clear negative effect for aerobic life. In some shallow and warm waters, the enhanced diffusion and higher partial pressure due to higher temperatures might slightly overcompensate for oxygen concentration decreases due to decreased solubility.
Linden, Ariel
2006-04-01
Diagnostic or predictive accuracy concerns are common in all phases of a disease management (DM) programme, and ultimately play an influential role in the assessment of programme effectiveness. Areas such as the identification of diseased patients, predictive modelling of future health status and costs, and risk stratification are just a few of the domains in which assessment of accuracy is beneficial, if not critical. The most commonly used analytical model for this purpose is the standard 2 x 2 table method in which sensitivity and specificity are calculated. However, there are several limitations to this approach, including the reliance on a single defined criterion or cut-off for determining a true-positive result, use of non-standardized measurement instruments and sensitivity to outcome prevalence. This paper introduces receiver operator characteristic (ROC) analysis as a more appropriate and useful technique for assessing diagnostic and predictive accuracy in DM. Its advantages include: testing accuracy across the entire range of scores, thereby not requiring a predetermined cut-off point; easily examined visual and statistical comparisons across tests or scores; and independence from outcome prevalence. Therefore, the implementation of ROC analysis as an evaluation tool should be strongly considered in the various phases of a DM programme.
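As a concrete companion to the ROC discussion, the sketch below computes the area under an ROC curve by sweeping every observed score as a cut-off and applying the trapezoidal rule, exactly the property (no single predetermined cut-off) the paper emphasizes. The scores and labels are toy values and tie handling is simplified.

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """Area under the ROC curve via the trapezoidal rule, sweeping
    every score as a cut-off rather than fixing one in advance."""
    order = np.argsort(-scores)          # descending score
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    tpr = np.concatenate([[0.0], tpr])   # start the curve at (0, 0)
    fpr = np.concatenate([[0.0], fpr])
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # 1 = diseased
print(f"AUC = {roc_auc(scores, labels):.3f}")
```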
Wollstein, Andreas; Walsh, Susan; Liu, Fan; Chakravarthy, Usha; Rahu, Mati; Seland, Johan H; Soubrane, Gisèle; Tomazzoli, Laura; Topouzis, Fotis; Vingerling, Johannes R; Vioque, Jesus; Böhringer, Stefan; Fletcher, Astrid E; Kayser, Manfred
2017-02-27
Success of genetic association and the prediction of phenotypic traits from DNA are known to depend on the accuracy of phenotype characterization, amongst other parameters. To overcome limitations in the characterization of human iris pigmentation, we introduce a fully automated approach that specifies the areal proportions proposed to represent differing pigmentation types, such as pheomelanin, eumelanin, and non-pigmented areas within the iris. We demonstrate the utility of this approach using high-resolution digital eye imagery and genotype data from 12 selected SNPs from over 3000 European samples of seven populations that are part of the EUREYE study. In comparison to previous quantification approaches, (1) we achieved an overall improvement in eye colour phenotyping, which provides a better separation of manually defined eye colour categories. (2) Single nucleotide polymorphisms (SNPs) known to be involved in human eye colour variation showed stronger associations with our approach. (3) We found new and confirmed previously noted SNP-SNP interactions. (4) We increased SNP-based prediction accuracy of quantitative eye colour. Our findings exemplify that precise quantification using the perceived biological basis of pigmentation leads to enhanced genetic association and prediction of eye colour. We expect our approach to deliver new pigmentation genes when applied to genome-wide association testing.
Fooshee, David R.; Nguyen, Tran B.; Nizkorodov, Sergey A.; Laskin, Julia; Laskin, Alexander; Baldi, Pierre
2012-01-01
Atmospheric organic aerosols (OA) represent a significant fraction of airborne particulate matter and can impact climate, visibility, and human health. These mixtures are difficult to characterize experimentally due to their complex and dynamic chemical composition. We introduce a novel Computational Brewing Application (COBRA) and apply it to modeling oligomerization chemistry stemming from condensation and addition reactions in OA formed by photooxidation of isoprene. COBRA uses two lists as input: a list of chemical structures comprising the molecular starting pool, and a list of rules defining potential reactions between molecules. Reactions are performed iteratively, with products of all previous iterations serving as reactants for the next. The simulation generated thousands of structures in the mass range of 120–500 Da, and correctly predicted ~70% of the individual OA constituents observed by high-resolution mass spectrometry. Select predicted structures were confirmed with tandem mass spectrometry. Esterification was shown to play the most significant role in oligomer formation, with hemiacetal formation less important, and aldol condensation insignificant. COBRA is not limited to atmospheric aerosol chemistry; it should be applicable to the prediction of reaction products in other complex mixtures for which reasonable reaction mechanisms and seed molecules can be supplied by experimental or theoretical methods. PMID:22568707
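The iterate-rules-over-a-pool loop that COBRA performs can be shown in miniature. In the sketch below, strings stand in for molecular structures and a single stand-in "esterification" rule pairs pool members; the real system operates on chemical structures with condensation and addition rules and filters by molecular mass, which the length cap only mimics.

```python
def esterify(a: str, b: str) -> str:
    """Stand-in 'esterification': join two pool members into a product."""
    return f"{a}~{b}"

def brew(seed_pool, rules, iterations=2, mass_cap=24):
    """Iteratively apply every rule to every ordered pair in the pool,
    keeping products under a mass cap (string length mimics mass)."""
    pool = set(seed_pool)
    for _ in range(iterations):
        products = {rule(a, b) for rule in rules for a in pool for b in pool}
        pool |= {p for p in products if len(p) <= mass_cap}
    return pool

print(sorted(brew({"glyoxal", "triol"}, [esterify])))
```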
FutureTox II: in vitro data and in silico models for predictive toxicology.
Knudsen, Thomas B; Keller, Douglas A; Sander, Miriam; Carney, Edward W; Doerrer, Nancy G; Eaton, David L; Fitzpatrick, Suzanne Compton; Hastings, Kenneth L; Mendrick, Donna L; Tice, Raymond R; Watkins, Paul B; Whelan, Maurice
2015-02-01
FutureTox II, a Society of Toxicology Contemporary Concepts in Toxicology workshop, was held in January 2014. The meeting goals were to review and discuss the state of the science in toxicology in the context of implementing the NRC 21st century vision of predicting in vivo responses from in vitro and in silico data, and to define the goals for the future. Presentations and discussions were held on priority concerns such as prediction and modeling of metabolism, cell growth and differentiation, effects on sensitive subpopulations, and integration of data into risk assessment. Emerging trends in technologies such as stem cell-derived human cells, 3D organotypic culture models, mathematical modeling of cellular processes and morphogenesis, adverse outcome pathway development, and high-content imaging of in vivo systems were discussed. Although advances in moving towards an in vitro/in silico-based risk assessment paradigm were apparent, knowledge gaps in these areas and limitations of technologies were identified. Specific recommendations were made for future directions and research needs in the areas of hepatotoxicity, cancer prediction, developmental toxicity, and regulatory toxicology. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Wakefield, J C; Schmitz, M F
2016-04-01
To establish which symptoms of major depressive episode (MDE) predict postremission suicide attempts in complicated single-episode cases. Using the nationally representative two-wave National Epidemiologic Survey on Alcohol and Related Conditions data set, we identified wave 1 lifetime single-episode MDE cases in which the episode remitted by the beginning of the wave 2 three-year follow-up period (N = 2791). The analytic sample was further limited to 'complicated' cases (N = 1872) known to have elevated suicide attempt rates, defined as having two or more of the following: suicidal ideation, marked role impairment, feeling worthless, psychomotor retardation, and prolonged (>6 months) duration. Logistic regression analyses showed that, after controlling for wave 1 suicide attempt, which significantly predicted postremission suicide attempt (OR = 10.0), the additional complicated symptom 'feelings of worthlessness' during the wave 1 index episode significantly and very substantially predicted postremission suicide attempt (OR = 6.96). Neither wave 1 psychomotor retardation nor wave 1 suicidal ideation nor any of the other wave 1 depressive symptoms significantly predicted wave 2 suicide attempt. Among depressive symptoms during an MDE, feelings of worthlessness is the only significant indicator of elevated risk of suicide attempt after the episode has remitted, beyond previous suicide attempts. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A predictability study of Lorenz's 28-variable model as a dynamical system
NASA Technical Reports Server (NTRS)
Krishnamurthy, V.
1993-01-01
The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
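One quantitative thread of this abstract, exponential growth of small errors at a rate set by the largest Liapunov exponent followed by saturation, yields a back-of-envelope predictability limit. The sketch below assumes pure exponential growth E(t) = E0 exp(λt) up to a saturation level; the exponent and error magnitudes are illustrative, not values from the model.

```python
import numpy as np

def predictability_limit(e0: float, e_sat: float, lam: float) -> float:
    """Time for a small error to reach saturation under exponential
    growth E(t) = e0 * exp(lam * t): t* = ln(e_sat / e0) / lam."""
    return np.log(e_sat / e0) / lam

# Hypothetical values: initial error 1e-5 of saturation, exponent 0.4 / day.
print(f"limit ~ {predictability_limit(1e-5, 1.0, 0.4):.1f} days")
```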
Microbial genomic island discovery, visualization and analysis.
Bertelli, Claire; Tilley, Keith E; Brinkman, Fiona S L
2018-06-03
Horizontal gene transfer (also called lateral gene transfer) is a major mechanism for microbial genome evolution, enabling rapid adaptation and survival in specific niches. Genomic islands (GIs), commonly defined as clusters of bacterial or archaeal genes of probable horizontal origin, are of particular medical, environmental and/or industrial interest, as they disproportionately encode virulence factors and some antimicrobial resistance genes and may harbor entire metabolic pathways that confer a specific adaptation (solvent resistance, symbiosis properties, etc.). As large-scale analyses of microbial genomes increase, such as for genomic epidemiology investigations of infectious disease outbreaks in public health, there is increased appreciation of the need to accurately predict and track GIs. Over the past decade, numerous computational tools have been developed to tackle the challenges inherent in accurate GI prediction. We review here the main types of GI prediction methods and discuss their advantages and limitations for routine analysis of microbial genomes in this era of rapid whole-genome sequencing. An assessment is provided of 20 GI prediction software methods that use sequence-composition bias to identify GIs, using a reference GI data set from 104 genomes obtained using an independent comparative genomics approach. Finally, we present guidelines to assist researchers in effectively identifying these key genomic regions.
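Sequence-composition methods of the kind benchmarked here flag genome windows whose composition departs from the genome background. The sketch below is the simplest possible instance, a z-score on windowed GC content with a synthetic genome; production tools use richer signals (k-mer bias, codon usage, marker genes) and more careful thresholds.

```python
import numpy as np

def gc_outlier_windows(genome: str, window: int = 5000, z_cut: float = 2.0):
    """Flag windows whose GC content deviates strongly from the genome
    mean, a minimal instance of sequence-composition bias detection."""
    gc = []
    for start in range(0, len(genome) - window + 1, window):
        chunk = genome[start:start + window]
        gc.append((chunk.count("G") + chunk.count("C")) / window)
    gc = np.array(gc)
    z = (gc - gc.mean()) / gc.std()
    return [i * window for i in np.where(np.abs(z) > z_cut)[0]]

# Toy genome: mostly balanced composition, with one GC-rich insert.
rng = np.random.default_rng(0)
genome = "".join(rng.choice(list("ACGT"), size=100_000))
genome = genome[:40_000] + "GC" * 2_500 + genome[45_000:]
print("candidate GI starts:", gc_outlier_windows(genome))
```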
Quantitative prediction of oral cancer risk in patients with oral leukoplakia.
Liu, Yao; Li, Yicheng; Fu, Yue; Liu, Tong; Liu, Xiaoyong; Zhang, Xinyan; Fu, Jie; Guan, Xiaobing; Chen, Tong; Chen, Xiaoxin; Sun, Zheng
2017-07-11
Exfoliative cytology has been widely used for early diagnosis of oral squamous cell carcinoma. We previously developed an oral cancer risk index using the DNA index value to quantitatively assess cancer risk in patients with oral leukoplakia, but with limited success. In order to improve the performance of the risk index, we collected exfoliative cytology, histopathology, and clinical follow-up data from two independent cohorts of normal, leukoplakia, and cancer subjects (a training set and a validation set). Peaks were defined on the basis of first derivatives with positive values, and modern machine learning techniques were utilized to build statistical prediction models on the reconstructed data. Random forest was found to be the best model, with high sensitivity (100%) and specificity (99.2%). Using the Peaks-Random Forest model, we constructed an index (OCRI2) as a quantitative measurement of cancer risk. Among 11 leukoplakia patients with an OCRI2 over 0.5, 4 (36.4%) developed cancer during follow-up (23 ± 20 months), whereas 3 (5.3%) of 57 leukoplakia patients with an OCRI2 less than 0.5 developed cancer (32 ± 31 months). OCRI2 is better than other methods in predicting oral squamous cell carcinoma during follow-up. In conclusion, we have developed an exfoliative cytology-based method for quantitative prediction of cancer risk in patients with oral leukoplakia.
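A random-forest risk index of the OCRI2 type can be sketched with scikit-learn (assumed available): train a forest on feature vectors, then read the predicted probability of the cancer class as the index and threshold it at 0.5. The features and labels below are synthetic stand-ins for the cytology-derived peak features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for cytology features (the real model was trained on
# peak features reconstructed from DNA-index data).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# An OCRI2-style index: predicted probability of the cancer class,
# with 0.5 as the decision threshold used for follow-up stratification.
risk_index = clf.predict_proba(X[:5])[:, 1]
print(np.round(risk_index, 2), risk_index > 0.5)
```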
A New Approach to Defining Human Touch Temperature Standards
NASA Technical Reports Server (NTRS)
Ungar, Eugene; Stroud, Kenneth
2010-01-01
Defining touch temperature limits for skin contact with both hot and cold objects is important to prevent pain and skin damage, which may affect task performance or become a safety concern. Pain and skin damage depend on the skin temperature during contact, which depends on the contact thermal conductance, the object's initial temperature, and its material properties. However, previous spacecraft standards have incorrectly defined touch temperature limits in terms of a single object temperature value for all materials, or have provided limited material-specific values which do not cover the gamut of likely designs. A new approach has been developed for updated NASA standards, which defines touch temperature limits in terms of skin temperature at pain onset for bare skin contact with hot and cold objects. The authors have developed an analytical verification method for safe hot and cold object temperatures for contact times from 1 second to infinity.
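The physics behind material-specific limits can be illustrated with the classic perfect-contact result for two semi-infinite bodies: the interface temperature is the effusivity-weighted mean of the two initial temperatures, so high-effusivity metals pull skin much closer to their own temperature than low-effusivity foams do. This is a simplified model (it ignores contact conductance and finite thickness), and the effusivity values below are round illustrative numbers.

```python
def contact_temperature(t_skin: float, t_obj: float,
                        b_skin: float, b_obj: float) -> float:
    """Interface temperature for two semi-infinite bodies in perfect
    contact, weighted by thermal effusivity b = sqrt(k * rho * c)."""
    return (b_skin * t_skin + b_obj * t_obj) / (b_skin + b_obj)

# Illustrative effusivities (J m^-2 K^-1 s^-0.5): skin ~1000, aluminum ~24000.
t_c = contact_temperature(34.0, 60.0, 1000.0, 24000.0)
print(f"skin contact temperature ~ {t_c:.1f} C")  # metal dominates, ~59 C
```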
A New Approach to Defining Human Touch Temperature Standards
NASA Technical Reports Server (NTRS)
Ungar, Eugene; Stroud, Kenneth
2009-01-01
Defining touch temperature limits for skin contact with both hot and cold objects is important to prevent pain and skin damage, which may affect task performance or become a safety concern. Pain and skin damage depend on the resulting skin temperature during contact, which depends on the object's initial temperature, its material properties and its ability to transfer heat. However, previous spacecraft standards have incorrectly defined touch temperature limits in terms of a single object temperature value for all materials, or have provided limited material-specific values which do not cover the gamut of most designs. A new approach is being used in new NASA standards, which defines touch temperature limits in terms of skin temperature at pain onset for bare skin contact with hot and cold objects. The authors have developed an analytical verification method for safe hot and cold object temperatures for contact times from 1 second to infinity.
NASA Technical Reports Server (NTRS)
Ormsbee, A. I.; Bragg, M. B.; Maughmer, M. D.
1981-01-01
A set of relationships used to scale small-size dispersion studies to full-size results is experimentally verified and, with some qualifications, basic deposition patterns are presented. In the process of validating these scaling laws, the basic experimental techniques used in conducting such studies, both with and without an operational propeller, were developed. The procedures that evolved are outlined in some detail. The envelope of test conditions that can be accommodated in the Langley Vortex Research Facility, which was developed theoretically, is verified using a series of vortex trajectory experiments that help to define the limitations due to wall interference effects for models of different sizes.
Reproducible surface-enhanced Raman quantification of biomarkers in multicomponent mixtures.
De Luca, Anna Chiara; Reader-Harris, Peter; Mazilu, Michael; Mariggiò, Stefania; Corda, Daniela; Di Falco, Andrea
2014-03-25
Direct and quantitative detection of unlabeled glycerophosphoinositol (GroPIns), an abundant cytosolic phosphoinositide derivative, would allow rapid evaluation of several malignant cell transformations. Here we report label-free analysis of GroPIns via surface-enhanced Raman spectroscopy (SERS) with a sensitivity of 200 nM, well below its apparent concentration in cells. Crucially, our SERS substrates, based on lithographically defined gold nanofeatures, can be used to predict accurately the GroPIns concentration even in multicomponent mixtures, avoiding the preliminary separation of individual compounds. Our results represent a critical step toward the creation of SERS-based biosensor for rapid, label-free, and reproducible detection of specific molecules, overcoming limits of current experimental methods.
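Predicting a concentration from a multicomponent SERS spectrum without separating compounds is, at its simplest, a linear unmixing problem: express the mixture spectrum as a weighted sum of reference spectra and solve for the weights. The sketch below uses random stand-in spectra and ordinary least squares (a non-negativity constraint would be a natural refinement); it is not the calibration actually used in the paper.

```python
import numpy as np

# Toy component spectra (columns) and a mixture; real use would employ
# measured SERS reference spectra of GroPIns and co-occurring species.
wavenumbers = 100
rng = np.random.default_rng(2)
components = np.abs(rng.normal(size=(wavenumbers, 3)))   # 3 reference spectra
true_conc = np.array([0.2, 1.0, 0.5])
mixture = components @ true_conc + rng.normal(scale=0.01, size=wavenumbers)

# Ordinary least squares: solve components @ conc ~ mixture for conc.
conc, *_ = np.linalg.lstsq(components, mixture, rcond=None)
print(np.round(conc, 3))   # ~ [0.2, 1.0, 0.5] without separating compounds
```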
NASA Technical Reports Server (NTRS)
Fehrman, A. L.; Masek, R. V.
1972-01-01
Quantitative estimates of the uncertainty in predicting aerodynamic heating rates for a fully reusable space shuttle system are developed and the impact of these uncertainties on Thermal Protection System (TPS) weight are discussed. The study approach consisted of statistical evaluations of the scatter of heating data on shuttle configurations about state-of-the-art heating prediction methods to define the uncertainty in these heating predictions. The uncertainties were then applied as heating rate increments to the nominal predicted heating rate to define the uncertainty in TPS weight. Separate evaluations were made for the booster and orbiter, for trajectories which included boost through reentry and touchdown. For purposes of analysis, the vehicle configuration is divided into areas in which a given prediction method is expected to apply, and separate uncertainty factors and corresponding uncertainty in TPS weight derived for each area.
NASA Technical Reports Server (NTRS)
Olson, Sandra L.; Beeson, Harold; Fernandez-Pello, A. Carlos
2014-01-01
Repeated Test 1 extinction tests near the upward flammability limit are expected to follow a Poisson process trend. This trend suggests that rather than define a ULOI and MOC (which requires two limits to be determined), it might be better to define a single upward limit as the condition at which a fraction 1/e (e ≈ 2.7183) of the materials burn, corresponding to the characteristic scale of the normalized Poisson process, or, rounding, where approximately 1/3 of the samples fail the test (and burn). Recognizing that spacecraft atmospheres will not bound the entire oxygen-pressure parameter space, but actually lie along the normoxic atmosphere control band, we can focus materials flammability testing along this normoxic band. A Normoxic Upward Limiting Pressure (NULP) is defined that determines the minimum safe total pressure for a material within the constant partial pressure control band. Then, increasing this pressure limit by a factor of safety, we can define the material as being safe to use at the NULP + SF (where the SF is on the order of 10 kPa, based on existing flammability data). It is recommended that the thickest material to be tested with the current Test 1 igniter should be 3 mm (1/8 inch) thick, to avoid the problem of differentiating between an ignition limit and a true flammability limit.
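The single-limit criterion can be applied numerically: given burn fractions from repeated tests at several total pressures along the normoxic band, find the pressure at which the burn fraction crosses 1/e, then add the safety factor. The burn-fraction data below are hypothetical, and simple linear interpolation stands in for a proper Poisson or logistic fit.

```python
import numpy as np

# Hypothetical burn fractions from repeated Test 1 runs along the normoxic
# band: flammability decreases as total pressure rises (O2 fraction drops).
pressure_kpa = np.array([40, 45, 50, 55, 60, 65])
burn_frac    = np.array([0.90, 0.70, 0.45, 0.30, 0.15, 0.05])

target = 1 / np.e    # ~0.368, the proposed single-limit criterion
# np.interp needs ascending x, so interpolate on the reversed arrays.
nulp = np.interp(target, burn_frac[::-1], pressure_kpa[::-1])
print(f"NULP ~ {nulp:.1f} kPa; safe use >= {nulp + 10.0:.1f} kPa (SF ~ 10 kPa)")
```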
Signal-averaged P wave in patients with paroxysmal atrial fibrillation.
Rosenheck, S
1997-10-01
The theoretical and experimental rationale for the atrial signal-averaged ECG in patients with AF is delay in intra-atrial and interatrial conduction. Similar to the ventricular signal-averaged ECG, the atrial signal-averaged ECG is an average of a large number of consecutive P waves that match a template created earlier. P wave triggering is preferred over QRS triggering because of more accurate alignment. However, the small amplitude of the atrial ECG and its gradual rise from the isoelectric line may create difficulties in defining the start point if P wave triggering is used. Studies using P wave triggering and those using QRS triggering both demonstrate a prolonged P wave duration in patients with paroxysmal AF. The negative predictive value of this test is relatively high at 60%-80%. The positive predictive value of the atrial signal-averaged ECG in predicting the risk of AF is considerably lower than the negative predictive value. All the data accumulated prospectively on the predictive value of P wave signal-averaging were obtained only in patients undergoing coronary bypass surgery or following MI; its value in other patients with paroxysmal AF is still not determined. The clinical role of frequency-domain analysis (alone or added to time-domain analysis) remains undefined. Because of this limited knowledge of its predictive value, P wave signal-averaging is not yet part of clinical medicine, and further research is needed before the atrial signal-averaged ECG becomes part of clinical testing.
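The core signal-processing step, aligning many consecutive P waves to a template and averaging them, can be sketched directly. Below, alignment uses normalized cross-correlation with a local-maximum pick, and the ECG is a synthetic train of noisy Gaussian "P waves"; clinical implementations add band-pass filtering, beat rejection, and duration measurement.

```python
import numpy as np

def signal_averaged_p_wave(ecg, template, threshold=0.9):
    """Align candidate P waves to the template with normalized
    cross-correlation, then average the accepted windows."""
    n = len(template)
    corr = np.correlate(ecg, template, mode="valid")
    energy = np.sqrt(np.convolve(ecg**2, np.ones(n), mode="valid")) + 1e-12
    ncc = corr / (np.linalg.norm(template) * energy)
    beats = [ecg[i:i + n] for i in range(1, len(ncc) - 1)
             if ncc[i] > threshold and ncc[i] >= ncc[i-1] and ncc[i] >= ncc[i+1]]
    return np.mean(beats, axis=0)

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 40)
p = np.exp(-t**2 / 0.1)                         # idealized P wave template
beat = np.concatenate([p, np.zeros(60)])        # P wave plus flat segment
ecg = np.concatenate([beat + 0.05 * rng.normal(size=100) for _ in range(50)])
print(signal_averaged_p_wave(ecg, p).shape)     # (40,) averaged P wave
```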
Jamali, Akram; Sadeghi-Demneh, Ebrahim; Fereshtenajad, Niloufar; Hillier, Susan
2017-09-01
Somatosensory impairments are common in multiple sclerosis. However, little data are available to characterize the nature and frequency of these problems in people with multiple sclerosis. To investigate the frequency of somatosensory impairments and identify any association with balance limitations in people with multiple sclerosis. The design was a prospective cross-sectional study, involving 82 people with multiple sclerosis and 30 healthy controls. Tactile and proprioceptive sensory acuity were measured using the Rivermead Assessment of Somatosensory Performance. Vibration duration was assessed using a tuning fork. Duration for the Timed Up and Go Test and reaching distance of the Functional Reach Test were measured to assess balance limitations. The normative range of sensory modalities was defined using cut-off points in the healthy participants. The multivariate linear regression was used to identify the significant predictors of balance in people with multiple sclerosis. Proprioceptive impairments (66.7%) were more common than tactile (60.8%) and vibration impairments (44.9%). Somatosensory impairments were more frequent in the lower limb (78.2%) than the upper limb (64.1%). All sensory modalities were significantly associated with the Timed Up and Go and Functional Reach tests (p<0.05). The Timed Up and Go test was independently predicted by the severity of the neurological lesion, Body Mass Index, ataxia, and tactile sensation (R2=0.58), whereas the Functional Reach test was predicted by the severity of the neurological lesion, lower limb strength, and vibration sense (R2=0.49). Somatosensory impairments are very common in people with multiple sclerosis. These impairments are independent predictors of balance limitation. Copyright © 2017 Elsevier B.V. All rights reserved.
Mapping the birch and grass pollen seasons in the UK using satellite sensor time-series.
Khwarahm, Nabaz R; Dash, Jadunandan; Skjøth, C A; Newnham, R M; Adams-Groom, B; Head, K; Caulton, Eric; Atkinson, Peter M
2017-02-01
Grass and birch pollen are two major causes of seasonal allergic rhinitis (hay fever) in the UK and parts of Europe affecting around 15-20% of the population. Current prediction of these allergens in the UK is based on (i) measurements of pollen concentrations at a limited number of monitoring stations across the country and (ii) general information about the phenological status of the vegetation. Thus, the current prediction methodology provides information at a coarse spatial resolution only. Most station-based approaches take into account only local observations of flowering, while only a small number of approaches take into account remote observations of land surface phenology. The systematic gathering of detailed information about vegetation status nationwide would therefore be of great potential utility. In particular, there exists an opportunity to use remote sensing to estimate phenological variables that are related to the flowering phenophase and, thus, pollen release. In turn, these estimates can be used to predict pollen release at a fine spatial resolution. In this study, time-series of MERIS Terrestrial Chlorophyll Index (MTCI) data were used to predict two key phenological variables: the start of season and peak of season. A technique was then developed to estimate the flowering phenophase of birch and grass from the MTCI time-series. For birch, the timing of flowering was defined as the time after the start of the growing season when the MTCI value reached 25% of the maximum. Similarly, for grass this was defined as the time when the MTCI value reached 75% of the maximum. The predicted pollen release dates were validated with data from nine pollen monitoring stations in the UK. For both birch and grass, we obtained large positive correlations between the MTCI-derived start of pollen season and the start of the pollen season defined using station data, with a slightly larger correlation observed for birch than for grass. The technique was applied to produce detailed maps for the flowering of birch and grass across the UK for each of the years from 2003 to 2010. The results demonstrate that the remote sensing-based maps of onset flowering of birch and grass for the UK together with the pollen forecast from the Meteorology Office and National Pollen and Aerobiology Research Unit (NPARU) can potentially provide more accurate information to pollen allergy sufferers in the UK. Copyright © 2016 Elsevier B.V. All rights reserved.
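The flowering rule used here reduces to a threshold crossing on the MTCI time-series: after the start of season, find the first day the index reaches 25% (birch) or 75% (grass) of its seasonal maximum. The sketch below applies that rule to a synthetic seasonal curve; real use would take cleaned per-pixel MTCI composites.

```python
import numpy as np

def flowering_onset(doy: np.ndarray, mtci: np.ndarray,
                    start_of_season: int, frac: float) -> float:
    """First day-of-year after the start of season when MTCI reaches
    `frac` of its seasonal maximum (~0.25 for birch, ~0.75 for grass)."""
    mask = doy >= start_of_season
    target = frac * mtci.max()
    d, m = doy[mask], mtci[mask]
    above = np.nonzero(m >= target)[0]
    return float(d[above[0]]) if above.size else np.nan

# Toy seasonal curve: MTCI rising from day 60 and peaking near day 180.
doy = np.arange(1, 366)
mtci = np.clip(np.sin((doy - 60) / 240 * np.pi), 0, None)
print("birch flowering DOY ~", flowering_onset(doy, mtci, 60, 0.25))
print("grass flowering DOY ~", flowering_onset(doy, mtci, 60, 0.75))
```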
Limited family structure and BRCA gene mutation status in single cases of breast cancer.
Weitzel, Jeffrey N; Lagos, Veronica I; Cullinane, Carey A; Gambol, Patricia J; Culver, Julie O; Blazer, Kathleen R; Palomares, Melanie R; Lowstuter, Katrina J; MacDonald, Deborah J
2007-06-20
An autosomal dominant pattern of hereditary breast cancer may be masked by small family size or transmission through males given sex-limited expression. To determine if BRCA gene mutations are more prevalent among single cases of early onset breast cancer in families with limited vs adequate family structure than would be predicted by currently available probability models. A total of 1543 women seen at US high-risk clinics for genetic cancer risk assessment and BRCA gene testing were enrolled in a prospective registry study between April 1997 and February 2007. Three hundred six of these women had breast cancer before age 50 years and no first- or second-degree relatives with breast or ovarian cancers. The main outcome measure was whether family structure, assessed from multigenerational pedigrees, predicts BRCA gene mutation status. Limited family structure was defined as fewer than 2 first- or second-degree female relatives surviving beyond age 45 years in either lineage. Family structure effect and mutation probability by the Couch, Myriad, and BRCAPRO models were assessed with stepwise multiple logistic regression. Model sensitivity and specificity were determined and receiver operating characteristic curves were generated. Family structure was limited in 153 cases (50%). BRCA gene mutations were detected in 13.7% of participants with limited vs 5.2% with adequate family structure. Family structure was a significant predictor of mutation status (odds ratio, 2.8; 95% confidence interval, 1.19-6.73; P = .02). Although none of the models performed well, receiver operating characteristic analysis indicated that modification of BRCAPRO output by a corrective probability index accounting for family structure was the most accurate BRCA gene mutation status predictor (area under the curve, 0.72; 95% confidence interval, 0.63-0.81; P<.001) for single cases of breast cancer. Family structure can affect the accuracy of mutation probability models. Genetic testing guidelines may need to be more inclusive for single cases of breast cancer when the family structure is limited and probability models need to be recreated using limited family history as an actual variable.
Russell, Charlotte; Wearden, Alison J.; Fairclough, Gillian; Emsley, Richard A.; Kyle, Simon D.
2016-01-01
Study Objectives: This study aimed to (1) examine the relationship between subjective and actigraphy-defined sleep, and next-day fatigue in chronic fatigue syndrome (CFS); and (2) investigate the potential mediating role of negative mood on this relationship. We also sought to examine the effect of presleep arousal on perceptions of sleep. Methods: Twenty-seven adults meeting the Oxford criteria for CFS and self-identifying as experiencing sleep difficulties were recruited to take part in a prospective daily diary study, enabling symptom capture in real time over a 6-day period. A paper diary was used to record nightly subjective sleep and presleep arousal. Mood and fatigue symptoms were rated four times each day. Actigraphy was employed to provide objective estimations of sleep duration and continuity. Results: Multilevel modelling revealed that subjective sleep variables, namely sleep quality, efficiency, and perceiving sleep to be unrefreshing, predicted following-day fatigue levels, with poorer subjective sleep related to increased fatigue. Lower subjective sleep efficiency and perceiving sleep as unrefreshing predicted reduced variance in fatigue across the following day. Negative mood on waking partially mediated these relationships. Increased presleep cognitive and somatic arousal predicted self-reported poor sleep. Actigraphy-defined sleep, however, was not found to predict following-day fatigue. Conclusions: For the first time we show that nightly subjective sleep predicts next-day fatigue in CFS and identify important factors driving this relationship. Our data suggest that sleep specific interventions, targeting presleep arousal, perceptions of sleep and negative mood on waking, may improve fatigue in CFS. Citation: Russell C, Wearden AJ, Fairclough G, Emsley RA, Kyle SD. Subjective but not actigraphy-defined sleep predicts next-day fatigue in chronic fatigue syndrome: a prospective daily diary study. SLEEP 2016;39(4):937–944. PMID:26715232
Ryan, J E; Warrier, S K; Lynch, A C; Ramsay, R G; Phillips, W A; Heriot, A G
2016-03-01
Approximately 20% of patients treated with neoadjuvant chemoradiotherapy (nCRT) for locally advanced rectal cancer achieve a pathological complete response (pCR), while the remainder derive the benefit of improved local control and downstaging, and a small proportion show a minimal response. The ability to predict which patients will benefit would allow for improved patient stratification, directing therapy to those who are likely to achieve a good response and thereby avoiding ineffective treatment in those unlikely to benefit. A systematic review of the English language literature was conducted to identify pathological factors, imaging modalities and molecular factors that predict pCR following chemoradiotherapy. PubMed, MEDLINE and Cochrane Database searches were conducted with the following keywords and MeSH search terms: 'rectal neoplasm', 'response', 'neoadjuvant', 'preoperative chemoradiation', 'tumor response'. After review of titles and abstracts, 85 articles addressing the prediction of pCR were selected. Clear methods to predict pCR before chemoradiotherapy have not been defined. Clinical and radiological features of the primary cancer have limited ability to predict response. Molecular profiling holds the greatest potential to predict pCR, but adoption of this technology will require greater concordance between cohorts for the biomarkers currently under investigation. At present no robust markers of the prediction of pCR have been identified and the topic remains an area for future research. This review critically evaluates the existing literature, providing an overview of the methods currently available to predict pCR to nCRT for locally advanced rectal cancer. The review also provides a comprehensive comparison of the accuracy of each modality. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.
Photodynamic therapy: computer modeling of diffusion and reaction phenomena
NASA Astrophysics Data System (ADS)
Hampton, James A.; Mahama, Patricia A.; Fournier, Ronald L.; Henning, Jeffery P.
1996-04-01
We have developed a transient, one-dimensional mathematical model for the reaction and diffusion phenomena that occur during photodynamic therapy (PDT). This model is referred to as the PDTmodem program. The model is solved by the Crank-Nicolson finite difference technique and can be used to predict the fates of important molecular species within the intercapillary tissue undergoing PDT. The following factors govern molecular oxygen consumption and singlet oxygen generation within a tumor: (1) photosensitizer concentration; (2) fluence rate; and (3) intercapillary spacing. In an effort to maximize direct tumor cell killing, the model allows educated decisions to be made to ensure the uniform generation and exposure of singlet oxygen to tumor cells across the intercapillary space. Based on predictions made by the model, we have determined that the singlet oxygen concentration profile within the intercapillary space is controlled by the product of the drug concentration and light fluence rate. The model predicts that at high levels of this product, within seconds singlet oxygen generation becomes limited to a small core of cells immediately surrounding the capillary. The remainder of the tumor tissue in the intercapillary space is anoxic and protected from the generation and toxic effects of singlet oxygen. However, at lower values of this product, the PDT-induced anoxic regions are not observed. An important finding is that an optimal value of this product can be defined that maintains the singlet oxygen concentration throughout the intercapillary space at a near-constant level. Direct tumor cell killing is therefore postulated to depend on the singlet oxygen exposure, defined as the product of the uniform singlet oxygen concentration and the time of exposure, and not on the total light dose.
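A minimal numerical companion: one Crank-Nicolson step for 1D diffusion with fixed-value boundaries, the same implicit scheme named in the abstract, applied to a toy oxygen profile across a 200-micron intercapillary space. The diffusivity, geometry, and boundary values are illustrative, and the reaction (consumption) terms of the full PDT model are deliberately omitted.

```python
import numpy as np

def crank_nicolson_step(u, d, dt, dx):
    """One Crank-Nicolson step for u_t = d * u_xx with fixed (Dirichlet)
    boundary values; the full model's reaction terms are omitted here."""
    n = len(u)
    r = d * dt / (2 * dx**2)
    A = np.diag((1 + 2*r) * np.ones(n)) + np.diag(-r * np.ones(n - 1), 1) \
        + np.diag(-r * np.ones(n - 1), -1)
    B = np.diag((1 - 2*r) * np.ones(n)) + np.diag(r * np.ones(n - 1), 1) \
        + np.diag(r * np.ones(n - 1), -1)
    rhs = B @ u
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0], rhs[-1] = u[0], u[-1]          # hold boundary concentrations
    return np.linalg.solve(A, rhs)

# Toy oxygen profile across a 200-um intercapillary space: supplied at the
# capillary side (x = 0), zero at the far boundary, no consumption term.
x = np.linspace(0.0, 200e-6, 51)
u = np.zeros(51); u[0] = 1.0
for _ in range(2000):                       # 2 s of diffusion at dt = 1 ms
    u = crank_nicolson_step(u, d=2e-9, dt=1e-3, dx=x[1] - x[0])
print(f"mid-space relative concentration ~ {u[25]:.2f}")
```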
NASA Astrophysics Data System (ADS)
Benjankar, R. M.; Sohrabi, M.; Tonina, D.; McKean, J. A.
2013-12-01
Aquatic habitat models use flow variables that may be predicted with one-dimensional (1D) or two-dimensional (2D) hydrodynamic models to simulate aquatic habitat quality. Studies focusing on the effects of hydrodynamic model dimensionality on predicted aquatic habitat quality are limited. Here we present an analysis of the impact of flow variables predicted with 1D and 2D hydrodynamic models on the simulated spatial distribution of habitat quality and Weighted Usable Area (WUA) for fall-spawning Chinook salmon. Our study focuses on three river systems located in central Idaho (USA): a straight, pool-riffle reach (South Fork Boise River), small pool-riffle sinuous streams in a large meadow (Bear Valley Creek), and a steep, confined plane-bed stream with occasional deep forced pools (Deadwood River). We consider low and high flows in simple and complex morphologic reaches. Results show that the 1D and 2D modeling approaches affect both the spatial distribution of habitat and WUA for both discharge scenarios, but we did not find noticeable differences between complex and simple reaches. In general, the differences in WUA were small but depended on stream type. Nevertheless, differences in spatially distributed habitat quality are considerable in all streams. The steep, confined plane-bed stream had larger differences between aquatic habitat quality defined with 1D and 2D flow models compared with streams with well defined macro-topographies, such as pool-riffle bed forms. KEY WORDS: one- and two-dimensional hydrodynamic models, habitat modeling, weighted usable area (WUA), hydraulic habitat suitability, high and low discharges, simple and complex reaches
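WUA itself is a simple aggregation: each model cell's area weighted by a 0-1 habitat suitability value (typically composed from depth and velocity suitabilities) and summed over the reach. The sketch below shows the computation with made-up cell values; the interesting modeling work lies in producing the suitability field, not in this sum.

```python
import numpy as np

def weighted_usable_area(suitability: np.ndarray, cell_area: np.ndarray):
    """WUA = sum of cell areas weighted by composite habitat suitability
    (0-1), typically derived from modeled depth and velocity."""
    return float(np.sum(suitability * cell_area))

# Toy 2D-model output: per-cell composite suitability and cell areas (m^2).
suit = np.array([0.0, 0.2, 0.8, 1.0, 0.5])
area = np.full(5, 25.0)
print(f"WUA = {weighted_usable_area(suit, area):.1f} m^2")
```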
Wood, Brian R; Komarow, Lauren; Zolopa, Andrew R; Finkelman, Malcolm A; Powderly, William G; Sax, Paul E
2013-03-27
The objective of this study was to define the test characteristics of plasma beta-glucan for diagnosis of Pneumocystis jirovecii pneumonia (PCP) in AIDS patients with respiratory symptoms. We analyzed baseline blood samples in a randomized strategy study of patients with acute opportunistic infections, limited to participants with respiratory symptoms. Participants in the 282-person ACTG A5164 trial had baseline plasma samples assayed for beta-glucan. As part of the A5164 trial, two study investigators independently adjudicated the diagnosis of PCP. Respiratory symptoms were identified by investigators from a list of all signs and symptoms with an onset or resolution in the 21 days prior to or 14 days following study entry. Beta-glucan was defined as positive if at least 80 pg/ml and negative if less than 80 pg/ml. Of 252 study participants with a beta-glucan result, 159 had at least one respiratory symptom, 139 of whom had a diagnosis of PCP. The sensitivity of beta-glucan for PCP in participants with respiratory symptoms was 92.8% [95% confidence interval (CI) 87.2-96.5], and the specificity 75.0% (95% CI 50.9-91.3). Among 134 individuals with a positive beta-glucan and respiratory symptoms, 129 had PCP, for a positive predictive value of 96.3% (95% CI 91.5-98.8). Fifteen of 25 patients with a normal beta-glucan did not have PCP, for a negative predictive value of 60% (95% CI 38.7-78.9). Elevated plasma beta-glucan has a high predictive value for the diagnosis of PCP in AIDS patients with respiratory symptoms. We propose an algorithm for the use of beta-glucan as a diagnostic tool on the basis of the pretest probability of PCP in such patients.
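The reported operating characteristics can be reproduced from the 2x2 table implied by the abstract (129 true positives, 5 false positives, 10 false negatives, 15 true negatives among the 159 symptomatic participants). The sketch below recomputes sensitivity, specificity, PPV, and NPV from those counts; confidence intervals are omitted.

```python
def test_characteristics(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts recoverable from the abstract: 129 of 134 beta-glucan-positive
# participants had PCP; 15 of 25 test-negative participants did not.
print({k: round(v, 3) for k, v in
       test_characteristics(tp=129, fp=5, fn=10, tn=15).items()})
```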
Fessler, Stephanie J; Simon, Harold K; Yancey, Arthur H; Colman, Michael; Hirsh, Daniel A
2014-03-01
The use of Emergency Medical Services (EMS) for low-acuity pediatric problems is well documented. Attempts have been made to curb potentially unnecessary transports, including the use of EMS dispatch protocols, which have been shown to predict the acuity and needs of adults. However, there are limited data about this in children. The primary objective of this study was to determine the pediatric emergency department (PED) resource utilization (a surrogate of acuity level) for pediatric patients categorized as "low acuity" by initial EMS protocols. Records of all pediatric patients classified as "low acuity" and transported to a PED in the winter and summer of 2010 were reviewed. Details of the PED visit were recorded. Patients were categorized and compared based on chief complaint group. Resource utilization was defined as requiring any prescription medications, labs, procedures, consults, admission, or transfer. "Under-triage" was defined as a "low-acuity" EMS transport subsequently requiring emergent interventions. Of the 876 eligible cases, 801 were included; 392 of 801 had no resource utilization, while 409 of 801 had resource utilization. Most (737 of 801) were discharged to home; however, 64 of 801 were admitted, including 1 of 801 requiring emergent intervention (under-triage rate 0.12%). The gastroenterology and trauma groups had a significant increase in resource utilization, while the infectious disease and ear-nose-throat groups had decreased resource utilization. While this EMS system did not predict overall resource utilization well, it safely identified most low-acuity patients, with a low under-triage rate. This study identifies subgroups of patients that could be managed without emergent transport and can be used to further refine current protocols or establish secondary triage systems.
Thomas, George; McGirt, Matthew J; Woodworth, Graeme; Heidler, Jennifer; Rigamonti, Daniele; Hillis, Argye E; Williams, Michael A
2005-01-01
To evaluate neurocognitive changes and predict neurocognitive outcome after ventriculoperitoneal shunting for idiopathic normal pressure hydrocephalus (INPH). Reports of neurocognitive response to shunting have been variable, and studies that predict cognitive outcomes after shunting are limited. We reviewed our experience with cognitive outcomes for INPH patients who were selected for shunting based on abnormal cerebrospinal fluid (CSF) pressure monitoring and a positive response in any of the NPH symptoms following large-volume CSF drainage. Forty-two INPH patients underwent neurocognitive testing and the Folstein Mini-Mental State Examination (MMSE) prior to shunting. Neurocognitive testing or the MMSE were performed at least 3 months after shunt insertion. Significant improvement in a neurocognitive subtest was defined as improvement by one standard deviation (1 SD) for the patient's age, sex and education level. Significant improvement in overall neurocognitive outcome was defined as a 4-point improvement in MMSE or improvement by 1 SD in 50% of the administered neurocognitive subtests. Nonparametric tests were used to assess changes. Predictors of outcome were assessed via logistic regression analysis. Twenty-two patients (52.3%) showed overall neurocognitive improvement, and significant improvement was seen in tests of verbal memory and psychomotor speed. Predictive analysis showed that patients scoring more than 1 SD below the mean at baseline on verbal memory immediate recall were fourfold less likely to show overall cognitive improvement, and sixfold less likely if this was also associated with a visuoconstructional deficit or executive dysfunction. Verbal memory scores at baseline were higher in patients who showed overall cognitive improvement. Shunting INPH patients on the basis of CSF pressure monitoring and drainage response shows a significant rate of cognitive improvement, and baseline neurocognitive test scores may distinguish patients likely to respond to shunt surgery from those who will not. Copyright (c) 2005 S. Karger AG, Basel.
Scheurer, Eva; Ith, Michael; Dietrich, Daniel; Kreis, Roland; Hüsler, Jürg; Dirnhofer, Richard; Boesch, Chris
2005-05-01
Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs. Copyright 2004 John Wiley & Sons, Ltd.
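The pooling step described above, weighting each metabolite's PMI estimate by its inverse variance, is a standard fixed-effect combination; a minimal sketch follows, with hypothetical numbers in place of the authors' fitted metabolite models.

```python
import numpy as np

# Inverse-variance pooling of per-metabolite PMI estimates (fixed-effect
# combination). The numbers are hypothetical placeholders.
pmi_hours = np.array([120.0, 135.0, 110.0, 128.0, 142.0])   # per-metabolite PMIs
variances = np.array([400.0, 900.0, 250.0, 600.0, 1600.0])  # their variances

weights = 1.0 / variances
pmi_pooled = np.sum(weights * pmi_hours) / np.sum(weights)
var_pooled = 1.0 / np.sum(weights)          # variance of the pooled estimate
ci95 = 1.96 * np.sqrt(var_pooled)

print(f"pooled PMI = {pmi_pooled:.1f} h +/- {ci95:.1f} h (95% CI)")
```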
DeLeon, Orlando; Hodis, Hagit; O’Malley, Yunxia; Johnson, Jacklyn; Salimi, Hamid; Zhai, Yinjie; Winter, Elizabeth; Remec, Claire; Eichelberger, Noah; Van Cleave, Brandon; Puliadi, Ramya; Harrington, Robert D.; Stapleton, Jack T.; Haim, Hillel
2017-01-01
The envelope glycoproteins (Envs) of HIV-1 continuously evolve in the host by random mutations and recombination events. The resulting diversity of Env variants circulating in the population and their continuing diversification process limit the efficacy of AIDS vaccines. We examined the historic changes in Env sequence and structural features (measured by integrity of epitopes on the Env trimer) in a geographically defined population in the United States. As expected, many Env features were relatively conserved during the 1980s. From this state, some features diversified whereas others remained conserved across the years. We sought to identify “clues” to predict the observed historic diversification patterns. Comparison of viruses that cocirculate in patients at any given time revealed that each feature of Env (sequence or structural) exists at a defined level of variance. The in-host variance of each feature is highly conserved among individuals but can vary between different HIV-1 clades. We designate this property “volatility” and apply it to model evolution of features as a linear diffusion process that progresses with increasing genetic distance. Volatilities of different features are highly correlated with their divergence in longitudinally monitored patients. Volatilities of features also correlate highly with their population-level diversification. Using volatility indices measured from a small number of patient samples, we accurately predict the population diversity that developed for each feature over the course of 30 years. Amino acid variants that evolved at key antigenic sites are also predicted well. Therefore, small “fluctuations” in feature values measured in isolated patient samples accurately describe their potential for population-level diversification. These tools will likely contribute to the design of population-targeted AIDS vaccines by effectively capturing the diversity of currently circulating strains and addressing properties of variants expected to appear in the future. PMID:28384158
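In its simplest reading, the diffusion model sketched above amounts to feature variance growing linearly with genetic distance; the toy sketch below estimates a volatility index from in-host samples and extrapolates it to population scale, with all numbers hypothetical.

```python
import numpy as np

# Toy reading of the "volatility" idea: feature variance grows linearly with
# genetic distance, var(d) ~ V * d. V is estimated from in-host samples and
# extrapolated to population scale. All numbers are hypothetical.
distances = np.array([0.01, 0.02, 0.05, 0.08])   # within-host genetic distances
feature_vars = np.array([0.4, 0.9, 2.1, 3.3])    # observed feature variances

V = np.sum(distances * feature_vars) / np.sum(distances**2)  # no-intercept least squares

d_population = 0.25                               # population-scale distance
print(f"volatility V = {V:.1f}; predicted population variance = {V * d_population:.1f}")
```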
Pascoal, Lívia Maia; Lopes, Marcos Venícios de Oliveira; Chaves, Daniel Bruno Resende; Beltrão, Beatriz Amorim; da Silva, Viviane Martins; Monteiro, Flávia Paula Magalhães
2015-01-01
OBJECTIVE: to analyze the accuracy of the defining characteristics of the Impaired gas exchange nursing diagnosis in children with acute respiratory infection. METHOD: open prospective cohort study conducted with 136 children monitored for a consecutive period of at least six days and not more than ten days. An instrument based on the defining characteristics of the Impaired gas exchange diagnosis and on literature addressing pulmonary assessment was used to collect data. The accuracy means of all the defining characteristics under study were computed. RESULTS: the Impaired gas exchange diagnosis was present in 42.6% of the children in the first assessment. Hypoxemia was the characteristic that presented the best measures of accuracy. Abnormal breathing presented high sensitivity, while restlessness, cyanosis, and abnormal skin color showed high specificity. All the characteristics presented negative predictive values of 70% and cyanosis stood out by its high positive predictive value. CONCLUSION: hypoxemia was the defining characteristic that presented the best predictive ability to determine Impaired gas exchange. Studies of this nature enable nurses to minimize variability in clinical situations presented by the patient and to identify more precisely the nursing diagnosis that represents the patient's true clinical condition. PMID:26155010
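The accuracy measures named above (sensitivity, specificity and the predictive values) all derive from a 2x2 table of a defining characteristic against the diagnosis; a generic sketch with hypothetical counts:

```python
# Generic 2x2-table accuracy measures for a defining characteristic against
# the nursing diagnosis. Counts are hypothetical.
tp, fp, fn, tn = 48, 6, 10, 72

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value

print(f"Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```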
Tribbles pseudokinases: novel targets for chemical biology and drug discovery?
Foulkes, Daniel M; Byrne, Dominic P; Bailey, Fiona P; Eyers, Patrick A
2015-10-01
Tribbles (TRIB) proteins are pseudokinase mediators of eukaryotic signalling that have evolved important roles in lipoprotein metabolism, immune function and cellular differentiation and proliferation. In addition, an evolutionarily conserved modulation of PI3K/AKT signalling pathways highlights them as novel and rather unusual pharmaceutical targets. The three human TRIB family members are uniquely defined by an acidic pseudokinase domain containing a 'broken' αC-helix and a MEK (MAPK/ERK)-binding site at the end of the putative C-lobe, together with a distinct C-terminal peptide motif that interacts directly with a small subset of cellular E3 ubiquitin ligases. This latter interaction drives proteasome-dependent degradation of networks of transcription factors, whose rate of turnover determines the biological attributes of individual TRIB family members. Defining the function of individual TRIBs has been made possible through evaluation of individual TRIB knockout mice, siRNA/overexpression approaches and genetic screening in flies, where the single TRIB gene was originally described 15 years ago. The rapidly maturing TRIB field is primed to exploit chemical biology approaches to evaluate endogenous TRIB signalling events in intact cells. This will help define how TRIB-driven protein-protein interactions and the atypical TRIB ATP-binding site fit into cellular signalling modules in experimental scenarios where TRIB-signalling complexes remain unperturbed. In this mini-review, we discuss how small molecules can reveal rate-limiting signalling outputs and functions of TRIBs in cells and intact organisms, perhaps serving as guides for the development of new drugs. We predict that appropriate small-molecule TRIB ligands will further accelerate the transition of TRIB pseudokinase analysis into the mainstream of cell signalling. © 2015 Authors; published by Portland Press Limited.
Modeling integrated photovoltaic–electrochemical devices using steady-state equivalent circuits
Winkler, Mark T.; Cox, Casandra R.; Nocera, Daniel G.; Buonassisi, Tonio
2013-01-01
We describe a framework for efficiently coupling the power output of a series-connected string of single-band-gap solar cells to an electrochemical process that produces storable fuels. We identify the fundamental efficiency limitations that arise from using solar cells with a single band gap, an arrangement that describes the use of currently economic solar cell technologies such as Si or CdTe. Steady-state equivalent circuit analysis permits modeling of practical systems. For the water-splitting reaction, modeling defines parameters that enable a solar-to-fuels efficiency exceeding 18% using laboratory GaAs cells and 16% using all earth-abundant components, including commercial Si solar cells and Co- or Ni-based oxygen evolving catalysts. Circuit analysis also provides a predictive tool: given the performance of the separate photovoltaic and electrochemical systems, the behavior of the coupled photovoltaic–electrochemical system can be anticipated. This predictive utility is demonstrated in the case of water oxidation at the surface of a Si solar cell, using a Co–borate catalyst.
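The steady-state coupling described above amounts to finding the current at which the PV string voltage equals the electrochemical load voltage; the sketch below solves this for an ideal-diode string and a Tafel-plus-ohmic water-splitting load, with all parameter values hypothetical rather than the paper's fitted ones.

```python
import numpy as np
from scipy.optimize import brentq

# Operating point of a series PV string driving an electrolyzer:
# solve n_cells * V_cell(I) = V_load(I). All parameters are hypothetical.
kT_q = 0.02585                                # thermal voltage at ~300 K [V]
I_sc, I_0, n_cells = 35.0e-3, 1e-12, 4        # short-circuit/saturation currents [A], cells in series
E_rev, tafel_b, i_ex, R_s = 1.23, 0.05, 1e-6, 5.0  # reversible V, Tafel slope, exchange current, ohmic term

def v_string(i):                              # ideal single-diode cells in series
    return n_cells * kT_q * np.log((I_sc - i) / I_0 + 1.0)

def v_load(i):                                # reversible potential + Tafel overpotential + IR
    return E_rev + tafel_b * np.log(i / i_ex) + R_s * i

i_op = brentq(lambda i: v_string(i) - v_load(i), 1e-9, 0.999 * I_sc)
print(f"operating point: {i_op * 1e3:.2f} mA at {v_load(i_op):.2f} V")
```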
Albin, Thomas J
2017-07-01
Occasionally practitioners must work with single dimensions defined as combinations (sums or differences) of percentile values, but lack the information (e.g. variances) needed to estimate the accommodation achieved. This paper describes methods to predict accommodation proportions for such combinations of percentile values, e.g. two 90th percentile values. Kreifeldt and Nah z-score multipliers were used to estimate the proportions accommodated by combinations of percentile values of 2-15 variables; two simplified versions required less information about variance and/or correlation. The estimates were compared to actual observed proportions; for combinations of 2-15 percentile values the average absolute differences ranged between 0.5 and 1.5 percentage points. The multipliers were also used to estimate adjusted percentile values that, when combined, estimate a desired proportion of the combined measurements. For combinations of two and three adjusted variables, the average absolute difference between predicted and observed proportions ranged between 0.5 and 3.0 percentage points. Copyright © 2017 Elsevier Ltd. All rights reserved.
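Under a bivariate-normal assumption, the proportion accommodated by a limit built from the sum of two percentile values follows directly from the variance of the sum; the sketch below mirrors the z-score-multiplier idea (the exact Kreifeldt and Nah multipliers are not reproduced), with hypothetical SDs and correlation.

```python
from math import sqrt
from scipy.stats import norm

# Proportion accommodated by a limit formed as the sum of two 90th-percentile
# values, assuming bivariate-normal dimensions. SDs and correlation are
# hypothetical; the exact Kreifeldt-Nah multipliers are not reproduced.
z = norm.ppf(0.90)             # z-score of each 90th percentile
s1, s2, rho = 30.0, 18.0, 0.4  # hypothetical SDs [mm] and correlation

sd_sum = sqrt(s1**2 + s2**2 + 2 * rho * s1 * s2)
z_eff = z * (s1 + s2) / sd_sum           # effective z of the combined limit
print(f"accommodated: {norm.cdf(z_eff):.3f}")  # > 0.90: summed percentiles over-accommodate
```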
A review of the ionospheric model for the long wave prediction capability
NASA Astrophysics Data System (ADS)
Ferguson, J. A.
1992-11-01
The Naval Command, Control, and Ocean Surveillance Center's Long Wave Prediction Capability (LWPC) has a built-in ionospheric model, which was defined after a review of the literature comparing measurements with calculations. Subsequent to this original specification of the ionospheric model in the LWPC, a new collection of data was obtained and analyzed. The new data were collected aboard a merchant ship, the Callaghan, during a series of trans-Atlantic trips over a period of a year. This report presents a detailed analysis of the ionospheric model currently in use by the LWPC and the new model suggested by the shipboard measurements. We conclude that, although the fits to measurements are almost the same between the two models examined, the current LWPC model should be used because it is better than the new model for nighttime conditions at long ranges. This conclusion supports the primary use of the LWPC model for coverage assessment, which requires a valid model at the limits of a transmitter's reception.
Modelling decremental ramps using 2- and 3-parameter "critical power" models.
Morton, R Hugh; Billat, Veronique
2013-01-01
The "Critical Power" (CP) model of human bioenergetics provides a valuable way to identify both limits of tolerance to exercise and mechanisms that underpin that tolerance. It applies principally to cycling-based exercise, but with suitable adjustments for analogous units it can be applied to other exercise modalities; in particular to incremental ramp exercise. It has not yet been applied to decremental ramps which put heavy early demand on the anaerobic energy supply system. This paper details cycling-based bioenergetics of decremental ramps using 2- and 3-parameter CP models. It derives equations that, for an individual of known CP model parameters, define those combinations of starting intensity and decremental gradient which will or will not lead to exhaustion before ramping to zero; and equations that predict time to exhaustion on those decremental ramps that will. These are further detailed with suitably chosen numerical and graphical illustrations. These equations can be used for parameter estimation from collected data, or to make predictions when parameters are known.
Predicting Catalytic Activity of Nanoparticles by a DFT-Aided Machine-Learning Algorithm.
Jinnouchi, Ryosuke; Asahi, Ryoji
2017-09-07
Catalytic activities are often dominated by a few specific surface sites, and designing active sites is the key to realizing high-performance heterogeneous catalysts. The great triumphs of modern surface science have made it possible to reproduce catalytic reaction rates by modeling the arrangement of surface atoms with well-defined single-crystal surfaces. However, this method has limitations in the case of highly inhomogeneous atomic configurations, such as alloy nanoparticles with atomic-scale defects, whose arrangements cannot be decomposed into single crystals. Here, we propose a universal machine-learning scheme using a local similarity kernel, which allows interrogation of catalytic activities based on local atomic configurations. We then apply it to direct NO decomposition on RhAu alloy nanoparticles. The proposed method can efficiently predict the energetics of catalytic reactions on nanoparticles using DFT data on single crystals, and its combination with kinetic analysis can provide detailed information on the structures of active sites and on size- and composition-dependent catalytic activities.
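One common way to build a local similarity kernel, sketched below, is to average pairwise Gaussian kernels over the local-environment descriptors of two structures and feed the result to kernel ridge regression; this is a schematic of the general technique, not the authors' implementation, and the descriptors and targets are random placeholders.

```python
import numpy as np

# Schematic local-similarity-kernel regression: the kernel between two
# structures is the mean of Gaussian kernels over their local-environment
# descriptors; energies are learned by kernel ridge regression.
# Descriptors and targets are random placeholders.
rng = np.random.default_rng(0)

def local_kernel(A, B, gamma=0.5):
    """A, B: (n_sites, n_features) local descriptors of two structures."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2).mean()          # average over site pairs

train = [rng.normal(size=(12, 4)) for _ in range(20)]   # 20 structures
y = rng.normal(size=20)                                 # e.g. reaction energies

K = np.array([[local_kernel(a, b) for b in train] for a in train])
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(train)), y)  # ridge-regularized fit

test = rng.normal(size=(12, 4))
y_pred = np.array([local_kernel(test, b) for b in train]) @ alpha
print(f"predicted energy: {y_pred:.3f}")
```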
Questioning the Faith - Models and Prediction in Stream Restoration (Invited)
NASA Astrophysics Data System (ADS)
Wilcock, P.
2013-12-01
River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.
The HSA in Your Future: Defined Contribution Retiree Medical Coverage.
Towarnicky, Jack M
In 2004, when evaluating health savings account (HSA) business opportunities, I predicted: "Twenty-five years ago, no one had ever heard of 401(k); 25 years from now, everyone will have an HSA." Twelve years later, growth in HSA eligibility, participation, contributions and asset accumulations suggests we just might achieve that prediction. This article shares one plan sponsor's journey to help employees accumulate assets to fund medical costs, both while employed and after retirement. It documents a 30-plus-year retiree health insurance transition from a defined benefit to a defined dollar structure, culminating in a full-replacement defined contribution structure using HSA-qualifying high-deductible health plans (HDHPs) and then redeploying/repurposing the HSA to incorporate a savings incentive for retiree medical costs.
Murray, Nigel P; Aedo, Socrates; Fuentealba, Cynthia; Jacob, Omar; Reyes, Eduardo; Novoa, Camilo; Orellana, Sebastian; Orellana, Nelson
2016-10-01
To establish a prediction model for early biochemical failure based on the Cancer of the Prostate Risk Assessment (CAPRA) score and on the presence or absence and number of primary circulating prostate cells (CPCs; nCPC per 8 ml blood sample) detected before surgery. A prospective single-center study of men who underwent radical prostatectomy as monotherapy for prostate cancer. Clinical-pathological findings were used to calculate the CAPRA score. Before surgery, blood was taken for CPC detection; mononuclear cells were obtained using differential gel centrifugation, and CPCs were identified using immunocytochemistry. A CPC was defined as a cell expressing prostate-specific antigen and P504S, and the presence or absence of CPCs and the number of cells detected per 8 ml blood sample were registered. Patients were followed up for up to 5 years; biochemical failure was defined as a prostate-specific antigen >0.2 ng/ml. The validity of the CAPRA score was calibrated using partial validation, and fractional polynomial Cox proportional hazards regression was used to build 3 models, which underwent decision curve analysis (DCA) to determine their predictive value with respect to biochemical failure. A total of 267 men participated, mean age 65.80 years; after 5 years of follow-up the biochemical-failure-free survival was 67.42%. The model using the CAPRA score showed a hazard ratio (HR) of 5.76 between low- and high-risk groups, the model using CPC a HR of 26.84 between positive and negative groups, and the combined model a HR of 4.16 for CAPRA score and 19.93 for CPC. Using the continuous variable nCPC, there was no improvement in the predictive value of the model compared with the model using a positive-negative result of CPC detection. The combined CAPRA-nCPC model showed improved predictive performance for biochemical failure by Harrell's C concordance test and a net benefit on DCA in comparison with either model used separately, although the improvement was minimal. The use of the presence or absence of primary CPCs alone did not predict aggressive disease or biochemical failure. Copyright © 2016 Elsevier Inc. All rights reserved.
Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages.
Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart
2016-01-01
Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks of speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find, across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal that could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure (regularities arising in an ordered series of syllable timings), testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively perceived temporal regularities and the absence of universally accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
Jang, Jin-Young; Park, Taesung; Lee, Selyeong; Kim, Yongkang; Lee, Seung Yeoun; Kim, Sun-Whe; Kim, Song-Cheol; Song, Ki-Byung; Yamamoto, Masakazu; Hatori, Takashi; Hirono, Seiko; Satoi, Sohei; Fujii, Tsutomu; Hirano, Satoshi; Hashimoto, Yasushi; Shimizu, Yashuhiro; Choi, Dong Wook; Choi, Seong Ho; Heo, Jin Seok; Motoi, Fuyuhiko; Matsumoto, Ippei; Lee, Woo Jung; Kang, Chang Moo; Han, Ho-Seong; Yoon, Yoo-Seok; Sho, Masayuki; Nagano, Hiroaki; Honda, Goro; Kim, Sang Geol; Yu, Hee Chul; Chung, Jun Chul; Nagakawa, Yuichi; Seo, Hyung Il; Yamaue, Hiroki
2017-12-01
This study evaluated individual risks of malignancy and proposed a nomogram for predicting malignancy of branch duct type intraductal papillary mucinous neoplasms (BD-IPMNs), using a large database for IPMN. Although consensus guidelines list several malignancy-predicting factors in patients with BD-IPMN, those variables differ in predictive power, and individual quantitative prediction of malignancy risk is limited. Clinicopathological factors predictive of malignancy were retrospectively analyzed in 2525 patients with biopsy-proven BD-IPMN at 22 tertiary hospitals in Korea and Japan. Patients with main duct dilatation >10 mm or inaccurate information were excluded, leaving a study cohort of 2258 patients. Malignant IPMNs were defined as those with high-grade dysplasia or associated invasive carcinoma. Of the 2258 patients, 986 (43.7%) had low-grade, 443 (19.6%) intermediate-grade and 398 (17.6%) high-grade dysplasia, and 431 (19.1%) had invasive carcinoma. To construct and validate the nomogram, patients were randomly allocated into training and validation sets, with fixed ratios of benign and malignant lesions. Multiple logistic regression analysis resulted in five variables (cyst size, duct dilatation, mural nodule, serum CA19-9, and CEA) being selected to construct the nomogram. In the validation set, this nomogram showed excellent discrimination power in a calibration test with 1000 bootstrap resamples. A nomogram predicting malignancy in patients with BD-IPMN was thus constructed using a logistic regression model. This nomogram may be useful in identifying patients at risk of malignancy and for selecting optimal treatment methods. The nomogram is freely available at http://statgen.snu.ac.kr/software/nomogramIPMN.
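Computationally, a nomogram derived from logistic regression is just the linear predictor mapped through the logistic function; the sketch below shows the evaluation pattern with clearly hypothetical coefficients (the actual fitted nomogram is the one published at the URL above).

```python
from math import exp

# How a logistic-regression nomogram evaluates: linear predictor -> logistic.
# All coefficients are hypothetical placeholders; the real fitted nomogram is
# the one published at the URL cited in the abstract.
def malignancy_probability(cyst_size_mm, duct_mm, mural_nodule, ca19_9, cea):
    lp = (-4.0 + 0.05 * cyst_size_mm + 0.20 * duct_mm
          + 1.2 * mural_nodule + 0.004 * ca19_9 + 0.10 * cea)
    return 1.0 / (1.0 + exp(-lp))

print(f"{malignancy_probability(30, 7, 1, 80, 3):.2f}")
```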
Unchained Melody: Revisiting the Estimation of SF-6D Values
Craig, Benjamin M.
2015-01-01
Purpose In the original SF-6D valuation study, the analytical design inherited conventions that detrimentally affected its ability to predict values on a quality-adjusted life year (QALY) scale. Our objective is to estimate UK values for SF-6D states using the original data and multi-attribute utility (MAU) regression after addressing its limitations, and to compare the revised SF-6D and EQ-5D value predictions. Methods Using the unaltered data (611 respondents, 3503 SG responses), the parameters of the original MAU model were re-estimated under 3 alternative error specifications, known as the instant, episodic, and angular random utility models. Value predictions on a QALY scale were compared to EQ-5D-3L predictions using the 1996 Health Survey for England. Results Contrary to the original results, the revised SF-6D value predictions range below 0 QALYs (i.e., worse than death) and agree largely with EQ-5D predictions after adjusting for scale. Although a QALY is defined as a year in optimal health, the SF-6D sets a higher standard for optimal health than the EQ-5D-3L; therefore, it has larger units on a QALY scale by construction (20.9% more). Conclusions Much of the debate in health valuation has focused on differences between preference elicitation tasks, sampling, and instruments. After correcting errant econometric practices and adjusting for differences in QALY scale between the EQ-5D and SF-6D values, the revised predictions demonstrate convergent validity, making them more suitable for UK economic evaluations compared to the original estimates. PMID:26359242
Resource Management in Constrained Dynamic Situations
NASA Astrophysics Data System (ADS)
Seok, Jinwoo
Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Therefore, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. At the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. At the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning-level design, based on finite state machines, and 2) control-level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited-resource situations and unpredictably dynamic environments at the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. Then, we use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited-resource situations and unpredictably dynamic environments. The importance of cooperation at the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operation. The importance of considering the system constraints and the interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
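The planning-level idea of a resource-constrained, width-limited breadth-first search can be illustrated as below; this is a schematic of the general technique, not the dissertation's recomposable restricted finite state machine, and the tiny state machine, costs and rewards are placeholders.

```python
from collections import deque

# Illustrative planning-level search: BFS over a finite state machine,
# pruning transitions that exceed the resource budget and capping the
# frontier width ("limited" BFS). The FSM below is a placeholder.
FSM = {  # state -> [(next_state, resource_cost, reward)]
    "idle":   [("scan", 2, 1), ("patrol", 3, 2)],
    "scan":   [("track", 4, 5), ("idle", 0, 0)],
    "patrol": [("track", 3, 4), ("idle", 0, 0)],
    "track":  [("idle", 0, 0)],
}

def limited_bfs(start, budget, width=3, depth=4):
    best = (0, [start])                                 # (reward, path)
    frontier = deque([(start, budget, 0, [start])])
    for _ in range(depth):
        nxt = []
        while frontier:
            state, rem, rew, path = frontier.popleft()
            for s, cost, r in FSM[state]:
                if cost <= rem:                          # resource-feasible only
                    nxt.append((s, rem - cost, rew + r, path + [s]))
        nxt.sort(key=lambda x: -x[2])
        frontier = deque(nxt[:width])                    # limit frontier width
        if nxt and nxt[0][2] > best[0]:
            best = (nxt[0][2], nxt[0][3])
    return best

print(limited_bfs("idle", budget=8))
```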
Jappe, Emma Christine; Kringelum, Jens; Trolle, Thomas; Nielsen, Morten
2018-02-15
Peptides that bind to and are presented by MHC class I and class II molecules collectively make up the immunopeptidome. In the context of vaccine development, an understanding of the immunopeptidome is essential, and much effort has been dedicated to its accurate and cost-effective identification. Current state-of-the-art methods mainly comprise in silico tools for predicting MHC binding, which is strongly correlated with peptide immunogenicity. However, only a small proportion of the peptides that bind to MHC molecules are, in fact, immunogenic, and substantial work has been dedicated to uncovering additional determinants of peptide immunogenicity. In this context, and in light of recent advancements in mass spectrometry (MS), the idea of immunological hotspots has been given new life, prompting the hypothesis that hotspots are associated with MHC class I peptide immunogenicity. Here we introduce a precise terminology for defining these hotspots and carry out a systematic analysis of MS and in silico predicted hotspots. We find that hotspots defined from MS data are largely captured by peptide binding predictions, enabling their replication in silico. This leads us to conclude that hotspots, to a great degree, are simply a result of promiscuous HLA binding, which disproves the hypothesis that the identification of hotspots provides novel information in the context of immunogenic peptide prediction. Furthermore, our analyses demonstrate that the signal of ligand processing, although present in the MS data, has very low predictive power to discriminate between MS and in silico defined hotspots. © 2018 John Wiley & Sons Ltd.
Evaluate depth of field limits of fixed focus lens arrangements in thermal infrared
NASA Astrophysics Data System (ADS)
Schuster, Norbert
2016-05-01
More and more modern thermal imaging systems use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480). This reduces the usefulness of modern image-treatment procedures such as wavefront coding. On the other hand, uncooled detectors demand lenses with fast f-numbers, near f/1.0, which reduces the expected Depth of Field (DoF). What are the limits on resolution if the target changes distance to the camera system? The desire to implement lens arrangements without a focusing mechanism demands a deeper quantification of the DoF problem. A new approach avoids the classic "accepted image blur circle" and quantifies the expected DoF by the through-focus MTF of the lens. This function is defined for a spatial frequency that provides a straightforward relation to the pixel pitch of the imaging device. A certain minimum MTF level is necessary so that the complete thermal imaging system can realize its basic functions, such as recognition or detection of specified targets. Very often, this technical trade-off is validated for a certain lens. But what is the impact of changing to a lens with a different focal length? Narrow-field lenses, which resolve more target detail at longer distances, aggravate the DoF problem. A first orientation is given by the hyperfocal distance, which depends quadratically on the focal length and linearly on the through-focus MTF of the lens. The analysis of these relations shows the conflicting requirements among higher thermal and spatial resolution, faster f-number and the desired DoF. Furthermore, the hyperfocal distance defines the DoF borders, which are related to it through the first-order imaging formulas. A calculation methodology is presented to transfer DoF results from an approved combination of lens and camera to another lens combined with the initial camera. The necessary input for this prediction is the accepted DoF of the initial combination and the through-focus MTFs of both lenses. The accepted DoF of the initial combination defines an application- and camera-related MTF level, which must also be provided by the new lens. Examples are provided. The formula for the diffraction-limited through-focus MTF (DLTF) quantifies the physical limit and works without any ray trace. This relation respects the pixel pitch, the waveband and the aperture-based f-number, but is independent of detector size. The DLTF has a steeper slope than the ray-traced through-focus MTF; its maximum is the diffraction limit. The DLTF predicts the DoF relations quite precisely; differences from ray-trace results are discussed. Final calculations with modern detectors show that a statically chosen MTF level does not reflect the reality of the DoF problem. The MTF level to respect depends on the application, pixel pitch, IR camera and image treatment. A value of 0.250 at the detector Nyquist frequency seems to be a reasonable starting point for uncooled FPAs with 17 μm pixel pitch.
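For the first-order orientation mentioned above, the classic hyperfocal-distance formulas can be sketched as follows; note that the paper itself replaces the accepted-blur-circle criterion with a through-focus-MTF criterion, so this sketch uses the traditional form only for orientation, with a hypothetical acceptable blur of one 17 um pixel.

```python
# First-order DoF orientation via the hyperfocal distance (classic
# accepted-blur-circle form; the paper replaces this criterion with a
# through-focus-MTF one). c is a hypothetical acceptable blur, here one
# 17 um pixel pitch. All lengths in mm.
def hyperfocal_mm(f_mm, f_number, c_mm):
    return f_mm**2 / (f_number * c_mm) + f_mm   # note the quadratic f dependence

def dof_borders_mm(f_mm, f_number, c_mm, focus_mm):
    H = hyperfocal_mm(f_mm, f_number, c_mm)
    near = H * focus_mm / (H + (focus_mm - f_mm))
    far = H * focus_mm / (H - (focus_mm - f_mm)) if focus_mm < H else float("inf")
    return near, far

f, N, c = 35.0, 1.0, 0.017                      # 35 mm f/1.0 LWIR lens
print(hyperfocal_mm(f, N, c) / 1000, "m hyperfocal")
print(dof_borders_mm(f, N, c, focus_mm=20000))  # DoF borders focused at 20 m
```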
QCD next-to-leading-order predictions matched to parton showers for vector-like quark models.
Fuks, Benjamin; Shao, Hua-Sheng
2017-01-01
Vector-like quarks are featured by a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to the precise extraction of experimental limits on vector-like quarks thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.
The adaptive nature of liquidity taking in limit order books
NASA Astrophysics Data System (ADS)
Taranto, D. E.; Bormetti, G.; Lillo, F.
2014-06-01
In financial markets, the order flow, defined as the process assuming value one for buy market orders and minus one for sell market orders, displays a very slowly decaying autocorrelation function. Since orders impact prices, reconciling the persistence of the order flow with market efficiency is a subtle issue. A possible solution is provided by asymmetric liquidity, which states that the impact of a buy or sell order is inversely related to the probability of its occurrence. We empirically find that when the order flow predictability increases in one direction, the liquidity in the opposite side decreases, but the probability that a trade moves the price decreases significantly. While the last mechanism is able to counterbalance the persistence of order flow and restore efficiency and diffusivity, the first acts in the opposite direction. We introduce a statistical order book model where the persistence of the order flow is mitigated by adjusting the market order volume to the predictability of the order flow. The model reproduces the diffusive behaviour of prices at all time scales without fine-tuning the values of parameters, as well as the behaviour of most order book quantities as a function of the local predictability of the order flow.
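The persistence of order-flow signs discussed above can be illustrated with a toy simulation; a first-order Markov chain produces only exponentially decaying memory (empirical order flow decays far more slowly), so the sketch below is a simple illustration of a persistent sign process and its autocorrelation, not the paper's model.

```python
import numpy as np

# Toy order-flow sign series (+1 buy, -1 sell) and its autocorrelation.
# A first-order Markov chain gives only exponential memory; empirical order
# flow decays much more slowly, so this is illustrative only.
rng = np.random.default_rng(1)
p_repeat, n = 0.8, 100_000

eps = np.empty(n)
eps[0] = 1.0
for t in range(1, n):
    eps[t] = eps[t - 1] if rng.random() < p_repeat else rng.choice((-1.0, 1.0))

def acf(x, lag):
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).mean() / x.var()

for lag in (1, 5, 10, 50):
    print(lag, round(acf(eps, lag), 3))   # decays roughly like p_repeat**lag
```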
Czyz, Ewa K.; Berona, Johnny; King, Cheryl A.
2016-01-01
The challenge of identifying suicide risk in adolescents, and particularly among high-risk subgroups such as adolescent inpatients, calls for further study of models of suicidal behavior that could meaningfully aid in the prediction of risk. This study examined how well the Interpersonal-Psychological Theory of Suicidal Behavior (IPTS)—with its constructs of thwarted belongingness (TB), perceived burdensomeness (PB), and an acquired capability (AC) for lethal self-injury—predicts suicide attempts among adolescents (N = 376) 3 and 12 months after hospitalization. The three-way interaction between PB, TB, and AC, defined as a history of multiple suicide attempts, was not significant. However, there were significant 2-way interaction effects, which varied by sex: girls with low AC and increasing TB, and boys with high AC and increasing PB, were more likely to attempt suicide at 3 months. Only high AC predicted 12-month attempts. Results suggest gender-specific associations between theory components and attempts. The time-limited effects of these associations point to TB and PB being dynamic and modifiable in high-risk populations, whereas the effects of AC are more lasting. The study also fills an important gap in existing research by examining IPTS prospectively. PMID:25263410
NASA Technical Reports Server (NTRS)
Everhart, Joel L.
2008-01-01
Impact and debris damage to the Space Shuttle Orbiter Thermal Protection System tiles is a random phenomenon, occurring at random locations on the vehicle surface and resulting in random geometrical shapes that are exposed to a definable range of surface flow conditions. In response to the 2003 Final Report of the Columbia Accident Investigation Board, wind tunnel aeroheating experiments approximating a wide range of possible damage scenarios, covering both open and closed cavity flow conditions, were systematically conducted in hypersonic ground-based facilities. These data were analyzed, and engineering assessment tools for damage-induced fully-laminar heating were developed and exercised on orbit. These tools provide bounding approximations for the damaged-surface heating environment. This paper presents a further analysis of the baseline, zero-pressure-gradient, idealized, rectangular-geometry cavity heating data, yielding new laminar correlations for the floor-averaged heating, the peak cavity endwall heating, and the downstream decay rate. Correlation parameters are derived in terms of cavity geometry and local flow conditions. Prediction Limit Uncertainty values are provided at the 95%, 99% and 99.9% levels of significance. Non-baseline conditions, including non-rectangular geometries and flows with known pressure gradients, are used to assess the range of applicability of the new correlations. All data variations fall within the 99% Prediction Limit Uncertainty bounds. Importantly, both open-flow and closed-flow cavity heating are combined into a single-curve parameterization of the heating predictions, providing a concise mathematical model of the laminar cavity heating flow field with known uncertainty.
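Prediction limits of the kind quoted above can be illustrated generically: for a fitted correlation, a two-sided prediction band for a new observation follows from the residual standard error and Student's t distribution. The sketch below uses synthetic data and a simple linear fit, not the report's correlation variables, and omits the small leverage correction for brevity.

```python
import numpy as np
from scipy import stats

# Generic prediction limits for a new observation about a fitted correlation,
# from the residual standard error and Student's t. Synthetic data; the
# report's correlation variables and coefficients are not reproduced.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = 0.3 + 1.7 * x + rng.normal(scale=0.1, size=x.size)   # synthetic data

coef = np.polyfit(x, y, 1)
resid = y - np.polyval(coef, x)
s = np.sqrt(resid @ resid / (x.size - 2))                # residual std error

for level in (0.95, 0.99, 0.999):
    t = stats.t.ppf(0.5 + level / 2, df=x.size - 2)
    print(f"{level:.1%} prediction band: +/- {t * s:.3f}")  # leverage term omitted
```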
Computational approaches to define a human milk metaglycome
Agravat, Sanjay B.; Song, Xuezheng; Rojsajjakul, Teerapat; Cummings, Richard D.; Smith, David F.
2016-01-01
Motivation: The goal of deciphering the human glycome has been hindered by the lack of high-throughput sequencing methods for glycans. Although mass spectrometry (MS) is a key technology in glycan sequencing, MS alone provides limited information about the identification of monosaccharide constituents, their anomericity and their linkages. These features of individual, purified glycans can be partly identified using well-defined glycan-binding proteins, such as lectins and antibodies that recognize specific determinants within glycan structures. Results: We present a novel computational approach to automate the sequencing of glycans using metadata-assisted glycan sequencing, which combines MS analyses with glycan structural information from glycan microarray technology. Success in this approach was aided by the generation of a 'virtual glycome' to represent all potential glycan structures that might exist within a metaglycome, based on a set of biosynthetic assumptions using known structural information. We exploited this approach to deduce the structures of soluble glycans within the human milk glycome by matching predicted structures based on experimental data against the virtual glycome. This represents the first metaglycome to be defined using this method, and we provide a publicly available web-based application to aid in sequencing milk glycans. Availability and implementation: http://glycomeseq.emory.edu Contact: sagravat@bidmc.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26803164
King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I.; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin
2011-01-01
Background Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. Methods A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. Results 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). Conclusions The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse. PMID:21853028
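For a binary outcome like the one above, the c-index reduces to the AUC: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case, with ties counted as one half. A minimal sketch with placeholder scores:

```python
import numpy as np

# c-index for a binary outcome == AUC: probability that a random positive
# scores higher than a random negative (ties count 1/2). Placeholder data.
def c_index(y, p):
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

y = np.array([1, 0, 0, 1, 0, 1, 0])
p = np.array([0.8, 0.3, 0.5, 0.6, 0.2, 0.5, 0.4])
print(round(c_index(y, p), 3))   # 0.958 for this toy data
```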
Wuelfing, W Peter; Daublain, Pierre; Kesisoglou, Filippos; Templeton, Allen; McGregor, Caroline
2015-04-06
In the drug discovery setting, the ability to rapidly identify drug absorption risk in preclinical species at high doses from easily measured physical properties is desired. This is due to the large number of molecules being evaluated and their high attrition rate, which make resource-intensive in vitro and in silico evaluation unattractive. High-dose in vivo data from rat, dog, and monkey are analyzed here, using a preclinical dose number (PDo) concept based on the dose number described by Amidon and other authors (Pharm. Res., 1993, 10, 264-270). PDo, as described in this article, is simply calculated as dose (mg/kg) divided by compound solubility in FaSSIF (mg/mL) and approximates the volume of biorelevant media per kilogram of animal that would be needed to fully dissolve the dose. High PDo values were found to be predictive of difficulty in achieving drug exposure (AUC)-dose proportionality in in vivo studies, as could be expected; however, this work analyzes a large data set (>900 data points) and provides quantitative guidance to identify drug absorption risk in preclinical species based on a single solubility measurement commonly carried out in drug discovery. Above the PDo values defined, >50% of all in vivo studies exhibited poor AUC-dose proportionality in rat, dog, and monkey, and these values can be utilized as general guidelines in discovery and early development to rapidly assess risk of solubility-limited absorption for a given compound. A preclinical dose number generated by biorelevant dilutions of formulated compounds (formulated PDo) was also evaluated and defines solubility targets predictive of suitable AUC-dose proportionality in formulation development efforts. Application of these guidelines can serve to efficiently identify compounds in discovery that are likely to present extreme challenges with respect to solubility-limited absorption in preclinical species as well as reduce the testing of poor formulations in vivo, which is a key ethical and resource matter.
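The PDo calculation described above is a single division; the sketch below encodes it, with the risk cutoff left as a labeled placeholder since the abstract's species-specific threshold values are not reproduced here.

```python
# Preclinical dose number as described in the abstract: dose (mg/kg) divided
# by FaSSIF solubility (mg/mL), i.e. mL of biorelevant medium per kg of
# animal needed to dissolve the dose.
def preclinical_dose_number(dose_mg_per_kg, fassif_solubility_mg_per_ml):
    return dose_mg_per_kg / fassif_solubility_mg_per_ml

pdo = preclinical_dose_number(100.0, 0.05)     # 100 mg/kg dose, 0.05 mg/mL solubility
print(pdo)                                     # 2000 mL/kg

PDO_RISK_CUTOFF = 1000.0                       # hypothetical; abstract's cutoffs not shown
print("solubility-limited absorption risk:", pdo > PDO_RISK_CUTOFF)
```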
Risk control and the minimum significant risk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiler, F.A.; Alvarez, J.L.
1996-06-01
Risk management implies that the risk manager can, by his actions, exercise at least a modicum of control over the risk in question. In the terminology of control theory, a management action is a control signal imposed as feedback on the system to bring about a desired change in the state of the system. In the terminology of risk management, an action is taken to bring a predicted risk to lower values. Even if it is assumed that the management action taken is 100% effective and that the projected risk reduction is infinitely well known, there is a lower limit to the desired effects that can be achieved. It is based on the fact that all risks, such as the incidence of cancer, exhibit a degree of variability due to a number of extraneous factors such as age at exposure, sex, location, and some lifestyle parameters such as smoking or the consumption of alcohol. If the control signal is much smaller than the variability of the risk, the signal is lost in the noise and control is lost. This defines a minimum controllable risk based on the variability of the risk over the population considered. This quantity is the counterpart of the minimum significant risk which is defined by the uncertainties of the risk model. Both the minimum controllable risk and the minimum significant risk are evaluated for radiation carcinogenesis and are shown to be of the same order of magnitude. For a realistic management action, the assumptions of perfectly effective action and perfect model prediction made above have to be dropped, resulting in an effective minimum controllable risk which is determined by both risk limits. Any action below that effective limit is futile, but it is also unethical due to the ethical requirement of doing more good than harm. Finally, some implications of the effective minimum controllable risk on the use of the ALARA principle and on the evaluation of remedial action goals are presented.
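The control-theoretic point above reduces to a signal-to-noise comparison: an action whose predicted risk reduction is small relative to the population variability of the risk is lost in the noise. A toy sketch, with the numbers and the multiplier k hypothetical:

```python
# Toy signal-to-noise check behind the "minimum controllable risk": an action
# is futile when its predicted risk reduction is small relative to the
# population variability of the risk. Numbers and k are hypothetical.
def is_controllable(delta_risk, population_sd, k=1.0):
    return abs(delta_risk) >= k * population_sd

print(is_controllable(delta_risk=2e-5, population_sd=1e-3))  # False: lost in the noise
print(is_controllable(delta_risk=5e-3, population_sd=1e-3))  # True
```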
Tolls, Johannes; Müller, Martin; Willing, Andreas; Steber, Josef
2009-07-01
Many consumer products contain lipophilic, poorly soluble ingredients representing large-volume substances whose aquatic toxicity cannot be adequately determined with standard methods for a number of reasons. In such cases, a recently developed approach can be used to define an aquatic exposure threshold of no concern (ETNCaq), i.e., a concentration below which no adverse effects on the environment are to be expected. A risk assessment can be performed by comparing the ETNCaq value with the aquatic exposure levels of poorly soluble substances. Accordingly, the aquatic exposure levels of substances with water solubility below the ETNCaq will not exceed the ecotoxicological no-effect concentration; therefore, their risk can be assessed as being negligible. The ETNCaq value relevant for substances with a narcotic mode of action is 1.9 microg/L. To apply the above risk assessment strategy, the solubility in water needs to be known. Most frequently, this parameter is estimated by means of quantitative structure/activity relationships based on the log octanol-water partition coefficient (log Kow). The predictive value of several calculation models for water solubility was investigated using recent experimental solubility data for lipophilic compounds. A linear regression model was shown to be the most suitable for providing correct predictions without underestimation of the real water solubility. To define a log Kow threshold suitable for reliably predicting a water solubility of less than 1.9 microg/L, a confidence limit was established by statistical comparison of the experimental solubility data with their log Kow. It was found that a threshold of log Kow = 7 generally allows discrimination between substances with solubility greater than and less than 1.9 microg/L. Accordingly, organic substances with baseline toxicity and log Kow > 7 do not require further testing to prove that they have low environmental risk. In applying this concept, the uncertainty of the prediction of water solubility can be accounted for: if the predicted solubility in water is to be below the ETNCaq with a probability of 95%, the corresponding log Kow value is 8.
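The regression used in the study is not given in the abstract, so the sketch below substitutes the published Yalkowsky general solubility equation, log10 S[mol/L] = 0.5 - 0.01*(MP - 25) - log Kow, to show how a log Kow cutoff maps onto the 1.9 microg/L ETNCaq; the melting point and molecular weight are hypothetical inputs.

```python
# Stand-in for the study's (unspecified) regression: the Yalkowsky general
# solubility equation, log10 S[mol/L] = 0.5 - 0.01*(MP - 25) - logKow, used
# to map a logKow cutoff onto the 1.9 ug/L ETNCaq. MP and MW are hypothetical.
def solubility_ug_per_l(log_kow, melting_point_c, mol_weight):
    log_s_mol = 0.5 - 0.01 * (melting_point_c - 25.0) - log_kow
    return 10**log_s_mol * mol_weight * 1e6     # mol/L -> ug/L

for lk in (5, 6, 7, 8):
    s = solubility_ug_per_l(lk, melting_point_c=200.0, mol_weight=400.0)
    print(f"logKow {lk}: ~{s:.2g} ug/L ({'below' if s < 1.9 else 'above'} ETNCaq)")
```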
Performance of third-trimester combined screening model for prediction of adverse perinatal outcome.
Miranda, J; Triunfo, S; Rodriguez-Lopez, M; Sairanen, M; Kouru, H; Parra-Saavedra, M; Crovetto, F; Figueras, F; Crispi, F; Gratacós, E
2017-09-01
To explore the potential value of third-trimester combined screening for the prediction of adverse perinatal outcome (APO) in the general population and among small-for-gestational-age (SGA) fetuses. This was a nested case-control study within a prospective cohort of 1590 singleton gestations undergoing third-trimester evaluation (32 + 0 to 36 + 6 weeks' gestation). Maternal baseline characteristics, mean arterial blood pressure, fetoplacental ultrasound and circulating biochemical markers (placental growth factor (PlGF), lipocalin-2, unconjugated estriol and inhibin A) were assessed in all women who subsequently had an APO (n = 148) and in a control group without perinatal complications (n = 902). APO was defined as the occurrence of stillbirth, umbilical artery cord blood pH < 7.15, 5-min Apgar score < 7 or emergency operative delivery for fetal distress. Logistic regression models were developed for the prediction of APO in the general population and among SGA cases (defined as customized birth weight < 10th centile). The prevalence of APO was 9.3% in the general population and 27.4% among SGA cases. In the general population, a combined screening model including a-priori risk (maternal characteristics), estimated fetal weight (EFW) centile, umbilical artery pulsatility index (UA-PI), estriol and PlGF achieved a detection rate for APO of 26% (area under receiver-operating characteristics curve (AUC), 0.59 (95% CI, 0.54-0.65)), at a 10% false-positive rate (FPR). Among SGA cases, a model including a-priori risk, EFW centile, UA-PI, cerebroplacental ratio, estriol and PlGF predicted 62% of APO (AUC, 0.86 (95% CI, 0.80-0.92)) at a FPR of 10%. The use of fetal ultrasound and maternal biochemical markers at 32-36 weeks provides a poor prediction of APO in the general population. Although it remains limited, the performance of the screening model is improved when applied to fetuses with suboptimal fetal growth. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.
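The detection rate at a fixed 10% FPR quoted above can be computed directly from predicted risks by thresholding at the 90th percentile of the control scores; a sketch with synthetic placeholder scores (the group sizes mirror the study's 148 cases and 902 controls):

```python
import numpy as np

# Detection rate (sensitivity) at a fixed 10% false-positive rate: threshold
# at the 90th percentile of control scores. Scores are synthetic placeholders.
rng = np.random.default_rng(3)
scores_apo = rng.normal(0.6, 0.3, 148)       # cases with adverse outcome
scores_ctrl = rng.normal(0.4, 0.3, 902)      # controls

threshold = np.quantile(scores_ctrl, 0.90)   # fixes FPR at 10%
detection_rate = (scores_apo > threshold).mean()
print(f"DR at 10% FPR: {detection_rate:.0%}")
```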
Baños, Núria; Migliorelli, Federico; Posadas, Eduardo; Ferreri, Janisse; Palacio, Montse
2015-01-01
The objectives of this review were to identify the predictive factors of induction of labor (IOL) failure or success, as well as to highlight the current heterogeneity regarding the definition and diagnosis of failed IOL. Only studies in which the main or secondary outcome was failed IOL, defined as not entering the active phase of labor after 24 h of prostaglandin administration ± 12 h of oxytocin infusion, were included in the review. The data collected were: study design, definition of failed IOL, induction method, IOL indications, failed IOL rate, cesarean section because of failed IOL and predictors of failed IOL. The database search detected 507 publications. The main reason for exclusion was that the primary or secondary outcomes did not match the predetermined definition of failed IOL (not achieving the active phase of labor). Finally, 7 studies were eligible. The main predictive factor identified in the review was cervical status, evaluated by the Bishop score or cervical length. Failed IOL should be defined as the inability to achieve the active phase of labor, considering that the purpose of IOL is to enter the active phase of labor. A universal definition of failed IOL is an essential requisite to analyze and obtain solid results and conclusions on this issue. An important finding of this review is that only 7 of all the studies reviewed assessed achieving the active phase of labor as a primary or secondary IOL outcome. Another conclusion is that cervical status remains the most important predictor of IOL outcome, although the value of the parameters explored up to now is limited. Finding or developing predictive tools to identify those women exposed to IOL who may not reach the active phase of labor is crucial to minimizing the risks and costs associated with IOL failure, and represents an important opportunity for investigation. Therefore, other predictive tools should be studied in order to improve IOL outcome in terms of health and economic burden. © 2015 S. Karger AG, Basel.
Improving Ms Estimates by Calibrating Variable-Period Magnitude Scales at Regional Distances
2008-09-01
Faulting mechanisms were classified as normal (NF), thrust (TF), or oblique-slip variations of normal and thrust faults using the Zoback (1992) classification scheme. Differences between the observed (true) and Ms-predicted Mw show a definable faulting-mechanism effect, especially when strike-slip events are compared to those with other mechanisms.
Larsen, Sadie E; Berenbaum, Howard
2017-01-01
A recent meta-analysis found that DSM-III- and DSM-IV-defined traumas were associated with only slightly higher posttraumatic stress disorder (PTSD) symptoms than nontraumatic stressors. The current study is the first to examine whether DSM-5-defined traumas were associated with higher levels of PTSD than DSM-IV-defined traumas. Further, we examined theoretically relevant event characteristics to determine whether characteristics other than those outlined in the DSM could predict PTSD symptoms. One hundred six women who had experienced a trauma or significant stressor completed questionnaires assessing PTSD, depression, impairment, and event characteristics. Events were rated for whether they qualified as DSM-IV and DSM-5 trauma. There were no significant differences between DSM-IV-defined traumas and stressors. For DSM-5, effect sizes were slightly larger but still nonsignificant (except for significantly higher hyperarousal following traumas vs. stressors). Self-reported fear for one's life significantly predicted PTSD symptoms. Our results indicate that the current DSM-5 definition of trauma, although a slight improvement from DSM-IV, is not highly predictive of who develops PTSD symptoms. Our study also indicates the importance of individual perception of life threat in the prediction of PTSD. © 2017 S. Karger AG, Basel.
A diffusion-limited reaction model for self-propagating Al/Pt multilayers with quench limits
NASA Astrophysics Data System (ADS)
Kittell, D. E.; Yarrington, C. D.; Hobbs, M. L.; Abere, M. J.; Adams, D. P.
2018-04-01
A diffusion-limited reaction model was calibrated for Al/Pt multilayers ignited on oxidized silicon, sapphire, and tungsten substrates, as well as for some Al/Pt multilayers ignited as free-standing foils. The model was implemented in a finite element analysis code and used to match experimental burn front velocity data collected from several years of testing at Sandia National Laboratories. Moreover, both the simulations and experiments reveal well-defined quench limits in the total Al + Pt layer (i.e., bilayer) thickness. At these limits, the heat generated from atomic diffusion is insufficient to support a self-propagating wave front on top of the substrates. Quench limits for reactive multilayers are seldom reported and are found to depend on the thermal properties of the individual layers. Here, the diffusion-limited reaction model is generalized to allow for temperature- and composition-dependent material properties, phase change, and anisotropic thermal conductivity. Utilizing this increase in model fidelity, excellent overall agreement is shown between the simulations and experimental results with a single calibrated parameter set. However, the burn front velocities of Al/Pt multilayers ignited on tungsten substrates are over-predicted. Possible sources of error are discussed and a higher activation energy (from 41.9 kJ/mol.at. to 47.5 kJ/mol.at.) is shown to bring the simulations into agreement with the velocity data observed on tungsten substrates. This higher activation energy suggests an inhibited diffusion mechanism present at lower heating rates.
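The effect of the recalibrated activation energy can be pictured with a plain Arrhenius rate, as sketched below. Only the two activation energies (41.9 and 47.5 kJ/mol.at.) come from the abstract; the pre-exponential factor and temperatures are placeholders, and the paper's model couples such a rate to a finite element thermal solution rather than evaluating it in isolation.

```python
# Relative slowdown of an Arrhenius-type diffusion rate when the activation
# energy is raised from 41.9 to 47.5 kJ/mol.at. (temperatures are illustrative).
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(T_kelvin, Ea_kj_per_mol, prefactor=1.0):
    return prefactor * math.exp(-Ea_kj_per_mol * 1e3 / (R * T_kelvin))

for T in (600.0, 900.0, 1200.0):
    slowdown = arrhenius_rate(T, 41.9) / arrhenius_rate(T, 47.5)
    print(f"T = {T:.0f} K: the higher-Ea rate is {slowdown:.1f}x slower")
```

Note that the slowdown is largest at low temperature, consistent with the suggestion of an inhibited diffusion mechanism at lower heating rates.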
NASA Astrophysics Data System (ADS)
Béres, Gábor; Weltsch, Zoltán; Lukács, Zsolt; Tisza, Miklós
2018-05-01
The forming limit is a complex concept covering the limit strain values related to the onset of local necking in sheet metal. In cold sheet metal forming, major and minor limit strains are influenced by the sheet thickness, strain path (deformation history), material parameters, and microstructure. Forming Limit Curves are plotted in the ɛ1-ɛ2 coordinate system, providing the classic strain-based Forming Limit Diagram (FLD). Using an appropriate constitutive model, the limit strains can be converted into the stress-based Forming Limit Diagram (SFLD), irrespective of the strain path. This study examines the effect of hardening model parameters on the determination of limit stress values in Nakazima tests of automotive dual-phase (DP) steels. Five limit strain pairs were determined experimentally by loading five different sheet geometries, which follow different strain paths from pure shear (-2ɛ2 = ɛ1) up to biaxial stretching (ɛ2 = ɛ1). The earlier works of Hill, Levy-Tyne, and Keeler-Brazier also allow a theoretical determination of the limit strains. This was followed by the stress calculation based on the experimental and theoretical strain data. Since the n exponent in the Nádai expression varies with strain for some DP steels, the least-squares method was applied to fit the parameters of other hardening models (Ludwik, Voce, Hockett-Sherby) and to calculate the stress states corresponding to each limit strain. The results showed that the choice of model parameters can produce discrepancies between the limit stress states at equivalent strains higher than uniaxial stretching. The fitted hardening models were imported into an FE code to extend and validate the results by numerical simulations.
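As an indication of how such hardening laws are fitted, the sketch below performs a least-squares fit of Voce and Ludwik forms to a synthetic flow curve with scipy. The data points and initial guesses are invented; the study fitted its models to measured DP-steel flow curves.

```python
# Least-squares fitting of hardening laws to a (synthetic) flow curve.
import numpy as np
from scipy.optimize import curve_fit

def voce(eps, sigma0, sigma_sat, c):
    """Voce law: flow stress saturating at sigma_sat."""
    return sigma_sat - (sigma_sat - sigma0) * np.exp(-c * eps)

def ludwik(eps, sigma0, K, n):
    """Ludwik law: power-law hardening on top of a yield stress."""
    return sigma0 + K * eps**n

eps = np.linspace(0.01, 0.30, 30)
stress = 350 + 550 * eps**0.18 + np.random.default_rng(1).normal(0, 5, eps.size)

p_voce, _ = curve_fit(voce, eps, stress, p0=(350.0, 900.0, 10.0))
p_ludwik, _ = curve_fit(ludwik, eps, stress, p0=(300.0, 500.0, 0.2))
print("Voce   (sigma0, sigma_sat, c):", np.round(p_voce, 1))
print("Ludwik (sigma0, K, n):        ", np.round(p_ludwik, 2))
```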
Campbell, Karen M; Haldeman, Kristin; Lehnig, Chris; Munayco, Cesar V; Halsey, Eric S; Laguna-Torres, V Alberto; Yagui, Martín; Morrison, Amy C; Lin, Chii-Dean; Scott, Thomas W
2015-01-01
Dengue is one of the most aggressively expanding mosquito-transmitted viruses. The human burden approaches 400 million infections annually. Complex transmission dynamics pose challenges for predicting location, timing, and magnitude of risk; thus, models are needed to guide prevention strategies and policy development locally and globally. Weather regulates transmission-potential via its effects on vector dynamics. An important gap in understanding risk and roadblock in model development is an empirical perspective clarifying how weather impacts transmission in diverse ecological settings. We sought to determine if location, timing, and potential-intensity of transmission are systematically defined by weather. We developed a high-resolution empirical profile of the local weather-disease connection across Peru, a country with considerable ecological diversity. Applying 2-dimensional weather-space that pairs temperature versus humidity, we mapped local transmission-potential in weather-space by week during 1994-2012. A binary classification-tree was developed to test whether weather data could classify 1828 Peruvian districts as positive/negative for transmission and into ranks of transmission-potential with respect to observed disease. We show that transmission-potential is regulated by temperature-humidity coupling, enabling epidemics in a limited area of weather-space. Duration within a specific temperature range defines transmission-potential that is amplified exponentially in higher humidity. Dengue-positive districts were identified by mean temperature >22°C for 7+ weeks and minimum temperature >14°C for 33+ weeks annually with 95% sensitivity and specificity. In elevated-risk locations, seasonal peak-incidence occurred when mean temperature was 26-29°C, coincident with humidity at its local maximum; highest incidence when humidity >80%. We profile transmission-potential in weather-space for temperature-humidity ranging 0-38°C and 5-100% at 1°C x 2% resolution. Local duration in limited areas of temperature-humidity weather-space identifies potential locations, timing, and magnitude of transmission. The weather-space profile of transmission-potential provides needed data that define a systematic and highly-sensitive weather-disease connection, demonstrating separate but coupled roles of temperature and humidity. New insights regarding natural regulation of human-mosquito transmission across diverse ecological settings advance our understanding of risk locally and globally for dengue and other mosquito-borne diseases and support advances in public health policy/operations, providing an evidence-base for modeling, predicting risk, and surveillance-prevention planning.
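The headline classification rule is simple enough to state directly in code; the sketch below implements the two thresholds quoted above on hypothetical weekly temperature series. The thresholds come from the abstract; the input data and function name are illustrative.

```python
# District-level dengue classification rule from the abstract: mean weekly
# temperature > 22°C for at least 7 weeks and minimum weekly temperature
# > 14°C for at least 33 weeks of the year.

def dengue_positive(weekly_mean_t_c, weekly_min_t_c):
    warm_weeks = sum(t > 22.0 for t in weekly_mean_t_c)
    mild_weeks = sum(t > 14.0 for t in weekly_min_t_c)
    return warm_weeks >= 7 and mild_weeks >= 33

mean_t = [24.0] * 10 + [20.0] * 42  # 10 warm weeks (hypothetical district)
min_t = [16.0] * 40 + [12.0] * 12   # 40 mild weeks
print(dengue_positive(mean_t, min_t))  # True
```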
Interannual variability and predictability over the Arabian Peninsula winter monsoon region
NASA Astrophysics Data System (ADS)
Adnan Abid, Muhammad; Kucharski, Fred; Almazroui, Mansour; Kang, In-Sik
2016-04-01
Interannual winter rainfall variability and its predictability are analysed over the Arabian Peninsula region by using observed and hindcast datasets from the state-of-the-art European Centre for Medium-Range Weather Forecasts (ECMWF) seasonal prediction System 4 for the period 1981-2010. An Arabian winter monsoon index (AWMI) is defined, highlighting the Arabian Peninsula as the most representative Northern Hemisphere region in which winter rainfall dominates over summer rainfall. The observations show that the rainfall variability is relatively large over the northeast of the Arabian Peninsula. The correlation coefficient between the Nino3.4 index and rainfall in this region is 0.33, suggesting some modest predictability and indicating that El Nino increases and La Nina decreases the rainfall. Regression analysis shows that upper-level cyclonic circulation anomalies forced by the El Nino Southern Oscillation (ENSO) are responsible for the winter rainfall anomalies over the Arabian region. The stronger (weaker) mean transient-eddy activity related to the upper-level trough induced by the warm (cold) sea-surface temperatures during El Nino (La Nina) tends to increase (decrease) the rainfall in the region. The model hindcast dataset reproduces the ENSO-rainfall connection. The seasonal mean predictability of the northeast Arabian rainfall index is 0.35. It is shown that the noise variance is larger than the signal variance over the Arabian Peninsula region, which tends to limit the prediction skill. The potential predictability is generally increased in ENSO years and is, in particular, larger during La Nina than during El Nino years in the region. Furthermore, central Pacific ENSO events and ENSO events with weak signals in the Indian Ocean tend to increase predictability over the Arabian region.
Debris-flow runout predictions based on the average channel slope (ACS)
Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.
2008-01-01
Prediction of the runout distance of a debris flow is an important element in the delineation of potentially hazardous areas on alluvial fans and for the siting of mitigation structures. Existing runout estimation methods rely on input parameters that are often difficult to estimate, including volume, velocity, and frictional factors. In order to provide a simple method for preliminary estimates of debris-flow runout distances, we developed a model that provides runout predictions based on the average channel slope (ACS model) for non-volcanic debris flows that emanate from confined channels and deposit on well-defined alluvial fans. This model was developed from 20 debris-flow events in the western United States and British Columbia. Based on a runout estimation method developed for snow avalanches, this model predicts debris-flow runout as an angle of reach from a fixed point in the drainage channel to the end of the runout zone. The best fixed point was found to be the mid-point elevation of the drainage channel, measured from the apex of the alluvial fan to the top of the drainage basin. Predicted runout lengths were more consistent than those obtained from existing angle-of-reach estimation methods. Results of the model compared well with those of laboratory flume tests performed using the same range of channel slopes. The robustness of this model was tested by applying it to three debris-flow events not used in its development: predicted runout ranged from 82 to 131% of the actual runout for these three events. Prediction interval multipliers were also developed so that the user may calculate predicted runout within specified confidence limits. © 2008 Elsevier B.V. All rights reserved.
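The geometry behind an angle-of-reach prediction is sketched below: once a model such as ACS supplies the angle of reach from the fixed point (here, the channel mid-point elevation), the horizontal runout follows from the vertical drop. The 12° angle and 200 m drop are illustrative placeholders, not values from the paper.

```python
# Horizontal runout implied by an angle of reach and a vertical drop.
import math

def runout_distance_m(vertical_drop_m, reach_angle_deg):
    """Horizontal distance from the fixed point to the end of the runout zone."""
    return vertical_drop_m / math.tan(math.radians(reach_angle_deg))

# e.g. 200 m drop from the channel mid-point, 12 deg angle of reach
print(f"{runout_distance_m(200.0, 12.0):.0f} m")
```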
ERIC Educational Resources Information Center
Arriola, Sonya; Murphy, Katy
2010-01-01
Undocumented students are a population defined by limitations. Their lack of legal residency and any supporting paperwork (e.g., Social Security number, government issued identification) renders them essentially invisible to the American and state governments. They cannot legally work. In many states, they cannot legally drive. After the age of…
Examining the predictive validity of low-risk gambling limits with longitudinal data.
Currie, Shawn R; Hodgins, David C; Casey, David M; el-Guebaly, Nady; Smith, Garry J; Williams, Robert J; Schopflocher, Don P; Wood, Robert T
2012-02-01
To assess the impact of gambling above the low-risk gambling limits developed by Currie et al. (2006) on future harm. To identify demographic, behavioural, clinical and environmental factors that predict the shift from low- to high-risk gambling habits over time. Longitudinal cohort study of gambling habits in community-dwelling adults. Alberta, Canada. A total of 809 adult gamblers who completed the time 1 and time 2 assessments separated by a 14-month interval. Low-risk gambling limits were defined as gambling no more than three times per month, spending no more than CAN$1000 per year on gambling and spending less than 1% of gross income on gambling. Gambling habits, harm from gambling and gambler characteristics were assessed by the Canadian Problem Gambling Index. Ancillary measures of substance abuse, gambling environment, major depression, impulsivity and personality traits assessed the influence of other risk factors on the escalation of gambling intensity. Gamblers classified as low risk at time 1 and shifted into high-risk gambling by time 2 were two to three times more likely to experience harm compared to gamblers who remained low risk at both assessments. Factors associated with the shift from low- to high-risk gambling behaviour from time 1 to time 2 included male gender, tobacco use, older age, having less education, having friends who gamble and playing electronic gaming machines. An increase in the intensity of gambling behaviour is associated with greater likelihood of future gambling related harm in adults. © 2011 The Authors, Addiction © 2011 Society for the Study of Addiction.
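The low-risk limits used in the study translate directly into a screening function, sketched below; the three thresholds are exactly those quoted above, while the function and argument names are illustrative.

```python
# Low-risk gambling limits from Currie et al. (2006), as used in the study.

def is_low_risk(sessions_per_month, spend_per_year_cad, gross_income_cad):
    return (sessions_per_month <= 3
            and spend_per_year_cad <= 1000
            and spend_per_year_cad < 0.01 * gross_income_cad)

print(is_low_risk(2, 600, 80_000))  # True
print(is_low_risk(4, 600, 80_000))  # False: gambles too frequently
```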
Quantifying evenly distributed states in exclusion and nonexclusion processes
NASA Astrophysics Data System (ADS)
Binder, Benjamin J.; Landman, Kerry A.
2011-04-01
Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Pólya distribution. New measures are developed which indicate whether or not a spatial data set, generated from an exclusion process, is at its most evenly distributed state, the complete spatial randomness (CSR) state. To this end, we define an index in terms of the variance between the bin counts. Limiting values of the index are determined when objects have access to the entire domain and when there are subregions of the domain that are inaccessible to objects. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and verify that our theoretical CSR limit accurately predicts the state of the system. These measures should prove useful in many biological applications.
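A minimal sketch of a variance-based evenness index follows, assuming the index is the ratio of the observed bin-count variance to the variance expected under CSR for an exclusion process (a hypergeometric count per bin); the paper's exact definition and normalization may differ.

```python
# Variance-ratio index for bin counts of an exclusion process (assumed form).
import numpy as np

def variance_index(counts, n_objects, bin_size, n_sites):
    """Observed bin-count variance over the CSR (hypergeometric) variance."""
    p = n_objects / n_sites
    csr_var = bin_size * p * (1 - p) * (n_sites - bin_size) / (n_sites - 1)
    return np.var(counts, ddof=1) / csr_var

rng = np.random.default_rng(2)
occupancy = rng.permutation(np.r_[np.ones(100, int), np.zeros(300, int)])
counts = occupancy.reshape(20, 20).sum(axis=1)  # 20 bins of 20 sites each
print(variance_index(counts, n_objects=100, bin_size=20, n_sites=400))  # ~1 at CSR
```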
Revisiting competition in a classic model system using formal links between theory and data.
Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J
2012-09-01
Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be equally affected by interspecific competition with a poor competitor (traditionally defined) as it is by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.
NASA Astrophysics Data System (ADS)
Ham, Boo-Hyun; Kim, Il-Hwan; Park, Sung-Sik; Yeo, Sun-Young; Kim, Sang-Jin; Park, Dong-Woon; Park, Joon-Soo; Ryu, Chang-Hoon; Son, Bo-Kyeong; Hwang, Kyung-Bae; Shin, Jae-Min; Shin, Jangho; Park, Ki-Yeop; Park, Sean; Liu, Lei; Tien, Ming-Chun; Nachtwein, Angelique; Jochemsen, Marinus; Yan, Philip; Hu, Vincent; Jones, Christopher
2017-03-01
As critical dimensions for advanced two-dimensional (2D) DUV patterning continue to shrink, the exact process window becomes increasingly difficult to determine. The defect size criteria shrink with the patterning critical dimensions and are well below the resolution of current optical inspection tools. As a result, it is more challenging for traditional bright field inspection tools to accurately discover the hotspots that define the process window. In this study, we use a novel computational inspection method to identify the depth-of-focus limiting features of a 10 nm node mask with 2D metal structures (single exposure) and compare the results to those obtained with a traditional process window qualification (PWQ) method that utilizes a focus-modulated wafer and bright field inspection (BFI) to detect hotspot defects. The method is extended to litho-etch litho-etch (LELE) on a different test vehicle to show that overlay-related bridging hotspots can also be identified.
Family-wide analysis of poly(ADP-ribose) polymerase activity
Uchima, Lilen; Rood, Jenny; Zaja, Roko; Hay, Ronald T.; Ahel, Ivan; Chang, Paul
2014-01-01
The poly(ADP-ribose) polymerase (PARP) protein family generates ADP-ribose (ADPr) modifications onto target proteins using NAD+ as substrate. Based on the composition of three NAD+ coordinating amino acids, the H-Y-E motif, each PARP is predicted to generate either poly(ADP-ribose) (PAR) or mono(ADP-ribose) (MAR). However, the reaction product of each PARP has not been clearly defined, and is an important priority since PAR and MAR function via distinct mechanisms. Here we show that the majority of PARPs generate MAR, not PAR, and demonstrate that the H-Y-E motif is not the sole indicator of PARP activity. We identify automodification sites on seven PARPs, and demonstrate that MAR and PAR generating PARPs modify similar amino acids, suggesting that the sequence and structural constraints limiting PARPs to MAR synthesis do not limit their ability to modify canonical amino acid targets. In addition, we identify cysteine as a novel amino acid target for ADP-ribosylation on PARPs. PMID:25043379
Noninflammatory Joint Contractures Arising from Immobility: Animal Models to Future Treatments
Wong, Kayleigh; Trudel, Guy; Laneuville, Odette
2015-01-01
Joint contractures, defined as the limitation in the passive range of motion of a mobile joint, can be classified as noninflammatory diseases of the musculoskeletal system. The pathophysiology is not well understood; limited information is available on causal factors, progression, the pathophysiology involved, and prediction of response to treatment. The clinical heterogeneity of joint contractures combined with the heterogeneous contribution of joint connective tissues to joint mobility presents challenges to the study of joint contractures. Furthermore, contractures are often a symptom of a wide variety of heterogeneous disorders that are in many cases multifactorial. Extended immobility has been identified as a causal factor and evidence is provided from both experimental and epidemiology studies. Of interest is the involvement of the joint capsule in the pathophysiology of joint contractures and lack of response to remobilization. While molecular pathways involved in the development of joint contractures are being investigated, current treatments focus on physiotherapy, which is ineffective on irreversible contractures. Future treatments may include early diagnosis and prevention. PMID:26247029
Gear Damage Detection Using Oil Debris Analysis
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.
2001-01-01
The purpose of this paper was to verify, when using an oil debris sensor, that accumulated mass predicts gear pitting damage and to identify a method to set threshold limits for damaged gears. Oil debris data was collected from 8 experiments with no damage and 8 with pitting damage in the NASA Glenn Spur Gear Fatigue Rig. Oil debris feature analysis was performed on this data. Video images of damage progression were also collected from 6 of the experiments with pitting damage. During each test, data from an oil debris sensor was monitored and recorded for the occurrence of pitting damage. The data measured from the oil debris sensor during experiments with damage and with no damage was used to identify membership functions to build a simple fuzzy logic model. Using fuzzy logic techniques and the oil debris data, threshold limits were defined that discriminate between stages of pitting wear. Results indicate accumulated mass combined with fuzzy logic analysis techniques is a good predictor of pitting damage on spur gears.
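The fuzzy-logic step can be pictured with trapezoidal membership functions mapping accumulated debris mass onto wear stages, as sketched below. The breakpoints are invented placeholders; the paper derived its membership functions from the rig data.

```python
# Trapezoidal fuzzy membership of accumulated debris mass in wear stages
# (breakpoints are illustrative, not the paper's calibrated values).

def trapezoid(x, a, b, c, d):
    """Membership rising over a..b, flat over b..c, falling over c..d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def wear_stage_memberships(mass_mg):
    return {
        "no damage": trapezoid(mass_mg, -1.0, 0.0, 10.0, 20.0),
        "early pit": trapezoid(mass_mg, 10.0, 20.0, 40.0, 60.0),
        "damaged":   trapezoid(mass_mg, 40.0, 60.0, 1e9, 2e9),
    }

print(wear_stage_memberships(15.0))  # partial membership in two stages
```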
Positional orthology: putting genomic evolutionary relationships into context.
Dewey, Colin N
2011-09-01
Orthology is a powerful refinement of homology that allows us to describe more precisely the evolution of genomes and understand the function of the genes they contain. However, because orthology is not concerned with genomic position, it is limited in its ability to describe genes that are likely to have equivalent roles in different genomes. Because of this limitation, the concept of 'positional orthology' has emerged, which describes the relation between orthologous genes that retain their ancestral genomic positions. In this review, we formally define this concept, for which we introduce the shorter term 'toporthology', with respect to the evolutionary events experienced by a gene's ancestors. Through a discussion of recent studies on the role of genomic context in gene evolution, we show that the distinction between orthology and toporthology is biologically significant. We then review a number of orthology prediction methods that take genomic context into account and thus that may be used to infer the important relation of toporthology.
Meibom, A.; Stage, M.; Wooden, J.; Constantz, B.R.; Dunbar, R.B.; Owen, A.; Grumet, N.; Bacon, C.R.; Chamberlain, C.P.
2003-01-01
In thermodynamic equilibrium with sea water, the Sr/Ca ratio of aragonite varies predictably with temperature, and the Sr/Ca ratio in corals has thus become a frequently used proxy for past Sea Surface Temperature (SST). However, biological effects can offset the Sr/Ca ratio from its equilibrium value. We report high spatial resolution ion microprobe analyses of well defined skeletal elements in the reef-building coral Porites lutea that reveal distinct monthly oscillations in the Sr/Ca ratio, with an amplitude in excess of ten percent. The extreme Sr/Ca variations, which we propose result from metabolic changes synchronous with the lunar cycle, introduce variability in Sr/Ca measurements based on conventional sampling techniques well beyond the analytical precision. These variations can limit the accuracy of Sr/Ca paleothermometry by conventional sampling techniques to about 2°C. Our results may help explain the notorious difficulties involved in obtaining an accurate and consistent calibration of the Sr/Ca vs. SST relationship.
A mechanistic model of small intestinal starch digestion and glucose uptake in the cow.
Mills, J A N; France, J; Ellis, J L; Crompton, L A; Bannink, A; Hanigan, M D; Dijkstra, J
2017-06-01
The high contribution of postruminal starch digestion (up to 50%) to total-tract starch digestion on energy-dense, starch-rich diets demands that limitations to small intestinal starch digestion be identified. A mechanistic model of the small intestine was described and evaluated with regard to its ability to simulate observations from abomasal carbohydrate infusions in the dairy cow. The 7 state variables represent starch, oligosaccharide, glucose, and pancreatic amylase in the intestinal lumen, oligosaccharide and glucose in the unstirred water layer at the intestinal wall, and intracellular glucose of the enterocyte. Enzymatic hydrolysis of starch was modeled as a 2-stage process involving the activity of pancreatic amylase in the lumen and of oligosaccharidase at the brush border of the enterocyte confined within the unstirred water layer. The Na+-dependent glucose transport into the enterocyte was represented along with a facilitative glucose transporter 2 transport system on the basolateral membrane. The small intestine is subdivided into 3 main sections, representing the duodenum, jejunum, and ileum for parameterization. Further subsections are defined between which continual digesta flow is represented. The model predicted nonstructural carbohydrate disappearance in the small intestine for cattle unadapted to duodenal infusion with a coefficient of determination of 0.92 and a root mean square prediction error of 25.4%. Simulation of glucose disappearance for mature Holstein heifers adapted to various levels of duodenal glucose infusion yielded a coefficient of determination of 0.81 and a root mean square prediction error of 38.6%. Analysis of model behavior identified limitations to the efficiency of small intestinal starch digestion with high levels of duodenal starch flow. Limitations to individual processes, particularly starch digestion in the proximal section of the intestine, can create asynchrony between starch hydrolysis and glucose uptake capacity. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
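A drastically reduced caricature of the luminal cascade (starch to oligosaccharide to glucose to absorbed glucose) is sketched below as a linear ODE chain. All rate constants are hypothetical, and the published model additionally represents the unstirred water layer, saturable transporters, and digesta flow between sections.

```python
# Toy 3-pool digestion cascade; rate constants are hypothetical placeholders.
from scipy.integrate import solve_ivp

def digestion(t, y, k_amylase=0.8, k_brush=1.2, k_uptake=0.9):
    starch, oligo, glucose = y
    return [-k_amylase * starch,                   # amylase hydrolysis
            k_amylase * starch - k_brush * oligo,  # brush-border hydrolysis
            k_brush * oligo - k_uptake * glucose]  # absorption

sol = solve_ivp(digestion, (0.0, 8.0), [100.0, 0.0, 0.0])
print(sol.y[:, -1])  # remaining pools (arbitrary units) after 8 h
```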
Brand, Caroline; Lowe, Adrian; Hall, Stephen
2008-01-01
Background Patients with rheumatoid arthritis have a higher risk of low bone mineral density than normal age-matched populations. There is limited evidence to support the cost effectiveness of population screening in rheumatoid arthritis, and case finding strategies have been proposed as a means to increase the cost effectiveness of diagnostic screening for osteoporosis. This study aimed to assess the performance attributes of generic and rheumatoid arthritis-specific clinical decision tools for diagnosing osteoporosis in a postmenopausal population with rheumatoid arthritis who attend ambulatory specialist rheumatology clinics. Methods A cross-sectional study of 127 ambulatory post-menopausal women with rheumatoid arthritis was performed. Patients currently receiving or who had previously received bone active therapy were excluded. Eligible women underwent clinical assessment and dual-energy X-ray absorptiometry (DXA) bone mineral density assessment. Clinical decision tools, including those specific for rheumatoid arthritis, were compared to seven generic post-menopausal tools to predict osteoporosis (defined as T score < -2.5). Sensitivity, specificity, positive and negative predictive values, and area under the curve were assessed. The diagnostic attributes of the clinical decision tools were compared by examination of the area under the receiver-operator curve. Results One hundred and twenty-seven women participated. The median age was 62 (IQR 56–71) years. Median disease duration was 108 (60–168) months. Seventy-two (57%) women had no record of a previous DXA examination. Eighty (63%) women had T scores at the femoral neck or lumbar spine less than -1. The area under the ROC curve for clinical decision tool prediction of T score < -2.5 varied between 0.63 and 0.76. The rheumatoid arthritis-specific decision tools did not perform better than generic tools; however, the National Osteoporosis Foundation score could potentially reduce the number of unnecessary DXA tests by approximately 45% in this population. Conclusion There was limited utility of clinical decision tools for predicting osteoporosis in this patient population. Fracture prediction tools that include risk factors independent of BMD are needed. PMID:18230132
Framework for making better predictions by directly estimating variables' predictivity.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2016-12-13
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the PR method of VS yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using the partition retention and I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.
Furushima, Taishi; Miyachi, Motohiko; Iemitsu, Motoyuki; Murakami, Haruka; Kawano, Hiroshi; Gando, Yuko; Kawakami, Ryoko; Sanada, Kiyoshi
2017-08-29
This study aimed to develop and cross-validate prediction equations for estimating appendicular skeletal muscle mass (ASM) and to examine the relationship between sarcopenia defined by the prediction equations and risk factors for cardiovascular diseases (CVD) or osteoporosis in Japanese men and women. Subjects were healthy men and women aged 20-90 years, who were randomly allocated to the following two groups: the development group (D group; 257 men, 913 women) and the cross-validation group (V group; 119 men, 112 women). To develop prediction equations, stepwise multiple regression analyses were performed on data obtained from the D group, using ASM measured by dual-energy X-ray absorptiometry (DXA) as a dependent variable and five easily obtainable measures (age, height, weight, waist circumference, and handgrip strength) as independent variables. When the prediction equations for ASM estimation were applied to the V group, a significant correlation was found between DXA-measured ASM and predicted ASM in both men and women (R² = 0.81 and R² = 0.72). Our prediction equations had higher R² values compared to previously developed equations (R² = 0.75-0.59 and R² = 0.69-0.40) in both men and women. Moreover, sarcopenia defined by predicted ASM was related to risk factors for osteoporosis and CVD, as well as sarcopenia defined by DXA-measured ASM. In this study, novel prediction equations were developed and cross-validated in Japanese men and women. Our analyses validated the clinical significance of these prediction equations and showed that previously reported equations were not applicable in a Japanese population.
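The published equations have the shape of an ordinary linear model over the five predictors, as sketched below. The coefficients here are invented placeholders to show the form only; the study's fitted equations must be used for any real estimate.

```python
# Shape of an ASM prediction equation (coefficients are NOT the paper's).

def predict_asm_kg(age_y, height_cm, weight_kg, waist_cm, grip_kg,
                   coef=(-0.05, 0.15, 0.20, -0.10, 0.08), intercept=-10.0):
    predictors = (age_y, height_cm, weight_kg, waist_cm, grip_kg)
    return intercept + sum(c * x for c, x in zip(coef, predictors))

print(f"{predict_asm_kg(70, 160, 60, 85, 25):.1f} kg (illustrative only)")
```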
Somers, Jeffrey T.; Newby, Nathaniel; Lawrence, Charles; DeWeese, Richard; Moorcroft, David; Phelps, Shean
2014-01-01
The objective of this study was to investigate new methods for predicting injury from expected spaceflight dynamic loads by leveraging a broader range of available information in injury biomechanics. Although all spacecraft designs were considered, the primary focus was the National Aeronautics and Space Administration Orion capsule, as the authors have the most knowledge and experience related to this design. The team defined a list of critical injuries and selected the THOR anthropomorphic test device as the basis for new standards and requirements. In addition, the team down-selected the list of available injury metrics to the following: head injury criteria 15, kinematic brain rotational injury criteria, neck axial tension and compression force, maximum chest deflection, lateral shoulder force and displacement, acetabular lateral force, thoracic spine axial compression force, ankle moments, and average distal forearm speed limits. The team felt that these metrics capture all of the injuries that might be expected by a seated crewmember during vehicle aborts and landings. Using previously determined injury risk levels for nominal and off-nominal landings, appropriate injury assessment reference values (IARVs) were defined for each metric. Musculoskeletal deconditioning due to exposure to reduced gravity over time can affect injury risk during landing; therefore a deconditioning factor was applied to all IARVs. Although there are appropriate injury data for each anatomical region of interest, additional research is needed for several metrics to improve the confidence score. PMID:25152879
Exploratory Study of RNA Polymerase II Using Dynamic Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Rhodin, Thor; Umemura, Kazuo; Gad, Mohammed; Jarvis, Suzanne; Ishikawa, Mitsuru; Fu, Jianhua
2002-03-01
An exploratory study of the microtopological dimensions and shape features of yeast RNA polymerase II (y-poly II) on freshly cleaved mica was made in phosphate aqueous buffer solution at room temperature, following previous work by Hansma and others. The molecules were stabilized on freshly cleaved mica and imaged at a limiting resolution of 10 Å using dynamical atomic force microscopy with a 10 nm multi-wall carbon nanotube tip in the resonance frequency modulation mode. They showed microtopological shape and dimensional features similar to those predicted by electron density plots derived from the X-ray crystallographic model. This is considered primarily a feasibility study, with definitive conclusions subject to more detailed systematic measurements of the 3D microtopology. These measurements appear to establish the validity of the noncontact atomic force microscopy (nc-AFM) approach for defining the primary microtopology and biochemical functionality of RNA polymerase II. Further studies at higher resolution using dynamical nc-AFM will be required to clearly define the detailed 3D microtopology of RNA polymerase II in anaerobic aqueous environments under both static and dynamic conditions.
Successful Aging: Advancing the Science of Physical Independence in Older Adults
Anton, Stephen D.; Woods, Adam J.; Ashizawa, Tetso; Barb, Diana; Buford, Thomas W.; Carter, Christy S.; Clark, David J.; Cohen, Ronald A.; Corbett, Duane B.; Cruz-Almeida, Yenisel; Dotson, Vonetta; Ebner, Natalie; Efron, Philip A.; Fillingim, Roger B.; Foster, Thomas C.; Gundermann, David M.; Joseph, Anna-Maria; Karabetian, Christy; Leeuwenburgh, Christiaan; Manini, Todd M.; Marsiske, Michael; Mankowski, Robert T.; Mutchie, Heather L.; Perri, Michael G.; Ranka, Sanjay; Rashidi, Parisa; Sandesara, Bhanuprasad; Scarpace, Philip J.; Sibille, Kimberly T.; Solberg, Laurence M.; Someya, Shinichi; Uphold, Connie; Wohlgemuth, Stephanie; Wu, Samuel Shangwu; Pahor, Marco
2015-01-01
The concept of ‘Successful Aging’ has long intrigued the scientific community. Despite this long-standing interest, a consensus definition has proven to be a difficult task, due to the inherent challenge involved in defining such a complex, multi-dimensional phenomenon. The lack of a clear set of defining characteristics for the construct of successful aging has made comparison of findings across studies difficult and has limited advances in aging research. The domain in which consensus on markers of successful aging is furthest developed is the domain of physical functioning. For example, walking speed appears to be an excellent surrogate marker of overall health and predicts the maintenance of physical independence, a cornerstone of successful aging. The purpose of the present article is to provide an overview and discussion of specific health conditions, behavioral factors, and biological mechanisms that mark declining mobility and physical function and promising interventions to counter these effects. With life expectancy continuing to increase in the United States and developed countries throughout the world, there is an increasing public health focus on the maintenance of physical independence among all older adults. PMID:26462882
Treading lightly on shifting ground: The direction and motivation of future geological research
Witt, A.C.
2011-01-01
The future of the geosciences and geological research will involve complex scientific challenges, primarily concerning global and regional environmental issues, in the next 20-30 years. It is quite reasonable to suspect, based on current political and socioeconomic events, that young geoscientists will be faced with and involved in helping to resolve some well defined problems: water and energy security, the effects of anthropogenic climate change, coastal sea level rise and development, and the mitigation of geohazards. It is how we choose to approach these challenges that will define our future. Interdisciplinary applied research, improved modeling and prediction augmented with faster and more sophisticated computing, and a greater role in creating and guiding public policy, will help us achieve our goals of a cleaner and safer Earth environment in the next 30 years. In the far future, even grander possibilities for eliminating the risk of certain geohazards and finding sustainable solutions to our energy needs can be envisioned. Looking deeper into the future, the possibilities for geoscience research push the limits of the imagination.
Evaluating gambles using dynamics
NASA Astrophysics Data System (ADS)
Peters, O.; Gell-Mann, M.
2016-02-01
Gambles are random variables that model possible changes in wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate; these corrections also invalidate a commonly cited argument for bounded utility functions.
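The ensemble-versus-time distinction is easy to demonstrate numerically. The sketch below uses the well-known coin-toss gamble (wealth multiplied by 1.5 or 0.6 with equal probability), a standard illustration in this literature rather than an example taken from the abstract: its expectation value grows 5% per round while its time-average growth rate is negative.

```python
# Multiplicative gamble: positive expectation, negative time-average growth.
import numpy as np

rng = np.random.default_rng(3)
factors = rng.choice([1.5, 0.6], size=(1_000, 1_000))  # (trajectories, rounds)
final_wealth = factors.prod(axis=1)                    # initial wealth = 1

print("expectation value per round:", 0.5 * 1.5 + 0.5 * 0.6)   # 1.05 > 1
print("time-average growth rate:   ", np.log(factors).mean())  # < 0
print("median final wealth:        ", np.median(final_wealth))
```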
Portegijs, Erja; Keskinen, Kirsi E.; Tsai, Li-Tang; Rantanen, Taina; Rantakokko, Merja
2017-01-01
The aim was to study objectively assessed walkability of the environment and participant perceived environmental facilitators for outdoor mobility as predictors of physical activity in older adults with and without physical limitations. 75–90-year-old adults living independently in Central Finland were interviewed (n = 839) and reassessed for self-reported physical activity one or two years later (n = 787). Lower-extremity physical limitations were defined as Short Physical Performance Battery score ≤9. Number of perceived environmental facilitators was calculated from a 16-item checklist. Walkability index (land use mix, street connectivity, population density) of the home environment was calculated from geographic information and categorized into tertiles. Accelerometer-based step counts were registered for one week (n = 174). Better walkability was associated with higher numbers of perceived environmental facilitators (p < 0.001) and higher physical activity (self-reported p = 0.021, step count p = 0.010). Especially among those with physical limitations, reporting more environmental facilitators was associated with higher odds for reporting at least moderate physical activity (p < 0.001), but not step counts. Perceived environmental facilitators only predicted self-reported physical activity at follow-up. To conclude, high walkability of the living environment provides opportunities for physical activity in old age, but among those with physical limitations especially, awareness of environmental facilitators may be needed to promote physical activity. PMID:28327543
Yu, Ruby; Morley, John E; Kwok, Timothy; Leung, Jason; Cheung, Osbert; Woo, Jean
2018-01-01
To examine how various combinations of cognitive impairment (overall performance and specific domains) and pre-frailty predict risks of adverse outcomes; and to determine whether cognitive frailty may be defined as the combination of cognitive impairment and the presence of pre-frailty. Community-based cohort study. Chinese men and women (n = 3,491) aged 65+ without dementia, Parkinson's disease and/or frailty at baseline. Frailty was characterized using the Cardiovascular Health Study criteria. Overall cognitive impairment was defined by a Cantonese Mini-Mental Status Examination (CMMSE) total score (<21/24/27, depending on participants' educational levels); delayed recall impairment by a CMMSE delayed recall score (<3); and language and praxis impairment by a CMMSE language and praxis score (<9). Adverse outcomes included poor quality of life, physical limitation, increased cumulative hospital stay, and mortality. Compared to those who were robust and cognitively intact at baseline, those who were robust but cognitively impaired were more likely to develop pre-frailty/frailty after 4 years (P < 0.01). Compared to participants who were robust and cognitively intact at baseline, those who were pre-frail and with overall cognitive impairment had lower grip strength (P < 0.05), lower gait speed (P < 0.01), poorer lower limb strength (P < 0.01), and poorer delayed recall at year 4 [OR, 1.6; 95% confidence interval (CI), 1.2-2.3]. They were also associated with increased risks of poor quality of life (OR, 1.5; 95% CI, 1.1-2.2) and incident physical limitation at year 4 (OR, 1.8; 95% CI, 1.3-2.5), increased cumulative hospital stay at year 7 (OR, 1.5; 95% CI, 1.1-2.1), and mortality over an average of 12 years (OR, 1.5; 95% CI, 1.0-2.1) after adjustment for covariates. There was no significant difference in risks of adverse outcomes between participants who were pre-frail, with or without cognitive impairment at baseline. Similar results were obtained with delayed recall and language and praxis impairments. Robust but cognitively impaired participants had higher risks of becoming pre-frail/frail over 4 years compared with those with normal cognition. Cognitive impairment characterized by the CMMSE overall score or its individual domain scores improved the predictive power of pre-frailty for poor quality of life, incident physical limitation, increased cumulative hospital stay, and mortality. Our findings support the concept that cognitive frailty may be defined as the occurrence of both cognitive impairment and pre-frailty, not necessarily progressing to dementia.
NASA Technical Reports Server (NTRS)
Dash, S.; Delguidice, P.
1978-01-01
This report summarizes work accomplished under Contract No. NAS1-12726 towards the development of computational procedures and associated numerical techniques. The flow fields considered were those associated with airbreathing hypersonic aircraft, which require a high degree of engine/airframe integration in order to achieve optimized performance. The exhaust flow, due to physical area limitations, was generally underexpanded at the nozzle exit; the vehicle afterbody undersurface was used to provide additional expansion to obtain maximum propulsive efficiency. This resulted in a three-dimensional nozzle flow, initialized at the combustor exit, whose boundaries are defined internally by the undersurface, cowling and walls separating individual modules, and externally by the undersurface and the slipstream separating the exhaust flow and external stream.
PCR-based detection of a rare linear DNA in cell culture.
Saveliev, Sergei V.
2002-11-11
The described method allows for detection of rare linear DNA fragments generated during genomic deletions. The predicted limit of detection is one DNA molecule per 10^7 or more cells. The method is based on anchor PCR and involves gel separation of the linear DNA fragment and chromosomal DNA before amplification. The detailed chemical structure of the ends of the linear DNA can be defined with the use of additional PCR-based protocols. The method was applied to study the short-lived linear DNA generated during programmed genomic deletions in a ciliate. It can be useful in studies of spontaneous DNA deletions in cell culture or for tracking intracellular modifications at the ends of transfected DNA during gene therapy trials.
Jovian longitudinal asymmetry in Io-related and Europa-related auroral hot spots
NASA Technical Reports Server (NTRS)
Dessler, A. J.; Chamberlain, J. W.
1979-01-01
Auroral emissions generated by the Jovian moons Io and Europa, originating at the foot of the magnetic flux tubes of the satellites, may be largely limited to longitudes where the planet's ionospheric conductivity is enhanced. The enhanced conductivity is produced by trapped energetic electrons that drift into the Jovian atmosphere in regions where the planet's magnetic field is anomalously weak. The most active auroral hot-spot emissions lie in a sector of the northern hemisphere defined by decametric radio emission. Weaker auroral hot spots are found in the southern hemisphere along a magnetic conjugate trace. The brightness and the longitude of the Jovian hot spots predicted in this paper are in agreement with observations reported by Atreya et al. (1977).
The Crossover Time as an Evaluation of Ocean Models Against Persistence
NASA Astrophysics Data System (ADS)
Phillipson, L. M.; Toumi, R.
2018-01-01
A new ocean evaluation metric, the crossover time, is defined as the time it takes for a numerical model to equal the performance of persistence. As an example, the average crossover time calculated using the Lagrangian separation distance (the distance between simulated trajectories and observed drifters) for the global MERCATOR ocean model analysis is found to be about 6 days. Conversely, the model forecast has an average crossover time longer than 6 days, suggesting limited skill in Lagrangian predictability by the current generation of global ocean models. The crossover time of the velocity error is less than 3 days, which is similar to the average decorrelation time of the observed drifters. The crossover time is a useful measure to quantify future ocean model improvements.
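Computing the crossover time from a pair of error curves is direct, as sketched below; the error curves are invented for illustration, and the metric is simply the first lead time at which the model's error is no better than persistence.

```python
# Crossover time: first lead time where model error >= persistence error.
import numpy as np

def crossover_time(model_err, persistence_err, lead_days):
    worse = model_err >= persistence_err
    return lead_days[np.argmax(worse)] if worse.any() else None

lead = np.arange(1, 11)    # forecast lead time, days
model = 2.0 * lead**2      # km; error grows faster than persistence here
persistence = 12.0 * lead  # km
print(crossover_time(model, persistence, lead), "days")  # 6 days
```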
Hwang, In Cheol; Ahn, Hong Yup; Park, Sang Min; Shim, Jae Yong; Kim, Kyoung Kon
2013-03-01
There is scant research concerning the prediction of imminent death, and current studies simply list events "that have already occurred" by around 48 h before death. We sought to determine which events herald the onset of the dying process using the length of time from "any change" to death. This is a prospective observational study with chart audit. Included were terminal cancer patients who passed away in a palliative care unit. The analysis was limited to 181 patients who had medical records for their final week. Commonly observed events in the terminally ill were determined and their significant changes were defined beforehand. We selected the statistically significant changes by multiple logistic regression analysis and evaluated their predictive values for "death within 48 h." The median age was 67 years and there were 103 male patients. After adjusting for age, sex, primary cancer site, metastatic site, and cancer treatment, multiple logistic regression analyses for association between the events and "death within 48 h" revealed several significant changes: confused mental state, decreased blood pressure, increased pulse pressure, low oxygen saturation, death rattle, and decreased level of consciousness. The events with the highest predictability for death within 48 h were decreased blood pressure and low oxygen saturation; the positive and negative predictive values of their combination were 95.0% and 81.4%, respectively. The most reliable events to predict impending death were decreased blood pressure and low oxygen saturation.
Hemoglobin A1c can be helpful in predicting progression to diabetes after Whipple procedure.
Hamilton, Lisa; Jeyarajah, D Rohan
2007-01-01
Normoglycemic patients undergoing pancreaticoduodenectomy (Whipple procedure) often inquire whether they will be diabetic postoperatively. There is limited information on this issue. We therefore looked at a more subtle measurement of long-term glycemic control, hemoglobin A1c (HgbA1c), as a prognostic tool in predicting progression to diabetes after the Whipple procedure. A retrospective review over a 6-year period of all patients undergoing Whipple procedures at a single institution was conducted. In all, 27 patients had no prior history of diabetes, complete follow-up, and measured preoperative HgbA1c values. Postoperative diabetes was defined as the need for oral hypoglycemic agents or insulin. These charts were analyzed for progression to diabetes after the Whipple procedure. Of the 27 patients, 10 were considered to have postoperative diabetes. The average preoperative HgbA1c value for these patients was 6.3±0.66. This was statistically different from the 17 patients without postoperative diabetes (average HgbA1c 5.2±0.39, p<0.001). The positive predictive value, negative predictive value, sensitivity, and specificity were 82%, 94%, 90%, and 88%, respectively. This study demonstrates that progression to diabetes is very unlikely after a Whipple operation if the preoperative HgbA1c value is in the normal range. The apparent utility of HgbA1c in predicting postoperative diabetes in this small study suggests that this laboratory test may be very helpful in counseling patients for the Whipple operation.
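For readers who want to reproduce such figures, all four reported metrics follow from a standard 2x2 table. The sketch below is illustrative only; the counts are hypothetical values chosen to be consistent with the reported 27-patient cohort, not data taken from the study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test metrics."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for a 27-patient cohort (illustration only):
# 10 progressed to diabetes (9 above the HgbA1c cutoff, 1 below),
# 17 did not (2 above, 15 below).
print(diagnostic_metrics(tp=9, fp=2, fn=1, tn=15))
# -> sensitivity 0.90, specificity ~0.88, PPV ~0.82, NPV ~0.94
```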
Updated Global Burden of Cholera in Endemic Countries
Ali, Mohammad; Nelson, Allyson R.; Lopez, Anna Lena; Sack, David A.
2015-01-01
Background The global burden of cholera is largely unknown because the majority of cases are not reported. The low reporting can be attributed to limited capacity of epidemiological surveillance and laboratories, as well as social, political, and economic disincentives for reporting. We previously estimated 2.8 million cases and 91,000 deaths annually due to cholera in 51 endemic countries. A major limitation of our previous estimate was that endemic and non-endemic countries were defined based on the countries’ reported cholera cases. We overcame this limitation by using a spatial modelling technique to define endemic countries, and accordingly updated the estimates of the global burden of cholera. Methods/Principal Findings Countries were classified as cholera endemic, cholera non-endemic, or cholera-free based on whether a spatial regression model predicted an incidence rate over a certain threshold in at least three of five years (2008-2012). The at-risk populations were calculated for each country based on the percent of the country without sustainable access to improved sanitation facilities. Incidence rates from population-based published studies were used to calculate the estimated annual number of cases in endemic countries. The number of annual cholera deaths was calculated using the inverse-variance-weighted average of case-fatality rates (CFRs) from literature-based CFR estimates. We found that approximately 1.3 billion people are at risk for cholera in endemic countries. An estimated 2.86 million cholera cases (uncertainty range: 1.3-4.0 million) occur annually in endemic countries. Among these cases, there are an estimated 95,000 deaths (uncertainty range: 21,000-143,000). Conclusion/Significance The global burden of cholera remains high. Sub-Saharan Africa accounts for the majority of this burden. Our findings can inform programmatic decision-making for cholera control. PMID:26043000
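The pooling step described above, inverse-variance weighting of literature CFRs, is standard fixed-effect pooling. A minimal sketch with made-up CFR estimates and variances (not the study's inputs):

```python
import numpy as np

def inverse_variance_weighted_mean(estimates, variances):
    """Pool study-level estimates with inverse-variance weights.
    Returns the pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances)
    est = np.asarray(estimates)
    pooled = np.sum(w * est) / np.sum(w)
    pooled_var = 1.0 / np.sum(w)
    return pooled, pooled_var

# Hypothetical CFR estimates (proportions) and their variances:
cfrs = [0.010, 0.025, 0.040]
vars_ = [1e-5, 4e-5, 9e-5]
print(inverse_variance_weighted_mean(cfrs, vars_))
```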
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
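As a worked illustration of the extreme-value step, the sketch below fits a Gumbel distribution (a common choice for maxima; the report does not specify the family) to simulated per-mission maximum loads and reads off a high quantile as the design limit load. All numbers are hypothetical:

```python
from scipy.stats import gumbel_r

# Hypothetical per-mission maximum loads (e.g., kN), standing in for the
# maxima extracted from repeated time-domain dynamic load simulations:
mission_maxima = gumbel_r.rvs(loc=100.0, scale=8.0, size=500, random_state=0)

# Fit an extreme-value (Gumbel) distribution to the mission maxima and
# take a high quantile of the fitted distribution as the design limit load.
loc, scale = gumbel_r.fit(mission_maxima)
design_limit_load = gumbel_r.ppf(0.99, loc=loc, scale=scale)
print(f"design limit load (99th percentile): {design_limit_load:.1f}")
```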
Al-Air Batteries: Fundamental Thermodynamic Limitations from First Principles Theory
NASA Astrophysics Data System (ADS)
Chen, Leanne D.; Noerskov, Jens K.; Luntz, Alan C.
2015-03-01
The Al-air battery possesses high theoretical specific energy (4140 Wh/kg) and is therefore an attractive candidate for vehicle propulsion applications. However, the experimentally observed open-circuit potential is much lower than what thermodynamics predicts, and this potential loss is widely believed to be an effect of corrosion. We present a detailed study of the Al-air battery using density functional theory. The results suggest that the difference between bulk thermodynamic and surface potentials is due to both the effects of asymmetry in multi-electron transfer reactions that define the anodic dissolution of Al and, more importantly, a large chemical step inherent to the formation of bulk Al(OH)3 from surface intermediates. The former results in an energy loss of 3%, while the latter accounts for 14-29% of the total thermodynamic energy depending on the surface site where dissolution occurs. Therefore, the maximum open-circuit potential of the Al anode is only -1.87 V vs. SHE in the absence of thermal excitations, contrary to -2.34 V predicted by bulk thermodynamics at pH 14.6. This is a fundamental limitation of the system and governs the maximum output potential, which cannot be improved even if corrosion effects were completely suppressed. Supported by the Natural Sciences and Engineering Research Council of Canada and the ReLiable Project (#11-116792) funded by the Danish Council for Strategic Research.
Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander
2012-01-01
Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
Childhood sexual abuse history and role reversal in parenting.
Alexander, P C; Teti, L; Anderson, C L
2000-06-01
This study explored the main and interactive effects of sexual abuse history and relationship satisfaction on self-reported parenting, controlling for histories of physical abuse and parental alcoholism. The community sample consisted of 90 mothers of 5- to 8-year-old children. The sample was limited to those mothers currently in an intimate relationship, 19 of whom reported a history of childhood sexual abuse. Participants completed the Child Behavior Checklist, the Parenting Stress Inventory, the Family Cohesion Index, and questions assessing parent-child role reversal, history of abuse and parental alcoholism, and current relationship satisfaction. Results of multivariate analyses of covariance suggested that sexual abuse survivors with an unsatisfactory intimate relationship were more likely than either sexual abuse survivors with a satisfactory relationship or nonabused women to endorse items on a questionnaire of role reversal (defined as emotional overdependence upon one's child). Role reversal was not significantly predicted by histories of physical abuse or parental alcoholism or by the child's gender. While parenting stress was inversely predicted by the significant main effect of relationship satisfaction, neither parenting stress nor child behavior problems were predicted by the main effect of sexual abuse history or by the interaction between sexual abuse history and relationship satisfaction. These results suggest the unique relevance of sexual abuse history and relationship satisfaction in the prediction of a specific type of parent-child role reversal--namely, a mother's emotional overdependence upon her child.
Prediction of Incident Diabetes in the Jackson Heart Study Using High-Dimensional Machine Learning
Casanova, Ramon; Saldana, Santiago; Simpson, Sean L.; Lacy, Mary E.; Subauste, Angela R.; Blackshear, Chad; Wagenknecht, Lynne; Bertoni, Alain G.
2016-01-01
Statistical models to predict incident diabetes are often based on limited variables. Here we pursued two main goals: 1) investigate the relative performance of a machine learning method such as Random Forests (RF) for detecting incident diabetes in a high-dimensional setting defined by a large set of observational data, and 2) uncover potential predictors of diabetes. The Jackson Heart Study collected data at baseline and in two follow-up visits from 5,301 African Americans. We excluded those with baseline diabetes and no follow-up, leaving 3,633 individuals for analyses. Over a mean 8-year follow-up, 584 participants developed diabetes. The full RF model evaluated 93 variables including demographic, anthropometric, blood biomarker, medical history, and echocardiogram data. We also used RF metrics of variable importance to rank variables according to their contribution to diabetes prediction. We implemented other models based on logistic regression and RF where features were preselected. The RF full model performance was similar (AUC = 0.82) to those more parsimonious models. The top-ranked variables according to RF included hemoglobin A1C, fasting plasma glucose, waist circumference, adiponectin, c-reactive protein, triglycerides, leptin, left ventricular mass, high-density lipoprotein cholesterol, and aldosterone. This work shows the potential of RF for incident diabetes prediction while dealing with high-dimensional data. PMID:27727289
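A minimal reproduction of this workflow with scikit-learn is sketched below on synthetic data of the same shape (3,633 subjects by 93 candidate predictors). It is not the Jackson Heart Study pipeline, only the pattern of fitting an RF, scoring AUC, and ranking variable importances:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the cohort: 93 candidate predictors and a
# binary incident-diabetes outcome driven by two of them.
rng = np.random.default_rng(42)
X = rng.normal(size=(3633, 93))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=3633) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
# Rank variables by their contribution to the prediction:
top10 = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-10 feature indices:", top10)
```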
A review of research in rotor loads
NASA Technical Reports Server (NTRS)
Bousman, William G.; Mantay, Wayne R.
1988-01-01
The research accomplished in the area of rotor loads over the last 13 to 14 years is reviewed. The start of the period examined is defined by the 1973 AGARD Milan conference and the 1974 hypothetical rotor comparison. The major emphasis of the review is research performed by the U.S. Army and NASA at their laboratories and/or by the industry under government contract. For the purpose of this review, two main topics are addressed: rotor loads prediction and means of rotor loads reduction. A limited discussion of research in gust loads and maneuver loads is included. In the area of rotor loads predictions, the major problem areas are reviewed including dynamic stall, wake induced flows, blade tip effects, fuselage induced effects, blade structural modeling, hub impedance, and solution methods. It is concluded that the capability to predict rotor loads has not significantly improved in this time frame. Future progress will require more extensive correlation of measurements and predictions to better understand the causes of the problems, and a recognition that differences between theory and measurement have multiple sources, yet must be treated as a whole. There is a need for high-quality data to support future research in rotor loads, but the resulting data base must not be seen as an end in itself. It will be useful only if it is integrated into firm long-range plans for the use of the data.
Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke
2008-01-01
Background Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds, and thus determining structural similarity without sequence similarity would be desirable for structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for datasets in which the sequence identity of any pair of sequences belongs to the twilight zone. We propose the SCPRED method, which improves prediction accuracy for sequences that share twilight-zone pairwise similarity with the sequences used for the prediction. Results SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on an extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods based on support vector machines, logistic regression, and ensembles of classifiers. Conclusion SCPRED can accurately find similar structures for sequences that share low identity with the sequences used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that SCPRED's predictions can be successfully used as a post-processing filter to improve the performance of modern fold classification methods. PMID:18452616
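The classifier itself is a standard support vector machine over a nine-dimensional feature vector. The sketch below shows that skeleton with random stand-in features; the published feature construction from PSI-PRED output, which is what carries the accuracy, is not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 9 low-dimensional features per chain (8 derived
# from predicted secondary structure + 1 from the sequence), with the
# four SCOP structural classes as labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(1673, 9))
y = rng.integers(0, 4, size=1673)  # all-alpha, all-beta, alpha/beta, alpha+beta

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```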
Predictive modeling of neuroanatomic structures for brain atrophy detection
NASA Astrophysics Data System (ADS)
Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming
2010-03-01
In this paper, we present an approach for predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling to atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and Regions of Interest (ROIs) on it are predicted from other reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within an abnormal region is expected to be larger than that of vertices in normal brain regions. The change of white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated using both simulated atrophies and MRI images of Alzheimer's disease.
Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.
2014-01-01
Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models (SDMs), however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at a population level. We define “available” habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs, as well as revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining “available” habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807
Cardiopulmonary Exercise Testing in Patients Following Massive and Submassive Pulmonary Embolism.
Albaghdadi, Mazen S; Dudzinski, David M; Giordano, Nicholas; Kabrhel, Christopher; Ghoshhajra, Brian; Jaff, Michael R; Weinberg, Ido; Baggish, Aaron
2018-03-03
Few data exist regarding the functional capacity of patients following acute pulmonary embolism. We sought to characterize the natural history of symptom burden, right ventricular (RV) structure and function, and exercise capacity among survivors of massive and submassive pulmonary embolism. Survivors of submassive or massive pulmonary embolism (n=20, age 57±13.3 years, 8/20 female) underwent clinical evaluation, transthoracic echocardiography, and cardiopulmonary exercise testing at 1 and 6 months following hospital discharge. At 1 month, 9/20 (45%) patients had New York Heart Association II or greater symptoms, 13/20 (65%) demonstrated either persistent RV dilation or systolic dysfunction, and 14/20 (70%) had objective exercise impairment as defined by a peak oxygen consumption (V̇O2) of <80% of age-sex predicted maximal values (16.25 [13.4-20.98] mL/kg per minute). At 6 months, no appreciable improvements in symptom severity, RV structure or function, or peak V̇O2 (17.45 [14.08-22.48] mL/kg per minute, P=NS) were observed. No patients demonstrated an exercise limitation attributable to either RV/pulmonary vascular coupling, as defined by a VE/VCO2 slope >33, or a pulmonary mechanical limit to exercise at either time point. Similarly, persistent RV dilation or dysfunction was not significantly related to symptom burden or peak V̇O2 at either time point. Persistent symptoms, abnormalities of RV structure and function, and objective exercise limitation are common among survivors of massive and submassive pulmonary embolism. Functional impairment appears to be attributable to general deconditioning rather than intrinsic cardiopulmonary limitation, suggesting an important role for prescribed exercise rehabilitation as a means toward improved patient outcomes and quality of life. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
NASA Astrophysics Data System (ADS)
Ivanovici, A. M.
1980-03-01
Various physiological and biochemical methods have been proposed for assessing the effects of environmental perturbation on aquatic organisms. The success of these methods as diagnostic tools has, however, been limited. This paper proposes that adenylate energy charge overcomes some of these limitations. The adenylate energy charge (AEC) is calculated from the concentrations of adenine nucleotides, AEC = ([ATP] + ½[ADP])/([ATP] + [ADP] + [AMP]), and is a reflection of the metabolic potential available to an organism. Several features of this method are: correlation of specific values with physiological condition or growth state, a defined range of values, fast response times and high precision. Several examples from laboratory and field experiments are given to demonstrate these features. The test organisms used (mollusc species) were exposed to a variety of environmental perturbations, including salinity reduction, hydrocarbons and low doses of heavy metal. The studies performed indicate that the energy charge may be a useful measure in the assessment of environmental impact. Its use is restricted, however, as several limitations exist which need to be fully evaluated. Further work relating values to population characteristics of multicellular organisms needs to be completed before the method can become a predictive tool for management.
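The AEC itself is a simple ratio. A minimal sketch with hypothetical nucleotide concentrations:

```python
def adenylate_energy_charge(atp, adp, amp):
    """Adenylate energy charge: ([ATP] + 0.5[ADP]) / ([ATP]+[ADP]+[AMP]).
    Concentrations may be in any common unit; the result is dimensionless
    and lies between 0 (all AMP) and 1 (all ATP)."""
    return (atp + 0.5 * adp) / (atp + adp + amp)

# Hypothetical nucleotide concentrations (umol per g wet weight):
print(adenylate_energy_charge(atp=2.4, adp=0.6, amp=0.2))  # ~0.84
```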
Zero Thermal Noise in Resistors at Zero Temperature
NASA Astrophysics Data System (ADS)
Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran
2016-06-01
The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
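For reference, the competing expressions discussed above can be written out explicitly. The rendering below follows the standard textbook form of the Callen-Welton spectrum; it is a reconstruction for the reader's convenience, since the abstract does not reproduce the equations:

```latex
% Callen-Welton (fluctuation-dissipation) voltage-noise spectral density
% for a resistance R at temperature T (h: Planck constant, k: Boltzmann
% constant):
\[
  S_V(f) \;=\; 4 R h f \left[ \frac{1}{e^{hf/kT} - 1} + \frac{1}{2} \right]
\]
% In the classical regime hf << kT this reduces to Nyquist's S_V = 4kTR,
% which vanishes as T -> 0. The temperature-independent zero-point term
% S_V(f) = 2Rhf is what remains at T = 0, and it is this surviving
% contribution that the abstract argues cannot be physical noise.
```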
THE FUTURE OF TOXICOLOGY-PREDICTIVE TOXICOLOGY: AN EXPANDED VIEW OF CHEMICAL TOXICITY
A chemistry approach to predictive toxicology relies on structure−activity relationship (SAR) modeling to predict biological activity from chemical structure. Such approaches have proven capabilities when applied to well-defined toxicity end points or regions of chemical space. T...
Fatigue in HIV illness: relationship to depression, physical limitations, and disability.
Ferrando, S; Evans, S; Goggin, K; Sewell, M; Fishman, B; Rabkin, J
1998-01-01
This study was conducted to investigate the prevalence of clinical fatigue reported by gay/bisexual men at all HIV illness stages, and whether fatigue, while associated with depression, independently contributes to limitations in physical function and disability. HIV- men, HIV+ men with CD4 counts >500, HIV+ men with CD4 counts 200 to 500, and men with AIDS were compared on prevalence of clinical fatigue, as defined by a standardized instrument. Among HIV+ men, the relationships among fatigue, depressed mood, major depressive disorder, HIV illness markers (including CD4 count and HIV RNA viral load), physical limitations, and disability were assessed at baseline and after 1 year. The prevalence of clinical fatigue in men with CD4 counts <500 was 14%, significantly higher than in HIV- men and HIV+ men with CD4 counts >500. However, fatigue was not directly correlated with CD4 count or HIV RNA. Fatigue was a chronic symptom that was associated with depressed mood, major depressive disorder, physical limitations, and disability. After 1 year, an increase in depressive symptoms predicted a small amount of variance in fatigue; however, depressive symptoms were not associated with physical limitations or disability after controlling for fatigue. Fatigue is a chronic symptom that is more prevalent in advanced HIV illness, and which, although associated with depression, does not seem to be merely a symptom of depression. Because fatigue contributes independently to physical limitations and disability, it should be assessed and treated.
GPS-ARM: Computational Analysis of the APC/C Recognition Motif by Predicting D-Boxes and KEN-Boxes
Ren, Jian; Cao, Jun; Zhou, Yanhong; Yang, Qing; Xue, Yu
2012-01-01
Anaphase-promoting complex/cyclosome (APC/C), an E3 ubiquitin ligase incorporating Cdh1 and/or Cdc20, recognizes and interacts with specific substrates and faithfully orchestrates the proper cell cycle events by targeting proteins for proteasomal degradation. Experimental identification of APC/C substrates is largely dependent on the discovery of APC/C recognition motifs, e.g., the D-box and KEN-box. Although a number of either stringent or loosely defined motifs have been proposed, these motif patterns are of limited use due to their insufficient predictive power. We report the development of a novel GPS-ARM software package, which is useful for the prediction of D-boxes and KEN-boxes in proteins. Using experimentally identified D-boxes and KEN-boxes as the training data sets, a previously developed GPS (Group-based Prediction System) algorithm was adopted. By extensive evaluation and comparison, the GPS-ARM performance was found to be much better than that obtained using simple motifs. With this powerful tool, we predicted 4,841 potential D-boxes in 3,832 proteins and 1,632 potential KEN-boxes in 1,403 proteins from H. sapiens, while further statistical analysis suggested that both the D-box and KEN-box proteins are involved in a broad spectrum of biological processes beyond the cell cycle. In addition, with the co-localization information, we predicted hundreds of mitosis-specific APC/C substrates with high confidence. As the first computational tool for the prediction of APC/C-mediated degradation, GPS-ARM is a useful resource for generating candidates for further experimental investigation. GPS-ARM is freely accessible for academic researchers at: http://arm.biocuckoo.org. PMID:22479614
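At its crudest, motif discovery of this kind is pattern matching. The sketch below scans a toy sequence for the commonly cited minimal consensus patterns (RxxL for the D-box core, KEN for the KEN-box); GPS-ARM's group-based scoring is substantially more sophisticated than this:

```python
import re

# Minimal consensus patterns from the literature; illustrative only.
D_BOX = re.compile(r"R..L")   # minimal destruction-box core RxxL
KEN_BOX = re.compile(r"KEN")  # KEN-box

def scan_motifs(seq):
    """Return (start, matched_text) pairs for candidate D-/KEN-boxes."""
    return {
        "D-box": [(m.start(), m.group()) for m in D_BOX.finditer(seq)],
        "KEN-box": [(m.start(), m.group()) for m in KEN_BOX.finditer(seq)],
    }

# Toy sequence (hypothetical, not a real substrate):
print(scan_motifs("MSRAALQKENVDPLRTPLNREELMKENW"))
```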
Systematic review of prediction models for delirium in the older adult inpatient.
Lindroth, Heidi; Bratzke, Lisa; Purvis, Suzanne; Brown, Roger; Coburn, Mark; Mrkobrada, Marko; Chan, Matthew T V; Davis, Daniel H J; Pandharipande, Pratik; Carlsson, Cynthia M; Sanders, Robert D
2018-04-28
To identify existing prognostic delirium prediction models and evaluate their validity and statistical methodology in the older adult (≥60 years) acute hospital population. Systematic review. PubMed, CINAHL, PsychINFO, SocINFO, Cochrane, Web of Science and Embase were searched from 1 January 1990 to 31 December 2016. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses and CHARMS Statement guided protocol development. Inclusion criteria: age >60 years, inpatient, developed/validated a prognostic delirium prediction model. Exclusion criteria: alcohol-related delirium, sample size ≤50. The primary performance measures were calibration and discrimination statistics. Two authors independently conducted the search and extracted data. The synthesis of data was done by the first author. Disagreement was resolved by the mentoring author. The initial search resulted in 7,502 studies. Following full-text review of 192 studies, 33 were excluded based on age criteria (<60 years) and 27 met the defined criteria. Twenty-three delirium prediction models were identified, 14 were externally validated and 3 were internally validated. The following populations were represented: 11 medical, 3 medical/surgical and 13 surgical. The assessment of delirium was often non-systematic, resulting in varied incidence. Fourteen models were externally validated with an area under the receiver operating characteristic curve range from 0.52 to 0.94. Limitations in design, data collection methods and model metric reporting statistics were identified. Delirium prediction models for older adults show variable and typically inadequate predictive capabilities. Our review highlights the need for development of robust models to predict delirium in older inpatients. We provide recommendations for the development of such models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
2013-10-01
study will recruit wounded warriors with severe extremity trauma, which places them at high risk for heterotopic ossification (HO); bone formation at...involved in HO; 2) to define accurate and practical methods to predict where HO will develop; and 3) to define potential therapies for prevention or...elicit HO. These tools also need to provide effective methods for early diagnosis or risk assessment (prediction) so that therapies for prevention or
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction is comprised of a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two-point linear calibration curve defining acceptable multiplexer voltage limits may be constructed by the computer by measuring the multiplexer's output voltage for very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits expected during normal operation are exceeded, or when the relationship defined by the calibration curve is invalidated.
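The two-point calibration amounts to solving for a gain and an offset from two known points. A minimal sketch with hypothetical voltages (not values from the patent):

```python
def two_point_calibration(v_meas_lo, v_meas_hi, v_true_lo, v_true_hi):
    """Derive gain and offset from two accurately known input signals,
    such as those produced by known current levels across the series
    resistances. Returns (gain, offset) so that v_true ~= gain*v_meas + offset."""
    gain = (v_true_hi - v_true_lo) / (v_meas_hi - v_meas_lo)
    offset = v_true_lo - gain * v_meas_lo
    return gain, offset

def correct(v_meas, gain, offset):
    """Apply the calibration to a raw multiplexer reading."""
    return gain * v_meas + offset

# Hypothetical calibration points (volts):
gain, offset = two_point_calibration(0.98, 4.02, 1.00, 4.00)
print(correct(2.5, gain, offset))  # ~2.50 after correction
```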
T-Epitope Designer: A HLA-peptide binding prediction server.
Kangueane, Pandjassarame; Sakharkar, Meena Kishore
2005-05-15
The current challenge in synthetic vaccine design is the development of a methodology to identify and test short antigen peptides as potential T-cell epitopes. Recently, we described an HLA-peptide binding model (using structural properties) capable of predicting peptides binding to any HLA allele. Consequently, we have developed a web server named T-EPITOPE DESIGNER to facilitate HLA-peptide binding prediction. The prediction server is based on a model that defines peptide binding pockets using information gleaned from X-ray crystal structures of HLA-peptide complexes, followed by the estimation of peptide binding to those pockets. Thus, the prediction server enables the calculation of peptide binding to HLA alleles. This model is superior to many existing methods because of its potential application to any given HLA allele whose sequence is clearly defined. The web server finds potential application in T-cell epitope vaccine design. http://www.bioinformation.net/ted/
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
Rossi, Andrea; Butorac-Petanjek, Bojana; Chilosi, Marco; Cosío, Borja G; Flezar, Matjaz; Koulouris, Nikolaos; Marin, José; Miculinic, Neven; Polese, Guido; Samaržija, Miroslav; Skrgat, Sabina; Vassilakopoulos, Theodoros; Vukić-Dugac, Andrea; Zakynthinos, Spyridon; Miravitlles, Marc
2017-01-01
Chronic obstructive pulmonary disease (COPD) is a leading cause of mortality and morbidity worldwide, with high and growing prevalence. Its underdiagnosis and hence under-treatment is a general feature across all countries. This is particularly true for the mild or early stages of the disease, when symptoms do not yet interfere with daily living activities and both patients and doctors are likely to underestimate the presence of the disease. A diagnosis of COPD requires spirometry in subjects with a history of exposure to known risk factors and symptoms. Postbronchodilator forced expiratory volume in 1 second (FEV1)/forced vital capacity <0.7 or less than the lower limit of normal confirms the presence of airflow limitation, the severity of which can be measured by FEV1% predicted: stage 1 defines COPD with mild airflow limitation, which means postbronchodilator FEV1 ≥80% predicted. In recent years, an elegant series of studies has shown that “exclusive reliance on spirometry, in patients with mild airflow limitation, may result in underestimation of clinically important physiologic impairment”. In fact, exercise tolerance, diffusing capacity, and gas exchange can be impaired in subjects at a mild stage of airflow limitation. Furthermore, growing evidence indicates that smokers without overt abnormal spirometry have respiratory symptoms and undergo therapy. This is an essential issue in COPD. In fact, on one hand, airflow limitation, even mild, can unduly limit the patient’s physical activity, with deleterious consequences on quality of life and even survival; on the other hand, particularly in younger subjects, mild airflow limitation might coincide with the early stage of the disease. Therefore, we thought that it was worthwhile to analyze further and discuss this stage of “mild COPD”. To this end, representatives of scientific societies from five European countries have met and developed this document to stimulate the attention of the scientific community on COPD with “mild” airflow limitation. The aim of this document is to highlight some key features of this important concept and help the practicing physician to understand better what is behind “mild” COPD. Future research should address two major issues: first, whether mild airflow limitation represents an early stage of COPD and what the mechanisms underlying the evolution to more severe stages of the disease are; and second, not far removed from the first, whether regular treatment should be considered for COPD patients with mild airflow limitation, either to prevent progression of the disease or to encourage and improve physical activity or both. PMID:28919728
Lin, Longting; Bivard, Andrew; Kleinig, Timothy; Spratt, Neil J; Levi, Christopher R; Yang, Qing; Parsons, Mark W
2018-04-01
This study aimed to assess how the ischemic core measured by perfusion computed tomography (CTP) is affected by the delay and dispersion effect. Ischemic stroke patients having CTP performed within 6 hours of onset were included. The CTP data were processed twice, generating standard cerebral blood flow (sCBF) and delay- and dispersion-corrected CBF (ddCBF), respectively. Ischemic core measured by sCBF and ddCBF was then compared at the relative threshold <30% of normal tissue. Two references for ischemic core were used: acute diffusion-weighted imaging or 24-hour diffusion-weighted imaging in patients with complete recanalization. Difference of core volume between CTP and diffusion-weighted imaging was estimated by Mann-Whitney U test and limits of agreement. Patients were also classified into favorable and unfavorable CTP patterns. The imaging pattern classification by sCBF and ddCBF was compared by the χ² test; their respective ability to predict good clinical outcome (3-month modified Rankin Scale score) was tested in logistic regression. Fifty-five patients were included in this study. Median sCBF ischemic core volume was 38.5 mL (12.4-61.9 mL), much larger than the median core volume of 17.2 mL measured by ddCBF (interquartile range, 5.5-38.8; P<0.001). Moreover, compared with sCBF <30%, ddCBF <30% measured the ischemic core much closer to the diffusion-weighted imaging core references, with mean volume differences of -0.1 mL (95% limits of agreement, -25.4 to 25.2; P=0.97) and 16.7 mL (95% limits of agreement, -21.7 to 55.2; P<0.001), respectively. Imaging patterns defined by sCBF differed from those defined by ddCBF (P<0.001), with 12 patients classified as favorable imaging patterns by ddCBF but as unfavorable by sCBF. The favorable imaging pattern classified by ddCBF, compared with the sCBF classification, had higher predictive power for good clinical outcome (odds ratio, 7.8 [2-30.5] and 3.1 [0.9-11], respectively). Delay and dispersion correction increases the accuracy of ischemic core measurement on CTP. © 2018 American Heart Association, Inc.
40 CFR 92.707 - Notification to locomotive or locomotive engine owners.
Code of Federal Regulations, 2010 CFR
2010-07-01
... be emitting pollutants in excess of the federal emission standards or family emission limits, as defined in 40 CFR part 92. These standards or family emission limits, as defined in 40 CFR part 92 were... communication sent to locomotive or locomotive engine owners or dealers shall contain any statement or...
NASA Astrophysics Data System (ADS)
Endo, Takako; Konno, Norio; Obuse, Hideaki; Segawa, Etsuo
2017-11-01
In this paper, we treat quantum walks in a two-dimensional lattice with cutting edges along a straight boundary introduced by Asboth and Edge (2015 Phys. Rev. A 91 022324) in order to study one-dimensional edge states originating from topological phases of matter and to obtain collateral evidence of how a quantum walker reacts to the boundary. Firstly, we connect this model to the CMV matrix, which provides a 5-term recursion relation of the Laurent polynomial associated with spectral measure on the unit circle. Secondly, we explicitly derive the spectra of bulk and edge states of the quantum walk with the boundary using spectral analysis of the CMV matrix. Thirdly, while topological numbers of the model studied so far are well-defined only when gaps in the bulk spectrum exist, we find a new topological number defined only when there are no gaps in the bulk spectrum. We confirm that the existence of the spectrum for edge states derived from the CMV matrix is consistent with the prediction from a bulk-edge correspondence using topological numbers calculated in the cases where gaps in the bulk spectrum do or do not exist. Finally, we show how the edge states contribute to the asymptotic behavior of the quantum walk through limit theorems of the finding probability. Conversely, we also propose a differential equation using this limit distribution whose solution is the underlying edge state.
40 CFR Table 1 to Subpart Yyyy of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... turbine as defined in this subpart, 3. a diffusion flame gas-fired stationary combustion turbine as defined in this subpart, or 4. a diffusion flame oil-fired stationary combustion turbine as defined in...
40 CFR Table 1 to Subpart Yyyy of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... turbine as defined in this subpart, 3. a diffusion flame gas-fired stationary combustion turbine as defined in this subpart, or 4. a diffusion flame oil-fired stationary combustion turbine as defined in...
Predictors of neonatal sepsis in developing countries.
Weber, Martin W; Carlin, John B; Gatchalian, Salvacion; Lehmann, Deborah; Muhe, Lulu; Mulholland, E Kim
2003-08-01
Neonatal infections are a major cause of death worldwide. Simple procedures for identifying infants with infection that need referral for treatment are therefore of major public health importance. We investigated 3303 infants <2 months of age presenting with illness to health facilities in Ethiopia, The Gambia, Papua New Guinea and The Philippines, using a standardized approach. Historical factors and clinical signs predicting sepsis, meningitis, hypoxemia, deaths and an ordinal scale indicating severe disease were investigated by logistic regression, and the performance of simple combination rules was explored. In multivariable analysis, reduced feeding ability, no spontaneous movement, temperature >38°C, being drowsy/unconscious, a history of a feeding problem, history of change in activity, being agitated, the presence of lower chest wall indrawing, respiratory rate >60 breaths/min, grunting, cyanosis, a history of convulsions, a bulging fontanel and slow digital capillary refill were independent predictors of severe disease. The presence of any 1 of these 14 signs had a sensitivity for severe disease (defined as sepsis, meningitis, hypoxemia, or radiologically proven pneumonia) of 87% and a specificity of 54%. More stringent combinations, such as demanding 2 signs from the list, resulted in a considerable loss of sensitivity. By contrast, only a slight loss of sensitivity and a considerable gain in specificity resulted from reducing the list to 9 signs. Requiring the presence of fever and any other sign produced a diagnostic rule with extremely low sensitivity (25%). Physical signs can be used to identify young infants at risk of severe disease, however with limited specificity, resulting in large numbers of unnecessary referrals. Further studies are required to validate and refine the prediction of severe disease, especially in the first week of life, but there appear to be limits on the accuracy of prediction that is achievable.
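Combination rules of the "any k of n signs" kind are straightforward to evaluate given a sign-by-patient matrix. The sketch below uses simulated data only, to illustrate why requiring 2 signs trades sensitivity for specificity:

```python
import numpy as np

def rule_performance(signs, disease, k=1):
    """Evaluate an 'any k of n signs' referral rule.
    signs: (patients x signs) boolean array; disease: boolean outcomes.
    Returns (sensitivity, specificity) of flagging patients with >= k signs."""
    pred = signs.sum(axis=1) >= k
    sens = (pred & disease).sum() / disease.sum()
    spec = (~pred & ~disease).sum() / (~disease).sum()
    return sens, spec

# Toy data for illustration (the real study: 14 signs, 3,303 infants):
rng = np.random.default_rng(0)
disease = rng.random(1000) < 0.2
# Each sign is more likely to be present in diseased infants:
signs = rng.random((1000, 14)) < np.where(disease[:, None], 0.25, 0.05)
for k in (1, 2):
    print(k, rule_performance(signs, disease, k))
```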
Feng, C L; Wu, F C; Dyer, S D; Chang, H; Zhao, X L
2013-01-01
Species sensitivity distributions (SSDs) are usually used in the development of water quality criteria and require a large number of toxicity values to define a hazard level that protects the majority of species. However, toxicity data for certain chemicals are limited, especially for endangered and threatened species. Thus, it is important to predict unknown species toxicity values from available toxicity data. To address this need, interspecies correlation estimation (ICE) models were developed by the US EPA to predict the acute toxicity of chemicals to diverse species based on a more limited set of surrogate species toxicity data. Use of SSDs generated from ICE models allows for the prediction of protective water quality criteria, such as the HC5 (hazard concentration, 5th percentile). In the present study, we tested this concept using toxicity data collected for zinc. ICE-based SSDs were generated using three surrogate species (common carp (Cyprinus carpio), rainbow trout (Oncorhynchus mykiss), and Daphnia magna) and compared with the measurement-based SSD and corresponding HC5. The results showed no significant differences between the ICE-based and measurement-based SSDs and HC5s. Furthermore, examination of species placements within the SSDs indicated that the species most sensitive to zinc were invertebrates, especially crustaceans. Given the similarity of the SSDs and HC5s for zinc, the use of ICE to derive potential water quality criteria for diverse chemicals in China is proposed. Further, a combination of measured and ICE-derived data will prove useful for assessing water quality and chemical risks in the near future. Copyright © 2012 Elsevier Ltd. All rights reserved.
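An SSD-based HC5 is typically obtained by fitting a log-normal distribution to log-transformed toxicity values and taking its 5th percentile. A minimal sketch with hypothetical zinc LC50s (not the study's data set):

```python
import numpy as np
from scipy.stats import norm

def hc5(toxicity_values):
    """HC5 from a log-normal species sensitivity distribution (SSD):
    the concentration expected to be hazardous to 5% of species.
    toxicity_values: acute toxicity endpoints (e.g., LC50s, ug/L)."""
    logs = np.log10(toxicity_values)
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return 10 ** norm.ppf(0.05, loc=mu, scale=sigma)

# Hypothetical zinc LC50s for a handful of species (ug/L):
print(hc5([90.0, 150.0, 300.0, 550.0, 1200.0, 2400.0]))
```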
Langenheim, V.E.; Powell, R.E.
2009-01-01
The Eastern Transverse Ranges, adjacent to and southeast of the big left bend of the San Andreas fault, southern California, form a crustal block that has rotated clockwise in response to dextral shear within the San Andreas system. Previous studies have indicated a discrepancy between the measured magnitudes of left slip on through-going east-striking fault zones of the Eastern Transverse Ranges and those predicted by simple geometric models using paleomagnetically determined clockwise rotations of basalts distributed along the faults. To assess the magnitude and source of this discrepancy, we apply new gravity and magnetic data in combination with geologic data to better constrain cumulative fault offsets and to define basin structure for the block between the Pinto Mountain and Chiriaco fault zones. Estimates of offset based on the length of pull-apart basins developed within left-stepping strands of the sinistral faults are consistent with those derived by matching offset magnetic anomalies and bedrock patterns, indicating a cumulative offset of at most ~40 km. The upper limit of displacements constrained by the geophysical and geologic data overlaps with the lower limit of those predicted at the 95% confidence level by models of conservative slip located on margins of rigid rotating blocks and the clockwise rotation of the paleomagnetic vectors. Any discrepancy is likely resolved by internal deformation within the blocks, such as intense deformation adjacent to the San Andreas fault (which can account for the absence of basins there as predicted by rigid-block models) and linkage via subsidiary faults between the main faults. © 2009 Geological Society of America.
Validity of suspected alcohol and drug violations in aviation employees.
Li, Guohua; Brady, Joanne E; DiMaggio, Charles; Baker, Susan P; Rebok, George W
2010-10-01
In the United States, transportation employees who are suspected of using alcohol and drugs are subject to reasonable-cause testing. This study aims to assess the validity of suspected alcohol and drug violations in aviation employees. Using reasonable-cause testing and random testing data from the Federal Aviation Administration for the years 1995-2005, we calculated the positive predictive value (PPV) and positive likelihood ratio (LR+) of suspected alcohol and drug violations. The true status of violations was based on testing results, with an alcohol violation being defined as a blood alcohol concentration of ≥40 mg/dL and a drug violation as a test positive for marijuana, cocaine, amphetamines, phencyclidine or opiates. During the 11-year study period, a total of 2284 alcohol tests and 2015 drug tests were performed under the reasonable-cause testing program. The PPV was 37.7% [95% confidence interval (CI), 35.7-39.7%] for suspected alcohol violations and 12.6% (95% CI, 11.2-14.1%) for suspected drug violations. Random testing revealed an overall prevalence of 0.09% for alcohol violations and 0.6% for drug violations. The LR+ was 653.6 (95% CI, 581.7-734.3) for suspected alcohol violations and 22.5 (95% CI, 19.6-25.7) for suspected drug violations. The discriminative power of reasonable-cause testing suggests that, despite its limited positive predictive value, physical and behavioral observation represents an efficient screening method for detecting alcohol and drug violations. The limited positive predictive value of reasonable-cause testing in aviation employees is due in part to the very low prevalence of alcohol and drug violations. © 2010 The Authors, Addiction © 2010 Society for the Study of Addiction.
Forecasting the forest and the trees: consequences of drought in competitive forests
NASA Astrophysics Data System (ADS)
Clark, J. S.
2015-12-01
Models that translate individual tree responses to distribution and abundance of competing populations are needed to understand forest vulnerability to drought. Currently, biodiversity predictions rely on one scale or the other, but do not combine them. Synthesis is accomplished here by modeling data together, each with their respective scale-dependent connections to the scale needed for prediction—landscape to regional biodiversity. The approach we summarize integrates three scales, i) individual growth, reproduction, and survival, ii) size-species structure of stands, and iii) regional forest biomass. Data include 24,347 USDA Forest Inventory and Analysis (FIA) plots and 135 Long-term Forest Demography plots. Climate, soil moisture, and competitive interactions are predictors. We infer and predict the four-dimensional size/species/space/time (SSST) structure of forests, where all demographic rates respond to winter temperature, growing season length, moisture deficits, local moisture status, and competition. Responses to soil moisture are highly non-linear and not strongly related to responses to climatic moisture deficits over time. In the Southeast the species that are most sensitive to drought on dry sites are not the same as those that are most sensitive on moist sites. Those that respond most to spatial moisture gradients are not the same as those that respond most to regional moisture deficits. There is little evidence of simple tradeoffs in responses. Direct responses to climate constrain the ranges of few tree species, north or south; there is little evidence that range limits are defined by fecundity or survival responses to climate. By contrast, recruitment and the interactions between competition and drought that affect growth and survival are predicted to limit ranges of many species. Taken together, results suggest a rich interaction involving demographic responses at all size classes to neighbors, landscape variation in moisture, and regional climate change.
Validity of Suspected Alcohol and Drug Violations in Aviation Employees
Li, Guohua; Brady, Joanne E.; DiMaggio, Charles; Baker, Susan P.; Rebok, George W.
2012-01-01
Introduction In the United States, transportation employees who are suspected of using alcohol and drugs are subject to reasonable-cause testing. This study aims to assess the validity of suspected alcohol and drug violations in aviation employees. Methods Using reasonable-cause testing and random testing data from the Federal Aviation Administration for the years 1995 through 2005, we calculated the positive predictive value (PPV) and positive likelihood ratio (LR+) of suspected alcohol and drug violations. The true status of violations was based on testing results, with an alcohol violation being defined as a blood alcohol concentration of ≥40 mg/dL and a drug violation as a test positive for marijuana, cocaine, amphetamines, phencyclidine, or opiates. Results During the 11-year study period, a total of 2,284 alcohol tests and 2,015 drug tests were performed under the reasonable-cause testing program. The PPV was 37.7% [95% confidence interval (CI), 35.7–39.7%] for suspected alcohol violations and 12.6% (95% CI, 11.2–14.1%) for suspected drug violations. Random testing revealed an overall prevalence of 0.09% (601/649,796) for alcohol violations and 0.6% (7,211/1,130,922) for drug violations. The LR+ was 653.6 (95% CI, 581.7–734.3) for suspected alcohol violations and 22.5 (95% CI, 19.6–25.7) for suspected drug violations. Discussion The discriminative power of reasonable-cause testing suggests that, despite its limited positive predictive value, physical and behavioral observation represents an efficient screening method for detecting alcohol and drug violations. The limited positive predictive value of reasonable-cause testing in aviation employees is due in part to the very low prevalence of alcohol and drug violations. PMID:20712820
Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak
NASA Astrophysics Data System (ADS)
Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team
2018-05-01
Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction, and high-density operation is an active topic in tokamak research. Density limit disruptions remain an important issue for safe operation, and an effective density limit disruption prediction and avoidance system is key to avoiding such disruptions in long-pulse, steady-state operation. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The network has been improved from a simple multi-layer design to a hybrid two-stage structure: the first stage is a custom network which uses time-series diagnostics as inputs to predict the plasma density, and the second stage is a three-layer feedforward neural network that predicts the probability of a density limit disruption. It is found that the hybrid neural network structure, combined with radiation profile information as an input, significantly improves the prediction performance, especially the average warning time T_warn. In particular, T_warn is eight times longer than in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014), from 5 ms to 40 ms. The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, an on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.
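The two-stage idea can be outlined as follows; this is an illustrative PyTorch sketch, not the J-TEXT implementation, and the signal counts, window length, layer sizes and the use of four radiation-profile features are assumptions:

    import torch
    import torch.nn as nn

    class DensityPredictor(nn.Module):            # stage 1: time series -> density
        def __init__(self, n_signals, window):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(n_signals * window, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )
        def forward(self, x):                     # x: (batch, n_signals, window)
            return self.net(x)

    class DisruptionClassifier(nn.Module):        # stage 2: three-layer feedforward
        def __init__(self, n_features):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 32), nn.ReLU(),
                nn.Linear(32, 16), nn.ReLU(),
                nn.Linear(16, 1), nn.Sigmoid(),   # disruption probability
            )
        def forward(self, x):
            return self.net(x)

    stage1 = DensityPredictor(n_signals=8, window=32)
    stage2 = DisruptionClassifier(n_features=1 + 4)  # predicted density + profile features

    diagnostics = torch.randn(16, 8, 32)          # dummy batch of diagnostic windows
    profiles = torch.randn(16, 4)                 # dummy radiation-profile features
    density_hat = stage1(diagnostics)
    p_disrupt = stage2(torch.cat([density_hat, profiles], dim=1))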
Wastewater discharge impact on drinking water sources along the Yangtze River (China).
Wang, Zhuomin; Shao, Dongguo; Westerhoff, Paul
2017-12-01
Unplanned indirect (de facto) wastewater reuse occurs when wastewater is discharged into surface waters upstream of potable drinking water treatment plant intakes. This paper aims to predict percentages and trends of de facto reuse throughout the Yangtze River watershed in order to understand the relative contribution of wastewater discharges into the river and its tributaries towards averting water scarcity concerns. The Yangtze River is the third longest in the world and supports more than 1/15 of the world's population, yet the importance of wastewater on the river remains ill-defined. Municipal wastewater produced in the Yangtze River Basin increased by 41% between 1998 and 2014, from 2580 m³/s to 3646 m³/s. Under low flow conditions in the Yangtze River near Shanghai, treated wastewater contributions to river flows increased from 8% in 1998 to 14% in 2014. The highest levels of de facto reuse appeared along a major tributary (Han River) of the Yangtze River, where de facto reuse can exceed 20%. While this initial analysis of de facto reuse used water supply and wastewater data from 110 cities in the basin and 11 gauging stations with >50 years of historic streamflow data, the outcome was limited by the lack of gauging stations at more locations (i.e., data had to be predicted using digital elevation mapping) and the lack of precise geospatial locations of drinking water intakes or wastewater discharges. This limited the predictive capability of the model relative to larger datasets available in other countries (e.g., USA). This assessment is the first analysis of de facto wastewater reuse in the Yangtze River Basin. It will help identify sections of the river at higher risk for wastewater-related pollutants, due to the presence of, and reliance on, wastewater discharge, that could be the focus of field studies and model predictions of higher spatial and temporal resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
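At a given intake, the de facto reuse percentage is essentially the accumulated upstream treated-wastewater discharge divided by the river flow at that point; a toy Python illustration (the flows below are invented, not the study's data):

    def de_facto_reuse_pct(upstream_wastewater_m3s, river_flow_m3s):
        # fraction of the intake's water that has passed through an
        # upstream wastewater treatment plant
        return 100.0 * upstream_wastewater_m3s / river_flow_m3s

    # e.g. 500 m3/s of accumulated effluent in a 3600 m3/s low-flow river:
    print(f"{de_facto_reuse_pct(500, 3600):.1f}%")  # ~13.9%, comparable to the 14% above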
Siddiqi, Shan H.; Chockalingam, Ravikumar; Cloninger, C. Robert; Lenze, Eric J.; Cristancho, Pilar
2016-01-01
Objective: The goal of this study was to investigate the utility of the Temperament and Character Inventory (TCI) in predicting antidepressant response to repetitive transcranial magnetic stimulation (rTMS). Background: Although rTMS of the dorsolateral prefrontal cortex (DLPFC) is an established antidepressant treatment, little is known about predictors of response. The TCI measures multiple personality dimensions (harm avoidance, novelty seeking, reward dependence, persistence, self-directedness, self-transcendence, and cooperativeness), some of which have predicted response to pharmacotherapy and cognitive-behavioral therapy. A previous study suggested a possible association between self-directedness and response to rTMS in melancholic depression, although this was limited by the fact that melancholic depression is associated with a limited range of TCI profiles. Methods: Nineteen patients with a major depressive episode completed the TCI prior to a clinical course of rTMS over the DLPFC. Treatment response was defined as a ≥50% decrease in scores on the Hamilton Rating Scale for Depression (HAM-D). Baseline scores on each TCI dimension were compared between responders and non-responders via analysis of variance. Pearson correlations were also calculated between temperament/character scores and percentage improvement in HAM-D scores. Results: Eleven of the 19 patients responded to rTMS. T-scores for persistence were significantly higher in responders than in non-responders (P=0.022). Linear regression revealed a correlation between persistence scores and percentage improvement in HAM-D scores. Conclusions: Higher persistence scores predicted antidepressant response to rTMS. This may be explained by rTMS-induced enhancement of cortical excitability, which has been found to be decreased in patients with high persistence. Personality assessment that includes measurement of TCI persistence may be a useful component of precision medicine initiatives in rTMS for depression. PMID:27123799
ERIC Educational Resources Information Center
Hilton, N. Zoe; Harris, Grant T.
2009-01-01
Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…
A diffusion-limited reaction model for self-propagating Al/Pt multilayers with quench limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kittell, David E.; Yarrington, Cole D.; Hobbs, M. L.
A diffusion-limited reaction model was calibrated for Al/Pt multilayers ignited on oxidized silicon, sapphire, and tungsten substrates, as well as for some Al/Pt multilayers ignited as free-standing foils. The model was implemented in a finite element analysis code and used to match experimental burn front velocity data collected from several years of testing at Sandia National Laboratories. Moreover, both the simulations and experiments reveal well-defined quench limits in the total Al + Pt layer (i.e., bilayer) thickness. At these limits, the heat generated from atomic diffusion is insufficient to support a self-propagating wave front on top of the substrates. Quench limits for reactive multilayers are seldom reported and are found to depend on the thermal properties of the individual layers. Here, the diffusion-limited reaction model is generalized to allow for temperature- and composition-dependent material properties, phase change, and anisotropic thermal conductivity. Utilizing this increase in model fidelity, excellent overall agreement is shown between the simulations and experimental results with a single calibrated parameter set. However, the burn front velocities of Al/Pt multilayers ignited on tungsten substrates are over-predicted. Possible sources of error are discussed, and a higher activation energy (from 41.9 kJ/mol.at. to 47.5 kJ/mol.at.) is shown to bring the simulations into agreement with the velocity data observed on tungsten substrates. This higher activation energy suggests an inhibited diffusion mechanism at lower heating rates.
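The role of the activation energy can be illustrated with the standard Arrhenius form for a diffusion-limited rate, D = D0·exp(-Ea/(R·T)); the prefactor and temperatures below are illustrative, not values from the paper:

    import math

    R = 8.314  # J/(mol K)

    def arrhenius(d0, ea_kj_per_mol, t_kelvin):
        # diffusion-limited rate with Arrhenius temperature dependence
        return d0 * math.exp(-ea_kj_per_mol * 1e3 / (R * t_kelvin))

    for T in (600.0, 1000.0, 1500.0):
        ratio = arrhenius(1.0, 47.5, T) / arrhenius(1.0, 41.9, T)
        print(f"T = {T:6.0f} K: rate(47.5 kJ/mol) / rate(41.9 kJ/mol) = {ratio:.2f}")

The suppression is strongest at low temperature (ratio ~0.33 at 600 K versus ~0.64 at 1500 K), consistent with an inhibited diffusion mechanism mattering most at lower heating rates.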
Bodei, L; Kidd, M; Modlin, I M; Severi, S; Drozdov, I; Nicolini, S; Kwekkeboom, D J; Krenning, E P; Baum, R P; Paganelli, G
2016-05-01
Peptide receptor radionuclide therapy (PRRT) is an effective method for treating neuroendocrine tumors (NETs). It is limited, however, in the prediction of individual tumor response and the precise and early identification of changes in tumor size. Currently, response prediction is based on somatostatin receptor expression, and efficacy on morphological imaging and/or chromogranin A (CgA) measurement. The aim of this study was to assess the accuracy of circulating NET transcripts as a measure of PRRT efficacy, and moreover to identify prognostic gene clusters in pretreatment blood that could be interpolated with relevant clinical features in order to define a biological index for the tumor and a predictive quotient for PRRT efficacy. NET patients (n = 54; M:F 37:17; median age 66; bronchial: n = 13, GEP-NET: n = 35, CUP: n = 6) were treated with (177)Lu-based PRRT (cumulative activity: 6.5-27.8 GBq, median 18.5). At baseline, 47/54 were low-grade (G1/G2; bronchial typical/atypical), 31/49 were (18)FDG positive and 39/54 were progressive. Disease status was assessed by RECIST 1.1. Transcripts were measured by real-time quantitative reverse transcription PCR (qRT-PCR) and multianalyte algorithmic analysis (NETest); CgA by enzyme-linked immunosorbent assay (ELISA). Gene cluster (GC) derivations used regulatory network and protein:protein interactome analyses; statistical analyses included chi-square, non-parametric measurements, multiple regression, receiver operating characteristic and Kaplan-Meier survival analyses. The disease control rate was 72%. Median PFS was not reached (follow-up: 1-33 months, median: 16). Only grading was associated with response (p < 0.01). At baseline, 94% of patients were NETest-positive, while CgA was elevated in 59%. The NETest accurately (89%, χ² = 27.4; p = 1.2 × 10⁻⁷) correlated with treatment response, while CgA was 24% accurate. Gene cluster expression (growth-factor signalome and metabolome) had an AUC of 0.74 ± 0.08 (z-statistic = 2.92, p < 0.004) for predicting response (76% accuracy). Combination with grading reached an AUC of 0.90 ± 0.07, irrespective of tumor origin. Circulating transcripts correlated accurately (94%) with PRRT responders (SD+PR+CR; 97%) vs. non-responders (91%). Blood NET transcript levels and the predictive quotient (circulating gene clusters plus grading) accurately predicted PRRT efficacy; CgA was non-informative.
Can the theory of planned behaviour predict the physical activity behaviour of individuals?
Hobbs, Nicola; Dixon, Diane; Johnston, Marie; Howie, Kate
2013-01-01
The theory of planned behaviour (TPB) can identify cognitions that predict differences in behaviour between individuals. However, it is not clear whether the TPB can predict the behaviour of an individual person. This study employs a series of n-of-1 studies and time series analyses to examine the ability of the TPB to predict physical activity (PA) behaviours of six individuals. Six n-of-1 studies were conducted, in which TPB cognitions and up to three PA behaviours (walking, gym workout and a personally defined PA) were measured twice daily for six weeks. Walking was measured by pedometer step count, gym attendance by self-report with objective validation of gym entry and the personally defined PA behaviour by self-report. Intra-individual variability in TPB cognitions and PA behaviour was observed in all participants. The TPB showed variable predictive utility within individuals and across behaviours. The TPB predicted at least one PA behaviour for five participants but had no predictive utility for one participant. Thus, n-of-1 designs and time series analyses can be used to test theory in an individual.
Cascão, Angela Maria; Jorge, Maria Helena Prado de Mello; Costa, Antonio José Leal; Kale, Pauline Lorena
2016-01-01
Ill-defined causes of death are common among the elderly owing to the high frequency of comorbidities and, consequently, the difficulty in defining the underlying cause of death. The objective was to analyze the validity and reliability of the hospitalization "primary diagnosis" for recovering information on the underlying cause of death in natural deaths among the elderly whose deaths were originally assigned an "ill-defined cause" on the Death Certificate. The hospitalizations occurred in the state of Rio de Janeiro in 2006. The databases obtained from the Information Systems on Mortality and Hospitalization were probabilistically linked. The following statistics were calculated for hospitalizations of the elderly that ended in death from a natural cause: concordance percentages, Kappa coefficient, sensitivity, specificity, and the positive predictive value of the primary diagnosis. Deaths attributed to "ill-defined causes" were assigned a new cause, defined on the basis of the primary diagnosis. The reliability of the primary diagnosis was good according to the total percentage of concordance (50.2%) and fair according to the Kappa coefficient (k = 0.4; p < 0.0001). Diseases of the circulatory system and neoplasms occurred with the highest frequency among both the deaths and the hospitalizations, and presented higher concordance and positive predictive values per chapter and grouping of the International Classification of Diseases. The information on the underlying cause was recovered in 22.6% of the deaths with ill-defined causes (n = 14). The methodology developed and applied here to recover information on the natural cause of death among the elderly had the advantages of effectiveness and reduced cost compared with the death investigation recommended in situations of unlinked records and low positive predictive values. Monitoring the mortality profile by cause of death requires periodic updating of the predictive values.
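Percent concordance and the Kappa coefficient quoted above can be computed directly; a minimal sketch with toy ICD-chapter labels (not the study's data):

    # Agreement between death-certificate cause and hospitalization
    # "primary diagnosis", both coded as ICD chapter labels (toy data).
    from collections import Counter

    death_cert = ["IX", "II", "IX", "X", "II", "IX", "IV", "II"]
    hosp_diag  = ["IX", "II", "X",  "X", "II", "IX", "IX", "IV"]

    n = len(death_cert)
    p_observed = sum(a == b for a, b in zip(death_cert, hosp_diag)) / n

    ca, cb = Counter(death_cert), Counter(hosp_diag)
    p_expected = sum(ca[label] * cb[label] for label in ca) / n**2  # chance agreement

    kappa = (p_observed - p_expected) / (1 - p_expected)
    print(f"concordance = {p_observed:.2f}, kappa = {kappa:.2f}")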
What defines an Expert? - Uncertainty in the interpretation of seismic data
NASA Astrophysics Data System (ADS)
Bond, C. E.
2008-12-01
Studies focusing on the elicitation of information from experts are concentrated primarily in economics and world markets, medical practice and expert witness testimonies. Expert elicitation theory has been applied in the natural sciences, most notably in the prediction of fluid flow in hydrological studies. In the geological sciences, expert elicitation has been limited to theoretical analysis, with studies focusing on the elicitation element, gaining expert opinion rather than necessarily understanding the basis behind the expert view. In these cases experts are defined in a traditional sense, based for example on standing in the field, number of years of experience, number of peer-reviewed publications, or the expert's position in a company hierarchy or academia. Here, traditional indicators of expertise have been compared for their significance for effective seismic interpretation. Polytomous regression analysis has been used to assess the relative significance of length and type of experience on the outcome of a seismic interpretation exercise. Following the initial analysis, the techniques used by participants to interpret the seismic image were added as additional variables to the analysis. Specific technical skills and techniques were found to be more important for the effective geological interpretation of seismic data than the traditional indicators of expertise. The results of a seismic interpretation exercise, the techniques used to interpret the seismic data and the participants' prior experience have been combined and analysed to answer the question: who is, and what defines, an expert?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penn, D.A.
1984-01-01
This study represents a limited step towards addressing the problems of the continuous development of property rights in natural resources. First a model of the evolution of water rights in the West is developed. In the model, the specification of property rights in water is treated as a continuous variable, ranging from completely nonexclusive to completely exclusive. The particular position on the continuum depends on the benefits of increasing the level of exclusivity relative to the costs. As the net benefits of increasing water right exclusivity increase, the model predicts greater property rights defining and enforcing activity. The second portion of the study provides a historical context of the development of property rights in the West, focusing on the experience in Colorado. The evolution of property rights is analyzed, given the actual economic and ideological constraints and motivations existing at the time. The last portion of the study offers an empirical test of the hypotheses generated by the model. Ordinary least-squares equations are estimated, using the number of water-rights cases as an indicator of defining and enforcing activity. The quantitative and historical evidence suggests that water rights exhibit a tendency to develop efficiently, but that this tendency is constrained by the costs of defining exclusive rights to water and by ideology.
Bezinge, Leonard; Maceiczyk, Richard M; Lignos, Ioannis; Kovalenko, Maksym V; deMello, Andrew J
2018-06-06
Recent advances in the development of hybrid organic-inorganic lead halide perovskite (LHP) nanocrystals (NCs) have demonstrated their versatility and potential application in photovoltaics and as light sources through compositional tuning of optical properties. That said, due to their compositional complexity, the targeted synthesis of mixed-cation and/or mixed-halide LHP NCs still represents an immense challenge for traditional batch-scale chemistry. To address this limitation, we herein report the integration of a high-throughput segmented-flow microfluidic reactor and a self-optimizing algorithm for the synthesis of NCs with defined emission properties. The algorithm, named Multiparametric Automated Regression Kriging Interpolation and Adaptive Sampling (MARIA), iteratively computes optimal sampling points at each stage of an experimental sequence to reach a target emission peak wavelength based on spectroscopic measurements. We demonstrate the efficacy of the method through the synthesis of multinary LHP NCs, (Cs/FA)Pb(I/Br)3 (FA = formamidinium) and (Rb/Cs/FA)Pb(I/Br)3 NCs, using MARIA to rapidly identify reagent concentrations that yield user-defined photoluminescence peak wavelengths in the green-red spectral region. The procedure returns a robust model around a target output in far fewer measurements than systematic screening of parametric space and additionally enables the prediction of other spectral properties, such as full-width at half-maximum and intensity, for conditions yielding NCs with similar emission peak wavelength.
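In spirit, the MARIA loop alternates kriging (Gaussian-process) interpolation with adaptive selection of the next reaction condition; the sketch below is a conceptual illustration with scikit-learn, in which the response surface, kernel, and acquisition rule are all assumptions rather than the published algorithm:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def true_response(x):                     # stand-in for the microfluidic reactor
        return 520 + 160 * x + 10 * np.sin(8 * x)

    target = 620.0                            # nm, user-defined target peak
    X = np.array([[0.0], [0.5], [1.0]])       # initial reagent-ratio samples
    y = true_response(X.ravel())

    candidates = np.linspace(0, 1, 201).reshape(-1, 1)
    for _ in range(6):
        gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1.0).fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        score = np.abs(mu - target) - sigma   # prefer near-target, uncertain points
        x_next = candidates[np.argmin(score)]
        X = np.vstack([X, x_next])
        y = np.append(y, true_response(x_next[0]))

    best = X[np.argmin(np.abs(true_response(X.ravel()) - target))]
    print(f"composition reaching ~{target} nm: ratio = {best[0]:.3f}")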
Palm Reed, Kathleen M; Cameron, Amy Y; Ameral, Victoria E
2017-09-01
There is a growing literature focusing on the emerging idea that behavioral flexibility, rather than particular emotion regulation strategies per se, provides greater promise in predicting and influencing anxiety-related psychopathology. Yet this line of research and theoretical analysis appear to be plagued by its own challenges. For example, middle-level constructs, such as behavioral flexibility, are difficult to define, difficult to measure, and difficult to interpret in relation to clinical interventions. A key point that some researchers have made is that previous studies examining flexible use of emotion regulation strategies (or, more broadly, coping) have failed due to a lack of focus on context. That is, examining strategies in isolation of the context in which they are used provides limited information on the suitability, rigid adherence, or effectiveness of a given strategy in that situation. Several of these researchers have proposed the development of new models to define and measure various types of behavioral flexibility. We would like to suggest that an explanation of the phenomenon already exists and that we can go back to our behavioral roots to understand this phenomenon rather than focusing on defining and capturing a new process. Indeed, thorough contextual behavioral analyses already yield a useful account of what has been observed. We will articulate a model explaining behavioral flexibility using a functional, contextual framework, with anxiety-related disorders as an example.
Irving, Paul; Moncrieff, Ian
2004-12-01
Ecological systems have limits or thresholds that vary by pollutant type, emissions sources and the sensitivity of a given location. Human health can also indicate sensitivity. Good environmental management requires any problem to be defined to obtain efficient and effective solutions. Cities are where transport activities, effects and resource management decisions are often most focussed. The New Zealand Ministry of Transport has developed two environmental management tools. The Vehicle Fleet Model (VFM) is a predictive database of the environmental performance of the New Zealand traffic fleet (and rail fleet). It calculates indices of local air quality, stormwater, and greenhouse gases emissions. The second is an analytical process based on Environmental Capacity Analysis (ECA). Information on local traffic is combined with environmental performance data from the Vehicle Fleet Model. This can be integrated within a live, geo-spatially defined analysis of the overall environmental effects within a defined local area. Variations in urban form and activity (traffic and other) that contribute to environmental effects can be tracked. This enables analysis of a range of mitigation strategies that may contribute, now or in the future, to maintaining environmental thresholds or meeting targets. A case study of the application of this approach was conducted within Waitakere City. The focus was on improving the understanding of the relative significance of stormwater contaminants derived from land transport.
Nelson, Deborah Shafer; McManus, John; Richmond, Robert H; King, David B; Gailani, Joe Z; Lackey, Tahirih C; Bryant, Duncan
2016-03-01
Coral reefs are in decline worldwide due to anthropogenic stressors including reductions in water and substratum quality. Dredging results in the mobilization of sediments, which can stress and kill corals via increasing turbidity, tissue damage and burial. The Particle Tracking Model (PTM) was applied to predict the potential impacts of dredging-associated sediment exposure on the coral reef ecosystems of Apra Harbor, Guam. The data were interpreted using maps of bathymetry and coral abundance and distribution in conjunction with impact parameters of suspended sediment concentration (turbidity) and sedimentation using defined coral response thresholds. The results are presented using a "stoplight" model of negligible or limited impacts to coral reefs (green), moderate stress from which some corals would be expected to recover while others would not (yellow) and severe stress resulting in mortality (red). The red conditions for sediment deposition rate and suspended sediment concentration (SSC) were defined as values exceeding 25 mg cm⁻² d⁻¹ over any 30-day window and >20 mg/l for any 18 days in any 90-day period over a column of water greater than 2 m, respectively. The yellow conditions were defined as values >10 mg cm⁻² d⁻¹ and <25 mg cm⁻² d⁻¹ over any 30-day period, and as 20% of 3 months' concentrations exceeding 10 mg/l, for deposition and SSC, respectively. The model also incorporates the potential for cumulative effects on the assumption that even sub-lethal stress levels can ultimately lead to mortality in a multi-stressor system. This modeling approach can be applied by resource managers and regulatory agencies to support management decisions related to planning, site selection, damage reduction, and compensatory mitigation. Published by Elsevier Ltd.
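The threshold logic above translates directly into code; a sketch assuming daily time series and the stated criteria (the yellow SSC rule is approximated here as 20% of the record exceeding 10 mg/l):

    import numpy as np

    def rolling_mean(x, window):
        return np.convolve(x, np.ones(window) / window, mode="valid")

    def deposition_light(dep_mg_cm2_d):        # daily deposition rates
        means = rolling_mean(np.asarray(dep_mg_cm2_d, float), 30)
        if (means > 25).any():                 # any 30-day window above 25
            return "red"
        if (means > 10).any():                 # any 30-day window above 10
            return "yellow"
        return "green"

    def ssc_light(ssc_mg_l):                   # daily suspended sediment conc.
        ssc = np.asarray(ssc_mg_l, float)
        days_over_20 = rolling_mean((ssc > 20).astype(float), 90) * 90
        if (days_over_20 >= 18).any():         # 18+ days over 20 mg/l in any 90
            return "red"
        if (ssc > 10).mean() >= 0.20:          # 20% of record over 10 mg/l
            return "yellow"
        return "green"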
Proteomic Approaches Identify Members of Cofilin Pathway Involved in Oral Tumorigenesis
Polachini, Giovana M.; Sobral, Lays M.; Mercante, Ana M. C.; Paes-Leme, Adriana F.; Xavier, Flávia C. A.; Henrique, Tiago; Guimarães, Douglas M.; Vidotto, Alessandra; Fukuyama, Erica E.; Góis-Filho, José F.; Cury, Patricia M.; Curioni, Otávio A.; Michaluart Jr, Pedro; Silva, Adriana M. A.; Wünsch-Filho, Victor; Nunes, Fabio D.; Leopoldino, Andréia M.; Tajara, Eloiza H.
2012-01-01
The prediction of tumor behavior for patients with oral carcinomas remains a challenge for clinicians. The presence of lymph node metastasis is the most important prognostic factor but it is limited in predicting local relapse or survival. This highlights the need for identifying biomarkers that may effectively contribute to prediction of recurrence and tumor spread. In this study, we used one- and two-dimensional gel electrophoresis, mass spectrometry and immunodetection methods to analyze protein expression in oral squamous cell carcinomas. Using a refinement for classifying oral carcinomas in regard to prognosis, we analyzed small but lymph node metastasis-positive versus large, lymph node metastasis-negative tumors in order to contribute to the molecular characterization of subgroups with risk of dissemination. Specific protein patterns favoring metastasis were observed in the “more-aggressive” group defined by the present study. This group displayed upregulation of proteins involved in migration, adhesion, angiogenesis, cell cycle regulation, anti-apoptosis and epithelial to mesenchymal transition, whereas the “less-aggressive” group was engaged in keratinocyte differentiation, epidermis development, inflammation and immune response. Besides the identification of several proteins not yet described as deregulated in oral carcinomas, the present study demonstrated for the first time the role of cofilin-1 in modulating cell invasion in oral carcinomas. PMID:23227181
Using cluster analysis to identify phenotypes and validation of mortality in men with COPD.
Chen, Chiung-Zuei; Wang, Liang-Yi; Ou, Chih-Ying; Lee, Cheng-Hung; Lin, Chien-Chung; Hsiue, Tzuen-Ren
2014-12-01
Cluster analysis has been proposed to examine phenotypic heterogeneity in chronic obstructive pulmonary disease (COPD). The aim of this study was to use cluster analysis to define COPD phenotypes and validate them by assessing their relationship with mortality. Male subjects with COPD were recruited to identify and validate COPD phenotypes. Seven variables were assessed for their relevance to COPD: age, FEV1 % predicted, BMI, history of severe exacerbations, mMRC, SpO2, and Charlson index. COPD groups were identified by cluster analysis and validated prospectively against mortality during a 4-year follow-up. Analysis of 332 COPD subjects identified five clusters, cluster A to cluster E. Assessment of the predictive validity of these clusters showed that cluster E patients had higher all-cause mortality (HR 18.3, p < 0.0001) and respiratory-cause mortality (HR 21.5, p < 0.0001) than those in the other four groups. Cluster E patients also had higher all-cause mortality (HR 14.3, p = 0.0002) and respiratory-cause mortality (HR 10.1, p = 0.0013) than patients in cluster D alone. A COPD phenotype combining severe airflow limitation, a high symptom burden, and a history of frequent severe exacerbations was a novel and distinct clinical phenotype predicting mortality in men with COPD.
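The analysis pattern (standardize the seven predictors, cluster, then validate the clusters against follow-up mortality) can be sketched as follows; this is an illustrative scikit-learn outline with assumed column names and k-means as a stand-in, since the abstract does not specify the clustering algorithm:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    PREDICTORS = ["age", "fev1_pct_pred", "bmi", "severe_exacerbations",
                  "mmrc", "spo2", "charlson"]

    def cluster_phenotypes(df: pd.DataFrame, k: int = 5) -> pd.DataFrame:
        # standardize the seven variables, then assign each subject a cluster
        X = StandardScaler().fit_transform(df[PREDICTORS])
        out = df.copy()
        out["cluster"] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        return out

    # validation step would then compare mortality across clusters, e.g.:
    # cluster_phenotypes(cohort).groupby("cluster")["died_4yr"].mean()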
Griffiths, Alys Wyn; Wood, Alex M; Maltby, John; Taylor, Peter J; Tai, Sara
2014-04-30
The concepts of "defeat" (representing failed social struggle) and "entrapment" (representing an inability to escape from a situation) have emerged from the animal literature, providing insight into the health consequences of low social rank. Evolutionary models suggest that these constructs co-occur and can lead to the development of mental disorders, although there is limited empirical evidence supporting these predictions. Participants (N=172) were recruited from economically deprived areas in the North of England. Over half of the participants (58%) met clinical cut-offs for depression and anxiety; we therefore conducted analyses to establish whether participant outcomes were dependent on baseline defeat and entrapment levels. Participants completed measures of defeat, entrapment, depression and anxiety at two time-points twelve months apart. Factor analysis demonstrated that defeat and entrapment were best defined as a single factor, suggesting that the experiences co-occur. Regression analyses demonstrated that changes in depression and anxiety between T1 and T2 were predicted by baseline levels of defeat and entrapment; however, changes in defeat and entrapment were also predicted by baseline depression and anxiety. There are implications for targeting perceptions of defeat and entrapment within psychological interventions for people experiencing anxiety and depression, and for screening individuals to identify those at risk of developing psychopathology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Roberts, Janet; Desai, Nisha; McCoy, John; Goren, Andy
2014-01-01
Two percent topical minoxidil is the only US Food and Drug Administration-approved drug for the treatment of female androgenetic alopecia (AGA). Its success has been limited by the low percentage of responders. Meta-analysis of several studies reporting the number of responders to 2% minoxidil monotherapy indicates moderate hair regrowth in only 13-20% of female patients. Five percent minoxidil solution, when used off-label, may increase the percentage of responders to as much as 40%. As such, a biomarker for predicting treatment response would have significant clinical utility. In a previous study, Goren et al. reported an association between sulfotransferase activity in plucked hair follicles and minoxidil response in a mixed cohort of male and female patients. The aim of this study was to replicate these findings in a well-defined cohort of female patients with AGA treated with 5% minoxidil daily for a period of 6 months. Consistent with the prior study, we found that sulfotransferase activity in plucked hair follicles predicts treatment response with 93% sensitivity and 83% specificity. Our study further supports the importance of minoxidil sulfation in eliciting a therapeutic response and provides further insight into novel targets for increasing minoxidil efficacy. © 2014 Wiley Periodicals, Inc.
Czyz, Ewa K; Berona, Johnny; King, Cheryl A
2015-04-01
The challenge of identifying suicide risk in adolescents, particularly among high-risk subgroups such as adolescent inpatients, calls for further study of models of suicidal behavior that could meaningfully aid in the prediction of risk. This study examined how well the Interpersonal-Psychological Theory of Suicidal Behavior (IPTS), with its constructs of thwarted belongingness (TB), perceived burdensomeness (PB), and an acquired capability (AC) for lethal self-injury, predicts suicide attempts among adolescents (N = 376) 3 and 12 months after hospitalization. The three-way interaction between PB, TB, and AC, the latter defined as a history of multiple suicide attempts, was not significant. However, there were significant two-way interaction effects, which varied by sex: girls with low AC and increasing TB, and boys with high AC and increasing PB, were more likely to attempt suicide at 3 months. Only high AC predicted 12-month attempts. Results suggest gender-specific associations between theory components and attempts. The time-limited effects of these associations point to TB and PB being dynamic and modifiable in high-risk populations, whereas the effects of AC are more lasting. The study also fills an important gap in existing research by examining the IPTS prospectively. © 2014 The American Association of Suicidology.
Diagnostic value of highly-sensitive chimerism analysis after allogeneic stem cell transplantation.
Sellmann, Lea; Rabe, Kim; Bünting, Ivonne; Dammann, Elke; Göhring, Gudrun; Ganser, Arnold; Stadler, Michael; Weissinger, Eva M; Hambach, Lothar
2018-05-02
Conventional analysis of host chimerism (HC) frequently fails to detect relapse before its clinical manifestation in patients with hematological malignancies after allogeneic stem cell transplantation (allo-SCT). Quantitative PCR (qPCR)-based highly-sensitive chimerism analysis extends the detection limit of conventional (short tandem repeats-based) chimerism analysis from 1 to 0.01% host cells in whole blood. To date, the diagnostic value of highly-sensitive chimerism analysis is hardly defined. Here, we applied qPCR-based chimerism analysis to 901 blood samples of 71 out-patients with hematological malignancies after allo-SCT. Receiver operating characteristics (ROC) curves were calculated for absolute HC values and for the increments of HC before relapse. Using the best cut-offs, relapse was detected with sensitivities of 74 or 85% and specificities of 69 or 75%, respectively. Positive predictive values (PPVs) were only 12 or 18%, but the respective negative predictive values were 98 or 99%. Relapse was detected median 38 or 45 days prior to clinical diagnosis, respectively. Considering also durations of steadily increasing HC of more than 28 days improved PPVs to more than 28 or 59%, respectively. Overall, highly-sensitive chimerism analysis excludes relapses with high certainty and predicts relapses with high sensitivity and specificity more than a month prior to clinical diagnosis.
2013-09-01
#define CSTR .5        // 1/°C
#define SKBFN 6.3      // liters/(h m^2)
#define Skbfmax 90.    // conservative; could be higher for a fit person
#define ...
WarmC = 0;
if (Tsk < TTSK) Colds = TTSK - Tsk;
if (Tc > TTCR) WarmC = Tc - TTCR;
Skbf = (SKBFN + CDIL*WarmC) / (1 + CSTR*Colds);  // liters/(h m^2)
if (Skbf ...
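The fragment above is the skin-blood-flow rule of a Gagge-type two-node thermoregulation model. A runnable Python transcription is sketched below; the snippet does not show CDIL or the set-points TTSK and TTCR, so the values used here (CDIL = 200 liters/(h m² °C), TTSK = 34.1 °C, TTCR = 36.8 °C, as in common two-node implementations) are assumptions:

    SKBFN, SKBFMAX = 6.3, 90.0     # liters/(h m^2), as in the #defines above
    CSTR, CDIL = 0.5, 200.0        # 1/degC; CDIL is assumed, not shown above

    def skin_blood_flow(Tsk, Tc, TTSK=34.1, TTCR=36.8):  # set-points assumed
        colds = max(0.0, TTSK - Tsk)   # cold signal from the skin
        warmc = max(0.0, Tc - TTCR)    # warm signal from the core
        skbf = (SKBFN + CDIL * warmc) / (1 + CSTR * colds)
        return min(skbf, SKBFMAX)      # cap at the Skbfmax limit

    print(skin_blood_flow(Tsk=33.0, Tc=37.2))  # liters/(h m^2)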
Quantifying the relationship between sequence and three-dimensional structure conservation in RNA
2010-01-01
Background In recent years, the number of available RNA structures has rapidly grown reflecting the increased interest on RNA biology. Similarly to the studies carried out two decades ago for proteins, which gave the fundamental grounds for developing comparative protein structure prediction methods, we are now able to quantify the relationship between sequence and structure conservation in RNA. Results Here we introduce an all-against-all sequence- and three-dimensional (3D) structure-based comparison of a representative set of RNA structures, which have allowed us to quantitatively confirm that: (i) there is a measurable relationship between sequence and structure conservation that weakens for alignments resulting in below 60% sequence identity, (ii) evolution tends to conserve more RNA structure than sequence, and (iii) there is a twilight zone for RNA homology detection. Discussion The computational analysis here presented quantitatively describes the relationship between sequence and structure for RNA molecules and defines a twilight zone region for detecting RNA homology. Our work could represent the theoretical basis and limitations for future developments in comparative RNA 3D structure prediction. PMID:20550657
From causal dynamical triangulations to astronomical observations
NASA Astrophysics Data System (ADS)
Mielczarek, Jakub
2017-09-01
This letter discusses phenomenological aspects of the dimensional reduction predicted by the Causal Dynamical Triangulations (CDT) approach to quantum gravity. The deformed form of the dispersion relation for fields defined on the CDT space-time is reconstructed. Using the Fermi satellite observations of the GRB 090510 source we find that the energy scale of the dimensional reduction is E_* > 0.7 \sqrt{4-d_{\text{UV}}} \cdot 10^{10} \text{ GeV} at 95% CL, where d_{\text{UV}} is the value of the spectral dimension in the UV limit. By applying the deformed dispersion relation to the cosmological perturbations it is shown that, for a scenario in which the primordial perturbations are formed in the UV region, the scalar power spectrum P_S \propto k^{n_S-1}, where n_S - 1 ≈ \frac{3 r (d_{\text{UV}}-2)}{(d_{\text{UV}}-1) r - 48}. Here, r is the tensor-to-scalar ratio. We find that, within the considered model, the deviation from scale invariance (n_S = 1) predicted by CDT is in contradiction with up-to-date Planck and BICEP2 constraints.
The structure factor of primes
NASA Astrophysics Data System (ADS)
Zhang, G.; Martelli, F.; Torquato, S.
2018-03-01
Although the prime numbers are deterministic, they can be viewed, by some measures, as pseudo-random numbers. In this article, we numerically study the pair statistics of the primes using statistical-mechanical methods, particularly the structure factor S(k) in an interval M ≤ p ≤ M + L with M large and L/M smaller than unity. We show that the structure factor of the prime-number configurations in such intervals exhibits well-defined Bragg-like peaks along with a small ‘diffuse’ contribution. This indicates that primes are appreciably more correlated and ordered than previously thought. Our numerical results definitively suggest an explicit formula for the locations and heights of the peaks. This formula predicts infinitely many peaks in any non-zero interval, similar to the behavior of quasicrystals. However, primes differ from quasicrystals in that the ratio between the locations of any two predicted peaks is rational. We also show numerically that the diffuse part decays slowly as M and L increase. This suggests that the diffuse part vanishes in an appropriate infinite-system-size limit.
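For a finite point configuration the structure factor can be computed directly as S(k) = |Σ_j exp(-i·k·p_j)|²/N over the N primes in the interval; a small numerical sketch (the interval and k-grid are arbitrary choices, not the paper's):

    import numpy as np
    from sympy import primerange

    M, L = 10**6, 10**4                           # interval start and length (L/M < 1)
    primes = np.array(list(primerange(M, M + L)), dtype=float)
    N = len(primes)

    k = np.linspace(0.01, 4.0, 2000)
    phases = np.exp(-1j * np.outer(k, primes))    # (len(k), N) phase matrix
    S = np.abs(phases.sum(axis=1)) ** 2 / N       # structure factor S(k)

    for idx in np.argsort(S)[-5:][::-1]:          # the five strongest peaks
        print(f"k = {k[idx]:.4f}, S(k) = {S[idx]:.1f}")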
Model Wind Turbines Tested at Full-Scale Similarity
NASA Astrophysics Data System (ADS)
Miller, M. A.; Kiefer, J.; Westergaard, C.; Hultmark, M.
2016-09-01
The enormous length scales associated with modern wind turbines complicate any efforts to predict their mechanical loads and performance. Both experiments and numerical simulations are constrained by the large Reynolds numbers governing the full-scale aerodynamics. The limited fundamental understanding of Reynolds number effects, in combination with the lack of empirical data, affects our ability to predict, model, and design improved turbines and wind farms. A new experimental approach is presented, which utilizes a highly pressurized wind tunnel (up to 220 bar). It allows exact matching of the Reynolds number (no matter how it is defined), tip speed ratio, and Mach number on a geometrically similar, small-scale model. The design of a measurement and instrumentation stack to control the turbine and measure the loads in the pressurized environment is discussed. Results are then presented in the form of power coefficients as a function of Reynolds number and tip speed ratio. Because of gearbox power loss, a preliminary study has also been completed to determine the gearbox efficiency, and the resulting correction has been applied to the data set.
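The matching argument can be checked with a one-line estimate: gas viscosity is nearly pressure-independent while density scales with pressure, so Re = ρUD/μ for a sub-metre model at 220 bar can reach the full-scale value. The numbers below are illustrative, not the experiment's:

    RHO0, MU = 1.2, 1.8e-5        # sea-level air density (kg/m^3), viscosity (Pa s)

    def reynolds(pressure_bar, speed_m_s, diameter_m):
        rho = RHO0 * pressure_bar  # ideal-gas density scaling at constant temperature
        return rho * speed_m_s * diameter_m / MU

    full_scale = reynolds(1.0, 10.0, 120.0)    # 120 m rotor in a 10 m/s wind
    model      = reynolds(220.0, 10.0, 0.6)    # 0.6 m model at 220 bar
    print(f"full-scale Re = {full_scale:.2e}, model Re = {model:.2e}")  # comparable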
Gstat: a program for geostatistical modelling, prediction and simulation
NASA Astrophysics Data System (ADS)
Pebesma, Edzer J.; Wesseling, Cees G.
1998-01-01
Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ASCII and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
Review of the ionospheric model for the long wave prediction capability. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, J.A.
1992-11-01
The Naval Command, Control and Ocean Surveillance Center's Long Wave Prediction Capability (LWPC) has a built-in ionospheric model. The latter was defined after a review of the literature comparing measurements with calculations. Subsequent to this original specification of the ionospheric model in the LWPC, a new collection of data was obtained and analyzed. The new data were collected aboard a merchant ship named the Callaghan during a series of trans-Atlantic trips over a period of a year. This report presents a detailed analysis of the ionospheric model currently in use by the LWPC and the new model suggested by the shipboard measurements. We conclude that, although the fits to measurements are almost the same between the two models examined, the current LWPC model should be used because it is better than the new model for nighttime conditions at long ranges. This conclusion supports the primary use of the LWPC model for coverage assessment, which requires a valid model at the limits of a transmitter's reception.
Resilient parenting of children at developmental risk across middle childhood.
Ellingsen, Ruth; Baker, Bruce L; Blacher, Jan; Crnic, Keith
2014-06-01
This paper focuses on factors that might influence positive parenting during middle childhood when a parent faces formidable challenges defined herein as "resilient parenting." Data were obtained from 162 families at child age 5 and 8 years. Using an adapted ABCX model, we examined three risk domains (child developmental delay, child ADHD/ODD diagnosis, and low family income) and three protective factors (mother's education, health, and optimism). The outcome of interest was positive parenting as coded from mother-child interactions. We hypothesized that each of the risk factors would predict poorer parenting and that higher levels of each protective factor would buffer the risk-parenting relationship. Positive parenting scores decreased across levels of increasing risk. Maternal optimism appeared to be a protective factor for resilient parenting concurrently at age 5 and predictively to age 8, as well as a predictor of positive change in parenting from age 5 to age 8, above and beyond level of risk. Maternal education and health were not significantly protective for positive parenting. Limitations, future directions, and implications for intervention are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Rezvanian, O.; Brown, C.; Zikry, M. A.; Kingon, A. I.; Krim, J.; Irving, D. L.; Brenner, D. W.
2008-07-01
It is shown that measured and calculated time-dependent electrical resistances of closed gold Ohmic switches in radio frequency microelectromechanical system (rf-MEMS) devices are well described by a power law that can be derived from a single asperity creep model. The analysis reveals that the exponent and prefactor in the power law arise, respectively, from the coefficient relating creep rate to applied stress and the initial surface roughness. The analysis also shows that resistance plateaus are not, in fact, limiting resistances but rather result from the small coefficient in the power law. The model predicts that it will take a longer time for the contact resistance to attain a power law relation with each successive closing of the switch due to asperity blunting. Analysis of the first few seconds of the measured resistance for three successive openings and closings of one of the MEMS devices supports this prediction. This work thus provides guidance toward the rational design of Ohmic contacts with enhanced reliabilities by better defining variables that can be controlled through material selection, interface processing, and switch operation.
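A power-law resistance decay with a small exponent is easy to mistake for a plateau; a sketch of fitting the assumed form R(t) = A·t^(-n) with SciPy (synthetic data, illustrative rather than the paper's calibration):

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(t, a, n):
        return a * t**(-n)

    t = np.linspace(0.1, 5.0, 200)                          # s after switch closure
    r_meas = power_law(t, 1.5, 0.03) * (1 + 0.01 * np.random.randn(t.size))

    (a, n), _ = curve_fit(power_law, t, r_meas, p0=(1.0, 0.1))
    print(f"A = {a:.3f} ohm, n = {n:.3f}")   # n << 1 gives an apparent 'plateau'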
Tscharke, David C; Karupiah, Gunasegaran; Zhou, Jie; Palmore, Tara; Irvine, Kari R; Haeryfar, S M Mansour; Williams, Shanicka; Sidney, John; Sette, Alessandro; Bennink, Jack R; Yewdell, Jonathan W
2005-01-03
The large size of poxvirus genomes has stymied attempts to identify determinants recognized by CD8+ T cells and greatly impeded development of mouse smallpox vaccination models. Here, we use a vaccinia virus (VACV) expression library containing each of the predicted 258 open reading frames to identify five peptide determinants that account for approximately half of the VACV-specific CD8+ T cell response in C57BL/6 mice. We show that the primary immunodominance hierarchy is greatly affected by the route of VACV infection and the poxvirus strain used. Modified vaccinia virus Ankara (MVA), a candidate replacement smallpox vaccine, failed to induce responses to two of the defined determinants. This could not be predicted by genomic comparison of viruses and is not due strictly to limited MVA replication in mice. Several determinants are immunogenic in cowpox and ectromelia (mousepox) virus infections, and immunization with the immunodominant determinant provided significant protection against lethal mousepox. These findings have important implications for understanding poxvirus immunity in animal models and benchmarking immune responses to poxvirus vaccines in humans.
Coulon, A.; Fitzpatrick, J.W.; Bowman, R.; Stith, B.M.; Makarewich, C.A.; Stenzler, L.M.; Lovette, I.J.
2008-01-01
The delimitation of populations, defined as groups of individuals linked by gene flow, is possible by the analysis of genetic markers and also by spatial models based on dispersal probabilities across a landscape. We combined these two complementary methods to define the spatial pattern of genetic structure among remaining populations of the threatened Florida scrub-jay, a species for which dispersal ability is unusually well-characterized. The range-wide population was intensively censused in the 1990s, and a metapopulation model defined population boundaries based on predicted dispersal-mediated demographic connectivity. We subjected genotypes from more than 1000 individual jays screened at 20 microsatellite loci to two Bayesian clustering methods. We describe a consensus method for identifying common features across many replicated clustering runs. Ten genetically differentiated groups exist across the present-day range of the Florida scrub-jay. These groups are largely consistent with the dispersal-defined metapopulations, which assume very limited dispersal ability. Some genetic groups comprise more than one metapopulation, likely because these genetically similar metapopulations were sundered only recently by habitat alteration. The combined reconstructions of population structure based on genetics and dispersal-mediated demographic connectivity provide a robust depiction of the current genetic and demographic organization of this species, reflecting past and present levels of dispersal among occupied habitat patches. The differentiation of populations into 10 genetic groups adds urgency to management efforts aimed at preserving what remains of genetic variation in this dwindling species, by maintaining viable populations of all genetically differentiated and geographically isolated populations.
49 CFR 71.1 - Limits defined; exceptions authorized for certain rail operating purposes only.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Transportation STANDARD TIME ZONE BOUNDARIES § 71.1 Limits defined; exceptions authorized for certain rail... zones established by section 1 of the Standard Time Act, as amended by section 4 of the Uniform Time Act... carriers, whose operations cross the time zone boundaries prescribed by this part, authorizing them to...
40 CFR Table 3 to Subpart Jjjjjj... - Operating Limits for Boilers With Emission Limits
Code of Federal Regulations, 2013 CFR
2013-07-01
... as defined in § 63.11237. 4. Dry sorbent or activated carbon injection control Maintain the 30-day rolling average sorbent or activated carbon injection rate at or above the minimum sorbent injection rate or minimum activated carbon injection rate as defined in § 63.11237. When your boiler operates at...
NASA Technical Reports Server (NTRS)
Mosier, Carol
2015-01-01
The presentation will be given at the Annual Thermal Fluids Analysis Workshop (TFAWS 2015, NCTS 21070-15) hosted by the Goddard Space Flight Center (GSFC) Thermal Engineering Branch (Code 545). The PowerPoint presentation details the process of defining limits throughout the lifecycle of a flight project.
Rakha, Emad A.; Badve, Sunil; Eusebi, Vincenzo; Reis-Filho, Jorge S.; Fox, Stephen B.; Dabbs, David J.; Decker, Thomas; Hodi, Zsolt; Ichihara, Shu; Lee, Andrew HS.; Palacios, José; Richardson, Andrea L.; Vincent-Salomon, Anne; Schmitt, Fernando C.; Tan, Puay-Hoon; Tse, Gary M.; Ellis, Ian O.
2016-01-01
Breast lesions comprise a family of heterogeneous entities with variable patterns of presentation, morphology and clinical behaviour. The majority of breast lesions are traditionally classified into benign and malignant conditions and their behaviour can, in the vast majority of cases, be predicted with a reasonable degree of accuracy. However, there remain lesions which show borderline features and lie in a grey-zone between benign and malignant, as their behaviour cannot be predicted reliably. Defined pathological categorisation of such lesions is challenging, and for some entities is recognised to be subjective and to include a range of diagnoses and forms of terminology which may trigger over-treatment or under-treatment. The rarity of these lesions makes acquisition of clinical evidence problematic and limits the development of a sufficient evidence base to support informed decision making by clinicians and patients. Emerging molecular evidence is providing a greater understanding of the biology of these lesions, but this may or may not be reflected in their clinical behaviour. Herein we discuss some breast lesions that are associated with uncertainty regarding classification, behaviour and hence management. These include biologically invasive malignant lesions associated with uncertain metastatic potential, such as low-grade adenosquamous carcinoma, low-grade fibromatosis-like spindle cell carcinoma and encapsulated papillary carcinoma. Other lesions remain of uncertain malignant nature, such as mammary cylindroma, atypical microglandular adenosis, mammary pleomorphic adenoma and infiltrating epitheliosis. The concepts of two categories, 1) breast lesions of uncertain malignant nature and 2) breast lesions of limited metastatic potential, are proposed, with details of which histological entities could be included in each category, and their management implications are discussed. PMID:26348644
Indoor determinants of dustborne allergens in Mexican homes
Hernández-Cadena, Leticia; Zeldin, Darryl C.; Sever, Michelle L.; Sly, Peter D.; London, Stephanie J.; Escamilla-Nuñez, María Consuelo; Romieu, Isabelle
2015-01-01
Exposure to indoor allergens represents a significant risk factor for allergies and asthma in several parts of the world. In Mexico, few studies have evaluated indoor allergens, including cat, dog, and mouse allergens and the factors that predict their presence. This study evaluates the main environmental and household predictors of high prenatal allergen levels and multiple allergen exposures in a birth cohort from Mexico City. A cross-sectional study was conducted as part of a birth cohort study of 1094 infants recruited during pregnancy and followed until delivery. We collected dust samples in a subset of 264 homes and assessed environmental factors. Der p 1, Der f 1, dust mite group 2, Fel d 1, Can f 1, Rat n 1, Mus m 1, and Bla g 2 concentrations in dust samples were measured using immunoassays. To define detectable allergen levels, the lowest limits of detection for each allergen were taken as cutoff points. Overall allergen exposure was considered high when four or more allergens exceeded detectable levels in the same household. Logistic regression was used for predictive models. Eighty-five percent of homes had at least one allergen in dust over the detection limit, 52.1% had high exposure (four or more allergens above detectable limits), and 11.7% of homes had detectable levels for more than eight allergens. Der p 1, Der p 2, Mus m 1, and Fel d 1 were the most frequent allergens detected. Each allergen had both common and distinct predictors. The main predictors of a high multiple allergen index were the size of the home, pesticide use, mother's age, mother as homemaker, and season. Increased indoor environmental allergen exposure is mainly related to sociodemographic factors and household cleaning. PMID:25715241
HIV-associated lung cancer: survival in an unselected cohort.
Hoffmann, Christian; Kohrs, Fabienne; Sabranski, Michael; Wolf, Eva; Jaeger, Hans; Wyen, Christoph; Siehl, Jan; Baumgarten, Axel; Hensel, Manfred; Jessen, Arne; Schaaf, Bernhard; Vogel, Martin; Bogner, Johannes; Horst, Heinz-August; Stephan, Christoph
2013-10-01
Lung cancer is one of the most common non-AIDS-defining malignancies in HIV-infected patients. However, data on clinical outcome and prognostic factors are scarce. This was a national German multicentre, retrospective cohort analysis of all cases of lung cancer seen in HIV-infected individuals from 2000 through 2010. Survival was analyzed with respect to the use of antiretroviral therapy (ART), specific lung cancer therapies, and other potential prognostic factors. A total of 72 patients (mean age 55.5 y, CD4 T-cells 383/μl) were evaluated in this analysis. At time of lung cancer diagnosis, 86% were on ART. Of these, 79% had undetectable HIV-1 RNA (< 50 copies/ml) for a mean duration of 4.0 y. All but 1 patient were current or former heavy smokers (mean 42 pack-years). The median estimated overall survival was 1.08 y, with a 2-y overall survival of 24%. The prognosis did not improve during the observation time. A limited lung cancer stage of I-IIIA was associated with better overall survival when compared with the advanced stages IIIB/IV (p = 0.0003). Other factors predictive of improved overall survival were better performance status, CD4 T-cells > 200/μl, and a non-intravenous drug use transmission risk for HIV. Currently, most cases of lung cancer occur in the setting of limited immune deficiency and long-lasting viral suppression. As in HIV-negative cases, the clinical stage of lung cancer is highly predictive of survival, and long-term overall survival can only be achieved at the limited stages. The still high mortality underscores the importance of smoking cessation strategies in HIV-infected patients.
Martens, Pieter; Verbrugge, Frederik H; Boonen, Levinia; Nijst, Petra; Dupont, Matthias; Mullens, Wilfried
2018-01-01
Guidelines advocate down-titration of loop diuretics in chronic heart failure (CHF) when patients have no signs of volume overload. Limited data are available on the expected success rate of this practice or on how routine diagnostic tests might help steer the process. Fifty ambulatory CHF patients on stable neurohumoral blocker/diuretic therapy for at least 3 months, without any clinical sign of volume overload, were prospectively included to undergo loop diuretic down-titration. All patients underwent the same pre-down-titration evaluation, consisting of dyspnea scoring, physical examination, transthoracic echocardiography (diastolic function, right ventricular function, cardiac filling pressures, and valvular disease), and blood sampling (serum creatinine, plasma NT-pro-BNP, and neurohormones). The loop diuretic maintenance dose was subsequently reduced by 50%, or stopped if the dose was ≤40 mg furosemide equivalents. Successful down-titration was defined as a persistent dose reduction after 30 days without a weight increase >1.5 kg or new-onset symptoms of worsening heart failure. At 30-day follow-up, down-titration was successful in 62% (n=31). In 12/19 patients exhibiting down-titration failure, failure occurred within the first week. Physical examination, transthoracic echocardiography, and laboratory analysis had limited capability to predict down-titration success/failure (positive likelihood ratios below 1.5, or area under the curve [AUC] not statistically different from 0.5). Loop diuretic down-titration is feasible in a majority of stable CHF patients in whom the treating clinician felt continued loop diuretic therapy was unnecessary to sustain euvolemia. Importantly, routine diagnostics that suggest euvolemia have limited impact on the post-test probability. Copyright © 2017 Elsevier B.V. All rights reserved.
The role of urinary fractionated metanephrines in the diagnosis of phaeochromocytoma
Jeyaraman, Kanakamani; Natarajan, Vasanthi; Thomas, Nihal; Jacob, Paul Mazhuvanchary; Nair, Aravindan; Shanthly, Nylla; Oommen, Regi; Varghese, Gracy; Joseph, Fleming Jude; Seshadri, Mandalam Subramaniam; Rajaratnam, Simon
2013-01-01
Background & objectives: Plasma and urinary metanephrines are used as screening tests for the diagnosis of phaeochromocytoma. The recommended cut-off levels are not standardized. This study was conducted to identify a cut-off level for 24 h urinary fractionated metanephrines, viz. metanephrine (uMN) and normetanephrine (uNMN), measured by enzyme immunoassay, for the diagnosis of phaeochromocytoma. Methods: Consecutive patients suspected to have phaeochromocytoma were included in the study. uMN and uNMN in 24 h urine samples were measured using a commercial ELISA kit. Results: Overall, 72 patients were included over a period of 18 months. Twenty patients had histopathologically confirmed phaeochromocytoma, and in 52 patients phaeochromocytoma was ruled out. Using the upper limit of normal stated by the assay manufacturer as the cut-off, uMN >350 μg/day had a low sensitivity and uNMN >600 μg/day had a poor specificity. By increasing the cut-off value of uNMN to twice the upper limit, specificity increased significantly without much loss in sensitivity. Combining uMN and uNMN at cut-offs of twice the upper limit improved the diagnostic performance: sensitivity 95%; specificity 92.3%; positive predictive value (PPV) 82.6%; negative predictive value (NPV) 98%. In subsets of patients with a variable pretest probability for phaeochromocytoma, the PPV decreased as the occurrence of these tumours decreased, while the NPV remained at 100 per cent. Interpretation & conclusions: ELISA is a simple and reliable method for measuring uMN and uNMN. The test has a good NPV and can be used as an initial screening test for ruling out phaeochromocytoma. Each hospital will have to define the cut-off value for the assay being used, choosing a proper control population. PMID:23563375
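The dependence of PPV on pretest probability follows directly from Bayes' rule. A minimal sketch using the combined-test sensitivity and specificity reported above (0.95 and 0.923); the prevalence grid is illustrative:

```python
def ppv_npv(sens, spec, prevalence):
    """Post-test probabilities from sensitivity, specificity, and prevalence."""
    tp = sens * prevalence              # expected fractions of the population
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.50, 0.28, 0.10, 0.01):   # pretest probability of phaeochromocytoma
    ppv, npv = ppv_npv(0.95, 0.923, prev)
    print(f"pretest {prev:4.2f}: PPV {ppv:.2f}, NPV {npv:.3f}")
```

As the loop shows, PPV collapses at low prevalence while NPV stays near 1, which is why the test is proposed for ruling out rather than ruling in disease.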
Dunham, J.B.; Cade, B.S.; Terrell, J.W.
2002-01-01
We used regression quantiles to model potentially limiting relationships between the standing crop of cutthroat trout Oncorhynchus clarki and measures of stream channel morphology. Regression quantile models indicated that variation in fish density was inversely related to the width:depth ratio of streams but not to stream width or depth alone. The temporal and spatial stability of model predictions was examined across years and streams, respectively. Variation in fish density with width:depth ratio (10th-90th regression quantiles) modeled for streams sampled in 1993-1997 predicted the variation observed in 1998-1999, indicating similar habitat relationships across years. Both linear and nonlinear models described the limiting relationships well, the latter performing slightly better. Although estimated relationships were transferable in time, results were strongly dependent on the influence of spatial variation in fish density among streams. Density changes with width:depth ratio in a single stream were responsible for the significant (P < 0.10) negative slopes estimated for the higher quantiles (>80th). This suggests that stream-scale factors other than width:depth ratio play a more direct role in determining population density. Much of the variation in densities of cutthroat trout among streams was attributed to the occurrence of nonnative brook trout Salvelinus fontinalis (a possible competitor) or connectivity to migratory habitats. Regression quantiles can be useful for estimating the effects of limiting factors when ecological responses are highly variable, but our results indicate that spatiotemporal variability in the data should be explicitly considered. In this study, data from individual streams and stream-specific characteristics (e.g., the occurrence of nonnative species and habitat connectivity) strongly affected our interpretation of the relationship between width:depth ratio and fish density.
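A minimal sketch of regression quantiles for a limiting relationship, fitting the median and upper quantiles of density against width:depth ratio with statsmodels; the data are synthetic placeholders constructed so that the upper bound of density falls as width:depth rises:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
wd = rng.uniform(5, 60, 200)                 # width:depth ratio per site
density = rng.exponential(50 / wd)           # toy densities: ceiling shrinks with wd
df = pd.DataFrame({"density": density, "width_depth": wd})

for q in (0.50, 0.80, 0.90):                 # median and upper regression quantiles
    fit = smf.quantreg("density ~ width_depth", df).fit(q=q)
    print(q, fit.params["width_depth"], fit.pvalues["width_depth"])
```

The upper quantiles, not the mean, estimate the limiting relationship, which is the point of the approach described above.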
NASA Astrophysics Data System (ADS)
Cherala, Anshuman; Sreenivasan, S. V.
2018-12-01
Complex nanoshaped structures (defined here as shapes enabled by sharp corners with radius of curvature <5 nm) have been shown to enable emerging nanoscale applications in energy, electronics, optics, and medicine. Fabricating such nanoshapes at high throughput is well beyond the capabilities of advanced optical lithography. While the highest-resolution e-beam processes (Gaussian beam tools with non-chemically amplified resists) can achieve <5 nm resolution, this is only available at very low throughputs. Large-area e-beam processes, needed for photomasks and imprint templates, are limited to 18 nm half-pitch lines and spaces and 20 nm half-pitch hole patterns. Using nanoimprint lithography, we have previously demonstrated the ability to fabricate precise diamond-like nanoshapes with 3 nm radius corners over large areas. An exemplary shaped silicon nanowire ultracapacitor device was fabricated with these nanoshaped structures at 100 nm half-pitch. The device significantly exceeded standard nanowire capacitor performance (by 90%) due to the relative increase in surface area per unit projected area enabled by the nanoshape. Going beyond this previous work, in this paper we explore the scaling of these nanoshaped structures to 10 nm half-pitch and below. At these scales a new "shape retention" resolution limit is observed due to polymer relaxation in imprint resists, which cannot be predicted with a linear elastic continuum model. An all-atom molecular dynamics model of the nanoshape structure was developed here to study this shape retention phenomenon and accurately predict the polymer relaxation. The atomistic framework is an essential modeling and design tool to extend the capability of imprint lithography to sub-10 nm nanoshapes. It has been used here to propose process refinements that maximize shape retention and to design template assist features (design for nanoshape retention) that achieve targeted nanoshapes.
Ingham, Steven C; Fanslau, Melody A; Burnham, Greg M; Ingham, Barbara H; Norback, John P; Schaffner, Donald W
2007-06-01
A computer-based tool (available at: www.wisc.edu/foodsafety/meatresearch) was developed for predicting pathogen growth in raw pork, beef, and poultry meat. The tool, THERM (temperature history evaluation for raw meats), predicts the growth of pathogens in pork and beef (Escherichia coli O157:H7, Salmonella serovars, and Staphylococcus aureus) and on poultry (Salmonella serovars and S. aureus) during short-term temperature abuse. The model was developed as follows: 25-g samples of raw ground pork, beef, and turkey were inoculated with a five-strain cocktail of the target pathogen(s) and held at isothermal temperatures from 10 to 43.3 degrees C. Log CFU per sample data were obtained for each pathogen and used to determine lag-phase duration (LPD) and growth rate (GR) with DMFit software. The LPD and GR were used to develop the THERM predictive tool, into which chronological time and temperature data for raw meat processing and storage are entered. THERM then predicts a delta log CFU value for the desired pathogen-product combination. The accuracy of THERM was tested in 20 different inoculation experiments involving multiple products (coarse-ground beef, skinless chicken breast meat, turkey scapula meat, and ground turkey) and temperature-abuse scenarios. With the time-temperature data from each experiment, THERM accurately predicted pathogen growth or no growth (with growth defined as delta log CFU > 0.3) in 67, 85, and 95% of the experiments with E. coli O157:H7, Salmonella serovars, and S. aureus, respectively, and yielded fail-safe predictions in the remaining experiments. We conclude that THERM is a useful tool for qualitatively predicting pathogen behavior (growth vs. no growth) in raw meats. Potential applications include evaluating process deviations and critical limits under the HACCP (hazard analysis critical control point) system.
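The core logic described, consume the lag phase first, then accumulate growth at the temperature-dependent rate, can be sketched in a few lines. This is a minimal illustration assuming linear interpolation of LPD and GR between isothermal conditions; the lookup values are invented, not the published isothermal data:

```python
import numpy as np

temps   = np.array([10.0, 20.0, 30.0, 43.3])   # isothermal test temperatures, deg C
lpd_h   = np.array([40.0, 10.0,  3.0,  2.0])   # lag-phase duration, h (illustrative)
gr_logh = np.array([0.01, 0.05, 0.20, 0.30])   # growth rate, log CFU per h (illustrative)

def predicted_growth(history):
    """history: list of (duration_h, temp_C) steps in chronological order."""
    lag_used, delta_log = 0.0, 0.0
    for dt, T in history:
        lpd = np.interp(T, temps, lpd_h)
        if lag_used < 1.0:                            # still inside the lag phase
            use = min(dt, (1.0 - lag_used) * lpd)     # hours spent finishing the lag
            lag_used += use / lpd
            dt -= use
        if dt > 0:                                    # growth after the lag is consumed
            delta_log += dt * np.interp(T, temps, gr_logh)
    return delta_log

print(predicted_growth([(2, 25.0), (4, 35.0)]))  # delta log CFU for one abuse scenario
```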
Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.
Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup
2017-05-31
Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and a test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment), while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our models performed well at predicting responders but not non-responders (i.e., many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
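A minimal sketch of the modelling step described: an L1-penalized (LASSO-type) logistic regression with a random train/test split, using scikit-learn. The features and labels are random placeholders standing in for the clinical and metabolite inputs, and the penalty strength is an arbitrary choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.random((120, 25))            # clinical + serum metabolite features (placeholder)
y = rng.integers(0, 2, 120)          # 1 = complete abstinence (responder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=40, random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("sensitivity:", recall_score(y_te, pred))                 # responders caught
print("specificity:", recall_score(y_te, pred, pos_label=0))    # non-responders caught
print("features kept:", int(np.sum(clf.coef_ != 0)))            # LASSO sparsity
```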
Leigh, Margaret W; Ferkol, Thomas W; Davis, Stephanie D; Lee, Hye-Seung; Rosenfeld, Margaret; Dell, Sharon D; Sagel, Scott D; Milla, Carlos; Olivier, Kenneth N; Sullivan, Kelli M; Zariwala, Maimoona A; Pittman, Jessica E; Shapiro, Adam J; Carson, Johnny L; Krischer, Jeffrey; Hazucha, Milan J; Knowles, Michael R
2016-08-01
Primary ciliary dyskinesia (PCD), a genetically heterogeneous, recessive disorder of motile cilia, is associated with distinct clinical features. Diagnostic tests, including ultrastructural analysis of cilia, nasal nitric oxide measurements, and molecular testing for mutations in PCD genes, have inherent limitations. To define a statistically valid combination of systematically defined clinical features that strongly associates with PCD in children and adolescents. Investigators at seven North American sites in the Genetic Disorders of Mucociliary Clearance Consortium prospectively and systematically assessed individuals (aged 0-18 yr) referred because of high suspicion for PCD. The investigators defined specific clinical questions for the clinical report form based on expert opinion. Diagnostic testing was performed using standardized protocols and included nasal nitric oxide measurement, ciliary biopsy for ultrastructural analysis of cilia, and molecular genetic testing for PCD-associated genes. Final diagnoses were assigned as "definite PCD" (hallmark ultrastructural defects and/or two mutations in a PCD-associated gene), "probable/possible PCD" (no ultrastructural defect or genetic diagnosis, but compatible clinical features and nasal nitric oxide level in the PCD range), and "other diagnosis or undefined." Criteria were developed to define early childhood clinical features on the basis of responses to multiple specific queries. Each defined feature was tested by logistic regression. Sensitivity and specificity analyses were conducted to define the most robust set of clinical features associated with PCD. Of 534 participants aged 18 years and younger, 205 were identified as having "definite PCD" (including 164 with two mutations in a PCD-associated gene), 187 were categorized as "other diagnosis or undefined," and 142 were defined as having "probable/possible PCD." Participants with "definite PCD" were compared with the "other diagnosis or undefined" group. Four criteria-defined clinical features were statistically predictive of PCD: laterality defect; unexplained neonatal respiratory distress; early-onset, year-round nasal congestion; and early-onset, year-round wet cough (adjusted odds ratios of 7.7, 6.6, 3.4, and 3.1, respectively). By number of criteria-defined features present, sensitivity and specificity were 0.21 and 0.99 for four features, 0.50 and 0.96 for three, and 0.80 and 0.72 for two. Systematically defined early clinical features could help identify children, including infants, likely to have PCD. Clinical trial registered with ClinicalTrials.gov (NCT00323167).
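The sensitivity/specificity-by-feature-count analysis amounts to thresholding a count of binary criteria. A minimal sketch with randomly generated placeholder data; only the four feature names follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
# columns: laterality defect, neonatal respiratory distress,
# year-round nasal congestion, year-round wet cough (placeholder values)
features = rng.integers(0, 2, size=(392, 4))
pcd = rng.integers(0, 2, size=392)            # 1 = definite PCD (placeholder labels)

counts = features.sum(axis=1)
for k in (2, 3, 4):                           # call PCD if >= k features present
    positive = counts >= k
    sens = positive[pcd == 1].mean()
    spec = (~positive)[pcd == 0].mean()
    print(f">= {k} features: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Raising the threshold trades sensitivity for specificity, which is exactly the pattern in the reported 0.21/0.99 versus 0.80/0.72 figures.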
The development of methods for predicting and measuring distribution patterns of aerial sprays
NASA Technical Reports Server (NTRS)
Ormsbee, A. I.; Bragg, M. B.; Maughmer, M. D.
1979-01-01
The capability of conducting scale model experiments which involve the ejection of small particles into the wake of an aircraft close to the ground is developed. A set of relationships used to scale small-sized dispersion studies to full-size results is experimentally verified and, with some qualifications, basic deposition patterns are presented. In the process of validating these scaling laws, the basic experimental techniques used in conducting such studies, both with and without an operational propeller, were developed. The procedures that evolved are outlined. The envelope of test conditions that can be accommodated in the Langley Vortex Research Facility, developed theoretically, is verified using a series of vortex trajectory experiments that help to define the limitations due to wall interference effects for models of different sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagoner, C.L.; Wessel, R.A.
1986-01-01
Empiricism has traditionally been used to relate laboratory and pilot-scale measurements of fuel characteristics with the design, performance, and the slagging and fouling behavior of steam generators. Currently, a new engineering approach is being evaluated. The goal is to develop and use calculations and measurements from several engineering disciplines that exceed the demonstrated limitations of present empirical techniques for predicting slagging/fouling behavior. In Part I of this paper, the generic approach to deposits and boiler performance is defined and a matrix of engineering concepts is described. General relationships are presented for assessing the effects of deposits and sootblowing on the real-time performance of heat transfer surfaces in pilot- and commercial-scale steam generators.
Antonelli, Cristian; Mecozzi, Antonio; Shtaif, Mark; Winzer, Peter J
2015-02-09
Mode-dependent loss (MDL) is a major factor limiting the achievable information rate in multiple-input multiple-output space-division multiplexed systems. In this paper we show that its impact on system performance, which we quantify in terms of the capacity reduction relative to a reference MDL-free system, may depend strongly on the operation of the inline optical amplifiers. This dependency is particularly strong in low mode-count systems. In addition, we discuss ways in which the signal-to-noise ratio of the MDL-free reference system can be defined and quantify the differences in the predicted capacity loss. Finally, we stress the importance of correctly accounting for the effect of MDL on the accumulation of amplification noise.
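To make the capacity-reduction comparison concrete, here is a minimal Monte Carlo sketch: per-mode power gains are drawn with an illustrative MDL spread, and the resulting parallel-mode capacity is compared with an equal-power MDL-free reference. The gain model, normalization, and all parameter values are assumptions for illustration, not the paper's system model; as the abstract notes, how the MDL-free reference SNR is defined changes the predicted loss:

```python
import numpy as np

rng = np.random.default_rng(1)
modes, snr_db, mdl_db, trials = 6, 15.0, 6.0, 2000
snr = 10 ** (snr_db / 10)

def capacity(gains):
    """Parallel-mode Shannon capacity bound, bit/s/Hz."""
    return np.sum(np.log2(1 + snr * gains))

c_ref = capacity(np.ones(modes))       # MDL-free reference at the same SNR
losses = []
for _ in range(trials):
    g_db = rng.uniform(-mdl_db / 2, mdl_db / 2, modes)  # per-mode gain spread, dB
    g = 10 ** (g_db / 10)
    g /= g.mean()     # normalization choice: fix mean gain to the reference value
    losses.append(c_ref - capacity(g))
print("mean capacity loss:", np.mean(losses), "bit/s/Hz")
```

By concavity of the log, any spread of gains at fixed total power reduces capacity; the size of the reduction here hinges on the normalization line, echoing the paper's point about reference-system definitions.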
The Synchrotron Shock Model Confronts a "Line of Death" in the BATSE Gamma-Ray Burst Data
NASA Technical Reports Server (NTRS)
Preece, Robert D.; Briggs, Michael S.; Mallozzi, Robert S.; Pendleton, Geoffrey N.; Paciesas, W. S.; Band, David L.
1998-01-01
The synchrotron shock model (SSM) for gamma-ray burst emission makes a testable prediction: the observed low-energy power-law photon number spectral index cannot exceed -2/3 (where the photon model is defined with a positive index: $dN/dE \propto E^{\alpha}$). We have collected time-resolved spectral fit parameters for over 100 bright bursts observed by the Burst And Transient Source Experiment on board the Compton Gamma Ray Observatory. Using this database, we find 23 bursts in which the spectral index limit of the SSM is violated. We discuss elements of the analysis methodology that affect the robustness of this result, as well as some of the escape hatches left for the SSM by theory.
Ballo, J M; Dunne, M J; McMeekin, R R
1978-01-01
Digital simulation of aircraft-accident kinematics has heretofore been used almost exclusively as a design tool to explore structural load limits, precalculate decelerative forces at various cabin stations, and describe the effect of protective devices in the crash environment. In an effort to determine the value of digital computer simulation of fatal aircraft accidents, a fatality involving an ejection-system failure (out-of-envelope ejection) was modeled, and the injuries actually incurred were compared to those predicted; good agreement was found. The simulation of fatal aircraft accidents is advantageous because of a well-defined endpoint (death), lack of therapeutic intervention, and a static anatomic situation that can be minutely investigated. Such simulation techniques are a useful tool in the study of experimental trauma.
[Non alcoholic fatty liver. A frequent entity with an unknown outcome].
Barisio D'Angelo, María Gabriela; Mariel Actis, Andrea; Outomuro, Delia
2009-01-01
Non-alcoholic fatty liver disease (NAFLD), defined as excessive fat accumulation in the hepatocytes, has a prevalence of approximately 15 to 25%. Frequently associated risk factors for NAFLD are obesity, type 2 diabetes, and dyslipidemia. It has been proposed that mitochondrial dysfunction plays a crucial role in the development of the disease. On the other hand, attention has focused on the insulin resistance syndrome, the only metabolic alteration strongly associated with this condition. The disease is suspected in individuals with features of insulin resistance, such as the metabolic syndrome, and also in those with elevated serum aminotransferase levels. Different tests with biochemical markers have been proposed to predict the development of fibrosis or steatohepatitis. Therapeutic options in NAFLD patients are limited, and weight loss remains the most recommended one.
Application of Landsat imagery to problems of petroleum exploration in Qaidam Basin, China
Bailey, G.B.; Anderson, P.D.
1982-01-01
Tertiary and Quaternary nonmarine, petroleum-bearing sedimentary rocks have been extensively deformed by compressive forces. These forces created many folds which are current targets of Chinese exploration programs. Image-derived interpretations of folds, strike-slip faults, thrust faults, normal or reverse faults, and fractures compared very favorably, in terms of locations and numbers mapped, with Chinese data compiled from years of extensive field mapping. Many potential hydrocarbon trapping structures were precisely located. Orientations of major structural trends defined from Landsat imagery correlate well with those predicted for the area based on global tectonic theory. These correlations suggest that similar orientations exist in the eastern half of the basin where folded rocks are mostly obscured by unconsolidated surface sediments and where limited exploration has occurred.--Modified journal abstract.
Lawes, Timothy; Lopez-Lozano, José-María; Nebot, Cesar A; Macartney, Gillian; Subbarao-Sharma, Rashmi; Wares, Karen D; Sinclair, Carolyn; Gould, Ian M
2017-02-01
Whereas many antibiotics increase the risk of Clostridium difficile infection through dysbiosis, epidemic C difficile ribotypes characterised by multidrug resistance might depend on antibiotic selection pressures arising from population use of specific drugs. We examined the effect of a national antibiotic stewardship intervention limiting the use of 4C antibiotics (fluoroquinolones, clindamycin, co-amoxiclav, and cephalosporins), together with other infection prevention and control strategies, on the clinical and molecular epidemiology of C difficile infections in northeast Scotland. We did a non-linear time-series analysis and quasi-experimental study to explore ecological determinants of clinical burdens from C difficile infections and ribotype distributions in a health board serving 11% of the Scottish population. Study populations were adults (aged ≥16 years) registered with primary care providers in the community (mean 455 508 inhabitants) or admitted to tertiary level, district general, or geriatric hospitals (mean 33 049 total admissions per month). A mixed persuasive-restrictive 4C antibiotic stewardship intervention was initiated in all populations on May 1, 2009. Other population-specific interventions considered included limiting indications for macrolide prescriptions, introduction of alcohol-based hand sanitiser, a national hand-hygiene campaign, national auditing and inspections of hospital environment cleanliness, and reminders to reduce inappropriate use of proton-pump inhibitors. The total effect of interventions was defined as the difference between observations and projected scenarios without intervention. Primary outcomes were prevalence density of C difficile infection per 1000 occupied bed-days in hospitals or per 100 000 inhabitant-days in the community. Between Jan 1, 1997, and Dec 31, 2012, we identified 4885 cases of hospital-onset C difficile infection among 1 289 929 admissions to study hospitals, and a further 1625 cases of community-onset C difficile infection among 455 508 adults registered in primary care. Use of 4C antibiotics was reduced by 50% in both hospitals (mean reduction 193 defined daily doses per 1000 occupied bed-days, 95% CI 45-328, p=0·008) and the community (1·85 defined daily doses per 1000 inhabitant-days, 95% CI 0·23-3·48, p=0·025) during antibiotic stewardship. Falling 4C use predicted rapid declines in the multidrug-resistant ribotypes R001 and R027. Hospital-onset C difficile infection prevalence densities were associated with fluoroquinolone, third-generation cephalosporin, macrolide, and carbapenem use exceeding hospital population-specific total use thresholds. Community-onset C difficile infection prevalence density was predicted by recent hospital C difficile infection rates, introduction of mandatory surveillance in individuals older than 65 years, and primary-care use of fluoroquinolones and clindamycin exceeding total use thresholds. Compared with predictions without intervention, C difficile infection prevalence density fell by 68% (mean reduction 1·01 per 1000 occupied bed-days, 0·27-1·76, p=0·008) in hospitals and 45% (0·083, 0·045-0·121 cases per 100 000 inhabitant-days, p<0·0001) in the community during antibiotic stewardship. We identified no significant effects from other interventions. Limiting population use of 4C antibiotics reduced selective pressures favouring multidrug-resistant epidemic ribotypes and was associated with substantial declines in total C difficile infections in northeast Scotland.
Efforts to control C difficile through antibiotic stewardship should account for ribotype distributions and non-linear effects. NHS Grampian Microbiology Endowment Fund. Copyright © 2017 Elsevier Ltd. All rights reserved.
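The published analysis used non-linear transfer-function time-series models; as a simpler illustration of the same idea, comparing observed outcomes against a projected no-intervention counterfactual, here is a minimal segmented-regression sketch with simulated monthly data. The series, effect sizes, and noise are placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(192.0)                  # Jan 1997 - Dec 2012, monthly
post = (months >= 148).astype(float)       # May 2009 onwards (stewardship start)
X = sm.add_constant(np.column_stack([months, post, post * (months - 148)]))
y = 1.5 - 0.001 * months - 0.9 * post + rng.normal(0, 0.1, 192)   # toy series

fit = sm.OLS(y, X).fit()
print(fit.params)   # intercept, pre-trend, level change at intervention, slope change
# Counterfactual: project the pre-intervention trend forward and subtract.
counterfactual = fit.params[0] + fit.params[1] * months
print("post-period effect:", np.mean((y - counterfactual)[months >= 148]))
```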
NASA Astrophysics Data System (ADS)
Kumar, Gautam; Maji, Kuntal
2018-04-01
This article deals with the prediction of strain- and stress-based forming limit curves for advanced high strength steel DP590 sheet using the Marciniak-Kuczynski (M-K) method. Three yield criteria, namely von Mises, Hill's 48, and Yld2000-2d, and two hardening laws, i.e., the Hollomon power law and the Swift hardening law, were considered to predict the forming limit curves (FLCs) for DP590 steel sheet. The effects of the imperfection factor and the initial groove angle on the predicted FLC were also investigated. The FLCs shifted upward as the imperfection factor value increased. The initial groove angle was found to have a significant effect on the limit strains on the left side of the FLC, and an insignificant effect on the right side for a certain range of strain paths. The limit strains were calculated at zero groove angle for the right side of the FLC, and a critical groove angle was used for the left side. The numerically predicted FLCs for the different combinations of yield criteria and hardening laws were compared with published experimental FLC results for DP590 steel sheet. The FLC predicted using the combination of the Yld2000-2d yield criterion and the Swift hardening law was in the best correlation with the experimental data. Stress-based forming limit curves (SFLCs) were also calculated from the limiting strain values obtained by the M-K model. Theoretically predicted SFLCs were compared with those obtained from the experimental forming limit strains. Stress-based forming limit curves were seen to represent the forming limits of DP590 steel sheet better than the strain-based forming limit curves.
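For reference, the two hardening laws named above are one-liners: Hollomon, sigma = K * eps^n, and Swift, sigma = K * (eps0 + eps)^n, which differ mainly at small strains where the Swift pre-strain eps0 keeps the stress finite. A minimal sketch with illustrative constants, not fitted DP590 values:

```python
K, n, eps0 = 980.0, 0.18, 0.002   # MPa, hardening exponent, pre-strain (illustrative)

def hollomon(eps):
    """Hollomon power law: flow stress from true plastic strain."""
    return K * eps ** n

def swift(eps):
    """Swift law: adds a pre-strain offset, finite stress at eps = 0."""
    return K * (eps0 + eps) ** n

for eps in (0.02, 0.10, 0.30):
    print(f"eps={eps:.2f}: Hollomon {hollomon(eps):6.1f} MPa, "
          f"Swift {swift(eps):6.1f} MPa")
```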
Johansen, Kirsten L; Dalrymple, Lorien S; Delgado, Cynthia; Kaysen, George A; Kornak, John; Grimes, Barbara; Chertow, Glenn M
2014-10-01
A well-accepted definition of frailty includes measurements of physical performance, which may limit its clinical utility. In a cross-sectional study, we compared prevalence and patient characteristics based on a frailty definition that uses self-reported function to the classic performance-based definition and developed a modified self-report-based definition. Prevalent adult patients receiving hemodialysis in 14 centers around San Francisco and Atlanta in 2009-2011. Self-report-based frailty definition in which a score lower than 75 on the Physical Function scale of the 36-Item Short Form Health Survey (SF-36) was substituted for gait speed and grip strength in the classic definition; modified self-report definition with optimized Physical Function score cutoff points derived in a development (one-half) cohort and validated in the other half. Performance-based frailty defined as 3 of the following: weight loss, weakness, exhaustion, low physical activity, and slow gait speed. 387 (53%) patients were frail based on self-reported function, of whom 209 (29% of the cohort) met the performance-based definition. Only 23 (3%) met the performance-based definition of frailty only. The self-report definition had 90% sensitivity, 64% specificity, 54% positive predictive value, 93% negative predictive value, and 72.5% overall accuracy. Intracellular water per kilogram of body weight and serum albumin, prealbumin, and creatinine levels were highest among nonfrail individuals, intermediate among those who were frail by self-report, and lowest among those who also were frail by performance. Age, percentage of body fat, and C-reactive protein level followed an opposite pattern. The modified self-report definition had better accuracy (84%; 95% CI, 79%-89%) and superior specificity (88%) and positive predictive value (67%). Our study did not address prediction of outcomes. Patients who meet the self-report-based but not the performance-based definition of frailty may represent an intermediate phenotype. A modified self-report definition can improve the accuracy of a questionnaire-based method of defining frailty. Published by Elsevier Inc.
2014-01-01
Background Dengue is a disease that has undergone significant expansion over the past hundred years. Understanding what factors limit the distribution of transmission can be used to predict current and future limits to further dengue expansion. While not the only factor, temperature plays an important role in defining these limits. Previous attempts to analyse the effect of temperature on the geographic distribution of dengue have not considered its dynamic intra-annual and diurnal change and its cumulative effects on mosquito and virus populations. Methods Here we expand an existing modelling framework with new temperature-based relationships to model an index proportional to the basic reproductive number of the dengue virus. This model framework is combined with high spatial and temporal resolution global temperature data to model the effects of temperature on Aedes aegypti and Ae. albopictus persistence and competence for dengue virus transmission. Results Our model predicted areas where temperature is not expected to permit transmission and/or Aedes persistence throughout the year. By reanalysing existing experimental data our analysis indicates that Ae. albopictus, often considered a minor vector of dengue, has comparable rates of virus dissemination to its primary vector, Ae. aegypti, and when the longer lifespan of Ae. albopictus is considered its competence for dengue virus transmission far exceeds that of Ae. aegypti. Conclusions These results can be used to analyse the effects of temperature and other contributing factors on the expansion of dengue or its Aedes vectors. Our finding that Ae. albopictus has a greater capacity for dengue transmission than Ae. aegypti is contrary to current explanations for the comparative rarity of dengue transmission in established Ae. albopictus populations. This suggests that the limited capacity of Ae. albopictus to transmit DENV is more dependent on its ecology than vector competence. The recommendations, which we explicitly outlined here, point to clear targets for entomological investigation. PMID:25052008
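As an illustration of how temperature-dependent traits combine into an index proportional to R0, here is a minimal sketch using Briere-type thermal response curves for biting rate, vector competence, and lifespan, ignoring the incubation-period survival term. The trait set and every parameter value are invented for illustration and are not those fitted in the study:

```python
import numpy as np

def briere(T, c, T0, Tm):
    """Briere thermal response: zero outside (T0, Tm)."""
    out = c * T * (T - T0) * np.sqrt(np.clip(Tm - T, 0, None))
    return np.where((T > T0) & (T < Tm), out, 0.0)

T  = np.arange(10.0, 40.0, 2.0)            # mean temperature, deg C
a  = briere(T, 2.0e-4, 13.0, 40.0)         # biting rate (illustrative parameters)
bc = briere(T, 8.0e-4, 17.0, 35.0)         # vector competence
lf = briere(T, 8.0e-2, 11.0, 37.0)         # adult lifespan

index = a**2 * bc * lf                     # suitability index proportional to R0
index /= index.max() if index.max() > 0 else 1.0
for t, s in zip(T, index):
    print(f"{t:4.1f} C  relative suitability {s:.2f}")
```

The index drops to zero outside the thermal limits, which is how a temperature model delimits where transmission and vector persistence are possible.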
Welling, Theodore H; Eddinger, Kevin; Carrier, Kristen; Zhu, Danting; Kleaveland, Tyler; Moore, Derek E; Schaubel, Douglas E; Abt, Peter L
2018-05-05
Orthotopic liver transplantation (OLT) and resection are effective treatments for hepatocellular carcinoma (HCC). However, optimizing OLT and limiting HCC recurrence remains a vexing problem. New HCC MELD and allocation algorithms provide greater observation of HCC patients, many while receiving local-regional treatments. Potential benefits of local-regional treatment for limiting HCC recurrence post-OLT remain incompletely understood. Therefore we aimed to define HCC specific prognostic factors affecting recurrence in a contemporary, multi-center cohort of HCC patients undergoing OLT and specifically whether local-regional therapies limited recurrence. We identified 441 patients undergoing OLT for HCC at three major transplant centers from 2008-2013. Cox regression was used to analyze covariate-adjusted recurrence and mortality rates post-OLT. "Bridging" or "down-staging" therapy was used in 238 patients (54%) with transarterial chemoembolization (TACE) being used in 170 (71%) of treated patients. The survival rate post-OLT was 88% and 78% at 1 and 3 years, respectively, with HCC recurrence (28% of deaths) significantly increasing mortality rate (HR=19.87, p<0.0001). Tumor size, not tumor number, either at presentation or on explant independently predicted HCC recurrence (HR 1.36 and 1.73, respectively, p<0.05) with a threshold effect noted at 4.0 cm size. Local-regional therapy (TACE) reduced HCC recurrence by 64% when adjusting for presenting tumor size (HR 0.36, p<0.05). Explant tumor size and microvascular invasion predicted mortality (HR 1.19 and 1.51, respectively, p<0.05) and pathologic response to therapy (TACE or RFA) significantly decreased explant tumor size (0.56-1.62 cm diameter reduction, p<0.05). HCC tumor size at presentation or explant is the most important predictor for HCC recurrence post-OLT. Local-regional therapy to achieve a pathologic response (decreasing tumor size) can limit HCC recurrences post-OLT. This article is protected by copyright. All rights reserved. © 2018 by the American Association for the Study of Liver Diseases.
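A minimal sketch of a covariate-adjusted Cox proportional hazards model of recurrence like the one described, using the lifelines library; the column names and synthetic data are placeholders for the study variables:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 441
df = pd.DataFrame({
    "years_to_event": rng.exponential(3.0, n),   # follow-up time (placeholder)
    "recurred":       rng.integers(0, 2, n),     # 1 = HCC recurrence
    "tumor_size_cm":  rng.uniform(1.0, 6.5, n),  # presenting tumor size
    "tace":           rng.integers(0, 2, n),     # 1 = bridging TACE given
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="recurred")
cph.print_summary()   # hazard ratios; the study reported HR 0.36 for TACE
```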
Glassman, Tavis; Braun, Robert E; Dodd, Virginia; Miller, Jeffrey M; Miller, E Maureen
2010-04-01
This study assessed the extent to which the Theory of Planned Behavior (TPB) correctly predicted college students' motivation to consume alcohol on game day, based on alcohol consumption rates. Three cohorts of 1,000 participants each (N = 3,000) were randomly selected and invited to complete an anonymous web-based survey the Monday following one of three designated college home football games. Path analyses were conducted to determine which of the TPB constructs were most effective in predicting Behavioral Intention and alcohol consumption among social, high-risk, and extreme drinkers. Social drinkers, high-risk drinkers, and those who engage in Extreme Ritualistic Alcohol Consumption (ERAC) were defined as males who consumed 1-4, 5-9, or 10 or more drinks on game day (1-3, 4-8, or nine or more drinks for females), respectively. The Attitude Toward the Behavior and Subjective Norm constructs predicted participants' intentions to consume alcohol and corresponding behavior among all three classifications of drinkers, whereas the Perceived Behavioral Control (PBC) construct inconsistently predicted intention and alcohol consumption. The proportion of variance in Behavioral Intention explained by the TPB model decreased as participants' alcohol consumption increased. It appears that the TPB constructs Attitude Toward the Behavior and Subjective Norm can effectively be utilized when designing universal prevention interventions targeting game day alcohol consumption among college students. However, the applicability of the PBC construct remains in question. While select constructs in the TPB appear to have predictive ability, the usefulness of the complete theoretical framework is limited when trying to predict high-risk drinking and ERAC. These findings suggest that other behavioral theories should be considered when addressing the needs of high-risk and extreme drinkers.
NASA Astrophysics Data System (ADS)
Perez-Saez, Javier; Mande, Theophile; Larsen, Joshua; Ceperley, Natalie; Rinaldo, Andrea
2017-12-01
The transmission of waterborne diseases hinges on the interactions between hydrology and the ecology of hosts, vectors, and parasites, with the long-term absence of water constituting a strict lower bound. However, the link between spatio-temporal patterns of hydrological ephemerality and waterborne disease transmission is poorly understood and difficult to account for. The use of limited biophysical and hydroclimate information from otherwise data-scarce regions is therefore needed to characterize, classify, and predict river network ephemerality in a spatially explicit framework. Here, we develop a novel large-scale ephemerality classification and prediction methodology based on monthly discharge data, water and energy availability, and remote-sensing measures of vegetation, that is relevant to epidemiology and maintains a mechanistic link to catchment hydrologic processes. Specifically, with reference to the context of Burkina Faso in sub-Saharan Africa, we extract a relevant set of catchment covariates that include the aridity index, annual runoff estimated using the Budyko framework, and hysteretic relations between precipitation and vegetation. Five ephemerality classes, from permanent to strongly ephemeral, are defined from the duration of zero-flow periods, which also accounts for the sensitivity of river discharge to the long-lasting drought of the 1970s-80s in West Africa. Using these classes, a gradient-boosted tree-based prediction yielded three distinct geographic regions of ephemerality. Importantly, we observe a strong epidemiological association between our predictions of hydrologic ephemerality and the known spatial patterns of schistosomiasis, an endemic parasitic waterborne disease in which infection occurs through human-water contact and which requires aquatic snails as an intermediate host. The general nature of our approach and its relevance for predicting the hydrologic controls on schistosomiasis occurrence provide a pathway for the explicit inclusion of hydrologic drivers within epidemiological models of waterborne disease transmission.
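A minimal sketch of the gradient-boosted tree classification step, mapping catchment covariates to the five ephemerality classes with scikit-learn; the feature set follows the abstract, but the data are random placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# columns: aridity index, Budyko annual runoff, precipitation-vegetation
# hysteresis measure, mean annual precipitation (placeholder values)
X = rng.random((300, 4))
y = rng.integers(0, 5, 300)    # 0 = permanent ... 4 = strongly ephemeral

clf = GradientBoostingClassifier(n_estimators=100, max_depth=3)
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```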
A new method to predict anatomical outcome after idiopathic macular hole surgery.
Liu, Peipei; Sun, Yaoyao; Dong, Chongya; Song, Dan; Jiang, Yanrong; Liang, Jianhong; Yin, Hong; Li, Xiaoxin; Zhao, Mingwei
2016-04-01
To investigate whether a new macular hole closure index (MHCI) could predict the anatomic outcome of macular hole surgery. Vitrectomy with internal limiting membrane peeling, air-fluid exchange, and gas tamponade was performed on all patients. The postoperative anatomic status of the macular hole was defined by spectral-domain OCT. MHCI was calculated as (M+N)/BASE from the preoperative OCT: M and N were the curve lengths of the detached photoreceptor arms, and BASE was the length of the retinal pigment epithelial (RPE) layer detached from the photoreceptors. Postoperative anatomical outcomes were divided into three grades: A (bridge-like closure), B (good closure), and C (poor closure or no closure). Correlation analysis was performed between anatomical outcomes and MHCI. Receiver operating characteristic (ROC) curves were derived for MHCI, indicating good model discrimination; the curves were assessed by the area under the curve, and cut-offs were calculated. Other previously reported predictive parameters, including the MH minimum, the MH height, the macular hole index (MHI), the diameter hole index (DHI), and the tractional hole index (THI), were compared as well. MHCI correlated significantly with postoperative anatomical outcomes (r = 0.543, p < 0.001), but the other predictive parameters did not. The areas under the curves indicated that MHCI could be used as an effective predictor of anatomical outcome. Cut-off values of 0.7 and 1.0 were obtained for MHCI from ROC curve analysis. MHCI demonstrated a better predictive effect than the other parameters in both the correlation and ROC analyses. MHCI could be an easily measured and accurate predictive index for postoperative anatomical outcomes.
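The MHCI itself is straightforward to compute from the three preoperative OCT lengths. A minimal sketch using the published cut-offs of 0.7 and 1.0, assuming (as the grading suggests) that a larger MHCI predicts better closure; the example lengths are hypothetical:

```python
def mhci(m, n, base):
    """(M + N) / BASE: M, N are the curve lengths of the detached photoreceptor
    arms; BASE is the length of RPE detached from the photoreceptors."""
    return (m + n) / base

def predicted_grade(value):
    # Grade mapping is an assumption from the reported cut-offs (0.7 and 1.0).
    if value >= 1.0:
        return "A (bridge-like closure)"
    if value >= 0.7:
        return "B (good closure)"
    return "C (poor or no closure)"

v = mhci(m=420.0, n=380.0, base=900.0)   # hypothetical lengths, same units
print(round(v, 2), predicted_grade(v))
```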
Amin, Elham E; van Kuijk, Sander M J; Joore, Manuela A; Prandoni, Paolo; Cate, Hugo Ten; Cate-Hoek, Arina J Ten
2018-06-04
Post-thrombotic syndrome (PTS) is a common chronic consequence of deep vein thrombosis that affects quality of life and is associated with substantial costs. In clinical practice, it is not possible to predict the individual patient's risk. We developed and validated a practical two-step prediction tool for PTS in the acute and sub-acute phases of deep vein thrombosis. Multivariable regression modelling was performed with data from two prospective cohorts, in which 479 (derivation) and 1,107 (validation) consecutive patients with objectively confirmed deep vein thrombosis of the leg were included, from the thrombosis outpatient clinic of Maastricht University Medical Centre, the Netherlands (derivation), and Padua University Hospital, Italy (validation). PTS was defined as a Villalta score of ≥ 5 at least 6 months after acute thrombosis. Variables in the baseline model in the acute phase were: age, body mass index, sex, varicose veins, history of venous thrombosis, smoking status, provoked thrombosis, and thrombus location. For the secondary model, the additional variable was residual vein obstruction. Optimism-corrected areas under the receiver operating characteristic curve (AUCs) were 0.71 for the baseline model and 0.60 for the secondary model. Calibration plots showed well-calibrated predictions. External validation of the derived clinical risk scores was successful: AUC, 0.66 (95% confidence interval [CI], 0.63-0.70) and 0.64 (95% CI, 0.60-0.69). Individual risk of PTS in the acute phase of deep vein thrombosis can be predicted from readily accessible baseline clinical and demographic characteristics. The individual risk in the sub-acute phase can be predicted with limited additional clinical characteristics. Schattauer GmbH Stuttgart.
Do Intracerebral Hemorrhage Nonexpanders Actually Expand Into the Ventricular Space?
Dowlatshahi, Dar; Deshpande, Anirudda; Aviv, Richard I; Rodriguez-Luna, David; Molina, Carlos A; Blas, Yolanda Silva; Dzialowski, Imanuel; Kobayashi, Adam; Boulanger, Jean-Martin; Lum, Cheemun; Gubitz, Gordon J; Padma, Vasantha; Roy, Jayanta; Kase, Carlos S; Bhatia, Rohit; Hill, Michael D; Demchuk, Andrew M
2018-01-01
The computed tomographic angiography spot sign as a predictor of hematoma expansion is limited by its modest sensitivity and positive predictive value. It is possible that hematoma expansion in spot-positive patients is missed because of decompression of intracerebral hemorrhage (ICH) into the ventricular space. We hypothesized that revising hematoma expansion definitions to include intraventricular hemorrhage (IVH) expansion will improve the predictive performance of the spot sign. Our objectives were to determine the proportion of ICH nonexpanders who actually have IVH expansion, determine the proportion of false-positive spot signs that have IVH expansion, and compare the known predictive performance of the spot sign to a revised definition incorporating IVH expansion. We analyzed patients from the multicenter PREDICT ICH spot sign study. We defined hematoma expansion as ≥6 mL or ≥33% ICH expansion or >2 mL IVH expansion and compared spot sign performance using this revised definition with the conventional 6 mL/33% definition using receiver operating curve analysis. Of 311 patients, 213 did not meet the 6-mL/33% expansion definition (nonexpanders). Only 13 of 213 (6.1%) nonexpanders had ≥2 mL IVH expansion. Of the false-positive spot signs, 4 of 40 (10%) had >2 mL ventricular expansion. The area under the curve for spot sign to predict significant ICH expansion was 0.65 (95% confidence interval, 0.58-0.72), which was no different than when IVH expansion was added to the definition (area under the curve, 0.66; 95% confidence interval, 0.58-0.71). Although IVH expansion does indeed occur in a minority of ICH nonexpanders, its inclusion into a revised hematoma expansion definition does not alter the predictive performance of the spot sign. © 2017 American Heart Association, Inc.
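The revised definition under test is simple boolean logic over baseline and follow-up volumes. A minimal sketch; the function name and example volumes are illustrative:

```python
def expanded(ich_base, ich_follow, ivh_base, ivh_follow):
    """Revised hematoma-expansion definition: >= 6 mL or >= 33% ICH growth,
    or > 2 mL IVH growth. All volumes in millilitres."""
    ich_growth = ich_follow - ich_base
    rel_growth = ich_growth / ich_base if ich_base > 0 else 0.0
    ivh_growth = ivh_follow - ivh_base
    return ich_growth >= 6.0 or rel_growth >= 0.33 or ivh_growth > 2.0

# A patient missing the 6 mL/33% ICH criteria but decompressing into the
# ventricles is captured only by the revised definition:
print(expanded(ich_base=20.0, ich_follow=24.0, ivh_base=0.0, ivh_follow=3.5))  # True
```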
Fleck, David E; Ernest, Nicholas; Adler, Caleb M; Cohen, Kelly; Eliassen, James C; Norris, Matthew; Komoroski, Richard A; Chu, Wen-Jang; Welge, Jeffrey A; Blom, Thomas J; DelBello, Melissa P; Strakowski, Stephen M
2017-06-01
Individualized treatment for bipolar disorder based on neuroimaging treatment targets remains elusive. To address this shortcoming, we developed a linguistic machine learning system based on a cascading genetic fuzzy tree (GFT) design called the LITHium Intelligent Agent (LITHIA). Using multiple objectively defined functional magnetic resonance imaging (fMRI) and proton magnetic resonance spectroscopy (1H-MRS) inputs, we tested whether LITHIA could accurately predict the lithium response in participants with first-episode bipolar mania. We identified 20 subjects with first-episode bipolar mania who received an adequate trial of lithium over 8 weeks and both fMRI and 1H-MRS scans at baseline pre-treatment. We trained LITHIA using 18 1H-MRS and 90 fMRI inputs over four training runs to classify treatment response and predict symptom reductions. Each training run contained a randomly selected 80% of the total sample and was followed by a 20% validation run. Over a different randomly selected distribution of the sample, we then compared LITHIA to eight common classification methods. LITHIA demonstrated nearly perfect classification accuracy and was able to predict post-treatment symptom reductions at 8 weeks with at least 88% accuracy in training and 80% accuracy in validation. Moreover, LITHIA exceeded the predictive capacity of the eight comparator methods and showed little tendency towards overfitting. The results provided proof-of-concept that a novel GFT is capable of providing control to a multidimensional bioinformatics problem, namely prediction of the lithium response, in a pilot data set. Future work on this, and similar machine learning systems, could help assign psychiatric treatments more efficiently, thereby optimizing outcomes and limiting unnecessary treatment. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Early detection of Alzheimer disease: methods, markers, and misgivings.
Green, R C; Clarke, V C; Thompson, N J; Woodard, J L; Letz, R
1997-01-01
There is at present no reliable predictive test for most forms of Alzheimer disease (AD). Although some information about future risk for disease is available in theory through ApoE genotyping, it is of limited accuracy and utility. Once neuroprotective treatments are available for AD, reliable early detection will become a key component of the treatment strategy. We recently conducted a pilot survey eliciting attitudes and beliefs toward an unspecified and hypothetical predictive test for AD. The survey was completed by a convenience sample of 176 individuals, aged 22-77, which was 75% female, 30% African-American, and of which 33% had a family member with AD. The survey revealed that 69% of this sample would elect to obtain predictive testing for AD if the test were 100% accurate. Individuals were more likely to desire predictive testing if they had an a priori belief that they would develop AD (p = 0.0001), had a lower educational level (p = 0.003), were worried that they would develop AD (p = 0.02), had a self-defined history of depression (p = 0.04), and had a family member with AD (p = 0.04). However, the desire for predictive testing was not significantly associated with age, gender, ethnicity, or income. The desire to obtain predictive testing for AD decreased as the assumed accuracy of the hypothetical test decreased. A better short-term strategy for early detection of AD may be computer-based neuropsychological screening of at-risk (older aged) individuals to identify very early cognitive impairment. Individuals identified in this manner could be referred for diagnostic evaluation and early cases of AD could be identified and treated. A new self-administered, touch-screen, computer-based, neuropsychological screening instrument called Neurobehavioral Evaluation System-3 is described, which may facilitate this type of screening.
Wiswell, Jeffrey; Tsao, Kenyon; Bellolio, M Fernanda; Hess, Erik P; Cabrera, Daniel
2013-10-01
System 1 decision-making is fast, resource economic, and intuitive (eg, "your gut feeling") and System 2 is slow, resource intensive, and analytic (eg, "hypothetico-deductive"). We evaluated the performance of disposition and acuity prediction by emergency physicians (EPs) using a System 1 decision-making process. We conducted a prospective observational study of attending EPs and emergency medicine residents. Physicians were provided patient demographics, chief complaint, and vital sign data and made two assessments on initial presentation: (1) likely disposition (discharge vs admission) and (2) "sick" vs "not-sick". A patient was adjudicated as sick if he/she had a disease process that was potentially life or limb threatening based on pre-defined operational, financial, or educationally derived criteria. We obtained 266 observations in 178 different patients. Physicians predicted patient disposition with the following performance: sensitivity 87.7% (95% CI 81.4-92.1), specificity 65.0% (95% CI 56.1-72.9), LR+ 2.51 (95% CI 1.95-3.22), LR- 0.19 (95% CI 0.12-0.30). For the sick vs not-sick assessment, providers had the following performance: sensitivity 66.2% (95% CI 55.1-75.8), specificity 88.4% (95% CI 83.0-92.2), LR+ 5.69 (95% CI 3.72-8.69), LR- 0.38 (95% CI 0.28-0.53). EPs are able to accurately predict the disposition of ED patients using system 1 diagnostic reasoning based on minimal available information. However, the prognostic accuracy of acuity prediction was limited. © 2013.
Terminal Investment Strategies and Male Mate Choice: Extreme Tests of Bateman.
Andrade, Maydianne C B; Kasumovic, Michael M
2005-11-01
Bateman's principle predicts the intensity of sexual selection depends on rates of increase of fecundity with mating success for each sex (Bateman slopes). The sex with the steeper increase (usually males) is under more intense sexual selection and is expected to compete for access to the sex under less intense sexual selection (usually females). Under Bateman and modern refinements of his ideas, differences in parental investment are key to defining Bateman slopes and thus sex roles. Other theories predict sex differences in mating investment, or any expenditures that reduce male potential reproductive rate, can also control sex roles. We focus on sexual behaviour in systems where males have low paternal investment but frequently mate only once in their lifetimes, after which they are often killed by the female. Mating effort (=terminal investment) is high for these males, and many forms of investment theory might predict sex role reversal. We find no qualitative evidence for sex role reversal in a sample of spiders that show this extreme male investment pattern. We also present new data for terminally-investing redback spiders (Latrodectus hasselti). Bateman slopes are relatively steep for male redbacks, and, as predicted by Bateman, there is little evidence for role reversal. Instead, males are competitive and show limited choosiness despite wide variation in female reproductive value. This study supports the proposal that high male mating investment coupled with low parental investment may predispose males to choosiness but will not lead to role reversal. We support the utility of using Bateman slopes to predict sex roles, even in systems with extreme male mating investment.
Spashett, Renee; Fernie, Gordon; Reid, Ian C; Cameron, Isobel M
2014-09-01
This study aimed to explore the relationship of Montgomery-Åsberg Depression Rating Scale (MADRS) symptom subtypes with response to electroconvulsive therapy (ECT) and subsequent ECT treatment within 12 months. A consecutive sample of 414 patients with depression receiving ECT in the North East of Scotland was assessed by retrospective chart review. Response rate was defined as greater than or equal to 50% decrease in pretreatment total MADRS score or a posttreatment total MADRS less than or equal to 10. Principal component analyses were conducted on a sample with psychotic features (n = 124) and a sample without psychotic features (n = 290). Scores on extracted factor subscales, clinical and demographic characteristics were assessed for association with response and subsequent ECT treatment within 12 months. Where more than 1 variable was associated with response or subsequent ECT, logistic regression analysis was applied. MADRS symptom subtypes formed 3 separate factors in both samples. Logistic regression revealed older age and high "Despondency" subscale score predicted response in the nonpsychotic group. Older age alone predicted response in the group with psychotic features. Nonpsychotic patients subsequently re-treated with ECT were older than those not prescribed subsequent ECT. No association of variables emerged with subsequent ECT treatment in the group with psychotic features. Being of older age and the presence of psychotic features predicted response. Presence of psychotic features alone predicted subsequent retreatment. Subscale scores of the MADRS are of limited use in predicting which patients with depression will respond to ECT, with the exception of "Despondency" subscale scores in patients without psychotic features.
Mallampati test as a predictor of laryngoscopic view.
Adamus, Milan; Fritscherova, Sarka; Hrabalek, Lumir; Gabrhelik, Tomas; Zapletalova, Jana; Janout, Vladimir
2010-12-01
To determine the accuracy of the modified Mallampati test for predicting difficult tracheal intubation. A cross-sectional, clinical, observational, non-blinded study. A quality analysis of anesthetic care. Operating theatres and department of anesthesiology in a university hospital. Following local ethics committee approval and patients' informed consent to anesthesia, all adult patients (>18 yrs) presenting for any type of non-emergency surgical procedure under general anesthesia requiring endotracheal intubation were enrolled. Prior to anesthesia, Samsoon and Young's modification of the Mallampati test (modified Mallampati test) was performed. Following induction, the anesthesiologist described the laryngoscopic view using the Cormack-Lehane scale. Classes 3 or 4 of the modified Mallampati test were considered a predictor of difficult intubation. Grades 3 or 4 of the Cormack-Lehane classification of the laryngoscopic view were defined as impaired glottic exposure. The sensitivity, specificity, positive and negative predictive values, relative risk, likelihood ratio, and accuracy of the modified Mallampati test were calculated from 2x2 contingency tables. Of the total of 1,518 patients enrolled, 48 had difficult intubation (3.2%). The test failed to detect as many as 35.4% of patients in whom glottis exposure during direct laryngoscopy was inadequate (sensitivity 0.646). Compared with the original article by Mallampati, we found lower specificity (0.824 vs. 0.995), lower positive predictive value (0.107 vs. 0.933), higher negative predictive value (0.986 vs. 0.928), and lower likelihood ratio (3.68 vs. 91.0) and accuracy (0.819 vs. 0.929). When used as a single examination, the modified Mallampati test is of limited value in predicting difficult intubation.
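The reported performance figures can be reproduced from a single 2x2 contingency table. A minimal sketch; the cell counts are back-calculated approximately from the reported 1,518 patients, 48 difficult intubations, sensitivity 0.646, and specificity 0.824:

```python
def screen_metrics(tp, fp, fn, tn):
    """Diagnostic-test metrics from a 2x2 contingency table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# ~31 of the 48 difficult intubations flagged by Mallampati class 3-4
print(screen_metrics(tp=31, fp=259, fn=17, tn=1211))
```

Running this yields approximately sensitivity 0.646, specificity 0.824, PPV 0.107, NPV 0.986, LR+ 3.66, and accuracy 0.818, matching the values in the abstract.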
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Dolloff, Dr. Charles A
2013-01-01
In order for habitat restoration in regulated rivers to be effective at large scales, broadly applicable frameworks are needed that provide measurable objectives and contexts for management. The Ecological Limits of Hydrologic Alteration (ELOHA) framework was created as a template to assess hydrologic alterations, develop relationships between altered streamflow and ecology, and establish environmental flow standards. We tested the utility of ELOHA in informing flow restoration applications for fish and riparian communities in regulated rivers in the Upper Tennessee River Basin (UTRB). We followed the steps of ELOHA to generate flow alteration-ecological response relationships and then determined whether those relationships could predict fish and riparian responses to flow restoration in the Cheoah River, a regulated system within the UTRB. Although ELOHA provided a robust template for constructing hydrologic information and predicting hydrology at ungaged locations, our results do not support the assertion that over-generalized univariate relationships between flow and ecology can produce results sufficient to guide management in regulated rivers. After constructing multivariate models, we successfully developed predictive relationships between flow alterations and fish/riparian responses. In accordance with model predictions, riparian encroachment decreased consistently with increases in flow magnitude in the Cheoah River; however, fish richness did not increase as predicted four years post-restoration. Our results suggest that altered temperature and substrate and the current disturbance regime may have reduced opportunities for fish species colonization. Our case study highlights the need for interdisciplinary science in defining environmental flows for regulated rivers and the need for adaptive management approaches once flows are restored.
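The univariate-versus-multivariate distinction at the core of the study can be illustrated with a toy regression (the data, metric names, and model form below are hypothetical, not the authors' actual models):

```python
import numpy as np

# Rows: river sites; columns: two assumed flow-alteration metrics,
# e.g. relative change in peak-flow magnitude and in low-flow duration.
X = np.array([
    [0.8, 0.2],
    [0.5, 0.6],
    [0.3, 0.1],
    [0.9, 0.7],
])
y = np.array([4.0, 7.0, 11.0, 3.0])  # e.g. native fish species richness

# Univariate model: richness regressed on a single alteration metric.
X_uni = np.column_stack([np.ones(len(X)), X[:, 0]])
beta_uni, *_ = np.linalg.lstsq(X_uni, y, rcond=None)

# Multivariate model: richness regressed on both metrics jointly,
# analogous to the multivariate models that proved predictive here.
X_multi = np.column_stack([np.ones(len(X)), X])
beta_multi, *_ = np.linalg.lstsq(X_multi, y, rcond=None)

print("univariate coefficients:", beta_uni)
print("multivariate coefficients:", beta_multi)
```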
Wilson, Francis P.; Sheehan, Jessica M.; Mariani, Laura H.; Berns, Jeffrey S.
2012-01-01
Background: Existing systems for grading the severity of acute kidney injury (AKI) rely on a change in serum creatinine concentration over a defined time interval. The rate of change in serum creatinine increases with the degree of reduction in glomerular filtration rate, but is mitigated by a low creatinine generation rate (CGR). Failure to appreciate variation in CGR may lead to erroneous conclusions regarding the severity of AKI and distorted predictions regarding patient outcomes based on AKI severity. Methods: Cohort study of 103 patients who received continuous venovenous hemodialysis (CVVHD) over a 2-year period in a tertiary care hospital setting. Study participants entered the cohort when they were anuric and receiving a stable, uninterrupted dose of CVVHD with serum creatinine at steady state; they were followed until hospital discharge. CGR was measured from dialysate effluent volume and effluent creatinine concentration (prospective cohort) and from effluent volume and serum creatinine concentration (retrospective cohort). Results: CGR (mean 10.5, range 1.7–22.4 mg/kg/day) was substantially lower in this patient population than would be predicted from existing equations. Correlates of CGR in multivariable analysis included the length of hospitalization prior to measurement and the presence of an oncologic diagnosis. Lower CGR was associated with in-hospital mortality both in unadjusted analysis and after multivariable adjustment for measures of severity of illness. Conclusions: Grading systems for AKI severity fail to account for variation in CGR, limiting their ability to predict relevant outcomes. Calculation of CGR was superior to other risk metrics in predicting hospital mortality in this population. PMID:22273668
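The prospective measurement implies a simple steady-state mass balance: in an anuric patient on stable CVVHD, creatinine generated equals creatinine removed in the effluent. A minimal sketch under that assumption (the paper's exact formula and units are not given in the abstract; the example numbers are hypothetical):

```python
def creatinine_generation_rate(effluent_volume_l_per_day: float,
                               effluent_creatinine_mg_per_dl: float,
                               weight_kg: float) -> float:
    """Steady-state mass balance: CGR (mg/kg/day) = daily effluent
    creatinine mass removed, normalized to body weight.
    mg/dL -> mg/L conversion is the factor of 10."""
    mg_removed_per_day = (effluent_volume_l_per_day
                          * effluent_creatinine_mg_per_dl * 10)
    return mg_removed_per_day / weight_kg

# Hypothetical example: 48 L/day effluent at 1.5 mg/dL in a 70 kg patient
# gives ~10.3 mg/kg/day, near the reported cohort mean of 10.5 mg/kg/day.
print(creatinine_generation_rate(48.0, 1.5, 70.0))
```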
How accurate is our clinical prediction of "minimal prostate cancer"?
Leibovici, Dan; Shikanov, Sergey; Gofrit, Ofer N; Zagaja, Gregory P; Shilo, Yaniv; Shalhav, Arieh L
2013-07-01
Recommendations for active surveillance versus immediate treatment for low-risk prostate cancer are based on biopsy and clinical data, assuming that a low volume of well-differentiated carcinoma will be associated with a low progression risk. However, the accuracy of clinical prediction of minimal prostate cancer (MPC) is unclear. We sought to define preoperative predictors of MPC in prostatectomy specimens and to examine the accuracy of such prediction. Data collected on 1526 consecutive radical prostatectomy patients operated on in a single center between 2003 and 2008 included age, body mass index, preoperative prostate-specific antigen level, biopsy Gleason score, clinical stage, percentage of positive biopsy cores, and maximal core length (MCL) involvement. MPC was defined as <5% of prostate volume involved with organ-confined disease of Gleason score ≤6. Univariate and multivariate logistic regression analyses were used to define independent predictors of minimal disease. Classification and Regression Tree (CART) analysis was used to define cutoff values for the predictors and to measure the accuracy of prediction. MPC was found in 241 patients (15.8%). Clinical stage, biopsy Gleason score, percentage of positive biopsy cores, and maximal involved core length were associated with minimal disease (OR 0.42, 0.1, 0.92, and 0.9, respectively). Independent predictors of MPC were biopsy Gleason score, percentage of positive cores, and MCL (OR 0.21, 0.95, and 0.95, respectively). CART showed that when the MCL exceeded 11.5%, the likelihood of MPC was 3.8%. Conversely, under the most favorable preoperative conditions (Gleason ≤6, <20% positive cores, MCL ≤11.5%), the chance of minimal disease was 41%. Biopsy Gleason score, the percentage of positive cores, and MCL are independently associated with MPC. While preoperative prediction of significant prostate cancer was accurate, clinical prediction of MPC was incorrect 59% of the time. Caution is necessary when using clinical data as selection criteria for active surveillance.
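The CART-derived "most favorable" profile can be written as a simple decision rule. A sketch (function and parameter names are hypothetical; the thresholds are the ones reported above):

```python
def favorable_mpc_profile(gleason: int,
                          percent_positive_cores: float,
                          mcl_percent: float) -> bool:
    """Most favorable preoperative profile from the CART analysis:
    Gleason <= 6, < 20% positive cores, MCL <= 11.5%.
    Note: even when this rule fires, the reported probability of true
    MPC at prostatectomy was only 41%, i.e. wrong 59% of the time."""
    return (gleason <= 6
            and percent_positive_cores < 20
            and mcl_percent <= 11.5)

print(favorable_mpc_profile(gleason=6, percent_positive_cores=15.0,
                            mcl_percent=10.0))  # True: favorable profile
```

The asymmetry reported by the authors is worth noting: the rule is far more reliable at excluding MPC (likelihood 3.8% when MCL exceeds 11.5%) than at confirming it.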